AI Snake Oil

Is AI-generated disinformation a threat to democracy?

An essay on the future of generative AI on social media

Sayash Kapoor and Arvind Narayanan · Jun 19, 2023

We just published an essay titled How to Prepare for the Deluge of Generative AI on Social Media on the Knight First Amendment Institute website. We offer a grounded analysis of what we can do to reduce the harms of generative AI while retaining many of the benefits. This post is a brief summary of the essay. Read the full essay here.

Most conversations about the impact of generative AI on social media focus on disinformation.

Among the many positive and negative ways in which generative AI can be used on social media, disinformation has received disproportionate attention from researchers, policy makers, and civil society.

Disinformation is a serious problem. But we don’t think generative AI has made it qualitatively different. Our main observation is that when it comes to disinformation, generative AI merely gives bad actors a cost reduction, not new capabilities. In fact, the bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that.

So, AI-specific solutions to disinformation, such as watermarking, provenance, and detection of AI-generated content, are barking up the wrong tree. They won’t work, and they solve a non-problem. Instead, we should bolster existing defenses such as fact-checking.

Furthermore, the outsized focus on disinformation means that many urgent problems aren’t getting enough attention, such as nonconsensual deepfake pornography. We offer a four-factor test to help guide the attention of civil society and policy makers to prioritize among various malicious uses.

Beyond malicious uses, many other applications of generative AI are benign, or even useful. While it is important to counteract malicious uses and hold AI companies accountable, an exclusive focus on malicious uses can leave researchers and public-interest technologists playing whack-a-mole when new applications are released, instead of proactively steering the uses of generative AI in socially beneficial directions.

In our essay, we zoom out to analyze the breadth of malicious and non-malicious uses of generative AI on social media. Thinking deeply about their benefits and harms can enable researchers and public-interest technologists to be proactive, instead of reacting to the tech industry.

With this in mind, we analyze many types of non-malicious synthetic media, describe the idea of pro-social chatbots, and discuss how platforms might incorporate generative AI into their recommendation engines. Note that non-malicious doesn’t mean harmless. A good example is filters on apps like Instagram and TikTok, which have contributed to unrealistic beauty standards and body-image issues among teenagers.

Our essay is the result of many discussions with researchers, platform companies, public-interest organizations, and policy makers. It is aimed at all these groups. We hope to spur research into understudied applications, as well as the development of guardrails for non-malicious yet potentially harmful applications of generative AI.

6 Comments
Chaos Goblin (Collapsetastic) · Jun 20, 2023

Putting "ad personalization" in the positives section certainly is a choice.

2 replies by Sayash Kapoor and others
Bjorn L. (Bjorn’s Substack) · Jun 20, 2023

"In fact, the bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that." Nonsensical statement. Cambridge Analytica being able to even very crudely personalize disinfo at scale just at the cohort level was a big part of how Trump was able to win in 2016. Generative AI exponentializes that capability closer to one-to-one personalization that can also interact with each recipient ongoing.

"It’s true that generative AI reduces the cost of malicious uses. But in many of the most prominently discussed domains, it hasn't led to a novel set of malicious capabilities." No, it's not at all just cost reduction—it's a step change, even just in the one example area I described above.

Why are you guys working so hard to sweep serious, legitimate concerns under the rug so quickly and with such faulty logic?

"Finally, note that media accounts and popular perceptions of the effectiveness of social media disinformation at persuading voters tend to be exaggerated. For example, Russian influence operations during the 2016 U.S. elections are often cited as an example of election interference. But studies have not detected a meaningful effect of Russian social media disinformation accounts on attitudes, polarization, or voting behavior." Misdirective argument. Did you entirely forget about Cambridge Analytica—or are you just hoping your readers have? Or do you just think the billions of dollars spent on political propaganda—both within the law and outside of it—is done just for the hell of it?

AI Snake Oil, indeed.

4 more comments...
