AI Snake Oil

Four more things we worked on in 2022

www.aisnakeoil.com

We had a busy 2022. Here are a few things we worked on but didn’t cover here.

Arvind Narayanan and Sayash Kapoor
Dec 31, 2022

We’re grateful to you for reading this blog/newsletter. It’s made our book project much more rewarding.

We had a busy 2022. Here are links to things we worked on but didn’t cover here.

1. The reproducibility crisis in ML-based science. AI hype isn’t limited to commercial products. Researchers hype their results just as much. This has led to overoptimism about ML in many scientific fields including medicine and political science. Over the summer, we organized an online workshop on the topic. Over a thousand people registered and the YouTube livestream has been watched over 5,000 times. The talk videos, slides, and an annotated reading list are available on the workshop website. The event was covered by Nature News. Sayash gave an overview of our work on this topic in a talk at the Lawrence Livermore National Lab.

We have been leading an effort to create a set of guidelines and a checklist to help researchers make their ML-based research reproducible. Please reach out if you’re interested in a draft version.

2. The dangers of flawed AI. One type of AI is particularly ethically worrisome: making decisions about people based on a prediction about what they might do in the future. Examples include criminal risk prediction and some hiring algorithms. In a new working paper titled Against Predictive Optimization, we (along with Angelina Wang and Solon Barocas) challenge the legitimacy of these algorithms. Please reach out if you’re interested in a copy of the paper.

Arvind coauthored a book on fairness and machine learning. It is available online and is nearing publication: all four peer reviewers strongly recommended publication and we have sent the final version to our publisher, MIT Press. Building on some of the points in the book, Arvind presented a lecture/paper on the limits of the quantitative approach to discrimination.

3. Recommendation algorithms. Arvind is visiting the Knight First Amendment Institute at Columbia, where he is writing about recommender systems on social media — specifically, how they amplify some types of speech and suppress others. He is co-organizing a symposium on algorithmic amplification on April 27/28.

Arvind and Sayash are both on Mastodon. Although Mastodon’s lack of a recommendation algorithm appeals to many of its users, Arvind argues that algorithms aren’t the enemy and should be redesigned instead of abandoned. In another blog post, he explains why TikTok’s seemingly magical recommendation algorithm is actually nothing special, and its real secret sauce is something else.

4. AI hype: podcasts, radio, press quotes. Arvind talked about AI hype on a podcast with Ethan Zuckerman and in a CBC radio interview. Sayash talked about what AI can and can’t do in a KGNU radio interview. We were quoted on ChatGPT in various places including the Washington Post, Nature, and Bloomberg. In our previous post, we explained why ChatGPT can be amazingly useful despite being a bullshit generator.

Happy new year!

1 Comment
Charlie Pownall (AIAAIC), Dec 31, 2022:

Great stuff, not least re AI hype (and snake oil).

Of which, there are many examples in the AIAAIC Repository:

https://www.aiaaic.org/aiaaic-repository

https://docs.google.com/spreadsheets/d/1Bn55B4xz21-_Rgdr8BBb2lt0n_4rzLGxFADMlVW0PYI/edit?usp=sharing

This free, open resource may be useful to you and your readers.

