Keep this post on ice and uncork it when the bubble pops. It may mean nothing to you now; I hope it means something when the time comes. 

This post is written with an anguished heart, from love, which is the only good reason to do anything.

Blue Bay by Jack Mancino

I hope that the AI bubble popping is like the FTX collapse 2.0 for effective altruism. Not because it will make funding dry up — it won't. And not because it will have any relation to moral scandal — it won't. But it will be a financial wreck — orders of magnitude larger than the FTX collapse — that could lead to soul-searching for many people in effective altruism, if they choose to respond that way. (It may also have indirect reputational damage for EA by diminishing the credibility of the imminent AGI narrative — too early to tell.)

In the wake of the FTX collapse, one of the positive signs was the eagerness of people to do soul-searching. It was difficult, and it's still difficult, to make sense of EA's role in FTX. Did powerful people in the EA movement somehow contribute to the scam? Or did they just get scammed too? Were people in EA accomplices or victims? What is the lesson? Is there one? I'll leave that to be sorted out another time. The point here is that people were eager to look for the lesson, if there was one to find, and to integrate it. That's good.

It's highly probable that there is an AI bubble.[1] Nobody can predict when a bubble will pop, even if they can correctly call that there is a bubble. So, we can only say that there is most likely a bubble and it will pop... eventually. Maybe in six months, maybe in a year, maybe in two years, maybe in three years... Who knows. I hope that when people experience the reverberations of that bubble popping — possibly even a recession in the U.S., although in that case the bubble may be more like the straw that broke the camel's back — they will bring the same energy they brought to the FTX collapse. The EA movement has been incredibly bought-in on AI capabilities optimism, and that same optimism is fueling AI investment. The AI bubble popping would be a strong signal that this optimism has been misplaced.

Unfortunately, it is always possible not to learn lessons. The futurist Ray Kurzweil has made many incorrect predictions about the future. His strategy in many such cases is to find a way to declare he was correct or "essentially correct" (see, e.g., page 132 here). Tesla CEO Elon Musk has been predicting every year for the past seven years or so that Teslas will achieve full autonomy — or something close to it — in a year, or next year, or by the end of the year. Every year it doesn't happen, he just pushes his prediction back a year. And he's done that about seven times. Every year since around 2018, Tesla's achievement of full autonomy (or something close) has been about a year away.

When the AI bubble pops, I fear both of these reactions. The Kurzweil-style reaction is to interpret the evidence in a way — any way — that allows one to be correct. There are a million ways of doing this. One way would be to tell a story where AI capabilities were indeed on the trajectory originally believed, but AI safety measures — thanks in part to the influence of AI safety advocates — led to capabilities being slowed down, held back, sabotaged, or left on the table in some way. This is not far off from the sorts of things people have already argued. In 2024, the AI researcher and investor Leopold Aschenbrenner published an extremely dubious essay, "Situational Awareness", which, in between made-up graphs, argues that AI models are artificially or unfairly "hobbled" in a way that makes their base, raw capabilities seem significantly less than they really are. The essay claims that by implementing commonsense, straightforward unhobbling techniques, AI companies will make their models much more capable and reveal their true power. From here, it would only be one more step to say that AI companies deliberately left their models "hobbled" for safety reasons. But this is just one example. There are an unlimited number of ways you could try to tell a story like this.

Arguably, Anthropic CEO Dario Amodei engaged in Kurzweil-style obfuscation of a prediction this year. In mid-March, Amodei predicted that by mid-September 90% of code would be written by AI. When nothing close to this happened, Amodei said, "Some people think that prediction is wrong, but within Anthropic and within a number of companies that we work with, that is absolutely true now." When pressed, he clarified that this was only true "on many teams, not uniformly, everywhere". That's a bailey within a bailey.

The Musk-style reaction is just to kick the can down the road. People in EA or EA-adjacent communities have already been kicking the can down the road. AI 2027, which was actually AI 2028, is now AI 2029. And that's hardly the only example.[2] Metaculus was at 2030 on AGI early in the year and now it's at 2033.[3] The can is kicked.

There's nothing inherently wrong with kicking the can down the road. There is something wrong with the way Musk has been doing it. At what point does it cross over from making a reasonable, moderate adjustment to making the same mistake over and over? I don't think there's an easy way to answer this question. I think the best you can do is see repeated can kicks as an invitation to go back to basics, to the fundamentals, to adopt a beginner's mind, and try to rethink things from the beginning. As you retrace your steps, you might end up in the same place all over again. But you might notice something you didn't notice before.

There are many silent alarms already ringing about the imminent AGI narrative. One of effective altruism's co-founders, the philosopher and AI governance researcher Toby Ord, wrote brilliantly about one of them. Key quote:

Grok 4 was trained on 200,000 GPUs located in xAI’s vast Colossus datacenter. To achieve the equivalent of a GPT-level jump through RL [reinforcement learning] would (according to the rough scaling relationships above) require 1,000,000x the total training compute. To put that in perspective, it would require replacing every GPU in their datacenter with 5 entirely new datacenters of the same size, then using 5 years worth of the entire world’s electricity production to train the model. So it looks infeasible for further scaling of RL-training compute to give even a single GPT-level boost.
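
To make the arithmetic in that quote concrete, here is a rough back-of-envelope sketch in Python. The 200,000 GPUs and the 1,000,000x compute multiplier come from the quote itself; the cluster power draw, the length of the current RL phase, and the world electricity figure are my own illustrative assumptions rather than Ord's inputs, chosen only to show that plausible numbers land in the same ballpark.

```python
# Back-of-envelope check of the scaling arithmetic in Ord's quote.
# Figures marked "assumed" are illustrative, not Ord's own inputs.

gpus_per_datacenter = 200_000        # Colossus, from the quote
compute_multiple = 1_000_000         # ~1,000,000x training compute for one GPT-level jump via RL

# Holding training time fixed, each existing GPU must be replaced by
# compute_multiple GPUs, i.e. this many same-sized datacenters per GPU:
datacenters_per_existing_gpu = compute_multiple / gpus_per_datacenter
print(datacenters_per_existing_gpu)          # -> 5.0

# Energy side of the estimate:
cluster_power_mw = 300               # assumed power draw of the full cluster, in MW
rl_phase_hours = 500                 # assumed length of the current RL training phase, in hours
current_rl_energy_twh = cluster_power_mw * rl_phase_hours / 1e6   # MWh -> TWh
scaled_energy_twh = current_rl_energy_twh * compute_multiple

world_electricity_twh_per_year = 30_000      # rough global annual electricity production
print(scaled_energy_twh / world_electricity_twh_per_year)  # -> ~5 years of world output
```

Under these assumptions the sketch reproduces Ord's "5 new datacenters per GPU" and "5 years of world electricity" figures; different assumptions shift the exact numbers, but the order of magnitude is the point.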

The respected AI researcher Ilya Sutskever, who played a role in kicking off the deep learning revolution in 2012 and who served as OpenAI's Chief Scientist until 2024, has declared that the age of scaling in AI is over, and we have now entered an age of fundamental research. Sutskever highlights “inadequate” generalization as a flaw of deep neural networks and has previously called out reliability as an issue. A survey from earlier this year found that 76% of AI experts think it's "unlikely" or "very unlikely" that scaling will lead to AGI.[4]

And of course the signs of the bubble are also signs of trouble for the imminent AGI narrative. Generative AI isn't generating profit. For enterprise customers, it can't do much that's practically useful or financially valuable. Optimistic perceptions of AI capabilities are based on contrived, abstract benchmarks with poor construct validity, not hard evidence about real world applications. Call it the mismeasurement of the decade! 

My fear is that EA is going to barrel right into the AI bubble, ignoring these obvious warning signs. I'm surprised how little attention Toby Ord's post has gotten. Ord is respected by all of us in this community and therefore has a big megaphone. Why aren't people listening? Why aren't they noticing this? What is happening?

It's like EA is a car blazing down the street at racing speeds, blowing through stop signs, running red lights... heading, I don't know where, but probably nowhere good. I don't know what can stop the momentum now, except maybe something big enough to shake the macroeconomy of the United States.

The best outcome would be for the EA community to deeply reflect and to reevaluate the imminent AGI narrative before the bubble pops; the second-best outcome would be to do this soul-searching afterward. So, I hope people will do that soul-searching, like the post-FTX soul-searching, but even deeper. 99%+ of people in EA had no direct personal connection to FTX. Evidence about what EA leaders knew and when they knew it was (and largely still is) scant, making it hard to draw conclusions, as much as people desperately (and nobly) wanted to find the lesson. Not so for AGI. For AGI, most people have some level of involvement, even if small, in shaping the community's views. Everyone's epistemic practices — not "epistemics", which is a made-up word that isn't used in philosophy — are up for questioning here, even for people who just vaguely think, "I don't really know anything about that, but I'll just trust that the community is probably right."

The science communicator Hank Green has an excellent video from October where he explains some of the epistemology of science and why we should follow Carl Sagan's famous maxim that "extraordinary claims require extraordinary evidence". Hank Green is talking about evidence of intelligent alien life, but what he says applies equally well to intelligent artificial life. When we're encountering something unknown and unprecedented, our observations and measurements should be under a higher level of scrutiny than we accept for ordinary, everyday things. Perversely, the standard of evidence in AGI discourse is the opposite. Arguments and evidence that wouldn't even pass muster as part of an investment thesis are used to forecast the imminent, ultimate end of humanity and the invention of a digital God. What's the base rate of millennialist views being correct? 0.00%? 

Watch the video and replace "aliens" with "AGI":

I feel crazy and I must not be the only one. None of this makes any sense. How did a movement that was originally about rigorous empirical evaluation of charity cost-effectiveness become a community where people accept eschatological arguments based on fake graphs and gut intuition? What?? What are you talking about?! Somebody stop this car! 

And lest you misunderstand me, when I started my Medium blog back in 2015, my first post was about the world-historical, natural-historical importance of the seemingly inevitable advent of AGI and superintelligence. On an older blog that no longer exists, posts on this theme go back even further. What a weird irony I find myself in now. The point is not whether AGI is possible in principle or whether it will eventually be created if science and technology continue making progress — it seems hard to argue otherwise — but that this is not the moment. It's not even close to the moment.

The EA community has a whiff of macho dunk culture at times (so does Twitter, so does life), so I want to be clear that's absolutely not my intention. I'm coming at this from a place of genuine maternal love and concern. What's going on, my babies? How did we get here? What happened to that GiveWell rigour? 

Of course, nobody will listen to me now. Maybe when the bubble pops. Maybe. (Probably not.)

This post is not written to convince anyone today. It's written for the future. It's a time capsule for when the bubble pops. When that moment comes, it's an invitation for sober second thought. It's not an answer, but an unanswered question.  What happened, you guys?

  1. ^

    See "Is the AI Industry in a Bubble?" (November 15, 2025).

  2. ^

    In 2023, 2024, and 2025, Turing Award-winning AI researcher Geoffrey Hinton repeated his low-confidence prediction of AGI in 5-20 years, but it might be taking him too literally to say he pushed back his prediction by 2 years. 

  3. ^

    The median date for AGI has been slipping by 3 years per year. If it keeps slipping at that rate, then by the year 2033 it will have slipped to 2057.

  4. ^

    Another AI researcher, Andrej Karpathy, formerly at OpenAI and Stanford but best known for playing a leading role in developing Tesla's Full Self-Driving software from 2017 to 2022, made a splash by saying that he thought effective "agentic" applications of AI (e.g. computer-using AI systems à la ChatGPT's Agent Mode) were about a decade away — because this implies Karpathy thinks AGI is at least a decade away. I personally didn't find this too surprising or particularly epistemically significant; Karpathy is far from the first, only, or most prominent AI researcher to say something like this. But I think this broke through a lot of people's filter bubbles because Karpathy is someone they listen to, and it surprised them because they aren't used to hearing even a modestly more conservative view than AGI by 2030, plus or minus two years.
