
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term artificial general intelligence (AGI)? 

I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is a) large language models (LLMs) have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.[1]

Scaling running out of steam actually means three things:

1) Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute (a rough numerical sketch follows this list).

2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.

3) There is an absolute ceiling on the amount of data LLMs can train on, and they are probably approaching it.
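To make points 1 and 2 concrete, here is a minimal numerical sketch. It assumes a simple power-law relationship between training compute and loss; the constants and the cost-per-FLOP figure are made up purely for illustration and are not estimates of any real model, lab, or price. The point is only the shape of the curve: each additional 10x of compute buys a smaller absolute improvement while costing ten times as much.

```python
# Minimal sketch (illustrative constants only): under a power law
# loss = a * compute**(-b), each 10x jump in compute shrinks the loss
# by a constant *fraction*, so the absolute gain per jump keeps falling
# while the (hypothetical) dollar cost grows tenfold each time.

a, b = 10.0, 0.05            # made-up scaling-law constants
cost_per_flop = 1e-18        # made-up dollars per FLOP of training compute

compute = 1e24               # starting training compute in FLOP (illustrative)
for generation in range(1, 6):
    loss = a * compute ** -b
    gain_from_next_10x = loss - a * (10 * compute) ** -b
    cost = compute * cost_per_flop
    print(f"gen {generation}: compute={compute:.0e} FLOP, "
          f"cost=${cost:,.0f}, loss={loss:.3f}, "
          f"gain from next 10x={gain_from_next_10x:.3f}")
    compute *= 10
```

In this toy setup the loss improvement per generation shrinks by roughly 11% each time, while the bill goes from about $1 million to about $10 billion over five generations, which is the sense in which each 10x becomes both less valuable and harder to pull off.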

So, AI investment rests on financial expectations that in turn depend on LLMs enhancing productivity, which isn't happening and probably won't happen, both because of fundamental problems with LLMs and because scaling is becoming less valuable and less feasible. This implies an AI bubble, which implies the bubble will eventually pop.

There are also hints here and there that the companies involved may themselves have started to worry or become a bit desperate. For example, Microsoft ended its exclusive deal to provide compute to OpenAI, reportedly out of fear of overbuilding data centres. Some analysts and journalists have become suspicious of what looks like circular financing or round-trip deals between companies. Part of the worry is that, to greatly simplify, if Nvidia puts $1 into OpenAI and OpenAI spends that $1 on Nvidia hardware, Nvidia can book an additional $1 in revenue, but this isn't organic revenue from actually meeting the demand of consumers or businesses. If these deals get too complex and entangled (especially if some of them aren't even known to investors), it might become hard to tell which figures reflect the kind of real financial performance investors care about and which are simply artifacts of accounting practices.[2]

So, if the bubble pops, will that lead people who currently have a much higher estimation than I do of LLMs' current capabilities and near-term prospects to lower that estimation? If AI investment turns out to be a bubble, and it pops, would you change your mind about near-term AGI? Would you think it's much less likely? Would you think AGI is probably much farther away?

  1. ^

    Edited on October 22, 2025 at 2:35pm Eastern to add: Toby Ord, a philosopher at Oxford and a co-founder of Giving What We Can, just published a very compelling post about LLM scaling that I highly recommend reading.

  2. ^

    I always warn people who want to get into stock picking that you should have a very high bar for second-guessing the market. I also agree with the standard advice that trying to time the market is incredibly risky and most likely unwise in any instance. So, that's the caveat. 

    I will note, however, that the degree of concentration in AI stocks within the S&P 500 seems to have reduced its diversification by a worrying amount. In the past, it felt like quibbling around the margins to talk about the difference between an S&P 500 index fund and funds that track the performance of a broad international basket of stocks, including small-cap stocks. Now, I worry about people who have all their money in the S&P 500. But this is not investment advice and you should talk to a professional if you can — ideally one who has a fiduciary duty to you and doesn't have a conflict of interest (e.g. an incentive to sell you mutual funds with expensive fees).

Answers

I think there are two categories of answer here: 1) Finance as an input towards AGI, and 2) Finance as an indicator of AGI.

For 1), regardless of whether you think current LLM-based AI has fundamental flaws or not, the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products should be strong evidence that the economics are looking good, and that if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures. If the bubble were suddenly to burst completely, then even if we believed strongly that LLM-based AGI is imminent, there might be no more free money, so we'd now have an economic bottleneck to training new models. In this scenario, we'd have to update our timelines/estimates significantly (especially if you think straightforward scaling is our likely pathway to AGI).

For 2), probably not, though it depends on the situation. Financial markets are fickle enough that the bubble could pop for a bunch of reasons unrelated to current model trends: rare-earth export controls having an impact, slightly lower uptake figures, the decision of one struggling player (e.g. Meta) to leave the LLM space, or one highly hyped but ultimately disappointing application, for example. If I were unsure of the reason, would I assume that the market knows something I don't? Probably not. I might update slightly, but I'm not sure to what extent I'd trust the market to provide valuable information about AGI over direct information about model capabilities and diffusion.

But of course, if we do update on market shifts, the updating has to be at least somewhat symmetrical: if a market collapse would lengthen your timelines, then insane market growth should shorten them for the same reason.

> the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products should be strong evidence that the economics are looking good

Can you clarify what you mean by "the economics are looking good"? The economics of what are looking good for what?

I can think of a few different things this could mean, such as:

  • The amount of capital invested, the number of companies investing, and the number of users of AI products indicate there is no AI bubble
  • The amount of capital invested (and the competition) is making AGI more
...
Matrice Jacobine🔸🏳️‍⚧️
Most of OpenAI’s 2024 compute went to experiments
Yarrow Bouchard 🔸
This is what Epoch AI says about its estimates:

That's kind of interesting in its own right, but I wouldn't say that money allocated toward training compute for LLMs is the same idea as money allocated to fundamental AI research, if that's what you were intending to say. It's uncontroversial that OpenAI spends a lot on research, but I'm trying to draw a distinction between fundamental research, which, to me, connotes things that are more risky, uncertain, speculative, explorative, and may take a long time to pay off, and research that can be quickly productized.

I don't understand the details of what Epoch AI is trying to say, but I would be curious to learn. Do unreleased models include as-yet unreleased models such as GPT-5? (The timeframe is 2024 and OpenAI didn't release GPT-5 until 2025.) Would it also include o4? (Is there still going to be an o4?) Or is it specifically models that are never intended to be released? I'm guessing it's just everything that hasn't been released yet, since I don't know how Epoch AI would have any insight into what OpenAI intends to release or not.

I'm also curious how much trial and error goes into training LLMs. Does OpenAI often abort training runs or find the results disappointing? How many partial or full training runs go into training one model? For example, what percentage of the overall cost is the $400 million estimated for the final training run of GPT-4.5? 100%? 90%? 50%? 10%?

Overall, this estimate from Epoch AI doesn't seem to tell us much about what amount of money or compute OpenAI is allocating to fundamental research vs. R&D that can quickly be productized.
Jack_S🔸
When I say "the economics are looking good," I mean that the conditions for capital allocation towards AGI-relevant work are strong. Enormous investment inflows, a bunch of well-capitalised competitors, and mass adoption of AI products mean that, if someone has a good idea to build AGI within or around these labs, the money is there. It seems this is a trivial point: if there were significantly less capital, then labs couldn't afford extensive R&D, hardware, or large-scale training runs.

With respect to scaling vs. fundamental research, obviously "fundamental research" is a bit fuzzy, but it's pretty clear that the labs are doing a bit of everything. DeepMind is the most transparent about this: they're doing Gemini-related model research, fundamental science, AI theory and safety, etc., and have published thousands of papers. But I'm sure a significant proportion of OpenAI and Anthropic's work can also be classed as fundamental research.
Yarrow Bouchard 🔸
The overall concept we're talking about here is to what extent the outlandish amount of capital being invested in AI has increased budgets for fundamental AI research. My sense is that this is an open question without a clear answer. DeepMind has always done fundamental research, but I actually don't know whether that has significantly increased in the last few years. For all I know, it may even have decreased after Google merged Google Brain and DeepMind and seemed to shift focus away from fundamental research and toward productization.

I don't really know, and these companies are opaque and secretive about what they're doing, but my vague impression is that ~99% of the capital invested in AI over the last three years is going toward productizing LLMs, and I'm not sure it's significantly easier to get funding for fundamental AI research now than it was three years ago. For all I know, it's harder.

My impression comes from anecdotes from AI researchers. I already mentioned Andrej Karpathy saying that he wanted to do fundamental AI research at OpenAI when he re-joined in early 2023, but the company wanted him to focus on product. I got the impression he was disappointed, and I think this is a reason he ultimately quit a year later. My understanding is that during his previous stint at OpenAI, he had more freedom to do exploratory research.

The Turing Award-winning researcher Richard Sutton said in an interview something along the lines of: no one wants to fund basic research, or it's hard to get money to do basic research. Sutton personally can get funding because of his renown, but I don't know about lesser-known researchers.

A similar sentiment was expressed by the AI researcher François Chollet here:

Undoubtedly, there is an outrageous amount of money going toward LLM research that can be quickly productized, toward scaling LLM training, and toward LLM deployment. Initially, I thought this meant the AI labs would spend a lot more money on basic re...