A survey of 2,778 AI researchers conducted in October 2023, seven months after the release of GPT-4 in March 2023, used two differently worded definitions of artificial general intelligence (AGI). The first definition, High-Level Machine Intelligence, said the AI system would be able to do any “task” a human can do. The second definition, Full Automation of Labour, said it could do any “occupation”. (Logically, the former definition would seem to imply the latter, but this is apparently not how the survey respondents interpreted it.)
The forecast for High-Level Machine Intelligence was as follows:
- 10% probability by 2027
- 50% probability by 2047
For Full Automation of Labour, the forecast was:
- 10% probability by 2037
- 50% probability by 2116
Another survey result of interest: 76% of AI experts surveyed in 2025 thought it was unlikely or very unlikely that current AI methods, such as large language models (LLMs), could be scaled up to achieve AGI.
Also of interest: there have been at least two surveys of superforecasters about AGI, but unfortunately both were conducted in 2022, before the launch of ChatGPT in November of that year. The two surveys used different definitions of AGI; check the sources for details.
The Good Judgment superforecasters gave the following forecast:
- 12% probability of AGI by 2043
- 40% probability of AGI by 2070
- 60% probability of AGI by 2100
The XPT superforecasters forecast:
- 1% probability of AGI by 2030
- 21% probability of AGI by 2050
- 75% probability of AGI by 2100
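(For readers who want to eyeball how these numbers compare at a common date, here is a rough sketch in Python that linearly interpolates between the probability/year pairs quoted above to get an implied probability of AGI by 2050 for each group. The interpolation, and the choice of 2050 as the comparison year, are purely my own illustration and not part of any survey's methodology.)

```python
# Rough comparison of the forecasts quoted above. The probability/year
# pairs come from the surveys as reported in this post; the piecewise-
# linear interpolation between them is my own simplification.
import numpy as np

forecasts = {
    "AI researchers (High-Level Machine Intelligence)": [(2027, 0.10), (2047, 0.50)],
    "AI researchers (Full Automation of Labour)":       [(2037, 0.10), (2116, 0.50)],
    "Good Judgment superforecasters":                   [(2043, 0.12), (2070, 0.40), (2100, 0.60)],
    "XPT superforecasters":                             [(2030, 0.01), (2050, 0.21), (2100, 0.75)],
}

def prob_by(year, points):
    """Interpolate the cumulative probability of AGI arriving by `year`."""
    years, probs = zip(*points)
    # np.interp clamps to the nearest endpoint outside the reported range
    return float(np.interp(year, years, probs))

for name, points in forecasts.items():
    print(f"{name}: ~{prob_by(2050, points):.0%} implied probability by 2050")
```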
My opinion: forecasts of AGI are not rigorous and can’t actually tell us much about when AGI will be invented. We should be extremely skeptical of all these numbers because they’re all just guesses.
That said, if you are basing your view of when AGI will be invented largely on other people's guesses, you should get a clear picture of what those guesses actually are. If you just rely on the guesses you happen to hear, what you hear will be biased. For example, people are more likely to repeat guesses that AGI will be invented shockingly soon, because that is much more interesting than a guess that it will be invented sometime in the 2100s. You might also be in a filter bubble or echo chamber, hearing a biased sample of guesses drawn from your social networks rather than a representative sample of AI experts or expert forecasters.
I have never heard a good, principled argument for why people in effective altruism should believe guesses that put AGI much sooner than the guesses above from the AI researchers and the superforecasters. You should worry about selectively accepting some evidence and rejecting other evidence due to confirmation bias.
My personal guess — which is as unrigorous as anyone else’s — is that the probability of AGI before January 1, 2033 is significantly less than 0.1%. One reason is that developing AGI will require progress in fundamental science that currently isn’t getting much funding or attention, and science usually takes a long time to move from a pre-paradigmatic stage to a stage where engineers have mastered building technology on the new scientific paradigm. As far as I know, that transition has never in history taken only seven years.
The overall purpose of this post is to expose people in effective altruism to viewpoints that differ from what you might have heard so far, and to encourage you to worry about confirmation bias and about filter bubble/echo chamber bias. I strongly believe that, in time, many people in EA will come to regret how the movement’s focus has shifted toward AGI and will come to see it as a mistake. People will wonder why they ever trusted a minority of experts over the majority, or why they trusted non-expert bloggers, tweeters, and forum posters over people with expertise in AI or forecasting. I’m not saying that the expert majority forecasts are right; I’m saying that all AGI forecasts are completely unrigorous and worthy of extreme skepticism. But if you already put your trust in forecasting, then by at least exposing yourself to the actual diversity of opinion you might begin to question things you accepted too readily.
A few other relevant ideas to consider:
- LLM scaling may be running out of steam, both in terms of compute and data
- A recent study found that AI coding assistants make human coders 19% less productive
- There is growing concern about an AI financial bubble because LLMs are not turning out to be practically useful for as many things as was hoped; for example, the vast majority of businesses report no financial benefit from using LLMs, and many companies have abandoned their efforts to use them
- Despite many years of effort from top AI talent and many billions of dollars of investment, self-driving cars remain at roughly the same level of deployment as 5 years ago or even 10 years ago, nowhere near substituting for human drivers on a large scale[1]
[1] Andrej Karpathy, an AI researcher formerly at OpenAI who led Tesla’s autonomous driving AI from 2017 to 2022, recently made the following remarks on a podcast:
> …self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo and so on has very few cars. … Also, when you look at these cars and there’s no one driving, I actually think it’s a little bit deceiving because there are very elaborate teleoperation centers of people kind of in a loop with these cars. I don’t have the full extent of it, but there’s more human-in-the-loop than you might expect. There are people somewhere out there beaming in from the sky. I don’t know if they’re fully in the loop with the driving. Some of the time they are, but they’re certainly involved and there are people. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.