Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
This is a good, clear, helpful, informative comment, right up until this last part:
Fun fact: it's actually this same focus on finding causes that are important (potentially large in scale), neglected (not many other people are focused on them) and tractable, that has also led EA to take some "sci-fi doomsday scenarios" like wars between nuclear powers, pandemics, and AI risk, seriously. Consider looking into it sometime -- you might be surprised how plausible and deeply-researched these wacky, laughable, uncool, cringe, "obviously sci-fi" worries really are! (Like that countries might sometimes go to war with each other, or that it might be dangerous to have university labs experimenting with creating deadlier versions of common viruses, or that powerful new technologies might sometimes have risks.)
Nuclear war and pandemics are obviously real risks. Nuclear weapons exist and have been used. The Cold War was a major geopolitical era in recent history. We just lived through COVID-19, and there have been pandemics before. The OP only mentioned "some sci-fi doomsday scenarios regarding AI", nothing about nuclear war or pandemics.
The euphemism "powerful new technologies might sometimes have risks" considerably undersells the concept of AI doomsday (or utopia), which is not about the typical risks of new technology but is eschatological and millennialist in scope. New technologies sometimes have risks, but that general concept in no way supports fears of AI doomsday.
As far as I can tell, most AI experts disagree with the view that AGI is likely to be created within the next decade and disagree with the idea that LLMs are likely to scale to AGI. This is entirely unlike the situation with nuclear war or pandemics, where there is much more expert consensus.
I don’t agree that the AI doomsday fears are deeply researched. The more I dive into EA/rationalist/etc. arguments about AGI and AI risk, the more I’m stunned by how unbelievably poorly and shallowly researched most of the arguments are. Many of the people making these arguments seem not to have an accurate grasp of the definitions of important concepts in machine learning; seem not to have considered some of the obvious objections before; make arguments using fake charts with made-up numbers and made-up units; make obviously false and ridiculous claims (e.g., that GPT-4 has the general intelligence of a smart high school student); do seat-of-the-pants theorizing about cognitive science and philosophy of mind without any relevant education or knowledge; deny inconvenient facts; jump from merely imagining a scenario to concluding that it’s realistic and likely, with little to no evidentiary or argumentative support; treat subjective guesses as data or evidence; and so on. It is some of the worst "scholarship" I have ever encountered in my life. It’s akin to pseudoscience or conspiracy theories — just abysmal, abysmal stuff. The worst irrationality.
The more I raise these topics and invite people to engage with me on them, the worse and worse my impression gets of the "research" behind them. Two years ago, I assumed AGI existential risk discourse was much more rational, thoughtful, and plausible than I think it is now; that initial impression came from knowing much less than I do now and from giving people the benefit of the doubt. I wouldn’t have imagined that the ridiculous stuff that gets celebrated as a compelling case would even be considered acceptable. The errors are so unbelievably bad that I’m in disbelief at what people can get away with.
I don’t think it’s fair for you to sneer at the OP for having skepticism about AI doomsday, since their initial reaction is rational and correct, and your defense is, in my opinion, misleading.
I still upvoted this comment, though, since it was mostly helpful and well-argued.
I think near-term AGI is highly unlikely (specifically, I think there's significantly less than a 1 in 5,000 or 0.02% chance of AGI before the end of 2034) and I also think that claims about existential risk from AGI are poorly supported, but my impression of people in EA who do think near-term AGI is likely and do think x-risk from AGI is significant is that a lot of them have negative views on OpenAI. The EA community doesn't have an organization that typically makes position statements on behalf of the community. The most straightforward way to get something like this going would be to post an open letter on the forum and ask people to sign on. Probably some people would sign it. (I wouldn't.)
People in EA might be more reluctant to denounce Anthropic, though, given that Holden Karnofsky now works there, Dustin Moskovitz of Good Ventures and Coefficient Giving (formerly Open Philanthropy) is an investor, Joe Carlsmith (formerly of Coefficient Giving) now works there, Amanda Askell has worked there for a while, and so on. Also, some people see Anthropic as the white hat to OpenAI's black hat, even though there's basically no difference (i.e. both just make chatbots and everything's fine).
You could reduce this to a single point probability. The math is a bit complicated but I think you'd end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren't actually claiming to have a 0.001% credence.
Yeah, I’m saying the probability is significantly less than 0.02% without saying exactly how much less — that’s much harder to pin down, and there are diminishing returns to exactitude here — so that means it’s a range from 0.00% to <0.02%. Or just <0.02%.
The simplest solution, and the correct/generally recommended solution, seems to be to simply express the probability, unqualified.
Thank you. Karma downvotes have ceased to mean anything to me.
People downvote for no discernible reason, at least not reasons that are obvious to me, nor that they explain. I'm left to surmise what the reasons might be, including (in some cases) possibly disagreement, pique, or spite.
Neutrally informative things get downvoted, factual/straightforward logical corrections get downvoted, respectful expressions of mainstream expert opinion get downvoted — everything, anything. The content is irrelevant and the tone/delivery is irrelevant. So, I've stopped interpreting downvotes as information.
Maybe this is a misapplication of the concept of confidence intervals — math is not my strong suit, nor is forecasting, so let me know — but what I had in mind is that I'm forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the "correct" probability range (whatever that ends up meaning).
But now that I'm thinking about it more and doing a cursory search, I think with a range of probabilities for a given date (e.g. 0.00% to 0.02% by end of 2034) as opposed to a range of years (e.g. 5 to 20 years) or another definite quantity, the probability itself is supposed to represent all the uncertainty and the confidence interval is redundant.
As you can tell, I'm not a forecaster.
That did come across to me when I watched the interview. For example, in my summary:
Sutskever specifically predicts that another 100x scaling of AI models would make a difference, but would not transform AI capabilities.
He was cagey about his specific ideas on the "something important" that "will continue to be missing". He said his company is working on it, but he can’t disclose details.
I find this secrecy to be a bit lame. I like when companies like DeepMind publish replicable research or, better yet, open source code and datasets. Even if you don’t want to go that far, it’s possible to talk about ideas in general terms without giving away the trade secrets that would make them easy to copy.
Most of the startups that have focused primarily on ambitious fundamental AI research — Vicarious and Numenta are the two examples I’m thinking of — have not ended up successfully productizing any of their research (so far). DeepMind’s done amazing work, but the first AI model they developed with major practical usefulness was AlphaFold, six years after their acquisition by Google and ten years after their founding, and they didn’t release a major product until DeepMind merged with Google Brain in 2023 and worked on Gemini. It’s more likely that a research-focused startup like Sutskever’s company, Safe Superintelligence, will not have any lucrative, productizable ideas, at least not for a long time, than that it will have ideas so great that even just disclosing their general contours would let other companies steal away its competitive advantage.
My guess is that Safe Superintelligence doesn’t yet have any fantastic ideas that OpenAI, DeepMind, and others don’t also have, and the secrecy covers for that fact just as conveniently as it protects the company’s trade secrets or IP.
I don’t think AGI is five times less likely than I did a week ago; rather, I realized the number I had been translating my qualitative, subjective intuition into was five times too high. I also didn’t change my qualitative, subjective intuition of the probability of a third-party candidate winning a U.S. presidential election. What changed was just the numerical estimate of that probability — from an arbitrarily rounded 0.1% figure to a still quasi-arbitrary but at least somewhat more rigorously derived 0.02%. The two outcomes remain logically disconnected.
I agree that forecasting AGI is an area where any sense of precision is an illusion. The level of irreducible uncertainty is incredibly high. As far as I’m aware, the research literature on forecasting long-term or major developments in technology has found that nobody (not forecasters and not experts in a field) can do it with any accuracy. With something as fundamentally novel as AGI, there is an interesting argument that it’s impossible, in principle, to predict, since the requisite knowledge to predict AGI includes the requisite knowledge to build it, which we don’t have — or at least I don't think we do.
The purpose of putting a number on it is to communicate a subjective and qualitative sense of probability in terms that are clear, that other people can understand. Otherwise, it’s hard to put things in perspective. You can use terms like "extremely unlikely", but what does that mean? Is something that has a 5% chance of happening extremely unlikely? So, rolling a natural 20 is extremely unlikely? (There are guides to determining the meaning of such terms, but they rely on assigning numbers to the terms, so we’re back to square one.)
Something that works just as well is comparing the probability of one outcome to the probability of another outcome. So, just saying that the probability of near-term AGI is less than the probability of Jill Stein winning the next presidential election does the trick. I don’t know why I always think of things involving U.S. presidents, but my point of comparison for the likelihood of widely deployed superintelligence by the end of 2030 was that I thought it more likely that the JFK assassination turned out to be a hoax and that JFK was still alive.[1]
I initially resisted putting any definite odds on near-term AGI, but I realized a lack of specificity was hurting my attempts to get my message across.
This approach doesn't work perfectly, either, because what if different people have different opinions or intuitions about the probability of outcomes like Jill Stein winning? But putting low probabilities (well below 1%) into numbers has a counterpart problem: you don't know whether you and someone else share the same intuitive understanding of what a 1 in 1,000, 1 in 10,000, or 1 in 100,000 chance means for events with high irreducible uncertainty that are rare (e.g. recent U.S. presidential elections), unprecedented (e.g. AGI), or one-off (e.g. Russia ending the current war against Ukraine), and that can't be statistically or mechanically predicted.
When NASA models the chance of an asteroid hitting Earth as 1 in 25,000 or the U.S. National Weather Service calculates the annual individual risk of being hit by lightning as 1 in 1.22 million, I trust that has some objective, concrete meaning. If someone subjectively guesses that Jill Stein has a 1 in 25,000 chance of winning in 2028, I don't know if someone with a very similar gut intuition about her odds would also say 1 in 25,000, or if they'd say a number 100x higher or lower.
Possibly forecasters and statisticians have a good intuitive sense of this, but most regular people do not.
Slight update to the odds I’ve been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I’ve been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That’s something I think is significantly more probable than AGI by the end of 2032. Previously, I’d been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight put the odds on Evan McMullin — who was running as an independent, not for a third party, but close enough — becoming president at 1 in 5,000 or 0.02%. Even these odds are quasi-arbitrary, since McMullin only became president in simulations where neither of the two major party candidates won a majority of Electoral College votes. In such scenarios, Nate Silver arbitrarily put the odds at 10% that the House would vote to appoint McMullin as the president.
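For concreteness, here is that arithmetic as a quick sketch. The 1 in 5,000 headline figure and the 10% conditional figure are FiveThirtyEight's; backing out the implied probability of the no-majority scenario is my own rough reconstruction, not a number they published:

```python
# FiveThirtyEight's 2016 headline odds of McMullin becoming president
p_mcmullin = 1 / 5_000  # 0.02%

# Nate Silver's (admittedly arbitrary) conditional probability that the House
# would pick McMullin, given that no candidate won an Electoral College
# majority and McMullin was in contention
p_house_picks_him = 0.10

# Implied probability of that deadlock scenario occurring in the first place
p_deadlock = p_mcmullin / p_house_picks_him
print(f"{p_deadlock:.2%}")  # -> 0.20%
```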
So, for now, it is more accurate for me to say: the probability of the creation of AGI before the end of 2032 is significantly less than 1 in 5,000 or 0.02%.
I can also expand the window of time from the end of 2032 to the end of 2034. That’s a small enough expansion that it doesn’t affect the probability much. Extending the window to the end of 2034 covers the latest dates that have appeared on Metaculus since the big dip in its timeline that happened in the month following the launch of GPT-4. By the end of 2034, I still put the odds of AGI significantly below 1 in 5,000 or 0.02%.
My confidence interval is over 95%. [Edited Nov. 28, 2025 at 3:06pm Eastern. See comments below.]
I will continue to try to find other events to anchor my probability to. It’s difficult to find good examples. An imperfect point of comparison is an individual’s annual risk of being struck by lightning, which is 1 in 1.22 million. Over 9 years, the risk is about 1 in 135,000. Since the creation of AGI within 9 years seems less likely to me than that I’ll be struck by lightning, I could also say the odds of AGI’s creation within that timeframe are less than 1 in 135,000, or less than 0.0007%.
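For concreteness, here is the cumulative-risk arithmetic behind the 1 in 135,000 figure as a quick sketch, assuming (as a simplification) that the 1 in 1.22 million annual risk is independent and identical each year:

```python
annual_risk = 1 / 1_220_000  # estimated annual individual risk of a lightning strike

# Exact compounding over 9 years: probability of being struck at least once
nine_year_risk = 1 - (1 - annual_risk) ** 9

# For risks this small, the linear approximation 9 * annual_risk is nearly
# identical, which is where "1 in 135,000" comes from (1,220,000 / 9 ≈ 135,556)
print(f"1 in {1 / nine_year_risk:,.0f}")  # -> roughly 1 in 135,556
```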
It seems like once you get significantly below 0.1%, though, it becomes hard to intuitively grasp the probability of events or find good examples to anchor off of.