Yarrow Bouchard 🔸

1400 karma · Canada · strangecosmos.substack.com

Bio

Pronouns: she/her or they/them. 

Parody of Stewart Brand’s whole Earth button.

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

I write on Substack, and used to write on Medium.

Sequences (2)

Criticism of specific accounts of imminent AGI
Skepticism about near-term AGI

Comments (692)

Topic contributions (13)

[Adapted from this comment.]

Two pieces of evidence commonly cited for near-term AGI are AI 2027 and the METR time horizons graph. AI 2027 is open to multiple independent criticisms, one of which concerns its use of the METR time horizons graph to forecast near-term AGI or AI capabilities more generally. That use is not supported by the data and methodology behind the graph.

Two strong criticisms that apply specifically to the AI 2027 forecast are:

  • It depends crucially on the subjective intuitions or guesses of the authors. If you don't personally share the authors' intuitions, or don't personally trust that the authors' intuitions are likely correct, then there is no particular reason to take AI 2027's conclusions seriously.
  • Credible critics claim that the headline results of the AI 2027 timelines model are largely baked in by the authors' modelling decisions, irrespective of what data the model uses. That means, to a large extent, AI 2027's conclusions are not actually determined by the data. As with the previous point, this makes AI 2027's conclusions, in effect, a restatement of the pre-existing beliefs and assumptions that the authors chose to embed in their timelines model.

AI 2027 is largely based on extrapolating the METR time horizons graph. The following criticisms of the METR time horizons graph therefore extend to AI 2027:

  • Some of the serious problems and limitations of the METR time horizons graph are sometimes (but not always) clearly disclosed by METR employees. Note the wide difference between the caveated description of what the graph says and the interpretation of the graph as a strong indicator of rapid, exponential improvement in general AI capabilities.
  • Gary Marcus, a cognitive scientist and AI researcher, and Ernest Davis, a computer scientist and AAAI fellow, co-authored a blog post on the METR graph that looks at how the graph was made and concludes that “attempting to use the graph to make predictions about the capacities of future AI is misguided”.
  • Nathan Witkin, a research writer at NYU Stern’s Tech and Society Lab, published a detailed breakdown of some of the problems with METR’s methodology. He concludes that it’s “impossible to draw meaningful conclusions from METR’s Long Tasks benchmark” and that the METR graph “contains far too many compounding errors to excuse”. Witkin calls out a specific tweet from METR, which presents the METR graph in the broad, uncaveated way that the AI 2027 authors interpret it. He calls the tweet “an uncontroversial example of misleading science communication”.

Since AI 2027 leans so heavily on this interpretation of the METR graph to make its forecast, it is hard to see how AI 2027 could be credible if its interpretation of the METR graph is not credible. 
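
To make concrete what extrapolating the METR time horizons graph involves, here is a minimal sketch of the kind of log-linear trend fit and projection in question. The data points are hypothetical placeholders, not METR's actual measurements, and the code illustrates only the general technique of fitting and extending an exponential trend, not METR's or AI 2027's actual models.

```python
# Illustrative sketch only: hypothetical (date, time-horizon) points standing in
# for METR-style data; these are not METR's actual numbers or methodology.
import numpy as np

# Hypothetical data: years since 2019, and the length of task (in minutes) that
# an AI system completes with ~50% success at that point in time.
years = np.array([0.0, 1.5, 3.0, 4.5, 5.5])
horizon_minutes = np.array([0.1, 0.8, 4.0, 25.0, 100.0])

# Fit a straight line to log2(horizon) vs. time, i.e. assume exponential growth.
slope, intercept = np.polyfit(years, np.log2(horizon_minutes), 1)
doubling_time_months = 12.0 / slope

# Extrapolate: when does the fitted trend reach month-long tasks (~10,000 minutes)?
years_to_month_long_tasks = (np.log2(10_000) - intercept) / slope

print(f"Fitted doubling time: {doubling_time_months:.1f} months")
print(f"Trend reaches month-long tasks ~{years_to_month_long_tasks:.1f} years after 2019")
```

The criticisms above are about whether a fit like this, over a benchmark constructed the way METR's was, licenses any projection about general AI capabilities in the first place.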

It's worth contrasting AI 2027 and similar forecasts of near-term AGI with expert opinion:

  • 76% of AI experts think it is unlikely or very unlikely that existing approaches to AI, which include LLMs, will scale to AGI. (See page 66 of the AAAI 2025 survey. See also the preceding two pages about open research challenges in AI — such as continual learning, long-term planning, generalization, and causal reasoning — none of which are about scaling more, or at least not uncontroversially so. If you want an example of a specific, prominent AI researcher who emphasizes the importance of fundamental AI research over scaling, Ilya Sutskever believes that further scaling will be inadequate to get to AGI.)
  • Expert surveys about AGI timelines are not necessarily reliable, but the AI Impacts survey in late 2023 found that AI researchers’ median year for AGI is 20 to 90 years later than the AI 2027 scenario.

Two overall takeaways:

  • There are good reasons to be highly skeptical of AI 2027 and the METR time horizons graph as evidence for near-term AGI or for a rapid, exponential increase in general AI capabilities.
  • Peer review in academic research is designed to catch these sorts of flaws prior to publication. This means flaws can be fixed, the claims made can be moderated and properly caveated, or publication can be prevented entirely so that research below a certain threshold of quality or rigour is not given the stamp of approval. (This helps readers know what's worth paying attention to and what isn't.) Research published via blogs and self-published reports doesn't go through academic peer review, and may fall below the standards of academic publishing. In the absence of peer review doing quality control, deeply flawed research, or deeply flawed interpretations of research, may propagate.

> Even if there is say, a 30% chance that in the next 5 years even 30-40% of white collar jobs get replaced...that feels like a massive shock to high-income countries way of life, and a major shift in the social contract for the generation of kids who have got into massive debt for university courses only to find substantially reduced market opportunities. That requires substantial state action.

Rather than 30%, I would personally guess the chance is much less than 0.01% (1 in 10,000), possibly less than 0.001% (1 in 100,000) or even 0.0001% (1 in 1 million).

I agree with Huw that there's insufficient evidence to say that AI is causing significant or even measurable unemployment right now, and I'm highly skeptical this will happen anytime soon. Indeed, I'd personally guess there's a ~95% chance there's an AI bubble that will pop sometime within the next several years. So far, AI has stubbornly refused to deliver the sort of productivity or automation that has been promised. I think the problem is a fundamental science problem, not something that can be solved with scaling or incremental R&D. 

However, let's imagine a scenario where in a short time, say 5 years, a huge percentage of jobs get automated by AI — automation of white-collar work on a massive scale.[1] 

What should governments' response be right now, before this has started to happen, or at least before there is broad agreement it has started to happen? People often talk about this as a gloomy, worrying outcome. I suppose it could turn out to be, but why should that be the default assumption? It would lead to much faster economic growth than developed countries are used to seeing. It might even be record-breaking, unprecedented economic growth. It would be a massive economic windfall. That's a good thing.[2]

To be a bit more specific, when people imagine the sort of AI that is capable of automating white-collar work on the scale you're describing (30-40% of jobs), they also often imagine wildly high rates of GDP growth, ranging from 10% to 30%.[3][4] The level of growth is supposed to be commensurate with the percentage of labour that AI can automate. I don't know about these specific figures, but the general idea is intuitive.

Surely passing UBI would become much easier once both a) unemployment significantly increased and b) economic growth significantly increased. There would be both a clear problem to address and a windfall of money with which to address it. By analogy, it would have been much harder for governments to pass stimulus bills in January 2020 in anticipation of covid-19. In March 2020, it was much easier, since the emergency was clear. But the covid-19 emergency caused a recession. What if, instead, it had caused an economic boom, and a commensurate increase in government revenues? Surely, then, it would have been even easier to pass stimulus bills. 

This is why, even if I accept the AI automation scenario for the sake of argument, I don't worry about the practical, logistical, or fiscal obstacles to enacting UBI in a hurry. Governments can send cheques to people on short notice, as we saw with covid-19. This would presumably be especially true if the government and the economy overall were experiencing a windfall from AI. The sort of administrative bottlenecks we saw in some places during covid-19 — those could be solved by AI, since we're stipulating an unlimited supply of digital white-collar workers. Maybe there are further aspects to implementing UBI that would be more complicated than sending people cheques and that couldn't be assisted by AI. What would those be?

The typical concerns raised over UBI are that it would be too expensive, that it would be poorly targeted (i.e. it would be more efficient to run means-tested programs), that it would discourage people from working, and that it would reduce economic growth. None of those apply to this scenario. 

If there's more complexity in implementing UBI that I'm not considering, surely in this scenario politicians would quickly become focused on dealing with that complexity, and civil servants (and AI workers) would be assigned to the task. As opposed to something like decarbonizing the economy, UBI seems like it could be implemented relatively quickly and easily, given a sudden emergency that called for it and a sudden windfall of cash. Part of the supposed appeal of UBI is its simplicity relative to means-tested programs like welfare and non-cash-based programs like food stamps and subsidized housing. So, if you're not satisfied with my answer, maybe you could elaborate on why you think it wouldn't be so easy to figure out UBI in a hurry.

As mentioned up top, I regard this just as an interesting hypothetical, since I think the chance of this actually happening is below 0.01% (1 in 10,000).  

  1. ^

    Let's assume, for the sake of argument, that the sort of dire outcomes hypothesized under the heading of AI safety or AI alignment do not occur. Let's assume that AI is safe and aligned, and that it's not able to be misused by humans to destroy or take over the world.

    Let's also assume that AI won't be a monopoly or duopoly or oligopoly but that, as is the case today, even open source models that are free to use remain a viable alternative to the most cutting-edge proprietary models. We'll imagine that the pricing power of the AI companies will be put in check by competition from both proprietary and open source models. Sam Altman might become a trillionaire, but only a small fraction of the wealth created by AI will be captured by the AI companies. (As an analogy, think about how much wealth is created by office workers using Windows PCs, and how much of that wealth is captured by Microsoft or by PC manufacturers like HP and Dell, or components manufacturers like Intel.)

    I'm putting these other concerns aside in order to focus on labour automation and technological unemployment, since that's the concern you raised.

  2. ^

    The specific worries around AI automation people most commonly cite are about wealth distribution, and about people finding purpose and meaning in their lives if there's large-scale technological unemployment. I'll focus on the wealth distribution worry, since the topic is UBI, and your primary concern seems to be economic or material.

    Some people are also worried about workers being disempowered if they can be replaced by AI, at the same time that capital owners become much wealthier. If they're right to worry about that, then maybe it's important to consider well in advance of it happening. Maybe workers should act while they still have power and leverage. But it's a bit of a separate topic, I think, from whether to start implementing UBI now. Maybe UBI would be one of a suite of policies workers would want to enact in advance of large-scale AI automation of labour, but what's to prevent UBI from being repealed after workers are disempowered?

    For the sake of this discussion, I'll assume that workers (or former workers) will remain politically empowered, and healthy democracies will remain healthy.

  3. ^

    Potlogea, Andrei. “AI and Explosive Growth Redux.” Epoch AI, 20 June 2025, https://epoch.ai/gradient-updates/ai-and-explosive-growth-redux.

  4. ^

    Davidson, Tom. “Could Advanced AI Drive Explosive Economic Growth?” Coefficient Giving, 25 June 2021, https://coefficientgiving.org/research/could-advanced-ai-drive-explosive-economic-growth/.


> Hence my initial mention of "high state capacity"? But I think it's fair to call abundance a deregulatory movement overall, in terms of, like... some abstract notion of what proportion of economic activity would become more vs less heavily involved with government, under an idealized abundance regime.

I guess it depends on what version of abundance you're talking about. I have in mind the book Abundance as my primary idea of what abundance is, and in that version of abundance, I don't think it's clear that a politics of abundance would result in less economic activity being heavily involved with government. It might depend on how you define that. If laws, regulations, or municipal processes that obstruct construction count as heavy involvement with the government, then that would count for a lot of economic activity, I guess. But if we don't count that and we do count higher state capacity, like more engineers working for the government, then maybe abundance would lead to a bigger government. I don't know.

I think you're right about why abundance is especially appealing to people of a certain type of political persuasion. A lot of people with more moderate, centrist, technocratic, socially/culturally less progressive, etc. tendencies have shown a lot of enthusiasm about the abundance label. I'm not ready to say that they now own the abundance label and abundance just is moderate, centrist, technocratic, etc. If a lot of emos were fans of my favourite indie rock band, I wouldn't be ready to call it an emo band, even if I were happy for the emos' support.

There are four reasons I want to deconflate abundance and those other political tendencies:

  1. It's intellectually limiting, and at least partially incorrect, to say that abundance is conceptually the same thing as a lot of other independent things that a lot of people who like abundance happen to also like.
  2. I think the coiners and popularizers of abundance deserve a little consideration, and they don't (necessarily, wholeheartedly) agree with those other political tendencies. For instance, Ezra Klein has, to me, been one of the more persuasive proponents of Black Lives Matter for people with a wonk mindset, and has had guests on his podcast from the policy wonk side of BLM to make their case. Klein and Thompson have both expressed limited, tepid support for left-wing economic populist policies, conditional on abundance-style policies also getting enacted.
  3. I'm personally skeptical of many of the ideas found within those other political tendencies, both on the merits and in terms of what's popular or wins elections. (My skepticism has nothing to do with my skepticism of the ideas put forward in the book Abundance, which overall I strongly support and which are orthogonal to the ideas I'm skeptical of.)
  4. It's politically limiting to conflate abundance and these other political tendencies when this isn't intellectually necessary. Maybe moderates enjoy using abundance as a rallying cry for their moderate politics, but conflating abundance and moderate politics makes it a polarized, factional issue and reduces the likelihood of it receiving broad support. I would rather see people try to find common ground on abundance rather than claim it for their faction. Gavin Newsom and Zohran Mamdani are both into abundance, so why can't it have broad appeal? Why try to make it into a factional issue rather than a more inclusive liberal/left idea?

Edit: I wrote the above before I saw what you added to your comment. I have a qualm with this:

> But, uh, this is the EA Forum, which is in part about describing the world truthfully, not just spinning PR for movements that I happen to admire.  And I think it's an appropriate summary of a complex movement to say that abundance stuff is mostly a center-left, deregulatory, etc movement.

I think it really depends on which version of abundance you're talking about. If you're talking about the version in the book Abundance by Ezra Klein and Derek Thompson, or the version that the two authors have more broadly advocated (e.g. on their press tour for the book, or in their writing and podcasts before and after the book was published), then, no, I don't think that's an accurate summary of that particular version of abundance. 

If you're referring to the version of abundance advocated by centrists, moderates, and so on, then, okay, it may be accurate to say that version of abundance is centrist, moderate, etc. But I don't want to limit how I define "abundance" to just that version, for the reasons I gave above.

I don't think it makes sense to call it "spin" or "PR" to describe an idea in the terms used by the originators of that idea, or in terms that are independently substantively correct, e.g. as supported by examples of progressive supporters of abundance like Mamdani. If your impression of what abundance is comes from centrists, moderates, and so on, then maybe that's why you have the impression that abundance simply is centrist, moderate, etc. and that saying otherwise is "untruthful" or "PR". There is no "canonical" version of abundance, so to some extent, abundance just means what people who use the term want it to mean. So, that impression of abundance isn't straightforwardly wrong. It's just avoidably limited.

> Imagine someone complaining -- it's so unfair to describe abundance as a "democrat" movement!!  That's so off-putting for conservatives -- instead of ostracising them, we should be trying to entice them to adopt these ideas that will be good for the american people!  Like Montana and Texas passing great YIMBY laws, Idaho deploying modular nuclear reactors, etc.  In lots of ways abundance is totally coherent with conservative goals of efficient government services, human liberty, a focus on economic growth, et cetera!!

To the extent people care what Abundance says in deciding what abundance is, one could quote from the first chapter of the book, specifically the section "A Liberalism That Builds", which explicitly addresses this topic. 

This might be a correct description of some people who have adopted the abundance label, but it's not a correct description of the book Abundance or its authors, Ezra Klein and Derek Thompson, who coined and popularized abundance (or abundance liberalism) as a political term and originated the abundance movement that's playing out in U.S. politics right now. Abundance is deregulatory on NIMBY restrictions to building housing and environmental bills that are perversely used to block solar and wind projects. However, it also advocates for the government of California to in-house the engineering of its high-speed rail project rather than try to outsource it to private contractors. There is a chapter on government science funding, of which it is strongly in favour. Abundance is in favour of the government getting out of the way, or deregulating, in some areas, such as housing, but in other areas, it's in favour of, for lack of a better term, big government.

Others are of course free to read Abundance and run with it any direction they like, even if the authors disagree with it. Nobody owns the abundance label, so people can use it how they like. But I think the framing of abundance as necessarily or inherently moderate, technocratic, or deregulatory is limiting. That's one particular way that some people think about abundance, but not everybody has to think of it that way, and not even the originators of the idea do. The progressive mayor of New York City, Zohran Mamdani, is a fan of Abundance and recently voiced his support for YIMBY housing reform in the state of New York. Abundance is not synonymous with either the moderate or progressive wings of the Democratic Party; it's a set of ideas that is compatible with either a moderate or progressive political orientation.

Klein and Thompson, and of course Mamdani, are not "unified in their opposition to lefty economic proposals".

I think saying that abundance implies moderate politics or technocracy is not only limiting and at least partially inaccurate, it also encourages progressives to oppose abundance when, as we've seen in Mamdani's case, it is compatible with progressivism and largely independent from and orthogonal to (in the figurative sense) the disagreements between moderates and progressives.

> technocratic... more moderate

Abundance or abundance liberalism originated with the journalists Ezra Klein and Derek Thompson in their book Abundance (which was my favourite non-fiction book of 2025). Since Klein and Thompson popularized the term abundance in Democratic politics, a number of others have latched onto the term and assigned their own meanings to it. Klein and Thompson themselves do not advocate for the Democratic Party to become more moderate. Maybe some people who have picked up the abundance label do. But Klein, for instance, argues that the Democratic Party needs to allow for ideological diversity based on geography. That is, it should be a party that can include both a mayor like Zohran Mamdani in New York City and a senator like Joe Manchin in West Virginia. Klein and Thompson’s own views are not more moderate than the Democratic Party currently is or has been recently. 

One of the popular criticisms voiced against Abundance following the publication of the book is that economic populist policies poll better than abundance-style policies. Klein and Thompson’s reply is that these policies are not incompatible, so it would be possible for Democratic politicians to do both. A mistake many people make is to interpret abundance liberalism as a complete theory of politics or a complete political worldview. It isn’t one, and it doesn’t answer every question Democratic politicians need to answer.

Abundance is specifically focused on governments providing people with an abundance of the things they need and expect: housing, infrastructure (e.g., highways and high-speed rail), government administrative services (e.g. timely processing of claims for unemployment benefits), and innovations in science, technology, and medicine that translate into practical applications in the real world. The book offers a diagnosis for why governments, particularly governments run by Democrats, have so often failed to provide people with these things. It also offers ideas for improving Democrats’ governing performance in the aforementioned areas. Other topics are outside of scope, although Klein and Thompson have also expressed their opinions on those topics in places other than their book.

It’s easy to see how a somewhat complex and subtle argument like this gets simplified into ‘the Democratic Party should become more moderate and technocratic’. But that simplified version misses a lot. 

(I wrote a long comment here addressing objections to abundance liberalism and to Coefficient Giving’s work in that area, specifically housing policy reform and metascience.)

The 80,000 Hours video on AI 2027 is missing important caveats. These are the sorts of caveats that deserve to be emphasized and foregrounded in any discussion of AI 2027.

The three major caveats that should be made about AI 2027 are: 

  1. It depends crucially on the subjective intuitions or guesses of the authors. If you don't personally share the authors' intuitions, or don't personally trust that the authors' intuitions are likely correct, then there is no particular reason to take AI 2027's conclusions seriously. (By the time they finish the 80,000 Hours video, are viewers aware this is the case?)
  2. Credible critics claim that the headline results of the AI 2027 timelines model are largely baked in by the authors' modelling decisions, irrespective of what data the model uses. That means, to a large extent, AI 2027's conclusions are not actually determined by the data. As with (1), this makes AI 2027's conclusions, in effect, a restatement of the pre-existing beliefs and assumptions that the authors chose to embed in their timelines model.
  3. AI 2027 is largely based on extrapolating the METR time horizons graph, which has serious problems and limitations, some of which are sometimes (but not always) clearly disclosed by METR employees. Gary Marcus, a cognitive scientist and AI researcher, and Ernest Davis, a computer scientist and AAAI fellow, co-authored a blog post on the METR graph that looks at how the graph was made and concludes that “attempting to use the graph to make predictions about the capacities of future AI is misguided”. Nathan Witkin, a research writer at NYU Stern’s Tech and Society Lab, published a detailed breakdown of some of the problems with METR’s methodology. He concludes that it’s “impossible to draw meaningful conclusions from METR’s Long Tasks benchmark” and that the METR graph “contains far too many compounding errors to excuse”. Witkin calls out a specific tweet from METR, which presents the METR graph in the broad, uncaveated way that the AI 2027 authors interpret it. He calls the tweet “an uncontroversial example of misleading science communication”. Since AI 2027 leans so heavily on this interpretation of the METR graph to make its forecast, it is hard to see how AI 2027 could be credible if its interpretation of the METR graph is not credible. 

The 80,000 Hours video insinuates that most AI experts agree with the core assumptions or conclusions of AI 2027, but there is evidence to the contrary:

  • 76% of AI experts think it is unlikely or very unlikely that existing approaches to AI, which include LLMs, will scale to AGI. (See page 66 of the AAAI 2025 survey. See also the preceding two pages about open research challenges in AI — such as continual learning, long-term planning, generalization, and causal reasoning — none of which are about scaling more, or at least not uncontroversially so. If you want an example of a specific, prominent AI researcher who emphasizes the importance of fundamental AI research over scaling, Ilya Sutskever believes that further scaling will be inadequate to get to AGI.)
  • Expert surveys about AGI timelines are not necessarily reliable, but the AI Impacts survey in late 2023 found that AI researchers’ median year for AGI is 20 to 90 years later than the AI 2027 scenario.
  • I can’t find survey data on this, but I get the impression there is a diversity of opinions among AI experts on how existentially dangerous AGI would be. For example, the Turing Award-winning AI researchers Yann LeCun and Richard Sutton both have a very different perspective on this than the AI 2027 authors. Both have been outspoken in their belief that the AI safety/AI alignment community's perspective on this topic is misguided.

I’m not sure if 80,000 Hours is interested in making videos that try to explain complexities like these, but, personally, I would love to see that. I think there is immense importance and value in helping viewers understand how conclusions are reached, particularly when they are radical and contentious. Many viewers would surely disagree with the reasoning behind the conclusions that the 80,000 Hours video presents if it were clearly explained to them. 

There is a kernel of truth in this; some version of this argument is a good argument. But the devil is in the details. 

If you’re not a scientist or a person with relevant expertise and you feel inclined to disagree with the ~97-99.9% of climate scientists who think anthropogenic climate change is happening, you’d better adopt a boatload of epistemic humility. In practice, many non-scientists or non-experts disagree with the scientific consensus. I’m not aware of one example of such a person adopting the appropriate level of epistemic humility.

On the other hand, people invoke this same epistemic humility argument when it comes to fringe stuff like UFOs, ESP, and conspiracy theories. A lot of smart people believe in fringe stuff and, boy howdy, have they ever spent a lot of time researching it and thinking about it. 

I think the difference between these two examples of epistemic humility arguments is some form of Carl Sagan-esque scientific skepticism.[1]

The difference also has to do with the idea of expertise, the idea of science (both the scientific method and scientific institutions or communities), and the idea of academia. This goes beyond the general concept of disagreement, or the general concept of some number of people holding an opinion. Climate scientists are an expert community. People who believe in alien UFOs are not an expert community. There is something to be said here about knowledge production and how it happens, which is deeper than just some group of people believing something.

There’s lots to be said here about epistemology, and about science or scientific skepticism as an approach to knowledge versus less reliable approaches. Consider, for instance, unfalsifiable claims. The claim that there is a 2% chance that AGI or superintelligence will be developed within the next decade is unfalsifiable. Even if the next decade plays out exactly how someone who assigns a 0.002% (or 1 in 50,000) probability to AGI would expect, a person who assigned a 2% or 20% or even 60% probability could still think they were right to do so. It’s not exactly a scientific question. It’s also outside the scope of the sort of forecasting questions that the forecasting research literature can support. This is strongly disanalogous to epistemic deference toward scientific experts.[2]

  1. ^

    In my view, it’s not a coincidence that a hugely disproportionate number of people who believe in a high level of existential risk from near-term AGI a) are strongly skeptical of mainstream, institutional science and strongly advocate fringe, dubious, borderline, or minority scientific views, b) believe in fringe stuff like aliens, conspiracy theories, or ESP, or c) have joined a cult.

    The rate of cult membership among people who believe in a high level of existential risk from near-term AGI has got to be something like 10,000x higher than the general population. The LessWrong community in the Bay Area is, I think, around 1,000 people (if not less) and it has started half a dozen cults in 17 years. Former members have committed as many murders in that timeframe. Small towns with much larger populations and much longer histories typically have never had a single cult (and very rarely have cults that murder people).

    Clearly, the argument for epistemic humility that applies to say, climate scientists, doesn’t apply cleanly to this community. If there is strong independent evidence of a community frequently holding extreme, false beliefs, an appeal to their overall rationality doesn’t work. (Also, LessWrong’s primary thought leader has confidently made incorrect predictions about the timing of AGI and about the imminent existential dangers from advanced technology in the past — another reason for skepticism.) 

    The EA community is also directly implicated. One of the aforementioned cults, Leverage Research, ran the Centre for Effective Altruism for a few years, and organized the first EA conferences.

  2. ^

    The number of people who believe in Hinduism is vastly larger than the number who believe in near-term AGI: 15% of the world population, 1.2 billion people. Far more university professors, scientists, philosophers, and so on believe in Hinduism than near-term AGI. Should you assign more than a 1% chance to the probability that Hinduism is a correct theory of metaphysics, cosmology, consciousness, and morality? The stakes are also very large in this case.

Changing the simulation hypothesis from a simulation of a world full of people to a simulation of an individual throws the simulation argument out the window. Here is how Sean Carroll articulates the first three steps of the simulation argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.

The simulation argument doesn’t apply to you, as an individual. Unless you think that you, personally, are going to create a simulation of a world or an individual — which obviously you’re not.

Changing the simulation hypothesis from a world-scale simulation to an individual-scale simulation also doesn’t change the other arguments against the simulation hypothesis:

The bottoming out argument. This is the one from Sean Carroll. Even if we supposed you, personally, were going to create individual-scale simulations in the future, eventually a nesting cascade of such simulations would exhaust available computation in the top-level universe, i.e. the real universe. The bottom-level simulations within which no further simulations are possible would outnumber higher-level ones. The conclusion of the simulation argument contradicts a necessary premise.[1]
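
A quick counting sketch of why the bottom level dominates, assuming (as the simulation argument itself needs) that each simulating level spawns at least two further simulations, and that finite computation in the top-level universe keeps the whole nested tree finite:

```latex
% In any finite rooted tree where every internal node (a simulation that runs
% further simulations) has at least two children:
%   N = total nodes, I = internal nodes, L = leaves (bottom-level simulations)
\begin{align*}
N &= I + L \\
N - 1 &= \#\text{edges} \ge 2I \quad \text{(each internal node contributes at least two child edges)} \\
\Rightarrow\ L &= N - I \ge I + 1
\end{align*}
```

So the simulations that cannot run further simulations always outnumber all the simulating levels combined, which is what puts the conclusion of the argument in tension with its own premises.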

The ethical argument. It would be extremely unethical to imprison an individual in a simulation without their consent, especially a simulation with a significant amount of pain and suffering that the simulators are programming in. Would you create an individual-scale simulation even of an unrealistically pleasant life, let alone a life with significant pain and suffering? If we had the technology to do this today, I think it would be illegal. It would be analogous to false imprisonment, kidnapping, torture, or criminal child abuse (since you are creating this person).

The computational waste argument. Making an individual-scale simulation would require at least as much computation as creating a digital mind in the real universe. In fact, it would require more, since you also have to simulate the whole world around the individual, not just the individual themselves. If the simulators think marginally, they would prefer to use these resources to create a digital mind in the real universe or put them to some other, better use.

If the point of the simulation is to cater it to the individual’s preferences, we should ask:

a) Why isn’t this actually happening? Why is there so much unnecessary pain and suffering and unpleasantness in every individual’s life? Why simulate the covid-19 pandemic?

b) Why not cater to the individual’s fundamental and overriding preference not to be in a simulation?

c) Why not put these resources toward any number of superior uses that must surely exist?[2]

Perhaps most importantly, changing the simulation hypothesis from world-scale to individual-scale doesn’t change perhaps the most powerful counterargument to the simulation hypothesis:

The unlimited arbitrary, undisprovable hypotheses argument. There is no reason to think the simulation hypothesis makes any more sense or is any more likely to be true than the hypothesis that the world you perceive is an illusion created by an evil demon or a trickster deity like Loki. There are an unlimited number of equally arbitrary and equally unjustified hypotheses of this type that could be generated. In my previous comment, I argued that versions of the simulation hypotheses in which the laws of physics or laws of nature are radically different in the real universe than in the simulation are supernatural hypotheses. Versions of the simulation hypothesis that assume real universe physics is the same as simulation physics suffer from the bottoming out argument and the computational waste argument. So, either way, the simulation hypothesis should be rejected. (Also, whether the simulation has real universe physics or not, the ethical argument applies — another reason to reject it.)

This argument also calls into question why we should think simulation physics is the same as real universe physics, i.e. why we should think the simulation hypothesis makes more sense as a naturalistic hypothesis than a supernatural hypothesis. The simulation hypothesis leans a lot on the idea that humans or post-humans in our hypothetical future will want to create “ancestor simulations”, i.e. realistic simulations of the simulators’ past, which is our present. If there were simulations, why would ancestor simulations be the most common type? Fantasy novels are about equally popular as historical fiction or non-fiction books about history. Would simulations skew toward historical realism significantly more than books currently do? Why not simulate worlds with magic or other supernatural phenomena? (Maybe we should conclude that, since this is more interesting, ghosts probably exist in our simulation. Maybe God is simulated too?) The “ancestor simulation” idea is doing a lot of heavy lifting; it’s not clear that this is in any way a justifiable assumption rather than an arbitrary one. The more I dig into the reasoning behind the simulation hypothesis, the more it feels like Calvinball.[3]

The individual-scale simulation hypothesis also introduces new problems that are unique to it:

Simulation of other minds. If you wanted to build a robot that could perfectly simulate the humans you know best, the underlying software would need to be a digital mind. Since, on the individual-scale simulation hypothesis, you are a digital mind, then the other minds in the simulation — at least the ones you know well — are as real as you are. You could try to argue that these other minds only need to be partially simulated. For example, the mind simulations don’t need to be running when you aren’t observing or interacting with these people. But then why don’t these people report memory gaps? If the answer is that the simulation fills in the gaps with false memories, what process continually generates new false memories? Why would this process be less computationally expensive than just running the simulation normally? (You could also try to say that consciousness is some kind of switch that can be flipped on or off for some simulations but not others. But I can’t think of any theory of consciousness this would be compatible with, and it’s a problem for the individual-scale simulation hypothesis if it just starts making stuff up ad hoc to fit the hypothesis.)

If we decide that at least the people you know well must be fully simulated, in the same way you are, then what about the people they know well? What about the people who they know well know well? If everyone in the world is connected through six degrees of separation or fewer, then it seems like individual-scale simulations are actually impossible and all simulations must be world-scale simulations.
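
To illustrate the propagation step, here is a toy sketch. The "knows well" graph below is entirely made up; the point is just to show how the requirement of full simulation spreads along acquaintance links until, in a small-world (six degrees of separation) graph, it covers everyone.

```python
from collections import deque

# Hypothetical, made-up "knows well" graph; links are symmetric acquaintance ties.
knows_well = {
    "you": ["alice", "bob"],
    "alice": ["you", "carol"],
    "bob": ["you", "dana"],
    "carol": ["alice", "erin"],
    "dana": ["bob"],
    "erin": ["carol"],
}

def must_be_fully_simulated(start: str) -> set[str]:
    """Breadth-first closure: anyone reachable from `start` through a chain of
    'knows well' links ends up needing a full mind simulation too."""
    seen = {start}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for acquaintance in knows_well.get(person, []):
            if acquaintance not in seen:
                seen.add(acquaintance)
                queue.append(acquaintance)
    return seen

# In a connected small-world graph, the closure is the whole population, which
# is the point: the individual-scale simulation collapses into a world-scale one.
print(must_be_fully_simulated("you"))
```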

Abandoning the simulation of history at large scale. Individual-scale simulations don’t provide the same informational value that world-scale simulations might. When people talk about why “ancestor simulations” would supposedly be valuable or desired, they usually appeal to the notion of simulating historical events on a large scale. This obviously wouldn’t apply to individual-scale simulations. To the extent credence toward the simulation hypothesis depends on this, an individual-scale simulation hypothesis may be even less credible than a world-scale simulation hypothesis.

The Wikipedia page on the simulation hypothesis notes that it’s a contemporary twist on a centuries-old if not millennia-old idea. We’ve replaced dreams and evil demons with computers, but the underlying idea is largely the same. The reasons to reject it are largely the same, although the simulation argument has some unique weaknesses. That page is a good resource for finding still more arguments against the simulation hypothesis.[4]

  1. ^

    Carroll, who is a physicist and cosmologist, also criticizes the anthropic reasoning of the simulation argument. I recommend reading his post; it’s short and well-written.

  2. ^

    You could try to argue that, despite society’s best efforts, it will be impossible to prevent a large number of simulations from being created. Pursuing this line of argument requires speculating about the specific details of a distant, transhuman or post-human future. Would an individual creating a simulation be more like an individual today operating a meth lab or launching a nuclear ICBM? I’m not sure we can know the answer to this question. If dangerous or banned technologies can’t be controlled, what does this say about existential risk? Will far future, post-human terrorists be able to deploy doomsday devices? If so, that would undermine the simulation argument. (Will post-humans even have the desire to be terrorists, or is that a defect of humanity?)

  3. ^

    Related to this are various arguments that the simulation argument is self-defeating. We infer things about the real universe from our perceived universe. We then conclude that our perceived universe is a simulation. But, if it is, this undermines our ability to infer anything about the real universe from our perceived universe. In fact, this undermines the inference that our perceived universe is a simulation within a real universe. So, the simulation argument defeats itself.

  4. ^

    In addition to all the above, I would be curious to hear empirical, scientific arguments about the amount of computation that might be required for world-scale simulations, which would be partly applicable to individual-scale simulations. Obviously, our universe can’t run a full-scale, one-to-one simulation of our universe with perfect fidelity — that would require more computation, matter, and energy than our universe has. If you only simulate the solar system with perfect fidelity, you can pare that down a lot. You can make other assumptions to pare down the computation required. It’s much less important than all the arguments and considerations described above, but if we get a better understanding of approximately how difficult or costly a world-scale simulation might be, that could help put some considerations like computational waste in perspective. 
