Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
I write on Substack, and used to write on Medium.
Misinformation and clickbait are also common ways to get attention. I wouldn’t recommend those tactics, either.
The way that a lot of people get attention online is fundamentally destructive. It gets them clicks and ad revenue, but it doesn’t help cause positive change in the world.
I don’t think it makes sense to justify manipulative, dishonest, or deceptive tactics like ragebait on the basis that they are good at getting attention. This is taking a business model from social media, which in some cases is arguably like digital cigarettes, and inappropriately applying it to animal advocacy. If the goal is to get people to scroll a lot and show them a lot of ads, sure, copy the tactics used in social media. But that isn’t the goal here.
One form of ragebait is generating rage at a target other than yourself, but another form is baiting people into getting angry at you (e.g. by expressing an insincere opinion), because that drives engagement, and engagement gets you paid. Making people angry at you is especially ill-suited to animal advocacy.
I don't know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So, the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. that generative AI is a great investment), you're probably wrong too. Or, more precisely, that you're unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I'd be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to specifically operationalize "bubble", we could just defer the definition of bubble to credible journalists.)
There are a few different reasons you've cited for credence in near-term AGI — investment in AI companies, the beliefs of certain AI industry leaders (e.g. Sam Altman), the beliefs of certain AI researchers (e.g. Geoffrey Hinton), etc. — and I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic-by-topic, but it's far too long (2000 words) and I'll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman's history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton's belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.
Because timelines are so uncertain (3-15 years?)
In fact, they are much more uncertain than that. AI researchers and superforecasters tend to guess a range between around 20 and 90 years for the median year of AGI.
100 megaseconds (why that unit of measurement?) is 3 years and 2 months. 100 megaseconds from now is March 2029. This is way earlier than the median year of AGI guessed by AI researchers and superforecasters. It's even earlier than Metaculus, which is disproportionately used by people who strongly believe in near-term AGI. Metaculus currently says 2033.
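The conversion is easy to check. Here is a minimal sketch; the January 2026 start date is my own assumption, inferred backwards from the March 2029 figure:

```python
from datetime import date, timedelta

SECONDS_PER_DAY = 86_400
hundred_megaseconds = 100_000_000  # 100 Ms, expressed in seconds

days = hundred_megaseconds / SECONDS_PER_DAY  # ~1157.4 days
years = days / 365.25                         # ~3.17 years, i.e. about 3 years and 2 months

# Hypothetical anchor: assuming "now" means early January 2026,
# 100 megaseconds from now lands in early March 2029.
start = date(2026, 1, 1)
end = start + timedelta(days=round(days))
print(f"{days:.1f} days ≈ {years:.2f} years; from {start} to {end}")
```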
(My personal forecast, for whatever it's worth: a significantly-less-than-0.01% (1 in 10,000) chance of AGI by the end of 2035, and a ~95% chance that the AI industry is in a bubble that will pop sometime within the next ~5 years.)
[Adapted from this comment.]
Two pieces of evidence commonly cited for near-term AGI are AI 2027 and the METR time horizons graph. AI 2027 is open to multiple independent criticisms, one of which concerns its use of the METR time horizons graph to forecast near-term AGI, or AI capabilities more generally. That use of the graph is not supported by the data and methodology that went into making it.
Two strong criticisms that apply specifically to the AI 2027 forecast are:
AI 2027 is largely based on extrapolating the METR time horizons graph. The following criticisms of the METR time horizons graph therefore extend to AI 2027:
Since AI 2027 leans so heavily on this interpretation of the METR graph to make its forecast, it is hard to see how AI 2027 could be credible if its interpretation of the METR graph is not credible.
It's worth contrasting AI 2027 and similar forecasts of near-term AGI with expert opinion:
Two overall takeaways:
Even if there is, say, a 30% chance that in the next 5 years even 30-40% of white collar jobs get replaced...that feels like a massive shock to high-income countries' way of life, and a major shift in the social contract for the generation of kids who have got into massive debt for university courses only to find substantially reduced market opportunities. That requires substantial state action.
Rather than 30%, I would personally guess the chance is much less than 0.01% (1 in 10,000), possibly less than 0.001% (1 in 100,000) or even 0.0001% (1 in 1 million).
I agree with Huw that there's insufficient evidence to say that AI is causing significant or even measurable unemployment right now, and I'm highly skeptical this will happen anytime soon. Indeed, I'd personally guess there's a ~95% chance there's an AI bubble that will pop sometime within the next several years. So far, AI has stubbornly refused to deliver the sort of productivity or automation that has been promised. I think the problem is a fundamental science problem, not something that can be solved with scaling or incremental R&D.
However, let's imagine a scenario where in a short time, say 5 years, a huge percentage of jobs get automated by AI — automation of white-collar work on a massive scale.[1]
What should governments' response be right now, before this has started to happen, or at least before there is broad agreement it has started to happen? People often talk about this as a gloomy, worrying outcome. I suppose it could turn out to be, but why should that be the default assumption? It would lead to much faster economic growth than developed countries are used to seeing. It might even be record-breaking, unprecedented economic growth. It would be a massive economic windfall. That's a good thing.[2]
To be a bit more specific, when people imagine the sort of AI that is capable of automating white-collar work on the scale you're describing (30-40% of jobs), they also often imagine wildly high rates of GDP growth, ranging from 10% to 30%.[3][4] The level of growth is supposed to be commensurate with the percentage of labour that AI can automate. I don't know about these specific figures, but the general idea is intuitive.
Surely passing UBI would become much easier once both a) unemployment significantly increased and b) economic growth significantly increased. There would be both a clear problem to address and a windfall of money with which to address it. By analogy, it would have been much harder for governments to pass stimulus bills in January 2020 in anticipation of covid-19. In March 2020, it was much easier, since the emergency was clear. But the covid-19 emergency caused a recession. What if, instead, it had caused an economic boom, and a commensurate increase in government revenues? Surely, then, it would have been even easier to pass stimulus bills.
This is why, even if I accept the AI automation scenario for the sake of argument, I don't worry about the practical, logistical, or fiscal obstacles to enacting UBI in a hurry. Governments can send cheques to people on short notice, as we saw with covid-19. This would presumably be especially true if the government and the economy overall were experiencing a windfall from AI. The sort of administrative bottlenecks we saw in some places during covid-19 could be solved by AI, since we're stipulating an unlimited supply of digital white-collar workers. Maybe there are further aspects to implementing UBI that would be more complicated than sending people cheques and that couldn't be assisted by AI. What would those be?
The typical concerns raised over UBI are that it would be too expensive, that it would be poorly targeted (i.e. it would be more efficient to run means-tested programs), that it would discourage people from working, and that it would reduce economic growth. None of those apply to this scenario.
If there's more complexity in implementing UBI that I'm not considering, surely in this scenario politicians would quickly become focused on dealing with that complexity, and civil servants (and AI workers) would be assigned to the task. As opposed to something like decarbonizing the economy, UBI seems like it could be implemented relatively quickly and easily, given a sudden emergency that called for it and a sudden windfall of cash. Part of the supposed appeal of UBI is its simplicity relative to means-tested programs like welfare and non-cash-based programs like food stamps and subsidized housing. So, if you're not satisfied with my answer, maybe you could elaborate on why you think it wouldn't be so easy to figure out UBI in a hurry.
As mentioned up top, I regard this just as an interesting hypothetical, since I think the chance of this actually happening is below 0.01% (1 in 10,000).
Let's assume, for the sake of argument, that the sort of dire outcomes hypothesized under the heading of AI safety or AI alignment do not occur. Let's assume that AI is safe and aligned, and that it's not able to be misused by humans to destroy or take over the world.
Let's also assume that AI won't be a monopoly or duopoly or oligopoly but, like today, even open source models that are free to use are a viable alternative to the most cutting-edge proprietary models. We'll imagine that the pricing power of the AI companies will be put in check by competition from both proprietary and open source models. Sam Altman might become a trillionaire, but only a small fraction of the wealth created by AI will be captured by the AI companies. (As an analogy, think about how much wealth is created by office workers using Windows PCs, and how much of that wealth is captured by Microsoft or by PC manufacturers like HP and Dell, or components manufacturers like Intel.)
I'm putting these other concerns aside in order to focus on labour automation and technological unemployment, since that's the concern you raised.
The specific worries around AI automation people most commonly cite are about wealth distribution, and about people finding purpose and meaning in their lives if there's large-scale technological unemployment. I'll focus on the wealth distribution worry, since the topic is UBI, and your primary concern seems to be economic or material.
Some people are also worried about workers being disempowered if they can be replaced by AI, at the same time that capital owners become much wealthier. If they're right to worry about that, then maybe it's important to consider well in advance of it happening. Maybe workers should act while they still have power and leverage. But it's a bit of a separate topic, I think, from whether to start implementing UBI now. Maybe UBI would be one of a suite of policies workers would want to enact in advance of large-scale AI automation of labour, but what's to prevent UBI from being repealed after workers are disempowered?
For the sake of this discussion, I'll assume that workers (or former workers) will remain politically empowered, and healthy democracies will remain healthy.
Potlogea, Andrei. “AI and Explosive Growth Redux.” Epoch AI, 20 June 2025, https://epoch.ai/gradient-updates/ai-and-explosive-growth-redux.
Davidson, Tom. “Could Advanced AI Drive Explosive Economic Growth?” Coefficient Giving, 25 June 2021, https://coefficientgiving.org/research/could-advanced-ai-drive-explosive-economic-growth/.
Hence my initial mention of "high state capacity"? But I think it's fair to call abundance a deregulatory movement overall, in terms of, like... some abstract notion of what proportion of economic activity would become more vs less heavily involved with government, under an idealized abundance regime.
I guess it depends what version of abundance you're talking about. I have in mind the book Abundance as my primary idea of what abundance is, and in that version of abundance, I don't think it's clear that a politics of abundance would result in less economic activity being heavily involved with government. It might depend how you define that. If laws, regulations, or municipal processes that obstruct construction count as heavy involvement with the government, then that would count for a lot of economic activity, I guess. But if we don't count that and we do count higher state capacity, like more engineers working for the government, then maybe abundance would lead to a bigger government. I don't know.
I think you're right about why abundance is especially appealing to people of a certain type of political persuasion. A lot of people with more moderate, centrist, technocratic, socially/culturally less progressive, etc. tendencies have shown a lot of enthusiasm about the abundance label. I'm not ready to say that they now own the abundance label and abundance just is moderate, centrist, technocratic, etc. If a lot of emos were a fan of my favourite indie rock band, I wouldn't be ready to call it an emo band, even if I were happy for the emos' support.
There are four reasons I want to deconflate abundance and those other political tendencies:
Edit: I wrote the above before I saw what you added to your comment. I have a qualm with this:
But, uh, this is the EA Forum, which is in part about describing the world truthfully, not just spinning PR for movements that I happen to admire. And I think it's an appropriate summary of a complex movement to say that abundance stuff is mostly a center-left, deregulatory, etc movement.
I think it really depends on which version of abundance you're talking about. If you're talking about the version in the book Abundance by Ezra Klein and Derek Thompson, or the version that the two authors have more broadly advocated (e.g. on their press tour for the book, or in their writing and podcasts before and after the book was published), then, no, I don't think that's an accurate summary of that particular version of abundance.
If you're referring to the version of abundance advocated by centrists, moderates, and so on, then, okay, it may be accurate to say that version of abundance is centrist, moderate, etc. But I don't want to limit how I define "abundance" to just that version, for the reasons I gave above.
I don't think it makes sense to call it "spin" or "PR" to describe an idea in the terms used by the originators of that idea, or in terms that are independently substantively correct, e.g. as supported by examples of progressive supporters of abundance like Mamdani. If your impression of what abundance is comes from centrists, moderates, and so on, then maybe that's why you have the impression that abundance simply is centrist, moderate, etc. and that saying otherwise is "untruthful" or "PR". There is no "canonical" version of abundance, so to some extent, abundance just means what people who use the term want it to mean. So, that impression of abundance isn't straightforwardly wrong. It's just avoidably limited.
Imagine someone complaining -- it's so unfair to describe abundance as a "democrat" movement!! That's so off-putting for conservatives -- instead of ostracising them, we should be trying to entice them to adopt these ideas that will be good for the american people! Like Montana and Texas passing great YIMBY laws, Idaho deploying modular nuclear reactors, etc. In lots of ways abundance is totally coherent with conservative goals of efficient government services, human liberty, a focus on economic growth, et cetera!!
To the extent people care what Abundance says in deciding what abundance is, one could quote from the first chapter of the book, specifically the section "A Liberalism That Builds", which explicitly addresses this topic.
This might be a correct description of some people who have adopted the abundance label, but it's not a correct description of the book Abundance or its authors, Ezra Klein and Derek Thompson, who coined and popularized abundance (or abundance liberalism) as a political term and originated the abundance movement that's playing out in U.S. politics right now. Abundance is deregulatory on NIMBY restrictions to building housing and on environmental laws that are perversely used to block solar and wind projects. However, it also advocates for the government of California to in-house the engineering of its high-speed rail project rather than try to outsource it to private contractors. There is a chapter on government science funding, of which it is strongly in favour. Abundance is in favour of the government getting out of the way, or deregulating, in some areas, such as housing, but in other areas, it's in favour of, for lack of a better term, big government.
Others are of course free to read Abundance and run with it any direction they like, even if the authors disagree with it. Nobody owns the abundance label, so people can use it how they like. But I think the framing of abundance as necessarily or inherently moderate, technocratic, or deregulatory is limiting. That's one particular way that some people think about abundance, but not everybody has to think of it that way, and not even the originators of the idea do. The progressive mayor of New York City, Zohran Mamdani, is a fan of Abundance and recently voiced his support for YIMBY housing reform in the state of New York. Abundance is not synonymous with either the moderate or progressive wings of the Democratic Party; it's a set of ideas that is compatible with either a moderate or progressive political orientation.
Klein and Thompson, and of course Mamdani, are not "unified in their opposition to lefty economic proposals".
I think saying that abundance implies moderate politics or technocracy is not only limiting and at least partially inaccurate, it also encourages progressives to oppose abundance when, as we've seen in Mamdani's case, it is compatible with progressivism and largely independent from and orthogonal to (in the figurative sense) the disagreements between moderates and progressives.
technocratic... more moderate
Abundance or abundance liberalism originated with the journalists Ezra Klein and Derek Thompson in their book Abundance (which was my favourite non-fiction book of 2025). Since Klein and Thompson popularized the term abundance in Democratic politics, a number of others have latched onto the term and assigned their own meanings to it. Klein and Thompson themselves do not advocate for the Democratic Party to become more moderate. Maybe some people who have picked up the abundance label do. But Klein, for instance, argues that the Democratic Party needs to allow for ideological diversity based on geography. That is, it should be a party that can include both a mayor like Zohran Mamdani in New York City and a senator like Joe Manchin in West Virginia. Klein and Thompson’s own views are not more moderate than the Democratic Party currently is or has been recently.
One of the popular criticisms voiced against Abundance following the publication of the book is that economic populist policies poll better than abundance-style policies. Klein and Thompson’s reply is that these policies are not incompatible, so it would be possible for Democratic politicians to do both. A mistake many people have made is interpreting abundance liberalism as a complete theory of politics or a complete political worldview. It isn’t: it doesn’t answer every question Democratic politicians need to answer.
Abundance is specifically focused on governments providing people with an abundance of the things they need and expect: housing, infrastructure (e.g., highways and high-speed rail), government administrative services (e.g. timely processing of claims for unemployment benefits), and innovations in science, technology, and medicine that translate into practical applications in the real world. The book offers a diagnosis for why governments, particularly governments run by Democrats, have so often failed to provide people with these things. It also offers ideas for improving Democrats’ governing performance in the aforementioned areas. Other topics are outside of scope, although Klein and Thompson have also expressed their opinions on those topics in places other than their book.
It’s easy to see how a somewhat complex and subtle argument like this gets simplified into ‘the Democratic Party should become more moderate and technocratic’. But that simplified version misses a lot.
(I wrote a long comment here addressing objections to abundance liberalism and to Coefficient Giving’s work in that area, specifically housing policy reform and metascience.)
The 80,000 Hours video on AI 2027 is missing important caveats. These are the sort of caveats that deserve to be emphasized and foregrounded in any discussion of AI 2027.
The three major caveats that should be made about AI 2027 are:
The 80,000 Hours video insinuates that most AI experts agree with the core assumptions or conclusions of AI 2027, but there is evidence to the contrary:
I’m not sure if 80,000 Hours is interested in making videos that try to explain complexities like these, but, personally, I would love to see that. I think there is immense importance and value in helping viewers understand how conclusions are reached, particularly when they are radical and contentious. Many viewers would surely disagree with the reasoning behind the conclusions that the 80,000 Hours video presents if it were clearly explained to them.