Yarrow

583 karma

Bio

Pronouns: she/her or they/them.

I got interested in EA back before it was called EA, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where EA can fit into my life these days and what it means to me.

Comments (149)

Here’s the link to the original post: https://epochai.substack.com/p/the-case-for-multi-decade-ai-timelines

One important point in the post — illustrated with the example of the dot-com boom and bust — is that it’s madness to just look at a trend and extrapolate it indefinitely. You need an explanatory theory of why the trend is happening and why it might continue or stop. In the absence of an explanatory understanding of what is happening, you are just making a wild, blind guess about the future.

(David Deutsch makes this point in his awesome book The Beginning of Infinity and in one of his TED Talks.)

A pointed question that Ege Erdil does not ask in the post, but should: is there any hard evidence of AI systems invented within the last 5 or even 10 years automating labour or measurably augmenting the productivity of human workers?

I have looked and I have found very little evidence of this.

One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems to me like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:

These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.

If the amount of labour automation or productivity improvement from LLMs is zero or negative, then naively extrapolating this trend forward would mean full labour automation by AI is an infinite amount of time away. But of course I’ve just argued why these kinds of extrapolations are a mistake.
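To make the arithmetic behind that point explicit, here’s a minimal sketch in Python. The function and all the numbers in it are hypothetical, purely to illustrate the naive straight-line extrapolation I’m arguing against:

```python
# Naive linear extrapolation: years until "full automation", assuming the
# measured share of labour automated by AI grows at a fixed annual rate.
# All numbers here are made up purely for illustration.

def years_until_full_automation(current_share: float, annual_gain: float) -> float:
    """Years until 100% of labour is automated under a straight-line trend."""
    if annual_gain <= 0:
        # A flat or negative trend never reaches the target: the point above.
        return float("inf")
    return (1.0 - current_share) / annual_gain

print(years_until_full_automation(0.01, 0.005))  # 198.0 years
print(years_until_full_automation(0.01, 0.0))    # inf
```

The division by a zero (or negative) rate is the whole point: naively extrapolating today’s measured trend gives an answer of “never”, which mostly shows how little naive extrapolation tells us in either direction.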

It continually strikes me as odd that people write 3,000-word, 5,000-word, and 10,000-word essays on AGI and don’t ask fundamental questions like this. You’d think if the trend you are discussing is labour automation by AI, you’d want to see if AI is automating any labour in a way we can rigorously measure. Why are people ignoring that obvious question?

Nvidia revenue is a really bad proxy for AI-based labour automation or for the productivity impact of AI. It’s a bad proxy for the same reason capital investment into AI would be a bad proxy. It measures resources going into AI (inputs), not resources generated by AI (outputs).

LLMs seem to be bringing down the costs of software.

Are you aware of hard data that supports this, or is this just a guess or general impression?

I've seen very little hard data on the use of LLMs to automate labour or enhance worker productivity. I have tried to find it.

One of the few pieces of high-quality evidence I've found on this topic is this study, which looked at the use of LLMs to aid people working in customer support: https://academic.oup.com/qje/article/140/2/889/7990658

The results are mixed, suggesting that in some cases LLMs may decrease productivity:

These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.

Anecdotally, what I've heard from people who do coding for a job is that AI does somewhat improve their productivity, but only about the same as or less than other tools that make writing code easier. They've said that the LLM filling in the code saves them the time they would have otherwise spent going to Stack Overflow (or wherever) and copying and pasting a code block from there.

Based on this evidence, I am highly skeptical that software development is going to become significantly less expensive in the near term due to LLMs, let alone 10x or 100x less expensive.

One of the best comments I've ever read on the EA Forum! I agree on every point, especially that making up numbers is a bad practice.

I also agree that expanding the reach of effective altruism (including outreach and funding) beyond the Anglosphere countries sounds like a good idea.

And I agree that the kind of projects that get funded and supported (and the kind of people who get funded and supported) seems unduly biased toward a Silicon Valley worldview.

I believe Bob Jacobs is a socialist, although I don't know what version of socialism he supports. "Socialism" is a fraught term, and even when people try to clarify what they mean by it, it sometimes doesn't get any less confusing.

I'm inclined to be open-minded towards Bob's critiques of effective altruism, but I get the sense that his critiques of EA and his ideas for reform are going to end up being a microcosm of socialist or left-wing critiques of society at large and socialist or left-wing ideas for reforming society.

My thought on that is summed up in the Beatles' song "Revolution":

You say you got a real solution, well, you know

We'd all love to see the plan

In principle, democracy is good, equality is good, less hierarchy is better than more hierarchy, not being completely reliant on billionaires and centimillionaires is good... But I need to know some more specifics on how Bob wants to achieve those things.

I see it primarily as a social phenomenon because I think the evidence we have today that AGI will arrive by 2030 is less compelling than the evidence we had in 2015 that AGI would arrive by 2030. In 2015, it was a little more plausible that AGI could arrive by 2030 because that was 15 years away and who knows what can happen in 15 years.

Now that 2030 is a little less than 5 years away, AGI by 2030 is a less plausible prediction than it was in 2015 because there's less time left and it's more clear it won't happen.

I don't think people's belief that AGI will arrive by 2030 is primarily based on evidence; it is primarily a sociological phenomenon. People were ready to believe this regardless of the evidence, going back to Ray Kurzweil's The Age of Spiritual Machines in 1999 and Eliezer Yudkowsky's "End-of-the-World Bet" in 2017. People don't really pay attention to whether the evidence is good or bad; they ignore obvious evidence and arguments against near-term AGI, and they mostly choose to ignore or attack people who express disagreement while tuning into the relentless drumbeat of people agreeing with them. This is sociology, not epistemology.

Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)

Expert opinion has changed? First, expert opinion is not itself evidence; it's people's opinions about evidence. What evidence are the experts basing their beliefs on? That seems way more important than someone just saying a number based on an intuition.

Second, expert opinion does not clearly support the idea of near-term AGI.

As of 2023, the expert opinion on AGI was... well, first of all, really confusing. The AI Impacts survey found that the experts believed there is a 50% chance by 2047 that "unaided machines can accomplish every task better and more cheaply than human workers", and also a 50% chance that by 2116 "machines could be built to carry out the task better and more cheaply than human workers". I don't know why these predictions are 69 years apart.

Regardless, 2047 is sufficiently far away that it might as well be 2057 or 2067 or 2117. This is just people generating a number using a gut feeling. We don't know how to build AGI, and we have no idea how long it will take to figure out how. No amount of thinking of numbers or saying numbers can escape this fundamental truth.

We actually won't have to wait long to see that some of the most attention-catching near-term AI predictions are false. Dario Amodei, the CEO of Anthropic (a company that is said to be "literally creating God"), has predicted that by some point between June 2025 and September 2025, 90% of all code will be written by AI rather than humans. In late 2025 and early 2026, when it's clear Dario was wrong about this (when, not if), maybe some people will start to be more skeptical of attention-grabbing expert predictions. But maybe not.

There are already strong signs of AGI discourse being irrational and absurd. On April 16, 2025, Tyler Cowen claimed that OpenAI's o3 model is AGI and asked, "is April 16th AGI day?". In a follow-up post on April 17, seemingly in response to criticism, he said, "I don’t mind if you don’t want to call it AGI", but seemed to affirm he still thinks o3 is AGI.

On one hand, I hope that in 5 years the people who promoted the idea of AGI by 2030 will lose a lot of credibility and maybe do some soul-searching to figure out how they could be so wrong. On the other hand, there is nothing preventing people from staying irrational indefinitely, for example by:

  • Defining whatever exists in 2030 as AGI (Tyler Cowen already did this in 2025, and Ray Kurzweil pioneered the technique years ago).
  • Kicking the can down the road a few years, and repeating as necessary (similar to how Elon Musk has predicted every year from 2015 to 2025 that the Tesla fleet will achieve Level 4/5 autonomy within a year, and has not given up the game despite his losing streak).
  • Telling a story in which AGI didn't happen only because effective altruists or other good actors successfully delayed AGI development.  

I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic. (Sorry!) So, I'm being blunt about this to try to change that a little. 

Good Ventures have stopped funding efforts connected with the rationality community and rationality,

This confused me at first until I looked at the comments. The EA Forum post you linked to doesn't specifically say this and the Good Ventures blog post that forum post links to doesn't specifically say this either. I think you must be referring to the comments on that forum post, particularly between Dustin Moskovitz (who is now shown as "[anonymous]") and Oliver Habryka.

Dustin Moskovitz made three comments, here, here, and here, which are oblique and confusing (and he seems to say somewhere else that he's being vague on purpose). I think Dustin is saying that he's now wary of funding things related to "the rationalist community" (defined below) for multiple reasons he doesn't fully get into. These reasons seem to include both a long history of problems and the then-recent Manifest 2024 conference, which was hosted at Lighthaven (the venue owned by Lightcone Infrastructure, the organization that runs the LessWrong forum, the online home of "the rationalist community") and attracted attention due to the extreme racist views of many of the attendees.

We need to differentiate between 'capital R' Rationality and 'small r' rationality. By 'capital R' Rationality, I mean the actual Rationalist community, centered around Berkeley...

On the other hand, 'small r' rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.

I think the way you tried to make this distinction is not helpful and actually adds to the confusion. We need to distinguish two very different things:

  1. The concept of rationality as it has historically been used for centuries and which is what the vast majority of people on Earth still associate the word "rationality" with today. This older and more universal concept of rationality is discussed in places like the Wikipedia article for rationality and in academic philosophy. Rationality in this sense is usually considered synonymous with "reason", as in "reasoning". You could also try to define rationality as "good thinking" or, as Steven Pinker defines it in an article for Encyclopedia Britannica, as "the use of knowledge to attain goals."
  2. The specific worldview, philosophy, lifestyle, or subculture that people on LessWrong and a small number of people in the San Francisco Bay Area call "rationalism". (Wikipedia calls this "the rationalist community".)

The online and Bay Area-based "rationalist community" (2) tends to believe it has especially good insight into the older, more universal concept of rationality (1) and that self-identified "rationalists" (2) are especially good at being rational or practicing rationality in that older, more universal sense (1). Are they?

No.

Calling yourselves "rationalists" and your movement or community "rationalism" is just a PR move, and a pretty annoying one at that. It's annoying partly because it's arrogant and partly because it leads to exactly the kind of confusion in this post, where the centuries-old and widely known concept of rationality (1) gets conflated with an eccentric, niche community (2). It makes ancient, universal terms like "rational" and "rationality" contested ground, with a small group of people with unusual views (many of them irrational) staking a claim on these words.

By analogy, this community could have called itself "the intelligence movement" or "the intelligence community". Its members could have self-identified as something like "intelligent people" or "aspirationally intelligent people". That would have been a little bit more transparently annoying and arrogant.

So, is Good Ventures or effective altruism ever going to disavow or distance itself from the ancient, universal concept of rationality (1)? No. Absolutely not. Never. That would be absurd.

Has Good Ventures disavowed or distanced itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know, but those comments from Dustin that I linked to above suggest that maybe this is the case.

Will effective altruism disavow or distance itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know. I want this to happen because I think "the rationalist community" (2) decreases the rationality (1) of effective altruism. The more influence the LessWrong/Bay Area "rationalist" subculture (2) has over effective altruism, the less I like effective altruism and the less I want to be a part of it.

If Dustin and Good Ventures are truly done with "the rationalist community" (2), that sounds like good news for Dustin, for Good Ventures, and probably for effective altruism. It's a small victory for rationality (1).

This story might surprise you if you’ve heard that EA is great at receiving criticisms. I think this reputation is partially earned, since the EA community does indeed engage with a large number of them. The EA Forum, for example, has given “Criticism of effective altruism” its own tag. At the moment of writing, this tag has 490 posts on it. Not bad.

Not only does EA allow criticisms, it sometimes monetarily rewards them. In 2022 there was the EA criticism contest, where people could send in their criticisms of EA and the best ones would receive prize money. A total of $120,000 was awarded to 31 of the contest’s 341 entries. At first glance, this seems like strong evidence that EA rewards critiques, but things become a little bit more complicated when we look at who the winners and losers were.


After giving it a look, I would not describe the EA Criticism and Red Teaming Contest as being about "criticism of effective altruism", either in terms of what the contest asked for in the announcement post or in terms of which essays ended up winning prizes. At least not mostly.

When you say "criticism of effective altruism", that makes me think of the sort of criticism that a skeptical outsider would make about effective altruism. Or that it would be about the kind of thing that might make a self-identified effective altruist think less of effective altruism overall, or even consider leaving the movement.

Out of 31 essays that won prizes, only the following four seem like "criticism of effective altruism", based on the summaries:

  • "Effective altruism in the garden of ends" by Tyler Alterman (second prize)
  • "Notes on effective altruism" by Michael Nielsen (second prize)
  • "Critiques of EA that I want to read" by Abraham Rowe (honourable mention)
  • "Leaning into EA Disillusionment" by Helen (honourable mention)

The essay "Criticism of EA Criticism Contest" by Zvi (which got an honourable mention) points out what I'm pointing out, but I wouldn't count this one because it doesn't actually make criticisms of effective altruism itself.

This is not to say anything about whether the other 27 essays were good or bad, or whether the contest was good or bad. Just that I think this contest was mostly not about "criticisms of EA".

I don't know the first thing about American non-profit law, but a charity turning into a for-profit company seems like it can't possibly be legal, or at least it definitely shouldn't be. 

I think it was a great idea to transition from a pure non-profit (or whatever it was; OpenAI's structure is so complicated) to spinning out a capped-profit for-profit company that is majority-owned by the non-profit. That's an exciting idea! Let investors own up to 49% of the for-profit company and earn up to a 100x return on their investment. Great.

Maybe more non-profits could try something similar. Novo Nordisk, the company that makes semaglutide (sold under the brand names Ozempic, Rybelsus, and Wegovy), is majority-controlled by a non-profit, the Novo Nordisk Foundation. It seems like this model sometimes really works!

But to now give majority ownership and control of the for-profit OpenAI company to outside investors? How could that possibly be justified? 

Is OpenAI really not able to raise enough capital as is? Crunchbase says OpenAI has raised $62 billion so far. I guess Sam Altman wants to raise hundreds of billions if not trillions of dollars, but, I mean, is OpenAI's structure really an obstacle there? I believe OpenAI is at or near the top of the list of private companies that have raised the most capital in history. And the recent funding round of $40 billion, led by SoftBank, is more capital than many large companies have raised through initial public offerings (IPOs). So, OpenAI has raised historic amounts of capital, and yet it needs to take majority ownership away from the non-profit so it can raise more?

This change could possibly be legally justified if the OpenAI non-profit's mission had been just to advance AI or something like that. Then I guess the non-profit could spin out startups all it wants, similar to what New Harvest has done with startups that use biotech to produce animal-free animal products. But the OpenAI non-profit's mission was explicitly to put the development of artificial intelligence and artificial general intelligence (AGI) under the control of a non-profit board that would ensure the technology is developed and deployed safely and that its benefits are shared equitably with the world. 

I hope this change isn't allowed to happen. I don't think AGI will be invented particularly soon. I don't think, contra Sam Altman, that OpenAI knows how to build AGI. And yet I still don't think a charity should be able to violate its own mission like this, for no clear social benefit, and when the for-profit subsidiary seems to be doing just fine. 

Metaculus accepts predictions from anybody, so it is not an aggregator of expert predictions. It’s not even a prediction market.

I don’t have to tell you that scaling inputs like money, compute, labour, and so on isn’t the same as scaling outputs like capabilities or intelligence. So, evidence that inputs have been increasing a lot is not evidence that outputs have been increasing a lot. We should avoid conflating these two things.

I’m actually not convinced AI can drive a car today in any sense that was not also true 5 years ago or 10 years ago. I have followed the self-driving car industry closely and, internally, companies have a lot of metrics about safety and performance. These are closely held and rarely is anything disclosed to the public.

We also have no idea how much human labour is required in operating autonomous vehicle prototypes, e.g., how often a human has to intervene remotely.

Self-driving car companies are extremely secretive about the information that is most interesting for judging technological progress, and they simultaneously have strong, aggressive PR and marketing. So, I’m skeptical. Especially since there is a history of companies like Cruise making aggressive, optimistic pronouncements and then abruptly shutting down.

Elon Musk has said full autonomy is one year away every year since 2015. That’s an extreme case, but others in the self-driving car industry have also set timelines and then blown past them.
