Pronouns: she/her or they/them.
I got interested in EA back before it was called EA, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where EA can fit into my life these days and what it means to me.
This story might surprise you if you’ve heard that EA is great at receiving criticism. I think this reputation is partially earned, since the EA community does indeed engage with a large number of criticisms. The EA Forum, for example, has given “Criticism of effective altruism” its own tag. At the time of writing, this tag has 490 posts. Not bad.
Not only does EA allow criticisms, it sometimes monetarily rewards them. In 2022 there was the EA criticism contest, where people could send in their criticisms of EA and the best ones would receive prize money. A total of $120,000 was awarded to 31 of the contest’s 341 entries. At first glance, this seems like strong evidence that EA rewards critiques, but things become a little bit more complicated when we look at who the winners and losers were.
After giving it a look, I wouldn't describe the EA Criticism and Red Teaming Contest as being about "criticism of effective altruism", either in terms of what the contest asked for in the announcement post or in terms of which essays ended up winning prizes. At least not mostly.
When you say "criticism of effective altruism", I think of the sort of criticism a skeptical outsider would make about effective altruism, or the kind of thing that might make a self-identified effective altruist think less of effective altruism overall, or even consider leaving the movement.
Out of 31 essays that won prizes, only the following four seem like "criticism of effective altruism", based on the summaries:
The essay "Criticism of EA Criticism Contest" by Zvi (which got an honourable mention) points out what I'm pointing out, but I wouldn't count this one because it doesn't actually make criticisms of effective altruism itself.
This is not to say anything about whether the other 27 essays were good or bad, or whether the contest was good or bad. Just that I think this contest was mostly not about "criticisms of EA".
I don't know the first thing about American non-profit law, but a charity turning into a for-profit company seems like it can't possibly be legal, or at least it definitely shouldn't be.
I think it was a great idea to go from being a full non-profit (or whatever it was — OpenAI's structure is so complicated) to spinning out a capped-profit, for-profit company that is majority owned by the non-profit. That's an exciting idea! Let investors own up to 49% of the for-profit company and earn up to a 100x return on their investment. Great.
Maybe more non-profits could try something similar. Novo Nordisk, the company that makes semaglutide (sold under the brand names Ozempic, Rybelsus, and Wegovy), is majority controlled by a non-profit, the Novo Nordisk Foundation. It seems like this model sometimes really works!
But to now give majority ownership and control of the for-profit OpenAI company to outside investors? How could that possibly be justified?
Is OpenAI really not able to raise enough capital as is? Crunchbase says OpenAI has raised $62 billion so far. I guess Sam Altman wants to raise hundreds of billions if not trillions of dollars, but, I mean, is OpenAI's structure really an obstacle there? I believe OpenAI is at or near the top of the list of private companies that have raised the most capital in history. And the recent funding round of $40 billion, led by SoftBank, is more capital than many large companies have raised through initial public offerings (IPOs). So, OpenAI has raised historic amounts of capital, and yet it needs to take majority ownership away from the non-profit so it can raise more?
This change could possibly be legally justified if the OpenAI non-profit's mission had been just to advance AI or something like that. Then I guess the non-profit could spin out startups all it wants, similar to what New Harvest has done with startups that use biotech to produce animal-free animal products. But the OpenAI non-profit's mission was explicitly to put the development of artificial intelligence and artificial general intelligence (AGI) under the control of a non-profit board that would ensure the technology is developed and deployed safely and that its benefits are shared equitably with the world.
I hope this change isn't allowed to happen. I don't think AGI will be invented particularly soon. I don't think, contra Sam Altman, that OpenAI knows how to build AGI. And yet I still don't think a charity should be able to violate its own mission like this, for no clear social benefit, and when the for-profit subsidiary seems to be doing just fine.
I don’t have to tell you that scaling inputs like money, compute, labour, and so on isn’t the same as scaling outputs like capabilities or intelligence. So, evidence that inputs have been increasing a lot is not evidence that outputs have been increasing a lot. We should avoid conflating these two things.
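To make the distinction concrete, here is a toy illustration with a purely made-up diminishing-returns curve (the functional form and the numbers are hypothetical, not a claim about how AI actually scales): when outputs grow only logarithmically with inputs, a 10,000x increase in inputs buys a modest increase in outputs.

```python
import math

def toy_capability(compute_flop: float) -> float:
    """Hypothetical diminishing-returns curve: the 'capability score' grows
    with the logarithm of compute. Illustrative only, not real scaling data."""
    return math.log10(compute_flop)

# Each step is a 100x increase in the input.
for compute_flop in (1e21, 1e23, 1e25):
    print(f"compute = {compute_flop:.0e} FLOP -> toy capability score = {toy_capability(compute_flop):.1f}")
```

Under this made-up curve, the input grows 10,000x while the toy output score only rises from 21 to 25. The point isn't that real capabilities follow this curve, just that "inputs went up a lot" and "outputs went up a lot" are separate empirical claims.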
I’m actually not convinced AI can drive a car today in any sense that was not also true 5 or 10 years ago. I have followed the self-driving car industry closely. Internally, companies have a lot of metrics on safety and performance, but these are closely held, and rarely is anything disclosed to the public.
We also have no idea how much human labour is required in operating autonomous vehicle prototypes, e.g., how often a human has to intervene remotely.
Self-driving car companies are extremely secretive about the information that is the most interesting for judging technological progress. And they simultaneously have strong and aggressive PR and marketing. So, I’m skeptical. Especially since there is a history of companies like Cruise making aggressive, optimistic pronouncements and then abruptly announcing that the company is over.
Elon Musk has said full autonomy is one year away every year since 2015. That’s an extreme case, but others in the self-driving car industry have also set timelines and then blown past them.
There’s a big difference between behaviours that, if a human can do them, indicate a high level of human intelligence versus behaviours that we would need to see from a machine to conclude that it has human-level intelligence or something close to it.
For example, if a human can play grandmaster-level chess, that indicates high intelligence. But computers have played grandmaster-level chess since the 1990s. And yet clearly artificial general intelligence (AGI) or human-level artificial intelligence (HLAI) has not existed since the 1990s.
The same idea applies to taking exams. Large language models (LLMs) are good at answering written exam questions, but their success on these questions does not indicate they have an equivalent level of intelligence to humans who score similarly on those exams. This is just a fundamental error, akin to saying IBM’s Deep Blue is AGI.
If you look at a test like ARC-AGI-2, frontier AI systems score well below the human average.
It doesn’t appear that AI experts, on average, do in fact agree that AGI is likely to arrive within 5 or 10 years, although of course some AI experts do think that. One survey of AI experts found a median prediction of a 50% chance of AGI by 2047 (23 years from now), which is actually compatible with the prediction from Geoffrey Hinton you cited: he’s thrown out 5 to 20 years, with 50% confidence.
Another survey found an aggregated prediction that there’s a 50% chance of AI being capable of automating all human jobs by 2116 (91 years from now). I don’t know why those two predictions are so far apart.
If it seems to you like there’s a consensus around short-term AGI, that probably has more to do with who you’re asking, or who you’re listening to, than with what people in general actually believe. I think a lot of AGI discourse is an echo chamber where people continuously hear their existing views affirmed and re-affirmed, and where reasonable criticism of those views, even criticism from reputable experts, is often not met warmly.
Many people do not share the intuition that frontier AI systems are particularly smart or useful. I wrote a post here pointing out that, so far, AI does not seem to have had much of an impact on either firm-level productivity or economic growth, and has achieved only the most limited amount of labour automation.
LLM-based systems have multiple embarrassing failure modes that seem to reveal they are much less intelligent than they might otherwise appear. These failures seem like fundamental problems with LLM-based systems and not something that anyone currently knows how to solve.
So, you want to try to lock in AI forecasters to onerous and probably illegal contracts that forbid them from founding an AI startup after leaving the forecasting organization? Who would sign such a contract? This is even worse than only hiring people who are intellectually pre-committed to certain AI forecasts. Because it goes beyond a verbal affirmation of their beliefs to actually attempting to legally force them to comply with the (putative) ethical implications of certain AI forecasts.
If the suggestion is simply promoting "social norms" against starting AI startups, well, that social norm already exists to some extent in this community, as evidenced by the response on the EA Forum. But if the norm is too weak, it won’t prevent the undesired outcome (the creation of an AI startup), and if the norm is too strong, I don’t see how it doesn’t end up selecting forecasters for intellectual conformity. Because non-conformists would not want to go along with such a norm (just like they wouldn’t want to sign a contract telling them what they can and can’t do after they leave the forecasting company).
One of the authors responds to the comment you linked to and says he was already aware of the concept of the multiple stages fallacy when writing the paper.
But the point I was making in my comment above is how easy it is for reasonable, informed people to generate different intuitions that form the fundamental inputs of a forecasting model like AI 2027. For example, the authors intuit that something would take years, not decades, to solve. Someone else could easily intuit it will take decades, not years.
The same is true for all the different intuitions the model relies on to get to its thrilling conclusion.
Since the model can only exist by using many such intuitions as inputs, ultimately the model is effectively a re-statement of these intuitions, and putting these intuitions into a model doesn’t make them any more correct.
In 2-3 years, when it turns out the prediction of AGI in 2027 is wrong, it probably won’t be because of a math error in the model but rather because the intuitions the model is based on are wrong.
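To make this concrete, here is a minimal sketch of how sensitive this kind of model is to its inputs. The stage names and numbers below are made up for illustration; this is not the actual AI 2027 model, which is considerably more elaborate.

```python
# Toy timeline model: the headline forecast is just the sum of a few
# intuition-based estimates of how long each stage takes (in years).

optimistic_intuitions = {
    "automate most coding work": 1,
    "automate AI research itself": 1,
    "close the remaining gaps to AGI": 1,
}

# Someone else intuits that the last stage takes decades, not years.
skeptical_intuitions = dict(optimistic_intuitions)
skeptical_intuitions["close the remaining gaps to AGI"] = 20

def years_until_agi(stage_estimates: dict) -> int:
    """Total years implied by the per-stage intuitions."""
    return sum(stage_estimates.values())

print("Optimistic intuitions:", years_until_agi(optimistic_intuitions), "years")
print("Skeptical intuitions:", years_until_agi(skeptical_intuitions), "years")
```

Swap one "years, not decades" intuition for a "decades, not years" one and the headline date moves by about two decades. The arithmetic is trivially correct in both cases; the conclusion is just a restatement of the intuitions fed in.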
I don’t know how Epoch AI can both "hire people with a diversity of viewpoints in order to counter bias" and ensure that your former employees won’t try to "cash in on the AI boom in an acceleratory way". These seem like incompatible goals.
I think Epoch has to either:
- accept that hiring forecasters with a diversity of viewpoints means some of them may later do acceleratory things, like founding an AI startup, or
- give up on viewpoint diversity and only hire people who are already committed to not cashing in on the AI boom.

Is there a third option?
This confused me at first until I looked at the comments. The EA Forum post you linked to doesn't specifically say this, and the Good Ventures blog post that forum post links to doesn't specifically say this either. I think you must be referring to the comments on that forum post, particularly the exchange between Dustin Moskovitz (whose account is now shown as "[anonymous]") and Oliver Habryka.
Dustin Moskovitz made three comments, here, here, and here, which are oblique and confusing (and he seems to say somewhere else that he's being vague on purpose). But I think Dustin is saying that he's now wary of funding things related to "the rationalist community" (defined below) for multiple reasons he doesn't fully get into. These seem to include both a long history of problems and the then-recent Manifest 2024 conference, which was hosted at Lighthaven (the venue owned by Lightcone Infrastructure, the organization that runs the LessWrong forum, the online home of "the rationalist community") and attracted attention due to the extreme racist views of many of the attendees.
I think the way you tried to make this distinction is not helpful and actually adds to the confusion. We need to distinguish two very different things:

1. The older, more universal, centuries-old concept of rationality.
2. The online and Bay Area-based "rationalist community", i.e., the subculture centred on the LessWrong forum.
The online and Bay Area-based "rationalist community" (2) tends to believe it has especially good insight into the older, more universal concept of rationality (1) and that self-identified "rationalists" (2) are especially good at being rational or practicing rationality in that older, more universal sense (1). Are they?
No.
Calling yourselves "rationalists" and your movement or community "rationalism" is just a PR move, and a pretty annoying one at that. It's annoying for a few reasons: partly because it's arrogant and partly because it leads to exactly the kind of confusion we see in this post, where the centuries-old and widely-known concept of rationality (1) gets conflated with an eccentric, niche community (2). It makes ancient, universal terms like "rational" and "rationality" contested ground, with a small group of people holding unusual views, many of them irrational, staking a claim on these words.
By analogy, this community could have called itself "the intelligence movement" or "the intelligence community". Its members could have self-identified as something like "intelligent people" or "aspirationally intelligent people". That would have been a little bit more transparently annoying and arrogant.
So, is Good Ventures or effective altruism ever going to disavow or distance itself from the ancient, universal concept of rationality (1)? No. Absolutely not. Never. That would be absurd.
Has Good Ventures disavowed or distanced itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know, but those comments from Dustin that I linked to above suggest that maybe this is the case.
Will effective altruism disavow or distance itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know. I want this to happen because I think "the rationalist community" (2) decreases the rationality (1) of effective altruism. The more influence the LessWrong/Bay Area "rationalist" subculture (2) has over effective altruism, the less I like effective altruism and the less I want to be a part of it.