This is a special post for quick takes by Neel Nanda. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.

I am extremely against defrauding people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many people in or influenced by the EA community who I respect and think do good and important work.

As do I, brother. Thanks for this declaration! I think now might not be the worst time for those who do identify directly as EAs to say so to encourage the movement, especially some of the higher-up thought and movement leaders. I don't think a massive sign-up form or anything drastic is necessary, just a few higher-status people standing up and saying "hey, I still identify with this thing".

That is, if they think it isn't an outdated term...

I’m curious what you both think of my impression that the focus on near-term AGI has completely taken over EA and sucked most of the oxygen out of the room.

I was probably one of the first 1,000 people to express an interest in organized effective altruism, back before it was called “effective altruism”. I remember being in the Giving What We Can group on Facebook when it was just a few hundred members, when they were still working on making a website. The focus then was exclusively on global poverty.

Later, when I was involved in a student EA group from around 2015 to 2017, global poverty was still front and centre, animal welfare and vegetarianism/veganism/reducetarianism were secondary, and the conversation about AI was nipping at the margins.

Fast forward to 2025 and it seems like EA is now primarily a millennialist intellectual movement focused on AGI either causing the apocalypse or creating utopia within the next 3-10 years (with many people believing it will happen within 5 years), or possibly as long as 35 years if you're at the far conservative end of the spectrum.

This change has nothing to do with FTX and probably wouldn’t be a reason for anyone at Anthropic to distance themselves from EA, since Anthropic is quite boldly promoting a millennialist discourse around very near-term AGI.

But it is a reason for me not to feel an affinity with the EA movement anymore. It has fundamentally changed. It’s gone from tuberculosis to transhumanism. And that’s just not what I signed up for.

The gentle irony is that I’ve been interested in AGI, transhumanism, the Singularity, etc. for as long as I’ve been interested in effective altruism, if not a little longer. In principle, I endorse some version of many of these ideas.

But when I see the kinds of things that, for example, Dario Amodei and others at Anthropic are saying about AGI within 2 years, I feel unnerved. It feels like I’m at the boundary of the kind of ideas that it makes sense to try to argue against or rationally engage with. Because it doesn’t really feel like a real intellectual debate. It feels closer to someone experiencing some psychologically altered state, like mania or psychosis, where attempting to rationally persuade someone feels inappropriate and maybe even unkind. What do you even do in that situation?

I recently wrote here about why these super short AGI timelines make no sense to me. I read an article today that puts this into perspective. Apple is planning to eventually release a version of Siri that merges the functionality of the old, well-known version of Siri and the new soon-to-be-released version that is based on an LLM. The article says Apple originally wanted to release the merged version of Siri sooner, but now this has been delayed to 2027. Are we going to have AGI before Apple finishes upgrading Siri? These ideas don’t live in the same reality.

To put a fine point on it, I would estimate the probability of AGI being created by January 1, 2030 to be significantly less than the odds of Jill Stein winning the U.S. presidential election in 2028 as the Green Party candidate (not as the nominee of either the Democratic or Republican party), which, to be clear, I think will be roughly as likely as her winning in 2024, 2020, or 2016 was. I couldn’t find any estimates of Stein’s odds of winning either the 2028 election or past elections from prediction markets or election forecast models. At one point, electionbettingodds.com gave her 0.1%, but I don’t know if they massively rounded up or if those odds were distorted by a few long-shot bets on Stein. Regardless, I think it’s safe to say the odds of AGI being developed by January 1, 2030 are significantly less than 0.1%.

If I am correct (and I regret to inform you that I am correct), then I have to imagine the credibility of EA will diminish significantly over the next 5 years. Because, unlike FTX scamming people, belief in very near-term AGI is something that many people in EA have consciously, knowingly, deliberately chosen to adopt. Whereas many of the warning signs about FTX were initially only known to insiders, the evidence against very near-term AGI is out in the open, meaning that deciding to base the whole movement on it now is a mistake that is foreseeable and… I’m sorry to say… obvious.

I feel conflicted saying things like this because I can see how it might come across as mean and arrogant. But I don’t think it’s necessarily unkind to try to give someone a reality check under unusual, exceptional circumstances like these.

I think EA has become dangerously insular and — despite the propaganda to the contrary — does not listen to criticism. The idea that EA has abnormal or above-average openness to criticism (compared to what? the evangelical church?) seems only to serve the function of self-licensing. That is, people make token efforts at encouraging or engaging with criticism, and then, given this demonstration of their open-mindedness, become more confident in what they already believed, and feel licensed to ignore or shut down criticism in other instances.

It also bears considering what kind of criticism or differing perspectives actually get serious attention. Listening to someone who suggests that you slightly tweak your views is, from one perspective, listening to criticism, but, from another perspective, it’s two people who already agree talking to each other in an echo chamber and patting themselves on the back for being open-minded. (Is that too mean? I’m really trying not to be mean.)

On the topic of near-term AGI, I see hand-wavey dismissal of contrary views, whether they come from sources like Turing Award winner and Meta Chief AI Scientist Yann LeCun, surveys of AI experts, or superforecasters. Some people predict AGI will be created very soon, and a seemingly much larger number think it will take much longer. Why believe the former and not the latter? I see people being selective in this way, but I don’t see them giving principled reasons for being selective.

Crucially, AGI forecasts are a topic where intuition plays a huge role, and where intuitions are contagious. A big part of the “evidence” for near-term AGI that people explicitly base their opinion on is what person X, Y, and Z said about when they think AGI will happen. Someone somewhere came up with the image of some people sitting in a circle just saying ever-smaller numbers to each other, back and forth. What exactly would prevent that from being the dynamic?

When it comes to listening to differing perspectives on AGI, what I have seen more often than engaging with open-mindedness and curiosity is a very unfortunate, machismo/hegemonic masculinity-style impulse to degrade or humiliate a person for disagreeing. This is the far opposite of "EA loves criticism". This is trying to inflict pain on someone you see as an opponent. This is the least intellectually healthy way of engaging in discourse, besides, I guess, I don’t know, shooting someone with a gun if they disagree with you. You might as well just explicitly forbid and censor dissent.

I would like to believe that, in 5 years, the people in EA who have disagreed with me about near-term AGI will snap out of it and send me a fruit basket in gratitude. But they could also do like Elon Musk, who, after predicting fully autonomous Teslas would be available in 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024, and being wrong on all nine counts, now predicts fully autonomous Teslas will be available in 2025.

In principle, you could predict AGI within 5 years and just have called it a few years too soon. If you can believe in very near-term AGI today, you will probably be able to believe in very near-term AGI when 2030 rolls around, since AI capabilities will only improve.

Or they could go the Ray Kurzweil route. In 2005, Kurzweil predicted that we would have “high-resolution, full-immersion, visual-auditory virtual reality” by 2010. In 2010, when he graded his own predictions, he called this prediction “essentially correct”. This was his explanation:

The computer game industry is rapidly moving in this direction. Technologies such as Microsoft’s Kinect allows players to control a videogame without requiring controllers by detecting the player's body motions. Three-dimensional high-definition television is now available and will be used by a new generation of games that put the user in a full-immersion, high-definition, visual-auditory virtual reality environment.

Kurzweil’s gradings of his own predictions are largely like that. He finds a way to give himself a rating of “correct” or “essentially correct”, even when he was plainly incorrect. I wonder if Dario Amodei will do the same thing in 2030.

In 2030, there will be the option of doubling down on near-term AGI. Either the Elon Musk way — kick the can down the road — or the Ray Kurzweil way — revisionist history. And the best option will be some combination of both.

When people turn out to be wrong, it is not guaranteed to increase their humility or lead to soul searching. People can easily increase their defensiveness and their aggression toward people who disagree with them.

And, so, I don’t think merely being wrong will be enough on its own for EA to pull out of being a millennialist near-term AGI community. That can continue indefinitely even if AGI is over 100 years away. There is no guarantee that EA will self-correct in 5 years.

For these reasons, I don’t feel an affinity toward EA anymore — it’s nothing like what it was 10 or 15 years ago — and I don’t feel much hope for it changing back, since I can imagine a scenario where it only gets worse 5 years from now.
