This is a special post for quick takes by Yarrow. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I used to feel so strongly about effective altruism. But my heart isn't in it anymore.

I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.

But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.

These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:

-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.

-The extent to which LessWrong culture has taken over or "colonized" effective altruism culture is such a bummer. I know there's been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or "rationalism" and the more knowledge and experience of it I've gained, the more I've accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.

-The stories about sexual harassment are so disgusting. They're really, really bad and crazy. And it's so annoying how many comments you see on EA Forum posts about sexual harassment that make exhausting, unempathetic, arrogant, and frankly ridiculous statements, if not borderline incomprehensible in some cases. You see these stories of sexual harassment in the posts and you see evidence of the culture that enables sexual harassment in the comments. Very, very, very bad. Not my idea of a community I can wholeheartedly feel I belong to.

-Kind of a similar story with sexism, racism, and transphobia. The level of underreaction I've seen to instances of racism has been crazymaking. It's similar to the comments under the posts about sexual harassment. You see people justifying or downplaying clearly immoral behaviour. It's sickening.

-A lot of the response to the Nonlinear controversy was disheartening. It was disheartening to see how many people were eager to enable, justify, excuse, downplay, etc. bad behaviour. Sometimes aggressively, arrogantly, and rudely. It was also disillusioning to see how many people were so... easily fooled.

-Nobody talks normal in this community. At least not on this forum, in blogs, and on podcasts. I hate the LessWrong lingo. To the extent the EA Forum has its own distinct lingo, I probably hate that too. The lingo is great if you want to look smart. It's not so great if you want other people to understand what the hell you are talking about. In a few cases, it seems like it might even be deliberate obscurantism. But mostly it's just people making poor choices around communication and writing style and word choice, maybe for some good reasons, maybe for some bad reasons, but bad choices either way. I think it's rare that writing with a more normal diction wouldn't enhance people's understanding of what you're trying to say, even if you're only trying to communicate with people who are steeped in the effective altruist niche. I don't think the effective altruist sublanguage is serving good thinking or good communication.

-I see a lot of interesting conjecture elevated to the level of conventional wisdom. Someone in the EA or LessWrong or rationalist subculture writes a creative, original, evocative blog post or forum post and then it becomes a meme, and those memes end up taking on a lot of influence over the discourse. Some of these ideas are probably promising. Many of them probably contain at least a grain of truth or insight. But they become conventional wisdom without enough scrutiny. Merely because an idea is "homegrown," it takes on the force of a scientific idea that's been debated and tested in peer-reviewed journals for 20 years, or of a widely held precept of academic philosophy. That seems intellectually wrong and also weirdly self-aggrandizing.

-An attitude I could call "EA exceptionalism", where people assert that people involved in effective altruism are exceptionally smart, exceptionally wise, exceptionally good, exceptionally selfless, etc. Not just above the average or median (however you would measure that), but part of a rare elite and maybe even superior to everyone else in the world. I see no evidence this is true. (In these sorts of discussions, you also sometimes see the lame argument that effective altruism is definitionally the correct approach to life because effective altruism means doing the most good and if something isn't doing the most good, then it isn't EA. The obvious implication of this argument is that what's called "EA" might not be true EA, and maybe true EA looks nothing like "EA". So, this argument is not a defense of the self-identified "EA" movement or community or self-identified "EA" thought.)

-There is a dark undercurrent to some EA thought, along the lines of negative utilitarianism, anti-natalism, misanthropy, and pessimism. I think there is a risk of this promoting suicidal ideation because it basically is suicidal ideation.

-Too much of the discourse seems to revolve around how to control people's behaviours or beliefs. It's a bit too House of Cards. I recently read about the psychologist Kurt Lewin's study on the most effective ways to convince women to use animal organs (e.g. kidneys, livers, hearts) in their cooking during meat shortages in World War II. He found that a less paternalistic approach that showed more respect for the women's autonomy was more effective in getting them to incorporate animal organs into their cooking. The way I think about this is: you didn't have to be manipulated to get to the point where you are in believing what you believe or caring this much about this issue. So, instead of thinking of how best to manipulate people, think about how you got to the point where you are and try to let people in on that in an honest, straightforward way. Not only is this probably more effective, it's also more moral and shows more epistemic humility (you might be wrong about what you believe, and that's one reason not to try to manipulate people into believing it).

-A few more things but this list is already long enough.

Put all this together and the old stuff I cared about (charity effectiveness, giving what I can, expanding my moral circle) is lost in a mess of other stuff that is antithetical to what I value and what I believe. I'm not even sure the effective altruism movement should exist anymore. The world might be better off if it closed down shop. I don't know. It could free up a lot of creativity and focus and time and resources to work on other things that might end up being better things to work on.

I still think there is value in the version of effective altruism I knew around 2015, when the primary focus was on global poverty and the secondary focus was on animal welfare, and AGI was on the margins. That version of effective altruism is so different from what exists today — which is mostly about AGI and has mostly been taken over by the rationalist subculture — that I have to consider those two different things. Maybe the old thing will find new life in some new form. I hope so.

Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried's behaviour years before the FTX collapse?

Will responded here.

Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.

I'm sorry you encountered this, and I don't want to minimise your personal experience.

I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.

On the whole though, I've found the EA community (both online and those I've met in person) to be incredibly pro-LGBT and pro-trans. The underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism, etc.) point that way, as do the underlying demographics (e.g. young, highly educated, socially liberal).

I think where there might be a split is in progressive (as in, leftist politically) framings of issues and the type of language used to talk about these topics. I think those framings often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don't think that means the community as a whole, or even that sub-section, is 'anti-LGBT' and 'anti-trans', and I think there are historical and multifaceted reasons why there's some enmity between 'progressive' and 'EA' camps/perspectives.

Nevertheless, I'm sorry that you experienced this sentiment, and I hope you're feeling ok.

The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.
