The sad decline of effective altruism
A once wholesome and inspiring movement has become entangled with cults and has lost years chasing dead-end ideas
In the late 2000s and early 2010s, effective altruism started with two simple ideas:
People in high-income countries who can afford to live comfortably have an obligation to give money to help the world’s poorest people (who live on as little as $3 a day)
The best charities are so much more cost-effective than the worst charities that choosing the right charity is as important as choosing to donate to charity in the first place
The organization Giving What We Can was created around the idea that people would pledge 10% of their income to charities helping the global poor. I discovered Giving What We Can in the late 2000s in the form of a Facebook group while the website was still under construction. I was one of the first people to take the 10% pledge.
Roughly around the same time, the charity evaluator GiveWell started publishing its rankings of the most cost-effective charities. The Against Malaria Foundation, which distributes anti-malarial bednets in poor countries, has held a top spot for over 15 years. “AMF” has become a shorthand in effective altruist circles for charities working in the global poverty arena.
For me, the definitive early book on effective altruism was the moral philosopher Peter Singer’s 2009 book The Life You Can Save. This book was an expansion and updating of ideas Singer had been writing about for a long time, including in his famous 1972 essay “Famine, Affluence, and Morality”. Singer is perhaps even better known for his work on animal rights and books such as Animal Liberation. I believe it’s largely because of Singer’s founding influence that vegetarianism and veganism, and the overall concern of factory farming, became an important, albeit secondary, facet of effective altruism starting early on. When I co-organized a university effective altruism group in the mid-2010s, although our primary focus was on global poverty, many of our members were also vegetarian or at least seeking to eat less meat.
I feel this is a solid and beautiful foundation for a social movement. In 2013, Peter Singer gave a TED Talk on effective altruism that encapsulates the version of effective altruism that existed at that time. That version of effective altruism is wholesome, logical, and humble. In the intervening years, so much of what I loved about effective altruism has been lost. The humility is gone. One of the more off-putting things about the effective altruism subculture nowadays is a prevalent desire to feel superior to other, “normal” people outside the subculture.
This is a betrayal. The founding premise of the movement was: no matter where you are on the planet, no matter what country you live in, what language you speak, or what the colour of your skin is, your life has equal value to anyone else’s, and if you are suffering, we are obliged to help you. With such egalitarian beginnings, how did effective altruism turn into a way to feel better than everyone else?
To add insult to injury, there has been an unfortunate rise of “luxury effective altruism”, in which glamour and opulence have become increasingly favoured at the same time as global poverty has become increasingly disfavoured as a cause area. The main effective altruism organization, Effective Ventures, which serves as an umbrella for various other effective altruism organizations, infamously purchased a $20 million abbey (think: Downton Abbey), sometimes colloquially referred to as a “castle”, as a venue for conferences, workshops, and retreats. To my knowledge, the abbey saw very little, if any, actual use. Just two years later, Effective Ventures put the abbey back up for sale for $8 million — that’s a $12 million loss. In recent years, effective altruism has also been shifting its priorities away from global poverty toward more esoteric and futuristic ideas like “longtermism” and artificial general intelligence or AGI, which are a lot more fun to talk about than global poverty (particularly in luxurious venues like an abbey).
It is a bizarre twist of history that a movement founded on radical egalitarianism and the notion of a moderate sacrifice of income for the sake of the global poor became associated with opulence, profligate spending, and feeling superior to others, while largely dropping the focus on the global poor in favour of other concerns. It would be as strange as if the Rotary Club had shifted focus from humble service and humanitarian aid to promoting human settlement on Mars, at the same time it started hosting conferences at five-star hotels in tropical vacation destinations. Effective altruism took the idea it started with and turned it on its head. How did that happen?
Part of the story is a decline in intellectual standards. In the late 2000s and early 2010s, there was a strong academic influence on effective altruism. Peter Singer is an academic philosopher. Giving What We Can, the organization that promotes the 10% pledge, was co-founded by Toby Ord, another academic philosopher, and Will MacAskill, an academic philosopher who has since left academia as his ideas have become more fringe and bizarre.
GiveWell, the charity evaluator, was founded on the idea of doing rigorous, evidence-based cost-effectiveness evaluations of charities. GiveWell is perhaps the last remaining effective altruist organization that has not experienced a decline, and therefore it’s a notable exception to this overall story. GiveWell’s focus remains purely on global poverty. As far as I can tell, its standards for rigour and evidence remain as high as ever. However, one of GiveWell’s co-founders, Holden Karnofsky, left GiveWell and in recent years became primarily focused on artificial general intelligence. Karnofsky now works at the AI company Anthropic where his focus is AGI safety. (Karnofsky is married to Daniela Amodei, the president of Anthropic, and his brother-in-law is Dario Amodei, Anthropic’s CEO.)
When I was most active in effective altruism in the mid-2010s, the moral and theoretical side of effective altruism came from academic philosophers like Peter Singer, and the empirical, practical side of effective altruism came primarily from GiveWell. That felt like a solid foundation to me. Effective altruism was embracing ideas that were somewhat radical, such as donating 10% of your income to charity, sometimes harshly evaluating charities (which if not done carefully can come across as mean-spirited), and vegetarianism. However, these were only somewhat radical ideas, and all of them felt like a logical extension of things we already do: most of us already donate something to charity, we already evaluate the cost-effectiveness of companies and government programs, and we care about the well-being of cats and dogs. There was typically great care to logically justify these ideas and, in my experience, to also emotionally digest them carefully, with openness, curiosity, and uncertainty.
One error that befell effective altruism, even as early as the mid-2010s, was the human tendency to become overzealous about something you believe, or something you like. If you’re a fan of a certain TV show, or a certain band, it often seems not enough to say that it’s a great band or a great show, but that it’s the best in the world. This is probably helped along by a conflation between the art itself and the experience of finding community among fellow fans. I don’t know that My Little Pony or One Direction are the best TV show or band in the world, but I can believe the experience of love, connection, and belonging one might find through bonding with people who love the same thing you do could be the best thing in the world.
With hobbies, passions, and causes, the tendency seems to be there too, perhaps in even fuller force. As I’ve taken an interest in digital archiving as a hobby, I’ve noticed a number of people are not content to say it’s a fun hobby or an important thing to do, but the most important thing anyone could possibly be doing, even to the extent of scorning other, “normal” people who don’t see its importance.
In my leadership role in a university student group focused on effective altruism, I tried to emphasize self-skepticism and amenability to criticism. Becoming overzealous and dogmatic can cost you everything. That’s how people get radicalized to a dangerous point (particularly in politics). Staying grounded and self-skeptical is how you stop yourself from getting carried away, from taking a good idea too far. But that’s boring and not really as good for your self-esteem. The thing that’s best for your self-esteem, and the thing that feels energizing, exciting as all get out, is to believe that you’re doing the most important thing in the world — which most people aren’t doing — and, furthermore, you’ve cracked the code about what is and isn’t important in life. How intoxicating! What a rush! What a high!
In 2016, as a co-organizer of a university effective altruism group, I was put on a Skype call with a representative of the Local Effective Altruism Network or LEAN, an organization that was supposed to provide advice and support to student groups and city clubs. That conversation was bizarre and disturbing. The LEAN representative who spoke to me and my co-organizer eerily alluded to the idea that effective altruism would “solve all the world’s problems in priority sequence”. I had already been catching inklings of the messiah complex that was growing in effective altruism, and there it was in the starkest relief yet. “Solve all the world’s problems in priority sequence”? What?
The darker and more disturbing part of the call was the ends-means justification the LEAN representative engaged in. He explained to me and my co-organizer how we should manipulate our fellow students into being good soldiers for the effective altruist cause. He said we should reinforce to our fellow students that their self-esteem and worth are tied to how much they contribute to effective altruism and how committed they are to the movement. The end goal was creating “life-long effective altruists”. To me, this was and is unthinkable.
When people showed up to our effective altruism meetings, I didn’t see them as things to control in service of effective altruism. I saw them as, well… equal human beings, like myself. No one manipulated or brainwashed me into becoming so enthusiastic about effective altruism that I co-organized a student group. Why would that kind of underhanded, abusive tactic be necessary for others? And if people can only be converted to your ideas through psychological control, what makes you so sure your ideas are right?
By the time of this Skype call, I had formed close friendships with some of the people involved in our group and warm acquaintanceships with several others. The idea that I would ever try to emotionally manipulate them in this way was so repugnant, so absurd, so plainly off the table that it doesn’t even compute — it’s hard for me to understand how anyone could ever think that’s a good suggestion. You might as well suggest I should pickpocket my friends and donate the cash to charity.
If anything is going to solve the world’s problems, it will be based in love, and not this wretched stuff. The Local Effective Altruism Network has been defunct since 2020, but the broader cultural problems in effective altruism, such as messianic thinking and ends-means justification of unethical behaviour, sadly remain.
Another error that has afflicted effective altruism since around the same time that the LEAN representative advised me to manipulate my fellow university students is an insatiable appetite for radical, weird, contrarian ideas. In moderation, such an appetite would not be a bad thing. We should have some appetite for novelty and change. We need that to make progress. But in effective altruism, this craving has more the character of addiction. Effective altruists can never get enough strange novelty, and they’re always wanting more, more, more. What’s the hot new idea? How can we upstage ourselves with how weird and radical and mind-blowing it is?
Some of these ideas sound like satire, but as far as I can tell, they’re not. (The fact that it’s at all hard to tell is a bad sign.) If you browse the Effective Altruism Forum, you’ll occasionally find moderately upvoted posts on topics such as whether time machines are an existential risk, or posts that argue that we should send radio messages to aliens that warn them humanity might soon create a dangerous artificial general intelligence. For a number of years, from roughly 2018 to 2024, “longtermism” was the hot new idea in effective altruism. Longtermism is the idea that if 8 trillion people will exist in the future, they are 1,000 times more important (due to being 1,000 times as numerous) than the 8 billion people who exist in the present. Okay, so what would that imply?
Advocates of longtermism don’t have a good answer. They have had a devil of a time coming up with any reasonable-sounding, practical proposal that doesn’t long predate the word “longtermism”. The practical proposals that sound the most promising focus on taking cautionary measures against low-probability, high-impact events, such as by investing more in asteroid defense or pandemic preparedness. But of course, these ideas themselves aren’t novel. For instance, NASA has been working on asteroid detection since at least the 1990s. To the unfamiliar, reframing asteroids and pandemics in terms of the moral value of far future lives might sound like a novel theoretical contribution that “longtermism” can claim. But it isn’t novel. The term “longtermism” was coined in 2017 and that specific idea traces much further back.
Prior to 2017, there were many years of scholarly writing on existential risk and global catastrophic risks that specifically emphasized the importance of safeguarding against such risks because of the potentially astronomical numbers of human lives that could exist in the far future if such disasters were avoided. Perhaps the earliest example is the philosopher Derek Parfit’s famous book Reasons and Persons from 1984. In 2002, the philosopher Nick Bostrom published a paper called “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, which defined an existential risk as an event that would “either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”. In the subsequent decade or so, Bostrom helped popularize the topic of existential risk. From 2008 to 2010, Bostrom’s Future of Humanity Institute at Oxford University organized an annual Global Catastrophic Risks conference. (You can still find the talks from those conferences online.) In 2008, Oxford University Press published an anthology of essays called Global Catastrophic Risks, which was edited by Bostrom. All this to say, the topic of existential risk or global catastrophic risk, as well as the particular angle of considering vast numbers of potential future lives, was well-covered long before “longtermism” was a word.
If the point of “longtermism” were just to re-brand a pre-existing idea, that would be fine. But this isn’t how longtermism has been presented by Will MacAskill, one of the effective altruist philosophers who coined the term “longtermism”, or how it has been received by the effective altruist community. MacAskill presented longtermism as a novel and exciting idea, and the effective altruist community treated it like a novel and exciting idea. But there’s really nothing of interest there, just a rehashing of prior work.
Other than mitigating existential and global catastrophic risks, when longtermism can muster any kind of reasonable-sounding, concrete idea about what to do, all it offers are vague gestures toward working on long-held, common-sense societal goals, such as economic growth, environmentalism, the advancement of science and technology, or moral and social progress. Since there aren’t specific, practical ideas on offer here, it’s hard to puzzle out what the practical upshot of any of this is. Suppose we took into account potential future lives and decided economic growth is 1,000 times more important than we thought before. What should we do differently? I don’t know. And neither do longtermists.
The closest thing to a longtermism project that is both practical and novel is probably the Patient Philanthropy Project, which has set aside $2.5 million for future philanthropic endeavours. What endeavours? Nobody knows yet. Personally, I don’t think this counts. Just putting money aside and earmarking it for “longtermism”, with no specific use in mind for the money, is not the same as identifying a viable longtermist intervention. The Patient Philanthropy Project has just pre-allocated the money, hoping that a viable intervention will eventually arrive.
The lack of any novel practical prescription from longtermism is not for want of trying. In 2018, Will MacAskill, the former academic philosopher whom I mentioned above as a co-founder of Giving What We Can, founded a non-profit research organization, the Forethought Foundation, devoted to longtermism. After spending millions in funding, it was not able to come up with a single novel, promising longtermist intervention before it closed in 2024. A 2025 anthology, Essays on Longtermism, published by Oxford University Press, provides no answers, either. Nearly a decade in, longtermism has been a total bust. Oops.
This is something that I’ve seen critics of longtermism and effective altruism get wrong. Some people talk about longtermism as if it’s something disturbing and fanatical. The best I can figure is they must be confusing longtermism with transhumanism. The correct critique of longtermism, in my mind, is not that it’s a spooky, dangerous ideology, but that there’s just nothing there. Take out the rehashing of older ideas and the stuff that’s just common sense, like economic growth, and you have nothing left. Rather than something scary, it’s actually boring and disappointing — completely academic and abstract, and not practical at all.
Although the effective altruism community has not decided that longtermism is a failed idea, there’s much less energy behind longtermism than there used to be. Will MacAskill has moved on from longtermism to a full-time focus on artificial general intelligence. He has a new non-profit research organization with a nearly identical name, Forethought Research, which focuses on “AGI preparedness” and “AI macrostrategy”. What are AGI preparedness and AI macrostrategy? I’m still not quite sure, and not for lack of trying to find out. There are the long-established ideas of friendly AI, AI alignment, and AI safety that relate to making a hypothetical future artificial general intelligence a machine angel rather than a machine demon. How is AGI preparedness and AI macrostrategy distinct from AI safety or AI alignment? It’s really not clear. Mostly, it seems to be AI safety or AI alignment under a new name. Uh oh, here we go again… Just like “longtermism” was mainly a re-brand of much older ideas about existential risk and global catastrophic risk, AGI preparedness looks like another MacAskill re-brand.
The most concrete example I can find of MacAskill’s new “AGI preparedness” concept carrying some different meaning from AI safety or AI alignment is in the idea of space governance. If I understand the idea correctly, MacAskill believes that artificial general intelligence will soon lead to an explosive expansion of AIs and maybe humans into outer space. Given this, we need to come up with international laws governing how space will be ruled and owned in the future. Uh, okay. Sure… My thought is: can’t the AGI angels figure that out? And if you buy into this notion, shouldn’t you be fully focused on making sure AGIs are angels (rather than demons) that are capable of coming up with much better laws than humans ever could, rather than rushing now to write international space laws that might soon be moot?
AGI safety and AGI alignment are strange enough ideas already. AGI preparedness is an even stranger idea: some mix of a pointless re-brand of AGI safety/AGI alignment, a few extremely dubious and oddball ideas like space governance, and some gestures at vague outlines of ideas that might not ever get fully filled out (as we saw with longtermism and the previous MacAskill organization called Forethought). Will MacAskill is seen by those in the subculture as perhaps the #1 face of effective altruism. He is widely respected, and people follow his lead, intellectually and in spirit. Unfortunately, MacAskill led effective altruists down the garden path with longtermism. MacAskill’s newer, hotter idea, AGI preparedness, seems destined to be as much of a boondoggle as longtermism. Yet because MacAskill’s name is attached, and because it superficially appears novel and imaginative, so far effective altruists seem to have warmly and uncritically received the AGI preparedness concept. Effective altruism has not learned from failure here, and is rushing headlong into another terrible waste of time and money.
How did we get here? How did effective altruism go from the 10% pledge and charity effectiveness to longtermism and AGI preparedness in fifteen years? Part of the answer is that effective altruism has been rotting from the inside for a long time. The current effective altruism movement’s lineage traces back to a shockingly large degree to a Bay Area-based cult called Leverage Research. Leverage Research was founded by a charismatic leader who believed himself to be the greatest philosopher to ever live, and convinced his followers of this. His followers would subject initiates to long interrogations meant to excise “demons” from their brains. “Demons” were initially just the name given to psychological problems within the cult’s fringe belief system. However, over time, paranoia, delusion, and psychosis gripped some of the cult members, and they began to believe literal supernatural demons were possessing them.
The Leverage Research cult was apparently adept at smooth talking and manipulating people. They could present a normal enough face to people outside the cult to not immediately set off alarm bells. Leverage organized the first conferences focused on effective altruism and, later on, played a key role in organizing the first Effective Altruism Global conferences that continue to this day. In the late 2010s, Leverage Research took control of the Centre for Effective Altruism, the most prominent and authoritative effective altruism organization. In 2018 and 2019, Leverage Research cultists filled the top leadership positions at the CEA, including the role of CEO. If you want an analogy, this would be akin to one of L. Ron Hubbard’s lieutenants getting elected president of the United States. At the very highest level, effective altruism got taken over by a cult.
Let me linger on this point to emphasize just how crazy this is. The Centre for Effective Altruism was co-founded by Toby Ord and Will MacAskill (the same two people who co-founded Giving What We Can). As I mentioned, MacAskill is probably the #1 face of effective altruism. According to what records I can find, MacAskill served as CEO of the CEA until 2017, and stayed on as president for some time after. For many years, the CEA’s annual budget has been in the millions or tens of millions of dollars. If any one organization is synonymous with effective altruism, it’s the CEA. That it was run by a cult is alarming and disturbing in the extreme.
But Leverage Research is not where effective altruism’s involvement with cults ends. Effective altruism is intimately intertwined with LessWrong, a fringe online community with some offline instantiations in the Bay Area. LessWrong is not exactly a cult itself; it’s more analogous to fringe online communities like QAnon conspiracy theory groups or anti-feminist incel groups. However, LessWrong is a prolific generator of cults. LessWrong has spawned around half a dozen cults, including the infamous Zizian cult that murdered six people. On top of all that, LessWrong has an infatuation with far-right authoritarian politics and racist pseudoscience. (The Effective Altruism Forum, for its part, has no rules against advocating for racist eugenics programs such as gene therapy for sub-Saharan Black Africans, and moderators have declined to remove such content.)
If there’s one thing that can be blamed for effective altruism’s decline, it’s effective altruism’s relationship with LessWrong — even more than the takeover by Leverage Research. Effective altruism and LessWrong have become so intertwined that the line between them is blurred. It’s hard to say where effective altruism ends and the LessWrong community begins. Many people, including many of the most prominent people in effective altruism, are active in both communities. The Effective Altruism Forum runs on LessWrong’s software, and the two sites are integrated so as to allow easy cross-posting between them. In 2025, in his forum post on AGI preparedness, Will MacAskill complained that no one was posting exciting ideas on the Effective Altruism Forum anymore and that all the real action was happening on LessWrong.
Above all else, LessWrong is about fear of dangerous AGI. This is the primary, lifelong, monomaniacal preoccupation of LessWrong’s founder and spiritual leader, the blogger and Harry Potter fanfic author Eliezer Yudkowsky. While discussions on the site criss-cross eclectic topics and much lip service is paid to “rationality”, the focus is always and everywhere, first and foremost, the accursed Machine Devil (and the tantalizing but elusive Machine God). The LessWrong community even tried to create a “rationality training” program called the Center for Applied Rationality, but, according to its own employees, it failed at its stated goal because its employees kept manipulating its workshop attendees into fearing existential risk from AGI. (And it also ran a summer camp where it did the same thing to kids! Oops!)
LessWrong’s fear of AGI has so taken over effective altruism that effective altruism is no longer recognizable as what it was in the early 2010s. Effective altruism is now all about fear of dangerous AGI almost as much as LessWrong is. The once respectable career advice organization 80,000 Hours has pivoted to a full-time focus on AGI, which includes producing extraordinarily expensive YouTube videos that manipulate people into feeling afraid about AGI. Unfortunately, some combination of self-deception, delusion, and ends-means justification makes zealous effective altruists willing to do unethical things (remember that Skype call I had with the Local Effective Altruism Network in 2016?).
By locating the root cause of effective altruism’s intellectual decline partly in Leverage Research and LessWrong, I don’t mean to absolve effective altruism of responsibility. Cults and fringe groups are bad, but what kind of farkakte movement lets a cult take over its main organization and a cult-like fringe community take over its subculture? Effective altruism has to own its role in this, and in how this is playing out now. For instance, it was 80,000 Hours’ decision to make manipulative videos about the coming Machine Devil, and it must take full responsibility for that.
Is there any hope for effective altruism going forward? As much as I want to say yes, I don’t see how there could be. At this point, the required changes would be so tectonic it would be almost the same as starting from scratch. So, I think it would be better to just start from scratch with something new. This pains me to say. Some of my happiest memories are associated with my university effective altruism group. I feel that version of effective altruism was something worth loving, advocating, and sacrificing for. What effective altruism is now is like some weird zombie parasite that’s taken over the old effective altruism’s body and is walking it around like a puppet. There is no good left in effective altruism anymore. Or, rather, it’s so overwhelmed by all the dark, disturbing, crazy, noxious, evil stuff — the cults, the racism, the manipulation — that we should just cut our losses and put a bullet through its rotting zombie head.
And start something new.
I’ve thought about what could serve as an alternative to the current dismal, moribund effective altruism movement. There is some significant number of people who liked early 2010s effective altruism but feel alienated by the current movement. Within the broader effective altruism landscape, there are already organizations that use the term “effective giving” rather than effective altruism. The only real example of effective giving actually splintering off from effective altruism (if it’s even really that) is when the co-founder of a Belgian effective giving organization, Effectief Geven, renounced effective altruism in 2024. The Effectief Geven co-founder, Bob Jacobs, quit effective altruism in the wake of a scandal that revealed the alarming extent to which the LessWrong community and some parts of the effective altruism community are sympathetic to white nationalist politics. I respect Jacobs’ courage and backbone in making this decision.
Effective giving is an example of a sub-movement of effective altruism that could, in theory, branch off and take on its own identity. This wouldn’t necessarily require that everyone involved in effective giving renounce effective altruism like Bob Jacobs did. It would just have to be a friendly home for people who do renounce effective altruism. I don’t know that this will happen or that it’s the best path forward. This is just the closest thing to a new alternative that already exists.
If there’s something that comes next after effective altruism, which seeks to recapture the spirit of early 2010s effective altruism, it needs to have anchors. First, it needs intellectual anchors. One of the strengths of the early effective altruist movement was its grounding in academia. Effective altruism was co-founded by academic philosophers (most notably, Peter Singer, Toby Ord, and Will MacAskill). The largest grassroots part of the effective altruism movement has been through student groups at universities. Effective altruism’s slow descent into madness has come, in part, due to its increasing rejection of academia. Here, the corroding influence of LessWrong is clear: the LessWrong community, and its main spiritual leader Eliezer Yudkowsky, are vehemently anti-academia (and anti-journalism, and anti-mainstream institutions in general). My degree was in philosophy and I could rant for an hour about everything that pisses me off about academic philosophy. I don’t mean to defend academia against all criticism. But, let me tell you, academia sure beats the hell out of blogs and forums and tweets, which in the effective altruism community have come to substitute for academic texts.
Effective altruism has become disdainful of outside expertise and manic in its belief in its own intellectual superiority, in a way that’s deranged and dangerous. Anything that wants to replace effective altruism and not repeat its mistakes needs to stay anchored. Institutions and institutional processes like academia and journalism, for all their flaws, offer a way for intellectual communities to facilitate investigation and discussion, to self-correct and make progress, and to regulate and police themselves against fraud and error. It’s all well and good to criticize these models, but when the alternative model you’re offering — blogs, forums, and Twitter — sucks so irredeemably hard, you don’t really have a leg to stand on. How would academia’s flaws justify an embrace of an alternative that’s a million times worse?
The new alternative to effective altruism would also need emotional anchors. Part of my theory of where effective altruism went wrong is an addiction to the self-esteem-improving qualities of self-congratulatory contrarianism. That addiction has led effective altruism to increasingly reject expertise and mainstream ideas like liberal egalitarianism (e.g., non-racism), and to defer instead to its own homegrown “experts” and ill-thought-out homegrown ideas like longtermism.
As the psychologist Kristin Neff points out, self-esteem is an inherently problematic concept. Self-esteem tells us that to be worthy, to have value, to be lovable and beautiful and deserving, we must be above average. We must be better than others. Since not everyone can be better than everyone else, we employ desperate manoeuvres to obtain self-esteem. Neff’s own prescription is to embrace self-compassion and self-appreciation as alternatives to self-esteem.
In psychology and various spiritual traditions, there are alternatives to self-esteem that fall along similar lines. The emotions researcher Brené Brown has spoken beautifully about the concept of worthiness. One of the central tenets of Hindu metaphysics is that the distinction between any individual human and God is an illusion, and likewise there is no real distinction between individual humans and animals. In the Christian mystical tradition, there are panentheists like Richard Rohr, who tells us, “The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!” The divine comes to us not through perfection, but imperfection. Not through superiority to others, but through the universal experience that connects all of us. As Leonard Cohen said, “There is a crack in everything. That’s how the light gets in.” Or to quote another Christian mystic, Lady Julian, a favourite of Richard Rohr’s, “First, there is the fall, and then we recover from the fall. Both are the mercy of God!” This alternative model of where value and worth come from offers an escape from the desperate manoeuvres we use to get self-esteem — by finding a way to believe that we’re better than others.
What’s sorely missing in contemporary effective altruism is healthy emotional regulation and some kind of healthy spiritual grounding. How do you avoid getting addicted to unhealthy sources of self-esteem? How do you avoid being seduced and corrupted by money, fame, glamour, and power? (Recall the $20 million abbey, and note also effective altruism’s entanglement with the fraudulent cryptocurrency company FTX.) When you’re lost and confused, what is your North Star? When you’re flooded by shame or fear, what do you do? Who do you turn to, and what do you say? And what is your worldview around these things? Do you think that people are inherently bad and your only way to redeem yourself is to be better than everyone else? Do you think you only deserve to be loved or accepted if you achieve enough, succeed enough?
The effective altruism subculture relies on bad tools to cope with these challenges of life. There is a lot of white knuckling. There’s a lot of emotional self-punishment, which when sustained over time contributes to depression, anxiety, and burnout. People engage in a form of emotional avoidance called intellectualization, wherein they avoid feeling uncomfortable emotions like grief, regret, or uncertainty by engaging with difficult subjects through an inauthentic performance of rational thought. When I sense someone is unnecessarily overcomplicating something, or saying what they think sounds good rather than what’s genuine, I often suspect intellectualization is what’s happening underneath.
In the effective altruism community, shaming, ridicule, and personal insults are widespread. Passive aggression is even more widespread. As mentioned, there is some overt endorsement of manipulation tactics — treating people as things to be controlled by you, not equals to be treated as you yourself would wish to be treated. Even when this isn’t explicitly endorsed, you can see it tacitly when people talk about how they evangelize effective altruism. It’s a mess. A new alternative to effective altruism would need leaders with good tools and good emotional regulation, and it would need to make this part of the group’s norms and culture as well.
Why go through all this trouble? Well, out of love. Amidst all the chaos and darkness and grinding, relentless violence of the world, the only things that can save us are those that are, finally, rooted in love and compassion and empathy. That’s our North Star. That’s our way home.
This is also why, for whatever might come next after effective altruism, community and friendship are so important. The radical question effective altruism asks me is: can I love a sub-Saharan African mom and her two kids enough to give her $1,000 so she can send her kids to school and replace the thatched roof of her house with a metal one? But the much more practical question beneath that question is: if we’re in a humanitarian community together, can I love you? And can you love me? Or at least respect me and treat me with kindness? Because I don’t think I would have the strength to do the kind of work I’ve done in effective altruism in some replacement movement without holding affection for the people in the community. The community has to give me strength for me to do this work, not drain it.
Historically, I think effective altruism has been entirely unconcerned with this dimension of volunteering and philanthropic work. When I talked to the Local Effective Altruism Network, its view of my relationship to my peers at university was completely instrumentalist: how can I use them to achieve my ends? That’s corrupt and doomed. We can’t use people like that and love the world in any meaningful sense. And in fact, if you look where effective altruists are at, emotionally, these days, a lot of them have grown to hate the world. Or at least that’s how it seems. You see a lot of dark, pessimistic thinking in effective altruism these days, and attendant philosophical views like anti-natalism and negative utilitarianism, whose adherents sometimes say it would be better if the world didn’t exist. When you don’t love the world in the concrete, eventually you start hating it in the abstract.
Whether anything better comes after effective altruism’s sad decline depends on whether the right people are out there with the energy to work on an alternative. After so many years spent off track, I don’t know if there is still a critical mass of people out there who miss the early 2010s effective altruism and still have energy to organize around that original vision. I’ve tried to find people like that, and they seem kind of thin on the ground. But I don’t know. I can only hope they’re out there, somewhere.
If effective altruism were a person, it would be like a person I once dearly loved, and was in love with, but who went down a dark path — fell in with the wrong crowd, got addicted to drugs, got politically radicalized into some very dumb and dangerous ideas by Twitter and message boards. That tender core of love is still there, deep inside me, when I think about effective altruism, and will probably never fade. Some part of me will probably always love effective altruism, or at least what effective altruism used to be. But when I look at who that person is today — what effective altruism is today — I can’t help but wince. I see someone who’s in pain, whose emotions and thoughts are all jumbled up inside, and who’s acting out their pain on the world in a way that hurts innocent people. I see someone profoundly lost. And I feel a sharp twinge of empathy, combined with a sad sense of hopelessness and helplessness. I feel for them, but what can I do?
So, effective altruism is something I both love and grieve. Rest in peace, effective altruism, 2007–2026. I’m sorry that you were born into a world that twisted you up so bad, and that you just couldn’t figure it out. I love you and I’ll miss you. One day, maybe we’ll do right by your memory.