Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
This is a good post if you view it as a list of questions that frequently come up when talking to people who are new to effective altruism, along with potential good answers to those questions — including that sometimes the answer is to just let it go. (If someone is at college just to party, just say "rock on".)
But there’s a fine line between effective persuasion and manipulation. I’m uncomfortable with this:
This is an important conversation to have within EA, but I don't think having that be your first EA conversation is conducive to you joining. I just say something like "Absolutely—they’re imperfect, but the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."
If I were a passer-by who stopped at a table to talk to someone and they said this to me, I would internally think, "Oh, so you’re trying to work me."
Back when I tabled for EA stuff, my approach to questions like this was to be completely honest. If my honest thought was, "Yeah, I don’t know, maybe we’re doing it all wrong," then I would say that.
I don’t like viewing people as a tool to achieve my ends — as if I know better than them and my job in life is to tell them what to do.
And I think a lot of people are savvy enough to tell when you’re working them and recoil at being treated like your tool.
If you want people to be vulnerable and put themselves on the line, you’ve got to be vulnerable and put yourself on the line as well. You’ve got to tell the truth. You’ve got to be willing to say, "I don’t know."
Do you want to be treated like a tool? Was being treated like a tool what put you in this seat, talking to passers-by at this table? Why would you think anyone else would be any different? Why not appeal to what’s in them that’s the same as what’s in you that drew you to effective altruism?
When I was an organizer at my university’s EA group, I was once on a Skype call with someone whose job it was to provide resources and advice to student EA groups. I think he was at the Centre for Effective Altruism (CEA) — this would have been in 2015 or 2016 — but I don’t remember for sure.
This was a truly chilling experience because this person advocated what I saw then and still see now as unethical manipulation tactics. He advised us — the group organizers — to encourage other students to tie their sense of self-esteem or self-worth to how committed they were to effective altruism or how much they contributed to the cause.
This person from CEA or whatever the organization was also said something like, "if we’re successful, effective altruism will solve all the world’s problems in priority sequence". That and the manipulation advice made me think, "Oh, this guy’s crazy."
I recently read about a psychology study on persuading people to eat animal organs during World War II. There was a shortage of meat at the time, but animals’ organs were being thrown away, despite being edible. A psychologist (Kurt Lewin) wanted to try two different ways of convincing women to cook with animal organs and feed them to their families.
The first way was to devise a pitch to the women designed to be persuasive, designed to convince them. This is from the position of, "I figured out what’s right, now let me figure out what to say to you to make you do what’s right."
The second way was to pose the situation to the women as the study’s designers themselves thought of it. This is from the position of, "I’m treating you as an equal collaborator on solving this problem, I’m respecting your intellect, and I’m respecting your autonomy."
Five times as many women who were treated the second way cooked with organs: 52% of the group vs. 10%.
Among women who had never cooked with organs before, none of them cooked with organs after being treated the first way. 29% of the women who had never cooked with organs before did so for the first time after being treated the second way.
You can read more about this study here. (There might be different ways to interpret which factors in this experiment were important, but Kurt Lewin himself advocated the view that if you want things to change, get people involved.)
This isn’t just about what’s most effective at persuasion, as if persuasion is the end goal and the only thing that matters. Treating people as intellectual equals also gives them the opportunity to teach you that you’re wrong. And you might be wrong. Wouldn’t you rather know?
I looked at every link in this post and the most useful one for me was this one where you list off examples of uncomfortable cross-cultural interactions from your interviewees. Especially seeing all the examples together rather than just one or two.
I’m a Westerner, but I’m LGBT and a feminist, so I’m familiar with analogous social phenomena. Instances of discrimination or prejudice often have a level of ambiguity. Was that person dismissive toward me because of my identity characteristics or are they just dismissive toward everyone… or were they in a bad mood…? You form a clearer picture when you add up multiple experiences, and especially experiences from multiple people. That’s when you start to see a pattern.
As a person in an identity group that is discriminated against, sometimes you can have a weird feeling that, statistically, you know discrimination is happening, but you don’t know for sure exactly which events are discrimination and which aren’t. Some instances of discrimination are more clear — such as someone invoking a trope or cliché about your group — but any individual instance of someone talking over you, disregarding your opinion, not taking an interest in you, not giving you time to speak, and so on, is theoretically consistent with someone being generally rude or disliking you personally. Stepping back and seeing the pattern is what makes all the difference.
This might be the most important thing that people who do not experience discrimination don’t understand. Some people think that people who experience discrimination are just overly sensitive or are overreacting or are seeing malicious intent where it doesn’t exist. Since so many individual examples of discrimination or potential discrimination can be explained away as someone being generally rude, or in a bad mood, or just not liking someone personally — or whatever — it is possible to deny that discrimination exists, or at least that it exists to the extent that people are claiming.
But discerning causality in the real world is not always so clean and simple and obvious — that’s why we need clinical trials for drugs, for example — and the world of human interaction is especially complex and subtle.
You could look at any one example on the list you gave and try to explain it away. It seemed to me that your interviewees shared this sense of ambiguity. For example: "L felt uncertain about what factors contributed to that dynamic, but they suspected the difference in culture may play a part." When you see all the examples collected together, from the experiences of several different people, it is much harder to explain it all away.
You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.
In an undergrad philosophy class, the way my prof described examples like this is as being about equality of regard or equality of concern. For example, if there are two nearby cities and one gets hit by a hurricane, the federal government is justified in sending aid just to the city that’s been damaged by the hurricane, rather than to both cities in order to be "fair". It is fair. The government is responding equally to the needs of all people. The people who got hit by the hurricane are more in need of help.
Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.
God. Sam Altman didn't get to do what he wanted, and now we're supposed to believe he's "excited"? This corporate spin is driving me crazy!
But, that aside, I'm glad OpenAI has backed down, possibly because the Attorney General of Delaware or California, or both of them, told OpenAI they would block Sam's attempt to break the OpenAI company free from the non-profit's control.
It seems more likely to me that OpenAI gave up because they had to give up, although this blog post is trying to spin it as if they changed their minds (which I doubt really happened).
Truly a brash move to try to betray the non-profit.
Once again Sam is throwing out gigantic numbers for the amounts of capital he theoretically wants to raise:
We want to be able to operate and get resources in such a way that we can make our services broadly available to all of humanity, which currently requires hundreds of billions of dollars and may eventually require trillions of dollars.
I wonder if his reasoning is that everyone in the world will use ChatGPT, so he multiplies the hardware cost of running one instance of GPT-5 by the world population (8.2 billion), and then adjusts down for utilization. (People gotta sleep and can't use ChatGPT all day! Although maybe they'll run deep research overnight.)
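For what it's worth, here's a minimal sketch of that back-of-envelope arithmetic. Only the world population is a real figure; the per-user hardware cost and the utilization rate are numbers I made up purely for illustration, not anything OpenAI has published.

```python
# Hypothetical back-of-envelope calculation, not OpenAI's actual reasoning.
# Only the world population is a real figure; the other inputs are guesses.

world_population = 8.2e9                    # people who might use ChatGPT
utilization = 0.10                          # fraction of the day a typical person is using it (guess)
hardware_cost_per_concurrent_user = 2_000   # USD of GPUs/servers per active session (pure guess)

concurrent_users = world_population * utilization
total_hardware_cost = concurrent_users * hardware_cost_per_concurrent_user

print(f"Concurrent users: {concurrent_users:,.0f}")   # 820,000,000
print(f"Hardware bill: ${total_hardware_cost:,.0f}")  # $1,640,000,000,000
```

With guesses like these you land in the low trillions, which at least shows how you can get to numbers of that magnitude once you assume everyone on Earth is a user.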
Looks like the lede was buried:
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.
At first, I thought this meant the non-profit will go from owning 51% of the company (or whatever it is) to a much smaller percentage. But I tried to confirm this and found an article that claims the OpenAI non-profit only owns 2% of the OpenAI company. I don't know whether that's true. I can't find clear information on the size of the non-profit's ownership stake.
The data in this paper comes from the 2006 paper "Disease Control Priorities in Developing Countries".
I don't understand. Does this paper not support the claim?
I've actually never heard this claim before, personally. Instead, people like Toby Ord talked about how curing someone's blindness through the Fred Hollows Foundation was 1,000x cheaper than training a seeing eye dog.
Here is the situation we're in with regard to near-term prospects for artificial general intelligence (AGI). This is why I'm extremely skeptical of predictions that we'll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can't score above 5% on the ARC-AGI-2 benchmark, they can't automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something that happened in 2025 caused something that happened in 2024, while listing the dates of the events. They struggle with things that are easy for humans, like playing hangman.
-The capabilities of LLMs have been improving slowly. There is only a modest overall difference between GPT-3.5 (the original ChatGPT model), which came out in November 2022, and newer models like GPT-4o, o4-mini, and Gemini 2.5 Pro.
-There are signs that there are diminishing returns to scaling for LLMs. Increasing the size of models and the size of the pre-training data doesn't seem to be producing the desired results anymore. LLM companies have turned to scaling test-time compute to eke out more performance gains, but how far can that go?
-There may be certain limits to scaling that are hard or impossible to overcome. For example, once you've trained a model on all the text that exists in the world, you can't keep training on exponentially[3] more text every year. Current LLMs might be fairly close to running out of exponentially[4] more text to train on, if they haven't run out already.[5]
-A survey of 475 AI experts found that 76% think it's "unlikely" or "very unlikely" that "scaling up current AI approaches" will lead to AGI. So, we should be skeptical of the idea that just scaling up LLMs will lead to AGI, even if LLM companies manage to keep scaling them up and improving their performance by doing so.
-Few people have any concrete plan for how to build AGI (beyond just scaling up LLMs). The few people who do have a concrete plan disagree fundamentally on what the plan should be. All of these plans are in the early-stage research phase. (I listed some examples in a comment here.)
-Some of the scenarios people are imagining where we get to AGI in the near future involve a strange, exotic, hypothetical process wherein a sub-AGI AI system can automate the R&D that gets us from a sub-AGI AI system to AGI. This requires two things to be true: 1) that doing the R&D needed to create AGI is not a task that would require AGI or human-level AI and 2) that, in the near term, AI systems somehow advance to the point where they're able to do meaningful R&D autonomously. Given that I can't even coax o4-mini or Gemini 2.5 Pro into playing hangman properly, and given the slow improvement of LLMs and the signs of diminishing returns to scaling I mentioned, I don't see how (2) could be true. The arguments for (1) feel very speculative and handwavy.
Given all this, I genuinely can't understand why some people think there's a high chance of AGI within 5 years. I guess the answer is they probably disagree on most or all of these individual points.
Maybe they think the conventional written question and answer benchmarks for LLMs are fair apples-to-apples comparisons of machine intelligence and human intelligence. Maybe they are really impressed with the last 2 to 2.5 years of progress in LLMs. Maybe they are confident no limits to scaling or diminishing returns to scaling will stop progress anytime soon. Maybe they are confident that scaling up LLMs is a path to AGI. Or maybe they think LLMs will soon be able to take over the jobs of researchers at OpenAI, Anthropic, and Google DeepMind.
I have a hunch (just a hunch) that it's not a coincidence many people's predictions are converging (or herding) around 2030, give or take a few years, and that 2029 has been the prophesied year for AGI since Ray Kurzweil's book The Age of Spiritual Machines in 1999. It could be a coincidence. But I have a sense that there has been a lot of pent-up energy around AGI for a long time and ChatGPT was like a match in a powder keg. I don't get the sense that people formed their opinions about AGI timelines in 2023 and 2024 from a blank slate.
I think many people have been primed for years by people like Ray Kurzweil and Eliezer Yudkowsky and by the transhumanist and rationalist subcultures to look for any evidence that AGI is coming soon and to treat that evidence as confirmation of their pre-existing beliefs. You don't have to be directly influenced by these people or by these subcultures to be influenced. If enough people are influenced by them or a few prominent people are influenced, then you end up getting influenced all the same. And when it comes to making predictions, people seem to have a bias toward herding, i.e., making their predictions more similar to the predictions they've heard, even if that ends up making their predictions less accurate.
The process by which people come up with the year they think AGI will happen seems especially susceptible to herding bias. You ask yourself when you think AGI will happen. A number pops into your head that feels right. How does this happen? Who knows.
If you try to build a model to predict when AGI will happen, you still can't get around it. Some of your key inputs to the model will require you to ask yourself a question and wait a moment for a number to pop into your head that feels right. The process by which this happens will still be mysterious. So, the model is ultimately no better than pure intuition because it is pure intuition.
I understand that, in principle, it's possible to make more rigorous predictions about the future than this. But I don't think that applies to predicting the development of a hypothetical technology where there is no expert agreement on the fundamental science underlying that technology, and not much in the way of fundamental science in that area at all. That seems beyond the realm of ordinary forecasting.
This post discusses LLMs and labour automation in the section "Real-World Adoption".
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR [resolution rate] and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
I'm using "exponentially" colloquially to mean every year the LLM's training dataset grows by 2x or 5x or 10x — something along those lines. Technically, if the training dataset increased by 1% a year, that would be exponential, but let's not get bogged down in unimportant technicalities.
In that case, I apologize. I don't know you and I don't know your background or intentions, and apparently I was wrong about both.
I think the experience you're describing — of feeling a sense of guilt or grief or sadness or obligation that's so big you don't know how to handle it — is something that probably the majority of people who have participated in the effective altruist movement have felt at one time or another. I've seen many people describe feeling this way, both online and in real life.
When I was an organizer at my university's effective altruist group, several of the friends I made through that group expressed these kinds of feelings. This stuff weighed on us heavily.
I haven't read the book Strangers Drowning, but I've heard it described, and I know it's about people who go to extreme lengths to answer the call of moral obligation. Maybe that book would interest you. I don't know.
This topic goes beyond the domain of ethical theory into a territory that is part existential, part spiritual, and part psychotherapeutic. It can be dangerous not to handle this topic with care because it can get out of control. It can contribute to clinical depression and anxiety, it can motivate people to inflict pain on others, or people can become overzealous, overconfident, and adopt an unfair sense of superiority to other people.
I find it useful to draw on examples from fantasy and sci-fi to think about this sort of thing. In the Marvel universe, the Infinity Stones can only be wielded safely by god-like beings and normal humans or mortals die when they try to use them. The Stones even pose a danger to some superhuman beings, like Thanos and the Hulk. In Star Trek: Picard, there is an ancient message left by an unknown, advanced civilization. When people try to watch/listen to the message, it drives most of them to madness. There are other examples of this sort of thing — something so powerful that coming into contact with it, even coming near it, is incredibly dangerous.
To try to reckon with the suffering of the whole world is like that. Not impossible, not something to be avoided forever, but something dangerous to be approached with caution. People who approach it recklessly can destroy themselves, destroy others, or succumb to madness.
There is a connection between reckoning with the world's suffering and one's own personal suffering. In two different ways. First, how we think and feel about one influences how we think and feel about the other. Second, I think a lot of the wisdom about how people should reckon with their own suffering probably applies well to reckoning with the world's suffering. With someone's personal trauma or grief, we know (or at least people who go to therapy know) that it's important for that person to find a safe container to express their thoughts and feelings about it. Talking about it just anywhere or to just anyone, without regard for whether that's a safe container, is unsafe and unwise.
We know that — after the initial shock of a loss or a traumatic event — it isn't healthy for a person to focus on their trauma or grief all the time, to the exclusion of other things. But trying to avoid it completely forever isn't a good strategy either.
We know that the path is never simple, clean, or easy. Connection to other people who have been through or who are going through similar things is often helpful, as is the counsel of a helping professional like a therapist or social worker (or in some cases a spiritual or religious leader), but the help doesn't come in the form of outlining a straightforward step-by-step process. What helps someone reckon with or make sense of their own emotional suffering is often personal to that individual and not generally applicable.
For example, in the beautiful — and unfairly maligned — memoir Eat, Pray, Love, Elizabeth Gilbert talks about a point in her life when she feels completely crushed, and when she's seriously, clinically unwell. She describes how when nothing else feels enjoyable or interesting, she discovers desperately needed pleasure in learning Italian.
I don't think in the lowest times of my life I would find any pleasure in learning Italian. I don't think in the best or most mediocre times of my life I would find any pleasure in learning Italian. The specific thing that helps is usually not generalizable to everyone who's suffering (which, ultimately, is everyone) and is usually not predictable in advance, including by the person who it ends up helping.
So, the question of how to face the world's darkness or the world's suffering, or how to recover from a breakdown when the world's darkness or suffering seems too much, is an answerable question, but it's not answerable in a universal, simple, or direct way. It's about your relationship with the universe, which is something for you and the universe to figure out.
As I indicated above, I like to take things from fantasy and sci-fi to make sense of the world. In The Power of Myth, Joseph Campbell laments that society lacks modern myths. He names Star Wars as the rare exception. (Return of the Jedi came out a few years before The Power of Myth was recorded.) Nowadays, there are lots of modern myths, if you count things like Star Trek, Marvel, X-Men, and Dungeons & Dragons.
I also rely a lot on spiritual and religious teachings. This episode of the RobCast with Rob Bell is relevant to this topic and a great episode. Another great episode, also relevant, is "Light Heavy Light".
In the more psychotherapeutic realm, I love everything Brené Brown has done — her books, her TV show, her audio programs, her TED Talks. I've never heard her directly talk about global poverty, but she talks about so much that is relevant to the questions you asked in one way or another. In her book Rising Strong, she talks about her emotional difficulty facing (literally and figuratively) the people in her city who are homeless. Initially, she decided what she needed to do to resolve this emotional difficulty was to do more to help. She did, but she didn't feel any differently. This led to a deeper exploration.
In her book Braving the Wilderness, she talks about how she processed collective tragedies like the Challenger disaster and the killings of the kids and staff at Sandy Hook Elementary School. This is what you're asking about — how to process grief over tragedies that are collective and shared by the world, not personal just to you.
Finally, a warning. In my opinion, a lot of people in effective altruism, including on the Effective Altruism Forum, have not found healthy ways of reckoning with the suffering of the world. There are a few who are so broken by the suffering of the world that they believe life was a mistake and we would be better off returning to non-existence. (In the Dungeons & Dragons lore, these people would be like the worshippers of Shar.) Many are swept up in another kind of madness: eschatological prophecies around artificial general intelligence. Many numb, detach, or intellectualize rather than feel. A lot of energy goes into fighting.
So, the wisdom you are seeking you will probably not find here. You will find good debates on charity effectiveness. Maybe some okay discussions of ethical theory. Not wisdom on how to deal with the human condition.
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
Feel free to add your own rules of thumb.
This advice comes from the psychologist Harriet Lerner's wonderful book Why Won't You Apologize? — given in the completely different context of close personal relationships. I think it also works here.
Okay. Thanks. I guessed maybe that’s what you were trying to say. I didn’t even look at the paper. It’s just not clear from the post why you’re citing this paper and what point you’re trying to make about it.
I agree that we can’t extrapolate from the claim "the most effective charities at fighting diseases in developing countries are 1,000x more effective than the average charity in that area" to "the most effective charities, in general, are 1,000x more effective than the average charity".
If people are making the second claim, they definitely should be corrected. I already believed you that you’ve heard this claim before, but I’m also seeing corroboration from other comments that this is a commonly repeated claim. It seems like a case of people starting with a narrow claim that was true and then getting a little sloppy and generalizing it beyond what the evidence actually supports.
Trying to say how much more effective the best charities are than the average charity seems like a dauntingly broad question, and I reckon the juice ain’t worth the squeeze. The Fred Hollows Foundation vs. seeing eye dog example gets the point across.