
Introduction

As a community builder, I sometimes get into conversations with EA-skeptics that aren't going to sway the person I'm talking to. The Tree of Questions is a tool I use to have more effective conversations by identifying the crux faster. Much of this is inspired by Scott Alexander's "tower of assumptions" and Benjamin Todd's ideas of The Core of EA.

The Tree

  1. A trunk with core ideas that almost all EAs accept - and without which you have to jump through some very specific hoops in order to agree with standard EA stances.
  2. Branches for different cause areas or ideas within EA, such as longtermism. If you reject the trunk, there’s no point debating branches.

All too often, I find people cutting off a branch of this tree and then believing they've cut down the entire thing. "I'm not an EA because I don't believe in x-risk" is one example. Deciding which assumptions you have to accept in order to be on those branches is a job for people more knowledgeable about the philosophy behind them. What I present here are the questions I ask to find out whether someone can even get up the trunk - if they can't, it's meaningless to help them reach for the branches.

This post is focused on the kinds of conversations where there is some cost to debating. It could be the social cost of yapping too much at a family dinner, it could be the risk of seeming pushy to a friend who's skeptical, or just that you're tabling and you could be talking to someone else. That's why I've listed a few points that I think aren't worth the time/effort to argue against, if someone raises them as objections to the trunk of the EA tree. I'll also list some bad counterarguments that you should practice countering. These are all real examples from my experience in EA Lund (Sweden), so I'm interested to hear from you in the comments if your mileage varies. 

Altruism

The first part of the trunk is to ask "do you care about helping others?" These are actually the first words I say to people when tabling, and I think it's important to frame it in this very normal, easy-to-grasp way. I've heard people talk about EA as maximizing or optimizing, but that framing is much less attractive and often carries negative connotations.

Concede:

  • Narrow moral circles. Some only want to help their family, city, or religious group. One person even answered "this is college man, I'm just here to party!"
  • Self-reliance/non-interventionism. This could be based on the empirical claim that intervening makes things worse, or on the moral claim that it's valuable for people to help themselves. You can get away with one follow-up question here if you find their argument particularly unsound, but I haven't found people convinced even when I can show that it's a bad policy.

Debate briefly:

  • "Altruism is self-serving/virtue signalling." This is a non-sequitur; I asked if you wanted to help others, not why people want to help others.
  • "Giving to one means I have to give to everyone." This is a classic slippery slope, and I trust you to convince them that it's ok to only do what is feasible for you. 

 

Effectiveness

The second question is "is helping a lot better than helping a little?" Saying no to this means effectiveness isn't interesting, and (most likely) neither is EA. I rarely ask this directly because it raises the question of what "a lot" means, but I do give examples of differences in cost-effectiveness and gauge their reaction.

Concede:

  • "I’m content as long as I do some good every now and then." I think this one is especially important to be respectful about, so you don't come across as pushy. I want to flag that I'm afraid many are put off by EA being demanding already, so that personal fear makes me extra unwilling to argue against this objection.
  • Negative vs positive obligations. Some consider it more important for their own CO2 emissions to be low than for the global ones to be. The focus is on them not doing harm, rather than no harm being done - contrary to what most EAs believe. 

Debate briefly:

  • Uncertainty about others' preferences. While true in one sense, you know for sure that no one wants their child to die from malaria, to be tortured, or to see our species go extinct. This is the level of problems EA operates on, so they might still be interested.
  • Worries about burnout from trying too hard. This is the flip side of EA as a community seeming demanding to some. You can make big wins here by saying clearly that we'd happily help them avoid trying too hard while still doing something. You can refer to research showing that do-gooders are happier, if they're amenable to that.

 

Comparability

"Can we quantitatively compare how much good different actions do?" This is often snuck in with the Effectiveness question, because a comparison has already been made when we're comparing a lot to a little. However, I find it important to be attentive to when someone's turned off by the idea of quantifying altruism.

Concede:

  • "I don’t want to use imperfect metrics." QALYs are imperfect, and so are many similar metrics we use to measure our impact. We miss 2nd order effects which might dominate (e.g. The Meat-Eater Problem), and there can be errors in how they're determined empirically. This is an important conversation to have within EA, but I don't think having that be your first EA conversation is conducive to you joining. I just say something like "Absolutely—they’re imperfect, but the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."
  • Anti-prioritarianism. You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.

Institutional Trust

To embrace EA, you need to believe that at least some of its flagship organizations and leaders—80,000 Hours, Will MacAskill, Giving What We Can, etc.—are both well-intentioned and capable. Importantly, many skeptics leap straight to this “top of the trunk,” accusing EA groups of corruption or undue influence (e.g., “Open Philanthropy takes dirty billionaire money”). 

While those concerns deserve a thoughtful debate, they should come after someone already agrees that (i) helping strangers matters, (ii) doing more good is better than doing a little, and (iii) we can meaningfully compare different interventions. In other words, don’t let institutional distrust be the very first deal-breaker—focus on the roots before you tackle the branches.

Further Discussions

There are more points central to the thought patterns in EA - expected value, longtermism, sentience considerations, population ethics - but they're not as integral to EA as the ones above. If someone rejects one of them and claims that it's why they reject EA, I'd say they've only sawed off a cluster of branches. 

Comments (4)



You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I  should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.

In an undergrad philosophy class, the way my prof described examples like this is as being about equality of regard or equality of concern. For example, if there are two nearby cities and one gets hit by a hurricane, the federal government is justified in sending aid just to the city that’s been damaged by the hurricane, rather than to both cities in order to be "fair". It is fair. The government is responding equally to the needs of all people. The people who got hit by the hurricane are more in need of help.

I like the branching tree metaphor, and I like the attempt to framework these intro conversations — good post.

Executive summary: This reflective, experience-based post introduces the “EA Tree of Questions” as a conversational tool to help community builders quickly identify whether someone shares the core beliefs necessary for meaningful engagement with Effective Altruism, enabling more efficient and respectful dialogue with skeptics.

Key points:

  1. The “EA Tree” metaphor distinguishes between foundational beliefs (the trunk) and more complex cause-specific ideas (the branches); debating advanced topics is often fruitless if someone doesn’t accept the core trunk principles.
  2. Three trunk questions—Altruism, Effectiveness, and Comparability—form the basis for determining if a person is philosophically aligned enough to engage meaningfully with EA ideas.
  3. Practical advice is offered for when to concede, engage, or disengage based on real conversations, aiming to avoid unproductive debates and reduce social costs in outreach settings.
  4. Institutional trust is presented as a later-stage concern that shouldn't be a conversation starter; it matters only after agreement on more fundamental principles.
  5. The post encourages tailoring conversations to a person’s values and level of receptiveness, especially when EA can appear demanding or overly quantitative.
  6. The author invites community input and treats the model as a work-in-progress, acknowledging variability in reactions and emphasizing the importance of respectful engagement.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

This is a good post if you view it as a list of frequently asked questions about effective altruism when interacting with people who are new to the concept and a list of potential good answers to those questions — including that sometimes the answer is to just let it go. (If someone is at college just to party, just say "rock on".) 

But there’s a fine line between effective persuasion and manipulation. I’m uncomfortable with this: 

This is an important conversation to have within EA, but I don't think having that be your first EA conversation is conducive to you joining. I just say something like "Absolutely—they’re imperfect, but the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."

If I were a passer-by who stopped at a table to talk to someone and they said this to me, I would internally think, "Oh, so you’re trying to work me."

Back when I tabled for EA stuff, my approach to questions like this was to be completely honest. If my honest thought was, "Yeah, I don’t know, maybe we’re doing it all wrong," then I would say that. 

I don’t like viewing people as a tool to achieve my ends — as if I know better than them and my job in life is to tell them what to do.

And I think a lot of people are savvy enough to tell when you’re working them and recoil at being treated like your tool. 

If you want people to be vulnerable and put themselves on the line, you’ve got to be vulnerable and put yourself on the line as well. You’ve got to tell the truth. You’ve got to be willing to say, "I don’t know."

Do you want to be treated like a tool? Was being treated like a tool what put you in this seat, talking to passers-by at this table? Why would you think anyone else would be any different? Why not appeal to what’s in them that’s the same as what’s in you that drew you to effective altruism? 

When I was an organizer at my university’s EA group, I was once on a Skype call with someone whose job it was to provide resources and advice to student EA groups. I think he was at the Centre for Effective Altruism (CEA) — this would have been in 2015 or 2016 — but I don’t remember for sure.

This was a truly chilling experience because this person advocated what I saw then and still see now as unethical manipulation tactics. He advised us — the group organizers — to encourage other students to tie their sense of self-esteem or self-worth to how committed they were to effective altruism or how much they contributed to the cause.

This person from CEA or whatever the organization was also said something like, "if we’re successful, effective altruism will solve all the world’s problems in priority sequence". That and the manipulation advice made me think, "Oh, this guy’s crazy."

I recently read about a psychology study about persuading people to eat animal organs during World War II. During WWII, there was a shortage of meat, but animals’ organs were being thrown away, despite being edible. A psychologist (Kurt Lewin) wanted to try two different ways of convincing women to cook with animal organs and feed them to their families.

The first way was to devise a pitch to the women designed to be persuasive, designed to convince them. This is from the position of, "I figured out what’s right, now let me figure out what to say to you to make you do what’s right."

The second way was to pose the situation to the women as the study’s designers themselves thought of it. This is from the position of, "I’m treating you as an equal collaborator on solving this problem, I’m respecting your intellect, and I’m respecting your autonomy."

Five times as many of the women treated in the second way cooked with organs: 52% of that group vs. 10% of the first.

Among women who had never cooked with organs before, none of them cooked with organs after being treated the first way. 29% of the women who had never cooked with organs before did so for the first time after being treated the second way.

You can read more about this study here. (There might be different ways to interpret which factors in this experiment were important, but Kurt Lewin himself advocated the view that if you want things to change, get people involved.) 

This isn’t just about what’s most effective at persuasion, as if persuasion is the end goal and the only thing that matters. Treating people as intellectual equals also gives them the opportunity to teach you that you’re wrong. And you might be wrong. Wouldn’t you rather know?
