Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
Feel free to add your own rules of thumb.
This advice comes from the psychologist Harriet Lerner's wonderful book Why Won't You Apologize? — given in the completely different context of close personal relationships. I think it also works here.
(Decided to also publish this as a quick take, since it's so generally applicable.)
I find this completely inscrutable. I'm not saying there's anything wrong with it in terms of accuracy, it's just way too in the weeds of the statistics for me to decipher what's going on.
For example, I don't know what a "middle half" or a "central half" is. I looked them up and now I know they are statistics terms, but it would be a lot of work for me to try to figure out what that quoted paragraph is trying to say.
Is AI Impacts going to run this survey again soon? Maybe they can phrase the questions differently in a new survey to avoid this level of confusion between different levels of AI capabilities.
I think that if you only note in a footnote, rather than in the body of the text, that just anybody can predict anything on Metaculus, this will inevitably mislead readers who don't already know what Metaculus is, since the post implies it's an aggregator of expert predictions when you claim:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And then go on to list Metaculus as support for this claim, which implies that Metaculus is an aggregator of expert predictions.
Also, you include Metaculus in a long list of expert predictions without noting that it's different from the other items on the list, which reinforces the implication that it's an aggregator of expert predictions.
I think you should also explain what Samotsvety is in the body of the text and what its forecasters' credentials are.
Invoking "experts" and then using the term this loosely feels misleading.
I think it also bears mentioning a strange feature of the 2023 AI Impacts survey: there's a 69-year gap between the AI experts' predictions for "high-level machine intelligence" (50% chance by 2047) and "full automation of labour" (50% chance by 2116). This is such an important (and weird, and confusing) fact about the survey that I think it should be mentioned any time that survey is brought up.
This is especially relevant since you say:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And if you think human-level AI means full automation of labour rather than high-level machine intelligence, then a 50% chance by 2116 (91 years from now) is not within the current life expectancy of most adults alive today or even most teenagers.
There is some ambiguity in claims about whether an LLM knows how to do something. The spectrum of knowing how to do things ranges all the way from “Can it do it at least once, ever?” to “Does it do it reliably, every time, without fail?”.
My experience was that I tried to play hangman with o4-mini twice, and it failed both times in the same really goofy way: it counted a guess as wrong even when the letter I guessed was in the word it later said I was supposed to be guessing.
When I played the game with o4-mini where it said the word was “butterfly” (and also said there was no “B” in the word when I guessed “B”), I didn’t prompt it to make the word hard. I just said, after it claimed to have picked the word:
"E. Also, give me a vague hint or a general category."
o4-mini said:
"It’s an animal."
So, maybe asking for a hint or a category is the thing that causes it to fail. I don’t know.
Even if I accepted the idea that the LLM “wants me to lose” (which sounds dubious to me), it doesn’t know how to do that properly, either. In the “butterfly” example, it could, in theory, have retroactively chosen a word that filled in the blanks but didn’t conflict with any guesses it had said were wrong. But it didn’t do that.
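Just to illustrate what playing that way would even require: here's a rough sketch, in Python and with a made-up word list, of how an adversarial hangman host could retroactively pick a word that stays consistent with every answer it has already given. This is purely a hypothetical illustration of the strategy, not a description of anything o4-mini actually does.

```python
# Hypothetical sketch of a "retroactive" hangman host: it never commits to a
# word up front and only picks one that stays consistent with the answers it
# has already given (revealed positions plus letters it declared absent).
# Illustration only, not a claim about how o4-mini actually works internally.

def consistent_words(word_list, revealed, absent_letters):
    """Return words matching the revealed pattern that avoid the absent letters.

    revealed: list like ['_', '_', '_', '_', 'e', '_', '_', '_', '_']
    absent_letters: set of letters the host has already called wrong guesses
    """
    matches = []
    for word in word_list:
        if len(word) != len(revealed):
            continue
        if any(letter in word for letter in absent_letters):
            continue
        if all(slot == "_" or slot == ch for slot, ch in zip(revealed, word)):
            matches.append(word)
    return matches


# Hypothetical game state: the player guessed "b" (host said it's absent) and
# "e" (host revealed it in the fifth position). The word list is made up.
word_list = ["butterfly", "wolverine", "porcelain", "dragonfly"]
print(consistent_words(word_list, list("____e____"), {"b"}))
# -> ['wolverine', 'porcelain']
# A host playing this way could still name "wolverine" at the end and appear
# consistent, whereas naming "butterfly" after denying the "B" is not.
```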
In the attempt where the word was “schmaltziness”, o4-mini’s response about which letters were where in the word (which I pasted in a footnote to my previous comment) was borderline incoherent. I could hypothesize that this was part of a secret strategy on its part to follow my directives, but much more likely, I think, is that it just lacks the capability to execute the task reliably.
Fortunately, we don’t have to dwell on hangman too much, since there are rigorous benchmarks like ARC-AGI-2 that show more conclusively the reasoning abilities of o3 and o4-mini are poor compared to typical humans.
I haven’t looked at any surveys, but it seems universal to care about future generations. This doesn’t mean people will necessarily act in a way that protects future generations' interests — doesn’t mean they won’t pollute or deforest, for example — but the idea is not controversial and is widely accepted.
Similarly, I think it’s basically universal to believe that all humans, in principle, have some value and have certain rights that should not be violated, but then, in practice, factors like racism, xenophobia, hatred based on religious fundamentalism, anti-LGBT hatred, etc. lead many people to dehumanize certain humans. There is typically an attempt to morally justify this, though, for example through appeals to “self-defense” (or similar concepts).
If you apply strict standards to the belief that everyone alive today is worthy of moral concern, then some self-identified effective altruists would fail the test, since they hold dehumanizing views about Black people, LGBT people, women, etc.
That’s getting into a different point than the one I was trying to make in the chunk of text you quoted, which is just that Will MacAskill didn’t fall out of a coconut tree and come up with the idea that future generations matter yesterday. His university, Oxford, is over 900 years old. I believe in his longtermism book he cites the Iroquois principle of making decisions while considering how they will affect the next seven generations. Historically, many (most?) families on Earth have had close relationships between grandparents and grandchildren. Passing down tradition and transmitting culture (e.g., stories, rituals, moral principles) over long timescales is considered important in many cultures and religions.
There is a risk of a sort of plagiarism with this kind of discourse where people take ideas that have existed for centuries or millennia across many parts of the world and then package them as if they are novel, without adequately acknowledging the history of the ideas. That’s like the effective altruist’s or the ethical theorist’s version of "not invented here".
I guess my mistake was interpreting your quick take as a sincere question rather than a rhetorical question. I now realize you were asking a rhetorical question in order to make an argument and not actually asking for people to try to answer your question.
My initial interpretation — the reason why I replied — was that you were feeling a lot of guilt about your level of moral responsibility for or complicity in global poverty and the harm it does to people. I wanted to help alleviate your guilt, which, when taken to such an extreme, I think can be paralyzing and counterproductive. I’ve seen no evidence it actually helps anyone and lots of evidence of it doing harm.
I already tried to make several points in my previous comment. I’ll try to make one more.
You say there is “hidden violence” in the world economic system. Well, knowledge is a component of moral culpability. A famous line from the Watergate scandal was this question a U.S. Senator asked about Richard Nixon: "What did the president know and when did he know it?" The extent to which you know about something affects how morally culpable you are.
There is another layer of complexity beyond this. For example, there is the concept in law of willful ignorance. If you get involved in something that you know or have reasonable grounds to believe is criminal activity and choose not to know certain details in order to try to protect yourself from legal liability, this will probably not hold up as a legal defense and you will probably still be held criminally liable.
But I think it would be a stretch to try to apply the concept of “willful ignorance” to global poverty or the world economic system, since people’s ignorance of the “hidden violence” you describe — if it indeed exists — is genuine and not a ruse to try to avoid culpability.
The moral culpability of normal Germans in the 1930s and 1940s is a complex topic that requires knowing a lot about this time and place in history — which I do not. I think everyone would agree that, for example, a child forced to join the Hitler Youth has a lot less moral culpability than someone with a leadership position in the Nazi Party. So, there is some ambiguity in the term "Nazi" that you have to reckon with to discuss this topic.
But I don’t think it is ethical to drag this complex discussion about this period in history into a debate about effective altruism.
Nazi analogies should be used with a lot of sensitivity and care. By invoking Nazi crimes against humanity in order to try to make some rhetorical point about an unrelated topic, you risk diminishing the importance of these grim events and disrespecting the victims. There are hundreds of thousands of Holocaust survivors alive today. There are many Jewish families who lost relatives in the Holocaust. Many families are affected by the intergenerational trauma of the Holocaust. It seems completely disrespectful to them to try to turn their suffering and loss into effective altruist rhetoric.
So, I have indulged the Nazi analogies enough. I will not entertain this any more.
If you want to make an argument that there are high moral demands on us to respond humanely to global poverty, many people — such as Peter Singer, as I mentioned in my previous comment — have argued this using vivid analogies that have captured people’s imaginations and have helped persuade many of them (including me) to try to do more to help the globally poor.
A lot of people within the effective altruist movement seem to basically agree with you. For example, Will MacAskill, one of the founders of the effective altruist movement, has recently said he’s only going to focus on artificial general intelligence (AGI) from now on. The effective altruist organization 80,000 Hours has said more or less the same — their main focus is going to be AGI. For many others in the EA movement, AGI is their top priority and the only thing they focus on.
So, basically, you are making an argument for which there is already a lot of agreement in EA circles.
As you pointed out, uncertainty about the timeline of AGI and doubts about very near-term AGI are among the main reasons to focus on global poverty, animal welfare, or other cause areas not related to AGI.
There is no consensus on when AGI will happen.
A 2023 survey of AI experts found they believed there is a 50% chance of AI and AI-powered robots being able to automate all human jobs by 2116.
In 2022, a group of 31 superforecasters predicted a 50% chance of AGI by 2081.
My personal belief is that we have no idea how to create AGI and we have no idea when we’ll figure out how to create it. In addition to the expert and superforecaster predictions I just mentioned, I recently wrote a rapid fire list of reasons I think predictions of AGI within 5 years are extremely dubious.
I agree that it's a significant milestone, or at least it might be. I just read this comment a few hours ago (and the Twitter thread it links to) and that dampens my enthusiasm. 43 million words to solve one ARC-AGI-1 puzzle is a lot.
Also, I want to understand more about how ARC-AGI-2 is different from ARC-AGI-1. Chollet has said that about half of the tasks in ARC-AGI-1 turned out to be susceptible to "brute force"-type approaches. I don't know what that means.
I think it's easy to get carried away with the implications of a result like this when you're surrounded by so many voices saying that AGI is coming within 5 years or within 10 years.
My response to François Chollet's comments on o3's high score on ARC-AGI-1 was more like, "Oh, that's really interesting!" rather than making some big change to my views on AGI. I have to say, I was more excited about it before I knew it took 43 million words of text and over 1,000 attempts per task.
I still think no one knows how to build AGI and that (not unrelatedly) we don't know when AGI will be built.
Chollet recently started a new company focused on combining deep learning and program synthesis. That's interesting. He seems to think the major AI labs like OpenAI and Google DeepMind are also working on program synthesis, but I don't know how much publicly available evidence there is for this.
I can add Chollet's company to the list of organizations I know of that have publicly said they're doing R&D related to AGI beyond just scaling LLMs. The others I know of:
I might be forgetting one or two. I know in the past Demis Hassabis has made some general comments about DeepMind's research related to AGI, but I don't know of any specifics.
My gut sense is that all of these approaches will fail — program synthesis combined with deep learning, the Alberta Plan, Numenta's Thousand Brains Principles, and Yann LeCun's roadmap. But this is just a random gut intuition and not a serious, considered opinion.
I think the idea that we're barreling toward the imminent, inevitable invention of AGI is wrong. The idea is that AGI is so easy to invent and progress is happening so fast and so spontaneously that we can hardly stop ourselves from inventing AGI.
It would be seen as odd to take this view in any other area of technology, probably even among effective altruists. We would be lucky if we were barreling toward imminent, inevitable nuclear fusion or a universal coronavirus vaccine or a cure for cancer or any number of technologies that don't exist yet that we'd love to have.
Why does no one claim these technologies are being developed so spontaneously, so automatically, that we would have to take serious action to prevent them from being invented soon? Why is the attitude instead that progress is hard, success is uncertain, and the road is long?
Given that that's how technology usually works, and given that I don't see any reason for AGI to be easier or take less time — in fact, it seems like it should be harder and take longer, since the science of intelligence and cognition is among the least understood areas of science — I'm inclined to guess that most approaches will fail.
Even if the right general approach is found, it could take a very long time to figure out how to actually make concrete progress using that approach. (By analogy, many of the general ideas behind deep learning existed for decades before deep learning started to take off around 2012.)
I'm interested in Chollet's interpretation of the o3 results on ARC-AGI-1, and if there is a genuine, fundamental advancement involved (which, today, after finding out those details about o3's attempts, I believe less than I did yesterday), then that's exciting. But only moderately exciting, because the advancement is only incremental.
The story that AGI is imminent and that, if we skirt disaster, we'll land in utopia is exciting and engaging. I think we live in a more boring version of reality (but still, all things considered, a pretty interesting one!) where we're still at the drawing board stage for AGI, people are pitching different ideas (e.g., program synthesis, the Alberta Plan, the Thousand Brains Principles, energy-based self-supervised learning), the way forward is unclear, and we're mostly in the dark about the fundamental nature of intelligence and cognition. Who knows how long it will take us to figure it out.
In that case, I apologize. I don't know you and I don't know your background or intentions, and apparently I was wrong about both.
I think the experience you're describing — of feeling a sense of guilt or grief or sadness or obligation that's so big you don't know how to handle it — is something that probably the majority of people who have participated in the effective altruist movement have felt at one time or another. I've seen many people describe feeling this way, both online and in real life.
When I was an organizer at my university's effective altruist group, several of the friends I made through that group expressed these kinds of feelings. This stuff weighed on us heavily.
I haven't read the book Strangers Drowning, but I've heard it described, and I know it's about people who go to extreme lengths to answer the call of moral obligation. Maybe that book would interest you. I don't know.
This topic goes beyond the domain of ethical theory into a territory that is part existential, part spiritual, and part psychotherapeutic. It is dangerous not to handle this topic with care because it can get out of control. It can contribute to clinical depression and anxiety, it can motivate people to inflict pain on others, or people can become overzealous, overconfident, and adopt an unfair sense of superiority over other people.
I find it useful to draw on examples from fantasy and sci-fi to think about this sort of thing. In the Marvel universe, the Infinity Stones can only be wielded safely by god-like beings and normal humans or mortals die when they try to use them. (The Stones even pose a danger to some superhuman beings.) In Star Trek: Picard, there is an ancient message left by an unknown, advanced civilization. When people try to watch/listen to the message, it drives most of them to madness. There are other examples of this sort of thing — something so powerful that coming into contact with it, even coming near it, is incredibly dangerous.
To try to reckon with the suffering of the whole world is like that. Not impossible, not something to be avoided forever, but something dangerous to be approached with caution. People who approach it recklessly can destroy themselves, destroy others, or succumb to madness.
There is a connection between reckoning with the world's suffering and one's own personal suffering. In two different ways. First, how we think and feel about one influences how we think and feel about the other. Second, I think a lot of the wisdom about how people should reckon with their own suffering probably applies well to reckoning with the world's suffering. With someone's personal trauma or grief, we know (or at least people who go to therapy know) that it's important for that person to find a safe container to express their thoughts and feelings about it. Talking about it just anywhere or to just anyone, without regard for whether that's a safe container, is unsafe and unwise.
We know that — after the initial shock of a loss or a traumatic event — it isn't healthy for a person to focus on their trauma or grief all the time, to the exclusion of other things. But trying to completely avoid it forever isn't a good strategy either.
We know that the path is never simple, clean, or easy. Connection to other people who have been through or who are going through similar things is often helpful, as is the counsel of a helping professional like a therapist or social worker (or in some cases a spiritual or religious leader), but the help doesn't come in the form of outlining a straightforward step-by-step process. What helps someone reckon with or make sense of their own emotional suffering is often personal to that individual and not generally applicable.
For example, in the beautiful — and unfairly maligned — memoir Eat, Pray, Love, Elizabeth Gilbert talks about a point in her life when she feels completely crushed, and when she's seriously, clinically unwell. She describes how when nothing else feels enjoyable or interesting, she discovers desperately needed pleasure in learning Italian.
I don't think in the lowest times of my life I would find any pleasure in learning Italian. I don't think in the best or most mediocre times of my life I would find any pleasure in learning Italian, either. The specific thing that helps is usually not generalizable to everyone who's suffering (which, ultimately, is everyone) and is usually not predictable in advance, including by the person who it ends up helping.
So, the question of how to face the world's darkness or the world's suffering, or how to recover from a breakdown when the world's darkness or suffering seems too much, is an answerable question, but it's not answerable in a universal, simple, or direct way. It's about your relationship with the universe, which is something for you and the universe to figure out.
As I alluded to above, I like to take things from fantasy and sci-fi to make sense of the world. In The Power of Myth, Joseph Campbell laments that society lacks modern myths. He names Star Wars as the rare exception. (Return of the Jedi came out a few years before The Power of Myth was recorded.) Nowadays, there are lots of modern myths, if you count things like Star Trek, Marvel, X-Men, and Dungeons & Dragons.
I also rely a lot on spiritual and religious teachings. This episode of the RobCast with Rob Bell is relevant to this topic and a great episode. Another great episode, also relevant, is "Light Heavy Light".
In the more psychotherapeutic realm, I love everything Brené Brown has done — her books, her TV show, her audio programs, her TED Talks. I've never heard her directly talk about global poverty, but she talks about so much that is relevant to the questions you asked in one way or another. In her book Rising Strong, she talks about her emotional difficulty facing (literally and figuratively) the people in her city who are homeless. Initially, she decided what she needed to do to resolve this emotional difficulty was to do more to help. She did, but she didn't feel any differently. This led to a deeper exploration.
In her book Braving the Wilderness, she talks about how she processed collective tragedies like the Challenger disaster and the killings of the kids and staff at Sandy Hook Elementary School. This is what you're asking about — how to process grief over tragedies that are collective and shared by the world, not personal just to you.
Finally, a warning. In my opinion, a lot of people in effective altruism, including on the Effective Altruism Forum, have not found healthy ways of reckoning with the suffering of the world. There are a few who are so broken by the suffering of the world that they believe life was a mistake and we would be better off returning to non-existence. (In the Dungeons & Dragons lore, these people would be like the worshippers of Shar.) Many are swept up in another kind of madness: eschatological prophecies around artificial general intelligence. Many numb, detach, or intellectualize rather than feel. A lot of energy goes into fighting.
So, the wisdom you are seeking you will probably not find here. You will find good debates on charity effectiveness. Maybe some okay discussions of ethical theory. Not wisdom on how to deal with the human condition.