Yarrow🔸

814 karma · Canada

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Comments (194) · Topic contributions (1)

A few people who support both effective altruism and radical leftist politics have written about how these two schools of thought might be integrated. Bob Jacobs, the former organizer of EA Ghent in Belgium, is one. You might be interested in his blog Collective Altruism: https://bobjacobs.substack.com/

Another writer you may be interested in is the academic philosopher David Thorstad. I don't know what his political views are. But his blog Reflective Altruism, which is about effective altruism, has covered a few topics relevant to this post, such as billionaire philanthropy, racism, sexism, and sexual harassment in the effective altruist movement: https://reflectivealtruism.com/post-series/

There is also a pseudonymous EA Forum user called titotal whose politics seem leftist or left-leaning. They have written some criticisms of certain aspects of the EA movement both here on the forum and on their blog: https://titotal.substack.com/

I don't know if any of the people I just mentioned wholeheartedly support radical feminism, though. Even among feminists, progressives, and leftists, radical feminism's reputation has been badly damaged by a series of serious mistakes, including:

  • Support for the oppression of and systemic violence and discrimination against trans people[1]
  • Support for banning pornography[2]
  • Opposition to legalizing or decriminalizing sex work[3]
  • Arguing that most sex is unethical[4]

I'm vaguely aware that some radical feminists today take different stances on these topics, and that some radical feminists have historically disagreed with these bad opinions, but the movement is tarnished by these mistakes and will find it difficult to recover.

In my experience, people who have radical leftist economic views are generally hostile to the idea of people in high-income countries donating to charities that provide medicine or anti-malarial bednets or cash to poor people in low-income countries. It's hard for me to imagine much cooperation or overlap between effective altruism and the radical left. 

Effective altruism was founded as a movement focused on the effectiveness of charities that work on global poverty and global health. A lot of radical leftists — I'd guess the majority — fundamentally reject this idea. So, how many radical leftists are realistically going to end up supporting effective altruism? (I'm talking about radical leftists here because most radical feminists and specifically some of the ones you mentioned also have radical leftist economic and political views.)

Finally, although there are many important ideas in radical feminist thought that I think anyone — including effective altruists — could draw from, there is also a large amount of low-quality scholarship and bad ideas to sift through. I already mentioned some of the bad ideas. One example of low-quality scholarship, in my opinion, is adrienne maree brown's book Pleasure Activism. I tried to read this book because it was recommended to me by a friend. 

To give just one example of what I found to be low-quality scholarship, adrienne maree brown believes in vampires, believes she has been bitten by a vampire, and has asked for vampires to turn her into a vampire. 

To give another example, the book is called Pleasure Activism, but it does not give a clear definition or explanation of what the term "pleasure activism" is supposed to mean. If you make a concept the title of your book, and the book is nominally about that concept, then after reading the book I should be able to understand that concept. Instead, the attempt to define the concept is too brief and too vague. This is the full extent of the definition from the book:

Pleasure activism is the work we do to reclaim our whole, happy, and satisfiable selves from the impacts, delusions, and limitations of oppression and/or supremacy.

Pleasure activism asserts that we all need and deserve pleasure and that our social structures must reflect this. In this moment, we must prioritize the pleasure of those most impacted by oppression.

Pleasure activists seek to understand and learn from the politics and power dynamics inside of everything that makes us feel good. This includes sex and the erotic, drugs, fashion, humor, passion work, connection, reading, cooking and/or eating, music and other arts, and so much more.

Pleasure activists believe that by tapping into the potential goodness in each of us we can generate justice and liberation, growing a healing abundance where we have been socialized to believe only scarcity exists.

Pleasure activism acts from an analysis that pleasure is a natural, safe, and liberated part of life — and that we can offer each other tools and education to make sure sex, desire, drugs, connection, and other pleasures aren’t life-threatening or harming but life-enriching.

Pleasure activism includes work and life lived in the realms of satisfaction, joy, and erotic aliveness that bring about social and political change.

Ultimately, pleasure activism is us learning to make justice and liberation the most pleasurable experiences we can have on this planet.

What is pleasure activism? After reading this, I don't know. I'm not sure if adrienne maree brown knows, either.

To be clear, I'm a feminist, I'm LGBT, I believe in social justice, and I've voted for a social democratic political party multiple times. I took courses on feminist theory and queer studies when I was in university, and I think a lot of the scholarship in those fields is amazingly good.

But a lot of the radical left, to borrow a bon mot from Noam Chomsky, want to "live in some abstract seminar somewhere". They have no ideas about how to actually make the world better in specific, actionable ways,[5] or they have hazy ideas they can't clearly define or explain (like pleasure activism), or they have completely disastrous ideas that would lead to nightmares in real life (such as economic degrowth or authoritarian communism). 

This is fine if you want to live in some abstract seminar somewhere, enjoying an aesthetic of radical change while changing nothing — and if we can rely on no government ever trying to implement disastrous ideas like degrowth or authoritarian communism, which would kill millions of people — but what if you want to help rural families in sub-Saharan Africa avoid malaria, afford a new roof for their home, or get vaccines and vitamins for their children? Then you've got to put away the inscrutable theory and live in the real world (which does not have vampires in it).

  1. ^

    See the Wikipedia article on gender-critical feminism or the extraordinarily good video essay "Gender Critical" by the YouTuber and former academic philosopher ContraPoints.

  2. ^

    One ban was actually passed, but then overturned by a court.

  3. ^

    I haven't read this article, but if you're unfamiliar with this topic, at a glance, it seems like a good introduction to the debate: https://scholarlycommons.law.cwsl.edu/fs/242/

  4. ^

    ContraPoints' movie-length video essay "Twilight" covers this topic beautifully. Yes, it's very long, but it's so good! 

  5. ^

    Here's a refreshing instance of some radical leftists candidly admitting this: https://2021.lagrandetransition.net/en/conference-themes/


EA should avoid using AI art for non-research purposes?

My strongest reason for disliking AI-generated images is that so often they look tacky, as you aptly said, or even disgustingly bad. 

One of the worst parts of AI-generated art is that sometimes it looks good at a glance and then as you look at it longer, you notice some horribly wrong detail. Human art (if it's good quality) lets you enjoy the small details. It can be a pleasure to discover them. AI-generated art ruins this by punishing you for paying close attention. 

But that's a matter of taste.

What I'm voting "disagree" on is the idea that the EA Forum should have a rule or a strong social norm against using AI-generated images. I don't think people should use ugly images, whether they're AI-generated or free stock photos. But leave it up to people to decide on a case-by-case basis which images are ugly, rather than making a rule that categorically bans AI-generated images.

I am trying to be open-minded to the ethical arguments against AI-generated art. I find the discourse frustratingly polarized. 

For example, a lot of people are angry about the supposed environmental impact of AI-generated art, but what is the evidence of this? Anytime I've tried to look up hard numbers on how much energy AI uses, a) it's been hard to find clear, reliable information and b) the estimates I've found tend to be pretty small. 
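To illustrate why the estimates I've found seem small, here's a toy calculation in Python. The per-image energy figure is my own placeholder assumption, roughly in the ballpark of the estimates I've come across, not a measured value:

```python
# Toy back-of-envelope: energy for one AI-generated image vs. boiling a kettle.
# The per-image figure is an assumed placeholder, not a measured value.

wh_per_image = 3.0   # assumed watt-hours per generated image
kettle_wh = 93.0     # heating 1 L of water from 20°C to 100°C is ~93 Wh

images_per_boil = kettle_wh / wh_per_image
print(f"One kettle boil ≈ {images_per_boil:.0f} image generations")  # ≈ 31
```

If the assumed per-image number is anywhere near right, generating a few dozen images uses about as much energy as boiling one kettle of water, which is why the environmental argument doesn't feel substantiated to me without better data.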

Similarly, is there evidence that AI-generated images are displacing the labour of human artists? Again, this is something I've tried to look into, but the answer isn't easy to find. There are anecdotes here and there, but it's hard to tell if there is a broader trend that is significantly affecting a large number of artists.

It's difficult to think about whether artists should have to give permission for their images to be used for AI training, or should be compensated when they are. There is no precedent in copyright law, because this technology is unprecedented. For the same reason, there is no precedent in societal norms. We have to decide on a new way of thinking about a new situation, without traditions to rely on.

So, the three main ethical arguments against AI-generated art are:

- It harms the environment
- It takes income away from human artists
- AI companies should be required to get permission from artists before training AI models on their work and/or financially compensate them if they do

All three of these arguments feel really unsubstantiated to me. My impression right now is:

- Probably not
- Maybe? What's the evidence?
- Maybe? I don't know. What's the reasoning?

The main aesthetic argument against AI-generated art is of course:

- It's ugly

And I mostly agree. But those ChatGPT images in the Studio Ghibli style are absolutely beautiful. There is a 0% chance I will ever pay an artist to draw a Studio Ghibli-style picture of my cat. But I can use a computer to turn my cat into a funny, cute little drawing. And that's wonderful.

I'm a politically progressive person. I'm LGBT, I'm a feminist, I believe in social justice, I've voted for a social democratic political party multiple times, and I've been in community and in relationship with leftists a lot. I am so sick of online leftist political discourse. 

I am not interested in thinking and talking about celebrities all the time. (So much online leftist discourse is about celebrities.)

I don't want to spend that much time and energy constantly re-evaluating which companies I boycott and whether there's a marginally more ethical alternative.

I don't want every discussion about every topic to be polarized, shut down, moralized, and made into a red line issue where disagreement isn't tolerated. I'm sick of hyperbolic analogies between issues like ChatGPT and serious crimes. (I could give an example I heard but it's so offensive I don't want to repeat it.)

I am fed up with leftists supporting authoritarianism, terrorism, and political assassinations. While moralizing about AI art.

So, please forgive me if I struggle to listen to all of online leftists' complaints with the charity they deserve. I am burnt out on this stuff at this point. 

I don't know how to fix the offline left, but I'm personally so relieved that I don't use microblogging anymore (i.e., Twitter, Bluesky, Mastodon, or Threads) and that I've mostly extricated myself from online leftist discourse otherwise. It's too crazymaking for me to stomach.

There are two philosophies on what the key to life is.

The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.

The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again, you have to face the wretched masses of humanity and say "me too, me too, me too" (and realize you are one of them).

I am a total believer in the second philosophy and a hater of the first philosophy. (Not because it's easy, but because it's right!) To the extent I care about effective altruism, it's because of the second philosophy: expand the moral circle, value all lives equally, extend beyond national borders, consider non-human creatures.

When I see people in effective altruism evince the first philosophy, to me, this is a profane betrayal of the whole point of the movement.

One of the reasons (among several other important reasons) that rationalists piss me off so much is that their whole worldview and subculture are based on the first philosophy. Even the word "rationalist" is about being superior to other people. If the rationalist community has one founder or leader, it would be Eliezer Yudkowsky. The way Eliezer Yudkowsky talks to and about other people, even people who are actively trying to help him or to understand him, is so hateful and so mean. He exhales contempt. And it isn't just Eliezer — you can go on LessWrong and read horrifying accounts of how some prominent people in the community have treated their employees or their romantic partners, with the stated justification that they are separate from and superior to others. Obviously there's a huge problem with racism, sexism, and anti-LGBT prejudice too, which are other ways of feeling separate and above.

There is no happiness to be found at the top of a hierarchy. Look at the people who think in the most hierarchical terms, who have climbed to the tops of the hierarchies they value. Are they happy? No. They're miserable. This is a game you can't win. It's a con. It's a lie.

In the beautiful words of the Franciscan friar Richard Rohr, "The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!"

(Richard Rohr's episode of You Made It Weird with Pete Holmes is wonderful if you want to hear more.) 

Okay. Thanks. I guessed maybe that’s what you were trying to say. I didn’t even look at the paper. It’s just not clear from the post why you’re citing this paper and what point you’re trying to make about it. 

I agree that we can’t extrapolate from the claim "the most effective charities at fighting diseases in developing countries are 1,000x more effective than the average charity in that area" to "the most effective charities, in general, are 1,000x more effective than the average charity". 

If people are making the second claim, they definitely should be corrected. I already believed that you've heard this claim before, and I'm also seeing corroboration from other comments that it's commonly repeated. It seems like a case of people starting with a narrow claim that was true and then getting a little sloppy and generalizing it beyond what the evidence actually supports.

Trying to say how much more effective the best charities are than the average charity seems like a dauntingly broad question, and I reckon the juice ain't worth the squeeze. The Fred Hollows Foundation vs. seeing eye dog example gets the point across.

Thank you for explaining. Kindness like this matters to me a lot, and it also matters a lot to me whether someone is aware that another person is in need of their kindness. 

This is a good post if you view it as a list of frequently asked questions about effective altruism when interacting with people who are new to the concept and a list of potential good answers to those questions — including that sometimes the answer is to just let it go. (If someone is at college just to party, just say "rock on".) 

But there’s a fine line between effective persuasion and manipulation. I’m uncomfortable with this: 

This is an important conversation to have within EA, but I don't think having that be your first EA conversation is conducive to you joining. I just say something like "Absolutely—they’re imperfect, but the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."

If I were a passer-by who stopped at a table to talk to someone and they said this to me, I would internally think, "Oh, so you’re trying to work me."

Back when I tabled for EA stuff, my approach to questions like this was to be completely honest. If my honest thought was, "Yeah, I don’t know, maybe we’re doing it all wrong," then I would say that. 

I don’t like viewing people as a tool to achieve my ends — as if I know better than them and my job in life is to tell them what to do.

And I think a lot of people are savvy enough to tell when you’re working them and recoil at being treated like your tool. 

If you want people to be vulnerable and put themselves on the line, you've got to be vulnerable and put yourself on the line as well. You've got to tell the truth. You've got to be willing to say, "I don't know."

Do you want to be treated like a tool? Was being treated like a tool what put you in this seat, talking to passers-by at this table? Why would you think anyone else would be any different? Why not appeal to what’s in them that’s the same as what’s in you that drew you to effective altruism? 

When I was an organizer at my university's EA group, I was once on a Skype call with someone whose job it was to provide resources and advice to student EA groups. I think he was at the Centre for Effective Altruism (CEA) — this would have been in 2015 or 2016 — but I don't remember for sure.

This was a truly chilling experience because this person advocated what I saw then and still see now as unethical manipulation tactics. He advised us — the group organizers — to encourage other students to tie their sense of self-esteem or self-worth to how committed they were to effective altruism or how much they contributed to the cause.

This person from CEA or whatever the organization was also said something like, "if we’re successful, effective altruism will solve all the world’s problems in priority sequence". That and the manipulation advice made me think, "Oh, this guy’s crazy."

I recently read about a psychology study on persuading people to eat animal organs during World War II. There was a shortage of meat at the time, but animals' organs were being thrown away, despite being edible. A psychologist (Kurt Lewin) wanted to compare two different ways of convincing women to cook with animal organs and feed them to their families.

The first way was to devise a pitch to the women designed to be persuasive, designed to convince them. This is from the position of, "I figured out what’s right, now let me figure out what to say to you to make you do what’s right."

The second way was to pose the situation to the women as the study’s designers themselves thought of it. This is from the position of, "I’m treating you as an equal collaborator on solving this problem, I’m respecting your intellect, and I’m respecting your autonomy."

About five times as many women who were treated in the second way cooked with organs: 52% of that group vs. 10%.

Among women who had never cooked with organs before, none of them cooked with organs after being treated the first way. 29% of the women who had never cooked with organs before did so for the first time after being treated the second way.

You can read more about this study here. (There might be different ways to interpret which factors in this experiment were important, but Kurt Lewin himself advocated the view that if you want things to change, get people involved.) 

This isn’t just about what’s most effective at persuasion, as if persuasion is the end goal and the only thing that matters. Treating people as intellectual equals also gives them the opportunity to teach you that you’re wrong. And you might be wrong. Wouldn’t you rather know?

I looked at every link in this post and the most useful one for me was this one where you list off examples of uncomfortable cross-cultural interactions from your interviewees. Especially seeing all the examples together rather than just one or two.

I’m a Westerner, but I’m LGBT and a feminist, so I’m familiar with analogous social phenomena. Instances of discrimination or prejudice often have a level of ambiguity. Was that person dismissive toward me because of my identity characteristics or are they just dismissive toward everyone… or were they in a bad mood…? You form a clearer picture when you add up multiple experiences, and especially experiences from multiple people. That’s when you start to see a pattern.

As a person in an identity group that is discriminated against, sometimes you can have a weird feeling that, statistically, you know discrimination is happening, but you don’t know for sure exactly which events are discrimination and which aren’t. Some instances of discrimination are more clear — such as someone invoking a trope or cliché about your group — but any individual instance of someone talking over you, disregarding your opinion, not taking an interest in you, not giving you time to speak, and so on, is theoretically consistent with someone being generally rude or disliking you personally. Stepping back and seeing the pattern is what makes all the difference.

This might be the most important thing that people who do not experience discrimination don’t understand. Some people think that people who experience discrimination are just overly sensitive or are overreacting or are seeing malicious intent where it doesn’t exist. Since so many individual examples of discrimination or potential discrimination can be explained away as someone being generally rude, or in a bad mood, or just not liking someone personally — or whatever — it is possible to deny that discrimination exists, or at least that it exists to the extent that people are claiming.

But discerning causality in the real world is not always so clean and simple and obvious — that’s why we need clinical trials for drugs, for example — and the world of human interaction is especially complex and subtle. 

You could look at any one example on the list you gave and try to explain it away. I got the sense that your interviewees shared this sense of ambiguity. For example: "L felt uncertain about what factors contributed to that dynamic, but they suspected the difference in culture may play a part." When you see all the examples collected together, from the experiences of several different people, it is much harder to explain it all away.

You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.

In an undergrad philosophy class, my prof described examples like this as being about equality of regard or equality of concern. For example, if there are two nearby cities and one gets hit by a hurricane, the federal government is justified in sending aid just to the city that's been damaged by the hurricane, rather than to both cities in order to be "fair". It is fair. The government is responding equally to the needs of all people. The people who got hit by the hurricane are more in need of help.

Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.

God. Sam Altman didn't get to do what he wanted, and now we're supposed to believe he's "excited"? This corporate spin is driving me crazy!

But, that aside, I'm glad OpenAI has backed down, possibly because the Attorney General of Delaware or of California, or both, told OpenAI they would block Sam's attempt to break the OpenAI company free from the non-profit's control.

It seems more likely to me that OpenAI gave up because they had to give up, although this blog post is trying to spin it as if they changed their minds (which I doubt really happened).

Truly a brash move to try to betray the non-profit.

Once again Sam is throwing out gigantic numbers for the amounts of capital he theoretically wants to raise:

We want to be able to operate and get resources in such a way that we can make our services broadly available to all of humanity, which currently requires hundreds of billions of dollars and may eventually require trillions of dollars.

I wonder if his reasoning is that everyone in the world will use ChatGPT, so he multiplies the hardware cost of running one instance of GPT-5 by the world population (8.2 billion), and then adjusts down for utilization. (People gotta sleep and can't use ChatGPT all day! Although maybe they'll run deep research overnight.)
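As a sanity check on that guess, here's a minimal sketch in Python. Every input below is a made-up assumption of mine (OpenAI hasn't published any of these numbers), but it shows how you can get from "everyone on Earth" to "trillions of dollars":

```python
# Toy back-of-envelope for "serve all of humanity" hardware costs.
# Every input below is an illustrative assumption, not a published figure.

world_population = 8.2e9      # people
utilization = 0.05            # assume each person uses it ~5% of the day
concurrent_users = world_population * utilization

users_per_gpu = 10            # assumed concurrent users served per GPU
cost_per_gpu = 30_000         # assumed dollars per GPU

hardware_cost = concurrent_users / users_per_gpu * cost_per_gpu
print(f"≈ ${hardware_cost / 1e12:.1f} trillion")  # ≈ $1.2 trillion
```

With those (entirely invented) inputs, you land at roughly a trillion dollars of hardware, which is at least in the same territory as the numbers Sam keeps throwing out.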

Looks like the lede was buried: 

Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.

At first, I thought this meant the non-profit will go from owning 51% of the company (or whatever it is) to a much smaller percentage. But I tried to confirm this and found an article that claims the OpenAI non-profit only owns 2% of the OpenAI company. I don't know whether that's true. I can't find clear information on the size of the non-profit's ownership stake. 

The data in this paper comes from the 2006 paper "Disease Control Priorities in Developing Countries".

I don't understand. Does this paper not support the claim? 

I've actually never heard this claim before, personally. Instead, people like Toby Ord talked about how the cost of curing someone's blindness through the Fred Hollows Foundation was 1,000x cheaper than training a seeing eye dog. 
