Yarrow Bouchard🔸

981 karma · Joined · Canada · medium.com/@strangecosmos

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Comments: 268

Topic contributions: 1

There is a lot of context to fill people in on, and I’ll leave that to the Reflective Altruism posts. I also added some footnotes that provide a bit more context. I wasn’t really trying to explain everything to people who don’t already know the background.

I may be overreacting or you may be underreacting. Who’s to say? The only way to find out is to read the Reflective Altruism posts I cited and get the background knowledge that my quick take presumes. 

I agree that discourse on Twitter is unbelievably terrible, but one of the ways I believe using Twitter harms your mind is that you hear the most terrible points and arguments all the time, so you come to discount things that sound facially similar in non-Twitter contexts. I advocate that people completely quit Twitter (and other microblogging platforms like Bluesky) because I think it gets people into the habit of thinking in tweets, and thinking in tweets is ridiculous. When Twitter started, it was delightfully inane. The idea of trying to say anything in such a short space was whimsical. That it has been elevated to a platform for serious discourse is absurd.

Again, the key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N-word and said Black people are stupid. The Centre for Effective Altruism (CEA) released a very short, very simple statement saying that all people are equal, i.e., in this context, that Black people are equal to everyone else.

The commenter responded harshly against CEA’s statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people. And since then, that commenter has been involved in a controversy around racism, namely the Manifest 2024 conference. If you’re unfamiliar, you can read about that conference on Reflective Altruism here.

In that post, there’s a quote from Shakeel Hashim, who was previously the Head of Communications at CEA:

By far the most dismaying part of my work at CEA was the increasing realisation that a big chunk of the rationalist community is just straight up racist. EA != rationalism, but I do think major EA orgs need to do a better job at cutting ties with them. Fixing the rationalism community seems beyond hope for me — prominent leaders are some of the worst offenders here, and it’s hard to see them going away. And the entire “truth seeking” approach is often a thinly veiled disguise for indulging in racism.

So, don’t take my word for it. 

The hazard of speaking too dispassionately or understating things is that it gives people a misleading impression. Underreacting is dangerous, just as overreacting is. That is why the harsh language is necessary.

Yes, knowing the context is vital to understanding where the harsh language is coming from, but I wasn’t really writing for people who don’t have the context (or who won’t go and find out what it is). People who don’t know the context can dismiss it, or they can become curious and want to find out more. 

But colder, calmer, more understated language can also be easily dismissed, and is not guaranteed to elicit curiosity, either. And the danger there is that people tend to assume if you’re not speaking harshly and passionately, then what you’re talking about isn’t a big deal. (Also, why should people not just say what they really mean?)

Thanks. I’ll think about the idea of doing a post, but, honestly, what I wrote was what I wanted to write. I don’t see the emotion or the intensity of the writing as a failure or an indulgence, but as me saying what I really mean, and saying what needs to be said. What good’s sugar-coating it?

Something that anyone can do (David Thorstad has given permission in comments I’ve seen) is simply repost the Reflective Altruism posts about LessWrong and about the EA Forum here, on the EA Forum. Those posts are extremely dry, extremely factual, and not particularly opinionated. They’re more investigative than argumentative. 

I have thought about what, practically, to do about these problems in EA, but I don’t think I have particularly clear thoughts or good thoughts on that. An option that would feel deeply regrettable and unfortunate to me would be for the subset of the EA movement that shares my discomfort to try to distinguish itself under some label such as effective giving. Someone could probably come up with a better label if they thought about it for a while.

I hope that there is a way for people like me to save what they love about this movement. I would be curious to hear ideas about this from people who feel similarly.

Huh? Why not just admit your mistake? Why double down on an error? 

By the way, who do you think saved that post in the Wayback Machine on the exact same date it was moved to drafts? A remarkable coincidence, wouldn’t you say?

Your initial comment insinuated that the incidents I described were made up. But the incidents were not made up. They really happened. And I linked both to extensive documentation on Reflective Altruism and directly to a post on the EA Forum so that anyone could verify that the incidents I described occurred. 

There was one incident I described that I chose not to include a link to out of consideration for your coworker. I wanted to avoid presenting the quick take as a personal attack on them. (That was not the point of what I wrote.) I still think that is the right call. But I can privately provide the link to anyone who requests it if there is any doubt this incident actually occurred.

But, in any case, I very much doubt we are going to have a constructive conversation at this point. Even though I strongly disagree with your views and I still think you owe me an apology, I sincerely wish you happiness. 

Thank you for your comment.

I am a strong believer in civility and kindness, and although my quick take used harsh language, I think that is appropriate. I think, in a way, it can even be more respectful to speak plainly, directly, and honestly, as opposed to being passive-aggressive and dressing up insults in formal language.

I am expecting people to know the context or else learn what it is. It would not be economical for me to simply recreate the work already done on Reflective Altruism in my quick take. 

That post only has negative karma because I strong-downvoted it. If I remove my strong downvote, it has 1 karma. 8 agrees and 14 disagrees means more disagrees than agrees, but that is still not a good ratio. Also, this is about more than just the scores on the post; it’s also about the comments defending the post, both on that post itself and elsewhere, and the scores on those comments.

I don’t think racist ideas should have 1 karma when you exclude my vote.

I think that when someone responds to a racist email about Black people from someone in our community by saying that Black people have equal value to everyone else, the right reply is not to argue that Black people have "approximately" as much value as white people. Normally, I would extend much more benefit of the doubt and try to interpret the comment more charitably, but subsequent evidence (namely, the commenter’s association with and defense of people with extreme racist views) has made me interpret that comment much less charitably than I otherwise would. In any case, even on the most charitable interpretation, it is foolish and morally wrong.

I really agreed with you when I was just glancing at the post trying to get a sense of what it was about, but then I looked at the comments and got convinced to try reading it in earnest, from the beginning. Then I flipped, and now I think the thesis is clear, the individual sentences are clear, and the writing is beautiful.

An unfortunate fact about some academic writing, specifically some writing in philosophy, in many of the humanities, and in some of the social sciences, is that there are a lot of time-wasting, inscrutable papers and books. This kind of writing does not reward additional time and effort spent reading it, or at least does so at such a miserly trickle that it's not worthwhile. The preponderance of inscrutable texts makes it hard to tell, at a glance, what's not worth reading and what's written in sumptuous prose. This essay is sumptuous prose.

And, indeed, this seems to show that your accusation that there was an attempt to hide the post after you brought it up was false. An apology wouldn't hurt!

The other false accusation was that I didn't cite any sources, when in fact I did in the very first sentence of my quick take. Apart from that, I also directly linked to an EA Forum post in my quick take. So, however you slice it, that accusation is wrong. Here, too, an apology wouldn't hurt if you want to signal good faith.

My offer is still open to provide sources for any one factual claim in the quick take if you want to challenge one of them. (But, as I said, I don't want to be here all day, so please keep it to one.)

Incidentally, in my opinion, that post supports my argument about anti-LGBT attitudes on LessWrong. I don't think I could have much success persuading LessWrong users of that, however, and that was not the intention of this quick take.  

The sources are cited in quite literally the first sentence of the quick take.

To my knowledge, every specific factual claim I made is true and none are false. If you want to challenge one specific factual claim, I would be willing to provide sources for that one claim. But I don’t want to be here all day. 

Since I guess you have access to LessWrong’s logs given your bio, are you able to check when and by whom that LessWrong post was moved to drafts, i.e., if it was indeed moved to drafts after your comment and not before, and if it was, whether it was moved to drafts by the user who posted it rather than by a site admin or moderator?

I’m not sure if you’re asking about the METR graph on task length or about the practical use of AI coding assistants, which the METR study found currently has a negative effect on developer productivity.

If I understand it correctly, the METR graph doesn’t measure an exponentially decreasing failure rate; it measures the length of tasks that models can complete at a 50% failure rate. (There’s also a version of the graph with a 20% failure rate, but that’s not the one people typically cite.)
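In case it helps make that metric concrete, here is a rough sketch of how a 50% time horizon of this kind could be computed from per-task results. This is not METR’s actual code; the data, the logistic-fit approach, and all names here are illustrative assumptions.

```python
# Hedged sketch (not METR's code): computing a "50% time horizon" from
# per-task results. Assumes each task has a human completion time (minutes)
# and a binary model success flag; fits a logistic curve of success against
# log task length and solves for the length where predicted success is 50%.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: (human time in minutes, did the model succeed?)
human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
model_success = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

X = np.log(human_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, model_success)

# p = 0.5  =>  intercept + coef * log(t) = 0  =>  t = exp(-intercept / coef)
t50 = np.exp(-clf.intercept_[0] / clf.coef_[0][0])
print(f"50% time horizon: ~{t50:.0f} minutes")

# The same fit also gives the (shorter) horizon at 80% success / 20% failure,
# the less commonly cited version of the graph.
t80 = np.exp((np.log(0.8 / 0.2) - clf.intercept_[0]) / clf.coef_[0][0])
print(f"80% success time horizon: ~{t80:.0f} minutes")
```

The headline number is just the task length at which the fitted curve crosses the 50% threshold; it is not a claim that failures disappear on shorter tasks.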

I also think automatically graded tasks used in benchmarks don’t usually deserve to be called “software engineering” or anything that implies that the actual tasks the LLM is doing are practically useful, economically valuable, or could actually substitute for tasks that humans get paid to do. 

I think many of these LLM benchmarks measure such narrow things and such toy problems, which seem to be largely selected to make the benchmarks easier for LLMs, that they aren’t particularly meaningful.

Studies of real-world performance, like METR’s study of human coders using an AI coding assistant, are much more interesting and important. Although I find most LLM benchmarks practically meaningless for measuring AGI progress, I think practical performance in economically valuable contexts is much more meaningful.

My point in the above comment was just that an unambiguously useful AI coding assistant would not by itself be strong evidence for near-term AGI. AI systems mastering games like chess and Go is impressive and interesting and probably tells us something about AGI progress, but if someone had pointed to AlphaGo beating Lee Sedol as strong evidence that AGI would be created within 7 years of that point, they would have been wrong.

In other words, progress in AI probably tells us something about AGI progress, but just taking impressive results in AI and saying that implies AGI within 7 years isn’t correct, or at least it’s unsupported. Why 7 years and not 17 years or 77 years or 177 years?

If you assume whatever rate of progress you like, that will support any timeline you like based on any evidence you like, but, in my opinion, that’s no way to make an argument.

On the topic of betting and investing, it’s true that index funds have exposure to AI, and indeed personally I worry about how much exposure the S&P 500 has (global index funds that include small-cap stocks have less, but I don’t know how much less). My argument in the comment above is simply that if someone thought it was rational to bet some amount of money on AGI arriving within 7 years, then surely it would be rational to invest that same amount of money in a 100% concentrated investment in AI and not, say, the S&P 500.
