In this episode of our podcast, Elizabeth Van Nostrand and I talk to Oliver Habryka of Lightcone Infrastructure about his thoughts on the Open Philanthropy Project, which he believes has become stifled by the PR interests of its primary funder, Good Ventures.

Oliver’s main claim is that around mid-2023 or early 2024, Good Ventures founder Dustin Moskovitz became more concerned about his reputation, and this put a straitjacket on what Open Phil could fund. Moreover, it was not enough for a project to be good and pose low reputational risk; it had to be obviously low reputational risk, because OP employees didn’t have enough communication with Good Ventures to pitch exceptions. According to Habryka.

That’s a big caveat; this podcast is pretty one-sided. We invited Open Phil to send a representative to record their own episode, but they decided to just send a written response (which is linked below and read at the end of the episode). If anyone out there wants to asynchronously argue with Habryka on a separate episode, we’d love to hear from you.

Transcript available here.

Links from the episode:

An Update From Good Ventures (note: Dustin has deleted his account, so his comments are listed as anonymous, but his are not the only anonymous comments)

CEA announcing the sale of Wytham Abbey

Open Phil career page

Job reporting to Amy WL

Zach’s “this is false”

Luke Muehlhauser on GV not funding right-of-center work

Will MacAskill on decentralization and EA

Alexander Berger regrets the Wytham Abbey grant

Single Chan Zuckerberg employee demanding resignation over failure to moderate Trump posts on Facebook

Letter from 70+ CZ employees asking for more DEI within Chan Zuckerberg Initiative.

Open Phil’s response

(X-post LessWrong)

Comments
I know very little about the context of the disagreements between Oliver Habryka and Dustin Moskovitz. I've read one of their back-and-forths in comments on the EA Forum, and it was almost impossible to follow what they were talking about, partly due to both of their writing styles, but also probably because there was a lot of context and background they weren't trying to explain to people like me who weren't already in the know.

I think Dustin may also have purposely been trying to be a bit vague because he was sick of being criticized by people on the EA Forum and felt that the more he said, the more he would be criticized (he made a comment to that effect). 

So, I really don't know all the details here and could be getting this all wrong. This is just my impression of things knowing as little as I do right now.

One of the things Oliver has done a lot in his comments on the EA Forum, and which has bothered me, is to shift a debate about what the right thing to do is on a specific topic (e.g., should EA buy a castle, should EA-related organizations invite people with extreme racist views to their conferences) into questioning the motives of people who disagree with him, accusing them of being too concerned with reputation rather than doing the right thing. Oliver seems to think he prioritizes doing the right thing over having a good reputation, while other people do it the other way around.

For example, Oliver holds views and is willing to take actions that I would categorize as racist and that I find morally objectionable for that reason. I'm not nearly the first person to express this. But Oliver's response is not "some people disagree with me because they have different opinions about racism", it's more like "people pretend to disagree with me because they're scared about what people will think and aren't willing to speak the truth". (Just to be clear, these are not real quotes. I'm just paraphrasing what I understood from reading some of Oliver's comments.)

It's a lot less compelling, rhetorically, to say "me and Dustin disagree about what constitutes racism" than to say "Dustin is overly concerned about his personal reputation (and I'm not)". (Again, these are not real quotes.) But it's also dishonest and mean-spirited. 

I think part of the reason some of the discussions about racism get diverted into discussions about EA's reputation is that people are trying to leave a quick comment without getting dragged into an interminable and stressful debate about racism. LessWrong users have an inexhaustible capacity for getting into protracted, technical, and verbose forum debates. In general, people are averse to getting into debates about politics, race and racism, and social justice online. It's tempting to try to get around a 100,000-word debate on the definition of racism by saying "these kinds of words and actions will alienate many people from effective altruism and worsen our reputation". 

Maybe that kind of response makes it seem like reputation is the primary concern. But it's not the primary concern. The primary concern is that racism is evil and the racist actions of Oliver et al. are evil. And you don't want Oliver to write a 5,000-word comment, citing Astral Codex Ten seven times and LessWrong fourteen times, arguing that holding racist views is actually smart, which you're going to feel obligated to read and respond to. So, instead, you'll just say "this kind of thing is really off-putting to many people, and damaging to our community". And yet Oliver still found a way to respond to this that is about as annoying as the thing you were hoping to avoid. He says, "Aha! You care about reputation! I care about truth!" (Again, just to be clear, this is a fake quote.)

Let me repeat the caveat that I get the sense that there's a whole lot of context and background to Dustin and Oliver's disagreements that I don't understand and I'm giving my impression of their disagreements despite this limited understanding. So, I could be getting Dustin's perspective wrong and I could be getting Oliver's perspective wrong.

But, with this limited understanding, my interpretation is that Dustin thinks that Lightcone Infrastructure's and the rationalist community's views and actions are racist and immoral and doesn't want to be morally responsible for funding or supporting racism, either directly or indirectly. That, I think, is his primary reason for cutting ties with Lightcone and the rationalist community, not reputation. Reputation is one thing he's considered, but it's not the only thing and I don't think it's the primary thing. The primary thing is that racism is evil.
