The two podcasts where I discuss FTX are now out.

The Sam Harris podcast is more aimed at a general audience; the Spencer Greenberg podcast is more aimed at people already familiar with EA. (I’ve also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.)

In this post, I’ll gather together some things I talk about across these podcasts — this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I’d recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I’ll cover each topic in separate comments underneath this post.

Comments

On talking about this publicly

A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.” 

Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of m…

Denis

I've had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves if it would be wise for us to do differently. 

EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influence policy. 

We all believe we're willing to make sacrifices to help EA succeed. 

Here's the hard part: Sometimes the sacrifice we have to make is to go against our own natural desire to do what feels right. 

It feels right that Will and other people from EA should make public statements about how bad we feel about FTX and how we'll try to do better in future and so on. 

But the legal advice Will got was correct, and was also what was best for EA. 

There was zero chance that the FTX scandal could reflect positively on EA. But there were steps Will and others could take to minimise the damage to the EA movement…

There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened. 

I tend to think EA did. Back in September 2023, I argued:

EA contributed to a vast financial fraud, through its:

  • People. SBF was the best-known EA, and among the earliest 1% of people to join EA. FTX’s leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA’s main leader and another intellectual leader of EA. 
  • Resources. FTX had some EA staff and was funded by EA investors.
  • PR. SBF’s EA-oriented philosophy of giving and his purported frugality served as cover for his unethical conduct.
  • Ideology. SBF apparently had an RB ideology: he was a risk-neutral act-utilitarian who, a decade ago on Felicifia, argued that stealing was not in-principle wrong. In my view, his ideology, at least as he professed it, is best understood as an extremist variant of EA.

Of course, you can argue that contributing (point 1) people-time and (2) resources is consistent with us having just been victims, although I think that glosses over the extent to which EA fol…

The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors. 

Upvoted. 

But a problem is: I don't think many people outside of EA believe that, nor will they believe it merely because EA sources self-interestedly repeat it. They do not have priors to believe EA was not somehow responsible for what happened, and the publicly-available evidence (mainly the Time article) points in the direction of at least some degree of responsibility. The more EA proclaims its innocence without coughing up evidence that is credible to the broader world, the more guilty it looks.

But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves if it would be wise for us to do differently. 

Consistency in Following the Usual Playbook

The usual playbook, as I see it, includes shutting up and hoping that people lose interest and move on. I accept that there's a reason…

Nathan Young
Do you think the legal advice was correct? Or is it possible it was wrong? If it was worth spending X millions on community building, it feels like it may have been worth risking X/5 on lawsuits to avoid quite a lot of frustration. It seems like when there is a crisis, the rationalists perhaps talk too much (the SSC NYT thing, perhaps), while EA elites clam up and suddenly go all "due diligence"; I'm not sure that's the right call either. (Not that I would do better.)

I feel like "if you get legal advice, follow it" is a pretty widely held and sensible broad principle, and violating it can have very bad personal consequences. I think the bar should be pretty high for someone violating that principle, and I'm not sure "avoiding quite a lot of frustration" meets that bar, especially since the magnitude of the frustration is only obvious in hindsight.

Jason

I have very little doubt that any advice given to an individual with significant potential exposure to keep their mouth shut was correct advice as to that individual's personal interests. I also have very little doubt that anyone who worked for or formally advised FTXFF fits in that category.

To the extent that Nathan is asking about legal advice given to EVF, I don't think the principle would necessarily hold. Legal advice is going to focus relatively more on the client's legal risks, and less so (if at all) on the traditionally-conceived public interest, what is in the interest of the long-term future, etc. I'd say "charitable organizations should act in their own legal self-interest" probably defaults to true, but that it's a fairly weak presumption. With the possible and partial exception of lawyers who are also insiders, I think lawyers will significantly underweight considerations like the epistemic health of the broader EA community and also be seriously limited at estimating the effect of various scenarios on that consideration.

That being said, I doubt Will is in a particularly good position to evaluate the legal advice given to EVF, because he was recused from FTX-related matters due to serious conflicts of interest. If he were a lawyer, he might be in a good position to estimate -- then he'd have both enough knowledge of the facts and the right professional background to infer things based on that knowledge. But he isn't.

Jason
While this is not expressing an opinion on your broader question, I think the distinction between individual legal exposure and organizational exposure is relevant here. It would be problematic to avoid certain collective costs of FTX by unfairly foisting them off on unconsenting individuals and organizations. As Will alluded to, it is possible that the costs would be borne by other EAs, not the speaker. That being said, people could be indemnified. So I think it's plausible to update somewhat toward there being some valid reason to fear severe or even massive legal exposure. Or toward information coming out in litigation that is more damaging than the inferences to be drawn from silence. (Without inside knowledge, I find the latter more likely than actual severe liability exposure.)
Jonas_
I'd be interested in specific scenarios or bad outcomes that we may have averted. E.g., much more media reporting on the EA-FTX association, resulting in significantly greater brand damage? Prompting the legal system to investigate potential EA involvement in the FTX fraud, costing enormous further staff time despite not finding anything? Something else? I'm still not sure what example issues we were protecting against.
Jason

much more media reporting on the EA-FTX association resulting in significantly greater brand damage?

Most likely concern in my eyes. 

The media tends to report on lawsuits when they are filed, at which time they merely contain unsubstantiated allegations and the defendant is less likely to comment. It's unlikely that the media would report on the dismissal of a suit, especially if it was for reasons seen as somewhat technical rather than as a clear vindication of the EA individual/organization.

Moreover, it is pretty likely to me that EVF or other EA-affiliated entities have information they would be embarrassed to see come out in discovery. This is not based on any belief about misconduct, but on the base rate: organizations that had a bad miss / messup usually have related information that they would be embarrassed about (and I'd characterize what happened here as a bad miss / messup, whether or not a liability-creating one).

If a sufficiently motivated plaintiff sued, and came up with a legal theory that survived a motion to dismiss, I think it fairly likely that embarrassing information would need to be disclosed in discovery. They could require various persons and organizations to answer…

An Alameda exile told Time that SBF “didn’t have a distinction between firm capital and trading capital. It was all one pool.” That's at least a badge of fraud (commingling).

Alameda was a prop trading firm, so there isn't normally any distinction between those. The only reason this didn't apply was that there was a third bucket of funds: pass-through custodial funds that belonged to FTX customers, which evidently weren't passed through, due to poor record keeping. That's not so much indicative of fraud as of incompetence.

Elon Musk

Stuart Buck asks:

“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”

Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way of “spending” money. Even prior to Musk being interested in acquiring Twitter, Sam mentioned he thought that Twitter was under-monetised; my impression was that that view was pretty widely held in the tech world. Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.

I didn’t have strong views on whether either of these opinions was true. My aim was just to introduce the two of them, and let them have a conversation and take it from th…

titotal

Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.

 

I think this is an indication that the EA community may have a hard time seeing through tech hype. I don't think this is a good sign now that we're dealing with AI companies, who are also motivated to hype and spin. 

The linked idea is very obviously unworkable. I am unsurprised that Elon rejected it and that no similar thing has taken off. First, as usual, it could be done cheaper and easier without a blockchain. Second, Twitter would be giving people a second place to see their content where they don't see Twitter's ads, thereby shooting themselves in the foot financially for no reason. Third, while Facebook and Twitter could maybe cooperate here, there is no point in an interchange between other sites like TikTok and Twitter, as they are fundamentally different formats. Fourth, there's already a way for people to share tweets on other social media sites: it's called "hyperlinks" and "screenshots". Fifth, how do you delete your bad tweets that are ruining your life if they remain permanently on the blockchain? 

For what it's worth, SBF put this idea to me in an interview I did with him, and I thought it sounded daft at the time, for the reasons you give among others.

He also suggested putting private messages on the blockchain which seemed even stranger and much less motivated.

That said, at the time I regarded SBF as much more of an expert on blockchain technology than I was, which made me reluctant to entirely dismiss it out of hand, and I endorse that habit of mind.

As it turns out people are now doing a Twitter clone on a blockchain and it has some momentum behind it: https://docs.farcaster.xyz/

So my skepticism may yet be wrong — the world is full of wonders that work even though they seem like they shouldn't. Though how a project like that out-competes Twitter, given the network effects holding people onto the platform, I don't know.

As a data point, I remember reading that Twitter thread and thinking it didn't make a lot of technical sense (I remember also being worried about the lack of forward secrecy since he wanted to store DMs encrypted on the blockchain).

But the goal was to make a lot of money, not to make a better product, and seeing that DogeCoin and NFTs (which also don't make any technical sense) reached a market cap of tens of billions, it didn't seem completely absurd that shoehorning a blockchain into Twitter made business sense.

My understanding was that crypto should often be thought of as a social technology that enables people to be excited about things that have been possible since the early 2000s. At least that's how I explain to myself how I missed out on BTC and NFTs.

In any case, at the time I thought his main goal must have been to increase the value of FTX (or of Solana), which didn't raise any extra red flags in the reference class of crypto.

Re:

that the EA community may have a hard time seeing through tech hype

I think it's important to keep in mind that people could have made at least tens of millions by predicting FTX's collapse; this failure of prediction was really not unique to the EA community, and many in the EA community mentioned plenty of times that the value of FTX could go to 0.

I agree it's probably a pretty bad idea, but I don't think this supports your conclusion that "the EA community may have a hard time seeing through tech hype".

I disagree with that quote, but I do think the fact that Will is reporting this story now with a straight face is a bad sign. 

My steelman would be "look if you think two people would have a positive-sum interaction and it's cheap to facilitate that, doing so is a good default". It's not obvious to me that Will spent more than 30 seconds on this. But the defense is "it was cheap and I didn't think about it very hard", not "Sam had ideas for improving twitter".

poppinfresh
Your steelman doesn't seem very different from "I didn’t have strong views on whether either of these opinions were true. My aim was just to introduce the two of them, and let them have a conversation and take it from there."
Elizabeth
I think if all he'd said was "My aim was just to introduce the two of them, and let them have a conversation and take it from there", I'd have found that a satisfactory answer. It's also not something I considered to need justification in the first place, although I hadn't looked into it very much. But the fact that Will gave a full-paragraph explanation of why this seemed high-EV suggests that he thinks that reasoning is important. 

What I heard from former Alameda people 

A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.

In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept their offer; he refused and they left. 

I wasn’t involved in the dispute; I heard about it only afterwards. There were claims being made on both sides and I didn’t have a view about who was more in the right, though I was more in touch with people who had left or reduced their investment. That included the investor who was most closely involved in the dispute, who I regarded as the most reliable source.

It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and unwilling to accept a lower…

Jonas_

I broadly agree with the picture and it matches my perception. 

That said, I'm also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:

  • predicting a 10% annual risk of FTX collapsing with [struck through: "FTX investors and the Future Fund (though not customers)"] FTX investors, the Future Fund, and possibly customers losing all of their money, 
    • [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lowered that to 5% in March 2022. Note that I predicted hacks and investment losses, but not fraud.]
  • recommending in favor of 'Future Fund' and against 'FTX Future Fund' or 'FTX Foundation' branding, and against further affiliation with SBF, 
  • warnings that FTX was spending its US dollar assets recklessly, including propping up the price of its own tokens by purchasing large amounts of them on open markets (separate from the official buy & burns), 
  • concerns about Sam continuing to employ very risky and reckless business practices throu…
huw

A meta thing that frustrates me here is that I haven’t seen much talk about incentive structures. The obvious retort to negative anecdotal evidence is the anecdotal evidence Will cited about people who had previously expressed concerns but continued to affiliate with FTX and the FTXFF. To me, though, this evidence is completely meaningless, because continuing to affiliate with FTX and FTXFF meant closer proximity to money. As a corollary, the people who refused to affiliate with them did so at significant personal & professional cost for that two-year period.

Of course you had a hard time voicing these concerns! Everyone’s salaries depended on them not knowing or disseminating this information! (I am not here to accuse anyone of a cover-up, these things usually happen much less perniciously and much more subconsciously)

predicting a 10% annual risk of FTX collapsing with FTX investors and the Future Fund (though not customers) losing all of their money,

Do you know if this person made any money off of this prediction? I know that shorting cryptocurrency is challenging, and maybe the annual fee from taking the short side of a perpetual future would be larger than 10%, not sure, but surely once the FTX balance sheet started circulating, that should have increased the odds of a near-term collapse enough for this trade to be profitable?[1]


  1. I feel like I asked you this before but I forgot the answer, sorry. ↩︎

I don't think so, because:

  • A 10–15% annual risk was predicted by a bunch of people up until late 2021, but I'm not aware of anyone believing that in late 2022, and Will points out that Metaculus was predicting ~1.3% at the time. I personally updated downwards on the risk because 1) crypto markets crashed, but FTX didn't, which seems like a positive sign, 2) Sequoia invested, 3) they got a GAAP audit.
  • I don't think there was a great implementation of the trade. Shorting FTT on Binance was probably a decent way to do it, but holding funds on Binance for that purpose is risky and costly in itself.
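To make the fee arithmetic here explicit (a rough illustrative sketch with assumed numbers, not a reconstruction of anyone's actual trade): let p be the assumed annual probability of collapse, g the gain on the short if FTT goes to roughly zero (about 100% of notional), and f the annualized funding/fee cost of holding the short. Ignoring price drift in the no-collapse case:

\[
\mathbb{E}[\text{return}] \;\approx\; p \cdot g - f \;=\; 0.10 \times 1.0 - f \;=\; 0.10 - f ,
\]

so with a 10% annual collapse probability, any carry cost above ten percentage points per year makes the short negative in expectation, before even counting the counterparty risk of leaving collateral on an exchange.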

That said, I'm aware that some people (not including myself) closely monitored the balance sheet issue and subsequent FTT liquidations, and withdrew their full balances a couple days before the collapse.

Is a 10-15% annual risk of failure for a two-year-old startup alarming? I thought base rates were higher, which makes me think I'm misunderstanding your comment.

You also mention that the 10% was without loss of customer funds, but the Metaculus 1.3% was about loss of customer funds, which seems very different.

10% chance of yearly failure without loss of customer funds seems more than reasonable, even after Sequoia invested, in such a high-variance environment, and not necessarily a red flag.

A 10-15% annual risk of startup failure is not alarming, but a comparable risk of it losing customer funds is. Your comment prompted me to actually check my prediction logs, and I made the following edit to my original comment:

  • predicting a 10% annual risk of FTX collapsing with [struck through: "FTX investors and the Future Fund (though not customers)"] FTX investors, the Future Fund, and possibly customers losing all of their money, 
    • [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lowered that to 5% in March 2022. Note that I predicted hacks and investment losses, but not fraud.]
Jason
Is the better reference class "two-year-old startups" or "companies supposedly worth over $10B" or "startups with over a billion invested"? I assume a 100 percent investor loss would be rare, on an annualized basis, in the latter two -- but that was included in the original claim. Most two-year-old startups don't have nearly the amount of investor money on board that FTX did.
Ben_West🔸
Thanks! That's helpful. In particular, I wasn't tracking the 2021 versus 2022 thing.
Jonas_
(See my edit)
Jason
Optics would be great on that one -- an EA has insight that there's a good chance of FTX collapse (based on not generally-known info / rumors?), goes out and shorts SamCoins to profit on the collapse! Recall that any FTX collapse would gut the FTT token at least, so there would still be big customer losses.
Davidmanheim
Gutting the FTT token is customers losing money because of their investing, not customer losses via FTX loss of custodial funds or token, though, isn't it?
Jason
That's correct. That being said, wasn't part of the value proposition of FTT that it gave you discounts on FTX? To that extent, it was somewhat like a partial gift certificate for future services. That's still not loss of deposited funds, of course. In any event, the public would not look kindly on a charitable movement accepting nine figures in donations from a company despite having strong semi-inside-knowledge reasons to believe said company was about to collapse in this manner. I was somewhat surprised to see encouragement to disclose information about anyone who traded on that kind of semi-insider knowledge.

Based on some of the follow-up questions, I decided to share this specific example of my thinking at the time (which didn't prevent me from losing some of my savings in the bankruptcy):

[screenshot of prediction log not reproduced]

Jason
Do you recall what your conception of a possible customer loss resulting "from bankruptcy" was, and in particular whether it was (at least largely) limited to "monies lent out for margin trading"? Although I haven't done any research, if user accounts had been appropriately segregated and safeguarded, FTX's creditors (in a hypothetical "normal" bankruptcy scenario) shouldn't have been able to make claims against them. There might have been an exception for those involved in margin trading.
Jonas_
I recall feeling most worried about hacks resulting in loss of customer funds, including funds not lent out for margin trading. I was also worried about risky investments or trades depleting cash reserves that could have been used to make up for hacking losses. I don't think I ever generated the thought "customer monies need to be segregated, and they might not be", primarily because at the time I wasn't familiar with financial regulations. E.g., in 2023 I ran across an article written in ~2018 that discussed an SIPC payout in a case of a broker commingling customer funds with an associated trading firm. If I had read that article in 2021, I would probably have suspected FTX of doing this.

Thanks for writing up these thoughts, Will; it is great to see you weighing in on these topics.

I’m unclear on one point (related to Elizabeth’s comments) around what you heard from former Alameda employees when you were initially learning about the dispute. Did you hear any concerns specifically about Sam’s unethical behavior, and if so, did these concerns constitute a nontrivial share of the total concerns you heard? 

I ask because in this comment and on Spencer’s podcast (at ~00:13:32), you characterize the concerns you heard about almost identically. In both cases, you mention a bunch of specific concerns you had heard (company was losing money, Sam’s too risky, he’s a bad manager, he wanted to double down rather than accept a lower return), but they all relate to Sam’s business acumen/competence and there’s no mention of ethical issues. So I’m hoping you can clarify why there’s a discrepancy with Time’s reporting, which specifically mentions that ethical concerns were a significant point of emphasis and that these were communicated directly to you:

[Alameda co-founders wrote a document that] “accuses Bankman-Fried of dismissing calls for stronger accounting and inflating the…

It seems there was a lot of information floating around, but no one saw it as their responsibility to check whether SBF was fine, and there was no central person for information to be given to. Is that correct? 

Has anything been done to change this going forward? 

From personal experience, I thought community health would be responsible, and approached them about some concerns I had, but they were under-resourced in several ways.

I normally think of community health as dealing with interpersonal stuff, and wouldn't have expected them to be equipped to evaluate whether a business was being run responsibly. It seems closer to some of the stuff they're doing now, but at the time the team was pretty constrained by available staff time (and finding it difficult to hire), so I wouldn't expect them to have been doing anything outside of their core competency.

Maybe a lesson is that we should be / should have been clearer about scopes, so there's more of an opportunity to notice when something doesn't belong to anyone?

I'd argue that "checking whether businesses are run responsibly" is out of scope for EA in general.

I think the fitness/suitability of major leaders (at least to the extent we are talking about a time when SBF was on the board) and major donor acceptability evaluation are inherently in scope for any charitable organization or movement.

Guy Raveh
Do most charitable organizations have in-house people to examine donors? I'm not saying we shouldn't check, but rather that there shouldn't be people in EA organizations whose job is to do this - organizations should just hire auditors or whomever to do it for them.
Aleks_K
Charitable organisations generally do due diligence on large donors, and will most likely do this in-house in most cases (perhaps with some external support) - very large organisations (e.g. universities) will usually have a specialised in-house team, independent from the rest of the operations, to do this. It is also likely that at least the larger EA organisations did do due diligence on donations from Sam/FTX; they just decided on balance that it was fine to take the donation.

EV should have due diligence processes in place, instigated by EA's first encounter with a disgraced crypto billionaire/major EA donor (Ben Delo). 

In February 2021, CEA (the EV rebrand hadn't happened yet) wrote:

Here’s an update from CEA's operations team, which has been working on updating our practices for handling donations. This also applies to other organizations that are legally within CEA (80,000 Hours, Giving What We Can, Forethought Foundation, and EA Funds).

  • We are working with our lawyers to devise and implement an overarching policy for due diligence on all of our donors and donations going forward.
  • We've engaged a third party who now conducts KYC (know your client) due diligence research on all major donors (>$20K a year).
  • We have established a working relationship with TRM who conduct compliance and back-tracing for all crypto donations.
Jason
It's unclear from that whether the due diligence scaled appropriately with the size of the donation. I doubt ~anyone is batting an eye at charities that took $25K-$50K from SBF, due diligence or no. The process at the tens-of-millions-per-year level needs to be bespoke, though.
AnonymousEAForumAccount
Yeah, fully agree with this. I hope now that EV and/or EV-affiliated people are talking more about this matter that they'll be willing to share what specific due diligence was done before accepting SBF's gifts and what their due diligence policies look like more generally. 
Jason
Unclear, although most nonprofits are attracting significantly less risky donors than crypto people. (SBF wasn't even the first crypto scammer sentenced to a multidecade term in the Southern District of New York in the past twelve months....) I'd suggest that even to the extent a non-profit is generally outsourcing that kind of work, it can't just rely on standard third-party practices where significant information with some indicia of reliability is brought directly to it.
Ben Millwood🔸
I don't think the EA movement as a whole can sensibly be assigned a scope, really. But I think we should collectively be open to doing whatever reasonably practicable, ethical things seem most important, without restricting ourselves to only certain kinds of behaviour fitting that description.
Guy Raveh
I definitely agree. But I think we're far from it being practically useful for dedicated EAs to do this themselves.
RyanCarey
This is who I thought would be responsible too, along with the CEO of CEA, to whom they report (and those working for the FTX Future Fund, although their conflictedness means they can't give an unbiased evaluation). But since the FTX catastrophe, the community health team has apparently broadened their mandate to include "epistemic health" and "Special Projects", rather than narrowing it to focus just on catastrophic risks to the community, which would seem to make EA less resilient in one regard than it was before. Of course, I'm not necessarily saying that it was possible to put the pieces together ahead of time, just that if there was one group responsible for trying, they were it.

Surely one obvious person with this responsibility was Nick Beckstead, who became President of the FTX Foundation in November 2021. That was the key period where EA partnered with FTX. Beckstead had long experience in grantmaking, credibility, and presumably incentive/ability to do due diligence. Seems clear to me from these podcasts that MacAskill (and to a lesser extent the more junior employees who joined later) deferred to Beckstead.

RyanCarey
Yes, that's who I meant when I said "those working for the FTX Future Fund"

My understanding is that this wasn't a benign management dispute, it was an ethical dispute about whether to disclose to investors that Alameda had misplaced $4m. SBF's refusal to do so sure seems of a piece with FTX's later issues. 

I do not remember being entirely or even primarily motivated by that issue. I'm not sure where Matt is getting this from, though in his defense he's writing pretty flippantly.  

Matt Levine is quoting from Going Infinite. I do not know who Michael Lewis's source is. I've heard confirming bits and pieces privately, which makes me trust this public version more. Of course that doesn't mean that was everyone's motivation: I'd be very interested to hear whatever you're able to share. 

Thanks, that makes sense. I didn't remember Going Infinite as having made such a strong claim, but maybe I was projecting my own knowledge into the book.

I looked back at the agenda for our resignation/buyout meeting and I don't see anything like "didn't disclose misplaced transfer money to investors". Which doesn't mean that no one had this concern, only that they didn't add it to the agenda, but I do think it would be misleading to describe this as the central concern of the management team, given that we listed other things in the agenda instead of that.[1]

  1. ^

    To preempt a question about what concerns I did have, if not the transfer thing: see my post from last year

    I thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.

    I'm not sure if I would describe the above as a "benign management dispute" (it certainly didn't feel benign to me at the time), but I think it's even less ac…
Elizabeth
That makes sense; sounds like it wasn't the concern for at least your group. He does describe it as "The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless", so unless there were multiple waves of management quitting, it sounds like the book conflated multiple stories. 

Just to clarify, it seems that "The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless" is from Matt Levine, not from Michael Lewis.

I'm quickly skimming the relevant parts of Going Infinite, and it seems to me that Lewis highlights other issues as even more relevant than the missing $4M.

Angelina Li
Unrelated — I really like this comment + this other comment of yours as good examples of: "I notice the disagreement you are having is about an empirical and easily testable question, let me spend 5 min to grab the nearest data to test this." (I really admire / value this virtue <3 )
Ben Millwood🔸
I think this was an example of a disagreement they had, but not the whole disagreement. (Another alleged example was the thing where Tara didn't want Sam to run some trading algorithm unattended, which he agreed to and then did anyway.)

The part where SBF committed to something important in his trading company and then broke the agreement also seems more predictive of fraud than the phrase "management dispute" suggests.

People rarely leave over one thing and different people leave over different reasons. But I expect people hearing "left over ethics disputes" to walk away with a more accurate understanding than "left over a management dispute" (and more details to either sentence would be welcome). 

Ben Millwood🔸
Yeah sorry I didn't intend to disagree with you on whether it was a management dispute or an ethics dispute, just that it wasn't only the issue you explicitly named.

Thank you Will! This is very much the kind of reflection and updates that I was hoping to see from you and other leaders in EA for a while.

I do hope that the momentum for translating these reflections into changes within the EA community is not completely gone given the ~1.5 years that have passed since the FTX collapse, but something like this feels like a solid component of a post-FTX response. 

I disagree with a bunch of object-level takes you express here, but your reflections seem genuine and productive and I feel like me and others can engage with them in good faith. I am grateful for that.

Lessons and updates

The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.

In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.

There are four categories of lessons and updates:

  • Undoing updates made because of FTX
  • Appreciating the new world we’re in 
  • Assessing what changes we could make in EA to make catastrophes like this less likely to happen again
  • Assessing what changes we could make such that EA could handle crises better in the future

On the first two points, the post from Ben Todd is good, though I don’t agree with all of what he says. In my view, the most important lessons when it comes to the first two points, which also have bearing on the third and fourth, are:

  • Against “EA exceptionalism”: without evidence to the contrary, we should assume that people in EA are about average (given their demographics) on traits that don’t relate to EA. Sadly, that includes things like likelihood to commit crimes. We should be especially cautious to avoid a halo effect — assuming that because someone is good in some ways, like being dedicated to helping others, then they are good in other ways, too, like having integrity.  
    • Looking back, there was a crazy halo effect around Sam, and I’m sure that will have influenced how I saw him. Before advising Future Fund, I remember asking a successful crypto investor — not connected to EA — what they thought of him. Their reply was: “He is a god.”
    • In my own case, I think I’ve been too trusting of people, and in general too unwilling to countenance the idea that someone might be a bad actor, or be deceiving me. Given what we know now, it was obviously a mistake to trust Sam and the others, but I think I've been too trusting in other instances in my life, too. I think in particular that I’ve been too quick to assume that, because someone indicates they’re part of the EA team, they are thereby trustworthy and honest. I think that fully improving on this trait will take a long time for me, and I’m going to bear this in mind when deciding which roles I take on in the future. 
  • Presenting EA in the context of the whole of morality. 
    • EA is compatible with very many different moral worldviews, and this ecumenicism was a core reason for why EA was defined as it was. But people have often conflated EA with naive utilitarianism: that promoting wellbeing is the *only* thing that matters.
    • Even on pure utilitarian grounds, you should take seriously the wisdom enshrined in common-sense moral norms, and be extremely sceptical if your reasoning leads you to depart wildly from them. There are very strong consequentialist reasons for acting with integrity and for being cooperative with people with other moral views.
    • But, what’s more, utilitarianism is just one plausible moral view among many, and we shouldn’t be at all confident in it. Taking moral uncertainty into account means taking seriously the consequences of your actions, but it also means respecting common-sense moral prohibitions.[1] 
    • I could have done better in how I’ve communicated on this score. In the past, I’ve emphasised the distinctive aspects of EA, treated the conflation with naive utilitarianism as a confusion that people have, and the response to it as an afterthought, rather than something built into the core of talking about the ideas. I plan to change that, going forward — emphasising more the whole of morality, rather than just the most distinctive contributions that EA makes (namely, that we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests).
  • Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before. 
    • Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive?  Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
  • Being willing to fight for EA qua EA.
    • FTX has given people an enormous stick to hit EA with, and means that a lot of people have wanted to disassociate from EA. This will result in less work going towards the most important problems in the world today - yet another of the harms that Sam and the others caused. 
    • But it means we’ll need, more than ever, for people who believe that the ideas are true and important to be willing to stick up for them, even in the face of criticism that’s often unfair and uncharitable, and sometimes downright mean. 

On the third point — how to reduce the chance of future catastrophes — the key thing, in my view, is to pay attention to people’s local incentives when trying to predict their behaviour, in particular looking at the governance regime they are in. Some of my concrete lessons, here, are:

  • You can’t trust VCs or the financial media to detect fraud.[2] (Indeed, you shouldn’t even expect VCs to be particularly good at detecting fraud, as it’s often not in their self-interest to do so; I found Jeff Kaufman’s post on this very helpful).
  • The base rates of fraud are surprisingly high (here and here).
  • We should expect the base rate to be higher in poorly-regulated industries.
  • The idea that a company is run by “good people” isn't sufficient to counterbalance that. 
    • In general, people who commit white collar crimes often have good reputations before the crime; this is one of the main lessons from Eugene Soltes’s book Why They Do It
    • In the case of FTX: the fraud was committed by Caroline, Gary and Nishad, as well as Sam. Though some people had misgivings about Sam, I haven’t heard the same about the others. In Nishad’s case in particular, comments I’ve heard about his character are universally that he seemed kind, thoughtful and honest. Yet, that wasn’t enough.
    • (This is all particularly on my mind when thinking about the future behaviour of AI companies, though recent events also show how hard it is to get governance right so that it’s genuinely a check on power.)
  • In the case of FTX, if there had been better aggregation of people’s opinions on Sam that might have helped a bit, though as I note in another comment there was a widespread error in thinking that the 2018 misgivings were wrong or that he’d matured. But what would have helped a lot more, in my view, was knowing how poorly-governed the company was — there wasn’t a functional board, or a risk department, or a CFO.

On how to respond better to crises in the future…. I think there’s a lot. I currently have no formal responsibilities over any community organisations, and do limited informal advising, too,[3] so I’ll primarily let Zach (once he’s back from vacation) or others comment in more depth on lessons learned from this, as well as changes that are being made, and planned to be made, across the EA community as a whole. 

But one of the biggest lessons, for me, is decentralisation, and ensuring that people and organisations to a greater extent have clear separation in their roles and activities than they have had in the past. I wrote about this more here. (Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism. This is because there are gains from “public goods” in the movement that won't be provided by default, and because I think Zach is going to be a strong CEO who can plausibly pull it off.)

In my own case, at the point of time of the FTX collapse, I was:

  • On the board of EV
  • An advisor to Future Fund
  • The most well-known advocate of EA

But once FTX collapsed, these roles interfered with each other. In particular, being on the board of EV and an advisor to Future Fund majorly impacted my ability to defend EA in the aftermath of the collapse and to help the movement try to make sense of what had happened. In retrospect, I wish I’d started building up a larger board for EV (then CEA), and transitioned out of that role, as early as 2017 or 2018; this would have made the movement as a whole more robust.

Looking forward, I’m going to stay off boards for a while, and focus on research, writing and advocacy.

  1. ^

    I give my high-level take on what generally follows from taking moral uncertainty seriously, here: “In general, and very roughly speaking, I believe that maximizing expected choice-worthiness under moral uncertainty entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race.”

  2. ^

    There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.

  3. ^

    More generally, I’m trying to emphasise that I am not the “leader” of the EA movement, and, indeed, that I don’t think that the EA movement is the sort of thing that should have a leader. I’m still in favour of EA having advocates (and, hopefully, very many advocates, including people who hopefully get a lot more well-known than I am), and I plan to continue to advocate for EA, but I see that as a very different role. 

  • Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before. 
    • Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive?  Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.

 

Wanting to push back against this a little bit:

  • The big issue here is that SBF was recklessly racing ahead at 60mph, and EAs who saw that didn't prevent him from doing so. So, I think the main lesson here is that EAs should learn to become strict enforcers of 35mph speed limits among their collaborators, which requires courage and skill in speaking out, rather than becoming hyper-law-abiding themselves.
  • The vast majority of EAs were/are reasonably law-abiding and careful (going at 35mph) and it seems perfectly fine for them to continue the same way. Extra trustworthiness signalling is helpful insofar as the world distrusts EAs due to what happened at FTX, but this effect is probably not huge.
  • EAs will get less done, be worse collaborators, and lose out on entrepreneurial talent if they become overly cautious. A non-zero level of naughtiness is often desirable, though this is highly domain-dependent.

I hear Will not as saying that going 35mph is in itself wrong in this analogy (necessarily), but that EA is now more-than-average vulnerable to attack and mistrust, so we need to signal our trustworthiness more clearly than others do.

Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism.

I agree with this. Failing that, I feel strongly that CEA should change its name. There are costs to having a leader / manager / "coordinator-in-chief", and costs to not having such an entity; but the worst of both worlds is to have ambiguity about whether a person or org is filling that role. Then you end up with situations like "a bunch of EAs sit on their hands because they expect someone else to respond, but no one actually takes the wheel", or "an org gets the power of perceived leadership, but has limited accountability because it's left itself a lot of plausible deniability about exactly how much of a leader it is".

There are very strong consequentialist reasons for acting with integrity

 

we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests

It concerns me a bit that when legal risk appears, suddenly everyone gets very pragmatic, in a way that I am not sure feels the same as integrity or truth-seeking. It feels a bit similar to how pragmatic we all were around FTX during the boom. It feels like in crises we get a bit worse at truth-seeking and integrity, though I guess many communities do. (Sometimes it feels like in a crisis you get to pick just one thing, and I am not convinced the thing the EA community picks is integrity or truth-seekingness.) 

Also, I don't really trust my own judgement here, but while EA may feel more decentralised, a lot of the orgs feel even more centralised around OpenPhil, which feels a bit harder to contact and is doing more work internally. This is their prerogative, I guess, but still. 

I am sure being a figurehead of EA has had a lot of benefits (not all of which, I guess, you wanted), but I strongly sense it has also had some really large costs. Thank you for your work. You're a really talented communicator and networker, and at this point probably a skilled board member, so I hope that doesn't get lost in all this. 

There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.

Metaculus isn't a prediction market; it's just an opinion poll of people who use the Metaculus website.

I agree with "not a prediction market", but I think "just an opinion poll" undersells it; people are evaluated and rewarded on their accuracy.

Fair! That's at least a super nonstandard example of an "opinion poll".

How I publicly talked about Sam 

Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him.  Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his reputation.

Some people have claimed that I deliberately misrepresented Sam’s lifestyle. In a number of places, I said that Sam planned to give away 99% of his wealth, and in this post, in the context of discussing why I think honest signalling is good, I said, “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call”. These statements represented what I believed at the time. Sam said, on multiple occasions, that he was planning to give away around 99% of his wealth, and the overall picture I had of him was highly consistent with that, so the Corolla seemed like an honest si…

huw

FWIW I find the self-indulgence angle annoying when journalists bring it up; it's plausible that Sam was reckless, stupid, and even malicious without wanting to see personal material gain from it. Moreover, I think it leads others to learn the wrong lessons—as you note in your other comment, the fraud was committed by multiple people with seemingly good intentions; we should be looking more at the non-material incentives (reputation, etc.) and enabling factors of recklessness that led them to justify risks in the service of good outcomes (again, as you do below).

Tiny nit: I didn't and don't read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much. 

What's your response to this accusation in Time? This behaviour doesn't sound like you, but Naia outright lying would surprise me, given my interactions with her. 

Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.”

“He was treating it like a ‘he said-she said,’ even though every other long-time EA involved had left because of the same concerns,” Bouscal adds.

I believe that was discussed in the episode with Spencer. Search for 'threatened' in the transcript linked here.
 

00:22:30 Spencer Greenberg

And then the other thing that some people have claimed is that when Alameda had that original split-up early on, where some people in the [effective altruism] community fled, that you had somehow threatened one of the people that had left. What was that all about?

00:22:47 Will MacAskill

Yeah. I mean, so yeah, it felt pretty [unclear] when I read that because, yeah, I certainly didn't have a memory of threatening anyone. And so, yeah, I reached out to the person who it was about, because it wasn't the person saying that they'd been threatened, it was someone else saying that that person had been threatened. So, yeah, I reached out to them. So there was a conversation between me and that person that was, like, kind of heated. But, yeah, they don't think I was, like, intending to intimidate them or anything like that. And then it was also, like, in my memory, not about the Alameda blow-up. It was, like, a different issue.

This doesn't feel like a great response to me.

I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will's claim that the conversation wasn't even about whether Sam was trustworthy or anything related to that, to be really quite hard to believe. It would be weird for someone to be mistaken or exaggerate about that, and I feel like a lie is unlikely, simply because I don't see what anyone would gain from lying to TIME about this.

Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?

Jonas_
31
10
4
1
1

I disagree-voted because I have the impression that there's a camp of people who left Alameda that has been misleading in their public anti-SBF statements, and has a separate track record of being untrustworthy.

So, given that background, I think it's unlikely that Will threatened someone in a strong sense of the word, and it's possible that Bouscal or MacAulay are being misleading, though I haven't tried to get to the bottom of it.

Thanks for posting this. If I may, I'll ask some more questions below about due diligence, as that's not a subject covered in your four reply-sections.

I'm not expecting that you'd answer every single one of these questions (there's a lot!), but my hope is that their variety might prompt reflections and recollections. I imagine it could be the case that you can't answer any of the questions below - perhaps you feel it's Beckstead's story to tell and you don't want to tell it for him, or Beckstead is currently facing lawsuits and legal jeopardy so this can't be discussed publicly. If so, that's understandable.

But it would be great to hear more about this meeting in November/December 2021 with Beckstead and Bankman-Fried. (All quotes are from Wei Dai's transcript; all bolding is mine.)

00:16:27 Will MacAskill

"But then by the end of 2021, so, you know things are opening up after the pandemic. And I go to North America to, you know, reconnect with a bunch of people. Sam at that point, by that point has put Nick Beckstead in charge of the his foundation. 

And so I meet up with Nick and with Sam in order to kind of discuss the strategy for the foundation and at that point it looks like, o

... (read more)
3
michel
Registering that this line of questioning (and volume of questions) strikes me as a bit off-putting / too intense. If someone asked me "What were the key concerns here, and how were they discussed?" [...] "what questions did you ask, and what were the key considerations/evidence?" about interactions I had years ago, I would feel like they were holding me to an unrealistic standard of memory or documentation. (Although I do acknowledge the mood that these were some really important interactions. Scrutiny is an appropriate reaction, but I still find this off-putting.)

These seem pretty reasonable questions to me. 

4
michel
Fair enough. Interesting to see people's different intuitions on this. 
6
AnotherAnonymousFTXAccount
I understand that impression/reaction. As I mentioned, the intention behind offering a bunch of specific and varied questions is that they might prompt reflection from different angles, surfacing thoughts, memories, and perspectives that provide new insight or that MacAskill finds more comfortable sharing, not that each would be responded to in forensic detail.
2
michel
Oh sorry, I missed this! I should have read that more closely before commenting.

Hey Will, 

Just wanted to comment to say it's great to see you back talking publicly.  I was one of the many who first heard you on Making Sense (then Waking Up) back in 2016, and you inspired me to take the GWWC pledge, become a vegan and set up a local EA group.  

I think that, for better or worse, the silent majority of the EA movement continues to look to public figures like yourself for guidance. I think it's great that you've decided to distance yourself from positions that could demand your silence, from a legal standpoint, in the future.

Keep your head up, don't get jaded, keep fighting the good fight.  We missed you!

There have been various calls for an independent investigation into the relationships between EA and SBF. Do you think such an investigation is warranted? Why or why not?

Will, do you plan on answering questions in the comments here? I realize you're busy and good responses take time, but I wanted to check if answering was on your agenda at all. 

Linking to Zvi's review of the podcast:

https://thezvi.wordpress.com/2024/04/15/monthly-roundup-17-april-2024/

Search for:

Will MaCaskill went on the Sam Harris podcast

 

It's a negative review, but the opinions are Zvi's; I didn't hear the podcast myself.
