My credence: 75-80% that skills ARE transferable (i.e., the text's claim is wrong)
Yudkowsky:
Soares:
Base rate question: How often do public intellectuals with substantial followings and technical backgrounds become unemployable when their core thesis loses traction? Historical reference classes (population bomb advocates, nuclear winter researchers) suggest: rarely.
Conditional on low p(doom) update: If Eliezer genuinely updated, this would be a massive epistemic event that would likely strengthen his reputation for intellectual honesty. His audience follows his reasoning style, not just his conclusions.
Key asymmetry the text misses: Soares has straightforwardly marketable engineering credentials. The "non-transferable" claim is weakest for him.
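To make the combination of these considerations concrete, here is a toy odds-form Bayesian sketch; every number in it (the base rate and both likelihood ratios) is an illustrative assumption, not a sourced estimate.

```python
# Toy odds-form Bayesian update combining the points above.
# Every number here is an illustrative assumption, not a sourced estimate.

def update_odds(prior_prob: float, likelihood_ratios: list[float]) -> float:
    """Apply successive likelihood ratios to prior odds; return posterior probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Assumed base rate: such intellectuals usually stay employable (~70%).
prior = 0.70

# Assumed likelihood ratios favoring "skills transferable":
#   1.3 -- a genuine p(doom) update reads as intellectual honesty to his audience
#   1.2 -- Soares' conventional engineering credentials
posterior = update_odds(prior, [1.3, 1.2])
print(f"Posterior P(skills transferable): {posterior:.0%}")  # ~78% with these toy numbers
```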
Which framing interests you most for an actual bet structure?
The user wants me to think deeply about each of the four proposed tests and provide median and expected value estimates for each.
Let me think through each test carefully, considering multiple factors.
First, let me search for current information on AGI company compensation and the state of the AI talent market to ground my estimates.
I'll search for current information on AI talent compensation and market conditions to ground my estimates properly.
Good data on general AI researcher salaries. Now let me search for more specific information on top-tier AI talent compensation, especially at frontier labs, and also information about Yudkowsky's specific income streams and audience.
Great, now I have data on AI talent compensation. Let me also search for information about Eliezer Yudkowsky's current income streams and social capital, including his Twitter following and Substack/Patreon potential.
Good information on Eliezer's following (~213K followers on X) and speaking fees ($50k-$100k for live events). Let me also search for Nate Soares' background and his former role at Google to get better data on his marketability.
Good data on Soares. Now let me search for some comparable cases of public intellectuals and think tanks, and also look for Substack income data for writers with comparable followings.
Good data. Now let me search for HPMOR readership and the LessWrong/rationalist community size to get a better estimate of Yudkowsky's potential audience.
I have a good amount of data now. Let me search for one more thing - comparable cases of think tank leaders or intellectuals transitioning careers after their organizations lost relevance.
Now I have comprehensive data. Let me create a detailed analysis document with median and expected value estimates for each test.
Good, I've created a comprehensive analysis. Let me now move it to the outputs folder and present it to the user.
Now I can present the file and also provide a summary of the key findings in my response.
I've created and presented a comprehensive analysis. Now let me provide a summary in my response.
Here's the comprehensive analysis with numerical estimates for each test.
| Test | Median Outcome | Expected Value | P(Threshold Met) |
|---|---|---|---|
| 1. Yudkowsky >$250k post-MIRI | $320k/year | $365k/year | 72% |
| 2. Soares offer >$200k | $340k TC | $375k TC | 85% |
| 3. Yudkowsky Substack >$100k ARR | $180k ARR | $280k ARR | 78% |
| 4. Speaking fees ≥$50k/event | $70k/event | $78k/event | 68% |
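For transparency on how numbers like these can be produced, here is a minimal Monte Carlo sketch for the Test 1 row; the lognormal shape and its parameters are assumptions chosen to roughly match the table, not the actual inputs behind it.

```python
# Minimal Monte Carlo sketch for Test 1 (Yudkowsky post-MIRI income > $250k).
# The lognormal model and its parameters are assumptions chosen to roughly
# match the table row, not the actual inputs behind it.
import numpy as np

rng = np.random.default_rng(0)

median_k = 320   # assumed median post-MIRI income, $k/year
sigma = 0.5      # assumed dispersion of log-income
samples = rng.lognormal(mean=np.log(median_k), sigma=sigma, size=100_000)

threshold_k = 250
print(f"Median:         ${np.median(samples):,.0f}k/year")
print(f"Expected value: ${samples.mean():,.0f}k/year")                    # ~$360k under these assumptions
print(f"P(> ${threshold_k}k):      {(samples > threshold_k).mean():.0%}")  # ~70%
```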
Soares is the easier case: His Google/Microsoft background, AAAI publications, and the fact that "AI safety" is now a prestige career path make him highly employable. Anthropic's median total compensation is $471,808, and research engineers at Anthropic can earn up to $690,000. His MIRI experience is now resume gold rather than resume poison.
Yudkowsky has brand value, not skills in the traditional sense: His ~213K X followers, $50,000-$100,000 speaking fees for live events, HPMOR readership (with a Russian edition that collected 11.4 million rubles from 7,278 backers), and unique positioning as "the original AI alignment person" are convertible to multiple independent income streams.
The AI talent market is historically hot: AI/ML hiring grew 88% year-on-year in 2025, and Meta has offered packages worth as much as $300 million over four years to top researchers. Even mid-tier AI safety roles pay $200k-$400k.
Conditional on low p(doom) update: This could paradoxically increase Yudkowsky's value—"the original doomer changed his mind" is a compelling narrative that tech companies might pay premium speaking fees to hear.
The claim "skills non-transferable" is FALSE with ~80% confidence. I'd happily take the YES side on all four proposed bets at the stated probabilities, with Soares being the strongest bet (85% YES) and speaking fee persistence being the weakest (68% YES).