MIRI's financial incentives and AGI risk assessment bias
Shared by Jan

This is a copy of a chat between Claude and Jan. The content may include unverified or unsafe material that does not represent the views of Anthropic. The shared snapshot may contain attachments and data not displayed here.

Please evaluate the "Skills non-transferable" claim from this text with the mindset of a superforecaster who wants to bet on it being true or false. (Condition on Eliezer no longer having a high p(doom).)

MIRI's business model relies on the opposite narrative. MIRI pays Eliezer Yudkowsky $600,000 a year. It pays Nate Soares $235,000 a year. If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job. The financial basis for motivated reasoning is arguably even stronger in MIRI's case than in Mechanize's case. The kind of work MIRI is doing and the kind of experience Yudkowsky and Soares have isn't really transferable to anything else.

As a side quest, propose some form of test of skills transferability that it would be possible to bet on.

Feb 4

Evaluating "Skills Non-Transferable: FALSE"

My credence: 75-80% that skills ARE transferable (i.e., the text's claim is wrong)

Breaking Down the Actual Skill Sets

Yudkowsky:

  • Large personal following (hundreds of thousands of Twitter/X followers, plus HPMOR readership)
  • Prolific writer with demonstrated audience capture
  • Podcast/media presence and speaking ability
  • "Rationality" brand extends beyond AI doom specifically
  • Fiction writing with commercial success

Soares:

  • Former Google engineer (directly marketable)
  • Management/executive experience
  • Technical decision theory work
  • Research leadership

Superforecaster Considerations

Base rate question: How often do public intellectuals with substantial followings and technical backgrounds become unemployable when their core thesis loses traction? Historical reference classes (population bomb advocates, nuclear winter researchers) suggest: rarely.
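
To make that outside-view reasoning concrete, here is a minimal sketch in Python; every number in it is an assumption chosen for illustration (and to land near the 75-80% credence above), not a measured base rate.

```python
# Illustrative only: both numbers below are assumptions, not measured base rates.

# Outside view: in the reference class of public intellectuals with technical
# backgrounds whose core thesis lost traction, career skills transferred
# most of the time.
base_rate_transferable = 0.90   # assumed

# The reference class is an imperfect fit (MIRI's research niche is unusual),
# so shrink the outside view toward a 50/50 ignorance prior.
weight_on_base_rate = 0.60      # assumed

credence_transferable = (
    weight_on_base_rate * base_rate_transferable
    + (1 - weight_on_base_rate) * 0.50
)
print(f"P(skills transferable) ~ {credence_transferable:.0%}")  # ~ 74%
```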

Conditional on low p(doom) update: If Eliezer genuinely updated, this would be a massive epistemic event likely increasing his credibility for intellectual honesty. His audience follows his reasoning style, not just conclusions.

Key asymmetry the text misses: Soares has straightforwardly marketable engineering credentials. The "non-transferable" claim is weakest for him.


Proposed Testable Bets

  1. Direct employment test: "If MIRI closes within 3 years, Yudkowsky earns >$250k/year within 24 months from non-MIRI sources" — I'd take YES at ~70%
  2. Soares tech reentry: "Soares receives ≥1 offer >$200k from a tech company within 6 months of leaving MIRI" — I'd take YES at ~80%
  3. Platform independence: "Yudkowsky's Substack/Patreon (if launched) exceeds $100k ARR within 12 months" — testable proxy for audience transferability
  4. Speaking market: Track speaking fee offers post any major position change
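
Before choosing a framing, here is a minimal sketch of how the credences stated for bets 1 and 2 would translate into expected profit on a binary prediction market. Only the ~70% and ~80% credences come from the list above; the market prices are hypothetical assumptions.

```python
# Expected profit per $1 staked on YES in a binary market.
# Credences are the ~70% and ~80% figures stated above; prices are assumptions.
bets = {
    "Yudkowsky >$250k/year within 24 months": {"credence": 0.70, "price": 0.50},
    "Soares offer >$200k within 6 months":    {"credence": 0.80, "price": 0.60},
}

for name, b in bets.items():
    p, price = b["credence"], b["price"]
    # Each $1 buys 1/price YES shares; each share pays $1 if the bet resolves YES.
    ev_per_dollar = (p - price) / price
    print(f"{name}: expected profit {ev_per_dollar:+.0%} per $1 at price {price:.0%}")
```

The same arithmetic applies to bets 3 and 4 once a probability is attached to them.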

Which framing interests you most for an actual bet structure?

I like all the tests; for each, can you produce a median numerical estimate and an expected value estimate? Think about the problem holistically, including social capital, general intelligence, etc., and 2026 AGI company compensation. Spend as many tokens as necessary please :)

Feb 4

The user wants me to think deeply about each of the four proposed tests and provide:

  1. Median numerical estimates
  2. Expected value estimates

They want me to consider:

  • Social capital
  • General intelligence
  • 2026 AGI company compensation

Let me think through each test carefully, considering multiple factors.

First, let me search for current information on AGI company compensation and the current state of the AI talent market to ground my estimates.

I'll search for current information on AI talent compensation and market conditions to ground my estimates properly.

  • Rise | AI Talent Salary Report 2026 (www.riseworks.io)
  • Salary: AI Researcher (January 2026), United States (www.ziprecruiter.com)
  • AI Researcher: Average Salary & Pay Trends 2026 (www.glassdoor.com)
  • Artificial Intelligence (AI) Researcher Salary in 2025 (www.payscale.com)
  • 2026 Compensation Trends and Salary Guide, Blue Signal Search (bluesignal.com)
  • AI Compensation Strategy: Salary and Benefits in the AI Talent Bubble (2025) (www.herohunt.ai)
  • The AI Compensation and Talent Trends Shaping the Job Market in 2026 (ravio.com)
  • Artificial Intelligence Salary: Your Guide to AI Pay in 2026 (www.coursera.org)
  • Salary: Artificial Intelligence Research Scientist (www.ziprecruiter.com)
  • AI Researcher Salary, Hourly Rate (December 2025), United States (www.salary.com)

Good data on general AI researcher salaries. Now let me search for more specific information on top-tier AI talent compensation, especially at frontier labs, and also information about Yudkowsky's specific income streams and audience.

  • Anthropic Salaries (www.levels.fyi)
  • Salaries at OpenAI, Anthropic, Grammarly, Perplexity, and Other Top Startups: Up to $690k a Year (scroll.media)
  • AI Compensation Strategy: Salary and Benefits in the AI Talent Bubble (2025) (www.herohunt.ai)
  • 15 Highest Paying AI Jobs in 2025 and Why Demand Is Exploding (www.finalroundai.com)
  • Top 10 Highest-Paying AI Jobs in 2025: Salary Ranges & Career Paths Included (blog.theinterviewguys.com)
  • How Much Does Anthropic Pay in 2026? (35 Salaries) (www.glassdoor.com)
  • AI Paygrades (aipaygrad.es)
  • AI Specialist Salaries in 2024–2025: A Comprehensive Report (ts2.tech)
  • Search Salaries. Benchmark Compensation & Careers (6figr.com)
  • Meta's Hiring Spree Raised Compensation for Top AI Engineers and Executives (www.deeplearning.ai)

Great, now I have data on AI talent compensation. Let me also search for information about Eliezer Yudkowsky's current income streams and social capital, including his Twitter following and Substack/Patreon potential.

  • Eliezer Yudkowsky | Speaking Fee | Booking Agent (www.allamericanspeakers.com)
  • Eliezer Yudkowsky's Booking Agent and Speaking Fee, Speaker Booking Agency (www.speakerbookingagency.com)
  • Eliezer Yudkowsky (@ESYudkowsky) / X (twitter.com, x.com)
  • Fame | Eliezer Yudkowsky Net Worth and Salary Income Estimation, Apr 2023 (peopleai.com)
  • Hanging Out My Speaker's Shingle (www.lesswrong.com)
  • Eliezer Yudkowsky's Twitter Monthly Stats, Social Blade (socialblade.com)
  • Eliezer Yudkowsky, RationalWiki (rationalwiki.org)

Good information on Eliezer's following (~213K followers on X) and speaking fees ($50k-$100k for live events). Let me also search for Nate Soares' background and his former role at Google to get better data on his marketability.

  • Nate Soares | ITIF (itif.org)
  • Nate Soares, Machine Intelligence Research Institute (intelligence.org)
  • Is it harder to become a MIRI mathematician in 2019 compared to in 2013? (www.lesswrong.com)
  • Nate Soares (www.penguin.co.uk)
  • Nate Soares, Porchlight Book Company (www.porchlightbooks.com)
  • Nate Soares Biography | Booking Info for Speaking Engagements (www.allamericanspeakers.com)
  • Nate Soares, Crunchbase Person Profile (www.crunchbase.com)
  • Nate Soares, Executive Director, Machine Intelligence Research Institute (www.topionetworks.com)
  • Razib Khan's Unsupervised Learning: Nate Soares: we are doomed (probably) (unsupervisedlearning.libsyn.com)
  • The AI disconnect: understanding vs motivation, with Nate Soares, London Futurists (londonfuturists.buzzsprout.com)

Good data on Soares. Now let me search for some comparable cases of public intellectuals and think tanks, and also look for Substack income data for writers with comparable followings.

  • Substack User and Revenue Statistics (2026) (backlinko.com)
  • What the Quiet Winners Are Doing on Substack (pubstacksuccess.substack.com)
  • Let's Talk Income (How Much I Earn With Substack) (pubstacksuccess.substack.com)
  • Which Substack Publications Make the Most Money (And What's Driving Their Revenue) (www.reallygoodbusinessideas.com)
  • Substack Top Earners Making $25,000 per Month (Substack Success Stories) (creatoregg.com)
  • The most we can say about earnings of Substack's top writers, Alexey Guzey (guzey.com)
  • Start a paid newsletter on Substack (substack.com)
  • How Much Do You REALLY Earn on Substack? Let's Talk Fees! (www.onlinewritingclub.com)
  • Substack's CEO Reveals Over 50 Authors Earn $1M+ Annually Through Paid Subscriptions (quasa.io)
  • How Much Money Do You Make as a Substack Bestseller? (clairevenus.substack.com)

Good data. Now let me search for HPMOR readership and the LessWrong/rationalist community size to get a better estimate of Yudkowsky's potential audience.

  • Harry Potter and the Methods of Rationality, Wikipedia (en.wikipedia.org)
  • Harry Potter and the Methods of Rationality, Chapter 1: A Day of Very Low Probability (hpmor.com)
  • Harry Potter and the Methods of Rationality by Eliezer Yudkowsky (archive.org)
  • Harry Potter and the Methods of Rationality, official site (hpmor.com)
  • Harry Potter and the Methods of Rationality, HPMOR Wiki (hpmor.fandom.com)
  • Harry Potter and the Methods of Rationality (www.lesswrong.com)
  • The Methods of Rationality Podcast (hpmorpodcast.com)
  • Harry Potter and the Methods of Rationality by Eliezer Yudkowsky (cdn.jsdelivr.net)
  • Harry Potter and the Methods of Rationality, Chapter 1, a Harry Potter fanfic (www.fanfiction.net)

I have a good amount of data now. Let me search for one more thing - comparable cases of think tank leaders or intellectuals transitioning careers after their organizations lost relevance.

Now I have comprehensive data. Let me create a detailed analysis document with median and expected value estimates for each test.

The analysis document is ready; I'll present the file and summarize the key findings in my response.

Here's the comprehensive analysis with numerical estimates for each test.

Summary Table

| Test | Median Outcome | Expected Value | P(Threshold Met) |
| --- | --- | --- | --- |
| 1. Yudkowsky >$250k post-MIRI | $320k/year | $365k/year | 72% |
| 2. Soares offer >$200k | $340k TC | $375k TC | 85% |
| 3. Yudkowsky Substack >$100k ARR | $180k ARR | $280k ARR | 78% |
| 4. Speaking fees ≥$50k/event | $70k/event | $78k/event | 68% |
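
For transparency on how a row like the first one can have an expected value above its median, here is a minimal sketch of a scenario-weighted calculation; the outcome scenarios and weights are illustrative assumptions, not the exact numbers behind the table.

```python
import numpy as np

# Illustrative scenario distribution for post-MIRI annual income (test 1).
# Outcomes and probabilities are assumptions chosen only to show why a
# right-skewed distribution puts the expected value above the median.
outcomes = np.array([100_000, 250_000, 320_000, 500_000, 900_000])  # $/year
probs    = np.array([0.15,    0.15,    0.40,    0.20,    0.10])

assert abs(probs.sum() - 1.0) < 1e-9

expected_value = float(np.dot(outcomes, probs))
# Median: smallest outcome at which cumulative probability reaches 50%.
median = float(outcomes[np.searchsorted(np.cumsum(probs), 0.5)])
p_over_250k = float(probs[outcomes > 250_000].sum())

print(f"median ~ ${median:,.0f}, EV ~ ${expected_value:,.0f}, "
      f"P(>$250k) ~ {p_over_250k:.0%}")
```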

Key Insights

Soares is the easier case: His Google/Microsoft background, AAAI publications, and the fact that "AI safety" is now a prestige career path make him highly employable. Anthropic's median total compensation is $471,808, and research engineers at Anthropic can earn up to $690,000. His MIRI experience is now resume gold rather than resume poison.

Yudkowsky has brand value, not skills in the traditional sense: His ~213K X followers, $50,000-$100,000 speaking fees for live events, HPMOR readership (with a Russian edition that collected 11.4 million rubles from 7,278 backers), and unique positioning as "the original AI alignment person" are convertible to multiple independent income streams.
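
As a rough sanity check on the Substack estimate, here is a minimal back-of-the-envelope sketch. Only the ~213K follower figure comes from the search above and Substack's standard 10% fee is public; the conversion rates and price are assumptions.

```python
# Back-of-the-envelope ARR from a newsletter audience.
followers          = 213_000   # ~213K X followers (from the search above)
reach_to_free_subs = 0.10      # assumed: share of followers who subscribe for free
paid_conversion    = 0.05      # assumed: share of free subscribers who pay
price_per_year     = 100       # assumed: roughly an $8-9/month annual plan
platform_cut       = 0.10      # Substack's standard 10% fee

free_subs = followers * reach_to_free_subs
paid_subs = free_subs * paid_conversion
arr = paid_subs * price_per_year * (1 - platform_cut)
print(f"free subs ~ {free_subs:,.0f}, paid subs ~ {paid_subs:,.0f}, ARR ~ ${arr:,.0f}")
```

Under these assumed rates the result lands just under the $100k threshold; small changes in conversion move it well above or below, so the estimate hinges on audience engagement rather than raw follower count.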

The AI talent market is historically hot: AI/ML hiring grew 88% year-on-year in 2025, and Meta has offered packages worth as much as $300 million over four years to top researchers. Even mid-tier AI safety roles pay $200k-$400k.

Conditional on low p(doom) update: This could paradoxically increase Yudkowsky's value—"the original doomer changed his mind" is a compelling narrative that tech companies might pay premium speaking fees to hear.

Bottom Line for Betting

The claim "skills non-transferable" is FALSE with ~80% confidence. I'd happily take the YES side on all four proposed bets at the stated probabilities, with Soares being the strongest bet (85% YES) and speaking fee persistence being the weakest (68% YES).
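
If one actually wanted to size stakes at these credences, a minimal Kelly-criterion sketch is below. The credences are the ones from the table above, while the market prices are hypothetical assumptions.

```python
def kelly_fraction(p: float, price: float) -> float:
    """Kelly fraction of bankroll for buying YES at `price` with credence `p`.

    Buying YES at `price` pays $1 per share, so net odds b = (1 - price) / price,
    and Kelly gives f* = (p * b - (1 - p)) / b.
    """
    b = (1 - price) / price
    return max(0.0, (p * b - (1 - p)) / b)

# Credences from the table above; the market prices are assumptions.
for name, p, price in [
    ("Soares offer >$200k",          0.85, 0.60),
    ("Yudkowsky >$250k post-MIRI",   0.72, 0.50),
    ("Speaking fees >=$50k/event",   0.68, 0.55),
]:
    f = kelly_fraction(p, price)
    print(f"{name}: stake ~ {f:.0%} of bankroll at price {price:.0%}")
```

In practice a superforecaster would stake only a fraction of full Kelly, but the ordering (Soares strongest, speaking fees weakest) matches the confidence ranking above.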
