Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
Since we have no real numbers for that narrow question, and it involves important factors we can't mathematically model, I don't know if we can settle it.
But what about the other narrow question: that if you assume the poorest countries' per capita GDP will grow to ~50% of per capita GWP in 100 years, and that per capita GWP will continue to grow by 2% annually over that timespan, then the cost-effectiveness of saving a life by donating to GiveWell's top charities today is ~3x higher than investing for 100 years and giving in 2126? Does that sound convincing at all to you?
The most arbitrary/most uncertain part of this calculation is how the per capita GDP of the poorest countries will compare to the global average over the very long-term.
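Here's a minimal sketch of how that back-of-the-envelope comparison can be run. To be clear, the specific inputs below (today's per capita GDP of the poorest countries, today's per capita GWP, a 3% real return on investments, and the assumption that the cost of saving a life scales in proportion to local income) are placeholder numbers I've picked for illustration rather than the exact inputs behind the ~3x figure, though with these placeholders the arithmetic lands in the same ballpark.

```python
# Rough sketch of the "give now vs. invest for 100 years" comparison above.
# Every input below is an illustrative placeholder, not necessarily the number
# behind the ~3x figure: the per capita GDP levels, the 3% real return, and the
# assumption that the cost of saving a life scales linearly with local income
# are all guesses.

YEARS = 100

poorest_gdp_pc_today = 800    # assumed per capita GDP of the poorest countries (USD)
gwp_pc_today = 13_000         # assumed per capita gross world product (USD)
gwp_growth = 0.02             # per capita GWP grows 2% per year (from the scenario)
convergence_share = 0.50      # poorest countries reach ~50% of per capita GWP (from the scenario)
real_return = 0.03            # assumed real annual return on invested funds

# Where the poorest countries end up in 2126 under these assumptions.
gwp_pc_2126 = gwp_pc_today * (1 + gwp_growth) ** YEARS
poorest_gdp_pc_2126 = convergence_share * gwp_pc_2126

# Assume the cost of saving a life rises in proportion to local income,
# so each future dollar saves proportionally fewer lives.
cost_multiplier = poorest_gdp_pc_2126 / poorest_gdp_pc_today

# How much each invested dollar grows by 2126.
investment_multiplier = (1 + real_return) ** YEARS

# >1 means donating now saves more lives per dollar than investing and waiting.
advantage_of_giving_now = cost_multiplier / investment_multiplier

print(f"Cost per life saved rises ~{cost_multiplier:.0f}x")
print(f"Each invested dollar grows ~{investment_multiplier:.0f}x")
print(f"Giving now is ~{advantage_of_giving_now:.1f}x as cost-effective as waiting")
```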
By the way, how did you determine that the current margin is either just enough or too much giving on global poverty to be optimal? Why isn't the margin at which delaying is the right move a 10x higher or 10x lower level of aggregate spending? Or 100x higher/lower? How does one determine that? Is there a quantitative, empirical argument based on real data?
Do you really, actually, in practice, recommend that everyone in the world delays all spending on global poverty/global health for 100+ years? As in, the Against Malaria Foundation should stop procuring anti-malarial bednets and just invest all its funds in the Vanguard FTSE Global All Cap Index Fund instead? Partners in Health should wind down its hospitals and become a repository for index funds? If not, why not?
With the closest thing we have to real numbers (that I've been able to figure out, so far, anyway), my back-of-the-envelope calculation above found that it was ~3x as cost-effective to donate money now as to invest and wait 100 years. Do you find that rough math at all convincing?
I don't know how to quantify the economic growth question with anything approaching real numbers. It would probably be a back-of-the-envelope calculation with a lot more steps and a lot more uncertainty than even the non-rigorous calculation I did above. There are many complicated considerations that can't be mathematically modelled.
For example: if wealthy people in wealthy countries have ~1,000x more resources in 100 years, it seems like the marginal cost-effectiveness of any one patient philanthropic foundation on global poverty would decline commensurately, since, all else being equal, you'd think overall giving to global poverty would increase ~1,000x. And as giving increased, you'd think the low-hanging fruit would get picked, economic growth would be stimulated, and global poverty would become incrementally more and more solved, such that the remaining opportunities to give would be much less cost-effective than the ones you started with 100 years ago.
If you think there's at least an, I don't know, 5% chance of transformative AI within the next 100 years, that also changes things, because transformative AI would cause rapid economic growth all over the planet, and then the marginal cost-effectiveness of your philanthropic funds in 2126 would really have decreased. But of course the invention of transformative AI is impossible to forecast.
You can imagine similar things for other speculative futuristic technologies. If it becomes vastly cheaper to prevent and treat all infectious diseases due to new technologies or biotechnologies, or, say, someone figures out how to wipe out all mosquitoes using a gene drive and countries with high rates of mosquito-borne illness decide to do it, then the cost-effectiveness of any money you were investing long-term to spend on infectious diseases later will drop dramatically.
To simplify it: if you have $1 million earmarked for malaria invested until 2126, and then in 2076 someone finds a super cheap way to quickly eradicate malaria worldwide, then your $1 million is now worthless. By spending it in 2026, you could have saved ~285 lives (at the implied cost of roughly $3,500 per life saved), but now you can save zero lives.
The cost-effectiveness of the spending by whoever does the super cheap way to quickly eradicate malaria is through the roof, but the cost-effectiveness of everyone else's dollars earmarked for malaria drops like a stone. So, if you're not the lucky philanthropist who funds that specific thing, you've made a terrible cost-effectiveness trade-off.
I think that shrimp QALYs and human QALYs have some exchange rate; we just don't have a good handle on it yet. And I think that if we'd decided that difficult things weren't worth doing, we wouldn't have done a lot of the things we've already done.
100 years of progress in the science and philosophy of consciousness should settle it. Start by reading a few books on the subject a year for a few years.
I recommend starting with Consciousness Explained by Daniel Dennett. It's one of my favourite books.
Professional philosophers don't even agree on whether dualism, eliminativism, functionalism, identity theory, or panpsychism is true, despite decades of scholarship, so don't expect to quickly find a consensus on finer-grained questions like how to quantify shrimp consciousness (and whether it exists in the first place) and compare it to human consciousness. Even if you can form your own view to your own satisfaction within a year, it's unlikely that you'll convince many others that you're right.
On the other hand, you might succeed where thousands of others haven't, and be hailed as one of the greatest living philosophers/scientists.
Didn't you stipulate it would be at least 100 years in the scenario we're imagining? Surely it's worth spending at least 1,000x more resources to end global poverty 100 years sooner? (Otherwise, why not wait 1,000 years or 10,000 years to donate your first dollar to global poverty, if all that matters is the CAGR of your investments?)
But the opportunity cost of not spending the $1 million today — the lost intervening 100 years of economic growth — is surely much more than $867 million? That is, surely it's at least 1,000x better to stimulate faster economic growth in the poorest countries today than it is to do it 100 years from now.
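(For reference, $867 million is roughly what $1 million becomes after compounding for 100 years at 7% per year; the 7% rate is my inference from the figure, not something stated explicitly.)

```python
# Sanity check on the $867 million figure: $1 million compounding for 100 years.
# The 7% annual return is inferred from the figure itself; it isn't stated above.
principal = 1_000_000
annual_return = 0.07
years = 100

future_value = principal * (1 + annual_return) ** years
print(f"${future_value:,.0f}")  # ≈ $867.7 million
```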
I’ll bet you a $10 donation to the charity of your/my choice that a judge we agree on who has formal/credentialed expertise in deep learning research (e.g. an academic or corporate AI researcher) will say that typical autoregressive large language models like GPT-4/GPT-5 or Claude 2/Claude 4.5 have not made or constituted non-trivial progress on the AI research problem of learning from video data via approaches that don’t rely on pixel-level prediction.
I’m open to counter-offers.
I’ll also say yes to anyone who wants to take the other side of this bet.
I’ll bet you a $10 donation to the charity of your/my choice that by December 31, 2026, not all three of these things will be true:
I think that at least one and possibly two or all three of these things won’t be true by December 31, 2026. If at least one of them isn’t true, I win the bet. If all three are true, you win the bet.
I think December 31, 2026 is a reasonable deadline because if this still hasn’t happened by then, my fundamental point that this conversation is premature will have been proven right.
I’m open to counter-offers.
I’m also open to making this same bet with anyone else, including if more than one person wants to bet me. (Anyone else can feel free to counter-offer me as well.)
If the market cap goes below $200 billion for more than a few days, it probably means the AI bubble popped, and any future donations from Anthropic employees have become highly uncertain.
I could possibly be talked down to $50 million.
There might be a better way to operationalize what meta-EA or EA regranting means. I’m open to suggestions.
This is the hardest condition/resolution criterion to operationalize in a way you can bet on. I’m trying to be lenient while at the same time avoiding a formulation where, no matter what empirically happens, the survey respondents/judges will say that starting the discussion now is above the expected value threshold. I'd be willing to abandon this condition/criterion if it's too hard to agree on, but then the bet would be missing one of the cruxes of the disagreement (possibly the most important one).