Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
Following up a bit on this, @parconley. The second post in Zvi's covid-19 series is from 6pm Eastern on March 13, 2020. Let's remember where this is in the timeline. From my quick take above:
On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13. The same day, 15 more U.S. states closed their schools. Also on the same day, Canada's Parliament shut down because of the pandemic.
Zvi's post from March 13, 2020 at 6pm is about all the school closures that happened that day. (The U.S. state of emergency was declared that morning.) It doesn't make any specific claims or predictions about the spread of the novel coronavirus, or anything else that could be assessed for prescience. It mostly focuses on the social functions schools serve (particularly in the United States, and in New York State specifically) beyond teaching children, such as providing free meals and supervision.
The third post from Zvi is from March 17, 2020, and it's mostly personal blogging. There are a few relevant bits. For one, Zvi admits he was surprised at how bad the pandemic was at that point:
Regret I didn’t sell everything and go short, not because I had some crazy belief in efficient markets, but because I didn’t expect it to be this bad and I told myself a few years ago I was going to not be a trader anymore and just buy and hold.
He argues New York City is not locking down soon enough and San Francisco is not locking down completely enough. About San Francisco, one thing he says is:
Local responses much better. Still inadequate. San Francisco on strangely incomplete lock-down. Going on walks considered fine for some reason, very strange.
I don't know how sound this was given what experts knew at the time; it might have been the right call. I will just say that, in retrospect, going outside seems to be one of the things we originally thought wasn't fine and later decided was actually fine after all.
The next post after that isn't until April 1, 2020. It's about the viral load of covid-19 infections and the question of how much viral load matters. By this point, we're getting into questions about the unfolding of the ongoing pandemic, rather than questions about predicting the pandemic in advance. You could potentially go and assess that prediction track record separately, but that's beyond the scope of my quick take, which was to assess whether LessWrong called covid early.
I genuinely don't know how to answer these polls. I find it much easier to think about a shorter timeframe like the next 20 years — although even just that is hard enough — rather than try to predict the future over a timespan of 275+ years.
I find it much easier to say that the creation of AGI (specifically as I define it here, since some people even call o3 "AGI") is extremely unlikely by 2035 (i.e., much less than a 0.01% or 1 in 10,000 chance), let alone the Singularity.
("Crazy" seems hazy, to the point that it probably needs to be decomposed into multiple different questions to make it a true forecast — although I can respect just asking people for a vague vibe just as a casual exercise, even though it won't resolve unambiguously in retrospect.)
My problem with trying to put a median year on AGI is that I have absolutely no idea how to do that. If science and technology continue, indefinitely, to make progress in the sort of way they have for the last 100-300 years, then it seems inevitable humans will eventually invent AGI. Maybe there's a chance it's unattainable for reasons most academics and researchers interested in the subject don't currently anticipate.
For instance, one estimate is that to rival the computation of a single human brain, a computer would need to consume somewhere between 300 times and 300 billion times as much electricity as the entire United States currently does. If that estimate is accurate, and if building AGI requires that much computation and that much energy, then, at the very least, AGI is far less attainable than even some relatively pessimistic and cautious academics and researchers might have guessed. Imagine the amount of scientific and technological progress required to produce that much energy, or to perform that much computation with commensurately greater energy efficiency.
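As a rough back-of-envelope sketch of what that range would imply, here are a few lines of Python. The U.S. and world electricity figures (~4,000 TWh/year and ~30,000 TWh/year) are my own approximations, not part of the original estimate:

```python
# Back-of-envelope sketch of the energy scale implied by the estimate above.
# Assumed figures (mine, not from the original estimate):
US_TWH_PER_YEAR = 4_000      # approximate U.S. annual electricity consumption
WORLD_TWH_PER_YEAR = 30_000  # approximate world annual electricity generation

for label, multiple in [("low end (300x)", 300),
                        ("high end (300 billion x)", 300_000_000_000)]:
    required_twh = US_TWH_PER_YEAR * multiple
    ratio_to_world = required_twh / WORLD_TWH_PER_YEAR
    print(f"{label}: ~{required_twh:.1e} TWh/year, "
          f"roughly {ratio_to_world:.0e} times current world electricity generation")
```

Even the low end works out to dozens of times current worldwide electricity generation, which is the scale of progress the paragraph above is gesturing at.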
Let's assume, for the sake of argument, that computation and energy are not issues, and it's just about solving the research problems. I just went on random.org and randomly generated a number between 1 and 275, to represent the range of years asked in this polling question. The result I got was 133 years. 133 years from now is 2158. So, can I do better than that? Can I guess a median year that's more likely to be accurate, or at least more likely to be close, than a random number generator? Do I have a better methodology than using random.org? Why should I think so? This is a fundamental question, and it underlies this whole polling exercise, as well as most if not all forecasting related to AGI. For instance, is there any scientific or historical evidence that anyone has ever been able to predict, with any accuracy at all, when scientific research problems would be solved or when fundamentally new technologies would be developed? If so, where's the evidence? Let's cite it to motivate these exercises. If not, why should we think we're in a different situation now where we are better able to tell how the future will unfold?
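For what it's worth, here's a minimal sketch of that random-baseline exercise in Python, with random.randint standing in for random.org; the 2025 base year is inferred from the 133 -> 2158 arithmetic above:

```python
import random

# A uniform random baseline for a "median year of AGI" guess, mirroring the
# random.org draws described above. The 1-275 range is the range of years in
# the poll; the 2025 base year is inferred from 133 -> 2158 in the text.
BASE_YEAR = 2025

def random_median_year() -> int:
    """Draw a uniformly random offset between 1 and 275 years and return the calendar year."""
    return BASE_YEAR + random.randint(1, 275)

print(random_median_year())  # e.g., 2158 or 2083, as in the two draws discussed here
```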
The mental picture I have of the long-term future, when I think about forecasting when the fundamental science and technology problems prerequisite to building AGI will be solved, is of a thick fog: I can see clearly only a little distance in front of me, can see foggily a little bit further, and after that everything descends into completely opaque gray-white mist. Is 2158 the median year? I tried random.org again. I got 58, which would be 2083. Which year is more likely to be the median, 2083 or 2158? Are they equally likely to be the median? I have no idea how to answer these questions. For all I know, they might be fundamentally impossible to answer. The physicist David Deutsch makes the argument (e.g., in this video at 6:25) that we can't predict the content of scientific knowledge we don't yet have, since predicting the content would be equivalent to knowing it now, and we don't yet know it. This makes sense to me.
We don't yet know what the correct theory of intelligence is. We don't know the content of that theory. The theory that human-like intelligence is just current-generation deep neural networks scaled up 1,000x would imply a close median year for AGI. Other theories of intelligence would imply something else. If the specific micro-architecture of the whole human brain is what's required for human-like intelligence (or general intelligence), then that implies AGI is probably quite far away, since we don't yet know that micro-architecture and don't yet have the tools in neuroscience to find it out. Even if we did know it, reconstructing it in a simulation would pose its own set of scientific and technological challenges. Since we don't know what the correct theory of intelligence is, we don't know how hard it will be to build an intelligence like our own using computers, and therefore we can't predict when it will happen.
My personal view that AGI is extremely unlikely (much less than 0.01% likely) before the end of 2035 comes from my beliefs that 1) human-like intelligence is definitely not current-gen deep neural networks scaled up 1,000x, 2) the correct theory of intelligence is not something nearly so simple or easy (e.g., if AGI could have been solved by symbolic AI, it probably would have been solved a long time ago), and 3) it's extremely unlikely that all the necessary scientific discoveries and technological breakthroughs, from fundamental theory to practical implementation, will happen within the next ~9 years. Scientists, philosophers, and AI researchers have been trying to understand the fundamental nature of intelligence for a long time. The foundational research for deep learning goes back around 40 years, and it built on research that's even older. Today, if you listen to ambitious AI researchers like Yann LeCun, Richard Sutton, François Chollet, and Jeff Hawkins, each is confident in a research roadmap to AGI, but the four roadmaps are completely different and based on completely different ideas. So it's not like the science and philosophy of intelligence is converging toward any particular theory or solution.
That's a long, philosophical answer to this quick poll question, but I believe that's the crux of the whole matter.
Well, the evidence is there if you're ever curious. You asked for it, and I gave it.
David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as essays by a few other people well-known in the effective altruism community and the LessWrong community. He has a number of published academic papers on rationality, epistemology, cognition, existential risk, and AI. He's also about as deeply familiar with the effective altruism community as it's possible for someone to be, and he has a deep familiarity with the LessWrong community as well.
In my opinion, David Thorstad has a deeper understanding of the EA community's ideas and community dynamics than many people in the community do, and, given the overlap between the EA community and the LessWrong community, his understanding extends to a significant degree to the LessWrong community as well. I think people in the EA community are accustomed to drive-by criticisms from people who have paid minimal attention to EA and its ideas, but David has spent years interfacing with the community and doing both academic research and blogging related to EA. So, what he writes are not drive-by criticisms, and, indeed, a number of people in EA apparently listen to him, read his blog posts and academic papers, and take him seriously. All this to say: his work isn't something that can be dismissed out of hand. It's the kind of scrutiny or critical appraisal that people in EA have been saying they want for years. Here it is, so folks should at least give it a chance.
To me, "ugly or difficult topics should be discussed" is an inaccurate euphemism. I don't think the LessWrong community is particularly capable of or competent at discussing ugly or difficult topics. I think they shy away from the ugly and difficult parts, and generally don't have the stomach or emotional stamina to sit through the discomfort. What instead is happening in the LessWrong community is people are credulously accepting ugly, wrong, evil, and stupid ideas in some part due to an inability to handle the discomfort of scrutinizing them and in large part due to just an ideological trainwreck of a community that believes ridiculous stuff all the time (like the many examples I gave above) and typically has atrocious epistemic practices (e.g. people just guess stuff or believe stuff based on a hunch without Googling it; the community is extremely insular and fiercely polices the insider/outsider boundary — landing on the right side of that boundary is sometimes what even determines whether people keep their job, their friends, their current housing, or their current community).
I spun this quick take out as a full post here. When I submitted the full post, there was little to no engagement on this quick take. In the future, I'll try to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances.
Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
Help me make sure I'm understanding this right. You're at position #4 from left to right, which corresponds to 2029 according to your list. So you think there's a 50% chance of a combination of the "crazy" scenarios happening by 2029, right?
Unfortunately, the EA Forum's poll software makes it hard to ask certain kinds of questions. Your prediction is listed as "70% 2026", but that's just an artifact of the poll software.
To make it clear to readers what people are actually predicting, and to make sure people giving predictions understand the system properly, you might want to add instructions asking people to write something like '50% chance the Year of Crazy happens by 2029' at the top of their comments. That would at least save readers the trouble of cross-referencing the list for every single prediction.
I tried to do a poll on people’s AI bubble predictions and I ran into a similar issue with the poll software displaying the results confusingly.
Sometimes for social change, having the older generation die off or otherwise lose power is useful. There's not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes to do some of the hard work for them.
I don't agree with this causal model/explanatory theory.
This is some kind of at least partly deterministic theory about culture, one that says culture is driven by forces that can't be steered by human creativity, agency, knowledge, or effort. I don't agree with that view. I think culture is changed by what people decide to do.
That's not an accounting trick in my book -- there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren't the benefits I care about, and Big Hologram isn't likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
That depends on two assumptions: 1) that the upfront money would be supplied by someone else, and 2) that the down-the-line money wouldn't be.
I guess it could theoretically be true that both assumptions are correct, and maybe we can imagine a scenario where you would have good reasons to believe both of these things, but in practice I think it's rare that we ever really know things like that. So, while it's possible to imagine scenarios where the upfront money will definitely be supplied by someone else and the down-the-line money definitely won't, what does that tell us about whether this is a good idea in practice?
The hologram example is making the point that if producing an outcome requires a certain pool of dollars, the overall cost-effectiveness of producing that outcome doesn't depend on which of those dollars are yours. I think your point is that your marginal cost-effectiveness could be much higher or lower depending on what's going to happen if you do nothing. That's true; I just don't think we can actually know what's going to happen if you do nothing, and the best version of that estimate still seems to be guesswork or hunches.
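To make that distinction concrete, here's a toy numeric sketch (all numbers invented for illustration): the overall cost-effectiveness of the outcome is fixed, while your marginal cost-effectiveness swings entirely on the counterfactual of what happens if you do nothing.

```python
# Toy illustration of overall vs. marginal cost-effectiveness.
# All numbers are made up for the example.
total_cost = 100.0       # total dollars required to produce the outcome
total_benefit = 1_000.0  # total benefit of the outcome (arbitrary units)
my_contribution = 20.0   # the dollars that are "yours"

# Overall cost-effectiveness: the same regardless of whose dollars they are.
overall = total_benefit / total_cost  # 10 units per dollar

# Marginal cost-effectiveness of your $20 depends on the counterfactual:
# Case 1: without your $20, the outcome doesn't happen at all.
marginal_if_pivotal = (total_benefit - 0.0) / my_contribution  # 50 units per dollar

# Case 2: without your $20, someone else fills the gap and the outcome happens anyway.
marginal_if_replaced = (total_benefit - total_benefit) / my_contribution  # 0 units per dollar

print(overall, marginal_if_pivotal, marginal_if_replaced)
```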
It also seems like an oddly binary choice of the sort that doesn't really exist in real life. If you have significant philanthropic money, can you really not affect what others do? Let's flip it: if another philanthropist said they would subsidize holograms down the line, that would affect what you would do. So, why not think you have the same power?
What seems to be emerging here is an overall theme: 'the future will happen the way it's going to happen regardless of what we do about it' vs. 'we have the agency to change how events play out starting right now'. I definitely believe the latter and definitely disbelieve the former. We have agency. And, on the other hand, we can't predict the future.
Who was it who recently quoted someone, maybe the physicist David Deutsch or the psychologist Steven Pinker, saying something like: how terrible would it be if we could predict the future? Because that would mean we had no agency.
The first post listed there is from March 2, 2020, so that's relatively late in the timeline we're considering, no? That's 3 days later than the February 28 post I discussed above as the first/best candidate for a truly urgent early warning about covid-19 on LessWrong. (2020 was a leap year, so there was a February 29.)
That first post from March 2 also seems fairly simple and not particularly different from the February 28 post (which it cites).