My biggest takeaway from the Essays on Longtermism anthology is that irrecoverable collapse is a serious concern and we should not assume that humanity will rebound from a global catastrophe. The two essays that convinced me of this were "Depopulation and Longtermism" by Michael Geruso and Dean Spears and "Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models" by Gustav Alexandrie and Maya Eden. These essays argue that human population does not automatically or necessarily grow in the rapid, exponential way we became accustomed to over the last few hundred years.

In the discourse on existential risk, it's often assumed that even if only 1% of the human population survives a global disaster, eventually humanity will rebound. On this assumption, while extinction reduces future lives to zero, a disaster that kills 99% of the human population only reduces the eventual number of future lives from some astronomically large figure to some modestly lower astronomically large figure. This idea goes back to Derek Parfit, who (as far as I know) was the first analytic philosopher to discuss human extinction from a population ethics standpoint. Nick Bostrom, who is better known for popularizing the topic of existential risk, has cited Parfit as an influence. So, this assumption has been with us from the beginning.

Irrecoverable collapse, as I would define it, means that population never rebounds to pre-collapse levels, and that science, technology, and industry never recover to pre-collapse levels either. So, digital minds and other futuristic fixes don't get us out of the jam. While the two aforementioned papers are primarily about population, the paper on depopulation by Geruso and Spears also persuasively argues that technological progress depends on population. This spells trouble for any scenario where a global catastrophe kills a large percentage of living human beings.[1]

While a small global population of humans might live on Earth for a very long time, the overall number of future lives will be much less than if science and technology continued to progress, if the global economy continued to grow, and if global population continued to grow or at least stayed roughly steady. If irrecoverable collapse reduces the number of future lives by something like 99.9%, we should be concerned about irrecoverable collapse for the same reason we're concerned about extinction.[2]

For several kinds of existential threat, such as asteroids, pandemics, and nuclear war, it seems like the chance of an event that kills a devastating percentage of the world's population but not 100% is significantly higher than the chance of a full-on extinction event. If irrecoverable collapse scenarios are almost as bad as extinction events, then the putatively greater likelihood of irrecoverable collapse scenarios probably matters a lot!
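To make the comparison concrete, here is a minimal expected-loss sketch in Python. All of the probabilities and loss fractions below are made-up placeholder numbers, not estimates from the essays discussed above; the point is only to show how a more likely but slightly less bad scenario can dominate the expected loss of future lives.

```python
# Illustrative expected-loss comparison. All numbers below are made-up
# placeholders, not estimates from the essay or the papers it cites.

ASTRONOMICAL_FUTURE = 1.0   # normalize "full future potential" lives to 1

# Hypothetical per-century probabilities:
p_extinction = 0.001        # a full extinction event
p_collapse = 0.01           # a sub-extinction catastrophe causing irrecoverable collapse

# Hypothetical fraction of future lives lost in each scenario:
loss_extinction = 1.0       # extinction loses everything
loss_collapse = 0.999       # collapse loses ~99.9% of future lives

expected_loss_extinction = p_extinction * loss_extinction * ASTRONOMICAL_FUTURE
expected_loss_collapse = p_collapse * loss_collapse * ASTRONOMICAL_FUTURE

print(f"Expected loss, extinction:             {expected_loss_extinction:.5f}")
print(f"Expected loss, irrecoverable collapse: {expected_loss_collapse:.5f}")
# With these placeholder numbers the collapse term is roughly 10x larger,
# which is the sense in which a more likely but slightly less bad scenario
# can dominate the expected loss of future lives.
```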

If irrecoverable collapse reduces the number of future lives by almost as much as extinction and if irrecoverable collapse scenarios are more likely than extinction scenarios, then it may be more important to try to prevent irrecoverable collapse than extinction. In practice, maybe trying to prevent extinction looks the same as trying to prevent sub-extinction disasters. For example, pandemic prevention probably looks similar whether you're trying to prevent another pandemic like covid-19 or a pandemic 10x worse or a pandemic 10x worse than that. However, I can think of two areas where this idea about irrecoverable collapse might be practically relevant:

  1. It might become more important to detect smaller asteroids using space telescopes like NASA's planned NEO Surveyor. It's plausible to think there may be asteroids that are too small to cause human extinction but large enough to cause irrecoverable collapse, especially if they hit a densely populated part of Earth. (Similar reasoning might apply to other threats like large volcanoes.)
  2. Maybe it's worthwhile thinking more about ways to reboot civilization after a collapse. There has been some discussion in the existential risk literature about long-term shelters or refuges, which could be a relevant intervention. See, for example, Nick Beckstead's excellent paper on the topic. However, Beckstead's paper seems to make the assumption that I'm now saying is dubious: if even a small number of people survive, that's good enough.
     

One topic not discussed in Essays on Longtermism is humanity's one-time endowment of easily accessible fossil fuels. These fossil fuels have been used up, and if industrial civilization collapsed, it could not be rebooted along the same pathway it originally took. A hopeful idea I once heard offered in this context was that maybe charcoal, which is made from wood, could replace coal. I don't know whether or not that is feasible. This is a worrying problem, and if there are any good ideas for how to solve it, I would love to hear them.

There are other considerations. For example, if humanity regressed to a pre-scientific stage, are we confident that a Scientific Revolution would eventually happen again? Is the Scientific Revolution inevitable and guaranteed, given enough time, or are we lucky that it happened? 

Let's say we want to juice the odds. Could we store scientific knowledge over the very long term, possibly carved in stone or engraved in nickel, in a way that would remain understandable to people for centuries after a collapse? How might we encourage future people to care about this knowledge? Would people be curious about it? How could we make sure they would find it? 

Not much research has been done into so-called "doomsday archives". To clarify: there has been some research on how to physically store data for a very long time, with proofs of concept that store data in dehydrated DNA or that use lasers to encode data in quartz glass or diamond. However, very little research has been done into how to make information accessible and understandable to a low-tech society that has drifted culturally and linguistically away from the creators of the archive in the centuries following a global disaster.

If irrecoverable collapse is indeed as important as I have entertained it to be in this essay, then a few recommendations follow:

  • People who are concerned about existential risks primarily because of the reduction in the number of future lives should look more broadly at mitigating potential disasters that would not cause extinction but might cause an irrecoverable collapse.
  • That same class of people should look into any way that a devastated civilization could recover without the easily accessible fossil fuels that human civilization had the first time around.
  • Another potential research direction is doomsday archives that can preserve knowledge not only physically but also practically for people with limited technology and limited background knowledge.

In short, we should not assume humanity will automatically recover from a sub-extinction global catastrophe and should plan accordingly.

  1. ^

    If we were able to create digital minds, concerns about the biological human population and fertility rates would suddenly become much less pressing. However, getting to the point where we can create digital minds would require that the human population not collapse before then.

  2. ^

    This is not a new idea. As early as 2002, Nick Bostrom defined an existential risk as: "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Even so, I think this idea has been under-emphasized.

Comments



I agree that extinction has been overemphasized in the discussion of existential risk. I would add that it's not just irrecoverable collapse, but also the potential increased risk of subsequent global totalitarianism or of worse values ending up in AI. Here are some papers I have co-authored that address some of these issues: 1, 2, 3, 4. And here is another relevant paper: 1, and a very relevant project: 2.

Thanks for sharing the papers. Some of those look really interesting. I’ll try to remember to look at these again when I think of it and have time to absorb them. 

What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now? 

I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions about what AGI will be like and how it will be built. For instance, the concept of “value lock-in” wouldn’t apply to AGI created through human brain emulation. And for other technological paradigms that could underlie AGI, are they like human brain emulation in this respect or unlike it? But this is starting to get off-topic for this post.

Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now? 

Though it may indeed be more likely for the world to slide into global totalitarianism after recovering from a collapse, I was referring to a scenario where there was no collapse, but the catastrophe pushed us towards totalitarianism. Some people think the world could have ended up totalitarian if World War II had gone differently.

What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

I don't think it's the most cost-effective way of mitigating X risk, but I guess you could think of it as plan F:

Plan A: prevent catastrophes

Plan B: contain catastrophes (e.g. not escalating nuclear war or suppressing an extreme pandemic)

Plan C: resilience despite the catastrophe getting very bad (e.g. maintaining civilization despite blocking of the sun, or despite collapse of infrastructure because workers are too afraid of a pandemic to show up)

Plan D: recover from collapse of civilization

Plan E: refuge in case everyone else died

Plan F: resurrect civilization

I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions of what AGI will be like and how it will be built. 

I think value lock-in does not depend on the MIRI worldview - here's a relevant article.

Thank you for sharing your perspective. I appreciate it. 

I definitely misunderstood what you were saying about global totalitarianism. Thank you for clarifying. I will say I have a hard time guessing how global totalitarianism might result from a near-miss or a sub-collapse disaster involving one of the typical global catastrophe scenarios, like nuclear war, pandemics (natural or bioengineered), asteroids, or extreme climate change. (Maybe authoritarianism or totalitarianism within some specific countries, sure, but a totalitarian world government?)

To be clear, are you saying that your own paper about storing data on the Moon is also a Plan F? I was curious what you thought of the Arch Mission Foundation because your paper proposes putting data on the Moon and someone has actually done that! They didn't execute your specific idea, of course, but I wondered how you thought their idea stacked up against yours.

I definitely agree that putting data on the Moon should be at best a Plan F, our sixth priority, if not even lower! I think the chances of data on the Moon ever being useful are slim, and I don't want the world to ever get into a scenario where it would be useful!

I think value lock-in does not depend on the MIRI worldview - here's a relevant article.

Ah, I agree, this is correct, but I meant that the idea of value lock-in is inherited from a very specific way of thinking about AGI, primarily popularized by MIRI and its employees but also by people like Nick Bostrom (e.g. in his 2014 book Superintelligence). Thinking that value lock-in is a serious and likely concern with regard to AGI does not require you to subscribe to MIRI's or Bostrom's specific worldview on AGI. So, you're right in that respect.

But I think if recent history had played a little differently and ideas about AGI had been formed imagining that human brain emulation would be the underlying technological paradigm, or that it would be deep learning and deep reinforcement learning, then the idea of value lock-in would not be as popular in current discussions of AGI as it is. I think the popularity of the value lock-in idea is largely an artifact of the historical coincidence that many philosophical ideas about AGI got formed while symbolic AI or GOFAI was the paradigm people were imagining would produce AGI.

The same could be said for broader ideas about AI alignment. 
