I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.
Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.
How is the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world’s leading infectious disease experts.
The prospect is daunting. A pandemic — an ongoing epidemic on two or more continents — may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed, noting that in China the streets are deserted due to the outbreak, it compares the novel coronavirus to the 1918-1920 Spanish flu, and it gives expert quotes like this one:
It is “increasingly unlikely that the virus can be contained,” said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don't start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC's guidance at the time about covid-19. It says that CFAR's workshops "should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced." This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
By the time the posts on LessWrong get really, really worried, in the last few days of February and the first week of March, much of the rest of the world was reacting in the same way.
From February 14 to February 25, the S&P 500 dropped about 7.5%. Around this time, financial analysts and economists issued warnings about the global economy.
On February 25, 2020, the CDC warned Americans of the possibility that "disruption to everyday life may be severe". The CDC made this bracing statement:
It's not so much a question of if this will happen anymore, but more really a question of when it will happen — and how many people in this country will have severe illness.
Another line from the CDC:
We are asking the American public to work with us to prepare with the expectation that this could be bad.
On February 26, Canada's Health Minister advised Canadians to stockpile food and medication.
The most prominent LessWrong post from late February warning people to prepare for covid came a few days later, on February 28. So, on this comparison, LessWrong was actually slightly behind the curve. (Oddly, that post insinuates that nobody else is telling people to prepare for covid yet, and congratulates itself on being ahead of the curve.)
In the beginning of March, the number of LessWrong posts tagged with covid-19 explodes, and the tone gets much more alarmed. The rest of the world was responding similarly at this time. For example, on February 29, 2020, Ohio declared a state of emergency around covid. On March 4, Governor Gavin Newsom did the same in California. The governor of Hawaii declared an emergency the same day, and over the next few days, many more states piled on.
Around the same time, the general public was becoming alarmed about covid. In the last days of February and the first days of March, many people stockpiled food and supplies. On February 29, 2020, PBS ran an article describing an example of this at a Costco in Oregon:
Worried shoppers thronged a Costco box store near Lake Oswego, emptying shelves of items including toilet paper, paper towels, bottled water, frozen berries and black beans.
“Toilet paper is golden in an apocalypse,” one Costco employee said.
Employees said the store ran out of toilet paper for the first time in its history and that it was the busiest they had ever seen, including during Christmas Eve.
A March 1, 2020 article in the Los Angeles Times reported on stores in California running out of product as shoppers stockpiled. On March 2, an article in Newsweek described the same happening in Seattle:
Speaking to Newsweek, a resident of Seattle, Jessica Seu, said: "It's like Armageddon here. It's a bit crazy here. All the stores are out of sanitizers and [disinfectant] wipes and alcohol solution. Costco is out of toilet paper and paper towels. Schools are sending emails about possible closures if things get worse.
In Canada, the public was responding the same way. Global News reported on March 3, 2020 that a Costco in Ontario ran out of bottled water, toilet paper, and paper towels, and that the situation was similar at other stores around the country. The spike in worried posts on LessWrong coincides with the wider public's reaction. (If anything, the posts on LessWrong are very slightly behind the news articles about stores being picked clean by shopper stockpiling.)
On March 5, 2020, the cruise ship Grand Princess made the news because it was stranded off the coast of California due to a covid outbreak on board. I remember this as being one seminal moment of awareness around covid. It was a big story. At this point, LessWrong posts are definitely in no way ahead of the curve, since everyone is talking about covid now.
On March 8, 2020, Italy went on partial lockdown, then on full lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended its season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13, and that same day 15 more U.S. states closed their schools and Canada's Parliament shut down because of the pandemic. By then, everyone knew it was a crisis.
So, did LessWrong call covid early? I see no evidence of that. The timeline of LessWrong posts about covid follows the timeline of the world at large's reaction to covid, increasing in alarm as journalists, experts, and governments increasingly rang the alarm bells. In some comparisons, LessWrong's response was a little bit behind.
The only curated post from this period (and the post with the third-highest karma, one of only four posts with over 100 karma) tells LessWrong users to prepare for covid three days after the CDC told Americans to prepare, and two days after Canada's Health Minister told Canadians to stockpile food and medication. When that post was published, many people were already stockpiling supplies, partly because government health officials had told them to. (The LessWrong post was originally published on a blog a day before and, based on a note in the text, apparently written the day before that; even so, that puts the writing of the post a day after the CDC warning.)
Unless there is some evidence that I didn't turn up, it seems pretty clear the self-congratulatory narrative is a myth. The self-congratulation actually started in that post published on February 28, 2020, which, again, is odd given the CDC's warning three days before, analysts' and economists' warnings about the global economy a bit before that, and the New York Times article warning about a probable pandemic at the beginning of the month. The post is slightly behind the curve, but it's gloating as if it's way ahead.
I think people should be skeptical and even distrustful toward the claims of the LessWrong community, both on topics like pandemics and about its own track record and mythology. Obviously this myth is self-serving, and it was pretty easy for me to disprove in a short amount of time — so anyone who is curious can check and see that it's not true. The people in the LessWrong community who believe the community called covid early probably believe that because it's flattering. If they actually wondered whether it was true and checked the timelines, it would become pretty clear that it didn't actually happen.
Rate limiting on the EA Forum is too strict. Given that people karma-downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:
You aren't just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people), you're rate limited for as many days as it takes you to gain 25 net karma on new comments — which might take a while, since you can only leave one comment per day, and, also, people might keep downvoting your unpopular comment. (Unless you delete it — which I think I've seen happen, but which I won't do myself, because I'd rather be rate limited than self-censor.)
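To get a feel for how long that recovery can take, here is a minimal sketch of the dynamics as I understand them (the numbers and the function are my own illustration, not the Forum's actual implementation):

```python
# Illustrative sketch only -- assumed mechanics, not the EA Forum's real code.
# Premise: while rate limited, you may post one comment per day, and the limit
# lifts once your new comments have accumulated 25 net karma.

def days_to_recover(karma_per_comment: int, threshold: int = 25) -> float:
    """Days until `threshold` net karma accumulates at one comment per day."""
    if karma_per_comment <= 0:
        # If each new comment is downvoted to zero or below, you never recover.
        return float("inf")
    days, karma = 0, 0
    while karma < threshold:
        karma += karma_per_comment
        days += 1
    return days

print(days_to_recover(2))  # 13 days if each comment nets +2 karma
print(days_to_recover(1))  # 25 days if each comment nets only +1
```

Even under fairly generous assumptions about how each new comment is received, the limit can persist for weeks, which is the point being made above.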
The rate limiting system is a brilliant idea for new users or users who have less than 50 total karma — the ones who have little plant icons next to their names…
I think this highlights why some necessary design features of the karma system don't translate well to a system that imposes soft suspensions on users. (To be clear, I find a one-comment-per-day limit based on the past 20 comments/posts to cross the line into soft suspension territory; I do not suggest that rate limits are inherently soft suspensions.)
I wrote a few days ago about why karma votes need to be anonymous and shouldn't (at least generally) require the voter to explain their reasoning; the votes suggested general agreement on those points. But a soft suspension of an established user is a different animal, and requires greater safeguards to protect both the user and the openness of the Forum to alternative views.
I should emphasize that I don't know who cast the downvotes that led to Yarrow's soft suspension (which were on this post about MIRI), or why they cast their votes. I also don't follow MIRI's work carefully enough to have a clear opinion on the merits of any individual vote by the lights of the ordinary purposes of karma. So I do not intend to imply dodgy conduct by anyone. But: "Justice must not only be done, but must also be seen to be done." People who are…
The NPR podcast Planet Money just released an episode on GiveWell.
A number of podcasts are doing a fundraiser for GiveDirectly: https://www.givedirectly.org/happinesslab2025/
Podcast about the fundraiser: https://pca.st/bbz3num9
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term AGI?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.
Scaling running out of steam actually means three things:
1) Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute.
2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3) There is an absolute ceiling to the amount of data LLMs can train on that they are probably approaching.
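Points 1 and 2 can be seen together in a toy model (purely illustrative; it assumes benchmark-style capability tracks the logarithm of compute, which is a rough empirical pattern, not an established law):

```python
import math

# Toy model only: if a capability score tracks log10(compute), every 10x
# increase in compute buys the same fixed bump on the score, while the
# compute bill grows tenfold -- so each successive bump costs ~10x more,
# even before asking whether equal score bumps are equally useful.

def capability(compute_flop: float) -> float:
    return math.log10(compute_flop)

for exponent in (24, 25, 26):  # hypothetical training-compute scales, in FLOP
    gain = capability(10.0 ** (exponent + 1)) - capability(10.0 ** exponent)
    print(f"10^{exponent} -> 10^{exponent + 1} FLOP: score +{gain:.1f}, cost x10")
```

Under this assumption the score gain per 10x stays flat while the spending required grows tenfold each step, which is one way to make claims 1 and 2 concrete.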
So, AI investment depends on financial expectations that in turn depend on LLMs enhancing productivity, which isn't happening and probably won't happen due to fundamental problems with LLMs and due to…
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
I just want to point out that I have a degree in philosophy and have never heard the word "epistemics" used in the context of academic philosophy. The word used has always been either "epistemology" or "epistemic" as an adjective in front of a noun (always an adjective, never a standalone noun, and certainly never pluralized).
From what I can tell, "epistemics" seems to be weird EA Forum/LessWrong jargon. Not sure how or why this came about, since this is not obscure philosophy knowledge, nor is it hard to look up.
If you Google "epistemics" phil…
I agree this is just a unique rationalist use. Same with 'agentic', though that has possibly crossed over into the more mainstream, at least in tech-y discourse.
However I think this is often fine, especially because 'epistemics' sounds better than 'epistemic practices' and means something distinct from 'epistemology' (the study of knowledge).
Always good to be aware you are using jargon though!
I find "epistemics" neat because it is shorter than "applied epistemology" and reminds me of "athletics" and the implied focus on practice. I don't think anyone ever explained what "epistemics" refers to; I thought it was pretty self-explanatory from the similarity to "athletics".
I also disagree with the general notion that jargon specific to a community is necessarily bad, especially if that jargon has fewer syllables. Most subcultures, engineering disciplines, and sciences invent words or abbreviations for more efficient communication, and while some of that may be due to trying to gatekeep, it's so universal that I'd be surprised if it doesn't carry value. There can be better and worse coinages of new terms, and three/four/five-letter abbreviations such as "TAI" or "PASTA" or "FLOP" or "ASARA" are worse than words like "epistemics" or "agentic".
I guess ethics makes the distinction between normative ethics and applied ethics. My understanding is that epistemology is not about practical techniques, and that one can make a distinction here (just like the distinction between "methodology" and "methods").
I tried to figure out if there's a pair of su…
I used to feel so strongly about effective altruism. But my heart isn't in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.
But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk h…
I'd distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they've always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil's website and so far this year it seems well down from 2018 but also well up from 2015, but also 2 months isn't much of a sample.) Not that that means you're not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of hostile comments toward NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then there are like 90-something to 30-something agree votes and 200 karma on Yarrow's comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people dropped the ball here really, people were struggling honestly to take accusations of bad behaviour seriously without getting into witch hunt dynamics.
Your help requested:
I’m seeking second opinions on whether my contention in Edit #4 at the bottom of this post is correct or incorrect; see that edit for full details.
Brief info:
My contention is about the Forecasting Research Institute’s recent LEAP survey.
One of the headline results from the survey is about the probabilities the respondents assign to each of three scenarios.
However, the question uses an indirect framing — an intersubjective resolution or metaprediction framing.
The specific phrasing of the question is q…
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble.
Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031).
For those who know more about forecasting than me, and especially for those who can think of good w... (read more)
Self-driving cars are not close to getting solved. Don’t take my word for it. Listen to Andrej Karpathy, the lead AI researcher responsible for the development of Tesla’s Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term "vibe coding".)
From Karpathy’s October 17, 2025 interview with Dwarkesh Patel:
Dwarkesh Patel 01:42:55
You’ve talked about how you were at Tesla leading self-driving from 2017 to 2022. And you firsthand saw this progress from c…
Since my days of reading William Easterly's Aid Watch blog back in the late 2000s and early 2010s, I've always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it's worth it.
So much of what effective altruism does, including around global poverty, including around the most evidence-based and quantitative work on global poverty, relies on people's intuitions, and people's intuitions formed from living in wealthy, Western countries with no connection to or experience of a globally poor country are going to be less accurate than people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren't.
I agree with most of what you say here — indeed, all things being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from anywhere else. The problem is your caveats - things are almost never equal...
1) Education systems just aren't nearly as good in lower income countries. This means that education is sadly barely ever equal. Even between low income countries - a Kenyan once joked with me that "a Ugandan degree holder is like a Kenyan high school leaver". If you look at the top echelon of NGO/charity leaders from low-income countries whose charities have grown and scaled big, most have been at least partially educated in richer countries.
2) Ability to network is sadly usually so so much higher if you're from a higher income country. Social capital is real and insanely important. If you look at the very biggest NGOs, most of them are founded not just by Westerners, but by IVY LEAGUE OR OXBRIDGE EDUCATED WESTERNERS. Paul Farmer (Partners in Health) from Harvard, Raj Panjabi (LastMile Health) from Harvard. Paul Niehaus (GiveDirectly) from Harvard. Rob Mathers (AMF) Harvard AND Cambridge. With those connections you ca…
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model or maybe just good prompting or something else.
@Toby Tremlett🔹, are you SummaryBot's keeper? Or did you just manage its evil twin?
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again,…
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the…
Here is the situation we're in with regard to near-term prospects for artificial general intelligence (AGI). This is why I'm extremely skeptical of predictions that we'll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can't score above 5% on the ARC-AGI-2 benchmark, they can't automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something…
Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried's behaviour years before the FTX collapse?
Slight update to the odds I’ve been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I’ve been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That’s something I think is significantly more probable than AGI by the end of 2032. Previously, I’d been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight…
Yann LeCun (a Turing Award-winning pioneer of deep learning) leaving Meta AI — and probably, I would surmise, being nudged out by Mark Zuckerberg (or another senior Meta executive) — is a microcosm for everything wrong with AI research today.
LeCun is the rare researcher working on fundamental new ideas to push AI forward on a paradigm level. Zuckerberg et al. seem to be abandoning that kind of work to focus on a mad dash to AGI via LLMs, on the view that enough scaling and enough incremental engineering and R&D will push current LLMs all the way…
Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.
Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.
This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can feel…