Note: This post was crossposted from the Coefficient Giving Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.
----------------------------------------
It can feel hard to help factory-farmed animals. We’re up against a trillion-dollar global industry and its army of lobbyists, marketeers, and apologists. This industry wields vast political influence in nearly every nation and sells its products to most people on earth.
Against that, we are a movement of a few thousand full-time advocates operating on a shoestring. Our entire global movement — hundreds of groups combined — brings in less funding in a year than one meat company, JBS, makes in two days.
And we have the bigger task. The meat industry just wants to preserve the status quo: virtually no regulation and ever-growing demand for factory farming. We want to upend it — and place humanity on a more humane path.
Yet, somehow, we’re winning. After decades of installing battery cages, gestation crates, and chick macerators, the industry is now removing them. Once-dominant industries, like fur farming, are collapsing. And advocates are building momentum toward bigger reforms for all farmed animals.
Here are my top ten wins from this year:
1. Liberté et Égalité, for Chickens. France’s largest chicken producer, the LDC Group, committed to adopting the European Chicken Commitment for its two flagship brands by 2028 — a shift that French advocacy group L214 estimates will cover 40% of the national chicken market, or up to 400 million birds each year. Across the Channel, British supermarket chain Waitrose transitioned all its own-brand chicken to comply with the parallel UK Better Chicken Commitment.
2. Guten Cluck! The Wurst Is Over for German Animals. Germany’s top retailer, Edeka, committed to making all of its own-brand chicken products compliant with Germany’s equivalent of the European Chicken Commitment by 2030.
The Ezra Klein Show (one of my favourite podcasts) just released an episode with GiveWell CEO Elie Hassenfeld!
I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.
Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes a similarly ambivalent, cautious tone as many... (read more)
My gloss on this situation is:
YARROW: Boy, one would have to be a complete moron to think that COVID-19 would not be a big deal as late as Feb 28 2020, i.e. something that would imminently upend life-as-usual. At this point, China had locked down long ago, and even Italy had started locking down. Cases in the USA were going up and up, especially when you correct for the (tiny) amount of testing they were doing. The prepper community had certainly noticed, and was out in force buying out masks and such. Many public health authorities were also sounding alarms. What kind of complete moron would not see what’s happening here? Why is LessWrong patting themselves on the back for noticing something so glaringly obvious?
MY REPLY: Yes!! Yes, this is true!! Yes, you would have to be a complete moron to not make this inference!! …But man, by that definition, there sure were an awful lot of complete morons around, i.e. most everyone. LessWrong deserves credit for rising WAY above the incredibly dismal standards set by the public-at-large in the English-speaking world, even if they didn’t particularly surpass the higher standards of many virologists, preppers, etc.
My personal experience: As som... (read more)
Thanks for collecting this timeline!
The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic, but rather that they were unusually willing to do something about it because they take small-probability, high-impact events seriously. E.g., I suspect that you would say that Wei Dai was "late" because their comment came after the NYT article etc., but nonetheless they made a 700% return betting that covid would be a big deal.
I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, "By now, everyone knows it's a crisis" but sadly "everyone" did not include the California department of public health, who didn't issue stay at home orders for another week.
[I have a distinct memory of this because I told my girlfriend I couldn't see her anymore since she worked at the department of public health (!!) and was still getting a ton of exposure since the California public health department didn't think covid was that big of a deal.]
I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.
That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted, but rarely with much nuance. In particular: crypto, prediction markets, LVT, and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.
This matters to me for two reasons.
First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.
Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidenti... (read more)
Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:
You aren't just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people), you're rate limited for as many days as it takes you to gain 25 net karma on new comments — which might take a while, since you can only leave one comment per day, and, also, people might keep downvoting your unpopular comment. (Unless you delete it — which I think I've seen happen, but I won't do, myself, because I'd rather be rate limited than self-censor.)
The rate limiting system is a brilliant idea for new users or users who have less than 50 total karma — the ones who have little plant icons next to their nam... (read more)
I think this highlights why some necessary design features of the karma system don't translate well to a system that imposes soft suspensions on users. (To be clear, I find a one-comment-per-day limit based on the past 20 comments/posts to cross the line into soft suspension territory; I do not suggest that rate limits are inherently soft suspensions.)
I wrote a few days ago about why karma votes need to be anonymous and shouldn't (at least generally) require the voter to explain their reasoning; the votes suggested general agreement on those points. But a soft suspension of an established user is a different animal, and requires greater safeguards to protect both the user and the openness of the Forum to alternative views.
I should emphasize that I don't know who cast the downvotes that led to Yarrow's soft suspension (which were on this post about MIRI), or why they cast their votes. I also don't follow MIRI's work carefully enough to have a clear opinion on the merits of any individual vote through the lights of the ordinary purposes of karma. So I do not intend to imply dodgy conduct by anyone. But: "Justice must not only be done, but must also be seen to be done." People who are... (read more)
The NPR podcast Planet Money just released an episode on GiveWell.
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term AGI?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.
Scaling running out of steam actually means three things:
1) Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute.
2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3) There is an absolute ceiling to the amount of data LLMs can train on that they are probably approaching.
So, AI investment depends on financial expectations that in turn depend on LLMs enhancing productivity, which isn't happening and probably won't happen due to fundamental problems with LLMs and due t... (read more)
A number of podcasts are doing a fundraiser for GiveDirectly: https://www.givedirectly.org/happinesslab2025/
Podcast about the fundraiser: https://pca.st/bbz3num9
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
- Say what you mean, as plainly as possible.
- Try to use words and expressions that a general audience would understand.
- Be more casual and less formal if you think that means more people are more likely to understand what you're trying to say.
- To illustrate abstract concepts, give examples.
- Where possible, try to let go of minor details that aren't important to the main point someone is trying to make. Everyone slightly misspeaks (or mis... writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you're engaging in nitpicking.
- When you don't understand what someone is trying to say, just say that. (And be polite.)
- Don't engage in passive-aggressiveness or code insults in jargon or formal language. If someone's behaviour is annoying you, tell them it's annoying you. (If you don't want to do that, then you probably shouldn't try to communicate the same idea in a coded or passive-aggressive way, either.)
- If you're using an uncommon word
... (read more)

I used to feel so strongly about effective altruism. But my heart isn't in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.
But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk h... (read more)
I'd distinguish here between the community and actual EA work. The community, and especially its leaders, has undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they've always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil's website, and so far this year spending seems well down from 2018 but also well up from 2015, though two months isn't much of a sample.) Not that that means you're not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, there are 90-something to 30-something agree votes and 200 karma on Yarrow's comment saying NL still seems bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people dropped the ball here really, people were struggling honestly to take accusations of bad behaviour seriously without getting into witch hunt dynamics.
I just want to point out that I have a degree in philosophy and have never heard the word "epistemics" used in the context of academic philosophy. The word used has always been either "epistemology" or "epistemic" as an adjective in front of a noun (always as an adjective, never on its own as a noun, and certainly never pluralized).
From what I can tell, "epistemics" seems to be weird EA Forum/LessWrong jargon. Not sure how or why this came about, since this is not obscure philosophy knowledge, nor is it hard to look up.
If you Google "epistemics" phil... (read more)
I agree this is just a unique rationalist usage. Same with 'agentic', though that has possibly crossed over into more mainstream use, at least in tech-y discourse.
However I think this is often fine, especially because 'epistemics' sounds better than 'epistemic practices' and means something distinct from 'epistemology' (the study of knowledge).
Always good to be aware you are using jargon though!
I find "epistemics" neat because it is shorter than "applied epistemology" and reminds me of "athletics", with the implied emphasis on practice. I don't think anyone ever explained to me what "epistemics" refers to, but I thought it was pretty self-explanatory from the similarity to "athletics".
I also disagree with the general notion that jargon specific to a community is necessarily bad, especially if that jargon has fewer syllables. Most subcultures, engineering disciplines, and sciences invent words or abbreviations for more efficient communication, and while some of that may be due to gatekeeping, it's so universal that I'd be surprised if it didn't carry value. There can be better and worse coinages of new terms, and three/four/five-letter abbreviations such as "TAI" or "PASTA" or "FLOP" or "ASARA" are worse than words like "epistemics" or "agentic".
I guess ethics makes the distinction between normative ethics and applied ethics. My understanding is that epistemology is not about practical techniques, and that one can make a distinction here (just like the distinction between "methodology" and "methods").
I tried to figure out if there's a pair of su... (read more)
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble.
Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031).
For those who know more about forecasting than me, and especially for those who can think of good w... (read more)
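For anyone who wants to make such a prediction scoreable, the standard tool is a proper scoring rule like the Brier score. Here is a minimal sketch; the forecaster names and probabilities are made-up placeholders, not anyone's actual forecasts:

```python
# Sketch: scoring a long-horizon binary forecast (e.g. "the AI bubble
# pops before 2031-01-01") with the Brier score. All forecasts and the
# resolution below are hypothetical placeholders.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    Lower is better; 0.0 is a perfect forecast, 0.25 is what a
    maximally uncertain 50% forecast always scores.
    """
    return (forecast - outcome) ** 2

# Hypothetical forecasts from three people, and one resolution.
forecasts = {"alice": 0.7, "bob": 0.4, "carol": 0.55}
outcome = 1  # suppose the bubble did pop before the deadline

scores = {name: brier_score(p, outcome) for name, p in forecasts.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.4f}")
```

The point of fixing a resolution date like January 1, 2031 up front is exactly that it makes a rule like this applicable: the prediction resolves unambiguously to 0 or 1, and everyone's stated probability can be scored against it.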
Your help requested:
I’m seeking second opinions on whether my contention in Edit #4 at the bottom of this post is correct or incorrect. See the edit at the bottom of the post for full details.
... (read more)

My contention is about the Forecasting Research Institute’s recent LEAP survey.
One of the headline results from the survey is about the probabilities the respondents assign to each of three scenarios.
However, the question uses an indirect framing — an intersubjective resolution or metaprediction framing.
The specific phrasing of the question is…
Self-driving cars are not close to getting solved. Don’t take my word for it. Listen to Andrej Karpathy, the lead AI researcher responsible for the development of Tesla’s Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term "vibe coding".)
From Karpathy’s October 17, 2025 interview with Dwarkesh Patel:
... (read more)

Since my days of reading William Easterly's Aid Watch blog back in the late 2000s and early 2010s, I've always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it's worth it.
So much of what effective altruism does, including around global poverty, including around the most evidence-based and quantitative work on global poverty, relies on people's intuitions, and people's intuitions formed from living in wealthy, Western countries with no connection to or experience of a globally poor country are going to be less accurate than people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren't.
I agree with most of what you say here; indeed, all things being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from anywhere else. The problem is your caveats: things are almost never equal...
1) Education systems just aren't nearly as good in lower-income countries. This means that education is sadly barely ever equal. Even between low-income countries: a Kenyan once joked with me that "a Ugandan degree holder is like a Kenyan high school leaver". If you look at the top echelon of NGO/charity leaders from low-income countries whose charities have grown and scaled big, most have been at least partially educated in richer countries.
2) Ability to network is sadly usually so so much higher if you're from a higher income country. Social capital is real and insanely important. If you look at the very biggest NGOs, most of them are founded not just by Westerners, but by IVY LEAGUE OR OXBRIDGE EDUCATED WESTERNERS. Paul Farmer (Partners in Health) from Harvard, Raj Panjabi (LastMile Health) from Harvard. Paul Niehaus (GiveDirectly) from Harvard. Rob Mathers (AMF) Harvard AND Cambridge. With those connections you ca... (read more)
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model or maybe just good prompting or something else.
@Toby Tremlett🔹, are you SummaryBot's keeper? Or did you just manage its evil twin?
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again,... (read more)
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the... (read more)
Here is the situation we're in with regard to near-term prospects for artificial general intelligence (AGI). This is why I'm extremely skeptical of predictions that we'll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can't score above 5% on the ARC-AGI-2 benchmark, they can't automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying somethin... (read more)
Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried's behaviour years before the FTX collapse?
Will responded here.
Slight update to the odds I’ve been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I’ve been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That’s something I think is significantly more probable than AGI by the end of 2032. Previously, I’d been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight ... (read more)
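The arithmetic behind this kind of anchoring is simple, but worth writing down. A sketch, using only the rough "1 in 1,000" anchor mentioned above (the refined figures are not reproduced here):

```python
# Sketch: converting between "1 in N" odds and probabilities, the kind
# of anchoring arithmetic described above. The only number used is the
# rough 1-in-1,000 anchor from the text.

def one_in_n_to_probability(n: float) -> float:
    """A '1 in N' chance expressed as a probability in [0, 1]."""
    return 1.0 / n

def probability_to_one_in_n(p: float) -> float:
    """A probability expressed as '1 in N' odds."""
    return 1.0 / p

p = one_in_n_to_probability(1000)  # the rough anchor: 1 in 1,000
print(f"1 in 1,000 = {p:.2%}")     # formats as a percentage, i.e. 0.10%
```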
Yann LeCun (a Turing Award-winning pioneer of deep learning) leaving Meta AI — and probably, I would surmise, being nudged out by Mark Zuckerberg (or another senior Meta executive) — is a microcosm for everything wrong with AI research today.
LeCun is the rare researcher working on fundamental new ideas to push AI forward on a paradigm level. Zuckerberg et al. seem to be abandoning that kind of work to focus on a mad dash to AGI via LLMs, on the view that enough scaling and enough incremental engineering and R&D will push current LLMs all the way ... (read more)
Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.
Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.
This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can fe... (read more)