I.
People love trying to find holes in the drowning child thought experiment. This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply). So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find one fail.
For example, some people say the difference is distance; you’re close to the drowning child, but far from people dying in Africa. Here are some thought experiments that challenge that:
You’re a surgeon, using a telepresence robot to operate on someone in China. In the middle of the operation, a medical student watching on the Chinese side starts choking. He is the only other person in the room (besides your patient, who is unconscious), so nobody else can help him. Your robot can do the Heimlich Maneuver and save his life, but it would delay the surgery five minutes while you readjusted the settings afterwards, which would make you late for lunch. Should you save him?
Here it seems obvious that you should save him, even though he’s all the way in China.
Is the problem that “you” are sort of “in” China via your robot, even if not physically? Here’s another example:
The Dublin-NYC Portal is an art installation with branches in Dublin and NYC. If you stand in front of one of the portals, you can see and (not in real life, but in our thought experiment) hear the people on the other side. It’s usually too crowded to really interact with, so one day you (in NYC) go to the installation at 3 AM and are the only person around. You start talking to two Dubliners, who are the only people around on their side of the portal. Suddenly, one of them starts choking. The other shrieks “I need to do the Heimlich Maneuver, but I don’t know how!” You know how, and could easily walk them through it. But it’s cold and you want to go home. Should you help them?
Again, the answer is clear even though you’re 3000 miles away. At this point, saying that you’re “virtually” in Dublin seems like a stretch. Here the issue seems to be some sort of entanglement. But it’s hard to say exactly how the entanglement works, and it doesn’t seem to be a simple one-to-one correspondence where you’re the only person who can help. For example:
You’re at the Sociopathic Jerks Convention (you’re neither sociopathic nor a jerk - you’re the caterer). Everyone is on the lawn of the conference center, waiting for one of the sessions to begin, when you all notice a child drowning in the lake nearby. Along with yourself, there are 1,000 sociopathic jerks. But at the last session, someone took a poll on exactly this question and everyone agreed they wouldn’t lift a finger to help; either you save the child, or nobody does. Do you jump in and save them?
Here it seems like the sociopathic jerks might as well be furniture - their presence doesn’t change your situation compared to the scenario where you’re there alone.
II.
TracingWoodgrains draws on a now-deleted essay by Jaibot about the “Copenhagen interpretation of ethics”. It argues that by “touching” a situation - a vague term having something to do with causal entanglement - you gain moral obligation for it. If you can simply avoid touching it, your moral obligation goes away.
I think this explains half the problem, but I can think of another half that it doesn’t explain. Consider:
You inherit a beautiful cabin in the woods. It’s downstream of a vast semi-magical megacity which is a little denser than should be physically possible. Every time a child falls into any of the megacity’s streams, lakes, or rivers, they get swept away and flow past your cabin; there’s a new drowning child every hour or so.
Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet.
Here I’m split on whether the Copenhagen hypothesis works. A person who lives in the cabin and fails to rescue every child seems much less monstrous than someone who only ever encounters the situation once, even though both of them “touch” the situation exactly as much. Still, as the hypothesis predicts, we are less comfortable with this situation than the normal one where you live far away from the cabin and never worry about it - living near the cabin (“touching” the situation) seems to have some moral impact.
Here’s somewhere I think Copenhagen more clearly fails:
Your regular house burns down, and you have no choice but to live in the cabin for a while. You think to yourself: “Well, every day, I’ll rescue one child, then not worry about it. This is better for the children, since it increases their survival rate from 0 to 1/24. And it’s better for me, since I’m not homeless while I wait for them to rebuild my house. It’s a win-win situation.”
So you do this for a few months, and then they rebuild your house, and you move back to your regular hometown. You live there for five years without incident. One day, you see a child drowning in the local river. You don’t recognize them, so it can’t be the child of any of your fellow citizens - probably their parents are some of the travelers who pass through this town on the way to greener pastures. You are late for an important business meeting, you’re wearing a nice suit, and it would be especially annoying to jump in a river right now. In fact, you estimate that it is 10x costlier to rescue this child than any of the children who you encountered at the cabin so long ago. What do you do?
Here Copenhagen fails to predict a difference between refusing to rescue the 37th kid going past the cabin, vs. refusing to rescue the single kid in your hometown; you are “touching” both equally. But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
Again sticking to a purely descriptive account of intuitions, I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out. If you refuse to rescue a child even in a one-off situation, where the benefits of rescuing are at their highest, we think you must have no moral sense at all. But if you fail to rescue them the 37th time, we think this is pretty understandable and similar to what we would do in the same situation.
(This “declining marginal utility” explanation is less natural than something like “the obligation to rescue all those children is ruining my life”. But I think it’s more accurate; if we come up with a thought experiment where it doesn’t ruin your life in any way - where it only takes a few hours from your day and you have enough left to accomplish everything you need - then it still seems harsh to demand someone rescue 37 children every day. And when there is an actual moral obligation - like parenting your own children - we don’t accept “it will ruin my life” as an excuse to get out of it.)
III.
So these two descriptive theories - the Copenhagen hypothesis, and the declining marginal utility of moral goods - do a good job explaining our intuitions.
But some people leap from there to saying they’re also the right prescriptive theories - they determine what morality really is, and what rules we should follow.
I think this is a gigantic error, the worst thing you could possibly do in this situation. These are essentially rules for looking good to other people. To follow them is to say that you will always optimize for seeming cool, no matter how many people you have to kill in order to do it. So for example:
Because of the Copenhagen hypothesis, you decide it’s morally wrong to go to the cabin - by “touching” the situation, you would be accepting blame for all of the drowning children.
But you really want a vacation away from your polluted city, and you can’t afford any other vacation home, so you’re always looking for some way to solve the problem. One day, you encounter a man with a giant truck. He says that for $525, he could uproot your cabin, load it onto his truck, and deposit it on unoccupied land near a small camp a few miles away. In this new location, you couldn’t hear the children screaming. In fact, there are some other cabins at the camp, and none of their inhabitants ever think about the death river, and nobody ever blames them for it.
You’re pretty excited about this. But you hear that your wife’s sister’s friend’s niece has a rare disease that would cost exactly $525 to cure (and she is poor, and can’t afford it). Normally you wouldn’t care much about someone so distant from you. But your wife asks - hey, isn’t it sort of hypocritical to move the cabin “in order to be a moral person”, when you’re in fact not helping anyone in any way? Wouldn’t it actually make you more of a moral person to spend the $525 curing her sister’s friend’s niece, then vacation at the cabin using earplugs to not hear the screaming kids (which wouldn’t result in any more kids dying than not going to the cabin, or moving the cabin)?
This is all awkward enough that maybe you want to push the Copenhagenness back a step and just refuse to touch the cabin at all. Refuse to inherit it, lock your door, tell the lawyer who says you own it now that he needs to get off your property or else you’ll shoot. But we can still make your life difficult:
You live in a house in the suburbs. You never even considered living in a cabin in the woods. However, the nearby semi-magical megacity plans to build a dam. This would rearrange its various internal waterways so that all the drowning children would get carried by the current through your backyard; you would be in the exact same situation as the cabin owner. All of your life savings are tied up in your mortgage, the mere threat of this dam has crashed the value of your house, and you can’t afford to move. Luckily, a lobbyist owes you a favor. She offers to repay you in one of two ways.
First, she could lobby the megacity to redirect the dam; this would cause the drowning children to go somewhere else - they would be equally dead, but it’s not your problem.
Second, she could lobby the megacity to hire a part-time lifeguard (full-time is beyond her power), who could save half the drowning children. This still ruins your life (a drowning child passes your house once every two hours), and still crashes your home value to zero, but thousands of children would be saved per year. Also, she has a side business as a handyman, and could install double-pane glass windows on your house so you couldn’t hear the children screaming.
The Copenhagen theorist would be in a bind here. You really want to avoid the dam forcing you to “touch” the situation, and then either spend your whole life saving children, or be culpable for failing to do so. But it seems both heartless and pointless to waste your one lobbyist favor on a river redirection which doesn’t change anything about the real world (as many children will die as ever) when you could instead use it to do lots of good.
My best bet for how a thoughtful Copenhagener would respond is that they would say you had terrible moral luck by happening to end up where the dam was going to redirect the drowning children; however, this itself caused you to “touch on” the situation and now you can be judged for how you respond (including your cowardly response of trying to redirect the river somewhere else).
I don’t buy it.
Unfortunately, the lobbyist dies of a heart attack before you can call in either favor. The dam is built and the children pass by your house. You save one per day, but not all of them.
You have a neighbor. The neighbor lives far enough away that the river doesn’t affect him at all; he couldn’t hear the children in any case.
When you tell him this story, he calls you a monster - how could you only help one child per day?
You agree this is inadequate, and decide to hire a lifeguard yourself to solve the problem. You offer to split the costs with your neighbor: you will pay 80%, he’ll pay 20%.
The neighbor says haha, no way, he’s not going to waste 20% of his money helping some kids he has no relationship with. But he also thinks you’re a monster unless you pay the money.
God notices there is one extra spot in Heaven, and plans to give it to either you or your neighbor. He knows that you jump in the water and save one child per day, and your neighbor (if he were in the same situation) would save zero. He also knows that you are willing to pay 80% of the cost of a lifeguard, and your neighbor (who makes exactly the same amount of money as you) would pay 0%. However, in reality, the river and the drowning children are going by your house, not your neighbor’s house. Which of you should get the spot in Heaven?
Here it seems obvious that you are a better person than your neighbor. But then what remains of the “moral luck” explanation? What remains of Copenhagen, where you are blamed for a situation if you touch it?
Maybe you have to choose to touch it for it to count? But this seems false; in Singer’s original drowning child experiment, you didn’t choose to be the only person near the lake when the kid was drowning. It was just a weird coincidence.
In fact, it seems like we all benefit from the same sort of moral luck as the neighbor. Suppose Alice is born in a gated community in the US, to a family making $200,000/year; she goes to her local college, stays in her rich hometown, and eventually makes $200,000/year herself. There are no poor people near her, so she has few moral obligations. But Bob is born in Zimbabwe, to a rare upper-class well-connected Zimbabwean family making $200,000/year; he inherits his father’s business and also makes $200,000/year himself. But he lives in the middle of horrible poverty. His housekeeper is dying of some easily-cured disease, all of his school friends are dying of easily-cured diseases, every day when he goes to work he has to walk over half-dead people screaming for help. It seems like Alice got lucky by not being Bob; she has no moral obligations, whereas he has many. Suppose that Bob only helps a little bit, enough that we would consider him pretty stingy given his situation - maybe he helps his absolute closest school friend, but lets several other school friends die. And suppose that if Alice was in Bob’s situation, she would do even less, but in fact in real life she satisfies all of her (zero) moral obligations. If there’s only one spot in Heaven, should it go to Alice or Bob?
Someone who’s still desperately trying to preserve Copenhagen would have to say that the “one spot in Heaven” prompt isn’t fair - God presumably has His own criteria which exploit His perfect omniscience, but we humans must think about morality on a merely human level. I still don’t buy it. For one thing, God isn’t using any special omniscient knowledge that we (the people reading this thought experiment) don’t also have and use easily. For another, if you’re even slightly religious, actually getting the literal spot in Heaven should be one of the top things on your mind when you’re deciding whether to be moral or not. Even if you’re atheist, trying to be the sort of person who would get a spot in Heaven, if it existed, seems like a worthier goal than whatever the Copenhagen-follower is doing.
IV.
So again, the question is - what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it?
My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
Here we would probably agree to save drowning children, because if we were involved in the situation at all, we would have a 50% chance of being the rescuer (minor inconvenience) or the child themselves (life or death importance).
But we would also agree to save people dying of easily-cured diseases in the Third World, because we wouldn’t know if we would be those people either. Everyone would agree to a proposed deal that rich people donate a small fraction of their income to charity, because it would be only a mild inconvenience if they turned out to be rich, but a life-saver if they turned out to be poor.
Further, since we wouldn’t know whether we would be Alice (low level of moral obligation) or Bob (very high level of moral obligation), we would take out insurance by agreeing that everyone needed to pay the same modest amount into the general pot for helping people.
(How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse. I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.)
A final deal might look like this: we’ll all cooperate by sending a bit of our money to a general pot for helping people in terrible situations. And if there’s a more urgent situation that group contributions can’t help - because for example a child is drowning right now and there’s only one person close enough to save them - then we’ll deputize that one person to save them, and assume it will all even out in the end.
(Actually, even better would be pay that person a reward for their trouble out of the general pot - then there’s no unfairness or special obligation on one person rather than another!)
Here we’re able to bring back all of those things we rejected earlier - proximity, urgency, being the only person available - not because they determine who is worthy of being saved, but in the context of a coalition that plans to save everybody but which in an emergency needs to act through whoever is available. This is no different from a police force which, learning of a serious crime in progress, asks the officer closest to the site to respond, even if that officer isn’t a specialist in that particular type of crime, or even if that officer is one minute away from clocking out and it’s unfair to make them work overtime.
All of this makes perfect sense - except that the coalition is in arrears, there is no general pot, and most bad things go unprevented. Only the extra “save people close to you” rule, tacked on as an afterthought, still functions, because that one makes people look good when they do it and is easier to enforce through reputational mechanisms.
I think you should probably still save someone close to you (eg drowning), partly because this rule is valuable even on its own (ie it’s better to do it than not do it), and partly because, since other people are following it, you actually have a reciprocal obligation to your fellow coalition members here (ie you expect that if your child was drowning, someone else would help, so you’re free-riding if you don’t help them).
If you end up at the death cabin, you don’t have an obligation to save every single child who passes by, because the coalition didn’t intend for the “save drowning children” obligation to be an unusual burden on anyone in particular, and because nobody else is doing this so you’re not betraying fellow coalition members. People may incorrectly think less of you if you don’t do this, and you might want to take action to avoid reputational damage, but this isn’t a moral obligation. The real answer to this problem is that the coalition should split the cost of hiring a lifeguard - or, if for some reason you are the only person who can be in the area, compensate you for your time. Given that the coalition isn’t strong enough to actually do these things, your obligations are limited, and not made any better or worse by living in the cabin vs. further away.
I think it’s virtuous, but not obligatory, to behave as if the coalition is still intact, and try to give a portion of your income to some sort of virtual version of the general pot. You could also think of the government as some sort of very distorted flawed real-life version of the coalition and consider your obligations fulfilled by paying taxes, but I think this is an insult to the angelic intelligences, and you should just go with whatever seems like the closest thing to their original plan without waiting for it to actually be instantiated.
I think this is more dignified than the thing where you try to hire someone for $525 to move your cabin to a different location so you don’t feel like you’re “touching” the problem, or whatever.
Doesn't seem to be deleted: https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics
It was originally, long before Substack was founded, at a different URL that's no longer online. Possibly people don't know that there's now a Substack.
Oh, thank goodness - I'd have been sad if a "foundational reference" essay that I reread periodically was gone for good. Link rot comes for everything in the end...
This kind of thing is getting far beyond the actual utility of moral thought experiments. Once you're bringing in blatantly nonsensical constructs like the river where all the drowning children from a magical megacity go, you've passed the point where you can get any useful insight from thinking about this hypothetical.
If you want to actually make a moral point around this, it's better to find real-life situations that illustrate your preferred point, even if they're messier or have inconvenient details. The fact that reality has inconvenient details in it is actually germane to moral decision-making.
So much this. My moral intuition just completely checks out somewhere between the examples 2 and 3 and goes "blah, whatever, this is all mega-contrived nonsense, I might just as well imagine me a spaceship while at it". Even though I'm already convinced of the argument Scott makes.
Practically speaking, no one has been persuaded into actually looking into details when they say things like "why would I donate to malaria nets". They fall back onto their preconceptions about how charities are corrupt and oh no nothing ever happens productively when it comes to charities, despite those points being laid out in exhausting detail on givewell's website.
So when people say that hypotheticals are useless and that it takes too much time to find out germane details, it sure does seem like people have a gigantic preference for not having anything damage their self image as a fundamentally morally good person, and this preference happens before any rules about the correct level of meta or object level details arise.
I think the mega-death river is actually a pretty reasonable analogy for many real-life situations. Scott has mentioned the rich Zimbabweans who ignore the suffering of their countrymen. These are analogies for simply turning a blind eye to suffering, and the point being illustrated is that morality does not reasonably have any *actual* relationship with distance or entanglement or whatever, it's just more convenient to request that people close to a situation respond to it.
I think it's a good illustration of how to think about this problem.
As we are hyperbolic discounters of time, perhaps we similarly discount space.
In The Moral Demands of Affluence, Garrett Cullity argues that the "Extreme Demand" (Singer's drowning child) would require us to compromise our own impartially acceptable goods. And we don't even ask that of the people we are saving, so they can't ask it of us. (Kind of hard to do a tl;dr on it because his entire 300-page book is solely on this topic.)
"My strategy is to begin by describing certain personal goods—friendships and commitments to personal projects will be my leading examples—that are in an important sense constituted by attitudes of personal partiality. Focusing on these goods involves no bias towards the well-off: they are goods that have fundamental importance to people’s lives, irrespective of the material standard of living of those who possess them. Next, I shall point out the way in which your pursuit of these goods would be fundamentally compromised if you were attempting to follow the Extreme Demand. Your life would have to be altruistically focused, in a distinctive way that I shall describe. The rest of the chapter then demonstrates that this is not just a tough consequence of the Extreme Demand: it is a reason for rejecting it. If other people’s interests in life are to ground a requirement on us to save them—as surely they do—then, I shall argue, it must be impartially acceptable to pursue the kinds of good that give people such interests. An ethical outlook will be impartially rejectable if it does not properly accommodate the pursuit of these goods—on any plausible conception of appropriate impartiality."
There are no drowning children.
The correct scenario and question to ask is:
"An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease."
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?
If you're going to get moralistic about (I assume) HIV you should also bear in mind it gets transmitted from mothers to newborns, who obviously have no moral responsibility for their plight.
We can focus on the newborns:
"An entire society focuses on a sexual practice that spreads an incurable disease. The disease is also passed on to the children."
->
"An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society?"
This is standard motte and bailey. You cannot consider one without the other in this thought experiment and turn around and apply it to real life.
This is kind of an absurd argument given that the sexual practice in question is just "having sex" and infecting children is in no way a necessary consequence for people to fulfill their desire.
This is again a question of moral luck: people in the United States can, relatively trivially, get the medicine necessary to have sex without passing on HIV and people in some nations cannot.
This is false:
> use of a cloth to remove vaginal secretions during intercourse (dry sex) (relative risk, 37.95)
Dry sex has a risk ratio that is higher than blood transfusion (relative risk, 10.89).
The prevalence of dry sex is over 50% in places with high HIV rates.
"Just sex" has a transmission rate of 0.08%, which means someone needs to have sex with an infected person OVER 1000 times to be infected with HIV.
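A quick sketch of the arithmetic behind the 0.08% figure, under a naive independent-trials model (an assumption for illustration - real per-act risk varies widely with viral load and other factors):

```python
# Per-act transmission probability cited above (0.08%).
p = 0.0008

# Under a naive geometric model (an assumption -- real per-act risk
# varies widely), the expected number of acts before transmission:
expected_acts = 1 / p  # 1250 acts

# Probability of at least one transmission after n acts:
def infection_prob(n: int, per_act: float = p) -> float:
    return 1 - (1 - per_act) ** n

print(round(expected_acts))            # 1250
print(round(infection_prob(1000), 2))  # 0.55
```

So "over 1000 times" is roughly right as an expectation, though after 1000 acts the cumulative chance of infection is only about 55%, not a certainty.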
Umm, if you're talking about HIV, don't all sexual practices that involve the exchange of semen, saliva, or vaginal secretions increase the spread of the incurable disease? Doesn't this include all societies that allow or encourage physical sexual connection?
Does it make any difference whether or not you're a member of the society?
> ...behave as if the coalition is still intact...
I think you may have snuck Kant in through the back door. Isn't this kind of what his ethics is? Behave according to those principles that you could reasonably wish were inflexible laws of nature (or, in this case, were agreed to by the angelic coalition).
>My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
No, it's trivially obviously false that we would agree to that (or anything else) in this scenario. If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists, so any agreement premised on the assumption that we want to minimize suffering for either ourselves or others is dead on arrival. All other conceivable agreements are also trivially DOA in this scenario, since we also don't have any information about whether we're going to want or care about any possible outcomes that might result. Consistently applied Rawlsianism is just roundabout moral nihilism.
In order for it to be possible that the intelligences behind the veil of ignorance might have any reason to agree to anything, you have to add as a kludge that they know masochism, suicidality, and other such preferences will be highly unusual among humans in the society they're born into, and that it's therefore highly unlikely they'll end up with such traits. But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless.
I think that's a misstatement of what the veil makes you ignorant of. The point isn't that you don't know anything about the society into which you will be incarnated; the point is that you don't know what role in that society you will have.
Firstly, as a masochist myself, you are heavily misrepresenting masochism. Secondly, as someone who's met a weirdly large number of people who have committed rape, I'm pretty sure the net utility *for rapists* is at least slightly negative - some of them get something out of it, but some of them are deeply traumatized by it and very seriously regret it (and that's ignoring the ones who actually get reported and charged and go to prison, because I haven't met any of those).
The veil of ignorance is about circumstances not values. So you know what you value you just don't know what circumstances you'll end up in.
I would draw a distinction between "observing a problem" and "touching a problem" in Jai's original post. Trace is commenting on the "touching" side of things, specifically the pattern where a charity solicits money to solve a problem, spends that money making poor progress on the problem, and defends this as "everyone's mad at us for trying to help even though not trying would be worse". It is possible to fruitfully spend money in distant circumstances that are weird to you and that you don't properly understand, but if you think you're helping somewhere you're familiar with, you're more likely to be right.
I think the distance objection does not refer to literal distance, but our lack of knowledge and increase in risk of harm the further we are from the people we're trying to help.
For example, consider the classic insecticide-treated mosquito nets to prevent malaria. Straightforward lifesaving intervention that GiveWell loves, right? It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide. We didn't save as many drowning children as we hoped, and may have even pushed more of them underwater, because we were epistemically too far away to appreciate the entire socioeconomic context of the problem.
The further you are in physical and social space and time from the people you're trying to help, the greater the risk that your intervention might not only fail to help, but might actually harm. This is the main reason for discount rates. It's not that people in the far future are worth less morally, but that our interventions become more uncertain and risky. We're discounting our actions, not the goals of our actions. Yes, this is learned epistemic helplessness, but it is justified epistemic helplessness.
> It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide.
I think this Vox article does a good job deflating this claim: https://www.vox.com/future-perfect/2024/1/25/24047975/malaria-mosquito-bednets-prevention-fishing-marc-andreessen
The best study we have on bed net toxicity—as opposed to one 2015 NYT article that made a guess based on one observation in one community—is from a 2021 paper that’s linked in the Vox article. It does a thorough job summarizing all known evidence regarding the issue, and concludes with a lot of uncertainty. However:
> I asked the study’s lead author, David Larsen, chair of the department of public health at Syracuse’s Falk College of Sport & Human Dynamics and an expert on malaria and mosquito-borne illnesses, for his reaction to Andreessen citing his work. He found the idea that one should stop using bednets because of the issues the paper raises ridiculous:
> “Andreessen is missing a lot of the nuance. In another study we discussed with traditional leaders the damage they thought ITNs [insecticide-treated nets] were doing to the fisheries. Although the traditional leaders attributed fishery decline to ITN fishing, they were adamant that the ITNs must continue. Malaria is a scourge, and controlling malaria should be the priority. In 2015 ITNs were estimated to have saved more than 10 million lives — likely 20-25 million at this point.
> “… ITNs are perhaps the most impactful medical intervention of this century. Is there another intervention that has saved so many lives? Maybe the COVID-19 vaccine. ITNs are hugely effective at reducing malaria transmission, and malaria is one of the most impactful pathogens on humanity. My thought is that local communities should decide for themselves through their processes. They should know the potential risk that ITN fishing poses, but they also experience the real risk of malaria transmission.”
There’s no good evidence that bed net toxicity kills a lot of people, and there’s extremely good evidence that they’re one of the best interventions out there for reducing child mortality. See also the article’s comments on nets getting used for fishing; the studies on net effectiveness account for this. Even if the nets do cause some level of harm, the downsides are enormously outweighed by the upsides, which are massive:
> A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets.
This doesn’t mean that we should stop studying possible downsides of bed nets or avoid finding ways to improve them, but it does mean that 1) they do prevent malaria, extremely well, and 2) they save pretty much as many children as we thought.
To add, the Against Malaria Foundation specifically knows about this failure mode and sends someone to randomly check up on households to see if they're using the nets correctly. The rate of observed compliance failure isn't close to zero, but it isn't close to a high number either. See: https://www.givewell.org/charities/amf#Monitoring_and_evaluation_2
Maybe I'm too cynical, but I haven't seen anyone change their mind when you add context that defies their expectations. I feel like they either sputter about how that's not their real objection (which, if you think about it, is pretty damn rude: saying "this is why I believe X" and then immediately going "I don't believe X, why would you think I believe X?") or they just stop engaging.
That's good news! Thanks.
But I think we agree that the general principle still stands that moral interventions further in time and space from ourselves generally have more risk. We can reduce the risk with careful study, but helping people far away is rarely as straightforward as "saving a child from drowning" where the benefit is clear and immediate. I find the "drowning child" thought experiment to be unhelpful as a metaphor for that reason.
We're not saving drowning children. We're writing policies to gather resources to hire technicians to build machines to pluck children from rivers at some point in the future. In expectation we aim to save children from drowning, but unlike the thought experiment there are many layers and linkages where things can go wrong, and that should be acknowledged and respected.
Sounds like we need some kind of social arrangement where we are gently compelled to work together to solve social problems cooperatively with roughly equal burdens and benefits, determined by need and ability to contribute. What would we call this...rule of Social cooperation? Perhaps social...ism?
Nah, sounds scary. Let's just keep letting the rules be defined by the Sociopathic Jerks Convention, with voting shares determined by capital contributions.
>Nah, sounds scary.
Given the history of actually existing socialism, very scary indeed.
Socialism has a history far broader than whatever particular example you are thinking of.
Economic system and body count don't seem correlated meaningfully...mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
Authoritarianism has the body count, regardless of if it's socialist, capitalist, theocratic, mercantilism or whatever you prefer.
Reading "More Drowning Children", the thought that came up for me was, "Damn, of anyone I've ever read, he has the greatest ability to write reticulated hypotheticals that primarily serve to justify his priors!"
My second thought: For me, the issue is more, "At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?" Not what you're getting at, I know, but what you're doing seems to me to point in that direction.
Seems like the superior solution would be finding an EA / moral entrepreneur who will happily pay market value for the cabin, and then set up a net or chute or some sort of ongoing technological solution that diverts drowning children into an area with towels, snacks, and a phone where they can call their parents for pickup. Parents are charged an entrance fee to enter and retrieve their saved children.
I unironically think the moral equivalent of this for Scott's favorite African use cases is something like "sweatshops."
in the dam lobbying example, surely lobbying for a different dam is touching the children
I think at some point Scott has to accept that people reading this blog are exactly the types of people to optimize for their own coolness and not at all for truth-seeking or morality, when you see them go into contortions to avoid intuition pumps. The problem is upstream of logical argument, and lies in whatever thought process prevents them from thinking they could be at all immoral.
I’m confused by the use of ethical thought experiments designed to hone our moral intuitions, but which rely on increasingly fantastical scenarios and ethical epicycles upon epicycles. Mid-way through I was wondering if you were going to say “gotcha! this was all a way of showing that the drowning-child mode of talking about ethics is getting a bit out of hand.” Aren’t there more realistic examples we could be using? Or is the unreality part of the point?
You could have a framework where you expect yourself, and hopefully others, to donate a portion of their time and/or money to helping others (call it the old 10 percent tithe, although admittedly everyone has their own number). If you already expect yourself to do this, then adding on saving a drowning kid once hardly costs you more in the big picture, and it's the right thing to do since you're uniquely positioned to do it. If it's really important to you, you can just take it out of your mental tithe ledger and skip one life's worth of donation that month (although you probably won't, because it's in the noise anyway). But if you're by the drowning river and this is happening so often that it's significantly cutting into your tithe, it's perfectly reasonable to start actually taking your lifeguard duties out of your mental tithe, and to start wondering whether this is the most effective way for your tithe to save lives. And if not, then we can all reasonably conclude you're fine (even better off) not doing it.
Also this reminds me of my favorite short story:
https://www.newyorker.com/magazine/1996/01/22/the-falls
"... doesn't seem to be a simple one-to-one correspondence where you’re the only person who can help: [sociopathic jerk thought experiment]"
I'm not sure this tells us much about the effect of other people in real-world moral dilemmas; one might bite the bullet and say "sure, in that case, where you know you're the only one who can help, you should—but in any real situation, there will be 1000 other people of whom you know little, any one of whom could help."
That is, if we're considering whether there is some sort of dilution of moral responsibility, I don't think the S.J.C. example really captures the salient considerations/intuitions.
-------------
I disagree with the other commenters about the utility of these thought-experiments in general, though.
They're /supposed/ to be extreme, so as to isolate the effect of x or y factor upon moral judgments—the only other options are to (a) waste all your time arguing small details & becoming confused (or, perhaps, just becoming frustrated by arguing with someone who's become confused) by the interplay of the thousand messy complications in real-world scenarios, or (b) throw up your hands & say "there's no way to systematize it, man, it's just... like... ineffable!"
If there is some issue with one of the thought experiments, such that it does not apply / isn't quite analogous / *is* isomorphic in structure but *isn't* analyzed correctly / etc., it ought to be possible to point it out. (Compare: "Yo, Einstein, man, these Gedankenexperimente are too extreme to ever be useful in reality! Speed of LIGHT? Let's think about more PRACTICAL stuff!")
I can't help but feel that some reactions of "these are too wacky, bro" come from a sense of frustration at the objector's inability to articulate why the (argument from the) scenario isn't convincing.
I'm sympathetic, though, because I think that sometimes one /can correctly/ dismiss such a scenario—intuiting that there's something wrong—without necessarily being able to put it to one's interlocutor in a convincing way.
Still—no reason to throw the bathwater out with the baby. It's still warm enough to soak in for a while!
Honestly, I don't even understand what your goal with all of this is anymore. How is bringing up all of these absurd hypotheticals supposed to help your interests? There are plenty of real opportunities for charity, and the people have decided what is actually valuable to them. This obsession with arbitrary ethical rules is exactly why people see organizations like EA as a cult.
Bringing up the actual real-life effects of actual charities doesn't seem to motivate anyone, because they fall back on abstract arguments about why it's not good to do charity that on average saves a life per 6000 dollars. And obviously, as you see, it's pointless to discuss hypotheticals when you have real-life details to talk about instead.
So yeah, I agree that EA refusing to obey social mores is cultish. Normal people drop it when they see you aren't interested in conversation.
The true position seems to be that it is much, much harder (but not impossible) to go to heaven than most people think, including most religious people (irreligious people presumably think that going to heaven is impossible on account of its non-existence).
However, what if the primary criterion of morality is not aiding other people? What if it is primarily the cultivation of a certain purity or nobility of soul? After all, if the soul is immortal, its quality is infinitely more valuable than any material and temporal vicissitudes.
> But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
I don't think most people would think this.
Here is a hole that I think is relevant.
I would argue that saving drowning children is actually a very-high-utility action, because you can call the child's parents to pick the child up and they'll be super grateful, and even if they don't pay money, you'll accrue social and reputational benefits. Tacking on "...oh, but your water-damaged suit!" is misleading, because even with a water-damaged suit, saving the child is still obviously net-positive-utility.
(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)
The Drowning Child story takes a situation in which saving the drowning child is obviously high-utility, and conflates it with a situation in which saving the person-with-a-disease is obviously negative-utility.
I don't have a moral about whether you should give lots of money to charity. I just think the drowning child story is misleading us, because it says "...you chose to save the drowning child, so for consistency you should make the same moral decision in other equivalent situations" but the situations are not actually equivalent.
Yes the "touching" thing is dumb but:
"But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable."
What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there's diminishing marginal utility to saving drowning kids!
If this is happening I genuinely think that someone living in this cabin needs to realize their life has been turned upside down by fate and that their new main life goal has to be "saving the 24 lives that are ending per day" by whatever means possible. Calling every newspaper to make sure people are aware of the 24 daily drowning kids in the river. Begging any person you see to trade with you in saving 12 kids a day so you can sleep. Make other people "touch the problem." Whatever the solution is--if a problem this immediate, substantial, and solvable appears and no one else is doing anything about it, you have to do what you can to get every kid saved.
Of course you have a moral obligation to save every single child that you can. Any other belief is abhorrent. The person who saves the 37th child is more moral than the person who doesn't. And so on into infinity. All proximity does is make this harder to ignore.
The underlying belief here is that a moral code needs to be reasonable enough that you can fulfill it with minimal impact to your life. It needs to be practical. But there's no reason to think that the requirements of living an ethical life are actually achievable. We can understand that but it doesn't excuse it.
The hard part of that is that means you are not a moral person and you are incapable of being a moral person. But if you follow literally any secular or religious ideology that's the inevitable conclusion. The standards of acting morally are so high you naturally fail them.
You are morally obligated to live in a way that you're fundamentally not capable of. But you're still responsible. You must still try. Evil is found not just in immoral acts but in apathy toward moral decisions. It is evil to look at the dead and suffering and say, "It was inconvenient. I did not care." But you can say, "I was not good enough. I'm sorry."
So... the thing we should do is rebuild the coalition and the general pot, yes?
I was amazed that this essay wasn't about, and didn't even get to, USAID. USAID is a global aid program Trump is essentially closing. As a result, he (and America) are being blamed for killing poor foreigners who will apparently no longer survive due to not receiving the aid. Would it not be our problem at all if we'd never given any aid? Are we really the ones killing people by no longer voluntarily providing aid?
https://www.bu.edu/articles/2025/mathematician-tracks-deaths-from-usaid-medicaid-cuts/
https://www.nytimes.com/interactive/2025/03/15/opinion/foreign-aid-cuts-impact.html
https://www.reuters.com/world/us/usaid-official-warns-unnecessary-deaths-trumps-foreign-aid-block-then-says-hes-2025-03-03/
There is an infinitely large hole of human suffering; you will never fill that hole no matter what you do. You can at times prevent people from falling into the hole. But now you may have just created more people who will jump into the hole. And now you have fewer resources than before to deal with the larger issue you have now sustained.