Comments:
I love this essay.
I love it because it's an articulation of a serious argument that I respect but still end up ultimately opposed to.
I've spent a lot of time considering "What should a person do about weird claims?" The stuff that *sounds* like the ideas of a crackpot, but potentially a crackpot so clever that you can't see a hole in his reasoning -- and, also, potentially not a crackpot at all but an insightful, correct thinker. I used to have roughly the same conclusion as you. And roughly the same problem with a tendency to believe the last thing I read, and along with it, a fear of reading things that might delude me.
But the thing is, I've come to the conclusion that it's not actually that hard to make your own judgments about ideas. I was confused about strong AI for a while. What did I do? I read a bunch of papers and textbooks. I talked to my friends who were AI researchers. I still don't *really* know what's going on because I never really learned mathematical logic, but it's a hell of a lot better than a black box. I know *some* mathematics, and I can tell the difference between a proof and a hand-wavy argument, and I've had independent confirmation of the falseness of the ideas I was skeptical about...I'm pretty sure, sure enough to go on with my life, that my picture of "what's up with AI" is more or less accurate.
I'm learning how to do this with biomedical research papers. I am not a biologist so I have to black-box a lot, but not *everything*. I can tell that claims with five conjunctive hypotheses are less likely than claims with one. I can tell when a study was done with 15 subjects or 15,000. I can certainly evaluate statistical methodology. I can come to estimates of my true beliefs -- not high confidence, but not all that biased, and way better than learned helplessness.
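To put numbers on that conjunctive-claims heuristic, here's a minimal worked example (my own sketch, assuming the sub-claims are independent and each individually 90% likely):

```python
# A claim resting on five independent sub-claims, each 90% likely,
# is far less probable than a claim resting on just one.
p = 0.9
print(p)       # one hypothesis: 0.9
print(p ** 5)  # five conjunctive hypotheses: ~0.59
```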
I don't go to the trouble of doing this with everything. I haven't checked out climate change skeptics, because I don't know fluid dynamics and I'm a little scared of the work involved in learning. But mostly, my heuristic is, "When confronted with a weird claim that would be really interesting if true and isn't immediately obvious as bullshit, it's worth checking Wikipedia and reading one scholarly paper. If I'm still uncertain and still interested, it's worth reading several more scholarly papers and asking experts I know."
A lot of bunk is not that hard to debunk. I looked through an 1880 book of materia medica (herbal medicine) once; most treatments were not just useless but poisonous, and it took 30 seconds of googling to find that out. (Oil of tansy will *fuck you up*, ladies and gentlemen.)
A good all-purpose scientist can more or less trust his/her bullshit-o-meter. You should know where you're least able to evaluate claims explicitly (for me, that's physics, chemistry, and anything to do with war or foreign policy) and use implicit meta-techniques (were their results reproducible? do they make a lot of conjunctive claims? that sort of thing). But often, I can just *go in and check the math.* Tim Ferriss makes arithmetic errors in his books. You don't have to be a fitness expert to catch them.
I'm no longer afraid of being deluded by charlatans. I wouldn't go to a Scientology meeting, because they engage in physical brainwashing, but I can read racists without becoming a racist, read homeopaths without becoming a homeopath, and so on. I've banged my brain against a *lot* of things, and come out more or less clean.
Maybe not everyone can do this (my education certainly helped a lot), but it is *possible*, and I think most people who are comparably educated and bright (e.g. you) can get better at evaluating weird claims themselves and do better than they would with epistemic learned helplessness.
But I know people with science PhDs who sound just as self-aware and confident, but who think global warming and Keynesianism are hoaxes, that there's some huge cover-up going on regarding Benghazi, and that Obama's coming for our guns any day now. (This was before the election, so before the Sandy Hook shooting.)
>Jonah tells me of a guy in Seattle who is now living according to the principles of Islam
I heard it was Catholicism. (Unless there's a second guy in Seattle who believes in Pascal's Wager and destroying nature to reduce animal suffering, which would be...surprising.) But he's taken down that page on his site, so he might have changed his mind.
I've never met him, but as far as I can tell, he does take his ideas seriously.
I thought I heard Islam...or, actually I think I just inferred he chose Islam from hearing the general story and then a separate comment that he's keeping halal, but he might be a Catholic who keeps halal to hedge his bets.
From: (Anonymous) 2013-01-03 02:12 pm (UTC)
"Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging "
The doomsday argument doesn't belong on this list. If you follow what Bostrom calls the "self-indication assumption", the doomsday argument is obviously false. The alternative to the self-indication assumption that Bostrom uses to make the argument seems nonsensical; note, for example, that under his alternative the probabilities change if there is another civilization in parallel with the one you care about.
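To see that concretely, here's a minimal sketch (my own illustration; the population sizes and priors are made-up numbers). Under self-sampling, a low birth rank strongly favors "doom soon"; weighting each hypothesis by how many observers it contains (the self-indication assumption) exactly cancels that shift:

```python
# Two hypotheses about the total number of humans who will ever live,
# with equal priors; you observe your own birth rank.

def posterior(n_small, n_big, rank, sia):
    # Likelihood of your birth rank given total n: 1/n if rank <= n, else 0
    # (you are treated as a uniformly random human).
    like_small = 1.0 / n_small if rank <= n_small else 0.0
    like_big = 1.0 / n_big if rank <= n_big else 0.0
    w_small = w_big = 0.5                 # equal priors
    if sia:
        # SIA: weight each hypothesis by the number of observers it contains.
        w_small *= n_small
        w_big *= n_big
    p_small = w_small * like_small
    p_big = w_big * like_big
    total = p_small + p_big
    return p_small / total, p_big / total

rank = 100e9  # roughly the number of humans born so far
print(posterior(200e9, 200e12, rank, sia=False))  # ~(0.999, 0.001): "doom soon"
print(posterior(200e9, 200e12, rank, sia=True))   # (0.5, 0.5): the shift cancels
```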
The trouble with rationalist skills is that the opposite of every rationalist skill is also a rationalist skill. We have the Inside View, and the Outside View. Overconfidence is a problem, but so is underconfidence. You're supposed to listen to the tiniest note of mental discord, yet sometimes it's necessary to shut loud mental voices out. And while knowing the standard catalog of biases is obviously crucial for the aspiring rationalist, it can also hurt you. Et cetera, et cetera. Furthermore, everything exists for a reason -- including things we've decided are bad. Which means that bad things are inevitably -- or at least typically -- going to be good for something, some of the time. Yet they're still bad. Epistemic learned helplessness may have its uses, but it's certainly not something I would want to celebrate, or -- heaven forbid -- teach. "Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole. "Think for yourself" is still a good meme, despite the fact that for most people it's actually a bad idea and they would do better by just following the right guru. (How do you know which guru is the right one in the first place?)
""Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole."
I think your argument that they err too much in that direction requires more support than you give it here. I think if we relaxed the average person's epistemic helplessness, we would get many new terrorists and homeopaths for each new better-than-average person we got.
Worth reading, for those who haven't: Anna Salamon's "Making your explicit reasoning trustworthy". Key quote: "When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information."
Thomas Aquinas deliberately wrote in the flattest style he could muster, so that his errors would not be swept along by rhetorical charm.
I think you're making this into something bigger than it is. Arguments are mental models of reality. Mental models are incredibly error prone. Don't trust a mental model of reality that hasn't been tested against reality. Know how to test a mental model against reality. (Caveat: some mental models are designed such that their flaws are made obvious, like math but not like formal logic with inductive premises.)
In software engineering, this is called unit testing.
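As a toy illustration of the analogy (a sketch of mine; the model and the observed values are rough, approximate numbers), a mental model's predictions become assertions checked against recorded reality:

```python
# Model: boiling point of water drops roughly 3.3 degrees C per 1000 m of altitude.
def predicted_boiling_point_c(altitude_m):
    return 100.0 - 3.3 * altitude_m / 1000.0

def test_model_against_observations():
    # (altitude in meters, approximate observed boiling point in degrees C)
    observations = [(0, 100.0), (1600, 94.6), (4400, 85.3)]
    for altitude, observed in observations:
        predicted = predicted_boiling_point_c(altitude)
        # The model survives only if it stays within a degree of reality.
        assert abs(predicted - observed) < 1.0, (altitude, predicted, observed)

test_model_against_observations()
```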
Unfortunately, testing against reality is exactly the problem. Usually the evidence leaves a certain number of degrees of freedom (which of various conflicting studies you believe were done well vs. poorly, how you interpret the evidence, etc.).
I agree the questions you can trivially test against reality (like simple physics questions) are the ones that are least vulnerable to crackpottery.
From: (Anonymous) 2013-01-03 02:58 pm (UTC)
On "destroy nature guy": I've previously had the thought that maybe the world would be a better place with far fewer non-human animals in it. What kept me from exploring this possibility further is that, if I'm honest with myself, I don't care all that much about animal suffering.
To give you a better idea of the extent of my (non-)caring: I care enough to have been a lacto-ovo vegetarian for a few years, but then someone persuaded me that eggs may contribute more to animal suffering than beef, and I said, "okay... I care about animal suffering, but not badly enough to go full vegan" and went back to being an omnivore.
Oops. That was my comment. I accidentally failed to select my usual "use Facebook to post" option.
>"If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don't want to hear about it."
Hey, wait a minute -- didn't you say somewhere before that you liked reading contrarian arguments?
I'm not sure what exact quote you're thinking of, but it seems plausible. But I mostly like them when I expect to learn something from them, not when I expect to be bewildered by them.
For 99% of the cases you're worried about, I think a better solution than "don't trust your own reason" is "remember that sound pure *a priori* arguments are very rare, and that believing one person's argument without further investigation is just trusting them to get the empirical stuff right -- not only by not saying anything false, but also by not omitting relevant evidence."
But it seems we have very different formative experiences in this area. My experience reading replies and counter-replies in things like the evolution-creationism debate, or Christian apologetics more generally, is that it does eventually become clear who's right and who's full of shit.
My experiences are similar to celandine13's in this way. I wouldn't necessarily say it's "not that hard," as she does, but "doable eventually with time," yeah.
We may also differ psychologically, in that if I read one thing and *don't* have time to read the replies and counter-replies I find it easy to suspend judgment. Your previous comments about your reluctance to read "Good Calories, Bad Calories" suggest you find this hard, so I'd point out that what you need to do to compensate for that problem doesn't necessarily apply to other people.
The evolution/creation debate is a special case for a few reasons. First, the prior is so skewed in favor of evolution that it's hard to take creationism seriously. Even in the rare cases where there's a superficially good creationist argument (right now this and my uncle's version of irreducible complexity are my two go-to examples of creationists who at least seem to be putting a little effort into their sophistry), I've never been at risk of taking it seriously; I always just think "Wow, these people are quite skilled at sophistry". Other fields where I am less certain of the consensus position do not give me that feeling, and so I get less of an advantage from hindsight bias.

Second, there is a really good community of evolutionists, some of them experts in the field, who devote a lot of effort to point-by-point rebuttals of creationist arguments. This is incredibly valuable; some of the better arguments I don't think I would be able to rebut on my own without a daunting amount of work and research. But this is pretty uncommon; real historians rarely address pseudohistorians (Sagan's critique of Velikovsky was a welcome counterexample), and I've never been able to find a mainstream nutritionist really address the paleo people. I am constantly disappointed in the skeptic community, who tend to be domain non-experts in these fields who fail to take them seriously, who just use ad hominems, or who don't even bother to understand the opposing arguments. (For example, the number of people who try to tell homeopaths they're wrong because their concoctions don't even have an atom of the active ingredient, even though homeopaths understand this and their theories actually depend upon it, is amazing.) So the arguments on many of these topics are very one-sided, which isn't a problem evolution arguments have.

Last of all, I'm surprised you've found Christian apologetics in general to be an easy issue. I've been constantly impressed with tektonics.org, and every time I look at them I end up thinking their defenses of certain Biblical points are much stronger than the atheist attacks upon them. (This could be because atheists massively overattack the Bible; the Bible being mostly historically accurate, or not having that many contradictions, is perfectly consistent with religion being wrong in general.) The camel issue comes to mind as the last time I had this feeling, although apparently that's not tektonics at all and I might be confusing my apologetics sites.
>Also, he wants to destroy nature in order to decrease animal suffering.
I don't. But I do think that what to do about the "Darwinian holocaust" is a troubling problem for consequentialism.
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ seems relevant.
> (This is the correct Bayesian action, by the way. If I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way, and I should ignore it and stick with my prior.)
It's correct, I suspect, only with additional assumptions, like assuming you are either average or above-average, so that accepting new arguments at random hurts you. If you aren't, then you can do better. For example, if you hold 50% false beliefs, but 90% of the arguments you are given are true and 10% are false, and the false ones are exactly as convincing as the true ones, then you'll still improve on your 50% falsity by ignoring convincingness and believing everything you're told.
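A quick simulation of the comment's numbers (a minimal sketch under its stated assumptions: beliefs start 50% false, 90% of offered arguments are true, and convincingness carries no information):

```python
import random

random.seed(0)
N = 100_000

# Start with beliefs that are only 50% accurate.
beliefs = [random.random() < 0.5 for _ in range(N)]  # True = correct belief
print(sum(beliefs) / N)  # ~0.50 accuracy before

# Ignore convincingness and simply adopt every argument you're told,
# where 90% of the arguments happen to be true.
beliefs = [random.random() < 0.9 for _ in range(N)]
print(sum(beliefs) / N)  # ~0.90 accuracy after
```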
It would be a neat trick to acquire 50% false beliefs in an environment where 90% of what you're told is true.
The basic defense against Pascal's Mugging and such is to treat "epsilon" probabilities as equal to zero. So it doesn't matter how severe the offered consequence is since it's getting multiplied by zero anyway.
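A minimal sketch of that defense (my own illustration; the threshold value is an arbitrary assumption): clamp any probability below epsilon to zero before computing expected value, so an arbitrarily huge payoff multiplied by a negligible probability contributes nothing:

```python
EPSILON = 1e-9

def expected_value(outcomes, epsilon=EPSILON):
    # outcomes: list of (probability, payoff) pairs; sub-epsilon
    # probabilities are treated as exactly zero.
    return sum((p if p >= epsilon else 0.0) * payoff for p, payoff in outcomes)

# The mugger offers 10^100 utility at probability 10^-50, in exchange for $5:
print(expected_value([(1e-50, 1e100), (1.0, -5.0)]))  # -5.0: decline the mugging
```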
One of my preferred approaches is construction of a Pascal's Mugging compelling a conflicting course of action. If there's no practical way to judge which "infinity is larger", inaction wins by default.
I very much agree with this post!
Another point that complements yours: people often rationalize to convince themselves of something. People also love to argue and to convince others of things. Smart people are better at this, so they do it more.
So smart people are open to good arguments, because the best arguments they hear are usually their own. They not only lack negative associations from harmful arguments that convinced them in the past, but they have positive associations with arguments they themselves made up, which convinced others.
How high a level did your business friend want to work at? I mean, there's certainly plenty of room to argue about capitalism (I've seen otherwise rational-seeming people passionately arguing that the only possible economic system is free market capitalism, which if done properly is completely impeccable and divinely preserved from all sin -- presumably courtesy of the 'invisible hand' -- and is the one true way to liberty, happiness, democracy, and all good things), but perhaps your friend just means he wants people who will believe, after being presented with the evidence, that option A is the only one that will work in this situation, and that no, it is not because "You don't care about social justice!" or whatever.
>Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications.

When I think about these arguments, I don't actually see how I'd change my life if I believed them, not in any meaningful way. I actually do believe Bostrom's simulation argument, in the sense that my prior for it was ~0%, but now it's more like 60%, which is a huge move. How has it changed my life? It means I can argue with singularitarian atheists in a more entertaining way, by pointing out that if they are in a sim, then someone created it, and that someone can be considered our God for all intents and purposes. But other than debates, I don't think my life is much different. I also don't believe in free will, but there's no particular way to operationalize that belief. (And if there were, could I do it?)

The others are pretty similar. Pascal's Mugging -- which I cheerfully fail to believe, because human reasoning about morality is completely horrible when the numbers get big, so I don't even try -- doesn't affect me in any real way regardless of what I believe. If someone actually tries to Pascal's-mug me, I think that would be an entertaining novelty. And I can't think of why the anthropic doomsday argument should change my behavior either, though I'm very suspicious of an argument that would have been just as convincing but totally wrong in recent history.

So what am I missing? If someone believed those things, how could you tell from their behavior?
From: (Anonymous) 2013-01-03 10:55 pm (UTC)
If you really think you're likely living in a simulation, this essay by Robin Hanson, about how you should change your behaviour if you are, may interest you:
http://www.transhumanist.com/volume7/simulation.html
A most excellent post; that's something I've been thinking about recently too, and I've come to the conclusion that in many cases it's perfectly okay to be closed-minded, or to reject an argument without having a good counter-argument. I hadn't made the link with why atheists and skeptics should probably mellow out when making fun of religious people.
I think *everybody* should study crackpots (or at least, everybody who cares about ideas), so that everybody gets a better idea of how it feels to be convinced by bullshit. That would probably increase the crackpots' audience, but on the other hand it might make people less likely to turn crackpot.
You could probably make interesting exercises by mixing crackpot arguments with mainstream-but-old arguments (so that the latter may not use the latest vocabulary), and have a CFAR exercise about distinguishing them.
------
I don't think the simulation argument is *wrong* so much as irrelevant -- as with Boltzmann brains, even if it's true, my decisions should be the same, so I don't see why I should care. Sure, on one level it's kind of interesting to know that I might be being simulated, but it's not as if it matters much.
I agree. I was thinking of following this up by posting links to some of the most reasonable-sounding and convincing crackpots who have short, accessible persuasive arguments online. Steven from Black Belt Bayesian linked to this a while back, which is a decent example of the sort of thing I'd be looking for. You have any suggestions? As for the exercise, I kind of intended my hermeneutics game to work like this, by making it clear how convincing an argument even smart people can come up with, in a short amount of time, for even randomly chosen positions.
Your story about Velikovsky is pretty much exactly the same as my father's story about reading "Chariots of the Gods". Logically valid arguments are only sound if the premises are true. Most crackpot arguments are indeed pretty close to valid, but they're not sound because they have a false premise. (See also.)
Von Daniken is a special case in that AFAIK he actually did completely make up some data (e.g. he talked about caves with certain artifacts that were just totally imaginary). Most of the good crackpots I have read avoid that, and are just very good at interpreting real data to fit their theories. Dealing with data-fabricators seems to require a totally new level of paranoia, although luckily convincing ones seem to be rare. I never found anything by von Daniken at all convincing, and his theme park was kind of a disappointment.
I was confused to notice you assign female gender to the average high school dropout. Normally people default to male gender unless talking about a population dominated by women; I websearched for "high school dropout rates by gender" and the first hit suggests the gender ratio is pretty even. Have you had a different experience? (Oh -- maybe high school dropouts visiting hospitals are mostly female?)
I assume he was just hewing to the trend of using female pronouns in a gender-neutral sense, and did not mean anything in particular by it.
Everything in this post strikes me as basically correct. The one awful thing I would add is that when most people adopt epistemic learned helplessness, they don't believe it's possible for *anyone* to do better. In particular, they don't believe it's possible for you to do better; they think you're stupid for trying, that if you think you can do better you're claiming social status above theirs, and so on. They have given up on Reason itself, not just on their own use of it, and if you try they will smile down upon you superiorly -- or, for those of a kinder nature, take you aside and give you worried advice about how that whole Reason stuff doesn't actually work. The novice goes astray and says "The Art failed me"; the master goes astray and says "I failed my Art".
My father's response would be, basically, that yes, you *can* Do Better, but only if you go to the effort to become an expert in the domain you're trying to form an opinion on - which, on many topics, would take years of study. Being able to present an argument that a smart layperson would find convincing isn't very good Bayesian evidence; being able to present an argument that a fellow expert would find convincing is both a much harder task and is much stronger evidence in favor of the argument's conclusion.
(Also, as far as I can tell, "become an expert yourself" is a bar that you, Eliezer, appear to have met in your own field(s), despite your lack of formal credentials.)
Yeah. I've basically decided my argument-evaluator is likely quite stupid unless and until its results show definite good results of some sort, even aesthetic. Until then it's just being played by other people's superior simulations of me. Many of the stupidest things I've ever done have basically been because I was convinced of something that I later realised was utter tosh.
My first thought upon reading this was the LW post "Reason as memetic immune disorder" (http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/).
From: (Anonymous) 2013-01-06 04:00 pm (UTC)
Well, I'm quite glad that you came around to sanity.
A brief remark on the passage: "Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications."
That's because those arguments truly are of bullshit-grade reliability.
E.g. in the simulation argument, you make some very fishy assumptions -- such as the assumption that the probability of your existence is equal among all copies of 'something like you'. That would be likely to be wrong through mere lack of any reason why it should be so -- but there's more: you should already start smelling the overpowering stench of bullshit, because your conclusion depends on an arbitrary and fuzzy choice.
That is far more than sufficient to dismiss the persuasiveness of the simulation argument entirely.
But some people have a poor understanding of what is required for dismissal in far mode -- e.g. they require a persuasive argument in favour of some other set of assumptions. That puts bullshit at too much of an advantage.
The doomsday argument is even worse in this regard.
The problem with this is that often totally valid conclusions are explained by bullshitting, and due to the social vetting process, people tend to be exposed to a bunch of true conclusions supported by bullshit.
From: Dmytry Lavrov 2013-01-07 11:21 am (UTC)
I wonder if you'd call not driving while intoxicated 'learned helplessness'
Taking ideas seriously while being ignorant and/or stupid is like driving while intoxicated. Nothing to be glad about. It is a bit difficult to ingrain into people - in their own minds, the drunks are sober...