Epistemic learned helplessness [Jan. 3rd, 2013|01:10 am]
Scott

[Epistemic Status | Probably I'm just coming at the bog-standard idea of compartmentalization from a different angle here. I don't know if anyone else has noted how compartmentalization is a good thing before, but I bet they have.]

A friend in business recently complained about his hiring pool, saying that he couldn't find people with the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don't like it. He told me a good portion of the point of CfAR was to either find or create people who would believe something after it had been proven to them.

And I nodded my head, because it sounded reasonable enough, and it wasn't until a few hours later that I thought about it again and went "Wait, no, that would be the worst idea ever."

I don't think I'm overselling myself too much to expect that I could argue circles around the average high school dropout. Like I mean that on almost any topic, given almost any position, I could totally demolish her and make her look like an idiot. Reduce her to some form of "Look, everything you say fits together and I can't explain why you're wrong, I just know you are!" Or, more plausibly, "Shut up I don't want to talk about this!"

And there are people who can argue circles around me. Not on any topic, maybe, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn't so much the lucidity of the consensus view as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah's Flood couldn't have been a cultural memory both of the fall of Atlantis and of a change in the Earth's orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology rather than the universally reviled crackpots who write books about Venus being a comet.

I guess you could consider this a form of epistemic learned helplessness, where I know that any attempt to evaluate the arguments is just going to be a bad idea, so I don't even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don't want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

(This is the correct Bayesian action, by the way. If I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way, and I should ignore it and stick with my prior.)
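
To spell out the arithmetic, here is a minimal sketch in Python; the particular prior and likelihood numbers are made up for illustration:

    # Bayes' rule: P(claim | convincing) =
    #   P(convincing | claim) * P(claim) /
    #   [P(convincing | claim) * P(claim) + P(convincing | not claim) * P(not claim)]
    def posterior(prior, p_convincing_if_true, p_convincing_if_false):
        numerator = prior * p_convincing_if_true
        return numerator / (numerator + (1 - prior) * p_convincing_if_false)

    # If false arguments sound exactly as convincing as true ones, the
    # likelihood ratio is 1 and the posterior equals the prior:
    print(posterior(0.05, 0.9, 0.9))  # 0.05 - stick with the prior
    # Convincingness counts as evidence only when it tracks truth:
    print(posterior(0.05, 0.9, 0.3))  # ~0.14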

I consider myself lucky in that my epistemic learned helplessness is circumscribed; there are still cases where I will trust the evidence of my own reason. In fact, I trust it in most cases other than very carefully constructed arguments known for their deceptiveness in fields I know little about. But I think the average high school dropout both doesn't and shouldn't. Anyone anywhere - politicians, scammy businessmen, smooth-talking romantic partners - would be able to argue her into anything. And so she takes the obvious and correct defensive maneuver - she will never let anyone convince her of any belief that sounds "weird" (note that, if you grow up in the right circles, beliefs along the lines of astrology not working sound "weird").

This is starting to sound a lot like ideas I've already heard centering around compartmentalization and taking ideas seriously. The only difference between their presentation and mine is that I'm saying that for 99% of people, 99% of the time, taking ideas seriously is a terrible idea. Or, at the very least, it should be the last skill you learn, after you've learned every other skill that allows you to know which ideas are or are not correct.

The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to fight your natural and detrimental tendency not to take ideas seriously. I think it's at least possible they co-occur because you have to be really smart in order for taking ideas seriously to be even not-immediately-disastrous. You have to be really smart not to have been talked into enough terrible arguments to develop epistemic learned helplessness.

Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications.

A friend tells me of a guy who once accepted fundamentalist religion because of Pascal's Wager. I will provisionally admit that this person takes ideas seriously. Everyone else loses.

Which isn't to say that some people don't do better than others. Terrorists seem pretty good in this respect. People used to talk about how terrorists must be very poor and uneducated to fall for militant Islam, and then someone did a study and found that they were disproportionately well-off, college-educated people (many were engineers). I've heard a few good arguments in this direction before, things like how engineering trains you to have a very black-and-white right-or-wrong view of the world based on a few simple formulae, and how this meshes with fundamentalism better than it meshes with subtle liberal religious messages.

But to these I would add that a sufficiently smart engineer has never been burned by arguments above his skill level before, has never had any reason to develop epistemic learned helplessness. If Osama comes up to him with a really good argument for terrorism, he thinks "Oh, there's a good argument for terrorism. I guess I should become a terrorist," as opposed to "Arguments? You can prove anything with arguments. I'll just stay right here and not do something that will get me ostracized and probably killed."

Responsible doctors are at the other end of the spectrum from terrorists in this regard. I once heard someone rail against how doctors totally ignored all the latest and most exciting medical studies. The same person, practically in the same breath, then railed against how 50% to 90% of medical studies are wrong. These two observations are not unrelated. Not only are there many terrible studies, but pseudomedicine (not the stupid homeopathy type, but the type that links everything to some obscure chemical on an out-of-the-way metabolic pathway) has, for me, proven much like pseudohistory: unless I am an expert in that particular field of medicine (biochemistry has a disproportionate share of these people and is also an area where I'm weak), it's hard not to take its claims seriously, even when they're super-wrong.

I have developed a healthy dose of epistemic learned helplessness, and the medical establishment offers a shiny tempting solution - first, a total unwillingness to trust anything, no matter how plausible it sounds, until it's gone through an endless cycle of studies and meta-analyses, and second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them. Part of the reason Good Calories, Bad Calories was so terrifying is that it made a strong case that this establishment can be very very wrong, and I don't have good standards by which to decide whether to dismiss it as another Velikovsky, or whether to just accept that the establishment is totally untrustworthy and, as doctors sometimes put it, AMYOYO. And if the latter, how much establishment do I have to jettison and how much can be saved? Do I have to actually go through all those papers purporting to prove homeopathy with an open mind?

I am glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they're also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

But I'm also glad epistemic learned helplessness exists. It seems like a pretty useful social safety valve most of the time.

Comments:

From: celandine13
2013-01-03 01:19 pm (UTC)
I love this essay.

I love it because it's an articulation of a serious argument that I respect but still end up ultimately opposed to.

I've spent a lot of time considering "What should a person do about weird claims?" The stuff that *sounds* like the ideas of a crackpot, but potentially a crackpot so clever that you can't see a hole in his reasoning -- and, also, potentially not a crackpot at all but an insightful, correct thinker. I used to have roughly the same conclusion as you. And roughly the same problem with a tendency to believe the last thing I read, and along with it, a fear of reading things that might delude me.

But the thing is, I've come to the conclusion that it's not actually that hard to make your own judgments about ideas. I was confused about strong AI for a while. What did I do? I read a bunch of papers and textbooks. I talked to my friends who were AI researchers. I still don't *really* know what's going on because I never really learned mathematical logic, but it's a hell of a lot better than a black box. I know *some* mathematics, and I can tell the difference between a proof and a hand-wavy argument, and I've had independent confirmation of the falseness of the ideas I was skeptical about...I'm pretty sure, sure enough to go on with my life, that my picture of "what's up with AI" is more or less accurate.

I'm learning how to do this with biomedical research papers. I am not a biologist so I have to black-box a lot, but not *everything*. I can tell that claims with five conjunctive hypotheses are less likely than claims with one. I can tell when a study was done with 15 subjects or 15,000. I can certainly evaluate statistical methodology. I can come to estimates of my true beliefs -- not high confidence, but not all that biased, and way better than learned helplessness.
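
A toy illustration of the conjunction point, where the 0.8 chance per hypothesis is a made-up figure:

    p = 0.8        # assumed chance that any single supporting hypothesis holds
    print(p)       # 0.8   - a claim resting on one hypothesis
    print(p ** 5)  # ~0.33 - a claim resting on five conjunctive hypotheses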

I don't go to the trouble of doing this with everything. I haven't checked out climate change skeptics, because I don't know fluid dynamics and I'm a little scared of the work involved in learning. But mostly, my heuristic is, "When confronted with a weird claim that would be really interesting if true and isn't immediately obvious as bullshit, it's worth checking Wikipedia and reading one scholarly paper. If I'm still uncertain and still interested, it's worth reading several more scholarly papers and asking experts I know."

A lot of bunk is not that hard to debunk. I looked through an 1880 book of materia medica (herbal medicine) once; most treatments were not just useless but poisonous, and it took 30 seconds of googling to find that out. (Oil of tansy will *fuck you up*, ladies and gentlemen.)

A good all-purpose scientist can more or less trust his/her bullshit-o-meter. You should know where you're least able to evaluate claims explicitly (for me, that's physics, chemistry, and anything to do with war or foreign policy) and use implicit meta-techniques (were their results reproducible? do they make a lot of conjunctive claims? that sort of thing). But often, I can just *go in and check the math.* Tim Ferriss makes arithmetic errors in his books. You don't have to be a fitness expert to catch them.

I'm no longer afraid of being deluded by charlatans. I wouldn't go to a Scientology meeting, because they engage in physical brainwashing, but I can read racists without becoming a racist, read homeopaths without becoming a homeopath, and so on. I've banged my brain against a *lot* of things, and come out more or less clean.

Maybe not everyone can do this (my education certainly helped a lot), but it is *possible*, and I think most people who are comparably educated and bright (e.g. you) can get better at evaluating weird claims themselves and do better than they would with epistemic learned helplessness.

From: mindstalk
2013-01-03 04:05 pm (UTC)
But I know people with science PhDs who sound just as self-aware and confident, but who think global warming and Keynesianism are hoaxes, that there's some huge cover-up going on regarding Benghazi, and that Obama's coming for our guns any day now. (This was before the election, so before the Sandy Hook mass shooting.)

From: Julia Wise
2013-01-03 01:35 pm (UTC)
>Jonah tells me of a guy in Seattle who is now living according to the principles of Islam

I heard it was Catholicism. (Unless there's a second guy in Seattle who believes in Pascal's Wager and destroying nature to reduce animal suffering, which would be...surprising.) But he's taken down that page on his site, so he might have changed his mind.

I've never met him, but as far as I can tell, he does take his ideas seriously.

From: squid314
2013-01-04 09:08 am (UTC)
I thought I heard Islam...or, actually I think I just inferred he chose Islam from hearing the general story and then a separate comment that he's keeping halal, but he might be a Catholic who keeps halal to hedge his bets.

From: (Anonymous)
2013-01-03 02:12 pm (UTC)
"Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging "

The doomsday argument doesn't belong on this list. If you follow what Bostrom calls the "self-indication assumption", then the doomsday argument is obviously false. The alternative to the self-indication assumption that Bostrom uses to make the argument seems nonsensical; note, for example, that if there is another civilization in parallel with the one you care about, the probabilities change under his alternative.

From: komponisto
2013-01-03 02:35 pm (UTC)
The trouble with rationalist skills is that the opposite of every rationalist skill is also a rationalist skill. We have the Inside View, and the Outside View. Overconfidence is a problem, but so is underconfidence. You're supposed to listen to the tiniest note of mental discord, yet sometimes it's necessary to shut loud mental voices out. And while knowing the standard catalog of biases is obviously crucial for the aspiring rationalist, it can also hurt you. Et cetera, et cetera.

Furthermore, everything exists for a reason -- including things we've decided are bad. Which means that bad things are inevitably -- or at least typically -- going to be good for something, some of the time. Yet they're still bad.

Epistemic learned helplessness may have its uses, but it's certainly not something I would want to celebrate, or -- heaven forbid -- teach. "Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole.

"Think for yourself" is still a good meme, despite the fact that for most people it's actually a bad idea and they would do better by just following the right guru. (How do you know which guru is the right one in the first place?)





(Reply) (Thread)
[User Picture]From: squid314
2013-01-04 09:11 am (UTC)
""Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole."

I think your argument that they err too much in that direction requires more support than you give it here. I think if we relaxed the average person's epistemic learned helplessness, we would get many new terrorists and homeopaths for each new better-than-average person we got.

From: komponisto
2013-01-03 02:52 pm (UTC)
Worth reading, for those who haven't: Anna Salamon's Making your explicit reasoning trustworthy. Key quote:

"When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information."

From: marycatelli
2013-01-04 12:23 am (UTC)
Thomas Aquinas deliberately wrote in the flattest style he could muster, so that his errors would not be swept along in rhetorical charms.

From: siodine
2013-01-03 02:58 pm (UTC)
I think you're making this into something bigger than it is. Arguments are mental models of reality. Mental models are incredibly error prone. Don't trust a mental model of reality that hasn't been tested against reality. Know how to test a mental model against reality. (Caveat: some mental models are designed such that their flaws are made obvious, like math but not like formal logic with inductive premises.)

In software engineering, this is called unit testing.
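
A minimal sketch of that analogy using Python's built-in unittest; the toy model and the "measurements" it is tested against are invented for illustration:

    import unittest

    def predicted_fall_time(height_m):
        """Toy mental model: free-fall time from height_m, ignoring air
        resistance: t = sqrt(2h / g)."""
        return (2 * height_m / 9.81) ** 0.5

    class TestModelAgainstReality(unittest.TestCase):
        def test_short_drop_matches_measurement(self):
            # "Reality": a hypothetical measured 1 m drop taking ~0.45 s.
            self.assertAlmostEqual(predicted_fall_time(1.0), 0.45, places=2)

        def test_feather_exposes_model_limits(self):
            # A feather hypothetically measured at ~1.5 s would falsify the
            # no-air-resistance premise; a failing test flags the flawed model.
            self.assertNotAlmostEqual(predicted_fall_time(1.0), 1.5, places=1)

    if __name__ == "__main__":
        unittest.main()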

From: squid314
2013-01-04 09:13 am (UTC)
Unfortunately, testing against reality is exactly the problem. Usually the evidence gives you a certain number of degrees of freedom (which of various conflicting studies you believe were done well vs. poorly, how you interpret the evidence, etc.).

I agree the questions you can trivially test against reality (like simple physics questions) are the ones that are least vulnerable to crackpottery.

From: (Anonymous)
2013-01-03 02:58 pm (UTC)
On "destroy nature guy": I've previously had the thought that maybe the world would be a better place with far fewer non-human animals in it. What kept me from exploring this possibility further is that, if I'm honest with myself, I don't care all that much about animal suffering.

To give you a better idea of the extent of my (non-)caring: I care enough to have been a lacto-ovo vegetarian for a few years, but then someone persuaded me that eggs may contribute more to animal suffering than beef, and I said, "okay... I care about animal suffering, but not badly enough to go full vegan" and went back to being an omnivore.

From: Chris Hallquist
2013-01-03 02:59 pm (UTC)
Oops. That was my comment. I accidentally failed to select my usual "use Facebook to post" option.

From: komponisto
2013-01-03 03:11 pm (UTC)
>"If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don't want to hear about it."

Hey, wait a minute -- didn't you say somewhere before that you liked reading contrarian arguments?

From: squid314
2013-01-04 09:15 am (UTC)
I'm not sure what exact quote you're thinking of, but it seems plausible. But I mostly like them when I expect to learn something from them, not when I expect to be bewildered by them.

From: Chris Hallquist
2013-01-03 03:16 pm (UTC)
For 99% of the cases you're worried about, I think a better solution than "don't trust your own reason" is "remember that sound, pure *a priori* arguments are very rare, and that believing one person's argument without further investigation is just trusting them to get the empirical stuff right, not only by not saying anything false but also by not omitting relevant evidence."

But it seems we have very different formative experiences in this area. My experience reading replies and counter-replies in things like the evolution-creationism debate, or Christian apologetics more generally, is that it does eventually become clear who's right and who's full of shit.

My experiences are similar to celandine13's in this way. I wouldn't necessarily say it's "not that hard," as she does, but "doable eventually with time," yeah.

We may also differ psychologically, in that if I read one thing and *don't* have time to read the replies and counter-replies I find it easy to suspend judgment. Your previous comments about your reluctance to read "Good Calories, Bad Calories" suggest you find this hard, so I'd point out that what you need to do to compensate for that problem doesn't necessarily apply to other people.

From: squid314
2013-01-04 09:28 am (UTC)
The evolution/creation debate is a special case for a few reasons.

First, the prior is so skewed in favor of evolution that it's hard to take creationism seriously. Even in the rare cases where there's a superficially good creationist argument (right now this and my uncle's version of irreducible complexity are my two go-to examples of creationists who at least seem to be putting a little effort into their sophistry), I've never been at risk of taking it seriously; I always just think "Wow, these people are quite skilled at sophistry". Other fields where I am less certain of the consensus position do not give me that feeling, and so I get less of an advantage from hindsight bias.

Second, there is a really good community of evolutionists, some of them experts in the field, who devote a lot of effort to point-by-point rebuttals of creationist arguments. This is incredibly valuable; some of the better arguments I don't think I would be able to rebut on my own without a daunting amount of work and research. But this is pretty uncommon; real historians rarely address pseudohistorians (Sagan's critique of Velikovsky was a welcome counterexample), and I've never been able to find a mainstream nutritionist really address the paleo people. I am constantly disappointed in the skeptic community, who tend to be domain non-experts in these fields who fail to take them seriously, who just use ad hominems, or who don't even bother to understand the opposing arguments (for example, the number of people who try to tell homeopaths they're wrong because their concoctions don't even have an atom of the active ingredient, even though homeopaths understand this and their theories actually depend upon it, is amazing). So the arguments on many of these topics are very one-sided, which isn't a problem evolution arguments have.

But last of all, I'm surprised you've found Christian apologetics in general to be an easy issue. I've been constantly impressed with tektonics.org, and every time I look at them I end up thinking their defenses of certain Biblical points are much stronger than the atheist attacks upon them (this could be because atheists massively overattack the Bible; the Bible being mostly historically accurate, or not having that many contradictions, is perfectly consistent with religion being wrong in general). The camel issue comes to mind as the last time I had this feeling, although apparently that's not tektonics at all and I might be confusing my apologetics sites.

From: simplicio1
2013-01-03 04:38 pm (UTC)
>Also, he wants to destroy nature in order to decrease animal suffering.

I don't. But I do think that what to do about the "Darwinian holocaust" is a troubling problem for consequentialism.

From: gwern branwen
2013-01-03 05:44 pm (UTC)
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ seems relevant.

> (This is the correct Bayesian action, by the way. If I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way, and I should ignore it and stick with my prior.)

It's correct, I suspect, only with additional assumptions, like assuming you are either average or above-average so accepting new arguments at random hurts you. If you aren't, then you can do better. For example, if you hold 50% false beliefs, but 90% of arguments you are given are true and 10% are false, and the false are exactly as convincing as the true, then you'll still improve your 50% falsity by ignoring convincingness and believing everything you're told.
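
A minimal simulation of those numbers (the assumption that only 30% of your beliefs ever get argued about is illustrative):

    import random

    random.seed(0)
    N = 100_000
    # Start out holding 50% false beliefs (True = a correct belief):
    beliefs = [random.random() < 0.5 for _ in range(N)]
    # Suppose 30% of those beliefs get argued about, you believe every
    # argument regardless of convincingness, and 90% of arguments are true:
    for i in random.sample(range(N), 30_000):
        beliefs[i] = random.random() < 0.9
    print(sum(beliefs) / N)  # ~0.62, up from ~0.50 despite total credulity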

From: gjm11
2013-01-03 07:23 pm (UTC)
It would be a neat trick to acquire 50% false beliefs in an environment where 90% of what you're told is true.

From: selenite
2013-01-03 05:47 pm (UTC)
The basic defense against Pascal's Mugging and such is to treat "epsilon" probabilities as equal to zero. So it doesn't matter how severe the offered consequence is since it's getting multiplied by zero anyway.
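
As a sketch of that decision rule (the cutoff value itself is arbitrary):

    EPSILON = 1e-10  # arbitrary cutoff for "epsilon" probabilities

    def expected_utility(probability, payoff):
        # Round negligible probabilities down to exactly zero, so that no
        # payoff, however astronomical, can buy its way past the cutoff.
        if probability < EPSILON:
            return 0.0
        return probability * payoff

    print(expected_utility(1e-30, 10 ** 100))  # 0.0 - the mugger gets nothing
    print(expected_utility(0.3, 100))          # 30.0 - ordinary bets unaffected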

From: dogofjustice
2013-01-03 08:06 pm (UTC)
One of my preferred approaches is construction of a Pascal's Mugging compelling a conflicting course of action. If there's no practical way to judge which "infinity is larger", inaction wins by default.

From: danarmak
2013-01-03 07:01 pm (UTC)
I very much agree with this post!

Another point that complements yours: people often rationalize to convince themselves of something. People also love to argue and to convince others of things. Smart people are better at this, so they do it more.

So smart people are open to good arguments, because the best arguments they hear are usually their own. They not only lack negative associations from harmful arguments that convinced them in the past, but they have positive associations with arguments they themselves made up, which convinced others.

From: deiseach
2013-01-03 07:07 pm (UTC)
How high a level did your business friend want to work at? I mean, there's certainly plenty of room to argue about capitalism (I've seen otherwise rational-seeming people passionately arguing that the only possible economic system is free market capitalism, which if done properly is completely impeccable and divinely preserved from all sin - presumably courtesy of the 'invisible hand' - and is the only one true way to liberty, happiness, democracy and all good things). But perhaps your friend just means he wants people who will believe, after being presented with the evidence, that option A is the only one that will work in this situation, and that no, it is not because "You don't care about social justice!" or whatever.

From: dmorr
2013-01-03 07:32 pm (UTC)
>Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications.

When I think about these arguments, I don't actually see how I'd change my life if I believed them, not in any meaningful way.

I actually do believe in Bostrom's simulation argument, in the sense that my prior for that was ~0%, but now it's more like 60%, which is a huge move. How has it changed my life?

It means I can argue with singularitarian atheists in a more entertaining way, by pointing out that if they are in a sim, then someone created it, and that someone can be considered our God for all intents and purposes.

But other than debates, I don't think my life is much different. I also don't believe in free will, but there's no particular way to operationalize that belief. (And if there were, could I do it?)

The others are pretty similar. Pascal's Mugging, which I cheerfully fail to believe (human reasoning about morality is completely horrible when the numbers get big, so I don't even try), doesn't affect me in any real way regardless of what I believe. If someone actually tries to Pascal's-mug me, I think that would be an entertaining novelty.

And I can't think of why the anthropic doomsday argument should change my behavior either, though I'm very suspicious of an argument that would have been just as convincing but totally wrong in recent history.

So what am I missing? If someone believed those things, how could you tell from their behavior?

From: (Anonymous)
2013-01-03 10:55 pm (UTC)
If you really think you're likely living in a simulation, this essay by Robin Hanson, about how you should change your behaviour if you are, may interest you.

http://www.transhumanist.com/volume7/simulation.html

From: nomophilos
2013-01-03 09:23 pm (UTC)
A most excellent post; that's something I've been thinking about recently too, and I've come to the conclusion that in many cases it's perfectly okay to be close-minded, or to reject an argument without having a good counter-argument. I hadn't made the link with why atheists and skeptics should probably mellow out when making fun of religious people.

I think *everybody* should study crackpots (or at least, everybody who cares about ideas), so that everybody gets a better idea of how it feels to be convinced by bullshit. That would probably increase the crackpots' audience, but on the other hand it might make people less likely to turn crackpot.

You could probably make interesting exercises by mixing crackpot arguments and mainstream-but-old arguments (so that they may not use the latest vocabulary), and have a CFAR exercise about distinguishing them.

------

I don't think the simulation argument is *wrong* so much as irrelevant - as with Boltzmann brains, even if it's true, my decisions should be the same, so I don't see why I should care. Sure, on one level it's kind of interesting to know that I might be being simulated, but it's not as if it mattered much.

From: squid314
2013-01-04 09:36 am (UTC)
I agree. I was thinking of following this up by posting links to some of the most reasonable-sounding and convincing crackpots who have short, accessible persuasive arguments online. Steven from Black Belt Bayesian linked to this a while back, which is a decent example of the sort of thing I'd be looking for. You have any suggestions?

As for the exercise, I kind of intended my hermeneutics game to work like this, making it clear how convincing an argument smart people can come up with, in a short amount of time, for even randomly chosen positions.

From: cronodas
2013-01-04 01:18 am (UTC)
Your story about Velikovsky is pretty much exactly the same as my father's story about reading "Chariots of the Gods".

Logically valid arguments are only sound if the premises are true. Most crackpot arguments are indeed pretty close to valid, but they're not sound because they have a false premise.

(See also.)

From: squid314
2013-01-04 09:38 am (UTC)
Von Daniken is a special case in that AFAIK he actually did completely make up some data (e.g. he talked about caves with certain artifacts that were totally imaginary).

Most of the good crackpots I have read avoid that, and are just very good at interpreting real data to fit their theories. Dealing with data-fabricators seems to require a totally new level of paranoia, although luckily convincing ones seem to be rare.

I never found anything by von Daniken at all convincing, and his theme park was kind of a disappointment.

From: platypuslord
2013-01-04 02:45 am (UTC)
I was confused to notice you assign female gender to the average high school dropout. Normally people default to male gender unless talking about a population dominated by women; I websearched for "high school dropout rates by gender" and the first hit suggests the gender ratio is pretty even. Have you had a different experience?

(Oh -- maybe high school dropouts visiting hospitals are mostly female?)

From: Caio Camargo
2013-01-04 04:00 am (UTC)
I assume he was just hewing to the trend of using the female gender pronoun in a gender-neutral sense, and did not mean anything in particular by it.

From: Eliezer Yudkowsky
2013-01-04 12:54 pm (UTC)
Everything in this post strikes me as basically correct. The one awful thing I would add is that when most people adopt epistemic learned helplessness, they don't believe it's possible for *anyone* to do better. In particular, they don't believe it's possible for you to do better; they think you're stupid for trying, that if you think you can do better you're claiming social status above theirs, and so on. They have given up on Reason itself, not on their own use of it, and if you try they will smile down upon you superiorly - or, for those of a kinder nature, take you aside and give you worried advice about how that whole Reason stuff doesn't actually work. The novice goes astray and says "The Art failed me"; the master goes astray and says "I failed my Art".

From: cronodas
2013-01-05 04:56 am (UTC)
My father's response would be, basically, that yes, you *can* Do Better, but only if you go to the effort to become an expert in the domain you're trying to form an opinion on - which, on many topics, would take years of study. Being able to present an argument that a smart layperson would find convincing isn't very good Bayesian evidence; being able to present an argument that a fellow expert would find convincing is both a much harder task and is much stronger evidence in favor of the argument's conclusion.

(Also, as far as I can tell, "become an expert yourself" is a bar that you, Eliezer, appear to have met in your own field(s), despite your lack of formal credentials.)

From: reddragdiva
2013-01-05 11:40 pm (UTC)
Yeah. I've basically decided my argument-evaluator is likely quite stupid, unless and until it shows definite good results of some sort, even aesthetic ones. Until then it's just being played by other people's superior simulations of me. Many of the stupidest things I've ever done have basically been because I was convinced of something that I later realised was utter tosh.

From: Michael Wiebe
2013-01-06 01:53 am (UTC)
My first thought upon reading this was the LW post "Reason as memetic immune disorder" (http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/).

From: (Anonymous)
2013-01-06 04:00 pm (UTC)

Well, I'm quite glad that you came around to sanity.

A brief remark on "Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications."

That's because those arguments truly are of bullshit-grade reliability.

E.g. in the simulation argument, you make some very fishy assumptions - such as the assumption that the probability of your existence is equal among all copies of 'something like you'. That would be likely to be wrong through a mere lack of any reason why it should be so - but there's more - you should already start smelling the overpowering stench of bullshit, because your conclusion depends on an arbitrary and fuzzy choice.

That is far more than a sufficient argument to dismiss the persuasiveness of the simulation argument entirely.

But some people have a poor understanding of what is required for dismissal in far mode: e.g. they require a persuasive argument in favour of some other set of assumptions. That puts bullshit at too much of an advantage.

The doomsday argument is even worse in this regard.

The problem with this is that often totally valid conclusions are explained by bullshitting, and due to the social vetting process, people tend to be exposed to a bunch of true conclusions supported by bullshit.

From: Dmytry Lavrov
2013-01-07 11:21 am (UTC)

I wonder if you'd call not driving while intoxicated 'learned helplessness'.

Taking ideas seriously while being ignorant and/or stupid is like driving while intoxicated. Nothing to be glad about. It is a bit difficult to ingrain into people - in their own minds, the drunks are sober...