Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: TetrahedronOmega 18 June 2015 06:32:45PM 0 points [-]

Hi, Quanticle. You state that "there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887." For the details on how General Relativity is inherently an æther theory, see physicist and mathematician Prof. Frank J. Tipler and mathematician Maurice J. Dupré's following paper:

  • Maurice J. Dupré and Frank J. Tipler, "General Relativity as an Æther Theory", International Journal of Modern Physics D, Vol. 21, No. 2 (Feb. 2012), Art. No. 1250011, 16 pp., doi:10.1142/S0218271812500113, bibcode: 2012IJMPD..2150011D, http://webcitation.org/6FEvt2NZ8 . Also at arXiv:1007.4572, July 26, 2010, http://arxiv.org/abs/1007.4572 .
Comment author: TetrahedronOmega 18 June 2015 06:31:31PM 0 points [-]

Hi, Gwern. You asked, "... MWI and quantum mechanics implied by Newton? Really?" Yes. The Hamilton-Jacobi Equation, the most powerful formulation of Newtonian mechanics, is, like the Schrödinger Equation, a multiverse equation. Quantum Mechanics is the unique specialization of the Hamilton-Jacobi Equation in which determinism is maintained: the Hamilton-Jacobi Equation itself is indeterministic, because when particle trajectories cross paths a singularity is produced (i.e., the values in the equations become infinite), and so it is not possible to predict, even in principle, what happens after that. On the inherent multiverse nature of Quantum Mechanics, see physicist and mathematician Prof. Frank J. Tipler's following paper:

Regarding the universe necessarily being temporally closed according to the known laws of physics: all the proposed solutions to the black hole information issue, except for Prof. Tipler's Omega Point cosmology, share a common feature: they rely on proposed new laws of physics that have never been experimentally confirmed, and that indeed violate the known laws of physics. Prof. Stephen Hawking's paper on the black hole information issue, for example, depends on the conjectured String Theory-based anti-de Sitter space/conformal field theory correspondence (AdS/CFT correspondence). (See S. W. Hawking, "Information loss in black holes", Physical Review D, Vol. 72, No. 8 [Oct. 15, 2005], Art. No. 084013, 4 pp.) Hence, the end of the universe in finite proper time via collapse, before a black hole completely evaporates, is required if unitarity is to remain unviolated, i.e., if General Relativity and Quantum Mechanics--from which the proofs of Hawking radiation derive--are true statements of how the world works.

Pertaining to your comments doubting "a universal meta-ethical principal that future AIs will obey!": Prof. Tipler is quite correct in his aforecited discussion of ethics. To understand his point, one must keep in mind that the Omega Point cosmology is a mathematical theorem per the known physical laws (viz., the Second Law of Thermodynamics, General Relativity, and Quantum Mechanics) which requires sapient life (in the form of, e.g., immortal superintelligent human-mind computer-uploads and artificial intelligences) to take control of all matter in the universe, to eventually force the collapse of the universe, and to cause the computational resources of the universe (in terms of both processor speed and memory space) to diverge to infinity as the universe collapses into a final singularity, termed the Omega Point. Said Omega Point cosmology is also an intrinsic component of the Feynman-DeWitt-Weinberg quantum gravity/Standard Model Theory of Everything (TOE), which correctly describes and unifies all the forces in physics, and which is itself mathematically forced by the aforesaid known physical laws. Thus, existence itself selects which ethics is correct in order for existence to exist. Individual actors, and individuals acting in groups, can of course go rogue, but there is a limit to how bad things can get: e.g., life collectively cannot choose to extirpate itself.

You go on to state, "there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler's Singularity." Yet if Y is infinite, then this presents no problem to literal immortality. Traditional Christian theology has maintained that Y is indeed infinite.

Interestingly, the Omega Point final singularity has all the unique properties (quiddities) claimed for God in the traditional religions. For much more on Prof. Tipler's Omega Point cosmology and the details on how it uniquely conforms to, and precisely matches, the cosmology described in the New Testament, see my following article, which also addresses the societal implications of the Omega Point cosmology:

Additionally, in the below resource are different sections which contain some helpful notes and commentary by me pertaining to multimedia wherein Prof. Tipler explains the Omega Point cosmology and the Feynman-DeWitt-Weinberg quantum gravity/Standard Model TOE.

Comment author: ChristianKl 18 June 2015 06:22:56PM 0 points [-]

I don't think it's true. Take the reverse case: can you tell that an idea is bad without executing it?

I agree that there are ideas for which there are obvious reasons the idea is bad, but most of the time there isn't that certainty.

Many successful companies, such as Airbnb or Pinterest, had a hard time raising money because investors thought they were bad ideas.

One element of a good startup idea is that there's little direct competition. If the idea is obvious, there's usually competition.

Comment author: 27chaos 18 June 2015 06:16:37PM *  0 points [-]

I like this. I'm interested in almost the opposite, amusingly: what types of situations are there where "planners" (natural or artificial or human) can impose a top down solution that will outperform bottom up processes like evolution?

Comment author: 27chaos 18 June 2015 06:09:19PM 0 points [-]

The differences between climate and meteorological models should only increase someone's confidence in the relative capabilities of climate science, so the analogy seems apt despite these differences.

Comment author: 27chaos 18 June 2015 06:07:51PM *  0 points [-]

How do meteorologists predict the weather? By using computer models. Weather is more chaotic and short term than climate so there are obviously differences between the fields, but this should illustrate that you're being a little harsh.

Comment author: 27chaos 18 June 2015 06:05:01PM 0 points [-]

You might like this: http://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1004&context=eng_faculty_pubs

Someone else posted it to this site originally; I have no recollection who, but we are all indebted to them.

Comment author: 27chaos 18 June 2015 06:03:38PM 0 points [-]

Send this to main!

Comment author: DTX 18 June 2015 05:55:36PM *  0 points [-]

Did you know about this?

DARPA SUBNETS

The SUBNETS vision is distinct from current therapeutic approaches in that it seeks to create an implanted, closed-loop diagnostic and therapeutic system for treating, and possibly even curing, neuropsychological illness. That vision is premised on the understanding that brain function—and dysfunction, in the case of neuropsychological illness—plays out across distributed neural systems, as opposed to being strictly relegated to distinct anatomical regions of the brain. The program also aims to take advantage of neural plasticity, a feature of the brain by which the organ’s anatomy and physiology alter over time to support normal brain function.

Sounds pretty straightforwardly like programming a brain.

Comment author: DTX 18 June 2015 05:46:22PM 0 points [-]

Just to pimp my school, Georgia Tech offers a free course through Udacity in Knowledge-Based AI that involves programming an agent to take the Raven's progressive matrices test. I never took the course, but I wanna say from hearing other students that somewhere around 80 is the current state of the art (that's not an IQ and I'm not sure how to translate a Raven's score to an IQ).

Comment author: Lumifer 18 June 2015 05:43:46PM 0 points [-]

Ah. In this case I concur -- I think that "most sane people are atheists" is... not quite true.

Comment author: jsteinhardt 18 June 2015 05:35:27PM 0 points [-]

My response was directed more at the "most sane people" part.

Comment author: DTX 18 June 2015 05:35:21PM 0 points [-]

This seems like a decent explanation of why I change my own mind as frequently as I do. If you're just tracking my history of Internet comments, I probably sound all over the place, but it's really me going from 54% certain of position X to 52% certain of not X, and it's hard to properly express that in an environment prone to rhetorical flourish and a debate atmosphere where you feel like you really really can't back down or you'll look weak. Most of the interesting things out there are very hard to legitimately be certain of. Factor in availability bias and it's easy to find yourself arguing for something you're really on the fence about just because you read a good argument for it a few hours ago (but not really any better than the argument for the opposite position a few days ago), then you make a good argument because you're good at arguing, and you just convinced yourself without actually introducing any new evidence.

And now I'm trapped in an infinite meta-regress wondering if I actually believe what I just wrote or it just sounds plausible.

Comment author: Lumifer 18 June 2015 05:32:44PM 0 points [-]

You can, and what you'll discover is that they are abysmally slow.

Comment author: Lumifer 18 June 2015 05:31:54PM *  2 points [-]

How much experience do you have with scientific computation?

Enough to worry about the precision of floats when inverting certain matrices, for example.

The more uncertainty you incorporate into your model (i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells), the more arithmetic you need to do, and thus the sooner calculation noise raises its ugly head.

We continue to disagree :-) Doing arithmetic is not a problem (if your values are scaled properly, and that's an easy thing to do). What you probably mean is that if you run a very large number of cycles, feeding the output of the previous into the next, your calculation noise accumulates and starts to cause problems. I would suggest that as your calculation noise accumulates, so does the uncertainty you have about the starting values (and your model uncertainty accumulates with cycling, too), and by the time you start to care about the precision of floats, all the rest of the accumulated uncertainty makes the output garbage anyway.

Things are somewhat different in hard physics where the uncertainty can get very very very small, but climate science is not that.
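That last point can be illustrated with a toy chaotic system (my sketch, not anything from actual climate code): even a tiny input uncertainty swamps float rounding long before arithmetic precision becomes the issue.

```python
# Logistic map at r = 4: a standard example of chaotic dynamics.
# Two runs whose initial conditions differ by a measurement-sized 1e-6
# stay close at first, then decorrelate completely.

def trajectory(x, steps, r=4.0):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

xs = trajectory(0.400000, 60)   # "true" initial condition
ys = trajectory(0.400001, 60)   # same run with 1e-6 input error

early = abs(xs[1] - ys[1])                       # still ~1e-6 after one step
late = max(abs(a - b) for a, b in zip(xs, ys))   # eventually order 1
print(early, late)
```

Double-precision rounding error per step is around 1e-16, ten orders of magnitude below the 1e-6 input error, so the input uncertainty dominates the output long before float precision matters.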

Comment author: RichardKennaway 18 June 2015 05:18:17PM 1 point [-]

Even single-precision floating point gives you around 7 decimal digits of accuracy. If (as is the case for both weather and climate modelling) the inputs are not known with anything like that amount of precision, surely input uncertainty will overwhelm calculation noise? Calculation noise enters at every step, of course, but even so, there must be diminishing returns from increased precision.
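For concreteness, here is a minimal sketch of both effects, emulating single precision in Python via `struct`: the roughly-7-digit limit, and a small value being lost entirely next to a large one.

```python
import struct

def to_f32(x):
    """Round-trip a Python float (double) through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# ~7 significant decimal digits: 1/3 picks up an error around 1e-8.
err = abs(to_f32(1/3) - 1/3)
print(err)

# Absorption: next to 1e8, adding 1.0 changes nothing in float32,
# because the spacing between representable values there is 8.0.
lost = to_f32(to_f32(1e8) + to_f32(1.0)) - to_f32(1e8)
print(lost)   # 0.0
```

If the measured inputs are only known to, say, three digits, both of these effects sit well below the input uncertainty.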

Comment author: Bryan-san 18 June 2015 05:17:01PM *  1 point [-]

HEMA/Fiore is the rapier-fencing stuff that I've been doing. I've enjoyed learning footwork from the more formal setting I've found with the rapier people I know.

The other stuff I've done (which makes up the majority) is more freestyle rattan dual-wielding and freestyle shinai fighting that uses a mix of Japanese fighting (less kendo, more Musashi), Filipino stick fighting, and some HEMA two-handed sword fighting (this stuff is weird). It resembles the Dog Brothers videos more than anything else (although we don't wear much padding and don't hit anywhere near as hard as we could).

I've tried brazilian jiu-jitsu with a focus on very close sparring but I didn't enjoy it very much at all and found it to have limited usefulness. I don't think the place I went to gave very good instruction or had the right focus.

I would rate the above as follows:

Rattan dual-wielding: fun 10 (12 if I can go above 10 on the 1-10 scale), fitness 8, feeling confident 9, feeling sexy 8?

Rapier fighting: fun 7, fitness 6, feeling confident 3, feeling sexy 6? (dressing in armor probably contributes 4 points and the other 2 are from actual rapier fencing)

Shinai fighting: fun 6, fitness 5, feeling confident 4, feeling sexy 3

Brazilian jiu-jitsu (unoptimized dojo): fun 1, fitness 3, feeling confident 2, feeling sexy 0

Rattan (kali stick) dual wielding wins by a large margin and is probably the most fun thing I do in my life. It accomplishes the all-important task among sports of creating a strong cardio and muscular activity that is fun to the point that you will do it for hours on end. It has also taught me a great deal about myself physically and about states of mind that I can use to achieve higher functionality when necessary. It has strongly boosted my confidence and gives a strong sense of physical empowerment (you might call this "feeling sexy"?) that has been a refreshing change in my life.

Comment author: Lumifer 18 June 2015 05:14:05PM 0 points [-]

The problem is that you don't know whether an idea is good if you don't try to execute on it.

I don't think it's true. Take the reverse case: can you tell that an idea is bad without executing it? Yes, most of the time you can. Obviously, there is uncertainty, but usually you can get a decent estimate of the "quality" of an idea before you start to act on it. There are, of course, nuances and exceptions.

Comment author: Vaniver 18 June 2015 05:11:58PM 0 points [-]

Compare to driving vs. being a passenger in a car driving on a twisty road. I often find the former fun, and the latter decidedly uncomfortable, because the first is a tightly coupled feedback loop and the second is highly varying inputs without much in the way of predictability or control.

"Head-eye" coordination is a thing; the neck muscles and the eye muscles communicate closely, and one would expect that the visual cortex might have access to some of that information as well. Breaking that link will violate expectations on a perceptual level.

Comment author: Vaniver 18 June 2015 04:59:59PM 0 points [-]

What's that got to do with causal structure?

I am not sure what you mean by "causal structure" in this context. I was attempting to provide some intuition as to why ordinary weather forecasting and climate change modeling would be different, since you stated that you didn't see what the essential difference between them is.

But it was a short comment, and so many things were only left as implications. For example, the cell update laws (i.e. the differential equations guiding the system) will naturally be different for weather forecasting and climate forecasting because the cells are physically different beasts. You'll model cloud dynamics very differently depending on whether or not clouds are bigger or smaller than a model cell, and it's not necessarily the case that a fine-grained model will be more accurate than a coarse-grained model, for many reasons.

Comment author: ChristianKl 18 June 2015 04:46:55PM 0 points [-]

I'm not sure whether the numbers of that poll actually drive voting decisions. Are there estimates about how many percentage points Mitt Romney lost for being Mormon?

Comment author: TheAncientGeek 18 June 2015 04:29:58PM 0 points [-]

Yeah, you can get arbitrary precision libraries.
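Python's standard library ships one, for instance (`decimal`); a quick sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits instead of doubles' ~16

third = Decimal(1) / Decimal(3)
print(third)                    # 0.33333... (50 threes)

# More precise, but still not exact for repeating expansions:
print(Decimal(1) / Decimal(7) * 7)   # 0.9999...8, not 1
```

Each operation is carried out in software rather than by the FPU, which is why such libraries are typically orders of magnitude slower than hardware floats.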

Comment author: Vaniver 18 June 2015 04:29:38PM 0 points [-]

I don't believe that in reality the precision of floats is a meaningful limit on the accuracy of climate forecasts.

How much experience do you have with scientific computation?

I would probably say that people who think so drastically underestimate the amount of uncertainty they have in their simulation.

Disagreed. The more uncertainty you incorporate into your model (i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells), the more arithmetic you need to do, and thus the sooner calculation noise raises its ugly head.

Comment author: TheAncientGeek 18 June 2015 04:25:38PM 0 points [-]

What's that got to do with causal structure?

Comment author: jacob_cannell 18 June 2015 04:21:22PM *  1 point [-]

Apparently HICANN was designed before 2008, and uses a 180nm CMOS process, whereas modern GPUs are using 28nm.

That's true, but IBM's TrueNorth is 28 nm, with about the same transistor count as a GPU. It descends from earlier research chips on old nodes that were then scaled up to new nodes. TrueNorth can fit 256 million low-bit synapses on a chip, vs 1 million for HICANN (normalized for chip area). The 28 nm process has roughly 40x the transistor density. So my default hypothesis is that if HICANN were scaled up to 28 nm it would end up similar to TrueNorth in terms of density (although TrueNorth is weird in that it is intentionally much slower than it could be to save energy).
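The "roughly 40x" figure is just area scaling; a back-of-the-envelope check (my arithmetic, assuming density goes as the inverse square of the feature size):

```python
# Transistor density scales roughly as 1 / (feature size)^2,
# so a 180 nm -> 28 nm shrink gives about:
density_gain = (180 / 28) ** 2
print(round(density_gain, 1))   # ~41.3, i.e. "roughly 40x"
```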

It seems to me that if neuromorphic hardware catches up in terms of economy of scale and process technology, it should be far superior in cost per neural event.

I expect this in the long term, but it will depend on how the end of Moore's Law pans out. Also, current GPU code is not yet at the limits of software simulation efficiency for ANNs, and GPU hardware is still improving rapidly. It just so happens that I am working on a new type of ANN sim engine that is 10x or more faster than current SOTA for networks of interest. My approach could eventually be hardware accelerated. There are some companies already pursuing hardware acceleration of the standard algorithms - such as Nervana, targeting similar speedup but through dedicated neural asics.

One thing I can't stress enough is the advantage of programmeable memory for storing weights - sharing and compressing weights helps solve much of the bandwidth problems the GPU would otherwise have.

It seems like this GPU vs neuromorphic question could have a large impact on how the Singularity turns out, but I haven't seen any discussion of it until now. Do you have any other thoughts or references on this topic?

I don't know how much it really affects outcomes -- whether one uses clever hardware or clever software, the brain is probably near or on the Pareto surface for statistical inference energy efficiency, and we will probably get close in the near future.

Comment author: ChristianKl 18 June 2015 04:10:38PM 1 point [-]

Ideas are cheap and plentiful. Good ideas are precious and rare.

The problem is that you don't know whether an idea is good if you don't try to execute on it. The way you show that an idea is good is to actually execute on it.

Comment author: Lumifer 18 June 2015 04:07:40PM 1 point [-]

One underlying technical issue is that floating point arithmetic is only so precise, and this gives you an upper bound on the amount of precision you can expect from your simulation given the number of steps you run the model for.

I don't believe that in reality the precision of floats is a meaningful limit on the accuracy of climate forecasts. I would probably say that people who think so drastically underestimate the amount of uncertainty they have in their simulation.

Comment author: Lumifer 18 June 2015 04:03:11PM 0 points [-]

Proudness = pride = one of the seven deadly sins in Christianity = antonym of humble, humility.

Maybe you mean self-confidence?

Comment author: John_Maxwell_IV 18 June 2015 04:03:02PM *  0 points [-]

I perceive heavily diminishing returns to exercise past the first few hours of exercise a week, and to meditation past the first 15 minutes per day. For books, I would say it depends on the book. This short blog post has more insight in it than most of the books I read as a kid. Many of Paul Graham's essays would probably be book-length if they were written by a less exceptional writer. In general the best internet writing I read seems more insight-dense than the best book writing I read, but the best internet writing is scattered.

Comment author: Lumifer 18 June 2015 04:02:03PM 1 point [-]

I have read blog posts by people acquiring and trying the source code, and that was the result they got

The source code is of a model. The model has many parameters to tune it (that's an issue, but a separate one) -- you probably can tune it to boil the oceans by 2000, but nothing requires you to be that stupid :-/

Comment author: Lumifer 18 June 2015 03:58:49PM 0 points [-]

Because ideas are cheap. There is an abundance of ideas

Ideas are cheap and plentiful. Good ideas are precious and rare.

Comment author: Houshalter 18 June 2015 03:50:10PM 0 points [-]

I would also strongly recommend /r/thisisthewayitwillbe

/r/artificial is the official AI subreddit.

Comment author: Lumifer 18 June 2015 03:32:50PM 0 points [-]

Hmm, interesting.

I find it hilarious that in terms of electability Muslims are smack in the middle between gays and atheists... X-D

Comment author: Lumifer 18 June 2015 03:29:49PM 0 points [-]

In which sense is environmentalism a goal?

I tend to think of it as a religion, but let's be charitable and call it a set of (often inconsistent) preferences. For example, some people prefer not to live near a nuclear plant. How is it a goal?

Comment author: Algon 18 June 2015 02:57:53PM *  0 points [-]

I'm gullible. Or at least that's what I'm told...

Comment author: ChristianKl 18 June 2015 02:42:03PM 0 points [-]

If you look at Reddit, it wasn't the founders' first idea. They got into Y Combinator, and Paul Graham basically said that their original idea was crap but that he really liked the guys, so they should still enter Y Combinator. Then they came up with Reddit.

The kind of people who have good startup ideas usually can come up with more than one idea.

Comment author: ChristianKl 18 June 2015 02:34:12PM 0 points [-]

No, unfortunately not.

Then you shouldn't simply switch to a different word in discussions like this and basically ignore the point of the argument.

I am very much used to everybody being far too timid.

Timid is not the opposite of proudness; the opposite of being timid is being confident. Proudness is "Stolz" in German.

Both feeling loved by other people and feeling proud come with being confident.
The person who optimizes for feeling loved usually plays positive sum games while the person who optimizes for feeling proud plays zero sum games.

The hugging at LWCW-EU makes people feel loved. It raises the social confidence of everybody involved. On the other hand someone who comes out of the event feeling proud that he hugged 50 different people is doing everything wrong.

Comment author: gjm 18 June 2015 02:20:46PM 1 point [-]

These people took NASA's GISTEMP code and translated it into Python, cleaning it up and clarifying it as they went. They didn't get boiling oceans. (They did find some minor bugs. These didn't make much difference to the results.)

Can you tell us more about the people who said they tried to use climate scientists' code and got predictions of boiling oceans? Is it at all possible that they had some motivation to get bad results out of the code?

Comment author: Vaniver 18 June 2015 02:18:05PM *  0 points [-]

I've known a thin guy who would do this to seem bulkier. More common is the thermal long-sleeved undershirt with a t-shirt over it, which is also better on the warmth front.

Ensure that you're good with color coordination, then just do it. It's more likely to be the good kind of unconventional than the weird kind.

Comment author: Vaniver 18 June 2015 02:12:31PM 0 points [-]

The causal structure is basically a chaotic system, which means that Newtonian-style differential equations aren't much use, and big computerized models are. Ordinary weather forecasting uses big models, and I don't see why climate change, which is essentially very long term forecasting, would be different.

Climatological models and meteorological models are very different. If they weren't, then "we can't predict whether it will rain or not ten days from now" (which is mostly true) would be a slam-dunk argument against our ability to predict temperatures ten years from now. One underlying technical issue is that floating point arithmetic is only so precise, and this gives you an upper bound on the amount of precision you can expect from your simulation given the number of steps you run the model for. Thus climatological models have larger cells, larger step times, and so on, so that you can run the model for 50 model-years and still think the result that comes out might be reasonable.

(I also don't think it's right to say that Newtonian-style diffeqs aren't much use; the underlying update rules for the cells are diffeqs like that.)
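To make that parenthetical concrete, here is a minimal sketch of such a cell-update rule: a 1-D explicit finite-difference step for heat diffusion. Real climate and weather models do vastly more, but the structure -- each cell updated from its neighbours by a discretized differential equation -- is the same.

```python
def diffusion_step(cells, alpha=0.1):
    """One explicit Euler step of u_t = alpha * u_xx on a 1-D grid
    (fixed-value boundaries). Each cell is updated from its neighbours."""
    out = cells[:]
    for i in range(1, len(cells) - 1):
        out[i] = cells[i] + alpha * (cells[i-1] - 2 * cells[i] + cells[i+1])
    return out

grid = [0.0] * 5 + [100.0] + [0.0] * 5   # a hot spot in a cold domain
for _ in range(10):
    grid = diffusion_step(grid)
print(grid)   # the spike has spread out and flattened
```

Halving the cell size in 3-D multiplies the number of cells by eight, and stability typically forces smaller time steps as well -- which is one reason climate runs use coarser grids and longer steps than weather runs.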

Comment author: polymathwannabe 18 June 2015 02:05:00PM 0 points [-]

Not everyone can easily adapt to immersion-style media. The first time I heard surround speakers in a cinema theater, in 1999, I hated it, and I still do to this day; I find it horribly distracting.

Comment author: Viliam 18 June 2015 02:02:40PM *  1 point [-]

Just make sure you don't try going too meta too soon, otherwise you may lose touch with reality.

1) The "hello world" app you made -- did you have anyone review your code? Maybe it contains obvious errors you didn't notice. Maybe learning about them could be very beneficial in long term. Having an improvement shown in a program you already spent a lot of time thinking about could be better (more motivating, easier to remember) than reading about a similar technique in a book illustrated with a fictional example.

2) Every time you learn something new -- do you also make another "hello world" app to test this new knowledge? Otherwise you may get a fake understanding. Also, if you learn about cool new techniques but never use them, you may not understand the trade-offs. By making sample applications you test your new models against reality.

I figured Android Studio would be the easiest to use (I was previously not sure how to get an app onto the phone) because it's built to go with Android devices.

I agree.

Unless you want to make a game, in which case Unity is probably a better option. It is not Android-specific, but it can compile to Android platform.

EDIT: Feel free to ask me specific things about Java or Android.

Comment author: TheAncientGeek 18 June 2015 01:54:00PM *  0 points [-]

The causal structure is basically a chaotic system, which means that Newtonian-style differential equations aren't much use, and big computerized models are. Ordinary weather forecasting uses big models, and I don't see why climate change, which is essentially very long term forecasting, would be different.

Comment author: DeVliegendeHollander 18 June 2015 01:53:12PM *  0 points [-]

Because ideas are cheap. There is an abundance of ideas

But are you sure of this? I, for example, have zero even remotely actionable startup ideas right now. By actionable I mean something that looks very simple on the outside; even something like hipmunk or reddit is a huge amount of work, and all the ideas I do have already look complex on the outside, that is, probably impossibly much work :) So what I would call an actionable startup idea is something that does not look more complex than hipmunk.

Comment author: DeVliegendeHollander 18 June 2015 01:50:08PM *  0 points [-]

Proudness is a real emotion and there are people who seek it. Do you understand why I might object to that?

No, unfortunately not. Can you give a real or hypothetical negative example of proudness? I am very much used to everybody being far too timid.

Comment author: Viliam 18 June 2015 01:48:35PM 0 points [-]

They could trade the information.

I am not suggesting a specific mechanism here, rather objecting against the generalization that alien species will have no way to pass knowledge to the next generation unless they do it like we do. There can be other ways.

Comment author: DeVliegendeHollander 18 June 2015 01:48:06PM 0 points [-]

I mean, I have read blog posts by people who acquired and tried the source code, and that was the result they got. Of course, such results were not published.

Comment author: ChristianKl 18 June 2015 01:30:08PM 0 points [-]

Once an idea breaches the walls, should it sweep all before it, assisted by the meta-idea of taking ideas seriously?

I don't think that's the case. If you look at the LW census, you find that most people on LW don't consider UFAI the biggest X-risk, even though it's the X-risk most prominently discussed on LW.

Comment author: Romashka 18 June 2015 01:22:33PM 0 points [-]

Related: from Diabetes Care, v. 38 Supp. 1 jan. 2013, p. 552: Most trials of statins and CVD outcomes tested specific doses against placebo or other statins, rather than aiming for specific LDL [low-density lipoprotein] cholesterol goals...

Comment author: ahbwramc 18 June 2015 01:22:06PM 1 point [-]

I agree with this. "Half-baked" was probably the wrong phrase to use - I didn't mean "idea that's not fully formed or just a work in progress," although in retrospect that's exactly what half-baked would convey. I just meant an idea that's seriously flawed in one way or another.

Comment author: ChristianKl 18 June 2015 01:19:51PM 0 points [-]

You don't just make a computer simulation in 1980 or so that would predict oceans boiling away by 2000 and when it fails to happen just tweak it and say this second time now you surely got it right.

The way climate science is done is much more complex than that, and nobody did predict boiling oceans.

Comment author: ChristianKl 18 June 2015 01:16:30PM 1 point [-]

Most people are binary about beliefs. Either they believe X is true or they believe X is false. When talking with LW people you find people saying: "I think X is likely but I don't think it's certain".

If your goal is to get to the right shade of gray, then you need to change your beliefs a lot.

It's likely easier to convince me to go from P(X)~0.001 to P(X)~0.10 than to convince me to go from P(X)~0.90 to P(X)~0.999
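One way to quantify "how hard to convince" is the Bayes factor, the likelihood ratio the evidence must carry to move a prior to a posterior; a sketch (my framing, not the commenter's):

```python
def odds(p):
    return p / (1 - p)

def required_bayes_factor(p_prior, p_posterior):
    """Likelihood ratio needed to move belief p_prior to p_posterior."""
    return odds(p_posterior) / odds(p_prior)

# Moving a belief near 0 or near 1 takes strong evidence either way:
print(required_bayes_factor(0.001, 0.10))   # ~111
print(required_bayes_factor(0.90, 0.999))   # ~111
```

On the log-odds scale these two shifts happen to be the same size; either way, beliefs near 0 or 1 are expensive to move.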

Comment author: Vladimir_Nesov 18 June 2015 12:42:26PM *  3 points [-]

The aspect of taking ideas seriously that you are talking about seems orthogonal to forming beliefs. It's about initiative in investigating ideas and considering their general applicability, as opposed to stopping at a few superficial observations or failing to notice their relevance in unusual contexts. You don't need to believe an idea to investigate it in detail, the belief may come eventually or not at all. Considering an idea in many contexts may also blur the line with believing it. (Another aspect is taking action based on a belief.)

The process of investigating ideas in detail might get triggered by believing them for no good reason, but there is no need.

Comment author: ChristianKl 18 June 2015 12:09:20PM 1 point [-]

Why not start an open source project and invite contributors from Step 1? Why not throw half-made ideas out in the wild and encourage others to work on them to finish them?

Because ideas are cheap. There is an abundance of ideas, but not enough people to execute them well. Executing ideas well needs focused effort, which is easier when you have a company that can pay developers.

That doesn't mean that there aren't cases where the open source model makes sense, but quite often it's easier with a different model.

Comment author: SolveIt 18 June 2015 11:47:39AM *  4 points [-]

I disagree with the premise that LW tears half-baked ideas to shreds. My experience (which, admittedly, is limited to open threads) is that you'll be fine if you're clear that what you're presenting is a work in progress, and you don't overreach with your ideas.

By overreach, I mean something like this:

This is an attempt to solve happiness. Several factors, such as health, genetics, and social environment, affect happiness. So happiness = health × genetics × social environment.

You can see what's wrong with the post above. It's usually not this blatant, but I see this sort of thing too often, and they are invariably ripped to shreds. On the other hand, something like this:

This is an attempt to solve happiness. First, I'd like to identify the factors that affect happiness. I can think of health, genetics, and social environment. Can we break this down further? Am I missing any important factors?

Probably won't be ripped to shreds. It has its fair share of problems, so I wouldn't expect an enthusiastic response from the community, but it won't be piled upon either.

Frankly speaking, the first type of post reeks of cargo cult science (big equations, formal style (often badly executed), and references that may or may not help the reader). I'm not too unhappy to see those posts being ripped to shreds.

Comment author: ChristianKl 18 June 2015 11:40:22AM 0 points [-]

It does not differ too much from standard gymnastics, rings, bars, horse, vault etc.

Rings are not the same thing as a static pole. Rings move. A pole doesn't.

Having perfect mastery over your body when you are 25 isn't worth having joint issues when you are 50.

But let's look at Svetlana Khorkina, the top female medalist at the World Artistic Gymnastics Championships. The first interview I find is https://www.youtube.com/watch?v=FaJ92uuKjpo . She has little body movement while talking. I think she's simply trained to lock body movement instead of allowing her body to move freely.

Healthy level of self-confidence then.

Proudness is a real emotion and there are people who seek it. Do you understand why I might object to that?

Social anxiety etc.

Once you identify that issue as important, a social sport is better than a competitive solo sport.

Comment author: Gunnar_Zarncke 18 June 2015 11:26:53AM 0 points [-]

Agreed. I don't think that much of the filtering strength of the Great Filter lies in one specific epoch. But within the epoch of civilizations it may be that a large part of the filter's power is right now.

Comment author: DeVliegendeHollander 18 June 2015 11:24:53AM 0 points [-]

as some comedian has remarked, the idea of getting self-help from a book is already something of a contradiction

I agree, but not sure if for the same reasons. I think most of the time people know perfectly well what they should do differently; they just lack the willpower or motivation for it. A book may inspire for a short while, and if it is really well worded it can "pump" people up for a while, but it will not last long. In the vast majority of cases, people buy a self-help book, read actually good advice in it (generically good, not big insights, mostly the "get your sh.t together" type of good), nod, nod, and then do nothing.

Coaches and trainers probably work better. So do groups. I think the core idea of AA is that every meeting gives a jolt of motivation, enough to last until the next meeting.

Comment author: MathiasZaman 18 June 2015 11:19:32AM 1 point [-]

It might, but most redditors don't really click links. I find it more useful to ignore them, occasionally skimming the arguments and upvoting the non-stupid comments.

Comment author: DeVliegendeHollander 18 June 2015 11:19:24AM 2 points [-]

Instinct != stupidity. This is a different thing here. Leaning towards an idea comes both from finding it true and from liking it. If you lean equally towards two ideas but like one more, that suggests you subconsciously find that one less true. So if you go for the one you dislike, you probably go for the idea you subconsciously find more true. Leaning towards an idea you dislike suggests you found so much truth in it, subconsciously, that it overcame even the ugh-field that came from disliking it. And that is a remarkable amount of truth.

Reversed stupidity is a different thing. That is a lot like "Since there is no such thing as Adam and Eve's original sin, human nature cannot have any factory bugs and must be infinitely perfectible." (Age of Enlightenment philosophy.) That is reversed stupidity.

It is a different thing. It is reversed affect.

Comment author: ChristianKl 18 June 2015 11:06:46AM 0 points [-]

Knowing how to build a successful tech start-up and knowing how to be a good president are two incredibly different skill sets.

In both cases one of the most important skills is hiring the right people and delegating responsibility to them. A person who grew a startup into a massive company is likely better at that skill than the average senator.

Comment author: skeptical_lurker 18 June 2015 10:33:39AM *  0 points [-]

More people would vote for a gay candidate than an atheist:

http://www.gallup.com/poll/155285/Atheists-Muslims-Bias-Presidential-Candidates.aspx

I could imagine a gay candidate running for the Democrats, but Thiel is closer to the Republicans.

Comment author: ChristianKl 18 June 2015 10:32:21AM 1 point [-]

When raising awareness, branding is an issue. We don't want to have EA associated with low status writing on toilet walls.

Comment author: lfghjkl 18 June 2015 10:16:15AM 0 points [-]

In such dilemmas, I think the best thing is to figure out what it is your "corrupted hardware" wants to do and do the opposite - do the opposite of what your instincts, i.e. evolved biases, suggest.

Reversed Stupidity Is Not Intelligence

Comment author: DeVliegendeHollander 18 June 2015 09:37:09AM 0 points [-]

What exactly do you mean by that? Because the obvious answer is to figure out the causal structure of things, but I don't think that helps here.

Comment author: DeVliegendeHollander 18 June 2015 09:23:09AM 0 points [-]

Arguing and pursuing truth are indeed not the same, but when virtually every empirical, numerical claim is falsified by an opponent, that is a situation where arguing, or changing one's mind, is really called for.

To be fair, when they were making those claims I already smelled something. I have some familiarity with the history of conservative thought, back through Oakeshott, Chesterton, and Burke to Cicero, and it never consisted of just pointing to a crime stat or the like and saying "see, that is what is wrong here." That was never their strength, and I was half-expecting that engaging in chart duels was something they were not going to win.

Comment author: knb 18 June 2015 09:17:10AM 0 points [-]

POV results in jarring perspective changes and it makes it harder for the viewer to orient themselves and understand what is going on. Historically there were also technical obstacles, but steadicam + digital video make it more feasible. Another problem is it makes staging more difficult for obvious reasons.

A good example of POV film-making is the British comedy Peep Show, which I found almost unwatchable at first because of the jarring shifts in perspective. Still a great show, but the POV is mostly a gimmick you have to get used to rather than a benefit:

Peep Show's unique "point of view" shooting style was one of the reasons for its success, but it also stopped it being a breakout hit, said one of the team behind it.

"It made it feel original and fresh and got it commissioned for a second series, but it stopped it from being a breakout hit and stopped it finding a bigger audience," said Andrew O'Connor, chief executive of production company Objective Productions.

Comment author: Gurkenglas 18 June 2015 09:13:39AM 1 point [-]

When considering candidates for the Great Filter, you must keep in mind that it stopped all the to-be universe conquerors in our past light cone. Your suggestion doesn't seem that insurmountable.

Comment author: TheAncientGeek 18 June 2015 09:06:42AM 0 points [-]

So what's the right way to predict the future?

Comment author: DeVliegendeHollander 18 June 2015 09:00:03AM 0 points [-]

I think it is not a just-so story: it largely predicts everything it should and fails to predict everything it shouldn't. Runaway processes require feedback; I think this is the key. Look for the thing that intelligence made harder. That thing is birth and babycare. Intelligence makes them hard, this causes X to be stronger, and X causes more intelligence; that is the feedback process. What could X be? Sexual competition. More: http://lesswrong.com/lw/mcj/open_thread_jun_15_jun_21_2015/chju

Comment author: RichardKennaway 18 June 2015 08:46:37AM 1 point [-]

It was some sort of a competition inside our species, probably sexual.

A currently popular theory (at least, at the pop sci level, I don't know how it is regarded by actual scientists) is that intelligence snowballed due to social competition of all against all -- an arms race. The smarter people are, the better they can detect lies, but also the smarter they are, the better they can get away with lying. Everyone needs to be as smart as possible just to keep up, until the process runs into a limit, such as the size of the birth canal. Expanding that by widening the pelvis adversely affects mobility.

But that looks like a just-so story. Why did that process happen to humans and not chimpanzees? To which one answer might be: It could have happened to a different branch of the primates, it just chanced to happen to our ancestors first. Someone had to be first, and they're the only ones smart enough to be having this conversation. Once started, the process was so fast that every other creature that didn't reach take-off has effectively stood still, and in the modern world they stand no chance except by our permission.

Comment author: DeVliegendeHollander 18 June 2015 08:32:12AM *  5 points [-]

Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way

I would have it, and I don't find it great. Why should baking be an individual effort? Teamwork is better. It should be seen as "here, if you like it, help me bake it". That is why it is Discussion, not Main. I think a good way to use this site setup would be to throw half-baked things into Discussion, if it sounds interesting cooperate on baking it, then when done promote to Main. Really, why don't we do this?

All the great articles in the past, LW 2007-2010 look a lot like individual effort. Why should it be so?

Is this a bit Silicon Valley Culture? Because those guys do the same - they have a software idea and work on it individually or with 1-2 co-founders. Why? Why not start an open source project and invite contributors from Step 1? Why not throw half-made ideas out in the wild and encourage others to work on them to finish them? Assuming you are not after the money but after a solution you yourself would use, of course - "scratch your own itch" is a good idea in open source.

This kind of individual-effort culture sounds a lot like a culture where insights are in abundance but work on them is scarce, so people don't much value insights from others as long as they are not properly worked out. Well, I should say I am used to pretty much the opposite; most folks I know just work by routine, with hardly any reflection at all...

Comment author: DeVliegendeHollander 18 June 2015 08:26:43AM 1 point [-]

I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth.

I always feel so.

I see a lot of rational-sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, and their ilk. And then I see the counter-arguments from liberals, feminists, leftists and their ilk that pretty much boil down to the other side just being uncompassionate assholes desperately rationalizing it with arguments. Well, rationalizing is a very universal feature, and they sometimes do seem like really selfish people indeed... so I really don't know who to believe.

Or climate change. What little I know about the scientific method says this is NOT how you do science. You don't just make a computer simulation in 1980 or so that predicts the oceans boiling away by 2000 and, when that fails to happen, tweak it and say that this second time you surely got it right. Yet pretty much every prestigious scientist supports the "alarmist" side, and on the other side I see only marginal, low-status "cranks" - and they are curiously politically motivated. So who do I support?

In such dilemmas, I think the best thing is to figure out what it is your "corrupted hardware" wants to do and do the opposite - do the opposite of what your instincts, i.e. evolved biases, suggest.

Well, no luck. On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of "inventor" vibe, which incidentally is also something my instincts like.

I like both sides - and yet, to decide rationally, I should probably choose something I instinctively dislike.

Comment author: RichardKennaway 18 June 2015 08:23:26AM 1 point [-]

Michael Smith touched on this in his keynote talk at LWCW last weekend. Don't believe something just because you've heard a good argument for it, he said (I think, reconstructing from memory, and possibly extrapolating as well). If you do that, you'll just change your mind as soon as you encounter a really good argument for the opposite (the process Yvain described). You don't really know something until you've reached the state where the knowledge would grow back if it was deleted from your mind.

Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.

LW has a higher bar to believing, but is it high enough? Once an idea breaches the walls, should it sweep all before it, assisted by the meta-idea of taking ideas seriously?

Also relevant, an old comment of mine.

Comment author: DeVliegendeHollander 18 June 2015 08:13:46AM 1 point [-]

That's an interesting thought; I feel just the opposite about the pessimism/optimism spectrum. To me, it seems that to allow the negative to affect your mindset overmuch is a far greater negative than a positive

Of course; that is why I try to overcome it, but it still "feels deeper".

Negativity simply tends to "feel deep". And "feel wise". I am an adult, but you see really a lot of it among teenagers, goths etc.: basically, the more angst and cynicism they have, the "deeper" and "wiser" they feel.

Or for example look at media like Game of Thrones, it is generally the most negative quotes that "feel deep". "Sharp steel and strong arms rule this world, don't ever believe any different." "You're awful." "It is the world that is awful." This sort of stuff tends to "feel deep" far more than something cheery.

Adults have different reasons for being negative (such as habit), but there is still a certain sense of "feeling deep" lurking that impedes developing positivity.

Of course pessimism is far easier! But it still "feels deeper". Most people, including me, are lazy. If something is easy and has some sort of reward at all (you feel bad, but at least you feel "deep"), we are likely to do it. Why else do you think fast-food-driven obesity is such a big deal these days? :-)

Similarly, a very basic human feature is the sour grapes effect. Optimism is hard, so let's find an excuse to not do it. Well, the excuse is that it is "shallow".

I wonder how it is not so for you... perhaps you are of the minority that is not inherently lazy, that does not automatically take the path of least resistance and then make excuses. But I am.

Comment author: DeVliegendeHollander 18 June 2015 08:04:23AM 0 points [-]

Pole dancing isn't ergonomic. It's bad for joints. It doesn't train good movement habits.

It does not differ too much from standard gymnastics - rings, bars, horse, vault, etc. And while I am not sure what makes movement habits good or bad, to me gymnasts look like the kind of people who have perfected mastery over the body.

From your list of goals I don't think "Feeling proud" is a worthwhile goal. It's better than feeling angry but I don't consider it to a clearly positive emotion.

Healthy level of self-confidence then. "Nerdy" people tend to have far lower than what is healthy. Social anxiety etc.

Comment author: DeVliegendeHollander 18 June 2015 08:00:16AM *  0 points [-]

Wait, what? What are even the alternatives? The only alternative is environmental pressure - food, predators etc. But such environmental pressure affects a lot of species at the same time, and for this reason most traits in the animal kingdom have the expected normal distribution. Such as the ability to swim amongst mammals - many can swim a little, some better, and a few really well. Yet the distribution of intelligence across species does not follow a normal distribution. Humans are far, far ahead of the species in second place (apes, dogs, dolphins). If it were environmental pressure, adult chimpanzees would basically be like retarded humans, or humans stuck at the mental capabilities of a 10-year-old. Bonobos would be flipping burgers at McD. (OK, some people do claim that certain dogs have the IQ of a 5-year-old human, but it is really a stretch. Their communication ability and suchlike does not even compare.)

Being so far ahead can mean only one thing - the selective pressure MUST have come from within the hominid species, not from the environment.

But what could hominids compete for? Not food. Food is also an environmental variable, and if we don't see e.g. gorillas compete a lot for food, we should assume there was enough around.

This gives really only one option left.

Factor in that runaway processes MUST have - I will risk a "per definition" here even though it is not math - a feedback element. Whatever X-factor (lol) pressed humans to get more intelligent must have been made worse by humans getting more intelligent, so that it exerted more and more pressure; how else could it be such a runaway process?

This is useful, because it suggests we should just look at what was made worse by the evolution of intelligence, and there we have found the feedback factor. And the answer is obvious: reproduction. Childbirth, the physical process of getting the head out, and babycare.

In my mind it is a fairly strong set of evidence and it not only predicts everything we want it to predict here, it also reliably fails to predict everything it shouldn't and that is what a good theory should do.

For example, if food scarcity had been a selective factor, we would have iron stomachs, able to eat everything. In reality, we have shit for stomachs: we need to cook our food, we cannot digest most leaves or grass - the most available resources! - we get ill easily, and so on. Sure, human diets have a wide variance, but it seems we are really picky eaters, going for the special stuff, not the easily available stuff: leaves, grass, carrion. What does that suggest? It suggests no food scarcity.

Or take predators. Most animals try to protect themselves from predators with claws and fangs. Again, we have crap in that department. If there had been any serious pressure there, we'd have kept those around.

So what kind of environmental pressure is left, really?

I am also surprised that it is you who say this, because I had the impression you give some credibility to the set of views sometimes called red pill or manosphere. Those views are 100% based on sexual selection shaping human nature; without that they haven't any chance of getting anything right.

Comment author: Khoth 18 June 2015 07:24:52AM *  1 point [-]

It's likely to result in shakycam or at least large sudden changes in field of view, which I find disorienting.

Comment author: 9eB1 18 June 2015 07:18:40AM 0 points [-]

Theoretically, the market portfolio - the efficient portfolio according to Modern Portfolio Theory - should replicate the world's assets weighted by value. For America, household (and non-profit) net worth is ~$85T and the value of real estate holdings is ~$14T (value less mortgages) (source), so about 16% is pretty justifiable. This is all pretty back-of-the-envelope, though.
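The back-of-the-envelope figure can be checked directly; this sketch just uses the ~$85T and ~$14T numbers quoted in the comment above:

```python
# US household + non-profit net worth and real estate equity,
# per the rough figures cited in the comment.
household_net_worth = 85e12  # ~$85T
real_estate_equity = 14e12   # ~$14T (value less mortgages)

# Portfolio weight real estate would get in a value-weighted
# market portfolio of household assets.
weight = real_estate_equity / household_net_worth
print(f"{weight:.1%}")  # → 16.5%
```

So the ~16% real estate allocation mentioned is consistent with these aggregate numbers.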

Comment author: estimator 18 June 2015 07:16:56AM 0 points [-]

Filters don't have to be mutually exclusive, and as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; having a single cause for civilization failure seems very implausible (<1%).
