Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Why I think worse than death outcomes are not a good reason for most people to avoid cryonics

Synaptic 11 June 2017 03:55PM
Content note: torture, suicide, things that are worse than death


TL;DR: The world is certainly a scary place if you stop to consider all of the tail-risk events that might be worse than death. It's true that there is a tail risk of experiencing one of these outcomes if you choose to undergo cryonics, but it's also true that you risk these events by choosing not to kill yourself right now, or before you are incapacitated by a TBI or a neurodegenerative disease. I think these tail-risk events are extremely unlikely, and I urge you not to kill yourself because you are worried about them; but I also think they are extremely unlikely in the case of cryonics, and I don't think the possibility of them occurring should stop you from pursuing cryonics. 

I

Several members of the rationalist community have said that they would not want to undergo cryonics upon their legal deaths because they are worried about a specific tail risk: that they might be revived in a world that is worse than death, and that doesn't allow them to kill themselves. For example, lukeprog mentioned this in a LW comment:

> Why am I not signed up for cryonics?
>
> Here's my model.
>
> In most futures, everyone is simply dead.
>
> There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
>
> What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.
>
> I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
>
> I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.
>
> Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.

In this post I'm going to explain why I think that, with a few stipulations, the risk of these worse-than-death tail events occurring is close to the risk you already accept by living out your natural lifespan. Therefore, based on revealed preference, in my opinion they are not a good reason for most people not to undergo cryonics. (There are, of course, several other reasons for which you might choose not to pursue cryonics, which will not be discussed here.) 

II

First, some points about the general landscape of the problem, which you are welcome to disagree with: 

- In most futures, I expect that you will still be able to kill yourself. In these scenarios, it's at least worth seeing what the future world is like so you can decide whether or not it is worth it for you.  
- Therefore, worse-than-death futures are exclusively ones in which you are not able to kill yourself. Here are two commonly discussed scenarios for this, and why I think they are unlikely:  
-- You are revived as a slave for a future society. This is very unlikely for economic reasons: a society with technology sufficiently advanced to revive cryonics patients can almost certainly extend lifespan indefinitely and create additional humans at low cost. If a society is evil enough to want slaves, creating additional humans is going to be cheaper than reviving old ones with a complicated technology that might not work. 
-- You are revived specifically by a malevolent society/AI that is motivated to torture humans. This is unlikely for scope reasons: any society/AI with technology sufficiently advanced to do this can create/simulate additional persons tailored to fit its interests more precisely. For example, an unfriendly AI would likely simulate all possible human/animal/sentient minds until the heat death of the universe, using up all available resources in the universe to do so. Your mind, and minds very similar to yours, would likely already be included in these simulations many times over. In this case, doing cryonics would not actually make you worse off. (Although of course you would already be quite badly off, and we should definitely try our best to avoid this extremely unlikely scenario!) 

If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation if intermediate events occur that make that scenario more likely, substantially reducing the risk of it happening to you. For example, you might say: 

- If a fascist government that tortures its citizens indefinitely and doesn't allow them to kill themselves seems likely to take over the world, please cremate me. 
- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me. 
- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me. 

In fact, you probably wouldn't even have to ask: in most of these scenarios, the cryonics organization is likely to remove you from preservation out of compassion, in order to protect you from these bad outcomes.   

But even with such a set of stipulations or compassionate treatment by your cryonics organization, it's still possible that you could be revived in a worse-than-death scenario. As Brian Tomasik puts it:

> Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

However, I would like to add a further point: there's no guarantee that these bad scenarios couldn't also happen too quickly for you to react today, or in the future before your legal death. 

If you're significantly worried about worse-than-death outcomes happening in a possible future in which you are cryopreserved, then it seems you should also be worried about one of them happening in the relatively near term. It also seems that you should be an anti-natalist. 

III

You might argue that this is still your true rejection, and that while it's true that a faster-than-react-able malevolent agent could take over the world now or in the near future, you would rather trust yourself to kill yourself than trust your cryonics organization to take you out of preservation in these scenarios. 

This is a reasonable response, but one possibility that you might not be considering is that you might undergo a condition that renders you unable to make that decision. 

For example, people can live for decades with traumatic brain injuries, with neurodegenerative diseases, in comas, or with other conditions that prevent them from making the decision to kill themselves but that retain the core aspects of memory and personality that make them "them" (even if those aspects are inaccessible because of damage to the brain's communication systems). If aging is slowed, these incapacitating conditions could last for even longer periods of time. 

It's possible that while you're incapacitated by one of these unfortunate conditions, a fascist government, evil aliens, or a malevolent AI will take over. 

These incapacitating conditions are each somewhat unlikely to occur, but if we're talking about tail events, they deserve consideration. And they aren't necessarily any more unlikely than being revived from cryostasis, which is of course also far from guaranteed to work.

It might sound like my point here is "cryonics: maybe not that much worse than living for years in a completely incapacitating coma?", which is not necessarily the most ringing endorsement of cryonics, I admit. 

But my main point here is that your revealed preferences might indicate that you are more willing to tolerate some very, very small probability of things going horribly wrong than you realize. 

So if you're OK with the risk that you will end up in a worse-than-death scenario even before you do cryonics, then you may also be OK with the risk that you will end up in a worse-than-death scenario after you are preserved via cryonics (both of which seem very, very small to me). Choosing cryonics doesn't "open up" this tail risk that is very bad and would never occur otherwise. It already exists. 
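The comparison above can be put into a toy expected-risk calculation. This is only a sketch of the argument's shape: every probability below is made up for illustration, and the point does not depend on the exact values, only on the two tail risks being of comparably tiny magnitude.

```python
# Toy sketch of the revealed-preference argument. All numbers are invented
# placeholders, not estimates from the post or from any real source.

# Baseline tail risk of a worse-than-death outcome over a natural lifespan
# (e.g. incapacitated by a TBI or coma while a malevolent regime takes over).
p_wtd_baseline = 1e-6

# Extra conditions a cryonics patient must hit to land in a worse-than-death
# future: revival succeeds AND the future is bad AND the cryonics org fails
# to remove you from preservation in time.
p_revival = 0.05                   # made up
p_bad_future_given_revival = 1e-4  # made up
p_org_fails_to_intervene = 0.1     # made up

p_wtd_cryonics = p_wtd_baseline + (
    p_revival * p_bad_future_given_revival * p_org_fails_to_intervene)

print(f"baseline tail risk: {p_wtd_baseline:.2e}")
print(f"with cryonics:      {p_wtd_cryonics:.2e}")
# Under these made-up numbers, cryonics adds ~5e-7 on top of an existing
# ~1e-6 risk -- the same order of magnitude you already accept by living.
```

If you accept the baseline term by continuing to live, the additional term cryonics contributes is, on this toy model, not a qualitatively new kind of risk.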

We are the Athenians, not the Spartans

7 wubbles 11 June 2017 05:53AM

The Peloponnesian War was a war between two empires: the sea-dwelling Athenians and the landlubber Spartans. Spartans were devoted to duty and country, living in barracks and drinking the black broth. From birth they trained to be the caste dictators of a slave-owning society, which would annually slay slaves to forestall rebellion. The most famous Spartan is Leonidas, who died in a heroic last stand delaying the Persian invasion. To be a Spartan was to live a life devoted to toughness and duty.

Famous Athenians include Herodotus, the inventor of history; Thucydides; Socrates; Plato; Hippocrates of the oath medical students still take; and all the Greek playwrights. Attic Greek is the Greek we learn in our Classics courses. Athens was a city where students from the entire known Greek world would come to learn from the masters, a maritime empire with hundreds of resident aliens, where slavery was comparable to that of the Romans. Luxury apartments, planned subdivisions, sexual hedonism, and free trade made up the life of the Athenian elite.

These two cities had deeply incompatible values. Spartans lived in fear that the Helots would rebel and kill them. Deeply suspicious of strangers, they imposed oligarchies upon the cities they conquered. They were described, by themselves and others, as cautious and slow to act. Athenians by contrast prized speed and risk in their enterprises. Foreigners could live freely in Athens and even established their own temples. The master-and-slave comedies of Athens inspired P. G. Wodehouse.

All intellectual communities are Athenian in outlook. We remember Sparta for its killing and Athens for its art. If we want the rationalist community to tackle the hard problems, if we support a world that is supportive of human values and beauty, if we yearn to end the plagues of humanity, our values should be Athenian: individualistic, open, trusting, enamoured of beauty. When we build social technology, it should not aim to cultivate values that stand against these.

High-trust, open societies are the societies where human lives are most improved. Beyond merely being refuges for the persecuted, they become havens for intellectual discussion and the improvement of human knowledge and practice. It is not a coincidence that one city produced Spinoza, Rubens, Rembrandt, van Dyck, Huygens, van Leeuwenhoek, and Grotius in a few short decades, while dominating the seas and remaining open to refugees.

Sadly, we seem to have lost sight of this in the rationality community. We are increasingly losing touch with the outside intellectual world, without the impetus to study what has been done before and what the active research lines are in statistics, ML, AI, epistemology, biology, etc. While we say that these things are important, the conversation doesn't seem to center on the actual content of these developments. In some cases (statistics) we're actively hostile to understanding the developments and limitations of our approach, as a matter of tribal marker.

Some projects seem to me to be likely to worsen this, either because they express Spartan values or because they further physical isolation in ways that will act to create more small-group identification.

What can we do about this? Holiday modifications might help remind us of our values, but I don't know how to change the community's outlook more directly. We should strive to stop acting merely on the meta level and, as a community, act more on the object level. And lastly, we should notice that our values are real and not universal, and that they need defending.

[Link] Where do hypotheses come from?

3 c0rw1n 11 June 2017 04:05AM

Bi-Weekly Rational Feed

9 deluks917 10 June 2017 09:56PM

===Highly Recommended Articles:

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

How GiveWell Uses Cost-Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally: unless one charity is measured as 2-3x as cost-effective as another, GiveWell is unsure whether a real difference exists. Cost-effectiveness is, however, the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.

===Scott:

SSC Journal Club: AI Timelines by Scott Alexander - A new paper surveying what AI experts think about AI progress. Contradictory results about when AI will surpass humans at all tasks. Opinions on AI risk; experts are taking the arguments seriously.

Terrorism and Involuntary Commitment by Scott Alexander (Scratchpad) - The leader of the terrorist attack in London was in a documentary about jihadists living in Britain. “Being the sort of person who seems likely to commit a crime isn’t illegal.” Involuntary commitment.

Is Pharma Research Worse Than Chance by Scott Alexander - The most promising drugs of the 21st century are MDMA and ketamine (third is psilocybin). These drugs were all found by the drug community. Maybe pharma should look for compounds with large effect sizes instead of searching for drugs with no side-effects.

Open Thread 77- Opium Thread by Scott Alexander - Bi-weekly open thread. Includes some comments of the week and an update on translating "Bringing Up Genius".

Third and Fourth Thoughts on Dragon Army by SlateStarScratchpad. - Scott goes from Anti-Anti-Dragon-Army to Anti-Dragon-Army. He then gets an email from Duncan and updates in favor of the position that Duncan thought things out well.

Hungarian Education III: Mastering The Core Teachings Of The Budapestians by Scott Alexander - Laszlo Polgar wanted to prove he could intentionally raise chess geniuses. He raised the number 1, 2, and 6 ranked female chess players in the world.

Four Nobel Truths by Scott Alexander - Four graphs describing facts about Israeli/Ashkenazi Nobel Prizes.

===Rationalist:

The Precept Of Niceness by Hivewired - Prisoner's dilemmas. Even against a truly alien opponent you should cooperate for as long as possible in the iterated prisoner's dilemma; even with fixed round lengths, play tit-for-tat. Niceness is the best strategy.
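The strategy named in this blurb is easy to demonstrate directly. A minimal sketch of an iterated prisoner's dilemma, using the standard T=5, R=3, P=1, S=0 payoffs (these values are my assumption, not from the linked post):

```python
# Payoffs for (my move, their move); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    """Play a fixed-length iterated game; each strategy sees the
    opponent's history and returns "C" or "D"."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

# Tit-for-tat: cooperate first, then mirror the opponent's last move.
tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation throughout
print(play(tit_for_tat, always_defect))  # exploited only in round one
```

Against itself, tit-for-tat locks in mutual cooperation; against a pure defector it gives up only the first round before retaliating, which is the intuition behind "niceness is the best strategy."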

Epistemology Vs Critical Thinking by Onemorenickname (lesswrong) - Epistemies work. General approaches don't work. Scientific approaches work. Epistemic effort vs Epistemic status. Criticisms of lesswrong Bayesianism.

Tasting Godhood by Agenty Duck - Poetic and personal. Wine tasting. Empathizing with other people. Seeing others as whole people. How to dream about other people. Sci-fi futures. Tasting godhood is the same as tasting other people. Looking for your own godhood.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.

Dichotomies by mindlevelup - 6 short essays about dichotomies and what's useful about noticing them. Fast vs Slow thinking. Focused vs Diffuse Mode. Clean vs Dirty Thinking. Inside vs Outside View. Object vs Meta level. Generative vs Iterative Mode. Some conclusions about the method.

How Men And Women Perceive Relationships Differently by AellaGirl - Survey Results about Relationship quality over time. Lots of graphs and a link to the raw data. "In summary, time is not kind. Relationships show an almost universal decrease in everything good the longer they go on. Poly is hard, and you have to go all the way to make it work – especially for men. Religion is also great, if you’re a man. Women get more excited and insecure, men feel undesirable."

Summer Programming by Jacob Falkovich (Put A Number On It!) - Jacob's Summer writing plan. Re-writing part of the lesswrong sequences. Ribbonfarm's longform blogging course on refactored perception.

Bet Or Update Fixing The Will to Wager Assumption by cousin_it (lesswrong) - Betting with better informed agents is irrational. Bayesian agents should however update their prior or agree to bet. Good discussion in comments.

Kindness Against The Grain by Sarah Constantin (Otium) - Sympathy and forgiveness evolved to follow local incentive gradients. Some details on who we sympathize with and who we don't. The difference between a good deal and a sympathetic deal. Smooth emotional gradients and understanding what other people want. Forgiveness as not following the local gradient, and why this can be useful.

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

Deorbiting A Metaphor by Hivewired - Another post in the origin sequence. Rationalist myth-making. (Note: I am unlikely to keep linking all of these. Follow Hivewired's blog.)

Conformity Excuses by Robin Hanson - Human behavior is often explained by pressure to conform. However we consciously experience much less pressure. Robin discusses a list of ways to rationalize conforming.

Becoming A Better Community by Sable (lesswrong) - Lesswrong holds its members to a high standard. Intimacy requires unguarded, spontaneous interactions. Concrete ideas to add more fun and friendship to lesswrong.

Optimizing For Meta Optimization by Hivewired - A very long list of human cultural universals and comments on which ones to encourage/discourage: Myths, Language, Cognition, Society. Afterwards some detailed bullet points about an optimal dath ilanian culture.

On Resignation by Small Truths - Artificial intelligence. "It’s an embarrassing lapse, but I did not think much about how the very people who already know all the stuff I’m learning would behave. I wasn’t thinking enough steps ahead. Seen in this context, Neuralink isn’t an exciting new tech venture so much as a desperate hope to mitigate an unavoidable disaster."

Cognitive Science/Psychology As A Neglected Approach To AI Safety by Kaj Sotala (EA forum) - Ways psychology could benefit AI safety: "The psychology of developing an AI safety culture, Developing better analyses of 'AI takeoff' scenarios, Defining just what it is that human values are, Better understanding multi-level world-models." Lots of interesting links.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

Finite And Infinite by Sarah Constantin (Otium) - "James Carse, in Finite and Infinite Games, sets up a completely different polarity, between infinite game-playing (which is open-ended, playful, and non-competitive) vs. finite game-playing (which is definite, serious, and competitive)." Playfulness, property, and cooperating with people who seriously weird you out.

Script for the rationalist seder is linked by Raemon (lesswrong) - An explanation of Rationalist Seder, a remix of the Passover Seder refocused on liberation in general. A story of two tribes and the power of stories. The full Haggadah/script for the rationalist Seder is linked.

The Personal Growth Cycle by G Gordon Worley (Map and Territory) - Stages of Development. "Development starts from a place of integration, followed by disintegration into confusion, which through active efforts at reintegration in a safe space results in development. If a safe space for reintegration is not available, development may not proceed."

Until We Build Dath Ilan by Hivewired - Eliezer's sci-fi utopia Dath Ilan. The nature of the rationalist community. A purpose for the rationality community. Lots of imagery and allusions. A singer is someone who tries to do good.

Do Ai Experts Exist by Bayesian Investor - Some of the numbers from " When Will AI Exceed Human Performance? Evidence from AI Experts" don't make sense.

Relinquishment Cultivation by Agenty Duck - Agenty Duck designs meditation to cultivate the attitude of "If X is true I wish to believe X, if X is not true I wish to believe not X". The technique is inspired by 'loving-kindness' meditation.

10 Incredible Weaknesses Of The Mental Health Workforce by arunbharatula (lesswrong) - Ten arguments that undermine the credibility of the mental health workforce. Some of the arguments are sourced and argued significantly more thoroughly than others.

Philosophical Parenthood by SquirrelInHell - Updateless Decision theory. Ashkenazi intelligence. "In this post, I will lay out a strong philosophical argument for rational and intelligent people to have children. It's important and not obvious, so listen well."

On Connections Between Brains And Computers by Small Truths - A condensation of Tim Urban's 36K-word article about Neuralink. The astounding benefits of having even a Siri-level AI responding directly to your thoughts. The existential threat of AI means that mind-computer links are worth the risks.

Thoughts Concerning Homeschooling by Ozy (Thing of Things) - Evidence that many public school practices are counter-productive. Stats on the academic performance of home-schoolers. Educating 'weird awkward nerds'.

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.

===EA:

Review Of Ea New Zealands Doing Good Better Book by cafelow (EA forum) - New Zealand EAs gave out 250 copies of "Doing Good Better". 80 of the recipients responded to a follow up survey. The results were extremely encouraging. Survey details and discussion. Possible flaws with the giveaway and survey.

Announcing Effective Altruism Grants by Maxdalton (EA forum) - CEA is giving out £100,000 grants for personal projects. "We believe that providing those people with the resources that they need to realize their potential could be a highly effective use of resources." A list of what projects could get funded, the list is very broad. Evaluation criteria.

A Powerful Weapon in the Arsenal (Links Post) by GiveDirectly - 8 Links on Basic Income, Effective Altruism, Cash Transfers and Donor Advised Funds

A Paradox In The Measurement Of The Value Of Life by klloyd (EA forum) - Eight Thousand words on: “A Health Economics Puzzle: Why are there apparent inconsistencies in the monetary valuation of a statistical life (VSL) and a quality-adjusted life year (QALY$)?”

New Report: Consciousness And Moral Patienthood by Open Philanthropy - “In short, my tentative conclusions are that I think mammals, birds, and fishes are more likely than not to be conscious, while (e.g.) insects are unlikely to be conscious. However, my probabilities are very “made-up” and difficult to justify, and it’s not clear to us what actions should be taken on the basis of such made-up probabilities.”

Adding New Funds To Ea Funds by the Center for Effective Altruism (EA forum) - The Center for Effective Altruism wants feedback on whether it should add more EA funds. Each question is followed by a detailed list of critical considerations.

How GiveWell Uses Cost-Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally: unless one charity is measured as 2-3x as cost-effective as another, GiveWell is unsure whether a real difference exists. Cost-effectiveness is, however, the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

The Time Has come to Find Out [Links] by GiveDirectly - 8 media links related to Cash Transfers, Give Directly and Effective Altruism.

Considering Considerateness: Why Communities Of Do-Gooders Should Be Exceptionally Considerate by The Center for Effective Altruism - Consequentialist reasons to be considerate and trustworthy. Detailed and contains several graphs. Includes practical discussions of when not to be considerate and how to handle unreasonable preferences. The conclusion discusses how considerate EAs should be. The bibliography contains many very high quality articles written by the community.

===Politics and Economics:

Summing Up My Thoughts On Macroeconomics by Noah Smith - Slides from Noah's talk at the Norwegian Finance Ministry. Comparison of industry, central bank, and academic macroeconomics. Overview of important critiques of academic macro. The DSGE standard model and ways to improve it. What makes a good macro theory. Go back to the microfoundations.

Why Universities Can't Be The Primary Site Of Political Organizing by Freddie deBoer - Few people on campus. Campus activism is seasonal. Students are an itinerant population. Town and gown conflicts. Students are too busy. First priority is employment. Is activism a place for student growth? Labor principles.

Some Observations On Cis By Default Identification by Ozy (Thing of Things) - Many 'cis-by-default' people are repressing or not noticing their gender feelings. This effect strongly depends on a person's community.

One Day We Will Make Offensive Jokes by AellaGirl - "This is why I feel suspicious of some groups that strongly oppose offensive jokes – they have the suspicion that every person is like my parents – that every human “actually wants” all the terrible things to happen."

Book Review Weapons Of Math Destruction by Zvi Moshowitz - Extremely long. "What the book is actually mostly about on its surface, alas, is how bad and unfair it is to be a Bayesian. There are two reasons, in her mind, why using algorithms to be a Bayesian is just awful."

A Brief Argument With Apparently Informed Global Warming Denialists by Artir (Nintil) - Details of the back-and-forth argument. Some commentary on practical rationality, and speculation about how the skeptic might have felt.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

Population By Country And Region 10K BCE to 2016 CE by Luke Muehlhauser - 204 countries, 27 regions. Links to the database used and a forthcoming explanatory paper. From 10K BCE to 0 CE the gaps are 1000 years. From 0 CE to 1700 CE the gaps are 100 years. After that they are 10 years.

Regulatory Lags For New Technology 2013 Notes by gwern (lesswrong) - Gwern looks at the history of regulation for high frequency trading, self driving cars and hacking. The post is mostly comprised of long quotes from articles linked by gwern.

Two Economists Ask Teachers To Behave As Irrational Actors by Freddie deBoer - A response to Cowen's interview of Raj Chetty. Standard Education reform rhetoric implies that hundreds of thousands of teachers need to be fired. However teachers don't control most of the important inputs to student performance. You won't get more talented teachers unless you increase compensation.

Company Revenue Per Employee by Tyler Cowen - The energy sector has high revenue per employee. The highest score was attained by a pharmaceutical distributor. Hotels, restaurants and consumer discretionaries do the worst on this metric. Tech has a middling performance.

===Misc:

A Remark On Usury by Entirely Useless - "To take usury for money lent is unjust in itself, because this is to sell what does not exist, and this evidently leads to inequality which is contrary to justice." Thomas Aquinas is quoted at length explaining the preceding statement. EntirelyUseless argues that Aquinas mixes up the buyer and the seller.

Bike To Work Houston by Mr. Money Mustache - How a lawyer bikes to work in Houston. Bikes are surprisingly fast relative to cars in cities. Houston is massive.

Fuckers Vs Raisers by AellaGirl - Evolutionary psychology. The qualities that are attractive in a guy who sleeps around are also attractive in a guy who wants to settle down.

Reducers Transducers And Coreasync In Clojure by Eli Bendersky - "I find it fascinating how one good idea (reducers) morphed into another (transducers), and ended up mating with yet another, apparently unrelated concept (concurrent pipelines) to produce some really powerful coding abstractions."

Thingness And Thereness by Venkatesh Rao (ribbonfarm) - The relation between politics, home and frontier. Big Data, deep learning and the blockchain. Liminal spaces and conditions.

Create 2314 by protokol2020 - Find the shortest algorithm to create the number 2314 using a prescribed set of operations.
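The linked puzzle doesn't list its prescribed operations here, so as an illustration only, here is the standard approach with a hypothetical operation set (start from 1; allowed moves +1, *2, *3): breadth-first search over reachable values finds a shortest sequence of operations.

```python
# Shortest-construction search via BFS. The operation set below is an
# assumption for demonstration, not the puzzle's actual prescribed set.
from collections import deque

def shortest_construction(target, start=1, limit=10**6):
    ops = [("+1", lambda x: x + 1),
           ("*2", lambda x: x * 2),
           ("*3", lambda x: x * 3)]
    # BFS explores values in order of how many operations they need,
    # so the first time we pop the target, its path is shortest.
    seen = {start: []}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        if x == target:
            return seen[x]
        for name, f in ops:
            y = f(x)
            if y <= limit and y not in seen:
                seen[y] = seen[x] + [name]
                queue.append(y)
    return None  # unreachable under the cap

path = shortest_construction(2314)
print(len(path), path)
```

Swapping in the puzzle's real operation set only means changing the `ops` list; the search itself is unchanged.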

Text To Speech Speed by Jeff Kaufman - Text to speech has become a very efficient way to interact with computers. Questions about settings. Very short.

Hello World! Stan, Pymc3 and Edward by Bob Carpenter (Gelman's Blog) - Comparison of the three frameworks. Test case of Bayesian linear regression. Extendability and efficiency of the frameworks is discussed.

Computer Science Majors by Tyler Cowen - Tyler links to an article by Dan Wang. The author gives 11 reasons why CS majors are rare, none of which he finds convincing. Eventually the author seems to conclude that the 2001 bubble, the changing nature of the CS field, the power-law distribution of developer productivity, and the lack of job security are important causes.

Beespotting On I-5 by Eukaryote - A drive from San Francisco to Seattle. The vast agricultural importance of bees. Improving bee quality of life.

===Podcast:

81 Leaving Islam by Waking Up with Sam Harris - "Sarah Haider. Her organization Ex-Muslims of North America, how the political Left is confused about Islam, "rape culture" under Islam, honesty without bigotry, stealth theocracy, immigration, the prospects of reforming Islam"

Newcomers by Venam - A transcript of a discussion about advice for new Unix users. Purpose. Communities. Learning by Yourself. Technical Tips. Venam linked tons of podcast transcripts today. Check them out.

Masha Gessen, Russian-American Journalist by The Ezra Klein Show - Trump and Russia, plausible and sinister explanation. Ways Trump is and isn't like Putin, studying autocracies, the psychology of Jared Kushner

Christy Ford by EconTalk - "A history of how America's health care system came to be dominated by insurance companies or government agencies paying doctors per procedure."

Nick Szabo by Tim Ferriss - "Computer scientist, legal scholar, and cryptographer best known for his pioneering research in digital contracts and cryptocurrency."

The Road To Tyranny by Waking Up with Sam Harris - Timothy Snyder. His book On Tyranny: Twenty Lessons from the Twentieth Century.

Hans Noel On The Role Of Ideology In Politics by Rational Speaking - "Why the Democrats became the party of liberalism and the Republicans the party of conservatism, whether voters are hypocrites in the way they apply their ostensible ideology, and whether politicians are motivated by ideals or just self-interest."

Stupid Questions June 2017

2 gilch 10 June 2017 06:32PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

[Link] Two Major Obstacles for Logical Inductor Decision Theory

1 endoself 10 June 2017 05:48AM

Epistemology vs Critical Thinking

0 Onemorenickname 10 June 2017 02:41AM

Short vocabulary points:

  • By epistemy, I refer to the second meaning of epistemology in the Wiktionary (i.e., a particular theory of knowledge). Polysemy is bad and should be fixed when possible.
  • By episteme, I mean the knowledge and understanding of a given science at a given point in time.
  • By field, I mean a set of related thoughts. A science is a field with an epistemy.
  • Epistemy comes from "epistimi", meaning science. I like the identification of a science with its epistemy.

Epistemic ... :
  • Effort. There is much more reasoning behind this post; I'm mostly trying to see whether people are interested. If they are, much more writing will ensue.
  • Status. Field: Rationalist epistemology. Phase: pre-epistemy.

Epistemies work.

General approaches don't work.

Model-checking, validity and proof-search can be hard: NP-hard, PSPACE-hard, non-elementary, or even undecidable. In particular, validity of propositions in first-order logic is undecidable.

Our propositions about the world are more complex than what first-order logic can describe, which makes proving their validity impossible in the general case. As such, trying to find one general logic to deal with the world (i.e., critical thinking) is energy badly spent.
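A toy illustration (my own sketch, not from the post) of why even the decidable case is expensive: the only fully general way to check validity in plain propositional logic is to test all 2^n truth assignments, and first-order logic removes even that option. The helper name `is_valid` is hypothetical.

```python
from itertools import product

def is_valid(formula, variables):
    """Brute-force validity check: a propositional formula is valid
    iff it evaluates to True under all 2^n assignments to its n variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# "(p -> q) or (q -> p)" is a tautology: it passes all 4 assignments.
tautology = lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"])

# "p or q" fails when both are False, so it is not valid.
contingent = lambda v: v["p"] or v["q"]
```

The cost grows as 2^n in the number of variables, which is exactly why fields that rely on logic look for specialized, efficiently computable fragments instead of the general procedure.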

Specific approaches work.

This problem has already been answered in fields relying on logic, for instance model checking, type theory, and non-statistical computational linguistics. The standard answer is to find specialized, efficiently computable logics.

However, not every field can afford a full formalization; at least, as humans studying the world, we can't. Epistemies can be seen as detailed, informal, efficient logics. They give us a particular way to study some particular thing, just as logics do. They don't provide mathematical guarantees, but they can still offer guarantees.

Science, as the study of the world by humans, faced that problem. Critical thinking wasn't enough; that is partly why we moved from philosophy to the sciences. Science's solution was to subdivide the world into several overlapping-but-independently-experimentable parts.

Thus, a science is defined not by its object of study alone, but by the combination of its object of study and its epistemy. This explains why three different articles studying logic can be assigned to three different sciences: Philosophy, Math, and Theoretical Computer Science.

Implications.

Stopping redundancy.

Valuing critical thought has led to a high amount of redundancy. Anyone can dump their ideas and have them judged by the community, provided a bit of critical thinking has been done. The core insight is that critical thinking should filter out most of the bad ideas.

However, if the subject of the idea relies even a little on a technical field without an epistemy, obfuscating a lack of consistency or thorough thinking becomes very easy. As such, community time is spent finding obvious flaws in an argument that the author could have found alone, had there been an appropriate epistemy.

Epistemic effort.

As such, before suggesting a new model, one should confront it with the standard epistemy of the field the model belongs to. That epistemy can be as simple as some sanity checks, e.g.: "Does this model lead to contradiction X, Y, or Z? If it does, it's a bad one; otherwise, it's a good one." If there is no standard epistemy in the given field, working on one is critical.

I agree with Raemon's post about using "epistemic effort" instead of "epistemic status". Following the previous line of thought, I think "epistemic status" should refer to an epistemic status proper (together with the field relative to which it is defined) rather than to the epistemic effort. I see three kinds of epistemic status, which could be refined further:

1. Pre-epistemy: Thoughts meant to gather comments. Models trying to see whether modelling a particular subject is worthwhile or works well.

2. Epistemy building: Defining the epistemy's meta-assumptions. Defining the epistemy's logic. Defining the epistemy's facts (e.g., which sources are relevant in that field? which meta-facts are relevant in that field?).

3. Post-epistemy: Once the epistemy is defined, anything benefiting the science's episteme: facts, models, questioning the epistemy (which might lead to forks, e.g., math and computer science).

Misc.

"Bayesian probabilities"

Initially, I thought that someone putting a probability in front of a belief meant something objective. I asked around for an epistemy, and I was told that it is only a way to express a subjective feeling more precisely.

However, it looks like there might be a confusion between the map and the territory when I see things like bet-to-update. When I see "Bayesian rational agent", it feels like we are supposed to be Bayesian rational agents in the general case. (Which I think is an AGI-complete problem.)

Bayesian framework

Bayes' rule and its derivatives define the "proof rules" part of an agent's epistemy. But axioms are still required: a world, a way to gather facts, and so on. It also relies on meta-assumptions for efficiency and relevancy. Bayes' rule alone is not enough to define an epistemy.
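As a small illustration of this point (my own toy sketch, not anyone's proposed epistemy): Bayes' rule itself is a one-line computation. Everything that makes an update meaningful — the prior and the two likelihoods — has to be supplied from outside the rule.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# The rule is mechanical; the epistemic work is in choosing these three
# numbers, about which the rule itself says nothing.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
```

Here the posterior comes out to 1/12, about 0.083, but that number is entirely driven by the three inputs — which is the sense in which the rule is "proof rules" without axioms.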

Therefore, I am strongly prejudiced against someone self-describing as a Bayesianist not only because of the "I apply the same epistemy everywhere" approach, but also because it isn't a proper epistemy.

There are better ways to say "I know Bayes' rule, and how it might apply to real-life situations" than "I'm a Bayesianist".

Maybe "Bayesianist" solely means "I update my beliefs based on evidence", but I think "open-minded" is the right word for that.

Not even wrong

Showing not-even-wrong-ness is possible in sciences with an epistemy. (Well, it's possible to show it to people who know that epistemy. Showing someone who doesn't know math that his "informal" math isn't even math is hard.)

In other fields, we are subject to too much not-even-wrong-ness. I'd like to link some LW posts to exemplify my point, but I think it might violate the local BNBR policy.

Questions

Do you think defining a meta-epistemy (i.e., an epistemy for Rationalist epistemology) is important?

Do you think defining sub-epistemies is important?

If you don't, why not?


Humans are not agents: short vs long term

2 Stuart_Armstrong 09 June 2017 11:16AM

Crossposted at the Intelligent Agents Forum.

This is an example of humans not being (idealised) agents.

Imagine a human who has a preference to not live beyond a hundred years. However, they want to live to next year, and it's predictable that every year they are alive, they will have the same desire to survive till the next year.

This human (not a completely implausible example, I hope!) has a contradiction between their long and short term preferences. So which is accurate? It seems we could resolve these preferences in favour of the short term ("live forever") or the long term ("die after a century") preferences.

Now, at this point, maybe we could appeal to meta-preferences - what would the human themselves want, if they could choose? But often these meta-preferences are un- or under-formed, and can be influenced by how the question or debate is framed.

Specifically, suppose we are scheduling this human's agenda. We have the choice of making them meet one of two philosophers (not meeting anyone is not an option). If they meet Professor R. T. Long, he will advise them to follow their long term preferences. If instead they meet Paul Kurtz, he will advise them to pay attention to their short term preferences. Whichever one they meet, they will argue for a while and will then settle on the recommended preference resolution. And then they will not change that, whomever they meet subsequently.

Since we are doing the scheduling, we effectively control the human's meta-preferences on this issue. What should we do? And what principles should we use to do so?

It's clear that this can apply to AIs: if they are simultaneously aiding humans as well as learning their preferences, they will have multiple opportunities to do this sort of preference-shaping.

Bring up Genius

33 Viliam 08 June 2017 05:44PM

(This is a "Pareto translation" of Bring up Genius by László Polgár, the book recently mentioned at Slate Star Codex. I hope that selected 20% of the book text, translated approximately, could still convey 80% of its value, while taking an order of magnitude less time and work than a full and precise translation. The original book is written in an interview form, with questions and answers; to keep it short, I am rewriting it as a monologue. I am also taking liberty of making many other changes in style, and skipping entire parts, because I am optimizing for my time. Instead of the Hungarian original, I am using an Esperanto translation Eduku geniulon as my source, because that is the language I am more fluent in.)

Introduction

Genius = work + luck

This is my book written in 1989 about 15 years of pedagogic experiment with my daughters. It is neither a recipe, nor a challenge, just a demonstration that it is possible to bring up a genius intentionally.

The so-called miracle children are natural phenomena, created by their parents and society. Sadly, many potential geniuses disappear without anyone noticing the opportunity, including themselves.

Many people in history did a similar thing by accident; we only repeated it on purpose.

1. Secrets of the pedagogic experiment

1.1. The Polgár family

The Polgár sisters (Susan, Sofia, Judit) are internationally as famous as Ernő Rubik, the inventor of the Rubik's Cube.

Are they merely their father's puppets, manipulated like chess figures? Hardly. This level of success requires agency and active cooperation. Puppets don't become geniuses. Contrariwise, I provided them opportunity, freedom, and support. They made most of the decisions.

You know what really creates puppets? The traditional school system. Watch how kids, eagerly entering school in September, mostly become burned out by Christmas.

Not all geniuses are happy. Some are rejected by their environment, or they fail to achieve their goals. But some geniuses are happy, accepted by their environment, succeed, and contribute positively to the society. I think geniuses have a greater chance to be happy in life, and luckily my daughters are an example of that.

I was a member of the Communist Party for over ten years, but I disagreed with many things; specifically the lack of democracy, and the opposition to elite education.

I have worked about 15 hours a day since I was a teenager. I am obsessed with high quality. Some people say I am stubborn, even aggressive. I try hard to achieve my goals, and I have experienced a lot of frustration; it seems to me some people were trying to destroy us. We were threatened by high-ranking politicians. We were not allowed to travel abroad until 1985, when Susan was already #1 in the international ranking of female chess players.

But I am happy that I have a great family, a happy marriage, three successful children, and that my creative work has an ongoing impact.

1.2 Nature or nurture?

I believe that any biologically healthy child can be brought up to be a genius. My wife and I have read tons of books and studies. Researching the childhoods of many famous people showed that they all specialized early, and each of them had a strongly supportive parent, teacher, or trainer. We concluded: geniuses are not born; they are made. We proved that experimentally. We hope that someone will build a coherent pedagogical system based on our hypothesis.

Most of what we know about genetics [as of 1989] is about diseases. Healthy brains are flexible. Education was considered important by Watson and Adler. But Watson never actually received the "dozen healthy infants" to bring up, so I was the first one to do this experiment. These are my five principles:

* Human personality is an outcome of the following three: the gifts of nature, the support of the environment, and one's own work. Their relative importance depends on age: biology is strongest with the newborn, society with the ten-year-old, and later the importance of one's own actions grows.

* There are two aspects of social influence: the family, and the culture. Humans are naturally social, so education should treat the child as a co-author of themselves.

* I believe that any healthy child has sufficient general ability, and can specialize in any type of activity. Here I differ from the opinion of many teachers and parents who believe that the role of education is to find a hidden talent in the child. I believe that the child has a general ability, and achieves special skills by education.

* The development of the genius needs to be intentionally organized; it will not happen at random.

* People should strive for maximum possible self-realization; that brings happiness both to them and to the people around them. Pedagogy should not aim for average, but for excellence.

2. A different education

2.1. About contemporary schools

We homeschooled our children. Today's schools set a very low bar, and are intolerant towards people different from the average by their talent or otherwise. They don't prepare for real life; don't make kids love learning; don't instigate greater goals; bring up neither autonomous individuals nor collectives.

Which is an unsurprising outcome, if you only have one type of school, each school containing a few exceptional kids among many average ones and a few feeble ones. Even the average ones are closer to the feeble ones than to the exceptional ones. And the teacher, by necessity, adapts to the majority. There is not enough space for an individual approach, but there is a lot of mindless repetition. Sure, people talk a lot about teaching problem-solving skills, but that never happens. Both the teachers and the students suffer at school.

The gifted children are bored, and even tired, because boredom is more tedious than appropriate effort. The gifted children are disliked, just like everyone who differs from the norm. Many gifted children acquire psycho-somatic problems, such as insomnia, headache, stomach pain, neuroses. Famous people often had trouble at school; they were considered stupid and untalented. There is bullying, and general lack of kindness. There are schools for gifted children in USA and USSR, but somehow not in Hungary [as of 1989].

I had to fight a lot to have my first daughter home-schooled. I was afraid school would endanger the development of her abilities. We had support of many people, including pedagogues, but various bureaucrats repeatedly rejected us, sometimes with threats. Finally we received an exceptional permission by the government, but it only applied for one child. So with the second daughter we had to go through the same process again.

2.2. Each child is a promise

It is crucial to awaken and keep the child's interest, convince them that success is achievable, trust them, and praise them. When a child likes the work, they will work fruitfully for long periods. A profound interest develops personality and skills. A motivated child will achieve more, and get tired less.

I believe in positive motivation. Create a situation where many successes are possible. Successes make children confident; failures make them insecure. Experience of success and admiration by others motivates and accelerates learning. Failure, fear, and shyness decrease the desire to achieve. Successes in one field even increase confidence in other fields.

Too much praise can cause overconfidence, but it is generally safer to err on the side of praising more rather than less. However, the praise must be connected to a real outcome.

Discipline, especially internal psychological, also increases skills.

I believe the age between 3 and 6 years is very important, and very underestimated. No, those children are not too young to learn. Actually, that's when their brains are developing the most. They should learn foreign languages. In multilingual environments children do that naturally.

Play is important for children, but play is not the opposite of work. Gathering information and solving problems is fun. Provide meaningful activities, instead of compartmentalized games. A game without learning is merely a surrogate activity. Gifted children prefer games that require mental activity. There is a continuum between learning and playing (just like between work and hobby for adults). Brains, just like muscles, become stronger through everyday activity.

My daughters used intense methods to learn languages; and chess; and table tennis. Is there a risk of damaging their personality by doing so? Maybe, but I believe the risks of damaging the personality by spending six childhood years without any effort are actually greater.

When my daughters were 15, 9, 8 years old, we participated in a 24-hour chess tournament, where you had to play 100 games in 24 hours. (Most participants were between age 25 and 30.) Susan won. The success rates during the second half of the tournament were similar to those during the first half of the tournament, for all three girls, which shows that children are capable of staying focused for long periods of time. But this was an exceptional load.

2.3. Genius - a gift or a curse?

I am not saying that we should bring up each child as a genius; only that bringing up children as geniuses is possible. I oppose uniform education, even a hypothetical one that would use my methods.

The public idea of a genius is usually one of two extremes: either they are all supposed to be weird and half-insane, or they are all supposed to be CEOs and movie stars. Psychology has already moved beyond this. They examined Einstein's brain, but found no difference in weight or volume compared with an average person. For me, a genius is an average person who has achieved their full potential. Many famous geniuses attribute their success to hard work, discipline, attention, love of work, patience, and time.

All healthy newborns are potential geniuses, but whether they become actual geniuses, depends on their environment, education, and their own effort. For example, in the 20th century more people became geniuses than in the 19th or 18th century, inter alia because of social changes. Geniuses need to be liberated. Hopefully in the future, more people will be free and fully developed, so being a genius will become a norm, not an exception. But for now, there are only a few people like that. As people grow up, they lose the potential to become geniuses. I estimate that an average person's chance to become a genius is about 80% at age 1; 60% at age 3; 50% at age 6; 40% at age 12; 30% at age 16; 20% at age 18; only 5% at age 20. Afterwards it drops to a fraction of percent.

A genius child can surpass their peers by 5 or 7 years. And if a "miracle child" doesn't become a "miracle adult", I am convinced that their environment did not allow it. People say some children are faster and some are slower; I say they don't grow up in the same conditions. Good conditions allow one to progress faster. But some philosophers or writers became geniuses in old age.

People find it difficult to accept those who differ from the average. Even some scientists; for example Einstein's theory of relativity was opposed by many. My daughters are attacked not just by public opinion, but also by fellow chess players.

Some geniuses are unhappy about their situation. But many enjoy the creativity, perceived beauty, and success. Geniuses can harm themselves by having unrealistic expectations of their goals. But most of the harm comes from outside, as a dismissal of their work, or lack of material and moral support, baseless criticism. Nowadays, one demagogue can use the mass communication media to poison the whole population with rage against the representatives of national culture.

As international communication and the exchange of ideas grow, geniuses become more important than ever before. Education is necessary to overcome economic problems; new inventions create new jobs. But a genius provokes the anger of people, not by his behavior, but by his skills.

2.4. Should every child become a celebrity?

I believe in diversity in education. I am not criticizing teachers for not doing things my way. There are many other attempts to improve education. But I think it is now possible to aim even higher, to bring up geniuses. I can imagine the following environments where this could be done:

* Homeschooling, i.e. teaching your biological or adopted children. Multiple families could cooperate and share their skills.

* Specialized educational facility for geniuses; a college or a family-type institution.

Homeschooling, or private education with parental oversight, are the ancient methods for bringing up geniuses. Families should get more involved in education; you can't simply outsource everything to a school. We should support families willing to take an active role. Education works better in a loving environment.

Instead of trying to find a talent, develop one. Start specializing early, at the age of 3 or 4. One cannot become an expert on everything.

My daughters have played chess 5 or 6 hours a day since the age of 4 or 5. Similarly, if you want to become a musician, spend 5 or 6 hours a day doing music; if a physicist, do physics; if a linguist, do languages. With such intense instruction, the child will soon feel the knowledge, experience success, and become able to use this knowledge independently. For example, after learning Esperanto 5 or 6 hours a day for a few months, the child can start corresponding with children from other countries, participate in international meet-ups, and experience conversations in a foreign language. That is at the same time pleasant, useful for the child, and useful for society. The next year, start with English, then German, etc. Now the child enjoys this, because it obviously makes sense. (Unlike at school, where most learning feels purposeless.) In chess, the first year makes you an average player, three years a great player, six years a master, fifteen years a grandmaster. When a 10-year-old child surpasses an average adult at some skill, it is highly motivating.

Gifted children need financial support, to cover the costs of books, education, and travel.

Some people express concern that early specialization may lead to ignorance of everything else. But it's the other way round; abilities formed in one area can transfer to other areas. One learns how to learn.

Also, the specialization is relative. If you want to become e.g. a computer programmer, you will learn maths, informatics, and foreign languages; when you become famous, you will travel, meet interesting people, and experience different cultures. My daughters, in addition to being chess geniuses, speak many foreign languages, travel, do sports, write books, etc. Having deep knowledge about something doesn't imply ignorance about everything else. On the other hand, a misguided attempt to become a universalist can result in knowing nothing, in mere pretend-knowledge of everything.

Emotional and moral education must go together with the early specialization, to develop a complex personality. We wanted our children to be enthusiastic, courageous, persistent, to be objective judges of things and people, to resist failure and avoid the temptations of success, to handle frustration and tolerate criticism even when it is wrong, to make plans, to manage their emotions. Also, to love and respect people, and to prefer creative work to physical pleasure or status symbols. We told them that they can achieve greatness, but that there can be only one world champion, so their goal should rather be to become good chess players, be good at sport, and be honest people.

Pedagogy puts great emphasis on being with children of the same age. I think that mental peers are more important than age peers. It would harm a gifted child to be forced to spend most of their time exclusively among children of the same age. On the other hand, spending most of the time with adults brings the risk that the child will learn to rely on them all the time, losing independence and initiative. You need to find a balance. I believe the best company would be of similar intellectual level, similar hobbies, and good relations.

For example, if Susan at 13 had been forced to play chess exclusively with other 13-year-olds, it would have harmed both sides. She could not have learned anything from them; they would have resented losing constantly.

Originally, I hoped I could bring up each daughter as a genius in a different field (e.g. mathematics, chess, music). It would be a more convincing evidence that you can bring up a genius of any kind. And I believe I would have succeeded, but I was constrained by money and time. We would need three private teachers, would have to go each day to three different places, would have to buy books for maths and chess and music (and the music instruments). By making them one team, things became easier, and the family has more things in common. Some psychologists worried that children could be jealous of each other, and hate each other. But we brought them up properly, and this did not happen.

This is how I imagine a typical day at a school for geniuses:

* 4 hours studying the subject of specialization, e.g. chess;

* 1 hour studying a foreign language; Esperanto at the first year, English at the second, later choose freely; during the first three months this would increase to 3 hours a day (by reducing the subject of specialization temporarily); traveling abroad during the summer;

* 1 hour computer science;

* 1 hour ethics, psychology, pedagogy, social skills;

* 1 hour physical education, specific form chosen individually.

Would I like to teach at such school? In theory yes, but in practice I am already burned out from the endless debates with authorities, the press, opinionated pedagogues and psychologists. I am really tired of that. The teachers in such school need to be protected from all this, so they can fully focus on their work.

2.5. Esperanto: the first step in learning foreign languages

Our whole family speaks Esperanto. It is a part of our moral system, a tool for the equality of people. There are many prejudices against it, but the same was true of all progressive ideas. Some people argue from the Bible that multiple languages are God's punishment which we have to endure. Some people invested many resources into learning 2 or 3 or 4 foreign languages, and don't want to lose the gained position. Economically strong nations enforce their own languages as part of their dominance, and the speakers of other languages are discriminated against. Using Esperanto as everyone's second language would make international communication easier and more egalitarian. But considering today's economic pressures, it makes sense to learn English or Russian or Chinese next.

Esperanto has a regular grammar with simple syntax. It also uses many Latin, Germanic, and Slavic roots, so as a European, even if you are not familiar with the language, you will probably recognize many words in a text. This is an advantage from a pedagogical point of view: you can more easily learn its vocabulary and its grammar; you can learn the whole language about 10 times more easily than other languages.

It makes a great introduction to the concept of a foreign language, which pays off when learning other languages later. It is known that having learned one foreign language makes learning another foreign language easier. So if learning Esperanto takes a tenth of the time needed for another language such as English, and already knowing one foreign language makes learning the next at least 10% more efficient, it makes sense to learn Esperanto first. Also, Esperanto would be a great first experience for students who have difficulty learning languages; they would achieve success faster.
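The arithmetic behind this claim can be spelled out. Let $T$ be the time needed to learn a national language such as English from scratch; under the stated (admittedly rough) assumptions, the Esperanto-first route costs

```latex
\underbrace{\tfrac{T}{10}}_{\text{Esperanto}}
  \;+\; \underbrace{(1 - 0.10)\,T}_{\text{English, 10\% faster}}
  \;=\; 0.1\,T + 0.9\,T \;=\; T .
```

So it takes no more total time than learning English alone, and leaves you knowing two languages instead of one; any efficiency gain above 10% makes it a strict win.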

3. Chess

3.1. Why chess?

Originally, we were deciding between mathematics, chess, and foreign languages. Finally we chose chess, because results in that area are easy to measure, using a traditional and objective rating system, which makes it easier to prove whether the experiment succeeded or failed. That was a lucky choice in hindsight, because back then we had no idea how many obstacles we would have to face. If we had not been able to prove our results unambiguously, the attacks against us would have been much stronger.

Chess seemed sufficiently complex (it is a game, a science, an art, and a sport at the same time), so the risks of overspecialization were smaller; even if the children later decided they were tired of chess, they would keep some transferable skills. And the fact that our children were girls was a bonus: we were also able to prove that girls can be as intellectually able as boys; but for this purpose we needed an indisputable proof. (Although people try to discount this proof anyway, saying things like: "Well, chess is simple, but try doing the same in languages, mathematics, or music!")

The scientific aspect of chess is that you have to follow the rules, analyze the situation, and apply your intuition. If you have a favorite hypothesis, for example a favorite opening, but you keep losing with it, you have to change your mind. There is an aesthetic dimension in chess; some games are published and enjoyed not just because of their impressive logic, but because they are beautiful in some sense, they do something unexpected. And, though most people are not aware of this, chess requires great physical health. All the best chess players do some sport, and that is not a coincidence. Chess is also organized similarly to sports: it has tournaments, players, spectators; you have to deal with the pain of losing, you have to play fair, etc.

3.2. How did the Polgár sisters start learning chess?

I don't have a "one weird trick" to teach children chess; it's just my general pedagogical approach, applied to chess. Teach chess with love, playfully. Don't push it too forcefully. Remember to let the child win most of the time. Explain to the child that things can be learned, and that this also applies to chess. Don't worry if the child keeps jumping around during the game; it may still be thinking about the game. Don't explain everything; give the child an opportunity to discover some things independently. Don't criticize failure; praise success.

Start with shorter lessons, only 30 minutes, and then have a break. Start by solving simple problems. Our girls loved the "checkmate in two/three moves" puzzles. Let the child play against equally skilled opponents often. For a child, it is better to play many quick games (e.g. with 5-minute timers) than a few long ones. Participate in tournaments appropriate to the child's current skill.

We have a large library of different games, indexed by strategy and by players' names, so the girls can research their opponents' play before a tournament.

When a child loses a tournament, don't criticize them; the child is already sad. Offer support; help them analyze their mistakes.

When my girls write articles about chess, it makes them think deeply about the issue.

All three parts of the game (opening, middle game, ending) require the same amount of focus. Some people focus too much on endings and neglect the rest, but at a tournament a bad opening can ruin the whole game.

Susan had the most difficult situation of the three daughters. In hindsight, having her learn 7 or 8 foreign languages was probably too much; some of that time would have been better spent further improving her chess skills. As the oldest, she also faced the worst criticism from detractors; as a consequence she became the most defensive player of the three. The two younger sisters had the advantage that they could face the same pressures together. Still, I am sure that without those pressures they could have progressed even faster.

Politicians influenced the decisions of the Hungarian Chess Association; as a result my daughters were often forbidden from participating in international youth competitions, despite being the best national players. They wanted to prevent Susan from becoming the worldwide #1 female chess player. Once they even "donated" 100 points to her competitor, to keep Susan in 2nd place. Later they didn't allow her to participate in international male tournaments, although her results in Hungarian male tournaments qualified her for them. The government regularly refused to issue passports to us, claiming that "our foreign travels hurt the public order". It was also difficult to find a trainer for my daughters, despite them being at the top of the world rankings. Only recently did we receive foreign help: a patron from the Netherlands offered to pay for trainers and sparring partners for my daughters, and also bought Susan a personal computer. A German journalist gave us a program and a database, and taught the children how to use them.

The Hungarian press kept attacking us and published falsehoods. We filed a few lawsuits and won them all, but it just distracted us from our work. The foreign press (whether writing from the chess, psychological, or pedagogical perspective) was fair to us; they wrote almost 40,000 articles about us, so eventually even Hungarian chess players, psychologists, and pedagogues could learn about us from them.

At the beginning, I was a father, a trainer, and a manager to my daughters. But I am completely underqualified to be their trainer these days, so I just manage their trainers.

Until recently, no one believed women could play chess at a level comparable with men. Now the three girls together hold about 40 Guinness records; they have repeatedly outperformed their own former records. In a 1988 interview Karpov said: "Susan is extraordinarily strong, but Judit... at such an age, neither I nor Kasparov could play like Judit plays."

3.3. How can we make our children like chess?

Some tips for teaching chess to 4- or 5-year-old children. First, I made a blank board divided into 8x8 little squares, with named rows and columns. I named a square, and my daughter had to find it; then she named a square and I had to find it. Then we used the black-and-white version, and we guessed the color of a named square without looking.

Then we introduced kings, in a "king vs king" combat; the task was to reach the opposing row of the board with your king. Then we added a pawn; the goal remained to reach the opposing row. After a month of playing, we introduced the queen, and the concept of checkmate. Later we gradually added the remaining pieces (knights were the most difficult).

Then we solved about a thousand "checkmate in one move" puzzles; then two moves, three moves, four moves. That took another 3 or 4 months. Only afterwards did we start really playing against each other.

To give the child an advantage, don't play with fewer pieces, because that changes the structure of the game. Instead, give yourself a very short time limit, or deliberately make a mistake, so the child can learn to notice mistakes.

Have patience if some phase takes a long time; on stronger fundamentals you can later build better. This is where I think our educational system makes great mistakes. Schools don't teach intensively, so children forget most of what they learned during the long gaps between lessons. And then, despite not having fully mastered the first step, they move on to the second one, etc.

3.4. Chess and psychology

Competitive chess helps develop personality: will, emotion, perseverance, self-discipline, focus, self-control. It develops intellectual skills: memory, combination skills, logic, proper use of intuition. Understanding your opponent's weakness will help you.

People overestimate how much IQ tests determine talent. Measurements of people talented in different areas show that their average IQ is only a bit above the population average.

3.5. Emancipation of women

Some people say, incorrectly, that my daughter won the male chess championship. But there is officially no such thing as "male chess championship", there is simply chess championship, open to both men and women. (And then, there is a separate female chess championship, only for women, but that is considered second league.)

I prepared the plan for my children before they were born. I didn't know I would have all girls, so I did not expect this special problem: the discrimination of women. I wanted to bring up my daughter Susan exactly according to the plan, but many people tried to prevent it; they insisted that she cannot compete with boys, that she should only compete with girls. Thus my original goal of proving that you can bring up a genius became, indirectly, a goal of proving that there are no essential intellectual differences between men and women, and therefore that one can't use that argument as an excuse for the subjugation of women.

People kept telling me that I could only bring up Susan to be a female champion, not to compete with men. But I knew that during elementary school girls can compete with boys. It is only later, when they start playing the female role (when they are taught to clean the house, wash laundry, cook, follow fashion, pay attention to details of clothing, and try to get married as soon as possible), when they are expected to do different things than boys are, that their skill development suffers. But family duties and bringing up children can be shared by both parents.

Women can achieve the same results if they get similar conditions. I tried to provide that for my daughters, but I couldn't convince the whole society to treat them the same.

We know about differences between adult men and women, but we don't know whether they are caused by biology or education. And we know that in, e.g., mathematics and languages, girls progress at the same pace as boys during elementary and high school, and the differences only appear later. This is evidence in favor of equality. We do not know what children growing up without discrimination would be like.

On the other hand, the current system also provides some advantages for women; for example the female chess players don't need to work that hard to become the (female) elite, and some of them don't want to give that up. Such women are among the greatest opponents of my daughters.

4. The meaning of this whole affair

4.1. Family value

I am certain that without a good family background the success of my daughters would not have been possible. It is important, before people marry, to have a clear idea of what they expect from their marriage. When partners cooperate, the mutual help, shared experiences, education of children, good habits, etc. can deepen their love. Children need a family without conflicts to feel safe. But of course, if the situation becomes too bad, divorce might become the way to reduce conflict.

To bring up a genius, it is desirable for one parent to stay at home and take care of children. But it can be the father, too.

[Klára Polgár says:] When I met László, my first impression was that he was an interesting person full of ideas, but one should not believe even half of them.

When Susan was three and a half, László said it was time for her to specialize. She was good at math; at the age of four she had already learned the material of the first four grades. Once she found chess pieces in a box and started playing with them as toys. László was spending a lot of time with her, and one day I was surprised to see them playing chess. László loved chess, but I never learned it.

So, we could have chosen math or foreign languages, but we felt that Susan was really happy playing chess, and she started being good at it. But our parents and neighbors shook their heads: "Chess? For a girl?" People told me: "What kind of a mother are you? Why do you allow your husband to play chess with Susan?" I had my doubts, but now I believe I made the right choice.

People are concerned whether my children had a real childhood. I think they are at least as happy as their peers, probably more so.

I always wanted to have a good, peaceful family life, and I believe I have achieved that. [End of Klára's part.]

4.2. Being a minority

It is generally known that Jewish people have achieved many excellent results in intellectual fields. Some ask whether the cause is biological or social. I believe it is social.

First, Jewish families are usually traditional and stable, and care a lot about education. They knew that they would be discriminated against and would have to work twice as hard, and that at any moment they might be forced to leave their home or even their country, so their knowledge might be the only thing they would always be able to keep. The Jewish religion requires parents to educate their children from early childhood; the Talmud requires parents to become the child's first teachers.

4.3. Witnesses of the genius education: the happy children

I care about the happiness of my children. But I not only want to make them happy; I also want to develop their ability to be happy. And I think that being a genius is the most certain way. The life of a genius may be difficult, but it is happy anyway. On the other hand, average people, despite seemingly playing it safe, often become alcoholics, drug addicts, neurotics, loners, etc.

Some geniuses become unhappy with their profession. But even then I believe it is easier for a genius to change professions.

Happiness = work + love + freedom + luck

People worry that child geniuses lose their childhood. But the average childhood is actually not as great as people describe it; many people do not have a happy childhood. Parents want to make their children happy, but they often do it wrong: they buy them expensive toys, but they don't prepare them for life; they outsource that responsibility to school, which generally does not have the right conditions.

And when parents try to fully develop the capabilities of their children, instead of social support they usually get criticism. People will blame them for being overly ambitious, for pushing the children to achieve things they themselves failed at. I personally know people who tried to educate their children similarly to how we did, but the press launched a full-scale attack against them, and they gave up.

My daughters' lives are full of variety. They have met famous people: presidents, prime ministers, ambassadors, Princess Diana, millionaires, mayors, UN delegates, famous artists, other Olympiad winners. They have appeared on television, on radio, and in newspapers. They have traveled around the whole world and visited dozens of famous places. They have hobbies. They have friends in many parts of the world. And our house is always open to guests.

4.4. Make your life an ethical model

People reading this text may be surprised that, while they expected a rational explanation, I mention emotions and morality a lot. But those are necessary for a good life. Everyone should try to improve themselves in these aspects. The reason why I did not give up, despite all the obstacles and malice, is that for me, living morally and creating good is an internal law. I could not do otherwise. I already know that even writing this very book will provoke more attacks, but I am doing it regardless.

And morality is also a thing we are not born with, but which needs to be taught to us, preferably in infancy. And we need to think about it, instead of expecting it to just happen. And the schools fail in this, too. I see it as an integral part of bringing up a genius.

One should aim to be a paragon; to live in a way that will make others want to follow you. Learn and work a lot; expect a lot from yourself and from others. Give love, and receive love. Live in peace with yourself and your neighbors. Work hard to be happy, and to make other people happy. Be a humanist, fight against prejudice. Protect the peace of the family, bring up your children towards perfection. Be honest. Respect freedom of yourself and of the others. Trust humanity; support the communities small and large. Etc.

(The book finishes by listing the achievements of the Polgár sisters, and by their various photos: playing chess, doing sports. I'll simply link their Wikipedia pages: Susan, Sofia, Judit. I hope you enjoyed reading this experimental translation; and if you think I omitted something important, feel free to add the missing parts in the comments. Note: I do believe that this book is generally correct and useful, but that doesn't mean I necessarily agree with every single detail. The opinions expressed here belong to the author; of course, unless some of them got impaired by my hasty translation.)

Research Assistant positions at the Future of Humanity Institute and Centre for Effective Altruism

5 crmflynn 08 June 2017 05:20PM

Both the Future of Humanity Institute (FHI) at the University of Oxford, and the Centre for Effective Altruism (CEA), are advertising for research assistant positions for Toby Ord as he writes a book on Existential Risk. These positions are each for 6 months initially, with the possibility of extension.

Details of the CEA position, including information on how to apply, are available here (https://www.centreforeffectivealtruism.org/careers/research-assistant/). The deadline is 14 June.

Details of the FHI position, including information on how to apply, are available here (https://tinyurl.com/yc9n9e2q). The deadline is 22 June.

 

It is worth noting that the FHI position will not provide visa sponsorship, but it is possible that the CEA position will. Accordingly, non-EU citizens are especially encouraged to apply to the CEA position.

Destroying the Utility Monster—An Alternative Formation of Utility

0 DragonGod 08 June 2017 12:37PM

NOTE: This post contains LaTeX; it is recommended that you install “TeX the World” (for chromium users), “TeX All the Things” or other TeX/LaTeX extensions to view the post properly.
 

Destroying the Utility Monster—An Alternative Formation of Utility

I am a rational egoist, but that is only because there is no existing political system/social construct I identify with. If there was one I identified with, I would be strongly utilitarian. In all moral thought experiments, I err on the side of utilitarianism, and I’m faithful in my devotion to its tenets. There are some criticisms against utilitarianism, and one of the most common—and most powerful—is the utility monster which allegedly proves “utilitarianism is not egalitarian’’. [1]
 
For those who may not understand the terms, I shall define them below:

Utilitarianism is an ethical theory that states that the best action is the one that maximizes utility. "Utility" is defined in various ways, usually in terms of the well-being of sentient entities. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all pleasure that results from an action, minus the suffering of anyone involved in the action. Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism, utilitarianism considers all interests equally.

[2]

The utility monster is a thought experiment in the study of ethics, created by philosopher Robert Nozick in 1974 as a criticism of utilitarianism.
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:
“Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.”
 
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.

[1]  
I first found out about the utility monster a few months ago, and pondered on it for a while, before filing it away. Today, I formalised a system for reasoning about utility that would not only defeat the utility monster, but make utilitarianism more egalitarian. I shall state my system, and then explain each of the points in more detail below.
 

Dragon’s System:

  1. All individuals have the same utility system.
  2. $-1 \le U \le 1$.
  3. The sum of the utility of an event and its negation is $0$.
  4. Specifically, the sum total of all positive utilities an individual can derive (for unique events without double counting) is $1$.
  5. Specifically, the sum total of all negative utilities an individual can derive (for unique events without double counting) is $-1$.
  6. At any given time, the sum total of an individual's potential utility space is $0$.
  7. To increase the utility of an event, you have to decrease the utility of its negation.
  8. To decrease the utility of an event you have to increase the utility of its negation.
  9. An event and its negation cannot have the same utility unless both are $0$.
  10. If two events are independent then the utility of both events occurring is the sum of their individual utilities.

Explanation:

  1. The same system for appropriating utility is applied to all individuals. This is for the purposes of consistency and to be more egalitarian.
  2. The utility an individual can get from an event is between $-1$ and $1$. To derive the utility an individual gains from any event $E_i$, let the utility of $E_i$ under more traditional systems be $W_i$. Then $U_i = \frac{W_i}{\sum_{k=1}^{n} W_k} \quad \forall E_i: W_i > 0$, where the sum runs over all events with positive $W_k$. In English:

    Express the positive utility of each event as a fraction of the individual's total positive utility across all possible events (without double counting any utility).

  3. For every event that can occur, there is a corresponding event representing that event not occurring, called its negation; every event has a negation. If an individual gains positive utility from an event happening, then they must gain an equivalent negative utility from the event not happening; the utility they derive from an event and its negation must sum to $0$. Such is only logical: the positive utility you gain from an event happening is proportional to the negative utility you gain from it not happening.

  4. This follows from the method of deriving “2” explained above.

  5. This follows from the method of deriving “2” explained above.

  6. This follows from “2” and “3”.

  7. This follows from “3”.

  8. This follows from “3”.

  9. This follows from “3”.

  10. This is via intuition. Two events $A$ and $B$ are independent if the utility of $A$ does not depend on the occurrence of $B$ nor does $B$ in any way affect the utility of $A$ and vice versa. If such is true, then to calculate the utility of $A$ and $B$, we need only sum the individual utilities of $A$ and $B$.
     
    It can be seen that my system can be reduced to postulates “1”, “2”, “3”, “6” and “10”. The ten point system is for the sake of clarity which always supersedes brevity and eloquence.
     
    If any desire the concise version:

  11. All individuals have the same utility system.

  12. $-1 \le U \le 1$.

  13. The sum of the utility of an event and its negation is $0$.

  14. At any given time, the sum total of an individual's potential utility space is $0$.

  15. If two events are independent then the utility of both events occurring is the sum of their individual utilities.
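The normalisation at the heart of these axioms can be sketched in a few lines of code (a minimal illustration, with names and numbers of my own choosing, not the author's): each individual's raw positive utilities are rescaled so they sum to $1$, which is exactly what caps a utility monster.

```python
def normalise(raw):
    """Rescale raw positive utilities W_i to U_i = W_i / sum of positive W_k,
    so each individual's positive utilities sum to 1 (axioms 1-2)."""
    total_positive = sum(w for w in raw.values() if w > 0)
    return {event: (w / total_positive if w > 0 else w)
            for event, w in raw.items()}

# An ordinary person gets 1 raw unit per cookie; the monster gets 100.
ordinary = normalise({"cookie": 1.0, "nap": 1.0})
monster = normalise({"cookie": 100.0, "nap": 100.0})

# After normalisation, the monster's cookie counts for no more than anyone's.
assert ordinary["cookie"] == monster["cookie"] == 0.5
```

The monster's enormous raw capacity simply vanishes in the ratio; only the *relative* weight it places on each event survives.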

 

Glossary

Individual: This refers to any sapient entity; generally, this is restricted to humans, but if another conscious life-form (being aware of their own awareness, and capable of conceiving “dubito, ergo cogito, ergo sum—res cogitans”) decided to adopt this system, then it applies to them as well.
Event: Any well-defined outcome from which an individual can derive utility—positive or negative.
Negation: The negation of an event refers to the event not occurring. If event $A$ is the event that I die, then $\neg A$ is the event that I don’t die (i.e. live). If $B$ is the event that I win the lottery, then $\neg B$ is the event that I don’t win the lottery.
Utility Space: The set containing all events from which an individual can possibly derive utility from. This set is finite.
Utility Preferences: The mapping of each event in an individual’s utility space to the fractional utility they derive from the event, and the implicit ordering of events according to it.
 

Assumptions:

Each individual’s utility preferences are unique. No two individuals have the same utility space with the same values for all events therein.
 
We deal only with the utility space of an individual at a given point in time. For example, an immortal who values their continued existence does not value their existence for eternity with ~1.0 utility, but only their existence for the next time period; as such, an immortal and a mortal may derive the same utility from their continued existence. Once an individual receives units of a resource, their utility space is re-evaluated in light of that. After each event, the utility space is re-evaluated.
  The capacity to derive utility (CDU) of any individual is finite; no one is allowed to have infinite CDU. (An individual's capacity to derive utility may be vastly greater than other individuals' (a utility monster), but utility is normalised to deal specifically with such existences.) No one has the right to a greater capacity to derive utility than other individuals. We normalise the utility of every individual, such that the maximum utility any individual can derive is $1$. This makes the system egalitarian, as every individual is given an equal maximum (and minimum) utility regardless of their CDU.
 
The Utility space of an individual is finite. There are only so many events that you can possibly derive utility from. The death of an individual you do not know about is not an event you can derive utility from (assuming you don’t also find out about their death). Individuals can only be affected (positively or negatively) by a finite number of events.
 

Some Inferences:

A change in an individual’s CDU does not produce a change in normalised utility, unless there’s also a change in their utility preferences.
A change in an individual’s utility preferences is necessary and sufficient to produce a change in their normalised utility.
 

Conclusion

Any utility system that conforms to these 5 axioms destroys the utility monster. I think the main problem of traditional utility systems was unbounded utility, and as such they were indeed not egalitarian. My system destroys the concept of unbounded utility by considering the utility of an event to an individual as a fraction of the total utility of their utility space. This means no individual can have their total (positive or negative) utility space sum to more than any other's; the sum total of the utility space is equal for all individuals. I believe this makes a utility system in which every individual is equally represented, and which is truly egalitarian.
This is a concept still in its infancy, so do critique, comment, and make suggestions. I will listen to all feedback and use it to develop the system. This only intends to provide a different paradigm for reasoning about utility, especially in the context of egalitarianism. I did not attempt to formalise a mathematical system for calculating utility, because I lack the mathematical acumen to do so. I would especially welcome suggestions for calculating the utility of dependent events, and for other scenarios. This is not a system of utilitarianism and does not pretend to be one; it is only a paradigm for reasoning about utility. It can, however, be applied to existing utilitarian systems.
 

References

[1] https://en.wikipedia.org/wiki/Utility_monster
[2] https://en.wikipedia.org/wiki/Utilitarianism

Bet or update: fixing the will-to-wager assumption

22 cousin_it 07 June 2017 03:03PM

(Warning: completely obvious reasoning that I'm only posting because I haven't seen it spelled out anywhere.)

Some people say, expanding on an idea of de Finetti, that Bayesian rational agents should offer two-sided bets based on their beliefs. For example, if you think a coin is fair, you should be willing to offer anyone a 50/50 bet on heads (or tails) for a penny. Jack called it the "will-to-wager assumption" here and I don't know a better name.

In its simplest form the assumption is false, even for perfectly rational agents in a perfectly simple world. For example, I can give you my favorite fair coin so you can flip it and take a peek at the result. Then, even though I still believe the coin is fair, I'd be a fool to offer both sides of the wager to you, because you'd just take whichever side benefits you (since you've seen the result and I haven't). That objection is not just academic, using your sincere beliefs to bet money against better informed people is a bad idea in real world markets as well.

Then the question arises, how can we fix the assumption so it still says something sensible about rationality? I think the right fix should go something like this. If you flip a coin and peek at the result, then offer me a bet at 90:10 odds that the coin came up heads, I must either accept the bet or update toward believing that the coin indeed came up heads, with at least these odds. I don't get to keep my 50:50 beliefs about the coin and refuse the bet at the same time. More generally, a Bayesian rational agent offered a bet (by another agent who might have more information) must either accept the bet or update their beliefs so the bet becomes unprofitable. The old obligation about offering two-sided bets on all your beliefs is obsolete; use this one from now on. It should also come in handy in living-room Bayesian scuffles: throwing some money on the table and saying "bet or update!" has a nice ring to it.
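The rule above can be made concrete with a small sketch (my own formalisation of the post's 90:10 example, not the author's; the function name and payoff convention are assumptions). Taking the tails side of a 90:10 offer on heads means staking 10 to win 90, so the break-even belief is $P(\text{heads}) = 90/100 = 0.9$:

```python
def bet_or_update(p_heads, odds=(90, 10)):
    """The other agent offers odds a:b that the coin came up heads; I may
    take the tails side, staking b to win a. Either that bet is profitable
    at my current belief, or I must raise P(heads) to at least a/(a+b)."""
    a, b = odds
    p_break_even = a / (a + b)                 # 0.9 for 90:10 odds
    ev_tails = (1 - p_heads) * a - p_heads * b  # expected value of taking tails
    if ev_tails > 0:
        return ("bet", ev_tails)               # the offer is profitable: take it
    return ("update", max(p_heads, p_break_even))  # otherwise: update

# Keeping 50:50 beliefs and refusing is not an option: at 0.5 the bet
# is profitable, so you either take it or move your belief up to 0.9.
assert bet_or_update(0.5) == ("bet", 40.0)
assert bet_or_update(0.95)[0] == "update"
```

The point is the dichotomy: there is no consistent state in which the bet is refused *and* the old belief is kept.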

What do you think?

New circumstances, new values?

7 Stuart_Armstrong 06 June 2017 08:20AM

Crossposted at the Intelligent Agents Forum.

Quick, is there anything wrong with a ten minute pleasant low-intensity conversation with someone we happen to disagree with?

Our moral intuitions say no, as do our legal system and most philosophical or political ideals since the enlightenment.

Quick, is there anything wrong with brainwashing people into perfectly obedient sheep, willing and eager to take any orders and betray all their previous ideals?

There’s a bit more disagreement there, but that generally is seen as a bad thing.

But what happens when the low-intensity conversation and the brainwashing are the same thing? At the moment, no human can overwhelm most other humans in the course of ten minutes talking, and rewrite their goals into anything else. But an AI may well be capable of doing so - people have certainly fallen in love within less than ten minutes, and we don’t know how “hard” this is to pull off, in some absolute sense.

This is a warning that relying on revealed and stated preferences or meta-preferences won’t be enough. Our revealed and (most) stated preferences are that the ten minute conversation is probably ok. But disentangling how much of that “ok” relies on our understanding the consequences will be a challenge.

Becoming a Better Community

8 Sable 06 June 2017 07:11AM

So I've been following Project Hufflepuff, the effort of the rationalist community to become, not better rationalists per se, but a better community.  I recently read the summary of the recent Project Hufflepuff Unconference, and I had a thought.

 

The Problem

LessWrong And Guardedness

I can only speak to my own experiences in joining the community, but I have always felt that the rationalist community holds its members to a very high standard.  This isn't a bad thing, but it creates, at least in me, a sense of guardedness.  I don't want to be the rationalist who sounds stupid, or the one who contributes less to the conversation.

 

Every post I've made here on LessWrong (not that there have been many) has been reviewed and edited with the same kind of diligence that I normally reserve for graded essays or business documentation.  Other online communities I'm a part of (and meatspace communities) require far less diligence from me as a contributor.  (Note: this isn't a value judgement, rather a description of my experience.)

 

However, my best experiences in communities and friendships have generally occurred in very unguarded atmospheres.  Not that my friends and I aren't smart or can't be smart, but most of the fun I've had with them happens when we're playing card or board or video games, or just hanging out and talking.  Doing things like going out to eat, playing ping-pong, and talking about bad TV shows have led to some of the strongest relationships in my life.

 

So Where Is The Fun?

So - where is this in the rationalist community?  Now, it is very possible that the fun is there and I'm simply missing it.  I haven't been to any meetups, I don't live in the Bay Area, and I don't even know any rationalists in meatspace.  But if the fun is there, I don't see any evidence of it aside from the occasional meetup.

 

I tried to do some research on how friendships and communities are formed, and there seemed to be little consensus in the field.  A New York Times article on making friendships as an adult mentions three factors: 

As external conditions change, it becomes tougher to meet the three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

I was unable to find this in an actual paper, but a brief perusal of the Stanford Encyclopedia of Philosophy's page on friendship at least shows that people who think about the topic seem to agree that there has to be some kind of intimacy involved in a friendship.  And while there are certainly rationalists who are friends, for me becoming a rationalist and joining the community has not yet materialized into any specific friendships.  While that is on my shoulders, I believe it highlights a distinction I want to make.

 

If what we have in common, as Rationalists, is a shared way of thinking and a shared set of goals (e.g. save the world, improve the rationality waterline, etc.), then the relationship I share with the community strikes me more as an alliance than a friendship.

 

Allies want the same goals, and may use similar methodologies to achieve them, but they are not friends.  I wouldn't tell my ally about an embarrassing dream I had, or get drunk with them and make fun of bad movies.

 

I don't mean to get hung up on meanings - the words themselves aren't important.  But from what I have seen, the community, especially outside the Bay Area, lacks the unguarded intimacy I see in my close friendships, which I think is a key component of community-building.  I'd be willing to bet that even at meetups, many (>20%) Rationalists feel the weight of the community's high standards, and are thus more guarded than they are in relationships with fewer expectations.

 

What I'm trying to get at is that I haven't experienced an unguarded interaction with a rationalist, online or in meatspace.  I always want to be at the top of my game: always trying to reason better, always trying to remember all the things I've learned about biases and probability theory.  And I suspect that low-standards, unguarded interactions have something to do with growing friendships and communities.

 

So, for an East-coaster with a computer:

 

Where is the fun?  Where are the rationalist video game tournaments?  Robot fights?  Words with Friends who are rationalists?

 

Where is the chilling and watching all the Lord of the Rings movies together?  The absurd Dungeons and Dragons campaigns because everyone is a plotter and there are too many plots?

 

A Few Suggested Solutions

Everyone in the Rationalist community wants to help.  We want to save the world, and that's great.  But...not everything has to be about saving the world.  If the goal of an activity is community/friendship building, why can't it be otherwise pointless?  Why can't it be silly and inane and utterly irrational?

 

So, in the interests of Project Hufflepuff, I spent some time thinking about ways to improve/change the situation.

 

The Hero/Sidekick/Dragon Project

There was a series of posts in 2015 that had to do with different people wanting to take different roles in projects, be it the hero, the sidekick, the dragon, etc.  An effort was made to match people up, but as far as I can tell, it petered out, because I haven't seen anything to do with it since then (I would be happy to be wrong about this).  I'll link the posts here; the first is, in particular, excellent: the issue in general, an attempt at matchmaking, and a discussion of matchmaking methods.

I might suggest an open thread that functions as a classified ad, e.g. Help Wanted, must be able to XYZ, or Sidekick In Need of Hero, must live in X area, etc.

I'd also like to mention that the project in question shouldn't have to be about friendly AI or effective altruism; I think that developing an effective partnership is valuable by itself.

 

Online Gaming

Is there a reason that members of the community can't game together online?  This post on Overwatch provides at least a small amount of evidence that the community would have enough members interested to form teams, and team-building seems to be one of the goals.

 

Fun Projects 

I can think of plenty of challenging projects that require a team that I'd love to do, but that have almost nothing to do with world-saving at any scale.  Things like making a robot, or coding a game, or writing a book or play.  Does this happen in the community?  If not, I think it might help.  Again, the goal would be to create an unguarded atmosphere to foster friendships and team-building.

 

Rationalist Buddy System

I'd like to distinguish this from the Hero/Sidekick idea above.  I know that I could use a rationalist buddy to pair up with.  Many motivational and anti-akrasia techniques require social commitment, and Beeminder can only go so far.  Having a person to talk things through with, to experiment with anti-akrasia techniques, or just to inspire and be inspired by would be insanely helpful for me, and I suspect for many of us.  I'm vaguely reminded of the 12-step program's sponsors, if only in the way they support people going through the program.

I'm not sure how to execute this, but I think it has the potential to be useful enough to be worth trying.

 

Rationalist Big/Little Program

One of the things I got out of the Project Hufflepuff Unconference Notes was that making newcomers feel welcome was an issue.  An idea to change this was a "welcoming committee":

Welcoming Committee (Mandy Souza, Tessa Alexanian)

Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more. 

I would like to suggest some version of the Big/Little program.  For those who don't know, the idea is that established members of the community volunteer to be "Bigs," and when a newcomer appears (a "Little") they are matched with a Big.  The Big then takes on the role of a guide, providing the Little an easier introduction to the community.  This idea has been used in many different environments, and has helped me personally in the past.

Perhaps people could sign up on some sort of permanent thread that they're willing to be Bigs, and then lurkers and first-time posters could be encouraged to PM them?

In Conclusion

It seems to me as though the high standards of the Rationalist community promote a guarded atmosphere, which hampers the development of close friendships and the community.  I've outlined a few ways that may help create places within the community where standards can be lowered and guards relaxed without (hopefully) compromising its high standards elsewhere.

I realize that most of this post is based upon my personal observations and experiences, which are anecdotal evidence and thus Not To Be Trusted.  I am prepared to be wrong, and would welcome the correction.

Let me know what you think.

Argument From Infinity

0 DragonGod 05 June 2017 09:33PM

Note: This post contains LaTeX; it is recommended that you install “TeX the World” (for chromium users), “TeX All the Things” or other TeX/LaTeX extensions to view the post properly.

The Argument From Infinity

If you live forever then you will definitely encounter a completely terrible scenario like being trapped in a black hole or something.

 
I have noticed a tendency for people to conclude that, because a set is infinite, the set must contain some particular potential element $x_j$.
 
Say, for example, that you live forever; your existence is then an infinite set of events. Let’s denote your existence as $E$.
 
$E = \{x_1, x_2, x_3, \dots\}$,
where each $x_i$ is some event that can potentially happen to you.
The fallacy of infinity is positing that, because $E$ is infinite, $E$ must contain some particular $x_j$.
 
However, this is simply wrong. Before I prove that the infinity fallacy is in fact a logical fallacy, I will posit a hypothesis as to the underlying cause of the fallacy of infinity.
 
I suspect it is because people have a poor understanding of the nature of infinity. They assume that, because $E$ is infinite, $E$ contains every potential $x_i$. The implicit reasoning runs: if $E$ did not contain every potential $x_i$, then $E$ would not be infinite; since the premise is that $E$ is infinite, $E$ must contain $x_j$.

Counter Argument.

I shall offer an algorithm that demonstrates how to generate an infinite number of infinite subsets from an infinite set (take $N$ to be the natural numbers).
 
Pick an element $i$ in $N$ and exclude it from $N$. You have generated an infinite subset of $N$. There are $\aleph_0$ possible such infinite subsets.
Pick any two elements of $N$ and exclude them. You have generated another infinite subset of $N$. There are ${\aleph_0 \choose 2}$ possible such infinite subsets.
In general, we can generate an infinite subset by excluding $k$ elements from $N$. The number of such infinite subsets generated is ${\aleph_0 \choose k}$.
 
To find out the total number of infinite subsets that can be generated, take
$$\sum_{k=1}^{\aleph_0} {\aleph_0 \choose k}$$

However, these are only the infinite subsets with finite complements. To get infinite subsets with infinite complements, we can pick any finite subset of $N$, take the product of its elements, and then keep either only the multiples of that product, or only the non-multiples. That gives $2$ infinite subsets for each finite subset of $N$.
I can generate more infinite sets by taking any of these infinite sets and adding back any $k$ of the excluded elements, or similarly removing another $k$ elements.
However, this algorithm doesn’t generate all possible infinite subsets of $N$ (e.g. the prime numbers, the Fibonacci numbers, the numbers coprime to some given number, or any infinite subset satisfying some property $P$, such as solutions to equations with more unknowns than conditions). The total number of possible infinite subsets (including those not generated by my algorithm) is $2^{\aleph_0}$, the cardinality of the real numbers.
 
To explain the counter argument in simple terms:

There are an infinite number of even numbers, but none of them is odd.
There are an infinite number of prime numbers, but none of them is $6$.
There are an infinite number of multiples of $7$, but none of them is prime save $7$ itself.

The number of possible infinite subsets is far greater than the number of elements in the parent set. In fact, for any event $x_i$ (or finite set of events), the number of infinite subsets that exclude $x_i$ entirely is infinite. To posit that, simply because $E$ is infinite, $E$ must contain $x_i$ is therefore wrong.
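The examples above can be checked mechanically. A minimal sketch in Python, using finite prefixes as stand-ins for the infinite sets (the bounds are arbitrary):

```python
from itertools import count, islice

# The even numbers form an infinite set; inspect any finite prefix
# and no odd number ever appears.
evens = count(0, 2)
prefix = list(islice(evens, 1_000_000))
assert all(n % 2 == 0 for n in prefix)

# The multiples of 7 are likewise infinite, yet contain no prime
# except 7 itself.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

multiples_of_7 = [7 * k for k in range(1, 10_000)]
primes_among = [n for n in multiples_of_7 if is_prime(n)]
assert primes_among == [7]
```

No finite check proves a claim about an infinite set, of course; the point is only that membership of a set and infinitude of a set are independent properties.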

Alternative Formulation/Charitable Hypothesis.

This states a weaker form of the infinity fallacy, and a better argument.

If you live forever, the probability is arbitrarily close to 1 that you would end up in a completely terrible scenario.

Let the set of events anathema to you be denoted $F$: $F = \{y_1, y_2, y_3, \dots, y_m\}$.
 
We shall now attempt to construct $E$.
For each event drawn from a pool $A$, the probability that the event is not in $F$ is $\frac{\#A - \#F}{\#A} < 1$.
Over an existence of $n$ independent events, the probability of never encountering $F$ is
$$\left(\frac{\#A - \#F}{\#A}\right)^n \to 0 \,\,\, as \,\,\, n \to \infty.$$
Thus, for an infinite existence,
$Pr(\neg \text{bad event}) = 0$, and $Pr(\text{bad event}) = 1 - Pr(\neg \text{bad event}) = 1 - 0 = 1$.
$\therefore$ the probability that you would encounter a bad event is arbitrarily close to $1$.
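If we grant the (strong) additional assumptions that events are drawn independently and uniformly from a finite pool $A$ containing a fixed bad set $F$, the limit can be computed directly. A sketch in Python; the pool sizes and lifetimes below are made-up illustrations, not anything from the argument:

```python
# P(a single event avoids the bad set F), for a finite pool A
def p_survive_one(size_A, size_F):
    return (size_A - size_F) / size_A

# P(avoiding F across n independent events) = p ** n, which
# tends to 0 as n grows, for any fixed p < 1
def p_survive_n(size_A, size_F, n):
    return p_survive_one(size_A, size_F) ** n

print(p_survive_n(1_000_000, 1, 10))     # ~1: short lives are safe
print(p_survive_n(1_000_000, 1, 10**8))  # ~0: "forever" is not
```

Note that the conclusion hinges on the independence and fixed-positive-probability assumptions; if the per-event probability of disaster shrinks fast enough over time, the infinite product need not go to zero.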
 

Comment

I cannot comprehend how probability works in the face of infinity, so I can’t fully respond to the above formulation (which, if valid, I’ll label the “infinity heuristic”).
 
Another popular form of the argument from infinity:

If you put a million monkeys on a million typewriters and let them type forever, the entire works of Shakespeare would eventually be produced.

There is an actual proof of this which is sound: any fixed finite string has positive probability per block of random keystrokes, so with probability one it is eventually typed. More generally, a random generator that assigns positive probability to every element of a countable set will, with probability one, eventually generate each element; the entire sample space would be enumerated. However, there are many possible infinite subsets that do not contain all the elements of the parent set. It bears mention, though, that I am admittedly terrible at intuiting infinity.
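The monkey theorem can be made concrete with a simplified model: treat the monkeys' output as independent 3-letter blocks, and ask how likely a fixed 3-letter word is to appear in at least one of $n$ blocks. This only underestimates the true probability (the word could also straddle block boundaries), yet it still tends to 1:

```python
# Probability that a fixed 3-letter word appears in at least one of
# n independent 3-letter blocks typed uniformly at random.
ALPHABET = 26
p_hit = ALPHABET ** -3           # chance a single block spells the word

def p_eventually(n):
    return 1 - (1 - p_hit) ** n  # complement of "miss every block"

print(p_eventually(10**3))       # still small
print(p_eventually(10**6))       # near certain
```

The key premise is the fixed positive per-trial probability; infinitude alone, as argued above, guarantees nothing.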

The question remains though: is the argument from infinity a fallacy or a heuristic?  
What do you guys think? Is the argument from infinity the “infinity heuristic”, or is it just a fallacy?

Mode Collapse and the Norm One Principle

11 tristanm 05 June 2017 09:30PM

[Epistemic status: I assign a 70% chance that this model proves to be useful, 30% chance it describes things we are already trying to do to a large degree, and won't cause us to update much.] 

I'm going to talk about something that's a little weird, because it uses some results from some very recent ML theory to make a metaphor about something seemingly entirely unrelated - norms surrounding discourse. 

I'm also going to reach some conclusions that surprised me when I finally obtained them, because it caused me to update on a few things that I had previously been fairly confident about. This argument basically concludes that we should adopt fairly strict speech norms, and that there could be great benefit to moderating our discourse well. 

I argue that in fact, discourse can be considered an optimization process and can be thought of in the same way that we think of optimizing a large function. As I will argue, thinking of it in this way will allow us to make a very specific set of norms that are easy to think about and easy to enforce. It is partly a proposal for how to solve the problem of dealing with speech that is considered hostile, low-quality, or otherwise harmful. But most importantly, it is a proposal for how to ensure that the discussion always moves in the right direction: Towards better solutions and more accurate models. 

It will also help us avoid something I'm referring to as "mode collapse" (where new ideas generated are non-diverse and are typically characterized by adding more and more details to ideas that have already been tested extensively). It's also highly related to the concepts discussed in the Death Spirals and the Cult Attractor portion of the Sequences. Ideally, we'd like to be able to make sure that we're exploring as much of the hypothesis space as possible, and there's good reason to believe we're probably not doing this very well.  

The challenge: Making sure we're searching for the global optimum in model-space sometimes requires reaching out blindly into the frontiers, the not well-explored regions, which runs the risk of ending up somewhere very low-quality or dangerous. There are also sometimes large gaps between very different regions of model-space where the quality of the model is very low in-between, but very high on each side of the gap. This requires traversing through potentially dangerous territory and being able to survive the whole way through.

(I'll be using terms like "models" and "hypotheses" quite often, and I hope this isn't confusing. I am using them very broadly, to refer both to theoretical understandings of phenomena and to blueprints for practical implementations of ideas.)

We desire to have a set of principles which allows us to do this safely - to think about models of the world that are new and untested, solutions for solving problems that have never been done in a similar way - and they should ensure that, eventually, we can reach the global optimum. 

Before we derive that set of principles, I am going to introduce a topic of interest from the field of Machine Learning. This topic will serve as the main analogy for the rest of this piece, and serve as a model for how the dynamics of discourse should work in the ideal case. 

I. The Analogy: Generative Adversarial Networks

For those of you who are not familiar with the recent developments in deep learning, Generative Adversarial Networks (GANs) [intro pdf here] are a new class of generative models, well suited to producing high-quality samples from very high-dimensional, complex distributions. They have caused great buzz and hype in the deep-learning community due to how impressive some of the samples they produce are, and how efficient they are at generation.

Put simply, a generator model and a critic (sometimes called a discriminator) model play a two-player game in which the critic is trained to distinguish between samples produced by the generator and the "true" samples taken from the data distribution. In turn, the generator is trained to maximize the critic's loss function. Both models are usually parametrized by deep neural networks and can be trained by taking turns running a gradient descent step on each. The Nash equilibrium of this game is reached when the generator's distribution matches the data distribution perfectly. This is never really borne out in practice, but sometimes it gets so close that we don't mind.

GANs have one principal failure mode, often attributed to the instability of the system, called "mode collapse" (a term I'm going to appropriate to refer to a much broader concept). It was often believed that, if a careful balance between the generator and critic could not be maintained, one would eventually overpower the other, leading the critic to provide either useless or overly harsh information to the generator. Useless information causes the generator to update very slowly or not at all, and overly harsh information leads the samples to "collapse" to the small regions of the data space that are the easiest targets for the generator to hit.

This problem was essentially solved earlier this year by a series of papers that propose modifications to the loss functions GANs use and, most crucially, add another term to the critic's loss which constrains the critic's gradient (with respect to its inputs) to have a norm close to one. It was recognized that we actually desire an extremely powerful critic so that the generator can make the best updates it possibly can, but the updates themselves can't go beyond what the generator is capable of handling. With these changes to the GAN formulation, it became possible to use crazy critic networks such as ultra-deep ResNets and train them as much as desired before updating the generator network.
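The penalty term being described can be illustrated on a toy critic. For a linear critic $f(x) = w \cdot x$, the gradient with respect to the input is just $w$, so the penalty $\lambda(\lVert \nabla_x f \rVert - 1)^2$ can be computed by hand. The linear critic and the value of $\lambda$ below are illustrative assumptions, not the setup of the papers in question:

```python
import numpy as np

def gradient_penalty(w, lam=10.0):
    """Penalty pushing a critic's input-gradient norm toward one.

    For a linear critic f(x) = w . x, grad_x f = w everywhere,
    so the gradient norm is simply ||w||.
    """
    grad_norm = np.linalg.norm(w)
    return lam * (grad_norm - 1.0) ** 2

print(gradient_penalty(np.array([1.0, 0.0])))  # norm 1 -> penalty 0
print(gradient_penalty(np.array([3.0, 4.0])))  # norm 5 -> penalty 160
```

In a real deep critic the gradient varies with the input, so the penalty is evaluated at sampled points and added to the critic's loss; the principle, a gradient norm neither vanishing nor exploding, is the same.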

The principle behind their operation is rather simple to describe, but unfortunately, it is much more difficult to explain why they work so well. However, as long as we know how to make one, and know the specific implementation details that improve their stability, I believe their principles can be applied more broadly to achieve success in a wide variety of regimes.

II. GANs as a Model of Discourse

In order to use GANs as a tool for conceptual understanding of discourse, I propose to model the dynamics of debate as a collection of hypothesis-generators and hypothesis-critics. This could be likened to the structure of academia - researchers publish papers, they go through peer review, the work is iterated on and improved - and over time this process converges to more and more accurate models of reality (or so we hope). Most individuals within this process play both roles, but in theory this process would still work even if they didn't. For example, Isaac Newton was a superb hypothesis-generator, but he also had some wacky ideas that most of us would consider to be obviously absurd. Nevertheless, calculus and Newtonian physics became a part of our accepted scientific knowledge, and alchemy didn't. The system adopted and iterated on his good ideas while throwing away the bad.

Our community should be capable of something similar, while doing it more efficiently and not requiring the massive infrastructure of academia. 

A hypothesis-generator is not something that just randomly pulls out a model from model-space. It proposes things that are close modifications of things it already holds to be likely within its model (though I expect this point to be debatable). Humans are both hypothesis-generators and hypothesis-critics. And as I will argue, that distinction is not quite as sharply defined as one would think. 

I think there has always been an underlying assumption within the theory of intelligence that creativity and recognition/distinction are fundamentally different. In other words, one can easily understand Mozart to be a great composer, but it is much more difficult to be a Mozart. Naturally this belief made its way into the field of Artificial Intelligence too, and became somewhat of a dogma. Computers might be able to play Chess, they might be able to play Go, but they aren't doing anything fundamentally intelligent. They lack the creative spark; they work on pure brute-force calculation only, with maybe some heuristics and tricks that their human creators bestowed upon them.

GANs seem to defy this principle. Trained on a dataset of photographs of human faces, a GAN generator learns to produce near-photo-realistic images that nonetheless do not fully match any of the faces the critic network saw (one of the reasons why CelebA was such a good choice to test these on), and are therefore in some sense genuinely original. It may once have been thought that there was a fundamental distinction between creation and critique, but perhaps that's not really the case. GANs were a surprising discovery, because they showed that it was possible to make impressive "creations" by starting from random nonsense and slowly tweaking it in the direction of "good" until it eventually got there (well, okay, that's basically true for the whole of optimization, but it was thought to be especially difficult for generative models).

What does this mean? Could someone become a "Mozart" by beginning a musical composition from random noise and slowly tweaking it until it became a masterpiece?

The above seems to imply "yes, perhaps." However, this is highly contingent on the quality of the "tweaking." It seems possible only as long as the directions to update in are very high quality. What if they aren't very high quality? What if they point nowhere, or in very bad directions?

I think the default distribution of discourse is characterized by a large number of these directionless, low-quality contributions, and that this is likely one of the main factors behind mode collapse. This is related to what has been noted before: too much intolerance for imperfect ideas (or ideas outside of established dogma) in a community prevents useful tasks from being accomplished, and progress from being made. Academia does not seem immune to this problem. Where low-quality or hostile discussion is tolerated is where this risk is greatest.

Fortunately, making sure we get good "tweaks" seems to be the easy part. Critique is in high abundance. Our community is apparently very good at it. We also don't need to worry much about the ratio of hypothesis-generators to hypothesis-critics, as long as we can establish good principles that allow us to follow GANs as closely as possible. The nice feature of the GAN formulation is that you are allowed to make the critic as powerful as you want. In fact, the critic should be more powerful than the generator (If the generator is too powerful, it just goes directly to the argmax of the critic). 

(In addition, any collection of generators is a generator, and any collection of critics is a critic. So this formulation can be applied to the community setting).

III. The Norm One Principle

So the question then becomes: how do we take an algorithm governing a game between models much simpler than humans, whose "tweaks" consist of nothing more than a few very simple equations, and apply it to human discourse?

What I devise here is a strategy that takes the concept of keeping the norm of the critic's gradient as close to one as possible, and uses it as a heuristic for how to structure appropriate discourse.

(This is where my argument gets more speculative and I expect to update this a lot, and where I welcome the most criticism).

What I propose is that we begin modeling the concept of "criticism" based on how useful it is to the idea-generator receiving the criticism. Under this model, I think we should start breaking down criticism into two fundamental attributes:

  1. Directionality - does the criticism contain highly useful information, such that the "generator" knows how to update their model / hypothesis / proposal?
  2. Magnitude - Is the criticism too harsh, does it point to something completely unlike the original proposal, or otherwise require changes that aren't feasible for the generator to make?

My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement.

If a critique can satisfy our requirements for both directionality and magnitude, then it serves a useful purpose. The inverse claim is that if we can't follow these requirements, we risk falling into mode collapse, in which the ideas commonly proposed are almost indistinguishable from the ones that preceded them, and ideas which deviate too far from the norm are harshly condemned and suppressed.

I think it's natural to question whether or not restricting criticism to follow certain principles is a form of speech suppression that prevents useful ideas from being considered. But the pattern I'm proposing doesn't restrict the "generation" process, the creative aspect which produces new hypotheses. It doesn't restrict the topics that can be discussed. It only restricts the criticism of those hypotheses, such that they are maximally useful to the source of the hypothesis. 

One of the primary fears behind having too much criticism is that it discourages people from contributing because they want to avoid the negative feedback. But under the Norm One Principle, I think it is useful to distinguish between disagreement and criticism. I think if we're following these norms properly, we won't need to consider criticism to be a negative reward. In fact, criticism can be positive. Agreement could be considered "criticism in the same direction you are moving in." Disagreement would be the opposite. And these norms also eliminate the kind of feedback that tends to be the most discouraging. 

For example, some things which violate "Norm One":

  • Ad hominem attacks (typically directionless). 
  • Affective Death Spirals (unlimited praise or denunciation is usually directionless, and usually very high magnitude). 
  • Signs that cause aversion (things I "don't like", that trigger my System 1 alarms, which probably violates both directionality and magnitude). 
  • Lengthy lists of changes to make (norm greater than 1, ideally we want to try to focus on small sets of changes that have the highest priority). 
  • Repetition of points that have already been made (norm greater than one). 

One of my strongest hopes is that whoever is playing the part of the "generator" is able to compile the list of critiques easily and use them to update in somewhere close to the optimal direction. This would be difficult if the sum of all critiques is either directionless (many critics point in opposite or near-opposite directions) or very high-magnitude (critics simply say to get as far away from here as possible).

But let's suppose that each individual criticism satisfies the Norm One principle. We will also assume that the generator is weighing each critique by their respect for whoever produced it, which I think is highly likely. Then the generator should be able to move in a direction unless the sum of the directions completely cancel out. It is unlikely for this to happen - unless there is very strong epistemic disagreement in the community over some fundamental assumptions (in which case the conversation should probably move over to that). 
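As a toy numerical illustration of this aggregation step (the critique vectors and respect weights below are invented for illustration, not anything measured):

```python
import numpy as np

def normalize(v):
    """Rescale a critique to satisfy the Norm One Principle."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v  # a directionless critique stays zero

def aggregate(critiques, weights):
    """Respect-weighted sum of norm-one critiques."""
    unit = [normalize(c) for c in critiques]
    return sum(w * u for w, u in zip(weights, unit))

# Two broadly aligned critiques and one dissent still yield a
# usable update direction:
critiques = [np.array([1.0, 0.0]),
             np.array([0.8, 0.6]),
             np.array([-1.0, 0.0])]
update = aggregate(critiques, weights=[1.0, 1.0, 1.0])
print(update)

# Perfectly opposed critiques cancel out, leaving no direction:
print(aggregate([np.array([1.0, 0.0]), np.array([-1.0, 0.0])],
                [1.0, 1.0]))
```

The complete-cancellation case corresponds to the fundamental epistemic disagreement described above, and, as more independent critiques are added, exact cancellation becomes increasingly unlikely.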

In addition, it also becomes less likely for the directions to cancel out as the number of inputs increases. Thus, it seems that proposals for new models should be presented to a wide audience, and we should avoid the temptation to keep our proposals hidden to all except for a small set of people we trust.

So I think that in general, this proposed structure should tend to increase the amount of collective trust we have in the community, and that it favors transparency and favors diversity of viewpoints. 

But what of the possible failure modes of this plan? 

This model should fail if the specific details of its implementation either remove too much discussion, or fail to deal with individuals who refuse to follow the norms and refuse to update. Any implementation should allow room for anyone to update. Someone who posts an extremely hostile, directionless comment should be allowed chances to modify their contribution. The only scenario in which the "banhammer" becomes appropriate is when this model fails to apply: The cardinal sin of rationality, the refusal to update. 

IV. Building the Ideal "Generator"

As a final point, I'll note that the above assumes that generators will be able to update their models incrementally. The easy part, as I mentioned, was obtaining the updates; the hard part is accumulating them. This seems difficult with the infrastructure we have in place. What we do have is a good system for posting proposals and receiving feedback (the blog post / comment thread set-up), but this assumes that each "generator" is keeping track of their models by themselves and has to be fully aware of the status of other models on their own. There is no centralized "mixture model" anywhere that contains the full set of models weighted by how much probability they are given by the community. Currently, we do not have a good solution for this problem.

However, it seems that the first conception of Arbital was centered around finding a solution to this kind of problem:

Arbital has bigger ambitions than even that. We all dream of a world that eliminates the duplication of effort in online argument - a world where, the same way that Wikipedia centralized the recording of definite facts, an argument only needs to happen once, instead of being reduplicated all over the Internet; with all the branches of the argument neatly recorded in the same place, along with some indication of who believes what. A world where 'just check Arbital' had the same status for determining the current state of debates, as 'just check Wikipedia' now has when somebody starts arguing about the population of Melbourne. There's entirely new big subproblems and solutions, not present at all in the current Arbital, that we'd need to tackle that considerably more difficult problem. But to solve 'explaining things' is something of a first step. If you have a single URL that you can point anyone to for 'explaining Bayes', and if you can dispatch people to different pages depending on how much math they know, you're starting to solve some of the key subproblems in removing the redundancy in online arguments.

If my proposed model is accurate, then it suggests that the problem Arbital aims to solve is in fact quite crucial to solve, and that the developers of Arbital should consider working through each obstacle they face without pivoting from this original goal. I feel confident enough that this goal should be high priority that I'd be willing to support its development in whatever way is deemed most helpful and is feasible for me (I am not an investor, but I am a programmer and would also be capable of making small donations, or contributing material). 

The only thing this model would require of Arbital is to be as open as possible to contributions, and then to moderate or filter contributed content heavily (importantly, not the other way around, where it is closed to a small group of trusted people).

Currently, the incremental changes that would have to be made to LessWrong and related sites like SSC would simply be increased moderation of comment quality. Otherwise, any further progress on the problem would require overcoming much more serious obstacles requiring significant re-design and architecture changes. 

Everything I've written above is also subject to the model I've just outlined, and therefore I expect to make incremental updates as feedback to this post accrues.

My initial prediction for feedback to this post is that the ideas might be considered helpful and offer a useful perspective or a good starting point, but that there are probably many details that I have missed that would be useful to discuss, or points that were not quite well-argued or well thought-out. I will look out for these things in the comments.   

The Simple World Hypothesis

3 DragonGod 05 June 2017 07:34PM

Part of a Series in the Making: "If I Were God".

Introduction

The current universe is the simplest possible universe with the same degree of functionality.

 
This hypothesis posits that the current universe is the simplest universe possible which can do all that our universe can do. Here, simplicity refers to the laws which make up the universe. It may be apt to mention the Multiverse Axioms at this juncture:
Axiom 1 (axiom of consistency):

Any possible universe is logically consistent and strictly adheres to well-defined laws.

Axiom 2 (axiom of inclusivity):

Whatever can happen (without violating 1) happens—and in every way possible (without violating 1).

Axiom 3 (axiom of simplicity):
The underlying laws governing the Multiverse are as simple as possible (while permitting 1 and 2).

 
The simple world hypothesis posits that our universe has the fewest laws which can enable the same degree of functionality that it currently possesses. I’ll explain the concept of “degree of functionality”. Take two universes: U_i and U_j with degrees of functionality d_i and d_j. Then the below three statements are true:
d_i > d_j implies that U_i can simulate U_j.
d_j < d_i implies that U_j cannot simulate U_i.
d_i = d_j implies that U_i can simulate U_j, and U_j can in turn simulate U_i.

 
Let’s consider a universe like Conway’s Game of Life. It is far simpler than our universe and possesses only four laws. The simple world hypothesis argues that Conway’s Game of Life (U_c) cannot simulate our universe (U_0). The degree of functionality of Conway’s Game of Life (d_c) is less than the degree of functionality of our universe (d_0). An advance prediction of the simple world hypothesis regarding U_c is the following:

Human level intelligence cannot emerge in U_c.

The above implicitly assumes that Conway’s Game of Life is simpler than our universe—is that really true?  

Simplicity

It is only prudent that I clarify what it is I mean by simplicity. For any two Universes U_i and U_j, let their simplicity be denoted S_i and S_j respectively. The simplicity of a universe is the Kolmogorov complexity of the set of laws which make up that universe.
For U_c, those laws are:
1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overpopulation.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
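The four rules above are concrete enough to execute. Below is a minimal sketch in Python (my own illustration; the post itself contains no code, and the live-cell-set representation is an implementation choice, not anything from the text):

```python
from itertools import product

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Apply the four rules of Conway's Game of Life once to a set of live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts: dict[tuple[int, int], int] = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
    next_live = set()
    for cell, n in neighbour_counts.items():
        if cell in live and n in (2, 3):      # rules 1-3: underpopulation, survival, overpopulation
            next_live.add(cell)
        elif cell not in live and n == 3:     # rule 4: reproduction
            next_live.add(cell)
    return next_live

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

A live cell with no live neighbours never appears in `neighbour_counts`, so it correctly dies of underpopulation.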

 
At this point, I find it prudent to mention the topic of Kolmogorov complexity. The Kolmogorov complexity of an object is the length (in bits) of the shortest computer program (in a predetermined language) that produces that object as output. Let’s pick any (sensible) Turing-complete language T_x. We’re concerned with the binary length of the shortest T_x program that produces the laws that describe U_i. When discussing the simplicity of a universe, we refrain from mentioning its initial state; the degrees of functionality are qualitative and not quantitative. For example, a universe U_1 which contains only the Milky Way will have d_1 = d_0. As such, we take only the Kolmogorov complexity of the laws describing the universe, and not the Kolmogorov complexity of the universe itself. For any U_i and U_j, let the Kolmogorov complexity of the laws describing U_i and U_j be K_i and K_j respectively.

S_i = K_i^(-1)
S_j = K_j^(-1)
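Kolmogorov complexity is uncomputable in general, but the length of any compressed encoding of an object is an upper bound on it. As a hedged sketch of how one might compare description lengths in practice (compression is only a crude stand-in for the true K; the rule text and the helper name below are my own, not from the post):

```python
import zlib

def description_length_bound(text: str) -> int:
    """Upper bound (in bits) on the Kolmogorov complexity of `text`,
    via the length of its zlib-compressed encoding."""
    return 8 * len(zlib.compress(text.encode("utf-8"), level=9))

# Hypothetical compact statement of Conway's four rules:
game_of_life_rules = (
    "live cell with <2 live neighbours dies; "
    "live cell with 2 or 3 live neighbours survives; "
    "live cell with >3 live neighbours dies; "
    "dead cell with exactly 3 live neighbours becomes live"
)

print(description_length_bound(game_of_life_rules))
```

Comparing such bounds for the rule sets of two universes is the spirit of comparing K_c with K_0, though only the true (uncomputable) shortest programs settle the question.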

 

Interlude

Let the set of universes which conform to the Multiverse Axioms be denoted M.

 

Weak Hypothesis

According to the simple world hypothesis, no U_z with K_z < K_0 has d_z >= d_0.

To be mathematically precise:
There does not exist U_z in M such that K_z < K_0 and d_z >= d_0.

 

Strong Hypothesis

The strong hypothesis generalises the weak form of the simple world hypothesis to all universes.

The degree of functionality of a universe is directly proportional to its Kolmogorov complexity.

To be mathematically precise:
For all U_y in M, there does not exist U_z in M such that K_z < K_y and d_z >= d_y.

 

Rules That Govern Universes.

When I refer to the “rules that govern a universe”, or “rules upon which a universe is constructed”, I refer to a set of axioms. The principles of formal logic are part of the Multiverse axiom, and no possible Universe can violate that. As such, the principles of formal logic are a priori part of any possible Universe U_z in M.
 
The rules that govern a universe are only the set of axioms upon which the universe is constructed, in tandem with the principles of formal logic. For example, in our universe the governing laws would not include Newtonian mechanics (as it is merely a special case of Einstein’s underlying theories of relativity). I suspect (with P > 0.67) that the law(s) that govern our Universe would be the Theory of Everything (TOE) and/or Grand Unified Theory (GUT). All other laws can be derived from them in combination with the underlying laws of formal logic.

 

Degree of Functionality

The degree of functionality of a Universe U_z (d_z) refers to the maximum complexity (qualitatively not quantitatively; e.g. a human brain is more complicated than a supercluster absent of life) that can potentially emerge from that universe from any potential initial state. Taking U_c to illustrate my point, the maximum complexity that any valid configuration of U_c can produce is d_c. I suspect that human level intelligence (qualitatively and not quantitatively; i.e. artificial super intelligence is included in this category. I refer merely to the potential to conceive the thought “dubito, ergo cogito, ergo sum—res cogitans”) is d_0.

Simulating a Universe.

When I mention a Universe, I do not refer specifically to that Universe itself—and all it contains—but to the set of laws (axioms) upon which that Universe is constructed. Any Universe that has the same base laws as ours—or mathematically/logically equivalent base laws—is isomorphic to our universe. I shall define a set of Universes A_i. A_i is the set of universes that possess the same set of base laws L_i or a mathematical/logical equivalent. The set of laws that govern our Universe is L_0. In my example above, U_1 is a member of A_0.
 
Initially, I ignored the initial state/conditions of a universe, deeming them irrelevant for describing it. For any universe U_i, let its j-th possible initial state be F_ij. Let B be the set of all possible initial states across all universes: B = {F_11, F_12, F_13, …, F_1n, F_21, F_22, F_23, …, F_2n, …, F_nn}. Let the current/final state (whichever one we are concerned with) of any U_i started from F_ij be G_ij.
 
I shall now explain what it means for a Universe U_i to simulate another Universe U_j.

U_i simulates U_j if for all F_jy in B, there exists an F_il such that G_jy = G_il.

In concise English:

U_i simulates U_j if for every (valid) possible initial configuration of U_j which maps to a current/final state, there exists a valid configuration of U_i which produces U_j’s current/final state.

When I refer to producing the state of another universe, I refer to expressing all the information that the other universe does. The rules for transformation and extraction of information conform to the third axiom:

The underlying laws governing the universe are as simple as possible while permitting 1 and 2.

 

Expressive Power.

Earlier, I introduced the concept of A_i for any given set of laws L_i that governs a universe. When we mention U_i, we are in fact talking of U_i in conjunction with some initial state. If we ignore the initial state, we are left with only L_i. I mentioned earlier that the degree of functionality of a universe is the maximum complexity that can emerge from some valid configuration of that universe. The expressive power of a universe is the expressive power of its L_i.

The expressive power of a given L_i is the set of L_j which L_i can potentially simulate.

The set of L_j that can be concisely and coherently represented in L_i is the expressive power of L_i. I shall once again rely on Conway’s Game of Life (U_c). The 4 laws governing U_c can be represented in L_0, and as such L_0 has an expressive power E_0 >= E_c, the expressive power of U_c. As such, E_c is a subset of E_0. If a universe U_i can simulate another universe U_j, then it follows that whatever U_j can simulate, U_i can too. Thus, if U_i can simulate U_j, then E_j is a subset of E_i.

To conceive of a universe U_i, we merely need conceive of L_i. If we can conceive L_i and concisely define it, then it follows that U_0 can simulate U_i. I argue that this is so because if we could conceive and concisely define L_i, then U_i could be simulated as a computer program. Any simulation that a subset/member of a universe can perform is a simulation that the universe itself can perform.
 
An important argument derives from the above:

It is impossible for the human mind (or any other agent) in a universe U_i to conceive another Universe U_j with a greater degree of functionality than U_i.

The above is true, because if we could conceive it, we could define its laws, and if we could define its laws, we could simulate it with a computer program.  
The below is also self-evident:

No universe U_i can simulate another universe U_j with a greater degree of functionality than its own.

The above is true because the degree of functionality is defined as the maximum complexity that can arise from a universe. If U_i simulates U_j, and d_i < d_j, then we have a contradiction, as U_i has simulated more complexity than the maximum complexity U_i can lead to.

Concluding from the above, the below is self-evident:

The degree of functionality of a universe is that universe itself.

The maximum complexity a universe can lead to is itself. Simulating a universe involves simulating all that that universe simulates. Let the maximum complexity that U_i can lead to be denoted C_i. Simulating U_i involves simulating C_i, and as such the complexity of simulating U_i >= the complexity of simulating C_i. Therefore, the greatest complexity U_i can lead to is a simulation of U_i.
 
However, can a universe simulate itself? I accepted that as true on principle, but is it really? For a universe to simulate itself, it must also simulate the simulation, which would in turn simulate the simulation, beginning an infinite regress. If a universe has finite information content, then can it simulate itself at all?
As such, the universe itself serves as a strict upper boundary for the complexity that a universe can lead to.
 
If a universe attempted to simulate itself, how many simulations would there be? Would the cardinality of simulations be countably infinite? Or uncountably infinite? The answer to that question determines how plausible a universe simulating itself would be.
 
What about U_i being able to simulate U_j, and U_j in turn being able to simulate U_i? This implies U_i can simulate U_j simulating U_i simulating U_j … beginning another infinite regress. How many simulations are needed? Do the universes involved need to be able to hold aleph_k different simulations? Do the laws constructing those universes permit that?
 

Criticism of the Simple world hypothesis

While sounding nice in theory, the simple world hypothesis—in both of its forms—offers no insight into the origin of the universe. One may ask: “Why simplicity?” “What would cause the simple world hypothesis to be true?” “What is necessary for all universes to behave as the simple world hypothesis predicts?” Indeed, might the simple world hypothesis not violate Occam’s razor by positing that all universes conform to it?

I suggest that the simple world hypothesis does not describe the origin of the universe—that was never its aim to begin with. It merely seeks to describe how universes are, and not how they came to be.

 

Trivia

I conceived the simple world hypothesis when I was thinking up a blog post titled “Why Occam’s Razor?”. I had intended to make an argument along the lines of: “Even if the simple world hypothesis is false, Occam’s razor is still valuable because…”. Going along that train of thought, I realised that I would first have to define the simple world hypothesis.

 

Conclusion

I do not endorse this hypothesis; I believe in something called “informed opinion”, and due to my abject lack of knowledge of physics I do not consider myself to have an informed opinion on the universe. Indeed, the simple world hypothesis was conceived to aid an argument I was constructing in support of Occam’s razor. I admit that if I were to design the base laws used in addition to the laws constructing each possible universe, then the simple world hypothesis would be true. However, I am not the one—if there is anyone—who designed those base laws, and as such I do not support the simple world hypothesis. Indeed, there is not even enough evidence to locate the simple world hypothesis in the space of possible hypotheses, and as such I do not—I cannot, as long as I profess rationality—accept it—yet.

Indeed, as Aristotle said:

It is the mark of an educated mind to be able to entertain a thought without fully accepting it.

 
I shall describe the “multiverse axiom” and “base laws” in more detail in a subsequent blog post.

[Link] Cognitive Science/Psychology As a Neglected Approach to AI Safety

4 Kaj_Sotala 05 June 2017 01:55PM

Open thread, June 5 - June 11, 2017

0 Elo 05 June 2017 04:23AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Birth of a Stereotype

0 DragonGod 05 June 2017 03:29AM

I imagine that many ‘enlightened’ people disbelieve in stereotypes, thinking them useless and beneath them; only the ignorant masses rely on stereotypical thinking and generalisations. Stereotypes are improbable, and making such generalisations from what is at best anecdotal evidence is faulty. I used to think like that—and I probably still do—nevertheless, someone raised an argument that stereotypes aren’t useless; they exist for a reason. There was a reason why each stereotype was born. As such, you would expect that a stereotype conveys at least some information and is better than nothing.


I think a few factors contribute to the formation of a stereotype. I will take a minute to explain what a stereotype is. A stereotype is a relation of the form X => Y. It maps a class of people/individuals/what have you to a property Y. For example: people who wear glasses are smart. Occasionally, some individuals may conceive the relation as Y <=> X, e.g. smart people wear glasses. I suspect this is due to reasons unrelated to the stereotype (e.g. inability to distinguish between ’=>’ and ’<=>’). I hope this is not common among the general population—the average human can’t be that irrational, right? I shall give a charitable interpretation of the masses, and discuss only the relation 'X => Y’. I will stick to two particularly conspicuous stereotypes and try to hypothesise how they originated.

For one, anecdotal evidence combined with the availability heuristic makes people overrate the occurrence of certain relations. I shall pick the “people who wear glasses are smart” example: some smart people use glasses. Due to the availability of smart people who use glasses, the relation 'glasses’ => 'smart’ gets reinforced. However, this hides an implicit assumption: that the average IQ/intelligence (perceived or otherwise)/'smartness’ of people who wear glasses is higher than that of the general population. Is this true? My common sense tells me that this is false; however, there is a correlation between height and intelligence: NCBI, Wikipedia, medicalxpress. I have not yet read any of them in sufficient detail—or any detail for that matter—as at the time of writing this, but I have bookmarked the links for future study. I cannot ascertain whether glasses actually correlate with intelligence, and the effort required for that is more than I’m willing to commit to an article I am writing out of boredom. I would appreciate it if the more medically knowledgeable readers fixed my ignorance.

That said, I shall offer a hypothesis for the scenario in which there is no correlation between glasses wearing and intelligence. I will try not to violate Occam’s razor, but there is no way I’m going to go through the rigours of Solomonoff induction—I’ll probably have to learn that in detail first—for this. Nor am I going to investigate the Kolmogorov complexity (something I more or less understand) either. I have reason to believe that events which are strange or different from the norm leave a stronger mark in our memory. Events that are emotionally charged are often associated with the emotion and are retained more in episodic memory (Psychology Today). On first encountering glasses users who were smart, the fact that they wore glasses might have left a deep impression on the people who encountered them, and may have been associated with their perceived intelligence. The brain is also an obsessive pattern-matching engine, drawing connections even when they are not there (Scientific American, Wikipedia, Psych Central), and so the relation 'wears glasses’ => 'smart’ may have started to take root. Furthermore, categorising smart people and glasses together is easy. Kahneman 2011, “Thinking, Fast and Slow”[1], brought forth the theory of 'cognitive ease’: people’s brains work along the line of least strain, or greatest cognitive ease. If there are two decisions/decision-making procedures, we tend to go with the one that is associated with more cognitive ease. I suggest that this (meta) heuristic of maximising cognitive ease would make us more readily associate two characteristics with each other even when there is no logical reason for such an association. This may have led to the association of 'glasses’ and 'intelligence’.

The above is the charitable hypothesis. I decline—at this juncture—to mention the less charitable one.

After the formation of the stereotype, it was reinforced in a feedback loop. The presence of the stereotype in the media reinforced its availability in our brains, and primed us to notice it when it does occur. Furthermore, confirmation (positive) bias may make us selectively notice the manifestations of the stereotype, and ignore the many cases where the stereotype was wrong. I do remember feeling disappointed when a kid I met in primary school was nowhere near as smart as I had expected. I wonder if that was when I started to dislike stereotypes. I wonder if there might be some groupthink involved in reinforcing the stereotype? I certainly suspect it (probability greater than 0.75, based solely on anecdotal evidence) in my society (Nigeria), but does such thinking abound in the Western World as well? I would appreciate feedback from my readers on this issue.
I suspect that the representativeness heuristic, combined with base rate neglect, causes people to overrate the proportion of a certain group of people who fit a certain characteristic. In the glasses example, people neglect the rate of people with glasses relative to the general population. Combine this with glasses being taken as representative of smart people, and we have them overrating the proportion of smart people who wear glasses.
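The base-rate point can be made concrete with a toy Bayes' rule calculation. All numbers below are hypothetical, chosen purely for illustration; none come from any actual study:

```python
# Hypothetical rates (purely illustrative).
p_smart = 0.10                 # base rate of "smart" in the population
p_glasses_given_smart = 0.40   # how often smart people wear glasses
p_glasses_given_not = 0.30     # how often everyone else wears glasses

# Total probability of wearing glasses.
p_glasses = (p_glasses_given_smart * p_smart
             + p_glasses_given_not * (1 - p_smart))

# Bayes' rule: P(smart | glasses).
p_smart_given_glasses = p_glasses_given_smart * p_smart / p_glasses

print(round(p_smart_given_glasses, 3))  # → 0.129
```

Even with smart people wearing glasses noticeably more often, the posterior only moves from 10% to about 13%, because most glasses wearers come from the much larger non-smart group. Neglecting that base rate is exactly what inflates 'glasses' => 'smart'.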

Borrowing again from Kahneman 2011, I posit that stereotypes form an easy-to-implement and convenient heuristic (cognitive ease), and as such are applied widely. Rather than dealing with each s in X as a separate individual that needs to be considered and handled on its own, we pull up any stereotypes we know about X, and by deductive reasoning apply them to s. We can now deal with s on a better footing than when we started. Such reasoning is not in fact wrong, and would even be advisable—if the stereotypes were actually accurate. If, for example, 80% of glasses wearers had IQs north of 100, then relying on the stereotype would be better than going in with zero information. Alas, the stereotypes seem to be unfounded. The perceived utility of stereotypes may contribute to the feedback loop, and make them much harder to kill.

The non-charitable hypothesis.

One stereotype frustrated me not just due to its inanity, but due to its sheer intractability; try as I might, I could not decipher its origin: “blondes are dumb”. I eventually realised/was made to realise a plausible origin of that stereotype. I shall describe the conversation below for your benefit, referring to myself as 'DG’ and my conversation partner as 'Alpha’.

DG: “It was a rant; I was venting my frustration about Nigeria. People do not read rants to gain an unbiased opinion. I even put 'warning rant ahead in the post’.”
Alpha: “I know, but you are contributing to the stereotype they have about Nigeria. Anyone that reads this would now start thinking that all Nigerians are like this.”
DG: “Stereotypes are so stupid. I mean they’re completely baseless and unfounded. Look at the 'blondes are dumb’ stereotype.”
Alpha: “Stereotypes exist for a reason!”
DG: “What could possibly be the cause for the 'blondes are dumb’ stereotype? Like how? How could it possibly exist?”
Alpha: “….”
DG: “….”
Alpha: “Well you know blondes are generally considered more attractive…”
DG: “That’s it! Blondes are aesthetically more pleasant, and as such are more likely to work in jobs that are less intellectual, and considered the domain of bimbos.”
Alpha: “Blondes would be more likely to be hosts, waitresses…”
DG: “And strippers and porn stars. People are jealous; others can’t have everything, so they tend to bring them down and pin other qualities on them to make up for any advantages they have: dumb blondes compensate for their beauty with lowered intelligence.”

I am not Yudkowsky, and so I will not proffer an evolutionary psychology hypothesis for why blondes are considered more attractive among Caucasians (plus I don’t know shit about evolutionary biology, much less evolutionary psychology, and so I’d be talking out of my ass—which would be fine if I were trying to guess the teacher’s password—however, seeing as that is not my aim here, I’ll refrain). The fact is that blondes are considered more attractive among Caucasians (relative to other Caucasians and—I suspect—to other races as well). People do not like that the Universe isn’t fair; they prefer to believe in a just world. I mean, wouldn’t it be more convenient if the world were just and fair, where only your hard work mattered and no one was unfairly gifted in the genetic lottery? I suspect that the just world hypothesis is why there is a certain class of people (significant, and maybe the majority, but I lack the relevant statistics) who try to downplay IQ and argue that it is irrelevant and not a true measure of intelligence. Einstein’s alleged 'quote’ about how "everyone is a genius, but if you judge a fish by its ability to climb a tree, it will live its whole life believing it’s stupid" is often quoted among that class as a holy maxim, and used to defend the convenient status quo of equality in the genetic lottery. I find the entire quote and mindset bogus, but I’ve digressed enough already, and shall now steer this article back on track. “Blondes can’t both be beautiful and have the same smarts as everyone else”: this jealousy helped reinforce the stereotype of the blonde bimbo. The fact that the proportion of blondes among bimbo professions may in fact have been higher would have helped legitimise the stereotype. People see what they want to see, and as such they would ignore the proportion of blondes among the intellectual elite as well. 
I have learned not to underestimate the human mind’s capability of cognitive dissonance—it was 17 years before I finally chose Science myself after all.

Unlike the 'glasses’ => 'smart’ stereotype, I suspect the 'blonde’ => 'stupid’ stereotype is motivated almost entirely by the jealousy of the masses. Even as education may eradicate the former stereotype, I suspect the latter would last for a while longer due to the pervasiveness of the “just world hypothesis” and the desire for fairness of the masses.

References

[1] Kahneman Daniel “Thinking Fast and Slow” 2011 Ch 5. pp 65-76. Random House of Canada Limited.

A Comment on Expected Utility Theory

0 DragonGod 05 June 2017 03:26AM

A Comment on Expected Utility Theory

Expected utility theory/expected value decision making—as the case may be—is quite interesting, I guess. In times past (a few months at longest, as I am a neophyte to rationality) I habitually trusted the answers expected utility theory provided without bothering to test them, or to ponder for myself why they would be advisable. I first learned the concept of expected value in statistics, and when we studied it in Operations Research—as part of an introduction to decision theory—it just seemed to make sense. However, after the recent experiment I ran (the decision problem between a guaranteed \$250,000 and a 10\% chance to get \$10,000,000) I began to doubt the appropriateness of expected utility theory. Over 85\% of subjects chose the first option, despite the latter having an expected value 4 times higher than the former. I myself realised that the only scenario in which I would choose the \$10,000,000 gamble was one in which \$250,000 was an amount I could pass up. I am fully cognisant of expected utility theory, and my decision to pick the first option did not seem to be prey to any bias, so a suspicion about the efficacy of expected utility theory began to develop in my mind. I took my experiment to http://www.reddit.com/r/lesswrong, a community who I expected would be more rational decision makers—they only confirmed the decision making of the first group. I realised then that something was wrong; my map didn’t reflect the territory. If expected utility theory were truly so sound, then a community of rationalists should have adhered to its dictates. I filed this information at the back of my mind. My brain began working on it, and today, while I was reading “Thinking, Fast and Slow” by Daniel Kahneman, my brain delivered an answer to me.  

I do not consider myself a slave to rationality; it is naught but a tool for me to achieve my goals. A tool to help me “win”, and to do so consistently. If any ritual of cognition causes me to lose, then I abandon it. There is no sentimentality on the road to victory, and above all I endeavour to be efficient—ruthlessly so if needed. As such, I am willing to abandon any theory of decision making, when I determine it would cause me to lose. Nevertheless, as a rationalist I had to wonder; if expected utility theory was so feeble a stratagem, why had it stuck around for so long? I decided to explore the theory from its roots; to derive it for myself so to speak; to figure out where the discrepancy had come from.

Expected Utility Theory, aims to maximise the Expected Utility of a decision which is naught but the average utility of that decision—the average payoff.  

Average payoff is given by the formula: 
\[E_{j} = \sum_{i} Pr_i \cdot G_{ij} \tag{1}\]
Where 
\(E_j\) = Expected value of Decision \(j\) 
\(Pr_i\) = Probability of Scenario \(i\) 
\(G_{ij}\) = Payoff of Decision \(j\) under Scenario \(i\).

What caught my interest when I decided to investigate expected utility theory from its roots, was the use of probability in the formula.

Now the (frequentist) definition of probability is: 
\[Pr(i) = \lim_{n \to \infty} \frac{f_i}{n} \tag{2}\]       
Where \(f_i\) is the frequency of outcome \(i\) in \(n\) trials.         
If I keep this definition of probability in mind, I find something interesting: Expected Utility Theory maximises my payoff in the long run. For decision problems which are iterated—in which I play the game several times—Expected Utility Theory is my best bet. The closer the number of iterations is to infinity, the closer the ratio above is to the probability.  

Substituting \((2)\) into \((1)\) we get:   
\[E_j = \sum_{i} \frac{f_i}{n} \cdot G_{ij} \tag{3}\]

What Expected Utility theory tells us is to choose the highest \(E_j\); this is only guaranteed to be the optimum decision in a scenario where \((1)\) = \((3)\), i.e.

  1. The decision problem has a (sufficiently) large number of iterations. 
  2. The decision problem involves a (sufficiently) large number of scenarios. 
What exactly constitutes “large” is left to the reader’s discretion. However, \(2\) is definitely not large. To rely on expected utility theory in a non-iterated game with only two scenarios can easily lead to fatuous decision making. In such problems, like the one I posited in my “experiment”, a sensible decision-making procedure is the maximum likelihood method: pick the decision that gives the highest payoff in the most likely scenario. However, even that heuristic may not be advisable; what if scenario \(i\) has a probability of \(0.5 - \epsilon\) and the second scenario \(j\) has a probability of \(0.5 + \epsilon\)? Merely relying on the maximum likelihood heuristic is unwise. \(\epsilon\) here stands for a small number—the definition of small is left to the user’s discretion.

After much deliberation, I reached a conclusion: in any non-iterated game in which a single scenario has an overwhelmingly high probability \(Pr = 1 - \epsilon\), the maximum likelihood approach is the rational decision-making approach. Personally, I believe \(\epsilon\) should be \(\ge 0.005\), and I set mine at around \(0.1\).
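The two rules can be contrasted on the post's own decision problem. A minimal sketch (the encoding of a decision as a list of (probability, payoff) pairs is my own):

```python
# Each decision is a list of (scenario probability, payoff) pairs.
sure_thing = [(1.0, 250_000)]
gamble = [(0.9, 0), (0.1, 10_000_000)]

def expected_value(decision):
    """Formula (1): sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in decision)

def max_likelihood_payoff(decision):
    """Maximum likelihood method: payoff in the single most likely scenario."""
    return max(decision, key=lambda scenario: scenario[0])[1]

# The expected-value rule prefers the gamble ($1,000,000 vs $250,000)...
assert expected_value(gamble) > expected_value(sure_thing)
# ...while the maximum-likelihood rule prefers the sure thing ($250,000 vs $0).
assert max_likelihood_payoff(sure_thing) > max_likelihood_payoff(gamble)
```

The two rules disagree on this one-shot problem, which is exactly the tension the post describes.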

I may in future revisit this writeup, and add a mathematical argument for the application of the Maximum likelihood approach over the Expected Utility approach but for now, I shall posit a simpler argument:

The Expected Utility approach is sensible only in that it maximises winnings in the long run—by its very design, it is intended for games that are iterated and/or in which there is a large number of scenarios. In games where this is not true—with few scenarios and a single instance—there is sufficient variation in the event that occurs that there is a significant deviation of the actual payoff from the expected payoff. To ignore this deviation is oversimplification, and—I’ll argue—irrational. In the experiment I listed above, the actual payoff for the second decision was \$0 or \$10,000,000; the former scenario having a likelihood of 90\% and the latter a 10\%. The expected value is \$1,000,000 but the standard deviation of the payoffs from the expected value—in this case \$3,000,000—is 300\% the mean. In such cases, I conclude that the expected utility approach is simply unreliable—and expectably so—it was never designed for such problems in the first place (pun intended).
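The figures above are easy to verify. A short sketch of my own computing the mean and standard deviation of the gamble's payoffs:

```python
from math import sqrt

# The gamble: 90% chance of $0, 10% chance of $10,000,000.
outcomes = [(0.9, 0), (0.1, 10_000_000)]

mean = sum(p * x for p, x in outcomes)
variance = sum(p * (x - mean) ** 2 for p, x in outcomes)
std_dev = sqrt(variance)

# Rounding guards against floating-point noise.
assert round(mean) == 1_000_000
assert round(std_dev) == 3_000_000  # 300% of the mean, as stated in the text
```

A standard deviation three times the mean is a concise way of saying the expected value tells you almost nothing about what a single play will actually pay.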

Rationality as A Value Decider

1 DragonGod 05 June 2017 03:21AM

Rationality As a Value Decider

A Different Concept of Instrumental Rationality

Eliezer Yudkowsky defines instrumental rationality as “systematically achieving your values” and goes on to say: “Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this ‘winning.’” [1]
 
I agree with Yudkowsky’s concept of rationality as a method for systematised winning. It is why I decided to pursue rationality—that I may win. However, I personally disagree with the notion of “systematically achieving your values”, simply because I think it is too vague. What are my values? Happiness and personal satisfaction? I find that you can maximise these by joining a religious organisation; in fact, I think I was happiest in the time before I discovered the Way. But that is beside the point: maximising your values isn’t specific enough for my taste; it’s too vague for me.
 
“Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have about Bob, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.” [2]
 
This implies that instrumental rationality is specific; from the above statement, I infer:
“For any decision problem to any rational agent with a specified psyche, there is only one correct choice to make.”
 
However, if we only seek to systematically achieve our values, I believe that instrumental rationality fails to be specific—it is possible that there’s more than one solution to a problem in which we merely seek to maximise our values. I cherish the specificity of rationality; there is a certain comfort in knowing that there is a single correct solution to any problem, a right decision to make for any game—one merely need find it. As such, I sought a definition of rationality that I personally agree with; one that satisfies my criteria for specificity; one that satisfies my criteria for winning. The answer I arrived at was: “Rationality is systematically achieving your goals.”
 
I love the above definition; it is specific—gone is the vagueness and uncertainty of achieving values. It is simple—gone is the worry over whether value X should be an instrumental value or a terminal value. Above all, it is useful—I know whether or not I have achieved my goals, and I can motivate myself more to achieve them. Rather than thinking about vague values I think about my life in terms of goals:
“I have goal X how do I achieve it?”
If necessary, I can specify sub goals and sub goals for those sub goals. I find that thinking about your life in terms of goals to be achieved is a more conducive model for problem solving, a more efficient model—a useful model. I am many things, and above them all I am a utilitist—the worth of any entity is determined by its utility to me. I find the model of rationality as a goal enabler a more useful model.
 
Goals and values are not always aligned. For example, consider the problem below:

Jane is the captain of a boat carrying 100 people. The ship is about to capsize, and will unless ten people are sacrificed. Jane’s goal is to save as many people as possible. Jane’s values hold human life sacred. Sacrificing ten people has a 100% chance of saving the remaining 90, while sacrificing no one and going with plan delta has a 10% chance of saving all 100, and a 90% chance that everyone dies.
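Jane's dilemma can be sketched as a small decision problem. The plan names and dictionary layout below are my own framing, not from the text:

```python
# Each plan is a list of (probability, people_saved) outcomes.
plans = {
    "sacrifice_ten": [(1.0, 90)],             # certainly save 90 of the 100
    "plan_delta":    [(0.1, 100), (0.9, 0)],  # 10%: save everyone; 90%: all die
}

# Expected number of survivors under each plan.
expected_survivors = {
    name: sum(p * saved for p, saved in outcomes)
    for name, outcomes in plans.items()
}

# The goal "save as many people as possible" favours sacrificing ten;
# the value "human life is sacred" may still push Jane toward plan delta.
best_for_goal = max(expected_survivors, key=expected_survivors.get)
print(expected_survivors)  # {'sacrifice_ten': 90.0, 'plan_delta': 10.0}
print(best_for_goal)       # sacrifice_ten
```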

 

The sanctity of human life is a terminal value for Jane. Jane, when seeking to actualise her values, may well choose to go with plan delta, which has a 90% chance of preventing her from achieving her goal.
 
Values may be misaligned with goals; values may inhibit the achievement of our goals. Winning isn’t achieving your values; winning is achieving your goals.

Goals

I feel it is apt to define goals at this juncture, lest the definition be perverted and only goals aligned with values be considered “true/good goals”.
 
Goals are any objectives a self-aware agent consciously assigns itself to accomplish.
 
There are no true goals, no false goals, no good goals, no bad goals, no worthy goals, no worthless goals; there are just goals.
 
I do not consider goals something that “exist to affirm/achieve values”—you may assign yourself goals that affirm your values, or goals that run contrary to them—the difference is irrelevant; we work to achieve the goals you have specified.

The Psyche

The Psyche is an objective map that describes a self-aware agent that functions as a decision maker—rational or not. The sum total of an individual’s beliefs (all knowledge is counted as belief), values and goals forms their psyche. The psyche is unique to each individual. The psyche is not a subjective evaluation of an individual by themselves, but an objective evaluation of the individual as they would appear to an omniscient observer. An individual’s psyche includes the totality of their map. The psyche is, among other things, a map that describes a map, so to speak.
 
When a decision problem is considered, the optimum solution cannot be determined without considering the psyche of the individual: the values they hold, the goals they seek to achieve, and their mental map of the world.
 
Eliezer Yudkowsky seems to believe that we have an extremely limited ability to alter our psyche. He posits that we can’t choose to believe the sky is green at will. I never really bought this, especially given personal anecdotal evidence. I’ll come back to altering beliefs later.
 
Yudkowsky describes the human psyche as: “a lens that sees its own flaws”. [3] I personally would extend this definition; we are not merely “a lens that sees its own flaws”, we are also “a lens that corrects itself”—the self-aware AI that can alter its own code. The psyche can be altered at will—or so I argue.
 
I shall start with values. Values are neither permanent nor immutable. I’ve had a slew of values over the years; while Christian, I valued faith, now I adhere to Thomas Huxley’s maxim:

Scepticism is the highest of duties; blind faith the one unpardonable sin.

 

Another one: prior to my enlightenment I held emotional reasoning in high esteem, and could be persuaded by emotional arguments, after my enlightenment I upheld rational reasoning. Okay, that isn’t entirely true; my answer to the boat problem had always been to sacrifice the ten people, so that doesn’t exactly work, but I was more emotional then, and could be swayed by emotional arguments. Before I discovered the Way earlier this year (when I was fumbling around in the dark searching for rationality) I viewed all emotion as irrational, and my values held logic and reason above all. Back then, I was a true apath, and completely unfeeling. I later read arguments for the utility of emotions, and readjusted my values accordingly. I have readjusted my values several times along the journey of life; just recently, I repressed my values relating to pleasure from feeding—to aid my current routine of intermittent fasting. I similarly repressed my values of sexual arousal/pleasure—I felt it will make me more competent. Values can be altered, and I suspect many of us have done it at least once in our lives—we are the lens that corrects itself.

Getting back to belief (whether you can choose to believe the sky is green at will), I argue that you can; it is just a little more complicated than altering your values. Changing your beliefs—changing your actual anticipation controllers, truly redrawing the map—would require certain alterations to your psyche in order for it to retain a semblance of consistency. In order to be able to believe the sky is green, you would have to:

  • Repress your values that make you desire true beliefs.
  • Repress your values that make you give priority to empirical evidence.
  • Repress your values that make you sceptical.  
  • Create (or grow if you already have one) a new value that supports blind faith.

Optional:

  • Repress your values that support curiosity. 
  • Create (or grow if you already have one) a new value that supports ignorance.

By the time you’ve done the ‘edits’ listed above, you would be able to freely believe that the sky is green, or that snow is black, or that the earth rests on the back of a giant turtle, or that a teapot floats in the asteroid belt. I’m warning you though: by the time you’ve successfully accomplished the edits above, your psyche will be completely different from now, and you will be—I argue—a different person. If any of you were worried that the happiness of stupidity was forever closed to you, then fear not; it is open to you again—if you truly desire it. Be forewarned: the “you” that would embrace it would be different from the “you” now, and not one I’m sure I’d want to associate with. The psyche is alterable; we are the masters of our own minds—the lens that corrects itself.
 
I do not posit that we can alter all of our psyche; I suspect there are aspects of our cognitive machinery that are unalterable (“hardcoded”, so to speak). However, my neuroscience is non-existent, so I shall leave this issue to those better equipped to comment on it.

Values as Tools

In my conception of instrumental rationality, values are no longer put on a pedestal; they are no longer sacred; there are no terminal values anymore—only instrumental ones. Values aren’t the masters anymore; they’re slaves—they’re tools.
 
The notion of values as tools may seem disturbing to some, but I find it to be quite a useful model, and as such I shall keep it.
 
Take the ship problem Jane was presented with above: had Jane deleted her value which held human life as sacred, she would have been able to make the decision with the highest probability of achieving her goal. She could even add a value that suppresses empathy, to assist her in similar situations—though some might feel that is overkill. I once asked a question on a particular subreddit:
“Is altruism rational?”
My reply was a quick and dismissive:
“Rationality doesn’t tell you what values to have, it only tells you how to achieve them.”
 
The answer was the standard textbook reply that anyone who had read the sequences or RAZ (Rationality: From AI to Zombies) would produce; I had read neither at the time. Nonetheless, I was reading HPMOR (Harry Potter and the Methods of Rationality), and that did sound like something Harry would say. After downloading my own copy of RAZ, I found that the answer was indeed correct—as long as I accepted Yudkowsky’s conception of instrumental rationality. Now that I reject it, and consider rationality as a tool to enable goals, I have a more apt response:

What are your goals?

 

If your goals are to have a net positive effect on the world (do good so to speak) then altruism may be a rational value to have. If your goals are far more selfish, then altruism may only serve as a hindrance.

The utility of “Values as Tools” isn’t just that some values may harm your goals; nay, it does much more. The payoff of a decision is determined by two things:

  1. How much closer it brings you to the realisation of your goals. 
  2. How much it aligns with your values.

 
Choosing values that are doubly correlated with your current goals (you actualise your values when you make goal conducive decisions, and you run opposite to your values when you make goal deleterious decisions) exaggerates the positive payoff of goal conducive decisions, and the negative payoff of goal deleterious decisions. This aggrandising of the payoffs of decisions serves as a strong motivator towards making goal conducive decisions: large rewards, large punishments—a perfect propulsion system, so to speak.
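The amplification claim above can be sketched as a toy model. The essay gives no formal model, so the additive payoff and the numbers here are invented purely for illustration:

```python
# Toy model: the felt payoff of a decision is a goal component plus a value
# component. With values doubly correlated with goals, both components share
# a sign, exaggerating rewards and punishments alike.
def felt_payoff(goal_progress, value_alignment):
    return goal_progress + value_alignment

# Values uncorrelated with goals: a goal conducive decision earns only its goal payoff.
print(felt_payoff(goal_progress=+1, value_alignment=0))   # 1
# Doubly correlated values: the same decision also actualises your values...
print(felt_payoff(goal_progress=+1, value_alignment=+1))  # 2
# ...and a goal deleterious decision also runs against them.
print(felt_payoff(goal_progress=-1, value_alignment=-1))  # -2
```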

The utility of the “Values as Tools” approach is that it serves as a strong motivator towards goal conducive decision making.

Conclusion

It has been brought to my attention that a life such as the one I describe may be “an unsatisfying life” and “a life not worth living”. I may reply that I do not seek to maximise happiness, but that may be dodging the issue; I first conceived rationality as a value decider when thinking about how I would design an AI—it goes without saying that humans are not computers.

I offer a suggestion: order your current values in a scale of preference. Note the value (or set thereof) utmost in your scale of preference. The value that, if you had to achieve only one value, you would choose. Pick a goal that is aligned with that value (or set thereof). That goal shall be called your “prime goal”. The moment you pick your prime goal, you fix it. From now on, you no longer change your goals to align with your values. You change your values to align with your goals. Your aim in life is to achieve your prime goal, and you pick values and subgoals that would help you achieve your prime goal.
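The procedure above can be sketched as data. The example values, the prime goal, and the alignment scores below are entirely invented for illustration; the post names none of them:

```python
# Step 1: order your values by preference (most preferred first).
values_by_preference = ["curiosity", "honesty", "comfort"]

# Step 2: pick a goal aligned with the top value; fix it as the prime goal.
prime_goal = "understand how the world works"  # hypothetical, aligned with "curiosity"

# Step 3: from now on, values are kept or dropped according to how well they
# serve the prime goal, not the other way around. Scores are made up.
alignment_with_prime_goal = {"curiosity": 0.9, "honesty": 0.6, "comfort": -0.3}
kept_values = [v for v in values_by_preference if alignment_with_prime_goal[v] > 0]

print(kept_values)  # "comfort" is dropped: it conflicts with the prime goal
```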

References:

[1] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 7, 2015, MIRI, California.
[2] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 203, 2015, MIRI, California.
[3] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 40, 2015, MIRI, California.

Rationalist Seder: Dayenu, Lo Dayenu

0 Raemon 04 June 2017 08:55PM

There's one more piece of the NYC Rationalist Seder Haggadah that I wanted to pull out, to refer to in isolation. I think it is quite relevant to some current concerns in the evolving Rationality Community, and it is interesting in particular because of how it's evolved over the past 6 years.

"Dayenu" is a traditional Jewish song, roughly a thousand years old. It describes a number of gifts that God gave the Jewish people. For each gift/verse, lyrics culminate with "Dayenu", or "it would have been enough." 

At the first rationalist Seder, Zvi made two, ahem, rather significant changes to the song. 

The first dealt with the fact that, well, we're basically a bunch of atheists, and even if we weren't, God slaying a bunch of firstborn children just isn't the sort of thing we're super in favor of these days.

The second change dealt with the fact that... obviously each individual miracle *wouldn't* have been enough to free the Jewish people. Freeing them from Egypt but not parting the Red Sea to let them escape when Pharaoh had second thoughts would very much *not* have been sufficient.

And beyond that, Less Wrong culture is emphatically based around the status quo not being satisfactory. To constantly aspire to something better.

Zvi's new version of the song told the story of human history, and it did so from the framing of "Lo Dayenu" - not enough. If we had discovered fire, but not developed agriculture, our journey would not have been finished.

But, in the spirit of cultural pendulums that swing back and forth to overcompensate for previous failures, a few years later Daniel Speyer took a second pass at revising the song:

Traditionally, we sing “Dayenu”: it would have been enough.

Our sages asked: what do we mean by this?  In some of the traditional pairings, one step without the next would have left us all dead!  How can that be enough?  And it was answered: celebrate each step toward freedom as if it were enough, then start out on the next step. If we reject each step because it is not the whole liberation, we will never achieve the whole liberation.

And yet, if we celebrate our past victories and become complacent, so too will we never achieve the whole liberation.  And so we have come to sing “Lo Dayenu”: it would not have been enough.

And it has almost been said, “Keep two truths in your pocket, and take them out according to the need of the moment.  Let one be, ‘we have achieved great things’ and the other be ‘we have a terribly long way yet to go’.”

Determining which moment needs which truth is left as an exercise for the reader.

Lately I've been thinking about the tension between trying to hold the community to a higher standard and not making people feel judged, or not good enough. And relatedly, how to set standards for a given space that are communicated and enforced fairly, but that acknowledge that different people are at different points on their own journey. Holding both truths in your head at the same time feels pertinent.

I'll delve more into that in a future blogpost. Meanwhile, here's the current text of the song as sung by the NYC community, which alternates between the two concepts.

[Notes: A) This is the sort of song designed to be sung by people who've had 3 glasses of wine. B) "Dayenu" and "Lo Dayenu" end up getting stressed fairly differently to make them scan. The former is "Daaa-yeee-nuu", the latter is "Looo Daaa yenu."]


Dayenu / Lo Dayenu

Had we crawled forth from the ocean,
but not learned to speak with language,
but not learned to speak with language, Lo Dayenu!

Lo dayenu. Lo dayenu. Lo dayenu. Dayenu, dayenu,  dayenu
  Lo dayenu. Lo dayenu. Lo dayenu. Dayenu,  Da-ye-nu

Had we learned to speak with language,
but not mastered wheat and olives,
but not mastered wheat and olives, Dayenu!

Had we mastered wheat and olives,
but not raised ourselves stone cities,
but not raised ourselves stone cities, Lo Dayenu!

Had we raised ourselves stone cities,
but not written tomes of wisdom,
but not written tomes of wisdom, Dayenu!

Had we written tomes of wisdom,
but not severed law from vengeance,
but not severed law from vengeance, Lo Dayenu!

Lo dayenu. Lo dayenu. Lo dayenu. Dayenu, dayenu,  dayenu
Lo dayenu. Lo dayenu. Lo dayenu. Dayenu,  Da-ye-nu

Had we severed law from vengeance,
but not learned to bake and slice bread,
but not learned to bake and slice bread, Dayenu!

Had we learned to bake and slice bread,
but not mapped out all Earth's surface,
but not mapped out all Earth's surface, Lo Dayenu!

Had we mapped out all Earth's surface,
but not crafted printing presses,
but not crafted printing presses, Dayenu!

Had we crafted printing presses,
but not named the rights of humans,
but not named the rights of humans, Lo Dayenu!

Lo dayenu. Lo dayenu. Lo dayenu. Dayenu, dayenu,  dayenu
Lo dayenu. Lo dayenu. Lo dayenu. Dayenu,  Da-ye-nu

Had we named the rights of humans,
but not thought of mass production,
but not thought of mass production, Dayenu!

Had we thought of mass production,
but not tamed and harnessed lightning,
but not tamed and harnessed lightning, Lo Dayenu!

Had we tamed and harnessed lightning,
but not taught it math and logic,
but not taught it math and logic, Dayenu!

Had we taught it math and logic,
but not banished death forever,
but not banished death forever, Lo Dayenu!

Lo dayenu. Lo dayenu. Lo dayenu. Dayenu, Dayenu, Dayenu
  Lo dayenu. Lo dayenu. Lo da-ye-nu. da-ye-nuuuuuuuuuuuuu!

[Link] The Personal Growth Cycle

0 gworley 04 June 2017 05:20PM

A new, better way to read the Sequences

12 SaidAchmiz 04 June 2017 05:10AM

A new way to read the Sequences:

https://www.readthesequences.com

It's also more mobile-friendly than a PDF/mobi/epub.

(The content is from the book — Rationality: From AI to Zombies. Books I through IV are up already; Books V and VI aren't up yet, but soon will be.)

Edit: Book V is now up.

Edit 2: Book VI is now up.

Edit 3: A zipped archive of the site (for offline viewing) is now available for download.

Rationalist Seder: A Story of War

5 Raemon 03 June 2017 08:17PM

For whatever reason, the rationality community is inordinately Jewish. Among other things, this resulted in 2011 in people in New York putting together "Rationalist Seder", a reframing of the story of Jewish Liberation From Egypt to reflect upon liberation in general. What does it mean to be free? What are the many things we might want to be free from?

As a non-Jewish person, it took me a few years to really wrap my head around the holiday, but it's evolved into one of the more poignant moments throughout my year.

The holiday was begun by Zvi Mowshowitz, and has been modified by various people in different geographic corners. In NYC, for the past few years it's been refined and tweaked by Daniel Speyer, who's aimed to invoke a strong sense of history that feels purposeful, connected to the original story, uniquely "rationalist", and feels very much like a holiday in the ancient, venerable and valuable sense.

The complete text of Dan's most recent iteration is here. It contains a mix of retellings of traditional Jewish stories, abridged essays from around the rationalsphere, and a few original pieces. It has a lot of nice gems that I'd like to be able to refer to more easily, but they make the most sense in context with each other. 

But one particular new story stands alone, and establishes the frame around which the Rationalist Seder is told:

(Image from the wikimedia foundation, credit to Thomas Forester)

A Story of War

Two tribes live next to each other. Each fears the other will attack, and so builds weapons to hold in readiness.  And then, seeing that the other has built weapons, builds more weapons.  Their clothes are threadbare.  Their children are hungry.  But still they spend their time making weapons, lest the other tribe build more.  They would prefer to live in peace, and make no weapons, but whichever tribe adopted that policy first would surely be killed.

Are these tribes free?  There is no pharaoh putting the whip to their backs, but still they do not live as they choose.

In the next valley, there are two more tribes.  They distrust each other as much as the first two, but they are ruled by a powerful empire. The empire forbids tribes to fight each other, and enforces that rule with unstoppable legions.  And so these two tribes have the peace and prosperity that the first two tribes wanted.

And in the valley beyond that, there are two more tribes, who only think they are ruled by a powerful empire. The empire has long since collapsed, but they still believe that if they fight, the empire will come and punish them.  And so they don’t fight. And by the most naive interpretation of counterfactuals, their belief is true.  And they too, live in peace and prosperity.

That is the power of a story.

They also pay taxes to the empire, by floating valuable timber down a river from which no one will collect it. That too, is the power of a story.

And in a farther, more distant valley are two tribes who really understand timeless decision theory. They should publish a paper or something.

Link to 2017 Haggadah

 

 

Futarchy, Xrisks, and near misses

6 Stuart_Armstrong 02 June 2017 08:02AM

Crossposted at the Intelligent Agent Forum.

All the clever ways of getting betting markets to take xrisks into account suffer from one big flaw: the rational xrisk bettor only makes money if the xrisk actually happens.

Now, the problem isn't because "when everyone is dead, no-one can collect bets". Robin Hanson has suggested some interesting ideas involving tickets for refuges (shelters from the disaster), and many xrisks will be either survivable (they are called xrisks, after all) or will take some time to reach extinction (such as a nuclear winter leading to a cascade of failures). Even if markets are likely to collapse after the event, they are not certain to collapse, and in theory we can also price in efforts to increase the resilience of markets and see how changes in that resilience changes the prices of refuge tickets.

The main problem, however, is just how irrational people are about xrisks, and how little discipline the market can bring to them. Anyone who strongly *over-estimates* the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates xrisk probability will not suffer until an xrisk actually happens. And even then, they will only suffer in a few specific cases (where refuge prices are actually honoured and those without them suffer worse fates). This is, in a way, the ultimate Talebian black swan: huge market crashes are far more common and understandable than xrisks.

Since that's the case, it might be better to set up a market in near misses (an idea I've heard before, but can't source right now). A large meteor that shoots between the Earth and the Moon; conventional wars involving nuclear powers; rates of nuclear or biotech accidents. All these are survivable, and repeated, so the market should be much better at converging, with the overoptimistic repeatedly chastised as well as the overpessimistic.

 

Book recommendation requests

8 ChristianKl 01 June 2017 10:33PM

Do you want to learn about a topic you know little about? Books are great, but if you don't know a topic it's hard to know which book to choose.

This thread exists to request recommendations about what to read on a given topic. 

June 2017 Media Thread

2 ArisKatsaris 01 June 2017 06:17AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] Strong men are socialist - how to use a study's own data to disprove it

6 Jacobian 31 May 2017 04:18AM

Regulatory lags for New Technology [2013 notes]

5 gwern 31 May 2017 01:27AM

I found some old notes from June 2013 on time delays in how fast one can expect Western political systems & legislators to respond to new technical developments.

In general, response is slow and on the order of political cycles; one implication I take away is that an AI takeoff could happen over half a decade or more without any meaningful political control, and would effectively be a ‘fast takeoff’, especially if it avoids any obvious mistakes.

1 Regulatory lag

“Regulatory delay” is the delay between the specific action required by regulators or legislatures to permit some new technology or method and the feasibility of the technology or method; “regulatory lag” is the converse, then, and is the gap between feasibility and reactive regulation of new technology. Computer software (and artificial intelligence in particular) is mostly unregulated, so it is subject to lag rather than delay.

Unfortunately almost all research seems to focus on modeling lags in the context of heavily regulated industries (especially natural monopolies like insurance or utilities), and few focus on compiling data on how long a lag can be expected between a new innovation or technology and its regulation. As one would expect, the few results point to lags on the order of years; for example, Ippolito 1979 (“The Effects of Price Regulation in the Automobile Insurance Industry”) finds that the period of price changes goes from 11 months in unregulated US states to 21 months in regulated states, suggesting the price-change framework itself causes a lag of almost a year.

Below, I cover some specific examples, attempting to estimate the lags myself:

(Nuclear weapons would be an interesting example but it’s hard to say what ‘lag’ would be inasmuch as they were born in government control and are subject to no meaningful global control; however, if the early proposals for a world government or unified nuclear weapon organization had gone through, they would also have represented a lag of at least 5 years.)

1.1 Hacking

Computer hacking existed for quite a while before relevant laws were passed and serious law enforcement began, which is typically considered to have begun with Operation Sundevil:

Prior to 1990, people who manipulated telecommunication systems, known as phreakers, were generally not prosecuted within the United States. The majority of phreakers used software to obtain calling card numbers and built simple tone devices in order to make free telephone calls. A small elite, and highly technical segment of phreakers were more interested in information about the inner workings of the telecommunication system than in making free phone calls. Phone companies complained of financial losses from phreaking activities.[5] The switch from analog to digital equipment began to expose more of the inner workings of telephone companies as hackers began to explore the inner workings, switches and trunks. Due to a lack of laws and expertise on the part of American law enforcement, few cases against hackers were prosecuted until Operation Sundevil.[4]

However, starting in 1989, the US Secret Service (USS), which had been given authority from Congress to deal with access device fraud as an extension of wire fraud investigations under Title 18 (§ 1029), began to investigate. Over the course of the 18 month long investigation, the USS gathered alleged evidence of rampant credit card and calling card fraud over state lines.[6]

This gives a time-delay of decades from the first phreaks (eg. Steve Jobs & Wozniak selling blue boxes in 1971 after blue boxing was discovered in the mid-60s) to the mid-1980s with the passage of the Computer Fraud and Abuse Act & Computer Security Act of 1987; prosecution was sporadic and light even after that, for example Julian Assange as Mendax in 1991 was raided and ultimately released with a fine in 1995. (Since then, at least 49 states have passed laws dealing with hacking with an international convention spreading post-2001.)

1.2 High frequency trading

HFT, while apparently only becoming possible in 1998, was marginal up until 2005 where it grew dramatically and became controversial with the 2010 flash crash and the 2012 Knight Capital fiasco. Early SEC rule-changes did little to address the issue; no US legislation has been passed, or appears viable given Wall Street lobbying. European Parliament legislation is pending, but highly controversial with heavy lobbying from London. Otherwise, legislation has been passed only in places that are irrelevant (eg. Germany). Given resistance in NYC & London, and the slow movement of the SEC, there will not be significant HFT regulation (for better or worse) for years to come, and it is likely irrelevant as the area matures and excess profits disappear - “How the Robots Lost: High-Frequency Trading’s Rise and Fall”.

“Insight: Chicago Fed warned on high-frequency trading”, Reuters:

More than two years ago, the Federal Reserve Bank of Chicago was pushing the Securities and Exchange Commission to get serious about the dangers of super-fast computer-driven trading. Only now is the SEC getting around to taking a closer look at some of those issues…Even as the SEC gears up for a meeting on Tuesday to discuss software glitches and how to tame rapid-fire trading, the eighth public forum it has had in two years on market structure issues, regulators in Canada, Australia and Germany are moving ahead with plans to introduce speed limits to safeguard markets from the machines…To be sure, it is not as if the SEC has simply stood idly by and allowed the machines to run amok. The agency did put in place some new safeguards such as circuit breakers on stocks, after the May 2010 flash crash. The circuit breakers are intended to prevent a market-wide crash by briefly halting trading in particular stocks displaying sharp price moves within a 5-minute window, giving the algorithms a chance to let go of trading patterns that may have turned into vicious cycles…And recently, the SEC fined the New York Stock Exchange’s operator, NYSE Euronext, $5 million for allegedly giving some customers “an improper head start” on proprietary trading information.

“High speed trading begets high speed regulation: SEC response to flash crash, rash”, Serritella 2010:

The SEC has been quick to react to the Flash Crash, determined to avoid such market disruptions in the future. On June 10, 2010, the SEC published new rules (Rules), which require trading centers to halt trading in certain individual securities and derivatives if pricing thresholds are reached. Until December 10, 2010, the Rules are in a pilot period so that they may be adjusted and expanded.

…The SEC published its new Rules slightly over a month after the Flash Crash and did so in an expedited manner in order “to prevent a recurrence” of the May 6, 2010 market disruptions. RULES, supra note 5, at 4. “The Commission believes that accelerating approval of these proposals is appropriate as it will enable the Exchanges nearly immediately to begin coordinating trading pauses across markets in the event of sudden changes in the value of the S&P 500 Index stocks.” Id. at 12. The Commission was “concerned that events such as those that occurred on May 6 can seriously undermine the integrity of the U.S. securities markets. Accordingly, it is working on a variety of fronts to assess the causes and contributing factors of the May 6 market disruption and to fashion policy responses that will help prevent a recurrence.” Id. at 4.

…The new Rules tighten the thresholds and, for the first time, centralize the control of circuit breakers. Circuit breakers simply refer to the ability of exchanges to temporarily halt trading in a security or derivative to avert sell-offs during periods of extreme downward pressure, or to close the markets before the end of the normal trading day. While, previously, the exchanges each controlled their own circuit breakers, they all generally adhered to the thresholds and formulas set forth in NYSE Rule 80B. Rule 80B has three different thresholds - 10%, 20% and 30% - each of which is tied to the DJIA and if met would result in a “time out” to market activity altogether on any exchange to execute a circuit breaker mechanism. Despite the extreme price movements of May 6, 2010, the circuit breakers’ lowest threshold was not met, as, at its worst point in the Flash Crash, the DJIA was down 9.16% - lower than the 10% drop required to trigger the circuit breakers under Rule 80B. The SEC, recognizing that using the DJIA as a benchmark for circuit breakers may obscure extreme price movements in individual securities or derivatives, has extended the utility of circuit breakers to target individual securities and derivatives. Under the new Rules, the exchanges are required to issue five minute trading halts in a security if the price of that security moves at least 10% in either direction from its price in the preceding five minute period. To avoid interfering with the openings and closings of markets, these requirements are only in force from 9:45 a.m. to 3:35 p.m. The Rules do not displace Rule 80B’s mandates, rather they supplement their preexisting coverage with the ability to target individual securities whose volatility may not have enough of an effect on the DJIA to otherwise trigger a circuit breaker.
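
The per-security rule described in this excerpt is mechanical enough to express directly. The following Python sketch is only an illustration of the rule as quoted - not any exchange's implementation - and all names in it are invented; it also shows why the May 6 DJIA drop would not have tripped the old 10% market-wide breaker:

```python
from datetime import time

# Illustrative sketch of the per-security circuit-breaker rule quoted above:
# halt a stock for five minutes if its price moves at least 10% in either
# direction from its price five minutes earlier, but only between
# 9:45 a.m. and 3:35 p.m. Function and constant names are invented.

HALT_THRESHOLD = 0.10
WINDOW_START = time(9, 45)
WINDOW_END = time(15, 35)

def should_halt(price_now: float, price_5min_ago: float, clock: time) -> bool:
    """Return True if the rule requires a five-minute trading pause."""
    if not (WINDOW_START <= clock <= WINDOW_END):
        return False  # rule not in force near the open or close
    move = abs(price_now - price_5min_ago) / price_5min_ago
    return move >= HALT_THRESHOLD

# A 9.16% drop - the Flash Crash low for the DJIA - would not trip it:
print(should_halt(90.84, 100.0, time(14, 45)))  # False
# An 11% drop would:
print(should_halt(89.0, 100.0, time(14, 45)))   # True
```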

…While still in their infancy, the new Rules suffer from crucial limitations which threaten their efficacy, not the least of which is that they only apply to stocks in the S&P 500 and Russell 1000 indexes as well as select derivatives…For example, since the SEC’s new circuit breaker requirements do not apply to all securities and derivatives, it is possible that trading could be halted in a given security while sales in one of its derivatives continue unabated, thus frustrating the exchanges’ congressional mandate to promote market integrity and protect investors as well as fostering a disconnect between the prices of the derivative - an ETF, for example - and its underlying trading-halted security.

“U.S. Leads in High-Frequency Trading, Trails in Rules”, Bloomberg editors op-ed:

In contrast to this go-slow approach, Germany, Canada, Australia and the European Union are taking up some of the tools the U.S. should consider to keep computerized trading from running amok. To cite a few examples:

  • In Germany, legislation is pending to require high- frequency traders to register so regulators can better track their market moves.
  • Canada charges fees to firms that attempt to clog markets with buy and sell orders, as well as cancellations, a practice known as quote stuffing. High-frequency trading firms sometimes do this to overload the less-sophisticated trading systems of rivals and exploit minuscule and fleeting price discrepancies.
  • Australia will ask trading firms to conduct stress tests to gauge how they deal with market shocks.
  • The EU is reviewing a number of measures including one that would require a trading firm to honor a bid for half a second, a lifetime in a market where trades can be executed in microseconds.
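
The EU's proposed half-second freeze amounts to a simple admission rule: a cancellation is rejected if it arrives less than 500 ms after the order it targets. A minimal sketch, with all names invented for illustration:

```python
# Illustrative sketch of the proposed EU minimum resting time: an order
# may not be cancelled until it has been live for at least 500 ms.
# All names here are invented; this is not any venue's actual logic.

MIN_RESTING_MS = 500

def cancel_allowed(order_placed_ms: int, cancel_requested_ms: int) -> bool:
    """Return True if the cancel arrives after the mandatory rest period."""
    return cancel_requested_ms - order_placed_ms >= MIN_RESTING_MS

# An HFT strategy quoting and cancelling within milliseconds would be blocked:
print(cancel_allowed(0, 1))    # False: cancelled 1 ms after placement
print(cancel_allowed(0, 500))  # True: exactly the half-second minimum
```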

It wouldn’t hurt if U.S. regulators took a look at these options and considered a few others, as well.

“SEC Leads From Behind as High-Frequency Trading Shows Data Gap”, Businessweek:

The U.S. Securities and Exchange Commission, stung by criticism that it lacks the knowledge to analyze the computerized trading that has come to dominate American stock markets, is planning to catch up. Initiatives to increase the breadth of data received from exchanges and to record orders from origination to execution are at the center of the effort. Gregg Berman, who holds a doctorate in physics from Princeton University, will head the commission’s planned office of analytics and research…“It’s amazing it’s taken 30 years,” Weild, a former vice chairman of NASDAQ Stock Market, said in a phone interview. “Meanwhile, there’s been an arms race on Wall Street and the SEC is outclassed in its ability to reconstruct events and look for vulnerabilities.”…The audit trail won’t be in place for several years and the industry hasn’t figured out how much it will cost and who will pay for it. Midas [overnight analysis] will be fully rolled out by the end of 2012, Berman said. It won’t include information about the one-third of trading that occurs away from exchanges.

“As SEC Listens to HFT and Exchanges, Europe Drives Discussion”:

In the City of London and European trading centers, the wait is on for the publication this week of the regulatory proposals previewed last week by the European Parliament’s Economic and Monetary Affairs Committee, which sent shockwaves through the worldwide HFT community. The Committee proposed a mandated half-second freeze for all trading orders, not only in equities, but every market, including fixed income and other asset classes. While the requirement to keep all trades alive for at least 500 milliseconds went far beyond what was expected in terms of orders, markets analyst Rebecca Healey told MNI there is believed to be an even more game-changing proposal in the document itself, one that could threaten the use of so-called dark pools by investors.

“City of London opposes tighter regulation of high-frequency trading”, Financial News:

MEPs [Members of European Parliament] this week unanimously voted through rules that will severely limit the controversial share-trading practice of high frequency trading, as part of a financial sector reform bill…Members of the economic and monetary affairs committee voted through proposals under the revised Markets in Financial Derivatives legislation - known as Mifid 2…Mifid 2 also includes measures designed to limit speculation on commodity markets, which has been blamed for distorting food prices and harming the world’s poorest populations. And there are rules aimed at protecting investors from being sold inappropriate products. Last week the Bureau reported that MEPs were planning tough new curbs on HFT, despite stiff opposition from the City of London and the wider financial sector. Stock exchanges, which receive a significant portion of their income from trading fees and high-tech services for HFT, were said to be ‘vocal’ about their desire to avoid regulation. UK-based stock exchanges have argued that they already have circuit breakers so there is no need to officially mandate them…But it would be premature to announce HFT’s demise in Europe: the bill still has a way to go before it becomes law. The three-tiered structure of European lawmaking means that the legislation was previously passed by the European Commission and draft legislation will now go before the European Union’s finance ministers. The three versions will then be reconciled through a ‘trialogue’ process.

The move to put the brakes on HFT across Europe has met stiff opposition from the City of London and the UK government. ‘We must be careful not to introduce measures based on the assumption that high frequency trading is, per se, harmful to markets,’ warned the Financial Services Authority (FSA), responding to the draft legislation on behalf of the government. It rejected several of the proposed measures. The British Bankers’ Association described the requirement for some traders to become market makers as ‘particularly onerous’. The Treasury supports HFT, arguing it brings liquidity to markets and reduces costs. Exchanges and the HFT industry have campaigned against the measures in public and behind closed doors. Exchanges have been ‘very vocal’ about HFT, and a key priority has been to ‘preserve all HFT’, Kay Swinburne told the Bureau…Since January 2010, the LSE and its lobbyists, City law firm Freshfields, have met MEPs from the three main British parties at least 15 times to discuss Mifid and similar legislation, lobbying registers show. Freshfields’ public affairs director Christiaan Smits met with the Conservatives’ lead on Mifid, Dr Kay Swinburne, eight times in two years on behalf of the LSE. Other intense discussions have been going on behind the scenes. In total, exchanges including Nasdaq, Deutsche Borse, NYSE Euronext, Bats Europe and Chi-X, and public affairs firms hired by them, have met with British MEPs at least 49 times over the period to discuss Mifid 2. Fleishman Hillard earned up to €50,000 (£40,000) each last year from representing exchange Chi-X and trading platform Equiduct, as well as up to €150,000 representing investment company Citadel, which has a substantial HFT arm.
Specialist HFT companies have banded together to form a campaign group, the FIA European Principal Traders’ Association (Epta), which has issued position papers and lobbied politicians in the UK and EU…Epta has paid Brussels-based lobbyist Hume Brophy at least €100,000 last year to make its case in Brussels, EU lobbying registers show.

“Yet again, the UK government has sided with the robotraders on a Robin Hood Tax”, New Statesman:

Yet as the Bureau for Investigative Journalism revealed last week, of a 31-member panel tasked by the UK Government to assess Mifid II, 22 members were from the financial services, 16 linked to the HFT industry. A study by the Bureau last year revealed that over half the funding for the Conservative Party came from the financial sector, 27 per cent coming from hedge funds, financiers and private equity firms. This perhaps helps explain how the interests of a select group of traders get confused with the interests of the economy as a whole…Yet the UK Government has again chosen to stand apart in blocking a Europe-wide FTT, turning down billions in desperately needed revenue that could help save jobs, protect the poorest and avoid the worst in cuts to public services. Instead, advice of previous Party Treasurers Michael Spencer and Peter Cruddas was heeded, who infamously lobbied against the FTT. Both incidentally own multi-million pound financial firms which would be hit by such a tax.

“French Fin Min: Need regulation of high frequency trading”, FrenchTribune.com:

The Finance Minister of France, Christine Lagarde said on Thursday that there is a requirement for more regulation of high frequency trading with the majority of the effects seeming to be negative and resulting in artificial moves in the markets…However, she is having conflicts with UK’s Financial Services Authority regarding the regulation of high frequency trading firms, which make use of automated software and super-fast telecommunications networks which is capable of trading in milliseconds.

“German government to propose tighter regulation of high-frequency trading”, Washington Post:

Germany’s Finance Ministry said Tuesday a draft law will be considered by Chancellor Angela Merkel’s Cabinet on Wednesday. The bill would require traders to get special permission before they can deploy computers to carry out millions of trades a second to exploit split-penny price differences. Such trades would also have to be specially labeled and stock exchanges would need to ensure trading can quickly be suspended when an error occurs.

“Super funds want computer trading checks”

The $46 billion AustralianSuper said HFT could make a “positive contribution” to financial markets but some strategies were designed to exploit other participants and harmed market integrity. “HFT strategies that are manipulative in nature . . . are problematic and ultimately raise the cost of investing and unfairly redistribute profits,” said Innes McKeand, AustralianSuper head of equities…Associations such as Industry Super Network called for a crackdown on HFT before it became dominant in Australia. The Australian Council of Trade Unions also asked for a ban on HFT until regulators “completed a detailed assessment” of the role of such trades. Russell Investments head of implementation in Australia Adam van Ness said HFT volumes were increasing locally. However, there was no straight answer as to whether HFT provided perks, or hindered, investors and traders. “It’s an open debate of whether it’s good or bad,” said Ness. The Australian Securities and Investments Commission “is taking a closer look at this - the outcome of which will make [HFT] a bit more restrictive here”…“In particular, a kill-switch requirement might have limited the extent of the Knight Capital losses as it could have facilitated a speedier termination of faulty orders.”

“High frequency trading and its impact on market quality”, Brogaard 2010:

Congress and regulators have begun to take notice and vocalize concern with HFT. The Securities and Exchange Commission (SEC) issued a Concept Release regarding the topic on January 14, 2010 requesting feedback on how HFTs operate and what benefits and costs they bring with them (SEC, January 14, 2010). The Dodd Frank Wall Street Reform and Consumer Protection Act calls for an in depth study on HFT (Section 967(2)(D)). The Commodity Futures Trading Commission (CFTC) has created a technology advisory committee to address the development of high frequency trading. Talk of regulation on HFT has already begun. Given the lack of empirical foundation for such regulation, the framework for regulation is best summarized by Senator Ted Kaufman, “Whenever you have a lot of money, a lot of change, and no regulation, bad things happen” (Kardos and Patterson, January 18, 2010). There has been a proposal (House Resolution 1068) to impose a per-trade tax of .25%.

“The rise of computerized high frequency trading: use and controversy”, McGowan 2010:

The dramatic increase in HFT is most likely due to its profitability. Despite the economic recession, high-frequency trading has been considered by many to be the biggest “cash cow” on Wall Street and it is estimated that it generates approximately $15-$25 billion in revenue. [See Tyler Durden, “Goldman’s $4 Billion High Frequency Trading Wildcard”, ZEROHEDGE (Jul. 17, 2009, 2:16AM) (discussing estimates by the FIX Protocol, an organization that maintains a messaging standard for the real-time electronic exchange of securities transactions).]

Office space in such areas sometimes costs an astronomical amount, but firms are willing to pay for it. For instance, in Chicago, 6 square feet of space in the data center where the exchanges also house their computers can go for $2,000 or more a month. [See Moyer & Lambert, supra note 6 (stating that some trading firms even spend 100 times that much to house their servers).] Despite these high prices, the number of firms that co-locate at exchanges such as the NASDAQ has doubled over the last year. [Sal L. Arnuk & Joseph Saluzzi, “Toxic Equity Trading Order Flow on Wall Street: The Real Force Behind the Explosion in Volume and Volatility”, THEMIS TRADING LLC WHITE PAPER.]

Today, latency arbitragers use algorithms to create models of great complexity that can involve hundreds of securities in many different markets. This practice is highly lucrative. For instance, the financial markets research and advisory firm TABB Group has estimated that annual aggregate profits of low-latency arbitrage strategies exceed $21 billion, an amount which is spread out among the few hundred firms that deploy them. [See Iati, supra note 13 (quoting TABB group’s estimate).]… Because of high frequency trading’s prominence, the next few years of changing regulations will be extremely interesting. The SEC has a very difficult job ahead of it in attempting to regulate these innovative practices while at the same time upholding the agency’s primary concerns: protecting the average investor and ensuring markets remain relatively efficient.

…Additionally, the lack of regulation on naked access allows a reckless high frequency trader to conceivably pump out hundreds of thousands of faulty orders in the two minute period it typically takes to rectify a trading system glitch. [See MOYER & LAMBERT, supra note 6.] Sang Lee, a market analyst from Aite Group, believes that “[i]n the worst case scenario, electronic fat fingering or intentional trading fraud could take down not only the sponsored participants, but also the sponsoring broker and its counterparties, leading to an uncontrollable domino effect that would threaten overall systematic market stability.”107 Because of these doomsday scenarios and others advanced by some Democratic lawmakers, the SEC will most likely propose rules to limit this practice in the upcoming months.

“High-Speed Trading No Longer Hurtling Forward”, 14 October 2012:

Profits from high-speed trading in American stocks are on track to be, at most, $1.25 billion this year, down 35% from last year and 74% lower than the peak of about $4.9 billion in 2009, according to estimates from the brokerage firm Rosenblatt Securities. By comparison, Wells Fargo and JPMorgan Chase each earned more in the last quarter than the high-speed trading industry will earn this year. While no official data is kept on employment at the high-speed firms, interviews with more than a dozen industry participants suggest that firms large and small have been cutting staff, and in some cases have shut down. The firms also are accounting for a declining percentage of a shrinking pool of stock trading, from 61% three years ago to 51% now, according to the Tabb Group, a data firm…The challenges facing speed-focused firms are many, the biggest being the drop in trading volume on stock markets around the world in each of the last four years. This has made it harder to make profits for traders who quickly buy and sell shares offered by slower investors. In addition, traditional investors like mutual funds have adopted the high-speed industry’s automated strategies and moved some of their business away from the exchanges that are popular with high-speed traders. Meanwhile, the technological costs of shaving further milliseconds off trade times has become a bigger drain on many companies…At the same time that the firms are making trims, regulators around the world have increased their scrutiny of high-speed traders, and the structure of the financial markets has continued to shift. Executives at the trading firms worry that new regulations could curtail business even more, but so far regulators in the United States have taken few steps to rein in trading practices…The contraction is also pushing the firms to move into trading of other financial assets, like international stocks and currencies. 
High-speed firms accounted for about 12% of all currency trading in 2010; this year, it is set to be up to 28 percent, according to the consulting firm Celent. But executives at several high-speed firms said that trading in currencies and other assets was not making up for the big declines in their traditional areas of United States stocks, futures and options. Sun Trading in Chicago bought a firm that allowed it to begin the automated trading of bonds earlier this year. That did not make up for the 40 employees the company cut in 2011.
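
As a rough sanity check, the Rosenblatt figures quoted above are mutually consistent, which a few lines of illustrative arithmetic confirm:

```python
# Cross-checking the Rosenblatt profit estimates quoted above
# (all figures in billions of dollars; arithmetic only, no new data).
peak_2009 = 4.9
this_year = 1.25

# $1.25B against the $4.9B 2009 peak is the quoted "74% lower":
decline_from_peak = (1 - this_year / peak_2009) * 100
print(round(decline_from_peak))  # 74

# "down 35% from last year" implies 2011 profits of roughly $1.9B:
last_year = this_year / (1 - 0.35)
print(round(last_year, 2))  # 1.92
```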

1.3 Self-driving cars

LW post

The first success inaugurating the modern era can be considered the 2005 DARPA Grand Challenge, in which multiple vehicles completed the course. The first legislation of any kind addressing autonomous cars was Nevada’s 2011 approval. Five states have passed legislation dealing with autonomous cars.

However, these laws are highly preliminary and all the analyses I can find agree that they punt on the real legal issues of liability; they permit relatively little.

1.3.1 Lobbying, Liability, and Insurance

(Warning: legal analysis quoted at length in some excerpts.)

“Toward Robotic Cars”, Thrun 2010 (pre-Google):

Junior’s behavior is governed by a finite state machine, which provides for the possibility that common traffic rules may leave a robot without a legal option as how to proceed. When that happens, the robot will eventually invoke its general-purpose path planner to find a solution, regardless of traffic rules. [Raising serious issues of liability related to potentially making people worse-off]
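
Thrun's description of the fallback - obey the traffic rules when possible, but invoke the general-purpose planner regardless of the rules when they leave no legal option - can be sketched as follows. This is a hypothetical illustration of the control structure, not Junior's actual code:

```python
# Minimal sketch of the fallback logic Thrun describes: prefer the cheapest
# maneuver permitted by traffic rules; only if no legal maneuver exists,
# pick the cheapest maneuver regardless of the rules (the case that raises
# the liability issues noted above). All names are invented.

def choose_maneuver(candidate_paths, is_legal, cost):
    """Pick the cheapest legal path; with no legal option, ignore the rules."""
    legal = [p for p in candidate_paths if is_legal(p)]
    pool = legal if legal else candidate_paths  # fallback: rules set aside
    return min(pool, key=cost)

# Example: a stalled car blocks the lane, so the only way forward crosses a
# double yellow line; no candidate is legal and the fallback fires.
costs = {"wait_behind_stalled_car": 1_000_000, "cross_double_yellow": 10}
best = choose_maneuver(list(costs), is_legal=lambda p: False, cost=costs.get)
print(best)  # cross_double_yellow
```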

“Google Cars Drive Themselves, in Traffic” (PDF), NYT 2010:

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable - the person behind the wheel or the maker of the software?

“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.” The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.

“Calif. Greenlights Self-Driving Cars, But Legal Kinks Linger”:

For instance, if a self-driving car runs a red light and gets caught, who gets the ticket? “I don’t know - whoever owns the car, I would think. But we will work that out,” Gov. Brown said at the signing event for California’s bill to legalize and regulate the robotic cars. “That will be the easiest thing to work out.” Google co-founder Sergey Brin, who was also at the ceremony, jokingly said “self-driving cars don’t run red lights.” That may be true, but Bryant Walker Smith, who teaches a class at Stanford Law School this fall on the law supporting self-driving cars, says eventually one of these vehicles will get into an accident. When it does, he says, it’s not clear who will pay.

…Or is it the company that wrote the software? Or the automaker that built the car? When it came to assigning responsibility, California decided that a self-driving car would always have a human operator. Even if that operator wasn’t actually in the car, that person would be legally responsible. It sounds straightforward, but it’s not. Let’s say the operator of a self-driving car is inebriated; he or she is still legally the operator, but the car is driving itself. “That was a decision that department made - that the operator would be subject to the laws, including laws against driving while intoxicated, even if the operator wasn’t there,” Walker Smith says…Still, issues surrounding liability and who is ultimately responsible when robots take the wheel are likely to remain contentious. Already trial lawyers, insurers, automakers and software engineers are queuing up to lobby rule-makers in California’s capital.

“Google’s Driverless Car Draws Political Power: Internet Giant Hones Its Lobbying Skills in State Capitols; Giving Test Drives to Lawmakers”, WSJ, 12 October 2012:

Overall, Google spent nearly $9 million in the first half of 2012 lobbying in Washington for a wide variety of issues, including speaking to U.S. Department of Transportation officials and lawmakers about autonomous vehicle technology, according to federal records, nearing the $9.68 million it spent on lobbying in all of 2011. It is unclear how much Google has spent in total on lobbying state officials; the company doesn’t disclose such data.

…In most states, autonomous vehicles are neither prohibited nor permitted-a key reason why Google’s fleet of autonomous cars secretly drove more than 100,000 miles on the road before the company announced the initiative in fall 2010. Last month, Mr. Brin said he expects self-driving cars to be publicly available within five years.

In January 2011, Mr. Goldwater approached Ms. Dondero Loop and the Nevada assembly transportation committee about proposing a bill to direct the state’s department of motor vehicles to draft regulations around the self-driving vehicles. “We’re not saying, ‘Put this on the road,’” he said he told the lawmakers. “We’re saying, ‘This is legitimate technology,’ and we’re letting the DMV test it and certify it.” Following the Nevada bill’s passage, legislators from other states began showing interest in similar legislation. So Google repeated its original recipe and added an extra ingredient: giving lawmakers the chance to ride in one of its about a dozen self-driving cars…In California, an autonomous-vehicle bill became law last month despite opposition from the Alliance of Automobile Manufacturers, which includes 12 top auto makers such as GM, BMW and Toyota. The group had approved of the Florida bill. Dan Gage, a spokesman for the group, said the California legislation would allow companies and individuals to modify existing vehicles with self-driving technology that could be faulty, and that auto makers wouldn’t be legally protected from resulting lawsuits. “They’re not all Google, and they could convert our vehicles in a manner not intended,” Mr. Gage said. But Google helped push the bill through after spending about $140,000 over the past year to lobby legislators and California agencies, according to public records.

As with California’s recently enacted law, Cheh’s [Washington D.C.] bill requires that a licensed driver be present in the driver’s seat of these vehicles. While seemingly inconsequential, this effectively outlaws one of the more promising functions of autonomous vehicle technology: allowing disabled people to enjoy the personal mobility that most people take for granted. Google highlighted this benefit when one of its driverless cars drove a legally blind man to a Taco Bell. Bizarrely, Cheh’s bill also requires that autonomous vehicles operate only on alternative fuels. While the Google Self-Driving Car may manifest itself as an eco-conscious Prius, self-driving vehicle technology has nothing to do with hybrids, plug-in electrics or vehicles fueled with natural gas. The technology does not depend on vehicle make or model, but Cheh is seeking to mandate as much. That could delay the technology’s widespread adoption for no good reason…Another flaw in Cheh’s bill is that it would impose a special tax on drivers of autonomous vehicles. Instead of paying fuel taxes, “Owners of autonomous vehicles shall pay a vehicle-miles travelled (VMT) fee of 1.875 cents per mile.” Administrative details aside, a VMT tax would require drivers to install a recording device to be periodically audited by the government. There may be good reasons to replace fuel taxes with VMT fees, but greatly restricting the use of a potentially revolutionary new technology by singling it out for a new tax system would be a mistake.
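
To see the scale of the proposed fee, compare it with ordinary fuel taxes. The 1.875 cents/mile figure is from the bill as quoted; the mileage, fuel economy, and gas-tax figures below are assumptions chosen for illustration, not from the bill:

```python
# Rough comparison of Cheh's proposed VMT fee with a conventional fuel tax.
# Only the per-mile fee comes from the bill text quoted above; the annual
# mileage, fuel economy, and gas-tax rate are illustrative assumptions.

VMT_FEE_PER_MILE = 0.01875   # $0.01875/mile, from the bill
miles_per_year = 12_000      # assumed annual mileage
mpg = 50                     # assumed fuel economy (a Prius-like hybrid)
gas_tax_per_gallon = 0.235   # assumed gas-tax rate, for comparison only

vmt_tax = miles_per_year * VMT_FEE_PER_MILE
fuel_tax = miles_per_year / mpg * gas_tax_per_gallon
print(round(vmt_tax, 2))   # 225.0 -> dollars/year under the VMT fee
print(round(fuel_tax, 2))  # 56.4  -> dollars/year of forgone fuel tax
```

Under these assumptions the VMT fee is roughly four times the fuel tax it replaces, which is why singling out autonomous vehicles for it looks like a penalty rather than a substitution.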

“Driverless cars are on the way. Here’s how not to regulate them.”

“How autonomous vehicle policy in California and Nevada addresses technological and non-technological liabilities”, Pinto 2012:

The State of Nevada has adopted one policy approach to dealing with these technical and policy issues. At the urging of Google, a new Nevada law directs the Nevada Department of Motor Vehicles (NDMV) to issue regulations for the testing and possible licensing of autonomous vehicles and for licensing the owners/drivers of these vehicles. There is also a similar law being proposed in California with details not covered by Nevada AB 511. This paper evaluates the strengths and weaknesses of the Nevada and California approaches.

Another problem posed by the non-computer world is that human drivers frequently bend the rules by rolling through stop signs and driving above speed limits. How does a polite and law-abiding robot vehicle act in these situations? To solve this problem, the Google Car can be programmed for different driving personalities, mirroring the current conditions. On one end, it would be cautious, being more likely to yield to another car and strictly following the laws on the road. At the other end of the spectrum, the robocar would be aggressive, where it is more likely to go first at the stop sign. When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show to the other drivers its intention.

However, there is a time period between a problem being diagnosed and the car being fixed. In theory, one would disable the vehicle remotely and only start it back up when the problem is fixed. However in reality, this would be extremely disruptive to a person’s life as they would have to tow their vehicle to the nearest mechanic or autonomous vehicle equivalent to solve the issue. Google has not developed the technology to approach this problem, instead relying on the human driver to take control of the vehicle if there is ever a problem in their test vehicles.

[previous Lu quote about human-centric laws] …this can create particularly tricky situations such as deciding whether the police should have the right to pull over autonomous vehicles, a question yet to be answered. Even the chief counsel of the National Highway Traffic Safety Administration admits that the federal government does not have enough information to determine how to regulate driverless technologies. This can become a particularly thorny issue when there is the first accident between autonomous and self driving vehicles and how to go about assigning liability.

This question of liability arose during an [unpublished 11 Feb 2012] interview on the future of autonomous vehicles with Roger Noll. Although Professor Noll hasn’t read the current literature on this issue, he voiced concern over what the verdict of the first trial between an accident between an autonomous vehicle and normal car will be. He believes that the jury will almost certainly side with the human driver despite the details of the case, as he eloquently put in his husky Utah accent and subsequent laughter, “how are we going to defend the autonomous vehicle; can we ask it to testify for itself?” To answer Roger Noll’s question, Brad Templeton’s blog elaborates how he believes that liability reasons are a largely unimportant question for two reasons. First, in new technology, there is no question that any lawsuit over any incident involving the cars will include the vendor as the defendant so potential vendors must plan for liability. For the second reason, Brad Templeton makes an economic argument that the cost of accidents is borne by car buyers through higher insurance premiums. If the accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self- insurance. Instead, Brad Templeton believes that the big question is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions because of punitive damages. In theory, robocars should drive the costs down because of the reductions in collisions, and that means savings for the car buyer and for society and thus cheaper auto insurance. However, if the cost per collision is much higher even though the number of collisions drops, there is uncertainty over whether autonomous vehicles will save money for both parties.
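
Templeton's economic argument reduces to a comparison of expected costs: collisions per driver-year times average cost per collision. A back-of-the-envelope calculation, with all figures invented purely for illustration:

```python
# Templeton's argument as arithmetic: fewer collisions cut expected costs,
# but higher per-collision awards (e.g. punitive damages) push them back up.
# Every number below is an illustrative assumption.

human_collisions_per_1000_years = 50   # 1 collision per 20 driver-years
robo_collisions_per_1000_years = 5     # assume 10x fewer collisions
human_cost_per_collision = 20_000      # dollars, assumed

# Expected cost per 1,000 driver-years for human drivers:
human_expected = human_collisions_per_1000_years * human_cost_per_collision
print(human_expected)  # 1000000

# Break-even: how costly can the average robocar collision become before
# the savings from fewer collisions vanish?
breakeven = human_expected // robo_collisions_per_1000_years
print(breakeven)  # 200000
```

Under these assumptions, insurance savings survive unless the average robocar collision becomes ten times more expensive than a human one - which is exactly the punitive-damages uncertainty Templeton flags.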

California’s Proposition 103 dictates that any insurance policy’s price must be based on weighted factors, and the top 3 factors by weight must be: 1. driving record, 2. number of miles driven, and 3. number of years of driving experience. Other factors, like the type of car someone has (e.g. an autonomous vehicle), must be weighted lower. Consequently, this law makes it very hard to get cheap insurance for a robocar.
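The effect of Prop 103’s mandated factor ordering can be made concrete with a toy calculation. A minimal sketch: the weights, rating values, and `premium` function below are invented for illustration and are not actual California rating rules; Prop 103 only fixes the ordering of the top three factors.

```python
# Illustrative sketch of a Prop 103-style weighted rating scheme.
# The weights and rating values are invented: Prop 103 only requires that
# driving record, miles driven, and years of experience be the top three
# factors, in that order, with everything else (e.g. vehicle type) below them.
WEIGHTS = {
    "driving_record": 40,    # must carry the most weight
    "miles_driven": 30,      # second
    "years_experience": 20,  # third
    "vehicle_type": 10,      # autonomous technology must rank below the top three
}

def premium(base_rate, risk):
    """Premium as a weighted sum of normalized risk factors (0 = best, 1 = worst)."""
    return base_rate * sum(WEIGHTS[k] * risk[k] for k in WEIGHTS) / 100

# An average driver in a flawless robocar (vehicle_type risk = 0) vs. the same
# driver in a conventional car:
robocar = premium(1000, {"driving_record": 0.5, "miles_driven": 0.5,
                         "years_experience": 0.5, "vehicle_type": 0.0})
conventional = premium(1000, {"driving_record": 0.5, "miles_driven": 0.5,
                              "years_experience": 0.5, "vehicle_type": 0.5})
print(conventional - robocar)  # 50.0: the robocar saves only 5% of the base rate
```

However safe the robocar, the maximum possible discount is bounded by the weight assigned to vehicle type, which Prop 103 forces below the three driver-centric factors.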

Nevada Policy: AB 511 Section 8. This short piece of legislation accomplishes the goal of setting good standards for the DMV to follow. By setting general standards (part a), insurance requirements (part b), and safety standards (part c), it sets a precedent for these areas without being bogged down in details, leaving those to be decided by the DMV instead of the politicians. …part b only discusses insurance briefly, saying the state must, “Set forth requirements for the insurance that is required to test or operate an autonomous vehicle on a highway within this State.” The definitions set in the second part of Section 8 are not specific enough. Following the open-ended standards set in the earlier part of Section 8 is good for continuity, but does not actually address the problem. According to Ryan Calo, Director of Privacy and Robotics for Stanford Law School’s Center for Internet and Society (CIS), the bill’s definition of “autonomous vehicles” is unclear and circular. The legislation treats autonomous driving as binary: a vehicle either is autonomous or is not, when in reality autonomy falls along a spectrum.

Overall, AB 511 did not address the technological liabilities and barely mentioned the non-technological liabilities that must be overcome for autonomous vehicles to succeed. Since it was the first legislation ever to approach the issue of autonomous vehicles, it is understandable that the policymakers did not want to go into specifics and instead relied on future regulation to determine the details.

California Policy: SB 1298…would require the adoption of safety standards and performance requirements to ensure the safe operation and testing of “autonomous vehicles” on California public roads. The bill would allow autonomous vehicles to be operated or tested on the public roads on the condition they meet safety standards and performance requirements of the bill. SB 1298’s 66 lines of text is also considerably longer than AB 511’s 12 lines of relevant text (the entirety of AB 511 is much longer but consists of irrelevant information for the purposes of autonomous cars).

SB 1298 clearly intends to support company-developed vehicles, saying in Section 2, Part B that “autonomous vehicles have been operated safely on public roads in the state in recent years by companies developing and testing this technology” and that these companies have set the standard for what safety requirements will be necessary for future testing by others. This part of the legislation implicitly supports Google’s autonomous vehicle program, because Google has the most extensively tested fleet of vehicles of any company, and nearly all of that testing has been done in California. The bill improves on AB 511 by putting more control in the hands of Google to focus on developing the technology, a signal by the policymakers that they intend to create a climate favorable to Google’s innovation within the constraints of keeping society safe.

To avoid setting a dangerous precedent for liability in accidents, policymakers can consider protecting the car companies from frivolous and malicious lawsuits. Without such legislation, future plaintiffs will feel justified in suing Google and placing full liability on it. There are also potential free-rider effects from the moral hazard of putting the blame on the company that makes the technology rather than the company that manufactures the vehicle. Since we are assuming that autonomous vehicle technology will all come from a single source, Google, any accident that occurs will pin the blame primarily on Google, the common denominator, rather than on the car manufacturer…Policy that keeps the cost per accident close to today’s cost will save money for both the insurer and the customer. This could mean putting a cap on damages awarded to plaintiffs, or on penalties imposed on the company, to limit shocks to the industry. Overall, a policymaker could phase in limits on vendor liability as certain technology or scaling milestones are met without accidents.

SB 1298 manages to cover some of the shortcomings of AB 511, improving on its definition of an autonomous vehicle and looking more towards the future by giving Google more responsibility and alleviating some of the non-technical liability by considering its product “under development”. However, both pieces of legislation fail to address specific technical liabilities, such as bugs in the code base or computer attacks, and non-technical liabilities, such as insurance or accident liability.

“Can I See Your License, Registration and C.P.U.?”, Tyler Cowen; see also his “What do the laws against driverless cars look like?”:

The driverless car is illegal in all 50 states. Google, which has been at the forefront of this particular technology, is asking the Nevada legislature to relax restrictions on the cars so it can test some of them on roads there. Unfortunately, the very necessity for this lobbying is a sign of our ambivalence toward change. Ideally, politicians should be calling for accelerated safety trials and promising to pass liability caps if the cars meet acceptable standards, whether that be sooner or later. Yet no major public figure has taken up this cause.

Enabling the development of driverless cars will require squadrons of lawyers because a variety of state, local and federal laws presume that a human being is operating the automobiles on our roads. No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights. Ordinary laws change only if legislators make those revisions a priority. Yet the mundane political issues of the day often appear quite pressing, not to mention politically safer than enabling a new product that is likely to engender controversy.

Politics, of course, is often geared toward preserving the status quo, which is highly visible, familiar in its risks, and lucrative for companies already making a profit from it. Some parts of government do foster innovation, such as DARPA, the Defense Advanced Research Projects Agency, which is part of the Defense Department. DARPA helped create the Internet and is supporting the development of the driverless car. It operates largely outside the public eye; the real problems come when its innovations start to enter everyday life and meet political resistance and disturbing press reports.

…In the meantime, transportation is one area where progress has been slow for decades. We’re still flying 747s, a plane designed in the 1960s. Many rail and bus networks have contracted. And traffic congestion is worse than ever. As I argued in a previous column, this is probably part of a broader slowdown of technological advances.

But it’s clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits. Potentially major innovations need a path forward through the current thicket of restrictions. That the debate on this issue is so quiet shows the urgency of doing something now.

Ryan Calo of the CIS argues essentially that no specific law bans autonomous cars and the threat of the human-centric laws & regulations is overblown. (See the later Russian incident.)

“SCU conference on legal issues of robocars”, Brad Templeton:

Liability: After a technology introduction where Sven Bieker of Stanford outlined the challenges he saw which put fully autonomous robocars 2 decades away, the first session was on civil liability. The short message was that based on a number of related cases from the past, it will be hard for manufacturers to avoid liability for any safety problems with their robocars, even when the systems were built to provide the highest statistical safety result if it traded off one type of safety for another. In general when robocars come up as a subject of discussion in web threads, I frequently see “Who will be liable in a crash” as the first question. I think it’s a largely unimportant question for two reasons. First of all, when the technology is new, there is no question that any lawsuit over any incident involving the cars will include the vendor as the defendant, in many cases with justifiable reasons, but even if there is no easily seen reason why. So potential vendors can’t expect to not plan for liability. But most of all, the reality is that in the end, the cost of accidents is borne by car buyers. Normally, they do it by buying insurance. But if the accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self-insurance. It’s just a question of figuring out how the vehicle buyer will pay, and the market should be capable of that (though see below.) No, the big question in my mind is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions where human error is at fault, because of punitive damages…Unfortunately, some liability history points to the latter scenario, though it is possible for statutes to modify this.
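Templeton’s economic point reduces to expected-value arithmetic: total liability cost is the number of collisions times the cost per collision. A minimal sketch, with all figures invented purely for illustration:

```python
# Expected annual liability cost for a fleet = number of collisions x average
# cost per collision. All figures are hypothetical, chosen only to illustrate
# Templeton's argument that fewer-but-costlier collisions can raise total cost.
def total_liability(collisions, cost_per_collision):
    return collisions * cost_per_collision

# A fleet of human-driven cars: 50 collisions/year at ordinary damages.
human = total_liability(50, 20_000)              # 1,000,000
# The same fleet of robocars: 5x fewer collisions, same damages per collision.
robocar_capped = total_liability(10, 20_000)     # 200,000: insurance gets cheaper
# 5x fewer collisions, but each lawsuit brings punitive damages.
robocar_punitive = total_liability(10, 150_000)  # 1,500,000: costlier than humans
```

Whether robocar insurance ends up cheaper thus hinges entirely on whether statutes keep the per-collision cost near today’s levels or punitive damages dominate.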

Insurance: …Because Prop 103 [specifying insurance by weighted factors, see previous] is a ballot proposition, it can’t easily be superseded by the legislature. It takes a 2/3 vote and a court agreeing the change matches the intent of the original ballot proposition. One would hope the courts would agree that cheaper insurance to encourage safer cars would match the voter intent, but this is a challenge.

Local and criminal laws: The session on criminal laws centered more on the traffic code (which isn’t really criminal law) and the fact it varies a lot from state to state. Indeed, any robocar that wants to operate in multiple states will have to deal with this, though fortunately there is a federal standard on traffic controls (signs and lights) to rely on. Some global standards are a concern - the Geneva Convention on traffic laws requires that every car have a driver who is in control of the vehicle. However, I think that governments will be able to quickly see - if they want to - that these are laws in need of updating. Some precedent in drunk driving can create problems - people have been convicted of DUI for being in their car, drunk, with the keys in their pocket, because they had clear intent to drive drunk. However, one would hope the possession of a robocar (of the sort that does not need human manual driving) would express an entirely different intent to the law.

“Definition of necessary vehicle and infrastructure systems for Automated Driving”, European Commission report 29 June 2011:

Yet another paramount aspect tightly related to automated driving at present and in the near future, and certainly related to autonomous driving in the long run, is the interpretation of the Vienna Convention. It will be shown in the report how this legislation is commonly interpreted, how it creates the framework necessary to deploy automated and cooperative driving systems on a large scale, and what legal limitations are foreseen in making the new step toward autonomous driving. The report analyses other conventions and legislative acts in the same context, searches for gaps in the current legislation, and makes an interesting link with the aviation industry, from which several lessons can be learnt.

It seems appropriate to end this summary with a few remarks not directly related to the subject of this report, but worthwhile when thinking about automated driving, cooperative driving, and autonomous driving. Progress in human history has systematically taken the path of least resistance and has often bypassed governmental rules, business models, and the obvious thinking. At the end of the 1990s nobody was anticipating the prominent role the smart phone would have in 10 years, but scientists were busy planning journeys to Mars within the same timeframe. The latter has not happened and will probably not happen soon… One lesson humanity has learnt during its existence is that historical changes which followed the path of minimum resistance triggered fundamental changes in society at a later stage. “A car is a car”, as David Strickland, administrator of the National Highway Traffic Safety Administration (NHTSA) in the U.S., said in his speech at the Telematics Update conference in Detroit, June 2011, but it may soon drive its progress along a historical path of minimum resistance.

An automated driving system needs to meet the Vienna Convention (see Section 3, aspect 2). The private sector, especially those who are in the end responsible for the performance of the vehicle, should be involved in the discussion.

The Vienna Convention on Road Traffic is an international treaty designed to facilitate international road traffic and to increase road safety by standardizing uniform traffic rules among the contracting parties. This convention was agreed upon at the United Nations Economic and Social Council’s Conference on Road Traffic (October 7, 1968 - November 8, 1968) and came into force on May 21, 1977. Not all EU countries have ratified the treaty, see Figure 13 (e.g. Ireland, Spain and the UK did not). It should be noted that in 1968, animals were still used for traction of vehicles and the concept of autonomous driving was considered science fiction. This matters when interpreting the text of the treaty: should it be read strictly, to the letter of the text, or according to what was meant at that time?

The common opinion of the expert panel is that the Vienna Convention will have only a limited effect on the successful deployment of automated driving systems, for several reasons:

  • OEMs already deal with the situation that some of the Advanced Driver Assistance Systems touch the Vienna Convention today. For example, they provide an on/off switch for ADAS or allow an overriding of the functions by the driver. They develop their ADAS in line with the RESPONSE Code of Practice (2009) [41] following the principle that the driver is in control and remains responsible. In addition, the OEMs have a careful marketing strategy and they do not exaggerate and do not claim that an ADAS is working in all driving situations or that there is a solution to “all” safety problems.
  • Automation is not black and white, automated or not automated, but much more complex, involving many design dimensions. A helpful model of automation is to consider different levels of assistance and automation that can, e.g., be organized on a 1-D scale [42]. Several levels could be within the Vienna Convention, while extreme levels are outside of today’s version of the Vienna Convention. For example, one partitioning could be to have levels of automation Manual, Assisted, Semi-Automated, Highly Automated, and Fully Automated driving, see Figure 14. In highly automated driving, the automation has the technical capabilities to drive almost autonomously, but the driver is still in the loop and able to take over control when it is necessary. Fully automated driving like PRT, where the driver is not required to monitor the automation and does not have the ability to take over control, seems not to be covered by the Vienna Convention.

Criteria for deciding if the automation is still in line with the Vienna Convention could be:

  • the involvement of the driver in the driving task (vehicle control),
  • the involvement of the driver in monitoring the automation and the traffic environment,
  • the ability to take over control or to override the automation.

The Vienna Convention already contains openings, is variable, and can be changed. It contains a certain variability regarding the autonomy in the means of transportation, e.g. “to control the vehicle or guide the animals”. It is obvious that some of the current technological developments were not foreseen by the authors of the Vienna Convention. Issues like platooning are not addressed. Annex 5 (chapter 4, exemptions) already contains an opening to be investigated with appropriate legal expertise:

“For domestic purposes, Contracting Parties may grant exemptions from the provisions of this Annex in respect of: (c) Vehicles used for experiments whose purpose is to keep up with technical progress and improve road safety; (d) Vehicles of a special form or type, or which are used for particular purposes under special conditions”.

In addition, the Vienna Convention can be changed. The last change was made in 2006, when a new paragraph (paragraph 6) was added to Article 8 stating that the driver should minimize any activity other than driving.

…different understandings of the term “to control” exist, with no clear consensus [44]:

  1. Control in the sense of influencing: e.g. the driver controls the vehicle movements, the driver can override the automation, and/or the driver can switch the automation off.
  2. Control in the sense of monitoring: e.g. the driver monitors the actions of the automation.

Both interpretations allow the use of some form of automation in a vehicle, as can be seen in today’s cars where e.g. ACC or emergency brake assistance systems are available.

The first interpretation allows automation that can be overridden by the driver, or that reacts in emergency situations only when the driver can no longer cope with the situation. Forms of automation that cannot be overridden seem not to be in line with the first interpretation [45, p. 818]. The second interpretation is more flexible and would also allow forms of automation that cannot be overridden to be within the Vienna Convention, as long as the driver monitors the automation [44]. …In the literature, some other assistance and automation functions have been appraised by juridical experts. For example, [46] postulates that automatic emergency braking systems are in line with the Vienna Convention as long as they react only when a crash is unavoidable (collision mitigation); otherwise a conflict between the driver’s intention (here, steering) and the reaction of the automation (here, braking) cannot be excluded. Albrecht [47] concludes that an Intelligent Speed Adaptation (ISA) system which cannot be overridden by the driver is not in line with the Vienna Convention because it is inconsistent with Articles 8 and 13.
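The report’s two readings of “to control” can be glossed as a small decision rule. The encoding below is my own illustrative summary of the criteria above (override ability vs. monitoring), not part of the report:

```python
# Hypothetical encoding of the report's two interpretations of "to control":
# "influencing" (the driver must be able to override the automation) vs.
# "monitoring" (monitoring the automation suffices). My own gloss, not the report's.
def within_convention(can_override: bool, driver_monitors: bool,
                      interpretation: str) -> bool:
    if interpretation == "influencing":
        # Strict reading: non-overridable automation falls outside the Convention.
        return can_override
    if interpretation == "monitoring":
        # Looser reading: even non-overridable automation is fine if monitored.
        return can_override or driver_monitors
    raise ValueError(f"unknown interpretation: {interpretation}")

# A non-overridable ISA speed limiter with an attentive driver:
print(within_convention(False, True, "influencing"))  # False (Albrecht's conclusion)
print(within_convention(False, True, "monitoring"))   # True under the second reading
```

The same system is thus legal or illegal depending solely on which interpretation a court adopts, which is why the report flags the lack of consensus as a deployment risk.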

…As soon as data from the vehicle is used for V2X-communication or is stored in the vehicle itself, data protection and privacy issues become relevant. Directives and documents that need to be checked include:

  • Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data;
  • Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport;
  • WP 29 Working document on data protection and privacy implications in the eCall initiative and the European Data Protection Supervisor (EDPS) opinion on ITS Action Plan and Directive.

The bottleneck is that at the current stage of development the risk-related costs and benefits of viable deployment paths are unknown, combined with the fact that the deployment paths themselves are wide open because the possible deployment scenarios have not been assessed and debated in a political environment. There is currently no consensus amongst stakeholders on which of the proposed deployment scenarios will eventually prevail…Changes in EU legislation might change the role of players and increase the risk for them. Any change in EU legislation will change the position of the players, and uncertainty about the direction of this change adds to the risk; this prevents players from taking an outspoken position on the issue. If an update of existing legislation is considered, it should be European legislation, not national legislation. It would be better still to go for world-wide harmonized legislation, if it is decided to take that path.

A useful case study for understanding the issues associated with automated driving is SAFESPOT [4], which can be viewed as a parallel to automated driving functions (for more details, see Appendix I, related to aspect 3). SAFESPOT provided an in-depth analysis of the legal aspects of the service named ‘Speed Warning’, in two configurations, V2I and V2V. The analysis was performed against two fundamentally different law schemes, namely Dutch and English law, and concluded that the concept of co-operative systems raises questions and might complicate legal disputes, for several reasons:

  • There are more parties involved, all with their own responsibilities for the proper functioning of elements of a cooperative system.
  • Growing technical interdependencies between vehicles, and between vehicles and the infrastructure, may also lead to system failure, including scenarios that may be characterised as an unlucky combination of events (“a freak accident”) or as a failure for which the exact cause simply cannot be traced back (because of the technical complexity).
  • Risks that cannot be influenced by the people who suffer the consequences tend to be judged less acceptable by society and, likewise, from a legal point of view.
  • The in-depth analysis of SAFESPOT concluded that (potential) participants such as system producers and road managers may well be exposed to liability risks. Even if the driver of the probe vehicle could not successfully claim a defense (towards other road users), based on a failure of a system, system providers and road managers may still remain (partially) responsible through the mechanism of subrogation and right of recourse.
  • Current law states that the driver must be in control of his vehicle at all times. In general, EU drivers are prohibited to exhibit dangerous behaviour while driving. The police have prosecuted drivers in the UK for drinking and/or eating; i.e. only having one hand on the steering wheel. The use of a mobile phone while driving is prohibited in many European countries, only use of phones equipped for hands free operation are permitted. Liability still rests firmly with the driver for the safe operation of vehicles.

New legislation may be required for automated driving. It is highly unlikely that any OEM or supplier will risk introducing an automatic driving vehicle (where responsibility for safe driving is removed from the driver) without there being a framework of new legislation which clearly sets out where their responsibility and liability begins and ends. In some ways it could be seen as similar to warranty liability, the OEM warrants certain quality and performance levels, backed by reciprocal agreements within the supply chain. Civil (and possibly criminal) liability in the case of accidents involving automated driving vehicles is a major issue that can truly delay the introduction of these technologies…Since there are no statistical records of the effects of automated driving systems, the entrepreneurship of insurers should compensate for the issue of unknown risks…The following factors are regarded as hindering an optimal role to be played by the insurance industry in promoting new safety systems through their insurance policies:

  • Premium-setting is based on statistical principles, resulting in a time-lag problem;
  • Competition/sensitive relationships with clients;
  • Investment costs (e.g. aftermarket installations);
  • Administrative costs;
  • Market regulation

No precedent-setting liability lawsuits over automated systems have happened to date. The Toyota malfunctions of its brake-by-wire system in 2010 did not end in a lawsuit. A system like parking assist is not technically redundant. What would happen if the driver claimed he/she could not override the brakes? For (premium) insurance a critical mass is required, so initially all stakeholders, including governments, should potentially play a role.

“Automotive Autonomy: Self-driving cars are inching closer to the assembly line, thanks to promising new projects from Google and the European Union”, Wright 2011:

The Google project has made important advances over its predecessor, consolidating down to one laser rangefinder from five and incorporating data from a broader range of sources to help the car make more informed decisions about how to respond to its external environment. “The threshold for error is minuscule,” says Thrun, who points out that regulators will likely set a much higher bar for safety with a self-driving car than for one driven by notoriously error-prone humans.

“The future of driving, Part III: hack my ride”, Lee 2008:

Of course, one reason that private investors might not want to invest in automotive technologies is the risk of excessive liability in the case of crashes. The tort system serves a valuable function by giving manufacturers a strong incentive to make safe, reliable products. But too much tort liability can have the perverse consequence of discouraging the introduction of even relatively safe products into the marketplace. Templeton tells Ars that the aviation industry once faced that problem. At one point, “all of the general aviation manufacturers stopped making planes because they couldn’t handle the liability. They were being found slightly liable in every plane crash, and it started to cost them more than the cost of manufacturing the plane.” Airplane manufacturers eventually convinced Congress to place limits on their liability. At the moment, crashes tend to lead to lawsuits against human drivers, who rarely have deep pockets. Unless there is evidence that a mechanical defect caused the crash, car manufacturers tend not to be the target of most accident-related lawsuits. That would change if cars were driven by software. And because car manufacturers have much deeper pockets than individual drivers do, plaintiffs are likely to seek much larger damages than they would against human drivers. That could lead to the perverse result that even safer self-driving cars would be more expensive to insure than human drivers. Since car manufacturers, rather than drivers, would be the first ones sued in the event of an accident, car companies are likely to protect themselves by buying their own insurance. And if insurance premiums get too high, they may take the route the aviation industry did and seek limits on liability. An added benefit for consumers is that most would never have to worry about auto insurance. 
Cars would come preinsured for the life of the vehicle (or at least the life of the warranty)…Self-driving vehicles will sit at the intersection of two industries that are currently subject to very different regulatory regimes. The automobile industry is heavily regulated, while the software industry is largely unregulated at all. The most fundamental decision regulators will need to make is whether one of these existing regulatory regimes will be suitable for self-driving technologies, or whether an entirely new regulatory framework will be needed to accommodate them.

It’s inevitable that at some point, a self-driving vehicle will be involved in a fatal crash which generates worldwide publicity. Unfortunately, even if self-driving vehicles have amassed an overall safety record that’s superior to that of human drivers, the first crash is likely to prompt calls for drastic restrictions on the use of self-driving technologies. It will therefore be important for business leaders and elected officials to lay the groundwork by both educating the public about the benefits of self-driving technologies and managing expectations so that the public isn’t too surprised when crashes happen. Of course, if the first self-driving cars turn out to be significantly less safe than the average human driver, then they should be pulled off the streets and re-tooled. But this seems unlikely to happen. A company that introduced self-driving technology into the marketplace before it was ready would not only have trouble convincing regulators that its cars are safe, but it would be risking ruinous lawsuits, as well. The far greater danger is that the combination of liability fears and red tape will cause the United States to lose the initiative in self-driving technologies. Countries such as China, India, and Singapore that have more autocratic regimes or less-developed economies may seize the initiative and introduce self-driving cars while American policymakers are still debating how to regulate them. Eventually, the specter of other countries using technologies that aren’t available in the United States will spur American politicians into action, but only after several thousand Americans lose their lives unnecessarily at the hands of human drivers.

…One likely area of dispute is whether people will be allowed to modify the software on their own cars. The United States has a long tradition of people tinkering with both their cars and their computers. No doubt, there will be many people who are interested in modifying the software on their self-driving cars. But there is likely to be significant pressure for legislation criminalizing unauthorized tinkering with self-driving car software. Both car manufacturers and (as we’ll see shortly) the law enforcement community are likely to be in favor of criminalizing the modification of car software. And they’ll have a plausible safety argument: buggy car software would be dangerous not only to the car owner but to others on the road. The obvious analogy is to the DMCA, which criminalized unauthorized tinkering with copy protection schemes. But there are also important differences. One is that car manufacturers will be much more motivated to prevent tinkering than Apple or Microsoft are. If manufacturers are liable for the damage done by their vehicles, then tinkering not only endangers lives, but their bottom lines as well. It’s unlikely that Apple would ever sue people caught jailbreaking their iPhones. But car manufacturers probably will contractually prohibit tinkering and then sue those caught doing it for breach of contract.

The more stalwart advocate of locked-down cars is likely to be the government, because self-driving car software promises to be a fantastic tool for social control. Consider, for example, how useful locked-down cars could be to law enforcement. Rather than physically driving to a suspect’s house, knocking on his door (or not), and forcibly restraining, handcuffing, and escorting a suspect to the station, police will be able to simply seize a suspect’s self-driving car remotely and order it to drive to the nearest police station. And that’s just the beginning. Locked-down car software could be used to enforce traffic laws, to track and log peoples’ movements for later review by law enforcement, to enforce curfews, to clear the way for emergency vehicles, and dozens of other purposes. Some of these functions are innocuous. Others will be very controversial. But all of them depend on restricting user control over their own vehicles. If users were free to swap in custom software, they might disable the government’s “back door” and re-program it to ignore government requirements. So the government is likely to push hard for laws mandating that only government-approved software run self-driving cars.

…It’s too early to say exactly what the car-related civil liberties fights will be about, or how they will be resolved. But one thing we can say for certain is that the technical decisions made by today’s computer scientists will be important for setting the stage for those battles. Advocates for online free speech and anonymity have been helped tremendously by the fact that the Internet was designed with an open, decentralized architecture. The self-driving cars of the future are likely to be built on top of software tools that are being developed in today’s academic labs. By thinking carefully about the ways these systems are designed, today’s computer scientists can give tomorrow’s civil-liberties advocates their best shot at preserving automotive freedom.

http://www.917wy.com/topicpie/2008/11/future-of-driving-part-3/4

In our interview with him, Congressman Adam Schiff described the public’s perception of autonomous driving technologies as a reflection of his own reaction to the idea: a mixture of fascination and skepticism. Schiff explained that the public’s fascination comes from amazement at how advanced this technology has already become; Google’s sponsorship and endorsement make it even more alluring.

Skepticism of autonomous vehicle technologies comes from a missing element of trust. According to Clifford Nass, a professor of communications and sociology at Stanford University, this trust is an aspect of public opinion that must be earned through demonstration more so than through use. When people see a technology in action, they will begin to trust it. Professor Nass specializes in studying the way in which human beings relate to technology, and he has published several books on the topic including The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships. In our interview with him, Professor Nass explained that societal comfort with technology is gained through experience, and acceptance occurs when people have seen a technology work enough times collectively. He also pointed out that it took a long time for people to develop trust in air transportation, something that we almost take for granted now. It is certainly not the case that autonomous cars need to be equivalent in safety to plane flight before the public would adopt them. However, as Noel du Toit pointed out, we have a higher expectation for autonomous cars than we do for ourselves. Simply put, if we are willing to relinquish the “control” over our vehicles to an autonomous power, it will likely have to be under the condition that the technology drives more adeptly than we ever possibly could. Otherwise, there will simply be no trusting it. Interestingly, du Toit brought up a recent botched safety demonstration by Volvo in May of 2010. In the demonstration, Volvo showcased to the press how its emergency braking system works as part of an “adaptive cruise control” system. These systems allow a driver to set both a top speed and a following distance, which the vehicle then automatically maintains. As a consequence, if the preceding vehicle stops short, the system acts as the foundation for an emergency-braking maneuver. 
However, in Volvo’s demonstration the car smashed directly into a trailer13. Even though the system worked fine in several cases during the day’s worth of demonstrations, video of that one mishap went viral and did little to help the public gain trust in the technology.

Calo pointed out that future issues related to autonomous vehicles would be approached from a standpoint of “negative liabilities”, meaning that we can assume something is legal unless there exist explicit laws against it. This discussion also led to the concept of what a driverless car would look like to bystanders, and the kind of panic that sight might cause. A real-life example of this occurred in Moscow during the VisLab van trek to Shanghai11. In this case, an autonomous electric van was stopped by Russian authorities due to its apparent lack of a driver behind the wheel. Thankfully, engineers present were able to convince the Russian officer who stopped the vehicle not to issue a ticket. The above [Nevadan] legislation fits in well with the information that we collected from Congressman Schiff about potential federal involvement in autonomous vehicle technology. Basically, Schiff relayed the idea that the strong governmental role expected for this technology would come in the form of regulating safety. Furthermore, he called attention to hefty governmental requirements for crash testing that every new vehicle must meet before it is allowed on the road.

In autonomous driving, liability concerns can be inferred through a couple of examples. In one example, Noel du Toit described DARPA’s use of hired stunt drivers to share the testing grounds with driverless vehicle entries in the 2007 Urban Challenge. This behavior clearly illustrates the level of precaution that the DARPA officials felt it necessary to take. In another example, Dmitri Dolgov expounded on how Google’s cars are never driving by themselves; whenever they are operated on public roads, there are at least two well-trained operators in the car. Dolgov went on to say that these operators “are in control at all times”, which helps illustrate Google’s position-they are not taking any chances when it comes to liabilities. Kent Kresa, former CEO of Northrop Grumman and interim chairman of GM in 2009, was also concerned about the liability issues presented by autonomous vehicles. Kresa felt that a future with driverless cars piloting the streets was somewhat unimaginable at present, especially when one considers the possibility of a pedestrian getting hit. In the case of such a collision it is still very unclear who would be at fault. Whether or not the company that made the vehicle would be responsible is at present unknown.

A conversation we had with Bruce Gillman, the public information officer for the Los Angeles Department of Transportation (DOT), revealed that the department is very busy putting out many other fires. Gillman noted that DOT is focused on getting people out of their cars and onto bikes or into buses. Thus, autonomous vehicles are not on their radar. Moreover, Gillman was adamant that DOT would wait until autonomous vehicles were being manufactured commercially before addressing any issues concerning them. His viewpoint certainly reinforces the idea that supportive infrastructure updates coming from the city-government level would be unlikely. No matter what adoption pathway is used, federal government financial support could come in the form of incentives and subsidies like those seen during the initial rollout of hybrid vehicles. However, Brian Thomas explained that this would only be possible if the federal government was willing to do a cost-benefit valuation for the mainstream introduction of autonomous vehicles.

http://www.pickar.caltech.edu/e103/Final%20Exams/Autonomous%20Vehicles%20for%20Personal%20Transport.pdf [shades of Amara’s law: we always overestimate in the short run & underestimate in the long run]

Car manufacturers might be held liable for a larger share of the accidents-a responsibility they are certain to resist. (A legal analysis by Nidhi Kalra and her colleagues at the RAND Corporation suggests this problem is not insuperable.) –“Leave the Driving to It”, Brian Hayes American Scientist 2011

The RAND report: “Liability and Regulation of Autonomous Vehicle Technologies”, Kalra et al 2009:

In this work, we first evaluate how the existing liability regime would likely assign responsibility in crashes involving autonomous vehicle technologies. We identify the controlling legal principles for crashes involving these technologies and examine the implications for their further development and adoption. We anticipate that consumer education will play an important role in reducing consumer overreliance on nascent autonomous vehicle technologies and minimizing liability risk. We also discuss the possibility that the existing liability regime will slow the adoption of these socially desirable technologies because they are likely to increase liability for manufacturers while reducing liability for drivers. Finally, we discuss the possibility of federal preemption of state tort suits if the U.S. Department of Transportation (US DOT) promulgates regulations and some of the implications of eliminating state tort liability. Second, we review the existing literature on the regulatory environment for autonomous vehicle technologies. To date, there are no government regulations for these technologies, but work is being done to develop initial industry standards.

…Additionally, for some systems, the driver is expected to intervene when the system cannot control the vehicle completely. For example, if a very rapid stop is required, ACC may depend on the driver to provide braking beyond its own capabilities. ACC also does not respond to driving hazards, such as debris on the road or potholes-the driver is expected to intervene. Simultaneously, research suggests that drivers using these conveniences often become complacent and slow to intervene when necessary; this behavioral adaptation means drivers are less responsive and responsible than if they were fully in control (Rudin-Brown and Parker, 2004). Does such evidence suggest that manufacturers may be responsible for monitoring driver behavior as well as vehicle behavior? Some manufacturers have already taken a step toward ensuring that the driver assumes responsibility and is attentive, by requiring the driver to periodically depress a button or by monitoring the driver by sensing eye movements and grip on the steering wheel. As discussed later, litigation may occur around the issue of driver monitoring and the danger of the driver relying on the technology for something that it is not designed to accomplish.

…Ayers (1994) surveyed a range of emerging autonomous vehicle technologies and automated highways, evaluated the likelihood of a shift in liability occurring, discussed the appropriateness of government intervention, and highlighted the most-promising interventions for different technologies. Ayers found that collision warning and collision-avoidance systems “are likely to generate a host of negligence suits against auto manufacturers” and that liability disclaimers and federal regulations may be the most effective methods of dealing with the liability concerns (p. 21). The report was written before many of these technologies appeared on the market, and Ayers further speculated that “the liability for almost all accidents in cars equipped with collision-avoidance systems would conceivably fall on the manufacturer” (p. 22), which could “delay or even prevent the deployment of collision warning systems that are cost-effective in terms of accident reduction” (p. 25). Syverud (1992) examines the legal cases stemming from the introduction of air bags, antilock brakes, cruise control, and cellular telephones to provide some general lessons for the liability concerns for autonomous vehicle technologies. In another report, Syverud (1993) examines the legal barriers to a wide range of IVHSs and finds that liability poses a significant barrier particularly to autonomous vehicle technologies that take control of the vehicle. In this work, Syverud’s interviews with manufacturers reveal that liability concerns had already adversely affected research and development in these technologies in several companies. One interviewee is quoted as saying that “IVHS will essentially remain ‘information technology and a few pie-in-the-sky pork barrel control technology demonstrations, at least in this country, until you lawyers do something about products liability law’” (1993, p. 25).

…While the victims in these circumstances could presumably sue the vehicle manufacturer, products liability lawsuits are more expensive to bring and take more time to resolve than run-of-the-mill automobile-crash litigation. This shift in responsibility from the driver to the manufacturer may make no-fault automobile-insurance regimes more attractive. They are designed to provide compensation to victims relatively quickly, and they do not depend upon the identification of an “at-fault” party

…Suppose that autonomous vehicle technologies are remarkably effective at virtually eliminating minor crashes caused by human error. But it may be that the comparatively few crashes that do occur usually result in very serious injuries or fatalities (e.g., because autonomous vehicles are operating at much higher speeds or densities). This change in the distribution of crashes may affect the economics of insuring against them. Actuarially, it is much easier for an insurance company to calculate the expected costs of somewhat common small crashes than of rarer, much larger events. This may limit the downward trend in automobile-insurance costs that we would otherwise expect.
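The actuarial point above can be made concrete with a toy calculation. The sketch below (all figures are invented for illustration; a simple Bernoulli loss model, not an actuarial one) compares two books of insurance with the same expected annual loss: one dominated by frequent small crashes, one by rare severe ones. The expected losses match, but the rare-severe book's outcome is far more uncertain, which is what makes it harder to price:

```python
import math

def loss_stats(p_crash, cost_per_crash, n_policies):
    """Expected total loss and its standard deviation for a book of
    n_policies independent policies, each with probability p_crash of a
    single crash costing cost_per_crash (a simple Bernoulli model)."""
    mean = n_policies * p_crash * cost_per_crash
    sd = math.sqrt(n_policies * p_crash * (1 - p_crash)) * cost_per_crash
    return mean, sd

# Hypothetical books chosen to have the same expected annual loss (~$5M):
# many small crashes vs. rare severe ones.
small_mean, small_sd = loss_stats(p_crash=0.05, cost_per_crash=1_000, n_policies=100_000)
rare_mean, rare_sd = loss_stats(p_crash=0.0001, cost_per_crash=500_000, n_policies=100_000)

print(small_mean, rare_mean)  # same expected loss
print(rare_sd / small_sd)     # but the rare-severe book is far noisier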

…Suppose that most cars brake automatically when they sense a pedestrian in their path. As more cars with this feature come to be on the road, pedestrians may expect that cars will stop, in the same way that people stick their limbs in elevator doors confident that the door will automatically reopen. The general level of pedestrian care may decline as people become accustomed to this common safety feature. But if there were a few models of cars that did not stop in the same way, a new category of crashes could emerge. In this case, should pedestrians who wrongly assume that a car would automatically stop and are then injured be able to recover? To allow recovery in this instance would seem to undermine incentives for pedestrians to take efficient care. On the other hand, allowing the injured pedestrian to recover may encourage the universal adoption of this safety feature. Since negligence is defined by unreasonableness, the evolving set of shared assumptions about the operation of the roadways-what counts as “reasonable”-will determine liability. Fourth, we think that it is not likely that operators of partially or fully autonomous vehicles will be found strictly liable on the theory that driving such vehicles is an ultrahazardous activity. As explained earlier, these technologies will be introduced incrementally and will initially serve merely to aid the driver rather than take full control of the vehicle. This will give the public and courts time to become familiar with the capabilities and limits of the technology. As a result, it seems unlikely that courts will consider its gradual introduction and use to be ultrahazardous. On the other hand, this would not be true if a person attempted to operate a car fully autonomously before the technology adequately matured. Suppose, for example, that a home hobbyist put together his own autonomous vehicle and attempted to operate it on public roads.
Victims of any crashes that resulted may well be successful in convincing a court to find the operator strictly liable on the grounds that such activity was ultrahazardous.

…Product-liability law can be divided into theories of liability and kinds of defect. Theories of liability include negligence, misrepresentation, warranty, and strict liability.22 Types of defect include manufacturing defects, design defects, and warning defects. A product-liability lawsuit will involve one or more theories of manufacturer liability attached to a specific allegation of a type of defect. In practice, the legal tests for the theories of liability often overlap and, depending on the jurisdiction, may be identical. … While it is difficult to generalize, automobile (and subsystem) manufacturers may fare well under a negligence standard that uses a cost-benefit analysis that includes crashes avoided from the use of autonomous vehicle technologies. Automakers can argue that the overall benefits from the use of a particular technology outweigh the risks. The number of crashes avoided by the use of these technologies is probably large. …Unfortunately, the socially optimal liability rule is unclear. Permitting the defendant to include the long-run benefits in the cost-benefit analysis may encourage the adoption of technology that can indeed save many lives. On the other hand, it may shield the manufacturer from liability for shorter-run decisions that were inefficiently dangerous. Suppose, for example, that a crash-prevention system operates successfully 70% of the time but that, with additional time and work, it could have been designed to operate successfully 90% of the time. Then suppose that a victim is injured in a crash that would have been prevented had the system worked 90% of the time. Assume that the adoption of the 70-percent technology is socially desirable but the adoption of the 90-percent technology would be even more socially desirable. How should the cost-benefit analysis be conducted? Is the manufacturer permitted to cite the 70% of crashes that were prevented in arguing for the benefits of the technology? 
Or should the cost-benefit analysis focus on the manufacturer’s failure to design the product to function at 90-percent effectiveness? If the latter, the manufacturer might not employ the technology, thereby leading to many preventable crashes. In calculating the marginal cost of the 90-percent technology, should the manufacturer be able to count the lives lost in the delay in implementation as compared to possible release of the 70-percent technology? …Tortious misrepresentation may play a role in litigation involving crashes that result from autonomous vehicle technologies. If advertising overpromises the benefits of these technologies, consumers may misuse them. Consider the following hypothetical scenario. Suppose that an automaker touts the “autopilot-like” features of its ACC and lane-keeping function. In fact, the technologies are intended to be used by an alert driver supervising their operation. After activating the ACC and lane-keeping function, a consumer assumes that the car is in control and falls asleep. Due to road resurfacing, the lane-keeping function fails, and the automobile leaves the roadway and crashes into a tree. The consumer then sues the automaker for tortious misrepresentation based on the advertising that suggested that the car was able to control itself.
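The 70-percent-versus-90-percent hypothetical above can be made concrete with invented numbers (none of these figures come from the RAND report; this is only an illustration of the trade-off's arithmetic). The sketch tallies crashes over a fixed horizon for "ship the 70% system now" versus "wait for the 90% system":

```python
# Hypothetical figures, purely to make the 70%-vs-90% dilemma concrete.
baseline_crashes = 1_000   # preventable crashes per year with no system deployed
horizon = 10               # years over which we tally outcomes

def crashes_over(horizon, effectiveness, delay=0):
    """Crashes still occurring over `horizon` years if a system with the
    given effectiveness ships after `delay` years (no protection until then)."""
    return (baseline_crashes * delay
            + baseline_crashes * (1 - effectiveness) * (horizon - delay))

ship_70_now = crashes_over(horizon, 0.70)           # deploy the 70% system immediately
wait_for_90 = crashes_over(horizon, 0.90, delay=2)  # spend 2 more years on the 90% system

print(ship_70_now, wait_for_90)
```

Under these assumptions waiting narrowly wins over a ten-year horizon but loses over a five-year one; which column of this ledger the manufacturer is permitted to cite is exactly the liability question the report raises.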

…Finally, it is also possible that auto manufacturers will be sued for failing to incorporate autonomous vehicle technologies in their vehicles. While absence of available safety technology is a common basis for design- defect lawsuits (e.g., Camacho v. Honda Motor Co., 741 P.2d 1240, 1987, overturning summary dismissal of suit alleging that Honda could easily have added crash bars to its motorcycles, which would have prevented the plaintiff’s leg injuries), this theory has met with little success in the automotive field because manufacturers have successfully argued that state tort remedies were preempted by federal regulation (Geier v. American Honda Motor Co., 529 U.S. 861, 2000, finding that the plaintiff’s claim that the manufacturer was negligent for failing to include air bags was implicitly preempted by the National Traffic and Motor Vehicle Safety Act). We discuss preemption and the relationship between regulation and tort in Section 4.3.

…Preemption has arisen in the automotive context in litigation over a manufacturer’s failure to install air bags. In Geier v. American Honda Motor Co. (2000), the U.S. Supreme Court found that state tort litigation over a manufacturer’s failure to install air bags was preempted by the National Traffic and Motor Vehicle Safety Act (Pub. L. No. 89-563). More specifically, the Court found that the Federal Motor Vehicle Safety Standard (FMVSS) 208, promulgated by the US DOT, required manufacturers to equip some but not all of their 1987 vehicle-year vehicles with passive restraints. Because the plaintiffs’ theory that the defendants were negligent under state tort law for failing to include air bags was inconsistent with the objectives of this regulation (FMVSS 208), the Court held that the state lawsuits were preempted. Presently, there has been very little regulation promulgated by the US DOT with respect to autonomous vehicle technologies. Should the US DOT promulgate such regulation, it is likely that state tort law claims that were found to be inconsistent with the objective of the regulation would be held to be preempted under the analysis used in Geier. Substantial litigation might be expected as to whether particular state-law claims are, in fact, inconsistent with the objectives of the regulation. Resolution of those claims will depend on the specific state tort law claims, the specific regulation, and the court’s analysis of whether they are “inconsistent.” …Our analysis necessarily raises a more general question: Why should we be concerned about liability issues raised by a new technology? The answer is the same as for why we care about tort law at all: that a tort regime must balance economic incentives, victim compensation, and corrective justice. Any new technology has the potential to change the sets of risks, benefits, and expectations that tort law must reconcile. 
…Congress could consider creating a comprehensive regulatory regime to govern the use of these technologies. If it does so, it should also consider preempting inconsistent state-court tort remedies. This may minimize the number of inconsistent legal regimes that manufacturers face and simplify and speed the introduction of this technology. While federal preemption has important disadvantages, it might speed the development and utilization of this technology and should be considered, if accompanied by a comprehensive federal regulatory regime.

…This tension produced “a standoff between airbag proponents and the automakers that resulted in contentious debates, several court cases, and very few airbags” (Wetmore, 2004, p. 391). In 1984, the US DOT passed a ruling requiring vehicles manufactured after 1990 to be equipped with some type of passive restraint system (e.g., air bags or automatic seat belts) (Wetmore, 2004); in 1991, this regulation was amended to require air bags in particular in all automobiles by 1999 (Pub. L. No. 102-240). The mandatory performance standards in the FMVSS further required air bags to protect an unbelted adult male passenger in a head-on, 30 mph crash. Additionally, by 1990, the situation had changed dramatically, and air bags were being installed in millions of cars. Wetmore attributes this development to three factors: First, technology had advanced to enable air-bag deployment with high reliability; second, public attitude shifted, and safety features became important factors for consumers; and, third, air bags were no longer being promoted as replacements but as supplements to seat belts, which resulted in a sharing of responsibility between manufacturers and passengers and lessened manufacturers’ potential liability (Wetmore, 2004). While air bags have certainly saved many lives, they have not lived up to original expectations: In 1977, NHTSA estimated that air bags would save on the order of 9,000 lives per year and based its regulations on these expectations (Thompson, Segui-Gomez, and Graham, 2002). Today, by contrast, NHTSA calculates that air bags saved 8,369 lives in the 14 years between 1987 and 2001 (Glassbrenner, undated). Simultaneously, however, it has become evident that air bags pose a risk to many passengers, particularly smaller passengers, such as women of small stature, the elderly, and children. 
NHTSA (2008a) determined that 291 deaths were caused by air bags between 1990 and July 2008, primarily due to the extreme force that is necessary to meet the performance standard of protecting the unbelted adult male passenger. Houston and Richardson (2000) describe the strong reaction to these losses and a backlash against air bags, despite their benefits. The unintended consequences of air bags have led to technology developments and changes to standards and regulations. Between 1997 and 2000, NHTSA developed a number of interim solutions designed to reduce the risks of air bags, including on-off switches and deployment with less force (Ho, 2006). Simultaneously, safer air bags, called advanced air bags, were developed that deploy with a force tailored to the occupant by taking into account the seat position, belt usage, occupant weight, and other factors. In 2000, NHTSA mandated that the introduction of these advanced air bags begin in 2003 and that, by 2006, every new passenger vehicle would include these safety measures (NHTSA, 2000). What lessons does this experience offer for regulation of autonomous vehicle technologies? We suggest that modesty and flexibility are necessary. The early air-bag regulators envisioned air bags as being a substitute for seat belts because the rates of seat-belt usage were so low and appeared intractable. Few anticipated that seat-belt usage would rise as much over time as it has and that air bags would eventually be used primarily as a supplement rather than a substitute for seat belts. Similarly unexpected developments are likely to arise in the context of autonomous vehicle technologies. In 2006, for example, Honda introduced its Accord model in the UK with a combined lane-keeping and ACC system that allows the vehicle to drive itself under the driver’s watch; this combination of features has yet to be introduced in the United States (Miller, 2006). Ho (2006, p. 27) observes a general trend that “the U.S. 
market trails Europe, and the European market trails Japan by 2 to 3 years.” What is the extent of these differences? What aspects of the liability and regulatory rules in those countries have enabled accelerated deployment? What other factors are at play (e.g., differences in consumers’ sensitivity to price)?

“New Technology - Old Law: Autonomous Vehicles and California’s Insurance Framework”, Peterson 2012:

This Article will address this issue and propose ways in which auto insurance might change to accommodate the use of AVs. Part I briefly reviews the background of insurance regulation nationally and in California. Part II discusses general insurance and liability issues related to AVs. Part III discusses some challenges that insurers and regulators may face when setting rates for AVs, both generally and under California’s more idiosyncratic regulatory structure. Part IV discusses challenges faced by California insurers who may want to reduce rates in a timely way when technological improvements rapidly reduce risk.

…When working within the context of a file-and-use or use-and-file environment, AVs will present only modest challenges to an insurer that wants to write these policies. The main challenge will arise from the fact that the policy must be rated for a new technology that may have an inadequate base of experience for an actuary to estimate future losses.21 “Prior approval” states, like California, require that automobile rates be approved prior to their use in the marketplace.22 These states rely more on regulation than on competition to modulate insurance rates.23 In California, automobile insurance rates are approved in a two-step process. The first step is the creation of a “rate plan.”24 The rate plan considers the insurer’s entire book of business in the relevant line of insurance and asks the question: How much total premium must the insurer collect in order to cover the projected risks, overhead and permitted profit for that line?25 The insurer then creates a “class plan.” The class plan asks the question: How should different policyholders’ premiums be adjusted up or down based on the risks presented by different groups or classes of policyholders?26 Among other factors, the Department of Insurance requires that the rating factors comply with California law and be justified by the loss experience for the group.27 Rating a new technology with an unproven track record may include a considerable amount of guesswork. …California is the largest insurance market in the United States, and it is the sixth largest among the countries of the world.28 Cars are culture in this most populous state. There are far more insured automobiles in California than any other state.29
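The two-step rate-plan/class-plan process described above can be sketched as a toy calculation. All dollar amounts, class names, and factor values below are invented for illustration, not drawn from any actual filing or from California's regulations:

```python
def rate_plan(projected_losses, overhead, permitted_profit_rate):
    """Step 1: total premium the insurer must collect for the line
    to cover projected risks, overhead, and permitted profit."""
    return (projected_losses + overhead) * (1 + permitted_profit_rate)

def class_plan(total_premium, exposures, factors):
    """Step 2: allocate the total premium across policyholder classes
    in proportion to exposure-weighted relative risk factors.
    Returns the per-policy premium for each class."""
    weights = {cls: exposures[cls] * factors[cls] for cls in exposures}
    scale = total_premium / sum(weights.values())
    return {cls: scale * factors[cls] for cls in exposures}

# Hypothetical book: $8M projected losses, $1M overhead, 5% permitted profit.
total = rate_plan(projected_losses=8_000_000, overhead=1_000_000,
                  permitted_profit_rate=0.05)
premiums = class_plan(total,
                      exposures={"low_risk": 60_000, "high_risk": 40_000},
                      factors={"low_risk": 0.8, "high_risk": 1.5})
print(total, premiums)
```

The design mirrors the regulatory logic: step 1 fixes the pot, step 2 only redistributes it, so the per-class premiums always sum back to the approved total. An AV with no loss experience breaks step 2, because there is no credible factor to assign it.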

…Although adopted by the barest majority, [California’s] Proposition 103 [see previous discussion of its 3-part requirement for rating insurance premiums] may be amended by the legislature only by a two-thirds vote, and then only if the legislation “further[s] [the] purposes” of Proposition 103.68 Thus, Proposition 103 and the regulations adopted by the Department of Insurance are the matrix in which most (but not all) insurance is sold and regulated in California.69 …The most sensible approach to this dilemma, at least with respect to AVs, would be to abolish or substantially re-order the three mandatory rating factors. However, this is more easily said than done. As noted above, amending Proposition 103 requires a two-thirds vote of the legislature.160 Moreover, section 8(b) of the Proposition provides: “The provisions of this act shall not be amended by the Legislature except to further its purposes.”161 Both of these requirements can be formidable hurdles. Persistency discounts serve as an example. Most are aware that their insurer discounts their rates if they have been with the insurer for a period of time.162 This is called the “persistency discount.” The discount is usually justified on the basis that persistency saves the insurer the producing expenses associated with finding a new insured. If one wants to change insurers, Proposition 103 does not permit the subsequent insurer to match the persistency discount offered by the insured’s current insurer.163 Thus, the second insurer could not compete by offering the same discount. Changing insurers, then, was somewhat like a taxable event. The “tax” is the loss of the persistency discount when purchasing the new policy. The California legislature concluded that this both undermined competition and drove up the cost of insurance by discouraging the ability to shop for lower rates. 
…Despite these legislative findings, the Court of Appeal held the amendment invalid because, in the Court’s view, it did not further the purposes of Proposition 103.165 The Court also held that Proposition 103 vests only the Insurance Commissioner with the power to set optional rating factors.166 Thus, the legislature, even by a super majority, may not be authorized to adopt rating factors for auto insurance. Following this defeat in the courts, promoters of “portable persistency” qualified a ballot initiative to amend this aspect of Proposition 103. With a vote of 51.9% to 48.1%, the initiative failed in the June 8, 2010 election.167

…The State of Nevada recently adopted regulations for licensing the testing of AVs in the state. The regulations would require insurance in the minimum amounts required for other cars “for the payment of tort liabilities arising from the maintenance or use of the motor vehicle.”73 The regulation, however, does not suggest how the tort liability may arise. If there is no fault on the part of the operator or owner, then liability may arise, if at all, only for the manufacturer or supplier. Manufacturers and suppliers are not “insureds” under the standard automobile policy-at least so far. Thus, for the reasons stated above, owners, manufacturers and suppliers may fall outside the coverage of the policy.

…One possible approach would be to invoke the various doctrines of products liability law. This would attach the major liability to sellers and manufacturers of the vehicle. However, it is doubtful that this is an acceptable approach for several reasons. For example, while some accidents are catastrophic, fortunately most accidents cause only modest damages. By contrast, products liability lawsuits tend to be complex and expensive. Indeed, they may require the translation of hundreds or thousands of engineering documents-perhaps written in Japanese, Chinese or Korean…See In re Puerto Rico Electric Power Authority, 687 F.2d 501, 505 (1st Cir. 1982) (stating each party to bear translation costs of documents requested by it but cost possibly taxable to prevailing party). Translation costs of Japanese documents in range of $250,000, and translation costs of additional Spanish documents may exceed that amount.

…Commercial insurers of manufacturers and suppliers are not encumbered with Proposition 103’s unique automobile provisions,197 therefore they need not offer a GDD, nor need they conform to the ranking of the mandatory rating factors. To the extent that the risks of AVs are transferred to them, the insurance burden passed to consumers in the price of the car can reflect the actual, and presumably lower, risk presented by AVs. As noted above, however, for practical reasons some rating factors, such as annual miles driven and territory, cannot properly be reflected in the automobile price. Moving from the awkward and arbitrary results mandated by Proposition 103’s rating factors to a commercial insurance setting that cannot properly reflect some other rating factors is also an awkward trade-off. At best, it may be a choice of the least worst. Another viable solution might be to amend California Insurance Code section 660(a) to exclude from the definition of “policy” those policies covering liability for AVs (at least when operated in autonomous mode). Since Proposition 103 incorporates section 660(a), this would likely require a two-thirds vote of the legislature and the amendment would have to “further the purposes” of Proposition 103. Assuming a two-thirds vote could be mustered, the issue would then be whether the amendment furthers the purposes of the Proposition. To the extent that liability moves from fault-based driving to defect-based products liability, the purposes underlying the mandatory rating factors and the GDD simply cannot be accomplished. Manufacturers will pass these costs through to automobile buyers free of the Proposition’s restraints. Since the purposes of the Proposition, at least with respect to liability coverage,199 simply cannot be accomplished when dealing with self-driving cars, amending section 660(a) would not frustrate the purposes of Proposition 103.

…Filing a “complete rate application with the commissioner” is a substantial impediment to reducing rates. A complete rate application is an expensive, ponderous and time-consuming process. A typical filing may take three to five months before approval. Some applications have even been delayed for a year.205 In 2009, when insurers filed many new rate plans in order to comply with the new territorial rating regulations, delays among the top twenty private passenger auto insurers ranged from a low of 54 days (Viking) to a high of 558 days (USAA and USAA Casualty). Many took over 300 days (e.g., State Farm Mutual, Farmers Insurance Exchange, Progressive Choice).206 …In addition, once an application to lower rates is filed, the Commissioner, consumer groups, and others can intervene and ask that the rates be lowered even further.207 Thus, an application to lower a rate by 6% may invite pressure to lower it even further.208 If they “substantially contributed, as a whole” to the decision, a consumer group can also bill the insurance company for its legal, advocacy, and witness fees.209

…Unless ways can be found to conform Proposition 103 to this new reality, insurance for AVs is likely to migrate to a statutory and regulatory environment untrammeled by Proposition 103-commercial policies carried by manufacturers and suppliers. This migration presents its own set of problems. While the safety of AVs could be more fairly rated, other important rating factors, such as annual miles driven and territory, must be compromised. Whether this migration occurs will also depend on how liability rules do or do not adjust to a world in which people will nevertheless suffer injuries from AVs, but in which it is unlikely our present fault rules will adequately address compensation. If concepts of non-delegable duty, agency, or strict liability attach initial liability to owners of faulty cars with faultless drivers, the insurance burden will first be filtered through automobile insurance governed by Proposition 103. These insurers will then pass the losses up the distribution line to the insurers of suppliers and manufacturers that are not governed by Proposition 103. Manufacturers and suppliers will then pass the insurance cost back to AV owners in the cost of the vehicle. The insurance load reflected in the price of the car will pass through to automobile owners free of any of the restrictions imposed by Proposition 103. There will be no GDD, such as it is, no mandatory rating factors, and, depending on where the suppliers’ or manufacturers’ insurers are located, more flexible rating. One may ask: What is gained by this merry-go-round?

“‘Look Ma, No Hands!’: Wrinkles and Wrecks in the Age of Autonomous Vehicles”, Garza 2012

The benefits of these systems cannot be overestimated given that one-third of drivers admit to having fallen asleep at the wheel within the previous thirty days.31 …If the driver fails to react in time, it applies 40% of the full braking power to reduce the severity of the collision.39 In the most advanced version, the CMBS performs all of the functions described above, and it will also stop the car automatically to avoid a collision when traveling under ten miles-per-hour.40 Car companies are hesitant to push the automatic braking threshold too far out of fear that “fully ‘automatic’ braking systems will shift the responsibility of avoiding an accident from the vehicle’s driver to the vehicle’s manufacturer.”41…See Larry Carley, Active Safety Technology: Adaptive Cruise Control, Lane Departure Warning & Collision Mitigation Braking, IMPORT CAR (June 16, 2009), http://www.import-car.com/Article/58867/active_safety_technology_adaptive_cruise_control_lane_departure_warning__collision_mitigation_braking.aspx

…Automobile products liability cases are typically divided into two categories: “(1) accidents caused by automotive defects, and (2) aggravated injuries caused by a vehicle’s failure to be sufficiently ‘crashworthy’ to protect its occupants in an accident.”79 …For example, a car suffers from a design defect when a malfunction in the steering wheel causes a crash.81 Additionally, plaintiffs have alleged and prevailed on manufacturing-defect claims in cases where “unintended, sudden and uncontrollable acceleration” causes an accident.82 In such cases, plaintiffs have been able to recover under a “malfunction theory.”83 Under a malfunction theory, plaintiffs use a “res ipsa loquitur like inference to infer defectiveness in strict liability where there was no independent proof of a defect in the product.”84 Plaintiffs have also prevailed where design defects cause injury.85 For example, there was a proliferation of litigation in the 1970s and 1980s as a result of vehicles that were designed with a high center of gravity, which increased their propensity to roll over.86 Additionally, many design-defect cases arose in response to faulty transmissions that could inadvertently slip into gear, causing crashes and occupants to be run over in some cases.87
The two primary tests that courts use to assess the defectiveness of a product’s design are the consumer-expectations test and the risk-utility test.88 The consumer-expectations test focuses on whether “the danger posed by the design is greater than an ordinary consumer would expect when using the product in an intended or reasonably foreseeable manner.”89 …Thus, while an ordinary consumer can have expectations that a car will not explode at a stoplight or catch fire in a two-mile-per-hour collision, they may not be able to have expectations about how a truck should handle after striking a five- or six-inch rock at thirty-five miles-per-hour.92 Perhaps because the consumer-expectations test is difficult to apply to complex products, and we live in a world where technological growth increases complexity, the risk-utility test has become the dominant test in design-defect cases.93 …Litigation can also arise where a plaintiff alleges that a vehicle is not sufficiently “crashworthy.”104 Crashworthiness claims are a type of design-defect claim.105

…Since their advent and incorporation, seat belts have resulted in litigation-much of which has involved crashworthiness claims.136 In Jackson v. General Motors Corp., for example, the plaintiff alleged that as a result of a defectively designed seat belt, his injuries were enhanced.137 The defendant manufacturer argued that the complexity of seat belts foreclosed any consumer expectation,138 but the Tennessee Supreme Court noted that seat belts are “familiar products for which consumers’ expectations of safety have had an opportunity to develop,” and permitted the plaintiff to recover under the consumer-expectations test.139 Although manufacturers have been sued where seat belts render a car insufficiently crashworthy-as in cases where they fail to perform as intended or enhance injury-the incorporation of seat belts has reduced liability as well.140 This reduction comes in the form of the “seat belt defense.”141 The “seat belt defense” allows a defendant to present evidence about an occupant’s nonuse of a seat belt to mitigate damages or to defend against an enhanced-injury claim.142 Because seat belts are capable of reducing the number of lives lost and the overall severity of injuries sustained in crashes, it is argued that nonuse should protect a manufacturer from some claims.143 Although the majority rule is to prevent the admission of such evidence in enhanced-injury litigation, there is a growing trend toward admission.144

…Since their incorporation, consumers have sued manufacturers for defective cruise control systems that lead to injury. 171 Because of the complexity of cruise control technology, courts may not allow a plaintiff to use the consumer-expectations test.172 Despite the complexity of the technology, other courts allow plaintiffs to establish a defect using either the risk-utility test or the consumer-expectations test.173

…Under the consumer-expectations test, manufacturers will likely argue-as they historically have-that OAV technology is too complicated for the average consumer to have appropriate expectations about its capabilities.182 Commentators have stated that “consumers may have unrealistic expectations about the capabilities of these technologies . . . . Technologies that are engineered to assist the driver may be overly relied on to replace the need for independent vigilance on the part of the vehicle operator.”183 Plaintiffs will argue that, while the workings of the technology are concededly complex, the overall concept of autonomous driving is not.184 Like the car exploding at a stoplight or the car that catches fire in a two-mile-per-hour collision, the average consumer would expect autonomous vehicles to drive themselves without incident.185 This means that components that are meant to keep the car within a lane will do just that, and others will stop the vehicle at traffic lights.186 Where incidents occur, OAVs will not have performed as the average consumer would expect.187 …plaintiffs who purchase OAVs at the cusp of availability, and attempt to prove defect under the consumer-expectations test, are likely to face an uphill battle.194 But the unavailability of the consumer-expectations test will not be a significant detriment as plaintiffs can fall back on the risk-utility test.195 And as OAVs are increasingly incorporated, and users become more familiar with their capabilities, the consumer-expectations test will become more accessible to plaintiffs.196 Given the modern trend, plaintiffs are likely to face the risk-utility test.197

…Additionally, the extent to which injuries are “enhanced” by OAVs will be debated.228 Because the majority of drivers fail to fully apply their brakes prior to a collision,229 where an OAV only partially applies brakes, or fails to apply brakes at all, manufacturers and plaintiffs will disagree about the extent of enhancement.230 Manufacturers will argue that, absent the OAV, the result would have been the same or worse-thus, the extent to which the injuries of the plaintiff are “enhanced” is minimal.231 Plaintiffs will argue that, just like the presentation of crash statistics in a risk-utility analysis, this is a false choice.232 Like no-fire air bag claims, plaintiffs will contend that but for the malfunction of the OAV, their injuries would have been greatly reduced or nonexistent.233 As a result, any injuries sustained above that threshold should serve as a basis for recovery.234

…In products liability cases the “use of expert witnesses has grown in both importance and expense.”301 Because of the extraordinary cost of experts in products liability litigation, many plaintiffs are turned away because, even if they were to recover, the prospective award would not cover the expense of litigating the claim.302

…Although complex, OAVs function much like the cruise control that exists in modern cars. As we have seen with seat belts, air bags, and cruise control, manufacturers have always been hesitant to adopt safety technologies. Despite concerns, products liability law is capable of handling OAVs just as it has these past technologies. While the novelty and complexity of OAVs are likely to preclude plaintiffs from proving defect under the consumer-expectation test, as implementation increases this likelihood may decrease. Under a risk-utility analysis, manufacturers will stress the extraordinary safety benefits of OAVs, while consumers will allege that designs can be improved. In the end, OAV adoption will benefit manufacturers. Although liability will fall on manufacturers when vehicles fail, decreased incidences and severity of crashes will result in a net decrease in liability. Further, the combination of LDWS cameras and EDRs will drastically reduce the cost of litigation. By reducing reliance on experts for complex causation determinations, both manufacturers and plaintiffs will benefit. In the end, obstacles to OAV implementation are more likely to be psychological than legal, and the sooner that courts, manufacturers, and the motoring public prepare to confront these issues, the sooner lives can be saved.

“Self-driving cars can navigate the road, but can they navigate the law? Google’s lobbying hard for its self-driving technology, but some features may never be legal”, The Verge 14 December 2012

Google says that on a given day, they have a dozen autonomous cars on the road. This August, they passed 300,000 driver-hours. In Spain this summer, Volvo drove a convoy of three cars through 200 kilometers of desert highway with just one driver and a police escort.

…Bryant Walker Smith teaches a class on autonomous vehicles at Stanford Law School. At a workshop this summer, he put forward this thought experiment: the year is 2020, and a number of companies offer “advanced driver assistance systems” with their high-end model. Over 100,000 units have been sold. The owner’s manual states that the driver must remain alert at all times, but one night a driver - we’ll call him “Paul” - falls asleep while driving over a foggy bridge. The car tries to rouse him with alarms and vibrations but he’s a deep sleeper, so the car turns on the hazard lights and pulls over to the side of the road where another driver (let’s say Julie) rear-ends him. He’s injured, angry, and prone to litigation. So is Julie. That would be tricky enough by itself, but then Smith starts layering on complications. Another model of auto-driver would have driven to the end of the bridge before pulling over. If Paul had updated his software, it would have braced his seatbelt for the crash, mitigating his injuries, but he didn’t. The company could have pushed the update automatically, but management chose not to. Now, Smith asks the workshop, who gets sued? Or for a shorter list, who doesn’t?

…The financial stakes are high. According to the Insurance Research Council, auto liability claims paid out roughly $215 for each insured car, between bodily injury and property damage claims. With 250 million cars on the road, that’s $54 billion a year in liability. If even a tiny portion of those lawsuits are directed towards technologists, the business would become unprofitable fast.
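(Editorial aside, not from the article: the quoted per-car figure does roughly reproduce the cited aggregate, as a quick sanity check.)

```python
# Rough reconstruction of the article's aggregate liability figure.
per_car_payout = 215          # dollars of liability claims paid per insured car per year (article's figure)
cars_on_road = 250_000_000    # cars on US roads (article's figure)

total = per_car_payout * cars_on_road
print(f"${total / 1e9:.1f} billion per year")  # $53.8 billion per year, i.e. the ~$54B cited
```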

…Changing the laws in Europe would take a replay of the internationally ratified Vienna Convention (passed in 1968) as well as pushing through a hodgepodge of national and regional laws. As Google proved, it’s not impossible, but it leaves SARTRE facing an unusually tricky adoption problem. Lawmakers won’t care about the project unless they think consumers really want it, but it’s hard to get consumers excited about a product that doesn’t exist yet. Projects like this usually rely on a core of early adopters to demonstrate their usefulness - a hard enough task, as most startups can tell you - but in this case, SARTRE has to bring auto regulators along for the ride. Optimistically, Volvo told us they expect the technology to be ready “towards the end of this decade,” but that may depend entirely on how quickly the law moves. The less optimistic prediction is that it never arrives at all. Steve Shladover is the program manager of mobility at California’s PATH program, where they’ve been trying to make convoy technology happen for 25 years, lured by the prospect of fitting three times as many cars on the freeway. They were showing off a working version as early as 1997 (powered by a single Pentium processor), before falling into the same gap between prototype and final product. “It’s a solvable problem once people can see the benefits,” he told The Verge, “but I think a lot of the current activity is wildly optimistic in terms of what can be achieved.” When I asked him when we’d see a self-driving car, Shladover told me what he says at the many auto conferences he’s been to: “I don’t expect to see the fully-automated, autonomous vehicle out on the road in the lifetime of anyone in this room.”

…Many of Google’s planned features may simply never be legal. One difficult feature is the “come pick me up” button that Larry Page has pushed as a solution to parking congestion. Instead of wasting energy and space on urban parking lots, why not have cars drop us off and then drive themselves to park somewhere more remote, like an automated valet? It’s a genuinely good idea, and one Google seems passionate about, but it’s extremely difficult to square with most vehicle codes. The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured - a difficult task if there’s no one in the car. As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.

“Automated Vehicles are Probably Legal in the United States”, Bryant Walker Smith 2012

The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal….The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.

The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times … able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.

NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers. State vehicle codes probably do not prohibit-but may complicate-automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.

…This paper does not consider how the rules of tort could or should apply to automated vehicles-that is, the extent to which tort liability might shift upstream to companies responsible for the design, manufacture, sale, operation, or provision of data or other services to an automated vehicle. 6

…Because of the broad way in which the term and others like it are defined, an automated vehicle probably has a human “driver.” 295 Obligations imposed on that person may limit the independence with which the vehicle may lawfully operate. 296 In addition, the automated vehicle itself must meet numerous requirements, some of which may also complicate its operation. 297 Although three states have expressly established the legality of automated vehicles under certain conditions, their respective laws do not resolve many of the questions raised in this section. 298

…A brief but important aside: To varying degrees, states impose criminal or quasicriminal liability on owners who permit others to drive their vehicles. 359 In Washington, “[b]oth a person operating a vehicle with the express or implied permission of the owner and the owner of the vehicle are responsible for any act or omission that is declared unlawful in this chapter. The primary responsibility is the owner’s.” 360 Some states permit an inference that the owner of a vehicle was its operator for certain offenses; 361 Wisconsin provides what is by far the most detailed statutory set of rebuttable presumptions. 362 Many others punish owners who knowingly permit their vehicles to be driven unlawfully. 363 Although these owners are not drivers, they are assumed to exercise some judgment or control with respect to those drivers-an instance of vicarious liability that suggests an owner of an automated vehicle might be liable for merely permitting its automated operation. 364

…On the human side, physical presence would likely continue to provide a proxy for or presumption of driving. 366 In other words, an individual who is physically positioned to provide real-time input to a motor vehicle may well be treated as its driver. This is particularly likely at levels of automation that involve human input for certain portions of a trip. In addition, an individual who starts or dispatches an automated vehicle, who initiates the automated operation of that vehicle, or who specifies certain parameters of operation probably qualifies as a driver under existing law. That individual may use some device-anything from a physical key to the click of a mouse to the sound of her voice-to activate the vehicle by herself. She may likewise deliberately request that the vehicle assume the active driving task. And she may set the vehicle’s maximum speed or level of assertiveness. This working definition is unclear in the same ways that existing law is likely to be unclear. Relevant acts might occur at any level of the primary driving task, from a decision to take a particular trip to a decision to exceed any speed limit by ten miles per hour. 367 A tactical decision like speeding is closely connected with the consequences-whether a moving violation or an injury-that may result. But treating an individual who dispatches her fully automated vehicle as the driver for the entirety of the trip could attenuate the relationship between legal responsibility and legal fault. 368 Nonetheless, strict liability of this sort is accepted within tort law 369 and present, however controversially, in US criminal law. 370

On the corporate side, a firm that designs or supplies a vehicle’s automated functionality or that provides data or other digital services might qualify as a driver under existing law. The key element, as provided in the working definition, may be the lack of a human intermediary: A human who provides some input may still seem a better fit for a human-centered vehicle code than a company with other relevant legal exposure. However, as noted above, public outrage is another element that may motivate new uses of existing laws. 377

…The mechanism by which someone other than a human would obtain a driving license is unclear. For example, some companies may possess great vision, but “a test of the applicant’s eyesight” may nonetheless be difficult. 395 And while General Motors may (or may not) 396 meet a state’s minimum age requirement, Google would not. [See Google, Google’s mission is to organize the world’s information and make it universally accessible and useful, www.google.com/intl/en/about/company/. In some states, Google might be allowed to drive itself to school. See, e.g., Nev. Rev. Stat. § 483.270; Nev. Admin. Code § 483.200.]

And people say lawyers have no sense of humor.

[Link] Philosophical Parenthood

1 SquirrelInHell 30 May 2017 02:09PM

Divergent preferences and meta-preferences

4 Stuart_Armstrong 30 May 2017 07:33AM

Crossposted at the Intelligent Agents Forum.

In simple graphical form, here is the problem of divergent human preferences:

Here the AI either chooses A or ¬A, and as a consequence, the human then chooses B or ¬B.

There are a variety of situations in which this is or isn't a problem (when A or B or their negations aren't defined, take them to be the negation of what is defined):

  • Not problems:
    • A/¬A = "gives right shoe/left shoe", B/¬B = "adds left shoe/right shoe".
    • A =  "offers drink", ¬B = "goes looking for extra drink".
    • A = "gives money", B = "makes large purchase".
  • Potentially problems:
    • A/¬A = "causes human to fall in love with X/Y", B/¬B = "moves to X's/Y's country".
    • A/¬A = "recommends studying X/Y", B/¬B = "choose profession P/Q".
    • A = "lets human conceive child", ¬B = "keeps up previous hobbies and friendships".
  • Problems:
    • A = "coercive brain surgery", B = anything.
    • A = "extreme manipulation", B = almost anything.
    • A = "heroin injection", B = "wants more heroin".

So, what are the differences? For the "not problems", it makes sense to model the human as having a single reward R, variously "likes having a matching pair of shoes", "needs a certain amount of fluids", and "values certain purchases". Then all that the AI is doing is helping (or not) the human towards that goal.

As you move more towards the "problems", notice that they seem to involve two distinct human reward functions, R_A and R_¬A, and that the AI's actions seem to choose which one the human will end up with. In the spirit of humans not being agents, this seems to be the AI determining what values the human will come to possess.

 

Grue, Bleen, and agency

Of course, you could always say that the human actually has reward R = I_A·R_A + (1 − I_A)·R_¬A, where I_A is the indicator function as to whether the AI does action A or not.

Similarly to the grue and bleen problem, there is no logical way of distinguishing that "pieced-together" R from a more "natural" R (such as valuing pleasure, for instance). Thus there is no logical way of distinguishing the human being an agent from the human not being an agent, just from its preferences and behaviour.

However, from a learning and computational complexity point of view, it does make sense to distinguish "natural" R's (where R_A and R_¬A are essentially the same, despite the human's actions being different) from composite R's.

This allows us to define:

  • Preference divergence point: A preference divergence point is one where R_A and R_¬A are sufficiently distinct, according to some criteria of distinction.

Note that sometimes, R_A = R_A′ + R′ and R_¬A = R_¬A′ + R′: the two R_A and R_¬A overlap on a common piece R′, but diverge on R_A′ and R_¬A′. It makes sense to define this as a preference divergence point as well, if R_A′ and R_¬A′ are "important" in the agent's subsequent decisions. Importance being a somewhat hazy metric, which would, for instance, assess how much R′ reward the human would sacrifice to increase R_A′ and R_¬A′.
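One crude way to operationalise "sufficiently distinct" (an illustrative criterion of my own, not one the post commits to) is to compare the two branch rewards outcome by outcome against a threshold:

```python
# Illustrative sketch: flag a preference divergence point when the branch
# rewards R_A and R_notA differ by more than a threshold on some outcome.
# The threshold stands in for the post's unspecified "criteria of distinction".

def is_divergence_point(R_A, R_notA, threshold=0.5):
    outcomes = set(R_A) | set(R_notA)
    gap = max(abs(R_A.get(o, 0.0) - R_notA.get(o, 0.0)) for o in outcomes)
    return gap > threshold

# "Matching shoes": the same underlying reward either way -> not a divergence point.
print(is_divergence_point({"pair": 1.0}, {"pair": 1.0}))       # False

# "Heroin injection": the branches want very different things -> divergence point.
print(is_divergence_point({"heroin": 1.0}, {"heroin": -1.0}))  # True
```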

 

Meta-preferences

From the perspective of revealed preferences about the human, R(μ) = I_A·R_A + μ·(1 − I_A)·R_¬A will predict the same behaviour for all scaling factors μ > 0.

Thus at a preference divergence point, the AI's behaviour, if it were an R(μ) maximiser, would depend on the non-observed weighting between the two divergent preferences.

This is unsafe, especially if one of the divergent preferences is much easier to achieve a high value with than the other.
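The μ-invariance of the human's behaviour, versus the μ-dependence of a naive R(μ)-maximising AI, can be sketched in a few lines (toy rewards of my own invention, not numbers from the post):

```python
# Toy model: R(mu) = I_A*R_A + mu*(1 - I_A)*R_notA. The human picks B or notB
# inside whichever branch the AI chose; the AI picks the branch with the
# higher achievable weighted reward.

R_A = {"B": 1.0, "notB": 0.2}     # human rewards if the AI does A
R_notA = {"B": 0.1, "notB": 0.8}  # human rewards if the AI does not-A

def human_action(ai_does_A):
    """The human maximises the branch reward; rescaling by mu > 0 cannot change this."""
    branch = R_A if ai_does_A else R_notA
    return max(branch, key=branch.get)

def ai_chooses_A(mu):
    """A naive R(mu)-maximiser compares the best achievable reward in each branch."""
    return max(R_A.values()) >= mu * max(R_notA.values())

# Human behaviour is the same for every mu > 0...
assert human_action(True) == "B" and human_action(False) == "notB"

# ...but the AI's choice flips with the unobserved weight mu:
print(ai_chooses_A(0.5), ai_chooses_A(2.0))  # True False
```

So two observationally identical models of the human yield opposite AI policies, which is exactly why the unobserved weighting matters.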

Thus preference divergence points are moments when the AI should turn explicitly to human meta-preferences to distinguish between them.

This can be made recursive - if we see the human meta-preferences as explicitly weighting R_A versus R_¬A and hence giving R, then if there is a prior AI decision point Z, and, depending on what the AI chooses, the human meta-preferences will be different, this gives two reward functions R_Z = I_A·R_A + μ_Z·(1 − I_A)·R_¬A and R_¬Z = I_A·R_A + μ_¬Z·(1 − I_A)·R_¬A with different weights μ_Z and μ_¬Z.

If these weights are sufficiently distinct, this could identify a meta-preference divergence point and hence a point where human meta-meta-preferences become relevant.

[Link] "AIXIjs: A Software Demo for General Reinforcement Learning", Aslanides 2017

1 gwern 29 May 2017 09:09PM

Invitation to comment on a draft on multiverse-wide cooperation via alternatives to causal decision theory (FDT/UDT/EDT/...)

4 Caspar42 29 May 2017 08:34AM

I have written a paper about “multiverse-wide cooperation via correlated decision-making” and would like to find a few more people who’d be interested in giving a last round of comments before publication. The basic idea of the paper is described in a talk you can find here. The paper elaborates on many of the ideas and contains a lot of additional material. While the talk assumes a lot of prior knowledge, the paper is meant to be a bit more accessible. So, don’t be disheartened if you find the talk hard to follow — one goal of getting feedback is to find out which parts of the paper could be made more easy to understand.

If you’re interested, please comment or send me a PM. If you do, I will send you a link to a Google Doc with the paper once I'm done with editing, i.e. in about one week. (I’m afraid you’ll need a Google Account to read and comment.) I plan to start typesetting the paper in LaTeX in about a month, so you’ll have three weeks to comment. Since the paper is long, it’s totally fine if you don’t read the whole thing or just browse around a bit.

Open thread, May 29 - June 4, 2017

2 Thomas 29 May 2017 06:13AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Interview on IQ, genes, and genetic engineering with expert (Hsu)

4 James_Miller 28 May 2017 10:19PM

Bi-Weekly Rational Feed

15 deluks917 28 May 2017 05:12PM

Five Recommended Articles You Might Have Missed:

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action.

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading.

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections.

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication.

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate.

Scott:

Those Modern Pathologies by Scott Alexander - You can argue X is a modern pathology for almost any value of X. Scott demonstrates this by repeated example. Among other things "Aristotelian theory of virtue" and "Homer's Odyssey" get pathologized.

The Atomic Bomb Considered As Hungarian High School Science Fair Project by Scott Alexander - Ashkenazi Jewish Intelligence. An explanation of Hungarian dominance in physics and science in the mid 1900s.

Classified Ads Thread by Scott Alexander - Open thread where people post ads. People are promoting their websites and some of them are posting actual job ads among other things.

Open Thread 76 by Scott Alexander - Bi-weekly Open thread.

Postmarketing Surveillance Is Good And Normal by Scott Alexander - Scott shows why a recent Scientific American study does not imply the FDA is too lax.

Epilogue by Scott Alexander (Unsong) - All's Whale that Ends Whale.

Polyamory Is Not Polygyny by Scott Alexander - A quick review of how polyamory actually functions in the rationalist community.

Bail Out by Scott Alexander - "About a fifth of the incarcerated population – the top of the orange slice, in this graph – are listed as “not convicted”. These are mostly people who haven’t gotten bail. Some are too much of a risk. But about 40% just can’t afford to pay."

Rationalist:

Strong Men Are Socialist Reports A Study That Previously Reported The Opposite by Jacob Falkovich (Put A Number On It!) - Defense Against the Dark Statistical Arts. Jacob provides detailed commentary on a popular study and shows that the study's dataset can be used to support the opposite conclusion, with p = 0.0086.

Highly Advanced Tulpamancy 101 For Beginners by H i v e w i r e d - Application of lesswrong theory to the concept of the self. In particular the author applies "How an Algorithm Feels from the Inside" and "Map and Territory". Hive then goes into the details of creating and interacting with tulpas. "A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions"

Existential Risk From Ai Without An Intelligence by Alex Mennen (lesswrong) - Reasons why an intelligence explosion might not occur and reasons why we might have a problem anyway.

Dragon Army Theory Charter (30min Read) by Duncan Sabien (lesswrong) - A detailed plan for an ambitious military style rationalist house. The major goals include self-improvement, high quality group projects and the creation of a group with absolute trust in one another. The leader of the house is the curriculum director and head of product at CFAR.

The Story Of Our Life by H i v e w i r e d - The authors explain their pre-rationalist life and connection to the community. They then argue the rationalist community should take better care of one another. "Venture Rationalism".

Don't Believe in God by Tyler Cowen - Seven arguments for not believing in God. Among them: Lack of Bayesianism among believers, the degree to which people follow their family religion and the fundamental weirdness of reality.

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication.

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action.

Qualia Computing At Consciousness Hacking June 7th 2017 by Qualia Computing - Qualia Computing will present in San Francisco on June 7th at Consciousness Hacking. The event description is detailed and should give readers a good intro to Qualia Computing's goals. The author's research goal is to create a mathematical theory of pain/pleasure and be able to measure these directly from brain data.

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections.

Is Silicon Valley Real by Ben Hoffman (Compass Rose) - The old culture of Silicon Valley is mostly gone, replaced by something overpriced and materialist. Ben checks the details of Scott Alexander's list of six noble startups and finds only two in SV proper.

Why Is Harry Potter So Popular by Ozy (Thing of Things) - Ozy discusses a paper on song popularity in an artificial music market. Social dynamics had a big impact on song ratings. "Normal popularity is easily explicable by quality. Stupid, wild, amazing popularity is due to luck."

Design A Better Chess by Robin Hanson - Can we design a game that promotes even more useful honesty than chess? A link to Hanson's review of Gary Kasparov's book is included.

Deserving Truth 2 by Andrew Critch - How the author's values changed over time. Originally he tried to maximize his own positive sensory experiences. The things he cared about began to include more things, starting with his GF's experiences and values. He eventually rejects "homo-economus" thinking.

A Theory Of Hypocrisy by João Eira (Lettuce be Cereal) - Hypocrisy evolved as a way to solve free rider problems. "It pays to be a free rider. If no one finds out"

Building Community Institution In Five Hours a Week by Particular Virtue - Eight pieces of advice for running a successful meetup. The author and zir partner have been running lesswrong events for five years.

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate.

Ai Safety Three Human Problems And One Ai Issue by Stuart Armstrong (lesswrong) - Humans have poor predictions, don't know their values and aren't agents. Ai might be very powerful. A graph of which problems many Ai risk solutions target.

Recovering From Failure by mindlevelup - Avoid negative spirals, figure out why you failed, and a list of questions to ask yourself. Strategies: generate good alternatives, metacognitive affordances.

Review The Dueling Neurosurgeons by Sam Kean by Aceso Under Glass - Positive review. Author learned a lot. Speculation on a better way to teach science.

Principia Qualia Part 2: Valence by Qualia Computing - A mathematical theory of valence (what makes experience feel good or bad). Speculative but the authors make concrete predictions. Music plays a heavy role.

Im Not Seaing It by Robin Hanson - Arguments against seasteading.

EA:

One of the more positive surprises by GiveDirectly - Links post. Eight articles on Give Directly, Cash Transfer and Basic Income.

Returns Functions And Funding Gaps by the Center for Effective Altruism (EA forum) - Links to CEA's explanation of what "returns functions" are and how using them compares to "funding gap" model. They give some arguments why returns functions are a superior model.

Online Google Hangout On Approaches To by whpearson (lesswrong) - Community meeting to discuss Ai risk. Will use "Optimal Brainstorming Theory". Currently early stage. Sign up and vote on what times you are available.

Expected Value Estimates We Cautiously Took by The Oxford Prioritization Project (EA forum) - Details of how the four bayesian probability models were compared to produce a final decision. Some discussion of how assumptions affect the final result. Actual code is included.

Four Quantitative Models Aggregation And Final by The Oxford Prioritization Project (EA forum) - 80K hours, MIRI, Good Foods Institute and StrongMinds were considered. Decisions were made using concrete Bayesian EV calculations. Links to the four models are included.

Peer to Peer Aid: Cash in the News by GiveDirectly - 8 Links about GiveDirectly, cash transfer and basic income.

The Value Of Money Going To Different Groups by The Center for Effective Altruism - "It is well known that an extra dollar is worth less when you have more money. This paper describes the way economists typically model that effect, using that to compare the effectiveness of different interventions. It takes remittances as a particular case study."

Politics and Economics:

Study Of The Week Better And Worse Ways To Attack Entrance Exams by Freddie deBoer - Freddie's description of four forms of "test validity". The SAT and ACT are predictive of college grades, one should criticize them from other angles. Freddie briefly gives his socialist critique.

How To Destroy Civilization by Zvi Moshowitz - A parable about the game "Advanced Civilization". The difficulties of building a coalition to lock out bad actors. Donald Trump. [Extremely Partisan]

Trust Assimilation by Bryan Caplan - Data on how much immigrants and their children trust other people. How predictive is the trust level of their ancestral country. Caplan reviews papers and crunches the numbers himself.

There Are Bots, Look Around by Renee DiResta (ribbonfarm) - High frequency trading disrupted finance. Now algorithms and bots are disrupting the marketplace of ideas. What can finance's past teach us about politics' future?

The Behavioral Economics of Paperwork by Bryan Caplan - Vast Numbers of students miss financial aid because they don't fill out paperwork. Caplan explores the economic implications of the fact that "Humans hate filling out paperwork. As a result, objectively small paperwork costs plausibly have huge behavioral response".

The Nimby Challenge by Noah Smith - Smith makes an economic counterargument to the claims that building more housing wouldn't lower prices. Noah includes 6 lessons for engaging with NIMBYs.

Study Of The Week What Actually Helps Poor Students: Human Beings by Freddie deBoer - Personal feedback, tutoring and small group instruction had the largest positive effect. Includes Freddie's explanation of meta-analysis.

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading.

Impact Housing Price Restrictions by Marginal REVOLUTION - Link to a job market paper on the economic effects of housing regulation.

Me On Anarcho Capitalism by Bryan Caplan - Bryan is interviewed on the Rubin Report about Ancap.

Campbells Law And The Inevitability Of School Fraud by Freddie deBoer - Rampant grade inflation. Lowered standards. Campbell's law says that once you base policy on a metric, that metric will start being gamed.

Nimbys Economic Theories: Sorry Not Sorry by Phil (Gelman's Blog) - Gelman got a huge amount of criticism on his post on whether building more housing will lower prices in the Bay. He responds to some of the criticism here. Long for Gelman.

Links 8 by Artir (Nintil) - Link Post. Physics, Technology, Philosophy, Economics, Psychology and Misc.

Arguing About How The World Should Burn by Sonya Mann (ribbonfarm) - Two different ways to decide who to exclude. One focuses on process, the other on content. Scott Alexander and Nate Soares are quoted. Heavily [Culture War].

Seeing Like A State by Bayesian Investor - A quick review of "Seeing like a state".

Whats Up With Minimum Wage by Sarah Constantin (Otium) - A quick review of the literature on the minimum wage. Some possible explanations for why raising it does not reduce employment.

Misc:

Entirely Too Many Pieces Of Unsolicited Advice To Young Writer Types by Freddie deBoer - Advice about not working for free, getting paid, interacting with editors, why 'Strunk and White' is awful, and taking writing seriously.

Conversations On Consciousness by H i v e w i r e d - The author is a plural system. Their hope is to introduce plurality by doing the following: "First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness."

Notes On Debugging Clojure Code by Eli Bendersky - Dealing with Clojure's cryptic exceptions, Finding which form an exception comes from, Trails and Logging, Deeper tracing inside cond forms

How to Think Scientifically About Scientists’ Proposals for Fixing Science by Andrew Gelman - Gelman asks how to scientifically evaluate proposals to fix science. He considers educational, statistical, research practice and institutional reforms. Excerpts from an article Gelman wrote, the full paper is linked.

Call for Volunteers who Want to Exercize by Aceso Under Glass - Author is looking for volunteers who want to treat their anxiety or mood disorder with exercise.

Learning Deep Learning the Easy Way with Keras (lesswrong) - Articles showing the power of neural networks. Discussion of ML frameworks. Resources for learning.

Unsong of Unsongs by Scott Aaronson - Aaronson went to the Unsong wrap party. A quick review of Unsong. Aaronson talks about how Scott Alexander defended him with "Untitled".

2016 Spending by Mr. Money Mustache - Full details of last year's budget. Spending broken down by category.

Amusement:

And Another Physics Problem by protokol2020 - Two planets. Which has the higher average surface temperature?

A mysterious jogger by Jacob Falkovich (Put A Number On It!) - A mysterious jogger. Very short fiction.

Podcast:

Persuasion And Control by Waking Up with Sam Harris - "surveillance capitalism, the Trump campaign's use of Facebook, AI-enabled marketing, the health of the press, Wikileaks, ransomware attacks, and other topics."

Raj Chetty: Inequality, Mobility and the American Dream by Conversations with Tyler - "As far as I can tell, this is the only coverage of Chetty that covers his entire life and career, including his upbringing, his early life, and the evolution of his career, not to mention his taste in music"

Is Trump's incompetence saving us from his illiberalism? by The Ezra Klein Show - Political Scientist Yascha Mounk. "What Mounk found is that the consensus we thought existed on behalf of democracy and democratic norms is weakening."

The Moral Complexity Of Genetics by Waking Up with Sam Harris - "Sam talks with Siddhartha Mukherjee about the human desire to understand and manipulate heredity, the genius of Gregor Mendel, the ethics of altering our genes, the future of genetic medicine, patent issues in genetic research, controversies about race and intelligence, and other topics."

Esther Perel by The Tim Ferriss Show - The Relationship Episode: Sex, Love, Polyamory, Marriage, and More

Lant Pritchett by Econtalk - Growth, and Experiments

Meta Learning by Tim Ferriss - Education, accelerated learning, and my mentors. Conversation with Charles Best the founder and CEO of DonorsChoose.org

Bryan Stevenson On Why The Opposite Of Poverty Isn't Wealth by The Ezra Klein Show - Founder and executive director of the Equal Justice Initiative. Justice for the wrongly convicted on Death Row.

10 'incredible' weaknesses of the mental health system

7 arunbharatula 28 May 2017 04:22AM

I aim to identify some of the mental health workforce's credibility issues in this article. This may inform your prevention and treatment strategy as a mental health consumer, or your practice if you work in mental health.


Mental health is the strongest determinant of quality of life at a later age. And the pursuit of happiness predicts both positive emotions and fewer depressive symptoms. People who prioritize happiness are more psychologically able. In times of crisis, some turn to the mental health system for support. But how credible is the support available? Here are 10 categories of shortcomings that the mental health sector faces today:

 

1. Institutional credibility

 

Headspace's evaluations indicate it’s ineffective, and Headspace is evaluated better than many services out there. This isn’t academic: attendees who report that their mental health has not improved since using the service will trust the mental health system less, and with good reason.

 

2. Network credibility

 

There is an evidence base for selecting a type of therapy (psychodynamic, cognitive-behavioural, etc.) for a particular constellation of mental symptoms. If you work in mental health, have you ever made a referral on the basis of both symptomatology and theoretical orientation?

 

3. ‘Walk the talk’ credibility

 

Social workers, nurses, medical doctors, and psychiatrists abuse substances and incur mental ill-health at among the highest rates of any occupation. For instance, the psychiatrist burnout rate is 40%. Mental health consumers may perceive clinicians as hypocritical or unwilling (...or too willing) to swallow their own medicine.

 

4. Academic credibility

 

Psychology is mired in error-riddled research and myth-ridden textbooks. Broadly, most published research is wrong. And questionable research practices are common, which biases the relevant evidence.

 

The difference between a well designed experiment and a poorly designed psychotherapy experiment is large. To quote the pseudonymous physician Scott Alexander:

 

‘Low-quality psychotherapy trials in general had a higher effect size (SMD = 0.74) than high-quality trials (SMD = 0.22), p < 0.001 ... Effect sizes for the low-quality trials are triple those for the high-quality trials.’

 

5. Credibility of treatments

 

Are treatments becoming less effective over time? Cognitive behavioural therapy is a common treatment for various mental illnesses. It is the most researched psychotherapy. However, the more evidence piles up, the less effective that psychotherapy appears to be... and the same goes for antidepressants.

 

Why are outdated treatments still used? Over the 19th and 20th centuries, Austrian neurologist Sigmund Freud famously founded ‘psychoanalysis’, a school of psychotherapy that, together with the other 'psychodynamic' psychotherapies, focuses on the influence of early experience on human behaviour and emotion. Freud's ideas challenged fundamental assumptions about human psychology. In particular, he suggested that our conscious mind is just the tip of the iceberg of our identities.

 

Today Freud is the subject of jokes and derision. Many of his testable ideas have been proven false. 'When tested, psychoanalysis was shown to be less effective than placebo.' Yet many psychologists and psychiatrists continue to practice psychoanalysis.

 

Psychology is a rather unsettled science. One estimate of the time after which half of the ‘knowledge’ in the field of psychology is overturned or superseded (its ‘half-life’) is just 7.5 years. Interestingly, this time-span appears to be falling, which would suggest the field is becoming increasingly less reliable. The subfield of psychoanalysis bucks the trend: it has over double the parent field’s half-life. Why?
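To make the half-life figure concrete, here is a minimal sketch of the standard exponential-decay reading of a "half-life of knowledge". The decay model and the helper function are illustrative assumptions, not from the cited estimate; only the 7.5-year figure comes from the text above.

```python
# Sketch: fraction of a field's current findings expected to survive
# after a given number of years, assuming simple exponential decay.

def fraction_still_valid(years: float, half_life: float) -> float:
    """Fraction of findings not yet overturned after `years`."""
    return 0.5 ** (years / half_life)

# With psychology's estimated 7.5-year half-life, after 15 years
# only a quarter of today's findings would be expected to stand:
print(round(fraction_still_valid(15, 7.5), 2))  # -> 0.25
```

On this reading, a half-life that is *falling* means each cohort of findings decays faster than the last, which is what the paragraph above takes as a sign of decreasing reliability.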

 

How do other subfields of psychology fare? Psychopharmacology sits at the intersection of psychiatric drugs and brain chemistry. Knowledge in psychopharmacology is overturned at a higher rate than in the field in general. Typically the ‘half-life of knowledge’ argument aims to discount psychology relative to ‘harder’ sciences like physics.

 

Psychological therapies are confusing and unnecessarily fragmented: According to The Handbook of Counseling Psychology:

 

‘Meta-analyses of psychotherapy studies have consistently demonstrated that there are no substantial differences in outcomes among treatments.’

 

Meta-analysis is a research technique that quantitatively combines many individual pieces of relevant research on a particular topic. There is 'little evidence to suggest that any one psychological therapy consistently outperforms any other for any specific psychological disorder'.

 

This is sometimes called the 'Dodo bird verdict', after a scene in Alice in Wonderland where every competitor in a race is declared a winner and given prizes. So, what is one to make of the best-vetted clinical guidelines, which indicate that particular therapies are more appropriate for particular mental conditions?

 

Guidelines are considered a higher order of evidence than a ‘handbook’ by some, and vice versa by others. Could an expert, or indeed an amateur, credibly conclude that all therapies are ‘equal’, or that they are ‘different’, armed with either body of evidence? Could a similar case be made for, say, antibiotics? Actually yes, or so the evidence suggests in the case of antibiotics.

 

Finally, psychological therapies are administered haphazardly. Eclectically combining elements from different psychological therapies is inefficient. But it happens. Clinicians who want to ‘mix and match’ should ‘integrate’ components of different psychotherapies using established formulae. When I hear that someone’s theoretical orientation is ‘psychodynamically informed’ or similar, that’s a red flag for eclecticism.

 

6. Economic credibility

 

Therapists have a financial incentive to re-traumatise patients.

 

7. Social credibility

 

'The benefits of psychotherapy may be no better than the benefits of talking to a friend'.

 

8. Credibility of counsel

 

Mental health professionals offer their clients and the community general counsel and advice. But if I were to ask a given mental health professional about the value of kindness or love of learning, they would almost certainly indicate it’s worthwhile. Pop psychology is pervasive. And why not? People were interested in psychology long before it was a science. But misconceptions about psychology infiltrate mental health care practice.

 

Researchers who have reported on the character traits of people with high and low life satisfaction found something like this:

 

Character strengths that DO predict life satisfaction: zest, curiosity, hope, humour, perspective.

Character strengths that DO NOT predict life satisfaction: appreciation of beauty and excellence, creativity, kindness, love of learning.

 

Meanwhile, research that separates its findings by gender looks different:

 

Character strengths that predict life satisfaction:

Men: humour, fairness, perspective, creativity.

Women: zest, gratitude, hope, appreciation of beauty and love.

 

Would you receive nuanced, evidence-based advice when soliciting general counsel from your treatment provider?

 

9. Practitioner credibility

 

Consider the therapist factors that relate to a patient's success in therapy:

 

What does predict success: compliance with a treatment manual (but that compromises a therapist’s relationship skills and supportiveness); female therapists; ethnic similarity of therapist and patient; ethnic sensitivity of therapist to patient; therapists with more training; therapist disclosure about themselves; therapist interpretation of their relationship with the patient, their motives and their psychological processes; therapist coping patterns; therapist values; therapists' cultural beliefs; therapist sense of control.

What there aren’t stable conclusions about: interpersonal style of therapist; verbal style of therapist; nonverbal styles of therapist; combined verbal and nonverbal patterns; which treatment manual is used; therapist directness; therapist personality; therapist emotional wellbeing; therapist beliefs; therapist dominance; therapist sense of what a patient needs to know.

 

Are mental health services hiring based on the factors that predict a consumer’s success in therapy? Are they training for the right skills, and ignoring those that are irrelevant?

 

10. Diagnostic credibility

 

Imprecise measurement and the lack of gold standards for validating diagnoses mean that definitions tend to drift over time, even though, per the evidence, response to treatment does not vary across culture.

 

45% of Australians will experience mental illness over their lifetime. Whether that mental ill-health is transient, long-term or lifelong matters to the individual and for public health. To illustrate: experts suggest that those who have had two depressive episodes in recent years, or three episodes over their lifetime, be treated on an ongoing basis to prevent recurrent depression.

 

'At least 60% of individuals who have had one depressive episode will have another, 70% of individuals who have had two depressive episodes will have a third, and 90% of individuals with three episodes will have a fourth episode. '

- APA 

 

Without reliable diagnoses, how can one estimate their risk of relapse into depression?
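The APA figures quoted above are conditional probabilities, so they can be chained to estimate the cumulative risk of repeated episodes. The sketch below simply multiplies them; the function name is illustrative, and the chaining assumes each figure applies independently to whoever reaches the preceding episode, which the quote does not explicitly state.

```python
# Sketch: chaining the quoted APA recurrence figures.
# P(2nd | 1st) = 0.60, P(3rd | 2nd) = 0.70, P(4th | 3rd) = 0.90.

def cumulative_recurrence(conditional_probs):
    """Probability of reaching the final episode, given the first occurred."""
    p = 1.0
    for q in conditional_probs:
        p *= q
    return p

# Estimated chance that someone with one depressive episode eventually has four:
print(round(cumulative_recurrence([0.60, 0.70, 0.90]), 3))  # -> 0.378
```

So even under these figures, roughly 38% of people with a single episode would go on to a fourth, which is why the reliability of the initial diagnosis matters so much for deciding on ongoing treatment.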

[Link] Researchers studying century-old drug in potential new approach to autism

0 morganism 27 May 2017 09:16PM

View more: Next