created: 15 Sep 2011; modified: 21 May 2017; status: in progress; confidence: unlikely; importance: 7
One does not care to acknowledge the mistakes of one’s youth.1
It is salutary for the soul to review past events and perhaps keep a list of things one no longer believes, since such crises are rare2 and so easily pass from memory (there is no feeling of being wrong, only of having been wrong3). One does not need an elaborate ritual (fun as they are to read about) to change one’s mind, but the changes must happen. If you are not changing, you are not growing4; no one has won the belief lottery and has a monopoly on truth5. To the honest inquirer, all surprises are pleasant ones6.
Changes
Only the most clever and the most stupid cannot change.7
This list is not for specific facts of which there are too many to record, nor is it for falsified predictions like my belief that George W. Bush would not be elected (for those see Prediction markets or my PredictionBook.com page), nor mistakes in my private life (which go into a private file), nor things I never had an initial strong position on (Windows vs Linux, Java vs Haskell). The following are some major ideas or sets of ideas that I have changed my mind about:
Religion
For I count being refuted a greater good, insofar as it is a greater good to be rid of the greatest evil from oneself than to rid someone else of it. I don’t suppose that any evil for a man is as great as false belief about the things we’re discussing right now…8
I think religion was the first subject in my life that I took seriously. As best as I can recall at this point, I have no deconversion story or tale to tell, since I don’t remember ever seriously believing9 - the stories in the Bible or at my Catholic church were interesting, but they were obviously fiction to some degree. I wasn’t going to reject religion out of hand because some of the stories were made-up (any more than I believed George Washington didn’t exist because the story of him chopping down a cherry tree was made-up), but the big claims didn’t seem to be panning out either:
- My prayers received no answers of any kind, not even a voice in my head
- I didn’t see any miracles or intercessions like I expected from an omnipotent loving god
The latter was probably due to the cartoons I watched on TV, which seemed quite sensible to me: a powerful figure like a god would act in all sorts of visible ways. If there really was a god, that was something that ought to be quite obvious to anyone who had eyes to see. I had more evidence that Santa or China existed than that God did, which seemed backwards. Explanations for the absence of divine action ranged from the strained to ones so ludicrously bad that they corroded what little faith I possessed10. I would later recognize my own doubts in passages of skeptical authors like Edward Gibbon and his Decline and Fall:
…From the first of the fathers to the last of the popes, a succession of bishops, of saints, of martyrs, and of miracles, is continued without interruption; and the progress of superstition was so gradual, and almost imperceptible, that we know not in what particular link we should break the chain of tradition. Every age bears testimony to the wonderful events by which it was distinguished, and its testimony appears no less weighty and respectable than that of the preceding generation, till we are insensibly led on to accuse our own inconsistency, if in the eighth or in the twelfth century we deny to the venerable Bede, or to the holy Bernard, the same degree of confidence which, in the second century, we had so liberally granted to Justin or to Irenaeus. If the truth of any of those miracles is appreciated by their apparent use and propriety, every age had unbelievers to convince, heretics to confute, and idolatrous nations to convert; and sufficient motives might always be produced to justify the interposition of Heaven. And yet, since every friend to revelation is persuaded of the reality, and every reasonable man is convinced of the cessation, of miraculous powers, it is evident that there must have been some period in which they were either suddenly or gradually withdrawn from the Christian church. Whatever aera is chosen for that purpose, the death of the apostles, the conversion of the Roman empire, or the extinction of the Arian heresy, the insensibility of the Christians who lived at that time will equally afford a just matter of surprise. They still supported their pretensions after they had lost their power. Credulity performed the office of faith; fanaticism was permitted to assume the language of inspiration, and the effects of accident or contrivance were ascribed to supernatural causes. The recent experience of genuine miracles should have instructed the Christian world in the ways of Providence, and habituated their eye (if we may use a very inadequate expression) to the style of the divine artist…Whatever opinion may be entertained of the miracles of the primitive church since the time of the apostles, this unresisting softness of temper, so conspicuous among the believers of the second and third centuries, proved of some accidental benefit to the cause of truth and religion. In modern times, a latent and even involuntary scepticism adheres to the most pious dispositions. Their admission of supernatural truths is much less an active consent than a cold and passive acquiescence.
I have seen these reasons mocked as simplistic and puerile, and I was certainly aware that there were subtle arguments which intelligent philosophers believed resolved the theodicy (such as Alvin Plantinga’s free will defense, which is valid but I do not consider it sound since it requires the meaningless concept of free will) and that Christians of various stripes had various complicated explanations for why this world was consistent with there being a God (if for no other reason than that I observed there were theists as intelligent or more intelligent than me). But the basic concept seemed confused, free will was an even more dubious plank to go on, and in general the entire complex of historical claims, metaphysics, and activities of religious people did not seem convincing. (Richard Carrier’s 2011 Why I Am Not A Christian expresses the general tenor of my misgivings, especially after I checked out everything the library had on higher Biblical criticism, Josephus, the Gnostics, and early Christianity - Je n’avais pas besoin de cette hypothèse-là, basically.)
So I never believed (although it was obvious enough that there was no point in discussing this since it might just lead to me going to church more and sitting on the hard wooden pews), but there was still the troubling matter of Heaven & Hell: those infinities meant I couldn’t simply dismiss religion and continue reading about dinosaurs or Alcatraz. If I got religion wrong, I would have gotten literally the most important possible thing wrong! Nothing else was as important - if you’re wrong about a round earth, at worst you will never be a good geographer or astronomer; if you’re wrong about believing in astrology, at worst you waste time and money; if you’re wrong about evolution and biology, at worst you endanger your life; and so on. But if you’re wrong about religion, wasting your life is about the least of the consequences. And everyone accepts a religion or at least the legitimacy of religious claims, so it would be unspeakably arrogant of a kid to dismiss religion entirely - that sort of evidence is simply not there1112. (Oddly enough, atheists - who are not immediately shown to be mistaken or fools - are even rarer in books and cartoons than they are in real life.)
Kids actually are kind of skeptical if they have reason to be skeptical, and likewise will believe all sorts of strange things if the source was previously trustworthy13. This is as it should be! Kids cannot come prewired with 100% correct beliefs, and must be able to learn all sorts of strange (but true) things from reliable authorities; these strategies are exactly what one would advise. It is not their fault that some of the most reliable authorities in their lives (their parents) are mistaken about one major set of beliefs. They simply have bad epistemic luck.
So I read the Bible, which veered from boring to incoherent to disgusting. (I became a fan of the Wisdom literature, however, and still periodically read the Book of Job, Ecclesiastes, and Proverbs.) That didn’t help much. Well, maybe Christianity was not the right religion? My elementary school library had a rather strange selection of books which included various Eastern texts or anthologies (I remember in particular one anthology on meditation, which was a hodge-podge of religious instruction manuals, essays, and scientific studies on meditation - that took me a long time to read, and it was only in high school and college that I really became comfortable reading psychology papers). I continued reading in this vein for years, in between all my more normal readings. The Koran was interesting and in general much better than the Bible. Shinto texts were worthless mythologizing. Taoism had some very good early texts (the Chuang-tzu in particular) but then bizarrely degenerated into alchemy. Buddhism was strange: I rather liked the general philosophical approach, but there were many populist elements in Mahayana texts that bothered me. Hinduism had a strange beauty, but my reaction was similar to that of the early translators, who condemned it for sloth and lassitude. I also considered the Occult seriously and began reading the Skeptical literature on that and related topics (see the later section).
By this point in my reading, I had reached middle school; this summary makes my reading sound more systematic than it was. I still hadn’t found any especially good reason to believe in God or any gods, and had a jaundiced view of many texts I had read. Higher Biblical criticism was a shock when I finally became capable of reading it and source-texts like Josephus: it’s amazing just how uncertain, variable, self-contradictory, edited, and historically inconsistent both the Old and New Testaments are. There are hundreds of major variants of the various books and countless thousands of textual variants (many of them in key theological passages), leaving traces of ideological fabrication throughout, besides the casual falsification of many historical events for dramatic effect or faked coherence with Old Testament prophecies (we all know of false claims like the Massacre of the Innocents, or Jesus being born of a virgin to fit an erroneous translation, or the sun stopping, or the Temple veil being ripped, or the Roman census). But it is dramatic to find that Jesus was an utter nobody even in Josephus - who name-drops constantly - where the only mention of Jesus seems to have been falsified (by Christians, of course). And speaking of Josephus, it’s hard not to be impressed that while one recites every Sunday how Jesus was "crucified under Pontius Pilate" - offered as proof of Jesus’s historicity, that he was not a story or mythology - the Pontius Pilate in Josephus is a corrupt, merciless Roman official who doesn’t hesitate to get his hands bloody if necessary and who doesn’t match the Gospel story in the slightest. Indeed, reading through Josephus’s accounts of constant turmoil in Jerusalem - of various false prophets and messiahs and rebels (hmm…) and the difficulties the authorities faced - one can’t believe the entire story of Pilate, because to not either immediately execute Jesus or leave the whole matter until well after the very politically sensitive holiday is to assume both the Roman and Jewish officials suffered a sudden attack of the stupids and a collective amnesia of how they usually dealt with such problems. The whole story is blatant rubbish! Yet my religion teachers kept occasionally emphasizing how Jesus was a historical figure. The mythicist case is not compelling, but the mythicists do a good job of showing how elements of the story were standard tropes for the ancient world, with many parts of the narrative having multiple precedents or weirdly-interpreted Old Testament justifications. In short, reading through higher Biblical criticism, it’s no surprise that it is such anathema to modern Christian sects, and that scriptural inerrancy developed in reaction to it.
At some point, I shrugged and gave up and decided I was an atheist14 because certainly I felt nothing15. Theology was interesting to some extent, but there were better things to read about. (My literary interest in Taoism and philosophical interest in Buddhism remain, but I put no stock in any supernatural claims they might make.)
The American Revolution
In middle school, we were assigned a pro-con debate about the American Revolution; I happened to be on the pro side, but as I read through the arguments, I became increasingly disturbed and eventually decided that the pro-Revolution arguments were weak or fallacious.
The Revolution was a bloodbath with ~100,000 casualties or fatalities, followed by 62,000 Loyalist/Tory refugees fleeing the country for fear of retaliation and their expropriation (the ones who stayed did not escape persecution); this is a butcher’s bill that does not seem justified in the least by anything in Britain or America’s subsequent history (what, were the British going to randomly massacre Americans for fun?), even now with a population of >300 million, much less back when the population was 1/100th the size. Independence was granted to similar English colonies at the smaller price of waiting a while: Canada was essentially autonomous by 1867 (less than a century later), and Australia - first settled in 1788 - had autonomous colonies not long behind, with the current Commonwealth formed by 1901. (Nor did Canada or Australia suffer worse at England’s hands during the waiting period than, say, America in that time suffered at its own hands.) In the long run, independence may have been good for the USA, but this would be due to sheer accident: the British were holding the frontier at the Appalachians (see the Royal Proclamation of 1763), and Napoleon likely would not have been willing to engage in the Louisiana Purchase with English colonies inasmuch as he was at war with England. (Assuming we see this as a good thing: Bryan Caplan describes the Purchase as removing the last real check on American aggression against the Indians.)
Neither of these is a very strong argument; the British could easily have revoked the Proclamation in the face of colonial resistance (and in practice did16), and Napoleon could not have held onto New France for very long against the British fleets. The argument from freedom is a buzzword or unsupported by the facts - Canada and Australia are hardly hellhole bastions of totalitarianism, and are ranked by Freedom House as being as free as the USA. (Steve Sailer asks, "Yet how much real difference did the very different political paths of America and Canada make in the long run?"; could we have been Canada?)
And there are important arguments for the opposite, that America would have been better off under British rule - Britain ended slavery very early on and likely would have ended slavery in the colonies as well. (Some have argued that with continued control of the southern colonies, Britain would have not been able to do this; but the usual arguments for the Revolution center on the tyranny of Britain - so was the dog wagging the tail or the tail the dog?) The South crucially depended on England’s tacit support (seeing the South as a counterweight to the dangerous North?), so the American Civil War would either never have started or have been suppressed very quickly. The Civil War would also have lacked its intellectual justification of states’ rights if the states had remained Crown colonies. The Civil War was so bloody and destructive17 that avoiding it is worth a great deal indeed. And then there comes WWI and WWII. It is not hard to see how America remaining a colony would have been better for both Europe and America.
Aside from the better outcomes for slaves and Indians, it’s been suggested that America would have benefited from maintaining a parliamentary constitutional-monarchy democracy rather than inventing its particular president-oriented republic (a view that has some more appeal in the 2000s, but is more broadly supported by the popularity of parliamentary democracies globally and their apparent greater stability & success compared to the more American-style systems in unstable & coup-prone Latin America).
Since that paradigm shift in middle school, my view has changed little:
- Crane Brinton’s The Anatomy of Revolution confirmed my beliefs with statistics about the economic class of participants: naked financial self-interest is not a very convincing argument for plunging a country into war, given that England had incurred substantial debt defending and expanding the colonies, and the colonists’ tax burden - which they endlessly complained of - was comically tiny compared to England proper. One of the interesting points Brinton makes is that, contrary to universal belief, revolutions do not tend to occur at times of poverty or increasing wealth inequality; indeed, before the American Revolution, the colonists were less taxed, wealthier & more equal than the English.
- Continuing the economic theme, the burdens on the American colonists such as the Navigation Acts are now considered to have been not burdensome at all, but negligible or positive, especially compared to independence. Famed Scottish economist Adam Smith supported the Navigation Acts as a critical part of the Empire’s defense18 (which included the American colonies; but see again the colonies’ gratitude for the French-Indian War). Their light burden has become the economic-history consensus since the discussion was sparked in the 1960s (eg. Thomas 1965, Thomas 1968): in 1994, 198 economic historians were surveyed with several questions on this point, finding that:
  - 132 disagreed with the proposition that "One of the primary causes of the American Revolution was the behavior of British and Scottish merchants in the 1760s and 1770s, which threatened the abilities of American merchants to engage in new or even traditional economic pursuits."
  - 178 agreed or partially agreed that "The costs imposed on the colonists by the trade restrictions of the Navigation Acts were small."
  - 111 disagreed that "The economic burden of British policies was the spark to the American Revolution."
  - 117 agreed or partially agreed that "The personal economic interests of delegates to the Constitutional Convention generally had a [substantial] effect on their voting behavior."
- Mencius Moldbug discussed a good deal of primary source material which supported my interpretation. I particularly enjoyed his description of the Pulitzer-winning The Ideological Origins of the American Revolution, a study of the popular circulars and essays (of which Thomas Paine’s Common Sense is only the most famous): the author finds that the rebels and their leaders believed there was a conspiracy by English elites to strip them of their freedoms and crush the Protestants under the yoke of the Church of England. Bailyn points out that no traces of any such conspiracy have ever been found in the diaries or memorandums or letters of said elites. Hence the Founding Fathers were, as Moldbug claimed, exactly analogous to 9/11 Truthers or Birthers. Moldbug further points out that reality has directly contradicted their predictions, as both the Monarchy and the Church of England have seen their power continuously decrease to their present-day ceremonial status, a diminution already in progress long before the American Revolution.
- Possibly on Moldbug’s advice, I then read volume 1 of Murray Rothbard’s Conceived in Liberty. I was unimpressed. Rothbard seems to think he is justifying the Revolution as a noble libertarian thing (except for those other scoundrels who just want to take over); but all I saw were scoundrels.
Attempting to take an outside view - ignoring the cult built up around the Founding Fathers and viewing them as a cynical foreigner might - the Fathers do not necessarily come off well. For example, one can compare George Washington to Robert Mugabe: both led a guerrilla revolution of British colonies against the country which had built their colony up into a wealthy regional powerhouse, and they or their allies employed mobs and terrorist tactics; both oversaw hyperinflation of their currency; both expropriated politically disfavored groups and engaged in give-aways to supporters (Mugabe redistributed land to black supporters; Washington approved Alexander Hamilton’s assumption of states’ war-debts - an incredible windfall for the Hamilton-connected speculators, who supported the Federalist party); both were overwhelmingly voted into office and commanded mass popularity even after major failures of their policies became evident (economic collapse & hyperinflation for Mugabe, the Whiskey Rebellion for Washington), being hailed as fathers of their countries; and both wound up one of, if not the, wealthiest men in the country (Mugabe’s fortune has been estimated at anywhere from $3b to $10b; Washington’s, in inflation-adjusted terms, has been estimated at $0.5b). Jeremy Bentham amusingly eviscerated the Declaration of Independence’s complaints.
Communism
In roughly middle school as well, I was very interested in economic injustice and guerrilla warfare, which naturally led me straight into the communist literature. I grew out of this when I realized that while I might not be able to pinpoint the problems in communism, a lot of that was due to the sheer obscurity and bullshitting in the literature (I finally gave up with Empire, concluding the problem was not me: Marxism really was that intellectually worthless), and that the practical results with economies & human lives spoke for themselves: the ideas were tried in so many countries by so many groups in so many different circumstances over so many decades that if there were anything to them, at least one country would have succeeded. In comparison, even with the broadest sample including hellholes like the Belgian Congo, capitalism can still point to success stories like Japan.
(Similar arguments can be used for science and religion: after early science got the basic inductive empirical formula right, it took off and within 2 or 3 centuries had conquered the intellectual world and assisted the conquest of much of the real world too; in contrast, 2 or 3 centuries after Christianity began, its texts were beginning to finally congeal into the beginnings of a canon, it was minor, and the Romans were still making occasional efforts to exterminate this irksome religion. Charles Murray, in a book I otherwise approve of, attempts to argue in Human Accomplishment that Christianity was a key factor in the great accomplishments of Western science & technology by some gibberish involving human dignity; the argument is intrinsically absurd - Greek astronomy and philosophy were active when Christianity started, St. Paul literally debated the Greek philosophers in Athens, and yet Christianity did not spark any revolution in the 100s, or 200s, or 300s, or for the next millennium, nor the next millennium and a half. It would literally be fairer to attribute science to William the Conqueror, because that’s a gap one-third the size and there’s at least a direct line from William the Conqueror to the Royal Society! If we try to be fairer and say it’s late Christianity as exemplified by the philosophy of Thomas Aquinas - as influenced by non-Christian thought like Aristotle as it is - that still leaves us a gap of something like 300-500 years. Let us say I would find Murray’s argument of more interest if it were coming from a non-Christian…)
The Occult
This is not a particular error but a whole class of them. I was sure that the overall theistic explanations were false, but surely there were real phenomena going on? I’d read up on individual things like Nostradamus’s prophecies or the Lance of Longinus, check the skeptics’ literature, and disbelieve; rinse and repeat until I finally dismissed the entire area, with some exceptions like the mental & physical benefits of meditation. One might say my experience was a little like Susan Blackmore’s career as recounted in The Elusive Open Mind: Ten Years of Negative Research in Parapsychology, sans the detailed experiments. (I am still annoyed that I was unable to disbelieve the research on Transcendental Meditation until I read more about the corruption, deception, and falsified predictions of the TM organization itself.) Fortunately, I had basically given up on occult things by high school, before I read Eco’s Foucault’s Pendulum, so I don’t feel too chagrined about this.
Fiction
I spend most of my time reading; I also spent most of my time in elementary, middle, and high school reading. What has changed is what I read: I now read principally nonfiction (philosophy, economics, random sciences, etc.), where I used to read almost exclusively fiction. (I would include one nonfiction book in my stacks of books to check out, on a sort of "vegetables" approach: eat your vegetables and you can have dessert.) I, in fact, aspired to be a novelist. I thought fiction was a noble task, the highest production of humanity, and writers some of the best people around, producing immortal works of truth. Slowly this changed. I realized fiction changed nothing, and when it did change things, it was as oft as not for the worse. Fiction promoted simplification and a focus on sympathetic examples, and I recognized how much of my own infatuation with the Occult (among other errors) could be traced to fiction. What a strange belief, that you could find truths in lies.19 And there are so many of them, too! So very many. (I wrote one essay on this topic, Culture is not about Esthetics.) I still produce some fiction these days, but mostly when I can’t help it or as a writing exercise.
Nicotine
I changed my mind about nicotine in 2011. I had naturally assumed, in line with the usual American cultural messages, that there was nothing good about tobacco and that smoking is deeply shameful, proving that you are a selfish lazy short-sighted person who is happy to commit slow suicide (taking others with him via second-hand smoke) and cost society a fortune in medical care. Then some mentions of nicotine as useful came up and I began researching it. I’m still not a fan of smoking, and I regard any tobacco with deep trepidation, but the research literature seems pretty clear: nicotine enhances mental performance in multiple domains and may have some minor health benefits to boot. Nicotine sans tobacco seems like a clear win. (It amuses me that of the changes listed here, this is probably the one people will find most revolting and bizarre.)
Centralized darknet-markets
I overestimated the stability of Bitcoin+Tor darknet markets such as Silk Road: I was aware that the centralization of the first-generation DNMs (SR/BMR/Atlantis/Sheep) meant that the site operators had a strong temptation to steal all deposits & escrows, but I thought that the value of future escrow commissions provided enough incentive to make rip-and-run scams rare - certainly they were fairly rare during the Silk Road 1 era.
After Silk Road was shut down in October 2013, SR turned out to be highly unusual: it was both less hacked than most markets, and it seems that whatever his (many) other failings, Ross Ulbricht genuinely believed his own ideology and so was running Silk Road out of principle rather than greed (which also explains why he didn’t retire despite a fortune larger than he could spend in a lifetime). Attracted by the sudden void in a large market, and by the FBI’s press releases crowing over how many hundreds of millions of dollars Silk Road had earned, dozens of new markets sprang up to fill the void. Many then proceeded to scam users, often taking advantage of the standard seller bonds (sellers would deposit a large sum as a guarantee against scamming buyers in the early period when they were accepting orders but most packages would not yet have arrived); or, alternately, they would be hacked due to the operators’ get-rich-quick incompetence and - rather than refund users from future profits - decide to steal everything the hacker didn’t get. As of April 2014, it seems users have mostly learned caution, and the shift to multisig escrow removes the need to trust market operators (and hence the risk from the operators or hackers), so matters may finally be stabilizing.
I think my original point is still correct: markets can be trusted as long as the discounted present value of their future earnings exceeds the amount they can steal. My mistake here was overestimating the net present value: I didn’t realize that site operators had such high discount rates (one, PBF, pulled its scam after perhaps a few thousand dollars’ worth of Bitcoin had been deposited, despite positive initial reviews) and that there was so much risk involved (the Bitcoin exchange rate, arrest, hacking; all exacerbated by the incompetence of many site operators).
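To make the incentive calculus concrete, here is a minimal sketch in Python of the break-even condition (the commission rate, volume, survival probability, and discount rate are all illustrative assumptions, not estimates): an operator keeps the market running only while the discounted expected value of future commissions exceeds the escrow he could steal today.

```python
# Sketch: when is it rational for a darknet-market operator to exit-scam?
# All parameters are illustrative assumptions, not empirical estimates.

def npv_future_commissions(monthly_volume, commission_rate,
                           monthly_survival, monthly_discount, months=120):
    """Discounted expected commissions, where `monthly_survival` folds in
    the risk of arrest, hacking, or market collapse each month."""
    total = 0.0
    p_alive = 1.0
    for t in range(1, months + 1):
        p_alive *= monthly_survival
        total += monthly_volume * commission_rate * p_alive / (1 + monthly_discount) ** t
    return total

escrow_balance = 2_000_000  # USD the operator could steal right now

# A patient operator facing little monthly risk keeps the market running...
print(npv_future_commissions(5_000_000, 0.05, monthly_survival=0.99,
                             monthly_discount=0.01))   # ~ $11M > $2M escrow: keep running
# ...but with a high perceived risk of arrest/hacking, scamming dominates:
print(npv_future_commissions(5_000_000, 0.05, monthly_survival=0.85,
                             monthly_discount=0.05))   # ~ $1.1M < $2M escrow: rip-and-run
```

On these made-up numbers, the same market flips from trustworthy to scam-worthy purely because the operator’s perceived survival odds and discount rate worsen - which is the mistake described above: the inequality was right, the discount rates were not.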
This mistake led to complacency on my part in archiving the markets & forums: if you expect a market to be around for years, there is no particular need to try to mirror them weekly. And so while I have good coverage of the DNMs post-December-2013, I am missing most of the markets before then.
Potential changes
The mind cannot foresee its own advance.20
There are some things I used to be certain about, but I am no longer certain either way; I await future developments which may tip me one way or the other.
Near Singularity
I am no longer certain that the Singularity is near.
In the 1990s, all the numbers seemed to be ever-accelerating. Indeed, I could feel with Kurzweil that The Singularity Is Near. But an odd thing happened in the 2000s (a dreary decade, distracted by the dual dissipation of Afghanistan & Iraq). The hardware kept getting better mostly in line with Moore’s Law (troubling as the flight to parallelism is), but the AI software didn’t seem to keep up. I am only a layman, but it looks as if all the AI applications one might cite in 2011 as progress are just old algorithms made practical by newer hardware. And economic growth slowed down, and the stock market ticked along, barely maintaining itself. The Human Genome Project completely fizzled out, with interesting insights and not much else. (It’s great that genome sequencing has improved exactly as promised, but what about everything else? Where are our embryo selections, our germ-line engineering, our universal genetic therapies, our customized drugs?21) The pharmaceutical industry has reached such diminishing returns that even the optimists have noticed the problems in the drug pipeline, problems so severe that it’s hard to wave them away as due to that dratted FDA or ignorant consumers. As of 2007, the increases in longevity for the elderly22 in the US have continued to shrink each year and are probably still slowing, which isn’t good news for those hoping to reach Aubrey de Grey’s escape velocity; and medicine has been a repeated disappointment even to forecasting-savvy predictors (the ’90s and the genetic revolution being especially remarkable for the lack of concrete improvements). Kurzweil published an evaluation of his predictions up to ~2009 with great fanfare and self-congratulation, but reading through them, I was struck by how many he weaseled out on (claiming as a hit anything that existed in a lab or a microscopic market segment, even though in context he had clearly expected it to be widespread) and how often they failed due to unintelligent software.
And there are many troubling long-term metrics. I was deeply troubled to read Charles Murray’s Human Accomplishment pointing out a long-term decline in discoveries per capita (despite ever-increasing scientists and artists per capita!), even after he corrected for everything he could think of. I didn’t see any obvious mistakes. Tyler Cowen’s The Great Stagnation twisted the knife further, and then I read Joseph Tainter’s The Collapse of Complex Societies. I have kept notes since and see little reason to expect a general exponential upwards over all fields, including the ones minimally connected to computing. (Peter Thiel’s The End of the Future makes a distinction between the progress in computers and the failure in energy; he also makes an interesting link between the lack of progress and the many recent speculative bubbles in The Optimistic Thought Experiment.) The Singularity is still more likely than not, but these days, I tend to look towards emulation of human brains via scanning of plastinated brains as the cause. Whole brain emulation is not likely for many decades, given the extreme computational demands (even if we are optimistic and take the Whole Brain Emulation Roadmap figures, one would not expect an upload until the 2030s), and it’s not clear how useful an upload would be in the first place. It seems entirely possible that the mind would run slowly, be able to self-modify only in trivial ways, and in general be a curiosity more akin to the Space Shuttle than a pivotal moment in human history deserving of the title Singularity.
Counter-point
I respect my own opinion, but at the same time I know I am not immune to common beliefs; so it bothers me to see stagnation and pessimistic ideas become more widespread, because this means I may just be following a trend. I did not like agreeing with any of Wired’s hyperbolic forecasts back in the 1990s, and I do not like agreeing with Peter Thiel or Neal Stephenson now. One of Buffett’s classic sayings is that "if they [investors] insist on trying to time their participation in equities, they should try to be fearful when others are greedy and greedy when others are fearful." What grounds do I have for being "greedy" now, when many are being "fearful"? What Kahneman-style pre-mortem would I give for explaining why the Singularity might indeed be Near?
First, one could point out that a number of technological milestones seem to be catching up, after long stagnations. From 2009-2012, there were a number of unexpected achievements: Google’s robotic car astounded me; the long AI-resistant game of Go is falling to AIs using Monte Carlo tree techniques, which are closing in on the Go masters (I expect computers to take the world championship by 2030); online education seems to be starting to realize its promise (eg. the success of Khan Academy); private space exploitation is doing surprisingly well (as are Tesla electric cars, which seem to be moving from playthings to perhaps mass-market cars); smartphones - after a decade of being crippled by telecoms and limited computing power - are becoming ubiquitous and desktop replacements. Old cypherpunk dreams like anonymous digital cash and online darknet markets have swung into action and given rise to active & growing communities. In the larger picture, the Long Depression beginning in 2008 has wreaked havoc on young people, but China has not imploded while continuing to move up the quality chain (replacing laborers with robots), and far from this being "the crisis of capitalism" or "the end of capitalism", in general, global life is going on. Even Africa, while its population size is exploding, is growing economically - perhaps thanks to universally available cheap cellphones. Peak Oil continues to be delayed by new developments like fracking and the resultant gluts of natural gas (the US now exports energy!), although the long-term scientific-productivity trends seem to still point downwards.
What many of these points have in common is that their forebears germinated for a long time in niches and did not live up to the forecasts of their proponents - smartphones, for example, have been expected to revolutionize everything since at least the 1980s (by members of the MIT Media Lab in particular). And indeed, they are revolutionizing everything, worldwide, 30 years later. This exemplifies a line attributed to Roy Amara:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
A number of current but disappointing trends may be disappointing only in the short run, on the flat part of their respective exponential or sigmoid development curves23. For example, DNA sequencing costs have been plummeting, and sequencing a whole human genome will likely be <$100 by 2015; this has been an incredible boon for basic research and our knowledge of the world, but so far the applications have been fairly minimal - but this may not be true forever, with new projects starting up tackling topics of the greatest magnitude, like using thousands of genomes to search for the thousands of alleles which each affect intelligence a tiny bit. Were embryo selection for intelligence to become viable (as there is no reason to believe would not be possible once the right alleles have been identified) and every baby could be born with IQs >130, society would change.
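As a toy illustration of the Amara effect (a minimal sketch; the logistic parameters and the two forecasting rules are arbitrary assumptions chosen only to make the shapes visible): an enthusiast extrapolating ten times the current trend from the flat early part of a sigmoid overshoots the next few years, yet even he undershoots the realized curve decades out, while the skeptic extrapolating the current trend undershoots it catastrophically.

```python
# Toy Amara effect: linear extrapolation from the flat early part of a
# sigmoid overestimates the short run and underestimates the long run.
# Parameters are arbitrary, for illustration only.
import numpy as np

def logistic(t, ceiling=100.0, midpoint=30.0, rate=0.25):
    """Sigmoid adoption/capability curve."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.arange(0, 61)
actual = logistic(t)

# Forecasts made at year 10 from the local year-over-year slope:
slope = logistic(10) - logistic(9)
hype_forecast = logistic(10) + slope * (t - 10) * 10   # enthusiast: 10x the trend
trend_forecast = logistic(10) + slope * (t - 10)       # skeptic: current trend forever

for year in (15, 50):
    print(year, round(actual[year], 1),
          round(hype_forecast[year], 1), round(trend_forecast[year], 1))
# At year 15 the hype forecast far exceeds reality (short-run overestimate);
# by year 50 even it is dwarfed by the realized sigmoid (long-run underestimate).
```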
Does this apply to AI? At least two of the examples seem clear-cut instances of an Amara effect: Go-playing AIs were for decades toys easily beaten by bad amateurs until Monte Carlo Tree Search was introduced in 200624, and then a decade after that they were superhuman; while the first DARPA Grand Challenge in 2004 was a debacle in which no car finished and the best car managed to travel a whopping 7 miles before getting stuck on a rock - 8 years later, the conversation has suddenly shifted from "will Go AIs ever reach human level?" or "will self-driving cars ever be able to cope with the real world?" to the simple question: "when?"
One of the ironies is that I am sure a "pure" AI is possible; but the AI can’t be developed before the computing power is available (we humans are just not good enough at math & programming to achieve it without running code), which means the AI will be developed either simultaneously with, or after, enough computing power becomes available. If the latter - if the AI is not run at the exact instant that there is enough processing power available - ever more computing power in excess of what is needed (by definition) builds up. It is like a dry forest roasting in the summer sun: the longer the wait until the match, the faster and hotter the wildfire will burn25. Perhaps paradoxically, the longer I live without seeing an AI of any kind, the more schizophrenic my forecasts will appear to an outsider who hasn’t carefully thought about the issue - I will predict with increasingly high confidence that the future will be boring and normal (because the continued non-appearance makes it increasingly likely AI is impossible; see the hope function & AI), that AI is more likely to be created in the next year (because the possibilities are being exhausted as time passes), and that the changes I predict will become ever more radical!
This set of estimates is obviously consistent with an appearance of stagnation: each year small advances build up, but no big breakthroughs appear - until they do.
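A minimal Bayesian sketch of that hope-function reasoning (the 50% prior and the uniform 100-year arrival window are illustrative assumptions): each AI-free year pushes down the posterior that AI is possible at all, while pushing up the chance of AI arriving next year given that it is still coming.

```python
# Hope-function sketch: P(AI possible) falls with each AI-free year,
# while P(AI next year | possible, not yet) rises as the window shrinks.
# The 0.5 prior and 100-year arrival window are illustrative assumptions.

def update(prior_possible=0.5, window=100, years_elapsed=0):
    # If AI is possible, assume its arrival is uniform over `window` years,
    # so P(no AI in first t years | possible) = (window - t) / window.
    p_no_ai_if_possible = (window - years_elapsed) / window
    evidence = prior_possible * p_no_ai_if_possible + (1 - prior_possible)
    posterior_possible = prior_possible * p_no_ai_if_possible / evidence
    # Hazard: P(AI next year | possible and not yet) = 1 / remaining years.
    hazard_if_possible = 1 / (window - years_elapsed)
    return posterior_possible, posterior_possible * hazard_if_possible

for t in (0, 25, 50, 75, 90):
    p_possible, p_next_year = update(years_elapsed=t)
    print(f"year {t}: P(possible)={p_possible:.2f}, P(AI next year)={p_next_year:.3f}")
# P(possible) drifts down from 0.50 toward 0 while the unconditional
# next-year probability rises - both halves of the apparent "paradox".
```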
Neo-Luddism
Almost in the same way as earlier physicists are said to have found suddenly that they had too little mathematical understanding to be able to master physics; we may say that young people today are suddenly in the position that ordinary common sense no longer suffices to meet the strange demands life makes. Everything has become so intricate that for its mastery an exceptional degree of understanding is required. For it is not enough any longer to be able to play the game well; but the question is again and again: what sort of game is to be played now anyway?–Wittgenstein’s Culture and Value, MS 118 20r: 27.8.1937
The idea of technological unemployment - permanent structural unemployment and a jobless recovery - used to be dismissed contemptuously as the "Luddite fallacy". (There are models where technology does produce permanent unemployment, and quite plausible ones too; see Autor et al 2003 and Autor & Hamilton26, and Krugman’s commentary pointing to recent data showing the "hollowing out" and deskilling predicted by the Autor model, which is also consistent with the long-term decline in teenage employment due to immigration. Martin Ford has some graphs explaining the complementation-substitution model; a toy version is sketched below.) But ever since the Internet bubble burst, it’s been looking more and more likely, with scads of evidence for it since the housing bubble, like the otherwise peculiar changes in the value of college degrees27. (This is closely related to my grounds for believing in a distant Singularity.) When I look around, it seems to me that we have been suffering tremendous unemployment for a long time. When Alex Tabarrok writes "If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries", I think: isn’t that correct? If you’re not a student, you’re retired; if you’re not retired, you’re disabled28; if you’re not disabled, perhaps you are institutionalized; if you’re not that, maybe you’re on welfare, or just unemployed.
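Here is that toy version of the Autor-style complementation-substitution logic (all numbers are illustrative assumptions, not calibrated to anything): machines substitute perfectly for routine labor, so once the machine rental price falls below the routine wage, routine wages are pinned to the machine price, while non-routine workers - whose work the cheap machines complement - gain.

```python
# Toy Autor-style task model: output needs routine + non-routine tasks.
# Machines substitute perfectly for routine labor; falling machine prices
# therefore cap the routine wage while raising non-routine earnings.
# All numbers are illustrative assumptions.

def wages(machine_price, routine_productivity=20.0, nonroutine_base=30.0):
    # Competition: no one pays a routine worker more than the machine costs.
    routine_wage = min(routine_productivity, machine_price)
    # Cheap routine input complements non-routine work (more/better tools),
    # modeled crudely as a bonus growing as machines get cheaper.
    nonroutine_wage = nonroutine_base + (routine_productivity - routine_wage)
    return routine_wage, nonroutine_wage

for price in (40, 25, 15, 8, 2):   # machine rental price falling over time
    r, n = wages(price)
    print(f"machine price {price:>2}: routine wage {r:>4.1f}, non-routine wage {n:>4.1f}")
# Routine wages are flat until machines undercut them, then fall with the
# machine price while non-routine wages rise - the "hollowing out" pattern.
```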
Compare now to most of human history, or just the 1300s:
- every kid in special ed would be out working on the farm; there would, if only from reduced moral hazard29, be fewer disabled than now (federal Supplemental Security Income alone supports 8 million Americans)
- everyone in college would be out working (because the number of students was a rounding error and they didn’t spend very long in higher education to begin with). Indeed, education and healthcare are a huge chunk of the US economy - and both face serious questions about how much good, exactly, they do and whether they are grotesquely inefficient or just inefficient
- retirees didn’t exist outside the tiny nobility
- "guard labor" - people employed solely to control and ensure the productivity of others - has increased substantially (Bowles & Jayadev 2006 claim US guard labor has gone from 6% of the 1890 labor force to 26% in 2002; this is not due to manufacturing declines30); examples of guard labor:
  - standing militaries were unusual (although effective when needed31); the US maintains the second-largest active military in the world - ~1.5m (~0.5% of the population) - which employs millions more with its $700 billion budget32 and is a key source of pork33 and make-work
  - prisons were mostly for temporary incarceration pending trial or punishment34; the US currently has ~2.3m prisoners (nearly 1% of the population!), and perhaps another 4.9m on parole/probation. (See also the relationship of psychiatric imprisonment with criminal imprisonment.) That’s impressive enough, but as with the military, consider how many people are tied down solely by the need to maintain and supply the prison system - prison wardens, builders, police, etc.
- people worked hard; the 8-hour day and 5-day workweek were major hard-fought changes (a plank of the Communist Manifesto!). Switching from a 16-hour to an 8-hour day means we are half-retired already and need many more workers than otherwise.
In contrast, Americans now spend most of their lives not working.
The unemployment rate looks good - 9% is surely a refutation of the Luddite fallacy! - until you look inside the meat factory and see that that is the best rate - for college graduates actively looking for jobs - and not the overall-population rate including those who have given up. Economist Alan Krueger writes of the employment-to-population ratio (which covers only 15-64 year olds):
Tellingly, the employment-to-population rate has hardly budged since reaching a low of 58.2% in December 2009. Last month it stood at just 58.4%. Even in the expansion from 2002 to 2007 the share of the population employed never reached the peak of 64.7% it attained before the March-November 2001 recession.
Let’s break it down by age group using FRED:
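A minimal sketch of how one might pull those series with pandas-datareader (the FRED series IDs below are my best guesses and should be double-checked against the FRED catalog before relying on them):

```python
# Sketch: pull US employment-population ratios by age group from FRED.
# Requires `pandas-datareader` (and matplotlib for the plot); the series
# IDs are my best guesses from the FRED catalog and should be verified.
from pandas_datareader import data as pdr

series = {
    "EMRATIO": "All, 16+",             # civilian employment-population ratio
    "LNS12300012": "16-19 years",      # assumed ID for the teenage ratio
    "LNS12300060": "25-54 years",      # prime-age ratio
    "LNS12324230": "55 years & over",  # assumed ID for the older-worker ratio
}

df = pdr.DataReader(list(series), "fred", start="1990-01-01")
df = df.rename(columns=series)
print(df.tail())

ax = df.plot(title="US employment-population ratio by age group (FRED)")
ax.set_ylabel("% of population employed")
ax.figure.savefig("epop-by-age.png")
```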
Scott Sumner correctly points out that the employment:population ratio itself doesn’t intrinsically tell us about whether things are going well or poorly - one could imagine a happy and highly automated country with a basic income guarantee where only 20% of the population works or an agricultural country where everyone works and is desperately poor. What matters more is wealth inequality combined with employment ratio: how many people are either rich enough that not having a job is not a disaster or at least can get a job?
What do you suppose the employment-to-population rate was in 1300 in the poorer 99% of the world population (remembering how homemaking and raising children is effectively a full-time job)? I’d bet it was a lot higher than the world record in 2005, Iceland’s 84%. And Iceland is a very brainy place. What are the merely average with IQs of 100-110 supposed to do? (Heck, what is the half of America with IQs in that region or below supposed to do? Learn C++ and statistics so they can work on Wall Street?) If you want to see the future, look at our youth; where are summer jobs these days? Gregory Clark comments sardonically (although he was likely not thinking of whole brain emulation) in A Farewell to Alms:
Thus, while in preindustrial agrarian societies half or more of the national income typically went to the owners of land and capital, in modern industrialized societies their share is normally less than a quarter. Technological advance might have been expected to dramatically reduce unskilled wages. After all, there was a class of workers in the preindustrial economy who, offering only brute strength, were quickly swept aside by machinery. By 1914 most horses had disappeared from the British economy, swept aside by steam and internal combustion engines, even though a million had been at work in the early nineteenth century. When their value in production fell below their maintenance costs they were condemned to the knacker’s yard.
Technology may increase total wealth under many models, but there’s a key loophole in the idea of "Pareto-improving" gains - they don’t ever have to make some people better off. And a Pareto-improvement is a good result! Many models don’t guarantee even that - it’s perfectly possible to become worse off (see the horses above, and the fate of humans in Robin Hanson’s "crack of a future dawn" scenario). Such doctrinairism is not useful:
Like experts in many fields who give policy advice, the authors show a preference for first-best, textbook approaches to the problems in their field, while leaving other messy objectives acknowledged but assigned to others. In this way, they are much like those public finance economists who oppose tax expenditures on principle, because they prefer direct expenditure programs, but do not really analyze the various difficulties with such programs; or like trade economists who know that the losers from trade surges need to be protected but regard this as not a problem for trade policy. –Lawrence H. Summers, "Comments on ‘The Contradiction in China’s Gradualist Banking Reforms’", Brookings Papers on Economic Activity 2006, 2, 149-162
This is closely related to what I’ve dubbed the "Luddite fallacy" fallacy (along the lines of the Pascal’s Wager Fallacy Fallacy): technologists who are extremely intelligent, and who have worked most of their lives only with fellow potential Mensans, confidently say that if there is structural unemployment ("and I’m being generous in granting you Luddites even this contention"), well, better education and training will fix that!
It’s a little hard to appreciate what a stupendous mixture of availability bias, infinite optimism, and plain denial of intelligence differences this all is. Marc Andreessen offers an example in 2011:
Secondly, many people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution. This is a tragedy since every company I work with is absolutely starved for talent. Qualified software engineers, managers, marketers and salespeople in Silicon Valley can rack up dozens of high-paying, high-upside job offers any time they want, while national unemployment and underemployment is sky high. This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again. There’s no way through this problem other than education, and we have a long way to go.
I see. So all we have to do with all the people with <120 IQs, who struggled with algebra and never made it to calculus (when they had the self-discipline to learn it at all), is just to train them into world-class software engineers and managers who can satisfy Silicon Valley standards; and we have to do this for the first time in human history. Gosh, is that all? Why didn’t you say so before - we’ll get on that right away! Or consider an anonymous data scientist recorded in the NYT: "He found my concerns to be amusing. ‘People can get work creating SEO-optimized niche blogs,’ he said. ‘Or they can learn to code.’"
Thomas Friedman:
Every middle-class job today is being pulled up, out or down faster than ever. That is, it either requires more skill or can be done by more people around the world or is being buried - made obsolete - faster than ever. Which is why the goal of education today, argues Wagner, should not be to make every child "college ready" but "innovation ready" - ready to add value to whatever they do…more than ever, our kids will have to "invent" a job. (Fortunately, in today’s world, that’s easier and cheaper than ever before.) Sure, the lucky ones will find their first job, but, given the pace of change today, even they will have to reinvent, re-engineer and reimagine that job much more often than their parents if they want to advance in it… What does that mean for teachers and principals? [Tony Wagner:] "All students should have digital portfolios to show evidence of mastery of skills like critical thinking and communication, which they build up right through K-12 and post-secondary. Selective use of high-quality tests, like the College and Work Readiness Assessment, is important. Finally, teachers should be judged on evidence of improvement in students’ work through the year - instead of a score on a bubble test in May. We need lab schools where students earn a high school diploma by completing a series of skill-based ‘merit badges’ in things like entrepreneurship. And schools of education where all new teachers have ‘residencies’ with master teachers and performance standards - not content standards - must become the new normal throughout the system."
These sentiments or goals are so breathtakingly delusional (have these people ever met the average American? or tried to recall their middle school algebra? or thought about how many of their classmates actually learned anything?) that I find myself wondering (despite my personal injunctions against resorting to ad hominems) that surely no one could believe such impossible things, either before or after breakfast; surely an award-winning New York Times columnist or a famous Harvard educational theorist, surely these people cannot seriously believe the claims they are supposedly making, and there is some more reasonable explanation - like they have been bribed by special interests, or are expounding propaganda designed to safeguard their lucrative profits from populist redistribution, or are pulling a prank in very bad taste, or (like President Reagan) are tragically in the grips of a debilitating brain disease?
But the sentiments are so consistent and people who’ve met proponents of the training panacea say they are genuine about it (eg Scott Alexander thought the retraining people were just Internet strawmen until he met them), that it must be what they think.
But moving on past Andreessen and Friedman: if it really is possible for people to rise to the demands of the New Economy, why is it not happening? For example (emphasis added):
As documented in Turner (2004), Bound and Turner (2007, 2011), and Bound, Lovenheim and Turner (2010a), while the number of students attending college has increased over the past three decades in the U.S., college graduation rates (i.e., the fraction of college enrollees that graduate) and college attainment rates (i.e., the fraction of the population with a college degree) have hardly changed since 1970 and the time it takes college students to complete a baccalaureate (BA) degree has increased (Bound, Lovenheim and Turner, 2010b). The disparities between the trends in college attendance and completion or time-to-completion of college degrees is all the more stark given that the earnings premium for a college degree relative to a high school degree nearly doubled over this same period (Goldin and Katz, 2008).
- Bound, John and Sarah Turner (2011). "Dropouts and Diplomas: The Divergence in Collegiate Outcomes", in Handbook of the Economics of Education, Vol. 4, E. Hanushek, S. Machin and L. Woessmann (eds.), Elsevier B.V., 573-613
- Goldin, Claudia and Lawrence Katz (2008). The Race between Education and Technology. Cambridge: Harvard University Press
Or "Study of Men’s Falling Income Cites Single Parents":
The fall of men in the workplace is widely regarded by economists as one of the nation’s most important and puzzling trends. While men, on average, still earn more than women, the gap between them has narrowed considerably, particularly among more recent entrants to the labor force. For all Americans, it has become much harder to make a living without a college degree, for intertwined reasons including foreign competition, advancements in technology and the decline of unions. Over the same period, the earnings of college graduates have increased. Women have responded exactly as economists would have predicted, by going to college in record numbers. Men, mysteriously, have not. Among people who were 35 years old in 2010, for example, women were 17% more likely to have attended college, and 23% more likely to hold an undergraduate degree.
"I think the greatest, most astonishing fact that I am aware of in social science right now is that women have been able to hear the labor market screaming out ‘You need more education’ and have been able to respond to that, and men have not," said Michael Greenstone, an M.I.T. economics professor who was not involved in Professor Autor’s work. "And it’s very, very scary for economists because people should be responding to price signals. And men are not. It’s a fact in need of an explanation."
It’s always a little strange to read an economist remark that potential returns to education have been rising and so more people should get an education, while this same economist somehow fails to realize that the continued presence of this free lunch indicates it is not free at all. Look at how the trend of increasing education has stalled out:
[Figure: educational attainment climbed dramatically in the 20th century, but its growth has flattened recently (source: Census)]
Apparently markets work and people respond to incentives - except when it comes to education, where people simply aren’t picking up those $100 bills lying on the ground, and have not been picking them up for decades, for some reason35, even as the share of income accruing to labor falls both in the USA and worldwide. (In England, there’s evidence that college graduates were still being successfully absorbed in the ’90s and earlier, although apparently there weren’t relatively many during those periods36.) What do bad students know that good economists don’t?
David Ricardo (inventor of the much-cited - by optimists and anti-neo-Luddites - concept of comparative advantage) changed his mind about whether technological unemployment was possible, though he thought it possible only under certain conditions; Sachs & Kotlikoff 2013 gives a multi-generational model of suffering. Most economists, though, continue to dismiss this line of thought, saying that technological changes and structural unemployment are real but that things will work themselves out somehow. Robin Hanson, for example, seems to think so, and he’s a far better economist than me and has thought a great deal about AI and its economic implications. Their opposition to Neo-Luddism is about the only reason I remain uncertain, because otherwise the data for the economic troubles starting in 2007, and especially the unemployment data, seem to match nicely. From a Federal Reserve brief (principally arguing that the data are better matched by a model in which the longer a worker remains unemployed, the longer they are likely to remain unemployed):
For most of the post-World War II era, unemployment has been a relatively short-lived experience for the average worker. Between 1960 and 2010, the average duration of unemployment was about 14 weeks. The duration always rose during recessions, but relatively quick upticks in hiring after recessions kept the long-term unemployment rate fairly low. Even during the two "jobless recoveries" that followed the 1990-91 and 2001 recessions, the peak shares of long-term unemployment were 21% and 23%, respectively. But the 2007-09 recession represents a marked departure from previous experience: the average duration has increased to 40 weeks, and the share of long-term unemployment remains high more than two years after the official end of the recession.37 Never before in the postwar period have the unemployed been unemployed for so long.
The Economist asked in 2011:
But here is the question: if the pace of technological progress is accelerating faster than ever, as all the evidence indicates it is, why has unemployment remained so stubbornly high - despite the rebound in business profits to record levels? Two-and-a-half years after the Great Recession officially ended, unemployment has remained above 9% in America. That is only one percentage point better than the country’s joblessness three years ago at the depths of the recession. The modest 80,000 jobs added to the economy in October were not enough to keep up with population growth, let alone re-employ any of the 12.3m Americans made redundant between 2007 and 2009. Even if job creation were miraculously nearly to triple to the monthly average of 208,000 that it was in 2005, it would still take a dozen years to close the yawning employment gap caused by the recent recession, says Laura D’Andrea Tyson, an economist at University of California, Berkeley, who was chairman of the Council of Economic Advisers during the Clinton administration.
And lays out the central argument for neo-Luddism, why “this time is different”:
Thanks to tractors, combine harvesters, crop-picking machines and other forms of mechanisation, agriculture now accounts for little more than 2% of the working population. Displaced agricultural workers then, though, could migrate from fields to factories and earn higher wages in the process. What is in store for the Dilberts of today? Media theorist Douglas Rushkoff (Program or Be Programmed and Life Inc) would argue “nothing in particular”: “Put bluntly, few new white-collar jobs, as people know them, are going to be created to replace those now being lost - despite the hopes many place in technology, innovation and better education.” The argument against the Luddite Fallacy rests on two assumptions: one is that machines are tools used by workers to increase their productivity; the other is that the majority of workers are capable of becoming machine operators. What happens when these assumptions cease to apply - when machines are smart enough to become workers? In other words, when capital becomes labour. At that point, the Luddite Fallacy looks rather less fallacious…In his analysis [Lights in the Tunnel], Mr [Martin] Ford noted how technology and innovation improve productivity exponentially, while human consumption increases in a more linear fashion. In his view, Luddism was, indeed, a fallacy when productivity improvements were still on the relatively flat, or slowly rising, part of the exponential curve. But after two centuries of technological improvements, productivity has “turned the corner” and is now moving rapidly up the more vertical part of the exponential curve. One implication is that productivity gains are now outstripping consumption by a large margin.
The American oddities began before the current recession:
Unemployment increased during the 2001 recession, but it subsequently fell almost to its previous low (from point A to B and then back to C). In contrast, job openings plummeted - much more sharply than unemployment rose - and then failed to recover. In previous recoveries, openings eventually outnumbered job seekers (where a rising blue line crosses a falling green line), but during the last recovery a labor shortage never emerged. The anemic recovery was followed in 2007 by an increase in unemployment to levels not seen since the early 1980s (the rise after point C). However, job openings fell only a little - and then recovered. The recession did not reduce hiring; it just dumped a lot more people into an already weak labor market.38
And then there is the well-known example of Japan. Yet overall, Japanese, American, and global wealth all continue to grow. The hopeful scenario is that all we are suffering is temporary pains, which will eventually be grown out of, as John Maynard Keynes forecast in his 1930 essay (Optimism in a Terrible Economy):
At the same time technical improvements in manufacture and transport have been proceeding at a greater rate in the last ten years than ever before in history. In the United States factory output per head was 40 per cent greater in 1925 than in 1919. In Europe we are held back by temporary obstacles, but even so it is safe to say that technical efficiency is increasing by more than 1 per cent per annum compound…For the moment the very rapidity of these changes is hurting us and bringing difficult problems to solve. Those countries are suffering relatively which are not in the vanguard of progress. We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come–namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of a far greater progress still.
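As a sanity check on Keynes’s numbers (taking his 4-8x range and 100-year horizon at face value), the implied compound annual growth rate $g$ is modest:

$$(1+g)^{100} = 4 \implies g = 4^{1/100} - 1 \approx 1.4\%, \qquad (1+g)^{100} = 8 \implies g = 8^{1/100} - 1 \approx 2.1\%$$

so his striking forecast assumes little more than the “more than 1 per cent per annum compound” growth in technical efficiency he had already observed.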
Evaluation
Of course, as plausible as this all looks, that doesn’t mean much. Anyone can cherry-pick a bunch of quotes and citations. When making predictions, there are a few heuristics or principles I try to apply, and it is worth applying them here.
The specification seems fairly clear: the Neo-Luddite claim, in its simplest form, predicts that ever fewer people will be able to find employment in undistorted free markets. We can see other aspects as either tangents (will people be able to consume thanks to a Basic Income or via capital ownership?) or subsets (the Autor thesis of polarization would naturally lead to an overall increase in unemployment). The due date is not clear, but we can see the Neo-Luddite thesis as closely linked to artificial intelligence, and 2050 would be as good a due date as any, inasmuch as I expect to be alive then & AI will have matured substantially (if we date serious AI to 1960, then 2012 is a bit past halfway) & many predictions like Ray Kurzweil’s will have been verified or falsified.
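(Spelling out the halfway arithmetic under those assumptions - a 1960 start and a 2050 due date:

$$\frac{2012 - 1960}{2050 - 1960} = \frac{52}{90} \approx 0.58,$$

so 2012 is indeed a bit past the halfway point.)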
The probability part of a prediction is the hard part. Going in order (the latter heuristics aren’t helpful):
What does the prediction about the future world imply about the present world?
What would we expect in a world in which the Neo-Luddite thesis were true?
- first and foremost, we would expect both software & hardware to continue improving. Both are true: Moore’s law continues despite the breakdown in chip frequencies, and AI research forges on with things like deep neural networks being deployed at scale by companies such as Google. If we did not see improvement, that would be extremely damaging to the thesis. However, this is a pretty boring retrodiction to make: technology has improved for so many centuries now that it would be surprising if the improvements had suddenly stopped, and if it had, why would anyone be taking this thesis seriously? It’s not like anyone worries over the implications of a philosopher’s stone for forex.
- More meaningfully: capital & labor increasingly cease to be complements and become substitutes. We would expect gradually rising disemployment as algorithms & software & hardware were refined and companies learned when employees could be replaced by technological substitutes, with occasional jumps as idiosyncratic breakthroughs were made for particular tasks. We would expect returns on capital to increase, and we would expect that employees with un-substitutable skills or properties would increase in wealth. This seems sort of true: STEM-related salaries in particular fields seem to be steady, and tech companies continue to complain that good software engineers are hard to find (and that Congress should authorize ever more H-1B visas), with consequences such as skyrocketing San Francisco real estate as tech companies flock there to find the rare talent they require - which is the sort of “superstar effect” we would expect if human beings with certain properties were intrinsically rare & valuable and the remainder just so much useless dross that holds back a business, or worse. This is particularly striking when we note that it has never been cheaper or easier to become a software engineer - adequate computer hardware is dirt cheap, all necessary software is available for free online, and instructional materials likewise - and it’s unclear how barriers like certification could matter when programmers are producing objective products: either a website is awesome and works, or it doesn’t.

  On the other hand, I also read of booming poor economies like China or Africa where wages are rising in general and unemployment seems to be less of a concern. This might fit the Autor model of polarization if we figure that those booming economies are pricing human labor so cheaply that it outcompetes software/robots/etc., in which case we would expect to see these countries hit a “wall” where only part of their populations can pass the “valley of death” to reach the happy part of the polarized economy, while the rest of the population struggles to be cheap enough to compete with the capital-alternatives. I’m not sure I see this. Yes, there are a lot of robotic factories being set up in China now, but does that really mean anything important on China’s scale? What’s a few million robots in a country of 1.3 billion people? If China does wind up falling into what looks like a middle-income trap, that would be consistent with the Autor model, I think, and strengthen this retrodiction.
- As technology is mobile and can easily be sold or exported, we would expect to see this general trend in many wealthy Western countries. This is a serious weak point of my knowledge thus far: I simply don’t know what it really looks like in e.g. Japan or England or Germany. Are they seeing similar things to my factoids about the USA?
Base rates
Using base rates here is essentially applying the Outside View. The main problem is that it is very difficult to rebut the Outside View: the Luddite thesis has, it seems, failed many times in the past; why expect this time to be any different? The historical horse example is amusing, certainly, but there could be many factors separating horses from ordinary people. To this I don’t have any good reply. Even if the thesis is right from the perspective of 1000 years from now, there is good reason to be chary of expecting it to happen in my lifetime. Computers themselves furnish a great many examples of people who, with vision and deep insight not shared by those who ridiculed them as techno-utopians, correctly foresaw things like the personal computer or the Internet or online sales - and started their companies too early. The best I can say is that software/AI seems completely & qualitatively different from earlier technologies like railroads or assembly lines, in that it performs deeply human mental functions that earlier technologies did not come anywhere near: the regulator of a steam engine solves a problem so much simpler than the one an autonomous car solves that it is hard to see the two as even theoretically related instances of controlling a process by feedback. The dimmest human could productively use the technologies of earlier eras, whereas today we struggle to find subsidized jobs for the mentally handicapped in which they are even just not a net loss.
From these musings, I think we can extract a few warning signs which would indicate the Neo-Luddite thesis breaking down:
- global economic growth stopping
- AI research progress stopping
- Moore’s law in terms of FLOPS/$ breaking down
- decreased wealth inequality (eg. Gini) in the First World
- increases in the fraction of the population working
Daniel Kahneman has an interesting thinking technique he calls the “pre-mortem”, where you ask yourself: assume it is the future, and my confident predictions have completely failed to come true - what went wrong?
Looking back, if the Neo-Luddite thesis fails, I think the most likely explanation for what I’ve seen in the USA would be something related to globalization & China in particular: the polarization, increased disemployment, increasing need for technical training etc, all seem explainable by those jobs heading overseas, exacerbated by other factors such as domestic politics (Bush’s tax cuts on the rich?) and maybe things like the structural unemployment relating to existing workers having difficulty switching sectors or jobs but new workers being able to adapt. If this is so, then I think we would expect the trends to gradually ameliorate themselves: older workers will die off & retire, new workers will replace them, new niches and jobs will open up as the economy adapts, China’s exponential growth will result in catchup being completed within 2 or 3 decades, and so on.
External links
- The End of Labor: How to Protect Workers From the Rise of Robots, Noah Smith
- Welcome, Robot Overlords. Please Don’t Fire Us?, Kevin Drum
- Inequality In The Robot Future, Karl Smith
- Robot Econ Primer, Robin Hanson
- The Robots, AI, and Unemployment Anti-FAQ, Eliezer Yudkowsky; Eliezer Yudkowsky asks about automation
- Assessing the job polarization explanation of growing wage inequality, Mishel et al 2013 (draft?); Don’t Blame the Robots: Assessing the Job Polarization Explanation of Growing Wage Inequality (final?)
- Engels’ pause: Technical change, capital accumulation, and inequality in the British industrial revolution (see also Jobs, automation, Engels’ pause and the limits of history)
IQ & race
This one may be even more inflammatory than supporting nicotine, but it’s an important entry on any honest list. I never doubted that IQ was in part hereditary (Stephen Jay Gould aside, this is too obvious - what, everything from drug responses to skin and eye color would be heritable except the most important things which would have a huge effect on reproductive fitness?), but all the experts seemed to say that diluted over entire populations, any tendency would be non-existent. Well, OK, I could believe that; visible traits consistent over entire populations like skin color might differ systematically because of sexual selection or something, but why not leave IQ following the exact same bell curve in each population? There was no specific thing here that made me start to wonder, more a gradual undermining (Gould’s work like The Mismeasure of Man being completely dishonest is one example - with enemies like that…) as I continued to read studies and wonder why Asian model minorities did so well, and a lack of really convincing counter-evidence like one would expect the last two decades to have produced - given the politics involved - if the idea were false. And one can always ask oneself: suppose that intelligence was meaningful, and did have a large genetic component, and the likely genetic ranking East Asians > Caucasian > Africans; in what way would the world, or the last millennium (eg the growth of the Asian tigers vs Africa, or the different experiences of discriminated-against minorities in the USA), look different than it does now?
Mu
It’s worth noting that the IQ wars are a rabbit hole you can easily dive down. The literature is vast, spanning all sorts of groups and all sorts of designs, from test validity to sampling to statistical regression vs causal inference to forms of bias. Every point is hotly debated; the ways in which studies can be validly critiqued are an education in how to read papers and spot where they are weak, make jumps, or where some of the data just looks wrong; and you’ll learn every technical requirement, premise, and methodological limitation, because the opponents of any particular result will be sure to bring them up if it will at all help their case.
In this respect, it’s a lot like the feuds in biblical criticism over issues like whether Jesus existed, or the long philosophical debate over the existence of God. There too is an incredible amount of material to cover, by some really smart people (what did geeks do before science and modernity? well, for the most part, they seem to have done theology; consider how much time and effort Isaac Newton reportedly spent on alchemy and his own Biblical studies, or the sheer brainpower that must’ve been spent over the centuries in rabbinical studies). You could learn a lot about the ancient world or the incredibly complex chain of transmission of the Bible’s constituents in their endless varieties and how they are put together into a single canonical modern text, or the other countless issues of textual criticism. An awful lot, indeed. One could, and people as smart or smarter than you have, lose one’s life in exploring little back-alleys and details.
If, like most people, you’ve only read a few papers or books on it, your opinion (whatever that is) is worthless, and you probably don’t even realize how worthless it is - how far you are from actually grasping the subtleties involved and having a command of all the studies and criticisms of said studies. I exempt myself from this only inasmuch as I have realized how little I still know after all my reading. No matter how tempting it is to think that you may be able to finally put together the compelling refutation of God’s existence or demonstrate that Jesus’s divinity was a late addition to his gospel, you won’t make a dent in the debate. In other words, these can become forms of nerd sniping and intellectual crack: “If only I compile a few more studies, make a few more points - then my case will become clear and convincing, and people on the Internet will stop being wrong!”
But having said that, and admiring things like Plantinga’s free will defense, the subtle logical issues in formulating it, and the lack of any really concrete evidence for or against Jesus’s existence, do I take the basic question of God seriously? No. The theists’ rearguard attempts, ever more ingenious explanations, indirect pathways of reasoning, and touted miracles fundamentally do not add up to an existing whole. The universe does not look anything like an omni-benevolent/powerful/scient god was involved; a great deal of determined effort has failed to provide any convincing proof; there not being a god is consistent with all the observed processes and animal kingdom and natural events and material world we see; and so on. The persistence of the debate reflects more what motivated cognition can accomplish and the weakness of existing epistemology and debate. Unfortunately, this could be equally well said by someone on the other side of the debate, and in any case, I cannot communicate my gestalt impression of the field to anyone else. I don’t expect anyone to be the least bit swayed by what I’ve written here.
So why be interested in the topics at all? If you cannot convince anyone, if you cannot learn the field to a reasonable depth, and you cannot even communicate well what convinced you, why bother? In the spirit of keeping one’s identity small, I say: it’s not clear at all. So you should know in advance whether you want to take the red pill and see how far down the rabbit hole you go before you finally give up, or take the blue pill and remain an onlooker, settling for a high-level overview of the more interesting papers and issues and accepting that you will have only that and a general, indefensible assessment of the state of play.
My own belief is that, as interesting as it is, you should take the blue pill and not adopt any strong position, but perhaps (if it doesn’t take too much time) point out particularly naive or egregious holes in the arguments of people who are simply wrong or don’t realize how little they know or how slanted a view they have received from the material they’ve read. It’s sad not to reach agreement with other people, dangerous to ignore critics, tempting to engage trolls - but life is too short to keep treading the same ground.
The reason I keep an eye on IQ at all is this: yes, Murray failed to organize a definitive genetic study. It hasn’t happened yet, even though it’s more important than most of the trivialities that get studied in population genetics (like historical movements of random groups); I don’t need to explain why this would be the case even if people on the environmentalist side of the IQ wars were confident they were right. But the massive fall in genome sequencing costs (projected to be <$1000 by ~2014) means that large human datasets will be produced and the genetics directly examined, eliminating entire areas of objections to the previous heredity studies. And at some point, some researcher will manage the study - some group inside or outside the USA will fund it, and at some point a large enough genetic database will be cross-referenced against IQ tests and existing racial markers. We already see some of this in research: Rietveld et al 2013 (followup: Ward et al 2014) found 3 SNPs simply by pooling existing databases of genetics data & correlating against schooling. I don’t know when the definitive paper will come out, whether it will be this year or by 2020, although I would be surprised if there were still nothing by 2030; but it will happen, and it will happen relatively soon (for a debate going on for the past century or more). Genome sequencing is simply going to be too cheap for it not to happen. By 2030 or 2040, I expect the issue will be definitively settled, in the same way earlier debates about the validity of IQ tests were eventually settled (even if the public hasn’t yet gotten the word, the experts all concede that IQ tests are valid, reliable, not biased, and meaningful predictors of a wide variety of real-world variables).
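One reason the definitive studies are slow in coming is statistical power: when each variant explains only a tiny sliver of variance, detection at genome-wide significance requires enormous samples. A back-of-the-envelope sketch of this, using the standard normal-approximation power calculation - the 0.02%-of-variance figure is my assumption, chosen to be in line with the small effect sizes Rietveld et al 2013 reported:

```python
# Rough GWAS power calculation: sample size needed to detect one SNP
# explaining a fraction r2 of trait variance at genome-wide significance.
from scipy.stats import norm

def gwas_sample_size(r2, alpha=5e-8, power=0.80):
    """Normal approximation: need sqrt(n * r2) >= z_{alpha/2} + z_{power}."""
    z_alpha = norm.isf(alpha / 2)  # two-sided genome-wide threshold, ~5.45
    z_power = norm.ppf(power)      # ~0.84 for 80% power
    return (z_alpha + z_power) ** 2 / r2

# Hypothetical Rietveld-sized hit explaining 0.02% of variance:
print(round(gwas_sample_size(0.0002)))  # => ~198,000 subjects
```

Hence pooled biobank-scale samples, not any single cohort, are what finally make such hits reliably detectable - and why the pooling that Rietveld et al did was the natural first step.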
Value of Information
What is the direct value of learning about IQ? Speaking of it in terms of money may not be the best approach, so instead we can split the question up into a few different sub-questions:
- How much do your efforts lead to additional information?

  In this case, not much. I would have to be very arrogant to think I can go through a large fraction of the literature and evaluate it better than the existing authorities like Nisbett or Flynn or Jensen. I have no advantages over them.
- Would this information-gathering be expensive?

  Yes. A single paper can take an hour to read well, and a technical book weeks. There are hundreds of papers and dozens of books to learn. The mathematics and statistics are nontrivial, and sooner or later, one will have to learn them in order to evaluate the seriousness of criticisms for oneself. The time spent will not have been throw-away recreational time, either, like slumming on the couch watching TV, but will be one’s highest-quality time, which could have been spent learning other difficult material, working, meaningfully interacting with other people, and so on. Given the decline with age of fluid intelligence, one may be wasting a non-trivial fraction of one’s lifetime learning.
- Will new information come in the absence of your efforts?

  Yes. My interest does not materially affect when the final genetic studies will be conducted.
- What decisions or beliefs would the additional information change?

  Suppose the environmentalists were 100% right and the between-race genetics were a negligibly small factor. Regardless, the topic of IQ and its correlates and what it predicts does not live and die based on there being a genetic factor to average IQ differences between groups; if the admixture and genetics studies turn in a solid estimate of 0, IQ will still predict lifetime income, still predict crime rates, still predict educational scores, and so on.
In contrast, some of the other topics have very concrete immediate implications. Switching from occultism/theism to atheism implies many changed beliefs & choices; a near vs far Singularity has considerable consequences for retirement planning, if nothing else; while Neo-Luddism has implications for both career choice and retirement planning; attitudes towards fiction and nicotine also cash out in obvious ways. Of the topics here, perhaps only Communism and the American Revolution are as sterile in practical application.
So I try not to spend too much time thinking about this issue: the results will come in regardless of my opinion, and, unlike the other issues here, it does not materially affect my worldview or suggest action. Given this, there’s no reason to invest your life in the topic! It has no practical ramifications for you, discussing the issue can only lead to negative consequences, and on the intellectual level, no matter how much you read, you’ll always have nagging doubts, so you won’t get any satisfaction. You might as well just wait patiently for the inevitable final answer.
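The sub-questions above can be restated in standard expected-value-of-information terms. A minimal sketch with made-up payoffs (nothing here is from the literature; “engage” and “ignore” are hypothetical actions) shows why a question on which no decision hinges has zero value of information, however uncertain you are:

```python
# Minimal expected-value-of-perfect-information (EVPI) sketch, toy numbers only.

def value_of_information(p_true, actions, payoff):
    """EVPI: expected payoff of acting after learning the truth, minus
    the expected payoff of the best action chosen in ignorance."""
    p = {True: p_true, False: 1 - p_true}
    best_ignorant = max(sum(p[s] * payoff(a, s) for s in p) for a in actions)
    best_informed = sum(p[s] * max(payoff(a, s) for a in actions) for s in p)
    return best_informed - best_ignorant

# Hypothetical: whichever way the question resolves, my best action
# (ignore the debate, read other things) pays the same - so EVPI = 0.
same_either_way = lambda a, s: {"engage": -10, "ignore": 0}[a]
print(value_of_information(0.5, ["engage", "ignore"], same_either_way))  # 0.0
```

The design point: the value comes entirely from decisions the information could change; here there are none, so even a free, perfect answer is worth nothing.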
Genetics links
Complex traits
- Meta-analysis of the heritability of human traits based on fifty years of twin studies, Polderman et al 2015 (supplements: 1, 2)
- Phenome-wide Heritability Analysis of the UK Biobank, Ge et al 2016
- The Closest of Strangers
- Genetic and environmental influences on height from infancy to early adulthood: An individual-based pooled analysis of 45 twin cohorts, Jelenkovic et al 2016
- Heritability in the genomics era - concepts and misconceptions, Visscher et al 2008; Human Complex Trait Genetics in the 21st Century, Visscher 2016
- Understanding and using quantitative genetic variation, Hill 2009
- Assumption-Free Estimation of Heritability from Genome-Wide Identity-by-Descent Sharing between Full Siblings, Visscher et al 2006
- A novel sibling-based design to quantify genetic and shared environmental effects: application to drug abuse, alcohol use disorder and criminal behavior, Kendler et al 2016
- The continuing value of twin studies in the -omics era, van Dongen et al 2012
- Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits, Zaitlen et al 2013
- GCTA: trait estimates
- Estimate of disease heritability using 4.7 million familial relationships inferred from electronic health records, Polubriaginof et al 2016
- Genetic variance estimation with imputed variants finds negligible missing heritability for human height and body mass index, Yang et al 2015; see also Haplotypes of common SNPs can explain missing heritability of complex diseases, Bhatia et al 2015
- Obesity and genetics (see also GCTAs for anthropometric traits)
- Sweet Taste Perception is Associated with Body Mass Index at the Phenotypic and Genotypic Level, Hwang et al 2016
- Heritability and causal reasoning, Lynch 2016
- Human genetics shape the gut microbiome, Goodrich et al 2014
- Variants near CHRNA3/5 and APOE have age- and sex-related effects on human lifespan, Joshi et al 2016
Psychology
- The Evolutionary Genetics of Personality, Penke et al 2007; The Evolutionary Genetics of Personality Revisited, Penke & Jokela 2016
- Correlation and Causation in the Study of Personality, Lee 2012
- Genome-wide analysis of over 106,000 individuals identifies 9 neuroticism-associated loci, Smith et al 2016
- Genome-wide analyses of empathy and systemizing: heritability and correlates with sex, education, and psychiatric risk, Warrier et al 2016a; Genome-wide meta-analysis of cognitive empathy: heritability, and correlates with sex, neuropsychiatric conditions and brain anatomy, Warrier et al 2016b
- Personality Polygenes, Positive Affect, and Life Satisfaction, Weiss et al 2016
- Genetic Relations Among Procrastination, Impulsivity, and Goal-Management Ability: Implications for the Evolutionary Origin of Procrastination, Gustavson et al 2014
- The genetics of investment biases, Cronqvist & Siegal 2014
- GWAS of 89,283 individuals identifies genetic variants associated with self-reporting of being a morning person, Hu et al 2016
- Genetics and the placebo effect: the placebome, Hall et al 2015
- Prevalence of Congenital Amusia, Peretz & Vuvan 2016
- The Importance of Heritability in Psychological Research: The Case of Attitudes, Tesser 1993
- Genetic Influence on Human Psychological Traits: A Survey, Bouchard 2004
- Are Political Orientations Genetically Transmitted?, Alford et al 2005
- Not by Twins Alone: Using the Extended Family Design to Investigate Genetic Influence on Political Beliefs, Hatemi et al 2010
- A Genome-Wide Analysis of Liberal and Conservative Political Attitudes, Hatemi et al 2011
- The Etiology of Stability and Change in Religious Values and Religious Attendance, Button et al 2011
- Political Attitudes Develop Independently of Personality Traits, Hatemi & Verhulst 2015
- Genome-wide meta-analysis identifies six novel loci associated with habitual coffee consumption, The Coffee and Caffeine Genetics Consortium et al 2014
- How harmful is smoking during pregnancy - after controlling for the sort of people who would do that?; see also Critical need for family-based, quasi-experimental designs in integrating genetic and social science research, D’Onofrio et al 2013; Smoking is heritable but not through shared environment
- A Genetically Informed Study of the Association Between Harsh Punishment and Offspring Behavioral Problems, Lynch et al 2006
- Genes Take Charge, and Diets Fall by the Wayside
- Evidence that low socioeconomic position accentuates genetic susceptibility to obesity, Tyrrell et al 2016
- on the benefits of exercise:
  - Modifiable Risk Factors as Predictors of All-Cause Mortality: The Roles of Genetics and Childhood Environment, Kujala et al 2002
  - Physical activity in adulthood: genes and mortality, Karvinen et al 2015
  - Physical Activity, Fitness, Glucose Homeostasis, and Brain Morphology in Twins, Rottensteiner et al 2015
Intelligence
- A general intelligence factor in [border collie] dogs, Arden & Adams 2016
- Chimpanzee Intelligence Is Heritable, Hopkins et al 2014
- Heritability of Neuroanatomical Shape, Ge et al 2015
- Genomic architecture of human neuroanatomical diversity, Toro et al 2014
- Common genetic variants influence human subcortical brain structures, Hibar et al 2015
- Most Reported [Candidate-Gene] Genetic Associations with General Intelligence Are Probably False Positives, Chabris et al 2012
- Genome-wide association studies establish that human intelligence is highly heritable and polygenic, Davies et al 2011
- Genetic contributions to stability and change in intelligence from childhood to old age, Deary et al 2012
- Common DNA Markers Can Account for More Than Half of the Genetic Influence on Cognitive Abilities, Plomin et al 2013
- The genetic architecture of pediatric cognitive abilities in the Philadelphia Neurodevelopmental Cohort, Robinson et al 2015
- Intelligence indexes generalist genes for cognitive abilities, Trzaskowski et al 2013
- DNA Evidence for Strong Genome-Wide Pleiotropy of Cognitive and Learning Abilities, Trzaskowski et al 2013
- Substantial SNP-based heritability estimates for working memory performance, Vogler et al 2014
- Results of a “GWAS Plus”: General Cognitive Ability Is Substantially Heritable and Massively Polygenic, Kirkpatrick et al 2014
- Childhood intelligence is heritable, highly polygenic and associated with FNBP1L, Benyamin et al 2014
- DNA evidence for strong genetic stability and increasing heritability of intelligence from age 7 to 12, Trzaskowski et al 2014a
- Genomic analysis of family data reveals additional genetic effects on intelligence and personality, Hill et al 2017; Comparison of methods that use whole genome data to estimate the heritability and genetic architecture of complex traits, Evans et al 2017 (semi-rare variants explain the rest of intelligence’s missing heritability?)
- GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment, Rietveld et al 2013 (supplement)
- Common genetic variants associated with cognitive performance identified using the proxy-phenotype method, Rietveld et al 2014 (supplement)
- Genetic Variation Associated with Differential Educational Attainment in Adults Has Anticipated Associations with School Performance in Children, Ward et al 2014
- Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (n=53949), Davies et al 2015
- Thinking positively: The genetics of high intelligence, Shakeshaft et al 2014
- A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence, Spain et al 2015
- Genome-wide association study of cognitive functions and educational attainment in UK Biobank (n=112151), Davies et al 2016
- GWAS for executive function and processing speed suggests involvement of the CADM2 gene, Ibrahim-Verbaas et al 2016
- Genome-wide association study identifies 74 [162] loci associated with educational attainment, Okbay et al 2016b (supplement)
- Predicting educational achievement from DNA, Selzam et al 2016
- GWAS meta-analysis reveals novel loci and genetic correlates for general cognitive function: a report from the COGENT consortium, Trampush et al 2017
- Cognitive Performance Among Carriers of Pathogenic Copy Number Variants: Analysis of 152,000 UK Biobank Subjects, Kendall et al 2016
- Molecular genetic contributions to socioeconomic status and intelligence, Marioni et al 2014
- Assortative mating on educational attainment leads to genetic spousal resemblance for causal alleles, Hugh-Jones et al 2016
- Ultra-rare disruptive and damaging mutations influence educational attainment in the general population, Ganna et al 2016
- Intersection of diverse neuronal genomes and neuropsychiatric disease: The Brain Somatic Mosaicism Network, McConnell et al 2017 (towards explaining non-shared-environment effects on intelligence, psychiatric disorders, and other cognitive traits - developmental noise such as post-conception mutations in individual cells or groups of cells)
- A closer look at the role of parenting-related influences on verbal intelligence over the life course: Results from an adoption-based research design, Beaver et al 2014
- Large Cross-National Differences in Gene × Socioeconomic Status Interaction on Intelligence, Tucker-Drob & Bates 2015
- Family environment and the malleability of cognitive ability: A Swedish national home-reared and adopted-away cosibling control study, Kendler et al 2015
- What Twin Studies Tell Us About the Heritability of Brain Development, Morphology, and Function: A Review, Jansen et al 2015
- Genes, Evolution and Intelligence, Bouchard 2014
- Genetics and intelligence differences: five special findings, Plomin & Deary 2015
- Classical and Molecular Genetic Research on General Cognitive Ability, McGue & Gottesman 2015
- Is Psychometric g a Myth?
- Suppressing Intelligence Research: Hurting Those We Intend to Help, Gottfredson 2005
- How Much Can We Boost IQ and Scholastic Achievement?, Jensen 1969; for comparison, note carefully what the Nisbett et al 2012 review actually says compared to what many laymen believe, and consider also whether its description of DNB would be recognizable to a thorough reader of my DNB FAQ/meta-analysis, or whether the encouraging research would pass the critiques of the replicability crisis.
Pleiotropy/inter-correlation/phenome
- Genetic correlations vs Mendelian randomization
- Molecular genetic contributions to self-rated health, Harris et al 2016
- The MC1R Gene and Youthful Looks, Liu et al 2016
- Genetic Prediction of Male Pattern Baldness, Hagenaars et al 2016
- GWAS Reveals Multiple Loci Influencing Normal Human Facial Morphology, Shaffer et al 2016; see also Heritability and genomics of facial characteristics & How Craig Venter is fighting ageing with genome sequencing
- The Genetic Architecture & Natural History of Pigmentation
- Detecting Genome-wide Variants of Eurasian Facial Shape Differentiation: DNA based Face Prediction Tested in Forensic Scenario, Qiao et al 2016
- genetics of breast size
- Phenome-wide analysis of genome-wide polygenic scores, Krapohl et al 2015
- An Atlas of Genetic Correlations across Human Diseases and Traits, Bulik-Sullivan et al 2015
- Contrasting the genetic architecture of 30 complex traits from summary association data, Shi et al 2016
- Shared genetic aetiology between cognitive functions and physical and mental health in UK Biobank (n=112151) and 24 GWAS consortia, Hagenaars et al 2016
- Detection and interpretation of shared genetic influences on 42 human traits, Pickrell et al 2016
- Polygenic prediction of the phenome, across ancestry, in emerging adulthood, Docherty et al 2017
- LD Hub: a centralized database and web interface to perform LD score regression that maximizes the potential of summary level GWAS data for SNP heritability and genetic correlation analysis, Zheng et al 2016 (a Web interface to scores of GWAS results, allowing combined inference over them all: LD Hub. It gets around privacy & scaling problems by using, not GCTA, but LD score regression, which needs only summary statistics; see the sketch after this list. LD score regression is definitely the wave of the future. GCTA was king for a few years, but the inability to use summary data is killing it; in the modern context of a thousand different funding sources, medical ethics, & empire-building driving countless genetic silos, a method must be able to work on just summary statistics if it’s to have any real uptake and get n into the hundreds of thousands.)
- Genetic contributions to self-reported tiredness, Deary et al 2016
- Genetically low vitamin D concentrations and increased mortality: Mendelian randomisation analysis in three large cohorts, Afzal et al 2014
- The high heritability of educational achievement reflects many genetically influenced traits, not just intelligence, Krapohl et al 2014
- Pleiotropy across academic subjects at the end of compulsory education, Rimfeld et al 2015
- Genetics affects choice of academic subjects as well as achievement, Rimfeld et al 2016
- Genetically-Mediated Associations Between Measures of Childhood Character and Academic Achievement, Tucker-Drob et al 2016
- Educational attainment and personality are genetically intertwined, Mottus et al 2016
- The association between intelligence and lifespan is mostly genetic, Arden et al 2015
- Analysis of shared heritability in common disorders of the brain, Anttila et al 2016
- Genetic relationship between 5 psychiatric disorders estimated from genome-wide SNPs, Psychiatric Genomics Consortium 2013
- Genome-wide analyses for personality traits identify six genomic loci and show correlations with psychiatric disorders, Lo et al 2016
- Identification of 15 genetic loci associated with risk of major depression in individuals of European descent, Hyde et al 2016
- Genome-Wide Analysis Of 113,968 Individuals In UK Biobank Identifies Four Loci Associated With Mood Instability, Ward et al 2017 (UK Biobank, the gift that keeps on giving; continuing the themes of “everything is heritable” and “abnormal is normal”)
- Different neurodevelopmental symptoms have a common genetic etiology, Pettersson et al 2013
- Risk of Psychiatric and Neurodevelopmental Disorders Among Siblings of Probands With Autism Spectrum Disorders, Jokiranta-Olkoniemi et al 2016
- Analysis of Intellectual Disability Copy Number Variants for Association With Schizophrenia, Rees et al 2016
- Schizophrenia risk alleles and neurodevelopmental outcomes in childhood: a population-based cohort study, Riglin et al 2016
- Uncovering the Hidden Risk Architecture of the Schizophrenias: Confirmation in Three Independent Genome-Wide Association Studies, Arnedo et al 2014 (press release); see also Biological insights from 108 schizophrenia-associated genetic loci, PGC 2014
- Patterns of Nonrandom Mating Within and Across 11 Major Psychiatric Disorders, Nordsletten 2016
- Contrasting regional architectures of schizophrenia and other complex diseases using fast variance components analysis, Loh et al 2015
- Genetic variants associated with subjective well-being, depressive symptoms, and neuroticism identified through genome-wide analyses, Okbay et al 2016a
- Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population, Robinson et al 2016
- Diagnoses in Siblings of Probands With Autism Spectrum Disorders, Jokiranta-Olkoniemi et al 2016 (see also Different neurodevelopmental symptoms have a common genetic etiology, Pettersson et al 2013)
- Systems genetics identifies a convergent gene network for cognition and neurodevelopmental disease, Johnson et al 2015 (media)
- Large-scale genomics unveils the genetic architecture of psychiatric disorders, Gratten et al 2014
- Polygenic risk scores for schizophrenia and bipolar disorder predict creativity, Power et al 2015
- Genome-wide meta-analysis of cognitive empathy: heritability, and correlates with sex, neuropsychiatric conditions and brain anatomy, Warrier et al 2016
- Psychiatric Genomics: An Update and an Agenda, Sullivan et al 2017 (commentary)
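To illustrate the LD Hub annotation above: LD score regression estimates SNP heritability from nothing but per-SNP association chi-squares, by regressing them on LD scores via the identity $E[\chi^2_j] \approx 1 + n h^2 \ell_j / M$. A toy simulation (all numbers invented, and the confounding intercept term omitted for simplicity) recovers the heritability from summary statistics alone:

```python
# Toy LD score regression: recover h2 from simulated summary statistics.
import numpy as np

rng = np.random.default_rng(0)
M, n, h2_true = 10_000, 100_000, 0.5       # SNPs, GWAS sample size, true SNP h2
ld = rng.uniform(1, 100, M)                # hypothetical per-SNP LD scores
# Simulate chi-squares via the LD score regression identity (plus noise):
chi2 = 1 + n * h2_true * ld / M + rng.normal(0, 1, M)

slope, _ = np.polyfit(ld, chi2, 1)         # regress chi2 on LD scores
print(slope * M / n)                       # => ~0.5, from summary data alone
```

No individual-level genotypes appear anywhere, which is exactly why the method scales across privacy-siloed cohorts where GCTA cannot.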
Rare genetics
- The Sports Gene, Epstein 2014; see also How Athletes Get Great (Just train for 10,000 hours, right? Not quite.); Man and Superman: In athletic competitions, what qualifies as a sporting chance?; Practice Doesn’t Make Perfect
- The Superhero Genes: One scientist is on a quest to find the genetic mutations that make athletes elite - which may lead to new treatments for the rest of us
- The Genetic Basis of Mendelian Phenotypes: Discoveries, Challenges, and Opportunities, Chong et al 2015
- The contribution of de novo coding mutations to autism spectrum disorder, Iossifov et al 2014 (media: 1, 2)
- Polygenic transmission disequilibrium confirms that common and rare variation act additively to create risk for autism spectrum disorders, Weiner et al 2016
- How Craig Venter is fighting ageing with genome sequencing
- One of a Kind: What do you do if your child has a condition that is new to science?
- The Muscular Dystrophy Patient and Olympic Medalist with the Same Genetic Disorder
- DIY Diagnosis: How an Extreme Athlete Uncovered Her Genetic Flaw
- A Prospective Study of Sudden Cardiac Death among Children and Young Adults, Bagnall et al 2016
- Genetic enhancement of cognition in a kindred with cone-rod dystrophy due to RIMS1 mutation, Sisodiya et al 2007
- A Gene That Makes You Need Less Sleep? (see Heritability of Performance Deficit Accumulation During Acute Sleep Deprivation in Twins; The Transcriptional Repressor DEC2 Regulates Sleep Length in Mammals; & A Novel BHLHE41 Variant is Associated with Short Sleep and Resistance to Sleep Deprivation in Humans)
Evolution
- Thoroughbreds Are Running as Fast as They Can (after centuries of intense selection)
- The USSR’s Moose Domestication Projects Yield Mixed Results
- Nice Rats, Nasty Rats: Maybe It’s All in the Genes; Tameness is in the genes - on the genetic basis of rat and mink tameness
- domesticated foxes
- A Simple Genetic Architecture Underlies Morphological Variation in Dogs, Boyko et al 2010
- Comparative analysis of the domestic cat genome reveals genetic signatures underlying feline biology and domestication, Montague et al 2014
- Rabbit genome analysis reveals a polygenic basis for phenotypic change during domestication, Carneiro et al 2014 (the power of selection - complex behaviors influenced by many small changes)
- Genetic Architecture of Domestication-Related Traits in Maize, Xue et al 2016
- Whole-genome resequencing reveals loci under selection during chicken domestication, Rubin et al 2010
- When local means local: Polygenic signatures of local adaptation within whitebark pine (Pinus albicaulis Engelm.) across the Lake Tahoe Basin, USA, Lind et al 2016
- The Rat Paths of New York
- Geneticists Evolve Fruit Flies With the Ability to Count
- Rapid evolutionary response to a transmissible cancer in Tasmanian devils, Epstein et al 2016
- Ancient genomic changes associated with domestication of the horse, Librado et al 2017 (media; more recent evolution in complex behavioral traits in mammals: the neural crest shows up, again)
- Selective sweep analysis using village dogs highlights the pivotal role of the neural crest in dog domestication, Pendleton et al 2017 (more on complex intellectual changes in higher mammals due to recent evolution via soft sweeps: dog physical & behavioral domestication is due to a lot of small recent genetic changes. Evolution doesn’t stop at the neck.)
- Urban Evolution: How Species Adapt, or Don’t, to City Living; A Conversation With Jonathan B. Losos
- Evolution Is Happening Faster Than We Thought
- Artificial Selection on Relative Brain Size in the Guppy Reveals Costs and Benefits of Evolving a Larger Brain, Kotrschal et al 2013
- Evidence that the rate of strong selective sweeps increases with population size in the great apes, Nam et al 2017
Human evolution
- Constructing genomic maps of positive selection in humans: Where do we go from here?, Akey 2009
- Measuring selection in contemporary human populations, Stearns et al 2010
- Divergent Ah receptor ligand selectivity during hominin evolution, Hubbard et al 2016 (media)
- Exceptional Evolutionary Divergence of Human Muscle and Brain Metabolomes Parallels Human Cognitive and Physical Uniqueness, Bozek et al 2014
- Comparative Genomic Evidence For Self-Domestication In Homo sapiens, Theofanopoulou et al 2017 (gene-culture co-evolution in selection for cognitive traits involved in domestication?)
- The phenotypic legacy of admixture between modern humans and Neandertals, Simonti et al 2016
- The Genetic Cost of Neanderthal Introgression, Harris & Nielsen 2016
- Genetic Markers of Human Evolution Are Enriched in Schizophrenia, Srinivasan et al 2016 (schizophrenia variants tend to be in genes which have also been under natural selection; is human evolution for intelligence & sociality incomplete, and schizophrenia due to lingering mistakes which haven’t yet been purged?)
- Parallel Adaptive Divergence among Geographically Diverse Human Populations, Tennessen & Akey 2011
- Patterns of shared signatures of recent positive selection across human populations, Johnson & Voight 2017
- Soft sweeps are the dominant mode of adaptation in the human genome, Schrider & Kern 2017 (detection of 2000 instances of recent human selection, half of which are specific to individual human populations, and many of which affect the central nervous system. Note this doesn’t cover polygenic selection, so it’s a very loose lower bound on how much recent human evolution there has been.)
- Genome-wide patterns of selection in 230 ancient Eurasians, Mathieson et al 2015 (media)
- Eight thousand years of [human] natural selection in Europe, Mathieson et al 2015 (Khan); see also EDAR
- Detection of human adaptation during the past 2,000 years, Field et al 2016
- Population structure of UK Biobank and ancient Eurasians reveals adaptation at genes influencing blood pressure, Galinsky et al 2016
- Greenlandic Inuit show genetic signatures of diet and climate adaptation, Fumagalli et al 2015; Extreme distribution of deleterious variation in a historically small and isolated population - insights from the Greenlandic Inuit, Pedersen et al 2016
- Selection in Europeans on Fatty Acid Desaturases Associated with Dietary Changes, Buckley et al 2017 (more on recent human evolution with large between-country differences due to local adaptations)
- Archaic adaptive introgression in TBX15/WARS2, Racimo et al 2016 (media)
- Genetic Adaptation to Levels of Dietary Selenium in Recent Human History, White et al 2015; see also Human Adaptation to Arsenic-Rich Environments, Schlebusch et al 2015
- The genetics of Mexico recapitulates Native American substructure and affects biomedical traits, Moreno-Estrada et al 2016
- Adaptation to infectious disease exposure in indigenous Southern African populations, Owers et al 2017 (recent human evolution leading to between-group differences)
- Investigating the case of human nose shape and climate adaptation, Zaidi et al 2017 (recent human evolution leading to between-group differences: local adaptations of noses to local climates)
- Humans Never Stopped Evolving: The emergence of blood abnormalities, an adult ability to digest milk, and changes in our physical appearance point to the continued evolution of the human race
- Going global by adapting local: A review of recent human adaptation, Fan et al 2016
- Clustering of 770,000 genomes reveals post-colonial population structure of North America, Han et al 2017
- A time transect of exomes from a Native American population before and after European contact, Lindo et al 2016
- Metabolic costs and evolutionary implications of human brain development, Kuzawa et al 2014
- Infectious causation of disease: an evolutionary perspective, Cochran et al 2000; see also Toxoplasma gondii
- Apolipoprotein E4 is associated with improved cognitive function in Amazonian forager-horticulturalists with a high parasite burden, Trumble et al 2016 (media; of course, different regions, and hence different groups, differ in parasite load)
- Rising Plague
- The Rhythm of the Tide: When I heard data from an island had proven humans are still evolving, I had to visit
- The Indicted and the Wealthy: Surnames, Reproductive Success, Genetic Selection and Social Class in Pre-Industrial England, Clark 2009
- Evidence for evolution in response to natural selection in a contemporary human population, Milot et al 2011
- Quantitative Genetics in the Postmodern Family of the Donor Sibling Registry, Lee 2013
- What women want in their sperm donor: a study of more than 1000 women’s sperm donor selections, Whyte et al 2016
- Genetic Associations Between Personality Traits and Lifetime Reproductive Success in Humans, Berg et al 2016
- Categorization of humans in biomedical research: genes, race and disease, Risch et al 2002
- Genetic Mapping in Human Disease, Altshuler et al 2008
- Race and health/pharmacogenomics: Not a black and white question: Medical research is starting to take account of people’s race; Brazil’s Cancer Curse; A Natural Fix for A.D.H.D.
- Genetic Origins of Economic Development
- Human Genetic Diversity: Lewontin’s Fallacy
- Consistent Association of Type 2 Diabetes Risk Variants Found in Europeans in Diverse Racial and Ethnic Groups, Waters et al 2010
- Additive Genetic Variation in Schizophrenia Risk Is Shared by Populations of African and European Descent, de Candia et al 2013
- Generalization and Dilution of Association Results from European GWAS in Populations of Non-European Ancestry: The PAGE Study, Carlson et al 2013
- Consistency of genome-wide associations across major ancestral groups, Ntzani et al 2012
- Transethnic Genetic-Correlation Estimates from Summary Statistics, Brown et al 2016
- Population genetic differentiation of height and body mass index across Europe, Robinson et al 2015 (to quote Yvain: “Genetic differences explain 24% of between-national-populations differences in height and 8% of between-national-populations differences in BMI across Europe. Now that the only two massively polygenic traits that might vary among national populations have been successfully studied, I look forward to never having to read any further research of this sort ever again.”)
- The fine-scale genetic structure of the British population, Leslie et al 2015
- Pygmies’ small stature evolved many times; Adaptive, convergent origins of the pygmy phenotype in African rainforest hunter-gatherers, Perry et al 2014
- National Happiness and Genetic Distance: A Cautious Exploration, Proto & Oswald 2015
- Gene Flow by Selective Emigration as a Possible Cause for Personality Differences Between Small Islands and Mainland Populations, Ciani & Cialuppi 2011
- Demography is Destiny
- Some Uses of Models of Quantitative Genetic Selection in Social Science, Weight & Harpending 2016; Western Europe, state formation, and genetic pacification, Frost & Harpending 2015
- Cryptic Admixture, Mixed-Race Siblings, & Social Outcomes
- …intelligence GWAS hits: Their relationship to country IQ…, Piffer 2015
- The facts that need to be explained; Admixture in the Americas: Regional and National Differences; collection of black-white gap datapoints (graphed over time)
Dysgenics
- Sexual Selection as a Justification for Sex
- Holocene selection for variants associated with cognitive ability: Comparing ancient and modern genomes, Woodley et al 2017
- Rates and Fitness Consequences of New Mutations in Humans, Keightley 2012
- Parent-of-origin-specific signatures of de novo mutations, Goldmann et al 2016
- Older fathers’ children have lower evolutionary fitness across four centuries and in four populations, Arslan et al 2016 (from The cost of inbreeding in terms of health)
- Childhood Autism and Assortative Mating, Golden 2013
- Heritability, Autism, & Fear of Breeding
- Estimating the Inbreeding Depression on Cognitive Behavior: A Population Based Study of Child Cohort; see also Genetic diversity and intellectual disability
- Mutation and Human Exceptionalism: Our Future Genetic Load, Lynch 2016
- The Biodemography of Fertility: A Review and Future Research Frontiers, Mills & Tropf 2015
- Physical and neurobehavioral determinants of reproductive onset and success, Day et al 2016a (variants associated with early puberty have causal impact on lower educational attainment, risk-taking, earlier sex, teen pregnancy)
- Shared genetic aetiology of puberty timing between sexes and with health-related outcomes, Day et al 2015 (most correlations are bad, as predicted by life-cycle theory)
- Genomic analyses for age at menarche identify 389 independent signals and indicate BMI-independent effects of puberty timing on cancer susceptibility, Day et al 2016b
- Father Absence And Accelerated Reproductive Development, Gaydosh et al 2017
- Fertility and intelligence
- Genetic evidence for natural selection in humans in the contemporary United States, Beauchamp 2016
- Assortative mating and differential fertility by phenotype and genotype across the 20th century, Conley et al 2016a (dysgenics found in the USA, 1920-1955; Appendix); see also Changing Polygenic Penetrance on Phenotypes in the 20th Century Among Adults in the US Population, Conley et al 2016b
- in Iceland: decrease in the education polygenic score 1910-1990, Selection against variants in the genome associated with educational attainment, Kong et al 2017 (graph; media)
- in the US: decrease in the education polygenic score 1920-1960, Mortality Selection in a Genetic Sample and Implications for Association Studies, Domingue et al 2016 (graph)
- Genome-wide analysis identifies 12 loci influencing human reproductive behavior, Barban et al 2016 (supplement; genetic correlations with fewer/later offspring: rg = -0.236 and 0.712 respectively. Cross-sectional confirmation of Conley et al 2016.)
- How cognitive genetic factors influence fertility outcomes: A mediational SEM analysis, Woodley et al 2016
- The negative Flynn Effect: A systematic literature review, Dutton et al 2016
- Assortative Mating, Class, and Caste, Harpending & Cochran 2015
Genetic engineering
- Critically endangered species successfully reproduced using frozen sperm from ferret dead for 20 years: Genetic diversity of the species significantly increased, providing fresh hope for the future survival of this near-extinct species
- Breeding a better crop seed, trait by trait, using molecular breeding; see also Monsanto Is Going Organic in a Quest for the Perfect Veggie
- Genome-wide association study of leaf architecture in the maize nested association mapping population, Tian et al 2011
- Growth, efficiency, and yield of commercial broilers from 1957, 1978, and 2005, Zuidhof et al 2014 (media)
- Understanding genomic selection in poultry breeding, Wolc 2014
- The Genetic “SUPER COW” - Myth vs Reality; Cloning Cows From Steaks (and Other Ways of Building Better Cattle); Frontiers in cattle genomics
- It’s all in the gene: cows
- The Ride of Their Lives: Children prepare for the world’s most dangerous organized sport
- Tolman and Tryon: Early Research on the Inheritance of the Ability to Learn, Innis 1992
- A short critical history of the application of genomics to animal breeding, Blasco & Toro 2014
- One Hundred Years of Statistical Developments in Animal Breeding, Gianola & Rosa 2015
- Understanding the Genetics of Intelligence: Can Height Help? Can Corn Oil?, Johnson 2010
- Data and Theory Point to Mainly Additive Genetic Variance for Complex Traits, Hill et al 2008; On epistasis: why it is unimportant in polygenic directional selection, Crow 2010
- On the genetic architecture of intelligence and other quantitative traits, Hsu 2014
- U.S. Research Lab Lets Livestock Suffer in Quest for Profit: Animal Welfare at Risk in Experiments for Meat Industry
- Korea’s Sooam Biotech Is the World’s First Animal Cloning Factory
- Healthy ageing of cloned sheep, Sinclair et al 2016
- Monkey kingdom: China is positioning itself as a world leader in primate research
- The Gene Hackers: A powerful new technology enables us to manipulate our DNA more easily than ever before
- Improved CRISPR-Cas9: Safe and Effective?: High-fidelity CRISPR-Cas9 nucleases with no detectable genome-wide off-target effects / Rationally engineered Cas9 nucleases with improved specificity
- Human Gene Editing Receives Science Panel’s Support (Human Genome Editing: Science, Ethics, and Governance 2017)
- The reversal test: eliminating status quo bias in applied ethics, Bostrom & Ord 2006
- Society Is Fixed, Biology Is Mutable
- The Disease That Turned Us Into Genetic-Information Junkies
- Wrongful birth
- Father, Son, and the Double Helix
- This couple says everything they were told about their sperm donor was a lie
- Preimplantation genetic diagnosis guided by single-cell genomics, Aa et al 2013
- Prevalence and architecture of de novo mutations in developmental disorders, Deciphering Developmental Disorders Study 2017
- Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?, Shulman & Bostrom 2013
- The Rise of the Smart Sperm Shopper: How the Repository for Germinal Choice accidentally revolutionized sperm banking / The Genius Babies Grow Up: What happened to 15 children from the Nobel Prize sperm bank?; The Genius Factory, Plotz 2005
- In vitro eugenics, Sparrow 2013
- In vitro gametogenesis: just another way to have a baby?, Suter 2016
- Introducing precise genetic modifications into human 3PN embryos by CRISPR/Cas-mediated genome editing, Kang et al 2016 (see also Liang et al 2015; both describe the CRISPR state of the art as of early 2014)
- CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes, Liang et al 2015
- CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein, Tang et al 2017 (no off-target mutations and efficiencies of 20/50/100% for various edits. As I predicted, the older papers, Liang et al 2015 / Kang et al 2016 / Komor et al 2016, were not state of the art and would be improved on considerably.)
- Evolution of the Human Brain: From Matter to Mind, Hofman 2015
- Engineering the Perfect Baby
- How Humans Are Shaping Our Own Evolution
- Eugenics, Ready or Not, Salter
- The Genome Project-Write, Boeke et al 2016 (media)
Reception
- “What You Can’t Say”, Paul Graham
- “Political Diversity Will Improve Social Psychological Science”, Duarte et al 2014
- “The Social and Political Views of American Professors”, Gross & Simmons 2007
- “Whither the Blank Slate? A Report on the Reception of Evolutionary Biological Ideas among Sociological Theorists”, Horowitz et al 2014 (media)
- “Anthropologists’ views on race, ancestry, and genetics”, Wagner et al 2016
- “Evolution is Not Relevant to Sex Differences in Humans Because I Want it That Way! Evidence for the Politicization of Human Evolutionary Psychology”, Geher & Gambacorta 2010
- “How Napoleon Chagnon Became Our Most Controversial Anthropologist”
- “Contrarians, Crackpots, and Consensus”
- “Darwin, Galton and the Statistical Enlightenment”, Stigler 2010
- “Measure for Measure: The strange science of Francis Galton”
- The IQ Controversy, the Media and Public Policy (WP)
- “Mainstream Science on Intelligence”
- “2013 survey of expert opinion on intelligence”, Rindermann et al 2013; “Survey of Expert Opinion on Intelligence: Causes of International Differences in Cognitive Ability Tests”, Rindermann et al 2016
- “Ethics: Taboo genetics”
- “Science Is Not Always ‘Self-Correcting’: Fact-Value Conflation and the Study of Intelligence”, Cofnas 2015
- “U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities”
- Cochran on publishing the Ashkenazi intelligence hypothesis
- “We can’t ignore the evidence: genes affect social mobility”
- The end of environmental inequality means the rise of genetic inequality
- “2007 Interview of Linda S. Gottfredson, by Howard Wainer & Daniel Robinson”
- “Why can’t we talk about IQ?”
- “Dalton Conley Answers Your Parentology Questions”
- “Tutankhamen’s Blood”
- “Eske Willerslev Is Rewriting History With DNA”
- “Book Review: A Troublesome Inheritance by Nicholas Wade”
- “Is crime genetic? Scientists don’t know because they’re afraid to ask”
- “R. A. Fisher on Race and Human Genetic Variation”
- “Cosmic Horror: In which we confront the terrible racism of H. P. Lovecraft”
Implications
- “Three Laws of Behavior Genetics and What They Mean”, Turkheimer 2000
- “The Fourth Law of Behavior Genetics”
- “Epidemiology, genetics and the ‘Gloomy Prospect’: embracing randomness in population health research and practice”, Smith 2011
- “G = E: What GWAS Can Tell Us about the Environment”, Gage et al 2016
- “Do People Make Environments or Do Environments Make People?”, Rowe 2001
- “Top 10 Replicated Findings From Behavioral Genetics”, Plomin et al 2016
- “Why Behavioral Genetics Matters: Comment on Plomin et al (2016)”, Lee & McGue 2016
- “The Iron Law Of Evaluation And Other Metallic Rules”, Rossi 1987
- Behavioral Genetics, Plomin et al 2012
- “What 2,500 Sequenced Genomes Say About Humanity’s Future”
- “My Genome, My Self”
- “Why We’re Different: A Conversation with Robert Plomin”
See also
Appendix
Miller on neo-Luddism
From chapter 13 of Singularity Rising, James Miller 2012:
There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you use a piece of Microsoft software, you’ve got an AI system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they’re all little AI characters behaving as a group. Every time you play a video game, you’re playing against an AI system.

–Rodney Brooks, Director, MIT Computer Science and AI Laboratory288

…In the next few decades, all of my readers might have their market value decimated by intelligent machines. Should you be afraid? Fear of job-destroying technology is nothing new. During the eighteenth century, clothing manufacturers in England replaced some of their human laborers with machines. In response, a gang supposedly led by one Ned Ludd smashed a few machines owned by a sock maker. Ever since then, people opposing technology have been called Luddites. Luddites are correct in thinking that machines can cause workers to lose their jobs. But in the past, job-destroying machine production has overall greatly benefited workers.
“Destroying jobs” sounds bad - like something that should harm an economy. But the benefits of job destruction become apparent when you realize that an economy’s most valuable resource is human brains. If a businessman figured out how to make a product using less energy or fewer materials, we would applaud him because the savings could be used to produce additional goods. The same holds true when we figure out how to make something using less labor. If you used to need 1,000 workers to run your sock factory but you can now produce the same number of socks by employing only 900 workers, then you probably would (and perhaps even should) fire the other 100. Although in the short run these workers will lack jobs, in the long run they will likely find new employment and expand the economy.

The obliteration of most agricultural jobs has been a huge source of economic growth for America. In 1900, farmers made up 38% of the American workforce, whereas now they constitute less than 2% of it.289 Most of the displaced agricultural laborers found work in cities. Yet despite the massive decrease in farming jobs, the United States has steadily produced more and more food since 1900. Agricultural technology gave the American people a “free lunch”, in which we got more food with less effort, making obesity a greater threat to American health than calorie deprivation. Technology raises wages by increasing worker productivity. In a free-market economy, the value of the goods an employee produces for his employer roughly determines his wage. A farmer with a tractor produces more food than one with just a hoe. Consequently, modern farmers earn higher wages than they would if they lived in a world deprived of modern agricultural technology. In rich nations, wages have risen steadily over the last two hundred years because technology keeps increasing worker productivity. But will this trend continue? Past technologies never completely eliminated the need for humans, so fired sock workers usually found other employment. But a sufficiently advanced AI possessing a robot body might outperform people at every single task.

…If a Kurzweilian merger doesn’t occur, sentient AIs might compete directly with people in the labor market. Let’s now explore what happens to human wages if these AIs become better than humans at every task. Adam Smith, the great eighteenth-century economist, explained that everyone benefits from trade if each participant makes what he is best at. So, for example, if I’m better at making boots than you are, but you have more skill at making candles, then we would both become richer if I produced your boots and you made my candles. But what if you’re more skilled at making both boots and candles? What if, compared to you, I’m worse at doing everything? Adam Smith never answered this question, but nineteenth-century economist David Ricardo did. This question is highly relevant to our future, as an AI might be able to produce every good and service at a lower cost than any human could, and if we turn out to have no economic value to the advanced artificial intelligences, then they might (at best) ignore us, depriving humanity of any benefits of their superhuman skills.
Most people intuitively believe that mutually beneficial trades take place only when each person has an area of absolute excellence. But Ricardo’s theory of comparative advantage shows that trade can make everyone better off regardless of a person’s absolute skill because everyone has an area of comparative advantage. I’ll illustrate Ricardo’s nineteenth-century theory with a twenty-first-century example involving donuts and anti-gravity flying cars. Let’s assume that humans can’t make flying cars, but an AI can; and although people can make donuts, an AI can make them much faster than we can. Let’s pretend that at least one AI likes donuts, where donuts represent anything a human can make that an AI would want. Here’s how a human and AI could both benefit from trade: a human could offer to give an AI many donuts in return for a flying car. The trade could clearly benefit the human. If it gets enough donuts, the AI also benefits from the trade. To see how this could work, imagine that (absent trade) it takes an AI one second to make a donut. The AI could build a flying car in one minute.
- Time needed for an AI to make a donut: one second
- Time needed for an AI to make a flying car: one minute
A human then offers the following deal to the AI: “Build me a flying car and I will give you one hundred donuts. It will take you one minute to make me a flying car. In return for this flying car you get something that would cost you 100 seconds to make. Consequently, our trade saves you 40 seconds.” As the AI’s powers grew, people could still gain from trading with it. If, say, it took the AI only one nanosecond to make a donut and 60 nanoseconds to make a flying car, then it would still become better off by trading 100 donuts for 1 flying car.292 In general, as an AI becomes more intelligent, trading with humans will save it less time, but what the AI can do with this saved time goes up, especially since a smarter AI would probably gain the capacity to create entirely new categories of products. An AI might trade 100 donuts for a flying car, but an “AI+” would trade this number of donuts for a wormhole generator. Modern economists use Ricardo’s theory of comparative advantage to show how rich and poor countries can benefit by trading with each other. Understanding Ricardo’s theory causes almost all economists to favor free trade. If we substitute “humanity” for “poor countries” and “AI” for “rich countries”, then Ricardo gives us some hope for believing that even self-interested advanced artificial intelligences would want to take actions that bestow tremendous economic benefits on mankind.
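Miller’s donut arithmetic is easy to check mechanically. A minimal sketch in Python (the numbers are Miller’s; the function is purely illustrative):

```python
# Toy check of the comparative-advantage arithmetic quoted above: even though
# the AI is better at making both goods, trading donuts for a flying car
# still saves it time.
def ai_time_saved(donut_secs, car_secs, donuts_offered):
    """Seconds the AI nets by accepting `donuts_offered` donuts for one car."""
    donut_value = donuts_offered * donut_secs  # time the donuts would have cost it
    return donut_value - car_secs              # minus the time spent on the car

print(ai_time_saved(1, 60, 100))        # 40 seconds saved, as in the text
print(ai_time_saved(1e-9, 60e-9, 100))  # ~4e-08: a faster AI still gains from trade
```

The second call shows why the conclusion survives arbitrary AI speedups: only the ratio of production times matters, not their absolute size.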
MAGIC WANDS

In the previous scenario, I implicitly assumed that producing donuts doesn’t require the use of some “factor of production.” A factor of production is an essential nonhuman element needed to create a good. Factors of production for donuts include land, machines, and raw materials, and without these factors, a person (no matter how smart and hardworking) can’t make donuts. Instead of using the intimidating and boring term “factor of production,” I’m going to say that to make a good or produce a service you need the right “magic production wand,” with the wand being the appropriate set of factors of production. For example, a donut maker needs a donut wand.

If a relatively small number of wands existed and no more could be created, then all of the wands would go to AIs. Let’s say donuts sell for $1 each and an AI could use a donut wand to produce one million donuts, whereas a human using the same wand could make only a thousand donuts. A human would never be willing to pay more than $1,000 for the wand, whereas an AI would earn a huge profit if it bought a donut wand for, say, $10,000. Even if a human initially owned a donut wand, he would soon sell it to an AI. Human wand owners in this situation would benefit from AIs because AIs would greatly raise the market value of wands. Human workers who had never had a wand would become impoverished because they couldn’t produce anything.
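The wand auction reduces to a two-line revenue comparison; a sketch using Miller’s numbers (the variable names are mine):

```python
# Who ends up with the donut wand? Whoever can squeeze the most revenue from
# it can outbid everyone else. Numbers are Miller's: $1 donuts, AI output of
# 1,000,000 donuts per wand vs. human output of 1,000.
donut_price = 1.0
ai_output, human_output = 1_000_000, 1_000

human_max_bid = human_output * donut_price  # $1,000: above this a human loses money
ai_revenue = ai_output * donut_price        # $1,000,000 of revenue per wand
print(human_max_bid, ai_revenue)
# An AI paying $10,000 for the wand still clears $990,000, so every wand
# migrates to AIs, and wandless humans have nothing left to sell but labor.
```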
The Roman Republic’s conquests in the first century BC effectively stripped many Roman citizens of their production wands. In the early Republic, poor citizens had access to wands, as they were often hired to farm the land of the nobility. But after the Republic’s conquests brought in a huge number of slaves, the noblemen had their slaves use almost all of the available land wands. Cheap slave labor enriched the landowning nobility by reducing their production costs. But abundant slave labor impoverished non-landowning Romans by depriving them of wands. Cheap slave labor contributed to the fall of the Roman Republic. As Roman inequality increased, common soldiers came to rely on their generals for financial support. The troops put loyalty to their generals ahead of loyalty to the Roman state. Generals such as Sulla and Julius Caesar took advantage of their increased influence over their troops to propel themselves to absolute political power. Caesar sought to reduce the social instability caused by slaves by giving impoverished free Roman citizens new lands from the territories Rome had recently conquered. Caesar essentially created many new wands and gave them to his subjects.
Although AIs will use wands, they will also likely help create them. For example, using nanotechnology, they might be able to build dikes to reclaim land from the ocean. Or perhaps they’ll figure out how to terraform Mars, making Martian land cheap enough for nearly any human to afford. AIs could also figure out better ways to extract raw materials from the earth or invent new ways to use raw materials, resulting in each product needing fewer wands. The future of human wages might come down to a race between the number of AIs and the quantity of wands. Economist and former artificial-intelligence programmer Robin Hanson has created a highly counterintuitive theory of why (in the long run) AIs will destroy nearly all human jobs: they will end up using all of the production wands (“Economic Growth Given Machine Intelligence”).

…What I’ve written so far about the economics of emulations probably seems correct to most readers. After all, if we can make copies of extraordinarily bright and productive people and employ multiple copies of them in science and industry, then we should all get richer. The results would be similar to what would happen if a select few nursery schools became so fantastically good that each year they turned ten thousand toddlers into von Neumann-level geniuses who then immediately entered the workforce.
Robin Hanson, however, isn’t willing to rely on mere intuition when analyzing the economics of emulations. Robin realizes that if, after we have emulations, the price of computing power continues to fall at an exponential rate, then emulations will soon become extraordinarily cheap. If you combine extremely inexpensive emulations with a bit of economic theory, you get a seemingly crazy result, something that you might think is too absurd to ever happen. But Robin, ever the bullet-eater, refuses to turn away from his conclusion. Robin thinks that in the long run, emulations will drive wages down to almost zero, pushing most of the people who are unfortunate enough to rely on their wages into starvation - because emulations will kick us back into a “Malthusian trap.”

Arguably, humanity’s greatest accomplishment was escaping the Malthusian trap. Thomas Malthus, a nineteenth-century economist, believed that starvation would ultimately strike every country in the entire world. Malthus wrote that if a population is not facing starvation, people in that population will have many children who grow up, get married, and have even more children. A country with an abundance of food, Malthus wrote, is one with an increasing population. Unfortunately, in Malthus’s time, as the size of a country’s population went up, it became more difficult to feed everyone in the country. Eventually, when the population got large enough, many starved. Only when lots of people were dying of starvation would the country’s population stabilize. Consequently, Malthus believed that all countries were trapped in one of two situations:
- Many people are starving.
- The population is growing, and so many will eventually starve.
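The Malthusian dynamic Miller summarizes can be captured in a few lines; a toy simulation (all parameters invented purely for illustration, not calibrated to anything historical):

```python
# Toy Malthusian trap: population expands while food per capita exceeds
# subsistence, so living standards keep getting pulled back to subsistence
# no matter how productive the land is. All numbers are illustrative.
def per_capita_food(food_supply=1000.0, pop=100.0, subsistence=1.0,
                    growth_rate=0.05, years=200):
    for _ in range(years):
        surplus = food_supply / pop - subsistence
        pop *= 1 + growth_rate * surplus  # grow when fed, shrink when starving
    return food_supply / pop

print(per_capita_food())                    # ~1.0: back at subsistence
print(per_capita_food(food_supply=5000.0))  # 5x the food, still ~1.0 per head
```

Quintupling the food supply quintuples the equilibrium population, not the standard of living - which is the trap.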
…Pretend that someone emulates Robin and places the software in the public domain. Anyone can now freely copy e-Robin, although it still costs something to buy enough computing power to run him on - say, a hundred thousand dollars a year. A profit-maximizing business would employ an e-Robin if the e-Robin brought the business more than $100,000 a year in revenue. After Moore’s law pushes the annual hardware costs of an e-Robin down to a mere $1, then a company would hire e-Robins as long as each brought the business more than $1 per annum.

What happens to the salary of bio-Robin if you can hire an e-Robin for only a dollar? David Ricardo implicitly knew the answer to that question. Ricardo wrote that if it costs 5,000 pounds to rent a machine, and this machine could do the work of 100 men, the total wages paid to 100 men will never be greater than 5,000 pounds, because if the total wages were higher, manufacturers would fire the workers and rent the machine.296 Applying Ricardo’s theory to an economy with emulations tells us that, if an emulation can do whatever you can do, your wage will never be higher than what it costs to employ the emulation. The question now is whether, if it’s extremely cheap to run an e-Robin, these e-Robins would still earn high salaries and therefore allow the original Robin to bring home a decent paycheck. Unfortunately, the answer is no, because if an e-Robin were earning much more than what it costs to run an e-Robin, then it would be profitable for businesses to create many more of them. Companies will keep making copies of their emulations until they no longer make a profit by producing the next copy. A general rule of economics is that the more you have of something, the smaller its value. For example, even though water is inherently much more useful than diamonds, because there is so much more water than diamonds, the price of water is much lower. If anyone can freely copy e-Robin, then the free market would drive the wage of an e-Robin down to what it costs to run one.
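Ricardo’s rent rule plus exponentially falling hardware costs pins down the wage path. A sketch with assumed numbers (the $100,000 starting cost is Miller’s; the 18-month halving time and the worker’s $80,000 productivity are my illustrative assumptions, not from the book):

```python
# Wage ceiling for a worker whose perfect substitute (an emulation) can be
# rented: by Ricardo's rule, nobody pays the human more than the substitute
# costs. Starting cost is Miller's $100,000/yr; the 1.5-year halving time and
# the $80,000 productivity figure are assumptions for illustration.
import math

def em_cost(year, start_cost=100_000.0, halving_years=1.5):
    return start_cost * 0.5 ** (year / halving_years)

def bio_wage(year, productivity_value=80_000.0):
    return min(productivity_value, em_cost(year))

for y in (0, 5, 10, 25):
    print(y, round(bio_wage(y), 2))  # $80,000 -> ~$9,921 -> ~$984 -> ~$0.96

print(math.log2(100_000) * 1.5)      # ~24.9 years until an e-Robin costs $1/yr
```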
…Even if the emulations push wages to almost zero, lots of bio-humans would be much richer than they would be in a world without emulations. Though ancient Rome was in a Malthusian trap, its landowning nobility was rich. When you have lots of people and little land, the land is extremely valuable because it’s cheap to hire people to work the land and there is great demand for the food the land produces. Similarly, if there are a huge number of emulations and relatively few production wands, then the wands become extraordinarily valuable. True, the emulations will increase the number of production wands. But because it’s so cheap to copy software, if the price of hardware is low enough, there will always be a lot more labor than wands. Consequently, bio-people who own wands will become fantastically rich. Even though you would lose your job, the value of your stock portfolio might jump a thousandfold…Since bio-humans could earn almost nothing by working, our prosperity would depend on our owning property or receiving welfare payments. If bio-humans became masters of an emulation-filled Malthusian world, keeping most of the wealth for ourselves, then we would live like a landed aristocracy that receives income from taxing others and renting out our agricultural lands to poor peasants. Eliezer Yudkowsky doubts this possibility:

The prospect of biological humans sitting on top of a population of [emulations] that are smarter, much faster, and far more numerous than bios while having all the standard human drives, and the bios treating the [emulations] as standard economic [value] to be milked and traded around, and the [emulations sitting] still for this for more than a week of bio time - this does not seem historically realistic.301

Carl Shulman, one of the most knowledgeable people I’ve spoken to about Singularity issues, goes even further than Yudkowsky. He writes that since obsolescence would frequently kill entire categories of emulations, bio-humans could maintain total control of the government and economy only if the emulations regularly submitted “to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon.”302

…Robin thinks that if we behaved intelligently and maintained good relations with the emulations, bio-humans could safely take up to around 5% of the world’s economic output without having the emulations seek to destroy us. By appropriating 5% rather than the preponderance of the world’s income, we would ensure that the emulations would have less to gain from killing us and taking our stuff. But as power flows from money, having less income would make us less able to defend ourselves from any emulations that did wish to strip us of our wealth. Robin is optimistic about our ability to keep this 5%. He correctly notes that many times in history wealthy but weak groups have managed to keep their property for long periods of time. For example, many Americans over the age of seventy are rich even though they no longer contribute to economic production. These Americans, if standing without allies, would not have the slightest chance of prevailing in a fight in which Americans in their twenties joined together to steal the property of seniors. Yet it’s almost inconceivable that this would happen. Similarly, in many societies throughout human history, rich senior citizens have enjoyed secure property rights even though they would quickly lose their wealth if enough younger men colluded to take it from them. Even senior citizens whom dementia has made much less intelligent than most of their countrymen are still usually able to retain their property. Robin mentioned to me that tourists from rich countries are generally secure when they travel to poor nations even when the tourists are clearly undefended wealthy outsiders. A wealthy white American wearing expensive Western clothes could probably walk safely through most African villages even if the villagers knew that the American earns more in a day than a villager does in a year.
…As Robin points out, throughout human history most revolts broke out when conditions had improved for the poorest in society.305 And great revolutions have almost invariably been led by the rich. George Washington and Thomas Jefferson were wealthy landowners; Lenin’s father, born a serf, had risen through government service to the rank of a nobleman and married a woman of wealth, and Trotsky’s father was an illiterate but prosperous landlord; Julius Caesar was the first- or second-wealthiest individual alive at the time he overthrew the Roman Republic; and the mutiny on the Bounty was led by an officer. Perhaps bio-humans would have more to fear from the small number of wealthy emulations than from the emulations facing starvation.
Char Aznable, Mobile Suit Gundam; this line stayed with me after watching Otaku no Video - one does not care, indeed.↩
…Once we have taken on a definite form, we do not lose it until death.
–Chapter 2 of the Chuang-tzu; Thomas Cleary’s translation in Vitality, Energy, Spirit: A Taoist Sourcebook (1991), ISBN 978-0877735199↩
One of the few good bits of Kathryn Schulz’s 2011 book Being Wrong (part 1) is where she does a more readable version of Wittgenstein’s observation (PI Pt II, p. 162): “One can mistrust one’s own senses, but not one’s own belief. If there were a verb meaning ‘to believe falsely,’ it would not have any [meaningful] first person, present indicative.” Her version goes:
But before we can plunge into the experience of being wrong, we must pause to make an important if somewhat perverse point: there is no experience of being wrong.
There is an experience of realizing that we are wrong, of course. In fact, there is a stunning diversity of such experiences. As we’ll see in the pages to come, recognizing our mistakes can be shocking, confusing, funny, embarrassing, traumatic, pleasurable, illuminating, and life-altering, sometimes for ill and sometimes for good. But by definition, there can’t be any particular feeling associated with simply being wrong. Indeed, the whole reason it’s possible to be wrong is that, while it is happening, you are oblivious to it. When you are simply going about your business in a state you will later decide was delusional, you have no idea of it whatsoever. You are like the coyote in the Road Runner cartoons, after he has gone off the cliff but before he has looked down. Literally in his case and figuratively in yours, you are already in trouble when you feel like you’re still on solid ground. So I should revise myself: it does feel like something to be wrong. It feels like being right.
You’re only as young as the last time you changed your mind.
–Timothy Leary (quoted in Office Yoga: Simple Stretches for Busy People (2000) by Darrin Zeer, p. 52)↩
Everyone thinks they’ve won the Magical Belief Lottery. Everyone thinks they more or less have a handle on things, that they, as opposed to the billions who disagree with them, have somehow lucked into the one true belief system.
–R. Scott Bakker, Neuropath↩
From E.T. Jaynes’s “Bayesian Methods: General Background”:
As soon as we look at the nature of inference at this many-moves-ahead level of perception, our attitude toward probability theory and the proper way to use it in science becomes almost diametrically opposite to that expounded in most current textbooks. We need have no fear of making shaky calculations on inadequate knowledge; for if our predictions are indeed wrong, then we shall have an opportunity to improve that knowledge, an opportunity that would have been lost had we been too timid to make the calculations.
Instead of fearing wrong predictions, we look eagerly for them; it is only when predictions based on our present knowledge fail that probability theory leads us to fundamental new knowledge.
From Wittgenstein’s Culture and Value, MS 117 168 c: 17.2.1940:
You can’t be reluctant to give up your lie & still tell the truth.
As quoted on page 207 of Genna Sosonko’s Russian Silhouettes, of Mikhail Botvinnik:
He did not dissolve and he did not change. On the last pages of the book he is still the same Misha Botvinnik, pupil of the 157th School of United Workers in Leningrad and Komsomol member. He had not changed at all for seventy years, and, listening to his sincere and passionate monologue, one involuntarily thinks of Confucius:
Only the most clever and the most stupid cannot change.
I have a similar absence of story for my generally transhumanist beliefs, since I was born hearing-impaired and grew up using hearing aids. That technology could improve my natural condition, that the flesh was imperfect, or that scientific & technological advancement was a good thing were not things I ever had to be argued into believing: I received those lessons daily when I took off my hearing aids, the world went silent, and I could no longer understand anyone around me. Or for that matter, when my hearing aids were periodically replaced & upgraded. And how could I regard cyborgs as bizarre or horrible things when I myself verged on the cyborg?↩
One such argument is that miracles don’t work because we are too skeptical or don’t believe faithfully enough (this argument is also used in parapsychology, oddly enough). This is absurd, since much of the point of miracles in the Bible (and especially by saints & missionaries) was to convert infidels or skeptics or wavering believers; and especially rings hollow when people like Pat Robertson attempt to explain why miracles are generally reported from poor, superstitious, and uneducated contemporary areas like Africa:
“Cause people overseas didn’t go to Ivy League schools! [chuckles] Well, we’re so sophisticated. We think we’ve got everything figured out. We know about evolution, we know about Darwin, we know about all these things that say God isn’t real. We know about all this stuff and if we’ve been in many schools, the more advanced schools, we have been inundated with skepticism and secularism. And overseas they’re simple, humble, you tell them God loves them and they say ‘okay, he loves me.’ And you tell them God will do miracles and they say ‘okay, we believe you.’ And that’s what God’s looking for. That’s why they have miracles.”

Whatever the truth may be, I stand staunchly by this point: a kid sees so much evidence and belief in God that he ought to rationally believe. Mencius Moldbug:

Most people are theists not because they were “reasoned into” believing in God, but because they applied Occam’s razor at too early an age. Their simplest explanation for the reason that their parents, not to mention everyone else in the world, believed in God, was that God actually existed. The same could be said for, say, Australia. Dennett’s approach, which of course is probably ineffective in almost all cases, is to explain why, if God doesn’t exist, everyone knows who He is. How did this whole God thing happen? Why is it not weird that people believed in Him for 2000 years, but actually they were wrong?

“Bayesian Informal Logic and Fallacy”, Korb 2003:

In some circles (or circumstances) it is popularly believed that there are witches; in others, it is believed that hairless aliens walk the planet. If a bald appeal to the popularity of a belief were enough to establish its acceptability, then reasonable beliefs and the arguments for them would ebb and flow with the tides of fashion. Nevertheless, there seems to be some merit to the appeal to popular belief. Johnson (1996) points out a direct relevance between popular belief and (propositions concerning) the outcome of democratic elections! But even when there is no direct (or indirect) causal chain leading from popular belief to the truth of a proposition, there may well be a common cause that relates the two, making the popularity of a belief a (possibly minor) reason to shift one’s belief in a conclusion. Presumably, a world in which no one believes in witches would support a moderately smaller rational degree of belief in them than one in which many do, at least prior to the development of science…Popularity of a belief may well in general be associated with the truth of what is believed; so, lacking any clear scientific judgment (say, during the Dark Ages), common belief in the efficacy of witchcraft may well rationally lift our own belief, if only slightly. Nevertheless, given an improved understanding of natural phenomena and the fallibility of human belief formation (perhaps some time in the future!), the popular belief is no longer relevant for deciding whether witches exist or not: science accounts for both the belief in witches and their unreality.
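Korb’s “only slightly” can be made concrete with a toy Bayes calculation (every probability below is invented purely for illustration):

```python
# Toy version of Korb's common-cause argument: widespread belief in witches is
# weak evidence for witches only to the extent that real witches would produce
# more belief than folklore alone. All probabilities are invented.
p_witches = 0.01              # prior that witches exist
p_belief_if_witches = 0.95    # real witches would very likely be believed in
p_belief_if_none = 0.80       # but folklore alone also generates belief

p_belief = p_belief_if_witches * p_witches + p_belief_if_none * (1 - p_witches)
posterior = p_belief_if_witches * p_witches / p_belief
print(round(posterior, 4))  # ~0.0119: popular belief lifts the 0.01 prior only slightly
```

The closer the two likelihoods sit (folklore explains belief about as well as real witches would), the weaker the update - which is exactly Korb’s screening-off point once science supplies the better explanation.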
From a theology blog, “Trust in testimony and miracles”:

…Harris found that children do not fall into either pattern. Pace the Humean account, he found that young children are readily inclined to believe extraordinary claims, such as that there are invisible organisms on your hands that can make you ill and that you need to wash off, and that there is a man who visits you each 24th December to bring presents and candy if you are nice (see e.g., Harris & Koenig, 2006, Child Development, 77, 505-524). But children are not blindly credulous either, as Reid supposed. In a series of experiments, Harris could show that even children of 24 months pay attention to the reliability of the testifier. When they see two people, one of which systematically misnames known objects (e.g., saying “that’s a bear” while presenting a bottle), toddlers are less likely to trust later utterances by these unreliable speakers (when they name unfamiliar objects), and more likely to trust people who systematically gave objects their correct names (see e.g., Paul L. Harris and Kathleen H. Corriveau Phil. Trans. R. Soc. B 2011 366, 1179-1187.) Experiments by Mills and Keil show that 6-year-olds already take into account a testifier’s self-interest: they are more likely to believe someone who says he lost a race than someone who says he won it (Candice M. Mills and Frank C. Keil Psychological Science 2005 16: 385).

I sometimes wonder if this had anything to do with my later philosophy training; atheists make up something like 70% of respondents to the Philpapers survey, and a critical “reflective” style both correlates with and causes lower belief in God; another interesting correlation is that people on the autism spectrum (which I have often been told I must surely be on) seem to be heavily agnostic or atheistic. For discussion of these points, see:
- Atheism & the autism spectrum
- Cognitive style tends to predict religious conviction
- On the etiology of religious belief
One of my favorite books on modern religion is Luhrmann’s 2012 When God Talks Back. It’s not that it’s fantastically written or researched, although I do like books where the author has done research themselves on the topic and cites a reasonable number of claims. I like it because, as the last chapter says, it provides a large part of the answer to the “nonbeliever’s question”: in an age of zero miracles beyond the risible (“my tumor went away after I prayed!”), with no gods thundering to crowds, and with the best philosophical arguments contenting themselves with the logical possibility of the god of the philosophers, how could anyone sincerely believe in supernatural beings, and why isn’t a sort of practical agnosticism (“yeah, I don’t really believe, but church is where all my social activities are”) universal? Why are there so many fervent believers, and some religions spreading rapidly while holding fairly constant in highly developed industrialized countries? The book answers that this absence is partly illusory: they do hear God’s voice, through a variety of auto-suggestive meditative practices which collectively constitute the sensus divinitatis that atheists are accused of sadly lacking, which combined with the other factors (the intuitiveness of supernatural beings pace the research in kids and evo-psych reasoning, the suppression of analytic thought, the social benefits, etc.) maintains religion at its historical popularity in developed countries and spurs new growth in developing countries. (Africa is growing fantastically for both Islam and Christianity; and other developing countries experience their own versions of Japan’s “rush hour of the gods”.)

Excerpts from Luhrmann 2012: preface / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10↩
pg 120, When London was Capital of America, Julie Flavell 2011:
The British government hoped that a west sealed off from encroachments by whites, and where traders had to operate under the watchful eye of a British army detachment, would bring about good relations with the Indians. To the great discontent of speculators, in 1761 it was announced that all applications for land grants now had to go to London; no colonial government could approve them. The Proclamation of 1763 banned westward settlement altogether and instead encouraged colonists who wanted new lands to settle to the north in Quebec, and to the south in Florida. Within just a few years British ministers would be retreating from the Proclamation and granting western lands.
From Wikipedia:
It remains the deadliest war in American history, resulting in the deaths of 620,000 soldiers and an undetermined number of civilian casualties. According to John Huddleston, “10% of all Northern males 20-45 years of age died, as did 30% of all Southern white males aged 18-40.”↩
An Inquiry into the Nature and Causes of the Wealth of Nations, “Book IV: On Systems of Political Economy”:

When the Act of Navigation was made, though England and Holland were not actually at war, the most violent animosity subsisted between the two nations…They are as wise, however, as if they had all been dictated by the most deliberate wisdom. National animosity at that particular time aimed at the very same object which the most deliberate wisdom would have recommended, the diminution of the naval power of Holland, the only naval power which could endanger the security of England.
The Act of Navigation is not favourable to foreign commerce, or to the growth of that opulence which can arise from it. The interest of a nation in its commercial relations to foreign nations is, like that of a merchant with regard to the different people with whom he deals, to buy as cheap and to sell as dear as possible. But it will be most likely to buy cheap, when by the most perfect freedom of trade it encourages all nations to bring to it the goods which it has occasion to purchase; and, for the same reason, it will be most likely to sell dear, when its markets are thus filled with the greatest number of buyers. The Act of Navigation, it is true, lays no burden upon foreign ships that come to export the produce of British industry. Even the ancient aliens’ duty, which used to be paid upon all goods exported as well as imported, has, by several subsequent acts, been taken off from the greater part of the articles of exportation. But if foreigners, either by prohibitions or high duties, are hindered from coming to sell, they cannot always afford to come to buy; because coming without a cargo, they must lose the freight from their own country to Great Britain. By diminishing the number of sellers, therefore, we necessarily diminish that of buyers, and are thus likely not only to buy foreign goods dearer, but to sell our own cheaper, than if there was a more perfect freedom of trade. As defence, however it is of much more importance than opulence, the Act of Navigation is, perhaps, the wisest of all the commercial regulations of England.
Nietzsche writes this summary of traditional philosophers to mock them, but isn’t there a great deal of truth in it?
How could anything originate out of its opposite? Truth out of error or the pure and sunlike gaze of the sage out of lust? Such origins are impossible; whoever dreams of them is a fool.↩
Friedrich Hayek, The Constitution of Liberty (1960)↩
The rhetoric in the 1990s and early 2000s is amazing to read in retrospect; some of the claims were about as wrong as it is possible to be. For example, the CEO of Millennium Pharmaceuticals - not at all a small or fly-by-night pharmacorp - said in 2000 it had high hopes for 6 drugs in human trials and claimed in 2002 that thanks to genetic research it would have 1-2 drugs entering trials every year within 3 years, for 6-12 new drugs by 2011. As of October 2011, it has exactly 1 approved drug.↩
The subsequently cited review covers this; almost all of the famous increase in longevity by decades is due to the young:
Table 1 shows the average number of years of life remaining from 1900 to 2007 from various ages, combining both sexes and ethnic groups. From birth, life expectancy increased from 49.2 years (previously estimated at 47.3 years in these same sources) in 1900 to 77.9 in 2007, a gain of life expectancy of nearly 29 years and a prodigious accomplishment. The increase was largely due to declines in perinatal mortality and reduction in infectious diseases which affected mainly younger persons. Over this period, developed nations moved from an era of acute infectious disease to one dominated by chronic illness. As a result, life extension from age 65 was increased only 6 years over the entire 20th century; from age 75 gains were only 4.2 years, from age 85 only 2.3 years and from age 100 a single year. From age 65 over the most recent 20 years, the gain has been about a year [16].
Much confusion in longevity predictions comes from using projections of life expectancy at birth to estimate future population longevity [18]. For example, “If the pace of increase in life expectancy (from birth) for developed countries over the past two centuries continues through the 21st century, most babies born since 2000 will celebrate their 100th birthdays”29. Note from the 100-year line of Table 1 that life expectancies for centenarians would be projected to rise only one year in the 21st century, as in the 20th. Such attention-grabbing statements follow from projecting from birth rather than age 65, thus including infant and early life events to project “senior” aging, using data from women rather than both genders combined, cherry-picking the best data for each year, neglecting to compute effects of in-migration and out-migration, and others.

Remarkably, some groups show a decrease in longevity; a centenarian in 1980 has an average remaining lifespan of 2.7 years, but in 2000, that has fallen to 2.6. There was an even larger reversal in 1940 (2.1) to 1960 (1.9). Younger groups show larger gains (eg. 85-year-olds had 6.0 years in 1980 and 6.3 in 2000), evidence for compression of morbidity.↩
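The asymmetry the review describes is stark when the quoted gains are lined up side by side; the snippet below only restates numbers already given above:

```python
# Gains in remaining life expectancy, 1900-2007, from the review quoted above:
# nearly all of the headline ~29-year gain accrued at birth, not in old age.
gains = {"birth": 77.9 - 49.2, "age 65": 6.0, "age 75": 4.2,
         "age 85": 2.3, "age 100": 1.0}
for age, extra in gains.items():
    print(f"from {age:>7}: +{extra:.1f} years")
# Projecting centenarian longevity from the birth trend would forecast decades
# of extra life; the realized 20th-century gain from age 100 was a single year.
```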
If exponentials & sigmoids really do explain Amara’s observation, that implies that there ought to be some sort of “reverse” Amara effect: where an observer is at the top of the sigmoid and naively extrapolates that the long run will look very different from now - and it turns out that the long run looks identical to right now. Identifying reverse Amara effects is easier than regular Amara effects because one has the benefit of hindsight. For example, nuclear energy: it was initially a puzzling physics anomaly and a research problem at most - uranium was important to the world economy because it was used in things like the Haber-Bosch process which helped enable World War I. Even in the 1930s, it was more interesting than useful. Then suddenly nuclear reactors were demonstrated and atomic bombs transformed the world in the 1940s, leading to widespread futurism predictions of ubiquitous nuclear energy “too cheap to meter”; but further use of nuclear technologies suddenly broke down; with the honorable exception of nuclear medicine, the world in the 2010s looks pretty much identical to the 1950s. Given this, one would not be very surprised if in 2112, nuclear technologies were nothing but refinements of 2012 nuclear technologies. (A toy numerical illustration of both effects is sketched below.)↩
Monte Carlo trees are very similar to the techniques used in one of the computable implementations of AIXI, incidentally: MC-AIXI (background).↩
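As promised in the Amara footnote above, a toy sketch: linearly extrapolating the last decade of a logistic curve understates the long run early on (the Amara effect) and overstates it near the ceiling (the proposed reverse effect). The curve and its parameters are arbitrary:

```python
# Toy logistic "technology adoption" curve: extrapolating the recent trend
# misleads at both ends of a sigmoid. All parameters are arbitrary.
import math

def logistic(t, ceiling=100.0, midpoint=50.0, rate=0.2):
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def linear_forecast(t, horizon=40):
    """Extend the last decade's absolute change `horizon` years forward."""
    return logistic(t) + (logistic(t) - logistic(t - 10)) * horizon / 10

# Early observer (t=20): the trend looks flat, so the long run is underestimated.
print(round(linear_forecast(20), 1), round(logistic(60), 1))   # 1.1 vs 88.1
# Late observer (t=60): the trend looks steep, but the curve is about to plateau.
print(round(linear_forecast(60), 1), round(logistic(100), 1))  # 240.4 vs 100.0
```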
This is known as the “overhang” argument. The development and canonical form of it is unclear; it may simply be Singularitarian folklore-knowledge. Eliezer Yudkowsky, from the 2008 “Hard Takeoff”:
:Or consider the notion of sudden resource bonanzas. Suppose there’s a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs. The AI has not hit a wall - it’s still improving itself - but its self-improvement is going so slowly that, the AI calculates, it will take another fifty years for it to engineer / implement / refine just the changes it currently has in mind. Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined…
So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it. (I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it’s not our native environment.) The Future may also have more legal ways to obtain large amounts of computing power quickly.
…A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz serial speed, in contrast to neurons that spike 100 times per second on a good day. The “hundred-step rule” in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 serial steps one after the other. We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking. But the much-vaunted “massive parallelism” of the human brain is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain’s serial slowness - if your computer ran at 200Hz, you’d have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.

So that’s another kind of overhang: because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don’t know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
A still subtler kind of overhang would be represented by human failure to use our gathered experimental data efficiently.
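Yudkowsky’s serial-speed point reduces to two divisions; a sketch using only the figures quoted above:

```python
# Back-of-envelope version of the serial-speed overhang quoted above.
# The figures are the ones in Yudkowsky's passage, not measurements.
cpu_hz = 2e9      # "2GHz serial speed"
neuron_hz = 100   # neurons "spike 100 times per second on a good day"
steps = 100       # the "hundred-step rule" budget for a realtime act

print(cpu_hz / neuron_hz)  # 2e7: raw serial-speed ratio, CPU vs. neuron
print(steps / cpu_hz)      # 5e-08 s: a 100-serial-step algorithm that takes a
                           # neuron one second would take a 2GHz core 50 ns
```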
Anders Sandberg & Carl Shulman gave a 2010 talk on it; from the blog post:
We give an argument for why - if the AI singularity happens - an early singularity is likely to be slower and more predictable than a late-occurring one….
If you are on the hardware side, how much hardware do you believe will be available when the first human level AI occurs? You should expect the first AI to be pretty close to the limits of what researchers can afford: a project running on the future counterpart to Sequoia or the Google servers. There will not be much extra computing power available to run more copies. An intelligence explosion will be bounded by the growth of more hardware.
If you are on the software side, you should expect that hardware has continued to increase after passing “human equivalence”. When the AI is finally constructed after all the human and conceptual bottlenecks have passed, hardware will be much better than needed to just run a human-level AI. You have a “hardware overhang” allowing you to run many copies (or fast or big versions) immediately afterwards. A rapid and sharp intelligence explosion is possible.

This leads to our conclusion: if you are an optimist about software, you should expect an early singularity that involves an intelligence explosion that at the start grows “just” as Moore’s law (or its successor). If you are a pessimist about software, you should expect a late singularity that is very sharp. It looks like it is hard to coherently argue for a late but smooth singularity.

…Note that sharp, unpredictable singularities are dangerous. If the breakthrough is simply a matter of the right insights and experiments to finally cohere (after endless disappointing performance over a long time) and then will lead to an intelligence explosion nearly instantly, then most societies will be unprepared, there will be little time to make the AIs docile, there are strong first-mover advantages and incentives to compromise on safety. A recipe for some nasty dynamics.
Jaan Tallinn in 2011:
It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law - creating a massive hardware overhang. The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence. Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
Or Robin Hanson’s paper, “Economic Growth Given Machine Intelligence”:

Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software does only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do. An intelligence population explosion makes per-intelligence consumption fall this fast, while economic growth rates rise by an order of magnitude or more.↩
Intuitively, one would guess that the value of education and changes in its value would follow some sort of linear or exponential pattern - more is better, less is worse. If the value of a high school diploma increases, an undergraduate degree ought to increase more, and postgraduate degrees increase even more, right? A “hollowing-out” model, on the other hand, would seem to predict that there would be a sort of U-curve, where a mediocre education is not worth what it costs and one would be better off either not bothering with getting more education or sticking it out and getting a “real” degree. With that in mind, it is interesting to look at the Census data:

In fact, new Census Bureau data show that if you divide the population by education, on average wages have risen only for those with graduate degrees over the past 10 years. (On average, of course, means that some have done better and some have done worse.) Here (thanks to economist Matthew Slaughter of Dartmouth College’s Tuck School of Business) are changes in U.S. workers’ wages as reported in the latest Census Bureau report, adjusted for inflation using the CPI-U-RS measure recommended by the Bureau of Labor Statistics:
[Table: Change between 2000 and 2010 in inflation-adjusted average earnings by educational attainment]↩
Charles Murray reportedly cites statistics in Coming Apart: The State of White America 1960-2010 that the disability rate for working-class men was 2% in 1960; with more than half a century of medical progress since, the rate has not fallen but risen to 10%.↩
And there have always been rumors that the moral hazard is substantial; eg. the psychiatrist Steve Balt, “How To Retire At Age 27”, and commentary.↩
From pg 13/340 of Bowles & Jayadev 2006:
Other differences in technology (or different distributions of labor across sectors of the economy) may account for some of the differences. However, the data on supervision intensity by manufacturing sector in five sub-Saharan African countries shown in Table 4 suggest large country effects independent of the composition of output. Supervisory intensities in Zambia’s “wood and furniture” and “food processing” industries are twice and five times Ghana’s respectively. A country-and-industry fixed effects regression indicates that Zambia’s supervision intensity conditioned on industrial structure is two and a half times Ghana’s. Of course these differences could reflect within sector variation among countries in output composition or technologies, but there is no way to determine how much (if any) of the estimated country effects are due to this. We also explored if supervision intensity was related to more advanced technologies generically. However, in the advanced economy dataset (shown in Table 2) the value added of knowledge intensive sectors as a share of gross value added was substantially uncorrelated with the supervisory ratio (r = 0.14).

While the data are inadequate to provide a compelling test of the hypothesis, we thus find little evidence that the increase in guard labor in the U.S. or the differences across the countries is due to differences in output composition and technology. A more likely explanation is what we term “enforcement specialization”. Economic development proceeds through a process of specialization and increasing division of labor; the work of perpetuating a society’s institutions is no exception to this truism….Our data indicate that the United States devotes well over twice as large a fraction of its labor force to guard labor as does Switzerland. This may occur in part because peer monitoring and informal sanctioning play a larger role in Switzerland, as well as the fact that ordinary Swiss citizens have military defense capacities and duties and are not counted in our data as soldiers.

A key advantage of the Byzantine Empire, according to Edward Luttwak, was that it had an efficient tax system which enabled it to support a standing military, which was able to be trained in horse-archery all the way up to steppe-nomad standards - a task which took years for the trainees who could manage it at all. (In contrast, the US military is happy to send many soldiers into combat with only a few months of training.)↩
If you think that’s the whole military-industrial-intelligence budget, you are quite naive.↩
Witness the massive fights over the Base Realignment and Closure process and the unusual measures it required; the Congressmen aren’t stupid, they understand how valuable the military-industrial welfare is for their communities.↩
Imprisonment as a permanent punishment was used rarely prior to the Industrial Revolution, and what prisons there were often were primarily a mine or other facility of that kind; it is very expensive to imprison and only imprison someone, which is why techniques like fines (eg. Northern Europe), torture (China), exile (Greece), penal transportation (England & Australia), or execution (everyone) were the usual methods.↩
For example, MIT economist Daron Acemoglu on Inequality has all the pieces but somehow escapes the obvious conclusion:

Let’s go through your books. Your first choice is The Race between Education and Technology, published by Harvard University Press. You mentioned in an earlier email to me that it is “a must-read for anyone interested in inequality”. Tell me more.

This is a really wonderful book. It gives a masterful outline of the standard economic model, where earnings are proportional to contribution, or to productivity. It highlights in a very clear manner what determines the productivities of different individuals and different groups. It takes its cue from a phrase that the famous Dutch economist, Jan Tinbergen, coined. The key idea is that technological changes often increase the demand for more skilled workers, so in order to keep inequality in check you need to have a steady increase in the supply of skilled workers in the economy. He called this “the race between education and technology”. If the race is won by technology, inequality tends to increase; if the race is won by education, inequality tends to decrease.

The authors, Claudia Goldin and Larry Katz, show that this is actually a pretty good model in terms of explaining the last 100 years or so of US history. They give an excellent historical account of how the US education system was formed and why it was very progressive, leading to a very large increase in the supply of educated workers, in the first half of the century. This created greater equality in the US than in many other parts of the world.
They also point to three things that have changed that picture over the last 30 to 40 years. One is that technology has become even more biased towards more skilled, higher earning workers than before. So, all else being equal, that will tend to increase inequality. Secondly, we've been going through a phase of globalisation. Things such as trading with China - where low-skill labour is much cheaper - are putting pressure on low wages. Third, and possibly most important, is that the US education system has been failing terribly at some level. We haven't been able to increase the share of our youth that completes college or high school. It's really remarkable, and most people wouldn't actually guess this, but in the US, the cohorts that had the highest high-school graduation rates were the ones that were graduating in the middle of the 1960s. Our high-school graduation rate has actually been declining since then. If you look at college, it's the same thing. This is hugely important, and it's really quite shocking. It has a major effect on inequality, because it is making skills much more scarce than they should be.
Do Goldin and Katz go into the reasons why education is failing in the US?
They do discuss it, but nobody knows. It's not a monocausal, simple story. It's not that we're spending less. In fact, we are spending more. It's certainly not that college is not valued - it's valued a lot. The college premium - what college graduates earn relative to high-school graduates - has been increasing rapidly. It's not that the US is not investing enough in low-income schools. There has been a lot of investment in low-income schools. Not just free lunches, but lots of grants and other forms of spending from both states and the federal government.
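Tinbergen's "race" has a standard formalization (the CES supply-demand framework Goldin & Katz work within); as a sketch, let $\sigma$ be the elasticity of substitution between college labor $H$ and non-college labor $L$, and let $D_t$ index relative demand shifts from skill-biased technology:

$$\ln \frac{w_{H,t}}{w_{L,t}} = \frac{1}{\sigma}\left(D_t - \ln \frac{H_t}{L_t}\right)$$

The college premium - and with it wage inequality - rises when demand $D_t$ grows faster than relative supply $\ln(H_t/L_t)$ (technology wins the race) and falls when supply growth outpaces demand (education wins). This is why a stagnating graduation rate combined with continued skill-biased technical change predicts rising inequality.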
The failure of education to increase may be masked by the dying-off of the uneducated elderly, but that is an effect which can only last so long. And then we will see something that looks more like the graph below - a log graph which may already be petering out (and which looks like a diminishing-returns graph: on a log scale a straight line means a constant growth rate, so a flattening curve means every time unit sees less and less increase squeezed out, as additional efforts or larger rewards are applied to the populace):
[Figure: Log-Relative Supply of College/non-College Labor, 1963-2008]

Or consider economist Alex Tabarrok, who in a podcast identifies the problem - and blames it on a decrease in teacher quality!
You argue that the American education system, both at the K-12 and college levels, has some serious problems. Let's talk about it. What's wrong with it? Of course, education is a key part of innovation and productivity: if you don't have a well-educated populace, you are not going to have a very good economy. What's wrong with our education system?
Let's talk about K-12. Here are two remarkable facts, which have just blown me away. Right now, in the United States, people 55-64 years old are more likely to have had a high school education than 25-34 year olds. Just a little bit, but they are more likely. So, you look everywhere in the world and what do you see? You see younger people having more education than older people. Not true in the United States. That is a shocking claim. Incredible. And the reason is that the dropout rate has increased? Exactly. So, the high-school dropout rate has increased. Now, 25% of males in the United States drop out of high school, and that's increased since the 1960s, even as the prospects for a high-school dropout have gotten much worse. We've seen an increase - in the 21st century, 25% of males not graduating high school. That's mind-boggling. Why? One of the underlying facts relating to education, which is [?], is that the more education you get, on average - and I'm going to talk about why "on average" can be very misleading - the better you do: high-school graduates do better than high-school dropouts; people with some college do better than high-school graduates; people graduating from college do better than people with some college; people with graduate degrees do better than college grads. And the differences are large. Particularly if you compare a college graduate to a high-school dropout, there is an enormous difference.
So, normally we would say: well, this problem kind of solves itself - there's a natural incentive to stay in school, and I wouldn't worry about it. Why should we be worrying about it? It doesn't seem to be working. Why isn't it working, and what could be done? I think there are a few problems. One is that the quality of teachers, I think, has actually gone down. This is a case of "every silver lining has a cloud", or something like that, in that in the 1970s about half of college-educated women became teachers. This was at a time when maybe 4% were getting an MBA and less than 10% were going to medical school or law school. These smart women, they were becoming teachers. Well, as we've opened up, by 1980 you've got 30% or so of the incoming classes of MBAs, doctors, and lawyers being women. Which is great: their comparative advantage, moving into these fields, productivity, and so forth. And yet that has meant that, on average, the quality of teachers - the quality of the pool we are drawing from - has gone down in terms of SAT levels and so forth. So, I think we need to fix that.
Also relevant: Liberal Arts Grads Win Long-Term.↩

Over-Education and the Skills of UK Graduates (Chevalier & Lindley 2006):

Before the Eighties, Britain had one of the lowest participation rates in higher education across OECD countries. Consequently, increasing participation in higher education became the mantra of British governments. The proportion of school leavers reaching higher education began to slowly increase during the early Eighties, until it suddenly increased rapidly towards the end of the decade. As illustrated in Figure 1, the proportion of a cohort participating in higher education doubled over a five-year period, from 15% in 1988 to 30% by 1992…we analyse the early labour market experience of the 1995 cohort, since these people graduated at the peak of the higher education expansion period. We find a reduction in the proportion of matched graduates, compared to the 1990 cohort. This suggests that the labour market could not fully accommodate the increased inflow of new graduates, although this did not lead to an increased wage penalty associated with over-education. Hence, the post-expansion cohort had the appropriate skills to succeed in the labour market. Secondly, we are the first to investigate whether the over-education wage penalty remains even after controlling for observable graduate skills and skill mismatch, as well as unobservable characteristics. We find some evidence that genuinely over-educated individuals lack "graduate skills", mostly management and leadership skills. Additionally, the longitudinal element of the dataset is used to create a measure of time-invariant labour-market unobservable characteristics, which are also found to be an important determinant of the probability of being over-educated. Over-education impacts negatively on the wages of graduates, over and above skill levels (observed or not), which suggests that the penalty cannot be solely explained by a lack of skills but also reflects some job-idiosyncratic characteristics. It also increases unemployment by up to three months, but does not lead to an increase in job search, as the number of jobs held since graduation is not affected by current over-education status.

…Most of the UK literature has relied on self-assessment of over-education, and typically finds that 30% of graduates are over-educated. Battu et al (2000) provide one of the most comprehensive studies of over-education. The average proportion of over-educated individuals across the 36 estimates of their analysis was around one-quarter, with estimates ranging between one-fourteenth and as high as two-thirds. For the UK, Battu et al (2000) concluded that over-education had not increased in the early Nineties.
This result is supported by Groot and Maassen van den Brink (2000), whose meta-analysis of 25 studies found no tendency for a worldwide increase in the incidence of over-education despite the general improvement in the level of education, although they do suggest it has become increasingly concentrated among lower-ability workers, suggesting that over-education is not solely due to a mismatch of workers and jobs. Freeman's pioneering work on over-education (1976) suggests that over-education is a temporary phenomenon due to friction in the labour market, although UK evidence is contrary to this assumption. Dolton and Vignoles (2000) found that 38% of 1980 UK graduates were over-educated in their first job and that 30% remained in that state six years later. Over a longer period there is also evidence that over-education is a permanent feature of some graduates' careers (Dolton and Silles, 2003). For graduates, the wage penalty associated with over-education ranges between 11% and 30%; however, contrary to Freeman's view, over-education has not led to a decrease in the UK return to education in general (Machin, 1999, and Dearden et al., 2002), even if recent evidence by Walker and Zhu (2005) reports lower returns for the most recent cohort of graduates.
The general consensus is that, after controlling for differences in socio-economic and institutional factors, over-education is a consequence of unobservable elements such as heterogeneous ability and skills. There is evidence to support this from studies by Büchel and Pollmann-Schult (2001), Bauer (2002), Chevalier (2003) and Frenette (2004). Most over-educated workers are efficiently matched into appropriate jobs, and after accounting for the unobserved heterogeneity, the wage penalty for over-education is reduced. However, a remaining group of workers appear over-skilled for their jobs and suffer substantial wage penalties.
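The estimation strategy being described - checking whether the over-education wage penalty survives controls for observed skills and for time-invariant unobserved heterogeneity - amounts to a panel wage regression with individual fixed effects. A minimal sketch, with invented variable and file names (this is not Chevalier & Lindley's actual specification or data):

```python
# Over-education wage-penalty regression with individual fixed effects.
# C(person) absorbs time-invariant unobservables (ability, motivation), so the
# `overeducated` coefficient is identified from within-person changes in status.
# `graduate_panel.csv` and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("graduate_panel.csv")  # long format: one row per person-wave

fe = smf.ols("log_wage ~ overeducated + skills + C(wave) + C(person)",
             data=panel).fit()
print(fe.params["overeducated"])  # estimated log-wage penalty of over-education
```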
The Bureau of Labor Statistics revised its data-collection method for unemployment duration in January 2011. Based on the previous method, the average unemployment duration would be about 37 weeks rather than 40 weeks. For more information, see Changes to data collected on unemployment duration.↩

A Decade of Slack Labor Markets, Scott Winship, Brookings Institution Fellow; other good quotes:

From 1951 through 2007, there were never more than three unemployed workers for each job opening, and it was rare for that figure even to hit two-to-one. In contrast, there have been more than three jobseekers per opening in every single month since September 2008. The ratio peaked somewhere between five-to-one and seven-to-one in mid-2009. It has since declined, but we have far to go before we return to "normal" levels.

The bleak outlook for jobseekers has three immediate sources. The sharp deterioration beginning in early 2007 is the most dramatic feature of the above chart (the rise in job scarcity after point C in the chart, the steepness of which depends on the data source used). But two less obvious factors predated the recession. The first is the steepness of the rise in job scarcity during the previous recession in 2001 (from point A to point B), which rivaled that during the deep downturn of the early 1980s. The second is the failure between 2003 and 2007 of jobs per jobseeker to recover from the 2001 recession (the failure of point C to fall back to point A).
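The headline ratio here is simple division of two published BLS series: unemployed persons (from the CPS) over job openings (from JOLTS). A sketch with made-up illustrative numbers, not the actual series:

```python
# Jobseekers per job opening = unemployed (CPS) / job openings (JOLTS).
# The figures below are illustrative placeholders, not real BLS data.
unemployed_thousands = {"2007-01": 7100, "2009-07": 14700, "2011-05": 13900}
openings_thousands   = {"2007-01": 4500, "2009-07": 2200,  "2011-05": 3000}

for month, unemployed in unemployed_thousands.items():
    ratio = unemployed / openings_thousands[month]
    print(f"{month}: {ratio:.1f} jobseekers per opening")
```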
Social outcomes
- The Nurture Assumption, Harris 2009
- crime and aggression
- the Burt debate
- Power et al 2014
- friends are as genetically similar as fourth cousins; Domingue et al 2017
- Gottfredson 2004; Gottfredson 1997
- Shakoor et al 2016
- Baud et al 2017 (where does culture come from?)