Friday, June 16, 2017

Scientific Consensus on Cognitive Ability?


From the web site of the International Society for Intelligence Research (ISIR): a summary of the recent debate involving Charles Murray, Sam Harris, Richard Nisbett, Eric Turkheimer, Paige Harden, Razib Khan, Bo and Ben Winegard, Brian Boutwell, Todd Shackelford, Richard Haier, and a cast of thousands! ISIR is the main scientific society for researchers of human intelligence, and is responsible for the Elsevier journal Intelligence.

If you click through to the original, there are links to resources in this debate ranging from podcasts (Harris and Murray), to essays at Vox, Quillette, etc.

I found the ISIR summary via a tweet by Timothy Bates, who sometimes comments here. I wonder what he has to say about all this, given that his work has been cited by both sides :-)
TALKING ABOUT COGNITIVE ABILITY IN 2017

2017 has already seen more science-led findings on cognitive ability, and more public discussion about the origins and the social and moral implications of ability, than we have had in some time, which should be good news for those seeking to understand and grow cognitive ability. This post brings together some of these events, linking discussions of differences in reasoning that lie so near to our sense of autonomy and identity.

Middlebury
Twenty years ago, when Dr Charles Murray co-authored a book with Harvard psychologist Richard Herrnstein, he opened up a conversation about the role of ability in the fabric of society. In the process he became famous for several things (most of which he didn't say), and he and that book, The Bell Curve, came to act as lightning rods for the cauldron that compresses complex ideas and multiple people into simpler slogans. Twenty years on, the events at Middlebury campus showed this has made even speaking to a campus audience fraught with danger.

Waking Up
In the wake of this disrupted meeting, Sam Harris interviewed Dr Murray in a podcast listened to (and viewed on YouTube) by an audience of many thousands, creating a new audience and new interest in ideas about ability, its measurement, and its relevance to modern society.

Vox populi
The Harris podcast led in turn to a response, published in Vox, in which IQ, genetics, and social psychology experts Professors Eric Turkheimer, Paige Harden, and Richard Nisbett responded critically to the ideas raised (and to those not raised, which they argue are essential for informed debate on group differences).

Quillette
And that led in turn to two more responses: the first by criminologists and evolutionary psychologists Bo and Ben Winegard, Brian Boutwell, and Todd Shackelford in Quillette, and a second post at Quillette, also supportive of the Murray-Harris interaction, from past president of ISIR and expert intelligence researcher Professor Rich Haier.

And that led to a series of planned essays by Professor Harden (the first of which is now published here) and Eric Turkheimer (here). Each of these posts contains a wealth of valuable information and links to original papers, and they are responsive to each other, addressing points made in the other posts with citations, clarifications, and productive disagreement where it still exists. They’re worth reading.

The answer, in 2017, may be a cautious “Yes, perhaps we can talk about differences in human cognitive ability”. And listen, reply, and perhaps even reach a scientific consensus.

[ Added: 6/15 Vox response from Turkheimer et al. that doesn't appear to be noted in the ISIR summary. ]
In a recent post, NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence, I noted that scientific evidence overwhelmingly supports the following claims:
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size
I believe that, perhaps modulo the word near in #3, every single listed participant in the above debate would agree with these claims.

(0-3) above take no position on the genetic basis of group differences in measured cognitive ability. That is where most of the debate is focused. However, I think it's fair to say that points (0-3) form a consensus view among leading experts in 2017.
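Point 2 above (many variants, each of small effect) has a standard quantitative form: an additive model in which a polygenic score sums allele counts weighted by per-variant effect sizes. A toy sketch, with all numbers invented purely for illustration:

```python
import random

# Toy additive model of a highly polygenic trait: thousands of variants,
# each with a tiny effect. (Illustrative numbers, not real effect sizes.)
random.seed(0)
n_variants = 10_000
effects = [random.gauss(0, 0.01) for _ in range(n_variants)]      # small effects
freqs = [random.uniform(0.05, 0.95) for _ in range(n_variants)]   # allele frequencies

def polygenic_score(genotype):
    """Additive score: sum of allele counts weighted by effect sizes."""
    return sum(g * b for g, b in zip(genotype, effects))

# One simulated individual: genotype = allele count (0, 1, or 2) per variant.
person = [sum(random.random() < f for _ in range(2)) for f in freqs]
print(polygenic_score(person))
```

No single variant matters much; the trait value emerges from the sum, which is why very large samples are needed to pin down individual effects.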

As far as what I think the future will bring, see Complex Trait Adaptation and the Branching History of Mankind.

Thursday, June 15, 2017

Everything Under the Heavens and China's Conceptualization of Power



Howard French discusses his new book, Everything Under the Heavens: How the Past Helps Shape China's Push for Global Power, with Orville Schell. The book is primarily focused on the Chinese historical worldview and how it is likely to affect China's role in geopolitics.

French characterizes his book as, in part,
... an extended exploration of the history of China's conceptualization of power ... and a view as to how ... the associated contest with the United States for primacy ... in the world could play out.
These guys are not very quantitative, so let me clarify a part of their discussion that was left rather ambiguous. It is true that demographic trends are working against China, which has a rapidly aging population. French and Schell talk about a 10-15 year window during which China has to grow rich before it grows old (a well-traveled meme). From the standpoint of geopolitics this is probably not the correct or relevant analysis. China's population is ~4x that of the US. If, say, demographic trends limit this to only an effective 3x or 3.5x advantage in working age individuals, China still only has to reach ~1/3 of US per capita income in order to have a larger overall economy. It seems unlikely that there is any hard cutoff preventing China from reaching, say, 1/2 the US per capita GDP in a few decades. (Obviously a lot of this growth is still "catch-up" growth.) At that point its economy would be the largest in the world by far, and its scientific-technological workforce and infrastructure would be far larger than that of any other country.
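The back-of-envelope arithmetic can be made explicit. A minimal sketch; the ratios are the illustrative figures from the paragraph above, not forecasts:

```python
def relative_gdp(pop_ratio, per_capita_ratio):
    """Total-GDP ratio (China/US): effective working-age population
    advantage times the per capita income ratio."""
    return pop_ratio * per_capita_ratio

# If aging cuts the effective 4x population advantage to 3x, reaching
# 1/3 of US per capita income already yields parity in total GDP:
print(relative_gdp(3.0, 1 / 3))   # 1.0

# At 1/2 of US per capita income, the total economy is 1.5-1.75x larger:
print(relative_gdp(3.0, 0.5))     # 1.5
print(relative_gdp(3.5, 0.5))     # 1.75
```

The point is that a hard demographic ceiling on total economic size would require per capita convergence to stall well below these modest fractions.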




Gideon Rachman writes for the FT, so it's not surprising that his instincts seem a bit stronger when it comes to economics. He makes a number of incisive observations during this interview.

At 16min, he mentions that
I was in Beijing about I guess a month before the vote [US election], in fact when the first debates were going on, and the Chinese, I thought that official Chinese [i.e. Government Officials] in our meeting and the sort of semi-official academics were clearly pulling for Trump.
See also Trump Triumph Viewed From China.

Related: Thucydides trap, China-US relations and all that.

Tuesday, June 13, 2017

Climate Risk and AI Risk for Dummies

The two figures below come from recent posts on climate change and AI. Please read them.

The squiggles in the first figure illustrate uncertainty in how climate will change due to CO2 emissions. The squiggles in the second figure illustrate uncertainty in the advent of human-level AI.



Many people are worried about climate change because polar bears, melting ice, extreme weather, sacred Gaia, sea level rise, sad people, etc. Many people are worried about AI because job loss, human dignity, Terminator, Singularity, basilisks, sad people, etc.

You can choose to believe in any of the grey curves in the AI graph because we really don't know how long it will take to develop human level AI, and AI researchers are sort of rational scientists who grasp uncertainty and epistemic caution.

You cannot choose to believe in just any curve in a climate graph because if you pick the "wrong" curve (e.g., +1.5 degree Celsius sensitivity to a doubling of CO2, which is fairly benign, but within the range of IPCC predictions) then you are a climate denier who hates science, not to mention a bad person :-(

Oliver Stone confronts Idiocracy



See earlier post Trump, Putin, Stephen Cohen, Brawndo, and Electrolytes.

Note to morons: Russia's 2017 GDP is less than that of France, Brazil, Italy, Canada, and just above that of Korea and Australia. (PPP-adjusted they are still only #6 in the world, between Germany and Indonesia: s-s-scary!) Apart from their nuclear arsenal (which they will struggle to pay for in the future), they are hardly a serious geopolitical competitor to the US and certainly not to the West as a whole. Relax! Trump won the election, not Russia.


This is a longer (and much better) discussion of Putin with Oliver Stone and Stephen Cohen. At 17:30 they discuss the "Russian attack" on our election.

Sunday, June 11, 2017

Rise of the Machines: Survey of AI Researchers


These predictions are from a recent survey of AI/ML researchers. See SSC and also here for more discussion of the results.
When Will AI Exceed Human Performance? Evidence from AI Experts

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Another figure:


Keep in mind that the track record for this type of prediction, even by experts, is not great:


See below for the cartoon version :-)



Wednesday, June 07, 2017

Complex Trait Adaptation and the Branching History of Mankind


A new paper (94 pages!) investigates signals of recent selection on traits such as height and educational attainment (proxy for cognitive ability). Here's what I wrote about height a few years ago in Genetic group differences in height and recent human evolution:
These recent Nature Genetics papers offer more evidence that group differences in a complex polygenic trait (height), governed by thousands of causal variants, can arise over a relatively short time (~ 10k years) as a result of natural selection (differential response to varying local conditions). One can reach this conclusion well before most of the causal variants have been accounted for, because the frequency differences are found across many variants (natural selection affects all of them). Note the first sentence above contradicts many silly things (drift over selection, genetic uniformity of all human subpopulations due to insufficient time for selection, etc.) asserted by supposed experts on evolution, genetics, human biology, etc. over the last 50+ years. The science of human evolution has progressed remarkably in just the last 5 years, thanks mainly to advances in genomic technology.

Cognitive ability is similar to height in many respects, so this type of analysis should be possible in the near future. ...
The paper below conducts an allele frequency analysis on admixture graphs, which contain information about branching population histories. Thanks to recent studies, they now have enough data to run the analysis on educational attainment as well as height. Among their results: a clear signal that modern East Asians experienced positive selection (~10kya?) for + alleles linked to educational attainment (see left panel of figure above; CHB = Chinese, CEU = Northern Europeans). These variants have also been linked to neural development.
Detecting polygenic adaptation in admixture graphs

Fernando Racimo (New York Genome Center, New York, NY), Jeremy J. Berg (Department of Biological Sciences, Columbia University, New York, NY), and Joseph K. Pickrell (New York Genome Center; Columbia University). June 4, 2017

Abstract
An open question in human evolution is the importance of polygenic adaptation: adaptive changes in the mean of a multifactorial trait due to shifts in allele frequencies across many loci. In recent years, several methods have been developed to detect polygenic adaptation using loci identified in genome-wide association studies (GWAS). Though powerful, these methods suffer from limited interpretability: they can detect which sets of populations have evidence for polygenic adaptation, but are unable to reveal where in the history of multiple populations these processes occurred. To address this, we created a method to detect polygenic adaptation in an admixture graph, which is a representation of the historical divergences and admixture events relating different populations through time. We developed a Markov chain Monte Carlo (MCMC) algorithm to infer branch-specific parameters reflecting the strength of selection in each branch of a graph. Additionally, we developed a set of summary statistics that are fast to compute and can indicate which branches are most likely to have experienced polygenic adaptation. We show via simulations that this method - which we call PhenoGraph - has good power to detect polygenic adaptation, and applied it to human population genomic data from around the world. We also provide evidence that variants associated with several traits, including height, educational attainment, and self-reported unibrow, have been influenced by polygenic adaptation in different human populations.

https://doi.org/10.1101/146043
From the paper:
We find evidence for polygenic adaptation in East Asian populations at variants that have been associated with educational attainment in European GWAS. This result is robust to the choice of data we used (1000 Genomes or Lazaridis et al. (2014) panels). Our modeling framework suggests that selection operated before or early in the process of divergence among East Asian populations - whose earliest separation dates at least as far back as approximately 10 thousand years ago [42, 43, 44, 45] - because the signal is common to different East Asian populations (Han Chinese, Dai Chinese, Japanese, Koreans, etc.). The signal is also robust to GWAS ascertainment (Figure 6), and to our modeling assumptions, as we found a significant difference between East Asian and non- East-Asian populations even when performing a simple binomial sign test (Tables S4, S9, S19 and S24).
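The "simple binomial sign test" mentioned in the excerpt is easy to state: for each trait-associated variant, record whether the trait-increasing allele is at higher frequency in one population than in the other, and test whether the fraction of concordant directions departs from the 50% expected under neutrality. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the test, not taken from the paper):

```python
import math

def binom_sign_test(k, n, p=0.5):
    """Two-sided exact binomial test (method of small p-values):
    total probability of outcomes no more likely than the observed one."""
    pmf = [math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    return min(1.0, sum(q for q in pmf if q <= observed + 1e-12))

# Hypothetical example: at 120 independent education-associated SNPs,
# suppose the trait-increasing allele is at higher frequency in
# population A than in population B for 78 of them.
p_value = binom_sign_test(78, 120)
print(p_value)  # well below 0.01: such concordance is unlikely under neutrality
```

Drift pushes each allele up or down independently, so a consistent directional excess across many loci is the signature of polygenic selection.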

Sunday, June 04, 2017

Epistemic Caution and Climate Change

I have not, until recently, invested significant time in trying to understand climate modeling. These notes are primarily for my own use, however I welcome comments from readers who have studied this issue in more depth.

I take a dim view of people who express strong opinions about complex phenomena without having understood the underlying uncertainties. I have yet to personally encounter anyone who claims to understand all of the issues discussed below, but I constantly meet people with strong views about climate change.

See my old post on epistemic caution Intellectual honesty: how much do we know?
... when it comes to complex systems like society or economy (and perhaps even climate), experts have demonstrably little predictive power. In rigorous studies, expert performance is often no better than random.  
... worse, experts are usually wildly overconfident about their capabilities. ... researchers themselves often have beliefs whose strength is entirely unsupported by available data.
Now to climate and CO2. AFAIU, the heating effect due to an increasing CO2 concentration is only a logarithmic function (all the absorption is in a narrow frequency band). The main heating effects in climate models come from secondary effects, such as the water vapor distribution in the atmosphere, which are not calculable from first principles, nor under good experimental/observational control. Certainly any "catastrophic" outcomes would have to result from these secondary feedback effects.

The first paper below gives an elementary calculation of direct effects from atmospheric CO2. This is the "settled science" part of climate change -- it depends on relatively simple physics. The prediction is about 1 degree Celsius of warming from a doubling of CO2 concentration. Anything beyond this is due to secondary effects which, in their totality, are not well understood -- see second paper below, about model tuning, which discusses rather explicitly how these unknowns are dealt with.
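That direct, no-feedback number can be reproduced in a few lines. The sketch below uses the widely quoted simplified forcing formula ΔF ≈ 5.35 ln(C/C0) W/m² and a no-feedback (Planck) response of roughly 3.2 W/m² per kelvin; both constants are stock values from the climate literature, not taken from the paper itself:

```python
import math

def co2_forcing(concentration_ratio):
    """Radiative forcing in W/m^2 from the common logarithmic
    approximation: dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(concentration_ratio)

def no_feedback_warming(concentration_ratio, planck_response=3.2):
    """Equilibrium warming in K with all feedbacks switched off:
    forcing divided by the Planck (blackbody) response in W/m^2 per K."""
    return co2_forcing(concentration_ratio) / planck_response

print(f"{co2_forcing(2.0):.2f} W/m^2")       # 3.71 W/m^2 for a doubling
print(f"{no_feedback_warming(2.0):.2f} K")   # 1.16 K, the ~1 degree quoted above
```

The debated catastrophe scenarios live entirely in the gap between this ~1 degree direct effect and the feedback-amplified outputs of full climate models.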
Simple model to estimate the contribution of atmospheric CO2 to the Earth’s greenhouse effect
Am. J. Phys. 80, 306 (2012)
http://dx.doi.org/10.1119/1.3681188

We show how the CO2 contribution to the Earth’s greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the “climate sensitivity” (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere’s temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
From Conclusions:
... The question of feedbacks, in its broadest sense, is the whole question of climate change: namely, how much and in which way can we expect the Earth to respond to an increase of the average surface temperature of the order of 1 degree, arising from an eventual doubling of the concentration of CO2 in the atmosphere? And what further changes in temperature may result from this response? These are, of course, questions for climate scientists to resolve. ...
The paper below concerns model tuning. It should be apparent that there are many adjustable parameters hidden in any climate model. One wonders whether the available data, given its own uncertainties, can constrain this high dimensional parameter space sufficiently to produce predictive power in a rigorous statistical sense.

The first figure below illustrates how different choices of these parameters can affect model predictions. Note the huge range of possible outcomes! The second figure below illustrates some of the complex physical processes which are subsumed in the parameter choices. Over longer timescales, (e.g., decades) uncertainties such as the response of ecosystems (e.g., plant growth rates) to increased CO2 would play a role in the models. It is obvious that we do not (may never?) have control over these unknowns.
THE ART AND SCIENCE OF CLIMATE MODEL TUNING

Bulletin of the American Meteorological Society, March 2017, p. 589

... Climate model development is founded on well-understood physics combined with a number of heuristic process representations. The fluid motions in the atmosphere and ocean are resolved by the so-called dynamical core down to a grid spacing of typically 25–300 km for global models, based on numerical formulations of the equations of motion from fluid mechanics. Subgrid-scale turbulent and convective motions must be represented through approximate subgrid-scale parameterizations (Smagorinsky 1963; Arakawa and Schubert 1974; Edwards 2001). These subgrid-scale parameterizations include coupling with thermodynamics; radiation; continental hydrology; and, optionally, chemistry, aerosol microphysics, or biology.

Parameterizations are often based on a mixed, physical, phenomenological and statistical view. For example, the cloud fraction needed to represent the mean effect of a field of clouds on radiation may be related to the resolved humidity and temperature through an empirical relationship. But the same cloud fraction can also be obtained from a more elaborate description of processes governing cloud formation and evolution. For instance, for an ensemble of cumulus clouds within a horizontal grid cell, clouds can be represented with a single-mean plume of warm and moist air rising from the surface (Tiedtke 1989; Jam et al. 2013) or with an ensemble of such plumes (Arakawa and Schubert 1974). Similar parameterizations are needed for many components not amenable to first-principle approaches at the grid scale of a global model, including boundary layers, surface hydrology, and ecosystem dynamics. Each parameterization, in turn, typically depends on one or more parameters whose numerical values are poorly constrained by first principles or observations at the grid scale of global models. Being approximate descriptions of unresolved processes, there exist different possibilities for the representation of many processes. The development of competing approaches to different processes is one of the most active areas of climate research. The diversity of possible approaches and parameter values is one of the main motivations for model inter-comparison projects in which a strict protocol is shared by various modeling groups in order to better isolate the uncertainty in climate simulations that arises from the diversity of models (model uncertainty). ...

... All groups agreed or somewhat agreed that tuning was justified; 91% thought that tuning global-mean temperature or the global radiation balance was justified (agreed or somewhat agreed). ... the following were considered acceptable for tuning by over half the respondents: atmospheric circulation (74%), sea ice volume or extent (70%), and cloud radiative effects by regime and tuning for variability (both 52%).






Here is Steve Koonin, formerly Obama's Undersecretary for Science at DOE and a Caltech theoretical physicist, calling for a "Red Team" analysis of climate science, just a few months ago (un-gated link):
WSJ: ... The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed.

Note Added: In 2014 Koonin ran a one day workshop for the APS (American Physical Society), inviting six leading climate scientists to present their work and engage in an open discussion. The APS committee responsible for reviewing the organization's statement on climate change were the main audience for the discussion. The 570+ page transcript, which is quite informative, is here. See Physics Today coverage, and an annotated version of Koonin's WSJ summary.

Below are some key questions Koonin posed to the panelists in preparation for the workshop. After the workshop he declared that The idea that “Climate science is settled” runs through today’s popular and policy discussions. Unfortunately, that claim is misguided.
The estimated equilibrium climate sensitivity to CO2 has remained between 1.5 and 4.5 degrees Celsius in the IPCC reports since 1979, except for AR4 where it was given as 2-5.5.

What gives rise to the large uncertainties (factor of three!) in this fundamental parameter of the climate system?

How is the IPCC’s expression of increasing confidence in the detection/attribution/projection of anthropogenic influences consistent with this persistent uncertainty?

Wouldn’t detection of an anthropogenic signal necessarily improve estimates of the response to anthropogenic perturbations?
I seriously doubt that the process by which the 1.5 to 4.5 range is computed is statistically defensible. From the transcript, it appears that IPCC results of this kind are largely the result of "Expert Opinion" rather than a specific computation! It is rather curious that the range has not changed in 30+ years, despite billions of dollars spent on this research. More here.

Saturday, June 03, 2017

Python Programming in one video



Putting this here in hopes I can get my kids to watch it at some point 8-)

Please recommend similar resources in the comments!

Wednesday, May 31, 2017

The mystery of genius at Slate Star Codex


Three excellent posts at Slate Star Codex. Don't miss the comments -- there are over a thousand, many of them very good.

THE ATOMIC BOMB CONSIDERED AS HUNGARIAN HIGH SCHOOL SCIENCE FAIR PROJECT
A group of Manhattan Project physicists created a tongue-in-cheek mythology where superintelligent Martian scouts landed in Budapest in the late 19th century and stayed for about a generation, after which they decided the planet was unsuitable for their needs and disappeared. The only clue to their existence were the children they had with local women.

The joke was that this explained why the Manhattan Project was led by a group of Hungarian supergeniuses, all born in Budapest between 1890 and 1920. These included Manhattan Project founder Leo Szilard, H-bomb creator Edward Teller, Nobel-Prize-winning quantum physicist Eugene Wigner, and legendary polymath John von Neumann, namesake of the List Of Things Named After John Von Neumann.

The coincidences actually pile up beyond this. Von Neumann, Wigner, and possibly Teller all went to the same central Budapest high school at about the same time, leading a friend to joke about the atomic bomb being basically a Hungarian high school science fair project. ...
See also

HUNGARIAN EDUCATION II: FOUR NOBEL TRUTHS


and

HUNGARIAN EDUCATION III: MASTERING THE CORE TEACHINGS OF THE BUDAPESTIANS

... Laszlo Polgar studied intelligence in university, and decided he had discovered the basic principles behind raising any child to be a genius. He wrote a book called Bring Up Genius and recruited an interested woman to marry him so they could test his philosophy by raising children together. He said a bunch of stuff on how ‘natural talent’ was meaningless and so any child could become a prodigy with the right upbringing.

This is normally the point where I’d start making fun of him. Except that when he trained his three daughters in chess, they became the 1st, 2nd, and 6th best female chess players in the world, gaining honors like “youngest grandmaster ever” and “greatest female chess player of all time”. Also they spoke seven languages, including Esperanto.

Their immense success suggests that education can have a major effect even on such traditional genius-requiring domains as chess ability. How can we reconcile that with the rest of our picture of the world, and how obsessed should we be with getting a copy of Laszlo Polgar’s book? ...

Friday, May 26, 2017

Borges, blogging, and a vast circle of invisible friends


This blog gets about 100k page views per month. My sense is that there are a lot of additional views through RSS feeds and social media (FB, G+, etc.), but those are hard to track. Most of the hits are on the main landing page, with a smaller fraction going to a specific article. I'd guess that someone hitting the landing page looks at a few posts, so there are probably at least 200k article views per month. I write somewhat fewer than 20 posts per month, which suggests that a typical post is read ~10k times. Some outlier posts get a lot of traffic from inbound links and search engine results even years after they were written. These have far more than 10k cumulative views, according to logs. From cookies, I can see that there are many thousands of regular readers (i.e., who visit at least several times a month).
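Spelling out the arithmetic above (the inputs are the rough figures from the paragraph):

```python
page_views_per_month = 100_000
posts_per_month = 20

# Assume a landing-page visit covers about two posts on average:
article_views_per_month = 2 * page_views_per_month

views_per_post = article_views_per_month / posts_per_month
print(views_per_post)  # 10000.0 -- the ~10k reads-per-post estimate
```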

Is there any better way to estimate impact/reach than what I've described above?

For comparison, I was told that a serious non-fiction book on the NY Times Best Seller list might sell ~10k copies. So it seems possible my blog has a significantly greater reach than what I could expect from writing a book. I've thought about writing books at various times, but have always been too busy. I fantasize about writing more when I retire, or later in my career :-)

When I attend meetings or conferences, I often bump into people I don't know who tell me they read my blog. This seems to be true whether the participants are scientists, technologists, investors, or academics. I'm guessing that for every person who tells me that they're a reader, there must be many more who are readers but don't volunteer the information. If you ever see me in person, please come right up and say hello! :-)

I've been told by some people that they have tried to read this blog but find it hard to understand. I suppose that regular readers are mostly well above average in intelligence.

Borges once said

... the life of a writer is a lonely one. You think you are alone, and as the years go by, if the stars are on your side, you may discover that you are at the center of a vast circle of invisible friends whom you will never get to know, but who love you. And that is an immense reward.

Thursday, May 25, 2017

Von Neumann, in his head


From Robert Jungk's Brighter than a Thousand Suns: A Personal History of the Atomic Scientists.

The H-bomb project:
... Immediately after the White House directive the Theoretical Division at Los Alamos had started calculations for the new bomb.

... There was a meeting in Teller's office with Fermi, von Neumann, and Feynman ... Many ideas were thrown back and forth and every few minutes Fermi or Teller would devise a quick numerical check and then they would spring into action. Feynman on the desk calculator, Fermi with the little slide rule he always had with him, and von Neumann, in his head. The head was usually first, and it is remarkable how close the three answers always checked.
The MANIAC:
... When von Neumann released his last invention for use, it aroused the admiration of all who worked with it. Carson Mark, head of the Theoretical Division at Los Alamos, recollects that 'a problem which would have otherwise kept three people busy for three months could be solved by the aid of this computer, worked by the same three people, in about ten hours. The physicist who had set the task, instead of having to wait for a quarter of a year before he could get on, received the data he required for his further work the same evening. A whole series of such three months' calculations, narrowed down to a single working day, were needed for the production of the hydrogen bomb.'

It was a calculating machine, therefore, which was the real hero of the work on the construction of the bomb. It had a name of its own, like all the other electronic brains. Von Neumann had always been fond of puns and practical jokes. When he introduced his machine to the Atomic Energy Commission under the high-sounding name of 'Mathematical Analyser, Numerical Integrator and Computer', no one noticed anything odd about this designation except that it was rather too ceremonious for everyday use. It was not until the initial letters of the six words were run together that those who used the miraculous new machine realized that the abbreviation spelled 'maniac'.

Wednesday, May 24, 2017

AI knows best: AlphaGo "like a God"


Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-)  Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.

In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?

There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
NYTimes: ... “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”

... After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.

“AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”
On earlier encounters with AlphaGo:
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Monday, May 22, 2017

NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence


The Nature Genetics paper below made a big splash in today's NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence. The picture above is of a UK Biobank storage facility for blood (DNA) samples.

The results are not especially surprising to people who have been following the subject, but this is the largest sample of genomes and cognitive scores yet analyzed (~80k individuals). SSGAC has assembled a much larger dataset (~750k, soon to be over 1M; over 600 genome-wide significant SNP hits), but are working with a proxy phenotype for cognitive ability: years of education.
Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence

Nature Genetics (2017) doi:10.1038/ng.3869
Received 10 January 2017 Accepted 24 April 2017 Published online 22 May 2017

Intelligence is associated with important economic and health-related life outcomes1. Despite intelligence having substantial heritability2 (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered3, 4, 5. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability2 for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.
Perhaps the most interesting aspect of this study is the further evidence it provides that many (the vast majority?) of the hits discovered by SSGAC are indeed correlated with cognitive ability (as opposed to other traits such as Conscientiousness, which might influence educational attainment without affecting intelligence):
To examine the robustness of the 336 SNPs and 47 genes that reached genome-wide significance in the primary analyses, we sought replication. Because there are no reasonably large GWAS for intelligence available and given the high genetic correlation with educational attainment, which has been used previously as a proxy for intelligence7, we used the summary statistics from the latest GWAS for educational attainment21 for proxy-replication (Online Methods). We first deleted overlapping samples, resulting in a sample of 196,931 individuals for educational attainment. Of the 336 top SNPs for intelligence, 306 were available for look-up in educational attainment, including 16 of the independent lead SNPs. We found that the effects of 305 of the 306 available SNPs in educational attainment were sign concordant between educational attainment and intelligence, as were the effects of all 16 independent lead SNPs (exact binomial P < 10−16; Supplementary Table 14). ...
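The quoted sign test is easy to verify directly. As a quick check (not code from the paper): under the null hypothesis that effect directions are random, the probability of 305 or more of 306 SNPs agreeing in sign is a one-sided binomial tail, computable in a few lines:

```python
import math

# If SNP effect directions were random (p = 1/2), the chance that 305+
# of 306 SNPs agree in sign between the two traits is a binomial tail.
n, k = 306, 305
p_tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Consistent with the reported exact binomial P < 10^-16 (it is in fact
# astronomically smaller).
assert p_tail < 1e-16
```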
Carl Zimmer did a good job with the Times story. The basic ideas, that
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size 
are now supported by overwhelming scientific evidence. Nevertheless, they are and have been heavily contested by anti-Science ideologues.

For further discussion of points (0-3), see my article On the genetic architecture of intelligence and other quantitative traits.

Sunday, May 21, 2017

Contingency, History, and the Atomic Bomb

How Alexander Sachs, acting on behalf of Szilard and Einstein, narrowly convinced FDR to initiate the atomic bomb project. History sometimes hangs on a fragile thread: had the project been delayed a year, atomic weapons might not have been used in WWII. Had the project completed a year earlier, the bombs might have been used against Germany.

See also A Brief History of the Future, as told to the Masters of the Universe.


Excerpts below are from Robert Jungk's Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. (Note the book contains inaccuracies concerning the wartime role of German physicists such as Weizsacker and Heisenberg.)

Alexander Sachs:
... This international financier could always obtain entry to the White House, for he had often amazed Roosevelt by his usually astonishingly accurate forecasts of economic events. Ever since 1933 Sachs had been one of the unofficial but extremely influential advisers of the American President, all of whom had to possess, by F. D. R.'s own definition, 'great ability, physical vitality, and a real passion for anonymity'.


... It was nearly ten weeks before Alexander Sachs at last found an opportunity, on October 11, 1939, to hand President Roosevelt, in person, the letter composed by [Leo] Szilard and signed by [Albert] Einstein at the beginning of August [1939]. In order to ensure that the President should thoroughly appreciate the contents of the document and not lay it aside with a heap of other papers awaiting attention, Sachs read to him, in addition to the message and an appended memorandum by Szilard, a further much more comprehensive statement by himself. The effect of these communications was by no means so overpowering as Sachs had expected. Roosevelt, wearied by the prolonged effort of listening to his visitor, made an attempt to disengage himself from the whole affair. He told the disappointed reader that he found it all very interesting but considered government intervention to be premature at this stage.

Sachs, however, was able, as he took his leave, to extort from the President the consolation of an invitation to breakfast the following morning. "That night I didn't sleep a wink," Sachs remembers. "I was staying at the Carlton Hotel [two blocks north of the White House]. I paced restlessly to and fro in my room or tried to sleep sitting in a chair. There was a small park quite close to the hotel. Three or four times, I believe, between eleven in the evening and seven in the morning, I left the hotel, to the porter's amazement, and went across to the park. There I sat on a bench and meditated. What could I say to get the President on our side in this affair, which was already beginning to look practically hopeless? Quite suddenly, like an inspiration, the right idea came to me. I returned to the hotel, took a shower and shortly afterwards called once more at the White House."

Roosevelt was sitting alone at the breakfast table, in his wheel chair, when Sachs entered the room. The President inquired in an ironical tone:

"What bright idea have you got now? How much time would you like to explain it?"

Dr. Sachs says he replied that he would not take long.

"All I want to do is to tell you a story. During the Napoleonic wars a young American inventor came to the French Emperor and offered to build a fleet of steamships with the help of which Napoleon could, in spite of the uncertain weather, land in England. Ships without sails? This seemed to the great Corsican so impossible that he sent [Robert] Fulton away. In the opinion of the English historian Lord Acton, this is an example of how England was saved by the shortsightedness of an adversary. Had Napoleon shown more imagination and humility at that time, the history of the nineteenth century would have taken a very different course."

After Sachs finished speaking the President remained silent for several minutes. Then he wrote something on a scrap of paper and handed it to the servant who had been waiting at table. The latter soon returned with a parcel which, at Roosevelt's order, he began slowly to unwrap. It contained a bottle of old French brandy of Napoleon's time, which the Roosevelt family had possessed for many years. The President, still maintaining a significant silence, told the man to fill two glasses. Then he raised his own, nodded to Sachs and drank to him.

Next he remarked: "Alex, what you are after is to see that the Nazis don't blow us up?"

"Precisely."

It was only then that Roosevelt called in his attaché, [Brigadier] General [Edwin] "Pa" Watson, and addressed him—pointing to the documents Sachs had brought—in words which have since become famous:

"Pa, this requires action!"
More on the challenges:
Teller criticizes as follows one of these excessively rosy views of the early history of the American atom bomb: 'There is no mention of the futile efforts of the scientists in 1939 to awaken the interest of the military authorities in the atomic bomb. The reader does not learn about the dismay of scientists faced with the necessity of planned research. He does not find out about the indignation of engineers asked to believe in the theory and on such an airy basis to construct a plant.'

Wigner remembers the resistance. 'We often felt as though we were swimming in syrup,' he remarks. Boris Pregel, a radium expert, without whose disinterested loan of uranium the first experiments at Columbia University would have been impossible, comments: 'It is a wonder that after so many blunders and mistakes anything was ever accomplished at all.' Szilard still believes today that work on the uranium project was delayed for at least a year by the short-sightedness and sluggishness of the authorities. Even Roosevelt's manifest interest in the plan scarcely accelerated its execution. ...

Saturday, May 20, 2017

American Psycho

Author Bret Easton Ellis, Christian Bale (Patrick Bateman), and Director Mary Herron discuss American Psycho.




This is a rare 1999 documentary about Ellis. It mixes interviews with dramatizations of scenes from his writing. The American Psycho bits are terrible, especially compared to the actual movie, which was released in 2000. Rewatching the movie today, my main reaction is that Bale is simply brilliant as Patrick Bateman: e.g., Hip to be Square (reprised by Huey Lewis himself here).




It seems likely that the title American Psycho is partly an homage to the late 1970s film American Gigolo, which had a big impact on Ellis. (I highly recommend BEE's podcast to anyone interested in film or literature.)
Rolling Stone: 'American Psycho' at 25

Before American Psycho came out, 25 years ago this month, it was already the most controversial novel of the Nineties. Its vivid depictions of gruesome murders of women, men, children and animals preceded it wherever it went. The original publisher dropped it and told author Bret Easton Ellis to keep the money — but to please go away. The New York Times titled its book review "Snuff This Book!" On the opposite coast, the Los Angeles Times begrudgingly wrote that "Free Speech Protects Even an 'American Psycho.'" The National Organization of Women attempted to organize boycotts. Stores refused to order it. And Ellis, who turned 27 around its release, received death threats. ...

Has the way that Patrick Bateman has become a cult character surprised you?
What if I said, no? [Pause.] I'm kidding [laughs]. Of course, it was surprising to me. American Psycho was an experimental novel. I wasn't really quite sure, nor did I care, how many copies it was going to sell. I really didn't care who connected with it.

Why is that?
I created this guy who becomes this emblem for yuppie despair in the Reagan Eighties – a very specific time and place ...

... Beginning in the Eighties, men were prettifying themselves in ways they hadn't before. And they were taking on a lot of the tropes of gay male culture and bringing it into straight male culture — in terms of grooming, looking a certain way, going to the gym, waxing, and being almost the gay porn ideals. You can track that down to the way Calvin Klein advertised underwear, a movie like American Gigolo, the re-emergence of Gentlemen's Quarterly. All of these things really informed American Psycho when I was writing it. So that seemed to me much more interesting than whether he is or is not a serial killer, because that really is a small section of the book. ...

... Patrick Bateman, who was obsessed with Donald Trump, would likely be pretty happy with his campaign.
Or would he be embarrassed? Trump today isn't the Trump of 1987. He's not the Trump of Art of the Deal. He seemed much more elitist in '87, '88. Now he seems to be giving a voice to white, angry, blue-collar voters. I think, in a way, Patrick Bateman may be disappointed by how Trump is coming off and who he's connecting with.

To the guys that I was talking to in the Eighties when I was researching American Psycho, Donald Trump was an aspirational figure. That's why the jokes are throughout the book. It wasn't like I pulled that out of my hat; that was happening. And so I just thought it was funny that "OK, well, Patrick Bateman's gonna be obsessed with Donald Trump. He's gonna want to aspire to be Donald Trump." And I don't know if he would think that today. ...

Thursday, May 18, 2017

Comey under oath: no obstruction of justice



Almost everything we hear from the media these days is simply motivated reasoning -- i.e., partisan nonsense.
Wikipedia: ...When people form and cling to false beliefs despite overwhelming evidence, the phenomenon is labeled "motivated reasoning". In other words, "rather than search rationally for information that either confirms or disconfirms a particular belief, people actually seek out information that confirms what they already believe."
CNN transcript of a May 3 Senate hearing -- weeks after the alleged conversation with Trump discussed in Comey's memo.
HIRONO: So if the Attorney General or senior officials at the Department of Justice opposes a specific investigation, can they halt that FBI investigation?

COMEY: In theory yes.

HIRONO: Has it happened?

COMEY: Not in my experience. Because it would be a big deal to tell the FBI to stop doing something that -- without an appropriate purpose. I mean where oftentimes they give us opinions that we don't see a case there and so you ought to stop investing resources in it. But I'm talking about a situation where we were told to stop something for a political reason, that would be a very big deal. It's not happened in my experience.
ZeroHedge.

Tuesday, May 16, 2017

Walter Pitts and Neural Nets


Pitts is one of the least studied geniuses of the early information age. See also Wikipedia, Nautil.us.
Cabinet Magazine: There are no biographies of Walter Pitts, and any honest discussion of him resists conventional biography. Pitts was among the original participants in the mid-century cybernetics conferences, though he began his association with that group of scientists when he was only a teenager. His intellectual strengths contributed to some central cybernetic theories reliant on logical structures and universal elements. Yet the sketchy details of his life provide the kind of evidence that resists the structure of those theories.

The mid-century cybernetics conferences, or Macy Conferences (1946-53), theorized about automatic or self-balancing systems in biological and technological ecologies. Warren McCulloch, Norbert Wiener, John von Neumann, Margaret Mead, and Gregory Bateson were among those invited, and their discussions mixed methodologies and evidence from a broad range of disciplines including anthropology, neurophysiology, mathematics, logic, and computational networks. ...

Pitts entered into the company of this group as mysteriously as he left it. None of his colleagues knew very much about him. He came from a reportedly troubled working class family in Detroit and entered McCulloch's home as a live-in collaborator when he was only seventeen. Even at this early age, he had already worked with prominent scientists in logic and mathematical biology. Three years later in 1943, he co-authored, with Warren McCulloch, a theory of neurophysiological organization that would provoke sustained debate throughout the years of the cybernetics conferences. The paper proposed a logical structure for neural nets as coded circuitry, thus supporting machinic theories of mind as well as ambitions involving artificial intelligence. Pitts was only in his 20s during the cybernetics conferences of the 1940s and though he was an autodidact, he had already mastered or could almost instantly absorb the subject matter from the several fields of study engaged by the conferences. Not only did he circulate among prominent scientists, but those scientists either vied for his collaboration or deeply valued his critique and commentary. Pitts could, without fail, identify errors in logical thinking and he simply did not pursue information or discourse outside of the realm of logic. He considered the search for universal elements of mental structure to be not just a species of speculation, but a hard science of the brain and the mind. ...

Several of the scientists and psychiatrists of the group thought Pitts was schizophrenic and potentially very ill. The prominent psychiatrists who moved in his circles were more and more baffled by his reclusive shyness and his apparent personal discomfort. Later Pitts began to live on his own in Cambridge where he may have experimented with homemade drugs. No one seems to know much about his later life. He died in 1969 and some have speculated that he committed suicide. ...
McCulloch–Pitts (MCP) neuron:



Overview: In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published "A logical calculus of the ideas immanent in nervous activity" in the Bulletin of Mathematical Biophysics 5:115-133. In this paper McCulloch and Pitts tried to understand how the brain could produce highly complex patterns by using many basic cells that are connected together. These basic brain cells are called neurons, and McCulloch and Pitts gave a highly simplified model of a neuron in their paper. The McCulloch and Pitts model of a neuron, which we will call an MCP neuron for short, has made an important contribution to the development of artificial neural networks -- which model key features of biological neurons.

The original MCP Neurons had limitations. Additional features were added which allowed them to "learn." The next major development in neural networks was the concept of a perceptron which was introduced by Frank Rosenblatt in 1958. Essentially the perceptron is an MCP neuron where the inputs are first passed through some "preprocessors," which are called association units. These association units detect the presence of certain specific features in the inputs. In fact, as the name suggests, a perceptron was intended to be a pattern recognition device, and the association units correspond to feature or pattern detectors.
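The threshold logic described above can be sketched in a few lines of Python. This is an illustrative toy, not the full 1943 formalism (which also handled inhibitory inputs and temporal sequences): an MCP neuron fires when the weighted sum of its binary inputs reaches a threshold, which already suffices to implement Boolean logic gates.

```python
# Minimal sketch of a McCulloch-Pitts (MCP) threshold neuron:
# the unit outputs 1 when the weighted sum of its binary inputs
# reaches the threshold, 0 otherwise.
def mcp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must fire (threshold 2).
assert mcp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcp_neuron([1, 0], [1, 1], threshold=2) == 0

# OR gate: either input suffices (threshold 1).
assert mcp_neuron([0, 1], [1, 1], threshold=1) == 1
assert mcp_neuron([0, 0], [1, 1], threshold=1) == 0
```

Rosenblatt's perceptron kept this same threshold unit but added trainable weights and preprocessing "association units" in front of it.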
This is an old interview with McCulloch. Nature vs Nurture and IQ starting at ~4min ;-)



Nautil.us: ... Thus formed the beginnings of the group who would become known as the cyberneticians, with Wiener, Pitts, McCulloch, Lettvin, and von Neumann its core. And among this rarified group, the formerly homeless runaway stood out. “None of us would think of publishing a paper without his corrections and approval,” McCulloch wrote. “[Pitts] was in no uncertain terms the genius of our group,” said Lettvin. “He was absolutely incomparable in the scholarship of chemistry, physics, of everything you could talk about history, botany, etc. When you asked him a question, you would get back a whole textbook … To him, the world was connected in a very complex and wonderful fashion.”

... In a letter to the philosopher Rudolf Carnap, McCulloch catalogued Pitts’ achievements. “He is the most omnivorous of scientists and scholars. He has become an excellent dye chemist, a good mammalogist, he knows the sedges, mushrooms and the birds of New England. He knows neuroanatomy and neurophysiology from their original sources in Greek, Latin, Italian, Spanish, Portuguese, and German for he learns any language he needs as soon as he needs it. Things like electrical circuit theory and the practical soldering in of power, lighting, and radio circuits he does himself. In my long life, I have never seen a man so erudite or so really practical.” Even the media took notice. In June 1954, Fortune magazine ran an article featuring the 20 most talented scientists under 40; Pitts was featured, next to Claude Shannon and James Watson. Against all odds, Walter Pitts had skyrocketed into scientific stardom.

Sunday, May 14, 2017

Baizuo = Libtard

Baizuo could also translate as "idiot left" (a bit of wordplay). Noam Chomsky is often cited as an exemplar, although I find this a bit unfair.

See also Trump Triumph Viewed From China.
The curious rise of the ‘white left’ as a Chinese internet insult

ChenChen Zhang 11 May 2017 (openDemocracy.net)

... If you look at any thread about Trump, Islam or immigration on a Chinese social media platform these days, it’s impossible to avoid encountering the term baizuo, or literally, the ‘white left’. It first emerged about two years ago, and yet has quickly become one of the most popular derogatory descriptions for Chinese netizens to discredit their opponents in online debates.

So what does ‘white left’ mean in the Chinese context, and what’s behind the rise of its (negative) popularity?

The question has received more than 400 answers from Zhihu users, which include some of the most representative perceptions of the 'white left'. Although the emphasis varies, baizuo is used generally to describe those who “only care about topics such as immigration, minorities, LGBT and the environment” and “have no sense of real problems in the real world”; they are hypocritical humanitarians who advocate for peace and equality only to “satisfy their own feeling of moral superiority”; they are “obsessed with political correctness” to the extent that they “tolerate backwards Islamic values for the sake of multiculturalism”; they believe in the welfare state that “benefits only the idle and the free riders”; they are the “ignorant and arrogant westerners” who “pity the rest of the world and think they are saviours”.

Apart from some anti-hegemonic sentiments, the connotations of ‘white left’ in the Chinese context clearly resemble terms such as ‘regressive liberals’ or ‘libtards’ in the United States. In a way the demonization of the ‘white left’ in Chinese social media may also reflect the resurgence of right-wing populism globally. ...

The term first became influential amidst the European refugee crisis, and Angela Merkel was the first western politician to be labelled as a baizuo for her open-door refugee policy. Hungary, on the other hand, was praised by Chinese netizens for its hard line on refugees, if not for its authoritarian leader. ...

Chenchen Zhang has a PhD in Political Theory from LUISS Guido Carli University and a PhD in Political Science from Université libre de Bruxelles. She has worked as a post-doctoral researcher at the University of Copenhagen.

Wednesday, May 10, 2017

AI Now (O'Reilly ebook)



爱 (ài) means "love" in Chinese!  :-)
Artificial Intelligence Now
Current Perspectives from O'Reilly Media

Get the free ebook

The past year or so has seen a true explosion in both the capabilities and adoption of artificial intelligence technologies. Today’s generalized AI tools can solve specific problems more powerfully than the complex rule-based tools that preceded them. And, because these new AI tools can be deployed in many contexts, more and more applications and industries are ripe for transformation with AI technologies.

By drawing from the best posts on the O’Reilly AI blog, this in-depth report summarizes the current state of AI technologies and applications, and provides useful guides to help you get started with deep learning and other AI tools.

In six distinct parts, this report covers:

The AI landscape: the platforms, businesses, and business models shaping AI growth; plus a look at the emerging AI stack

Technology: AI’s technical underpinnings and deep learning capabilities, tools, and tutorials

Homebuilt autonomous systems: "hobbyist" applications that showcase AI tools, libraries, cloud processing, and mobile computing

Natural language: strategies for scoping and tackling NLP projects

Use cases: an analysis of two of the leading-edge use cases for artificial intelligence—chat bots and autonomous vehicles

Integrating human and machine intelligence: development of human-AI hybrid applications and workflows; using AI to map and access large-scale knowledge databases

Tuesday, May 09, 2017

20 years of GATTACA (1997)

A 20-year lag between science fiction and reality... not bad!





Embryo selection, but no additional engineering:
Geneticist (Blair Underwood): Keep in mind, this child is still you -- simply the best of you. You could conceive naturally a thousand times and never get such a result ...
According to this discussion, an offer of genetic editing didn't make the final cut:
In an outtake to the movie, the geneticist states that for an extra $5,000 he could give the embryo enhanced musical or mathematical skills – essentially splicing in a gene that was not present on the parents’ original DNA.


Saturday, May 06, 2017

More Shock and Awe: James Lee and SSGAC in Oslo, 600 SNP hits


To quote James Lee, the first author listed below: "Shock and Awe" for those who doubt that cognitive ability is influenced by genetic variants.

See work from a year ago: ~100 hits from 300k individuals. Now ~600 hits from 750k. (SNPs associated with EA are likely to also be associated with cognitive ability -- see figure at link above.)
47th Behavior Genetics Annual Meeting, Oslo, Norway

GWAS of Educational Attainment, Phase 3: Biological Findings

Abstract
Genetic factors are estimated to account for at least 20% of the variation across individuals for educational attainment (Rietveld et al., 2013). The results of the latest GWAS for educational attainment identified 74 genome-wide significant loci for educational attainment (Okbay et al., 2016). Here, in one of the largest GWAS to date, we increase our sample to nearly 750,000 individuals, and we identify over 600 genome-wide significant loci associated with the number of years of schooling completed. Note that at the time of presentation, we will likely have updated our meta-analysis to include over 1,000,000 individuals.

In this presentation, I will focus on the biological implications of the GWAS results. At the time of writing, 1,656 genes are significantly prioritized, a more than 10-fold increase since our previous report (Okbay et al., 2016). The newly significant results reinforce the biological theme of prenatal brain development and also bring to the foreground new themes that shed light on the biological underpinnings of cognitive performance and other traits affecting educational attainment.

Authors
James Lee (University of Minnesota - Twin Cities), Aysu Okbay (Free University Amsterdam), Robbee Wedow (University of Colorado - Boulder), Edward Kong (Harvard University), Patrick Turley (Broad Institute of MIT and Harvard), Meghan Zacher (Harvard University), Kevin Thom (New York University), Anh Tuan Nguyen Viet (University of Southern California), Omeed Maghzian (Harvard University, NBER), Richard Karlsson Linnér (Vrije Universiteit Amsterdam), Matthew Robinson (The University of Queensland), Social Science Genetic Association Consortium (NA), Peter Visscher (The University of Queensland), Daniel Benjamin (University of Southern California), David Cesarini (New York University)
Note the data here have only been analyzed using summary statistics coming from each sub-cohort. More powerful methods may soon become available:
Penalized regression from summary statistics

One of the difficulties in genomics is that when DNA donors are consented for a study, the agreements generally do not allow sharing (aggregation) of genomic data across multiple studies. This leads to isolated silos of data that can't be fully shared. However, computations can be performed on one silo at a time, with the results ("summary statistics") shared within a larger collaboration. Most of the leading GWAS collaborations (e.g., GIANT for height, SSGAC for cognitive ability) rely on shared statistics. Simple regression analysis (one SNP at a time) can be conducted using just summary statistics, but more sophisticated algorithms cannot. These more sophisticated methods can generate a better phenotype predictor, using less data, than a SNP by SNP analysis.
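As an illustration of what "summary statistics" means here (a toy simulation with made-up genotypes, not the penalized-regression method proposed at the link): each cohort can run one-SNP-at-a-time regressions on its own silo and share only the resulting per-SNP effect sizes and standard errors, never the raw genomes.

```python
import numpy as np

# Toy simulation: per-SNP simple regression produces the kind of
# shareable "summary statistics" (effect size, standard error) that
# GWAS consortia exchange without pooling individual-level data.
rng = np.random.default_rng(0)
n, m = 1000, 5                                      # individuals, SNPs
G = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes 0/1/2
beta_true = np.array([0.5, 0.0, -0.3, 0.0, 0.2])    # assumed effects
y = G @ beta_true + rng.normal(size=n)              # simulated phenotype

summary = []
for j in range(m):                                  # one SNP at a time
    g = G[:, j] - G[:, j].mean()                    # centered genotype
    yc = y - y.mean()
    b = g @ yc / (g @ g)                            # OLS slope
    resid = yc - b * g
    se = np.sqrt(resid @ resid / (n - 2) / (g @ g)) # standard error
    summary.append((b, se))                         # shareable statistics
```

With independent SNPs, the marginal estimates recover the true effects; the point in the text is that more powerful joint (penalized) fits normally need the raw genotype matrix `G`, which the silos cannot share.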
A successful implementation like the one described at the link above could produce many (several times!) more hits and significantly more variance accounted for by corresponding predictors. Stay tuned!

Note Added: I'm getting lots of questions about how to interpret these results, so here are some comments.

1. I predicted ~10k variants would account for most of the heritability due to common SNPs (i.e., about 50% of total variance; allowing a predictor which correlates ~0.7 with actual cognitive ability). The rate of discovery of genome-wide significant hits and corresponding variance accounted for seems consistent with this prediction. Genetic associations are most easily discovered for variants which are common (e.g., have ~0.5 Minor Allele Frequency, not 0.05) and have large effect sizes. But alleles with this combination of properties are rare. As statistical power increases, one starts to discover (more and more) variants of lower frequency and/or lower effect size. A reasonable guess at the genetic architecture suggests a higher density of such variants, and is consistent with an accelerating rate of discovery of SNP hits (~100 hits from 300k individuals, ~600 hits from 750k). There are more efficient methods that, I believe, would discover nearly all the variants given sample size of ~1M well-phenotyped individuals. But these methods require more than just summary statistics.
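A standard quantitative-genetics identity behind this point: under an additive model, a biallelic SNP with minor allele frequency p and effect size beta explains 2p(1-p)β² of the trait variance, so for a fixed effect size, common variants (p near 0.5) contribute more variance and are detected first. A quick check:

```python
# Variance explained by a biallelic SNP under an additive model:
# 2 p (1 - p) beta^2, maximized at p = 0.5 for fixed effect size beta.
def var_explained(p, beta):
    return 2 * p * (1 - p) * beta ** 2

# A common variant (MAF 0.5) explains far more variance than a rare
# one (MAF 0.05) with the same per-allele effect, hence easier to find.
assert abs(var_explained(0.5, 0.1) - 0.005) < 1e-12
assert var_explained(0.05, 0.1) < var_explained(0.5, 0.1)
```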

I made a similar prediction of ~10k variants for height, and our (unpublished) genomic prediction results make me fairly confident that this will turn out to be correct. We now have moderately good height predictors and they are getting better very fast. That ~10k variants will turn out to be responsible for most of the variation in cognitive ability is still at a somewhat lower confidence level.

2. People are still confused about how many (+) variants above the population mean are required to make a "genius" (or super-genius). I managed to compress the explanation enough to fit in a tweet:
Flip coin 10000 times. 5000 + sqrt(10000)/2 = 5050 heads is +1SD outcome. 5100 is +2SD, etc. sqrt(N) << N for N large. Binomial~Normal dist.
You can see that even if cognitive ability is controlled by ~10k variants, flipping only ~100 of them from (-) to (+) is enough to cause a big difference in actual intelligence. Flipping a few hundred could get us to super-geniuses beyond anything in human history.
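The tweet's arithmetic, spelled out (assuming each variant is an independent fair coin with equal effect, a deliberate simplification):

```python
import math

N = 10_000                      # causal variants, each modeled as a fair coin (+/-)
mean = N / 2                    # expected number of (+) variants: 5000
sd = math.sqrt(N * 0.5 * 0.5)   # binomial SD = sqrt(N)/2 = 50

plus_1sd = mean + sd            # 5050 (+) variants: a +1 SD individual
plus_2sd = mean + 2 * sd        # 5100 (+) variants: +2 SD

# Flipping k variants from (-) to (+) moves an individual by k/sd SDs:
shift = 100 / sd                # flipping 100 of 10,000 variants = +2 SD
```

Because sqrt(N) << N for large N, changing a tiny fraction of the ~10k variants produces a large shift in the population distribution.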

3. If you read press accounts related to our creation of the BGI Cognitive Genomics Lab back in 2011 (at that time there were zero genome-wide significant alleles associated with intelligence), you can find quotes from genomics "experts" asserting that mankind would never discover the genetic architecture of cognitive ability. (Such quotes are easy to obtain even today!) A Bayesian update given what is known in 2017 would call into question the competence of these "experts"!  ;-)

Wednesday, May 03, 2017

SubAltern Homesick Blues


New York Magazine seems to have dedicated an entire issue to the Alt-Right. If you don't recognize any of this subterranean internet stuff, you should probably have a look. Somehow they left out the Illuminati, though.

See The Paranoid Style in American Politics and Subaltern Postcolonial Gramsci Homi Bhabha Babble.
BEYOND ALT: THE EXTREMELY REACTIONARY, BURN-IT-DOWN-RADICAL, NEWFANGLED FAR RIGHT

... what follows here is an attempt to really reckon with the alt-right and its fellow travelers: to organize and catalogue influences, philosopher-kings, and shit-posting foot soldiers; to track the movement’s history, its future, and the story of how the modern internet made it possible; to study its grievances, its media savvy, its symbols, its heroes and villains, its president and its critics of the president, its billionaire supporters and the underemployed message-board-dwelling “advocates” who serve as its creative engine. The movement is not a monolith — though it would also never be mistaken for a rainbow coalition — and part of what we’ve focused on is just how the various wings work together in concert. How does Steve Bannon relate to Russia Today, and what do conspiracy theorists have in common with pickup artists and Nazi Furries? How do memes like milk become weaponized, and just when did 4chan get political? How do its beliefs get amplified into the larger culture?

For all its theoretical anti-modernity, the alt-right is a uniquely modern movement, after all, enabled in reach by the connective tools of the social internet and by the clever use of the sort of irony that every young American raised on TV and memes recognizes as our pop-culture lingua franca. It’s an irony they’ve used as armor, too: If you take them seriously, they’ll claim you missed the joke. But of course, by treating them as a joke, you can miss their importance ...


Spooooky!
The Techno-Libertarians Praying for Dystopia

... people who believe that the future of our species involves shedding our humanity in a marriage with AI; this is known as transhumanism, and it has not unreasonably been called a new tech religion. Though the movement has no explicit political affiliations, it tends, for reasons that are probably self-explanatory, to draw a disproportionate number of Silicon Valley libertarians. And the cluster of ideas at its center — that the progress of technology will inevitably render good ol’ Homo sapiens obsolete; that intelligence, pure computational power, is to be pursued above all other values — has exerted a powerful attraction on a small group of futurists whose extreme investment in techno-libertarianism has pushed them over an event horizon into a form of right-wing authoritarianism it might be useful to regard as Dark Transhumanism.

Replication and the "Crises of Confidence" in Science

One of the authors pointed me to the interesting paper below, which contains a proposal meant to improve the reliability of scientific research (specifically in some areas such as social science or biomedicine). Could this proposal work if, say, it were strongly supported by scholarly associations and funding agencies?

Note the authors' use of "Crises of Confidence" in their title. I believe that certain areas of science really should be experiencing a crisis of confidence, if only their practitioners were smarter and more honest. But in fact I suspect it's mostly business as usual, even in the research areas with the lowest (and best documented as lowest) replication rates.
An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game

Luigi Butera, John A. List  (University of Chicago)

April 5, 2017

... This paper proposes and puts into practice a novel and simple mechanism that allows mutually beneficial gains from trade between original investigators and other researchers. In our mechanism, the original investigators, upon completing their initial study, write a working paper version of their study. While they do share their working paper online, they do however commit not to submit it to any journal for publication, ever. The original investigators instead offer co-authorship of a second paper to other researchers who are willing to independently replicate the experimental protocol in their own research facilities. Once the team is established, but before beginning replications, the replication protocol is pre-registered at the AEA experimental registry, and referenced in the first working paper. This is to guarantee that all replications, both successful and failed, are properly accounted for, eliminating any concerns about publication biases. The team of researchers composed by the original investigators and the other scholars will then write and coauthor a second paper, which will reference the original unpublished working paper, and submit it to an academic journal. Under such an approach, the original investigators accept to publish their work with several other coauthors, a feature that is typically unattractive to economists, but in turn gain a dramatic increase in the credibility and robustness of their results, should they replicate. Further, the referenced working paper would provide a credible signal about the ownership of the initial research design and idea, a feature that is particularly desirable for junior scholars. On the other hand, other researchers would face the monetary cost of replicating the original study, but would in turn benefit from coauthoring a novel study, and share the related payoffs. 
Overall, our mechanism could critically strengthen the reliability of novel experimental results and facilitate the advancement of scientific knowledge.
