
Debunking the biggest myths about artificial intelligence

From killer robots to runaway sentience, there's a lot of FUD that needs clearing up.

HAL 9000 in the film 2001.

The concept of inhuman intelligence goes back to the deep prehistory of mankind. At first the province of gods, demons, and spirits, it transferred seamlessly into the interlinked worlds of magic and technology. Ancient Greek myths had numerous robots, made variously by gods or human inventors, while extant artefacts like the Antikythera calendrical computer show that even in 200 BCE we could build machinery that usefully mimicked human intellectual abilities.

There has been no age or civilisation without a popular concept of artificial intelligence (AI). Ours, however, is the first where the genuine article—machinery that comfortably exceeds our own thinking skills—is not only possible but achievable. It should not be a surprise, then, that our ideas of what that actually means and what will actually happen are hopelessly coloured by cultural assumptions ancient and modern.

We rarely get it right: Kubrick’s 2001 saw HAL 9000 out-thinking highly trained astronauts to murderous effect; Bill Gates’ 2001 gave us Clippy, which was more easily dealt with.

Now, with AI a multi-billion dollar industry seeping into our phones, businesses, cars, and homes, it’s time to bust some of the most important AI myths and dip into some reality.

Myth: AI is all about making machines that can think

When digital computing first became practical in the middle of the last century, there were high hopes that AI would follow in short order. Alan Turing was famously comfortable with the concept in his 1948 “Intelligent Machinery” paper, seeing no objections to a working, thinking machine by the end of the century. Sci-fi author Isaac Asimov created Multivac, a larger, brighter version of actual computers such as the 1951 UNIVAC I, first used by the US Census Bureau. (The favour was later returned by IBM, whose Deep Blue, the first chess computer to outrank all humans, grew out of a machine named Deep Thought after Douglas Adams’ hyperintelligent computer in The Hitchhiker’s Guide to the Galaxy.)

There are many projects and much research going on into replicating human-like thought, mostly by hardware and software simulations of human brain structures and functions as new techniques reveal them. One of the higher-profile efforts is the Blue Brain project at the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, which started in 2005 with a target date for a working model roughly equivalent to some human functions by 2023.

There are two main problems for any brain simulator. The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital; it depends on electrochemical signalling with inter-related timing and analogue components, the sort of molecular and biological machinery that we are only just starting to understand.

An image from the Blue Brain project, showing the complexity of the mammalian neocortical column. Shown here are "just" 10,000 neurons and 30 million interconnections. A human brain is millions of times more complex than this.
EPFL

Even much simpler brains remain mysterious. The landmark success to date for Blue Brain, reported this year, has been a small, 30,000-neuron section of a rat brain that replicates signals seen in living rodents. Those 30,000 neurons are just a tiny fraction of a complete mammalian brain, and as the number of neurons and interconnecting synapses increases, the simulation becomes exponentially more complex—and slips ever further beyond our current technological reach.
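To put those numbers in perspective, here is a rough back-of-envelope comparison in Python, using only the figures quoted in this article. These are order-of-magnitude estimates, not precise neuroscience.

```python
# Back-of-envelope scale comparison using the figures quoted in this article.
human_neurons = 100e9        # ~100 billion neurons in a human brain
human_synapses = 1000e12     # ~1,000 trillion synaptic interconnections

blue_brain_neurons = 30_000  # the simulated rat-cortex section described above

fraction = blue_brain_neurons / human_neurons
print(f"The simulated section has about {fraction:.0e} of a human brain's neurons")
# -> roughly 3e-07, i.e. about one three-millionth

print(f"Average synapses per neuron implied above: {human_synapses / human_neurons:,.0f}")
# -> ~10,000 connections per neuron, on average
```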

This yawning chasm of understanding leads to the second big problem: there is no accepted theory of mind that describes what “thought” actually is.

This underlying quandary—attempting to define “thought”—is sometimes referred to as the hard problem, and the still-hypothetical machines that would embody a solution are called strong AI. People engaged in commercial AI remain sceptical that the problem will be resolved any time soon, or that solving it is necessary, or even desirable, for practical benefits. There is no doubt that artificial intelligences are beginning to do very meaningful work, and that the pace of technological change will continue to shunt things along, but full-blown sentience still seems far-fetched.

IBM Watson, one of the highest-profile successes in AI to date, started its life as an artificial contender on the American TV game show Jeopardy. It combines natural language processing with a large number of expert processes that try different strategies to match potential answers against an internal knowledge database. It then checks the confidence levels of its internal experts and chooses to answer the question only if those levels are high enough (see the Jeopardy image below).
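Watson's internals are proprietary, but the answer-selection behaviour described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the candidate answers, confidence scores, and threshold are invented, not Watson's actual data or interfaces.

```python
# A minimal, hypothetical sketch of confidence-threshold answer selection,
# loosely modelled on the behaviour described above. The candidates, scores,
# and threshold below are invented for illustration.

def choose_answer(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs produced by competing 'experts'."""
    best_answer, best_confidence = max(candidates, key=lambda pair: pair[1])
    if best_confidence >= threshold:
        return best_answer   # confident enough: buzz in with the top-ranked answer
    return None              # otherwise stay silent rather than guess

candidates = [("Toronto", 0.14), ("Chicago", 0.62), ("Detroit", 0.23)]
print(choose_answer(candidates))                  # -> Chicago
print(choose_answer(candidates, threshold=0.8))   # -> None (doesn't answer)
```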

The first serious application of Watson that might actually improve the quality of human life has been as a diagnostic aid in cancer medicine. Since 2011, Watson has been assisting oncologists by delving through patient medical records and trying to correlate that data with clinical expertise, academic research, or other sources of data in its memory banks. The end result is that Watson might offer up treatment options that the human doctor may not have previously considered.

“[It’s like] having a capable and knowledgeable ‘colleague’ who can review the current information that relates to my patient,” said Dr. James Miser, the chief medical information officer at Bumrungrad International Hospital in Thailand. “It is fast, thorough, and has the uncanny ability to understand how the available evidence applies to the unique individual I am treating.”

Watson, competing on the game show Jeopardy. The bars at the bottom show its confidence in each answer. If no answer passes the confidence threshold (the white line), Watson doesn't respond.

As marvellous as this sounds, it mostly serves to highlight the similarities and differences between current, narrow, practical AI and its strong, as-yet-mythical cousin. One basic engine of both is the neural network, a system loosely based on biological concepts that takes a set of inputs and attempts to match them to patterns it has seen before. The key concept is that the system isn’t told how to do this analysis; instead, it is given example inputs along with the correct outputs and adjusts its own computational pathways, building internal knowledge that it can then apply to new, unknown inputs.
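As a toy illustration of that learn-from-examples idea, the sketch below trains a single artificial neuron on input/output pairs for the logical AND function. It is a deliberately tiny example of the principle and assumes nothing about how Watson or DeepFace are actually built; it needs only NumPy.

```python
import numpy as np

# A deliberately tiny "neural network" (one neuron, two inputs) illustrating
# the learn-from-examples idea described above. It is shown inputs together
# with the correct outputs and adjusts its own weights, rather than being
# programmed with explicit rules.

# Training data: the logical AND function, chosen purely for illustration.
inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

weights = np.zeros(2)
bias = 0.0
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(5000):                      # repeated exposure to the examples
    outputs = sigmoid(inputs @ weights + bias)
    error = outputs - targets              # how wrong the network currently is
    weights -= 0.5 * inputs.T @ error      # nudge the weights to reduce the error
    bias    -= 0.5 * error.sum()

print(np.round(sigmoid(inputs @ weights + bias), 2))
# -> values close to [0, 0, 0, 1]: the neuron has "learned" AND from examples
```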

We are now at the point where Watson and other AI systems such as Facebook’s DeepFace facial recognition system can do this with narrow, constrained data sets, but they are generally incapable by themselves of extending beyond the very specific tasks they’ve been programmed to do.

Google, for its part, seems more focused on narrow AI—searching pictures by content, crunching environmental and science data, and machine translation—than on predicting the emergence of general strong AI. The human brain can find, utilise, and link together vastly more complicated and ill-defined data, performing feats of recognition and transformation that can model entire universes. Google projects like DeepMind are experimenting with combining different techniques—in one case, pairing neural networks with reinforcement learning, where the machine tries essentially random actions until it happens to hit on a rewarding strategy, which it then refines—to try to close the gap, but they still act on very specific, narrow tasks.
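That trial-and-error loop can be illustrated, in heavily simplified form, with a few lines of Python. The sketch below is a one-state, epsilon-greedy action-value learner with a made-up reward table; DeepMind's Atari agents combine this kind of reward-driven learning with deep neural networks over raw screen pixels, so this shows only the underlying principle, not their implementation.

```python
import random

# A heavily simplified sketch of reward-driven trial and error: the agent
# starts out acting randomly, notices which actions tend to earn reward,
# and gradually refines its strategy. The actions and reward probabilities
# below are invented for illustration.

random.seed(1)
actions = ["left", "right", "fire"]
true_reward = {"left": 0.2, "right": 0.5, "fire": 0.8}  # hidden from the agent

value_estimate = {a: 0.0 for a in actions}  # the agent's learned estimates
counts = {a: 0 for a in actions}
epsilon = 0.1                               # fraction of purely random moves

for step in range(2000):
    if random.random() < epsilon:
        action = random.choice(actions)                # explore at random
    else:
        action = max(actions, key=value_estimate.get)  # exploit the best guess so far
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incrementally average the rewards observed for this action.
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print({a: round(v, 2) for a, v in value_estimate.items()})
# -> with enough trials the estimates roughly track the hidden rates, and "fire" wins out
```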

A video showing DeepMind learning how to play the Atari game Breakout.

Most recently, the DeepMind project used this combination of techniques to “master a diverse range of Atari 2600 games.” Speaking to Wired, Google researcher Koray Kavukcuoglu said his team has built “a general-learning algorithm that should be applicable to many other tasks”—but learning how to perform a task is a long way away from consciously thinking about those tasks, what the repercussions of those tasks might be, or having the wherewithal to opt out of doing those tasks in the first place.


Myth: AI won’t be bound by human ethics

The myriad dangers of artificial intelligences acting independently from humans are easy to imagine in the case of a rogue robot warrior, or a self-driving car that doesn’t correctly identify a life-threatening situation. The dangers are less obvious in the case of a smart search engine that has been quietly biased to give answers that, in the humble opinion of the megacorp that owns the search engine, aren’t in your best interest.

These are real worries with immediate importance to how we use, and are used by, the current and plausible future of AI technology. If a doctor uses Watson (or Siri or Google Now or Cortana) as part of what proves to be a misdiagnosis, who or what is ethically responsible for the consequences? And might we one day face the issues of sentient machines demanding rights?

The good news is that these worries are being taken seriously. Trying to define ethics, even between humans, is notoriously difficult. Society’s generally accepted ground rules are codified in a practical way by law and the legal system—and it’s here that practical answers to AI ethics are being developed.

The first question is whether robots and AI are genuinely new things in human experience requiring new ways of thinking, or whether they can be corralled by tweaks to existing principles.

“Both,” Ryan Calo, assistant professor of law at the University of Washington and a leading light of cyberlaw, told Ars Technica. “Some rather visible people focus on the notion that robots will ‘wake up’ and demand rights or try to harm us. I don't think this will happen, at least not in the foreseeable future. But robots and AI even now present novel and interesting challenges for law and policy, just as the Internet did in the 1990s.”

So what happens if an AI learns or exhibits harmful behaviour? Who carries the can?

We have options, said Calo, including making people strictly liable if they deploy learning systems where they could cause trouble. “This could limit self-learning systems to those where they are really needed or less dangerous,” he said. But that can’t cover everything, according to Calo. “Risk management will play an even greater role in technology policy.”

The Internet itself, a new technology that brought new legal challenges, has a lot of lessons for AI law. “Some of those lessons are readily applicable to robots—for example, the idea that architecture or ‘code’ can be a kind of regulatory force, or that disciplines like computer science and law should talk to each other,” Calo said.

But other lessons don't translate, especially when it’s not just information that can be damaged. “Courts won't be so comfortable when bones instead of bits are on the line. I call this the problem of embodiment,” he said.

“We may need a new model entirely. We may need a Federal Robotics Commission to help other agencies, courts, and state and federal lawmakers understand the technology well enough to make policy.”

Such a move would ensure that AI and robotics get the attention that they need as a new technology, while still hewing to familiar legislative approaches.

Boston Dynamics' "Petman" robot. Petman is ostensibly being developed to test military clothing and other equipment. Google acquired Boston Dynamics in 2013.

Make law, not war

There are less sanguine lessons for places where ethics have always been harder to enforce, though. In March 2015, the US Army sponsored a workshop that imagined what the battlefield will look like in 2050. Among its conclusions, it saw a huge increase in the role of artificial intelligence, not just in processing data but in prosecuting warfare, putting human soldiers “on the loop” rather than in it.

The workshop also predicted automated decision making, misinformation as a weapon, micro-targeting, large-scale self-organisation, and swarms of robots that would act independently or collaboratively. Even with humans in control, modern warfare is exceptionally prone to civilian collateral damage. With machines calling the shots in an environment filled with automated deception, what happens?

With so much AI development happening through open-source collaboration—Elon Musk and Sam Altman recently announced a billion-dollar investment in OpenAI, a research company devoted to keeping AI developments generally available—one ethical decision is immediately important. If you are developing AI techniques, do you want them used in war? If not, how can that be stopped?
