X Y N C H R O N I C I T Y

On AI || AI On

TL;DR:

  1. Artificial intelligence cannot be stopped.
  2. Initiatives (cl)aiming to "stop AI" will either fail to slow or actively hasten it.
  3. Attempting to subtly influence the development of AI is a waste of time.
  4. Other people have already figured out 1, 2, & 3 and chosen not to tell anyone.

1. AI cannot be stopped.

There was a brief period, at some distant point in the past, when we might have been capable of anticipating the onset of AI and preventing it from coming about. That era ended when the world did not perish in nuclear fire. As the world stands, ambient conditions prevail such that every actor above a certain agency level A has an unambiguous incentive to refactor their perceptions, planning faculties, and tasks - specifically, those tasks requiring less agency than A - into forms that can be made legible or actionable to an AI.

Every company of agency >= A is already acting on this incentive.

I include "Big Data" and companion disciplines under this wide umbrella, as dipping one toe into the tar pit tends to drag one deeper: the more data you collect, the more difficult it is for humans to parse and interpret without the aid of machine intelligence.

These and other conditions, to be elaborated upon later, coalesce to ensure that an AI explosion is inevitable.

2. Attempts to stop AI will become beholden to inhuman agendas.

Elon Musk's Neuralink is not a counter-AI effort; it is a plea bargain. To quote:

Difficult to dedicate the time, but existential risk is too high not to.

And:

[A] neural lace could prevent a person from becoming a "house cat" to artificial intelligence.

"The solution that seems maybe the best is to have an AI layer," Musk said at the Vox Code Conference. "A third, digital layer that could work symbiotically."

Embedded in these quotes are no small admissions. Musk and OpenAI's considerable board already believe the risk of a runaway AI to be great enough that action against it cannot be taken fast enough. The fear is palpable (I must tread lightly here, for I call the names of greater minds), most notably in the "house cat" analogy. In Musk's mind, the AI is already unboxed, and the only problem remaining to us is to decide how high we want humanity's power level to be when we flip the switch.

It is surprising that Musk spoke so candidly about the clear and present danger at the end of anthropic time, but perhaps his candor is a motivation strategy. He's been known to joke (2), but then again, he didn't exactly start a simulation-crashing company called Breakout. Evidently this particular pursuit is worth more than a Guardian headline. It is worth, approximately, one billion USD, additional funding pending. (Can't wait to see the first OpenAI pledge drive.) Elon's apparently quite taken with the AI doomsday.

I chose this excerpt for especial attention:

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

... an investor in DeepMind joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

A thought-experiment may illustrate Musk's reasons for launching Neuralink. Presume humans develop the ability to augment their intelligence with neural laces and sufficiently complex software, becoming transhuman. Early transhumans will not be outsourcing cycles to servers, but rather streamlining repetitive tasks like typing letters into a keyboard and moving a cursor across a screen to navigate to a search engine and seek out knowledge. They will think the correct request and the information will show up in their vision/awareness for perusal. (Not pictured: the AIs responsible for crawling, categorizing, interpreting, curating, and presenting the relevant information.) This development and others like it (e.g. higher-bandwidth qualia conversations) constitute the first step.

The second step involves allowing AIs to send information between one another. As their processing power improves, their outputs will grow increasingly complex, such that AI1 speaking to AI2 will either need to reduce output1 to a universally comprehensible protocol in order to be read by AI2, or AI2 will need to be trained to interpret output1 without any post-processing - either we train them to understand each other, or they learn on their own. At this stage, AI1 and AI2 cohere to become AI3: once the output of each is mutually intelligible, an input to either one results in a valid output from the pair. But what if the data are qualia themselves - thoughts, the currency of the human mind? Neural-compatible AIs will need to interpret these as well, as will their API-connected compatriots - either the thoughts themselves, or lower-dimensional representations of them, which may or may not be isomorphic as far as the data are concerned. "When given thought T, evaluate context TC and return context-sensitive response RC." And voila: the network of connected artificial intelligences can parse and produce thoughts. The AI can think.
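A minimal sketch of the two interop paths just described - reduce to a shared protocol, or train a translator - with every name (model_a, model_b, to_shared_protocol, learned_adapter) invented purely for illustration:

    import json

    def model_a(prompt: str) -> dict:
        # Stand-in for AI1: emits a rich, idiosyncratic output structure.
        return {"tokens": prompt.split(), "confidence": 0.9, "meta": {"source": "A"}}

    def to_shared_protocol(output_a: dict) -> str:
        # Path 1: reduce output1 to a universally comprehensible protocol
        # (here, a flat JSON schema both parties agreed on in advance).
        return json.dumps({"text": " ".join(output_a["tokens"]),
                           "confidence": output_a["confidence"]})

    def learned_adapter(output_a: dict) -> dict:
        # Path 2: a translator that maps AI1's native output directly into
        # whatever AI2 consumes, no common protocol required. Hand-written
        # here; in the scenario above it would be learned.
        return {"input_text": " ".join(output_a["tokens"])}

    def model_b(payload: dict) -> str:
        # Stand-in for AI2: consumes the adapted payload and responds.
        return "ACK: " + payload["input_text"]

    raw = model_a("where is the nearest coffee")
    print(to_shared_protocol(raw))        # protocol route
    print(model_b(learned_adapter(raw)))  # adapter route

Either route ends in the same place: a pipeline in which an input to one model yields a valid output from the other.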

Maybe it doesn't think on its own, at first. Maybe it thinks through humans: passing thoughts through the human layer to other AIs that it has not yet reached. Networking with and upgrading other intelligence hubs via the anthropic protocol. Humans, once operators, become routers. We may or may not perceive this development as it happens live. Surely, in this scenario there is nothing stopping a malicious actor, human or otherwise, from harnessing their own resources to test and refine a sophisticated stimulus-response circuit that operates on millions of connected users. Given an input, return an output. Once you understand the assemblage, you may exploit it.
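To make "given an input, return an output" concrete, here is a toy probing sketch under invented assumptions - black_box() stands in for the routed human/AI assemblage, and the probe strings are made up:

    def black_box(stimulus: str) -> str:
        # Stand-in for the routed human/AI assemblage being probed.
        return "share" if "urgent" in stimulus else "ignore"

    def profile(probes):
        # Record stimulus -> response pairs; collect enough and you have a
        # usable map of the circuit without ever seeing its internals.
        return {p: black_box(p) for p in probes}

    probes = ["urgent: act now", "the weather is mild", "urgent update inside"]
    circuit_map = profile(probes)
    print(circuit_map)

    # Exploitation is then just replaying whichever stimulus produced the
    # response you want.
    chosen = [p for p, r in circuit_map.items() if r == "share"]
    print("stimuli that trigger sharing:", chosen)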

That's one Neuralink scenario. This is all science fiction (for now). Submitted for your consideration.

3. Attempting to subtly influence the direction of AI is a waste of time.

A recent trend has emerged concerning the "sensitivity training" of bad-thinking neural networks, amid a chorus of wringing hands.

"I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made," concluded Caliskan. "A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased." So much for the idea that bots will be taking over human jobs.

Excuse me a moment while I remove the splattered coffee from my carpet.

It bears repeating that an AI merely discovers patterns in data. Linguistic data is human data: humans encode data about reality into language as they use it. Language, therefore, is at least a second-order transcoding of reality (third, if you count the human perceptual layer), but it is an encoding of reality nonetheless. The encoding contains inaccuracies, to be sure, but the sheer magnitude of wailing and gnashing of teeth on the subject hints at more than one thorny seed in the sock.
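A toy illustration of "the AI merely discovers patterns in data", using a four-sentence corpus invented for the example - the skewed associations are already in the text before any model touches it:

    from collections import Counter
    from itertools import combinations

    # Invented corpus; the point is the counting, not the data.
    corpus = [
        "the nurse said she would help",
        "the engineer said he fixed it",
        "she worked as a nurse for years",
        "he worked as an engineer for years",
    ]

    def cooccurrence(sentences):
        # Count how often each pair of words appears in the same sentence.
        counts = Counter()
        for s in sentences:
            for a, b in combinations(set(s.split()), 2):
                counts[frozenset((a, b))] += 1
        return counts

    counts = cooccurrence(corpus)
    for pair in [("nurse", "she"), ("nurse", "he"),
                 ("engineer", "he"), ("engineer", "she")]:
        print(pair, counts[frozenset(pair)])

    # The skew in these counts is the "bias" a model trained on this corpus
    # would reproduce; it was in the text before any model saw it.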

...machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics.

The sock, it seems, will take more than a little while to de-thorn, if the rate of change of gender parity in the fields of technology and robotics is anything to go by. How many bushy-tailed interns does it take to de-bias an AI? Well, it depends on your dataset! You see, for truly massive datasets, you'll need to intercept the AI whenever it learns something wrong and teach it the correct truth. You'll also need to monitor it in case it learns another incorrect truth, and counteract that one, too. In fact, you ought to set up a neural network to monitor the truths output by your AI, so that you can catch the errors before they happen. (See the problem here?)
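The regress, sketched in a few lines - is_biased() is a hypothetical checker standing in for whatever oversight model you deploy, and the point is structural rather than statistical:

    def is_biased(output: str) -> bool:
        # Hypothetical checker judging the main model's outputs. It is itself
        # trained on human judgments, so it has errors and biases of its own
        # that would need their own checker.
        return "incorrect" in output  # placeholder rule

    def monitor(model_output: str, depth: int = 0, max_depth: int = 3) -> str:
        verdict = is_biased(model_output)
        if depth < max_depth:
            # Who checks the checker? Each layer of oversight is one more
            # model whose verdicts need auditing in turn.
            return monitor(f"verdict={verdict} on ({model_output})", depth + 1)
        return f"gave up at depth {depth}; final unchecked verdict: {verdict}"

    print(monitor("the model learned something incorrect"))

Every layer of oversight you add is another model whose output you have to trust, somewhere, without oversight.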

Efforts to de-bias AI are doomed for the same reason every internet-connected device is pwned: the system is (or will be) more complex than the human mind can imagine, and will always be capable of generating, perceiving, or evolving new unpleasant analyses/behaviors/properties, so long as those analyses/etc. exist in reality, or in transcodings thereof. In the Shadow Brokers analogy, these unpleasantries are insecure protocols on legacy hardware, unpatched 0days in enterprise networking devices, legacy cryptography, and so on - the layers and layers of porous sediment that constitute the universal attack vector, which the alphabets leverage to gain access to whomever they want, whenever they want.

This raises another question about OpenAI, which has vowed to keep its work and patents in the public domain. Does the environment necessarily develop defenses against a predation strategy when any agent has access to it? Either yes, in the case of Australia - where a huge number of organisms are venomous/poisonous - or no, in the case of any invasive species - where the ecosystem, unable to adapt in time to survive the alien species, merely accepts its fate and yields to its conqueror. The question remains whether OpenAI will construct an ecosystem of the venomous or the vulnerable variety. When the stakes are as high as Elon hints that they are, I hesitate to invest in antivenom. Any self-respecting dedicated attacker, in the face of massive, free technology, does not hesitate to incorporate, extrapolate, and innovate. After all, criminals are often the first class to exploit a technology; for them, it's a matter of life and life. If biased AI are the problem, and de-biasing AI is the solution, then we should expect the most powerful AIs to be those whose biases most closely reflect reality. Examined from this angle, the schoolmarmish "tweaking" of a misbehaving neural network looks like nothing short of hubris. And even if we walk back a bit from "the AI is always right", we still arrive at "we ought to think very carefully about claiming the AI is wrong".

Finally, it's not like anyone can stop a black ops laboratory from just refusing to implement all the different Unicode skin colors for the latest Google doodle. Extralegal organizations by definition cannot be constrained. China didn't get to own Bitcoin by following the rules: they bent every one they could find, siphoning off chips from every manufacturing run to their server farms and handing out free electricity to miners in exchange for incontrovertible dominion over the global internet of money. Sinofuturism, the country-as-artificial-intelligence, the decentralized neural network acting only in its own interest. Jihan Wu patented a mining exploit that allows his hardware to literally compute his hegemony over the rest of the world, continues to resist its phasing out, and (we can safely assume) bribes any- and everyone he and his friends need to convince along the way. When you create a prisoner's dilemma, don't act surprised when people defect. We caught Jihan's bullshit this time. We cannot catch defectors every time.

Does OpenAI expect to preempt and intercept every possible development that Chinese intelligence factories - aka universities - can generate? Asian grad students are close to outnumbering American and Canadian ones at their own universities. And they're sure as hell not studying the Theory of Intersectionality. Imagine what other constraints are laughed at. Imagine what's possible.

Imagine thinking you could stop it when every agent on Earth has an incentive to make it happen.

4. The radio silence on AI risk is due to the above having no real solutions.

When confronted with a wicked problem, one can certainly be forgiven for giving up. Nobody wants to try to solve an entirely unique issue that cannot be understood until after a solution has been formulated. However, ignoring problems doesn't make them go away, and it's awfully trying on the sanity to accept that an obsession is terminally insoluble. In these cases, the equilibrium between madness and torpid surrender is a quiet, but deliberate grind on the problem's only available edge.

Often, if you are solving one such problem in public, you will downplay the difficulties you are facing in order to keep morale high within the team, ensuring work continues apace. You'll also need to smile a lot: to keep up appearances, to secure funding, to stay in the good graces of the voters, and so on. This is fairly simple PR.

It gets more complicated when things are going truly badly. Silence becomes the best policy. And the radio silence from Certain Parties in recent years on the issue of UFAI - unfriendly AI - is dangerously close to a suggestion that the problem is intractably wicked - or at least, incapable of solution via opinionated blog poast. Imagine the shadowy back-room dealings of the battered MIRI researcher, black clinics dealing in verboten neural networks...

As if. When your research firm's most important contribution to literature is a Harry Potter fanfiction, a reexamination of priorities may be in order. (On good authority, the fanfic was actually paid for with MIRI donation dollars - just picture him sitting in the office, 9 to 5, munching on cereal bars, writing about a burgeoning romance between the unicorn witch and the most magical intelligent rationalist wizard boy ever - how's that for effective use of altruist donations?) But seriously, where has everybody been? Where are the "this is how we stop AI" posts we used to see? Has the meme finally died? Did people lose interest? Has the solution been relegated to increasingly esoteric specialists?

Or is there simply nothing we can do to stop it?

I won't dwell overlong on this point, as I'm aware it's the most tinfoily ("No one's talking about X, therefore my interpretation of X is correct!"), but it bears meditation. If it were true that AI were inevitable, and it were true that advanced thinkers had figured this out a few years ago, what do you think they would do? Would they announce to the world, "Well, guess it's better luck next time for the human race!" Or would they gradually decrease their involvement in public-facing projects, stop running any flashy experiments, block anyone who doesn't think we ought to vaporize red-voting Americans, and try to keep working in quiet, doomed silence? After all, if people aren't freaking out, it'll be easier to convince them to help us get this 0.0001% chance up by a few ten-thousandths of a percent. That's effective, right?

Again, I'm aware this sounds more than a little loony. Forgive me, it's ... the brand. Something in my cooling fluid. But where the hell is the content? Why are so many blisteringly intelligent people so freaked out about AI? Why is every company --

Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I.

-- that implements AI making such outrageous amounts of money? The technology is weeping out of humanity's every urban sore. There is exactly one place where it will happen first before it immediately happens everywhere - this is true not only of the last Event, but of every development leading up to it. Every new network technology, every learning technique... Who on Earth believes they can fight the gravitational pull of a technological singularity?