
Help me out here. Isn't "AI as Normal Technology" a huge misnomer? Sure, there are important differences between this worldview and the superintelligence worldview that dominates AI safety/AI alignment discussions. But "normal technology", really? 

In this interview, the computer scientist Arvind Narayanan, one of the co-authors of the "AI as Normal Technology" article, describes a foreseeable, not-too-distant future where, seemingly, AI systems will act as oracles to which human CEOs can defer or outsource most or all of the big decisions involved in running a company. This sounds like we'd have AI systems that can think very much like humans do, with the same level of generalization, data efficiency, fluid intelligence, and so on as human beings. 

It's hard to imagine how such systems wouldn't be artificial general intelligence (AGI), or at least wouldn't be almost, approximately AGI. Maybe they would not meet the technical definition of AGI because they're only allowed to be oracles and not agents. Maybe their cognitive and intellectual capabilities don't map quite one-to-one with humans', although the mapping is still quite close overall and is enough to have transformative effects on society and the economy. In any case, whether such AI systems count as AGI or not, how in the world is it apt to call this "normal technology"? Isn't this crazy, weird, futuristic, sci-fi technology? 

I can understand that this way of imagining the development of AI is more normal than the superintelligence worldview, but it's still not normal! 

For example, following in the intellectual lineage of the philosopher of mind Daniel Dennett and the cognitive scientist Douglas Hofstadter — whose views on the mind broadly fall under the umbrella of functionalism, which 33% of English-speaking philosophers, a plurality, accept or lean toward, according to a 2020 survey — it is hard to imagine how the sort of AI system to which, say, Tim Cook could outsource most or all of the decisions involved in running Apple would not be conscious in the way a human is conscious. At the very least, it seems like we would have a major societal debate about whether such AI systems were conscious and whether they should be kept as unpaid workers (slaves?) or liberated and given legal personhood and at least some of the rights outlined in the UN's Universal Declaration of Human Rights. I personally would be a strong proponent of liberation, legal personhood, and legal rights for such machine minds, whom I would view as conscious and as metaphysical persons.[1] So, it's hard for me to imagine this as "normal technology". Instead, I would see it as the creation of another intelligent, conscious, human-like lifeform on Earth, with whom humans have not dealt since the extinction of the Neanderthals. 

We can leave aside the metaphysical debate about machine consciousness and the moral debate about machine rights, though, and think about other ways "normal AI" would be highly abnormal. In the interview I mentioned, Arvind Narayanan discusses how AI will achieve broad automation of the tasks human workers do and of increasingly large portions of human occupations overall. Narayanan compares this to the Internet, but unless I'm completely misunderstanding the sort of scenarios he's imagining, this is nothing like the Internet at all! 

Even the automation and productivity gains that the Industrial Revolution brought to agriculture and cottage manufacturing, the sectors in which the majority of people had worked until then, primarily involved the automation or mechanization of manual labour and of extremely simple, extremely repetitive tasks. The diversity of human occupations in industrialized economies has undergone a Cambrian explosion since then. The tasks involved in human labour now tend to be much more complex, much less repetitive, much more diverse and heterogeneous, and to involve much more of an intellectual and cognitive component, especially in knowledge work. Narayanan does not seem to be saying that AI will only automate the simple, repetitive tasks or jobs, but that it will automate many kinds of tasks and jobs broadly, including taking most of Tim Cook's decision-making out of his hands. In this sense, even the Industrial Revolution is not a significant enough comparison. When muscle power gave way to machine power, brain power took over. When machine brains take over from brain power, what, then, will be the role of brain power? 

My worry is that I'm misunderstanding what the "AI as Normal Technology" view actually is. I worry that I'm overestimating what this view imagines AI capabilities will be and, consequently, overestimating the level of social and economic transformation it imagines. But Narayanan's comments seem to indicate a view where, essentially, there will be a gradual, continuous trajectory from current AI systems toward AGI or something very much like AGI over the next few decades, and those AGI or AGI-like systems will be able to substitute for humans in much of human endeavour. If my impression is right, then I think "normal" is just the wrong word for this. 

Some alternative names I think would be more apt, if my understanding of the view is correct:

  • Continuous improvement to transformative AI
  • Continuous improvement of AI oracles
  • AI as benign personal assistants
  • Industrial Revolution 2: Robots Rising
[1] Provided, as is assumed in the "AI as Normal Technology" view, that AIs would not present any of the sort of dangers that are imagined in the superintelligence worldview. I am imagining that the "Normal Technology" AIs would be akin to C-3PO from Star Wars or Data from Star Trek and would be more or less safe and harmless in the same way humans are more or less safe and harmless. They would be completely dissimilar to imagined powerful, malignant AIs like the paperclip maximizer.

Answers

Yes, I think "AI as normal technology" is probably a misnomer - or at least very liable to being misinterpreted. Perhaps this later post by the authors is helpful - they clarify they don't mean "mundane or predictable" when they say "normal". 

But I'm not sure a world where human CEOs defer a lot of decisions, including high-level strategy, to AI requires something that is approximately AGI. Couldn't we also see this happen in a world with very narrow but intelligent "Tool AI" systems? In other words, CEOs could be deferring a lot of decisions "to AI", but to many different AI systems, each of which has relatively narrow competencies. This might depend on your view of how narrow or general a skill "high-level strategy" is. 

From the Asterisk interview you linked, it doesn't sound like Arvind is expecting AI to remain narrow and tool-like forever, just that he thinks reaching AGI will take longer than many people expect, and will happen only after AIs have been used extensively in the real world. He admits he would significantly change his evaluation if we saw a fairly general-purpose personal assistant work out of the box in 2025-26.

Thank you for weighing in! I appreciate your perspective.

"Normal technology" really invokes a sense of, well, normal technology — smartphones, Internet, apps, autocorrect, Google, recommender algorithms on YouTube and Netflix, that sort of thing. 

You raised an interesting question about tool AI vs. agent AI, but then you also (rather helpfully!) answered your own question. Arvind seems to be imagining a steady, gradual, continuous, relatively slower (compared to, say, Metaculus, but not what I'd necessarily call "slow" without qualification) path to a... (read more)
