
Help me out here. Isn't "AI as Normal Technology" a huge misnomer? Sure, there are important differences between this worldview and the superintelligence worldview that dominates AI safety/AI alignment discussions. But "normal technology", really? 

In this interview, the computer scientist Arvind Narayanan, one of the co-authors of the "AI as Normal Technology" article, describes a foreseeable, not-too-distant future where, seemingly, AI systems will act as oracles to which human CEOs can defer or outsource most or all of the big decisions involved in running a company. This sounds like we'd have AI systems that can think very much the way humans do, with the same level of generalization, data efficiency, fluid intelligence, and so on.

It's hard to imagine how such systems wouldn't be artificial general intelligence (AGI), or at least something very close to AGI. Maybe they would not meet the technical definition of AGI because they're only allowed to be oracles and not agents. Maybe their cognitive and intellectual capabilities don't map quite one-to-one with humans', although the mapping is still quite close overall and is enough to have transformative effects on society and the economy. In any case, whether such AI systems count as AGI or not, how in the world is it apt to call this "normal technology"? Isn't this crazy, weird, futuristic, sci-fi technology?

I can understand that this way of imagining the development of AI is more normal than the superintelligence worldview, but it's still not normal! 

For example, following in the intellectual lineage of the philosopher of mind Daniel Dennett and the cognitive scientist Douglas Hofstadter — whose views on the mind broadly fall under the umbrella of functionalism, which 33% of English-speaking philosophers, a plurality, accept or lean toward, according to a 2020 survey — it is hard to imagine how the sort of AI system to which, say, Tim Cook could outsource most or all of the decisions involved in running Apple would not be conscious in the way a human is conscious. At the very least, it seems like we would have a major societal debate about whether such AI systems were conscious and whether they should be kept as unpaid workers (slaves?) or liberated and given legal personhood and at least some of the rights outlined in the UN's Universal Declaration of Human Rights. I personally would be a strong proponent of liberation, legal personhood, and legal rights for such machine minds, whom I would view as conscious and as metaphysical persons.[1] So, it's hard for me to imagine this as "normal technology". Instead, I would see it as the creation of another intelligent, conscious, human-like lifeform on Earth, something humans have not dealt with since the extinction of the Neanderthals.

We can leave aside the metaphysical debate about machine consciousness and the moral debate about machine rights, though, and think about other ways "normal AI" would be highly abnormal. In the interview I mentioned, Arvind Narayanan discusses how AI will broadly automate the tasks human workers do and, increasingly, large portions of human occupations overall. Narayanan compares this to the Internet, but unless I'm completely misunderstanding the sort of scenarios he's imagining, this is nothing like the Internet at all!

Even the automation and productivity gains that followed the Industrial Revolution in agriculture and cottage manufacturing, the sectors where the majority of people had worked up until then, primarily involved the automation or mechanization of manual labour and of extremely simple, extremely repetitive tasks. The diversity of human occupations in industrialized economies has undergone a Cambrian explosion since then. The tasks involved in human labour now tend to be much more complex, much less repetitive, much more diverse and heterogeneous, and to involve much more of an intellectual and cognitive component, especially in knowledge work. Narayanan does not seem to be saying that AI will only automate the simple, repetitive tasks or jobs; rather, it will automate many kinds of tasks and jobs broadly, including taking most of Tim Cook's decision-making out of his hands. In this sense, even the Industrial Revolution is not a significant enough comparison. When muscle power gave way to machine power, brain power took over. When machine brains take over from brain power, what, then, will be the role of brain power?

My worry is that I'm misunderstanding what the "AI as Normal Technology" view actually is. I worry that I'm overestimating what this view imagines AI capabilities will be and, consequently, overestimating the level of social and economic transformation it imagines. But Narayanan's comments seem to indicate a view where, essentially, there will be a gradual, continuous trajectory from current AI systems toward AGI or something very much like AGI over the next few decades, and those AGI or AGI-like systems will be able to substitute for humans in much of human endeavour. If my impression is right, then I think "normal" is just the wrong word for this. 

Some alternative names I think would be more apt, if my understanding of the view is correct:

  • Continuous improvement to transformative AI
  • Continuous improvement of AI oracles
  • AI as benign personal assistants
  • Industrial Revolution 2: Robots Rising
  1. ^ Provided, as is assumed in the "AI as Normal Technology" view, that AIs would not present any of the sort of dangers that are imagined in the superintelligence worldview. I am imagining that the "Normal Technology" AIs would be akin to C-3PO from Star Wars or Data from Star Trek and would be more or less safe and harmless in the same way humans are more or less safe and harmless. They would be completely dissimilar to imagined powerful, malignant AIs like the paperclip maximizer.
