
Help me out here. Isn't "AI as Normal Technology" a huge misnomer? Sure, there are important differences between this worldview and the superintelligence worldview that dominates AI safety/AI alignment discussions. But "normal technology", really? 

In this interview, the computer scientist Arvind Narayanan, one of the co-authors of the "AI as Normal Technology" article, describes a foreseeable, not-too-distant future where, seemingly, AI systems will act as oracles to which human CEOs can defer or outsource most or all of the big decisions involved in running a company. This sounds like we'd have AI systems that can think very much the way humans do, with a human level of generalization, data efficiency, fluid intelligence, and so on. 

It's hard to imagine how such systems wouldn't be artificial general intelligence (AGI), or at least wouldn't be almost, approximately AGI. Maybe they would not meet the technical definition of AGI because they're only allowed to be oracles and not agents. Maybe their cognitive and intellectual capabilities don't map quite one-to-one with humans', although the mapping is still quite close overall and is enough to have transformative effects on society and the economy. In any case, whether such AI systems count as AGI or not, how in the world is it apt to call this "normal technology"? Isn't this crazy, weird, futuristic, sci-fi technology? 

I can understand that this way of imagining the development of AI is more normal than the superintelligence worldview, but it's still not normal! 

For example, following in the intellectual lineage of the philosopher of mind Daniel Dennett and the cognitive scientist Douglas Hofstadter — whose views on the mind broadly fall under the umbrella of functionalism, which 33% of English-speaking philosophers, a plurality, accept or lean toward, according to a 2020 survey — it is hard to imagine how the sort of AI system to which, say, Tim Cook could outsource most or all of the decisions involved in running Apple would not be conscious in the way a human is conscious. At the very least, it seems like we would have a major societal debate about whether such AI systems were conscious and whether they should be kept as unpaid workers (slaves?) or liberated and given legal personhood and at least some of the rights outlined in the UN's Universal Declaration of Human Rights. I personally would be a strong proponent of liberation, legal personhood, and legal rights for such machine minds, whom I would view as conscious and as metaphysical persons.[1] So, it's hard for me to imagine this as "normal technology". Instead, I would see it as the creation of another intelligent, conscious, human-like lifeform on Earth, with whom humans have not dealt since the extinction of the Neanderthals. 

We can leave aside the metaphysical debate about machine consciousness and the moral debate about machine rights, though, and think about other ways "normal AI" would be highly abnormal. In the interview I mentioned, Arvind Narayanan discusses how AI will achieve broad automation of the tasks human workers do and of increasingly large portions of human occupations overall. Narayanan compares this to the Internet, but unless I'm completely misunderstanding the sort of scenarios he's imagining, this is nothing like the Internet at all! 

Even the automation and productivity gains that followed the Industrial Revolution in agriculture and cottage manufacturing, the sectors where the majority of people had worked up until then, primarily involved the automation or mechanization of manual labour and of extremely simple, extremely repetitive tasks. The diversity of human occupations in industrialized economies has undergone a Cambrian explosion since then. The tasks involved in human labour now tend to be much more complex, much less repetitive, much more diverse and heterogeneous, and to involve much more of an intellectual and cognitive component, especially in knowledge work. Narayanan does not seem to be saying that AI will only automate the simple, repetitive tasks or jobs, but that it will automate many kinds of tasks and jobs broadly, including taking most of Tim Cook's decision-making out of his hands. In this sense, even the Industrial Revolution is not a significant enough comparison. When muscle power gave way to machine power, brain power took over. When machine brains take over from brain power, what, then, will be the role of brain power? 

My worry is that I'm misunderstanding what the "AI as Normal Technology" view actually is. I worry that I'm overestimating what this view imagines AI capabilities will be and, consequently, overestimating the level of social and economic transformation it imagines. But Narayanan's comments seem to indicate a view where, essentially, there will be a gradual, continuous trajectory from current AI systems toward AGI or something very much like AGI over the next few decades, and those AGI or AGI-like systems will be able to substitute for humans in much of human endeavour. If my impression is right, then I think "normal" is just the wrong word for this. 

Some alternative names I think would be more apt, if my understanding of the view is correct:

  • Continuous improvement to transformative AI
  • Continuous improvement of AI oracles
  • AI as benign personal assistants
  • Industrial Revolution 2: Robots Rising
[1] Provided, as is assumed in the "AI as Normal Technology" view, that AIs would not present any of the sort of dangers that are imagined in the superintelligence worldview. I am imagining that the "Normal Technology" AIs would be akin to C-3PO from Star Wars or Data from Star Trek and would be more or less safe and harmless in the same way humans are more or less safe and harmless. They would be completely dissimilar to imagined powerful, malignant AIs like the paperclip maximizer.

Answers
Yes, I think "AI as normal technology" is probably a misnomer - or at least very liable to be misinterpreted. Perhaps this later post by the authors is helpful - they clarify that they don't mean "mundane or predictable" when they say "normal". 

But I'm not sure a world where human CEOs defer a lot of decisions, including high-level strategy, to AI requires something that is approximately AGI. Couldn't we also see this happen in a world with very narrow but intelligent "Tool AI" systems? In other words, CEOs could be deferring a lot of decisions "to AI", but to many different AI systems, each of which has relatively narrow competencies. This might depend on your view of how narrow or general a skill "high-level strategy" is. 

From the Asterisk interview you linked, it doesn't sound like Arvind is expecting AI to remain narrow and tool-like forever, just that he expects it will take longer to reach AGI than people expect, and that AGI will arrive only after AIs have been used extensively in the real world. He admits he would significantly change his evaluation if we saw a fairly general-purpose personal assistant work out of the box in 2025-26.

Thank you for weighing in! I appreciate your perspective.

"Normal technology" really invokes a sense of, well, normal technology — smartphones, Internet, apps, autocorrect, Google, recommender algorithms on YouTube and Netflix, that sort of thing. 

You raised an interesting question about tool AI vs. agent AI, but then you also (rather helpfully!) answered your own question. Arvind seems to be imagining a steady, gradual, continuous, relatively slower (compared to, say, Metaculus, but not what I'd necessarily call "slow" without qualification) path to a...
