How tech companies are pushing us to use AI
Work carried out by Nolwenn Maudet, Anaëlle Beignon and Thomas Thibault
First published: February 2025
English translation: July 2025
The scientific paper (and the replay of the presentation video)
Introduction
The forced deployment of generative AI services raises serious social and environmental issues. We mentioned it in one of our newsletters, and press alerts are becoming more and more frequent.
AI significantly increases the environmental impact of digital technologies. To cite just a few figures: 80% of data centers could now be obsolete, and during the first half of 2023 North America set a new record with a 25% rise in the construction of new data centers.
Driven by massive investments that need to become profitable, by the promise of new revenue streams, or by the fear of being left behind, many tech companies have started to roll out AI-powered features.
Adding new features to apps and software is of course nothing new in itself, but what is striking with AI is how companies push users to adopt these features, especially through design. This phenomenon seems to us unprecedented in the history of interfaces. Our goal in this study is to analyse the various strategies companies deploy to get us to adopt AI, whether we want it or not.
Analysis
In early 2024, we noticed AI icons appearing in many of the software products, websites, and apps we were using.
So we started systematically collecting screenshots of every AI-related interface change. Thanks to word of mouth and a call on Mastodon, we gathered hundreds of screenshots, covering professional productivity software as well as other types of use (hobbies, communication, creativity, etc.).
By analysing this corpus, we observed that AI-based features are pushed on users aggressively, through several recurring strategies. When we compare how AI-powered functions are integrated into interfaces with changes unrelated to AI, the contrast is often striking and shows what a special place AI features occupy.
Here is how companies push us to use artificial intelligence.
➀ AI takes the leading role in our interfaces
AI on center stage
First of all, AI functions tend to take up a lot of space and usually occupy the most prominent position in interfaces, toolbars, and menus.
Snapchat, for example, added an AI-based feature in the form of a conversation named MyAI that, unlike other conversations, always stays pinned above the rest, even if users never interact with it. We noted that Google uses much the same device, disguising AI as an incoming message. In LinkedIn's pop-over messaging window, a banner announcing an AI-based feature takes up more than half of the space.
Snapchat and Google Messages
Google Keep and Linkedin
Google Keep uses the floating action button, usually reserved for the most important action, to display an AI-based function.
In Notion and DeepL, when users select text, a toolbar appears in which the first buttons on the left (those closest to the cursor) are AI-powered functions. In Notion, these features take up about a third of the toolbar.
DeepL
Notion
Meta AI in WhatsApp
In WhatsApp, Meta AI is present as an ordinary conversation, but it can also be launched by tapping just above the message button. Moreover, the search bar button now launches a prompt instead. Many people told us they had launched the AI inadvertently.
A graphical highlight
AI features are not only given pride of place; they are also highlighted visually. In many cases, such as Buffer, Miro, or Acrobat Reader, AI functions are emphasized with a distinctive color, such as a colored gradient or even an animated icon (Notion), in contrast to all other functions, which are generally displayed as static grayscale icons.
In Notion
We also observed that AI features come with numerous entry points and that users are constantly reminded of them. In Acrobat Reader, for example, AI functions are presented and suggested to users several times. The same “AI Assistant” function is introduced at least six times in the interface: in the left and right toolbars, as a button in the menu bar, as a tooltip, as an invitation within the graphical interface, and when users select text.
On some pages, we can even see a complete graphical redesign of the interface in the colors and symbols of AI.
The home page of the Qwant search engine [fr] in April 2014 (left) and in November 2024 (right). The new page adopts all the graphical codes of AI (colors, spark symbol) and showcases AI features.
Spotlighted by interrupting the experience
When they open Adobe Photoshop, users are first greeted by a modal window encouraging them to “explore the power of generative AI”. After closing this first pop-up, a tooltip opens in the interface to suggest the same feature. Likewise, when people open Microsoft Skype, they are often welcomed by an AI chatbot called Copilot, which invites them to use it as soon as the interface opens. Many tools, such as Slack and Google Docs, also promote their AI functions through banners in the interface.
As a result, the way AI features are introduced interrupts people's usual workflow. The many tooltips and banners promoting AI in interfaces generally require at least one click from users before they go away.
Adobe Acrobat: on opening a document
Just one click away from using it
The prominence given to AI functions in interfaces inevitably leads people to trigger them, even by mistake. For example, in Notion, it is very easy to trigger the AI assistant accidentally by pressing the space bar, one of the most frequently used keys. By comparison, the keyboard shortcut for any other command requires pressing “/”, which is much less prone to this kind of error.
This overexposure may be a first in the history of interfaces. Never before has a feature been pushed so hard in such a short time, spatially, graphically, and interactively all at once, and so persistently.
On Notion, an invitation to AI on a simple click, and the prompt launched with the space bar
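The asymmetry between the two triggers can be sketched with a hypothetical key handler. This is purely illustrative: the names and logic are invented for the example and do not reflect Notion's actual code.

```typescript
// Hypothetical sketch of the trigger asymmetry described above;
// names and logic are illustrative, not Notion's implementation.

type Action = "open-ai-prompt" | "open-command-menu" | "insert-char";

function onKeyDown(key: string, lineTextBeforeCursor: string): Action {
  const onEmptyLine = lineTextBeforeCursor === "";
  // Space on an empty line opens the AI prompt. Space is pressed
  // constantly while writing, so accidental triggers are frequent.
  if (key === " " && onEmptyLine) return "open-ai-prompt";
  // "/" opens the regular command menu: a much rarer keystroke,
  // so far less prone to this kind of error.
  if (key === "/" && onEmptyLine) return "open-command-menu";
  return "insert-char";
}
```

Binding the AI feature to the single most-used key, while every other command sits behind a rare one, is what turns ordinary typing into accidental AI launches.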
Always enabled…
While AI functions are extremely easy to trigger, or enabled by default, it is much harder for people to disable or refuse them. Often, it is simply impossible.
For example, users of the sports tracking app Strava discovered that it had started using AI to comment on their activities and that, even if they did not want the feature, there was no way to turn it off.
In Snapchat, again, the MyAI conversation not only stays at the top; it is also currently impossible for users to delete it.
Even when apps do offer ways to dismiss AI-based functions, this is not neutral. The choice of words matters first of all: refusing an AI feature can rarely be categorical. Instead, apps usually offer only a temporary dismissal, with wording like “ignore for now” or “maybe later”, telling users that, in the long run, they will end up using it anyway.
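The “maybe later” pattern can be made concrete with a minimal sketch. Everything here is assumed for illustration (the names and the snooze delay are invented); the point is what the logic leaves out.

```typescript
// Illustrative sketch of the "maybe later" pattern described above;
// names and the snooze delay are invented for illustration.

interface PromoState {
  dismissedAt: number | null; // timestamp of the last "maybe later" click
}

const SNOOZE_MS = 7 * 24 * 60 * 60 * 1000; // resurface after one week

function shouldShowAiBanner(state: PromoState, now: number): boolean {
  // Note what is missing: there is no "never show again" branch.
  // Dismissal is only a snooze, so the banner always comes back.
  if (state.dismissedAt === null) return true;
  return now - state.dismissedAt >= SNOOZE_MS;
}
```

A genuine refusal would require a permanent opt-out state; by design, this logic can only postpone.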
…but very hard to disable
To understand how forcefully AI is imposed on us, it is instructive to see, by contrast, how many applications hide the fact that they have enabled AI training on our data by default. When an opt-out does exist, users are not told about it, and the relevant setting is buried among many others. There is a striking gap between, on the one hand, how the capabilities of AI-based features are highlighted and introduced and, on the other hand, the information provided to users to let them prevent AI from training on their data.
➁ AI presented as magic
The symbol of magic and innovation
The most common graphic symbol used to represent AI features is the spark icon ✨. Usually associated with something “special”, thrilling, and new, but also with innovation and wonder, the icon contributes to an inherently “good” depiction of a feature. Unlike other icons that literally or figuratively illustrate what they do (⚙️🖋️🗑️📅📎 etc.), the ✨ icon used for AI truly stands apart from the other features in an interface.
This graphical treatment is, moreover, often combined with superlatives and with the lexical field of power and magic (“Powerful assistant” for Grok, “Explore the power” for Photoshop, “Join us to dive into performance” for Apple).
The color of magic
Note also the ubiquity of mauve, a color commonly associated with magic and the intangible. Moreover, we observed the use of nearly identical blue-mauve gradients across products.
Witches in Merlin the Wizard or in Snow White
Commercial page on Notion's website (2022)
During a meeting with Marion Lamarque, a color specialist, she drew our attention to this: “In visual arts, purple stands for magic, but it also connects with innovation. Mauve, a faded version of purple, adds a sensation of softness and gives the magic of AI a comforting face, especially when combined with a sky-blue gradient, calming and consensual.”
All-in-one, or the undefined feature
By summoning the visual semantics of magic, companies position AI as an all-purpose tool, without ever saying what it cannot do. This graphical vagueness makes it easier to highlight AI across all our interfaces without distinguishing between deeply different uses. In practice, companies use the same icon for AI-based features that have nothing in common. The AI button in Zoom, for example, serves a completely different purpose from Google's, yet the two look almost identical. This graphical standardisation of deeply different features creates a grey area that serves AI. If we do not know precisely what we are talking about, we are in no position to form expectations about it. We are invited to be taken by surprise, and we may find it hard to criticise its effects.
Notion (left) explains that its AI tool “does it all”; Adobe (middle) lists various functions and uses “etc.” to suggest that AI can do much more. Google does likewise with “and more”.
This “magification” affects not only new features but sometimes also ones that were already well established in our habits and well understood. On Cairn, for example, the algorithmic recommendation of similar publications is now labelled with the AI symbol.
Making the machine and its materiality invisible
The versatility promised by AI tools, embodied in these magical invocations, also makes the materiality of many AI operations invisible, imposing the idea that AI necessarily makes things easier, better, and faster.
These metaphors are convenient because they help avoid questions about efficiency and about environmental or political effects, turning an opaque data-generation process into a magical event.
Qwant offers an “instantaneous” AI-powered answer to searches, even though it appears long after the results. Words and metaphors hide the machine's computation and its impacts, which are far greater than those of a basic search.
We do not know how something magical is supposed to behave: is it normal that it takes time? Did I do something wrong? Is its resource consumption justified? Making the machine invisible through the metaphor of magic changes people's relationship to their tools. It is no longer possible to pay attention to the machine.
➂ AI is an assistant (not a mere feature)
The human metaphor
A common shape that generative AI takes in our interfaces is that of the assistant. Intelligent assistants, or AI assistants, are personified through metaphors evoking human characteristics, for instance by having a name like “Aria” (Opera) or “Leo” (Brave). Their humanity is also suggested by the way they are displayed in the interface: the Snapchat assistant, for example, is presented as a contact. The personification of AI assistants extends to the textual interaction mode and to the invitations to ask the AI questions or to chat with it.
Assistants offer to help people with “learning in new ways, planning events, writing thank-you notes, and more” (Gemini on Google), to “discuss, create and find anything” (Notion), or to “suggest unique ideas” (Skype). These activities bring to mind tasks we usually associate with humans rather than with computing processes. Presenting AI with human characteristics or skills feeds discourses praising its appeal and performance. This versatility of human skills also justifies the colonisation of our digital services by AI functions.
Notion AI button
Generative AI tools often invoke professional roles: “Copilot” (Microsoft), “Agentforce” (Slack), “Cocreator” (Paint), “Companion” (Microsoft, Zoom), “Analyst” (Google Chat), or literally “Assistant” (Proton, Brave, Webex). This feeds the idea that AI is a peer, or a partner. It also echoes the fantasy of offloading demanding tasks onto technology: the AI assistant lets users delegate uninteresting or difficult actions, freeing them for more important tasks.
In these personified examples, we users do not use AI as we would a tool; we ask devoted assistants (“Whenever you need me” (Notion), “Need help?” (Brave)) to do things in our place. AI assistants are presented as skilled subordinates awaiting our instructions. Positioning AI as hierarchically inferior to us, and controllable (waiting for our orders), may be a response to qualified workers' fear of job destruction. It conveys the seductive figure of an agent combining the properties of a tool and a teammate, except that this colleague will never ask for help.
“We want tireless agents that treat us as co-workers, even if we do not treat them like that”
Newendorp and colleagues, in a 2024 article on the metaphor of social relations induced by conversational agents.
AI mastery, a skill like any other?
Another strategy to reinforce AI adoption is to present features that rely on generative AI as an extension of people's productivity, creativity, or intelligence, giving them new skills (e.g. generating code) or more time (e.g. generating meeting notes). Here, features do not introduce AI as a personified assistant who will do things for us; rather, they display a set of tools that extend users' skills (here, professional ones): the actions AI can perform are depicted with traditional software tool icons rather than a free text field.
This evocation of tools contributes to presenting AI use as a factor of professional competitiveness, where speed of execution and breadth of know-how are what count. It is a more indirect and pernicious way of imposing AI, playing on competition between workers.
So why all this pressure?
An unprecedented forced adoption to offset huge investments
This analysis shows that something special is happening with AI: a form of forced adoption, probably unprecedented in the history of interfaces. The massive investments of hundreds of billions of dollars, made by companies afraid of being left behind in what they perceive as a gold rush, have to pay off. In a saturated market, where growth prospects are known to be quite weak, new AI features very often become an opportunity to raise the subscription prices of various digital services here and there. We users thus pay the price of the financial risk companies have taken in this AI race, and the more or less visible changes flooding our interfaces are its direct consequence.
Unlike previous technological evolutions, companies can now, through updates and cloud-hosted software, transform interfaces overnight, affecting all users at once. What used to happen gradually now spreads almost instantly to billions of people around the world, and this time there is no escape.
The collective hype then pushes companies that do not want to “miss” the paradigm shift into a race, to the point that some features seem to exist only to say “us too”, while features that have relied on algorithms for years are dressed up in the colors of AI.
AI, an answer in search of a need
Tech companies are groping around, trying to find features that will actually be adopted. As we have seen, the eruption of new AI features comes with a massive amount of information, pop-ups, communications, tutorials, and use-case examples (TV spots, tooltips, etc.). This forced-adoption strategy, accompanied by such a profusion of explanations, seems to be the sign of a technology whose uses are at best unclear, at worst unable to answer any request or need.
This race toward new AI features, symbols of innovation, comes at the expense of real needs and can, in many cases, help hide deeper problems in our software.
A big thank-you to everyone who shared their stories and screenshots to feed this research.