Summary.
Neither making your AI tools look human nor dehumanizing them altogether is the best way to win over your users. Instead, show them the human effort and expertise that go into their development and design. The path forward is highlighting the humans in AI rather than humanizing AI.

As AI infiltrates every aspect of how we live and work, it increasingly looks or sounds like us. From virtual assistants that hold conversations with natural intonation to digital avatars that replicate human facial expressions, AI is becoming more human-like in its appearance and behavior. Consider OpenAI’s recent launch of the Advanced Voice Mode in ChatGPT. It used real actors’ voices with lifelike tones, natural conversations, and emotional responses, aiming to make AI sound “warm, engaging, confidence-inspiring, [and] charismatic.” Likewise, Character.ai — the second most popular gen AI application — allows users to engage with fictional or historical characters like Librarian Linda, Elon Musk, or even Napoleon Bonaparte.
It seems that most companies believe that if AI has a human face or voice, we will like it more and trust it more. This intuition is indeed in line with some scientific evidence pointing to the positive effects of anthropomorphizing AI on consumer trust.
But is this the best way to make AI more acceptable and trustworthy for humans? We argue that anthropomorphizing AI may not be the optimal approach and could even have unintended consequences. For example, creating human-like AI can set unrealistic expectations about its capabilities, leading to disappointment and frustration when it falls short. A recent study found that anthropomorphic chatbots reduced customer satisfaction, firm evaluation, and purchase intentions when customers were already angry. This reaction was driven by the violation of inflated expectations; simply put, consumers expected more from a human-like chatbot and felt let down when it didn’t deliver.
In another study, researchers found that players enjoyed computer games less when they got assistance from anthropomorphic helpers compared to helpers that weren’t humanized. This is because humanlike helpers undermined players’ sense of autonomy, a key factor in game enjoyment.
Efforts to anthropomorphize AI might also land in the uncanny valley — a phenomenon in which AI that is almost human, but not quite, triggers feelings of unease and discomfort rather than familiarity and connection. Finally, anthropomorphizing AI entails making decisions about the AI’s gender and race, which may (unintentionally) perpetuate harmful stereotypes. To pre-empt this, some software developers, for instance, have started to give virtual assistants gender-neutral voices.
So, what could be a better way to make AI more acceptable? The key lies in emphasizing the human input behind AI development.
The Empirical Evidence
To test this intuition, we conducted five studies. In one of them, participants were told that they had to upload a photo and would receive feedback from an AI coach to help them improve their photography skills. We randomly assigned participants to one of three conditions. In the intervention condition, we highlighted human input in the development of the AI coach: participants read that the AI coach was developed by a team of human data scientists and photography experts. In the first control condition, the AI coach was anthropomorphized, with a human name and picture. In the second control condition, we removed the human element entirely; participants read that the AI coach was developed based on machine learning algorithms.
Participants then received feedback from the coach, which seemed tailored but was, in fact, the same for everyone (e.g., about the use of natural lighting and the choice of background objects). We then asked participants to evaluate how helpful they perceived the feedback to be.
The results supported our expectations. Feedback from the AI coach where human input in its development was highlighted was perceived as most helpful, despite being identical to feedback in the other conditions. Anthropomorphizing the AI coach was better than removing the human element entirely, but the feedback felt significantly less helpful than when human input was emphasized.
In follow-up studies, we consistently replicated the positive effect of highlighting human input and explored the reasons behind it. We found that emphasizing human input increased participants’ subjective understanding of the AI coach — how well they felt they understood how it worked and what it could do. This enhanced understanding, in turn, increased the acceptance of the AI coach.
Overall, the results point to a simple and cost-effective intervention that can significantly increase the perceived helpfulness of AI-generated output. While anthropomorphizing AI — the common practice — is better than having no human element at all, there is a more effective approach: Companies could see a meaningful impact on the acceptance of their AI tools if they proactively communicate the human input behind their development.
Practical Implementation
Our findings have important implications for many domains where AI tools are used to provide feedback, guidance, or advice to humans, such as education, healthcare, finance, or entertainment.
In education, for example, an AI tutor company will benefit more from conveying that its AI was developed by a team of educators than from trying to make it look like a human teacher or boasting about advanced analytics. A case in point is Eduaide.AI — an AI tool to help teachers automate administrative tasks — which prominently features its founders’ educational backgrounds and positions itself as “developed by educators.”
Similarly, in medicine, communicating that an AI advisor incorporates the expertise of human doctors will be received more positively than mimicking an expert’s voice or claiming to use cutting-edge algorithms. Consider SkinVision, a regulated medical service that offers accurate and timely skin cancer detection. It is described as a “medical device that merges AI technology with the expertise of skin health professionals and dermatologists.”
The examples could be extended to other professional services such as asset management (Finaix), legal document generation (LegalNow), and photography (Kira); in all these examples, AI coaches are described as developed by experts in their respective domains (finance, law, or photography).
However, we observe that most companies still miss this opportunity. For example, unlike Kira (the AI photo coach), fotographer.AI does not emphasize any human input despite offering similar functionality. Likewise, the MoleMapper app, which offers a service comparable to our healthcare example above, does not mention any human input in its service design. Another case in point is LinkedIn’s recently introduced AI-powered coaching tool, which emphasizes technical capabilities (e.g., it uses “Microsoft’s Azure OpenAI API service to process every learner’s questions, responses, and personal information into prompts for a generative AI model”) but misses the opportunity to underscore the human expertise behind its development.
Highlight Human Involvement
There are several ways for companies to disclose human involvement in their AI products. One approach is to clearly state the role of experts in its development, using labels and salient information to emphasize human input. Much like labels such as “organic,” “carbon neutral,” or “fair trade” convey healthiness, sustainability, or ethical practices, companies could attach similar indicators to AI tools to showcase human expertise and oversight.
Companies can take this further by sharing detailed stories about the humans behind the AI. For example, just as some wine bottles highlight the winemaker’s story or chocolate brands celebrate their farmers, AI products could feature brief bios of the experts who shaped them. To make the connection even more personal, companies could include photos or behind-the-scenes glimpses of the development process. A language learning app, for instance, could feature a short “making of” film showcasing the linguists and educators who guided its creation to bring the “human touch” of AI to life.
This is, of course, not a call for making false or exaggerated claims about human input in AI tools. It is essential to make sure that the human element is authentic and meaningful, and that it reflects the actual design and development process of AI tools. If customers perceive human involvement as non-genuine, misleading, or fake, these messages could easily backfire.
The good news: today’s AI systems are, in reality, fundamentally products of human work. From architectural decisions about model design and the curation of training data, to reinforcement learning feedback and the fine-tuning of response behaviors, to prompt engineering and application development, humans shape these systems at every level. When users interact with AI, they are engaging with tools molded by countless human decisions. Anthropomorphizing practices, however, misrepresent this reality by portraying AI as human.
Our research shows that emphasizing this simple fact — the essential role of humans in AI tools — can boost perceptions of AI’s usefulness and reduce resistance to adoption.
Beyond Consumer Trust and Acceptance
The benefits of highlighting human expertise in AI systems could also extend far beyond building immediate consumer trust and acceptance. Companies that effectively communicate their human-AI collaboration can reframe the narrative from “AI versus humans” to “AI by and for humans,” helping to reduce fears of AI-driven job displacement. Employees who see their contributions acknowledged are more likely to engage with AI initiatives with a sense of ownership and partnership rather than competition.
This narrative shift can also help reshape public perception. Popular media often frame AI as competing with humans, such as the recent BBC series pitting “AI against human experts in their chosen field.” Highlighting the role of human input in AI development can promote a more constructive and empowering conversation.
In addition, this approach can provide a basis for sustainable competitive advantage. While algorithms are often easily copied or accessed, the unique integration of human expertise and creativity behind AI systems is far harder to imitate. Focusing on this synergy could enable companies to craft distinctive value propositions and stand out in increasingly crowded markets.
Emphasizing human involvement could also reinforce ethical and transparent practices. Acknowledging the humans behind AI could foster accountability, build trust with consumers, and strengthen credibility with regulators and stakeholders.
The Path Forward
For managers aiming to implement this strategy, the process begins with conducting a detailed review of AI messaging strategies. Start by asking a foundational question: Are we placing too much emphasis on technical sophistication while neglecting the human expertise behind the product? An audit of current communication can uncover whether the messaging inadvertently downplays the critical role of human input.
Once gaps are identified, the next priority is creating clear frameworks to systematically document and showcase the human contributions behind AI development. This could involve specific practices such as maintaining detailed records of human expert involvement, conducting interviews with key contributors, or including human-centric case studies in product materials to highlight that human involvement is both authentic and visible.
Finally, companies should systematically evaluate the impact of this approach. By tracking metrics such as consumer trust, adoption rates, and user perceptions, they can identify the most effective ways to communicate human input. These insights enable companies to refine their communication frameworks and respond to evolving consumer expectations.
Overall, our message to managers is clear: neither making your AI tools look human nor dehumanizing them altogether is the best way to win over your users. Instead, show them the human effort and expertise that go into your tools’ development and design. The path forward is highlighting the humans in AI rather than humanizing AI.