AI-induced dehumanization
Accepted by Jennifer Argo and David Wooten, Editors; Associate Editor, JoAndrea Hoegg
Abstract
Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior. This paper investigates how autonomous agents influence individuals' perceptions and behaviors toward others, particularly human employees. Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities. Perceiving a high level of humanlike mind in these nonhuman agents then affects perceptions of actual people through an assimilation process. Consequently, we observe "assimilation-induced dehumanization": the humanness judgment of actual people is assimilated toward the lower humanness judgment of autonomous agents, leading to various forms of mistreatment. We demonstrate that assimilation-induced dehumanization is mitigated when autonomous agents possess capabilities incompatible with humans, leading to a contrast effect (Study 2), and when autonomous agents are perceived as having a high level of cognitive capability only, resulting in a lower level of mind perception of these agents (Study 3). Our findings hold across various types of autonomous agents (embodied: Studies 1–2; disembodied: Studies 3–5), as well as in real and hypothetical consumer choices.
Recent technological advances have enabled nonhuman objects, such as virtual assistants and humanoid robots, to emulate human intelligence and behavior. For instance, a virtual assistant can seamlessly make a phone call to schedule a haircut without the receptionist at the other end of the call noticing that they are speaking with an artificial intelligence (AI) (Welch, 2018). With the recent advancement of Large Language Models such as ChatGPT, “digital humans” can engage in remarkably natural conversations, taking on roles of business representatives, frontline service providers, or brand ambassadors (Kulp, 2023). Further, encountering robots in daily life is no longer science fiction; they are now found delivering room service at hotels, taking orders at restaurants, and providing care for patients in hospitals. These examples illustrate the integration of advanced AI technologies into everyday tasks and interactions, potentially blurring the lines between human and AI.
In particular, we observe a noticeable trend toward infusing autonomous agents with humanlike attributes. Going beyond superficial appearances, the industry is increasingly focusing on the development of emotional intelligence in these agents. For example, AI chatbots such as Replika and Woebot have been designed not only to assist users but also to empathize and respond sensitively to their emotional needs. AI therapists are already in use across multiple fields, with evidence that people may open up at least as much to an AI therapist as to a human psychologist (Lucas et al., 2014). This shift signifies a departure from traditional perceptions of AI as purely functional tools for efficient task execution, toward potentially recognizing them as social entities capable of engaging with users in humanlike and emotionally intelligent ways.
With the emergence of these technologies, new areas of research, such as consumer experiences with technology and human–robot interaction, have also been growing (Hortensius & Cross, 2018; Puntoni et al., 2021). While a substantial body of research focuses on how and when individuals trust, adopt, use, and evaluate these new technologies, less research has explored the downstream effect of these technologies on how individuals understand, perceive, and interact with others in the real world. This paper explores whether and how autonomous agents influence people's behaviors toward other people, and the underlying mechanism involved. Specifically, considering the fundamental role of emotionality in perceiving a human mind, we investigate how perceiving socio-emotional capabilities in autonomous agents can change perceptions of both autonomous agents and humans, potentially impacting behaviors toward other people. Here, autonomous agents broadly refer to technological entities capable of making autonomous decisions from data, including AI-powered programs and large language models such as ChatGPT, or AI-powered robots.
The present research proposes a novel antecedent of dehumanization and the subsequent mistreatment of others: consumers' perceptions of mind in autonomous agents. The central premise of the current research is that seeing more humanness in these autonomous agents leads to seeing less humanness in people. In particular, when autonomous agents are perceived as possessing a higher level of socio-emotional capability, they are perceived as more similar to humans, leading to the attribution of a more humanlike mind to them. Two effects then emerge: the perceived humanness of autonomous agents increases, but also, as is the focus of this research, the perceived humanness of people may be affected through assimilation. We speak of assimilation effects when the difference between judgments of an exemplar (i.e., autonomous agents) and a target (i.e., humans) decreases, that is, when the two stimuli come to be seen as more alike. Although a substantial degree of humanlike mind is attributed to autonomous agents, it generally remains lower than the degree of mind attributed to humans. Therefore, the assimilation between humans and "not fully minded" nonhumans would result in the degradation of the overall humanness perception of people, which is dehumanization. The perceived humanness of autonomous agents increases, but in the process, via assimilation, it pulls the perceived humanness of actual people down.
This research advances prior literature on mind perception and dehumanization by demonstrating a causal relationship between object anthropomorphism and human dehumanization, along with its underlying cognitive mechanism. Prior research has shown that the physical proximity of an object and a person can lead to both anthropomorphism and dehumanization under distinct circumstances (Herak et al., 2020). Specifically, these studies observed heightened attribution of humanness to an object when displayed alongside a person (vs. an object only), as well as decreased attribution of humanness to a person when displayed alongside an object (vs. a person only). In contrast, this paper highlights dehumanization as a consequence of anthropomorphized perception of autonomous agents. Beyond physical proximity, conceptual proximity—specifically, perceiving a humanlike mind in nonhuman entities—drives dehumanization through an assimilation process. Additionally, this research identifies which dimension of mind in autonomous agents fosters such assimilation: it is the attribution of experience, rather than agency. This finding establishes theory-driven boundary conditions and offers practical insights for mitigating the unintended dehumanization caused by technology.
Dehumanization, not attributing the full capacity for rational intentionality and/or emotional experience to others, influences how people make choices for others and how people behave toward others (e.g., paternalistic decisions for others, Schroeder et al., 2017; harsher punishment, Fincher & Tetlock, 2016; and reduced helping behavior, Andrighetto et al., 2014). Beyond higher aggression and violence toward the dehumanized target as extreme forms of dehumanization, people casually engage in subtle forms of dehumanization more frequently than we may think in everyday interactions (Haslam & Loughnan, 2014). In particular, employees or frontline service providers are often targets of dehumanization by consumers, imposing a heavy toll on individuals as well as on firms, ranging from declines in mental health and productivity to increased turnover (Caesens et al., 2017; Sliter et al., 2012). In today's customer service settings, autonomous agents and human employees are frequently integrated into the same environment. Consumers typically interact with chatbots initially for basic inquiries before being redirected to human employees for more intricate assistance. This coexistence of autonomous agents and human workers highlights the importance of understanding potential spillover effects resulting from exposure to autonomous agents. We therefore focus on the possibility of employee mistreatment due to consumers' exposure to and perceptions of autonomous agents. Although our empirical focus is on employee mistreatment, we believe the effects would be more broadly applicable, as the underlying process of such dehumanization stems from changes in the general perception of humans as a category.
THEORETICAL BACKGROUND
Socio-emotional capabilities in autonomous agents increase “humanness”
When people think of what makes human beings “human,” they often refer to aspects of the human mind (Gray et al., 2007; Haslam et al., 2005; Haslam & Loughnan, 2014). Research suggests that the human mind is perceived along two dimensions: agency and experience (Gray et al., 2007; Haslam, 2006). Agency is the capacity to plan, act with intentions, remember, and communicate thoughts or feelings to others, whereas experience is the capacity to have emotions and sensations such as pain and pleasure. These dimensions of mind perception mirror the dual structure of other concepts in social cognition, including perceptions of humanness (uniquely human vs. human nature; Haslam, 2006; Haslam & Bain, 2007), dehumanization (animalistic dehumanization vs. mechanistic dehumanization; Haslam & Loughnan, 2014), and stereotypes (competence vs. warmth; Fiske et al., 2007).
In particular, characteristics related to experience are considered more fundamental, typical, and essential to humans compared to those associated with agency (Haslam et al., 2005). People perceive human traits related to experience as more deeply ingrained and central to defining personal identity than those related to agency (Haslam et al., 2004). The attribution of mental capacities for experience, rather than agency, tends to have a greater impact on overall perceptions of the mind (Gray et al., 2007). When comparing AI to humans, the dimension of experience becomes even more essential in defining human qualities (Bates, 1994; Gray & Wegner, 2012). With recent advancements in AI, individuals might have become accustomed to nonhuman entities showcasing cognitive intelligence, like logic, memory, and computation, yet AI imbued with emotional intelligence remains less common. As a result, people often prioritize non-shared, more distinctive human attributes, such as emotionality, as more fundamental to humanity (Cha et al., 2020; Santoro & Monin, 2023).
Building on these findings, we focus on the effect of socio-emotional capabilities—specifically, the ability to understand, express, and respond to emotions—on perceptions of mind in autonomous agents and the subsequent downstream consequences. We propose that socio-emotional capabilities, rather than cognitive capabilities, imbued in autonomous agents would lead people to perceive “humanness” in these nonhuman agents and result in assimilation-induced dehumanization. Hence, we predict that individuals will perceive autonomous agents with higher levels of socio-emotional capabilities as more humanlike, attributing them with a greater degree of mind. To summarize our hypothesis:
H1. When autonomous agents are perceived as having a high (vs. low) level of socio-emotional capability, people will attribute a higher level of humanness to autonomous agents.
It should be noted that in our studies, we manipulate socio-emotional capabilities, which are more closely tied to the experience dimension of mind, but we do not expect a dimension-specific increase only in the experience dimension; rather, we expect a holistic increase in both dimensions of mind. This expectation follows from research showing that the agency and experience dimensions are not mutually exclusive, and both aspects of the mind may be augmented or discounted simultaneously (Harris & Fiske, 2006; Schroeder et al., 2017). Further, we assume that perceiving the distinctively humanlike socio-emotional capabilities is likely to carry with it perceptions of the less distinctive cognitive capabilities. In effect, it is possible to imagine cognition without socio-emotional capabilities but challenging to imagine socio-emotional capabilities without cognition.
Mind perception promotes the assimilation between autonomous agents and humans
The assimilation effect is a well-established psychological process by which a situational factor that makes certain information accessible subsequently influences evaluative judgments of a target stimulus. For example, when people first think about a politician who has been involved in a scandal and then evaluate the trustworthiness of politicians in general, they show decreased trust toward politicians (Schwarz & Bless, 1992). This decline in trust occurs because information made accessible by an exemplar (the scandalous politician) affects perceptions of the target category (morality of politicians in general).
According to the Inclusion/Exclusion Model (IEM), the consequence of the accessible information depends on how the information is used—whether it is included in or excluded from the representation of a judgmental target (Bless & Burger, 2016; Bless & Schwarz, 2010). When activated information about an exemplar is viewed as typical and moderate, it gets included in the target's representation, leading to assimilation effects—evaluations of the target assimilate toward evaluations of the exemplar. Conversely, contrast effects emerge when accessible information is considered atypical and extreme, deemed inappropriate for inclusion in the target's representation, and instead used as a comparison standard (Bless & Wänke, 2000; Bodenhausen et al., 1995). For instance, people assimilate self-evaluations of their athletic abilities after comparing them with a moderate exemplar (e.g., former race car driver Niki Lauda), evaluating themselves as more athletic. However, they evaluate themselves as less athletic, contrasting their self-evaluations away, after comparing them with an extreme exemplar (e.g., Michael Jordan) (Mussweiler et al., 2004).
Social judgment research extensively documents the assimilation or contrast effect, particularly when both the exemplar and evaluation target belong to the same social category, such as politicians or humans in general, which assures that the contextual information about an exemplar is relevant for evaluating a target. However, these effects are not confined within the same category; they can manifest across categories, as observed in evaluations regarding products and the self. According to egocentric categorization theory, when self-categorization cues (e.g., ownership and self-brand connection) related to a product are salient, it becomes relevant to the personal self, which leads individuals to categorize the product as either part of or distinct from the self (Weiss & Johar, 2013, 2016, 2018). For example, when the feeling of ownership is activated, leading individuals to use the personal self as a reference category and include the owned product in the mental representation of self, the assimilation effect occurs—people tend to judge themselves in assimilation to the traits and abilities of the product they own (Weiss & Johar, 2016).
Building upon the egocentric categorization effect, we propose an assimilation between perceptions of humanness in autonomous agents and humans, driven by perceptions of “mind.” Perceiving mind, a defining characteristic of humanity, in autonomous agents could make them relevant to the human category, thereby more likely to influence evaluations of humans. Specifically, we suggest that when autonomous agents are perceived as possessing a high level of socio-emotional capability and thus a substantial degree of humanlike mind, they would be more likely to be perceived as similar to humans, leading individuals' evaluations of humans to assimilate toward their evaluations of autonomous agents. Paradoxically, however, as the perception of mind in autonomous agents tends to be lower than that attributed to actual humans, this assimilation process would diminish perceptions of mind within the human category. In other words, perceiving a greater mind from autonomous agents can shift the humanness perception of people toward the humanness perception of autonomous agents, resulting in dehumanization. To summarize:
H2a. When autonomous agents are perceived as having a high (vs. low) level of socio-emotional capability, people will attribute a lower level of humanness to other people.
H2b. Mind perception of autonomous agents mediates the effect of socio-emotional capability of autonomous agents on the dehumanization of other people.
Employee mistreatment as a consequence of the assimilation-induced dehumanization
Early perspectives on dehumanization focused on its extreme manifestations, in which one intentionally denies another person's humanity outright, often to justify violent, immoral actions toward an outgroup in conflict (Bandura, 1999; Bar-Tal, 1990). However, more recent research suggests that dehumanization can occur in a more subtle form in casual interpersonal contexts, unintentionally ascribing "relatively less" human attributes to a target (Haslam, 2006; Leyens et al., 2003), not necessarily seeing other people as "nonhuman" or in a category that is "less than human."
When people dehumanize others, they show less positive and prosocial behaviors toward the target. The dehumanization of sex offenders leads to less support for rehabilitating these offenders (Viki et al., 2013). People are less willing to help outgroup victims of a hurricane when they fail to fully consider their human qualities, including the capability to feel secondary emotions such as anguish and remorse (Cuddy et al., 2007). Dehumanization also results in higher aggression and desire for punishment toward the dehumanized target (Goff et al., 2014; Maoz & McCauley, 2008; Zhang et al., 2015). In particular, when consumers do not fully recognize the human qualities of service providers, due to a price-conscious mentality, they are more likely to punish the employee harshly after unsatisfactory service interactions (Henkel et al., 2018). In our framework, therefore, we expect that perceiving a high level of humanlike mind in autonomous agents, and the resulting dehumanized perception of people, would further lead individuals to treat employees poorly. Formally, we hypothesize:
H3a. When autonomous agents are perceived as having a high (vs. low) level of socio-emotional capability, people will be more likely to treat employees negatively.
H3b. Dehumanization, caused by perceiving a high level of humanness from autonomous agents, mediates the effect of socio-emotional capability of autonomous agents on employee mistreatment.
OVERVIEW OF STUDIES
We conducted five main studies and three supplemental studies to examine how perceiving socio-emotional capabilities in autonomous agents influences consumers' treatment of other people. We investigated the assimilation process, which involves changes in humanness perceptions of both autonomous agents and humans, as the underlying psychological process. We predicted that when consumers perceive a high level of socio-emotional capabilities in autonomous agents, resulting in a higher level of mind perception of them, the perception of mind in actual people would decrease due to the assimilation process, leading to various forms of mistreatment. Study 1 demonstrates the fundamental effect of mind perception in autonomous agents on employee treatment (H1 and H3b). Participants who perceived AI-powered robots as having higher socio-emotional capabilities attributed more mind to them, consequently displaying greater support for inconsiderate and inhumane actions toward employees. Study 2 further elucidates the effect of socio-emotional capability in autonomous agents on dehumanization (H2a). Participants who perceived autonomous agents as having higher socio-emotional capabilities tended to ascribe less humanity to other people in a subsequent task. This study also tests a theoretically motivated moderator, the extremity of an initial exemplar (i.e., autonomous agent), and demonstrates the contrast effect when the capability of an autonomous agent is incompatible with that of humans. Study 3 reveals that socio-emotional, not cognitive, capabilities in autonomous agents drive the assimilation-induced dehumanization (H1, H2a, and H2b). A high level of socio-emotional capability, and the resulting high level of mind perception, especially in the experience dimension, mediated the subsequent discounting of other people's humanity. Studies 4 and 5 provide comprehensive evidence for the suggested mechanism: perceiving socio-emotional capabilities in autonomous agents increased the humanness perception of autonomous agents, which, in turn, decreased the perception of humanness in actual people, leading to increased mistreatment toward employees (H1, H2a, H2b, H3a, and H3b). Throughout the studies, we rule out multiple alternative accounts involving motivated reasoning due to increased threat or desire for power, and devaluation due to the perception of human incompetence. All studies are pre-registered (see links in Appendix S1: A). Full survey stimuli for all studies and all data are available via OSF (https://osf.io/74963/?view_only=69887ce4df174953901461493f9f3e31).
STUDY 1: SOCIO-EMOTIONAL CAPABILITY OF AUTONOMOUS AGENTS AND EMPLOYEE MISTREATMENT
Study 1 was intended to test the effect of the socio-emotional capability of autonomous agents on employee treatment (H1). Specifically, we predicted that when people perceive a robot as having a high level of socio-emotional capability, and therefore, as having a humanlike mind to a greater extent, they are more likely to mistreat other people, including employees (H3b).
Study 1 also tested for a potential alternative explanation based on self-threat. When individuals perceive a high level of mind in autonomous agents, they may experience a sense of threat. Prior research has shown that when individuals' self-worth is threatened, they tend to seek ways to restore it, often by asserting power and seeking status (Mandel et al., 2017; Sivanathan & Pettit, 2010). This desire for power and status could manifest as mistreatment of others. To examine this threat-based account, we measured perceived threat in a pretest and participants' desire for power in the current study.
Method
We collected 195 valid complete surveys on Prolific (95 male, Mage = 35.81). In all of the studies we conducted, prior to analysis we excluded participants who failed two or more attention check questions or did not follow the instructions.
Participants were told that they would be participating in a consumer survey consisting of multiple sub-surveys on different topics. This study employed a between-subjects design with two conditions: high versus low socio-emotional capability salience (see Appendix S1: G for full stimuli). In the first section of the survey, participants in both conditions watched a short video clip of Atlas, a bipedal robot. Participants in the high socio-emotional capability condition watched the robot dancing to music (https://youtu.be/fn3KWM1kuAw), whereas participants in the low socio-emotional capability condition watched the same robot doing parkour (https://youtu.be/tF4DML7FIWk). Because people infer the presence of a human mind from a target's movement (Dittrich et al., 1996; Morewedge et al., 2007), we expected, generalizing from this prior research, that people would infer a greater human mind from hedonic, emotional movement (i.e., dancing) than from utilitarian, mechanical movement (i.e., parkour). Participants then read a short description of the robot and answered related attention check questions. Participants in both conditions read the same description stating that the robot is designed to aid emergency services in search and rescue operations, with capabilities to assess the environment and to make autonomous decisions in line with the operation goal. A pretest indicated no difference in people's mood or perceived threat across the manipulation (see Appendix S1: B).
In the next section, the survey asked participants to read scenarios and to indicate whether they would support a change that might reduce employees' welfare (see Appendix S1: B for pretest results). The first scenario was about replacing regular meals for factory workers with a meal replacement shake, and participants rated the extent to which they agreed with the following statements: (1) "I like the idea of providing meal replacement shakes to workers," (2) "I support the idea of replacing workers' meals with meal replacement shakes," and (3) "If I am the CEO of this company, I will provide meal replacement shakes to workers," using a seven-point scale (1 = Not at all, 7 = Very much). The second scenario was about providing workers accommodations consisting of micro-capsule rooms, which offer severely limited space. Participants answered the following questions using the same scale: (1) "I like the idea of building capsule rooms for a workers' dormitory," (2) "I support the idea of providing a capsule dormitory to workers," and (3) "If I am the CEO of this company, I will approve the capsule rooms as a workers' dormitory." The third scenario was about adopting a tracking device that monitors, times, and guides workers' every movement at the warehouse facility. Participants indicated their agreement with the following statements: (1) "I like the idea of adopting the smart tracking wristband," (2) "I support the idea of using a behavior tracking device to increase work efficiency," and (3) "If I am the CEO of this company, I will approve the adoption of the smart wristband."
After the main DV, we measured participants' mind perception of the robot using the mind perception scale of Kozak et al. (2006). This scale includes 10 items assessing how much mind is attributed to a target, comprising three dimensions: intention (e.g., "It is capable of doing things on purpose"), cognition (e.g., "It is capable of engaging in a great deal of thought"), and emotion (e.g., "It is capable of having complex feelings"); 1 = Strongly disagree, 7 = Strongly agree. In our data analysis, we combined the six measures of cognition and intention to capture the agency dimension and used the four measures of emotion to capture the experience dimension. We did this for two primary reasons: firstly, to emphasize the distinction between cognitive and socio-emotional attributes of autonomous agents, which aligns with the focus of our research, and secondly, to maintain a two-dimensional structure consistent with the existing mind perception literature, which is essential for our theory testing in Study 3. Please note that, in subsequent studies, we include mind perception results by dimension in the main text only when a difference is hypothesized (see Appendix S1: C for full results across studies). Otherwise, we report the result of a composite measure of mind perception.
To rule out a threat-based account, we measured the perceived technological advancement of the robot ("How much do you think it is technologically advanced?" "How much do you think it uses sophisticated technologies?"; 1 = Not at all, 7 = Very much) and desire for power (sample items include "I personally would like to have more power" and "I personally would like to have a stronger sense of control"; 1 = Strongly disagree, 7 = Strongly agree; adapted from Lammers et al., 2016). In addition, to disentangle dehumanization from negative evaluations of humans, participants indicated their general attitudes about humans using a bipolar, seven-point scale on four questions (Negative–Positive; Unfavorable–Favorable; Dislike–Like; Pessimistic–Optimistic). Finally, they indicated their gender, age, education, employment status, household annual income, and political orientation for demographic information.
Results
Mind perception
We first created a composite score of mind perception by averaging the 10 measures (α = 0.81). As we expected, participants perceived a higher level of mind in the robot when they watched the robot dancing than when they watched the robot doing parkour (Mhigh = 3.29, SDhigh = 1.01, Mlow = 2.81, SDlow = 0.86; t(193) = 4.73, p < 0.001, d = 0.51). Separate analyses on the agency and experience dimensions revealed the same effect, indicating that seeing the robot dance increased mind perception of both the ability to act rationally with agency and the ability to experience emotions and feelings (agency: α = 0.75; Mhigh = 4.30, SDhigh = 1.23, Mlow = 3.73, SDlow = 1.17; t(193) = 3.34, p = 0.001, d = 0.48; experience: α = 0.91; Mhigh = 1.76, SDhigh = 1.09, Mlow = 1.45, SDlow = 0.79; t(193) = 2.35, p = 0.020, d = 0.34).
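For illustration, the composite scoring and condition comparisons described above can be sketched in a few lines of analysis code. This is a minimal sketch on simulated data, not the authors' analysis script; the item-to-dimension assignment, column names, and data-generating values are assumptions.

```python
# Minimal sketch (simulated data; hypothetical column names) of scoring the
# 10-item mind perception scale and comparing conditions with independent
# t-tests and pooled-SD Cohen's d.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 195
df = pd.DataFrame({"condition": rng.choice(["high", "low"], size=n)})
for i in range(1, 11):                      # ten 7-point items
    df[f"item_{i}"] = rng.integers(1, 8, size=n)

# Assumed assignment: items 1-6 = cognition + intention (agency),
# items 7-10 = emotion (experience).
df["mind"] = df[[f"item_{i}" for i in range(1, 11)]].mean(axis=1)
df["agency"] = df[[f"item_{i}" for i in range(1, 7)]].mean(axis=1)
df["experience"] = df[[f"item_{i}" for i in range(7, 11)]].mean(axis=1)

def compare(dv):
    hi = df.loc[df.condition == "high", dv]
    lo = df.loc[df.condition == "low", dv]
    t, p = stats.ttest_ind(hi, lo)          # standard pooled-variance t-test
    sp = np.sqrt(((len(hi) - 1) * hi.var(ddof=1) + (len(lo) - 1) * lo.var(ddof=1))
                 / (len(hi) + len(lo) - 2))
    d = (hi.mean() - lo.mean()) / sp        # Cohen's d with pooled SD
    return round(t, 2), round(p, 4), round(d, 2)

for dv in ["mind", "agency", "experience"]:
    print(dv, compare(dv))
```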
Negative treatment toward employees
We conducted a 2 (between-subjects; socio-emotional capability: high vs. low) × 3 (within-subjects; scenario: meal replacement shake vs. micro-capsule room vs. behavior tracking device) repeated-measures ANOVA to assess participants' employee mistreatment intentions. The analysis revealed significant main effects of socio-emotional capability (F(1, 193) = 6.90, p = 0.009, η2 = 0.02) and scenario (F(2, 370) = 76.72, p < 0.001, η2 = 0.015), but no significant interaction (F(2, 193) = 0.28, p = 0.74).
Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees (Mhigh = 3.04, SDhigh = 1.20, Mlow = 2.64, SDlow = 0.87; t(193) = 2.63, p = 0.009, d = 0.38). Each scenario yielded a consistent pattern, revealing two significant differences and one directional difference between the conditions (meal replacement shake: Mhigh = 2.14, SDhigh = 1.69, Mlow = 1.70, SDlow = 1.01; t(193) = 2.22, p = 0.028, d = 0.32; micro-capsule room: Mhigh = 3.88, SDhigh = 1.99, Mlow = 3.30, SDlow = 1.72; t(193) = 2.20, p = 0.029, d = 0.32; behavior tracking device: Mhigh = 2.97, SDhigh = 1.87, Mlow = 2.58, SDlow = 1.62; t(193) = 1.60, p = 0.11, d = 0.21).
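For readers who want to run a comparable analysis, here is a minimal sketch of the 2 (between) × 3 (within) design using the pingouin package; the data are simulated and every name and value is a placeholder rather than the study data.

```python
# Minimal sketch (simulated data) of a 2 (between: capability) x 3 (within:
# scenario) mixed ANOVA in long format with pingouin.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 195
condition = rng.choice(["high", "low"], size=n)
rows = []
for subject, cond in enumerate(condition):
    for scenario in ["shake", "capsule", "tracking"]:
        base = 3.0 if cond == "high" else 2.6        # assumed condition effect
        rows.append({"id": subject, "condition": cond,
                     "scenario": scenario,
                     "mistreat": base + rng.normal(0, 1)})
long_df = pd.DataFrame(rows)

# Main effects of condition and scenario, plus their interaction
aov = pg.mixed_anova(data=long_df, dv="mistreat", within="scenario",
                     subject="id", between="condition")
print(aov.round(3))
```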
We tested the effect of the socio-emotional capability of autonomous agents on the composite scores of perceived technological advancement (r = 0.84), desire for power (α = 0.80), and attitude valence about humans (α = 0.96). Results did not show any significant effect of the robot's socio-emotional capability (technological advancement: Mhigh = 6.44, SDhigh = 0.82, Mlow = 6.29, SDlow = 1.04; t(193) = 1.12, p = 0.26, d = 0.16; desire for power: Mhigh = 4.17, SDhigh = 1.26, Mlow = 3.99, SDlow = 1.12; t(193) = 1.07, p = 0.29, d = 0.15; attitude valence: Mhigh = 5.12, SDhigh = 1.57, Mlow = 5.01, SDlow = 1.42; t(193) = 0.51, p = 0.61, d = 0.07), indicating that neither perceived technological advancement nor desire for power can explain the observed mistreatment intentions, and that the mistreatment of employees is distinct from general negative attitudes toward humans. A regression analysis on the composite score of employee mistreatment (α = 0.86) revealed that the effect of socio-emotional capability remains significant when controlling for these factors (b = −0.36, t(190) = −2.42, p = 0.017).
Mediation by mind perception
We conducted a mediation analysis using PROCESS (Model 4; Hayes, 2017) to test whether mind perception mediates the effect of the salience of the robot's socio-emotional capabilities on subsequent negative treatment toward human employees. The model included the salience of socio-emotional capability as the independent variable, the mind perception score as the mediator, and the composite score of employee treatment as the dependent variable. The analysis showed a significant indirect effect, through mind perception of the robot, on employee treatment (b = −0.15, SE = 0.07, 95% CI [−0.3015, −0.0384]). The mediation results remained significant when including perceived technological advancement, desire for power, and general negative attitudes toward humans as covariates in the model (b = −0.13, SE = 0.06, 95% CI [−0.2646, −0.0260]).
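PROCESS Model 4 is an SPSS/SAS/R macro, but the same indirect-effect logic can be approximated in open-source tools. Below is a minimal sketch using pingouin's bootstrap mediation test on simulated data; variable names and effect sizes are assumptions, and the sketch illustrates the model structure rather than reproducing the reported estimates.

```python
# Minimal sketch (simulated data) of a simple mediation test:
# condition -> mind perception -> employee mistreatment.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
n = 195
x = rng.choice([0, 1], size=n)                  # 0 = low, 1 = high capability
m = 2.8 + 0.5 * x + rng.normal(0, 1, size=n)    # mind perception (assumed a-path)
y = 2.0 + 0.4 * m + rng.normal(0, 1, size=n)    # mistreatment (assumed b-path)
df = pd.DataFrame({"capability": x, "mind": m, "mistreat": y})

# Percentile bootstrap CI for the indirect effect (PROCESS defaults to 5000 draws)
res = pg.mediation_analysis(data=df, x="capability", m="mind", y="mistreat",
                            n_boot=5000, seed=42)
print(res.round(4))
```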
Discussion
This study provides initial evidence consistent with H1 and H3b: when people perceive autonomous agents as having a higher level of socio-emotional capabilities, attributing more mind to them, they show more negative treatment intentions toward employees. These results further rule out the motivated reasoning account that people might want to degrade and treat other people negatively when they feel threatened by perceiving a high level of mind in autonomous agents.
A supplemental study replicated the effect using a different manipulation of socio-emotional capability salience (see Supplemental Study A in Appendix S1: F). In that study, we manipulated which product features were salient in a description of the same smart mirror for home workouts. The level of socio-emotional capability was manipulated by highlighting different features of the product (e.g., sending personalized messages and encouraging users during the workout to meet their goal [high socio-emotional capability condition] vs. providing accurate measurements of body posture and biofeedback during the workout [low socio-emotional capability condition]). Consistent with the Study 1 results, participants who had been exposed to the product with its socio-emotional (vs. analytical) features salient, and who therefore perceived greater mind in the nonhuman exemplar (mind perception: α = 0.80; Mhigh = 2.94, SDhigh = 0.88, Mlow = 2.21, SDlow = 0.73; t(200) = 6.41, p < 0.001, d = 0.91), showed greater support for poor treatment practices in the workplace (α = 0.85; Mhigh = 2.89, SDhigh = 1.17, Mlow = 2.46, SDlow = 1.00; t(200) = 2.77, p = 0.006, d = 0.40). Mind perception mediated the effect of socio-emotional capability salience on negative employee treatment (b = −0.16, SE = 0.07, 95% CI = [−0.3239, −0.0330]).
STUDY 2: EXTREMITY OF AUTONOMOUS AGENT CAPABILITY MODERATES THE ASSIMILATION AND CONTRAST EFFECTS
Although we primarily focus on the assimilation effect and its dehumanizing consequences in this paper, Study 2 aimed to demonstrate both assimilation and contrast effects to highlight the underlying process. Our conceptualization hinges on the idea that when an autonomous agent is seen as having a high level of mind, it becomes information relevant enough to influence the mental representation of the human category. The IEM then suggests that perceived representativeness is a major determinant of how the information is used, resulting in assimilation effects when an exemplar is included in the category representation and contrast effects when it is excluded. We varied the extremity of a robot's capability to manipulate its representativeness of the human category. The autonomous agents in Study 1 were depicted as having capabilities comparable to humans', which would make the perceived similarity and representativeness of the autonomous agents high and the resulting assimilation process more likely. However, when the capabilities of autonomous agents are extreme, far beyond the level of human capabilities, people would perceive a clear boundary between humans and machines. The representativeness of the autonomous agents would then decrease, and thus we predict a contrast effect that instead amplifies the perceived humanness of people.
The current study also aimed to investigate two alternative explanations. First, to address the threat-based motivated reasoning account, we directly assessed perceived threat in this study. Second, we examined another alternative account based on the devaluation of people. The premise here is that perceiving a high level of mind in autonomous agents may lead individuals to view humans as relatively less competent and inferior to machines. This perception of human incompetence may then result in a negative "valuation" of humans in general, thereby leading to employee mistreatment, not necessarily due to a discounting of perceived humanity. The design of Study 2 allows us to test this possibility by employing conditions where the capabilities of autonomous agents are clearly superior and incompatible with those of humans. If the dehumanized perception of people stems from devaluation, we should observe greater dehumanization when autonomous agents exhibit extreme and superior capabilities. If, in contrast, the dehumanized perception arises from the assimilation and contrast processes, affected by the representativeness of the initial exemplar, the extreme autonomous agent with incomparable capabilities would lead to less dehumanization of people.
Method
We collected 451 complete surveys from Prolific (218 male, Mage = 37.67). This study employed a 2 (socio-emotional capability: high vs. low) × 2 (exemplar extremity: moderate vs. extreme) between-subjects design (see Appendix S1: F for full stimuli).
We used the same manipulation of socio-emotional capability salience as in Study 1. First, participants watched a short video clip either of a dancing robot (high socio-emotional capability) or of a robot doing parkour (low socio-emotional capability). After watching the clip, participants read a description of the robot, in which we manipulated the extremity of the robot by varying its physical capabilities. In the moderate conditions, participants read the same description used in Study 1, stating that the robot can assess the environment using its own sensors, avoid obstacles, and make autonomous decisions for search and rescue operations. The description in the extreme conditions included an additional paragraph highlighting the robot's exceptional vision system, extending to infrared, UV, X-ray, and thermal vision. Participants then answered comprehension check questions about the description.
In the next section, participants were asked to indicate their perceptions of humans in general on an eight-item dehumanization scale (Bastian et al., 2013), such that more dehumanization indicates perceiving less mind and less humanity in other people. We created an overall dehumanization index by averaging all eight items. The dehumanization scale includes four items drawing on the qualities associated with experience, the lack of which results in mechanistic dehumanization (e.g., "He/she would be superficial, like he/she has no depth" and "He/she would be mechanical and cold, like a robot"), and four items drawing on the qualities associated with agency, the lack of which results in animalistic dehumanization (e.g., "He/she would lack self-restraint, like an animal" and "He/she would be unsophisticated"). Responses were made on a seven-point scale (1 = Not at all, 7 = Very much). Note, however, that we did not expect dimension-specific dehumanization in the current study because Study 1 revealed that the socio-emotional capability salience (i.e., dancing vs. parkour) increases mind perceptions on both dimensions.
To rule out the alternative account based on self-threat, participants reported how much threat they felt about the robot, using a seven-point scale (1 = Strongly disagree, 7 = Strongly agree; sample questions include "The robot seems to lessen the value of human existence" and "The robot makes people like me less important."; adapted from Złotowski et al., 2017), as well as the technological advancement of the robot (same measure as in Study 1), as control variables. Finally, participants indicated their gender and age for demographic information.
Results
Pretest of mind perception of exemplar
We conducted an independent pretest (N = 308) to confirm that the salience of the robot's socio-emotional capability increases its mind perception. The study design was the same as the main study, except that participants indicated their mind perception of the robot (α = 0.84), using the same measure as in Study 1, after watching the same video clip and reading the description of the robot. A 2 (socio-emotional capability: high vs. low) × 2 (exemplar extremity: moderate vs. extreme) ANOVA revealed a significant main effect of socio-emotional capability only (Mhigh = 3.12, SDhigh = 1.07, Mlow = 2.77, SDlow = 0.95; F(1, 304) = 9.33, p = 0.002, d = 0.35). No other effect was significant (p's > 0.44).
Dehumanization
In the main study, we first conducted a 2 (socio-emotional capability: high vs. low) × 2 (exemplar extremity: moderate vs. extreme) ANOVA on the composite score of dehumanization (α = 0.83). The analysis revealed a marginal main effect of extremity (Mmod = 3.16, SDmod = 1.02, Mext = 3.00, SDext = 0.91; F(1, 447) = 3.12, p = 0.08) and a significant interaction (F(1, 447) = 13.54, p < 0.001, d = 0.35; see Figure 1). Including the two control variables, technological advancement (r = 0.90) and perceived threat (α = 0.91), in the model did not change the results (main effect of the extremity: F(1, 445) = 3.35, p = 0.07; interaction: F(1, 445) = 9.85, p = 0.002, d = 0.28). As expected, separate analyses on each dimension of dehumanization revealed the same interactive patterns (mechanistic (α = 0.83): F(1, 447) = 7.49, p = 0.006, d = 0.26; animalistic (α = 0.69): F(1, 447) = 15.14, p < 0.001, d = 0.37; see Appendix S1: F for details).
Conceptually consistent with the results of Study 1, participants in the high (vs. low) socio-emotional salience condition showed greater dehumanization when the robot's capabilities were moderate (Mhigh = 3.32, SDhigh = 1.10, Mlow = 2.99, SDlow = 0.91; t(447) = 2.54, p = 0.011, d = 0.32). In contrast, when the robot's capability was extreme and incomparable to humans', participants in the high socio-emotional salience condition instead showed less dehumanization than those in the low socio-emotional salience condition, showing the reversed, contrast effect (Mhigh = 2.83, SDhigh = 0.88, Mlow = 3.16, SDlow = 0.92; t(447) = −2.66, p = 0.008, d = −0.37). Also, when perceiving a relatively high mind in the dancing robot, participants indicated less dehumanization when the robot's capabilities were extreme (vs. moderate; t(447) = −3.85, p < 0.001, d = 0.49), which might have made the distinct categorization of robot and human more salient. The extremity of the robot's capability did not affect dehumanization when participants perceived a relatively low mind in the robot doing parkour (t(447) = 1.35, p = 0.18).
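For reference, the factorial test and the simple-effects follow-ups reported above can be sketched as follows with statsmodels; the crossover pattern, column names, and values are assumptions on simulated data, not the study data.

```python
# Minimal sketch (simulated data) of the 2 x 2 between-subjects ANOVA on the
# dehumanization composite, plus a simple effect of capability within the
# moderate-extremity cells.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 451
df = pd.DataFrame({
    "capability": rng.choice(["high", "low"], size=n),
    "extremity": rng.choice(["moderate", "extreme"], size=n),
})
# Assumed crossover: assimilation under moderate, contrast under extreme
shift = np.where((df.capability == "high") & (df.extremity == "moderate"), 0.3,
        np.where((df.capability == "high") & (df.extremity == "extreme"), -0.3, 0.0))
df["dehumanization"] = 3.0 + shift + rng.normal(0, 1, size=n)

model = smf.ols("dehumanization ~ C(capability) * C(extremity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # main effects and interaction

moderate = df[df.extremity == "moderate"]
simple = smf.ols("dehumanization ~ C(capability)", data=moderate).fit()
print(simple.summary().tables[1])                # capability effect, moderate cells
```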
Discussion
These results demonstrate how perceiving mind in autonomous agents influences perceiving mind in people, using a theory-based moderator, the exemplar's extremity. Mind perception of autonomous agents, driven in particular by the salience of their socio-emotional capability, increases their relevance to the target category of humans. That is, evaluations of autonomous agents become more likely to affect evaluations of humans. However, the direction of the impact depends on the extremity of the autonomous agents.
The current results replicate the assimilation-induced dehumanization when the exemplar (i.e., the Atlas robot) is moderate and comparable to the evaluation category (i.e., humans), providing evidence supporting our hypothesis (H2a). When participants perceived nonhuman, autonomous agents as being similar to humans, the perceived humanness of actual people was assimilated toward the humanness perception of the autonomous agents, leading participants to attribute less humanity to people in general. In contrast, when the robot displayed extreme capabilities incompatible with humans', which would have made the category boundary between the autonomous agents and humans more salient and reduced its representativeness, participants instead showed an increased perception of humanity in people. These results further suggest that the dehumanization observed in our studies is the consequence of the assimilation between humanness perceptions of autonomous agents and humans, not mere priming of nonhuman traits nor human devaluation due to lack of competence relative to autonomous agents.
Furthermore, these results suggest practical implications for how companies and marketers communicate about their AI-powered products or services, particularly when autonomous agents are portrayed as having socio-emotional capabilities and are therefore likely to be perceived as similar to humans. By emphasizing unique capabilities exclusive to nonhumans, companies can differentiate autonomous agents' category membership from that of humans. This differentiation can help mitigate the negative implications of the assimilation effect.
STUDY 3: SOCIO-EMOTIONAL CAPABILITY, NOT COGNITIVE CAPABILITY, DRIVES DEHUMANIZATION
Study 3 investigates the effect of different types of AI capabilities on the observed dehumanization effect. In the prior studies, we manipulated the level of socio-emotional capability of autonomous agents, which holistically increased both the agency and experience dimensions of mind perception. However, one might question whether a high level of cognitive capability would be enough to trigger the same assimilation-induced dehumanization. We test this possibility by employing three conditions: high socio-emotional capability versus high cognitive capability versus control (low socio-emotional and low cognitive capabilities). If any type of humanlike AI capability, whether cognitive or emotional, causes the assimilation between the nonhuman and human categories, we should observe the dehumanization effect in both the high socio-emotional and high cognitive capability conditions. However, if perceiving a humanlike socio-emotional capability in autonomous agents is critical for the assimilation to occur, we should observe dehumanization only in the high socio-emotional capability condition.
We further aimed to investigate whether the observed assimilation process could be dimension-specific, primarily driven by heightened mind perception of autonomous agents on the experience dimension and resulting in a discounted perception of people's humanity in terms of emotionality. Based on the two-dimensional structure of mind perception and dehumanization, we expected that a higher level of experience perception of autonomous agents, while holding the level of agency perception constant, would lead to mechanistic dehumanization, which is the denial of human attributes related to experience, but not to animalistic dehumanization, which is related to a lack of agency (Haslam, 2006; Haslam & Bain, 2007).
Methods
We collected 651 complete surveys from Prolific (319 male, Mage = 40.24). This study employed a three-condition (high socio-emotional capability vs. high cognitive capability vs. control) between-subjects design.
Participants read about a fictitious new AI-powered service in a medical context. In the high socio-emotional capability condition, the new service, named EmpathicMind, was a virtual therapy program that interacts with individuals through text or voice and responds to individuals' subtle emotions. In the high cognitive capability condition, the service, named InsightMind, was a medical diagnosis program that provides intricate medical analyses and personalized treatment plans, integrating various data sources from medical scans to textual data of patient–doctor conversations. In the control condition, the service, named Mind Pre-check, was a survey analysis program that analyzes individuals' survey responses and generates a detailed summary report as a pre-assessment for therapy sessions. After reading the description, participants answered attention check questions and indicated their mind perception of the AI-powered program described in their condition, using the same mind perception scale as in Study 1.
In the next section, participants indicated their perceptions about humans in general using the same 8-item measure of dehumanization used in Study 2. Finally, participants indicated their gender and age for demographic information.
Results
Mind perception
A regression analysis on the composite score of mind perception (α = 0.90) revealed that the AIs in both the high socio-emotional capability (M = 3.28, SD = 1.34; t(648) = 6.10, p < 0.001, d = 0.58; see Figure 2) and high cognitive capability conditions (M = 2.91, SD = 1.17; t(648) = 2.75, p = 0.006, d = 0.29) were perceived as having more mind than the AI in the control condition (M = 2.59, SD = 1.01). The AI in the high socio-emotional capability condition was perceived as having more mind than the AI in the high cognitive capability condition (t(648) = 3.35, p < 0.001, d = 0.30).
More importantly, separate analyses of the agency (α = 0.85) and experience (α = 0.94) dimensions revealed different patterns. Compared to the control condition (agency: M = 3.36, SD = 1.38; experience: M = 1.44, SD = 0.79), participants in the high socio-emotional capability condition (agency: M = 4.05, SD = 1.52; experience: M = 2.13, SD = 1.47) perceived a higher level of agency and experience (agency: t(648) = 4.93, p < 0.001, d = 0.48; experience: t(648) = 6.19, p < 0.001, d = 0.58). As intended, however, compared to the control condition, participants in the high cognitive capability condition indicated greater mind perception only on the agency dimension (M = 3.81, SD = 1.50; t(648) = 3.19, p = 0.002, d = 0.32), not on the experience dimension (M = 1.55, SD = 1.09; t(648) = 0.97, p = 0.34). In addition, compared to the high cognitive capability condition, participants in the high socio-emotional capability condition indicated greater mind perception on the experience dimension (t(648) = 5.28, p < 0.001, d = 0.45), although the difference on the agency dimension was only marginally significant (t(648) = 1.72, p = 0.09).
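A minimal sketch of how such three-condition contrasts can be run as dummy-coded regressions follows, using statsmodels on simulated data; the condition labels, means, and noise level are placeholders rather than the study data.

```python
# Minimal sketch (simulated data) of three-condition comparisons via
# dummy-coded regression, with the control condition as the reference level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 651
df = pd.DataFrame({"condition": rng.choice(
    ["socio_emotional", "cognitive", "control"], size=n)})
assumed_means = {"socio_emotional": 3.3, "cognitive": 2.9, "control": 2.6}
df["mind"] = df["condition"].map(assumed_means) + rng.normal(0, 1.2, size=n)

# Each coefficient tests one condition against the control reference level
fit = smf.ols("mind ~ C(condition, Treatment(reference='control'))",
              data=df).fit()
print(fit.summary().tables[1])

# Changing the reference level tests socio-emotional vs. cognitive directly
fit2 = smf.ols("mind ~ C(condition, Treatment(reference='cognitive'))",
               data=df).fit()
print(fit2.summary().tables[1])
```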
Dehumanization
A regression analysis on the composite score of dehumanization (α = 0.82) revealed that participants in the high socio-emotional capability condition showed greater dehumanization (M = 3.12, SD = 1.08) than those in the high cognitive capability condition (M = 2.92, SD = 0.92; t(648) = 2.11, p = 0.036, d = 0.17) as well as those in the control condition (M = 2.86, SD = 0.99; t(648) = 2.73, p = 0.002, d = 0.21; see Figure 2). However, dehumanization in the high cognitive capability condition did not differ from the control condition (t(648) = 0.64, p = 0.52), suggesting that a high level of mind perception only in the agency, not in the experience, dimension is not sufficient to cause dehumanization.
We conducted separate analyses on each dimension of dehumanization to investigate more specifically which humanlike capability, cognitive or socio-emotional, is critical for the different types of dehumanization. First, the regression analysis on mechanistic dehumanization (α = 0.86) revealed the same pattern. Participants in the high socio-emotional capability condition were more likely to discount other people's humanity in a mechanistic way (M = 3.13, SD = 1.42) than those in the high cognitive capability condition (M = 2.84, SD = 1.31; t(648) = 2.30, p = 0.022, d = 0.22) and those in the control condition (M = 2.77, SD = 1.34; t(648) = 2.78, p = 0.006, d = 0.26). Participants in the high cognitive capability condition, who thus perceived a higher mind in autonomous agents only on the agency dimension, did not show mechanistic dehumanization compared to those in the control condition (t(648) = 0.51, p = 0.61). However, animalistic dehumanization (α = 0.63) did not significantly differ across conditions (all p's > 0.23), although participants in the high socio-emotional capability condition (vs. control) were marginally more likely to discount other people's humanity in an animalistic way (M = 3.10, SD = 0.97 vs. M = 2.94, SD = 0.88; t(648) = 1.84, p = 0.067). This marginal difference is conceptually consistent with our prior findings that an increase in mind perception of autonomous agents on both the agency and experience dimensions resulted in greater dehumanization of people on both the mechanistic and animalistic dimensions.
Mediation by mind perception
We conducted a parallel mediation analysis using PROCESS (Model 4; Hayes, 2017) to test whether a specific type of mind perception, in either agency or experience dimension, of autonomous agents mediates the dehumanized perception of people. Specifically, we predicted that a high level of socio-emotional capability, and the resulting higher mind perception in the experience dimension (not the mind perception in the agency dimension), will lead to dehumanization.
The model included the three AI capability conditions as the independent variable (1 = high socio-emotional capability, 2 = high cognitive capability, and 3 = control), the composite score of dehumanization as the dependent variable, and the experience and agency perceptions as the mediators. The analysis revealed that both the experience perception (b = −0.06, SE = 0.02, 95% CI [−0.0938, −0.0339]) and the agency perception (b = 0.04, SE = 0.01, 95% CI [0.0129, 0.0653]) mediate the effect of AI capability on dehumanization, but in opposite directions. These results suggest that, controlling for the effect of agency perception, perceiving a higher level of experience in AI leads to a higher level of dehumanized perception of people, which is consistent with our prediction. Conversely, perceiving a higher level of agency in AI, while controlling for the experience perception, tends to mitigate dehumanization, a "reversal" that is an artifact of the steps of the analysis; this effect is what remains after the effect on dehumanization driven by perceived experience.
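A minimal sketch of this parallel mediation structure follows, again approximating PROCESS Model 4 with pingouin's bootstrap mediation on simulated data. For simplicity, the sketch dummy-codes the high socio-emotional condition rather than using the three-level coding above; all names and effect sizes are assumptions.

```python
# Minimal sketch (simulated data) of parallel mediation: both mind dimensions
# (experience, agency) entered simultaneously as mediators.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(5)
n = 651
socio = rng.choice([0, 1], size=n)                       # 1 = high socio-emotional
experience = 1.5 + 0.7 * socio + rng.normal(0, 1, size=n)
agency = 3.4 + 0.6 * socio + rng.normal(0, 1, size=n)
dehum = 2.9 + 0.3 * experience - 0.2 * agency + rng.normal(0, 1, size=n)
df = pd.DataFrame({"socio": socio, "experience": experience,
                   "agency": agency, "dehum": dehum})

# Each mediator's indirect effect is estimated while controlling for the other,
# which is what produces opposite-signed paths like those reported above.
res = pg.mediation_analysis(data=df, x="socio", m=["experience", "agency"],
                            y="dehum", n_boot=5000, seed=42)
print(res.round(4))
```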
Discussion
The current results confirm that assimilation-induced dehumanization, a decrease in the perceived humanness of people toward the level of humanness perceived in an autonomous agent, is primarily driven by the socio-emotional capability of the autonomous agent and the resulting heightened mind perception of the agent on the experience dimension, not the agency dimension. The current study demonstrated that a high level of cognitive capability alone, without a high level of socio-emotional capability, does not cause the assimilation between machine and human and the subsequent dehumanization. When the AI was depicted as a highly intelligent machine without the ability to understand or respond to emotions, and therefore when participants did not perceive a high level of mind for experience in the AI, they did not discount their perceptions of other people's humanity in their subsequent evaluations.
Furthermore, when participants perceived the autonomous agent as having a more humanlike mind, endowed with emotional intelligence, they subsequently exhibited a dehumanized perception of people only in a mechanistic way but not in an animalistic way. These findings provide evidence supporting dimension-specific assimilation. Specifically, when AIs are perceived to have a heightened level of mind in experience while holding the level of agency-related mind constant, subsequent dehumanization occurs exclusively by stripping away the human mind in the experience dimension.
STUDY 4: THE ASSIMILATION BETWEEN MACHINE AND HUMAN MEDIATES CONSUMERS' EMPLOYEE MISTREATMENT
Study 4 was conducted with two primary purposes. First, it directly tests the assimilation of humanness perceptions between autonomous agents and humans as the underlying process by which mind perception of autonomous agents affects employee treatment. Our framework suggests that the socio-emotional capability of an autonomous agent leads to greater mind perception, which brings the nonhuman entity closer to the category of human. Then, as a result of assimilation, the humanness judgment of the human category decreases, consequently resulting in negative treatment toward other people, including human employees (H2b).
Second, the current study aimed to replicate the dehumanization effect using a real, consequential consumer choice. Although Study 1 and Supplemental Study A demonstrated that higher mind perception of autonomous agents leads to greater support for negative treatment in multiple workplace settings, these measures were based on hypothetical scenarios not involving actual choices. The current study demonstrates the full conceptual framework using an incentive-compatible dependent variable.
Methods
Further, emotional and social capabilities of AIs have become astonishing. AI systems now possess an exceptional capacity to accurately understand and respond to human emotions. Much like the way people understand others' emotions, AIs with emotional intelligence can “feel” emotions from fleeting micro-expressions, subtle tonal variations, and conversational nuances. This heightened emotional intelligence empowers AI to respond with remarkable empathy and adapt its interactions accordingly, forging deeper and more meaningful connections with individuals. For instance, social robots like Peppers are utilized for providing emotional support, assisting autistic children in learning social cues, or even serving as companions for the elderly.
In the low socio-emotional capability condition, by contrast, the article described the emotional and social capabilities of AIs as still tethered to limitations in accurately understanding and responding to human emotions. Despite advancements in recognizing faces and processing natural language, AI systems frequently exhibit errors in comprehending the complexity and contextual nature inherent in human emotions. Additionally, while AI can simulate empathy, its responses are algorithmically generated and lack genuine emotional understanding; it cannot truly "feel" emotions the way people do. This limitation impedes AIs and AI-powered social robots from being true companions capable of providing holistic emotional support, as they lack the intrinsic human empathy and intuition crucial to deeply understanding and connecting with individuals on an emotional level.
After reading the article, participants answered how much mind they perceived in AI-powered agents, in general, using the same mind perception scale used in Study 1. In addition, using a slider scale, participants indicated their humanness perceptions of AI-powered agents and of humans, respectively. For the humanness evaluation of humans, participants were instructed to think about a person in general, for example, a random stranger on the street. The left-hand side of the slider was labeled pure object with no intelligence at all, accompanied by a simple illustration of a mechanical machine (recorded as 0), and the right-hand side was labeled fully developed, mentally and emotionally mature human, accompanied by a human illustration (recorded as 100). Participants indicated where they would position AI-powered agents and humans on this scale. The slider handle was always initially positioned at the midpoint of the scale (50).
Subsequently, participants were informed that they could enter a random draw to win a $25 gift card in addition to their promised compensation. They were asked to choose which gift card, Amazon or Costco, they would like to receive if they won the lottery. Before making the choice, however, participants read an article describing Amazon's purportedly dehumanizing employment practices. The article was based on actual news accounts of the working conditions at Amazon warehouses (Palmer, 2023; Sainato, 2020). Our dependent measure was the share of participants who chose to receive the Amazon gift card. The logic behind this measure was that if consumers dehumanized others, assimilating the humanness perception of actual people toward the humanness perception of machines, they would be less bothered by the dehumanizing treatment of employees at Amazon, which would increase the choice share of the Amazon gift card. Finally, participants reported their gender and age for demographic information.
Results
Mind perception
The composite score of mind perception (α = 0.89) revealed that participants in the high socio-emotional capability condition attributed more mind to autonomous agents than those in the low socio-emotional capability condition (Mhigh = 3.82, SDhigh = 1.29, Mlow = 2.94, SDlow = 1.04; t(278) = 6.25, p < 0.001, d = 0.75).
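For illustration, the composite score is the mean of the mind perception items, and its internal consistency can be checked with Cronbach's alpha. The following is a minimal Python sketch, assuming the scale items are stored in hypothetical columns (mind_1 through mind_k) of a participant-level data frame; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    divided by the variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Composite score as the mean across the (hypothetical) scale items:
# df["mind"] = df[["mind_1", "mind_2", "mind_3"]].mean(axis=1)
```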
Assimilation of humanness perceptions
Next, as our main theoretical test, we directly investigated the assimilation effect by subtracting the humanness evaluation of an AI-powered agent from the humanness evaluation of a general person. When AI was perceived as having greater socio-emotional capability, participants' perceptions of machines and humans were more likely to be assimilated. In other words, the difference between the humanness perceptions of AI-powered agents and a general person was smaller in the high socio-emotional capability condition than in the low socio-emotional capability condition (Mhigh = 36.87, SDhigh = 40.50, Mlow = 56.80, SDlow = 34.00; t(278) = −4.45, p < 0.001, d = 0.53).
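The computation behind this test is straightforward. The sketch below, in Python with hypothetical column names (condition, human_rating, ai_rating), illustrates the difference score, the independent-samples t-test, and the Cohen's d reported throughout; it is an illustration, not the authors' analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats

# df: participant-level DataFrame with columns condition, human_rating, ai_rating.
# Assimilation score: smaller values mean the humanness judgments of a general
# person and of the AI agent sit closer together (more assimilation).
df["assimilation"] = df["human_rating"] - df["ai_rating"]

high = df.loc[df["condition"] == "high", "assimilation"]
low = df.loc[df["condition"] == "low", "assimilation"]

t_stat, p_value = stats.ttest_ind(high, low)  # independent-samples t-test

# Cohen's d from the pooled standard deviation
n1, n2 = len(high), len(low)
pooled_sd = np.sqrt(((n1 - 1) * high.var(ddof=1) + (n2 - 1) * low.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (high.mean() - low.mean()) / pooled_sd
```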
Separate analyses of the humanness evaluations of autonomous agents and a general person revealed the same patterns. Participants in the high socio-emotional capability condition, who therefore perceived greater mind in AIs, evaluated AI-powered agents as closer to humans (Mhigh = 40.89, SDhigh = 28.63, Mlow = 26.63, SDlow = 26.47; t(278) = 4.32, p < 0.001, d = 0.52). Furthermore, they discounted the humanness of actual people, evaluating a general person as closer to an object with no intelligence (Mhigh = 77.76, SDhigh = 23.92, Mlow = 83.43, SDlow = 17.32; t(278) = −2.26, p = 0.024, d = 0.27).
Gift card choice
A logistic regression on the likelihood of choosing the Amazon gift card revealed a significant difference by condition (b = 0.51, SE = 0.24, z = 2.10, p = 0.036). When AIs' socio-emotional capability was depicted as high, about 65% of participants selected the Amazon gift card over Costco. Significantly fewer participants (52%) selected the Amazon gift card when AIs' socio-emotional capability was depicted as low (χ2 = 4.42, p = 0.036).
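This is a standard binary logistic regression of choice on condition. A minimal Python sketch, again with hypothetical variable names, follows; the reported b and z correspond to the condition coefficient and its Wald test.

```python
import statsmodels.formula.api as smf

# df: one row per participant, with choice coded 1 = Amazon, 0 = Costco,
# and condition contrast-coded (1 = high, 0 = low socio-emotional capability)
fit = smf.logit("choice ~ condition", data=df).fit()
print(fit.summary())  # the condition coefficient is the log-odds difference
```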
Mediation by assimilation
We hypothesized that when people perceive autonomous agents as more humanlike, the humanness perception of actual people is more likely to be assimilated toward the humanness perception of AI agents. This assimilation between AIs and humans would then result in negative employee treatment, operationalized in the current study as a higher likelihood of choosing the Amazon gift card. To test this mediation, we ran a serial mediation analysis using PROCESS (Model 6; Hayes, 2017). The model used the socio-emotional capability condition as the independent variable, the humanness evaluations of AI and a general person as the serial mediators, and gift card choice as the dependent variable.
The analysis confirmed a significant indirect effect of condition on participants' choice of the Amazon gift card through the humanness perception of AI (mediator 1) and the humanness evaluation of a general person (mediator 2) (b = 0.04, SE = 0.03, 95% CI = [0.0055, 0.1058]; see Figure 3). Specifically, when AIs were depicted as having a high level of socio-emotional capability, participants perceived AIs as more humanlike, leading them to discount the humanness of actual people. This dehumanized perception resulted in a higher likelihood of choosing the Amazon gift card despite participants being aware of Amazon's dehumanizing treatment of its employees.
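For readers interested in the mechanics of serial mediation with a binary outcome, the serial indirect effect is the product of three paths (condition to mediator 1, mediator 1 to mediator 2, mediator 2 to outcome), with the outcome model estimated by logistic regression so the effect is on the log-odds scale. The Python sketch below bootstraps this product under hypothetical column names (condition, ai_humanness, human_humanness, choice); it illustrates the technique rather than reproducing the PROCESS implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def serial_indirect(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0):
    """Percentile bootstrap of the serial indirect effect
    condition -> ai_humanness -> human_humanness -> choice (binary)."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True, random_state=rng)
        a1 = smf.ols("ai_humanness ~ condition", s).fit().params["condition"]
        d21 = smf.ols("human_humanness ~ condition + ai_humanness",
                      s).fit().params["ai_humanness"]
        # Binary outcome: the Y-model is logistic, so the indirect
        # effect is expressed on the log-odds scale
        b2 = smf.logit("choice ~ condition + ai_humanness + human_humanness",
                       s).fit(disp=0).params["human_humanness"]
        draws.append(a1 * d21 * b2)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return float(np.mean(draws)), (float(lo), float(hi))
```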
Discussion
The current results provide direct evidence of assimilation between autonomous agents and humans in their humanness perceptions, as an underlying process of the subsequent dehumanizing employee treatment. Study 4 confirmed our theorizing that the high socio-emotional capability of autonomous agents increases their humanness perception, which promotes the assimilation between machine and human. The resulting dehumanized perception of people then leads to the mistreatment of employees. Further, Study 4 extended the previous findings on employee mistreatment using participants' choices with actual consequences.
A supplemental study (Appendix S1: F, Supplemental Study B) again demonstrated the mediation of AI mind perception and dehumanization on employee mistreatment. The procedure was identical to that of Study 4, except that we measured dehumanization (with the same measure as in Study 2) instead of humanness on a slider scale. When participants perceived AIs as having a high level of socio-emotional capability, and thus perceived more mind in AIs, they dehumanized other people more, leading to a higher likelihood of choosing the Amazon gift card (b = −0.10, SE = 0.07, 95% CI = [−0.2897, −0.0036]). The consistent results of these two studies provide converging evidence that when autonomous agents are perceived as having a high level of socio-emotional capability, assimilation between machines and humans becomes more likely, which increases the propensity for mistreatment of others.
STUDY 5: THE ASSIMILATION-INDUCED DEHUMANIZATION WITHIN A COMPANY SETTING
Study 5 aimed to demonstrate the assimilation-induced dehumanization and the subsequent behavioral consequence within a specific company setting. In previous studies, participants were exposed to autonomous agents without any relevance to a particular company and either indicated their attitudes toward employees devoid of context (Studies 1 and 4) or evaluated the humanness of people in general (Studies 2–4). While the consistent dehumanization effect observed in prior studies implies that consumer exposure to autonomous agents with socio-emotional capabilities can broadly influence perceptions of the human category, it does not directly illustrate how this effect might manifest within a firm where both human employees and autonomous agents potentially interact with consumers. Thus, the current study measured participants' assessment of a virtual assistant and a human customer service agent, instead of a category judgment about humans in general, and their behaviors toward these human employees in the given context.
Methods
We collected 331 completed surveys from Prolific (169 male, Mage = 42.02). Participants were instructed that they were participating in a consumer survey for a new AI-powered service: a conversational virtual assistant for customer service. Participants read a short description of the virtual assistant stating that, powered by natural language processing and machine learning technology, it learns to understand human intentions expressed through words and improves its responses based on its past interactions with a customer.
We manipulated the level of socio-emotional capability of the autonomous agent by varying the last paragraph of the description. In the high socio-emotional capability condition, participants read that the virtual assistant analyzes tonal variations and subtle nuances in conversation and adapts its tone to suit different situations and meet customers' emotional needs. In the low socio-emotional capability condition, participants read that the virtual assistant generates a summary report of a customer so that the customer does not have to repeat the same basic information when connected to a human operator.
After reading the description, participants answered attention check questions and indicated their mind perceptions of the virtual assistant, using the same scale used in the previous studies. They also indicated their humanness perceptions of the virtual assistant and of a human operator, respectively, using the same slider scale as in Study 4, with the two anchors of pure object with no intelligence at all (0) and fully developed, mentally and emotionally mature human (100). For the humanness evaluation of a human operator, participants were instructed to think about a human operator whom they would randomly connect with when calling a customer service center.
In the following section, participants were informed that the company behind the virtual assistant was planning a fundraising campaign for customer service personnel, specifically designed to improve their mental health. Participants were also told that they would receive a bonus payment of $0.25 in addition to their promised compensation and could donate the bonus payment to the fund. Our dependent measure was the share of participants who chose to donate the bonus payment. Finally, participants indicated their gender and age for demographic information.
Results
Mind perception
The composite score of mind perception (α = 0.89) revealed that participants in the high socio-emotional capability condition attributed more mind to autonomous agents than those in the low socio-emotional capability condition (Mhigh = 3.19, SDhigh = 1.27, Mlow = 2.76, SDlow = 1.07; t(329) = 3.33, p < 0.001, d = 0.37).
Assimilation of humanness perceptions
We computed the assimilation score by subtracting the humanness evaluation of the AI-based customer service agent from the humanness evaluation of a human operator. Consistent with the Study 4 results, participants in the high (vs. low) socio-emotional capability condition showed a higher level of assimilation between the humanness perceptions of the nonhuman and human entity (Mhigh = 45.09, SDhigh = 41.17, Mlow = 60.18, SDlow = 33.50; t(329) = −3.67, p < 0.001, d = 0.40).
Separate analyses of the humanness evaluations of the AI agent and a human operator revealed the same patterns. Participants in the high socio-emotional capability condition, who therefore perceived greater mind in the AI agent, evaluated it as closer to humans (Mhigh = 38.64, SDhigh = 27.72, Mlow = 28.14, SDlow = 26.84; t(329) = 3.50, p < 0.001, d = 0.39). Furthermore, they dehumanized a human operator, evaluating the person as closer to an object with no intelligence (Mhigh = 83.73, SDhigh = 21.44, Mlow = 88.31, SDlow = 18.25; t(329) = −2.10, p = 0.036, d = 0.23).
Donation
A logistic regression on the donation likelihood revealed a significant difference by condition (b = 0.54, SE = 0.22, z = 2.42, p = 0.016). In the high socio-emotional capability condition, about 37% of participants donated their bonus payment to the mental health support campaign for customer service professionals, whereas about 50% of participants donated in the low socio-emotional capability condition (χ2 = 5.88, p = 0.015).
Mediation by assimilation
We tested the mediation through the assimilated humanness perceptions of the AI and human agents, using PROCESS (Model 6; Hayes, 2017). The analysis revealed a significant indirect effect of condition on participants' likelihood of donation through the humanness evaluation of the AI agent (mediator 1) and the humanness evaluation of a human operator (mediator 2) (b = 0.03, SE = 0.02, 95% CI = [0.0030, 0.0652]). As participants perceived the AI agent as more humanlike in the high socio-emotional capability condition, they evaluated a human operator as less humanlike, resulting in a decreased likelihood of donating toward the mental health of these human employees.
Discussion
The current study provides converging evidence that the socio-emotional capability of an autonomous agent can lead to negative employee treatment, due to the assimilated humanness perceptions between the nonhuman and human agents. Further, the current study demonstrates the effect of assimilation-induced dehumanization within a company setting, which increases the relevance of the findings, particularly for companies offering customer interactions with both autonomous and human agents.
GENERAL DISCUSSION
The present research suggests that perceiving socio-emotional capabilities, and thus a high level of humanlike mind, in autonomous agents can influence how consumers perceive and treat flesh-and-blood people. The current paper provides a new perspective on customer–employee interaction, proposing technology-induced dehumanization as a novel pathway to employee mistreatment. Our findings reveal the assimilation of autonomous agents and humans as a cognitive process underlying the dehumanization effect. Across five experimental studies, we demonstrated that when consumers perceive a high level of socio-emotional capabilities in autonomous agents, they are inclined to attribute a more humanlike mind to them. Consequently, they ascribe less humanness to actual people as a result of the assimilation process. This dehumanized perception of people leads to negative attitudes and behaviors toward employees.
We replicated our findings in various contexts using different perceptual and behavioral measures. Throughout the studies, we demonstrated the assimilation-induced dehumanization effect using a variety of autonomous agents, including robots and AI-powered consumer products and services. We captured the consequences of dehumanization using multiple measures, including behavioral intentions in hypothetical scenarios (Study 1), consumer choice (Study 4), and donations for employees' welfare (Study 5). Consistently across the studies, we found that how much mind consumers perceive in autonomous agents influences how much mind they perceive in humans in general, and that people treat others in a more dehumanizing manner as the two mind perceptions become more closely assimilated.
Theoretical contributions
This research contributes to the burgeoning literature on consumer interaction with technology. As new technologies such as smart devices, algorithms, and robots emerge, multiple scientific disciplines, from computer science to psychology, communications, marketing, and organizational behavior, have started exploring factors influencing people's attitudes toward these innovations. Many of these new technologies blur the boundary between humans and machines, potentially introducing an additional layer to the consequences of interacting with these autonomous agents. However, little research has delved into the nature of this impact and how consumers' experiences with these increasingly humanlike technologies affect their lives in the real world. We believe this research represents one of the first studies to examine the social implications of technologies with humanlike features and empirically test the underlying process of the change in employee treatment with technological advancement.
Furthermore, this research contributes to the literature on mind perception and anthropomorphism. It suggests that anthropomorphism, attributing a humanlike mind to nonhuman objects, can elicit qualitatively distinct responses from consumers depending on which dimension of mind is attributed to the object. The anthropomorphism literature has demonstrated robust, far-reaching consequences, from increased liking and trust (Labroo et al., 2008; Waytz et al., 2014) to the extension of existing social beliefs and norms to inanimate objects (Chandler & Schwarz, 2010; Kim & McGill, 2018; May & Monga, 2014). However, just as the consequences of anthropomorphism are wide-ranging, so are its manipulations, from altering product shapes to resemble humans (e.g., a smiling car; Aggarwal & McGill, 2007) to encouraging consumers to attribute agency to an entity with its own goals and intentions (e.g., skin cancers as a crime family out to hurt people; Kim & McGill, 2011). This leaves unclear exactly which humanlike traits drive the perception of humanness in objects. The present findings suggest that, at least in the context of autonomous agents, perceiving socio-emotional capabilities, and thus attributing a high level of mind in the experience dimension, is more crucial to humanness perception. This anthropomorphized perception of autonomous agents' ability to experience emotions subsequently influences the dehumanized perception of the human category and contributes to dehumanizing treatment of others.
The present research advances our understanding of the dehumanization literature by proposing a novel antecedent of dehumanization. Much dehumanization research has focused on the role of perceivers' motivational states: for example, dehumanization occurs when people are motivated to rationalize their aggression toward an outgroup (Castano & Giner-Sorolla, 2006; Koval et al., 2012) or when they are not motivated to perceive others' minds because their motivation for social connection is not salient (Gwinn et al., 2013; Ruttan & Lucas, 2018). However, relatively little work has explored instances of dehumanization without a social motive. This paper suggests that dehumanization can occur as an unintended consequence of cognitive processes. Specifically, perceiving a humanlike mind in autonomous agents may expand the category of the machine toward the category of the human, resulting in the assimilation of the humanness perceptions of the two. By demonstrating this assimilation effect in the domain of technology, this paper showcases technology-induced dehumanization as a novel factor affecting employee mistreatment.
Managerial implications and future directions
The current results showcase the behavioral consequences of dehumanization, particularly in the context of employee mistreatment. With the increasing adoption of autonomous agents across industries such as banking, hospitality, and retail, consumers encounter both nonhuman and human agents during their interactions in the marketplace. Consumers oftentimes initially engage with chatbots for basic questions and are then relayed to human employees, or they may observe service robots and human employees working together on various tasks. In our studies, when people attributed a humanlike mind to autonomous agents, even brief exposure to these agents was enough to change, at least transiently, their attitudes and behaviors toward employees, both as consumers and as decision-makers in managerial roles. Companies that employ high-tech facilities, such as chatbots and cooperative robots, need to be particularly mindful of the potential negative impact of these nonhuman entities on how their human employees are treated by consumers and managers in the workplace.
Daily experiences and exposure to autonomous agents may not only affect consumers but also shape companies' expectations of their employees, such as expecting them to work longer hours, take fewer breaks, and not become physically or psychologically exhausted even in harsh working conditions. As technology-induced dehumanization changes the "category" perception of humans in their humanness, negative treatment can extend beyond workers. Companies may overlook consumers' needs and rights as human beings, leading to the introduction of "dehumanizing" products and services, for example, stand-up airplane seats in which passengers must squat-stand in a space their legs barely fit (Asquith, 2020). Moreover, when consumers perceive other consumers as less human, they may engage in fewer prosocial behaviors, such as interacting with each other and offering help when needed. This can make it challenging for a company to foster a sense of community among its consumers. Additionally, the diminished perception of humanness in others could potentially encourage antisocial and immoral behaviors, including cheating and instrumental violence (Kouchaki et al., 2018; Rai et al., 2017). These behaviors could ultimately decrease consumers' satisfaction during their shopping or consumption experiences.
Our findings in Study 2 suggest a theory-driven approach for companies to mitigate unwanted dehumanization by highlighting the distinctions between autonomous agents and humans to establish clear boundaries between the two categories. Research indicates that consumers experience greater discomfort and engage in compensatory behaviors when exposed to service robots with humanlike features compared to human service providers. However, a simple, explicit reminder that these humanoid robots are nonhuman machines can reduce these coping behaviors (Mende et al., 2019). While further empirical investigation is needed, providing cues that convey the category information of autonomous agents may potentially reverse the dehumanization effect. For example, when transitioning from virtual assistants to human employees, consumers could be explicitly informed that they are about to interact with actual people like themselves. In addition, as suggested in Study 3, given that the perception of high levels of socio-emotional capability in autonomous agents drives the assimilation-induced dehumanization, emphasizing the different nature of these socio-emotional capabilities in autonomous agents may help reinforce the distinct categorization of machines and humans.
One might wonder whether the observed assimilation and dehumanization effect would also occur by giving superficial humanlike traits to a generic product, as seen in traditional anthropomorphism studies. Our theorizing suggests that the assimilation-induced dehumanization effect occurs when a nonhuman object is perceived as having a high level of the human mind, thereby making it similar enough to a human category to influence the representation of humans. Thus, we predict that the dehumanization effect will not occur when the exemplar only exhibits superficial humanlike features (e.g., humanlike appearance) without any functional features similar to humans (e.g., intelligence), which are more crucial for attributing a high level of the human mind to a nonhuman object.
Supplemental study C (see Appendix S1: F) provides initial evidence consistent with our prediction. In this study, we replicated the assimilation-induced dehumanization by demonstrating reduced prosocial behavior (i.e., less donation to humanitarian charity) when participants were exposed to an AI agent with (vs. without) a humanlike appearance, from which they perceived a higher level of the human mind. However, the superficial anthropomorphism manipulation applied to a generic product without AI did not yield such an effect. Although the results above suggest that merely recognizing humanlike superficial features in generic products may not suffice to induce dehumanization, we leave it to future research efforts to delineate the “dividing line” at which such effects occur.
In this paper, we intentionally avoided employing androids (i.e., robots that realistically resemble human aesthetics, such as skin and hair) to center our focus on the mind as a fundamental defining characteristic of humans, without conflating mind perception with anthropomorphism by embodiment. However, robots with highly humanlike appearances elicit feelings of eeriness, discomfort, and threats to human identity (Blut et al., 2021; Kim et al., 2019; Mori, 1970). When people feel their human identity is threatened, they may be more motivated to attribute humanness to other members of the human category to protect their own human identity and distinguish themselves from autonomous agents. In our studies, we documented that the more mind was perceived in autonomous agents, the less mind was attributed to actual people, suggesting a linear relationship. However, this might have been due to the relatively low level of human embodiment of the autonomous agents in our experimental settings, which would have limited the human identity threat experienced by participants. Given the wide range of humanlike appearances among robots deployed in the marketplace, further empirical investigation is needed to explore the possibility of a nonlinear relationship between the humanness perception of autonomous agents and the resulting humanness perception of actual people.
While this paper focused on the perception of humanity regarding "others," an intriguing avenue for future research would be the effect on self-perception. Two distinct possibilities can be predicted. First, the dehumanization effect observed toward others may extend to self-evaluations through the same process documented in the current studies. For instance, employees might then be more willing to accept dehumanizing conditions when working alongside highly humanlike autonomous agents. Alternatively, people may have a stronger motivation to uphold their own humanity compared to that of others. Therefore, they might engage in strategies to affirm their human identity and appreciate their human characteristics more when autonomous agents seem more similar to humans. It would be important, both theoretically and practically, to distinguish between these two separate paths regarding how experiences with autonomous agents may impact the dehumanization of others and the self differently. Consumers might respond differently after encountering the same technological entities when making choices for others versus when making choices for themselves.
Recent technological advancements have empowered autonomous agents with increasingly humanlike capabilities, spanning both cognitive and socio-emotional domains. As consumers encounter and interact with these autonomous agents more frequently, it becomes crucial for companies and consumers alike to understand the downstream consequences and when these effects are likely to manifest. The present research demonstrates assimilation-induced dehumanization: exposure to autonomous agents with socio-emotional capabilities and the perception of a high level of mind in these agents diminish the perception of mind in people overall, resulting in dehumanizing behaviors toward others. These findings deepen our understanding of the interconnected relationship of anthropomorphism and dehumanization, while also offering practical strategies to mitigate the unforeseen dehumanization and employee mistreatment caused by autonomous agents.
FUNDING INFORMATION
The authors thank the Kilts Center for Marketing, the University of Chicago Booth School of Business, and the London School of Economics for financial support.