
“I’m Afraid We Are Automating This Work Without Really Understanding It”

February 20, 2025

Summary.   

AI is often touted as a way to handle busy work to free people up for tasks that matter. But in the race to add automation to pretty much every job, it’s rare that people question what, exactly, people are being freed from, and which tasks actually matter.

At this point, AI has touched pretty much every workplace and every job, to some degree—from customer service to medicine to the little automated pop-up that appears whenever I cut-and-paste something into a document. This can sometimes be handy, but other times it can degrade what Johns Hopkins University sociologist Allison J. Pugh calls “connective labor” in her latest book, The Last Human Job. This type of labor involves “seeing the other and reflecting understanding back,” an action critical to “millions of jobs, including people working not just in health care, counseling, or education, but also in the legal, advertising, and entertainment industries, in management, in real estate, in tourism, even in security.”

The humanity of this work is threatened when automation prevents us from recognizing and truly being present with other people, or when we have to collect data about every interaction instead of focusing on the human being in front of us. (The first chapter of the book features a hospital chaplain who must record her interactions with patients “in no fewer than three different tracking systems,” gathering information that probably isn’t even necessary for her to do her job successfully.)

Attempts to use technology to “solve” our perceived inefficiencies, or to supposedly free us up for meaningful work that matters, also raise a broader question. To paraphrase Pugh, “who is going to be freed and what counts as meaningful?”

I recently posed a few questions to Pugh over email to better understand her research on society’s rush to automate work, what we’ll lose if we go too far (and who will bear the brunt of these losses), and what advice she has for leaders who are considering integrating AI and other technologies into their organizations. This is an edited version of our conversation.

HBR: One commonly argued benefit of incorporating things like generative AI into work tasks is that busy work in front of screens or with devices will be reduced or eliminated, allowing for more human connections with colleagues, clients, patients, or customers. In other words, that the technology is beneficial in part because it will bring people closer together, which will in turn make people happier and more productive.

In your research, does this rationale ring true? And if not, what does this thinking miss?

Pugh: That kind of change would indeed be a huge benefit. So many of us are beset with mind-numbing tasks in our jobs that the idea that we could slough some of them off to machines sounds positively utopian. Perhaps that is why the notion that “AI will free us up for meaningful work” is so common, promulgated by AI researchers and economic analysts alike. Yet this argument relies on a certain naiveté about capitalism, I think, at least as it is currently practiced in the United States: If AI takes work off our hands, is it likely that employers will then seek to fill our days with more meaningful tasks, or that they will instead take the opportunity to eliminate jobs and reduce their workforce when they can?

I love the vision of people closer together, happier and more productive. But that vision also relies on the notion that human connections with colleagues, clients, patients, or customers are highly valued, and I’m afraid that is not currently evident. If they were prized, then perhaps researchers would not find that the more a job involves face-to-face communication with clients or customers, the less it is paid—controlling for skill and other characteristics. If we treasured human connection, we would not ask the same people who are in charge of forging it—the teachers, primary care physicians, and others—to spend their time collecting data, fitting in their connective work on the side as they can. Instead, our current management practices suggest we don’t really value this work, and if that is true, then AI is unlikely to be deployed so that we can accomplish it better.

Indeed, I wrote The Last Human Job in part to spell out what this connective labor is, how people do it, and why it is so valuable. I’m afraid we are automating this work without really understanding it, and thus what is at stake.

What are some examples of how technology (be it gen AI or something else) promises to create connections or improve workers’ experiences, but instead does the opposite?

There are many examples of technology gone awry: e.g., chatbots that respond to people confessing their depression with “maybe the weather is affecting you,” or offering weight loss tips to eating disorder clients. Some organizations are racing to embrace AI before it is reliable enough to put in unsupervised contact with people.

But even when the technology performs as promised, it affects human relations. I talked to one woman who worked as a “coach” at a startup that offered cognitive behavioral therapy in an app to people with social anxiety. The firm expressly forbade her from doing therapy, and did not pay her or expect her to be a credentialed counselor, but the clients themselves ended up treating it like an inexpensive talking cure, with some working with her for months at a time. She told me how hard it was to hear about her clients’ trauma, and how she had had no training in how to handle it.

The impact of AI on her job was threefold. First, she was a classic example of what Harry Braverman once called “deskilling”: when a firm breaks down the component parts of a particular job and then hires cheaper labor to complete much of the work. While Frederick Taylor might have done this to bricklaying a century ago, today’s AI does this to socio-emotional work: the startup had divided a therapist’s job into the app software and a team of untrained, uncredentialed “coaches.” Second, her work was made invisible, as the firm denied that her regular encounters with clients counted as any sort of counseling. This kind of invisibilization is a common-enough finding for other examples of AI “automation,” where unseen armies labor behind the scenes to train models about whether something is a bagel or a dog, for example, but as it turns out, it applies to “automated” connecting work also. Finally, she faced the existential problem of having to prove that she was human to customers used to working with machines.  The technology incontrovertibly shaped her experience, making her feel somewhat automated herself.

Are there things executives and managers can do better in thinking about what technologies they want to introduce in their organizations, and why? Are there key questions they should be asking — about the technology, what their organization really needs, and what their employees might experience or feel while using it?

The key issue is that technology is not a neutral force that might simply solve a problem, but that it reflects the culture of the firm where it is introduced. And business leaders have a lot of influence on what that culture is—particularly how it may or may not encourage human connections between workers and their clients. I analyzed firms where workers managed to forge strong connections, and others where they did not. The difference comes down not just to material factors like adequate time, money and space—although of course those factors are important. It also matters whether leaders articulate a vision with commitment to human connection, whether they foster mentorship and sounding boards for workers to process what they are hearing, and whether they help to enact the norms and rituals that prioritize relationships.

Any leader seeking to introduce new technology into their organization should ask themselves how the tech would affect those factors. We might call this a “connection criterion”—how a given technology affects the way humans relate to each other—and applying it clarifies the potential impact of introducing new technology not just on the core services offered to patients or clients, but also on the interactions among a firm’s workers. Unless your organization is almost entirely automated, those interactions are vital for the processes that produce what you sell.

In my research, I met charismatic leaders—the head of a clinic, the principal of an independent school, the director of a program teaching inmates business skills—who managed to build a culture of connection, making time and space for it, articulating its value, prioritizing it for their employees. Some introduced new technologies but subordinated them to the relationships these leaders valued.

Finally, executives should keep in mind that technology can promise an easy solution to a compartmentalized problem, but too often it ends up not reducing work so much as reshaping who does it and how visible that work is. The introduction of robots to manufacturing is actually a good example: it is not that humans are necessarily “freed up” for more meaningful work; instead, the human job becomes assisting the machine, fixing problems when it gets stuck, clearing the way for it to do its job. An engineer once famously said that humans will have to choose whether to be the “pets or livestock” of our AI masters, but far more likely is a dynamic where humans act as valets, and not just in manufacturing but in socio-emotional service work as well. When the human job is to make sure the AI agent can do its best, human-to-human connections are clearly not the priority.

How do you see the next five years of technology development and workplace adoption playing out? What are you optimistic and pessimistic about? And what would an ideal future look like?

I can see three futures playing out, partly because they are here already.

One future is a “triage model,” where AI takes care of the “easy” cases, leaving the more complex ones for humans to handle. You can see an example of this whenever you call an airline and find yourself shouting “agent!” again and again to get a human being on the phone.

Another future is the “inequality model,” where rich people get human attention from workers, who are themselves serviced by a bot. Again, the seeds of this are being planted right now: we know that the fastest growing occupations are those that involve personal services to the affluent, while I spoke to many AI engineers who are designing AI couples counselors or AI discharge nurses for disadvantaged folk because they are “better than nothing.”

A third kind of future we face is a “binary model,” which involves separating out thinking from feeling, with thinking reserved for machines. That distinction used to be made between men and women in the workforce, and led to significant gender pay disparities.

At first blush, this future seems the most hopeful, in which empathy, social sensitivity, listening skills, or caregiving morphs from attributes associated with femininity into those that convey what it means to be human. It would surely be a better world if those skills were widely shared across human beings of all stripes. Yet, the distinction between thinking and feeling is a false one, as any caregiver will tell you, and indeed, their intertwining is profoundly important, for example, in occasions of mercy.

One home health aide I spoke to was cherished by her employer for being good at the “hard cases.” Newly assigned to a difficult client, she quickly figured out why so many of her predecessors had quit before her, soothed the client who was feeling ignored, and essentially reset the whole dynamic. She did so not just by feeling her way, but by decoding the elderly client’s signals and effectively diagnosing the situation so that she could change it. The example is one of many I heard about—human connective labor involves both thinking and feeling—but so-called thinking jobs (say, that of judges) also often involve feeling. And if we consign analytic judgment to machines, then sooner or later humans will be the ones offering false comfort to the forsaken.

Instead, the ideal future is one where we defend human connective labor from its automation, and we start by protecting it from its degradation by the imposition of data collection requirements and efficiency campaigns. The ideal future is one where we welcome AI and technological development in all sorts of domains, from inventing new antibiotics to decoding sperm whale speech to predicting earthquakes. But in that future, we cordon it off from human connection, applying a “connection criterion” to its deployment. Right now, our unregulated environment means that AI is being sold as an appropriate tool for automating teachers, physicians, therapists, lawyers, and a host of other connective jobs. When we are inventing a new hammer, we need to remind ourselves that not everything is a nail.
