Nils Gilman is a deputy editor of Noema Magazine and the chief operating officer at the Berggruen Institute.
The history of economic progress is a graveyard of tools. The spinning mule replaced the jenny and the water frame; the automobile replaced the horse; the spreadsheet replaced the ledger clerk. Each transition was accompanied by moral panics about the end of human utility. Yet, in each instance, the economy didn’t collapse into a heap of idleness. The lump of labor merely shifted its weight.
Today, we face a similar shift, one likely more profound even than the arrival of the internet but perhaps less catastrophic than the doomsayers suggest. Mastery of the blinking cursor, long a symbol of professional status and middle-class stability, is becoming a relic. As a recent Wall Street Journal article argued, virtually all tasks involving what might informally be described as “doing stuff with a keyboard” seem destined for automation. Companies are aggressively capturing the institutional knowledge and digital practices of their employees, distilling decades of experience into digital models that never need a bathroom break or demand a better 401(k) match.
This might sound like dystopian speculative fiction — but it’s an economic inevitability. If your primary value to an organization is the ability to manipulate data, draft documents or write code via a QWERTY interface, you are standing on a shrinking ice floe.
The corporate logic is cold. Generative artificial intelligence has reached a threshold where the cost of content generation is plunging toward zero. For decades, the knowledge worker acted as a high-priced router, taking information from one place, processing it through the cognitive stack sitting at the top of their spinal column and using their fingers to output that knowledge via a keyboard onto a screen.
This was lucrative while it lasted. But as AI models become more adept at understanding context and intent, the need for a human intermediary to type is vanishing. In the very near future, the most efficient way to interact with a system won’t be through a series of rhythmic taps, but through high-level conceptual direction, delivered orally and rendered through voice-recognition technologies.
This transition has unleashed a new chorus of panicked warnings. Many people fear that AI will corrode our reasoning skills and make our labor obsolete. Bloomberg’s Joe Weisenthal has argued that the rise of “post-literacy” represents “re-wiring the logic engine” of the human mind, with possibly dreadful consequences for our ability to preserve the benefits of reason-centered modernity. A massive Brookings Institution study published in January warned that AI is causing a “great unwiring” of students’ brains, with one teacher quoted in the study bemoaning, “Students can’t reason. They can’t think. They can’t solve problems.” In February, Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI, told Business Insider that he expects AI to automate “most, if not all” white-collar tasks within 18 months.
I disagree. There is no doubt that large language models (LLMs) are automating many job functions, especially in white-collar fields. But this will shift — not displace — human utility in the workplace. For those of us who work in fields subject to such automation, it will require a great refactoring of our skillsets. And I believe that what will emerge as our most valuable skills are modes of communication and understanding that are best performed orally and socially, such as empathy, negotiation, persuasion and trust-building.
Human Value In AI-Augmented Work
Consider a corporate law firm facing a labyrinth of evolving regulatory constraints, client expectations and geopolitical risks. A brilliant junior associate may draft a contract with flawless grammar and airtight risk language, but AI models can now do this too, in minutes. In this world, the real leverage lies with the lawyer who can do what an LLM still cannot: apply expert professional judgment and intuition to read the political winds, anticipate where a clause will generate heat and negotiate a settlement that preserves a client’s strategic options while maintaining relationships with regulators and counterparties.
When technology commodifies one set of inputs into the workplace production process, driving their cost and value toward zero, it makes other inputs more valuable. Three decades ago, MIT economist David Autor and his colleagues found that as computers commodified “routine cognitive tasks” such as bookkeeping, accounting and clerical support, the relative value of non-routine analytical and interpersonal tasks increased. Just over 10 years later, Hal Varian, then Google’s chief economist, advised job-seekers: “If you are looking for a career where your services will be in high demand, you should find something where you provide a scarce, complementary service to something that is getting ubiquitous and cheap.”
The same rule of thumb holds now. The more AI commodifies what used to be considered “non-routine” cognitive tasks, like crunching numbers and generating text, the more the human value proposition will necessarily shift toward capabilities that no machine can (yet) reliably replicate: applying judgment in messy, evolving contexts; connecting disparate ideas into a coherent plan; and persuading, negotiating and leading when the ground is shifting underfoot.
Jack Clark, the co-founder and the head of policy at Anthropic, recently told The New York Times that expert professional judgment, or “good taste and intuitions about what to do,” will become “increasingly limited” in the workplace. “We’re going to need to be extremely intentional about working out where we as people specialize so that we have that intuition and taste — or else you’re just going to be surrounded by super-productive AI systems, and when they ask you what to do next, you probably won’t have a great idea.”
Similarly, Jensen Huang, the president and CEO of NVIDIA, said in a podcast in January that “the future definition of ‘smart’” includes the ability to “infer the unspoken around the corners” and “preempt problems before they show up, just because you feel the vibe.”
The most prized future workers will be those who can decode a sea of outputs, spot the meaningful signal and translate it into action that others understand and trust. They will be adept at cross-domain reasoning: seeing how a precedent in one field might illuminate a policy in another, how a consumer insight from marketing could recalibrate a product’s technical architecture, or how an ethical concern might redirect a technical roadmap. They will be comfortable with ambiguity and rapid iteration, able to move from hypothesis to test to refinement with confidence and humility. They will be capable of designing decision processes rather than merely executing them.
There are telling parallels in how automation has unfolded in other sectors. A 2018 study from the UC Berkeley Labor Center on the prospective automation of truck driving pointed out that a driver’s job is only partially about keeping a vehicle moving between two white lines. If that were all there was to it, trucks would already be empty of humans. In reality, close ethnographic observation showed that truck drivers don’t just navigate the roads; they act as complex problem-solvers in unpredictable physical environments. They troubleshoot broken engines, negotiate with warehouse managers, secure cargo and provide a physical presence that deters theft. They handle the edge cases that a digital model, no matter how sophisticated, isn’t (yet) able to fully anticipate.
The same logic applies to many white-collar domains directly in the line of sight of generative AI tools. In professional services, routine drafting and analysis will be automated, but the edge work (negotiation, stakeholder management and the interpretation of context) will persist in human hands. In fact, it will become more valuable as AI swells the volume of output. The high-powered lawyer of the future will surface a dozen potential frameworks at machine speed, rank them by likelihood of success, foresee the downstream consequences and guide clients through the ethical thicket.
Similarly, in a software engineering environment, AI can already generate code at unprecedented speed. But the design of systems that scale, endure and remain auditable still requires a translator — the crack product manager who can reconcile user needs, security constraints, platform governance and business objectives. The best technologists will be those who can tell a story about a system’s architecture in ways that resonate with the personal experiences of nontechnical stakeholders, or who can negotiate tradeoffs and lead a cross-functional team through experimentation cycles that keep the business moving while staying within ethical and legal boundaries.
In healthcare, AI can triage symptoms, propose differential diagnoses and automate routine documentation. The clinician’s greatest value emerges in the moments of uncertainty: when data points conflict, when patients’ values and fears shape the choice of treatment, when the clinician must build trust with a family in crisis. The judgment here is about timing, communication and weighing tradeoffs that aren’t captured in a chart. The same applies in education, where AI tutors can scaffold practice and feedback, but the most transformative teaching occurs when a pedagogue reads a learner’s temperament, adapts to cultural context and builds a learning path that respects a student’s dignity and aspirations.
A New Way To Work
As generative AI tools become ubiquitous, the value produced by an organization or a professional will no longer be measured just by how much raw output is created per hour or by how fast a machine can produce it. Instead, value will accrue to those who can exercise sound judgment under stress and uncertainty, craft coherent strategic narratives and align a constellation of actors toward a shared purpose. The currency is trust, influence and the reliability of one’s decisions. This emerging system of work is what I call the judgment economy.
Several features characterize this economy. Decision quality outstrips production quantity as the key metric; firms will increasingly compete on their ability to make decisions that survive scrutiny, pass ethical tests and achieve durable outcomes. The metrics will shift from throughput to decision confidence, calibration and post-decision learning. As attention becomes the scarcest resource in a world of proliferating content and code, value lies in the ability to distinguish signal from noise, prioritize projects that truly move the needle and communicate that prioritization clearly to stakeholders.
Governance must be designed into systems from the start — how decisions are made, who bears responsibility for outcomes and how feedback loops are closed when bad outcomes occur — with a clear definition of what sorts of social, economic and ethical outcomes are being sought. Cooperative competition will become more central: The most successful actors will coordinate across competitors, regulators, customers and partners to set standards, reduce systemic risk and promote shared value. Credentialing will be less about a particular degree or technical skillset than a certified track record in effective decision-making under real-world constraints. This will entail portfolios of projects, scenario-driven assessments and demonstrated ability to integrate knowledge across domains.
In a world of AI-induced content super-abundance, three broad skill categories will rise in relative importance. The first is interpersonal and organizational alignment, which includes persuasion, consensus-building, leadership, diplomacy and the capacity to align diverse stakeholders around a shared course. This extends to executive leadership, policy design, program management and cross-functional orchestration. The job of supply chain director, for example, will evolve from being a logistics planner who calculates routes into a strategic diplomat whose primary value is persuading skeptical executives and global partners to execute the high-stakes shift an algorithm may suggest.
The second category, contextual and strategic interpretation, is the interpretation of data-rich inputs into strategic choices, risk assessments and long-horizon planning. Think strategy, product discovery, risk governance and ethics oversight. For instance, an investment portfolio manager’s job will transform from manually parsing earnings reports and calculating risk ratios in spreadsheets to thinking through how market reactions to a sudden geopolitical shift or an emerging ethical scandal might fundamentally alter a fund’s 20-year risk profile and long-term capital allocation strategy.
The third category, creative synthesis and domain integration, spans roles that fuse insights from multiple disciplines to create novel solutions, frameworks or business models. The job of urban transit planner, for example, will no longer be about using specialized software to simulate microscopic and macroscopic traffic flows. The person in that role will instead be a synthesizer who develops long-term investment hypotheses by fusing disparate insights from public health (transit users’ mental health in crowds), climatology (heat-island effects during summer months), sociology (differentiated accessibility) and technology forecasting (the reliability of autonomous-vehicle product roadmaps).
This is an arena where liberal arts-informed thinking, storytelling, historical analogy, ethical reasoning and cross-domain curiosity become tangible competitive advantages.
Thriving In The Judgment Economy
For decades, the prevailing career advice was simple and utilitarian: Abandon the squishy subjects of the arts and humanities and embrace STEM. We were told that to stay relevant in a digital economy, we must “learn how to code.” In the judgment economy, that advice is all but obsolete.
The most valuable skill is no longer the ability to generate a technical output, but rather the capacity to evaluate a hundred AI-generated options and identify the handful that matter most. Underlying that skill is the capacity for seeing connections and resonances between disparate knowledge domains: understanding how a historical precedent might inform a modern marketing strategy, or how an ethical framework might guide a product rollout.
This is the interdisciplinary thinking that liberal arts colleges have long championed, often to the derision of tech executives like the demon-obsessed Peter Thiel, who considers college at best a site for training in banausic skills like software engineering that are anyway better learned on the job.
One of the great ironies of this new machine age is that it seems set to rehabilitate, quite precisely, the value of the traditional “liberal arts” education. I don’t mean merely the humanities, but rather the broadest possible exposure to the natural sciences and the arts, the social sciences and the humanities. In a world dominated by AI, deep educational specialization is a risk; if your value is tied to a single technical niche, you are vulnerable to being eclipsed by the next software update. By contrast, absorbing knowledge from a variety of fields creates a cognitive flexibility that is the best bet for maintaining long-term viability.
“My expectation now is that there’s gonna be something like the revenge of the humanities,” neuroscientist and philosopher Sam Harris said recently on his podcast. “What the world is going to need are well-educated generalists with good taste … people who have read good books and gone to good museums and had good arguments and can make good arguments, and can create companies using robots that have learned to code, that produce things that we actually want that benefit our culture,” he added. “I would be much more bullish on a degree in philosophy, or even English literature quite frankly, at this moment, than perhaps certain STEM fields.”
I would make the case for the utility of my own profession’s skillset as well. Learning to “think like a historian” isn’t mainly about the memorization of dates or the details of the Holy Roman Empire’s political travails or the Tang dynasty’s legal and administrative innovations. Rather, historical training instills the instinct to understand that every system exists due to complex, multicausal factors — that to improve a system, one must understand its context.
History teaches us to recognize long arcs and recurring patterns, allowing leaders to avoid “presentism” and understand how near-term choices echo across decades. This ability to place current decisions within a broader chronicle of cause and effect is an operational tool that AI cannot (yet) easily replicate.
Similarly, the humanities provide practical complements to the analytical prowess gained from studying STEM fields. Reading fiction, for example, is a rigorous training ground for perspective-taking. In a noisy, data-driven environment, the ability to inhabit a different cultural or social vantage point is rare, and therefore valuable. It allows a professional to anticipate objections, fears and human resistance before they arise. While AI can process data, it doesn’t (yet) have the stable and robust world model required to anticipate such ramifications.
As a recent Harvard Business Review article observed, to thrive alongside high-powered AI, one needs “metacognition”: the self-awareness to regulate one’s own thinking and the confidence to assess machine outputs. To unlock your creativity and value-add in a world of ubiquitous high-powered AI tools, you will need enough hands-on experience to develop sound domain judgment and assess results with confidence, plus enough metacognitive awareness to articulate that judgment and guide others. The key word here is judgment, which is ultimately a question of balance and taste.
The Fruits And Risks
If the prospect of a judgment economy is reassuring in its humanistic core, it also invites caution. On the plus side, it promises to preserve agency in a time of rapid automation. It stands to reward curiosity, ethical clarity and the capacity to lead through ambiguity. It acknowledges that not all value is (yet) codifiable and that some of the deepest work is about human connection and moral discernment. And it places a premium on the reflexive thinking needed to navigate the ethical implications of powerful technologies.
At the same time, the risks are nontrivial. The judgment economy could magnify disparities if access to the right kind of education and experiences remains uneven. People who have the means to pursue broad, cross-disciplinary training will gain outsized leverage, while those funneled into narrow vocational tracks will find opportunities constrained as more and more cognitive tasks become cheaper and cheaper to automate.
The increased emphasis on judgment also intensifies the need for robust governance and accountability. When decisions move into the realm of strategic influence, consequences — both positive and negative — are magnified. The risk of harmful blind spots grows if institutions value efficiency over deliberation, speed over scrutiny or expediency over transparency. The Silicon Valley ethos of “move fast and break things” is going to result in ever bigger piles of rubble.
To mitigate these dangers, institutions must prioritize inclusive access to the kind of liberal arts-informed metacognitive training that the judgment economy rewards. It isn’t enough just to teach a toolkit; we must cultivate the habit of thoughtful reflection, the discipline of testing assumptions and the humility to adjust when the evidence or context shifts.
This is the shared societal project for the age of AI — educators, employers, policymakers and workers will need to partner to build a culture that values judgment without surrendering to hubris or fear. A sharp sense of the historical context for this shift will matter greatly: The society that learns to see its present through the lens of past decisions, and that situates new technologies within a longer arc of change, stands a better chance of steering toward outcomes we can defend as just and durable.
Perhaps the biggest question is what sorts of jobs will be replaced and what will be left. If most of the people currently making white-collar salaries get busted back to stocking shelves at Walmart (in other words, if well-remunerated jobs are backfilled with poorly paid ones, with the corporate owners of the AIs reaping the delta as private profit), that won’t just be a personal setback for the workers involved; it will represent a major social crisis, and ultimately a political one.
As Noema’s editor-in-chief, Nathan Gardels, has often asked in these pages, who will own the robots? This is perhaps the most important political-economic question of our time, albeit one largely absent from mainstream political debate. What’s certain is that a system in which the profit is entirely concentrated in the hands of a few very rich people while it disemploys and impoverishes millions is a formula for political revolt or worse.
Another concern is how entry-level employees are supposed to acquire these skills of judgment and metacognition, which are presently framed as “senior” capabilities. From a societal perspective, entry-level jobs have never just been about extracting value from junior employees; they’re also about adding value to those employees by allowing them to observe and learn from more senior people who (supposedly) possess such skills. But those junior jobs are precisely the ones that will be first on the chopping block as LLMs and related tools reduce their value and availability. As a society, are we therefore eating our own seed corn? Or to use a different metaphor, are the senior-most people in organizations going to use AI to pull up the ladder behind them?
Finally, we must ask: Will metacognitive capabilities and the ineffable quality of judgment prove enough to maintain a space for human value-add that can spare us the fate of becoming mere consumers of AI outputs? To put it in Silicon Valley’s business-speak: Will the judgment economy form a moat that can protect our human business from emergent AI competitors?
Several times in this essay I have parenthetically added “(yet)” to statements about what generative AI today still cannot do. These parentheses are themselves rooted in presentist suppositions that may well be superseded by the advancement of the technology. This is all the more reason to continue to invest both individually and collectively in flexibility and good judgment.