Walk the rows of the farmers market in a small, nondescript Texas town about an hour away from Austin, and you might stumble across something unexpected: In between booths selling fresh, local pickles and pies, there’s a table piled high with generic-looking streaming boxes, promising free access to NFL games, UFC fights, and any cable TV network you can think of.
It’s called the SuperBox, and it’s being demoed by Jason, who also has homemade banana bread, okra, and canned goods for sale. “People are sick and tired of giving Dish Network $200 a month for trash service,” Jason says. His pitch to rural would-be cord-cutters: Buy a SuperBox for $300 to $400 instead, and you’ll never have to shell out money for cable or streaming subscriptions again.
I met Jason through one of the many Facebook groups used as support forums for rogue streaming devices like the SuperBox. To allow him and other users and sellers of these devices to speak freely, we’re only identifying them by their first names or pseudonyms.
“People are sick and tired of giving Dish Network $200 a month for trash service.”
SuperBox and its main competitor, vSeeBox, are gaining in popularity as consumers get fed up with what TV has become: Pay TV bundles are incredibly expensive, streaming services are costlier every year, and you need to sign up for multiple services just to catch your favorite sports team every time they play. The hardware itself is generic and legal, but you won’t find these devices at mainstream stores like Walmart and Best Buy because everyone knows the point is accessing illegal streaming services that offer every single channel, show, and movie you can think of.
This advertising content was produced in collaboration between Vox Creative and our sponsor, without involvement from Vox Media editorial staff.
How the age of AI is making human expertise more valuable
The real competitive advantage is shifting to something machines can’t replicate: human judgment.
By Vox Creative
Something’s happening in American offices right now. No, it’s not about AI taking jobs — it’s that workplaces are losing institutional expertise they don’t know how to replace. An entire generation of professionals who built their careers before AI is retiring, taking with it the kind of knowledge that never makes it into documentation: the insight into why something isn’t working, and gut instincts honed by years of experience.
The anxiety over AI’s growing role in the workplace also isn’t where you’d expect it to be. While conventional wisdom suggests Gen Z workers are most threatened by AI, new data from a 2026 global survey of portfolio leaders* conducted by Smartsheet found all generations in the workforce feel worried. The data reveals that 67 percent of Gen X, 66 percent of Millennials, and 84 percent of Gen Z are concerned about being replaced by AI within five years. Despite holding decades of experience, 92 percent of Gen X professionals feel they must gain AI expertise just to stay relevant in their roles.
At the same time, people just starting their careers are wondering if there’s room for them to build that kind of expertise. If AI can handle entry-level analysis and reporting, what do younger employees do and how do they develop valuable knowledge?
According to Drew Garner, SVP of AI & data platform at Smartsheet, AI hasn’t made expertise obsolete; it’s made it more valuable.
What gets lost without human judgment
Garner has spent 25 years watching technology reshape how people work. Right now, he sees organizations making a fundamental error: treating AI as a substitute for humans instead of something that needs humans to work properly.
“What we stand to lose isn’t just knowledge — it’s judgment,” he says. “Knowledge can be documented, indexed, and fed into a knowledge graph.” What AI can’t absorb on its own are things like the pattern recognition that comes from 30 years of seeing projects fail, and the understanding that a vendor always lowballs estimates.
Often, AI will convincingly answer a question, but it takes a person with expertise to recognize that something’s missing. Workers know this instinctively: while AI “replacement” dominates headlines, the real fear is more nuanced. In the Smartsheet survey, the top concern for workers using AI isn’t about being replaced — it’s making “poor decisions or mistakes” due to AI recommendations (45 percent). Additionally, 33 percent worry that AI simply doesn’t understand the “complexity or context” of their specific projects.
The top concern for workers using AI isn’t about being replaced — it’s making ‘poor decisions or mistakes’ due to AI recommendations.
— Global PPM Leader Survey conducted by Smartsheet
Here’s an example: A CFO asks an AI agent why a cloud migration project is over budget and the agent pulls data and reports back that consultants exceeded their estimate by $40,000 and web services costs ran $30,000 over.
This is where human discernment proves its worth, according to Garner. “The human looking at that output asks: ‘Wait — we’ve had [web services] cost issues on three consecutive projects with the same consulting firm. Is this a pattern?’ Or notices scope creep happened without a formal change request, which points to a governance gap, not a budgeting problem.”
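The cross-project pattern Garner describes, the same vendor overrunning on engagement after engagement, can be sketched as a simple check over project records. Everything below (the project data, vendor names, and the $10,000 threshold) is invented for illustration; it only shows the shape of the question a human analyst asks.

```python
# Sketch: flag vendors who exceed a cost threshold on several projects.
# All data and the threshold are hypothetical.

OVERRUN_THRESHOLD = 10_000  # flag overruns above this amount (assumption)

projects = [
    {"name": "cloud-migration",   "vendor": "Acme Consulting", "overrun": 40_000},
    {"name": "data-warehouse",    "vendor": "Acme Consulting", "overrun": 25_000},
    {"name": "erp-upgrade",       "vendor": "Acme Consulting", "overrun": 18_000},
    {"name": "intranet-redesign", "vendor": "Beta Partners",   "overrun": 2_000},
]

def repeat_offenders(projects, min_projects=3):
    """Group significant overruns by vendor and flag vendors who blow
    the threshold on several engagements: a pattern, not a one-off."""
    by_vendor = {}
    for p in projects:
        if p["overrun"] > OVERRUN_THRESHOLD:
            by_vendor.setdefault(p["vendor"], []).append(p["name"])
    return {v: names for v, names in by_vendor.items() if len(names) >= min_projects}

print(repeat_offenders(projects))
```

The AI agent in the anecdote stops at reporting the $40,000 and $30,000 figures; the grouping step is the part a human has to think to ask for.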
When smart tools learn the wrong things
Garner was an early adopter of an AI coding assistant in a past role, and he had 50 interns using it.
“I saw features and bugs skyrocket from our intern class,” he says. “And I found out that the AI learned it from them, thought it was a good pattern, and literally started suggesting it in everybody’s code.”
The interns were writing buggy code. The AI watched them do it, assumed this was how things should be done, and started teaching every other intern the same bugs. The tool was working exactly as designed. It just didn’t know the patterns were wrong, and neither did the interns, which perfectly illustrates the discernment problem.
Garner can’t do his job without AI anymore. But that’s only true because he’s spent significant time teaching the tools how to work for him — customizing them, building knowledge graphs, correcting their mistakes, and showing them relationships they miss on their own. “These tools are only as smart as you make them,” he says.
“The human in the loop isn’t optional. It’s what makes the difference between AI that’s confidently wrong and AI that’s genuinely useful.”
— Drew Garner, SVP of AI & data platform at Smartsheet
The productivity gains companies are all chasing only show up after humans invest serious time teaching, debugging, and refining AI systems. You don’t just implement AI and walk away. You become what Garner calls “the teacher, the debugger, the quality controller, the customizer.”
When models lack sufficient data, they fabricate responses rather than admitting they don’t know, explains Garner. “The human in the loop isn’t optional,” he says. “It’s what makes the difference between AI that’s confidently wrong and AI that’s genuinely useful.”
The human becomes the crucial quality control layer — what Garner calls “andon cords for knowledge work,” borrowing the term from manufacturing where any worker on an assembly line can pull a cord to stop production if they spot a problem.
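Borrowing Garner’s analogy, an andon cord for knowledge work can be sketched as a gate where any human reviewer can halt downstream automation the moment an output looks wrong. This is a minimal illustration, not anything Smartsheet has described; the review rule and data are invented.

```python
# Sketch of an "andon cord" for an AI pipeline: a human reviewer can
# stop the line on a suspect output instead of letting it ship.

class AndonCord:
    def __init__(self):
        self.pulled = False
        self.reason = None

    def pull(self, reason):
        self.pulled = True
        self.reason = reason

def process(items, cord, review):
    """Run items through the pipeline, halting as soon as a reviewer objects."""
    shipped = []
    for item in items:
        verdict = review(item)     # human judgment call: reason string or None
        if verdict is not None:
            cord.pull(verdict)     # stop the line; nothing after this ships
            break
        shipped.append(item)
    return shipped

cord = AndonCord()
# Hypothetical rule: flag any output that lacks a source attribution.
flag = lambda item: "unsourced claim" if item.get("source") is None else None
out = process(
    [{"id": 1, "source": "crm"}, {"id": 2, "source": None}, {"id": 3, "source": "erp"}],
    cord, flag,
)
print(out, cord.pulled, cord.reason)
```

The key design point is that the cord stops everything downstream, mirroring the manufacturing original: one person’s doubt is enough to pause production.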
What Intelligent Work actually means
With an engineering team led by Garner, Smartsheet built its Intelligent Work Management platform: systems that understand the entire scope of work well enough to tell users what should happen next. But those systems only get smart when humans teach them what matters.
Garner describes it this way: “The AI separates signal from noise — but only because it’s been trained and personalized over time to understand what matters to you, in your role, at this moment.”
For a CEO wondering what issues need their immediate attention, this means generating exactly three things that actually require a decision — not 200 status updates across five dashboards they need to sift through. That only works if the system has been taught what “needs attention” means for a CEO versus a CFO versus a project manager.
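The role-aware filtering described above amounts to applying rules a human has taught the system for each role, then surfacing only the few items that pass. A minimal sketch, with invented rules, fields, and status updates:

```python
# Sketch: reduce a pile of status updates to the handful that need a
# given role's decision. Rules and data are hypothetical stand-ins for
# what a human would teach the system over time.

ROLE_RULES = {
    "CEO": lambda u: u["blocked"] and u["budget_impact"] >= 1_000_000,
    "CFO": lambda u: u["budget_impact"] >= 100_000,
    "PM":  lambda u: u["blocked"],
}

def needs_attention(updates, role, limit=3):
    """Return at most `limit` updates that require this role's decision."""
    rule = ROLE_RULES[role]
    return [u for u in updates if rule(u)][:limit]

updates = [
    {"id": 1, "blocked": True,  "budget_impact": 2_000_000},
    {"id": 2, "blocked": False, "budget_impact": 500_000},
    {"id": 3, "blocked": True,  "budget_impact": 50_000},
]
print(needs_attention(updates, "CEO"))  # the CEO's short list
print(needs_attention(updates, "CFO"))  # the CFO sees a different cut
```

The same 200 updates produce different short lists per role, which is the whole point: “needs attention” has no meaning until someone defines it.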
“Organizations that let experienced professionals exit without structured knowledge transfer aren’t just losing people — they’re losing the teachers who could have trained their AI systems to carry that judgment forward.”
— Drew Garner, SVP of AI & data platform at Smartsheet
For professionals with decades of experience, Garner is direct: “Organizations that let experienced professionals exit without structured knowledge transfer aren’t just losing people — they’re losing the teachers who could have trained their AI systems to carry that judgment forward.”
This risk is particularly acute given that the generation holding the most institutional knowledge — Gen X professionals — is the same group feeling the most precarious about their future. When 67 percent of these veterans are concerned about being replaced, organizations face a double threat: losing both the expertise and the experts who could have embedded that judgment into AI systems.
Garner advises workers at all levels and degrees of experience to take this moment to think about their value in an organization. “If your value is, ‘I write reports’ or ‘I analyze data’ or ‘I create presentations,’ then yes — AI can increasingly do those tasks. But if your value is, ‘I make AI systems actually work for my organization by teaching them context, debugging their mistakes, and customizing their outputs’ — that’s a fundamentally different position.”
What separates winners from losers
The AI race is not just about who adopts technology fastest or spends the most money; it’s about who understands that AI needs constant human attention.
The organizations that thrive will have invested in humans as AI teachers, according to Garner. “When someone corrects an AI mistake, that correction propagates,” he says. “When someone customizes a workflow effectively, that pattern gets captured. The human expertise compounds into the AI over time.”
This takes infrastructure: unified data foundations, knowledge graphs that capture relationships, and feedback loops between human corrections and AI improvement. This type of infrastructure — one that unites people, data, and AI to eliminate execution silos — is what Smartsheet is building, and it marks the difference between AI that works well and AI that confidently generates plausible-sounding nonsense.
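The simplest version of the feedback loop Garner describes is a correction store that takes precedence over fresh model output, so one human fix propagates to every later ask of the same question. A toy sketch; the wrapper class and stand-in model are invented for illustration.

```python
# Sketch of a human-correction feedback loop: once a person fixes an
# answer, the fix is stored and served for matching queries instead of
# regenerating a possibly wrong response.

class CorrectedModel:
    def __init__(self, model):
        self.model = model       # any callable: query -> answer
        self.corrections = {}    # human fixes, keyed by query

    def answer(self, query):
        # A stored human correction takes precedence over model output.
        return self.corrections.get(query) or self.model(query)

    def correct(self, query, fixed_answer):
        # One correction propagates to every future ask of this question.
        self.corrections[query] = fixed_answer

naive = lambda q: "about $30k over"  # hypothetical stand-in for a model call
m = CorrectedModel(naive)
print(m.answer("why is the migration over budget?"))   # raw model output
m.correct("why is the migration over budget?",
          "governance gap: scope creep without change requests")
print(m.answer("why is the migration over budget?"))   # corrected output
```

Real systems key corrections on semantic similarity rather than exact strings, but the compounding effect is the same: human expertise accumulates in the layer around the model.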
The alternative is what Garner calls the “deployment trap.” Organizations deploy AI trying to remove humans from the loop, then discover the loop doesn’t actually work without humans. They’ve optimized for short-term efficiency and ended up with systems that can’t tell the difference between good and bad outputs.
AI is supposed to handle routine tasks to free up humans for higher-value work like judgment calls, creative problem solving, and relationship building. But that can’t happen if people with subject matter expertise retire without passing it on to the people and AI systems in place. And younger workers can’t build their own expertise if AI handles all the learning-curve tasks that used to help workers develop knowledge.
This is the expertise void that should actually worry business leaders. Not because AI is replacing humans, but because we risk losing expertise from both ends: experienced professionals retiring before passing on their knowledge, and younger workers so uncertain about their role that they never develop the judgment AI needs in the first place. Discernment isn’t something that’s inherited or installed — it’s something that’s built over time. Without it, organizations won’t have the human intelligence to make their artificial intelligence work.
*Global project and portfolio professionals