
Everyone is stealing TV

Fed up with increasing subscription prices, viewers embrace rogue streaming boxes.

Image: Cath Virginia / The Verge, Getty Images
Janko Roettgers is a tech reporter and author of the Lowpass newsletter.

Walk the rows of the farmers market in a small, nondescript Texas town about an hour away from Austin, and you might stumble across something unexpected: In between booths selling fresh, local pickles and pies, there’s a table piled high with generic-looking streaming boxes, promising free access to NFL games, UFC fights, and any cable TV network you can think of.

It’s called the SuperBox, and it’s being demoed by Jason, who also has homemade banana bread, okra, and canned goods for sale. “People are sick and tired of giving Dish Network $200 a month for trash service,” Jason says. His pitch to rural would-be cord-cutters: Buy a SuperBox for $300 to $400 instead, and you’ll never have to shell out money for cable or streaming subscriptions again.

I met Jason through one of the many Facebook groups used as support forums for rogue streaming devices like the SuperBox. To allow him and other users and sellers of these devices to speak freely, we’re only identifying them by their first names or pseudonyms.

“People are sick and tired of giving Dish Network $200 a month for trash service.”

SuperBox and its main competitor, vSeeBox, are gaining in popularity as consumers get fed up with what TV has become: Pay TV bundles are incredibly expensive, streaming services are costlier every year, and you need to sign up for multiple services just to catch your favorite sports team every time they play. The hardware itself is generic and legal, but you won’t find these devices at mainstream stores like Walmart and Best Buy because everyone knows the point is accessing illegal streaming services that offer every single channel, show, and movie you can think of.

Partner Content From
This advertising content was produced in collaboration between Vox Creative and our sponsor, without involvement from Vox Media editorial staff.

How the age of AI is making human expertise more valuable

The real competitive advantage is shifting to something machines can’t replicate: human judgment.


Something’s happening in American offices right now. No, it’s not about AI taking jobs — it’s that workplaces are losing institutional expertise they don’t know how to replace. An entire generation of professionals who built their careers before AI are retiring, and they’re taking with them the kind of knowledge that never makes it into any documentation: the insight on why something’s not working, and gut instincts honed by years of experience.

The anxiety over AI’s growing role in the workplace also isn’t where you’d expect it to be. While conventional wisdom suggests Gen Z workers are most threatened by AI, new data from a 2026 global survey of portfolio leaders* conducted by Smartsheet found all generations in the workforce feel worried. The data reveals that 67 percent of Gen X, 66 percent of Millennials, and 84 percent of Gen Z are concerned about being replaced by AI within five years. Despite holding decades of experience, 92 percent of Gen X professionals feel they must gain AI expertise just to stay relevant in their roles.

At the same time, people just starting their careers are wondering if there’s room for them to build that kind of expertise. If AI can handle entry-level analysis and reporting, what do younger employees do and how do they develop valuable knowledge?

According to Drew Garner, SVP of AI & data platform at Smartsheet, AI hasn’t made expertise obsolete; it’s made it more valuable.

What gets lost without human judgment

Garner has spent 25 years watching technology reshape how people work. Right now, he sees organizations making a fundamental error: treating AI as a substitute for humans instead of something that needs humans to work properly.

“What we stand to lose isn’t just knowledge — it’s judgment,” he says. “Knowledge can be documented, indexed, and fed into a knowledge graph.” What AI can’t absorb on its own are things like the pattern recognition that comes from 30 years of seeing projects fail, and the understanding that a vendor always lowballs estimates.

Often, AI will convincingly answer a question, but it takes a person with expertise to recognize that something’s missing. Workers know this instinctively: while AI “replacement” dominates headlines, the real fear is more nuanced. In the Smartsheet survey, the top concern for workers using AI isn’t about being replaced — it’s making “poor decisions or mistakes” due to AI recommendations (45 percent). Additionally, 33 percent worry that AI simply doesn’t understand the “complexity or context” of their specific projects.

The top concern for workers using AI isn’t about being replaced — it’s making ‘poor decisions or mistakes’ due to AI recommendations.

— Global PPM Leader Survey conducted by Smartsheet

Here’s an example: A CFO asks an AI agent why a cloud migration project is over budget. The agent pulls data and reports back that consultants exceeded their estimate by $40,000 and web services costs ran $30,000 over.

This is where human discernment proves its worth, according to Garner. “The human looking at that output asks: ‘Wait — we’ve had [web services] cost issues on three consecutive projects with the same consulting firm. Is this a pattern?’ Or notices scope creep happened without a formal change request, which points to a governance gap, not a budgeting problem.”

When smart tools learn the wrong things

Garner was an early adopter of an AI coding assistant in a past role, and he had 50 interns using it.

“I saw features and bugs skyrocket from our intern class,” he says. “And I found out that the AI learned it from them, thought it was a good pattern, and literally started suggesting it in everybody’s code.”

The interns were writing buggy code. The AI watched them do it, assumed this was how things should be done, and started teaching every other intern the same bugs. The tool was working exactly as designed. It just didn’t know the patterns were wrong, and neither did the interns, which perfectly illustrates the discernment problem.

Garner can’t do his job without AI anymore. But that’s only true because he’s spent significant time teaching the tools how to work for him — customizing them, building knowledge graphs, correcting their mistakes, and showing them relationships they miss on their own. “These tools are only as smart as you make them,” he says.

“The human in the loop isn’t optional. It’s what makes the difference between AI that’s confidently wrong and AI that’s genuinely useful.”

— Drew Garner, SVP of AI & data platform at Smartsheet

The productivity gains companies are all chasing only show up after humans invest serious time teaching, debugging, and refining AI systems. You don’t just implement AI and walk away. You become what Garner calls “the teacher, the debugger, the quality controller, the customizer.”

When models lack sufficient data, they fabricate responses rather than admitting they don’t know, explains Garner. “The human in the loop isn’t optional,” he says. “It’s what makes the difference between AI that’s confidently wrong and AI that’s genuinely useful.”

The human becomes the crucial quality control layer — what Garner calls “andon cords for knowledge work,” borrowing the term from manufacturing where any worker on an assembly line can pull a cord to stop production if they spot a problem.

What Intelligent Work actually means

With an engineering team led by Garner, Smartsheet built an Intelligent Work Management platform: systems that understand the entire scope of work well enough to tell users what should happen next. But those systems only get smart when humans teach them what matters.

Garner describes it this way: “The AI separates signal from noise — but only because it’s been trained and personalized over time to understand what matters to you, in your role, at this moment.”

For a CEO wondering what issues need their immediate attention, this means generating exactly three things that actually require a decision — not 200 status updates across five dashboards they need to sift through. That only works if the system has been taught what “needs attention” means for a CEO versus a CFO versus a project manager.

“Organizations that let experienced professionals exit without structured knowledge transfer aren’t just losing people — they’re losing the teachers who could have trained their AI systems to carry that judgment forward.”

— Drew Garner, SVP of AI & data platform at Smartsheet

For professionals with decades of experience, Garner is direct: “Organizations that let experienced professionals exit without structured knowledge transfer aren’t just losing people — they’re losing the teachers who could have trained their AI systems to carry that judgment forward.”

This risk is particularly acute given that the generation holding the most institutional knowledge — Gen X professionals — is the same group feeling the most precarious about their future. When 67 percent of these veterans are very concerned about being replaced, organizations face a double threat: losing both the expertise and the experts who could have embedded that judgment into AI systems.

Garner advises workers at all levels and degrees of experience to take this moment to think about their value in an organization. “If your value is, ‘I write reports’ or ‘I analyze data’ or ‘I create presentations,’ then yes — AI can increasingly do those tasks. But if your value is, ‘I make AI systems actually work for my organization by teaching them context, debugging their mistakes, and customizing their outputs’ — that’s a fundamentally different position.”

What separates winners from losers

The AI race is not just about who adopts technology fastest or spends the most money; it’s about who understands that AI needs constant human attention.

The organizations that thrive will have invested in humans as AI teachers, according to Garner. “When someone corrects an AI mistake, that correction propagates,” he says. “When someone customizes a workflow effectively, that pattern gets captured. The human expertise compounds into the AI over time.”

This takes infrastructure: unified data foundations, knowledge graphs that capture relationships, and feedback loops between human corrections and AI improvement. This type of infrastructure — one that unites people, data, and AI to eliminate execution silos — is what Smartsheet is building, and it marks the difference between AI that works well and AI that confidently generates plausible-sounding nonsense.

The alternative is what Garner calls the “deployment trap.” Organizations deploy AI trying to remove humans from the loop, then discover the loop doesn’t actually work without humans. They’ve optimized for short-term efficiency and ended up with systems that can’t tell the difference between good and bad outputs.

AI is supposed to handle routine tasks to free up humans for higher-value work like judgment calls, creative problem-solving, and relationship building. But that can’t happen if people with subject matter expertise retire without passing it on to the people and AI systems in place. And younger workers can’t build their own expertise if AI handles all the learning-curve tasks that used to help workers develop knowledge.

This is the expertise void that should actually worry business leaders. Not because AI is replacing humans, but because we risk losing expertise from both ends: experienced professionals retiring before passing on their knowledge, and younger workers so uncertain about their role that they never develop the judgment AI needs in the first place. Discernment isn’t something that’s inherited or installed — it’s something that’s built and developed over time. Without it, organizations won’t have the human intelligence to make their artificial intelligence work.

*Global project and portfolio professionals
