Analyst’s corner

All aspects of organisational analysis: business analysis | enterprise architecture | quality

Why Everyone Got AI Wrong (And What Actually Matters)

What I learned about artificial intelligence by watching it fail to do what people expected, and succeed at things nobody predicted

8 min read · Oct 14, 2025

I’ve been in a lot of conversations about artificial intelligence lately, and I’ve noticed something strange.

People talk about AI like it’s a magic solution that will either save humanity or destroy civilization. Both narratives feel equally dramatic, and both miss the point.

The truth is weirder and more interesting.

AI isn’t magical. But it’s also not useless. The real story is about what it’s actually good for — which turns out to be different from what most people think.

The Expectation That Didn’t Match Reality

Here’s what I thought AI was supposed to do: replace experts.

If a radiologist can diagnose disease from X-rays, then AI trained on thousands of X-rays should be able to do it better. If a financial analyst can spot fraud patterns, then AI should spot them faster. If a researcher can find connections in scientific literature, then AI should find more of them.

This seemed logical. Machines are faster than humans. Machines don’t get tired. Machines can process more information. So naturally, machines should outperform humans at expert tasks.

Except that’s not really how it works.

The reality I keep seeing is subtler. AI is useful, but not in the way everyone expected. It’s useful in ways that are less dramatic but more practical.

What I Actually Observe About How AI Gets Used

When I look at how people are actually using AI effectively in their work, a pattern emerges. It’s almost never about replacement.

It’s about speed.

Speed of translation. Speed of processing. Speed of exploration.

Someone has a question about their data. They can ask an AI in plain language rather than learning query syntax. The answer comes back in seconds instead of after hours of manual work.

Someone needs to find connections across hundreds of documents. Instead of reading each one manually, they can have AI summarize the set and highlight what connects. That takes hours instead of weeks.

Someone is tracking patterns across a large dataset. Instead of building custom reports, they can describe what they want to see and get a visualization instantly.
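To make that first example concrete, here’s a minimal sketch of the plain-language-to-query pattern. Everything in it is invented for illustration: the ask_llm() stub stands in for a real language-model call, and the sales table is toy data, not anyone’s actual system.

```python
# A minimal sketch: a plain-language question becomes SQL (here via a
# stubbed-out model call) and runs against a toy in-memory database.
# All names are illustrative.
import sqlite3

def ask_llm(question: str, schema: str) -> str:
    """Stand-in for a language-model call that turns a plain-language
    question plus a schema description into SQL. A real version would
    call an LLM API; this demo returns a canned query."""
    return "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"

# Toy data standing in for "their data".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 90.5), ("north", 40.0)])

sql = ask_llm("What are total sales by region?",
              "sales(region TEXT, amount REAL)")
for row in conn.execute(sql):
    print(row)  # ('north', 160.0) and ('south', 90.5)
```

The point isn’t the code; it’s that the person asking never had to learn the query syntax in the middle.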

The Translation Layer

I think of it as a translation layer between human thinking and mechanical processing.

When the translation layer is fast, the thinking gets faster. You ask a question, get an answer, notice something interesting, ask a follow-up question. The cycle of exploration accelerates.

When the translation layer is slow (or requires specialized skills to use), the thinking gets bottlenecked. You have a question, but getting the answer requires a database expert, or a programmer, or weeks of manual work.

So you don’t ask the question. The insight never surfaces.

The value of AI in this context isn’t intelligence. It’s accessibility. Making the mechanical work accessible so the thinking work can happen.

The Problem That AI Actually Solves

Most expert work involves two kinds of thinking:

Type 1: Mechanical thinking — Processing information according to rules. Extracting specific data points. Applying formulas. Transforming formats. Comparing against criteria. This is the work that’s repetitive and rule-based.

Type 2: Judgment thinking — Understanding context. Recognizing what matters. Connecting disparate information. Making decisions about tradeoffs. Deciding what to do next. This is the work that requires expertise and intuition.

Historically, experts had to do both. They spent enormous amounts of time on Type 1 thinking (mechanical processing), which left limited capacity for Type 2 thinking (judgment).

This was the bottleneck.

AI is very good at Type 1 thinking. It can process information mechanically at scale. It can extract, transform, compare, organize.

But — and this is crucial — humans are still better at Type 2 thinking. Understanding what matters. Deciding what’s worth exploring. Knowing when something makes sense versus when it’s statistically significant but meaningless.

When AI handles Type 1 thinking well, it frees humans to focus on Type 2 thinking. And that’s when work gets better.

What This Means for Data Work Specifically

Data work is particularly interesting because so much of it is Type 1 thinking.

Data preparation: extracting from source systems, standardizing formats, handling inconsistencies, joining datasets. All mechanical.

Exploratory analysis: calculating statistics, creating visualizations, testing formulas. Mostly mechanical.

Report generation: extracting relevant insights, formatting outputs, creating summaries. Mechanical.

The judgment work — deciding what questions matter, interpreting what results mean, deciding what to do with findings — often gets squeezed into a small portion of the timeline because so much time is spent on mechanical work.
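To show what that mechanical share looks like in code, here’s a small sketch of the prep-and-summarize steps: standardizing formats, joining two sources, computing statistics. The tables and column names are invented for the example; they stand in for the kind of processing an analyst repeats every week.

```python
# A toy illustration of the mechanical (Type 1) share of data work:
# standardizing formats, joining datasets, and summarizing.
# All tables and column names are invented for this example.
import pandas as pd

# Two "source systems" with inconsistent formats.
orders = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "order_date": ["2025-01-03", "2025-01-05", "2025-02-10"],
    "amount": ["120.0", "90.5", "40"],      # numbers stored as text
})
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "region": ["North ", "south"],          # inconsistent casing/spacing
})

# Standardize: parse dates, coerce numerics, normalize text.
orders["order_date"] = pd.to_datetime(orders["order_date"])
orders["amount"] = pd.to_numeric(orders["amount"])
customers["region"] = customers["region"].str.strip().str.title()

# Join and summarize. The mechanical work ends here; interpreting
# the numbers (the judgment work) still belongs to the analyst.
joined = orders.merge(customers, on="customer_id")
print(joined.groupby("region")["amount"].agg(["sum", "mean"]))
```

None of these steps requires judgment, which is exactly why they are the steps AI tooling can increasingly draft or run for you.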

Where AI Changes Things

AI is becoming useful at the mechanical parts of data work.

Not perfectly. It makes mistakes. It needs validation. But it handles the mechanical processing faster than humans can, which means less time spent on mechanical work and more time available for judgment work.

This matters because the judgment work is where the value actually lives.

A data analyst who spends 80% of her time preparing data and 20% interpreting it is limited by the mechanical work. If AI reduces that to 40% preparation and 60% interpretation, her work becomes much more valuable.

She’s not replaced. She’s redirected. The total amount of human thinking is the same, but it’s directed toward higher-value activities.

The Uncomfortable Part Nobody Discusses

Here’s something that’s uncomfortable: AI won’t fix bad judgment.

If someone doesn’t understand their domain well, AI will help them reach conclusions faster. But those conclusions might be wrong or misleading or based on misunderstandings that the AI amplifies.

If someone has unclear priorities about what matters, AI will optimize for something at scale. But it might be the wrong thing.

If someone asks an AI the wrong question, they’ll get an answer to the wrong question — often a very confident, sophisticated-sounding answer to the wrong question.

This is why domain expertise matters more, not less, in a world with good AI tools.

The better you understand your field, the better you can direct AI toward valuable problems. The worse you understand your field, the more likely you are to misuse the tools.

What I Think Is Actually Happening

The pattern I see is this: AI is becoming infrastructure for expertise.

Not replacing expertise. Becoming infrastructure for it.

The specialist who understands their domain well can now leverage AI to extend their reach. They can handle more data, explore more possibilities, see patterns they couldn’t see before. Not because AI replaced their expertise, but because infrastructure supports it.

The specialist without deep domain expertise won’t benefit as much. They can use the tools, but they won’t know what questions to ask or whether the answers make sense.

This creates a divergence: deep experts become more powerful. Shallow practitioners… might get faster at being shallow.

The tools are neutral. They amplify what you put into them.

Why Domain Expertise Is Becoming More Valuable, Not Less

I think we’re in the middle of a shift in what expertise means.

Previously: expertise meant knowing things. Retaining information. Remembering patterns and reference points.

Increasingly: expertise means knowing what questions matter. Recognizing patterns that are significant versus patterns that are just noise. Understanding context deeply enough to know when something makes sense.

The first kind of expertise — knowledge retention — is becoming less valuable because AI is good at retrieval. If you need to know something, you can ask.

The second kind of expertise — judgment and context understanding — is becoming more valuable because it’s not automatable.

So the shift isn’t that expertise becomes less important. It’s that the form expertise takes changes.

The Practical Implication

This means the work of developing expertise in your field — understanding it deeply, building intuition, learning the exceptions and edge cases — is becoming more important, not less.

The field requires specialists with real understanding. People who can look at data and know what’s significant. People who can interpret results and understand implications. People who know their domain well enough to ask good questions.

The infrastructure is becoming available to support that expertise. But the expertise itself is still needed.

What I’m Actually Seeing Work

When I look at effective uses of AI in practice, certain patterns emerge:

Clarity about the question — The more specific and well-understood the problem, the more useful AI becomes. Vague questions about nebulous problems don’t lead anywhere. Specific questions about understood challenges do.

Domain expertise directing the tool — The person using AI knows their field well enough to validate results, catch errors, notice when something doesn’t make sense. They’re not trusting the tool blindly.

Iterative exploration — Rather than asking once and accepting the answer, there’s a cycle of questioning, results, new insights, follow-up questions. The tool enables rapid iteration.

Focus on acceleration, not replacement — Using AI to do faster what they would otherwise do slowly, not trying to replace human judgment with automated decisions.

These aren’t revolutionary uses. They’re practical uses. And they’re what actually creates value.

The Gap Between Expectation and Reality

The expectation: AI will transform everything, replace most knowledge work, make expertise obsolete.

The reality: AI is becoming a useful tool for handling mechanical work, which frees time for judgment work. Expertise becomes more important because judgment becomes more central.

The gap between these is enormous. And the reality is less dramatic than the expectation.

But the reality is actually more interesting because it’s more sustainable. Tools that amplify expertise are more useful long-term than tools that try to replace it.

Where This Goes

I don’t know exactly. But based on what I’m seeing:

The organizations and individuals who figure out how to use AI as infrastructure for expertise will be more effective. They’ll handle more work at higher quality because mechanical work is accelerated and judgment work is prioritized.

The ones who try to use AI to replace expertise will find it doesn’t work as intended. You can’t automate judgment. You can only amplify it.

So the investment is the same as it always has been: developing real expertise in your field. Understanding your domain deeply. Building judgment through experience.

The difference is: now there’s infrastructure available to support and amplify that expertise.

The expertise itself is still the bottleneck. The expertise is still what matters.

What are you seeing in how AI actually gets used effectively? Where is it delivering value versus where is it mostly hype? I’m genuinely curious what’s working in practice versus what sounds good in theory.

Published in Analyst’s corner
All aspects of organisational analysis: business analysis | enterprise architecture | quality

Written by Mahathidhulipala
Project Specialist and Data Analyst exploring how technology, data, and design transform the way we work and think.
