A junior analyst produces a polished strategic recommendation in two hours. It looks professional, cites data, and has a clear structure. It's the kind of work that would have taken me weeks and multiple rounds of feedback when I was starting out.

Leadership is impressed with the report and congratulates the analyst for a job well done.

But other senior people in the room can feel something is off. The analysis looks right, but the questions it answers aren't the questions that matter. The data is accurate, but the interpretation misses the context that only comes from experience. The recommendation is logical, but it ignores political realities that would doom it on arrival.

This scene is playing out in organizations everywhere. And it reveals a dynamic that most companies haven't thought through: AI tools are raising the floor of what inexperienced people can produce, while doing almost nothing for the ceiling.

The Floor-Raising Effect

Research from MIT Sloan confirms what many leaders have observed intuitively. When software developers were given access to AI coding assistants, productivity increased by 26% on average. But the gains weren't evenly distributed. Junior developers and recent hires saw productivity increases of 27% to 39%. Senior developers saw just 8% to 13%. One caveat: the researchers couldn't evaluate the quality of the code produced, just the quantity.

This pattern repeats across industries. BCG research found that when nontechnical consultants were given access to generative AI tools, they could suddenly perform data science tasks that were previously outside their expertise. The AI acted as an "exoskeleton," letting people operate beyond their training.

On its own, this isn't a bad thing. Faster ramp times for new hires, broader access to knowledge, and less time stuck on formatting and structure are real benefits.

But "competent-looking" and "actually good" aren't the same thing. A gap exists between them that can only come from wisdom and judgment, which are things that don't come from AI. They come from experience, from making mistakes, and from learning what questions to ask before you start looking for answers.

When output looks polished, people feel confident. When people feel confident, they make decisions.

The problem: AI compresses the time to confident-looking output without compressing the time to actual expertise. Junior people start operating at a level that looks senior, but they don't yet know what they don't know, and now they have a tool that papers over the gaps.

Recent research from Aalto University found something troubling about this dynamic. Normally, the Dunning-Kruger effect means novices are overconfident while experts underestimate themselves. It's a built-in correction mechanism: the people who know the most tend to be the most humble about what they don't know.

With AI, that pattern breaks.

When people used ChatGPT to solve problems, everyone overestimated how well they'd done. But there's a twist: the people who considered themselves most AI-literate were actually more overconfident than the novices.

A quote from the Aalto University research explains it well: "We found that when it comes to AI, the Dunning-Kruger effect vanishes," said Professor Robin Welsch. "What's really surprising is that higher AI literacy brings more overconfidence."

People who considered themselves experienced AI users were the most likely to overestimate how well they'd performed. The researchers identified a pattern they called "cognitive offloading," where users trusted AI outputs without reflection or double-checking. Most participants in the study relied on single prompts and accepted whatever came back without questioning whether it was actually correct.

The implications of this for organizational decision-making are significant. When AI makes everyone feel smarter than they are, the feedback loop that normally corrects overconfidence breaks down. Junior employees who would have hesitated to present work to senior leadership now feel ready. Their confidence reads as competence, and the work product reinforces that impression.

This has real consequences.

Goldman Sachs research shows that unemployment among 20- to 30-year-olds in tech-exposed occupations has risen by almost 3 percentage points since early 2025. Junior roles are disappearing faster than expected, which means the people who remain need to make decisions with less mentorship and less margin for error.

Faros AI's research on software engineering teams found another troubling pattern. While over 75% of developers now use AI coding assistants, many organizations report a disconnect: developers say they're working faster, but companies aren't seeing measurable improvement in delivery velocity or business outcomes. The adoption skews toward less tenured engineers. Usage is highest among people newer to the company.

Meanwhile, the expert who would have caught the problem is either not in the room or has stopped looking closely because everything "looks fine."

The Other Side

Before this becomes a cautionary tale about AI destroying expertise, there's a flip side worth considering.

AI can accelerate learning, not just output. Junior people can see patterns faster, ask better questions, and get exposure to approaches they'd otherwise take years to encounter. The World Economic Forum cites examples such as Brazil's SOMOS Educação, where AI-powered lesson planning saves teachers up to 20 hours per month, giving them more time for individual mentorship rather than less.

If junior staff can handle routine work competently, experts should be able to focus on genuinely hard problems. The ones that require judgment and actually move the business.

Microsoft's New Future of Work Report frames AI as a "bicycle for the mind," boosting output and initially narrowing inequality by automating routine work. As AI advances, human judgment becomes increasingly critical, particularly in identifying opportunities for improvement and selecting the right course of action amid ambiguity.

This is the upside. But it only materializes if organizations are intentional about it.

The Question Most Companies Aren't Asking

The data on AI training reveals a significant gap. According to Absorb's 2026 L&D Report, 61% of organizations have adopted or are testing AI in their learning and development strategies. But only 11% feel extremely confident in their future skills-building strategy.

AI adoption is outpacing readiness. And the skills gap isn't primarily about learning to use the tools. It's about maintaining the judgment that makes the tools useful.

BCG found that while the need for AI upskilling is well established, only 6% of surveyed executives said they had begun upskilling in a meaningful way. Of those leaders, 59% reported having limited or no confidence in their executive team's proficiency in generative AI.

Existing training programs focus on tool proficiency. Things like "How to write better prompts" and "How to use AI features in existing software." These are necessary but insufficient.

The deeper question that most organizations aren't asking is: "How are you ensuring people still develop judgment, not just output?"

AI can generate answers, but it cannot teach someone when to doubt the answer or when to ask a different question. It cannot tell a user when the data is technically correct but strategically irrelevant. That still requires humans developing humans.

If you shortcut that process, you end up with a workforce that produces but doesn't think. A workforce that can generate but can't evaluate. A workforce that looks capable but crumbles when the situation doesn't match the pattern the AI was trained on.

What This Means for Leadership

This isn't a problem you solve with a policy memo or a new tool. It's a cultural question about how your organization develops people.

The World Economic Forum puts it bluntly: "Creativity, contextual reasoning and ethical judgment are capabilities that no algorithm can fully replicate." The demand will be for roles that combine domain expertise with AI literacy. AI system architects. Human-AI collaboration designers. People who can tell when the machine is wrong.

But you can't hire your way to those capabilities. You have to build them. And building them requires rethinking how expertise develops in your organization.

A few places to start:

  • Where is AI-assisted confidence running ahead of actual capability? Look for situations where junior people are producing work that bypasses traditional review processes, and where decisions are being made faster without clear evidence they're being made better. The speed feels like a win until you realize the guardrails that used to catch problems have been removed along with the friction.

  • Are your experts creating or just validating? If your most experienced people are spending their time checking AI-assisted work instead of doing the thinking only they can do, you've shifted the bottleneck without eliminating it. And you've turned the most expensive people in your organization into quality control instead of keeping them focused on the big picture.

  • What's your plan for developing judgment? Research consistently shows that mentorship and cross-functional collaboration remain critical for skill transfer, particularly in AI-driven workflows. If AI is handling the routine work that used to build expertise, what replaces that learning path?

The struggle matters. Friction forces learning, and friction comes from doing things the hard way, making mistakes, and getting feedback from people who've been there before.

When someone hands you AI-assisted work, can you tell what they actually contributed? If you can't, you can't evaluate whether they're growing or just producing.

The TalentLMS 2026 L&D Report found that 70% of employees now multitask during training, trying to work and learn simultaneously. That's up from 58% just two years ago. Learning is competing with production demands, and production is winning.

So... when do people develop the judgment that makes their work worth anything?

The Balance

AI is a tool. A powerful one. It raises the floor, and that's valuable.

But raising the floor isn't the same as raising the ceiling. If everyone stands on a higher floor while the ceiling remains the same, you've compressed the space where expertise develops.

The World Economic Forum's Future of Jobs Report projects that 170 million new roles will be created while 92 million are displaced between 2025 and 2030. Nearly two-fifths of existing skills required on the job are predicted to change over the next five years.

Human insight and expertise will become more crucial for using AI tools effectively, not less. The demand will be for people who can direct, oversee, and evaluate AI operations. That requires judgment, experience, and the kind of learning you can't shortcut.

The temptation is to frame this as either/or. Embrace AI and accept the tradeoffs, or hold back and fall behind. But that misses what's actually at stake.

If you treat AI adoption as purely a productivity question, you'll hit your numbers, but you won't notice the expertise gap until someone has to handle something the AI wasn't trained for.

The organizations that get this right will keep investing in how people develop, not just what they produce. They'll protect the inefficient parts like mentorship, struggle, and learning from mistakes, because that's where judgment comes from.

The question worth asking: in a world where AI makes everyone's output look good, how do you still build people who can think?

I write about AI reality checks and technology strategy for executives. If you're navigating how AI is changing your organization, you can find more in my newsletter at newsletter.ericbrown.com or connect with me on LinkedIn.
