This is a follow-up to an article I wrote last week titled "The AI Confidence Crisis."
The ritual begins: another conference room, another AI strategy presentation, another round of nodding heads. What happens after everyone closes their laptops?
Three months ago, I sat in a conference room with a company's leadership team. The CEO had just announced their "AI-first transformation." Around the table, heads nodded. Strategic AI initiative. Innovation. Future-ready. The words hung in the air like incense.
Then I watched what happened next.
The same executives who'd nodded enthusiastically at AI transformation went back to their departments and... nothing changed. Well, almost nothing. They added "AI strategy" to their quarterly goals. They hired consultants. They attended webinars. But the actual work? That stayed exactly the same.
If you dropped an anthropologist into corporate America right now, they'd recognize what they were seeing immediately: cargo cult behavior. Companies performing the rituals of AI adoption without understanding—or sometimes even caring about—what makes AI actually work.
The data backs this up in ways that should make every executive uncomfortable. S&P Global found that 42% of companies now abandon most AI projects before production—up from 17% just a year ago. BCG reports that 74% of organizations can't demonstrate tangible value from AI. MIT's research suggests 95% of enterprise AI pilots deliver zero revenue acceleration.
This is systematic failure at scale.
But systematic failures have patterns. Walk into any organization attempting AI adoption and you'll find the same groups forming, the same roles emerging, and the same dynamics playing out.
The Tribes
Every organization running AI initiatives has developed distinct tribal groups, each with their own beliefs, behaviors, and territorial claims.
The Believers show up to every meeting with the same energy. They speak in absolutes. AI will transform everything. The ROI is obvious. We just need to move faster. According to McKinsey's research, 78% of employees describe themselves as "AI optimists." They see the potential. They're excited.
The Skeptics occupy a different space. They've seen this movie before—blockchain, metaverse, mobile-first. They ask uncomfortable questions about data quality, integration complexity, and real costs. They point out that your current systems barely work, so adding AI on top might not be the brilliant move everyone thinks it is.
The Performers are the most fascinating group. These are the executives who've figured out that talking about AI generates more value than actually implementing it. They announce AI strategies. They create Chief AI Officer positions (now present in 61% of enterprises, per Wharton research). They attend conferences and post on LinkedIn.
BuzzFeed's stock jumped 100% after announcing AI-generated quizzes. Klarna made headlines by claiming its AI assistant was doing the work of 700 customer service agents. The performance worked, at least temporarily.
What these groups share: they're all responding to the same cultural pressure. The pressure to be seen doing something with AI, whether or not that something makes sense.
The Ceremonies
Watch any organization "adopting AI" and you'll see the same ceremonies play out with remarkable consistency.
The Announcement Ritual: Leadership declares an AI initiative. There's a company-wide email. Perhaps a town hall. The language is always the same: transformation, innovation, competitive advantage. Does this create actual AI value? No. Does it create the appearance of AI commitment? Absolutely.
The Pilot Project Dance: Organizations launch dozens of experiments. They investigate, test, evaluate. According to multiple sources, most companies that "investigate" AI projects never commit to deploying them. The investigation itself becomes the deliverable. The theater of innovation.
The Measurement Evasion: Here's where it gets interesting. Only 40% of IT leaders fully measure ROI on AI programs, according to Fivetran research. That $500K spent on an AI platform? That year-long pilot? Most organizations can't tell you what value they created.
This usually isn’t incompetence but a deliberate decision. When you don't measure results, projects can drift along indefinitely without being declared failures. The budget keeps renewing, the team stays busy, and nobody has to stand up at the quarterly review and explain why $500K produced nothing.
This measurement vacuum creates space for something else to flourish: the performance of progress. Without hard numbers tying claims to reality, organizations can say whatever they want about their AI initiatives. And they usually do.
Public Claims, Private Reality
The gap between stated intentions and actual behavior would fascinate any anthropologist studying corporate culture.
Public Claims: We're AI-first. We're transforming with AI. AI is core to our strategy. McKinsey found that 49% of tech leaders say AI is "fully integrated" into core business strategy.
Private Reality: Only 21% of organizations using generative AI reported fundamentally redesigning any workflows in 2024. The AI sits next to existing processes, not integrated into them. Teams keep working the same way, occasionally consulting AI tools for content generation or summarization.
One executive I know describes this as "AI veneer": a thin layer of AI activity covering unchanged operations underneath. But that veneer costs real money. The 6% of annual revenue that businesses lose to poor AI decisions, per Vanson Bourne research, adds up quickly: for a $500 million company, that's $30 million a year.
All this spending, all this theater, all these failed pilots: none of it happens in a vacuum. Money flows somewhere, and authority accumulates around someone. The AI adoption performance has created winners inside organizations, even when the organizations themselves aren't winning.
The Power Shifts
AI adoption isn't just changing what organizations do. It's changing who has power within them.
The Rise of the AI Officer: Chief AI Officers (CAIOs) now exist in 61% of enterprises. These roles didn't exist three years ago. Now they command budgets, headcount, and executive attention. They report directly to CEOs. They attend board meetings.
What do they actually do? In many organizations, not much beyond managing the perception that an AI strategy exists. They're the high priests of the cargo cult, maintaining the rituals that keep investment flowing.
The Middle Manager Squeeze: While CAIOs rise, middle managers face a different reality. They're caught between executive demands for AI transformation and frontline reality where AI tools don't integrate with existing systems, require extensive training, and often make work harder before making it easier.
Research from McKinsey shows employees report inadequate support for success with AI: nearly two-thirds say their company's learning and development programs don't help. Middle managers bear the brunt of this gap, expected to deliver transformations they haven't been equipped to implement.
The Data Team Ascendancy: Organizations that succeed with AI share at least one characteristic: they've empowered data teams with real authority. Not just analysis but actual decision-making power, including budget control and veto rights over projects built on bad data foundations.
This obviously creates tension. Traditional power structures flow through business units and functional departments, while AI success requires cross-functional data governance. When these two models clash, the traditional structure usually wins, and AI projects fail.
The pattern emerging from current research points to an uncomfortable conclusion:
Most AI "adoption" isn't about building AI capabilities but about managing stakeholder expectations.
Boards want to see AI investment, investors demand AI strategy, and competitors are announcing AI initiatives. So organizations perform AI adoption theater: they go through the motions, create the artifacts, and use the language.
Consider this: organizations in the US alone spent $109 billion on AI in 2024. Yet 74% can't demonstrate tangible value. Something other than the technology is broken.
What Actually Works
Not every organization is performing theater. Some are building real AI capabilities that deliver measurable value. What do they do differently?
They Tell the Truth: Organizations that succeed with AI start by acknowledging what they don't know. They assess their data quality honestly and most discover it's terrible. They admit their technical debt. They face the reality that their current systems barely function, and AI won't fix that.
PwC research found that organizations with lower project failure rates have more holistic approaches to project prioritization. They consider compliance, risk, and data availability before selecting projects. This seems obvious, yet it's rare.
They Invest in People, Not Just Technology: BCG's research shows successful AI organizations follow the 10-20-70 rule: 10% of effort on algorithms, 20% on technology, 70% on people and processes. On a $1 million initiative, that means roughly $100K on models, $200K on platforms and integration, and $700K on training, change management, and process redesign.
Most organizations invert this. They spend months selecting the perfect AI platform and days thinking about how people will actually use it. This produces expensive software that nobody touches.
They Start Small: The organizations making progress aren't announcing enterprise-wide AI transformations. They're picking specific problems, building small solutions, measuring results, and then—only then—expanding.
This approach doesn't generate exciting LinkedIn posts, and it doesn't create conference keynote material. But it does work.
They Maintain Clear Authority: Successful AI initiatives have clear decision-makers with real power. Not advisory committees. Not working groups. Actual authority to kill projects, redirect resources, and override objections.
This requires organizational changes that most companies won't make. It threatens existing power structures. So they opt for consensus-based approaches that move slowly and fail quietly.
The Real Stakes
The anthropological lens reveals something most people miss: AI adoption isn't primarily a technical challenge. It's a cultural transformation that most organizations aren't equipped to handle.
Organizations have spent decades building cultures optimized for stability, predictability, and incremental improvement. AI requires experimentation, tolerance for failure, and rapid iteration. These are fundamentally different operating modes.
You can't bolt new technology onto old culture and expect transformation. That's cargo cult thinking: arranging the technology artifacts and hoping results follow.
The companies that will succeed with AI over the next five years aren't going to be the ones with the biggest AI budgets or the most impressive announcements. They're the ones willing to do the hard cultural work: changing decision-making processes, redistributing power, tolerating failure, and measuring results honestly.
That work is harder than buying AI platforms. It's less visible than announcing AI strategies. And it takes longer than most quarterly planning cycles accommodate.
But it's the only path to actual AI value, as opposed to AI theater.
If you're leading a team through AI adoption and recognizing these patterns, I'd be happy to talk through what actual capability-building looks like versus the theater. Find me at ericbrown.com or connect on LinkedIn.




