Your next AI project will succeed or fail based on one decision you'll make in the first two weeks: build, buy, or wait.
Get this wrong, and you'll join the 42% of companies that abandoned most of their AI initiatives in 2025 (up from just 17% the year before). The average organization now scraps 46% of AI proof-of-concept projects before they reach production.
Most of those failures share a common origin: the team picked their technology approach before they understood what they were actually solving for.
Every AI conversation I've been part of in the last two years starts the same way. Someone asks: "Should we build this ourselves or buy a solution?"
That’s the wrong question. The right question comes much earlier: Does this capability create competitive differentiation, or is it table stakes?
That distinction changes everything about how you should proceed.
Table stakes means your competitors already have this capability, or will soon. These include customer service chatbots, basic analytics dashboards, and document processing. These are solved problems with mature vendor solutions. Building them yourself means you're spending engineering time on something that won't make customers choose you over a competitor.
Differentiation means this capability could become a reason customers pick you. It touches proprietary data, reflects unique workflows, or creates a product that doesn't exist yet.
The third option—wait—applies when you can't clearly articulate which category you're in. If you can't define the specific business outcome in terms that your CFO would accept, you're not ready for the technology decision.
Leadership teams convince themselves their requirements are unique when they're actually standard. I see this all the time. Here are a few examples:
"Our customer service is different because we have complex products." Every B2B company says this, but most could use an off-the-shelf solution with minimal configuration.
"Our data is proprietary." Usually true, but that doesn't mean the model needs to be proprietary. Often, you can plug your data into existing infrastructure.
"Our industry has special compliance requirements." Also often true, but vendors serving your industry have already solved this. You'd be rebuilding their compliance work from scratch.
According to Forrester's 2024 analysis, 67% of failed software implementations stem from incorrect build-vs-buy decisions. The most common mistake they found was that companies were building what should have been bought.
The question to ask yourself is: If you removed your company name from the requirements document, would it still look unique? Or would it describe half the companies in your industry?
When Building Actually Makes Sense
Building your own AI capability is expensive.
Custom AI solutions typically range from $100,000 to $500,000+ for enterprise-grade implementations. And that's just the upfront cost: 65% of total costs materialize after deployment through maintenance, updates, and talent retention.
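That 65% figure is worth internalizing, because it means the build quote you're shown is only about a third of the real bill. A rough back-of-the-envelope sketch (the dollar figures are illustrative assumptions, not data from any specific deployment):

```python
def total_cost_of_ownership(upfront_build: float, post_deploy_share: float = 0.65) -> float:
    """Estimate lifetime cost when the upfront build is only the remaining share.

    If 65% of total cost lands after deployment, the upfront build
    represents just 35% of what you'll actually spend.
    """
    upfront_share = 1.0 - post_deploy_share
    return upfront_build / upfront_share

# A $300,000 build at a 65% post-deployment cost share implies
# roughly $857,000 in total lifetime cost.
print(round(total_cost_of_ownership(300_000)))
```

The point of the sketch is simply that the sticker price understates the commitment by roughly 3x; plug in your own vendor quote before comparing against a buy option.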
According to MIT research, purchasing AI tools from vendors succeeds about 67% of the time, while internal builds succeed only one-third as often.
So, when does building make sense despite those numbers? Here are a few examples.
When the capability compounds over time. UPS spent over $1 billion and four years developing ORION, its route-optimization system. That sounds insane until you see the results: 100 million fewer miles driven annually, $300 million+ in annual savings, and a capability competitors can't simply purchase. The system gets smarter every day as it processes more delivery data.
When you're integrating deeply into proprietary systems. If the AI capability requires intimate knowledge of your internal processes, data structures, and business logic, vendors will struggle to serve you well. The customization required often exceeds what "configuration" can handle.
When regulatory sensitivity demands control. If you're handling PHI, PII, or financial data under strict compliance requirements, the shared responsibility model of vendor solutions can create gaps. Building gives you full control of the data pipeline—though it also gives you full responsibility for security.
When Buying Is the Smart Move
Lumen Technologies faced a simple problem: sales reps spent four hours researching customer backgrounds before outreach calls. They didn't build a custom AI solution; they knew they could find something off the shelf. They deployed Microsoft Copilot and compressed that research time to 15 minutes.
The result: $50 million in projected annual savings with no custom development and no ongoing model maintenance.
Lumen understood something important: customer research isn't their competitive advantage. Serving customers better is. The time saved on research can go toward activities that actually differentiate them.
Buy when:
The problem is well-defined, and vendors have proven solutions
Speed matters more than customization
You lack the internal talent to build and maintain
The capability doesn't touch your competitive moat
The mistake many companies make is treating "buy" as the cheap option. It's not. Enterprise AI platforms cost $200 to $400 per user per month, plus implementation, plus integration, plus change management. But it's usually cheaper than building, and far cheaper than building something that fails.
When Waiting Is the Right Call
McDonald's spent three years testing AI-powered drive-thru ordering with IBM. The system achieved about 85% accuracy, which sounds decent until you realize that 15% error rate meant viral videos of customers getting bacon added to their ice cream, or orders for 260 chicken nuggets instead of 26.
In June 2024, McDonald's ended the partnership and shut down the AI at all 100+ test locations.
Here's what makes this instructive: McDonald's didn't fail at AI. They ran a contained pilot, learned the technology wasn't ready for their specific use case, and stepped back. That's exactly how waiting should work.
The company still believes AI voice ordering will be part of its future. But they recognized that the current state of the technology couldn't meet their accuracy requirements at scale. They're evaluating new partners rather than forcing a premature deployment.
Wait when:
You can't articulate the business outcome precisely
The technology isn't mature enough for your accuracy requirements
You're feeling pressure to "do something with AI" without a clear problem to solve
Your data infrastructure isn't ready to support what you want to build
Waiting doesn't mean doing nothing; it means running small pilots, building data foundations, and developing internal expertise. Waiting means getting ready so that when you do move, you move with confidence.
The Evaluation Framework
The build/buy/wait decision gets easier when you force yourself through a specific diagnostic:
First, define the outcome in business terms. Not "implement AI for customer service" but "reduce average handle time by 30% while maintaining satisfaction scores." If you can't state the goal in terms your CFO would recognize, stop. You're not ready.
Second, assess whether this outcome creates differentiation. Would achieving this goal make customers more likely to choose you over competitors? Or are you just catching up to baseline expectations?
Third, inventory what you actually have. Do you have clean, accessible data to train or fine-tune models? Do you have engineers who understand ML operations? Do you have executive patience for a multi-year capability build?
Fourth, map the honest timeline. Internal AI builds take 2-3x longer than companies expect. What was pitched as a six-month implementation often stretches to eighteen months. Vendor implementations have their own delays, but they're usually measured in weeks to months, not years.
If the outcome is differentiation, you have the infrastructure and talent, and you can stomach a longer timeline, then you should build.
If the outcome is table stakes and speed matters, you should buy.
If you can't clearly answer these questions, then you need to wait and use that time to develop clarity.
The Cost of Getting This Wrong
The average organization now scraps 46% of AI proof-of-concept projects before production. According to RAND Corporation, AI projects fail at twice the rate of other IT projects.
These failures waste money and time, exhaust teams, and create organizational scar tissue that makes the next AI initiative harder to fund and staff.
UPS built because route optimization compounds over time and touches their core competitive position. Lumen bought because sales research doesn't differentiate them: serving customers faster does. McDonald's waited because the technology couldn't meet their accuracy bar.
Each made a different choice. Each was correct for their situation.
The decision that determines success isn't which AI model to use or which vendor to select. It's whether you understand your own situation clearly enough to know what the right approach even looks like.
Define the problem with enough precision that the build/buy/wait answer becomes obvious. If you can't get there, that's your signal to keep working on clarity; not to pick a technology and hope it works out.
If you found this post helpful, consider sharing it with another executive grappling with AI, technology, and data. If you want to explore AI and other Technology strategies, grab some time on my calendar, and let's chat.