Post · 7 min read

Your AI Strategy Has a Single Point of Failure

The numbers nobody talks about at the sales demo

OpenAI is projected to lose $14 billion in 2026. Not revenue. Losses. The company burned through $8 billion on compute alone in 2025, on top of salaries, research, and infrastructure. Its own financial documents project cumulative losses of $115 billion through 2029, with profitability not expected until somewhere in the 2030s.

Anthropic, the company behind Claude, is in better shape but still losing money. Revenue hit around $7 billion annualised by early 2026, but costs are running at roughly $5 billion and climbing. The company forecasts breaking even by 2028, which is optimistic by industry standards but still two years away.

Google is pouring tens of billions annually into AI infrastructure. Microsoft has committed over $80 billion in capital expenditure for AI in a single fiscal year. Meta is spending at a similar scale. None of these companies have demonstrated that their AI services generate a net profit at current pricing.

These are not struggling startups. These are the best-funded technology companies in history, and they are all losing money on the AI services you are building your workflows around. That should concern you.

You are not paying the real price

The current pricing of AI tools is, to put it plainly, subsidised. When you pay $20 a month for ChatGPT Plus or $100 a month for Claude Max, you are not paying what it costs to serve you. You are paying what the company has decided it can afford to lose while it builds market share.

This is the classic loss-leader model. Get users dependent on the product at an artificially low price, establish switching costs, then raise prices once the market is captured. It has worked in streaming, in cloud computing, in ride-hailing. There is no reason to believe AI will be different.

The signs are already visible. OpenAI introduced a $200 per month Pro tier in late 2024, ten times the price of Plus. It removed GPT-4o from the free tier in 2025, putting it behind the paywall. It launched advertising in ChatGPT for free-tier users. Microsoft added Copilot to Microsoft 365 and raised subscription prices. These are not one-off adjustments. They are the early stages of a pricing correction that the entire industry will eventually make.

The analyst firm Bain & Company estimated an $800 billion gap between what the industry is spending on AI infrastructure and what it is earning from AI services. That gap has to close. It will close through some combination of price increases, reduced service quality, and companies going under entirely. The only question is the timing.

The vendor lock-in problem

None of this would matter much if switching between AI providers were easy. But it is not, and it is getting harder.

If your team has spent six months building prompts, workflows, and integrations around Claude's API, moving to GPT or Gemini is not a weekend project. The prompts need rewriting. The integration code needs replacing. The behaviour differences between models mean your quality assurance process starts over. The institutional knowledge your team has built about how to get good results from one model does not transfer cleanly to another.

This is true at the consumer level too. If you have built a personal workflow around Claude's Projects feature, or ChatGPT's custom GPTs, or Gemini's integration with Google Workspace, your data and your patterns of use are tied to that platform. Moving means losing context, losing conversation history, and relearning how to get the results you need.

Every month you spend deepening your relationship with a single AI vendor, the cost of leaving goes up. That is not an accident. It is the business model.

What happens when the music stops

There are several plausible scenarios, and none of them are comfortable for organisations that have bet on a single provider.

Prices go up significantly. This is the most likely outcome. As investor patience thins and the pressure to reach profitability increases, subscription prices will rise and API costs will climb. OpenAI has already shown willingness to create premium tiers at multiples of existing pricing. If your $20 per month tool becomes $50 per month, that is manageable. If your API costs double, that might break your unit economics.

Features get paywalled. This is already happening. Models that were available on free tiers get moved behind subscriptions. Advanced features get reserved for higher-priced plans. The capability you built your workflow around becomes a premium feature you did not budget for.

A provider fails or pivots. The AI startup landscape is brutal. Companies burning billions annually need continuous funding rounds just to survive. If your chosen provider hits a funding wall, gets acquired, or pivots its business model, your workflow breaks. This is not hypothetical. Dozens of smaller AI companies have already shut down or been absorbed.

Quality degrades as costs are cut. When companies need to reduce losses, they optimise for cost rather than quality. Cheaper models get substituted. Rate limits get tightened. Response quality drops during peak hours. You notice it as your AI tool having "off days," but what you are actually seeing is a company managing its burn rate.

Building for resilience

The answer is not to avoid AI tools. The answer is to use them without becoming dependent on any single one.

Keep your prompts and workflows portable. Document what you ask AI to do, not just how you ask a specific model to do it. If your prompt engineering is documented as a process rather than a collection of platform-specific tricks, you can migrate it. Store your system prompts, your templates, and your workflow descriptions in a format you control, not locked inside a vendor's interface.
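One way to do this is to keep each prompt as a plain, model-agnostic record in a file you control. The sketch below is illustrative, not a standard: the field names (`goal`, `constraints`, `template`) and the file path are assumptions, and any provider's API could consume the rendered template.

```python
import json
from pathlib import Path

# A model-agnostic prompt record: describe the task and its constraints,
# not the quirks of any one provider. Field names are illustrative.
prompt_spec = {
    "name": "summarise_support_ticket",
    "goal": "Summarise a customer support ticket in three bullet points.",
    "inputs": ["ticket_text"],
    "constraints": ["max 60 words", "neutral tone", "no speculation"],
    "template": "Summarise the following support ticket in three bullets:\n{ticket_text}",
}

# Store it in a plain file under version control, not inside a vendor's UI.
path = Path("prompts/summarise_support_ticket.json")
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(prompt_spec, indent=2))

# Rendering is trivial, and the result can be sent to any model.
rendered = prompt_spec["template"].format(
    ticket_text="Printer refuses to print on Tuesdays."
)
print(rendered)
```

Because the record describes the task rather than a platform feature, migrating it means re-testing, not rewriting from memory.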

Use the API layer where it matters. For critical business processes, build against the API rather than the consumer interface. API integrations are more work upfront but give you a clean abstraction point where you can swap providers. Libraries like LiteLLM provide a unified interface across multiple model providers, reducing switching costs significantly.
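The abstraction point can be as simple as one function that every model call goes through. In this sketch the provider backends are stubs so it runs offline; in production each branch would call the provider's SDK, or a unified library such as LiteLLM's `completion`. The provider names and the `"provider/variant"` naming scheme are assumptions for illustration.

```python
# All model calls go through one function, so swapping providers
# means changing a single configuration value.
MODEL = "provider-a/general"  # the one place the provider is named

def _call_provider_a(prompt: str) -> str:
    # Stub standing in for a real SDK call.
    return f"[provider-a reply to: {prompt}]"

def _call_provider_b(prompt: str) -> str:
    return f"[provider-b reply to: {prompt}]"

_BACKENDS = {
    "provider-a": _call_provider_a,
    "provider-b": _call_provider_b,
}

def complete(prompt: str, model: str = MODEL) -> str:
    provider, _, _variant = model.partition("/")
    return _BACKENDS[provider](prompt)

print(complete("Draft a polite refund email."))
# Switching vendors is now a one-line change:
print(complete("Draft a polite refund email.", model="provider-b/general"))
```

The point is not the wrapper itself but where the seam sits: everything above `complete()` is yours, everything below it is replaceable.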

Test alternatives regularly. Every quarter, take your most important AI workflow and run it through a competing model. See how the results compare. Understand what would break if you had to switch and what would transfer cleanly. This is not paranoia. It is the same due diligence you would apply to any critical supplier.

Ensure your tasks are scalable beyond AI. If a task can only be done by a specific AI tool at a specific price point, you have a fragility problem. Ask yourself: if this tool doubled in price tomorrow, would this workflow still make sense? If the tool disappeared entirely, could we still deliver? The best AI-augmented workflows have a human fallback, not because AI is unreliable, but because dependency on any single tool is a business risk.
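The price-doubling question can be made concrete with back-of-envelope arithmetic. All figures below are illustrative placeholders, not benchmarks.

```python
# Fragility check: does the workflow still pay for itself if the
# tool's price doubles? Every figure here is an illustrative assumption.
monthly_tool_cost = 100.0      # what you pay the vendor today
hours_saved_per_month = 10.0   # time the workflow saves your team
hourly_value = 45.0            # loaded cost of that time

def still_worth_it(cost: float) -> bool:
    return hours_saved_per_month * hourly_value > cost

print(still_worth_it(monthly_tool_cost))      # today's price -> True
print(still_worth_it(monthly_tool_cost * 2))  # doubling scenario -> True
```

If the doubling scenario flips to False, the workflow is fragile, and the time to design a fallback is now, not when the invoice changes.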

Own your data and your learnings. Keep copies of everything that matters outside the AI platform. Conversation histories, generated templates, refined prompts, decision logs. If you are using AI to develop institutional knowledge, that knowledge needs to live somewhere you control. A vendor's server is not that place.
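The simplest version of this is an append-only archive on storage you own. The sketch below uses a local JSONL file; the filename, record fields, and `kind` labels are assumptions, and in practice you might point this at a shared drive or a repository instead.

```python
import json
import time
from pathlib import Path

# A minimal local archive: append anything worth keeping (refined
# prompts, outputs, decisions) to a plain JSONL file you control.
ARCHIVE = Path("ai_archive.jsonl")

def archive(kind: str, content: str) -> None:
    record = {"ts": time.time(), "kind": kind, "content": content}
    with ARCHIVE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

archive("refined_prompt", "Summarise tickets in three neutral bullets.")
archive("decision", "Adopted prompt v3 after quarterly comparison.")

print(sum(1 for _ in ARCHIVE.open()))  # number of archived records
```

Plain text and JSONL outlive platforms; a vendor's export tool, if one exists at all, is available on the vendor's terms.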

The open-source hedge

This is where open-source models become relevant not just as a philosophical preference but as a practical risk mitigation strategy.

Models like Llama, Mistral, and others in the open-source ecosystem are reaching capability levels that make them viable for many business tasks. They are not yet matching the frontier models from OpenAI or Anthropic on the most demanding workloads, but for a growing range of practical applications, they are good enough.

Running an open-source model on your own infrastructure, or on a cloud provider where you control the deployment, eliminates vendor lock-in entirely. Your costs become predictable. Your data stays under your control. Your workflow cannot be disrupted by someone else's pricing decision or funding round.

The realistic approach for most organisations today is a hybrid one. Use frontier models from commercial providers for the tasks that genuinely require their capability. Use open-source models for everything else. And design your workflows so that moving tasks between the two is straightforward rather than painful.
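The hybrid approach can be sketched as a routing rule: default to the cheaper open-source model, escalate to a frontier model only for tasks flagged as demanding. Both backends are stubs here, and the keyword-based routing rule is a deliberately crude placeholder; real routing might use task categories, a classifier, or explicit caller choice.

```python
# Hybrid routing sketch: local open-source model by default,
# frontier model only for demanding tasks. Rules are illustrative.

def run_local(prompt: str) -> str:
    return f"[local model] {prompt}"     # e.g. self-hosted Llama or Mistral

def run_frontier(prompt: str) -> str:
    return f"[frontier model] {prompt}"  # e.g. a commercial API

DEMANDING_KEYWORDS = ("legal", "architecture review", "multi-step")

def route(prompt: str) -> str:
    demanding = any(k in prompt.lower() for k in DEMANDING_KEYWORDS)
    return run_frontier(prompt) if demanding else run_local(prompt)

print(route("Summarise this meeting note."))
print(route("Draft a multi-step migration plan."))
```

Because both paths sit behind one `route()` function, moving a task between tiers is a rule change, not a rewrite.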

The uncomfortable truth

The AI tools you are using today are priced to acquire you, not to sustain a business. Every major AI provider is losing money. The pricing you have built your budgets around is temporary. The integrations you have built your workflows around create switching costs that benefit the vendor more than they benefit you.

None of this means you should stop using AI. It means you should use it with your eyes open. Diversify your providers. Keep your workflows portable. Own your data. Test alternatives. Budget for price increases. And remember that a tool's current price is not a promise about its future price.

The companies building AI are playing a long game with very deep pockets and very patient investors. But patience has limits, losses have consequences, and the bill always comes due eventually. When it does, the organisations that planned for it will adapt. The ones that did not will scramble.

Plan for it.
