DeepSeek and the moment the AI cost curve broke

DeepSeek released R1 in late January, and the market reaction was not subtle. The model is competitive with OpenAI’s o1 on reasoning benchmarks. It costs a fraction of what OpenAI charges. Its weights are openly released under an MIT licence, so you can run it on your own infrastructure if you want. And within days it had reached the number one spot on Hugging Face and upended the pricing assumptions for every AI vendor.

The old assumption was that building state-of-the-art models costs you a fortune. You need massive data centres, years of training, and billions of dollars. That’s why OpenAI’s business model (sort of) worked. You pay for the compute, they amortise the cost across millions of users, and they take a margin. Same logic applies to Anthropic and Google and Meta.

DeepSeek appears to have broken that assumption. Not by making a worse model. By making a comparable one more efficiently. Different training approach. Different inference optimisation. Different scale of spend. The specifics matter less than the fact that the cost floor we’d all accepted turned out to be negotiable.

Every provider is now either defending its pricing or explaining why price isn’t the whole story. That doesn’t mean any of them is in immediate trouble, but it does change the conversation with customers.

If you can run a frontier-class model on your own infrastructure at a reasonable cost, the case for paying someone else for inference gets harder to make. That’s good news for any company with a team that can manage open models.
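As a rough way to frame that comparison, here is a back-of-envelope sketch in Python. Every figure in it is an illustrative placeholder (a hypothetical API price, GPU rental rate, batched throughput, and monthly usage), not a quote from any provider, and it ignores the operational overhead that self-hosting carries.

```python
# Back-of-envelope: hosted API inference vs self-hosting an open model.
# Every number below is an illustrative placeholder, not a real price or benchmark.

api_price_per_million_tokens = 10.00   # hypothetical $ per 1M tokens from a hosted API
gpu_hour_cost = 2.50                   # hypothetical $ per hour for a rented GPU
batched_tokens_per_second = 1_000      # hypothetical aggregate throughput on that GPU
monthly_tokens = 500_000_000           # hypothetical monthly usage

# Cost of buying inference from an API, priced per token.
api_monthly_cost = monthly_tokens / 1_000_000 * api_price_per_million_tokens

# Cost of renting enough GPU hours to serve the same volume yourself.
gpu_hours_needed = monthly_tokens / batched_tokens_per_second / 3600
self_hosted_monthly_cost = gpu_hours_needed * gpu_hour_cost

print(f"API:         ${api_monthly_cost:,.0f}/month")
print(f"Self-hosted: ${self_hosted_monthly_cost:,.0f}/month (compute only, no ops overhead)")
```

With these placeholder numbers self-hosting comes out cheaper, but the conclusion flips easily: halve the throughput or add an engineer’s salary and the API wins again. The point is that it’s now a calculation worth running at all.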

Beyond that, we’re probably going to see a wave of smaller teams building on top of open models, because the barrier to entry just dropped.

I don’t think this moment breaks the AI vendor market. OpenAI still has distribution and brand. Anthropic still has the enterprise trust story. Google still has search.

But it does mean it’s no longer as clear-cut as “AI is expensive and only a few people can do it”.