AI Accessibility: Elite Tool or Public Commodity?

The rise of large language models has democratized coding, writing, and problem solving in ways that felt impossible just a few years ago. Yet beneath the excitement there is a growing tension that few people are talking about: what happens if the cost of running AI never comes down? We are used to thinking of technology as something that gets cheaper over time, but the economics of modern AI tell a more complicated story.

Major labs like OpenAI, Anthropic, and Google are burning extraordinary amounts of money to train and serve their models. Margins are thin or nonexistent, and the hardware required to run inference at scale is staggeringly expensive. Some observers argue that we are heading toward a future where only the wealthiest companies and individuals can afford state-of-the-art AI. If that happens, the technology that promised to level the playing field could instead deepen existing inequalities.

Imagine a world where elite programmers inside top companies have access to the best reasoning models, while everyone else is stuck with weaker alternatives. The gap between market leaders and smaller competitors would widen. Career advancement would depend on whether your employer can afford a premium AI subscription. The internet became a public commodity because infrastructure costs plummeted and access expanded globally. There is no guarantee that LLMs will follow the same trajectory.

Anthropic’s models are famously expensive compared to competitors. Chinese labs currently offer cheaper alternatives, but that is partly because they have not reached the same scale of global demand. Once they do, they will face the same economics: more users mean more GPUs, more power, and more capital expenditure. The question is not who is cheapest today, but whether any organization can sustainably serve billions of users at frontier model quality without going bankrupt.

There is, however, an alternative vision. Hardware could cross a threshold where local inference becomes efficient enough to run capable models on consumer devices. If that happens, every company could host its own fine-tuned model on private infrastructure. Every individual could run a personal assistant without relying on a distant server. AI would become as ubiquitous as electricity, woven into daily life rather than rationed to those who can pay premium prices.

The direction we take depends on more than economics. It depends on whether the industry prioritizes open weights, efficient architectures, and distributed inference alongside raw capability. It also depends on whether regulators and communities treat access to AI as a utility worth protecting, rather than a luxury product to be sold to the highest bidder. The internet was not inevitable; it required public investment, open standards, and competitive markets. AI may need the same.

What we do in the next few years matters. If the dominant business model becomes API credits sold to enterprise buyers, the elite-only future feels more likely. If progress instead drives down per-token costs and enables local deployment, we get closer to the public-commodity outcome. The technology is still young enough that the path is not set in stone, but the window for shaping it is closing.

For builders and technologists, this means treating efficiency as a first-class goal, not an afterthought. It means supporting open models and smaller architectures that can run without a data center. And it means advocating for access policies that recognize AI as infrastructure, not just a product. The promise of AI was never that it would make the rich richer. It was that intelligence itself could become abundant. Whether that promise is fulfilled is up to us.