How Long Will AI Products Stay This Cheap?

Every big consumer tech cycle has its free-trial era. In 1999, in a few cities, it was the magic of at-cost same-day delivery from Kozmo, which lost money on every order in a bid for growth. Through the 2000s, it was the rise of powerful search engines and social networks with relatively few ads. In the 2010s, the VC-backed lose-money-to-make-money strategy returned to the physical world, with services like Uber (and a hundred Uber-fors) hemorrhaging money on underpriced taxis, food delivery, and DTC goods in a multibillion-dollar experiment in customer acquisition memorialized as a fleeting “millennial lifestyle subsidy.” As with just about everything else, after 2020, things, you know, got a little hazy for a while — although you might argue that low interest rates produced so many examples of unsustainably priced services and goods that it’s hard to pick just one.

Companies do this, and investors allow it, because it sometimes works. In the process, early adopters can reap small but real benefits from strategic underpricing. Kozmo was beloved by its tiny customer base but collapsed into a punch line; Amazon, its money-losing contemporary, is one of the largest companies in the world. Most free (and ad-free) social-media apps failed, but a few of them became so dominant that they can now add as much friction and advertising to their products as they want, making longtime users nostalgic for how they used to be. A lot of Uber-type businesses failed, but Uber made it through. Rides cost more because Uber needs to charge more and can, even if longtime riders remember paying $15 for what’s now a $50 ride.

In 2025, we’re somewhere in the front half of a historically massive tech-investment cycle in and around AI, and the pattern is showing signs of repeating. An “interesting situation has returned where free-to-access AI is very close to the frontier,” writes Wharton professor and AI evangelist Ethan Mollick. There are nearly as many pricing strategies in AI as there are AI products to pay for — subscriptions, ads, metered usage, and combinations of all three — and the products themselves are complicated, tiered, and broadly unfamiliar. Still, if you’re looking for it, you can see the outline of a familiar situation: OpenAI, Google, Meta, Microsoft, and smaller firms like Anthropic are losing massive amounts of money by giving away their AI products or selling them at a loss. “We are in the era of $5 Uber rides anywhere across San Francisco but for LLMs,” wrote early OpenAI engineer Andrej Karpathy in response to Mollick, “weee.” Chatbots are free, programming assistance is cheap, and attention-grabbing, money-losing AI toys are everywhere. AI is in its free(ish) trial era.

The sense of déjà vu is strongest at companies that have done this sort of thing before. Meta is slotting chatbots and AI-powered tools into virtually all of its products in an attempt to amass — re-amass? — as many users as possible. As with acquired products like Instagram and WhatsApp, attempts to turn a profit are delayed but inevitable. There are “pretty clear monetization opportunities here over time, including paid recommendations and including a premium offering,” the company’s finance chief said in January. (That’s ads and subscriptions, in plain English.) Google charges developers for access to its models and also sells subscriptions to Gemini, but the company is installing free AI features in virtually all of its popular products, including Android, Search, Gmail, and Docs. OpenAI’s confusing and constantly changing pricing currently gives unpaid users limited access to near-cutting-edge models and unlimited access to its basic ChatGPT and Search functions, with more powerful models and more features tiered at $20 and $200 per month, prices at which the company still loses money, according to CEO Sam Altman.

Altman is projecting confidence by making a joke out of this and has made no secret of the fact that OpenAI is losing a lot of money to grow. And the trend Mollick is referring to — in just a few disorienting years, the industry has swung between moving more AI tools behind paywalls and making everything free by default — might not last. In July of last year, Altman was talking about “intelligence too cheap to meter,” a phrase popularized in debates about nuclear power. (To some extent, competition between AI firms can be understood as a battle to control the next version of the omnibox, the universal computer interface that has made Google hundreds of billions of dollars; in this more familiar territory, providing services for free, or very cheaply, is a necessary part of the plan.) This week, after some major pivots in how these companies develop and deploy models, and somewhat to Altman’s own surprise, he’s floating very different ideas about what future AI products might cost.

As familiar as some of this is, there’s a lot about this cycle that isn’t. Unlike, say, food delivery, the cost of serving even relatively recent AI models has tended to drop precipitously, while the cost of creating new ones has continued — with an occasional trillion-dollar caveat — to go up. (You can run freely available models that would have been considered cutting-edge just a year ago on a reasonably powerful PC.)

Also unlike Uber or Seamless or even Facebook — but perhaps a bit more like Amazon — the quality of the product on offer is tied, at least in theory, not just to scale but to massive up-front and ongoing material investment (for Amazon, in logistics infrastructure; for OpenAI, in GPUs). The actual costs of providing an Uber ride have been relatively stable for the life of the company, and higher current prices are just closer to the actual cost of hiring a person with a car to drive you somewhere, plus whatever Uber needs to add to cover its other operations and turn a profit. Uber was undercharging, and now it’s not. OpenAI is undercharging, too, and one day it might not — the free trial might come to an end, and its mainstream product might be filled with ads or cost more to use — but it could also be a fundamentally different product, with different capabilities and value to its customers. Karpathy’s suggestion is perhaps a bit narrower than that. Now, pretty much anyone can play with the latest AI tools. Soon, in part because of new models that require a lot of computing power not just to train but to use, this might not be true. This dovetails with recent reporting that OpenAI is considering much higher price tiers for its next generation of AI “agents”:

OpenAI executives have told some investors it planned to sell low-end agents at a cost of $2,000 per month to “high-income knowledge workers”; mid-tier agents for software development costing possibly $10,000 a month; and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who’s spoken with executives.

OpenAI is claiming that it sees a path to creating tools to match these prices, and some others in the industry do as well — in other words, this is the lawyered, quantified, pitch-deck version of looser public discussions about AGI, the exhilarating and/or terrifying (and fundamentally fuzzy) prospect of which has pulled hundreds of billions of dollars into AI investment.

According to the Information, though, it’s monetizing ChatGPT, not suddenly vaporizing a significant percentage of knowledge work, that’s core to the company’s actual plans, at least for the next couple of years:

Overall, OpenAI told investors it expects revenue to more than triple this year, from $3.7 billion to more than $12.5 billion … By 2026, OpenAI expects revenue to hit $28 billion. It expects most of its revenue in 2025 and 2026 to come from ChatGPT, with the rest of its sales coming from software developer tools and AI agents.

Whether OpenAI can scale its technology to “general intelligence” or meaningfully deploy its agents “into the workforce” is a fascinating question that the company enjoys drawing attention to, but in its most optimistic projections — again, regarding products it has not yet built for customers it has not yet identified — this sort of stuff represents up to a quarter of its revenues “in the long run.” In the meantime, OpenAI intends to figure out how to make money from its one popular product that actually exists. For most people who use ChatGPT, it’s a novel tool that can answer questions, help with homework, and search the web through an interface that is, in 2025, unusually uncluttered and clean. It’s also something they use for free, with increasing awareness of multiple free and comparable alternatives. They’re enjoying the AI trial period, and despite broad uncertainty about the future of the underlying technology, when it comes to the tools they’re already using, they have a pretty good sense of what comes next.

