OpenAI’s latest model will change the economics of software


Yet since o3 was unveiled, one point of consensus has emerged. The model, as well as its predecessor, o1 (o2 was skipped because it is the name of a European mobile network), produces better results the more “thinking” it does in response to a prompt. More thinking means more computing power, and a higher cost per query. As a result a big change is afoot in the economics of a digital economy built on providing cheap services to large numbers of people at low marginal cost, thanks to free distribution on the internet. Every time models become more expensive to query, the zero-marginal-cost era is left further behind.

Investors value OpenAI like a tech darling: it is worth $157bn, going by a recent fundraising. They hope that thanks to the success of products like ChatGPT, it will become the next trillion-dollar giant. But the higher costs of state-of-the-art models, as well as other pressures from suppliers, distributors and competitors, suggest model-making may not confer the monopoly-like powers enjoyed by big tech. “One very important thing to understand about the future: the economics of AI are about to change completely,” said François Chollet, an AI researcher, on X, a social-media site, on the day o3 was made public.

Mr Chollet has helped drum up excitement about o3. In June he launched a $1m prize for models that could run a gauntlet he had created five years earlier, called the “Abstraction and Reasoning Corpus”, or ARC. It is a set of simple-looking visual-reasoning puzzles intended to be “easy for humans and impossible for modern AI”. The prize was not just a challenge for its own sake: Mr Chollet said beating ARC would be a “critical” step towards building artificial general intelligence, meaning machines that can match or beat humans at a wide range of tasks.

Six months later OpenAI aced the test. Its o3 model achieved a breakthrough score of 91.5%. Its success in the challenge showed a step-change in AI’s ability to adapt to novel tasks, Mr Chollet said. The new model is not just better; it is different. Like o1, it uses a “test-time compute” approach, which produces better results the more time that is spent on inference (when a trained AI model answers queries). Rather than simply producing an answer as quickly as it can spit it out, o3 is built to, in effect, think harder about the question.
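
OpenAI has not published how o3 spends its extra inference-time compute. As a rough illustration of the general idea behind test-time compute, the sketch below uses best-of-n sampling: generate several candidate answers and keep the highest-scoring one. The generate_candidate and score functions are hypothetical placeholders, not OpenAI’s method or API.

```python
import random

def generate_candidate(prompt: str) -> str:
    # Stand-in for one sampled reasoning chain plus answer from a model.
    # In a real system this would be a stochastic call to an LLM.
    return f"candidate answer {random.randint(0, 9)}"

def score(prompt: str, candidate: str) -> float:
    # Stand-in for a verifier or reward model rating the candidate.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # More "thinking" means more samples, hence more compute (and cost) per
    # query, in exchange for a better chance that one candidate is right.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("Solve this ARC puzzle: ...", n=8))
```

In this framing, the compute spent per query scales roughly with n, which is why harder “thinking” maps directly onto a higher marginal cost.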

That is where the higher costs come in. Mr Chollet set a limit of $10,000 on the amount that contestants can spend on computing power to answer the 400 questions in his challenge. When OpenAI put forward a model under the limit, it spent $6,677 (about $17 per question) to score 82.8%. The score of 91.5%, achieved by o3, came from blowing the budget. The company didn’t reveal the amount spent, but said that the expensive version of the process used 172 times the amount of “compute” as the cheaper approach—suggesting around $3,000 to solve a single query that takes humans seconds.
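
The arithmetic behind those figures is simple to reproduce; the sketch below merely restates the numbers reported above.

```python
# Low-compute run: $6,677 spent across the 400 ARC questions.
low_compute_total = 6_677
questions = 400
cost_per_question = low_compute_total / questions            # about $16.7, i.e. "about $17"

# OpenAI said the high-scoring run used 172 times as much compute.
compute_multiplier = 172
high_compute_per_question = cost_per_question * compute_multiplier  # about $2,871

print(f"Low-compute cost per question:  ${cost_per_question:,.2f}")
print(f"High-compute cost per question: ${high_compute_per_question:,.0f}")  # roughly $3,000
```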

Past AI models had already challenged the low-marginal-cost norm of the software industry, because answering queries required substantially more processing power than using equivalent tools like a search engine. But the costs of building large language models and running them were small enough in absolute terms that OpenAI could still give free access.

With the latest models that is no longer the case. OpenAI restricts the “pro” version of the o1 model to users on its $200-a-month subscription tier (and loses money, according to Sam Altman, its boss, because customers are spending more on queries than the company had budgeted for). Pierre Ferragu of New Street Research, a firm of analysts, reckons that OpenAI may charge as much as $2,000 a month for full access to o3.

The power of such models relies on them bringing a version of the sector’s “scaling laws” closer to the end user. Until now, progress in AI had relied on bigger and better training runs, with more data and more computing power creating more intelligence. But once a model was trained, it was hard to use extra processing power well. As o3’s success on the ARC challenge shows, that is no longer the case. Scaling laws appear to have moved from training models to inference.
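
One way to see why this matters for costs: training compute is a one-off expense spread across every future query, whereas test-time compute is paid afresh on each query. The figures below are purely illustrative assumptions, not OpenAI’s actual costs.

```python
# Illustrative, made-up figures for a stylised comparison.
training_cost = 100_000_000        # one-off cost of a training run (assumed)
lifetime_queries = 1_000_000_000   # queries over which that cost is amortised (assumed)
amortised_cost_per_query = training_cost / lifetime_queries   # shrinks as usage grows

single_pass_inference = 0.01                        # marginal cost of a quick, one-shot answer (assumed)
reasoning_inference = single_pass_inference * 172   # "thinking" 172x harder raises it in step

print(f"Amortised training cost per query: ${amortised_cost_per_query:.2f}")
print(f"Single-pass inference per query:   ${single_pass_inference:.2f}")
print(f"Reasoning inference per query:     ${reasoning_inference:.2f}")
```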

Such developments change the economics facing model-makers, such as OpenAI. A dependence on more processing power strengthens their suppliers, such as Nvidia, a maker of specialist AI chips. It also benefits the distributors of AI models, notably cloud-service providers like Amazon, Microsoft and Alphabet. And it may justify the fortunes these tech giants continue to invest in data centres, because more inference will need more computing power. On January 21st Donald Trump, America’s president, announced “Stargate”, a huge private-sector project to build data centres in America involving OpenAI. The firm is being squeezed from both sides.

Then there is competition. Google has released its own reasoning model, Gemini 2.0 Flash Thinking, and other tech firms probably will, too. Open-source models are expected to follow. Customers will be able to draw on multiple models from different providers. And although generative-AI models may improve a little through their interactions with customers, they lack true network effects, unlike the search and social-media products Google and Facebook built in an earlier era.

High marginal costs mean the model-builders will have to generate meaningful value in order to charge premium prices. The hope, says Lan Guan of Accenture, a consultancy, is that models like o3 will support AI agents that individuals and companies will use to increase their productivity. Even a high price for the use of a reasoning model may be worth it compared with the cost of hiring, say, a fully fledged maths PhD. But that depends on how useful the models are.

Different use cases may also lead to more fragmentation. Jeremy Schneider of McKinsey, a consultancy, says providing AI services to corporate customers will require models that are specialised for the needs of each enterprise, rather than general-purpose ones such as ChatGPT.

Instead of domination by one firm, some expect model-making to be more like an oligopoly, with high barriers to entry but no stranglehold—or monopoly profits. For now, OpenAI is the leader, but one of its main rivals, Anthropic, is reportedly raising money at a $60bn valuation, and xAI, majority-owned by Elon Musk, is worth $45bn. That suggests there are high hopes for them, too. With o3 OpenAI has demonstrated its technical edge, but its business model remains untested.

 

© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com

