Qualcomm announced Monday that it will release new artificial intelligence accelerator chips, marking new competition for Nvidia, which has so far dominated the market for AI semiconductors.
Qualcomm shares soared 15% following the news.
The AI chips mark a shift for Qualcomm, which has until now focused on semiconductors for wireless connectivity and mobile devices rather than massive data centers.
Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills up a full, liquid-cooled server rack.
Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI components of its smartphone processors, called Hexagon neural processing units, or NPUs.
“We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level,” Durga Malladi, Qualcomm’s general manager for data center and edge, said on a call with reporters last week.
The entry of Qualcomm into the data center world marks new competition in the fastest-growing market in technology: equipment for new AI-focused server farms.
Nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems based around AI chips, according to a McKinsey estimate.
The industry has been dominated by Nvidia, whose GPUs hold more than 90% of the market and whose sales have driven the company to a market cap above $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPTs, the large language models behind ChatGPT.
But companies such as OpenAI have been looking for alternatives, and earlier this month the startup announced plans to buy chips from the second-place GPU maker, AMD, and potentially take a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips focus on inference, or running AI models, rather than training, the process by which labs such as OpenAI create new AI capabilities from terabytes of data.
The chipmaker said its rack-scale systems would ultimately be cheaper for customers such as cloud service providers to operate, and that a rack draws 160 kilowatts, comparable to the power consumption of some Nvidia GPU racks.
Malladi said Qualcomm would also sell its AI chips and other parts separately, especially for clients such as hyperscalers that prefer to design their own racks. He said other AI chip companies, such as Nvidia or AMD, could even become clients for some of Qualcomm’s data center parts, such as its central processing unit, or CPU.
“What we have tried to do is make sure that our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” Malladi said.
The company declined to comment on the price of its chips, cards or racks, or on how many NPUs can be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain to supply data centers in the region with AI inference chips. Humain will be a customer for the new systems, committing to deploy as many racks as can draw up to 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in terms of power consumption and cost of ownership, as well as a new approach to memory. It said its AI cards support 768 gigabytes of memory, more than comparable offerings from Nvidia and AMD.
[Image: Qualcomm’s design for an AI server, the AI200. Source: Qualcomm]

