Sam Altman didn’t set out to compete with Nvidia.
OpenAI began with a simple bet that better ideas, not better infrastructure, would unlock artificial general intelligence. But that view shifted years ago, as Altman realized that more compute, or processing power, meant more capability — and ultimately, more dominance.
On Monday morning, he unveiled his latest blockbuster deal, one that moves OpenAI squarely into the chipmaking business and further into competition with the hyperscalers.
OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators, purpose-built for its own models. It’s a big shift for a company that once believed intelligence would come from smarter algorithms, not bigger machines.
“In 2017, the thing that we found was that we were getting the best results out of scale,” the OpenAI CEO said in a company podcast on Monday. “It wasn’t something we set out to prove. It was something we really discovered empirically because of everything else that didn’t work nearly as well.”
That insight — that the key was scale, not cleverness — fundamentally reshaped OpenAI.
Now, the company is expanding that logic even further, teaming up with Broadcom to design and deploy racks of custom silicon optimized for OpenAI’s workloads.
The deal gives OpenAI deeper control over its stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into lasting platforms.
Altman’s rapid series of deals and product launches is assembling a complete AI ecosystem, much like Apple did for smartphones and Microsoft did for PCs, with infrastructure, hardware, and developers at its core.

Hardware
Through its partnership with Broadcom, OpenAI is co-developing custom AI accelerators, optimized for inference and tailored specifically to its own models.
Unlike Nvidia and AMD chips, which are designed for broader commercial use, the new silicon is built for vertically integrated systems, tightly coupling compute, memory, and networking into full rack-level infrastructure. OpenAI plans to begin deploying them in late 2026.
The Broadcom deal is similar to what Apple did with its M-series chips: control the semiconductors, control the experience.
But OpenAI is going even further and engineering every layer of the hardware stack, not just the chip.
The Broadcom systems are built on Broadcom's Ethernet networking stack and designed to accelerate OpenAI's core workloads, giving the company a physical advantage that's deeply entangled with its software edge.
At the same time, OpenAI is pushing into consumer hardware, a rare move for a model-first company.
Its $6.4 billion all-stock acquisition of Jony Ive’s startup, io, brought the legendary Apple designer into its inner circle. It was a sign that OpenAI doesn’t just want to power AI experiences, it wants to own them.
Ive and his team are exploring a new class of AI-native devices designed to reshape how people interact with intelligence, moving beyond screens and keyboards toward more intuitive, engaging experiences.
Reports of early concepts include a screenless, wearable device that uses voice input and subtle haptics, envisioned more as an ambient companion than a traditional gadget.
OpenAI’s twin bet on custom silicon and emotionally resonant consumer hardware puts two more layers of the stack under its direct control.

Blockbuster deals
OpenAI’s chips, data centers and power deals fold into one coordinated campaign, called Stargate, that aims to provide the physical backbone of AI.
In the past three weeks, that campaign has gone into overdrive with several major deals:
- OpenAI and Nvidia have agreed to a framework for deploying 10 gigawatts of Nvidia systems, backed by a proposed $100 billion investment.
- AMD will supply OpenAI with multiple generations of its Instinct GPUs under a 6-gigawatt deal. OpenAI can acquire up to 10% of AMD if certain deployment milestones are met.
- Broadcom’s custom inference chips and racks are slated to begin deployment in late 2026, as part of Stargate’s first 10‑gigawatt phase.
Taken together, these deals represent OpenAI’s push to root the future of AI in infrastructure it can call its own.
“We are able to think from etching the transistors all the way up to the token that comes out when you ask ChatGPT a question, and design the whole system,” Altman said. “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models — all of that.”
Whether or not OpenAI can deliver on every promise, the scale and speed of Stargate are already reshaping the market, adding hundreds of billions of dollars in market cap for its partners and establishing OpenAI as the de facto market leader in AI infrastructure.
None of its rivals appears able to match the pace or ambition. And that perception alone is proving a powerful advantage.
Developers
OpenAI’s DevDay made it clear that the company isn’t just focused on building the best models — it’s betting on the people who build with them.
“OpenAI is trying to compete on several fronts,” said Gil Luria, Head of Technology Research at D.A. Davidson, pointing to its frontier model, consumer-facing chat product, and enterprise API platform. “It is competing with some combination of all the large technology companies in one or more of these markets.”
Developer Day, he said, was aimed at helping companies incorporate OpenAI models into their own tools.
“The tools they presented were very impressive — OpenAI has been terrific at commercializing their products in a compelling and easy-to-use manner,” he added. “Having said that, they are fighting an uphill battle, since the companies they are competing with have significantly more resources — at least for now.”
The main competition, Luria said, comes from Microsoft Azure, AWS and Google Cloud.
Developer Day signaled just how aggressively OpenAI is leaning in.
The company rolled out AgentKit for developers, new API bundles for enterprise, and a new app store that offers direct distribution inside ChatGPT — which now reaches 800 million weekly active users, according to OpenAI.
“It’s the Apple playbook: own the ecosystem and become a platform,” said Menlo Ventures partner Deedy Das.

Until now, most companies treated OpenAI as a tool in their stack. But with new features for publishing, monetizing, and deploying apps directly inside ChatGPT, OpenAI is pushing for tighter integration — and making it harder for developers to walk away.
Microsoft CEO Satya Nadella pursued a similar strategy after taking over from Steve Ballmer.
To build trust with developers, Nadella leaned into open source and acquired GitHub for $7.5 billion, a move that signaled Microsoft’s return to the developer community.
GitHub later became the launchpad for tools like Copilot, anchoring Microsoft back at the center of the modern developer stack.
“OpenAI and all the big hyperscalers are going for vertical integration,” said Ben van Roo, CEO of Legion, a startup building secure agent frameworks for defense and intelligence use cases.
“Use our models and our compute, and build the next-gen agents and workflows with our tools. The market is massive. We’re talking about replaying SaaS, big systems of record, and literally part of the labor force,” said van Roo.
SaaS stands for software as a service, a model for delivering enterprise software over the internet that underpins companies such as Salesforce, Oracle and Adobe.
Legion’s strategy is to stay model-agnostic and focus on secure, interoperable agentic workflows that span multiple systems. The company is already deploying inside classified Department of Defense environments and embedding across platforms like NetSuite and Salesforce.
But that same shift also introduces risk for the model makers.
“Agents and workflows make some of the massive LLMs both powerful and maybe less necessary,” he noted. “You can build reasoning agents with smaller and specific workflows without GPT-5.”
The tools and agents built with leading LLMs have the potential to replace legacy software products from companies like Microsoft and Salesforce.
That’s why OpenAI is racing to build the infrastructure around its models. It’s not just to make them more powerful, but harder to replace.
The real bet isn’t that the best model will win, but that the company with the most complete developer loop will define the next platform era.
And that’s the vision for ChatGPT now: Not just a chatbot, but an operating system for AI.
