OpenAI’s Ambitious Infrastructure Push: Diversifying Beyond Nvidia
While Nvidia has long been the dominant supplier of AI chips, OpenAI, the company behind some of the most widely used generative AI models, is forging a more diverse supplier base to fuel its expansion. This strategic pivot is underscored by a series of significant infrastructure deals aimed at securing the vast computational power its next-generation AI systems will require and at cementing its position in the market.
The company’s ambitious growth trajectory, which has propelled it to a staggering $500 billion private market valuation, necessitates a monumental scale of processing capabilities. OpenAI’s strategy involves locking in substantial commitments with key players in the semiconductor and cloud computing sectors, ensuring a robust and scalable foundation for its AI development.
Nvidia: The Enduring Foundation
Nvidia remains a cornerstone of OpenAI’s infrastructure. The Santa Clara, California-based company, now the world’s most valuable, has been instrumental in OpenAI’s journey since the early days of large language model development. In a testament to this deep-rooted relationship, Nvidia committed $100 billion in September 2025 to support OpenAI’s infrastructure build-out, aiming to deploy at least 10 gigawatts of Nvidia systems. That capacity, roughly equivalent to the annual power consumption of 8 million U.S. households, is projected to translate into 4 to 5 million GPUs.
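Taken at face value, those headline figures imply a per-household and per-GPU power draw that can be checked with simple arithmetic. The sketch below uses only the numbers stated above; the per-unit results are illustrative, not figures from either company:

```python
# Back-of-the-envelope check on the Nvidia deal figures cited above.
# Inputs come from the article; the derived per-unit numbers are illustrative.

total_power_w = 10e9             # 10 gigawatts of Nvidia systems
households = 8e6                 # stated U.S.-household equivalence
gpus_low, gpus_high = 4e6, 5e6   # projected GPU count range

# Implied average household draw (~1.25 kW, consistent with typical U.S. usage)
watts_per_household = total_power_w / households

# Implied system-level power per GPU, including cooling and networking overhead
watts_per_gpu_low = total_power_w / gpus_high    # at 5 million GPUs
watts_per_gpu_high = total_power_w / gpus_low    # at 4 million GPUs

print(f"Per household: {watts_per_household:.0f} W")
print(f"Per GPU: {watts_per_gpu_low:.0f}-{watts_per_gpu_high:.0f} W")
```

The implied 2,000-2,500 W per GPU is a system-level figure (chip plus cooling, power delivery, and networking), which is why it exceeds the rated draw of any single accelerator.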
However, the path to full realization of this agreement has seen some uncertainty. During Nvidia’s November 2025 earnings call, the company indicated there was “no assurance” that the agreement with OpenAI would progress beyond an announcement to a formal contract. Nonetheless, the first $10 billion tranche is slated to be invested upon completion of the first gigawatt phase, with subsequent tranches contingent on OpenAI’s then-current valuation.
AMD: A Strategic Alliance for Scale
In October 2025, OpenAI unveiled a significant partnership with Advanced Micro Devices (AMD), committing to deploy six gigawatts of AMD’s GPUs over several years and across multiple hardware generations. This deal includes a warrant for up to 160 million shares of AMD common stock, potentially granting OpenAI a roughly 10% stake in the company, with vesting milestones tied to deployment volume and AMD’s share price. The initial rollout of one gigawatt of chips is anticipated in the latter half of 2026. AMD CEO Lisa Su emphasized the critical nature of such collaborations in assembling an ecosystem to deliver cutting-edge technologies.
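The quoted ~10% stake can be sanity-checked against the warrant size. The share count for AMD used below (on the order of 1.6 billion shares outstanding in late 2025) is an assumption for illustration, not a figure from the article:

```python
# Rough check on the AMD warrant arithmetic.
# shares_outstanding is an assumed ballpark (~1.6B), NOT stated in the article.

warrant_shares = 160e6           # warrant for up to 160 million shares
shares_outstanding = 1.6e9       # assumption; actual figure varies by date

# Fully diluted stake if the warrant vests in full
stake = warrant_shares / (shares_outstanding + warrant_shares)
print(f"Implied stake: {stake:.1%}")
```

Under that assumption the fully diluted stake lands near 9%, consistent with the article’s "roughly 10%" characterization.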
Broadcom: Custom Solutions for AI Acceleration
Later in October 2025, OpenAI announced a collaboration with Broadcom, which designs custom AI accelerators referred to as XPUs. The partnership, in development for over a year, involves deploying 10 gigawatts of these custom accelerators. Broadcom aims to begin deployments in the second half of 2026, targeting completion by the end of 2029. Despite the long-term vision, Broadcom CEO Hock Tan indicated that significant revenue from the partnership is not expected in 2026, characterizing it as a multi-year alignment.
Cerebras: A Bold Move into Wafer-Scale Computing
Most recently, in January 2026, OpenAI announced a substantial $10 billion-plus deal with Cerebras Systems, a less established but innovative player in the AI chip market. This agreement entails the deployment of 750 megawatts of Cerebras’ AI chips through 2028. Cerebras specializes in large wafer-scale chips, claiming they can deliver response times up to 15 times faster than GPU-based systems. This partnership represents a significant opportunity for Cerebras, which recently withdrew its IPO plans.
Looking Ahead: Diversification and Potential Partnerships
OpenAI’s proactive approach extends to its cloud infrastructure as well. A $38 billion cloud deal with Amazon Web Services (AWS) in November 2025 will see OpenAI leverage existing AWS data centers while also benefiting from the build-out of additional infrastructure. Amazon is reportedly in talks to invest over $10 billion in OpenAI, potentially leading to the adoption of Amazon’s own AI chips, such as its Inferentia and Trainium processors.
While OpenAI has an existing cloud computing agreement with Google Cloud, it has said it has no plans to use Google’s in-house Tensor Processing Units (TPUs). The company also explored a potential investment and hardware partnership with Intel years ago, an opportunity Intel reportedly declined. Intel has since announced its own data center GPU, codenamed Crescent Island, with customer sampling expected in the second half of 2026, as it seeks to gain ground in the AI chip arena.
OpenAI’s multifaceted strategy of securing diverse hardware and cloud resources signals a clear intent to maintain its leadership in the rapidly evolving AI landscape, ensuring it has the foundational power to drive continuous innovation.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/15861.html