OpenAI CEO Sam Altman speaks to media following a Q&A at the OpenAI data center in Abilene, Texas, U.S., Sept. 23, 2025.
Shelby Tauber | Reuters
Nvidia’s massive investment in OpenAI, announced earlier this week, will channel billions of dollars into the artificial intelligence startup. But much of that money is set to flow right back to Nvidia: sources say a substantial portion of the capital will go toward paying for the chipmaker’s processors, underscoring how deeply the partnership depends on Nvidia’s hardware.
The sweeping agreement between the two companies was light on detail. The framework outlines an investment of up to $100 billion, tied to the phased build-out of AI supercomputing facilities over the coming years, with the first deployment slated for the second half of 2026.
While the rollout schedule and the cost of individual data centers remain to be clarified, OpenAI intends to pay for its use of Nvidia’s graphics processing units (GPUs) through lease arrangements rather than conventional upfront purchases. Leasing lets OpenAI spread the expense over the GPUs’ operational life, potentially up to five years, and shifts a significant share of the financial burden and risk to Nvidia, according to sources familiar with the terms who requested anonymity because the details are confidential.
Nvidia CEO Jensen Huang characterized this week’s agreement as “monumental,” estimating the capital expenditure for an AI data center with a gigawatt of capacity to be approximately $50 billion, with GPUs accounting for around $35 billion of that total. By opting for a leasing model, OpenAI can distribute its costs over the GPUs’ lifespan, potentially freeing up capital for other crucial areas, such as talent acquisition and software development platforms.
According to sources, the initial $10 billion tranche will give OpenAI a near-term boost as it deploys its first gigawatt of computing capacity, an accelerated push to meet growing demand for AI services.
Although Nvidia’s investment can also support OpenAI’s recruitment, marketing, and operations, most of it will go toward the compute infrastructure needed to build and train large language models and run AI workloads, underscoring the central role GPUs play in powering AI.
As a startup with neither an investment-grade credit rating nor positive cash flow, OpenAI has struggled to secure funding for projects of this scale. Its executives have said openly that equity is the most expensive way to finance data centers, and the company is also exploring debt financing to fund the rest of its expansion plans. The Nvidia lease arrangement may make future debt rounds more attractive by improving OpenAI’s cash flow position.
Nvidia’s flexible leasing structure and long-term commitment may also make OpenAI a more attractive borrower: the agreement signals confidence in OpenAI’s future revenue, which could translate into more favorable loan terms from banks.
An Nvidia spokesperson declined to comment.
Managing the Compute Crunch
Speaking from Abilene, Texas, where the first data center is being built, OpenAI CFO Sarah Friar highlighted the partnerships with Oracle and Nvidia. According to Friar, Oracle is leasing the equipment in Abilene, and OpenAI will ultimately pay for all of the operations.
“Entities such as Oracle are leveraging their balance sheets to establish these remarkable data centers,” Friar stated. “In Nvidia’s case, they are injecting equity to catalyze the initiative. More importantly, they will receive payment for each deployed chip.”
Friar emphasized the crucial role of these partnerships in alleviating the mounting scarcity of computing resources.
“Our collective emphasis must be on addressing the prevailing compute shortage,” Friar declared. “As the business expands, we will undoubtedly possess the capacity to finance our future needs – including increased compute power and expanded revenue streams.”
The steel frame of data centers under construction during a tour of the OpenAI data center in Abilene, Texas, U.S., Sept. 23, 2025.
Shelby Tauber | Reuters
The agreement between OpenAI and Nvidia has also raised questions about how the AI boom is being financed.
Nvidia’s rise to become one of the world’s most valuable companies has been driven by robust GPU sales not only to OpenAI but also to tech giants such as Google, Meta, Microsoft, and Amazon. Meanwhile, OpenAI’s climb to a $500 billion private-market valuation has been fueled by outside investment, which has allowed the company to make heavy capital investments while developing its models and products, including ChatGPT.
Jamie Zakalik, an analyst at Neuberger Berman, offered a skeptical take, noting that the Nvidia arrangement amounts to OpenAI raising capital and then spending it with the very company that provided the financing.
Zakalik said investors are wary of the “interdependent nature of this arrangement enhancing profits and figures across the board,” cautioning that “it isn’t generating anything tangible.”
Asked about these concerns, Altman told CNBC the company is focused on driving real demand.
“We need to keep selling services to consumers and businesses — and building these great new products that people pay us a lot of money for,” he said. “As long as that keeps happening, that pays for a lot of these data centers, a lot of chips.”
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/9894.html