Meta is poised to significantly expand its artificial intelligence infrastructure with a substantial new agreement to deploy millions of Nvidia chips across its U.S. data centers. This sweeping deal, announced Tuesday, includes not only Nvidia’s cutting-edge Grace CPUs as standalone processors but also its forthcoming Vera Rubin systems.
Mark Zuckerberg, Meta’s CEO, articulated the company’s ambitious vision, stating the expanded partnership fuels their ongoing commitment “to deliver personal superintelligence to everyone in the world,” a goal he outlined in July. While financial specifics of the agreement remain undisclosed, industry analysts anticipate a considerable investment, with estimates suggesting the deal is “certainly in the tens of billions of dollars.” This aligns with Meta’s previously announced intention to invest up to $135 billion in AI throughout 2026, with a significant portion of capital expenditures expected to be directed towards this Nvidia build-out.
This collaboration deepens an already decade-long relationship between the two Silicon Valley titans. However, the current agreement signifies a more profound technological alliance. A key innovation within this partnership is Meta’s role as the inaugural large-scale deployer of Nvidia’s Grace CPUs as independent units within its data centers, a departure from their traditional integration alongside GPUs. These standalone Grace CPUs are engineered to efficiently handle inference and agentic workloads, acting as a crucial component alongside Nvidia’s Grace Blackwell and Vera Rubin platforms.
The Vera Rubin systems are slated for deployment by Meta in 2027. This multiyear commitment underscores Meta’s broader strategy to invest $600 billion in the U.S. by 2028, focusing on data center expansion and the requisite infrastructure. The company is currently developing 30 data centers, with 26 located domestically. Among these, the Prometheus site in New Albany, Ohio, a 1-gigawatt facility, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana, are positioned to be its largest AI data centers.
Beyond processing power, the agreement encompasses Nvidia’s Spectrum-X Ethernet switches, critical for interconnecting GPUs within large-scale AI environments. Meta will also leverage Nvidia’s security technologies for AI features within WhatsApp.
While Nvidia remains a primary partner, Meta is diversifying its AI hardware strategy. Reports indicate the company is exploring the integration of Google’s tensor processing units in its data centers by 2027. Meta also continues to develop its in-house silicon and uses chips from AMD, which secured a significant partnership with OpenAI last October. These moves reflect a broader industry push to secure alternative chip suppliers amid supply constraints: demand for Nvidia’s Blackwell GPUs has led to extended back-orders, and production of the next-generation Rubin GPUs is only just commencing, making Meta’s secured supply a strategic advantage.
To further optimize performance, engineering teams from both Nvidia and Meta will collaborate on “deep codesign” initiatives, aiming to accelerate the development of advanced AI models for Meta’s platforms. This comes as Meta is reportedly developing a new frontier model, codenamed Avocado, intended to succeed its Llama AI technology. Recent Llama releases, however, have reportedly failed to win over developers, adding a layer of scrutiny to Meta’s AI efforts.
Meta’s stock performance has experienced volatility recently, with its AI strategy drawing considerable attention from Wall Street. Following a significant downturn in October after announcing aggressive AI spending plans, the stock saw a notable rebound in January after exceeding sales expectations.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/18594.html