Nvidia Sees Accelerated Growth, Vera Rubin Enters Market

Nvidia continues its AI chip dominance, reporting an eleventh consecutive quarter of revenue growth above 55% and projecting a 77% surge this quarter. The company is accelerating the rollout of its next-generation Vera Rubin AI system, which it says will deliver ten times the performance per watt. Even so, competition is intensifying, with AMD’s upcoming Helios system and customers developing in-house chips. Nvidia’s CEO emphasizes that “compute equals revenue” in the booming AI landscape.

Nvidia Continues AI Chip Dominance With Strong Revenue Growth, Eyes Next-Gen Systems

Nvidia, already the world’s most valuable public company, has reported its eleventh consecutive quarter of revenue growth exceeding 55%, fueled by relentless demand for its AI chips from leading technology firms. The company’s growth trajectory is not only sustained but is now reaccelerating.

In its latest earnings report, Nvidia announced an anticipated year-over-year revenue surge of approximately 77% for the current quarter, projecting figures around $78 billion. This represents the fastest growth rate since the quarter ending January 2025, when expansion reached a slightly higher 78%. This forecast significantly surpassed the average analyst estimate of $72.6 billion, according to LSEG data. The previous quarter saw a robust 73% revenue jump, following a 62% expansion in the period before. Notably, Nvidia’s data center business, which houses its critical AI graphics processing units (GPUs), now accounts for over 91% of total sales.

This optimistic outlook is underpinned by Nvidia’s aggressive ramp-up of its next-generation rack-scale AI system, Vera Rubin, which will succeed the current Grace Blackwell platform. Nvidia claims that the Rubin system, equipped with 72 next-generation GPUs, is engineered to deliver ten times the performance per watt of its predecessor. Colette Kress, Nvidia’s chief financial officer, confirmed during the earnings call that the company began shipping Vera Rubin samples to select customers earlier this week. Nvidia anticipates that all major model builders and cloud providers will eventually adopt the new system. Kress also indicated that the company now expects this year’s growth to surpass its initial projection of a $500 billion revenue opportunity spanning both Blackwell and Rubin.

“We believe we have inventory and supply commitments in place to address future demand, including shipments extending into calendar 2027,” Kress stated.

Despite this strong performance, Nvidia’s shares saw minimal movement in after-hours trading, suggesting that the market had already priced in the company’s impressive results, given its nearly $5 trillion valuation driven by its commanding position in AI processors.

However, the competitive landscape is evolving. Advanced Micro Devices (AMD) is poised to introduce its first rack-scale AI system, Helios, later this year. Meta recently announced a significant commitment to deploy up to 6 gigawatts of AMD GPUs, with Helios shipments scheduled to commence in 2026. This move by Meta highlights the growing strategic importance of diversifying GPU suppliers.

Furthermore, Nvidia faces an emerging challenge from some of its largest customers, including Amazon and Google, as they increasingly develop their own in-house AI chips to power their data center operations. Nvidia acknowledged this risk in its annual filing, noting that “customers develop their own internal solution” as a potential threat to future revenue.

Looking beyond the current fiscal year, LSEG projects a significant deceleration, with growth tapering from 63% this year to 30%, 11.5%, and 3% over the following three years.
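To see what that deceleration implies, the quoted rates can be compounded on an indexed base. This is a minimal sketch using only the growth percentages from the article; the base of 100 is a hypothetical index, not an actual revenue figure.

```python
# Compound LSEG's projected year-over-year growth rates (63%, 30%,
# 11.5%, 3%) to show the implied revenue trajectory on an indexed base.

growth_rates = [0.63, 0.30, 0.115, 0.03]  # rates quoted in the article

def project_revenue(base: float, rates: list[float]) -> list[float]:
    """Apply each year-over-year growth rate cumulatively to a base figure."""
    levels = []
    level = base
    for r in rates:
        level *= 1 + r
        levels.append(round(level, 1))
    return levels

# Hypothetical base of 100 (indexed revenue) to show relative growth:
# even with sharply slowing growth, revenue still more than doubles.
print(project_revenue(100.0, growth_rates))
```

The point the numbers make: a falling growth *rate* still means rising revenue; only the pace of the climb slows.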

‘Compute Equals Revenue’: The Driving Force Behind AI’s Explosive Growth

For the time being, Nvidia’s growth trajectory is significantly outperforming its rivals. Tech giants and AI model developers are engaged in an intense race to build out their infrastructure to meet the escalating demand for artificial intelligence.

“In this new world of AI, compute equals revenue,” declared CEO Jensen Huang during the earnings call, a phrase he reiterated multiple times. He pointed specifically to the rapid adoption of agentic AI, a paradigm that moves beyond traditional generative AI by letting businesses build and deploy applications through simple text prompts. As evidence, he cited the enterprise traction of Anthropic’s Claude Cowork, which integrates with numerous applications, and OpenAI’s recent recruitment of OpenClaw developer Peter Steinberger, whose tool automates tasks ranging from email management to web browsing.

“Between Claude Cowork and OpenClaw, compute demand is skyrocketing, and the ChatGPT moment of agentic AI has arrived,” Huang emphasized.

Notably, Nvidia’s first-quarter forecast does not include any potential data center revenue from China. Ongoing uncertainty surrounding export controls has kept Nvidia from engaging with the world’s second-largest economy, despite a reported January indication from President Donald Trump that his administration would approve China sales of Nvidia’s H200 chip, with the U.S. government taking a 25% cut. Huang said in May that the Chinese AI market could reach approximately $50 billion within two to three years, calling any exclusion from it a “tremendous loss.”

“While small amounts of H200 products for China-based customers were approved by the U.S. government, we have yet to generate any revenue and we do not know whether any imports will be allowed into China,” Kress clarified on the call. “We are not assuming any data center compute revenue from China in our outlook.”

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/19390.html
