Nvidia President and CEO Jensen Huang speaks about NVIDIA Omniverse as he delivers the keynote address during the Nvidia GTC (GPU Technology Conference) at the Walter E. Washington Convention Center on Oct. 28, 2025 in Washington, DC.
Anna Moneymaker | Getty Images
As tech giants embark on a projected $1 trillion spending spree over the next half-decade to bolster their AI data center infrastructure, a critical accounting concept looms large: depreciation. This financial mechanism, the allocation of an asset’s cost over its useful life, is under intense scrutiny given the unprecedented scale and pace of AI development.
The question of how long hundreds of thousands of Nvidia graphics processing units (GPUs), the workhorses of AI, will remain valuable or functionally relevant is now paramount for executives and investors alike. The depreciation timeline directly impacts profitability, influencing investor confidence and the overall economic feasibility of these massive AI buildouts.
Infrastructure titans such as Google, Oracle, and Microsoft have historically estimated server lifespans at up to six years. However, the rapidly evolving nature of AI technology throws this established practice into uncertainty. Microsoft, in its latest annual filing, acknowledges a broader two-to-six-year lifespan for its computer equipment, reflecting the inherent volatility in the current technological landscape.
This uncertainty poses a substantial challenge for those financing the AI boom. A longer depreciation period, premised on sustained equipment value, spreads the cost over more years and cushions reported profits. A shorter lifespan front-loads the expense, weighing on earnings and, in turn, on investors’ willingness to fund further buildout.
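For a sense of the magnitudes involved, here is a minimal sketch in Python of straight-line depreciation, the simplest and most common method, applied to a hypothetical $10 billion GPU purchase. The cost figure and the zero salvage value are assumptions chosen for illustration, not numbers drawn from any company’s filings.

```python
# Illustrative only: straight-line depreciation of a hypothetical GPU fleet.
# The $10B cost and zero salvage value are assumptions, not company figures.

def straight_line_annual_expense(cost: float, useful_life_years: int, salvage: float = 0.0) -> float:
    """Annual depreciation expense = (cost - salvage) / useful life."""
    return (cost - salvage) / useful_life_years

COST = 10_000_000_000  # hypothetical $10B of AI servers

for life in (2, 3, 5, 6):
    expense = straight_line_annual_expense(COST, life)
    print(f"{life}-year life: ${expense / 1e9:.2f}B hits earnings each year")

# Output:
# 2-year life: $5.00B hits earnings each year
# 3-year life: $3.33B hits earnings each year
# 5-year life: $2.00B hits earnings each year
# 6-year life: $1.67B hits earnings each year
```

Stretching the assumed life from three years to six cuts the annual charge roughly in half on the same hardware, which is exactly why the estimate matters so much to reported profits.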
AI GPUs present a unique depreciation conundrum. Unlike traditional data center hardware, AI-specific processors are relatively new to the market. Nvidia’s foray into AI-centric data center chips began around 2018, with the current AI surge ignited by ChatGPT’s emergence in late 2022. This catalyst fueled Nvidia’s data center revenue from $15 billion to an astounding $115 billion in the year ending January 2025.
Haim Zaltzman, vice chair of Latham & Watkins’ emerging companies and growth practice, highlights the absence of a well-defined depreciation precedent for GPUs. “Is it three years, is it five, or is it seven?” poses Zaltzman, whose practice involves GPU financing. “It’s a huge difference in terms of how successful it is for financing purposes,” he asserts, underscoring the financial implications of accurately assessing GPU lifecycles.
Some Nvidia customers remain optimistic, arguing that AI chips will hold significant value and continue to command demand even as they age, because older processors remain well suited to a range of workloads. CoreWeave, a prominent GPU rental provider, has used a six-year depreciation schedule for its infrastructure since 2023. The company’s CEO, Michael Intrator, emphasizes a “data-driven” approach to evaluating GPU shelf life.
Intrator claims that CoreWeave’s Nvidia A100 chips, launched in 2020, are fully utilized. He further cites an instance where expired contracts freed up Nvidia H100 chips from 2022, which were promptly re-booked at 95% of their original price. “All of the data points that I’m getting are telling me that the infrastructure retains value,” Intrator contends, citing empirical evidence from their market experience.
CoreWeave CEO Michael Intrator appears on CNBC on July 17, 2024.
CNBC
However, CoreWeave’s stock experienced a 16% plunge following its earnings report, triggered by data center development delays. The stock now sits 57% below its June peak, reflecting broader market concerns surrounding AI overspending. Similarly, Oracle’s shares have plummeted 34% from their September record high, signaling a cooling investor sentiment toward ambitious AI buildout plans.
Michael Burry, a well-known short seller, has emerged as a vocal skeptic of the AI investment narrative, publicly disclosing bearish positions against Nvidia and Palantir.
Burry has voiced concerns that companies like Meta, Oracle, Microsoft, Google, and Amazon may be inflating the useful life of their AI chips, thereby understating depreciation expenses. He posits a shorter, two-to-three-year lifespan for server equipment, suggesting that companies are artificially boosting earnings by manipulating depreciation schedules.
Amazon and Microsoft declined to comment on the allegations. Meta, Google, and Oracle did not respond to requests for comment.
‘You couldn’t give Hoppers away’
Various factors could contribute to accelerated AI chip depreciation. These include physical wear and tear leading to failure, the rapid obsolescence driven by newer GPU releases, and diminishing economic viability of older chips for certain workloads.
Nvidia CEO Jensen Huang himself alluded to this phenomenon. During the unveiling of the new Blackwell chip, he playfully suggested that the value of its predecessor, the Hopper, would plummet. “When Blackwell starts shipping in volume, you couldn’t give Hoppers away,” quipped Huang at Nvidia’s AI conference in March.
“There are circumstances where Hopper is fine,” he clarified, “Not many.” This underscores the relentless pressure of technological advancement and its impact on the economic value of older hardware.
Nvidia has transitioned to an accelerated annual release cycle for new AI chips, outpacing its previous two-year cadence. Rival Advanced Micro Devices (AMD) has followed suit, intensifying competitive pressures and further accelerating the potential for technological obsolescence.
Amazon, in a February filing, revealed a reduction in the useful life for a subset of its servers, from six to five years. The rationale behind this adjustment was a study indicating “an increased pace of technology development, particularly in the area of artificial intelligence and machine learning.”
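Accounting rules generally treat a revision like Amazon’s as a change in estimate applied prospectively: the remaining net book value is spread over the new, shorter remaining life. Here is a minimal sketch of that mechanic, using invented numbers rather than Amazon’s actual balances.

```python
# Illustrative only: prospective effect of shortening a server's useful life.
# All dollar amounts are invented for the example.

cost = 600.0           # hypothetical server cost ($ millions)
original_life = 6      # years assumed at purchase
years_elapsed = 2      # years of depreciation already taken

old_annual = cost / original_life                    # 100.0 per year
net_book_value = cost - old_annual * years_elapsed   # 400.0 remaining

# Estimate revised: total life now 5 years, so 3 years remain instead of 4.
new_remaining_life = 5 - years_elapsed
new_annual = net_book_value / new_remaining_life     # ~133.3 per year

print(f"Annual expense rises from {old_annual:.1f} to {new_annual:.1f}")
# Annual expense rises from 100.0 to 133.3
```

The adjustment raises the annual charge on the affected servers without restating prior periods, which is why such changes show up as a drag on future earnings rather than a correction of past ones.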
Concurrently, some hyperscalers are extending useful-life estimates for newer server equipment, creating a divergence in depreciation strategies across the industry.
Microsoft Chairman and Chief Executive Officer Satya Nadella speaks during the Microsoft Build 2025 conference in Seattle, Washington, on May 19, 2025.
Jason Redmond | AFP | Getty Images
While Microsoft is committed to aggressive AI infrastructure development, CEO Satya Nadella recently stated that the company aims to strategically pace its AI chip purchases to avoid over-investment in any single generation of processors. He emphasized that the biggest challenge to any new Nvidia AI chip is its immediate predecessor.
“One of the biggest learnings we had even with Nvidia is that their pace increased in terms of their migrations,” Nadella elaborated. “That was a big factor. I didn’t want to go get stuck with four or five years of depreciation on one generation.” This statement highlights the need for careful financial planning in the face of rapidly accelerating technological change.
Dustin Madsen, vice president of the Society of Depreciation Professionals and founder of Emrydia Consulting, clarifies that depreciation is essentially a management-driven financial estimate, susceptible to revisions as technological advancements unfold. He asserts that depreciation estimates typically encompass technological obsolescence, maintenance requirements, historical lifespans of comparable equipment, and internal engineering analysis.
“You’re going to have to convince an auditor that what you’re suggesting what its life will be is actually its life,” Madsen emphasizes. “They will look at all of those factors, like your engineering data that suggests that the life of these assets is approximately six years, and they will audit that at a very detailed level.” In other words, auditors will rigorously scrutinize the evidence behind any claimed useful life before signing off.