Broadcom Unveils Its $10 Billion Mystery Customer: Anthropic

Broadcom announced a $10 billion order from AI lab Anthropic for custom TPU Ironwood racks, plus an additional $11 billion commitment, making Anthropic its fourth XPU customer. A fifth, undisclosed client placed a $1 billion order. The deal ties into Anthropic's multi‑year cloud partnership with Google, which grants access to up to one million TPUs. Broadcom's rack‑level AI accelerators aim to rival Nvidia's GPUs, potentially boosting its custom‑chip revenue and expanding its foothold in the high‑performance AI market.

A Broadcom sign is pictured as the company prepares to launch new optical‑chip technology to counter Nvidia, in San Jose, California, September 5, 2025.

Photo: Reuters

Broadcom disclosed in its September earnings call that a yet‑unnamed client placed a $10 billion order for custom silicon. The identity of that client was revealed on Thursday when CEO Hock Tan confirmed it was the AI research lab Anthropic, which secured the order for the latest Google Tensor Processing Unit (TPU) racks.

“We received a $10 billion order to sell the newest TPU Ironwood racks to Anthropic,” Tan said during Broadcom’s fourth‑quarter earnings briefing. He added that Anthropic has now committed an additional $11 billion for the current quarter, underscoring the rapid scaling of its compute requirements.

Broadcom traditionally keeps its marquee customers confidential, but the magnitude of the deal attracted immediate attention from investors eager to gauge the company’s foothold in the exploding AI‑infrastructure market. A company spokesperson clarified in October that the mystery buyer was not OpenAI, which already has a separate chip‑supply agreement with Broadcom.

Broadcom’s portfolio includes application‑specific integrated circuits (ASICs) that many analysts argue can outperform Nvidia’s graphics processing units (GPUs) for certain deep‑learning workloads. The firm also manufactures the ASICs that power Google’s TPUs, and Google recently highlighted that its Gemini 3 model was trained entirely on these in‑house processors.

Broadcom brands its custom AI accelerators as XPUs. Tan announced that the company is delivering full server racks—not merely chips—to Anthropic, marking the AI lab as Broadcom’s fourth XPU client. The chipmaker also confirmed a fifth customer, which placed a $1 billion order in the fourth quarter, though the customer’s identity remains undisclosed.

“It’s a real customer, and it will grow,” Tan emphasized, indicating that Broadcom expects the relationship to expand as AI models become more compute‑intensive.

Google‑Anthropic deal

In late October, Anthropic and Google announced a multi‑year cloud partnership valued in the tens of billions of dollars. The agreement grants Anthropic access to up to one million Google TPUs, a capacity that is projected to add more than a gigawatt of AI compute power by 2026.

Anthropic pursues a multi‑cloud, multi‑chip strategy, spreading workloads across Google TPUs, Amazon’s custom Trainium chips, and Nvidia GPUs. The startup selects the platform that offers the best price‑performance mix for each stage of model training, inference, or research.

For Alphabet, Anthropic’s commitment validates the growing market demand for Google’s in‑house silicon. Google Cloud CEO Thomas Kurian has repeatedly pointed to the “strong price‑performance and efficiency” of TPUs as a key factor behind the partnership.

Google’s shift from selling TPUs as hardware to offering them as a cloud service reflects a broader industry trend: customers prefer consumption‑based models that lower capital expenditures while providing access to cutting‑edge performance. Analysts view TPUs as the most credible alternative to Nvidia’s GPUs, especially as power consumption—not chip supply—emerges as the primary bottleneck for large‑scale AI training.

From a financial perspective, Broadcom’s $21 billion total commitment from Anthropic could boost its custom‑chip segment revenue by double‑digit percentages through 2027, assuming the company can maintain its lead‑time and volume discount structures. The contracts also diversify Broadcom’s customer base beyond traditional telecom and data‑center markets, positioning the firm as a direct competitor to Nvidia in the high‑performance AI accelerator space.

Strategically, the deals illustrate a converging ecosystem where hyperscale cloud providers, specialized AI labs, and silicon innovators are aligning to overcome the energy and latency constraints of next‑generation models. Broadcom’s ability to deliver rack‑level solutions—integrating power, cooling, and networking—offers a turnkey proposition that could attract additional AI‑focused customers seeking to avoid the complex integration challenges associated with assembling disparate components.

Looking ahead, the key variables will be supply‑chain resilience for advanced process nodes, the evolution of AI workloads toward larger, multimodal models, and the competitive response from Nvidia, which is accelerating its own data‑center roadmap. If Broadcom can sustain its design‑win momentum and expand its XPU portfolio, it may carve out a sustainable niche that challenges Nvidia's dominance in both the data‑center and cloud‑AI markets.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source:https://aicnbc.com/14446.html
