AI accelerators
-
OpenAI Deals Test Hyperscaler Ambitions
OpenAI is aggressively pursuing strategic partnerships and hardware development, signaling a shift from focusing solely on algorithms to prioritizing infrastructure and custom silicon. A collaboration with Broadcom and the acquisition of Jony Ive’s hardware startup aim to optimize AI accelerators and create AI-native devices. The Stargate initiative solidifies OpenAI’s control over AI infrastructure through deals with Nvidia and AMD. OpenAI is also cultivating its developer ecosystem, transforming ChatGPT into an AI operating system and fostering tight integration to increase the platform’s stickiness. This vertical integration strategy mirrors those of Apple and Microsoft, aiming to establish OpenAI as a dominant force in the AI landscape.
-
Broadcom CEO: Generative AI Set for Major GDP Impact
Broadcom CEO Hock Tan expects AI to have a major impact on global GDP, potentially raising “knowledge-based” industries’ share of it from 30% to 40%. Broadcom is partnering with companies such as OpenAI to develop AI accelerators and collaborating with multiple cloud providers, securing substantial chip orders, including a $10 billion commitment from a single client. The company’s focus on AI infrastructure and revenue-generating partnerships reflects confidence that specialized hardware will play an essential role in AI’s growth.
-
OpenAI Eyes Custom AI Chips with Broadcom, Diversifying Beyond Nvidia and AMD
Broadcom and OpenAI are collaborating to develop and deploy 10 gigawatts of custom AI accelerators, signaling intensified competition in AI infrastructure. The partnership, which follows OpenAI’s alliances with Nvidia, Oracle, and AMD, aims to diversify OpenAI’s supply chain and optimize its compute resources. The initiative involves customized networking, memory, and compute components built on Broadcom’s Ethernet stack, potentially reducing OpenAI’s costs and improving efficiency. Broadcom’s stock surged on the news, reflecting growing demand for custom AI chips. OpenAI aims to expand its compute capacity to meet the demands of advanced AI models and future superintelligence.