SINGAPORE, Oct. 16, 2025 – SuperX AI Technology Limited (NASDAQ: SUPX) today unveiled its SuperX GB300 NVL72 System, a rack-scale AI supercomputing platform engineered around the NVIDIA GB300 Grace Blackwell Ultra Superchip. The system is designed to meet the demands of training and deploying next-generation trillion-parameter models, with a liquid-cooled design whose performance density and energy efficiency profile is poised to redefine modern data center requirements.
The arrival of rack-scale AI systems marks a pivotal moment for the industry. SuperX's GB300 NVL72 delivers up to 1.8 exaFLOPS of AI performance within a single, liquid-cooled rack. This compute density challenges traditional air-cooled data center designs and conventional Alternating Current (AC) power distribution systems: concentrating this much compute requires a fundamental rethink of supporting infrastructure.
This creates a market opportunity for power solutions such as 800-volt direct current (800VDC), where delivering power directly and efficiently becomes critical for stability, safety, and operational viability. The SuperX GB300 NVL72 system is therefore positioned as the core of a full-stack SuperX Prefabricated Modular AI Factory solution, which pairs the GB300 NVL72 with a liquid cooling system and 800VDC power infrastructure.

Grace Blackwell Ultra Superchip Advantage
The SuperX GB300 System utilizes GB300 Superchips, combining 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs in a 2-to-1 configuration. A 900GB/s chip-to-chip link connects the Grace CPU's high-bandwidth memory directly to the Blackwell Ultra GPUs. Combining 2,304GB of HBM3E with the Grace CPU's LPDDR5X creates a unified memory pool, enabling large models and KV caches to be held without I/O bottlenecks. Efficiency is further enhanced because the Grace CPU's power-efficient processing complements the Blackwell Ultra GPU's compute-intensive performance.
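As a rough illustration of why a large unified memory pool matters for trillion-parameter models, the sketch below estimates the weight footprint of a one-trillion-parameter model at common inference precisions. The bytes-per-parameter values are standard for each numeric format; the function and constants are illustrative, not part of any SuperX or NVIDIA API.

```python
# Back-of-envelope sizing: how much memory do the weights of a
# trillion-parameter model occupy at common inference precisions?
# Bytes-per-parameter values are standard for each format.

BYTES_PER_PARAM = {"fp16": 2, "fp8": 1, "fp4": 0.5}

def weights_tb(params: float, precision: str) -> float:
    """Approximate weight footprint in terabytes (1 TB = 1e12 bytes)."""
    return params * BYTES_PER_PARAM[precision] / 1e12

for prec in ("fp16", "fp8", "fp4"):
    print(f"1T params @ {prec}: ~{weights_tb(1e12, prec):.1f} TB of weights")
```

Even at FP16, the weights alone are around 2TB, before accounting for KV caches and activations, which is why holding everything in one coherent memory pool avoids I/O bottlenecks.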
Rack-Scale Exascale AI
A single SuperX GB300 System can scale up to the NVL72 rack configuration, linking 72 Blackwell Ultra GPUs into one unified GPU system that delivers up to 1.8 exaFLOPS of FP4 AI compute within a single rack. With 800Gb/s InfiniBand XDR connectivity, ultra-low latency is supported across AI clusters. Sustaining this performance at such density in continuous 24/7 operation depends on an advanced liquid cooling design, which also maximizes energy efficiency.
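To put the rack-level figure in perspective, dividing the quoted ~1.8 exaFLOPS FP4 total by the 72 GPUs gives the implied per-GPU throughput. This is simple arithmetic on the numbers stated above, not a vendor specification.

```python
# Implied per-GPU FP4 throughput, derived from the rack-level figures
# quoted in this article (not an official per-GPU specification).

RACK_FP4_EXAFLOPS = 1.8   # quoted rack total
GPUS_PER_RACK = 72

per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK  # 1 EF = 1000 PF
print(f"~{per_gpu_pflops:.0f} PFLOPS FP4 per GPU")  # → ~25 PFLOPS
```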
Technical Specifications:

| Component | Specification |
| --- | --- |
| CPU | 36× NVIDIA Grace CPUs (144 Arm Neoverse V2 cores total) |
| GPU | 72× NVIDIA Blackwell Ultra GPUs |
| GPU Memory | ≈165TB (GPU High-Bandwidth Memory) |
| CPU Memory | ≈17TB (Grace CPU Memory) |
| AI Performance | ≈1.8 ExaFLOPS (FP4 AI) |
| Networking | 4× NVIDIA NVLink Connectors (1.8TB/s); 4× NVIDIA ConnectX-8 OSFP Ports (800Gb/s); 1× NVIDIA BlueField-3 DPU (400Gb/s) |
| Form Factor | 48U NVIDIA MGX Rack, 2296mm (H) × 600mm (W) × 1200mm (D) |
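The specifications above also allow a hedged estimate of compute density: spreading the quoted ~1.8 exaFLOPS over the rack's 48U height and its 600mm × 1200mm footprint shows why air cooling and conventional AC distribution struggle at this scale. These are derived figures, not published ones.

```python
# Compute-density sketch from the spec table: rack total divided by
# rack height (48U) and floor footprint (600mm x 1200mm). Derived
# figures only, based on values quoted in this article.

RACK_FP4_EXAFLOPS = 1.8
RACK_U = 48
FOOTPRINT_M2 = 0.6 * 1.2  # 600mm (W) x 1200mm (D)

print(f"~{RACK_FP4_EXAFLOPS * 1000 / RACK_U:.1f} PFLOPS per rack unit")
print(f"~{RACK_FP4_EXAFLOPS / FOOTPRINT_M2:.1f} exaFLOPS per square metre of floor")
```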
Market Positioning
The SuperX GB300 NVL72 System targets organizations building the foundation of AI:
- Hyperscale & Sovereign AI: For constructing national AI infrastructure, public cloud services, and enterprise AI factories that require exascale compute to train and serve multi-modal and large language models.
- Exascale Scientific Computing: For governments and research institutions tackling challenges in disciplines such as physics, materials science, and climatology.
- Industrial Digital Twins: For automotive, manufacturing, and energy sectors building digital twins requiring the combined processing power of the Grace CPU and Blackwell Ultra GPU.
About SuperX AI Technology Limited (NASDAQ: SUPX)
SuperX AI Technology Limited provides AI infrastructure solutions. The Company's services include advanced solution design and planning, cost-effective infrastructure product integration, and end-to-end operations and maintenance. Core products include high-performance AI servers, 800-volt direct current (800VDC) solutions, high-density liquid cooling solutions, AI cloud, and AI agents. Headquartered in Singapore, the Company serves institutional clients globally, including enterprises, research institutions, and cloud and edge computing deployments.
Safe Harbor Statement
This press release may contain forward-looking statements. These forward-looking statements are based on our expectations and projections about future events, which we derive from the information currently available to us.
Forward-looking statements are only predictions. The reader is cautioned not to rely on these forward-looking statements.
Original article, Author: Jam. If you wish to reprint this article, please indicate the source: https://aicnbc.com/10999.html