AI-native networks have been a consistent theme at Mobile World Congress for years, but MWC 2026 in Barcelona marked a significant shift from discussion to demonstrable progress. A wave of announcements from leading telecom vendors, chipmakers, and operators showcased not just the vision for AI-RAN (Radio Access Network), but also tangible results from field trials, commercial product launches, open-source toolkits, and the formation of a multi-operator coalition committed to building 6G on AI-native foundations.
For enterprise and IT decision-makers, the message is unequivocal: the ongoing architectural transformation in telecom infrastructure is poised to fundamentally reshape how connectivity is delivered, managed, and monetized.
### Nvidia and a Global Coalition Champion AI-RAN and 6G
Nvidia made perhaps the most consequential announcement of the week, securing commitments from over a dozen global operators and technology companies, including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen. This coalition has pledged to build 6G on open, secure, and AI-native software-defined platforms. The initiative, positioned as a collective effort to ensure future connectivity infrastructure is intelligent, resilient, and trustworthy, is further bolstered by collaborations with governments across the US, UK, Europe, Japan, and Korea.
Nvidia’s founder and CEO, Jensen Huang, underscored the significance, stating, “AI is redefining computing and driving the largest infrastructure buildout in human history, and telecommunications is next.” Nvidia is a founding member of the AI-RAN Alliance, which now boasts over 130 participating companies, and has joined the US FutureG Office-led OCUDU Initiative to accelerate the development of open, software-defined, AI-native 6G architectures.
Complementing these strategic moves, Nvidia released a suite of open-source tools for network operators. This includes the 30-billion-parameter Nemotron Large Telco Model (LTM), developed in collaboration with AdaptKey AI and fine-tuned on telecom datasets, including industry standards and synthetic logs. Additionally, an open-source guide, co-published with Tech Mahindra, offers a framework for building AI agents capable of reasoning like Network Operations Center (NOC) engineers. New Nvidia Blueprints were also introduced, focusing on RAN energy efficiency and network configuration. The energy blueprint integrates VIAVI’s TeraVM AI RAN Scenario Generator for simulating energy-saving policies in a closed-loop environment before deployment on live networks. Real-world adoption of the network configuration blueprint is already underway, with Cassava Technologies deploying it for an autonomous network platform across Africa’s multi-vendor mobile environment, and NTT DATA utilizing it with a tier-one operator in Japan to manage traffic surges post-network outages.
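The energy blueprint’s closed-loop idea, testing candidate energy-saving policies against simulated traffic and deploying only those that clear a service-quality bar, can be illustrated with a minimal sketch. All names, the toy cell power model, and the thresholds below are illustrative assumptions, not part of Nvidia’s or VIAVI’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    sleep_threshold: float  # fraction of peak load below which the cell sleeps

def simulate(policy: Policy, traffic: list[float]) -> tuple[float, float]:
    """Return (energy_used, worst_case_served_fraction) for a toy cell model.

    The cell draws 1.0 energy unit awake and 0.1 asleep; in this toy model,
    load arriving while the cell sleeps goes unserved.
    """
    energy, served = 0.0, 1.0
    for load in traffic:
        if load < policy.sleep_threshold:
            energy += 0.1
            served = min(served, 1.0 - load)  # load during sleep is unserved
        else:
            energy += 1.0
    return energy, served

def closed_loop_select(policies, traffic, min_served=0.95):
    """Evaluate every candidate in simulation; pick the safest energy saver."""
    safe = [(simulate(p, traffic), p) for p in policies]
    safe = [(energy, p) for (energy, served), p in safe if served >= min_served]
    return min(safe, key=lambda t: t[0])[1] if safe else None

candidates = [Policy("aggressive", 0.30), Policy("moderate", 0.06), Policy("off", 0.0)]
overnight = [0.05, 0.02, 0.08, 0.40, 0.70]  # synthetic hourly load profile
best = closed_loop_select(candidates, overnight)  # "moderate" wins: saves energy
                                                  # without breaching min_served
```

The pattern matters more than the numbers: the aggressive policy saves the most energy in simulation but violates the service floor, so only the safer candidate is promoted to the live network.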
### Nokia and Operators Push AI-RAN into Live Environments
Nokia announced significant advancements in its strategic AI-RAN partnership with Nvidia, successfully completing functional tests of its anyRAN software on Nvidia’s GPU-accelerated AI-RAN platform. These tests were conducted in collaboration with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The importance of these results lies in moving validation beyond controlled lab settings into live, over-the-air conditions.
At T-Mobile’s AI-RAN Innovation Center in Seattle, Nokia’s AirScale Massive MIMO radio, operating in the 3.7GHz band, simultaneously handled AI and RAN workloads—including video streaming, generative AI queries, and AI-powered video captioning—on a single Nvidia Grace Hopper 200 server alongside commercial 5G traffic. Indosat Ooredoo Hutchison achieved Southeast Asia’s first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running concurrently on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha articulated, “This is not just about proving that the technology works. It is about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era.”
SoftBank’s demonstration illustrated how RAN infrastructure can be monetized beyond connectivity: spare compute capacity, identified by its AITRAS Orchestrator, can be used to run third-party AI workloads. Nokia’s expanding AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, providing operators with a broader selection of commercial off-the-shelf options. Nokia’s stock rose 5.4% on the day of the announcement.
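The orchestration pattern SoftBank describes, admitting third-party AI jobs only into the GPU headroom the RAN is forecast not to need, can be sketched as a simple admission-control loop. The function name, job list, and capacity units are hypothetical, not AITRAS APIs:

```python
def admit_ai_jobs(gpu_capacity: int, ran_forecast: int, jobs: list[tuple[str, int]]) -> list[str]:
    """Greedily admit third-party AI jobs into forecast RAN headroom.

    gpu_capacity: total GPU units at the cell site
    ran_forecast: peak GPU units reserved for RAN workloads this window
    jobs: (job_id, gpu_units) requests from the AI tenant queue
    """
    headroom = gpu_capacity - ran_forecast  # RAN capacity is always reserved first
    admitted = []
    for job_id, units in sorted(jobs, key=lambda j: j[1]):  # smallest jobs first
        if units <= headroom:
            admitted.append(job_id)
            headroom -= units
    return admitted

# Example: 8 GPUs on site, RAN peak forecast of 5 leaves 3 monetizable units.
queue = [("captioning", 2), ("batch-inference", 4), ("embedding", 1)]
print(admit_ai_jobs(8, 5, queue))  # → ['embedding', 'captioning']
```

A production orchestrator would also preempt AI jobs when RAN demand spikes above forecast; the key design choice shown here is that connectivity workloads hold a hard reservation and AI tenants only ever consume the remainder.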
### Ericsson Pursues an Independent Path to AI-Native Networks
Ericsson presented a distinctly different strategy at MWC 2026. While Nokia has aligned with Nvidia’s GPU acceleration, Ericsson unveiled ten new AI-ready radios built on its proprietary silicon. These radios feature embedded neural network accelerators directly within their Massive MIMO hardware, eliminating the need for external Nvidia GPUs. This approach underscores Ericsson’s focus on total cost of ownership, arguing that custom silicon offers superior TCO and power efficiency compared to external GPU solutions, alongside enhanced supply chain independence.
The new portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction leveraging AI models, and a latency-prioritized scheduler designed to deliver response times up to seven times faster. Per Narvinger, head of Ericsson’s mobile networks business, has indicated that this strategy is unlikely to change. Furthermore, Ericsson announced a comprehensive collaboration with Intel, spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, aimed at accelerating ecosystem readiness for AI-native 6G. Ericsson President and CEO Börje Ekholm stated, “6G is not merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge and the cloud.” Intel CEO Lip-Bu Tan framed the partnership as a pathway to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon to be built on Intel’s most advanced process nodes.
### SK Telecom, SoftBank, and the Operator Infrastructure Overhaul
Beyond vendor pronouncements, two major operators used MWC 2026 to articulate how deeply AI-RAN integrates into their broader infrastructure strategies. SK Telecom CEO Jung Jai-hun outlined a comprehensive AI-native rebuild plan, encompassing everything from the network core to customer service systems. This includes upgrading its sovereign AI foundation model from 519 billion to over one trillion parameters and establishing a new AI data center in Korea in partnership with OpenAI. The company is also expanding its autonomous network operations, leveraging AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology playing a pivotal role in enhancing speed and reducing latency.
SoftBank, in collaboration with Northeastern University’s INSI, Keysight Technologies, and zTouch Networks, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system. This system utilizes SoftBank’s Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations, representing a significant step towards self-managing networks driven by intent rather than manual instructions.
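The essential shape of intent-driven configuration, natural language in, a validated and bounded parameter delta out, can be sketched in miniature. AgentRAN uses SoftBank’s Large Telecom Model for the translation step; the toy keyword rules, parameter names, and bounds below are purely illustrative assumptions:

```python
def intent_to_config(intent: str) -> dict:
    """Toy intent translator: map operator phrasing to RAN parameter changes.

    A real system would use an LLM here; what matters is the contract:
    free-form intent in, a structured configuration delta out.
    """
    intent = intent.lower()
    config = {}
    if "latency" in intent:
        config["scheduler_profile"] = "low_latency"
        config["harq_retransmissions"] = 2
    if "energy" in intent or "power" in intent:
        config["cell_sleep_mode"] = "deep"
    if "capacity" in intent or "throughput" in intent:
        config["mimo_layers"] = 8
    return config

def validate(config: dict) -> bool:
    """Guardrails: refuse any delta outside operator-approved bounds."""
    bounds = {"harq_retransmissions": range(1, 5), "mimo_layers": range(1, 9)}
    return all(v in bounds[k] for k, v in config.items() if k in bounds)

delta = intent_to_config("Prioritize low latency for the stadium slice")
assert validate(delta)  # only validated deltas would be pushed to the network
```

The validation gate is the point: an intent-driven network still needs deterministic guardrails between the model’s output and the live configuration.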
### A Maturing Hardware Ecosystem for AI-RAN
The growing breadth of hardware companies building purpose-built AI-RAN products signals the technology’s transition from concept to commercial infrastructure. At MWC 2026, Quanta Cloud Technology announced commercial, off-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software. Supermicro extended its support across the entire Nvidia AI-RAN portfolio, including configurations based on ARC-Pro and RTX 6000. MSI unveiled its unified AI-vRAN platform, featuring dynamic GPU allocation between 5G and AI workloads. Lanner Electronics launched its AstraEdge AI Server lineup, the ECA-6710 and ECA-5555, engineered for co-locating AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, positioning its EPYC 8005 edge platform and Open Telco AI initiative, presented an alternative compute pathway for operators migrating from AI pilot projects to full production.
### Broader Implications Beyond Network Infrastructure
For enterprise decision-makers, the ramifications of this week’s announcements extend far beyond telecom infrastructure procurement. AI-RAN networks that continuously evolve through software, rather than requiring costly hardware upgrades, will increasingly mirror cloud infrastructure in their agility and pace of change. The integration of GPU compute within the RAN opens up possibilities for enterprise AI workloads to operate at the network edge, closer to data generation points. As noted in Nvidia’s “State of AI in Telecom” report, 77% of respondents anticipate a significantly faster deployment timeline for AI-native wireless architecture compared to previous network generations.
The ongoing debate between Ericsson’s custom silicon approach and Nokia-Nvidia’s GPU-accelerated strategy is particularly noteworthy. This isn’t about a definitive winner, but rather reflects a fundamental question about the optimal placement and cost of AI inference within network hardware. This discussion will undoubtedly shape operator procurement decisions and vendor relationships for years to come.
MWC 2026 made clear that AI-native networks are no longer a theoretical pursuit. Field trials are operational, hardware is shipping, and significant coalitions are forming. For enterprises and operators alike, the central questions are no longer *if* this transition will occur, but rather *how quickly* it will unfold and *who will lead* the charge.
Original article, Author: Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/19608.html