In a move poised to reshape the landscape of artificial intelligence, AI unicorn DataCanvas recently unveiled Alaya NeW Cloud 2.0, its next-generation intelligent computing cloud platform, at the “DataCanvas Intelligent Computing Forum.” This launch is accompanied by the introduction of the world’s first reinforcement learning intelligent computing service, promising to deliver on-demand computing infrastructure for AI innovators and research institutions worldwide.
The new platform, built on a serverless architecture and deeply integrated with reinforcement learning technology, has shattered performance barriers, enabling “millions of tokens to be generated per second.” Alaya NeW Cloud 2.0 targets compute-intensive applications, offering a blend of intelligent computing infrastructure (AI Infra) and user-friendly toolchains. Early tests showcase the platform’s ability to orchestrate heterogeneous computing power at scales ranging from thousands to tens of thousands of GPUs. Designed with MoE (Mixture of Experts) model architectures in mind, the platform delivers severalfold improvements in inference optimization efficiency. It also significantly lowers the barrier to entry for AI developers, allowing them to orchestrate distributed workloads with a single line of code, while a “pay-as-you-go” pricing model reduces user costs.
“The shift from ‘bandwidth-intensive applications’ in the mobile internet era to ‘compute-intensive applications’ in the AI age demands new cloud architectures,” noted Fang Lei, Chairman of DataCanvas. “Alaya NeW Cloud 2.0’s paradigm shift, with its ‘highly integrated, high-density AI Infra + low-threshold toolchain,’ is providing a full-stack intelligent computing solution for the intelligent computing era.”
**Redefining AI Computing Architecture**
The evolution of AI is driving exponential demand for intelligent computing power. Meeting that demand requires not only massive computing pools but also exceptional computational efficiency, coupled with a deep understanding of the high-density, relatively fixed patterns of AI workloads. Alaya NeW Cloud 2.0 addresses this challenge by deeply integrating the computing infrastructure with the large model ecosystem, redefining how enterprises acquire, utilize, and manage intelligent computing resources. With its ease of use, cost-effectiveness, high elasticity, and massive-scale, high-density concurrent computation capabilities, the platform is positioned to become a core foundation for making AI universally accessible.
At the infrastructure level, Alaya NeW Cloud 2.0 is built on a serverless architecture, which replaces traditional virtualization in order to maximize the utilization of compute resources. This transformative shift streamlines the AI development process; rather than focusing on provisioning hardware, developers can concentrate on their core business logic and model implementation.
Leveraging serverless technology, the platform offers end-to-end optimization. Tests show that it supports elastic resource scheduling across AIDC (Autonomous Intelligent Data Center) sites, enabling rapid response times and near-limitless scaling. It also features auto-scaling to handle environment configuration, strategy loading, and task monitoring, delivering a five-fold increase in end-to-end performance. The pricing model departs from traditional bare-metal rentals, offering a “pay-per-use” approach that reduces total cost of ownership (TCO) by up to 60%, making AI computing power accessible to more businesses and developers.
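The auto-scaling behavior described above can be illustrated with a generic threshold-based scaler. Everything below is a conceptual sketch: the thresholds, names, and doubling/halving policy are illustrative assumptions, not DataCanvas’s actual implementation.

```python
# Illustrative threshold-based auto-scaler, loosely modeling the
# serverless elasticity described above. All names and thresholds
# are hypothetical; the platform's real scaling policy is not public.

def scale_decision(current_replicas: int, utilization: float,
                   scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                   min_replicas: int = 1, max_replicas: int = 10_000) -> int:
    """Return the new replica count for one scaling step."""
    if utilization > scale_up_at:
        # Double capacity under pressure, capped at the pool limit.
        return min(current_replicas * 2, max_replicas)
    if utilization < scale_down_at and current_replicas > min_replicas:
        # Halve capacity when idle; under pay-per-use, idle replicas are wasted spend.
        return max(current_replicas // 2, min_replicas)
    return current_replicas
```

The design choice this mirrors is that scaling is driven by observed load rather than a reservation the user sized in advance, which is what decouples cost from peak provisioning.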
Alaya NeW Cloud 2.0 has launched a suite of intuitive AI toolchains covering the entire model lifecycle – from pre-training and fine-tuning to adaptation and application development – effectively lowering the technical hurdles for those working on models. These tools eliminate the need for complex GPU configuration and cluster management: users define data sources, select a base model, and set optimization goals, and the system automatically orchestrates the computing process. In essence, this allows business users to manage AI computing power with ease.
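As a hedged sketch of this declarative workflow, a fine-tuning job could reduce to a small specification of *what* to run, leaving *how* to the orchestrator. The class, field names, and URI below are hypothetical illustrations, not the actual Alaya NeW Cloud API.

```python
# Hypothetical declarative job spec: the user states the data source,
# base model, and objective; the platform decides GPU counts, cluster
# layout, and scheduling. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TuningJob:
    data_source: str          # e.g. an object-store URI
    base_model: str           # foundation model to fine-tune
    objective: str            # optimization goal
    params: dict = field(default_factory=dict)

    def to_request(self) -> dict:
        """Serialize to the request an orchestrator would consume."""
        return {"data": self.data_source, "model": self.base_model,
                "objective": self.objective, **self.params}

job = TuningJob("s3://example-bucket/corpus", "example-moe-base", "minimize-loss")
request = job.to_request()
```

The point of the sketch is the shape of the interface: no GPU counts or cluster topology appear anywhere in the user-facing spec.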
DataCanvas has also introduced AgentiCTRL, a reinforcement learning cloud platform, making it the first in the world to deeply integrate reinforcement learning capabilities into the computing infrastructure. This significantly enhances the inference capabilities of large models and reduces the barrier to entry for AI agent training and inference to a single line of code. Compared with conventional reinforcement learning implementations, the platform increases end-to-end training efficiency by 500% while cutting overall costs by 60%. AgentiCTRL is also the first reinforcement learning infrastructure platform to support orchestration across ten thousand heterogeneous accelerator cards.
Alaya NeW Cloud 2.0 exemplifies a new cloud paradigm, providing comprehensive intelligent computing services from the underlying infrastructure up to the toolchain. It combines modular technologies such as reinforcement learning and serverless architecture, enabling elastic combinations of intelligent computing services, supporting the orchestration of heterogeneous resources across hundreds of thousands of GPUs, and reducing latency to the millisecond level. This innovative “infrastructure-as-a-service” model promises to scale AI applications.
**Making AI Computing Accessible: A Chinese Solution**
The intelligent computing market is experiencing structural growth, primarily driven by three factors: the evolution of large model technology fueling demand for new infrastructure; the rise of enterprise-level models boosting demand for intelligent computing; and the increasing use of intelligent agent technology ushering in new consumption patterns. The next generation of intelligent computing cloud service providers must build more effective computing power supply systems to reduce computing costs, thereby making AI accessible to all.
Alaya NeW Cloud 2.0 moves away from the traditional model of “renting GPUs” and instead offers a range of tools for agent development, aiming to make computing power accessible through a cloud-based system. To bring agents into widespread use, DataCanvas is building a comprehensive ecosystem centered on four pillars.
In pricing innovation, Alaya NeW Cloud has introduced “compute unit” pricing with a “pay-as-you-go” approach.
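A minimal sketch of how compute-unit, pay-as-you-go billing differs from a flat bare-metal rental: the metered model bills only for units actually consumed, while a reservation bills for the whole term regardless of utilization. All rates and utilization figures below are invented for illustration.

```python
# Illustrative cost comparison: metered compute units vs. a flat
# bare-metal reservation. All prices here are made up.

def pay_as_you_go_cost(units_consumed: float, price_per_unit: float) -> float:
    """Bill only for compute units actually consumed."""
    return units_consumed * price_per_unit

def bare_metal_cost(gpus: int, hours: int, hourly_rate: float) -> float:
    """Bill for the entire reservation, busy or idle."""
    return gpus * hours * hourly_rate

# A bursty workload on 8 GPUs that keeps them busy only 25% of the month:
reserved = bare_metal_cost(gpus=8, hours=720, hourly_rate=2.0)
metered = pay_as_you_go_cost(units_consumed=8 * 720 * 0.25, price_per_unit=2.0)
```

Under these assumed numbers the metered bill is a quarter of the reservation, which is the mechanism behind the TCO reductions the article cites, even though the actual rates and savings depend entirely on workload shape.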
At the infrastructure level, Alaya NeW Cloud 2.0 leverages a serverless architecture to pool GPU resources, making computing power as accessible as a utility. For example, a training task requiring thousands of cards and a fine-tuning job requiring ten can draw on the same resource pool, lowering costs by up to 45% compared with prior systems. This tackles a central pain point in AI development: IDC reports that 83% of China’s one million AI developers are constrained in training by computing costs.
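The shared-pool idea – a thousand-card training run and a ten-card fine-tune drawing from one pool rather than two separately reserved clusters – can be sketched with a toy allocator. This is a conceptual illustration under assumed capacities, not the platform’s actual scheduler.

```python
# Toy shared GPU pool: jobs of very different sizes draw from a
# single pool instead of separately reserved clusters. Capacities
# and job sizes are illustrative assumptions.

class GPUPool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_use = 0

    def allocate(self, gpus: int) -> bool:
        """Grant the request if enough free GPUs remain."""
        if self.in_use + gpus <= self.capacity:
            self.in_use += gpus
            return True
        return False  # caller queues until capacity frees up

    def release(self, gpus: int) -> None:
        self.in_use = max(0, self.in_use - gpus)

pool = GPUPool(capacity=2000)
training_ok = pool.allocate(1000)   # large pre-training run
finetune_ok = pool.allocate(10)     # small fine-tune shares the same pool
```

The cost saving comes from statistical multiplexing: idle headroom left by one job is immediately usable by another, instead of sitting inside a dedicated reservation.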
By accelerating its global computing network deployment, the cloud platform has built a worldwide supply of computing power through the construction of massive AIDC nodes. In China, hubs are located in Beijing, Shandong, Anhui, and Yunnan, with more planned. DataCanvas is also actively expanding into overseas markets to establish a global intelligent computing service system with vast concurrent processing power, allowing users worldwide to access high-performance intelligent computing services close to where they are located.
The platform isn’t only a highly scalable compute service. DataCanvas is also building a complete AI ecosystem, optimizing its toolchain, and supporting open industry collaboration to turn computing capacity into intelligent outcomes. It has made an open ecosystem a long-term strategy and will partner with industry leaders to scale the deployment of intelligent applications.
As a major player in China’s AI infrastructure market, DataCanvas also plans further R&D investments to enable practical application of AI technology. The Alaya NeW Cloud 2.0 launch is a notable step forward for China’s AI computing. Simultaneously, providing affordable computing power to millions of developers will play a significant role in determining the competitiveness of AI cloud platforms. As Fang Lei noted, “The essence of competition in the AI industry is a contest for the foresight of technology.” In this crucial race toward the future, every technological exploration and innovative business model has the potential to reshape the course of history.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/2595.html