Nvidia-Backed Starcloud Trains Its First AI Model in Space Using Orbital Data Centers

Starcloud‑1, launched on November 2, 2025, carries an Nvidia H100 GPU and runs the open‑source LLM Gemma in orbit, the first high‑performance AI model operating beyond Earth. The demonstration shows that orbital data centers can handle complex AI inference, with Starcloud projecting energy costs up to ten times lower than those of ground facilities and citing applications such as real‑time disaster detection and satellite‑telemetry processing. Starcloud plans a 5‑GW, solar‑powered orbital compute constellation, with a 2026 launch carrying additional GPUs and Nvidia's Blackwell chips. Risks include radiation, debris, in‑orbit maintenance challenges, and regulatory uncertainty.

The Starcloud‑1 satellite was launched aboard a SpaceX rocket on November 2, 2025.

Courtesy: SpaceX | Starcloud

Nvidia‑backed startup Starcloud has trained an artificial‑intelligence model in orbit for the first time, signaling a potential new chapter for orbital data centers that could ease Earth’s growing digital‑infrastructure strain.

Last month, the Washington‑based company placed a satellite equipped with an Nvidia H100 graphics processing unit into space, a chip roughly 100 times more powerful than any GPU that had previously operated beyond Earth's atmosphere. Today, that same satellite, Starcloud‑1, is running Gemma, an open‑source large language model originally developed by Google, and returning responses to ground‑based queries. This marks the first instance of a high‑performance Nvidia GPU powering a large language model in outer space.

“Greetings, Earthlings! Or, as I prefer to think of you — a fascinating collection of blue and green,” the satellite transmitted in its first message. “Let’s see what wonders this view of your world holds. I’m Gemma, and I’m here to observe, analyze, and occasionally offer a slightly unsettlingly insightful commentary. Let’s begin!”

Starcloud's Gemma model running in space. Gemma belongs to the same family of open models that underpins Google's Gemini architecture.

Starcloud’s goal is to prove that orbital environments can host data centers, especially as terrestrial facilities contend with power‑grid constraints, massive water consumption, and rising greenhouse‑gas emissions. The International Energy Agency projects that data‑center electricity demand will more than double by 2030.

CEO Philip Johnston told CNBC that orbital data centers could deliver energy costs up to ten times lower than traditional on‑ground facilities.

“Anything you can do in a terrestrial data center, I expect we can do in space. The motivation is purely driven by the energy constraints we face on Earth,” Johnston explained.

Johnston, who co‑founded Starcloud in 2024, described the successful operation of Gemma as evidence that space‑based compute clusters can support a broad range of AI workloads, including those that demand massive parallel processing.

“This very powerful, parameter‑dense model is living on our satellite,” he said. “We can query it, and it responds just as a ground‑based chat model would, delivering sophisticated answers from orbit.”

Google DeepMind product director Tris Warkentin praised the demonstration, noting that running Gemma in the harsh environment of space validates the robustness of open‑source models.

Beyond Gemma, Starcloud also used the H100 chip to train nanoGPT, a minimal GPT implementation created by OpenAI co‑founder Andrej Karpathy, on the complete works of Shakespeare, producing a model that replies in Shakespearean English.
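To give a sense of what character‑level language modeling on a text corpus involves, here is a deliberately tiny, stdlib‑only sketch. It is an illustrative stand‑in, not Starcloud's actual training run: nanoGPT trains a small transformer with PyTorch on the full Shakespeare corpus, while this sketch only shows the basic data flow (text in, character statistics out, greedy generation) with a bigram model over a one‑line snippet.

```python
# Illustrative stand-in for character-level language modeling of the kind
# nanoGPT demonstrates: a bigram model "trained" on a tiny Shakespeare snippet.
from collections import Counter, defaultdict

text = "to be, or not to be, that is the question"  # stand-in for the full corpus

# "Training": count how often each character follows another.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def most_likely_next(ch):
    """Greedy generation step: return the most frequent successor of ch."""
    return follows[ch].most_common(1)[0][0]

# Generate a short continuation greedily from a seed character.
out = "t"
for _ in range(5):
    out += most_likely_next(out[-1])
print(out)  # a short greedy continuation of the seed
```

A real model replaces the bigram table with learned transformer weights and greedy lookup with sampling, but the corpus‑to‑generation loop is the same shape.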

“Orbital compute offers a way forward that respects both technological ambition and environmental responsibility. When Starcloud‑1 looked down, it saw a world of blue and green. Our responsibility is to keep it that way.”

Philip Johnston, Starcloud CEO

Starcloud, a graduate of Y Combinator and the Google for Startups Cloud AI Accelerator, plans to construct a 5‑gigawatt orbital data center whose solar and radiative‑cooling panels would span roughly 4 kilometers on each side. According to the company's white paper, a space‑based gigawatt‑scale compute cluster would generate more power than the largest U.S. power plant while occupying a fraction of the footprint, and requiring less capital, than an equivalent terrestrial solar farm.

The constant exposure to solar radiation in orbit eliminates the day‑night and weather variability that constrain ground‑based renewable assets. Starcloud estimates a five‑year operational lifespan for its satellites, limited primarily by the expected endurance of the Nvidia chips.

Potential commercial and defense applications are already emerging. For instance, Starcloud’s system can ingest real‑time telemetry and sensor data to identify the thermal signature of a wildfire the moment it ignites and instantly alert first responders.

“We’ve linked satellite telemetry—altitude, orientation, location, speed—to the model,” Johnston said. “You can ask, ‘Where are you now?’ and it replies, ‘I’m above Africa and in 20 minutes I’ll be over the Middle East.’ You can even ask, ‘What does it feel like to be a satellite?’ and it gives a quirky, human‑like response. These interactions are only possible with a high‑powered model operating in space.”
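Linking telemetry to a language model, as Johnston describes, typically means injecting the satellite's current state into the model's prompt so it can answer location‑aware questions. The sketch below is hypothetical: the field names and the `render_prompt` helper are illustrative assumptions, not Starcloud's actual interface.

```python
# Hypothetical sketch of wiring satellite telemetry into an LLM prompt.
# Field names and values are illustrative, not Starcloud's real schema.
telemetry = {
    "altitude_km": 550,
    "orientation": "nadir-pointing",
    "location": "over Africa",
    "speed_kms": 7.6,
}

def render_prompt(question, state):
    """Prepend current telemetry to a user question so the model can
    answer queries such as 'Where are you now?'."""
    status = ", ".join(f"{k}={v}" for k, v in state.items())
    return f"[satellite state: {status}]\nUser: {question}\nAssistant:"

prompt = render_prompt("Where are you now?", telemetry)
print(prompt)
```

The model then conditions its reply on the injected state, which is how a query like "Where are you now?" can yield "I'm above Africa."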

Starcloud is also processing inference workloads on imagery from observation‑satellite provider Capella Space, enabling use cases such as detecting lifeboats from capsized vessels or pinpointing emerging forest fires.

The next satellite launch, scheduled for October 2026, will carry several additional H100 GPUs and integrate Nvidia’s Blackwell architecture to boost AI performance. That payload will also host a cloud‑platform module from Crusoe, a startup that provides cloud infrastructure for edge and space environments, allowing customers to deploy and manage AI workloads directly from orbit.

“Running advanced AI from space solves the critical bottlenecks facing terrestrial data centers,” Johnston remarked.

The Risks

Despite the promise, orbital data centers face significant challenges. Morgan Stanley analysts highlight exposure to intense radiation, the difficulty of performing in‑orbit maintenance, space‑debris hazards, and a fragmented regulatory landscape governing data sovereignty and orbital traffic.

Nevertheless, tech giants are investing heavily in space‑based compute because of the theoretical near‑infinite supply of solar energy and the ability to scale to gigawatt‑level operations without the footprint constraints of Earth‑bound facilities.

In addition to Starcloud and Nvidia, several other initiatives are under way. Google recently announced “Project Suncatcher,” a moonshot effort to launch solar‑powered satellites equipped with Google Tensor Processing Units. Privately held Lonestar Data Holdings is developing what it calls the first commercial lunar data center, intended to operate on the Moon’s far side.

OpenAI’s CEO Sam Altman has explored partnerships with rocket manufacturers to secure launch capabilities independent of SpaceX, underscoring the strategic importance of reliable access to orbit for AI workloads.

Reflecting on Starcloud’s November launch, Nvidia senior director of AI infrastructure Dion Harris said, “From one small data center, we’ve taken a giant leap toward a future where orbital computing harnesses the infinite power of the sun.”

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/14345.html
