The future is now, according to OpenAI’s Sam Altman. The tech visionary has declared that humanity has crossed the Rubicon and entered the era of artificial superintelligence, a transition he insists is irreversible.
“We are past the event horizon; the takeoff has started,” Altman asserts. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
The lack of visible fanfare – no robot uprisings on Main Street, no overnight cures for disease – masks what Altman frames as a profound transformation already underway. Behind the closed doors of companies like his own, systems are emerging that are poised to surpass general human intelligence.
“In some big sense, ChatGPT is already more powerful than any human who has ever lived,” Altman claims, noting that “hundreds of millions of people rely on it every day and for increasingly important tasks.”
This casual observation hints at a pivotal reality: such systems already wield notable influence, with even minor flaws prone to causing widespread harm when magnified across their vast user base.
The Road to Superintelligence
Altman maps out a timeline toward superintelligence that could leave even the most seasoned tech analysts reevaluating their forecasts.
By next year, he anticipates “the arrival of agents that can do real cognitive work,” a development poised to fundamentally alter software development. The following year could bring “systems that can figure out novel insights”—meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see “robots that can do tasks in the real world.”
Each prediction leapfrogs the previous one in capability, tracing a trajectory that points inexorably toward superintelligence: systems whose intellectual power will vastly exceed human capability across most domains.
“We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman states.
This progression has ignited heated debate among experts, with some arguing these capabilities remain decades away. Yet Altman’s timeline suggests OpenAI possesses internal data supporting this accelerated path, information that has yet to be made public.
A Feedback Loop That Changes Everything
What makes current AI development uniquely concerning is what Altman calls a “larval version of recursive self-improvement”—the ability of today’s AI to assist researchers in constructing tomorrow’s more potent systems.
“Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research,” he explains. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.”
This acceleration is compounded as multiple positive feedback loops intersect. Economic value fosters infrastructure development, which enables more sophisticated systems, which in turn generate greater economic value. Meanwhile, the creation of physical robots capable of manufacturing more robots could catalyze another explosive cycle of growth.
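To make the arithmetic concrete, consider a toy model; this is a minimal sketch, and the feedback coupling and every parameter below are illustrative assumptions, not anything OpenAI has published. The premise is simply that research speed scales with current AI capability, so progress compounds instead of accumulating linearly:

```python
# Toy model of "larval" recursive self-improvement.
# Assumption (not OpenAI data): research speed grows with current
# AI capability, so capability gains compound year over year.

def project(years: int, base_progress: float = 1.0, feedback: float = 0.5):
    """Return yearly capability levels under a simple feedback loop.

    base_progress: research output per year with no AI assistance.
    feedback: fraction of current capability that converts into
              extra research speed (the hypothetical coupling).
    """
    capability = 1.0
    trajectory = [capability]
    for _ in range(years):
        research_speed = base_progress + feedback * capability
        capability += research_speed  # this year's gains
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    no_feedback = project(10, feedback=0.0)    # linear: +1 per year
    with_feedback = project(10, feedback=0.5)  # compounding
    print("linear:     ", [round(x, 1) for x in no_feedback])
    print("compounding:", [round(x, 1) for x in with_feedback])
```

With these made-up numbers, ten iterations leave the compounding trajectory roughly fifteen times ahead of the linear baseline (about 171 versus 11), which is the intuition behind compressing “a decade’s worth of research” into a far shorter window.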
“The rate of new wonders being achieved will be immense,” Altman predicts. “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year.”
Such pronouncements would likely trigger skepticism from almost anyone else. But coming from the individual overseeing some of the most advanced AI systems on the planet, they warrant serious consideration.
Living Alongside Superintelligence
Despite the potential for immense impact, Altman envisions many aspects of human life maintaining their familiar characteristics. People will continue to form meaningful relationships, create art, and savor the simple pleasures.
Yet beneath these constants, society confronts significant disruption. “Whole classes of jobs” will disappear—potentially at a pace that outstrips our capacity to create new roles or retrain workers. The silver lining, according to Altman, is that “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
For those struggling to grasp this future, Altman offers a thought experiment: “A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries.”
Our descendants may view our most esteemed professions with similar bewilderment.
The Alignment Problem
Amid these bold predictions, Altman spotlights a challenge that keeps AI safety researchers awake at night: ensuring superintelligent systems stay aligned with human values and intentions.
Altman stresses the need to solve “the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long term.” He contrasts this with social media algorithms that maximize engagement by capitalizing on psychological vulnerabilities.
This isn’t solely a technical hurdle; it’s an existential one. If superintelligence emerges without robust alignment, the consequences could be catastrophic. However, pinning down “what we collectively really want” will be exceptionally difficult in a diverse global society with a multitude of competing values and interests.
“The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” Altman urges.
OpenAI Is Building a Global Brain
Altman has continually described what OpenAI is constructing as “a brain for the world.”
This isn’t intended as a metaphor. OpenAI and its competitors are producing cognitive systems designed to infiltrate every facet of human civilization—systems that, by Altman’s own admission, will ultimately surpass human capabilities across numerous domains.
“Intelligence too cheap to meter is well within grasp,” Altman asserts, implying that superintelligent capabilities will eventually become as ubiquitous and affordable as electricity.
For those dismissing such claims as science fiction, Altman offers a reminder that just a few years ago, today’s AI capabilities seemed equally implausible: “If we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
As the AI industry aggressively pursues superintelligence, Altman’s closing wish – “May we scale smoothly, exponentially, and uneventfully through superintelligence” – sounds less like a prediction and more like a prayer.
While timelines may—and undoubtedly will—be debated, the OpenAI chief makes it clear the race toward superintelligence isn’t on the horizon—it’s already here. Humanity must begin to reckon with what that means.
Original article by Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/2234.html