
A growing chorus of voices, spanning artificial intelligence experts, tech luminaries, and even figures from the political and entertainment spheres, is calling for a halt to the relentless pursuit of “superintelligence” – AI systems purported to surpass human cognitive capabilities across nearly all domains.
More than 850 individuals, including Virgin Group founder Richard Branson and Apple co-founder Steve Wozniak, have signed a public statement urging a pause on superintelligence development. The statement, released Wednesday, argues for a global reassessment of the technology’s risks and benefits before proceeding further.
The list carries particular weight due to its inclusion of prominent AI pioneers widely regarded as the “godfathers” of modern AI, computer scientists Yoshua Bengio and Geoffrey Hinton. These figures, joined by leading AI researchers such as UC Berkeley’s Stuart Russell, represent a formidable intellectual force questioning the field’s current trajectory.
The term “superintelligence” has gained traction in the AI sector as companies like Elon Musk’s xAI and Sam Altman’s OpenAI compete to launch increasingly sophisticated large language models (LLMs). Meta’s decision to name its LLM division “Meta Superintelligence Labs” underscores the industry’s escalating ambitions, even as unease around the concept grows. The pursuit of artificial general intelligence (AGI), a system capable of performing any intellectual task a human can, raises questions about control, alignment, and unintended consequences on a scale never before contemplated.
Signatories of the statement express grave concerns, citing potential societal disruptions ranging from widespread economic displacement and erosion of individual autonomy to threats to national security and, in the most extreme scenarios, human extinction. In their view, these risks have moved beyond the theoretical.
The statement advocates a moratorium on superintelligence development until there is both broad public consensus supporting the technology and a robust scientific consensus that it can be built safely and kept under control. This call for shared oversight and public discourse highlights the ethical quandaries and philosophical implications of creating machines that may exceed human intellect.
The signatories come from a diverse range of backgrounds, including academics, media personalities, religious leaders, and former U.S. politicians and officials from across the political spectrum. This broad coalition includes former Chairman of the Joint Chiefs of Staff Mike Mullen and former National Security Advisor Susan Rice.
Adding a further layer of complexity, figures with ties to President Donald Trump, such as Steve Bannon and Glenn Beck, are also signatories, demonstrating that the debate transcends traditional political divides. Prince Harry, Meghan Markle, and former President of Ireland Mary Robinson round out the growing list.
AI ‘Doomers’ vs. AI ‘Boomers’: A Deepening Divide
The technology landscape is increasingly split between those who champion AI as a transformative force for good requiring minimal constraints and those who see it as a potentially catastrophic threat demanding stringent regulation. The divide reflects a fundamental disagreement about the nature of technological progress and the limits of human control.
However, as the ‘Statement on Superintelligence’ website notes, figures like Musk and Altman, who lead two of the world’s most prominent AI companies, have themselves previously voiced concerns about the perils of superintelligence, revealing a tension within the AI development community itself.
Altman, before becoming CEO of OpenAI, wrote in a 2015 blog post that “development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” That early acknowledgment provides context for the current debate, signaling that anxieties over superintelligence are not merely external criticism but have long come from within the field itself.
Musk voiced similar concerns on a podcast earlier this year, saying there was only a “20% chance of annihilation” when discussing the risks of advanced AI surpassing human intelligence.
The ‘Statement on Superintelligence’ also references a survey by the Future of Life Institute, which found that only 5% of U.S. adults support “the status quo of fast, unregulated” superintelligence development. Taken together with the signatory list, the finding suggests that the appetite for a more deliberate approach extends from the AI community to the broader public.
The survey also found that a majority of respondents believe “superhuman AI” should not be created until it is proven safe and controllable, and that they favor robust regulation of advanced AI.
Computer scientist Bengio emphasized the stakes: “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he said, adding, “We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/11377.html