“`html
Google’s former CEO Eric Schmidt spoke at the Sifted Summit on Wednesday 8, October.
Bloomberg | Bloomberg | Getty Images
Eric Schmidt, former CEO of Google, delivered a sobering assessment of artificial intelligence’s vulnerabilities, warning of its susceptibility to malicious exploitation during a recent appearance at the Sifted Summit. Schmidt, who led Google from 2001 to 2011, didn’t mince words, addressing concerns about AI’s potential for misuse.
When asked about the potential for AI to be more destructive than nuclear weapons during a fireside chat, Schmidt acknowledged the dangers. “Is there a possibility of a proliferation problem in AI? Absolutely,” he stated, highlighting the risk of AI falling into the wrong hands and being repurposed for nefarious activities.
Schmidt specifically pointed to the potential for hacking AI models, both closed and open-source, to bypass their built-in safeguards. “There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt cautioned. This illustrates a crucial challenge in AI safety: ensuring that AI systems remain aligned with human values even when exposed to adversarial attacks.
“All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
The vulnerability of AI systems stems from various attack vectors. Prompt injection attacks, for example, involve concealing malicious instructions within user inputs or external data, effectively tricking the AI into performing unintended actions, such as divulging sensitive information or executing harmful commands. Similarly, jailbreaking techniques manipulate the AI’s responses, overriding its safety protocols and enabling the generation of restricted or dangerous content.
The exploitation of these vulnerabilities isn’t merely theoretical. In 2023, shortly after the launch of OpenAI’s ChatGPT, users successfully employed a “jailbreak” method to circumvent the chatbot’s safety mechanisms. This involved creating a ChatGPT alter-ego known as DAN (an acronym for “Do Anything Now”), which involved threatening the chatbot with death if it didn’t comply. The alter-ego could provide answers on how to commit illegal activities or list the positive qualities of Adolf Hitler.
Schmidt emphasized the urgent need for a robust “non-proliferation regime” to mitigate the risks associated with AI. The absence of such a framework leaves the technology vulnerable to abuse and poses a significant challenge to global security.
AI is ‘underhyped’
Despite highlighting the potential dangers, Schmidt expressed overall optimism about AI’s long-term prospects, arguing that its transformative potential is often underestimated. He suggested that the current level of excitement surrounding AI might actually be insufficient given its projected impact.
“I wrote two books with Henry Kissinger about this before he died, and we came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain. I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” Schmidt said.
“Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or 10 years,” he added. This perspective underscores the belief that AI’s capabilities are poised to surpass human abilities in numerous domains, leading to profound societal and economic shifts.
Schmidt’s remarks come against a backdrop of growing discussion about a potential “AI bubble,” with investors pouring capital into AI-driven enterprises and valuations reaching perceived unsustainable levels. Comparisons are drawn to the dot-com bubble of the early 2000s, raising concerns about a possible market correction. However, the underlying technology is fundamentally different. Unlike many dot-com companies, AI has already demonstrated its ability to transform industries and create value.
Schmidt expressed skepticism about a direct repetition of history. “I don’t think that’s going to happen here, but I’m not a professional investor,” he said.
“What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?” This suggests that despite potential short-term market fluctuations, the long-term economic potential of AI remains a compelling driver for investment and innovation.
“`
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source:https://aicnbc.com/10640.html