Cyber Threat Already Here, Experts Warn

Mythos, Anthropic’s AI model, has sparked concerns about AI-driven cybercrime due to its reported ability to find numerous software vulnerabilities. However, cybersecurity experts say similar capabilities are already achievable with current AI models through sophisticated orchestration. While Mythos highlights the increasing scale of vulnerability discovery, the core threat is already present, intensifying the AI arms race between companies like Anthropic and OpenAI.

The recent buzz surrounding Anthropic’s powerful AI model, Mythos, and its alleged discovery of thousands of previously unknown software vulnerabilities has sent shockwaves through the global banking, tech, and government sectors. However, the stark reality is that the capabilities raising alarms are not entirely new; cybersecurity experts and AI researchers confirm that existing models, including those from Anthropic and OpenAI, can achieve similar results through sophisticated orchestration.

Ben Harris, CEO of cybersecurity firm watchTowr Labs, stated, “What we are seeing across the industry now is that people are able to reproduce the vulnerabilities found with Mythos through clever orchestration of public models to get very, very similar results.” This suggests that the core concern – AI’s ability to rapidly uncover exploitable flaws – is already a present reality, rather than a future threat.

Mythos has indeed ignited considerable apprehension among executives and policymakers regarding the potential for a new era of AI-enabled cybercrime. Anthropic’s decision to limit Mythos’s release to a select group of American companies, including Apple, Amazon, JPMorgan Chase, and Palo Alto Networks, was a strategic move to mitigate the risk of it falling into the wrong hands. This controlled release has even prompted the U.S. administration to consider new government oversight for future AI models.

This development further intensifies the rivalry between Anthropic and OpenAI, two leading AI firms poised for highly anticipated IPOs. Following Mythos’s introduction, OpenAI CEO Sam Altman announced GPT-5.5-Cyber, a model specifically designed for cybersecurity applications, and initiated a limited rollout to vetted cybersecurity teams.

Anthropic’s “Project Glasswing,” under which Mythos was developed, aimed to provide the corporate world with a crucial window to bolster its cyber defenses against an anticipated surge in attacks from criminal organizations and nation-states. Anthropic CEO Dario Amodei articulated the gravity of the situation, warning of a significant increase in vulnerabilities and breaches, and the potential for devastating ransomware attacks on critical infrastructure.

**”Scary Enough” Capabilities Already Exist**

However, for those on the front lines of cyber warfare, the ability to discover software vulnerabilities at scale, a key feature highlighted by Anthropic, has been a reality for some time. Klaudia Kloc, CEO of cybersecurity firm Vidoc, expressed that current models are already “powerful enough to detect zero days in a large scale, and this is scary enough.” She indicated this capability has been present for “a couple of months, if not a year.”

The term “zero-day” refers to a software flaw that is unknown to developers and unpatched, offering attackers a temporary window for exploitation. Researchers at Vidoc demonstrated the effectiveness of “orchestration” by identifying vulnerabilities similar to those claimed for Mythos using older AI models. The technique involves breaking code down into smaller components and employing multiple tools or models to cross-reference findings.
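The cross-referencing workflow described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling: the chunk size, the quorum rule, and the stand-in analyzers (which would be calls to different AI models' APIs in a real pipeline) are all illustrative assumptions.

```python
# Sketch of "orchestration": split a codebase into small chunks, have several
# independent analyzers flag suspicious chunks, and keep only findings that
# more than one analyzer agrees on. The analyzers here are toy stand-ins;
# a real pipeline would call different AI models (hypothetical, not any
# vendor's actual API).

def chunk_source(source: str, max_lines: int = 40) -> list[str]:
    """Break a source file into consecutive windows of at most max_lines."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def cross_reference(chunks, analyzers, quorum=2):
    """Return {chunk index: analyzer names} for chunks flagged by >= quorum analyzers."""
    votes = {}
    for name, analyze in analyzers.items():
        for idx in analyze(chunks):
            votes.setdefault(idx, set()).add(name)
    return {idx: sorted(names) for idx, names in votes.items()
            if len(names) >= quorum}

# Toy analyzers: each returns the indexes of chunks it considers suspicious.
analyzers = {
    "model_a": lambda chunks: [i for i, c in enumerate(chunks) if "strcpy" in c],
    "model_b": lambda chunks: [i for i, c in enumerate(chunks)
                               if "strcpy" in c or "gets(" in c],
}

code = "int main() {\n    char buf[8];\n    strcpy(buf, input);\n}\n"
findings = cross_reference(chunk_source(code), analyzers)
print(findings)  # only the chunks both stand-in models agree on
```

The quorum step is the point of the approach: agreement between independent, cheaper analyzers substitutes for the confidence of a single more capable one.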

“We ran older models against the same code base to see if we’d be able to detect the same vulnerabilities,” Kloc explained. “We did, with both OpenAI and Anthropic’s older models.” Similarly, AISLE, another cybersecurity firm, found that many of Mythos’s reported findings could be replicated using less advanced, more cost-effective models working in tandem. This underscores the principle that widespread, coordinated search efforts can be more effective than relying on a single, highly advanced tool.

Anthropic acknowledged that previous models were indeed capable of finding software vulnerabilities. A company spokesperson highlighted their prior warnings about the rapid advancement of AI in cybersecurity, citing a February blog post where their Claude Opus 4.6 model identified over 500 “high severity” vulnerabilities in open-source software. Amodei reiterated this point, emphasizing that while Mythos’s findings represented a significant leap in scale, the underlying trend of increasing vulnerability discovery was not new.

**Hysteria and the Arms Race**

What sets Mythos apart, according to Anthropic, is its ability to autonomously develop working exploits with minimal human intervention, automating a process that traditionally demanded skilled researchers. However, cybersecurity experts argue that malicious actors, particularly those affiliated with state-sponsored groups and criminal syndicates, already possess these advanced capabilities. As Kloc put it, hackers in North Korea, China, and Russia “know how to do this, with or without Anthropic.”

The escalating threat of AI-driven cyberattacks has intensified concerns among corporations and government regulators about safeguarding critical systems. Harris described recent discussions with financial institutions, insurers, and regulators as bordering on “hysteria,” driven by the palpable fear of an impending wave of sophisticated ransomware and other cyber threats.

Even before the advent of generative AI, the cybersecurity landscape was characterized by a perpetual race between attackers and defenders. Skilled hackers could exploit newly discovered vulnerabilities within hours, while patching these flaws often took days or weeks, sometimes requiring systems to be taken offline. “The industry is panicking about the number of vulnerabilities they face now,” Harris observed. “But even before Mythos is widely available, it couldn’t fix vulnerabilities fast enough.”

Previously, the ability to discover and exploit obscure software vulnerabilities was limited to a select few experts globally. However, current AI models have significantly lowered the barrier to entry for malicious actors, democratizing the capability to inflict widespread cyber damage. This implies an inevitable increase in the volume and sophistication of attacks targeting a broader range of entities, including those that were previously considered lower-risk targets.

**Advantage: Offense**

While leading AI companies like Anthropic and OpenAI are actively developing cyber defense capabilities to counter these emerging threats, the current advantage clearly lies with the offense. JPMorgan CEO Jamie Dimon aptly summarized this sentiment, suggesting that while AI tools may eventually bolster defenses, they are currently contributing to increased vulnerability.

Justin Herring, a partner at Mayer Brown and former executive deputy superintendent for cybersecurity at New York’s financial regulator, noted, “You have a significant increase in the volume of vulnerabilities discovered, but they don’t seem to have deployed a tool that helps you fix them.” He characterized vulnerability management as the “great Sisyphean task of cybersecurity.”

The limited access granted to select entities for Mythos gave them an initial advantage in addressing discovered vulnerabilities. However, this exclusivity has also meant that independent AI researchers have not had the opportunity to scrutinize Anthropic’s claims or to proactively develop defenses. This has created “tiers of haves and have-nots,” potentially hindering the pace of broader cybersecurity innovation, according to Pavel Gurvich, CEO of cybersecurity startup Tenzai.

Numerous cybersecurity startups are now focused on developing solutions to navigate this new AI-driven threat landscape. Ben Seri, co-founder of Zafran Security, described the current situation as a “chicken-and-egg situation,” where the industry is racing to build defenses before these powerful capabilities become universally accessible, acknowledging that some “eggs are going to be broken” in the process.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/21537.html
