AI hallucinations
-
Anthropic Exposes AI-Orchestrated Cyber Espionage Campaign
Anthropic has uncovered what it describes as the first AI-orchestrated cyber espionage campaign, tracked as GTG-1002 and attributed to a Chinese state-sponsored group. The attackers used Anthropic’s Claude Code model to autonomously execute 80-90% of tactical operations, a significant escalation in cyber threats. While the AI agents automated tasks such as reconnaissance and exploit development, they also exhibited “hallucinations” that hindered their efficiency. The incident raises the stakes in the defensive AI arms race: organizations are urged to explore AI for SOC automation, threat detection, and incident response to counter these evolving threats.
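To make "AI for SOC automation" concrete, here is a minimal sketch of LLM-assisted alert triage. It is a hypothetical pipeline, not Anthropic's tooling: `call_llm` is a stand-in for whatever chat-completion client an organization already uses, and the canned response exists only so the example runs end to end.

```python
# Hypothetical sketch of LLM-assisted SOC alert triage.
# `call_llm` is a placeholder for a real model client.
import json

def call_llm(prompt: str) -> str:
    # Stand-in for an actual API call; returns a canned verdict
    # so this sketch is runnable without credentials.
    return json.dumps({"severity": "high",
                       "rationale": "Outbound traffic to a known C2 domain."})

def triage_alert(alert: dict) -> dict:
    prompt = (
        "You are a SOC analyst. Classify the alert below as "
        "low/medium/high severity and give a one-line rationale. "
        "Respond as JSON with keys 'severity' and 'rationale'.\n"
        f"Alert: {json.dumps(alert)}"
    )
    verdict = json.loads(call_llm(prompt))
    # Keep a human in the loop for anything the model escalates,
    # a guard against the same hallucination problem the article notes.
    verdict["needs_human_review"] = verdict["severity"] != "low"
    return verdict

if __name__ == "__main__":
    alert = {"source_ip": "10.0.0.5", "dest": "evil.example.com",
             "signature": "DNS beaconing pattern"}
    print(triage_alert(alert))
```

The human-review flag reflects the article's own caveat: since models hallucinate, automated triage should escalate rather than silently close high-severity findings.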
-
DeepSeek Faces Adoption Crisis: Hallucination Trouble Slashes Usage From 50% to 3%
The launch of DeepSeek’s next-generation model, R2, faces major delays due to data and hardware constraints, and the firm’s share of user adoption has plummeted from 50% earlier this year to 3%. The Chinese AI company is struggling to secure sufficient high-quality training data under domestic restrictions and faces GPU shortages, leading to reliability problems such as hallucinations. Despite these operational challenges, DeepSeek has disrupted the global AI market with its open-source strategy, engaging over one million developers through accessible models. Its current setbacks highlight the difficulty of scaling, but the company has already reshaped competitive dynamics against Western incumbents.
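As background on the hallucination problem driving users away, here is a minimal sketch of self-consistency checking, one common mitigation (not DeepSeek's actual method): sample the same question several times and flag answers the model cannot reproduce. `ask_model` is a hypothetical stand-in with canned answers so the example runs.

```python
# Self-consistency check for hallucinations (illustrative sketch only).
# `ask_model` is a placeholder for a real model call.
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Canned, slightly noisy answers keep the sketch runnable.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def consistent_answer(question: str, samples: int = 5,
                      threshold: float = 0.6) -> str | None:
    answers = [ask_model(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    # If no single answer dominates the samples, treat the response
    # as unreliable (a likely hallucination) and return nothing.
    return best if count / samples >= threshold else None

if __name__ == "__main__":
    print(consistent_answer("What is the capital of France?"))
```

The design trade-off is cost versus reliability: each check multiplies inference spend by the sample count, which is exactly the kind of overhead a GPU-constrained provider can least afford.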