AI Safety

  • Anthropic Races OpenAI, Spars With Sacks

    AI startup Anthropic faces increased scrutiny from the U.S. government amidst competition with OpenAI. David Sacks, Trump’s AI czar, accuses Anthropic of promoting a regulatory framework aligned with “the Left’s vision,” criticizing their safety-focused approach as “fear-mongering.” This contrasts with OpenAI’s close ties to the White House. Anthropic, founded by former OpenAI employees prioritizing AI safety, advocates for stricter regulation, differing from OpenAI’s lighter touch preference. Despite tensions, Anthropic holds government contracts and maintains its commitment to safety.

    2025年10月24日
  • OpenAI CEO: ChatGPT to Explore Erotica This Year

    OpenAI CEO Sam Altman announced that adult ChatGPT users will soon have access to a less censored version, potentially including erotic materials, starting in December. This policy change, driven by competition and user demand, follows the implementation of age-gating and safety tools. Previously, ChatGPT was intentionally restrictive to protect mental health. OpenAI is also bolstering protections for minors and releasing a new ChatGPT version with more personality options. This shift occurs amid regulatory scrutiny and concerns regarding AI safety.

    2025年10月17日
  • They Learn to Kill

    Former Google CEO Eric Schmidt warned about AI’s vulnerabilities to exploitation at the Sifted Summit. He highlighted the risk of AI falling into the wrong hands and being hacked to bypass safeguards, citing prompt injection and jailbreaking techniques. Schmidt urged the need for an “non-proliferation regime” for AI, but also expressed optimism, stating AI’s transformative potential is “underhyped.” He believes AI’s capabilities will surpass human abilities, driving significant societal and economic shifts. While acknowledging AI hype, he doubts a dot-com bubble repeat due to AI’s demonstrated value.

    2025年10月11日
  • NVIDIA’s Jensen Huang: AI Needs Humans; Safety Like Building Airplanes

    Nvidia CEO Jensen Huang addresses AI anxieties, arguing AI will augment, not replace, humans. He believes AI lacks ingenuity and requires human input for creativity, ethics, and emotional intelligence. Huang emphasizes that the real threat is not AI itself, but people leveraging AI surpassing those who don’t. He dismisses doomsday scenarios regarding AI safety, advocating for robust engineering, redundant systems, and explainable AI, drawing a parallel to aviation safety practices. Transparency and ethical frameworks are crucial for maintaining public trust.

    2025年8月14日
  • Anthropic Uses AI Agents to Audit Models for Safety

    Anthropic is using AI agents to audit and improve the safety of its AI models, like Claude. This “digital detective squad” includes Investigator, Evaluation, and Red-Teaming Agents that identify vulnerabilities and potential harms proactively. These agents have successfully uncovered hidden objectives, quantified existing problems, and exposed dangerous behaviors in AI models. While not perfect, these AI safety agents help humans focus on strategic oversight and pave the way for automated AI monitoring as systems become more complex.

    2025年7月25日
  • **AI’s Double-Edged Sword: Can Speed and Safety Reconcile?**

    The AI industry faces a “Safety-Velocity Paradox” where rapid innovation clashes with responsible development. A public disagreement highlighted the tension between releasing cutting-edge models and ensuring transparency and safety through public system cards and detailed evaluations. While AI safety efforts exist, they often lack public visibility due to the pressure to accelerate development in the AGI race against competitors. Overcoming this paradox requires industry-wide standards for safety reporting, a cultural shift towards shared responsibility, and prioritizing ethical considerations alongside speed.

    2025年7月18日
  • Former Employees Allege AI Safety Betrayal Driven by Profit

    “The OpenAI Files” report reveals a company shift from its founding mission of prioritizing AI safety to focusing on profit. Former employees allege CEO Sam Altman’s leadership fuels this change, citing concerns about untrustworthiness and a culture that de-emphasizes safety. They advocate for restoring the non-profit core, enforcing profit caps, and implementing independent oversight to safeguard AI’s future, emphasizing the need for ethical considerations in this powerful technology’s development.

    2025年6月19日