Reasoning Models
-
OpenAI Unveils Safety Models for Wider Harm Classification
OpenAI released two open-weight reasoning models, gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, designed to help developers classify and mitigate online harms. Rather than baking a fixed taxonomy into the weights, the models take a written policy at inference time and reason over it, so organizations can tailor classification to their own rules and inspect the chain of reasoning behind each decision. Developed in collaboration with ROOST, Discord, and SafetyKit, the release is pitched as a step toward more transparent safety tooling as AI systems scale. The weights are available for download on Hugging Face.
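Because the policy is supplied as a prompt rather than trained into the model, trying the approach is straightforward. The sketch below assumes the Hugging Face repo id openai/gpt-oss-safeguard-20b and the standard transformers chat pipeline; the policy text, labels, and example message are invented for illustration, not part of the release.

```python
# Minimal sketch: policy-based harm classification with gpt-oss-safeguard.
# The repo id, policy wording, and labels below are assumptions for demo
# purposes; consult the model card for the recommended prompt format.
from transformers import pipeline

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed HF repo id
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",
)

# The policy lives in the system prompt, so a trust-and-safety team can
# revise its rules without retraining the model.
policy = (
    "Policy: flag content that solicits or coordinates harassment.\n"
    "Return exactly one label, VIOLATING or NON-VIOLATING, then a short rationale."
)
message = "Everyone go report and spam this user's account until it's banned."

out = classifier(
    [
        {"role": "system", "content": policy},
        {"role": "user", "content": message},
    ],
    max_new_tokens=512,
)
# The pipeline returns the full chat; the last turn is the model's verdict.
print(out[0]["generated_text"][-1]["content"])
```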
-
Apple Slams AI Reasoning Models, Calling “Thought” a Mirage
In "The Illusion of Thinking," Apple researchers argue that current AI reasoning models are sophisticated pattern matchers rather than genuine thinkers. Citing contamination and limited difficulty control in standard math and coding benchmarks, they instead evaluate models on controllable puzzle environments, such as Tower of Hanoi, whose complexity can be scaled precisely. Accuracy collapses once complexity passes a threshold, even when the models have ample token budget left. The paper has sparked debate about the limits of today's reasoning models and how their "thinking" should be evaluated.
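To make the complexity-scaling idea concrete, here is a minimal sketch of the kind of controllable puzzle harness the paper describes, using Tower of Hanoi. The function names and harness shape are assumptions for illustration, not the paper's code: disk count sets difficulty exactly (the optimal solution has 2**n - 1 moves), and a model's proposed move list can be verified mechanically.

```python
# Sketch of a controllable puzzle evaluation in the spirit of the paper,
# assuming a Tower of Hanoi setup; not the authors' actual harness.

def optimal_moves(n: int) -> int:
    """Minimum number of moves to solve Tower of Hanoi with n disks."""
    return 2 ** n - 1

def is_valid_solution(n: int, moves: list[tuple[int, int]]) -> bool:
    """Replay (src, dst) peg moves and check all disks end on peg 2."""
    pegs = [list(range(n, 0, -1)), [], []]  # peg 0 holds disks n..1, top last
    for src, dst in moves:
        if not pegs[src] or (pegs[dst] and pegs[dst][-1] < pegs[src][-1]):
            return False  # empty source, or larger disk onto smaller
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))

# Required solution length doubles with each added disk, so a model that
# pattern-matches short traces is quickly pushed past its depth.
for n in range(3, 11):
    print(f"{n} disks -> {optimal_moves(n)} moves required")

# Example: a correct 2-disk solution passes the checker.
assert is_valid_solution(2, [(0, 1), (0, 2), (1, 2)])
```

The appeal of this setup is that difficulty is a single dial and grading is exact, so a collapse in accuracy at a given disk count is unambiguous rather than an artifact of benchmark noise.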