Prize Competition
-
ARC Prize Introduces Its Most Demanding AI Benchmark Yet, ARC-AGI-2
The ARC Prize has introduced the ARC-AGI-2 benchmark, which targets an AI system’s ability to solve novel puzzles with human-like adaptability and efficiency. The accompanying 2025 global competition offers $1 million in prizes for systems that surpass 85% accuracy while managing computational costs. Unlike prior benchmarks that reward brute-force capability, ARC-AGI-2 stresses symbolic interpretation, compositional reasoning, and contextual application, areas where current models such as OpenAI’s o3 remain inefficient. With humans solving tasks at roughly $17 per challenge versus o3’s $200 per attempt, the new standard underscores economic viability as a critical milestone toward practical AGI development.