Benchmark Performance

  • DeepSeek V3.2 Achieves GPT‑5‑Level Performance While Cutting Training Costs by 90%

    DeepSeek’s new V3.2 model matches OpenAI’s upcoming GPT‑5 on reasoning benchmarks while using a fraction of the training FLOPs, thanks to its Sparse Attention (DSA) architecture and efficient token selection. The open‑source base model (93.1% AIME accuracy) and the higher‑performing V3.2‑Speciale variant (gold‑medal scores on the 2025 IMO and IOI) show that advanced AI no longer requires massive compute budgets. Enterprise users can deploy the models on‑premises, benefiting from lower cost, strong coding performance, and retained reasoning traces, though DeepSeek plans to improve factual coverage and generation fluency.