LLM
-
DeepSeek: Fallen From Grace in Under Six Months? Not So Fast.
A report indicates that DeepSeek’s monthly app downloads fell 72.2% in Q2 2025. The decline, however, is attributed to DeepSeek’s integration into third-party platforms and its focus on open-source LLM development in pursuit of AGI: users increasingly reach the models through those channels rather than the official app, cutting into direct downloads. While specialized tools such as “AI + office” and “AI + education” platforms are gaining traction, DeepSeek remains a key AI player, despite challenges such as delayed model updates and user-experience complaints.
-
Kyivstar and Ukrainian Digital Ministry Partner on National Large Language Model
VEON Ltd. and Ukraine’s Ministry of Digital Transformation are partnering to build Ukraine’s first large language model (LLM), trained exclusively on Ukrainian data. Backed by VEON’s $1 billion infrastructure investment, the project aims to create a secure, culturally relevant AI ecosystem, with the LLM powering AI tools tailored to Ukrainian needs in sectors such as government services and healthcare. Kyivstar will lead development, with a first version expected by December 2025, in a bid to strengthen Ukraine’s digital sovereignty and economic growth.
-
Kimi’s “King Move”: Open-Source Kimi Model Debuts, Surpassing DeepSeek R1 Globally
Moonshot AI introduced Kimi-Dev-72B, an open-source coding LLM built for software-engineering tasks. The 72-billion-parameter model tops the SWE-bench Verified benchmark among open-source models, surpassing much larger models such as DeepSeek-R1. Kimi-Dev-72B is optimized with large-scale reinforcement learning, autonomously repairing code inside Docker environments. Key design elements include a BugFixer/TestWriter role pair, mid-stage training, and test-time self-play. The model is available on Hugging Face and GitHub.
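Since the item notes the weights are published on Hugging Face, a minimal sketch of loading the model with the standard transformers causal-LM interface might look like the following; the repo id and the prompt are assumptions for illustration, not details confirmed by the announcement above.

```python
# Minimal sketch: loading Kimi-Dev-72B from Hugging Face with transformers.
# The repo id "moonshotai/Kimi-Dev-72B" and the prompt below are assumptions;
# check the official model card before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moonshotai/Kimi-Dev-72B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard the 72B weights across available GPUs
)

# Illustrative bug-fixing prompt, in the spirit of the model's BugFixer role.
messages = [
    {"role": "user", "content": "A unit test fails with IndexError in parse_rows(). "
                                "Suggest a minimal fix."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```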
-
Baidu: 65% of SOEs Choose Us, Boasting Full-Stack Proprietary Technology
Baidu announced significant progress at the 2025 Smart Economy Forum. Baidu Smart Cloud now works with 65% of China’s central state-owned enterprises (SOEs) to advance LLM adoption. The company launched the Qianfan Huijin financial LLM and introduced industry-specific intelligent agents. Baidu has also deployed “ten-thousand-card” and “thirty-thousand-card” GPU clusters on its Baige platform, built for efficient LLM training and inference and compatible with a wide range of models.