Security Benchmark
-
AI Safety Benchmark: Code Model Safety Testing Results Released
CAICT’s AI Institute launched security benchmark testing for code-generating LLMs, assessing risks and capabilities using a dataset of 15,000+ test cases across nine languages and various attack methods. The initial assessment of 15 Chinese models (3B-671B parameters) revealed varied security levels, with most exhibiting medium risk. Models showed weaknesses in scenarios involving malicious intent, highlighting vulnerabilities to cyberattacks. CAICT plans to expand testing to international models and develop mitigation tools, aiming to promote a secure LLM ecosystem.