AGI

  • Huawei Connect 2025: Details Unveiled on Open-Source AI Platform

    At Huawei Connect 2025, Huawei detailed its plan to open-source its AI software stack by year-end, including CANN, the Mind series, and OpenPangu models. This move aims to address developer challenges and foster collaboration. CANN will offer open interfaces for its compiler, while the Mind series development environment commits to full open-source. Huawei also plans to open-source its UB OS Component for flexible OS integration and prioritize compatibility with PyTorch and vLLM. Success will hinge on initial release quality, community support, and clear governance.

    September 29, 2025
  • CAMIA Attack Exposes AI Model Memorization

    Researchers have developed CAMIA, a novel Context-Aware Membership Inference Attack that exposes privacy vulnerabilities in AI models by detecting whether training data has been memorized. CAMIA outperforms existing methods by monitoring how model uncertainty evolves throughout text generation, identifying subtle token-level indicators of memorization. Evaluations on Pythia and GPT-Neo models showed significant accuracy improvements over previous attacks. This research highlights the privacy risks of training AI models on large datasets and underscores the need for privacy-enhancing technologies. CAMIA’s efficiency makes it a practical tool for auditing AI models.

    September 26, 2025
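The core intuition behind CAMIA's token-level analysis can be illustrated with a small sketch. This is not the authors' implementation: it assumes per-token losses have already been extracted from a causal language model, and the scoring rule (comparing prefix-window loss to the rest of the sequence) is a simplified stand-in for their uncertainty-trajectory analysis. The function name `membership_score` and the `prefix_frac` parameter are hypothetical.

```python
# Simplified sketch of context-aware membership inference: rather than a
# single aggregate loss, examine how per-token uncertainty evolves during
# generation. Memorized training sequences tend to show a sharp, sustained
# drop in loss once enough context has been seen, while unseen text stays
# uniformly uncertain. NOTE: this is an illustrative stand-in, not CAMIA.

from statistics import mean

def membership_score(token_losses: list[float], prefix_frac: float = 0.3) -> float:
    """Score how strongly a per-token loss trajectory suggests memorization.

    Compares average loss over an initial prefix window to the rest of the
    sequence; a large relative drop hints that the model "locked on" to a
    memorized continuation. Higher score = more member-like.
    """
    if len(token_losses) < 4:
        raise ValueError("sequence too short to compare prefix vs. rest")
    split = max(1, int(len(token_losses) * prefix_frac))
    prefix_loss = mean(token_losses[:split])
    rest_loss = mean(token_losses[split:])
    # Relative drop in uncertainty after the prefix window.
    return (prefix_loss - rest_loss) / (prefix_loss + 1e-9)

# Toy illustration: a "memorized" trajectory collapses after the prefix,
# a "non-member" trajectory stays flat and uncertain.
memorized = [5.0, 4.8, 0.3, 0.2, 0.2, 0.1]
non_member = [5.0, 4.9, 4.7, 4.8, 4.6, 4.7]
assert membership_score(memorized) > membership_score(non_member)
```

In a real attack the per-token losses would come from scoring a candidate sequence with the target model, and the decision rule would be calibrated against reference trajectories rather than a fixed threshold.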
  • Ethical Cybersecurity with ManageEngine: A 2025 Outlook

    The cybersecurity industry faces a growing need for aggressive containment features but must balance rapid response with ethical considerations. Automatically quarantining critical systems can be detrimental, highlighting the importance of ethical cybersecurity practices. ManageEngine advocates for a “trust by design” philosophy, embedding fairness, transparency, and accountability into its products. The company’s “SHE AI principles”—Secure, Human, and Ethical AI—address the ethical implications of AI-driven security. Organizations should adopt cybersecurity ethics charters, embed ethics in technology decisions, and operationalize ethics through training and controls.

    September 26, 2025
  • Samsung Benchmarks Enterprise AI Model Productivity

    Samsung has introduced TRUEBench, a novel AI benchmark specifically designed to evaluate large language model (LLM) performance in real-world enterprise contexts. Addressing the limitations of traditional benchmarks, TRUEBench assesses AI across diverse business tasks, multilingual capabilities, and the ability to understand unstated user intents. It leverages a comprehensive suite of metrics across 10 categories and 46 sub-categories, based on Samsung’s internal AI deployments. Through its open-source platform on Hugging Face, Samsung aims to establish TRUEBench as an industry standard for AI productivity measurement.

    September 25, 2025
  • Huawei’s Plan to Unite Thousands of AI Chips

    Huawei introduced SuperPoD at HUAWEI CONNECT 2025, a new AI infrastructure architecture that aggregates thousands of AI chips into a unified resource using UnifiedBus (UB). This creates a “supercomputer” from distributed servers, designed to address the limitations of traditional architectures. The Atlas 950 SuperPoD utilizes up to 8,192 Ascend 950DT chips, with future plans for the larger Atlas 960. Beyond AI, TaiShan 950 SuperPoD targets general-purpose computing. Huawei’s open-source approach with UnifiedBus 2.0 aims to accelerate innovation and foster broad industry participation in AI infrastructure development.

    September 25, 2025
  • Retail AI Adoption’s Security Price

    Netskope reports near-universal (95%) generative AI adoption in retail, up sharply from 73% last year, driven by competitive pressures. While usage of company-approved AI tools rises (from 21% to 52%), security risks escalate, with source code (47%) and regulated data (39%) commonly exposed. Companies are banning risky apps like ZeroGPT, and increasingly using enterprise platforms like OpenAI via Azure and Amazon Bedrock (16% each). Concerns extend to API connections (63%) and broader cloud security vulnerabilities, including malware via OneDrive and GitHub. Strict data protection and visibility are crucial.

    September 24, 2025
  • OpenAI, Nvidia Eye $100B AI Chip Partnership

    OpenAI and Nvidia are reportedly discussing a potential $100 billion partnership, with Nvidia supplying at least 10 gigawatts of hardware and investing significantly in OpenAI. This collaboration aims to bolster OpenAI’s AI infrastructure for advanced model training, utilizing Nvidia’s Vera Rubin platform starting in 2026. The deal raises concerns about competition, potentially solidifying Nvidia and OpenAI’s dominance. OpenAI seeks to secure computational resources crucial for AI development, while also exploring custom chip solutions. The partnership is under scrutiny for potential circular funding and antitrust implications.

    September 24, 2025
  • Governing Agentic AI: Balancing Autonomy and Accountability

    Agentic AI (intelligent systems that act as autonomous agents) is rapidly being integrated into business operations, yet it raises significant risks. Organizations deploying it must guard against deviations from business rules, regulatory mandates, and ethical standards. Low-code platforms offer one solution, embedding governance and compliance into the development process, unifying app and agent development in a single environment, and enabling seamless integration with existing systems. This approach fosters transparency, control, and scalability, ensuring AI-driven processes align with strategic goals while mitigating risk.

    September 24, 2025
  • Data Quality: The Foundation for AI Growth

    AI implementation often stalls due to poor data quality. Snowflake’s Martin Frederik emphasizes that a robust data strategy is crucial; AI is only as good as the data it uses. Successful AI projects require clear business alignment, addressing data challenges from the start, and viewing AI as an enabler, not the end goal. Key factors include accessible, governed, and centralized data platforms and breaking down data silos. The future lies in AI agents capable of reasoning across diverse data, empowering users and freeing data scientists for strategic tasks.

    September 23, 2025
  • Public Trust: A Key Obstacle to AI Advancement

    A recent study reveals a significant lack of public trust in AI, hindering its widespread adoption despite government efforts. This skepticism stems from unfamiliarity and concerns about ethical considerations like data privacy and potential misuse. Trust correlates with usage, as those familiar with AI are less likely to perceive it as a risk. The report emphasizes the need for targeted communication highlighting tangible benefits, demonstrable effectiveness in public services, and robust regulations to ensure ethical and responsible development. Building trust requires transparency and a collaborative approach.

    September 22, 2025