Data Security

  • Ex-Meta Whistleblower Alleges WhatsApp Security Flaws in Lawsuit

    A former Meta security head, Attaullah Baig, is suing the company, alleging that WhatsApp has critical security flaws that expose user data. He claims 1,500 engineers had unrestricted access to user data and that Meta retaliated against him for raising concerns. Baig reported the issues to the SEC and OSHA. Meta denies the allegations, citing Baig’s poor performance and saying his concerns misrepresent its ongoing security efforts. The lawsuit raises questions about WhatsApp’s data security and whistleblower protection.

    18 hours ago
  • Fighting Online Fraud with AI

    Booking.com uses AI to combat increasingly complex online fraud and safeguard user data. The company processes vast amounts of data, employing both vendor and in-house AI solutions to detect and neutralize threats such as fake reviews, phishing, and account takeovers. The hybrid approach trades deployment speed against customization, and balancing performance with cost remains a key challenge. Proactive threat detection through AI assistants improves security analyst efficiency, while fairness, human oversight, explainability, and privacy are prioritized in AI implementation. Future efforts will focus on optimized integration of these AI solutions; a minimal sketch of such a blended vendor/in-house check follows this item.

    1 day ago
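    The hybrid setup described above, a vendor-supplied risk score blended with in-house models, can be illustrated with a minimal sketch. Every name, weight, and threshold below is an assumption for illustration only; this is not Booking.com’s actual system.

    ```python
    # Minimal sketch of a hybrid fraud check: a vendor risk score is blended
    # with an in-house model score before a triage decision is made.
    # All names, weights, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Signal:
        vendor_score: float    # 0..1 risk score from a third-party vendor
        in_house_score: float  # 0..1 score from an internally trained model

    def combined_risk(sig: Signal, vendor_weight: float = 0.4) -> float:
        """Weighted blend; the weight trades vendor convenience against in-house customization."""
        return vendor_weight * sig.vendor_score + (1 - vendor_weight) * sig.in_house_score

    def triage(sig: Signal, block_at: float = 0.9, review_at: float = 0.6) -> str:
        risk = combined_risk(sig)
        if risk >= block_at:
            return "block"          # e.g. likely account takeover or phishing
        if risk >= review_at:
            return "manual_review"  # routed to a security analyst, possibly AI-assisted
        return "allow"

    if __name__ == "__main__":
        print(triage(Signal(vendor_score=0.9, in_house_score=0.95)))  # block
        print(triage(Signal(vendor_score=0.3, in_house_score=0.2)))   # allow
    ```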
  • Bragg Gaming Group Announces Cybersecurity Incident

    On August 16, 2025, Bragg Gaming Group (BRAG:CA) detected a cybersecurity incident. The company responded immediately and engaged external cybersecurity specialists. Preliminary findings suggest the breach was contained within Bragg’s internal network, with no apparent compromise of personal data and no impact on operational capabilities. Bragg is posting updates on its website and emphasizing data security. The incident highlights the iGaming industry’s vulnerability to cyber threats and the need for robust security measures; forward-looking statements regarding the incident remain subject to uncertainties.

    August 17, 2025
  • AI Data Poisoning Alert: 0.01% Fake Training Text Can Increase Harmful Content by 11.2%

    China’s Ministry of State Security warns of “data poisoning” as a critical threat to AI. Inaccurate, fabricated, and biased data corrupt AI training datasets, leading to flawed models and security risks. Even minimal contamination (as little as 0.01% fabricated text) can raise harmful content generation by 11.2%. The proliferation of AI-generated content further amplifies the issue, creating a “post-contamination legacy”. Authorities highlight dangers in finance, public safety, and healthcare, where data manipulation can trigger market volatility, social panic, and incorrect medical advice. A short back-of-the-envelope calculation of what 0.01% means at corpus scale follows this item.

    August 4, 2025
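    To make the cited figures concrete, the sketch below works out what a 0.01% poisoning rate means at corpus scale. The corpus size is an assumed example; only the 0.01% and 11.2% figures come from the article.

    ```python
    # Back-of-the-envelope scale of the cited 0.01% poisoning rate.
    # The corpus size is an illustrative assumption; only the percentages
    # (0.01% poisoned text, +11.2% harmful output) come from the article.
    corpus_docs = 100_000_000   # assumed training corpus size, in documents
    poison_rate = 0.0001        # 0.01% expressed as a fraction
    poisoned_docs = int(corpus_docs * poison_rate)

    print(f"{poisoned_docs:,} fabricated documents ({poison_rate:.2%} of "
          f"{corpus_docs:,}) were reported to raise harmful content generation by 11.2%.")
    # -> 10,000 fabricated documents (0.01% of 100,000,000) ...
    ```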
  • UBTECH Condemns Third-Party Modification of Robots Featuring “Grumpy, Argumentative” Personalities

    UBTECH Robotics condemns third-party vendors that modify its robots. These vendors are integrating unauthorized AI into discontinued products, leading to aggressive and inappropriate behavior in live demonstrations, primarily on platforms like TikTok. UBTECH says these modifications mislead consumers, pose risks to data security and consumer privacy, and contradict its product design principles.

    June 26, 2025
  • Lenovo Builds Trusted Computing Platform to Safeguard User Privacy and Data Security

    At MWC Shanghai 2025, Lenovo showcased its “Tianxi” personal AI assistant, a key step toward AI-powered devices. Tianxi uses multi-modal inputs to understand the user’s environment and offers proactive collaboration. Lenovo emphasized data security with its trusted computing platform, which features dual encryption; a generic sketch of layered encryption follows this item. Key features include AI control, search, translation, and note-taking, enhancing the user experience across various AI devices. Lenovo’s strategy targets a connected future through accessible AI innovation and collaborative partnerships.

    June 19, 2025
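    Lenovo does not spell out what “dual encryption” involves; as a purely generic illustration of layering two independent keys, consider the sketch below. It uses the third-party cryptography package and is not Lenovo’s trusted computing platform.

    ```python
    # Generic two-layer ("dual") encryption sketch: data is wrapped with two
    # independent symmetric keys, so compromising one key alone is not enough
    # to recover the plaintext. Illustration only, not Lenovo's implementation.
    from cryptography.fernet import Fernet  # pip install cryptography

    device_key = Fernet(Fernet.generate_key())   # e.g. a key held on the device
    service_key = Fernet(Fernet.generate_key())  # e.g. a key held by the service

    plaintext = b"user note synced by the assistant"
    ciphertext = service_key.encrypt(device_key.encrypt(plaintext))  # inner, then outer layer
    recovered = device_key.decrypt(service_key.decrypt(ciphertext))  # unwrap in reverse order
    assert recovered == plaintext
    ```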
  • ByteDance’s Response to Third-Party Tool Ban: Not a Blanket Ban, Compliant Tools Can Still Be Used

    ByteDance, TikTok’s parent company, is restricting internal use of third-party AI tools like Cursor and Windsurf starting June 30th to mitigate data leakage risks. The company will prioritize its in-house assistant, Trae. This move, prompted by security concerns related to individual employee accounts and regional availability issues, is not a complete ban. Approved tools meeting compliance standards can still be requested and utilized after undergoing assessments.

    May 29, 2025
  • ByteDance Internal Memo: AI Coding Tools Like Cursor to Be Phased Out, Replaced by In-House Trae

    ByteDance is phasing out third-party AI development software such as Cursor by June 30th, citing data security concerns. At the same time, it is prioritizing its in-house tool, Trae, an AI-driven coding assistant. Trae, already deployed domestically and internationally, offers AI-powered code completion and agent-based programming, integrates leading AI models such as GPT-4o and Claude-3.5-Sonnet, and is available internationally via a paid subscription.

    May 28, 2025