The retail sector is rapidly embracing generative AI, but a new report from cybersecurity firm Netskope reveals a growing concern: the escalating security risks associated with this technological leap.
Netskope’s findings indicate near-universal adoption of generative AI within the retail space. A staggering 95% of retail organizations are now leveraging generative AI applications, a substantial increase from 73% just a year prior. This surge underscores the intense competitive pressure driving retailers to adopt these technologies to stay relevant.
However, this rapid adoption comes with a significant downside. As retailers integrate generative AI into their core operations, they inadvertently expand the attack surface for cyberattacks and increase the potential for sensitive data leaks. The report paints a picture of an industry in flux, transitioning from an initial period of uncoordinated experimentation to a more structured, enterprise-led deployment strategy.
The data shows a marked shift away from employees using personal AI accounts. Such usage has more than halved since the beginning of the year, dropping from 74% to 36%. Conversely, the adoption of company-approved generative AI tools has more than doubled, rising from 21% to 52% during the same period. This trend indicates a growing awareness among businesses about the dangers of “shadow AI” and a proactive effort to regain control.
In the competitive landscape of AI tools, ChatGPT remains the dominant player, used by 81% of organizations. However, its dominance isn’t unchallenged. Google’s Gemini has made significant inroads with 60% adoption, while Microsoft’s two Copilot tools are gaining traction at 56% and 51%. Notably, ChatGPT’s popularity has recently experienced its first dip, while Microsoft 365 Copilot’s usage has surged, likely attributable to its seamless integration with the productivity tools employees already use every day.
Beneath the surface of this widespread adoption lies a burgeoning security challenge. The very capability that makes generative AI tools valuable, their ability to ingest and analyze information, is simultaneously their greatest vulnerability. Retailers are witnessing an alarming amount of sensitive data being fed into these systems.
The most commonly exposed type of data is the company’s own source code, accounting for 47% of all data policy violations in generative AI applications. Regulated data, including confidential customer and business information, follows closely behind at 39%.
In response to these escalating risks, a growing number of retailers are resorting to banning applications deemed too risky. ZeroGPT is the most frequently blocklisted app, with 47% of organizations banning it due to concerns about user content storage and reported data redirection to third-party sites.
This increased caution is pushing the retail industry towards more robust, enterprise-grade generative AI platforms offered by major cloud providers. These platforms provide enhanced control, enabling companies to host models privately and develop their own custom tools. OpenAI via Azure and Amazon Bedrock are currently tied for the lead in this space, each being used by 16% of retail companies. However, these platforms are not foolproof. A simple misconfiguration could inadvertently connect a powerful AI system directly to a company’s most valuable assets, creating the potential for a catastrophic data breach.
The threat extends beyond employees using AI within their browsers. The report reveals that 63% of organizations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows. While this integration can significantly improve efficiency, it also introduces new vulnerabilities that must be addressed with robust security measures.
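One common mitigation for backend API integrations is to put a policy layer in front of every model call. The sketch below is purely illustrative and not drawn from the report: it masks two regulated-data patterns (email addresses and card-like numbers) before a prompt would leave the network, and the `call_model` stub stands in for a real provider API call such as an OpenAI chat completion.

```python
import re

# Illustrative patterns only; a production DLP layer would rely on a
# vetted detection engine, not a pair of hand-written regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(prompt: str) -> str:
    """Mask regulated-data patterns before the prompt leaves the network."""
    prompt = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    prompt = CARD_RE.sub("[REDACTED_CARD]", prompt)
    return prompt

def call_model(prompt: str) -> str:
    # Stub: a real integration would send the redacted prompt to the
    # provider's API here, keeping sensitive values out of the request.
    return f"model received: {redact(prompt)}"

print(call_model("Refund jane.doe@example.com, card 4111 1111 1111 1111"))
```

Running the redaction before, rather than after, the outbound call is the point: once data reaches a third-party model, it can no longer be clawed back.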
This AI-specific risk is part of a broader, concerning trend of inadequate cloud security practices. Attackers are increasingly exploiting trusted services to deliver malware, capitalizing on the likelihood that employees are more inclined to click on links from familiar sources. Microsoft OneDrive is the most common platform used for malware delivery, affecting 11% of retailers each month, while the developer hub GitHub is used in 9.7% of attacks.
The persistent issue of employees using personal applications at work continues to exacerbate the problem. Social media platforms such as Facebook and LinkedIn are used in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. It is on these unapproved personal services that the most severe data breaches occur. When employees upload files to personal apps, regulated data is involved in 76% of the resulting policy violations.
For security leaders in retail, the era of casual generative AI experimentation is over. Netskope’s findings serve as a stark warning that organizations must take decisive action. It’s crucial to gain comprehensive visibility into all web traffic, block high-risk applications, and enforce strict data protection policies to control the flow of information.
Without adequate governance and robust security protocols, the next technological innovation could easily become the next headline-making data breach, potentially jeopardizing customer trust and financial stability.
Original article, Author: Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/9873.html