The UK government is moving to close a significant regulatory gap, extending its Online Safety Act to encompass AI chatbots and hold them accountable for the proliferation of illegal content. Under the new measures, generative AI platforms, including OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, will be subject to stringent “illegal content duties” or face penalties that could include substantial fines and even platform blocking.
This decisive action follows intense criticism directed at Elon Musk’s X platform over its chatbot, Grok, which was implicated in generating and disseminating sexually explicit material. Prime Minister Keir Starmer publicly condemned these instances, signaling a clear intent to bring AI-driven services under the purview of existing online safety regulations.
The move by the UK government underscores a broader global trend of increasing regulatory scrutiny on rapidly evolving technologies. While lawmakers have historically shied away from regulating technology itself, preferring to focus on its applications, the unique challenges posed by generative AI are forcing a recalibration.
Alex Brown, head of TMT at law firm Simmons & Simmons, commented on the government’s evolving approach. “Historically, our lawmakers have been reluctant to regulate technology and have rather sought to regulate its use cases and for good reason,” Brown stated. “Regulations focused on specific technology can age quickly and risk missing aspects of its use.” However, he added that generative AI is exposing the limitations of an act like the Online Safety Act, which is primarily designed to regulate services rather than the underlying technology.
Starmer’s latest announcement indicates a desire to address the inherent risks embedded within the design and behavior of new technologies, moving beyond just user-generated content or platform features. “The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer remarked, emphasizing the government’s commitment to safeguarding children online. “We are closing loopholes that put children at risk, and laying the groundwork for further action.”
These new powers extend beyond AI chatbots, with proposals to establish minimum age limits for social media platforms, restrict addictive features like infinite scrolling, and limit children’s access to AI chatbots and VPNs. The government also plans to mandate that social media companies retain data following a child’s death, unless that online activity is clearly unrelated to the incident.
The heightened focus on protecting minors online is mirrored in other jurisdictions. Australia recently enacted legislation banning social media access for individuals under 16, requiring platforms like YouTube, Instagram, and TikTok to implement robust age-verification methods. Spain has become the first European nation to enforce a similar ban, with several other European countries, including France, Greece, Italy, Denmark, and Finland, reportedly considering comparable measures.
In the UK, a consultation on banning social media for under-16s was launched earlier this year. Concurrently, the House of Lords, the UK’s upper legislative chamber, voted to amend the Children’s Wellbeing and Schools Bill to include such a ban. The bill is now set for review by the House of Commons, with both houses needing to agree on any amendments before they can become law. This comprehensive regulatory push signals a significant shift in how governments are grappling with the societal impact of advanced technologies and the digital world’s influence on young people.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/18558.html