U.K. regulators are intensifying their scrutiny of social media platforms, pushing for more robust child protection measures after lawmakers rejected a broad ban on social media for minors under 16. In a coordinated move, the U.K.’s communications regulator, Ofcom, and the Information Commissioner’s Office (ICO) have formally written to major social media players, including YouTube, TikTok, Facebook, Instagram, and Snapchat. The letters urge these tech giants to address a range of critical child safety issues, from stringent age verification protocols to the proactive combating of online child grooming.
This regulatory push comes in the wake of a parliamentary vote earlier this month, where U.K. lawmakers decided against incorporating a blanket social media ban for individuals under 16 into a proposed child welfare bill. The government has since initiated a public consultation to gauge the opinions of parents and young people on the potential effectiveness of such a ban.
The U.K.’s stance reflects a growing global trend. Governments across Europe are actively considering and, in some cases, implementing stricter regulations to curb teenage social media usage. This wave of legislative action was notably preceded by Australia, which became the first country to enact a sweeping ban for users under 16 in December. Several other European nations, including Spain, France, and Denmark, are currently evaluating similar measures.
Advancing Age Verification Technologies
Ofcom has set an April 30 deadline for the social media platforms to submit detailed reports on their current child protection strategies. The regulator’s demands are comprehensive: enhanced enforcement of minimum age requirements, robust measures to prevent unsolicited contact from strangers, safer content for adolescents, and a halt to experimental product testing on children, particularly tests involving artificial intelligence.
Ofcom CEO Melanie Dawes articulated a strong stance, stating, “Tech giants are failing to put children’s safety at the heart of their products, and are falling short on promises to keep children safe online.” She further emphasized the critical need for effective age verification, noting, “Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid.”
Complementing Ofcom’s efforts, the ICO has issued an open letter advocating for the adoption of advanced age verification methods. The ICO suggests that social media platforms should leverage technologies such as facial age estimation, digital identity solutions, or one-time photo matching to significantly improve the accuracy of age checks.
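To make the contrast with self-declaration concrete, below is a minimal, hypothetical sketch of how a platform might layer the kinds of signals the ICO describes. The field names, thresholds, and buffer value are illustrative assumptions, not any platform’s actual implementation.

```python
# Hypothetical sketch of layered age assurance, combining the methods the ICO
# names (digital identity checks, facial age estimation) with self-declaration.
# All names and thresholds are illustrative assumptions, not a real system.

from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 13          # common platform minimum, per the article
ESTIMATION_BUFFER = 2     # extra margin to absorb facial-estimation error


@dataclass
class AgeSignals:
    declared_age: Optional[int] = None      # user self-declaration (weak signal)
    verified_id_age: Optional[int] = None   # from a digital identity check
    estimated_age: Optional[float] = None   # from facial age estimation


def is_old_enough(signals: AgeSignals) -> bool:
    """Return True only when a sufficiently strong signal clears the minimum."""
    # Strongest signal: a verified digital identity document.
    if signals.verified_id_age is not None:
        return signals.verified_id_age >= MINIMUM_AGE

    # Next: facial age estimation, with a buffer because estimates are noisy.
    if signals.estimated_age is not None:
        return signals.estimated_age >= MINIMUM_AGE + ESTIMATION_BUFFER

    # Self-declaration alone is the method regulators call easily circumvented,
    # so in this sketch it never grants access by itself.
    return False


if __name__ == "__main__":
    print(is_old_enough(AgeSignals(declared_age=18)))     # False: declaration only
    print(is_old_enough(AgeSignals(estimated_age=14.2)))  # False: within error buffer
    print(is_old_enough(AgeSignals(verified_id_age=15)))  # True: verified ID clears minimum
```

The design point this sketch illustrates is that self-declaration alone never grants access; a noisier signal such as facial age estimation is trusted only with a safety margin, while a verified identity check is trusted directly.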

Currently, many platforms predominantly rely on user self-declaration for age verification, a method that regulators have identified as easily circumvented and fundamentally ineffective. ICO CEO Paul Arnold highlighted the severe risks this poses: “This puts under-13s at risk by allowing their information to be collected and used unlawfully, without the protections they are entitled to.” Arnold stressed the need for urgent industry action: “With ever-growing public concern, the status quo is not working, and industry must do more to protect children. You should act now to identify and implement current viable technologies to prevent children under your minimum age from accessing your service.”
In response to Australia’s stringent regulations, Meta, the parent company of Facebook, Instagram, and Threads, took swift action by blocking over 500,000 accounts suspected of belonging to users under 16. However, Meta has publicly urged the Australian government to reconsider its blanket ban, arguing that such measures could inadvertently drive young users to circumvent the law and access social media without adequate safeguards.
Instagram has proactively introduced features aimed at parental oversight, including alerts to parents when their teens repeatedly search for sensitive terms related to suicide and self-harm over a short period. This initiative underscores a broader industry reckoning with the potential impact of social media on adolescent mental health.
The broader legal landscape is also evolving. A significant trial, initiated in January against Meta and Alphabet (Google’s parent company), centers on allegations that features within Instagram and YouTube are designed to foster addictive behavior. The case, brought by a young woman and her mother, is closely watched, as its outcome could set a crucial precedent regarding the responsibility social media companies bear for their youngest users’ well-being. Meta CEO Mark Zuckerberg and Instagram CEO Adam Mosseri have already provided testimony, with a verdict anticipated in mid-March.
Further regulatory actions include the European Commission’s January investigation into Elon Musk’s X (formerly Twitter) concerning the dissemination of child sexual abuse material via its AI chatbot, Grok. Additionally, the ICO recently imposed a substantial fine of £14 million ($18 million) on Reddit for unlawfully processing children’s personal data, highlighting the escalating penalties for non-compliance.
Industry Response and Technological Stance
In a statement provided to CNBC, a Meta spokesperson asserted that the company already implements several measures aligned with the regulators’ demands. These include employing AI-driven age detection based on user activity and utilizing facial age estimation technology. The spokesperson also highlighted Meta’s dedicated teen accounts with built-in protections. They further commented, “With teens using on average 40 apps per week, we believe the most effective way to complement our own age assurance approach is to verify age centrally at the app store level.”
TikTok stated that since January, it has deployed enhanced technologies across Europe to identify and remove accounts belonging to individuals under its minimum age requirement of 13, supported by a team of specialist moderators. The platform also employs facial age estimation, credit card authorization, and government-issued identification for user age verification.
A spokesperson for Snapchat conveyed that the company is actively investing in and advancing its safety initiatives. They echoed the sentiment for industry-wide solutions, stating, “We strongly support app store-level age verification – a consistent, industry-wide solution applied at the point of download, rather than patchwork checks by individual platforms.”
YouTube did not immediately respond to CNBC’s requests for comment.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: http://aicnbc.com/19689.html