As concerns mount over the risks posed by the rapid proliferation of artificial intelligence, U.S. states are beginning to take action, prompting a call for federal preemption from Tennessee Republican Sen. Marsha Blackburn.
California recently enacted legislation addressing these concerns, with Governor Gavin Newsom signing measures focused on chatbot safeguards, mental health risk labeling for social media apps, and age verification tools. However, Newsom also vetoed stricter AI conditions that some legislators had advocated for, signaling a complex regulatory landscape. This follows earlier moves by states like Utah and Texas to implement AI safeguards specifically targeted at minors. Other states are also reportedly considering similar regulations, highlighting a growing trend at the state level.
“The impetus for state intervention, aimed at both consumer and child protection, stems from the federal government’s current inability to enact preemptive legislation,” stated Blackburn at the CNBC AI Summit in Nashville. “Until Congress steps in to address the influence of large tech platforms, states will continue to bridge this gap.”
Blackburn has been a long-standing advocate for children’s online safety and social media regulation. Her proposed Kids Online Safety Act, introduced in 2022, aims to establish guidelines protecting minors from harmful online content. The bill, which passed the Senate with bipartisan support, faces hurdles in the House, with Blackburn noting that “big tech companies have worked to hold up the legislation.” She remains optimistic about its eventual passage, emphasizing the urgent need for comprehensive federal action.
Concerns originally focused on the risks of social media have intensified alongside the rapid advancement of AI, Blackburn explained, creating a new and complex challenge for regulators and policymakers.
Sen. Marsha Blackburn (R-TN) speaks during a rally organized by Accountable Tech and Design It For Us to hold tech and social media companies accountable for taking steps to protect kids and teens online on January 31, 2024 in Washington, D.C.
Jemal Countess | Getty Images Entertainment | Getty Images
Blackburn further elaborated on the need for comprehensive online consumer privacy protection, enabling individuals to “set firewalls and protect the virtual you.” She cautioned that “once a Large Language Model (LLM) scoops your data and information, it is then used to train that model,” underscoring the importance of data control and consent in the age of AI.
Beyond data privacy, Blackburn is also advocating for safeguards against the unauthorized use of an individual’s name, image, or likeness by AI systems. This includes legislation focused on preventing AI from exploiting personal attributes without explicit consent, reflecting a growing concern among lawmakers about the ethical implications of increasingly sophisticated AI technologies.
“We must establish mechanisms to protect our information in the virtual realm, analogous to the protections we enjoy in the physical world,” she asserted.
Acknowledging the swift pace of AI advancements, Blackburn emphasizes the need for a regulatory approach that focuses on “end-use utilizations,” rather than specific delivery systems or technologies. This forward-looking strategy recognizes the fluid nature of AI development and the necessity for adaptable regulations that can effectively address evolving risks.
This adaptable approach is crucial considering the rapid pivots within the AI industry. The recent statements from OpenAI CEO Sam Altman highlight the dynamic nature of these technologies. Altman’s announcement that OpenAI will “safely relax” most restrictions on ChatGPT, citing progress in mitigating “serious mental health issues,” underscores the complex interplay between innovation, ethical considerations, and regulatory oversight.
Blackburn noted that legislators are increasingly hearing from parents concerned about the impact of AI on their children, expressing a desire to shield them from potentially harmful experiences in the digital world. These concerns have prompted some parents to delay providing cell phones to their children until they reach the age of 16, viewing such devices as analogous to driving a car – a privilege that requires appropriate maturity and safeguards.
“As a society, we must implement rules and laws to protect children and minors,” Blackburn concluded, emphasizing the collective responsibility of policymakers, tech companies, and parents in navigating the challenges and opportunities presented by artificial intelligence.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/11005.html