OpenAI announced Monday a collaboration with actor Bryan Cranston, SAG-AFTRA, and other talent unions to bolster protections against deepfakes on its AI-powered video creation platform, Sora. The move comes amid growing concerns within the entertainment industry regarding unauthorized use of actors’ likenesses and voices in AI-generated content.
The collaboration follows reports that unauthorized AI-generated clips featuring Cranston’s voice and likeness surfaced on Sora after the recent launch of Sora 2. This incident highlights the urgent need for robust safeguards against the misuse of AI technology to create deceptive and potentially damaging content.
“I am grateful to OpenAI for its policy and for improving its guardrails,” Cranston stated. “I hope that they, and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.” His statement underscores the critical importance of talent having agency over their digital identities in the age of increasingly sophisticated AI tools.
Beyond SAG-AFTRA, OpenAI will also partner with United Talent Agency (UTA), which represents Cranston, along with the Association of Talent Agents (ATA) and Creative Artists Agency (CAA). This broad coalition aims to strengthen the existing guardrails within Sora, making it more difficult to generate unapproved AI renderings of actors and other public figures. This partnership signifies a coordinated effort between the tech sector and the creative industries to establish ethical frameworks for AI development and deployment.
CAA and UTA have previously expressed concerns about OpenAI’s use of copyrighted material, raising legal and ethical questions about the training data behind Sora. The issue feeds into the broader debate over AI and intellectual property rights, underscoring the need for clear guidelines and regulations so that AI development does not infringe on existing protections.
OpenAI’s efforts to address these concerns include blocking videos depicting Martin Luther King Jr. on Sora at the request of King’s estate, after users created what the estate deemed “disrespectful depictions” of the civil rights leader. The move reflects a reactive approach to content moderation; the longer-term challenge is proactively preventing such content from being generated in the first place.
Zelda Williams, daughter of the late comedian Robin Williams, also appealed for an end to AI-generated videos of her father shortly after Sora 2’s release. These examples underscore the emotional toll that deepfakes can take on individuals and families, further emphasizing the urgency of developing effective safeguards.
OpenAI has been adjusting its copyright restriction policies since the Sora 2 launch on September 30th. Previously, the platform operated on an opt-out basis for IP use, requiring studios to specifically request that their material not be used. However, on October 3rd, CEO Sam Altman announced an update, providing rightsholders with “more granular control over generation of characters.” This shift represents a significant concession to the entertainment industry’s concerns about unauthorized use of its intellectual property.
Sora also requires opt-in consent before an individual’s voice or likeness can be used, and OpenAI says it is committed to addressing complaints quickly. The evolution of these policies reflects the ongoing tension between fostering innovation and protecting the rights of individuals and creators.
OpenAI has voiced its support for the NO FAKES Act, a federal bill designed to offer protection against unauthorized AI-generated replicas of individuals’ voices and visual likenesses.
“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” Altman stated. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.” This statement signals OpenAI’s recognition of the ethical responsibilities that come with developing powerful AI technologies and underscores its commitment to working collaboratively with stakeholders to establish appropriate safeguards. The question remains whether legislative measures like the NO FAKES Act are sufficient to address the rapidly evolving challenges posed by deepfake technology.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/11275.html