Country music star Martina McBride is raising alarms about the potential for AI deepfakes to harm artists and their fans, a concern she articulated both before a Senate Judiciary subcommittee and at the recent CNBC AI Summit in Nashville.
McBride, alongside Recording Industry Association of America (RIAA) chief policy officer Morna Willens, spoke with CNBC about the NO FAKES Act, a bipartisan legislative effort aimed at safeguarding individuals’ voices and likenesses from AI exploitation. The proposed law seeks to establish clear boundaries for AI-generated content, particularly concerning the unauthorized replication of an individual’s identity for commercial gain or malicious purposes.
The four-time Academy of Country Music Awards top female vocalist emphasized the personal significance of advocating for AI guardrails: “The thing that I’m most proud of in my career is my reputation and the fact that when I say something, my fans trust that it’s the truth.” This trust, she argues, is directly threatened by the ability of AI to create convincing audio and video fakes.
The core of McBride’s concern lies in the potential misuse of AI to manipulate her artistic expression. She highlighted a disturbing scenario: “At some point you can’t discern what I say and what someone manipulates me saying and that’s terrifying.” The alteration of lyrics, particularly those addressing sensitive topics like domestic violence, could be weaponized to distort her message and undermine her advocacy work.
Willens echoed McBride’s urgency, stating that she dedicates “100% of her time” at the RIAA to addressing the rapidly evolving AI landscape. She noted the constant discussions with artists, managers, and lawmakers to navigate the complex challenges and devise effective strategies. Willens emphasized the need for some form of regulation around this technology.
Beyond artistic integrity, McBride warned of the immediate financial dangers posed by deepfake scams. She cited a specific instance where a fan nearly liquidated his assets after being contacted by an AI-generated “Martina McBride” soliciting funds. This illustrates the potential for sophisticated phishing schemes that exploit parasocial relationships between artists and their fans.
Willens pushed back against the notion that the music industry is inherently anti-AI: “The music industry has been on the front edge of technology for a while.” She pointed to the long-standing collaborative relationships between labels, artists, and streaming platforms like Apple Music and Spotify as evidence of the industry’s adaptability and engagement with technological innovation. The challenge, according to Willens, isn’t technological complexity but a lack of transparency from major AI developers.
“We can’t tell if they’re training on Martina’s music, for instance,” she stated. “And if she doesn’t know what they’re training on, she can’t enforce her rights.” This opacity hinders artists’ ability to control how their work is used to train AI models, potentially opening the door to unauthorized replication and commercial exploitation.
McBride also expressed deep concern for emerging artists, whose nascent careers could be irreparably damaged by deepfake manipulation: “If someone can invade that artist-fan bond and distort the story a young artist tells the world about who they are, careers could be lost before they truly get started.” The ability to create convincing fake content, particularly during an artist’s formative years, could undermine their authenticity and credibility.
Building on that point, McBride highlighted the potential for real-world repercussions: “If you lose your house because a deepfake of an artist says they need money and you never get that money back, that’s an angry situation.” She also voiced concern for her own physical safety, given the unpredictable reactions of people victimized by AI-driven scams carried out in her name.