RAISE Act: Trump, AI Super PAC Oppose New York’s Proposal

New York is becoming a key battleground for AI regulation. Congressional candidate Alex Bores, a supporter of the Responsible AI Safety and Education (RAISE) Act, is being targeted by a super PAC, “Leading the Future” (LTF), backed by tech industry figures. LTF advocates for federal AI laws over state-level regulations, arguing that strict rules could stifle innovation and harm U.S. competitiveness. The RAISE Act, which has passed the state legislature and awaits the governor’s decision, would mandate safety protocols and incident reporting for large AI developers, prompting debate over how to balance innovation with risk mitigation and potentially setting a national precedent.

While Silicon Valley remains the epicenter of technological innovation, New York is increasingly emerging as a key battleground in the burgeoning debate over artificial intelligence regulation. The state is currently grappling with the complex challenge of fostering innovation while mitigating potential risks associated with advanced AI systems.

The recent targeting of New York congressional candidate Alex Bores by “Leading the Future” (LTF), a bipartisan super PAC, underscores the intensity of this debate. Bores, a vocal proponent of AI safety legislation, is a champion of the Responsible AI Safety and Education (RAISE) Act. This proposed legislation would mandate that large AI entities disclose safety protocols and report significant safety incidents, a move that has drawn the ire of some industry stakeholders.

“They don’t want there to be any regulation whatsoever,” Bores told CNBC’s “Squawk Box,” highlighting the core contention. “What they’re saying is the fact that you dared step up and push back on us at all means we need to bury you with millions and millions of dollars.” This statement speaks volumes about the perceived stakes in the ongoing regulatory skirmish.

LTF, backed by over $100 million in funding, advocates for a “bold, forward-looking approach to AI,” aligning with the perspective that federal AI laws should supersede state-level regulations. This preemptive approach aims to prevent what some perceive as overly restrictive or inconsistent regulations from stifling innovation, particularly in states like California and New York. The underlying economic argument centers on the belief that excessive regulation could hinder the United States’ competitiveness in the global AI landscape.

The super PAC’s backers include prominent figures in the tech industry, such as OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, and venture capital firm Andreessen Horowitz, along with AI startup Perplexity. This roster highlights the alignment of interests among established and emerging players in the AI ecosystem, all advocating for a regulatory environment conducive to innovation.

“LTF and its affiliated organizations will oppose policies that stifle innovation, enable China to gain global AI superiority, or make it harder to bring AI’s benefits into the world, and those who support that agenda,” the group stated, framing the issue as a matter of national competitiveness and global leadership.

Bores, a New York State Assembly member since 2023 and a former Palantir employee, launched his congressional campaign for New York’s 12th district in October. His background provides a unique perspective, straddling both the technological and legislative realms, allowing him to understand the nuances of AI development and the potential implications of regulation.

As an assemblyman, Bores co-sponsored the RAISE Act, a commitment to AI safety informed by firsthand understanding of both the technology’s potential benefits and its risks.

“I’m very bullish on the power of AI, I take the tech companies seriously for what they think this could do in the future,” Bores said. “But the same pathways that will allow it to potentially cure diseases [will] allow it to, say, build a bio weapon. And so you just want to be managing the risk of that potential.”

Assembly member Alex Bores speaks during a press conference on the Climate Change Superfund Act at Pier 17 on May 26, 2023 in New York City. (Michael M. Santiago | Getty Images)

The RAISE Act, having passed both the New York State Assembly and Senate in June, awaits a decision from Democratic Gov. Kathy Hochul by the start of the 2026 session. The bill’s fate hangs in the balance, potentially setting a precedent for AI regulation across the country.

LTF’s leaders, Zac Moffatt and Josh Vlasto, announced their intention to invest significant resources in opposing Bores’ congressional bid. They accuse him of promoting “ideological and politically motivated legislation” that would “handcuff” the U.S. and its ability to be a leader in AI. The move underscores the high stakes involved, with the AI industry prepared to actively influence political outcomes to shape the regulatory landscape.

Moffatt and Vlasto argue that the bill exemplifies the “patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership.” This narrative resonates with concerns about maintaining U.S. competitiveness in the face of rising global technological powers.

Bores, in turn, has capitalized on the attention generated by LTF’s announcement, leveraging it as a fundraising opportunity by urging supporters to donate to his campaign. This tactic can be particularly effective, as it allows a candidate to portray themselves as a challenger against powerful interests.

“I am someone with a master’s in computer science, two patents, and nearly a decade working in tech,” Bores stated. “If they are scared of people who understand their business regulating their business, they are telling on themselves.” This response frames the opposition as being fearful of informed oversight, further bolstering his position with voters concerned about industry accountability.

What is the RAISE Act?

The RAISE Act targets large AI companies, including Google, Meta, and OpenAI, that have invested over $100 million in computational resources to train advanced models. This threshold aims to focus regulation on the entities that are deploying the most powerful and potentially impactful AI systems.

The act mandates that covered companies create, publish, and adhere to safety and security protocols, subject to periodic updates. Violations could result in penalties of up to $30 million, designed to create significant financial incentives for compliance.

Furthermore, the bill requires the implementation of safeguards to prevent AI models from contributing to activities that could cause “critical harm,” such as assisting in the development of chemical weapons or engaging in large-scale automated criminal activities. “Critical harm” is defined as incidents causing death or serious injury to 100 or more people, or damages exceeding $1 billion.
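To make the bill’s numeric triggers concrete, here is a minimal, purely illustrative Python sketch that encodes the thresholds described above: the $100 million compute-spend test for coverage, the $30 million penalty ceiling, and the two-pronged “critical harm” definition. The function and field names are hypothetical, not drawn from the legislation or any real compliance tool.

```python
from dataclasses import dataclass

# Thresholds as described in the RAISE Act provisions above (illustrative only).
COVERED_TRAINING_SPEND_USD = 100_000_000    # compute spend that brings a developer in scope
MAX_PENALTY_USD = 30_000_000                # upper bound on penalties for violations
CRITICAL_HARM_CASUALTIES = 100              # deaths or serious injuries to 100+ people
CRITICAL_HARM_DAMAGES_USD = 1_000_000_000   # damages exceeding $1 billion

@dataclass
class Incident:
    casualties: int     # people killed or seriously injured
    damages_usd: float  # estimated damages in U.S. dollars

def developer_is_covered(training_spend_usd: float) -> bool:
    """True once a developer's compute spend exceeds the $100 million threshold."""
    return training_spend_usd > COVERED_TRAINING_SPEND_USD

def is_critical_harm(incident: Incident) -> bool:
    """An incident is 'critical harm' if it meets either statutory prong."""
    return (incident.casualties >= CRITICAL_HARM_CASUALTIES
            or incident.damages_usd > CRITICAL_HARM_DAMAGES_USD)

if __name__ == "__main__":
    print(developer_is_covered(150_000_000))                          # True: in scope
    print(is_critical_harm(Incident(casualties=3, damages_usd=2e9)))  # True: damages prong
```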

A key provision prohibits the release of AI models that pose an “unreasonable risk of critical harm,” a point fiercely contested by opponents. The goal is to head off potential AI-linked catastrophes before they happen rather than respond to them after the fact.

“That’s designed to basically avoid the problem we had with the tobacco companies, where they knew that cigarettes caused cancer but denied it publicly and continued to release their products,” Bores said, likening undisclosed AI risks to the tobacco industry’s decades of public denial.

The RAISE Act also requires disclosure of notable safety incidents, such as the theft of a model by a malicious actor. Developers would be compelled to report such incidents within 72 hours of discovery, promoting transparency and accountability.
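As a rough illustration of the 72-hour window, the hypothetical snippet below computes the latest permissible reporting time from the moment an incident is discovered; again, this is a sketch of the rule as described, not a real compliance mechanism.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # incidents must be reported within 72 hours of discovery

def report_deadline(discovered_at: datetime) -> datetime:
    """Latest time a safety incident may be reported under the rule described above."""
    return discovered_at + REPORTING_WINDOW

# Example: an incident discovered at noon UTC must be reported by noon three days later.
discovered = datetime(2025, 11, 10, 12, 0, tzinfo=timezone.utc)
print(report_deadline(discovered))  # 2025-11-13 12:00:00+00:00
```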

“We just saw two weeks ago, Anthropic talk about how China used their model to do a cyber attack on U.S. government institutions and our chemical manufacturing plants,” Bores said, pointing to exactly the kind of incident the disclosure requirement is meant to surface.

According to Bores, the initial draft of the bill was circulated among “all of the major developers” for feedback, part of a collaborative effort to refine the legislation and address potential concerns; input from the companies building these systems, he argues, makes the bill more effective than it would otherwise be.

The text was amended in May and June, with those conversations with AI developers shaping the final version of the bill.

U.S. President Donald Trump arrives on the South Lawn of the White House on November 22, 2025 in Washington, DC. (John McDonnell | Getty Images)

LTF’s involvement in the Bores race highlights the larger dispute over whether AI should be regulated at the state or federal level. The debate essentially centers on striking a balance between fostering innovation and safeguarding against potential risks.

Advocates of federal preemption argue that a patchwork of differing state laws would be too burdensome for companies operating nationwide. Bores disagrees, contending that the federal government should work with the states to solve the problem rather than simply block them.

“What’s being debated right now is, should we stop the states from making any progress before the feds have solved the problem? Or should we actually work together to have the federal government solve the problem?” Bores said.

Beyond New York, states including California, Colorado, and Illinois have AI laws set to take effect early next year.

President Donald Trump has advocated for federal AI standards, framing them as essential to keeping the U.S. at the forefront of the technology.

“Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World, but overregulation by the States is threatening to undermine this Major Growth ‘Engine,'” Trump wrote. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race.”

Earlier this year, a provision in Trump’s “One Big Beautiful Bill Act” would have imposed a 10-year moratorium on state-level AI laws. That effort failed when the provision was dropped from the final legislation, but it has recently been revived.

For now, states are stepping up and moving quickly where federal action has stalled, even as pressure builds for a single federal AI standard down the road.


