
California governor rejects bill to create first-in-nation AI safeguards

California Gov. Gavin Newsom has vetoed a landmark bill aimed at establishing the nation’s first safeguards for large artificial intelligence models.

The decision is a major blow to efforts to rein in the homegrown industry, which is growing rapidly with little oversight. The bill would have established some of the first regulations on large-scale AI models in the country and paved the way for AI safety rules in other states, supporters said.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California needed to lead in AI regulation in the face of federal inaction, but that the proposal “may have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several House Democrats, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I don’t believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom announced Sunday that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to advance rapidly. It also would have provided whistleblower protections for workers.

The bill’s author, Democratic state Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”

“The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.

Wiener said the debate surrounding the bill has dramatically advanced the issue of AI safety, and he will continue to press the issue.

The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California had to act this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Backers of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require a high level of computing power and more than $100 million to build. No current AI model has reached that threshold, but some experts said that could change in the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. California’s proposal wasn’t as comprehensive as Europe’s regulations, but it would have been a good first step to put guardrails around the fast-growing technology, which raises concerns about job losses, misinformation, privacy invasions and automation, proponents said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, supporters of the measure said.

But critics, including former US House Speaker Nancy Pelosi, argued the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to reject the bill marks another win in California for big tech companies and AI developers, many of which spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other major AI proposals, which also faced growing opposition from the tech industry and others, died before last month’s legislative deadline. The bills would have required AI developers to label AI-generated content and prohibited discrimination from AI tools used to make employment decisions.

The governor said earlier this summer that he wants to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He touted California as an early adopter, noting the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar in the next legislative session,” Rice said. “So it’s not going away.”

The Associated Press and OpenAI have a license and technology agreement that allows OpenAI access to a portion of the AP’s text archives.

Copyright 2024 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
