
California Gov. Newsom Vetoes Hotly Debated AI Safety Bill

Newsom Says Bill Not 'Flexible' Solution to Curb Catastrophic Risks
California Gov. Gavin Newsom during a January 2024 press conference. (Image: Shutterstock)

California Gov. Gavin Newsom on Sunday vetoed a hotly debated artificial intelligence safety bill that would have pushed developers to implement measures to prevent "critical harms." The bill "falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks," Newsom said.


Authored by Democratic Sen. Scott Wiener, the legislation would have applied to AI models that cost at least $100 million to develop.

Wiener said the veto was a "setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet." Debates around the bill had "dramatically advanced the issue of AI safety on the international stage," he added.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would have required affected AI companies to test the safety of their products before releasing them to the public, and it would have allowed the state attorney general to sue developers for serious harms caused by those models. As the bill moved through the statehouse, amendments removed language that would have created a new government agency to guide AI safety. The bill also would have set up a public cloud computing cluster to allow startups and researchers to contribute to responsible AI development, and created whistleblower protections for employees of frontier AI laboratories.

Despite attempts in the California Senate to mollify the tech industry, the legislation drew sustained criticism from the likes of Google, Meta and OpenAI, all of which have made voluntary commitments to develop safe AI.

Anthropic, a cautious supporter of the bill whose suggested amendments were mostly incorporated into the final version, had not reacted to the Democratic governor's veto as of publication.

Newsom called the bill "well-intentioned" but said it did not take into account whether an AI system would be deployed in a high-risk environment, involve critical decision-making or use sensitive data. He said the bill applied stringent standards to even the most basic functions, so long as they were deployed by a large system. "I do not believe this is the best approach to protecting the public from real threats posed by the technology," he said.

Former Democratic House Speaker Nancy Pelosi praised Newsom after the announcement for "recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia - not big tech - to dominate."

In the past month, Newsom has signed 17 other bills focused on AI regulation and deployment, with guidance from "the godmother of AI," Fei-Fei Li, who had reportedly said the Wiener bill would "harm our budding AI ecosystem."

Newsom said he had asked generative AI experts, including Dr. Li, Tino Cuéllar of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes of the College of Computing, Data Science, and Society at UC Berkeley, to help California develop "workable guardrails" focused on "developing an empirical, science-based trajectory analysis." He also asked state agencies to expand their assessment of the risks of potential catastrophic events related to AI use.

"We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment. We will thoughtfully - and swiftly - work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good," he said.

Among the AI bills Newsom has signed is SB 896, which requires California's Office of Emergency Services to expand its work assessing AI's potential threats to critical infrastructure. The governor also directed the agency to undertake the same risk assessment with water infrastructure providers and the communications sector.

Home to 32 of the world's 50 leading gen AI entities, California has been at the national forefront of tech regulation, enacting several measures in the past year to crack down on the misuse of AI, starting with an executive order. Earlier this month, Newsom signed five bills focused on curbing the impact of deepfakes.

Newly enshrined California laws also address the handling of personal information by AI systems, mandate transparency in usage and establish protections against deceptive or harmful AI-generated content. The laws also focus on AI literacy in education and create frameworks for responsible AI use in state communications, critical infrastructure and healthcare decisions.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



