[Photo caption] California Gov. Gavin Newsom said the bill’s approach was too broad, applying stringent rules even to basic AI functions. | Gage Skidmore | CC BY-SA 2.0
California Governor Gavin Newsom vetoed the landmark AI safety bill SB 1047 on Sunday, rejecting legislation that would have imposed strict regulations on AI developers, including OpenAI, Google, and Microsoft.
Newsom said the bill’s approach was too broad, applying stringent rules even to basic AI functions. Instead, he announced a plan to develop AI safety guidelines in collaboration with experts such as Fei-Fei Li, widely known as the godmother of AI.
SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed to mitigate risks from AI, such as the potential manipulation of large language models (LLMs) for harmful purposes like disrupting power grids or creating weapons.
LLMs underpin generative AI tools, digital assistants, and chatbots such as ChatGPT and Gemini.
The AI safety bill also included whistleblower protections and transparency requirements for AI tech companies.
Proponents, including Elon Musk, argued the bill would have improved AI accountability, while critics claimed it would harm California’s tech industry and hinder innovation.
Newsom’s veto is seen as a victory for big tech, which had lobbied against the bill, and marks a setback for those pushing for AI regulations.