California's controversial AI bill is on the verge of becoming law

California’s SB-1047 is on a path to the governor’s desk. Will it kill the AI boom? 

The California State Capitol (Wikipedia)

California's controversial bill to regulate the artificial intelligence industry, SB-1047, passed out of the Assembly Appropriations Committee on Thursday. If it passes the full Assembly by the end of the month, it will head to Gov. Gavin Newsom for his signature. Today let's talk about what it could mean for Meta, Google, Anthropic, and the other leading AI companies that call California home.

If an AI causes harm, should we blame the AI — or the person who used the AI? That's the question that runs through the debate over SB-1047, and through the larger debate over how to regulate the technology.

We saw a practical example of the debate this week when X released the second generation of its AI model, Grok, which has an image generation feature similar to OpenAI’s DALL-E. X is known for its laissez-faire approach to content moderation, and the new Grok is no exception. 

Users quickly put the text-to-image generator through its paces — and, as The Verge's Adi Robertson found, Grok will make just about anything. “Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns,” she writes, before citing several more examples of violent or edgy images that Grok created. (“Bill Gates sniffing a line of cocaine from a table with a Microsoft logo,” for example.)

One possible response to this is to get mad at Grok for creating the image. Another, conveyed with some deft sarcasm by one X user, is to suggest that we should instead get mad at the person who prompted it.

This kind of question is almost as old as the web. In the 1990s, online service providers like Prodigy and CompuServe faced lawsuits over potentially libelous material that their users had posted. Congress responded by including Section 230 in the Communications Decency Act, specifying that tech companies in most cases cannot be held legally liable for what their users post.

In that case, Congress decided that we should get mad at the person rather than the technology. And we've been fighting about it ever since.

Tech companies would love to see a kind of Section 230 for AI, shielding them from liability for what users do with their AI tools. But California's bill takes the opposite approach, putting the onus on tech companies to assure the government that their products won't be used to cause harm.

SB-1047 has some widely accepted provisions, such as adding legal protections for whistleblowers at AI companies, and studying the feasibility of building a public AI cloud that startups and researchers could use. 

More controversially, it requires makers of large AI models to notify the government when they train a model that exceeds a certain computing threshold (10^26 floating-point operations) and costs more than $100 million to train. It allows the California attorney general to seek an injunction against companies that release models the AG considers unsafe. And it requires that large models have a “kill switch” that lets developers shut them down in case of danger.

SB-1047 was introduced in February by Sen. Scott Wiener, D-San Francisco. Wiener had released an outline of the bill last September and says he has gathered feedback from the industry and other stakeholders ever since. The bill passed out of the Senate’s privacy committee in June, and since then tech companies have become increasingly vocal about the risks that they argue the bill presents to the nascent AI industry.

On Thursday, before the bill passed out of the Assembly's appropriations committee, the industry won some significant concessions. The bill no longer enables the AG to sue companies for negligent safety practices before a catastrophic event occurs; it no longer creates a new state agency to monitor compliance; and it no longer requires AI labs to certify their safety testing under penalty of perjury. (AI companies had warned loudly that the bill would result in startup founders being thrown in jail.)

The bill also no longer requires “reasonable assurance” from developers that their models won't cause harm. (Instead, they must only take “reasonable care.”) And amid widespread fears that the bill would chill the development of open-source models, it was amended to exempt anyone who spends less than $10 million fine-tuning an open-source model from its other requirements.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener told TechCrunch. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.” 

Despite those changes, the bill still faces significant criticism — and not all of it comes from the tech industry. Shortly before the bill's passage out of committee on Thursday, a group of eight Democratic members of Congress from California wrote a letter urging Gov. Gavin Newsom to veto the bill in its then-current form. The lawmakers, led by Rep. Zoe Lofgren, write that they support a wide variety of AI regulations — but that the bill goes too far in asking tech companies to predict how people will use their models.

“Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights,” they write. 

Moreover, they write, the bill could prompt AI companies to move out of California or stop releasing their AI models here. (Meta recently decided not to release multimodal AI models in Europe over similar rules, they note.)

Wiener’s bill also has some prominent backers, including two of the godfathers of AI — Geoffrey Hinton and Yoshua Bengio. Hinton and Bengio are among those who believe that we must put strong safeguards into place now before next-generation AI models arrive and potentially wreak havoc. 

But they have been countered by dozens of other academics, who published a letter arguing that the bill would interfere with their academic freedom and hamper research efforts.

Ultimately, I suspect lawmakers will regulate both AI and the people who use it. But I'm sympathetic to the members of Congress who find SB-1047 to be — if nothing else — premature. Today's models have shown no risk of creating catastrophic harm, and President Biden's executive order from last year should provide at least some defense against worst-case scenarios in the near term if next-generation models prove to be much more capable than today's.

And in any case, it seems preferable to regulate AI once, at the national level, than to encourage 50 states to experiment with their own risk models.

In the meantime, Lofgren notes, California is considering more than 30 other AI bills this term, including much more urgent and focused efforts to restrict the creation of synthetic, nonconsensual porn and to require disclosures when AI is used to create election ads.

“These bills have a firmer evidentiary basis than SB 1047,” Lofgren writes. And given the continued opposition to Wiener's bill, I suspect they also stand better odds of being signed into law by Newsom.

On the podcast this week: Kevin and I debate whether Elon Musk's attempts to get Trump elected are working. Then, former Microsoft CEO Steve Ballmer stops by to explain how he's trying to improve policy debates with USAFacts. And finally, it's time for This Week in AI.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and AI legislation: casey@platformer.news.