Can AI regulation survive the First Amendment?

California’s first effort to regulate AI slop hit a wall. What now? 

A still from a deepfake of Vice President Kamala Harris that was retweeted by Elon Musk. (X)

I.

In recent decades, government efforts to regulate the tech industry have lagged significantly behind the growth of new technologies. But the rise of artificial intelligence, which has coincided with AI developers loudly warning that their products carry significant risks of harm, has prompted lawmakers to work more quickly than usual.

Nowhere has this been more true than in California.

The state where Google, OpenAI, Anthropic and others are headquartered has been in the news this week for a bill that did not become law: Senate Bill 1047, which would have required safety testing for large language models that cost more than $100 million to train and created legal liability for harms that emerge from those models.

The bill drew strong opposition from most of the big AI companies, along with a host of venture capitalists and other industry players. And while the version that reached his desk was heavily watered down from earlier drafts, Gov. Gavin Newsom vetoed it on Sunday. (Newsom said he vetoed it not because it was too strong but because it was too weak, though as far as I can tell almost no one thinks he actually believes this.)

My inbox filled up this week with messages from the bill’s advocates, who accused Newsom of being reckless and exposing Californians to all manner of harms. But I suspect Newsom would sign another version of this bill, perhaps as early as next year — particularly if the next generation of models meaningfully increases the risk of the harms that this year’s bill attempted to mitigate.

I say that because, despite vetoing SB 1047, Newsom signed 18 other AI regulations into law. Taken together, they may be the most sweeping package of legislation we have seen so far intended to regulate the misuses of generative AI.

The laws, some of which don’t take effect until 2026, include:

Newsom also announced an initiative to build further guardrails around the use of AI that will be led by Fei-Fei Li, an AI pioneer and startup founder, along with an AI ethicist and a dean at the University of California, Berkeley.

These bills take important steps to address harms taking place right now — not just in California, but all over the world. But in some cases, they address those harms by restricting free expression — and it’s here that the state may soon find that it has overstepped.

II.

California’s newly signed laws also seek to restrict the use of AI in ways that could deceive voters during an election. One law requires political advertisers to disclose the use of generative AI in their ads. Another requires social platforms to remove or label deepfakes related to the election, as well as to create a channel for users to report such deepfakes to the platform. And a third would punish anyone who distributes deepfakes or other disinformation intended to deceive voters within 60 days of an election.

On Wednesday, though, a federal judge blocked that third law — Assembly Bill 2839 — from taking effect. Here’s Maxwell Zeff at TechCrunch:

Shortly after signing AB 2839, Newsom suggested it could be used to force Elon Musk to take down an AI deepfake of Vice President Kamala Harris he had reposted (sparking a petty online battle between the two). However, a California judge just ruled the state can’t force people to take down election deepfakes – not yet, at least. [...]

Perhaps unsurprisingly, the original poster of that AI deepfake – an X user named Christopher Kohls – filed a lawsuit to block California’s new law as unconstitutional just a day after it was signed. Kohls’ lawyer wrote in a complaint that the deepfake of Kamala Harris is satire that should be protected by the First Amendment.

US District Judge John Mendez agreed, issuing a preliminary injunction blocking the law from taking effect, except for a plank of the bill that requires audio-only deepfakes to carry disclosures, which the plaintiff did not challenge.

Mendez wrote in his decision:

Supreme Court precedent illuminates that while a well-founded fear of a digitally manipulated media landscape may be justified, this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment. YouTube videos, Facebook posts, and X tweets are the newspaper advertisements and political cartoons of today, and the First Amendment protects an individual’s right to speak regardless of the new medium these critiques may take. Other statutory causes of action such as privacy torts, copyright infringement, or defamation already provide recourse to public figures or private individuals whose reputations may be afflicted by artificially altered depictions peddled by satirists or opportunists on the internet…

The record demonstrates that the State of California has a strong interest in preserving election integrity and addressing artificially manipulated content. However, California’s interest and the hardship the State faces are minimal when measured against the gravity of First Amendment values at stake and the ongoing constitutional violations that Plaintiff and other similarly situated content creators experience while having their speech chilled.

I’m sympathetic to the judge’s argument here. The “deepfake” at issue strikes me as obvious satire. And the First Amendment should apply even in cases where the satire is less apparent.

But as Rick Hasen writes at Election Law Blog, there may have been a path forward to allow such content on social networks so long as it is labeled as being AI-generated. 

“The judge’s opinion here lacks nuance and recognition that a state mandatory labeling law for all AI-generated election content could well be constitutional,” writes Hasen, a law professor at UCLA. “I fear that the judge’s meat-cleaver-rather-than-scalpel-approach, if upheld on appeal, will do some serious harm to laws that properly balance our need for fair elections with our need for robust free speech protection.”

Perhaps an appeals court will find more nuance here. But the past half-decade or so of state-level tech regulation has repeatedly walked into the same buzzsaw. Lawmakers accuse social platforms of causing harm, and pass laws seeking to blunt those harms — only to be told time and time again by judges that their laws are unconstitutional. It happened with efforts to ban TikTok; it happened with efforts to force visitors to porn websites to verify their age; it happened with efforts to regulate social networks’ recommendation algorithms.

Everyone has an interest in elections taking place in an environment where voters can easily tell fact from fiction. We also have an interest in being able to easily remove malicious deepfakes of ourselves from social platforms, or otherwise prevent our digital likenesses from being used in harmful ways.

In the coming year, I expect many states to follow California’s lead, and seek to pass similar legislation that prevents these obvious AI harms from continuing to occur. When they do, though, they may learn the same lesson California just did — that some of the most obvious ways to protect people from AI may not be constitutional.

On the podcast this week: Kevin and I talk through California's new AI regulations. Then, The Information's Julia Black joins to talk about Silicon Valley's growing fascination with fertility tech — and baby-making in general. And finally, some fun updates on OpenAI, Reddit, and Sonos.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Governing

Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and constitutional AI regulations: casey@platformer.news.