America crafts an AI action plan

Tech platforms got a rare opportunity to present President Trump with a wishlist — and they're using DeepSeek's success to push for controversial policies


Programming notes: Platformer will be off Thursday to deal with my ongoing eye situation. 👀 Also: This is a column about AI. My boyfriend works at Anthropic, and I also co-host a podcast at the New York Times, which is suing OpenAI and Microsoft over allegations of copyright infringement. See my full ethics disclosure here.

Today, let’s talk about the Trump administration’s effort to develop an action plan for artificial intelligence in America. While the administration has solicited the public for ideas about how it should proceed, officials have already made it quite clear what kind of actions they’ll support — and which they are likely to forbid. 

One of President Trump’s first acts upon taking office again this year was to rescind President Biden’s executive order on AI, which had sought to place some light guardrails on tech companies while ordering federal agencies to improve their AI readiness. In doing so, Trump fulfilled a campaign promise: his platform complained that the executive order “imposes Radical Leftwing ideas on the development of” AI. (These radical left-wing demands included ideas like making tech companies test their models for safety problems before releasing them.)

In its place, the Trump administration has implemented a new policy that I have called “let’s see what happens.” Let any private actor attempt to build models that may someday enable the average user to create novel bioweapons, launch cyberattacks, or generate perfectly realistic deepfakes; and let them offer those models to anyone. “The A.I. future is not going to be won by hand-wringing about safety,” Vice President J.D. Vance told the Paris AI Action Summit last month. The primary risk that Vance articulated during his speech is that AI might exhibit values that differ from Republicans’.

Two weeks later, though, Trump’s Office of Science and Technology Policy solicited public comments for a forthcoming (and presumably more comprehensive) action plan. “The AI Action Plan will define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation,” the administration said in a statement. “With the right governmental policies, continued U.S. AI leadership will promote human flourishing, economic competitiveness, and national security.”

Those comments were due Saturday, and over the past few days various companies and advocates have been sharing theirs publicly. I’ve spent the past day or so reading them, and they’re a mixed bag. Their preferred action plans contain sensible suggestions for increasing energy capacity, regulating the flow of high-powered chips to our adversaries, and exploring ways that AI could improve the government’s ability to serve its citizens.

At the same time, the platforms are also clearly overjoyed at the opportunity to tell Trump not to impose any meaningful regulation on them. Their comments contain long wishlists of risks they do not want to be liable for; laws they do not wish to be subject to; and copyright issues they would like to be exempt from. And both OpenAI and Meta invoke the rise of DeepSeek to justify policies that they had been lobbying for before anyone had heard of the Chinese open-source upstart.

To OpenAI, DeepSeek offers an opportunity to push for a firm declaration that training large language models on copyrighted material constitutes fair use and should therefore be allowed. The company’s argument is that China is not going to respect American copyrights, and that requiring American companies to respect them would cause the United States to instantly lose the AI race.

“If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over,” OpenAI wrote in its submission. “America loses, as does the success of democratic AI.”

I am open to arguments that some uses of copyrighted material are fair use, but to suggest that America will immediately lose to China if it has to pay to license data strikes me as absurd. 

Meanwhile, Meta is worried that the administration might seek to restrict it from making powerful models available for free to (almost) anyone who wants to download or build on top of them. Most of its submission is devoted to advocating for “open source” AI — a term I put in quotes because the Open Source Initiative says the restrictions Meta places on its model disqualify it from using the term. (Many prefer “open weights,” referring to the model weights that anyone can download.)

“Open source models are essential for the U.S. to win the AI race against China and ensure American AI dominance,” Meta writes in its submission, which the company provided to me. “Export controls on open source models would take the U.S. out of that race, allowing Chinese companies like DeepSeek to set the AI standard on which the world builds and embedding authoritarian values in the global tech ecosystem.”

Meta does not mention that researchers have been caught using its open-weights Llama model to build applications for the Chinese military. But perhaps the democratic values embedded in Llama will serve as a bulwark against whatever China plans to do with it.

(For what it’s worth, Meta also calls for Trump to declare that training on copyrighted data is fair use, and to do so unilaterally via an executive order.)

Google, for its part, is worried that someday an AI company might be held liable for someone using one of its models to cause great harm. In the event that someone used a future version of Gemini to create a novel pathogen, Google does not want to be held responsible. The reason, as best I can understand it, is that Google did not know someone was going to use Gemini to create a novel pathogen.

“Nor should developers bear responsibility for misuse by customers or end users,” Google writes in its submission. “Rather, developers should provide information and documentation to the deployers, such as documentation of how the models were trained or mechanisms for human oversight, as needed to allow deployers to comply with regulatory requirements.”

On the whole, Vice President Vance’s warning not to engage in “hand-wringing about safety” seems to have been taken to heart by the platforms. The word “safety” appears just once in the submissions from OpenAI and Google, which run more than a dozen pages each; and not at all in Meta’s.

At Anthropic, which has traditionally paid more attention to AI safety issues, the word appears only in relation to the AI Safety Institute, which is currently being gutted. Instead, like a dog owner hiding a pill inside a spoonful of peanut butter, the company tries to engage the Trump administration on AI risks by couching them in the language of national security. 

Like OpenAI and Meta, it leans on the specter of DeepSeek to make its case.

“While DeepSeek itself does not demonstrate direct national security-relevant capabilities, early model evaluations conducted by Anthropic showed that R1 complied with answering most biological weaponization questions, even when formulated with a clearly malicious intent,” Anthropic wrote in its submission. “This highlights the crucial importance of equipping the U.S. government with the capacity to rapidly evaluate whether future models — foreign or domestic — released onto the open internet possess security-relevant properties that merit national security attention.”

It’s not only AI labs that are weighing in here. Ben Stiller, Mark Ruffalo, and Cynthia Erivo are among the more than 400 Hollywood stars who signed a submission calling on Trump not to roll back copyright protections in service of AI.

“There is no reason to weaken or eliminate the copyright protections that have helped America flourish,” they write. “Not when AI companies can use our copyrighted material by simply doing what the law requires: negotiating appropriate licenses with copyright holders — just as every other industry does.”

Two newspaper groups also published a series of editorials this week calling on Trump to reject the copyright arguments from the AI labs.

At the moment, though, it’s hard to see Trump handing a victory to the entertainment industry, or to journalists, both of which he has often described as enemies.

So what happens next?

On one hand, amid Trump’s full-bore assault on our democratic institutions, there’s a morsel of comfort to be found in the fact that the administration is even asking for public comment on a genuinely consequential issue.

On the other hand, the Trump administration has in many ways already made its plans quite clear. If a policy means that AI develops faster, and that America beats China, it can be part of the plan. If it gets in the way of that, it’s off the table.

In the near term, that means tech platforms are going to get much of what they asked for. And Americans are going to get a lot of things they didn’t. 

Sponsored

Don’t Let Foreign Influence Operations Infect Your AI

NewsGuard has exposed a major new threat to AI integrity: Foreign propaganda efforts designed to infect AI models with false claims undermining the U.S. Just one malign Russian actor—the Pravda Network—published 3.6 million articles in 2024 to influence AI responses on topics in the news.

In order to safeguard AI, NewsGuard is introducing FAILSafe (Foreign Adversary Infection of LLMs Safety Service).

Foreign Disinformation Narrative Feed: FAILSafe provides a continuously updated data stream of information about false narratives being spread by Russian, Chinese, and Iranian influence operations — with precise data about the narratives, language used to convey them, affiliations with specific influence operations, and where each narrative is being published. AI companies can use this data to ensure their systems do not inadvertently repeat these false claims in response to user prompts.

As AI adoption accelerates, foreign actors are working hard to distort LLM outputs. Don't let your AI be infected — protect it with FAILSafe.


Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and AI actions: casey@platformer.news.