How Meta's concessions to the right may have backfired

A new inquiry from the Federal Trade Commission suggests that apologizing for "censorship" may have only emboldened the company's conservative critics


Last month, Meta began to dismantle its content moderation systems in the hopes that doing so would win it favor with the incoming Trump administration and ward off any new regulations. Today, let’s see how that is working out for the company so far.

For the better part of a decade now, Republicans have criticized Meta for removing, down-ranking, or otherwise disfavoring posts expressing conservative viewpoints. On one hand, this is true: studies have found that conservative posts are removed in disproportionate numbers, but only because conservatives break platform rules more. On the other hand, Meta eventually did come to believe this was a problem: Facebook’s US user base leans conservative, and for years users’ top complaint to the company has been that their posts are too often removed in error.

In January, CEO Mark Zuckerberg threw in the towel. He announced that the company would no longer use the multibillion-dollar systems it had built to proactively detect and remove harms including hate speech, bullying, and misinformation, reserving them instead for higher-severity harms including terrorism and child exploitation. 

The company also went a step further in its efforts to court the right, implementing dehumanizing new speech guidelines for transgender people, killing off efforts to hire a more diverse workforce, and even deleting Pride themes from its messaging app.

The effort has borne fruit: Trump praised the company’s retreat from content moderation, and this month Vice President J.D. Vance strongly criticized European regulations — including the Digital Services Act and Digital Markets Act, which create significant compliance burdens for Meta. European regulators are already pulling back on AI safety initiatives in response, including one that would have created legal liability for companies like Meta when their artificial intelligence systems cause harm.

It says something about the depth of bipartisan disdain for Meta that the combination of praise from Trump and support from Vance abroad represents arguably the biggest lobbying success Meta has had in the past decade. But the company remains in the president’s dog house. “There is a lot more ass-kissing that needs to be done,” a senior Trump administration official told Rolling Stone last month. “He just needs to prove himself. It’s a good start, but he can’t just snap his fingers and make the past not happen.”

And in the meantime, Trump is keeping the pressure on. Today, the Federal Trade Commission announced that it will open a new inquiry into “censorship” on online platforms. Here’s Emily Birnbaum at Bloomberg:

The new leader of the US Federal Trade Commission is opening an inquiry into whether a wide range of online services from social media giants such as Meta Platforms Inc. to ride-sharing companies such as Uber Technologies Inc. “censor” users.

FTC Chair Andrew Ferguson said the agency is looking for the public to comment on “Big Tech censorship,” which he described as “un-American” and “potentially illegal.” [...] The agency’s request for public comment, which could lead to a formal probe, defines technology platforms as “social media, video sharing, photo sharing, ride sharing, event planning, internal or external communications, or other internet services” — a broad definition that could sweep in many types of companies. The request asks for input on how consumers have been “harmed” by platforms limiting their ability to share their ideas.

The First Amendment gives wide latitude to private companies to host (or not host) content as they see fit. Section 230 of the Communications Decency Act further gives private companies the right to remove content. Last year, a majority of Supreme Court justices agreed (albeit in a somewhat technical and nonbinding way) that platform content moderation is protected under the First Amendment.

And, as Birnbaum notes dryly, “there’s little precedent for applying consumer protection and antitrust laws against online platforms for removing certain accounts or posts.”

But just because the Supreme Court would likely side with future legal challenges to whatever the FTC has planned doesn’t mean the inquiry won’t be useful to Ferguson as a political project. His probe will no doubt turn up many instances of erroneous or (to conservatives) offensive content removals, and he can loudly promote them as evidence of Big Tech overreach to sympathetic audiences on Fox News, X, and Substack. It will help fuel conservative narratives that they are a persecuted minority, and put continued pressure on Meta and other platforms not to intervene in political affairs. 

In the meantime, it suggests that Meta's concessions to the right may have backfired: if you tell conservatives that you censored too much content, and that you're sorry for censoring them, you're giving incoming regulators a lot of material for their forthcoming "who exactly are you censoring" initiatives.

A signal irony of Ferguson’s inquiry is that disclosure about how platforms moderate content, and about how users are informed of enforcement actions, is a prominent requirement of … the Digital Services Act, that awful piece of EU tech regulation that Vance spent part of last week beating up on. The DSA requires that platforms like Meta publish detailed content moderation reports, offer users clear reasons why their posts were removed or accounts were suspended, and disclose the use and accuracy of any automated moderation tools.

And unlike Ferguson’s inquiry, which seems designed to unearth cherry-picked anecdotes that favor one political party, the DSA requires ongoing, systemic reporting about takedown rates, appeals, and enforcement actions across platforms.

Of course, there actually is one social media platform that has become notorious for removing content based on whims rather than policies. It has banned journalists, blocked links to rival social media and messaging platforms, and suppressed the publication of newsworthy documents related to the 2024 election. Just today, its CEO said he would “fix” the company’s crowdsourced moderation tool after its user base promoted ideas about Ukraine that he personally disagrees with.

I believe the law is on this platform’s side: it ought to be able to do all of that, if it wants to. But if Andrew Ferguson is determined to get to the heart of censorship on American social networks, and root out instances of policies that only ever favor one side or another, he can start with Elon Musk’s X. 

I will not be holding my breath.

As for the other platforms: they can look forward to another tumble through the right-wing noise machine. When Meta made its surrender to the right on speech issues last month, I felt confident that it would only lead the Trump administration to ask for even more. And now here’s the new FTC chairman, arriving right on cue with a timely reminder: in crony capitalism, there is always another ass to kiss.


Elsewhere in Meta:

On the podcast this week: Kevin and I discuss how Grok fits into Musk's larger ambitions. Then, Robinhood CEO Vlad Tenev comes by to explain why everyone should be able to invest in everything, preferably on his platform. Finally, Kevin walks me through his experiments in vibe coding.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Governing

Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and censorship complaints: casey@platformer.news. Read our ethics policy here.