Can platforms outsmart Texas's social media law?
One idea for how they could try. PLUS: Announcing Platformer's new managing editor, Zoe Schiffer
Today, let’s talk about the effort to rewrite decades of First Amendment jurisprudence in Texas, in ways that could flood tech platforms with garbage. There’s still hope that the Supreme Court could rule the state’s controversial HB20 unconstitutional — but if it doesn’t, there’s some interesting new thinking around how platforms might be able to get around it anyway.
I’ve written a few times here about HB20, which allows the state’s attorney general and average citizens to sue platforms with more than 50 million users any time they believe the platform removed a post due to the viewpoint it expresses. In May, the US Supreme Court temporarily blocked the law from taking effect while it was appealed. But just over a week ago, the Fifth Circuit Court of Appeals overturned a lower court’s ruling and allowed the law to take effect. The case is now almost certainly headed for the Supreme Court.
By now there has been a lot of good commentary about the Fifth Circuit’s decision: about its obvious factual errors; its willful misunderstanding of Section 230; and its stated belief that Nazi and terrorism content posted to social networks is a “hypothetical” issue. (Mike Masnick did yeoman’s work on this front in a piece for the Daily Beast.)
And there have been several probing legal analyses, delivered in epic-length Twitter threads, pointing out the various confusions and contradictions within the law and the ruling that upheld it: the sudden conservative antipathy toward private property rights, for example, and the potentially unconstitutional nature of the law’s transparency requirements. (One terrible aspect of the Texas law among many is that it would prevent platforms from updating their rules more than once every 30 days, even if something horrible starts happening on a platform in the meantime that requires a policy adjustment.)
There have also been some fairly dramatic twists. Texas told the court that the law applied only to Meta, YouTube, and Twitter; in fact, platforms with more than 50 million monthly users in the United States include Wikipedia, Quora, Microsoft Teams, and iMessage. No one really knows what it would mean to force Wikipedia to carry every single political viewpoint; few are excited to find out.
If the Supreme Court upholds the law, it’s unclear how big tech platforms would comply with it; so far they have mostly declined to comment on the subject. It seems hard to imagine a world where Facebook has to leave up a bunch of pro-Nazi posts for fear of being sued by a random Texas citizen, and yet the law seems to grant it no discretion to do otherwise.
And yet despite the high stakes here, I find myself with relatively little to say on the subject. Not because it isn’t a huge deal — it is — but because none of the analysis or commentary really matters in a world where federal appeals courts are making laws up off the top of their heads. It has been less than 18 months since Justice Clarence Thomas essentially begged states to write laws like the ones passed in Texas and Florida this year; now those laws have landed in his lap, and all that remains is to see whether he can persuade four others that large technology platforms aren’t entitled to editorial discretion or any of the other speech rights that private corporations have long enjoyed.
That said, there are at least two ideas worth considering as we await the results of this potential free-speech calamity. One was advanced by the platform regulation scholar Daphne Keller on the first episode of Moderated Content, a new podcast from content moderation expert Evelyn Douek at Stanford. On the show, Keller wonders whether platforms might be able to get around the most vexing parts of the Texas law by allowing users to opt in to content moderation: showing them all the abuse and hate speech and everything else by default, but letting them click a button that restores the community guidelines and the regular platform experience.
“The middle ground I'm most interested in … is to flood Texas with garbage by default, but give users an opt out to go back to the moderated experience,” Keller said. “And there's some language in the statute that kind of, sort of, arguably makes that okay. And it sort of illustrates the problem with the Texas law by flooding everyone with garbage by default, while avoiding a bunch of the bad consequences of actually flooding everybody with garbage permanently.”
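To make the mechanics of that concrete, here is a minimal sketch of how such an opt-out might work, assuming a hypothetical feed-building function; none of this reflects any platform’s actual code. The only real logic is a per-user flag: unmoderated results by default, with guideline filtering restored for anyone who opts back in.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    violates_guidelines: bool  # verdict from the platform's normal moderation review

@dataclass
class User:
    # Under this reading of HB20, unmoderated is the default; the user has to
    # click a button to restore the regular, guideline-enforced experience.
    wants_moderated_experience: bool = False

def build_feed(user: User, posts: list[Post]) -> list[Post]:
    """Return a feed: everything by default, filtered only if the user opted back in."""
    if user.wants_moderated_experience:
        return [p for p in posts if not p.violates_guidelines]
    return posts  # "flood Texas with garbage by default"

# Example: the same set of posts yields two different feeds.
posts = [Post("ordinary post", False), Post("rule-breaking post", True)]
print(len(build_feed(User(), posts)))                                 # 2: unmoderated default
print(len(build_feed(User(wants_moderated_experience=True), posts)))  # 1: opted back in
```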
I hope it doesn’t come to that. But if platforms really are put in this position, at the very least it seems like an idea worth exploring. Among other things, it would provide some valuable signal about an idea that I think very few people understand: that there is market demand for content moderation, and that a significant majority of users want platforms to remove harassment, abuse, anti-vaxx material, and so on.
Meanwhile, average platform users are thinking through solutions of their own. Over at TechDirt, Masnick has the amazing story of the r/PoliticalHumor subreddit, which now requires every comment to include the phrase "Greg Abbott is a little piss baby" (referring to the Texas governor) or be deleted. Redditors are also directing users to a page where people can file a complaint with the Texas Attorney General, Ken Paxton, “asking him to investigate whether the deletion of any comments that don’t claim that his boss, Governor Greg Abbott, is ‘a little piss baby’ is viewpoint discrimination in violation of the law.”
The idea is to force Paxton to file a lawsuit over the rule that all comments must call the governor a piss baby. I imagine he will decline to do so, but if nothing else the stunt demonstrates the absurdity of passing a law requiring “viewpoint neutrality” in the first place.
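For what it’s worth, the subreddit’s rule amounts to a one-line filter. The sketch below is a hypothetical Python rendering of it (the real rule is presumably enforced with Reddit’s own moderation tools rather than code like this): any comment that omits the required phrase gets removed.

```python
REQUIRED_PHRASE = "greg abbott is a little piss baby"

def should_remove(comment_text: str) -> bool:
    """Flag for removal any comment that fails to include the required phrase."""
    return REQUIRED_PHRASE not in comment_text.lower()

# The rule in action: missing the phrase means removal; including it does not.
assert should_remove("I disagree with this meme.")
assert not should_remove("Good one. Also, Greg Abbott is a little piss baby.")
```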
When I wrote about HB20 in May, I said that “the future of content moderation feels like it’s all about to come down to a coin flip, and I’m not sure anyone is fully prepared for what could come next.” Now that coin is up in the air, and I’m not feeling any more ready for the consequences than I did four months ago.
Announcing: Zoe Schiffer
Last week I told you that, as part of some changes to Platformer in year three, I hired someone. Today, I’m thrilled to tell you who that person is: Zoe Schiffer, a senior reporter at The Verge, is coming on board as Platformer’s managing editor.
I began working with Zoe in 2019, when The Verge hired her to help me put together the predecessor to this newsletter each day. Over the next year-plus, we developed an excellent rhythm: Zoe summarized the day’s links before going to work on her own scoops, and I’d use the extra time to put more reporting and analysis into the newsletter. When I left to start this newsletter, losing Zoe as a partner was one of the hardest adjustments I had to make.
In the years since, though, Zoe has demonstrated time and again that she’s one of the most compelling reporters in tech: delivering path-breaking scoops about labor issues inside Apple, Google, and Netflix, among other companies. Recently we collaborated on stories about Twitter’s canceled OnlyFans competitor and discarded plans to combat extremism.
That’s why I’m so excited to team up again. As managing editor, Zoe will have a hybrid editorial and operational role. She’ll be collaborating with me on each day’s newsletter, reporting out stories of her own, and working behind the scenes to grow Platformer’s audience. And while I’ll continue to take the lead on the main column each week, expect to see Zoe’s byline here as well.
Like so much of what I’ve tried in this space over the past two years, hiring a partner is an experiment. My hope is that it will lead to more scoops, sharper analysis, and a bigger audience for what we do. (To that end, I’m also excited to share that Platformer will continue to syndicate a weekly column in The Verge.)
Zoe starts next week. What should we investigate? Reply to this email with your ideas.
And if you haven’t yet subscribed, there’s never been a better chance to signal your commitment to independent journalism:
Governing
- The United States and TikTok have loosely agreed to a deal that would make TikTok’s data more secure without requiring ByteDance to sell the app, but hurdles remain before a final agreement is reached. (Lauren Hirsch, David McCabe, Katie Benner and Glenn Thrush / New York Times)
- TikTok faces a fine of up to $28.9 million in the United Kingdom after its data watchdog “found the company may have breached data protection rules by failing to sufficiently protect children’s data.” (Stephanie Bodoni / Bloomberg)
- A second Twitter whistleblower revealed herself, giving an interview about why she told the Jan. 6 committee that Twitter kept Donald Trump on the platform, despite his violations of its rules, because his constant use of the service made it more powerful. The whistleblower, Anika Collier Navaroli, is now working on a fellowship at Stanford studying the impact of hate-speech moderation. (Drew Harwell / Washington Post)
- A look at the jousting between Twitter and Elon Musk in the run-up to their October trial, as Musk pushes endlessly to expand the scope of the case and discovery while the judge attempts to rein him in. (Erin Mulvaney and Alexa Corse / Wall Street Journal)
- More than 70 lawsuits have been filed this year against Meta, Snap, TikTok and Google, alleging that the companies should be held liable for harms suffered by children despite Section 230 protections. (Joel Rosenblatt / Bloomberg)
- A coalition of 60 civil rights organizations blasted Meta, Twitter, TikTok and YouTube for failing to limit the spread of the Big Lie and related election misinformation in the months leading up to the midterms. (Naomi Nix / Washington Post)
- California Gov. Gavin Newsom vetoed a bill that would have created a licensing regime for businesses that seek to facilitate crypto transactions. (Nikhilesh De / CoinDesk)
- An analysis of TikTok found that an account set up to represent a 16-year-old girl searching for diet pills was shown promotions for prescription drugs, even though they aren’t supposed to be shown to people younger than 18. (Mia Sato / The Verge)
- A warning from the Food and Drug Administration that it’s dangerous to cook chicken in Nyquil led to a huge spike in searches for videos related to the subject. Interesting case of the government trying to get out in front of a potentially harmful trend but possibly raising the wrong kind of awareness about it. (Kelsey Weekman / BuzzFeed)
- The Senate advanced a bill that would let news organizations band together to negotiate with Google and Facebook over revenue sharing. (Diane Bartz / Reuters)
- The Pentagon launched an effort to determine whether cryptocurrencies pose national security risks. (Tory Newmyer / Washington Post)
- Videos promoting incel ideology have been viewed more than 24 million times on YouTube, according to a new report warning about the rising tendency toward violence in a popular incel forum. (Taylor Lorenz / Washington Post)
- A look at Germany’s use of anti-hate speech laws to prosecute people who post inflammatory comments on social networks. While authorities prosecute only a small fraction of cases, they insist it has been effective in chilling hate speech. (Adam Satariano and Christopher F. Schuetze / New York Times)
Industry
- Meta demonstrated a way to track users’ bodies without extra tracking sensors; instead, the company predicts the body’s position using machine learning. Wild! (David Heaney / UploadVR)
- A New York-based artist was granted the first known US copyright registration for a graphic novel that was partially generated by text-to-image AI. (Benj Edwards / Ars Technica)
- Google’s head of public policy in India quit after five months, in case you were wondering how much fun that job is. (Aditya Kalra / Reuters)
- NFT marketplaces say they won’t enable trading in their iOS apps due to Apple’s insistence that it take a 30 percent cut of all trades. (William Gallagher / Apple Insider)
- The senior vice president of global creators at Twitch announced she was leaving the company the same day her boss published a now-notorious blog post announcing that the company would cut top creators’ pay by 20 percent, blaming the move partly on high AWS bills. (Cecilia D'Anastasio / Bloomberg)
- Amazon mistakenly told some employees who were promoted that their new compensation was much higher than it actually was, saying it had miscalculated by using an older, higher stock price for the company. (Colin Lodewick / Fortune)
- A look at the growth of TikTok accounts that condense Hollywood films into minutes using translation tools, dubbing software, and VPNs, generating millions of views for Chinese creators. (Viola Zhou / Rest of World)
- Swizz Beatz and Timbaland settled their lawsuit with Triller after suing the company last month for $28 million. (Chris Willman / Variety)
Those good tweets
Talk to me
Send me tips, comments, questions, and assignments for Zoe and me: casey@platformer.news.