Why Substack is at a crossroads
Some thoughts on platforms and Nazis
Most days, this column looks at controversies unfolding on other tech platforms. Today, let’s take a look at the one that hosts this publication: Substack.
On Tuesday, I told subscribers that we are considering leaving the platform based on the company’s recent statement that it would not demonetize or remove openly Nazi accounts. After Jonathan M. Katz’s November article investigating extremism on the platform in The Atlantic, 247 Substack writers published an open letter asking the company to clarify its policies.
A few days later, Substack co-founder Hamish McKenzie responded in a blog post. While the platform would remove publications that are found to make credible threats of violence — a high bar — Substack would otherwise leave them alone, he said. “We don't think that censorship (including through demonetizing publications) makes the problem go away — in fact, it makes it worse,” he wrote. “We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power.”
McKenzie’s perspective — that sunlight is the best disinfectant, and that censorship backfires by making dangerous ideas seem more appealing — is reasonable for many or even most circumstances. It is a point of view that informs policies at many younger, smaller tech platforms, owing both to the techno-libertarian streak that runs through Silicon Valley founders and to the fact that a hands-off approach to content moderation is easier and less expensive than the alternatives.
There was a time when even Facebook, which has more restrictive policies than Substack does across the board, permitted users to deny the Holocaust. CEO Mark Zuckerberg occasionally cited this policy as evidence of the company’s commitment to free speech, even though it sometimes got him into trouble.
Then, in 2020, Facebook reversed course: going forward, it said, it would remove Holocaust denial from the platform. In doing so, Zuckerberg said, Facebook was seeking to keep pace with the changing times.
Here’s Sheera Frenkel in the New York Times:
In announcing the change, Facebook cited a recent survey that found that nearly a quarter of American adults ages 18 to 39 said they believed the Holocaust either was a myth or was exaggerated, or they weren’t sure whether it happened.
“I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust,” Mr. Zuckerberg wrote in his blog post. “Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward, but with the current state of the world, I believe this is the right balance.”
At a time when memories of the Holocaust were fresh and anti-Semitism had ebbed, it may have seemed less dangerous to let a few cranks peddle their lies. But as those memories faded, and attacks on Jewish people surged, Facebook felt compelled to revisit those policies.
That brings us to Substack. When it was founded in 2017, Substack offered simple infrastructure for individuals to create and grow their email newsletters. From the start, it promised not to take a heavy hand with content moderation. And because it only offered software, this approach drew little criticism. If you wrote something truly awful in Word, after all, no one would blame Microsoft. Substack benefited similarly from this distance.
Over time, though, the company evolved. It began encouraging individual writers to recommend one another, funneling tens of thousands of subscribers to like-minded people. It started to send out an algorithmically ranked digest of potentially interesting posts to anyone with a Substack account, showcasing new voices from across the network. And in April of this year, the company launched Notes, a text-based social network resembling Twitter that surfaces posts in a ranked feed.
By 2023, in other words, Substack no longer could claim to be the simple infrastructure it once was. It was a platform: a network of users, promoted via a variety of ranked surfaces. The fact that it monetized through subscriptions rather than advertising did not change the fact that, just as social networks have at times offered unwitting support to extremists, Substack was now at risk of doing the same.
And in one key respect, Substack is even more vulnerable to this criticism than social networks had been. Extremists on Facebook, Twitter, and YouTube for the most part had been posting for clout: those platforms made it difficult or even impossible for them to monetize their audiences.
On Substack, on the other hand, extremists can post for money. The pieces are now all in place for an extremist Substack to grow an audience using the platform’s recommendation systems, and monetize that audience via subscriptions. And Substack, as it does with all publications, will get 10 percent of the revenue.
Now, other platforms have developed defenses against their systems being exploited in this way. Some, like Facebook, prevent designated dangerous organizations from starting accounts. They also might ban praise for those organizations, or ban hate speech in general. Others, like YouTube, might allow some speech that makes the company uncomfortable, but bar the accounts behind it from monetizing, or restrict them from appearing in search and recommendations.
Substack doesn’t want to do that. It wants to be seen as a pure infrastructure provider — something like Cloudflare, which seemingly only has to moderate content once every few years. But Cloudflare doesn’t recommend blogs. It does not send out a digest of websites to visit. It doesn’t run a text-based social network, or recommend posts you might like right at the top.
Recommendations might appear on the surface to be innocuous, and in most cases they are. In three years on Substack, I’ve been recommended plenty of boring posts, but no openly Nazi ones. My experience of them has been unobjectionable.
But turning a blind eye to recommended content almost always comes back to bite a platform. It was recommendations on Twitter, Facebook, and YouTube that helped turn Alex Jones from a fringe conspiracy theorist into a juggernaut that could terrorize families out of their homes. It was recommendations that turned QAnon from loopy trolling on 4Chan into a violent national movement. It was recommendations that helped to build the modern anti-vaccine movement.
The moment a platform begins to recommend content is the moment it can no longer claim to be simple software.
It is, of course, exhausting — and expensive — to have to police your platform this way. Some users really do want to censor everyone who disagrees with them, and lobby to remove all of their political opponents from the platform. Finding the real danger and harms in a sea of user reports is tedious, thankless work. And no matter how you choose to moderate content, you’ll make at least some groups of users mad.
At the same time, for the sake of your business, you have to draw a line somewhere.
Some of these lines are quite tricky — do you ban all nudity? What if a mother is breastfeeding?
Others are not. Until Substack, I was not aware of any major US consumer internet platform that stated it would not remove or even demonetize Nazi accounts. Even in a polarized world, there remains broad agreement that the slaughter of 6 million Jews during the Holocaust was an atrocity. The Nazis did not commit the only atrocity in history, but a platform that declines to remove their supporters is telling you something important about itself.
If it won’t remove the Nazis, why should we expect the platform to remove any other harm?
Our readers understand this. Over the past couple of weeks, dozens of paid subscribers to Platformer have canceled their memberships. “The reason is simple,” one of those readers wrote to us today. “I don’t want to fund Nazis. I’m disturbed by a Substack leadership that looks at openly pro-Nazi content and says, ‘We won’t de-platform you. In fact, we’ll monetize you.’”
I’m proud of the Platformer readership for standing up for their principles in this way. Some of our earliest and best customers are people who work in tech policy, content moderation, and trust and safety. They’ve spent years doing the work, making the hard calls, and cleaning up the internet for all of our mutual benefit. It’s only natural that they would resist spending money on a platform that spurns their profession in this way.
Over the past few days, the Platformer team analyzed dozens of Substacks for pro-Nazi content. Earlier this week, I met with Substack to press my case that they should remove content that praises Nazis from the network. Late today, we submitted a list of accounts that we believe to be in violation of the company’s existing policies against incitement to violence. I am scheduled to meet with the company again tomorrow.
Whatever becomes of those accounts, though, I fully expect that more will spring up in their wake. So long as Substack allows itself to be perceived — encourages itself to be perceived! — as a home for Nazis, they will open accounts here and start selling subscriptions. Why wouldn’t they?
Every platform hosts its share of racists, white nationalists, and other noxious personalities. In some very real sense, there is no escaping them online. But there ought to be ways to see them less; to recommend them less; to fund them less. Other platforms have realized this as they’ve grown up. Here’s hoping Substack does the same.
Sponsored
Investors are focused on these metrics.
Startups should take notice.
It takes more than a great idea to make your ambitions real. That’s why Mercury goes beyond banking* to share the knowledge and network startups need to succeed. In this article, they shed light on the key metrics investors have their sights set on right now.
Even in today’s challenging market, investments in early-stage startups are still being made. That’s because VCs and investors haven’t stopped looking for opportunities — they’ve simply shifted what they are searching for. By understanding investors’ key metrics, early-stage startups can laser-focus their next investor pitch to land the funding necessary to take their company to the next stage.
Read the full article to learn how investors think and how you can lean into these numbers today.
*Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust®; Members FDIC.
Platformer has been a Mercury customer since 2020. This sponsorship gets us 5% closer to our goal of hiring a reporter in 2024.
On the podcast this week: Kevin and I discuss the Times’ lawsuit against OpenAI. Then, Beeper CEO Eric Migicovsky stops by to discuss his campaign to turn blue bubbles green on iMessage. And finally, Kevin and I trade our New Year’s tech resolutions.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
Governing
- Pornhub has blocked access to people living in Montana and North Carolina in protest of the states’ newly passed age verification laws. (Wes Davis / The Verge)
- Twitch is again updating its sexual content moderation policies to ban implied nudity. (Jay Peters / The Verge)
- The US labor board ruled that Alphabet illegally refused to negotiate with YouTube Music contract workers who voted to unionize. (Josh Eidelson / Bloomberg)
- A lawsuit against Snap brought by the family members of children who overdosed on drugs allegedly bought through Snapchat is moving forward. (Mike Masnick / Techdirt)
- The New York Times copyright lawsuit against OpenAI and Microsoft hinges on the legal question of “fair use.” The outcome of the case could change the future of AI. (Will Oremus and Elahe Izadi / Washington Post)
- Related: OpenAI is reportedly offering news publishers between $1 million and $5 million annually to license news articles for training. I’ll take it! (Sahil Patel and Stephanie Palazzolo / The Information)
- Gateway Pundit, a right-wing news outlet peddling false election claims, is continuing to push disinformation even as other publications pull back. (Sarah Ellison / Washington Post)
- Midjourney founder David Holz, who previously insisted he dislikes fake photos, must grapple with the tool’s influence and potential for misuse as the 2024 election looms. (Parmy Olson / Bloomberg)
- The UK is looking to beef up its already extensive surveillance laws, sparking concerns about user privacy from industry executives and privacy advocates. (Laurie Clark / POLITICO)
- The biggest misinformation threat isn’t technology or AI, but prominent politicians, the director of the Reuters Institute for the Study of Journalism argues. (Rasmus Nielsen / Financial Times)
- The global cost of internet shutdowns was estimated to be over $9 billion last year, according to a report by Top10VPN. (Samuel Woodhams and Simon Migliano / Top10VPN)
- Hackers are increasingly targeting verified government and business X accounts to promote crypto scams. (Bill Toulas / Bleeping Computer)
Industry
- Almost half of British teenagers say they feel addicted to social media, according to data from the Millennium Cohort Study. (Hannah Devlin / The Guardian)
- TikTok is reportedly raising the commission it takes from Shop sellers to 8 percent, a jump from the previous 2 percent plus 30 cents per transaction fee. (Theo Wayt and Ann Gehan / The Information)
- TikTok is also reportedly aiming to grow its e-commerce business to about $17.5 billion this year, closing in on Amazon. (Zheping Huang, Alex Barinka, Dong Cao and Olivia Poh / Bloomberg)
- A “GPT Store” is reportedly set to launch next week, allowing OpenAI customers to buy customized chatbots. (Stephanie Palazzolo / The Information)
- Organizations on X can now get a gold check mark badge for $200 a month instead of $1,000. You can read more about this hilariously botched rollout in Zoë’s forthcoming book! (Jay Peters / The Verge)
- Google is moving forward with its plan to eliminate cookies, and marketers say the ad industry is nowhere near ready. (Miles Kruppa and Patience Haggin / The Wall Street Journal)
- Amazon’s crackdown on sellers has spawned a new industry: e-commerce lawyers representing sellers trying to regain access to their accounts. (Camilla Hodgson / Financial Times)
- Jeff Bezos and other venture capitalists are betting on Perplexity, an AI startup that’s challenging Google’s dominance in search. (Miles Kruppa / The Wall Street Journal)
- Facebook rolled out “Link History”, a tool that saves users’ browsing activity in the Facebook mobile app and uses the data to target ads. (Thomas Germain / Gizmodo)
- A look at the legal battle over IRL, the social media startup that SoftBank bet $150 million on and that later shut down amid fraud allegations. Former staffers suspect the platform was a fraud, but the founder says he was used as a scapegoat. (Joe Miller and David Keohane / Financial Times)
- Before AI, there was @Horse_ebooks, a Twitter account managed by Jacob Bakkila that reused and repurposed existing content in the manner of a language-based bot. (Kari Paul / The Guardian)
- New PCs and laptops from Microsoft partners will have a Copilot key, the first big change to the Windows keyboard layout in almost 30 years. (Tom Warren / The Verge)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and whatever’s on your mind: casey@platformer.news and zoe@platformer.news.