The case of the missing platform policies

Exclusive: Amid a rising mental health crisis, Stanford researchers find many companies ignoring posts about self-harm


This post discusses trends in suicide and self-harm, and what platforms can do to intervene. If you or anyone you know is in distress and would like to talk, there are folks out there who want to help. Call the National Suicide Prevention Lifeline (1-800-273-8255), the Trevor Project (1-866-488-7386), or find hotlines outside the United States at this link.


There’s plenty that scientists don’t know about the long-term effects of COVID-19 on society, but a year in, at least one thing seems clear. The pandemic has been terrible for our collective mental health — and a surprising number of tech platforms seem to have given the issue very little thought.

First, the numbers. Nature reported that the number of adults in the United Kingdom showing symptoms of depression had nearly doubled from March to June of last year, to 19 percent. In the United States, 11 percent of adults reported feeling depressed between January and June 2019; by December 2020, that number had nearly quadrupled, to 42 percent.

Prolonged isolation created by lockdowns has been linked to disruptions in sleep, increased drug and alcohol use, and weight gain, among other symptoms. Preliminary data about suicides in 2020 is mixed, but the number of drug overdoses soared, and experts believe many were likely intentional. Even before the pandemic, Glenn Kessler reports at the Washington Post, “suicide rates had increased in the United States every year since 1999, for a gain of 35 percent over two decades.”

Issues related to suicide and self-harm touch nearly every digital platform in some way. The internet is increasingly where people search for information about mental health issues, discuss them, and seek support. But according to new research from the Stanford Internet Observatory, in many cases platforms have no policies related to discussion of self-harm or suicide at all.

In “Self-Harm Policies and Internet Platforms,” the authors surveyed 39 online platforms to understand their approach to these issues. They analyzed search engines, social networks, performance-oriented platforms like TikTok, gaming platforms, dating apps, and messaging apps. Some platforms have developed robust policies to cover the nuances of these issues. Many, though, have ignored them altogether.

“There is vast unevenness in the comprehensiveness of public-facing policies,” write Shelby Perkins, Elena Cryst, and Shelby Grossman. “For example, Facebook policies address not only suicide but also euthanasia, suicide notes, and livestreaming suicide attempts. In contrast, Instagram and Reddit have no policies related to suicide in their primary policy documents.”

Among the platforms surveyed, Facebook was found to have the most comprehensive policies. But researchers faulted the company for unclear policies at its Instagram subsidiary; technically, the parent company’s policies all apply to both platforms, but Instagram maintains a separate set of policies that do not explicitly mention posting about suicide, creating some confusion.

Still, Facebook is miles ahead of some of its peers. Reddit, Parler, and Gab were found to have no public policies related to posts about self-harm, eating disorders, or suicide. That doesn’t necessarily mean that the companies have no policies whatsoever. But if they aren’t posted publicly, we may never know for sure.

In contrast, researchers said that what they call “creator platforms” — YouTube, TikTok, and Twitch — have developed smart policies that go beyond simple promises to remove disturbing content. The platforms offer meaningful support in their policies both for people who are recovering from mental health issues and those who may be considering self-harm, the authors said.

“Both YouTube and TikTok are explicit in allowing creators to share their stories about self-harm to raise awareness and find community support,” they wrote. “We were impressed that YouTube’s community guidelines on suicide and self-injury provide resources, including hotlines and websites, for those having thoughts of suicide or self-harm, for 27 countries.”

Outside the biggest platforms, though, it’s all a toss-up. Researchers could not find public policies for suicide or self-harm for Nextdoor or Clubhouse. Dating apps? Grindr and Tinder have policies about self-harm; Scruff and Hinge don’t. Messaging apps tend not to have any such public policies, either — iMessage, Signal, and WhatsApp don’t. (The fact that all of them use some form of encryption likely has a lot to do with that.)

Why does all of this matter? In an interview, the researchers told me there are at least three big reasons. One is essentially a question of justice: if people are going to be punished for the ways in which they discuss self-harm online, they ought to know that in advance. Two is that policies offer platforms a chance to intervene when their users are considering hurting themselves. (Many do offer users links to resources that can help them in a time of crisis.) And three is that we can’t develop more effective policies for addressing mental health issues online if we don’t know what the policies are.

And moderating these kinds of posts can be quite tricky, researchers said. There’s often a fine line between posts that are discussing self-harm and those that appear to be encouraging it.

“The same content that could show someone recovering from an eating disorder is something that can also be triggering for other people,” Grossman told me. “That same content could just affect users in two different ways.”

But you can’t moderate if you don’t even have a policy, and I was surprised, reading this research, at just how many companies don’t.

This has turned out to be a kind of policy week here at Platformer. We talked about how Clarence Thomas wants to blow up platform policy as it exists today; how YouTube is shifting the way it measures harm on the platform (and discloses it); and how Twitch developed a policy for policing creators’ behavior on other platforms.

What strikes me about all of this is just how fresh it all feels. We’re more than a decade into the platform era, but there are still so many big questions to figure out. And even on the most serious of subjects — how to address content related to self-harm — some platforms haven’t even entered the discussion.

The Stanford researchers told me they believe they are the first people to even attempt to catalog self-harm policies among the major platforms and make them public. There are doubtless many other areas where a similar inventory would serve the public good. Private companies still hide too much, even and especially when they are directly implicated in questions of public interest.

In the future, I hope these companies collaborate more — learning from one another and adopting policies that make sense for their own platforms. And thanks to the Stanford researchers, at least on one subject, they can now find all of the existing policies in a single place.


The Ratio

Today in news that could affect public perception of the big tech companies.

⬆️ Trending up: TikTok joined the Coalition to End Wildlife Trafficking Online. So much for my plans to sell a stolen elephant by teaching it to Renegade. (TikTok)

⬇️ Trending down: A dataset of information scraped from 500 million LinkedIn users has been posted for sale online. One reason this will sting LinkedIn: the company previously sued another company for scraping its site and lost the case. (Katie Canales / Insider)

⬇️ Trending down: Google’s blocklist designed to prevent ads from running on hate videos contains obvious holes, this analysis finds. “Google Ads suggested millions upon millions of YouTube videos to advertisers purchasing ads related to the terms ‘White power,’ the fascist slogan ‘blood and soil,’ and the far-right call to violence ‘racial holy war.’” (Leon Yin and Aaron Sankin / The Markup)


Governing

A quadrennial report from the National Intelligence Council predicted accelerating polarization and inequality as we emerge from the COVID-19 pandemic, with surveillance tech playing a starring, dystopian role. Here are Warren P. Strobel and Dustin Volz at the Wall Street Journal:

AI and other technologies—such as an Internet of Things ecosystem that could top a trillion devices by 2040—may come with potentially steep, Orwellian erosions of civil liberties and a common, shared reality.

“Privacy and anonymity may effectively disappear by choice or government mandate, as all aspects of personal and professional lives are tracked by global networks,” the report states. “Real-time, manufactured or synthetic media could further distort truth and reality, destabilizing societies at a scale and speed that dwarfs current disinformation challenges. Many types of crimes, particularly those that can be monitored and attributed with digital surveillance, will become less common while new crimes, and potentially new forms of discrimination, could arise.”

The vote count is under way in the historic Amazon union election in Bessemer, AL. “No” votes appeared to be leading in the early count; 3,215 ballots were cast, or about 55 percent of the facility's 5,800 eligible workers. (Lauren Kaori Gurley / Vice)

Related: the union says it will challenge any loss, saying Amazon interfered with the vote count in egregious ways. (Jay Greene / Washington Post)

In a legal filing, Apple argued it “has no monopoly or market power in gaming,” and so Epic’s claims against the company should be dismissed. Also: “According to Apple, Epic Games hired PR firms in 2019 to work on a media strategy called ‘Project Liberty’ aimed at portraying Apple ‘as the bad guy.’” (Filipe Espósito / 9to5Mac)

US Customs and Border Protection paid more than $700,000 for licenses to the encrypted chat app Wickr. The move shows how US law enforcement depends on encrypted chat even as top officials routinely denounce end-to-end encryption. (Joseph Cox / Vice)

At least 14 channels promoting QAnon content managed to evade YouTube’s ban, and some are running ads. Seven of them were removed after this piece was published Wednesday. (Alex Kaplan / Media Matters)

An advocacy group sued Facebook for failing to effectively police anti-Muslim hate speech. Lawsuits like these seem doomed to fail, but: “The nonprofit group Muslim Advocates claims that Facebook officials breached a local consumer-protection law by falsely promising that the company would remove content that ran afoul of its moderation standards.” (David Yaffe-Bellany and Naomi Nix / Bloomberg)

In the days before Clarence Thomas issued his opinion that Section 230 possibly violates the First Amendment, his activist wife emailed supporters “to raise awareness for a new website fighting ‘corporate tyranny’ and social media’s growing power over political speech.” Say what you will, but this family always stays on message. (Dylan Byers / NBC)

Signal’s move to add cryptocurrency payments is “an incredibly bad idea,” says security expert Bruce Schneier. “Adding a cryptocurrency to an end-to-end encrypted app muddies the morality of the product, and invites all sorts of government investigative and regulatory meddling: by the IRS, the SEC, FinCEN, and probably the FBI.” (Schneier on Security)


Industry

The global chip shortage is getting so bad that even Apple, with all its massive procurement power, is experiencing production delays for Macs and iPads. Starting to feel like my dream of building a gaming PC is just not going to happen this year. (Cheng Ting-Fang and Lauly Li / Nikkei)

A Google engineer movingly recounts how her experience of being sexually harassed by her manager led the company to manage her out of the organization. Another one for the “HR is not your friend” files. (Emi Nietfeld / New York Times)

The ascension of Rachell Hofstetter, better known as Valkyrae, to co-owner of gamer collective / lifestyle brand 100 Thieves shows the inroads that streamers are making in traditional entertainment. The collective is valued at $190 million and has a deal to explore new initiatives in film, television, and podcasts. (Alex Hawgood / New York Times)

India’s ShareChat raised $502 million, valuing the company at $2.1 billion. Snap and Twitter participated in the round; the company has soared over the past year after the country’s ban of TikTok made room for its homegrown clone, called Moj. (Surabhi Agarwal / Economic Times)

Facebook and Instagram suffered a temporary outage. No word yet on the cause. (Kim Lyons / The Verge)

A skeptical look at the rise of augmented reality filters in social apps, particularly among young people. Boys use them primarily to have fun; girls use them to make themselves feel more beautiful. Little research has been done about the long-term effects, if any, on the user’s sense of self. (Tate Ryan-Mosley / MIT Tech Review)

The best of Yahoo Answers. “Is there a spell to become a mermaid that actually works?” (Elizabeth Lopatto / The Verge)


Talk to me

Send me tips, comments, questions, and vaccination selfies: casey@platformer.news.