To build trust, platforms should try a little democracy
Meta has its Oversight Board; Spotify has a new safety council. But where are the average users?
Amid increasingly dire predictions for American democracy, let’s look at one area where the voice of the people may be able to make some new inroads: tech platforms, which increasingly seem open to the idea of public participation on policy issues.
I.
On Monday, the Oversight Board — an independent body established by Facebook to make binding decisions about difficult questions of content moderation — issued two new rulings.
In the first case, the board overturned a decision by the company to remove an Instagram post. A (presumably LGBT) user had posted an image that included various anti-LGBTQ slurs in Arabic in what they said was an effort to reclaim the power of those words. Instagram initially removed the post under its policies against hate speech. In its decision, the board noted why this is a problem:
For LGBTQIA+ people in countries which penalize their expression, social media is often one of the only means to express themselves freely. The over-moderation of speech by users from persecuted minority groups is a serious threat to their freedom of expression. As such, the Board is concerned that Meta is not consistently applying exemptions in the Hate Speech policy to expression from marginalized groups.
In the second case, the board upheld a decision by Facebook to restore a post depicting violence against a civilian in Sudan last year. Facebook argued that the post should be considered newsworthy, since it drew attention to suffering, and placed the post behind a warning screen. The board agreed, but noted that Facebook had only granted 17 total exemptions for newsworthy posts that depict violence in the past 12 months — while removing 90.7 million posts for violating its rules against graphic content in the first three quarters of 2021.
Board members noted the obvious problem here:
The Board finds it unlikely that, over one year, only 17 pieces of content related to this policy should have been allowed to remain on the platform as newsworthy and in the public interest. To ensure such content is allowed on Facebook, the Board recommends that Meta amends the Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared to raise awareness or document abuses.
Neither of these decisions made headlines; I couldn’t find a single story on them when I looked. Now 20 months into its existence, the Oversight Board issues rulings like this every few weeks. One of those decisions — in which it punted the matter of what to do about Donald Trump back into Facebook’s lap — was front-page news. For the most part, though, the board works in relative obscurity, gently nudging the company now called Meta to follow its own policies and, in many cases, change them for the better.
It’s not democracy, exactly — the initial board members were picked by Facebook, and when their terms are up they will nominate their own successors — but it’s more formal representation than users have on any other big social platform. Whatever you think of the decisions above, they show a group working to make difficult trade-offs around free speech, harm reduction, and corporate self-interest.
The board has some clear flaws, starting with the relatively small number of cases it hears. And last month I wrote about how the company asked the board for guidance on how to moderate war-related content in Ukraine, but changed its mind amid fears that Russia would retaliate against the company and its employees. Those fears were almost certainly justified, but the move undermined the board’s independence.
At the same time, the board has succeeded at opening up Facebook and Instagram to public input on policy, if only (so far) at the margins. And that, I think, has put some gentle pressure on other platforms to consider how they might do something similar.
II.
That’s why I was interested on Tuesday to see this announcement from Spotify: the company will form a “safety advisory council” that it can consult on policy questions.
Here’s Dawn Chmielewski at Reuters:
The group represents another step in Spotify’s efforts to deal with harmful content on its audio streaming service after backlash earlier this year over “The Joe Rogan Experience,” in which the podcaster was accused of spreading misinformation about COVID-19.
The 18 experts, which include representatives from Washington, D.C. civil rights group the Center for Democracy & Technology, the University of Gothenburg in Sweden and the Institute for Technology and Society in Brazil, will advise Spotify as it develops products and policies and thinks about emerging issues.
“The idea is to bring in these world-renowned experts, many of whom have been in this space for a number of years, to realize a relationship with them,” said Dustee Jenkins, Spotify’s global head of public affairs. “And to ensure that it’s not talking to them when we’re in the middle of a situation … Instead, we’re meeting with them on a pretty regular basis, so that we can be much more proactive about how we’re thinking about these issues across the company.”
The Rogan story dominated discussions of tech news in January — I wrote three columns about it in as many weeks — but has since faded into the background amid more urgent stories about war, the pandemic, and Elon Musk. Still, a defining aspect of that controversy was how confused even Spotify seemed to be about its own content policies: what sort of criticism of vaccines is allowed, what isn’t, and how violations are handled.
The most obvious explanation for that confusion is that Spotify is, first and foremost, a streaming music company. Its move into podcasting — itself necessitated by Apple’s distortion of the streaming music market — came well before the company had grappled with the implications of hosting millions of podcasts.
Companies can and do seek external consultants when they create policies. Increasingly, though, they’re setting up external councils of experts to give them guidance continuously.
Twitter formed an advisory Trust and Safety Council in 2016 as it started to overhaul its own policies around abuse and harassment. TikTok has been setting up safety advisory councils as well — it now has them in the United States, Europe, Brazil, and the Middle East, among other places.
Again, these all fall short of the kind of public participation I’d like to see — one in which average users, not just policy experts, have a voice in the discussion. And they are clearly meant in part to stave off regulation, or at the very least, public-relations crises. None of it really feels like it’s being done in a public spirit.
III.
So what would a more public-spirited version of these boards and councils actually look like?
Over the weekend I had coffee with Aviv Ovadya. Ovadya is a technologist focused on the deterioration of our information environment, and when we met he had just completed a fellowship in technology and public purpose at the Harvard Kennedy School’s Belfer Center.
Ovadya first came to my attention in 2018, when Charlie Warzel profiled him in BuzzFeed. Ovadya’s focus, then as now, is on the design of large tech platforms and the behaviors that they incentivize. In 2018, he warned of the risks of an “infocalypse” — a day when ubiquitous synthetic media blurred the lines between truth and fiction in ways that could make the world ungovernable. (With the past week’s explosion of interest in DALL-E and Google’s LaMDA, that warning remains timely.)
Last fall, Ovadya proposed a method by which tech platforms could resolve tricky policy questions in a way that builds trust among users, rather than undermining it. He calls it “platform democracy,” and it effectively takes some of the deliberative democracy techniques that have been used with some success around the world in recent years and applies them to companies like Facebook.
A key element of deliberative democracy is that it involves lay people: gathering together a representative sample of the body politic, educating them on the issues, and asking them to work toward a consensus. The technique has been used in South Korea to decide nuclear policy, and in Ireland to remove anti-abortion provisions from the country’s constitution.
In his paper, Ovadya takes a question many platforms have debated in recent years — whether to allow political advertising — and explores what platform democracy might look like on Facebook. He imagines Mark Zuckerberg funding a third party to facilitate the deliberative process, convening a representative assembly of Facebook users via a lottery system, and committing to honor their recommendations. Various stakeholders would be invited to make their case to the Facebook users. Hearings would be streamed live so that other interested users could follow along.
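If you want a feel for the lottery mechanics, here’s a minimal sketch of the kind of stratified random selection such an assembly implies. Everything in it is hypothetical — the `draw_assembly` function, the user pool, and the “region” field are illustrative assumptions of mine, not anything Ovadya’s paper or any platform has specified:

```python
import random
from collections import defaultdict

def draw_assembly(users, strata_key, assembly_size, seed=None):
    """Draw a roughly representative assembly by lottery.

    users: list of dicts, each with the demographic field named by strata_key.
    Each stratum gets seats proportional to its share of the overall pool,
    and each stratum's seats are filled by random draw.
    """
    rng = random.Random(seed)

    # Group the user pool into strata (e.g. by region).
    strata = defaultdict(list)
    for user in users:
        strata[user[strata_key]].append(user)

    # Allocate seats proportionally, then fill them by lottery.
    total = len(users)
    assembly = []
    for members in strata.values():
        seats = round(assembly_size * len(members) / total)
        assembly.extend(rng.sample(members, min(seats, len(members))))
    return assembly

# Hypothetical usage: a 100-member assembly drawn from 10,000 users.
pool = [
    {"id": i, "region": random.choice(["NA", "EU", "APAC", "LATAM"])}
    for i in range(10_000)
]
members = draw_assembly(pool, strata_key="region", assembly_size=100, seed=1)
print(f"Drew {len(members)} assembly members")
```

Real sortition processes use more careful quota methods than simple proportional rounding, which can leave the final count a seat or two off. The point is only that the lottery itself is straightforward; the hard part is the deliberation that follows.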
The key is that the people most directly affected by the company’s decisions — its user base — would finally have a more direct voice in the policies that affect them. That’s much different than what we get with today’s advisory councils, Ovadya writes:
With a platform assembly, one still gets the ‘technocratic’ benefit of elite experts, since they will be brought in to inform the assembly members. However, the people being impacted are those who ultimately make a recommendation based off of that input, leading to choices that take into account the experiences of ordinary people—and thus giving a much stronger “people’s mandate” to the outcome. This democratic component may be less critical for the Facebook Oversight Board’s judicial-style decisions, but is crucial for policy creation.
Where this has been tried in recent years, the process seems to build both consensus among participants and trust in the political process more broadly.
Ovadya told me that some platforms — he wouldn’t tell me which — are considering giving the idea a try.
"Several major platforms are actively evaluating the platform assembly approach, and there is significant internal buy-in by leadership — and positive results from early exploratory pilots,” he said. (Ovadya added that interested platform employees should email him at aviv@aviv.me to discuss the idea further.)
As with Facebook’s Oversight Board, it could be risky for a platform to try such an approach on a sensitive policy question: giving up control always is. It could be expensive, too, though likely much less expensive than the $130 million Facebook committed to the Oversight Board.
At the same time, platforms are already living with the high cost of low trust. Between regulatory agencies investigating their business practices and lawmakers seeking to more strictly regulate their content policies, platforms are going to be forced to change one way or another.
For years now these companies have told us that they’re “customer-obsessed” and “user first.” Platform democracy offers them a chance to show us that they mean it.
Governing
- Congress held a hearing Tuesday on national privacy legislation that has unusual bipartisan support. Like GDPR, it “allows for Americans to access, correct, and request deletion of any personal data companies have collected on them.” (Makena Kelly / The Verge)
- “A ProPublica analysis found that 15 of the largest firearms sellers in the United States … used Google’s systems to place ads that generated over 120 million impressions, a measurement roughly equivalent to an ad being shown to one person, between March 9 and June 6.” (Craig Silverman and Ruth Talbot / ProPublica)
- Apple faces a new antitrust probe in Germany related to its App Tracking Transparency technology. (Tim Hardwick / MacRumors)
- Smart piece on LaMDA and Blake Lemoine that argues the AI’s likely knowledge of sci-fi tropes effectively turned it into a collaborator in writing a fictional short story. (It also agrees with me that the real issue here is that lots of people are going to believe LaMDA-like technologies are sentient.) (Max Read / Read Max)
- Bumble is leading an effort to criminalize sending nude photos without the consent of the recipient. This is a tricky one but I’m extremely concerned that laws like these could be weaponized against gay men in particular. (Valeriya Safronova / New York Times)
- A new analysis of YouTube highlights the ways that Hindu nationalists in India have used it to spread conspiracy theories. (Upmanyu Trivedi / Bloomberg)
- Adobe released an open-source toolkit for its Content Authenticity Initiative, which seeks to help developers detect visual misinformation. (Taylor Hatmaker / TechCrunch)
- The company also plans to release a free-to-use version of Photoshop on the web. (Jacob Kastrenakes / The Verge)
- Nigeria introduced a new plan to regulate platforms that includes ‘hostage-taking’ provisions — a local office with a designated representative whom officials can harass over moderation decisions. It comes six months after the country temporarily banned Twitter. (Tage Kene-Okafor / TechCrunch)
- Should Tinder be nationalized? I mean, probably not, but opaque algorithms driving who meets and eventually reproduces deserve much more discussion than they get! (Nick French / Jacobin)
Industry
- Coinbase will lay off 18 percent of its staff during the economic downturn. CEO Brian Armstrong took care to note that employees’ email access would be turned off before they could be informed of their layoff so that they could not commit any crimes with customer data. (Sarah Roach / Protocol)
- Elon Musk will reportedly answer questions during an all-hands with Twitter employees on Thursday. Helpfully answering the question: what will my Thursday column be about. (Sarah E. Needleman / Wall Street Journal)
- Microsoft is working on games for its Teams chat software. I mean, everyone was playing Wordle on the conference call anyway. (Tom Warren / The Verge)
- Netflix plans to double its number of mobile games before the end of the year. (Jennifer Maas / Variety)
- WhatsApp will begin letting users transfer chats from Android to iOS while preserving end-to-end encryption. You could already do so in the opposite direction. (WABetaInfo)
- Meta rolled out new parental supervision tools for Quest VR and Instagram. The move comes amid growing questions around social networks and children’s well-being. (Kris Holt / Engadget)
- Meta’s Horizon Worlds is adding a mode that can garble strangers’ voices lest they say anything offensive to you. (Taylor Hatmaker / TechCrunch)
- OpenSea shifted its transactions to an open-source protocol named Seaport that it said would reduce fees by 35 percent. (Mitchell Clark / The Verge)
- A look at “Nextdoor’s quest to beat toxic content and make money.” “The platform can’t seem to shake a reputation for serving as a megaphone for the so-called Karens of the suburbs and everything that comes with them, including racial profiling, misinformation and fearmongering.” (Sarah Holder and Fola Akinnibi / Bloomberg)
Those good tweets
Talk to me
Send me tips, comments, questions, and platform democracy: casey@platformer.news.