Facebook's big new experiment in governance
What if platform policies were written in part by their users?
In June, I wrote that to build trust, platforms should try a little more democracy. Instead of relying solely on their own employees, advisory councils, and oversight boards, I argued, tech companies should involve actual users in the process. Citing the work of Aviv Ovadya, a technologist who recently published a paper on what he calls “platform democracy,” I suggested that social networks could build trust by inviting average people into the policymaking process.
I didn’t know it at the time, but Meta had recently finished a series of experiments that tried to do just that. From February to April, the company gathered three groups of users across five countries to answer the question: what should Meta do about problematic climate information on Facebook?
The question came as watchdogs are increasingly scrutinizing the company’s approach to moderating misleading information about the environment. Last year the Guardian reported on an analysis performed by the environmental group Stop Funding Heat that found 45,000 posts downplaying or denying the climate crisis. And in February, after Meta promised to label climate misinformation, a report from the watchdog group Center for Countering Digital Hate found that “the platform only labeled about half of the posts promoting articles from the world's leading publishers of climate denial,” according to NPR.
Against that backdrop, Meta hired a policy consulting firm named the Behavioural Insights Team, or BIT, to bring Facebook users into the policy development process. Specifically, users were asked what Meta should do about “problematic information,” which BIT defined as “content that is not necessarily false, yet expresses views that may contain misleading, low quality, or incomplete information that can likely lead to false conclusions.”
Meta wouldn’t give me any examples of what it considers problematic climate speech. But I can imagine panels being asked whether Facebook should intervene if, for example, a user with a big following sometime this winter asks something like “if climate change is real, why is it cold outside?”
At the big platforms today, average users have no say in how this question gets handled. Instead, it’s left to company executives and their policy teams, who often do consult experts, human rights groups, and other stakeholders. But the process is opaque and inaccessible to platform users, and it has generally undermined confidence in the platforms. It’s hard to put trust in a policy when you have no idea who made it or why. (Not to mention who enforces it, or how.)
For the experiment, Meta and BIT recruited about 250 people who were broadly representative of the Facebook user base. They brought them together virtually over two weekends to educate them about climate issues and platform policies, and offered them access to outside experts (on both climate and speech issues) and Facebook employees. At the end of the process, Facebook offered the group a variety of possible solutions to problematic climate information, and the group deliberated and voted on their preferred outcomes.
Facebook wouldn’t tell me what the groups decided — only that all three groups reached a similar consensus on what ought to be done. Their deliberations are now being taken under advisement by Facebook teams working on a policy update, the company told me.
In a blog post today, BIT said participants expressed high satisfaction with the process and its outcomes:
We found high amounts of both participant engagement and satisfaction with the deliberative process. As importantly, they demonstrated compelling evidence that participants could engage in meaningful and respectful deliberation around a complex topic. […]
Participants were impressed by how their groups were respectful of the wide range of opinions and values held across the group. As one participant commented: “I was going into this [assembly] knowing that not everyone is going to have the same opinions or feelings or thoughts as me… At the end of the day, we are not going to shame each other for how we felt or what we thought.” They were also pleased at how their groups came together to reach a decision. One participant reflected that “[e]veryone was very courteous, and I was surprised by the amount of common ground seemingly reached.”
Meta was impressed with the results, too, and plans to run further experiments in platform democracy.
“We don't believe that we should be making so many of these decisions on our own,” Brent Harris, vice president of governance at the company, told me in an interview. “You've heard us repeat that, and we mean it.”
Harris helped to oversee the creation of the Oversight Board, a somewhat controversial but (I’ve argued) useful tool for delegating authority on some matters of content moderation and pushing Meta to develop more open and consistent policies. Now Harris has turned his attention to platform democracy, and says he’s encouraged by the early results.
“We think that if you set this up the right way, that people are in a great position to deliberate on and make some of the hard decisions (around) trade-offs, and inform how we proceed,” Harris said. “It was actually really striking how many folks, when they came together, agreed on what they thought the right approach would be.”
In a survey after the process, 80 percent of participants said Facebook users like them should have a say in policy development. (I’d love to ask the other 20 percent a few questions!)
Promising though the early results may be, platform democracy is not a guaranteed feature of Facebook in the years to come. More executives and product teams need to buy into the idea; the process needs to be refined and made cheaper to run; and there are more experiments to conduct on using deliberative processes with specific groups or in specific geographies.
But in a world where, thanks to Texas and the rogue 5th Circuit Court of Appeals, platforms are at risk of losing the right to moderate content at all, Meta and its peers have every incentive to explore bringing more people into the process. With trust in tech companies at or near all-time lows, it’s clear that relying solely on in-house policy teams to craft platform rules isn’t working as intended. It may be time to give people more of a voice in the process — before the Supreme Court decides that, when it comes to regulating speech, the platforms don’t deserve any voice at all.
Governing
- The Federal Trade Commission opened an investigation into Amazon’s $1.7 billion acquisition of iRobot. (Dave Michaels / Wall Street Journal)
- New York and the Justice Department asked a federal appeals court to reinstate an antitrust suit that was thrown out because states had taken too long to file it. (Leah Nylen / Bloomberg)
- Five questions for platforms in the wake of an appeals court reversing a stay on Texas’ social media law. Starting with: Can platforms just block Texas? (Probably not.) (Issie Lapowsky / Protocol)
- Twitch said it would ban gambling live streams next month after a backlash from influencers. (Cecilia D'Anastasio / Bloomberg)
- Elon Musk will sit for a deposition in the Twitter lawsuit next Monday and Tuesday. (Tina Davis and Jef Feeley / Bloomberg)
- How Trump’s Big Lie spawned a new generation of social media influencers. A data-driven analysis finds that 77 election deniers gained a combined 25 million followers across Facebook and Twitter in the months leading up to the Jan. 6 coup attempt. (Elizabeth Dwoskin and Jeremy B. Merrill / Washington Post)
- With The Merge now complete, the Securities and Exchange Commission argued that all of Ethereum falls under the jurisdiction of the United States. (Sander Lutz / Decrypt)
- Under heavy political pressure, Google caved and launched a pilot program to ensure that emails from political campaigns don’t go to users’ spam folders. Cowardly, anti-user behavior meant to appease some of the world’s biggest blowhards (and spammers). (Ashley Gold / Axios)
- A study of recommendation data from 20,000 YouTube users found that usage of features including “not interested,” the dislike button, “stop recommending channel,” and “remove from watch history” had little effect on what content YouTube promotes. “Mozilla’s report doesn’t take into account how our systems actually work,” YouTube responded. (Mia Sato / The Verge)
- Kiwi Farms, which organizes harassment campaigns against trans and non-binary people, said it had suffered a breach that allowed attackers to obtain passwords, email addresses, and IP addresses. (Dan Goodin / Ars Technica)
- Indonesia passed a data protection law “that includes corporate fines and up to six years imprisonment for those found to have mishandled data in the world's fourth most populous country.” (Stanley Widianto / Reuters)
Industry
- YouTube rolled out its monetization program for Shorts — creators will receive 45 percent of revenue, 10 percentage points less than they get on longer videos — and announced that creators can now monetize videos with licensed music in them. Offering just 45 percent of revenue to creators is disappointing, but it beats TikTok’s static “creator fund.” (Mark Bergen / Bloomberg)
- Audio from TikTok meetings last year suggests that the company has a two-tiered moderation system that gives preferential treatment to influencers, celebrities, and other VIPs. The company said that such a system is not currently in place; in practice, though, every platform offers special treatment to large accounts. (Emily Baker-White / Forbes)
- The iPhone 14 is much easier to repair than its predecessors, a teardown shows, a sign of progress for the right-to-repair movement. (Kyle Wiens / iFixit)
- Apple said a software fix is on the way to solve an issue where the camera begins shaking violently when users try to use social networking apps. (Benjamin Mayo / 9to5Mac)
- Apple will raise prices in its App Store in several countries, including Japan, South Korea, Sweden, and Pakistan, as well as every country that uses the euro. The move comes as the value of local currencies declines against the dollar. (Filipe Espósito / 9to5Mac)
- Snap shut down Zenly despite the fact that it had 15 million users and multiple acquisition offers, as executives worried that it would compete against the Snap Map if it were spun out. Great reporting here. (Andrew Deck / Rest of World)
- Alt-video platform Rumble went public in a SPAC deal valuing the company at more than $2 billion. (Lizette Chapman / Bloomberg)
- Spotify announced it would begin selling audiobooks within the app; 300,000 titles are now available for purchase. (J. Clara Chan / The Hollywood Reporter)
- Slack announced Canvas, a document editor powered by fellow Salesforce acquisition Quip; it will launch next year. (David Pierce / The Verge)
- Amazon began streaming Thursday Night Football games exclusively, and said it saw its “biggest three hours for US Prime signups ever,” whatever that means. (Taylor Soper / GeekWire)
- OpenAI said users would once again be able to upload faces and edit them with DALL-E, saying its safety systems had improved enough to trust users with the feature. (Kyle Wiggers / TechCrunch)
- A look at Coinbase’s struggles in crypto philanthropy, which fell short of its giving goals and alienated former evangelists for the program. (Leo Schwartz / Fortune)
Those good tweets
Talk to me
Send me tips, comments, questions, and deliberative processes: casey@platformer.news.