Everything platforms know about the war but won't tell us
CrowdTangle co-founder Brandon Silverman on social networks' responsibility to open up
Brandon Silverman knows more about how stories spread on Facebook than almost anyone. As co-founder and CEO of CrowdTangle, he helped build systems that understood in real time which stories were going viral: valuable knowledge for publishers at a time when Facebook and other social networks accounted for a huge portion of their traffic. It was so valuable, in fact, that in 2016 Facebook bought the company, saying the tool would help the institutions of journalism identify stories of interest and inform their own coverage plans.
But a funny thing happened along the way to CrowdTangle becoming just another tool in a publisher’s analytics toolkit. Facebook’s value to publishers declined after the company decided to de-emphasize news posts in 2018, making CrowdTangle’s original function less vital. But at the same time, in the wake of the 2016 US presidential election, Facebook itself was of massive interest to researchers, lawmakers, and journalists seeking to understand the platform. CrowdTangle offered a unique, real-time window into what was spreading on the site, and observers have relied upon it ever since.
As Kevin Roose has documented at the New York Times, this has been a persistent source of frustration for Facebook, which found that a tool it had once bought to court publishers was now used primarily as a cudgel with which to beat its owners. Last year, the company broke up the CrowdTangle team in what it has described, unconvincingly, as a “reorganization.” The tool remains active, but appears to be getting little investment from its parent company. In October, Silverman left the company.
Since then, he has been pursuing what became his mission at CrowdTangle from outside Facebook's walls: working with a bipartisan group of senators on legislation that would legally require Meta and other platform companies to disclose the kind of information you can still find on CrowdTangle today, along with much more.
I've been trying to convince Silverman to talk to me for months. He's critical of his old employer, but only ever constructively, and he's careful to note both where Facebook is better than its peers and where the entire industry has failed us.
With Russia’s invasion of Ukraine, and the many questions about the role of social networks that it has posed, Silverman agreed to an email Q&A. He has a lot to say about all this — the original version of what you’re about to read was longer by more than a third.
But I learned a lot along the way. What I liked about this conversation is the way Silverman focuses relentlessly on solutions: the interview below is a kind of handbook for how platforms (or their regulators) could help us understand them, both by making new kinds of data available and by making existing data much easier to parse. It's a conversation that shows how much is still possible here, and how much of that fruit hangs low to the ground.
“The next evolution in thinking about managing large platforms safely should be about empowering more of civil society, from journalists to researchers to nonprofits to fact-checkers to human rights organizations, with the opportunity to help,” Silverman told me. In our interview, he explains how.
Our conversation has been edited for clarity and length.
Casey Newton: What role is social media playing in how news about the war is being understood so far?
Brandon Silverman: This is one of the most prominent examples yet of a major event in world history unfolding before our eyes on social media. And in a lot of ways, platforms are stepping up to the challenge.
But we’re also seeing exactly how important it is to have platforms working alongside the rest of civil society to respond to moments like this.
For instance, we’re seeing the open-source intelligence community, as well as visual forensics teams at news outlets, do incredible work using social media data to help verify posts from on the ground in Ukraine. We’re also seeing journalists and researchers do their best to uncover misinformation and disinformation on social media. They’re regularly finding examples that have been viewed by millions of people, including repurposed video game footage pretending to be real, coordinated Russian disinformation among TikTok influencers, and fake fact-checks on social media that make their way onto television.
That work has been critical to what we know about the crisis, and it highlights exactly why it’s so important that social media companies make it easier for civil society to be able to see what’s happening on their platforms.
Right now, the status quo isn’t good enough.
So far, the discussion about misinformation in the Russia-Ukraine war mostly centers on anecdotes about videos that got a lot of views. What kind of data would be more helpful here, and do platforms actually have it?
The short answer is absolutely. Platforms have a lot of privacy-safe data they could make available. But maybe more importantly, they could also take data that’s already available and simply make it easier to access.
For instance, one data point that is already public but incredibly hard to use is "labels." A label is information a platform adds to a piece of content — whether the content has been fact-checked, whether the source is a state-controlled media outlet, etc. Labels are becoming an increasingly popular way for platforms to help shape the flow of information during major crises like this one.
However, even though these labels are public and contain no privacy-sensitive material, right now there is no programmatic way for researchers or journalists or human rights activists to sort through and study them. So, if a newsroom or a researcher wants to sort through all the fact-checked articles on a particular platform and see what the biggest myths about the war were on any given day, they can't. If they want to see what narratives all the state-controlled media outlets were pushing, they can't do that either.
It was something we tried to get added to CrowdTangle, but we couldn't get it over the finish line. I think it's a simple piece of data that should be more accessible on any platform that uses labels.
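To make that gap concrete, here is a minimal sketch of what programmatic label access could look like. Everything in it is hypothetical: the endpoint, parameters, and response fields are invented for illustration, since no platform exposes anything like this today.

```python
import requests

# Hypothetical endpoint: no platform offers this today, which is the gap
# Silverman describes. The URL, parameters, and field names are invented.
LABELS_API = "https://api.example-platform.com/v1/labels"

def fetch_labels(label_type, since, token):
    """Fetch all public labels of a given type applied since a timestamp."""
    resp = requests.get(
        LABELS_API,
        params={"type": label_type, "since": since},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["labels"]

# For example: every fact-check label applied since a given date, so a
# newsroom could see the biggest debunked myths about the war that day.
for label in fetch_labels("fact_check", "2022-03-09T00:00:00Z", token="..."):
    print(label["post_url"], label["verdict"], label["applied_at"])
```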
That makes a lot of sense to me. What else could platforms do here?
A big part of this sort of work isn't always about making more data available — it's oftentimes about making existing data more useful.
Can a journalist or a researcher quickly and easily see which accounts have gotten the most engagement around the Ukrainian situation? Can anyone quickly and easily see who the first person to use the phrase “Ghost of Kyiv” was? Can anyone quickly and easily see the history of all the state-controlled media outlets that have been banned and what they were saying about Ukraine in the lead-up to the war?
All of that data is technically publicly available, but it’s incredibly hard to access and organize.
That’s why a big part of effective transparency is simply about making data easy to use. It’s also a big piece of what we were always trying to do at CrowdTangle.
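As a thought experiment, the "Ghost of Kyiv" question reduces to a trivial query once the data is organized: search public posts for the phrase, sort by date ascending, and take the first result. The sketch below assumes a hypothetical CrowdTangle-style search endpoint; the URL and parameter names are illustrative, not a real API.

```python
import requests

# Hypothetical CrowdTangle-style search endpoint. The URL and parameters
# are illustrative only; no platform exposes this today.
SEARCH_API = "https://api.example-platform.com/v1/posts/search"

def first_use(phrase, token):
    """Return the earliest public post containing an exact phrase."""
    resp = requests.get(
        SEARCH_API,
        params={"query": f'"{phrase}"', "sort": "date_asc", "count": 1},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    posts = resp.json()["posts"]
    return posts[0] if posts else None

post = first_use("Ghost of Kyiv", token="...")
if post:
    print(post["account"], post["created_at"], post["url"])
```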
How would you rate the various platforms’ performance on this stuff so far?
Well, not all platforms are equal.
We’ve seen some platforms become really critical public forums for discussing the war, but they are making almost no effort to support civil society in making sense of what’s happening. I’m talking specifically about TikTok and Telegram, and to a lesser extent YouTube as well.
There are researchers who are trying to monitor those forums, but they have to get incredibly creative and scrappy about how to do it. For all the criticism Facebook gets, including a lot of very fair criticism, it does still make CrowdTangle available (at least for the moment). It also has an Ad Library and an incredibly robust network of fact-checkers that it has funded, trained, and actively supports.
But TikTok, Telegram and YouTube are all way behind even those efforts. I hope this moment is a wake-up call about the ways in which they can do better.
One blind spot we have is that whenever platforms remove content, researchers can't study it. How would we benefit from, say, platforms letting academics study a Russian disinformation campaign that got removed from Twitter or Facebook or YouTube or TikTok?
I think unlocking the potential of independent research on removed content accomplishes at least three really important things. First, it helps build a much more robust and powerful community of researchers that can study and understand the phenomenon, and help the entire industry make progress on it. (The alternative is leaving it entirely up to the platforms to figure out by themselves.) Second, it helps hold the platforms accountable for whether they made the right decisions — and some of these decisions are very consequential. Third, it can act as a deterrent to bad actors as well.
The single biggest blind spot in policies around removed content is that there are no industry-wide norms or regulatory requirements for archiving or finding ways to share it with select researchers after it’s removed. And in the places where platforms have voluntarily chosen to do some of this, it’s not nearly as comprehensive or robust as it should be.
The reality is that a lot of the removed content is permanently deleted and gone forever.
We know that a lot of content related to Ukraine is being removed from platforms. We know that YouTube has removed hundreds of channels and thousands of videos, and that both Twitter and Meta have announced networks of accounts they've each removed. And that's to say nothing of all the content that is being automatically removed for graphic violence, which could represent important evidence of war crimes.
I think not having a better solution to that entire problem is a huge missed opportunity, and one we’ll all ultimately regret not having solved sooner.
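To make "archiving" concrete, here is a minimal sketch of the kind of record an industry norm or regulation might require platforms to retain for vetted researchers. The schema is an assumption, not anything a platform actually implements; every field name is invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical minimal archive record for removed content. Nothing like
# this is standardized today; all field names are invented for illustration.
@dataclass
class RemovedContentRecord:
    content_id: str             # stable identifier for the removed post
    platform: str               # e.g. "youtube", "tiktok"
    removed_at: datetime        # when the takedown happened
    removal_reason: str         # policy cited, e.g. coordinated inauthentic behavior
    content_hash: str           # hash of the original media, for verification
    archived_copy_uri: str      # pointer into an access-controlled archive
    campaign_id: Optional[str]  # links posts removed as part of one network
```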
I think platforms should release all this data and more. But I can also imagine them looking at Facebook's experience with CrowdTangle and saying: hmm, it seems like the primary effect of releasing this data is that everyone makes fun of you. Platforms should have thicker skins, of course. But if you were to make this case internally to YouTube or TikTok — what's in it for them?
Well, first, I'd pause a bit on your first point. They're going to get made fun of regardless — and in some ways, I actually think that's healthy. These platforms are enormously powerful, and they should be getting scrutinized. In general, history hasn't been particularly kind to companies that decide they want to hide data from the outside world.
I also want to point out that there’s a lot of legislation being drafted around the world to simply require more of this — including the Platform Accountability and Transparency Act in the U.S. Senate and Article 31 and the Code of Practice in the Digital Services Act in Europe. Not all of the legislation is going to become law, but some of it will. And so your question is an important one, but it’s also not the only one that matters anymore.
That being said, given everything that happened to our team over the last year, your question is one I’ve thought about a lot.
There were times over the past few years when I tried to argue that transparency is one of the few ways for platforms to build legitimacy and trust with outside stakeholders. Or that transparency is a form of accountability that can act as a counterweight to other competing incentives inside big companies, especially around growth. Or that the answer to frustrating analysis isn't less analysis, it's more analysis.
I also saw first-hand how hard it is to be objective about these issues from the inside. There were absolutely points where it felt like executives weren't always being objective about some of the criticism they were getting, or at worst didn't have a real understanding of some of the gaps in the systems. And that's not an indictment of anyone in particular; I think that's just a reality of being human and working on something this hard and emotional under this much scrutiny. It's hard not to get defensive. But I think that's another reason why you need to build out more systems that involve the external community, simply as a check on our own natural biases.
In the end, though, I think the real reason you do it is that you think it's just a responsibility you have, given what you've built.
But what’s the practical effect of sharing data like this? How does it help?
I can connect you to human rights activists in places like Myanmar, Sri Lanka, and Ethiopia who would tell you that when you give them tools like CrowdTangle, it can be instrumental in helping prevent real-world violence and in protecting the integrity of elections. Nobel Peace Prize winner Maria Ressa and her team at Rappler have used CrowdTangle for years to try to stem the tide of disinformation and hate speech in the Philippines.
That work doesn’t always generate headlines, but it matters.
So how do we advance the cause of platforms sharing more data with us?
Instead of leaving it up to platforms to do entirely by themselves, and with a single set of rules for the entire planet, the next evolution in thinking about managing large platforms safely should be about empowering more of civil society, from journalists to researchers to nonprofits to fact-checkers to human rights organizations, with the opportunity to help.
That's not to deflect responsibility from the platforms on any of this. But it is to recognize that they can't do it alone — and to push them, or simply legislate, ways in which they have to collaborate more and open up more.
Every platform should have tools like CrowdTangle to make it easy to search and monitor important organic content in real time. But those tools should also be way more powerful, and we should hold Meta accountable if it tries to shut CrowdTangle down. It means that every platform should have an Ad Library — but also that the existing Ad Libraries should be way better.
It means we should be encouraging the industry to both do their own research and share it more regularly, including calling out platforms that aren’t doing any research at all. It means we should create more ways for researchers to study privacy-sensitive data sets within clean rooms so they can do more quantitative social science.
It means that every platform should have fact-checking programs similar to Meta's. But Meta's should also be much bigger, way more robust, and include more experts from around civil society. It means we should keep learning from Twitter's Birdwatch — and if it works, use it as a potential model for the rest of the industry.
We should be building out more solutions that enable civil society to help be a part of managing the public spaces we’ve all found ourselves in. We should lean on the idea that thriving public spaces only prosper when everyone feels some sense of ownership and responsibility for them.
We should act like we really believe an open internet is better than a closed one.
Pushback
Yesterday I published an interview with Substack CEO Chris Best about the company’s new app, focusing on what I considered its most surprising design choice: opting new app users out of receiving emails by default. To my mind, that represented a significant escalation of Substack’s platform ambitions, and could potentially shift the balance of power away from writers toward Substack itself.
For those reasons, I was heartened to get a message from Best today letting me know the company has changed its mind. Now when you download the app, you'll still receive emails from the publications you subscribe to by default. In the next version of the app, the email opt-out will be pulled out of the onboarding flow entirely and will simply be a setting readers can switch on if they like.
“We’re going to continue to iterate on making this great for writers and readers,” Best told me. I think this strikes the right balance, and I’m grateful to Substack for hearing me out on this one.
I've updated the original post with this information and added a correction: it was Discourse that returned to Substack, not Defector.
Extracurriculars
I’ll be at South by Southwest on Saturday hosting this panel about “lawful and awful” content with members of the Oversight Board and Stanford’s Robert Reich. Come say hi if you’re in town!
Elsewhere: my recent interview with the CEOs of Parler and GETTR at Pivot Con is now available as a podcast; if you've ever wanted to hear me get dressed down by Candace Owens during audience Q&A, here's your chance. I also went on Lizzie O'Leary's Slate podcast, What's Next, to talk about social media and the war in Ukraine.
Finally, my Twitter Spaces with Kara Swisher is about the impact of the war on the economy and the media. Our guests are New York Times columnist Paul Krugman and assistant managing editor Michael Slackman. It's at 5PM PT — just after I send this newsletter — but will be recorded for later listening.
Platformer Jobs
Today’s featured jobs on the Platformer Jobs board include:
Quick update on the jobs board: it is mostly going unused! I've been happy to keep it open to host free listings from nonprofits, like the one above. But even those have dwindled, and it now seems likely I'll wind the board down in the coming weeks. I hoped it would be a good resource for job-seeking readers, particularly those working in trust and safety roles, but my discomfort with actively selling listings to companies I cover, along with the apparent lack of utility for job seekers, has made for a weak combination. Live and learn!
Governing
- Google said it would begin sending air raid alerts to Android phones in Ukraine. (Mia Sato / The Verge)
- Google said Russian users would no longer be able to buy apps from its Play Store in the coming days due to sanctions. Free apps can still be downloaded, though. (Abner Li / 9to5Google)
- The European Union ordered Google to remove Sputnik and RT from search results, saying that it was required under sanctions preventing Russian state media from “broadcasting.” A significant escalation of sanctions, with real implications for free speech. (Gerrit De Vynck / Washington Post)
- Twitter removed the Russian embassy's false tweets about the bombing of a hospital in Ukraine. Meta did as well. (Ilena Peng / Bloomberg)
- DuckDuckGo said it would begin down-ranking Russian disinformation, breaking a policy about ranking sites “neutrally.” Whatever that means! (Tom Parker / Reclaim the Net)
- Facebook and Instagram will temporarily allow users in some countries to call for violence against Russians and Russian soldiers, if their post is about the war. (Munsif Vengattil and Elizabeth Culliford / Reuters)
- A new whistleblower complaint says Facebook broke the law by not removing the accounts of Russian individuals on the US sanctions list. Experts say the law here is murky. (Cat Zakrzewski, Elizabeth Dwoskin and Craig Timberg / Washington Post)
- Meta’s former counterterrorism chief, Brian Fishman, talks about the company’s war response. “They've been trying to walk that tightrope. How do we limit our ability to be abused by an increasingly overtly authoritarian actor while trying to empower everyday people, who are not responsible for the crimes of the government, to speak to each other, to organize, to get information about what's happening?” (Issie Lapowsky / Protocol)
- Russia had to develop its own transport layer security (TLS) certificates after sanctions prevented the country’s internet providers from verifying website identities. Invading your peaceful neighbor will create such hassles, I swear, honestly don’t even do it, it’s not worth it. (Bill Toulas / Bleeping Computer)
- A profile of Meta’s chief lobbyist, Joel Kaplan, and his influence at the company. The headline goes too far, and many of the stories in here have been oft-told. But I appreciated its focus on the clashes between the company’s policy and integrity teams. (Benjamin Wofford / Wired)
Industry
- A detailed and compelling account of the failure of Libra, Facebook’s cryptocurrency project. In the end, it really was as simple as the fact that no one in Washington trusted the company to do it right. (Hannah Murphy and Kiran Stacey / Financial Times)
- Stripe introduced new APIs for accepting payments in crypto — potentially a huge moment in the industry’s development. (Stripe)
- TikTok is nearing a deal that would see its data stored with Oracle. This was originally going to be a deal point in the attempted forced sale of TikTok under the Trump administration. (Echo Wang and David Shepardson / Reuters)
- Google Messages updated to display iMessage “tapback” reactions. This is great. (Jon Porter / The Verge)
- How Twitter plans to add its next 100 million users. The answer seems to be “trying a lot of things.” (Alex Heath / The Verge)
- Niantic bought 8th Wall, which helps make web-based augmented reality apps. It’s the company’s largest acquisition to date. (Jay Peters / The Verge)
- A look at Jeff Bezos’s day-to-day life after leaving his job as the CEO of Amazon. “Bezos now appears to spend much of his time focused on his opulent life with Sanchez, his private space company, and his $10 billion climate philanthropy.” (Brad Stone / Bloomberg)
Those good tweets
Talk to me
Send me tips, comments, questions, and real-time engagement metrics: casey@platformer.news.