Ten strategies for replacing Twitter from people who used to work there
Announcing ‘Extremely Hardcore,’ Zoë Schiffer’s book on Musk’s Twitter takeover
Today, we’re thrilled to share an announcement that has been in the works all year. Zoë’s book about Elon Musk’s takeover of Twitter, Extremely Hardcore, comes out February 27, 2024. And you can pre-order it today.
Extremely Hardcore is the culmination of more than a year of reporting from Platformer on what was perhaps the wildest corporate takeover in the history of Silicon Valley. We think you’ll really like it — but don’t take it from us.
Legendary Bloomberg columnist Matt Levine called it “the definitive book on perhaps the weirdest business story of our time. A fast-paced and riveting account of a hilarious and tragic mess.” And Vanity Fair’s Nick Bilton said “Zoë Schiffer’s incredibly written and astonishingly reported book about the Musk era of the world’s craziest company tells the story of a man who took the clown car, strapped a rocket to the back of it, and then slammed it into a wall at 100,000 miles an hour. You simply won’t be able to put this book down.”
Extremely Hardcore is scoopy, juicy, and full of fresh details from the workers left to pick up the pieces after Musk let that sink in. If you’re so inclined, we invite you to snag a hardback copy, which we’re told is helpful in claiming a place on the bestseller lists. We’ll have more to say about the book in weeks to come — but in the meantime, that pre-order link again can be found right here. If you want a signed copy — an exclusive for Platformer subscribers — fill out this form once you’ve pre-ordered.
To celebrate the book’s announcement, today we reached out to a wide range of people who used to work at (and, in one case, study) Twitter for their thoughts on how to build its replacement. Threads, Mastodon, Bluesky and others have all gained momentum to varying degrees, but all have a long way to go to reach their full potential.
Where to go from here? Here’s what they told us, lightly edited for clarity and length.
- Focus on shipping — Seth Wilson, former director of threat management
You have to constantly innovate and experiment. Speed of innovation is critical in the early stages. In the early years of Twitter, one of the things I loved seeing was new features being rolled out every day. It was fast. So, talk to your engineers and ask if they have the tools and space to experiment and innovate.
You need to establish a culture where good ideas can come from everywhere — from any engineer, any employee, in any line of business. Ideas get filtered out and lost if they pass through a rigid hierarchy. What I loved about Twitter was seeing a junior engineer or intern go up at Tea Time (Twitter’s old weekly all-hands meeting) and demonstrate a feature they’d developed.
Another source of innovation and ideas came from Hack Week projects. Do not under-invest in this area. There are countless features that Twitter implemented over the years that originated with a Hack Week project. The corporate development team used to run a “TweetTank” competition and have Tweeps pitch a partnership or acquisition that Twitter should do, “Shark Tank” style. One of the winners of that competition wound up being one of Twitter’s most successful acquisitions, Gnip.
- Build a great API — Menotti Minutillo, former senior engineering manager for privacy and engineering
History has shown that a well-supported API provides a lot of opportunity and value for content creators, developers, and the platform itself. It's critical for any amount of scale, and helps use cases emerge from experimentation. It made customer service possible. And it allowed for all sorts of wacky automated accounts that gave Twitter a lot of flavor.
I worked on Twitter’s API for two years, specifically on security and privacy features. Once you open up the API there’s the potential for abuse, but it’s not all or nothing. From the beginning, try to establish a system based on developer reputation. That way, developers can earn their way to higher limits or higher degrees of capability. This means disallowing most things by default, and then slowly starting to allow functionality to developers who prove they’re good players — rather than opening it up to everyone and trying to knock down those who display bad behavior.
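To make that concrete, here’s a minimal sketch of what reputation-tiered API access could look like. This isn’t Twitter’s actual system; the tier names, quotas, and promotion rule are all hypothetical.

```python
# A minimal sketch of reputation-tiered API access, not Twitter's actual
# system. Tier names, quotas, and the promotion rule are hypothetical.
from dataclasses import dataclass

TIERS = {
    # Most capabilities are disallowed by default at the lowest tier.
    "new":     {"requests_per_day": 500,       "can_post": False, "can_dm": False},
    "trusted": {"requests_per_day": 10_000,    "can_post": True,  "can_dm": False},
    "partner": {"requests_per_day": 1_000_000, "can_post": True,  "can_dm": True},
}

@dataclass
class Developer:
    name: str
    tier: str = "new"
    violations: int = 0
    days_in_good_standing: int = 0

    def maybe_promote(self) -> None:
        """Developers earn capability over time instead of starting wide open."""
        promotions = {"new": "trusted", "trusted": "partner"}
        if self.violations == 0 and self.days_in_good_standing >= 90:
            if self.tier in promotions:
                self.tier = promotions[self.tier]
                self.days_in_good_standing = 0  # the clock restarts at each tier

dev = Developer("example-app", days_in_good_standing=120)
dev.maybe_promote()
print(dev.tier, TIERS[dev.tier])  # trusted: posting enabled, DMs still off
```

The point is the default posture: a brand-new developer gets almost nothing, and the platform never has to claw back access it never granted.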
Also, the more you can have your API mimic the application features, the better. Twitter played catch-up, launching a feature in the mobile app and then having developers be like “when is this coming to the API?” Like when polls came out, it took forever to get it built into the API. Every time you go to general availability with a feature, you also need to have it available in the API. Conceptually it's like “ya of course,” but it’s not an easy thing to do.
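As a toy illustration of that parity principle, here’s an entirely hypothetical launch check, using the polls example above, that flags any feature shipped in the app without an API counterpart:

```python
# A toy GA checklist, purely hypothetical: flag any feature that shipped in
# the app without a matching API surface.
FEATURES = {
    "polls": {"app": True, "api": False},  # the catch-up case described above
    "edit_tweet": {"app": True, "api": True},
}

def ga_blockers(features: dict) -> list[str]:
    """Return features live in the app but missing from the API."""
    return [
        name
        for name, surfaces in features.items()
        if surfaces["app"] and not surfaces["api"]
    ]

print(ga_blockers(FEATURES))  # ['polls'] would block the launch
```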
- Let creators own the audiences they build — Lara Cohen, former global head of marketing and partnerships
I think creators of all sizes and scales have realized just how much value they’re bringing to these platforms, how much their content drives engagement and profit — and how little they’re getting in return. To keep creators, platforms need to build with this in mind — think of long-term ways for creators to monetize (i.e., not just launching a splashy creator fund for the headlines and then letting the money run out). Create features that allow creators to own the audience they’ve built there. And build safety features, because often the folks driving the most engagement are also subject to the most harassment.
- Draw a clear line on content moderation — Yoel Roth, former head of trust and safety
You can draw a line that prioritizes safety over speech. Or you can draw a line the way Twitter historically drew it and say, “look, we’re going to prioritize having context, and we’ll have reports, and if you see something, say something, and sorry, tough luck.” You can draw the line anywhere you want! But if you look at expectations of social platforms from 2018 onward, it’s clear that most people believe platforms should moderate proactively, not just reactively.
A key bit of being a public, real-time speech platform is that you’re not moderating one-to-one interactions for the most part. Nor are you even moderating interactions within small, closed groups of folks who’ve chosen to interact with each other. It's a free-for-all. You have to recognize that you lack context on these interactions, and that different people can have different expectations of the same conversation. One person could interpret an interaction as being horrible and offensive. They might be right, or they might be wrong. And you as the platform have only a tiny slice of visibility.
Every platform to date has gotten completely stuck on this. At Twitter, for many years, the operating mindset was “we as a company lack context on these interactions, and consequently need to be pretty hands off.” The result of that was a reporting practice that required a first-person report for an abusive post, which was slanted toward getting the company more context. Like, if the person who was the topic of the post says it was abusive, it’s not just friends fucking around. That’s really hard to do because it puts a lot of burden on the victim of abuse, and it’s out of touch with expectations of social platforms now.
- Amplify authentic, positive conversations — Karl Robillard, former global head of social impact
These bring people together and reduce polarization. Twitter used to invest heavily in this area — bringing company and community together in the spirit of healthy conversation. It paid off in so many ways — building climate emergency response tools, finding missing kids, preserving aboriginal languages, and teaching internet safety and media literacy. When you lead with positivity and goodwill, the world opens up in beautiful and unexpected ways.
- Leverage AI to reduce the mental health burden on your content moderators — Noam Segal, former head of health research
In August, OpenAI published a blog post about using GPT-4 for content moderation. It’s well suited to this sort of work. Policies change frequently, and it can take time for human moderators to learn, understand, and fully implement those changes. Large language models can read the policy and adapt more quickly.
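The appeal of this approach is that the policy lives in plain text: when the rules change, you edit a prompt instead of retraining a workforce. Here’s a minimal sketch of that policy-as-prompt pattern, assuming the OpenAI Python client; the policy text, labels, and example post are illustrative, not OpenAI’s.

```python
# A minimal sketch of policy-as-prompt moderation, assuming the OpenAI
# Python client (openai >= 1.0). The policy text and label set below are
# illustrative; a real deployment would use the platform's own policy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """You are a content moderator. Posts must not contain targeted
harassment: insults or threats aimed at a specific person. Reply with
exactly one label (ALLOW, REVIEW, or REMOVE) and a one-sentence rationale."""

def moderate(post: str) -> str:
    """Ask the model to apply the written policy to a single post."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep labels as consistent as possible across runs
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(moderate("Nobody at your office can stand you, loser."))
```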
What we’re seeing with AI is that it scales, and humans don’t. And I’m so saddened to say this, but the scale of hatred in this world is unimaginable, and it has to be met with tools that can match the scale of hate.
Since the war in Israel started, I have seen things that I cannot unsee. I, and every other Israeli and Palestinian, am going to need deep therapy to get over this. It’s traumatizing and horrific. As a person who is a parent to three kids, I can’t even speak about the things I have seen.
So I'm afraid of what we are doing to our human moderators. We are exposing people to things that no one should ever be exposed to. Can we afford from an ethical standpoint to put people through this trauma in order to create a healthier discourse? I feel very torn, but I don’t think so.
- Don’t be afraid to be opinionated in your content labels — Lisa Young, former head of content design
Before the 2020 election, our labels on COVID misinformation read “Get the facts from health officials about the science behind COVID-19 vaccines.” The language wasn’t structured, it was hard to localize, and, to conservative users, it reinforced the perception that Twitter was biased. We also tested the word “disputed,” and everyone responded negatively. Finally, we landed on the label “misleading” and added a bold caution logo. Then: “Learn why health officials say vaccines are safe for most people.” These labels were easier for our teams to implement, and showed a 17% increase in click-through rate, meaning that millions and millions more people were trusting us to give them more context on potentially misleading tweets.
This project reinforced how important it is to test language, and showed how structured content is essential for creating a scalable, quick-response product.
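As a hypothetical illustration of what “structured” means here: the label becomes data (a severity, an icon, and string keys that each locale translates) rather than a hand-written sentence. The field names and keys below are invented for the example; only the “misleading” copy comes from the labels described above.

```python
# A hypothetical sketch of a structured content label: the label is data,
# so copy can be A/B tested, swapped, and localized without code changes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    severity: str        # the tested, opinionated word, e.g. "misleading"
    icon: str            # e.g. "caution"
    headline_key: str    # keys into per-locale string tables
    link_text_key: str

STRINGS = {
    "en": {
        "label.misleading.headline": "Misleading",
        "label.misleading.link": "Learn why health officials say vaccines are safe for most people.",
    },
    # Other locales translate the same keys; the structure never changes.
}

covid_label = Label(
    severity="misleading",
    icon="caution",
    headline_key="label.misleading.headline",
    link_text_key="label.misleading.link",
)

def render(label: Label, locale: str) -> str:
    strings = STRINGS[locale]
    return f"[{label.icon}] {strings[label.headline_key]}: {strings[label.link_text_key]}"

print(render(covid_label, "en"))
```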
- Roll out a feature like Community Notes to help users govern themselves — Manu Cornet, former software engineer
When you remove a post outright, you make it easier for the person who posted it to play the victim. Community Notes — which attempt to surface neutral, non-partisan, fact-based clarifications written by users on popular posts — offer a powerful alternative to that approach. Elon Musk’s own tweets are regularly flagged by the community. It’s self-regulating in a way.
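The idea underneath Community Notes is often called bridging: a note surfaces only when people who usually disagree both rate it helpful. The production algorithm is open source and based on matrix factorization over ratings; the sketch below is a drastically simplified stand-in that assumes two known viewpoint clusters.

```python
# A drastically simplified sketch of bridging-based ranking, not the real
# Community Notes algorithm (which uses matrix factorization). Here a note
# surfaces only if raters from both hypothetical clusters find it helpful.
from collections import defaultdict

# rater -> viewpoint cluster, assumed known here purely for illustration
RATER_CLUSTER = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}

def should_surface(ratings: dict[str, bool], threshold: float = 0.5) -> bool:
    """Surface a note only if its helpful-rate clears the bar in every cluster."""
    by_cluster = defaultdict(list)
    for rater, is_helpful in ratings.items():
        by_cluster[RATER_CLUSTER[rater]].append(is_helpful)
    return all(sum(votes) / len(votes) >= threshold for votes in by_cluster.values())

print(should_surface({"r1": True, "r2": True, "r3": False, "r4": False}))  # False: one-sided support
print(should_surface({"r1": True, "r2": True, "r3": True, "r4": False}))   # True: cross-cluster support
```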
That said, I don’t know that community moderation is enough. As much as my engineer’s mind wishes most problems could be solved with technology, I’m not naive enough to think algorithms can do 100 percent of this job in a social network. The community can help debunk fake stories, but many other content moderation challenges need humans, at least right now.
- Add lists! — Shauna Wright, former senior content strategist
I want to be able to track NBA reporters and political reporters, and cultivate my specific interests like I did on Twitter, without having to follow those accounts and always see them in my main timeline. Like, I’m a Warriors fan, and when there’s a game on I want to see what people are talking about. But I don’t want all those tweets in my timeline all the time.
- Take brand safety seriously — Julianna Hayes, former senior vice president of sales and corporate finance
Monetizing social networks is a challenge. From a revenue product perspective, find something that feels as organic to your platform as possible. Your ad products will need to drive results. Advertisers need to find value in their performance; no one owes you their spend. You will have to work hard for each sale, as you are up against companies that have products that perform exceptionally well. A focus on brand safety is key, as advertisers' reputation is everything. Have that in the back of your mind as you design advertiser tools and safety measures.
And here’s a bonus tip from one of our favorite thinkers in the space: someone who didn’t work at Twitter, but whose advice every would-be Twitter replacement needs to keep in mind.
- Get ready for a flood of new regulations around the world — Evelyn Douek, assistant professor of law at Stanford Law School, co-host of the Moderated Content podcast, and someone who was really good at tweeting back in the day
For a long time these platforms were unregulated. But in the last couple of years we’ve seen a raft of new regulatory requirements come in, like the Digital Services Act in the EU and the Online Safety Act in the UK. In the United States, a bunch of states are passing their own bills, some of which are now being challenged before the Supreme Court. The writing on the wall is clear: governments everywhere are getting way more active in regulating these technologies, and new platforms are going to have to deal with that much more than they did in the past.
On the podcast this week: Kevin and I sort through a week’s worth of news about OpenAI. Then, the Times’ David Yaffe-Bellany joins to remind us why CZ is going to prison. And finally, the latest news about how AI sludge is taking over web search.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
Governing
- A US judge blocked Montana’s TikTok ban, which was set to take effect Jan. 1, citing free speech concerns. (David Shepardson / Reuters)
- The CEOs of TikTok, X, Meta, Snap and Discord are set to testify at a Senate hearing on online child sexual exploitation in January. (David Shepardson / Reuters)
- Meta is challenging the constitutionality of the FTC’s in-house courts amid potential restrictions on how the company can monetize user data. (Jan Wolfe / The Wall Street Journal)
- The Canadian federal government reached a deal with Google over the Online News Act, under which Google will pay news companies annually and continue to share news online. Another successful shakedown. Which countries will be next to line up at the trough? (Daniel Thibeault, David Cochrane and Darren Major / CBC)
- Apple will have to face a revived probe into its dominance in mobile browsers and cloud gaming by the UK Competition and Markets Authority, after the agency won an appeal. (Katharine Gemmell / Bloomberg)
- The rules for political and social issue advertising for the 2024 elections will be the same as in past election cycles on Facebook and Instagram, Meta says. That means no new political ads in the final week before the election. (Anna Edgerton and Alexandra Barinka / Bloomberg)
- After finding out their images were being used for AI nude deepfakes, a group of women in New York are fighting back and calling for more regulation. (Olivia Carville and Margi Murphy / Bloomberg)
- The Data and Trust Alliance, a consortium of large companies, has developed a standard for describing the origin, history, and legal rights of data. (Steve Lohr / The New York Times)
- A dozen big tech companies signed onto the UK Online Fraud Charter, which aims to fight online scams, fake ads and romance fraud. (Ben Mitchell / The Independent)
- Adobe is reportedly rushing a proposal to address European regulator concerns over its Figma acquisition. The proposal could include not tying Figma to Creative Cloud and divesting Adobe XD, which competes with Figma. (Samuel Stolton, Katharine Gemmell, Leah Nylen and Brody Ford / Bloomberg)
- As TikTok begins work on its Norwegian data center, the company is pledging €12 billion over the next 10 years to appease European regulators. (Paul Sawers / TechCrunch)
- Palestinian creators are finding it difficult to access YouTube’s revenue sharing and other Google services amid the war in Gaza. (Paresh Dave / WIRED)
- A new wave of appeals to Meta’s Oversight Board related to content moderation around the Israel-Hamas conflict could reshape moderation policies. Or the board could take three years to hear the cases and Meta could ignore the eventual policy recommendations. (Russell Brandom / Rest of World)
- Foreign governments, particularly in Russia, Iran and China, are likely to continue pushing influence campaigns through fake social media accounts in 2024, Meta warns. Meanwhile, Meta reported that the US government stopped sharing data with it in July — likely amid legal uncertainty created by the “jawboning” cases now before the US Supreme Court. (AJ Vicens / CyberScoop)
- China has “massively increased” the number of cyberattacks on Taiwan in the last six months, Google cybersecurity experts say. (Ryan Gallagher / Bloomberg)
Industry
- Threads is reportedly launching in Europe in December, its largest market expansion since launch. (Salvador Rodriguez, Sam Schechner and Meghan Bobrowsky / The Wall Street Journal)
- Meta paused shipments of the Quest 3 Elite Battery Strap after user reports of a charging fault that made the battery useless. (Scott Hayden / Road to VR)
- Startup Stability AI is reportedly exploring a sale as it faces investor pressure over its financial position. People give OpenAI a lot of grief these days but Stability is arguably even messier. (Mark Bergen and Rachel Metz / Bloomberg)
- Elon Musk wants X’s fleeing advertisers to “go f— themselves”, claiming that the companies were blackmailing him with advertising. He singled out Disney’s Bob Iger. (Lora Kolodny / CNBC)
- But he says his antisemitic post was a “mistake” and possibly “the most foolish” thing he’s done on X. (Jacob Kastrenakes / The Verge)
- Meanwhile, advertisers are reacting to Musk’s latest meltdown exactly how you’d expect: by pledging to never advertise again. (Kate Conger / New York Times)
- OpenAI reportedly doesn’t plan on including outside investors, including Microsoft, on its new board of directors. (Amir Efrati, Jessica E. Lessin and Aaron Holmes / The Information)
- But Microsoft is getting a non-voting observer seat on the board. (Alex Heath / The Verge)
- Some high-profile women in AI say they would not consider joining the current all-male board either, for fear of being marginalized. (Kate Knibbs, Lauren Goode and Khari Johnson / WIRED)
- Despite playing a part in Sam Altman’s firing, and despite his CEO position at Quora, which has become increasingly competitive with ChatGPT, Adam D’Angelo remains on the OpenAI board. (Priya Anand and Sarah McBride / Bloomberg)
- In an interview, Sam Altman said he was initially hurt and angry when he was ousted, but declined to say why he was fired. (Alex Heath / The Verge)
- Adam Selipsky, Amazon’s cloud division head, says that companies “don’t want a cloud provider that’s beholden primarily to one model provider,” in a shot at OpenAI. (Camilla Hodgson and Tim Bradshaw / Financial Times)
- Researchers found that ChatGPT’s training data can be leaked through a “divergence attack” — asking the chatbot to repeat a word endlessly. (Alex Ivanovs / StackDiary)
- Amazon introduced its AI chatbot for companies, Q, built to be more secure and private than other chatbots. (Karen Weise / The New York Times)
- TikTok launched artist accounts to improve engagement and discoverability as the company pushes further into streaming. (Sheena Vasani / The Verge)
- Apple, the company that produced Robert De Niro’s new film, altered his speech at the last minute at the Gotham Awards to take out criticisms of Donald Trump and focus more on the film. (Brent Lang and Matt Donnelly / Variety)
- Advertisers running Google search ads are automatically opted in to the Google Search Partners Network, where their ads often run on controversial third-party websites, a report found. (Natasha Lomas / TechCrunch)
- Google Registry released a new top-level domain, “.meme.” (Emma Roth / The Verge)
- Google fixed the sixth Chrome zero-day vulnerability of the year with an emergency security update. (Sergiu Gatlan / Bleeping Computer)
- Researchers at Google DeepMind used an AI tool to discover 2 million crystal structures, opening up possibilities in renewable energy and advanced computation. (Michael Peel / Financial Times)
- Gmail accounts that haven’t been used in two years will soon be deleted under a new policy. (Dalvin Brown / The Wall Street Journal)
- Google is celebrating 1 billion monthly active users for RCS messaging by adding “Photomojis,” which let users make emojis out of photos, along with a slew of other new features. (Allison Johnson / The Verge)
- The latest Android updates include new emoji features, emoji for voice messages, and an AI-generated image description tool. (Lawrence Bonk / Engadget)
- Users can now hide locked WhatsApp chats behind a customizable secret code. (Emma Roth / The Verge)
- This year’s Spotify Wrapped is out, with Taylor Swift as the most-streamed artist and new features such as listening preference city matching and streaming habit highlights. (Ann-Marie Alcántara / The Wall Street Journal)
- Some users who were city-matched to Burlington, Vt., Cambridge, Mass., or Berkeley, Calif. are joking that those cities were designated for LGBTQ users. This story was highly relevant to me, an LGBT Spotify user whose Wrapped located him in Burlington. (Madison Malone Kircher and Sopan Deb / The New York Times)
- San Francisco startup Perplexity introduced two new online large language models that use real-time data to provide up-to-date responses. (Kristi Hines / Search Engine Journal)
- Mailchimp is ending its newsletter service TinyLetter to focus on its core marketing product. Which sucks. (Jay Peters / The Verge)
- Substack is rolling out a suite of video content creation tools, putting it more in direct competition with Patreon. (Taylor Lorenz / Washington Post)
- Pinterest is testing a “body type ranges” search tool in an effort to boost inclusivity. (Sarah Perez / TechCrunch)
Talk to us
Send us tips, comments, questions, and posts: casey@platformer.news and zoe@platformer.news.