Inside Bluesky’s big growth surge

After adding millions of new users in weeks, the company tells Platformer that it will quadruple the size of its moderation team 


This article includes references to child sexual abuse material (CSAM). 

Aaron Rodericks didn’t know what he was looking at.

A post on Bluesky featured a sexualized image of a dragon that shared visual similarities with a human child. It had been reported to the company as a potential violation of the network’s community guidelines, and now it was being reviewed by Bluesky’s head of trust and safety.

The guidelines ban CSAM, which is illegal to distribute. They say nothing, though, about anthropomorphic dragons. And the artist claimed that the dragon in question was 9,000 years old.

Ultimately, the post was removed. But the surge in new users has brought with it concomitant growth in the number of tricky, disturbing, and outright bizarre edge cases that the trust and safety team must contend with.

In all of 2023, Bluesky had two confirmed cases of CSAM posted on the network. It had eight confirmed cases on Monday alone. 

“I still have to throw humans at a huge chunk of the problems because there’s all the gray-area content that we have to deal with,” Rodericks said in an interview. “We’re trying to go above what the legal requirements are, because we decided that we wanted to be a safe and welcoming space for a lot of users. And that requires a huge amount of humans, automation, and tooling. I have a very, very long wishlist.” 

At least one item on his wishlist has been granted: Bluesky has decided to quadruple the size of its contract workforce of content moderators from 25 to 100, the company told Platformer. The move comes in response to Bluesky’s rapid growth since this month’s US presidential election, which triggered a fresh exodus of users away from Elon Musk’s X and over the weekend took the platform past 22 million total users.

In a sign of Bluesky’s growing vitality, Meta has recently copied several of its features for Threads, Bluesky’s larger rival, including custom feeds and the option to make a reverse-chronological feed of followed accounts the default view.

That’s great news for Bluesky, whose user base had grown slowly for most of this year. The company, which has yet to begin generating revenue, announced last month that it had raised $15 million in a round led by Blockchain Capital.

Since then, its user base has grown by more than 50 percent. And that growth has strained the limits of what its 20-person core team and 25 contract moderators are able to do.

First imagined as an alternate future for Twitter by then-Twitter CEO Jack Dorsey in 2019, Bluesky is a decentralized network running on the AT Protocol. It is federated, allowing users who choose to do so to host their own data. Eventually, people will be able to set up new instances of Bluesky on their own servers, and the owners of those servers might adopt community guidelines different from Bluesky’s.

For now, though, content moderation on Bluesky works the same way it does on most platforms. A combination of users and automated systems flag posts for review; some actions are taken automatically, and others require human review. The primary effect of growth on Bluesky’s trust and safety team is that they now have many more posts to review.
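To make that concrete, here is a minimal Python sketch of what such a triage step can look like. Everything in it, from the labels to the thresholds, is my own illustration; Bluesky has not published its moderation code.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    AUTO_REMOVE = "auto_remove"
    HUMAN_REVIEW = "human_review"
    DISMISS = "dismiss"

@dataclass
class Report:
    post_id: str
    source: str   # "user" or "classifier" -- who filed the flag
    label: str    # e.g. "csam", "harassment", "spam" (illustrative taxonomy)
    score: float  # classifier confidence between 0.0 and 1.0

def triage(report: Report) -> Action:
    """Act automatically only on high-confidence, high-severity reports;
    route everything in the gray area to a human moderator."""
    if report.label == "csam":
        return Action.AUTO_REMOVE  # highest severity: always actioned
    if report.source == "classifier" and report.score >= 0.98:
        return Action.AUTO_REMOVE  # near-certain automated match
    if report.source == "user" or report.score >= 0.50:
        return Action.HUMAN_REVIEW  # the gray area that requires humans
    return Action.DISMISS
```

The more users and posts there are, the more reports land in that middle branch, which is exactly why the size of the contract moderation team matters.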

An early sign of the company’s new challenges came in September, when 2 million new users joined the network in the wake of Brazil temporarily banning X. Bluesky’s automated systems recorded a surge of reports related to the use of the letters “KKK,” which in the United States typically refer to the Ku Klux Klan and can signal support for racist ideologies. 

Upon investigating, though, the company realized that typing a series of K’s is simply how some speakers of Portuguese signal that they are laughing online. Bluesky updated its machine learning classifiers accordingly. 
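As a rough illustration of the kind of adjustment involved (Bluesky’s real fix was to its machine-learning classifiers, so the hard-coded patterns below are only a stand-in):

```python
import re

# Four or more k's in a row is how many Portuguese speakers type
# laughter ("kkkkk"); a standalone, three-letter "KKK" is the pattern
# that warrants a closer look. Both regexes are illustrative.
LAUGHTER = re.compile(r"\b[kK]{4,}\b")
HATE_TERM = re.compile(r"\bKKK\b")

def flag_kkk(text: str, lang: str) -> bool:
    """Return True if the post should be queued for review."""
    if lang == "pt" and LAUGHTER.search(text):
        return False  # laughing, not a slur
    return bool(HATE_TERM.search(text))
```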

The company had also set up systems to flag when a user received a large number of replies from new accounts, which is often a sign that a user is the target of a harassment campaign. 

“All of a sudden you’re taking a flamethrower to your new accounts accidentally, because you were looking for new accounts being created to harass people,” Rodericks said.
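One way to defuse that flamethrower is to measure reply patterns against the network-wide baseline instead of a fixed cutoff, so a signup wave doesn’t trip the alarm for everyone. A sketch, with assumptions of my own (the seven-day cutoff and the scoring are not Bluesky’s published numbers):

```python
from datetime import timedelta

NEW_ACCOUNT_AGE = timedelta(days=7)  # illustrative cutoff for "new"

def reply_surge_score(replier_ages: list[timedelta],
                      baseline_new_rate: float) -> float:
    """Fraction of a user's recent repliers that are new accounts,
    normalized by how common new accounts are across the network.
    During a signup wave the baseline rises, so the raw fraction
    alone would flag nearly everyone."""
    if not replier_ages:
        return 0.0
    new = sum(1 for age in replier_ages if age < NEW_ACCOUNT_AGE)
    fraction_new = new / len(replier_ages)
    # A score well above 1.0 means the repliers are disproportionately
    # new even after accounting for the growth surge.
    return fraction_new / max(baseline_new_rate, 1e-6)
```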

On Monday, Bluesky’s safety account posted a thread noting that “for some very high-severity policy areas like child safety, we recently made some short-term moderation choices to prioritize recall over precision.”

 “This resulted in over-enforcement and temporary suspensions for multiple users,” the company said. “We have reinstated accounts of some users, and are continuing to review appeals.”
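For readers unfamiliar with the jargon: precision asks how much of what you action actually broke the rules, and recall asks how much of the rule-breaking you actually caught. A toy example with invented numbers shows the trade-off Bluesky made:

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)  # of all posts actioned, how many truly violated?

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # of all true violations, how many were caught?

# Invented numbers for 100 truly violating posts in a review queue:
# Strict threshold: tp=80, fp=2,  fn=20 -> precision 0.98, recall 0.80
# Loose threshold:  tp=98, fp=30, fn=2  -> precision 0.77, recall 0.98
# Loosening the threshold catches nearly every violation (recall up)
# at the cost of suspending innocent users (precision down) -- the
# over-enforcement the company is now unwinding through appeals.
```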

As it looks to improve moderation for serious harms, the company has turned to third-party tools.

To find and remove CSAM, for example, Bluesky uses Safer, a tool from the child safety nonprofit Thorn. For more than a decade now, the tech industry has been collecting and sharing hashes of known CSAM to make it easier for them and their peers to remove it from their sites. Safer makes those hashes available to Bluesky and augments them with its own machine-learning tools, which can also detect previously unseen CSAM. When CSAM is discovered, Safer routes the relevant posts to moderators and creates reports that can be shared with child safety authorities in the countries where Bluesky operates.
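The core idea, matching uploads against a shared list of hashes of known material, can be sketched generically. This is not Safer’s actual API, and the loader below is a hypothetical stand-in:

```python
import hashlib

def load_shared_hashes() -> set[str]:
    """Hypothetical stand-in for fetching the industry-shared list of
    known-CSAM hashes (supplied in practice by a tool like Safer)."""
    return set()

KNOWN_HASHES = load_shared_hashes()

def matches_known_hash(image_bytes: bytes) -> bool:
    """Exact match against the shared list. Production systems also rely
    on perceptual hashes, which survive resizing and re-encoding, and on
    ML classifiers for material that has never been seen before."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```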

Safer now counts 50 platforms as customers, said John Starr, Thorn’s vice president of strategic impact. It flagged 4 million pieces of potential CSAM last year, including 1.5 million with its own machine-learning classifiers.

But potentially more difficult challenges lie ahead. Already, platforms are beginning to report that predators are using generative artificial intelligence to create and modify CSAM. As the technology improves, real and fake CSAM are becoming harder to distinguish. That can make it more difficult for child safety authorities to identify cases of CSAM where a child is currently in danger and being exploited.

“It’s hugely, hugely problematic,” Rodericks told me. For now, he said, Bluesky has seen little AI-generated CSAM. “But of course, it keeps on growing just with scale,” he said. 

In the meantime, Rodericks is hopeful that expanding the team will begin to bring Bluesky’s trust and safety operation back into equilibrium. And for all the challenges that growth has brought, the company is clearly grateful for it. 

“For a 20-person company with no marketing and no revenue to get this many users, we must be doing something right,” Rodericks said. “Or somebody else must be doing something very wrong.” 



Talk to us

Send us tips, comments, questions, and your favorite Bluesky feed: casey@platformer.news.