The surgeon general's warning is a wake-up call for social networks
A growing body of evidence suggests that social products pose significant risks to teenagers
For many years now, a group of researchers and activists have warned about the potential dangers of children using social networks. The warnings resonated with me emotionally, since so many people I know — young and old — have struggled with their own relationships to apps like Facebook, Instagram, and TikTok. It seems logical that what many people experience as a kind of icky feeling after too much scrolling manifests as something much more serious in others — particularly in young people.
Anxiety over this state of affairs has contributed to a significant uptick this year in state-level regulation aimed at getting kids off their phones. (The other reason, of course, is a total failure of Congress to act.)
Utah just passed a law preventing children under 18 from using social networks without parental consent. Arkansas considered something similar. Montana just banned TikTok altogether.
I’ve long been sympathetic to the idea that young people need greater protections from the social networks they use daily. But I’ve also had my doubts about how aggressively we ought to compel platforms to intervene. Data on the relationship between children, teens, social networks, and mental health has been slow in coming, limited in scope, and contradictory in its findings. Looking at the research that has trickled out so far, I have more than once found myself throwing up my hands in confusion.
Recently, though, I’ve begun to feel like we’re making real progress on understanding how social networks affect young people. For too many children, frequent use of social products really does seem to do harm. And the research now appears robust enough that lawmakers can be confident in demanding more from the companies that produce them.
That was my main conclusion after reading the surgeon general’s advisory today on social media and youth mental health. Over a brisk 19 pages, US Surgeon General Vivek Murthy and his team synthesize more than a decade of research into the risks posed by social networks, and conclude that the potential for harm is significant. While the report makes a welcome acknowledgement of the benefits social networks have for young people, it also highlights specific areas where action from social networks, lawmakers, and parents is long overdue.
“Nearly every teenager in America uses social media, and yet we do not have enough evidence to conclude that it is sufficiently safe for them,” the surgeon general writes. “Our children have become unknowing participants in a decades-long experiment. It is critical that independent researchers and technology companies work together to rapidly advance our understanding of the impact of social media on children and adolescents.”
The report is well worth reading in its entirety. But several aspects of the surgeon general’s findings are worth calling out.
One, children are starting to use social media too young. The report found that two in five children have begun using social networks between the ages of 8 and 12 — a deeply vulnerable time when it seems unlikely to me that the potential benefits outweigh the risks. And this comes despite the fact that companies’ own terms of service typically forbid children under 13 from using them. Companies really ought to do more to keep young children off their platforms — and not openly court them with cynical growth-hack products like Messenger Kids from Meta.
Two, we’re learning a lot about which kinds of children are at higher risk of harm from social networks. They include adolescent girls; kids with mental health issues; kids who have been cyberbullied; kids with body image issues and disordered eating; and kids whose sleeping patterns have been disrupted by social media. Parents of children in these categories should pay particularly close attention to their kids’ social media use.
Three, there’s growing evidence that frequent social media use can negatively affect the developing brain. “Small studies have shown that people with frequent and problematic social media use can experience changes in brain structure similar to changes seen in individuals with substance use or gambling addictions,” the report states.
Moreover, it noted that “a longitudinal prospective study of adolescents without ADHD symptoms at the beginning of the study found that, over a 2-year follow-up, high-frequency use of digital media, with social media as one of the most common activities, was associated with a modest yet statistically significant increased odds of developing ADHD symptoms.”
Four, one intervention that seems to produce significantly positive results is simply reducing the time children spend on social media. Spending more than three hours a day on social networks doubles the risk of poor mental health outcomes, including depression and anxiety. Voluntary screen-time controls don’t seem to be doing enough here; lawmakers should consider creating and enforcing daily time limits for apps like these.
All that said, social network usage clearly also has real benefits for young people. Most young people, even. There’s a reason 95 percent of them use it!
“For example,” according to the report, “studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support.”
It also notes that:
Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%).
And in other cases, the authors found research suggesting that social media actually prompts some children with mental health issues to seek treatment, in part because they’re learning about it there.
This is useful, I think, because it helps us understand who social networks can be particularly beneficial to. Understanding how and why LGBT kids benefit from these networks disproportionately, for example, could help platforms make themselves safer and more beneficial to everyone else.
Of course, there’s still a lot we don’t know. In part that’s because, to climb back on an old hobbyhorse of mine, platforms are still too stingy with the data that might help researchers understand them better. Part of this is for good reasons related to user privacy; part of it is for the bad reason of not really wanting to understand too deeply the harms their own products can cause.
“There is broad concern among the scientific community that a lack of access to data and lack of transparency from technology companies have been barriers to understanding the full scope and scale of the impact of social media on mental health and well-being,” the surgeon general’s report states.
I’m hopeful this will change, though. Thanks to the European Union’s Digital Services Act, academic researchers now have a legal avenue to safely request and study platform data, and I imagine it will prove hugely beneficial to our understanding of social networks’ effects on mental health, among many other issues.
In the meantime, we have enough data to make good recommendations for platforms, lawmakers, parents, and children. For platforms, good suggestions include conducting independent assessments of the effects of their products on children and adolescents; establishing scientific advisory committees to inform product development; and sharing data with researchers in a privacy-protective way.
Recommendations for policymakers include developing age-appropriate health and safety standards for platforms; funding more research on the subject; and cutting off growth and engagement hacks for kids.
It’s a lot to take in. And I know that plenty of you — especially those who work at social platforms — still might not be persuaded by the available evidence.
But the more data we see, the harder it gets for me to keep an open mind on the subject, particularly for younger children in the high-risk groups mentioned above. If I were to become a parent, I’d endeavor to keep my kids away from social media through middle school. (Though I imagine I wouldn’t be able to totally prevent them from at least some unsupervised use of YouTube and TikTok.) I’d also plan to monitor their social media usage, and any effects it might have on their mental health, through high school.
When I first started writing a newsletter about social networks, the consequences of children using them were largely a mystery. But little by little, we’re beginning to understand both the risks and the benefits. And on the question of whether using social networks poses risks to children, the surgeon general’s warning today suggests that the answer is almost certainly yes.
More on teens and social media: Ezra Klein has a great discussion with psychologist Jean Twenge, author of the books iGen and Generations, on a recent episode of his essential podcast.
Microsoft Build 2023
Lots of interesting AI developments at today’s developer-centric event.
- Bing is now the default search engine for ChatGPT; the new browsing functionality is rolling out to Plus subscribers starting today and will come to all free ChatGPT users in the future through a plug-in. (Tom Warren / The Verge)
- Microsoft announced Azure OpenAI Service, which will let enterprise customers combine a model like GPT-4 with proprietary company data to create customized “copilots” for use in the workplace. (Kyle Wiggers / TechCrunch)
- Microsoft unveiled an AI-powered content moderation tool called Azure AI Content Safety that is trained to detect “inappropriate” text and images in a number of languages. The tool will assign a severity score to flagged content to help human moderators triage enforcement needs. (Kyle Wiggers / TechCrunch)
- Microsoft pledged to sign all AI art that its various apps create with a cryptographic watermark to label it as machine-generated. Good! (Mark Hachman / PC World)
- Microsoft will start using generative AI to create automated review summaries for apps in the Microsoft Store. (Wes Davis / The Verge)
Governing
- Florida Gov. Ron DeSantis plans to announce his presidential bid in a Twitter Spaces conversation with Elon Musk on Wednesday evening. DeSantis fan David Sacks will moderate. Twitter is now just a text-based alternative to Rumble. (Dasha Burns and Matt Dixon / NBC News)
- Meta sold Giphy to Shutterstock for $53 million after UK regulators ordered it to divest the GIF-sharing platform over antitrust concerns. Meta originally paid $400 million for the company. A profoundly useless regulatory move from the United Kingdom. (Reuters)
- A judge threw out a shareholder lawsuit against Elon Musk over his takeover of Twitter last year, arguing that the plaintiff failed to show harm from Musk’s belated financial disclosure. (Jonathan Stempel / Reuters)
- Child predators are exploiting generative AI tools to create and share fake child sexual abuse content, while also sharing tips for circumventing these tools’ guardrails. (Margi Murphy / Bloomberg)
- US labor board prosecutors said Amazon repeatedly broke federal law by retaliating against union supporters and changing company policies at its unionized warehouse in New York. The NLRB filed a complaint against the company on Monday. (Josh Eidelson / Bloomberg)
- A detailed recounting of how Kickstarter became the first US tech firm to unionize underscores the challenges facing the broader tech labor movement. (Simone Stolzoff / The Verge)
Industry
- OpenAI competitor Anthropic announced its $450 million Series C raise, led by Spark Capital with participation from Google, Salesforce and Zoom. (Kyle Wiggers / TechCrunch)
- In an op-ed, Google CEO Sundar Pichai called AI the “most profound” technology humanity is working on today, and called for responsible and safe development alongside international cooperation and regulation. (Sundar Pichai / Financial Times)
- Adobe said it will enable generative fill in Photoshop by adding its AI image generator Firefly, letting users extend images and also add or remove objects in the frame using text prompts. (Jess Weatherbed / The Verge)
- Google announced Product Studio, a free generative AI tool for Google Shopping merchants to edit product images by replacing background scenes or improving resolution. (Jess Weatherbed / The Verge)
- TikTok is restructuring its e-commerce business to focus on the US and UK in an effort to get livestreamed shopping off the ground outside of China. (Cristina Criddle / Financial Times)
- The music industry’s streaming era growth is starting to slow down as Universal reported a decline in streaming sales last quarter and Warner reported flat revenue in the first half of the year. (Lucas Shaw / Bloomberg)
- Stop Scams UK, a cross-industry group consisting of banks, tech companies and telecoms, is launching a pilot program to collect data on phone and email scammers and cut down on bank fraud. (Siddharth Venkataramakrishnan / Financial Times)
Those good tweets
For more good tweets every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and additional surgeon general’s warnings: casey@platformer.news and zoe@platformer.news.