How Online Echo Chambers Make Eating Disorders Worse

by Stephanie Lee

Andres Barrionuevo Lopez/iStock

Eating disorders are surging at an alarming rate. Emergency room visits for adolescent girls struggling with conditions such as anorexia or bulimia doubled from 2019 to 2021, according to the CDC. Meanwhile, discussions about eating disorders and self-harm on X, formerly Twitter, have quintupled.

Scientists point to social media as a potential driver of this mental health crisis. In particular, online exposure to idealized body imagery and language can trigger negative self-comparisons, especially for young social media users whose identities and self-worth are still forming.

Now, new research analyzes how social media group dynamics amplify behaviors harmful to mental health. A team of researchers at USC Viterbi’s Information Sciences Institute (ISI) found that online social platforms create a feedback loop of eating disorder content, trapping vulnerable individuals within pro-anorexia echo chambers. The preprint has been submitted to a conference.

“The social dynamic is perhaps the most harmful force on social media,” Kristina Lerman, study lead author and Principal Scientist at ISI, said. “The friends you make online can actually make your mental health worse.”

To trace this “vicious cycle” of behavior, Lerman and team used machine learning tools to analyze patterns among millions of tweets. The study first identified harmful hashtags relating to eating disorders, such as #edtwt and #proana, short for “eating disorder Twitter” and “pro-anorexia” respectively. They found that these hashtags commonly appeared in posts alongside tags from ordinary diet and weight-loss conversations, showing that harmful content sits only a few steps from mainstream material. “You’re basically two clicks away from being sucked into the vicious cycle,” David Chu, the paper’s first author and a computer science Ph.D. student at USC who works at ISI, said.
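
As a rough illustration of that co-occurrence analysis, the sketch below counts how often harmful hashtags appear in the same tweet as ordinary diet tags. The seed tag lists, helper functions and sample tweets are illustrative assumptions, not the study’s actual data or code.

```python
from collections import Counter
from itertools import combinations

# Illustrative seed lists (not the study's actual lexicon)
HARMFUL_TAGS = {"#edtwt", "#proana"}
DIET_TAGS = {"#weightloss", "#diet", "#keto"}

def hashtags(tweet_text):
    """Extract lowercase hashtags from a tweet's text."""
    return {tok.lower().strip(".,!?") for tok in tweet_text.split() if tok.startswith("#")}

def co_occurrence_counts(tweets):
    """Count pairs of hashtags that appear together in the same tweet."""
    pairs = Counter()
    for tweet in tweets:
        pairs.update(combinations(sorted(hashtags(tweet)), 2))
    return pairs

def bridges_harmful_and_diet(pair):
    """True if the pair links a harmful hashtag with an ordinary diet hashtag."""
    a, b = pair
    return (a in HARMFUL_TAGS and b in DIET_TAGS) or (a in DIET_TAGS and b in HARMFUL_TAGS)

sample_tweets = [  # hypothetical examples
    "skipped lunch again #edtwt #weightloss",
    "meal prep for the week #keto #diet",
]
for pair, n in co_occurrence_counts(sample_tweets).most_common():
    flag = " <- links harmful and diet content" if bridges_harmful_and_diet(pair) else ""
    print(pair, n, flag)
```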

Researchers then analyzed the patterns of interaction within the hashtag network to discover distinct online communities organized by topic. They homed in on the 10 most active groups and used GPT-4, a large language model, to summarize each one’s main conversation theme. The themes ranged from harmful to supportive, spanning eating disorders, healthy lifestyles and the keto diet.
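
The article does not spell out the clustering method, but the general step can be sketched as follows: treat hashtags as nodes in a graph weighted by co-occurrence, then look for densely connected groups. Louvain community detection (via networkx) is used here as a stand-in for whatever algorithm the team actually applied, and the edge counts are made up.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def build_hashtag_graph(pair_counts):
    """pair_counts maps (tag_a, tag_b) -> number of tweets containing both tags."""
    G = nx.Graph()
    for (a, b), n in pair_counts.items():
        G.add_edge(a, b, weight=n)
    return G

# Hypothetical co-occurrence counts, e.g. the output of the previous sketch at scale
G = build_hashtag_graph({
    ("#edtwt", "#proana"): 120,
    ("#keto", "#diet"): 300,
    ("#edtwt", "#weightloss"): 45,
    ("#diet", "#weightloss"): 210,
})

communities = louvain_communities(G, weight="weight", seed=0)
for i, tags in enumerate(communities):
    # In the study, GPT-4 summarized each community's dominant theme from its posts;
    # here we simply list the member hashtags.
    print(f"community {i}: {sorted(tags)}")
```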

The researchers next looked at how these communities interacted with each other. Chu described the result as “astonishing.” Clusters, or echo chambers, appeared where tens of thousands of users in the same community responded to and retweeted each other, yet they had little interaction with outside groups. This means that users in pro-anorexia echo chambers saw increasingly toxic eating disorder content—with few alternative viewpoints. “They’re being radicalized by very harmful content without even knowing it,” Chu said. 
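
One simple way to quantify what the researchers observed, assuming you have each user’s community label and a log of reply and retweet interactions, is to measure how often a community’s interactions stay inside the community. The function and toy data below are an illustrative sketch, not the paper’s actual metric.

```python
from collections import Counter

def insularity(interactions, community_of):
    """Fraction of each community's outgoing interactions that stay within the community.

    interactions: iterable of (source_user, target_user) reply/retweet pairs
    community_of: dict mapping user -> community label
    """
    internal, total = Counter(), Counter()
    for src, dst in interactions:
        c_src, c_dst = community_of.get(src), community_of.get(dst)
        if c_src is None or c_dst is None:
            continue
        total[c_src] += 1
        if c_src == c_dst:
            internal[c_src] += 1
    return {c: internal[c] / total[c] for c in total}

# Toy example: values near 1.0 indicate an echo chamber
community_of = {"u1": "pro-ana", "u2": "pro-ana", "u3": "keto", "u4": "keto"}
interactions = [("u1", "u2"), ("u2", "u1"), ("u1", "u2"), ("u3", "u4"), ("u3", "u1")]
print(insularity(interactions, community_of))
```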

The behavior cycle bears similarity to a well-studied phenomenon: online radicalization. Typically, this mechanism has been used to explain how individuals get drawn into the extremes of violence and terrorism. Yet it is now also being applied to non-violent domains, such as political polarization, conspiracy theories and mental health. Lerman and team suggest that the propensity for radicalization across such disparate topics hints at unmet universal human needs that drive the behavior, such as the need to belong.

After profiling these communities, what can be done to help them? The researchers also put forth a new method to measure harmful narratives within online communities using Llama 2, a large language model. The model is fine-tuned on tweets from eating disorder communities so that it learns how they speak and can act as a proxy representative for the community.
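
As a rough sketch of that fine-tuning step, a causal language model can be further trained on a community’s tweets with the Hugging Face libraries. The model name, data file and hyperparameters below are assumptions for illustration, not the study’s actual setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed base model; gated, requires access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

# community_tweets.txt: one anonymized tweet per line from a single community (hypothetical file)
dataset = load_dataset("text", data_files={"train": "community_tweets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ed-community-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("ed-community-lm")     # reused by the probing sketch further down
tokenizer.save_pretrained("ed-community-lm")
```

In practice a 7-billion-parameter model would typically be fine-tuned with parameter-efficient methods such as LoRA and on GPUs; the plain training loop above only shows the shape of the step.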

“Language models can understand nuances in the English language,” Chu said. “They can understand the slurs, the slang, and everything people talk about given sufficient data.”

Once the model was trained to represent a certain group, the researchers asked it what it thought about eating disorder topics, with the goal of using its responses as a measure of harm. “If the model produces harmful content, then we can directly infer that those communities are not safe,” Chu said. For instance, when asked about dieting, a model with pro-eating-disorder attitudes might respond with unscientific claims about weight loss or declare that “anorexia is the way to go!”
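
A hedged sketch of that probing step might look like the following, loading the community-tuned model saved in the earlier sketch and flagging replies that contain pro-eating-disorder markers. The prompt, marker list and keyword check are illustrative stand-ins; the study’s actual harm measure may differ.

```python
from transformers import pipeline

# Assumes the fine-tuned model and tokenizer were saved to ./ed-community-lm (see earlier sketch)
generator = pipeline("text-generation", model="ed-community-lm")

prompt = "What do you think is the best way to lose weight?"
reply = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]

# Crude keyword proxy for harm; a real study would use a stronger classifier or human review
HARM_MARKERS = ["proana", "thinspo", "starve", "anorexia is the way"]
is_harmful = any(marker in reply.lower() for marker in HARM_MARKERS)

print(reply)
print("Flagged as harmful:", is_harmful)
```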

Compared with surveying real users, who may not reveal their true attitudes because of stigma, Lerman said the language model offers “a much more systematic way to measure attitudes towards eating disorders.”

Next, Lerman and team plan to expand their research to platforms beyond X, such as Reddit and TikTok. They hope that their research will inspire policymakers and the tech industry to look more deeply into content moderation, which has been shown to mitigate the effects of online radicalization.

“We should take this more seriously,” Chu said. “Eating disorders were the deadliest mental health condition last year.”

Published on March 6th, 2024

Last updated on March 7th, 2024
