Despite Twitter’s policies prohibiting the promotion of self-harm on the platform, new research suggests hashtags and images on the topic – in particular, the practice of cutting – have increased exponentially over the past year.
The study, conducted by the independent research organization the Network Contagion Research Institute in partnership with Rutgers University, examined accounts that frequently discuss or promote self-harm, looking for coded language referring to the act.
Researchers found a series of specific hashtags and memes – many of which seem innocuous or are indecipherable to the average viewer – represent the self-harm community. For example, the term or hashtag “shtwt” is shorthand for the phrase “self-harm Twitter,” and users can search for those letters to pull up tweets and images that glorify the practice of self-harm. Other terms describe specific kinds of cuts or the layers of skin damaged by deep wounds; the term “catscratch” refers to superficial cuts that resemble those made by a cat, while the term “beans” refers to “cutting deep enough that one gets to the subcutaneous layer, which, as the image reveals, has the appearance of beans,” per the NCRI.
“Such insider jargon fosters a sense of community in which encouragement to increase the depth and severity of the self-inflicted wounds (e.g. “go deeper”) is perceived as positive and supportive rather than abusive and dangerous,” the study authors wrote in part. “Many also flag themselves as adolescents by posting their age and inviting peers their own age to interact with them on Twitter around self-harm.”
NCRI first started studying the trend of tweets promoting and glorifying self-harm in October 2021, at which time there were approximately 3,880 tweets using the hashtag “shtwt,” including retweets; over the past six months, that figure has averaged around 20,000 tweets per month. Researchers say, on average, monthly mentions of “shtwt” increased 500% over the past 11 months.
That increase comes despite such language being banned in Twitter’s suicide and self-harm policies, which state users “can’t promote, or otherwise encourage, suicide or self-harm. We define promotion and encouragement to include statements such as ‘the most effective’, ‘the easiest’, ‘the best’, ‘the most successful’, ‘you should’, ‘why don’t you’.”
However, users are allowed to share “personal stories and experiences related to self-harm or suicide” without being in violation of Twitter’s policies, and may also share coping mechanisms to deal with thoughts of self-harm or host discussions focused on research and advocacy.
Twitter was made aware of the prevalence of the self-harm hashtags last year, when U.K. children’s digital rights group 5Rights submitted the results of an investigation that found numerous social media companies – including Twitter – were “systematically endangering children online” by allowing them to be exposed to others who promoted self-harm.
In response, Twitter told the Financial Times it had blocked the hashtags #shtwt, #ouchietwt, and #sliceytweet, all of which are associated with accounts that promote self-harm.
“It is against the Twitter rules to promote, glorify or encourage suicide and self-harm. Our number-one priority is the safety of the people who use our service,” the company told the Financial Times last year. “If tweets are in violation of our rules on suicide and self-harm and glorification of violence, we take decisive and appropriate enforcement action.”
But a cursory search of Twitter for #shtwt pulled up dozens of tweets from this month containing images of self-harm. It is unclear which might be in violation of Twitter’s policies, but at least one user wrote they “wanna (sic) cut” themselves, while others shared images of bloody self-inflicted cuts on their bodies. Spectrum News has reached out to the social media company for clarification.
In a statement provided to The Hill, a Twitter spokesperson said the company is working with experts to determine how to best regulate self-harm tweets while also allowing users to discuss their concerns with a like-minded community in a healthy format.
“The safety of the people who use our service is our priority and we are committed to building a safer Internet and improving the health of the public conversation,” the spokesperson said in part.
But the NCRI report noted that the number of users who likely violated Twitter’s self-harm policies is “vastly greater than zero – certainly in the thousands, and possibly in the hundreds of thousands.”
“It is also a distinct possibility that as a result of an algorithm, young people seeking help to stop harming themselves could find themselves, instead, exposed to communities that encourage and celebrate their compulsion to cut themselves,” researchers wrote in part.
The report comes as social media companies have continued to face pressure from international organizations and lawmakers to better protect children on their platforms, particularly in the wake of a late 2021 advisory from the U.S. surgeon general about the growing youth mental health crisis across the country.
Last December, Dr. Vivek Murthy said young people in America were experiencing an “alarming” and “widespread” mental health crisis, noting, in part, a “concerning increase in suicidal behaviors” among teens from 2009 to 2019.
The report attributed the pre-pandemic rise in mental health struggles to a number of factors, and specifically highlighted the adverse effect social media can have on adolescent mental health.
The surgeon general's report also pointed to media companies and their potential role in contributing to poor mental health, saying while some programs “can have a powerful impact on young people,” the onslaught of “false, misleading, or exaggerated media narratives can perpetuate misconceptions and stigma against people with mental health or substance use problems.”
Journalists and media outlets should focus on fact-based reporting and providing their viewers with sufficient context to understand events, while also weighing whether it is necessary to showcase images that some viewers might find disturbing.
Social media companies can take a number of steps to curb their negative impact on adolescent mental health – the first of which should be to provide the government with more accurate data of the behavioral impacts of time spent online, officials said in the report.
In the meantime, officials said companies like Instagram and TikTok should prioritize user safety across all stages of production.
If you or a loved one is struggling, help is available. Text HOME to 741741 to reach the Crisis Text Line.