A new report from Stanford University charges that Instagram’s content recommendation algorithms play a key role in promoting accounts offering child sexual abuse material and in helping connect buyers and sellers.


What You Need To Know

  • A new report from Stanford University charges that Instagram’s content recommendation algorithms play a key role in promoting accounts offering child sexual abuse material and in helping connect buyers and sellers

  • In response, Meta — the parent company of Instagram and Facebook — says they’ve established a task force “to investigate these claims and immediately address them,” but did not detail the scope of the newly established task force, its goals or who within the company would be on the team

  • Instagram was not alone in being identified by the report as a conduit of this material; social media networks like Twitter, Discord and Snapchat as well as payment services like CashApp, PayPal and GTG were also named in the report

  • Meta is also facing several lawsuits from current and former members of its content moderation teams for inadequately supporting their efforts

The report focused specifically on “self-generated child sexual abuse material,” or content sold by minors themselves, which violates both Instagram’s policies and the law.

“Instagram is currently the most important platform for these networks, with features that help connect buyers and sellers,” the report from Stanford’s Internet Observatory reads. “Instagram’s recommendation algorithms are a key reason for the platform’s effectiveness in advertising” self-generated child sexual abuse material.

In response, Meta — the parent company of Instagram and Facebook — says they’ve established a task force “to investigate these claims and immediately address them.” The Stanford Internet Observatory is led by Meta’s former chief security officer, Alex Stamos.

A spokesperson did not detail the scope of the newly established task force, its goals or who within the company would be on the team.

The report’s examination of accounts found that “while it is likely that some seller accounts may be imposters redistributing content, scammers, or a third party coercing the child, it appears that by and large underage sellers are producing and marketing content of their own accord.”

The sellers, who mostly self-identified between the ages of 13 and 17, do not host their content on Instagram — a fact emphasized by Meta in their response — but instead advertise it and then exchange links to file sharing services like Dropbox or Mega for gift cards from companies like Amazon, PlayStation and DoorDash.

Even when Instagram’s safety programs identified search terms used to promote child sexual abuse material, the researchers found, some searches would result in a warning prompt that “these results may contain images of child sexual abuse,” but would still allow users to see the results if they chose to.

Instagram was not alone in being identified by the report as a conduit of this material; social media networks like Twitter, Discord and Snapchat as well as payment services like CashApp, PayPal and GTG were also named in the report.

The report noted that social media platforms’ failure to combat child sexual abuse material can be attributed in part to their prioritization of censoring adult sex workers over ensuring the safety of minors and pursuing predators.

“With as much time and energy that platforms and service providers have spent policing and deplatforming legal, adult sex workers, the lack of attention to commercial [self-generated child sexual abuse material] and the apparent difficulty some platforms have in controlling it was unexpected and unfortunate,” the researchers wrote.

TikTok “is one platform where this type of content does not appear to proliferate,” the report said, pointing towards the social media company’s “stricter and more rapid content enforcement,” as well as differences in its algorithms as reasons for a lower prevalence of child sexual abuse material.

Snapchat declined to comment on the report. Stephen Hall, the executive chairman of MEGA, said in an email the file sharing company has a zero tolerance policy for child sexual abuse material and works with other tech companies to combat the spread. He also noted MEGA will be releasing a new transparency report covering their efforts "soon."

A Telegram spokesperson said in a message on their app that they have banned 10,000 groups, channels and bots in recent weeks, but did not address the report specifically.

Other than Meta, no company implicated in the report responded to requests for comment.

Twitter’s press email automatically responded to an inquiry with a poop emoji, the company’s policy since March, according to its billionaire owner Elon Musk.

On Wednesday, Musk tweeted a screenshot of a Wall Street Journal article titled “Instagram Connects Vast Pedophile Network” — Stanford launched their investigation in response to a tip from the Journal — calling the report “extremely concerning.” It was not immediately clear if he knew Twitter was also identified as failing to adequately moderate abuse material.

Musk previously said “removing child exploitation is priority No. 1.” In February, a New York Times report revealed Twitter had struggled to combat child sexual abuse material, allowing material to stay up even after it was reported, in some cases racking up over 100,000 views.

A Meta spokesperson responded to Spectrum News’ request for comment by listing tactics and accomplishments in the company’s fight against this kind of content, including the dismantlement of 27 “abusive networks” between 2020 and 2022 and the disabling of over 490,000 accounts for “violating our child safety policies” in January of this year. 

They also noted between May 27 and June 2, the company blocked account creation on 29,000 devices for violations of their child safety policies. And, after the report came out, Meta says they “restricted thousands of additional search terms and hashtags on Instagram.”

Meta’s content moderation systems have long been the subject of criticism, including from Amnesty International, which said last year the company’s algorithms “substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people,” a minority group in the country. United Nations officials have called for Myanmar generals to be tried for genocide and have implicated Facebook in inciting violence.

The company is also facing several lawsuits from current and former members of its content moderation teams for inadequately supporting their efforts.

The Stanford researchers recommended that social media platforms proactively seek out material instead of relying on reports, noting journalists and researchers are often able to find content simply by looking for it; share information and strategies within the industry; develop better age detection methods; and reevaluate their algorithms, which “are extremely efficient” at suggesting accounts to users seeking out child sexual abuse material.

“This work also demonstrated a weakness the authors have noticed in the global child safety framework,” the report said. “The ease with which we found and explored this network, with no special data access or investigatory powers raises questions about the effectiveness of the current enforcement regime.”