Earlier this year, researchers at the Stanford Internet Observatory found that Twitter had failed to take down dozens of images of child sexual abuse. The team identified 128 Twitter accounts selling child sexual abuse material (CSAM) and 43 instances of known CSAM. “It is very surprising for any known CSAM to publicly appear on major social media platforms,” said lead author and chief technologist David Thiel. Twitter addressed the issue after being contacted by the researchers. This year the company removed 525% more accounts related to child sexual exploitation content than it did a year ago, according to Twitter.
Twitter has been slow to catch and remove some harmful content since nearly 75% of its staff was fired by Musk or resigned, including the bulk of the trust and safety team, which is responsible for handling responses to content reports. On average, only 28% of antisemitic tweets reported by the Anti-Defamation League (ADL) between December and January were removed or otherwise sanctioned. The group found the posts by drawing a 1% sample of all posts from Twitter’s API, or application programming interface. Twitter has since restricted the reported tweets that were found to violate its policies, the company said.
“Since Elon Musk took over Twitter, we have seen the platform go from having one of the best trust and safety divisions in the industry, to one of the worst,” said Nadim Nashif, director at the Arab Center for the Advancement of Social Media.