
Are Facebook's and YouTube's efforts to remove terrorist propaganda really working?

According to new research, both platforms are failing to stop ISIS-related content from circulating, even as Facebook and YouTube claim to be cracking down on the problem.

On Thursday, the consumer protection group Digital Citizens Alliance released a report documenting dozens of examples of terrorist propaganda popping up on Facebook, Instagram, YouTube and Google Plus.

The posts include images of executions, such as victims being beheaded, shot and killed, or thrown off a rooftop. Other posts are recruitment-related, displaying ISIS flags, terrorist fighters and the Sept. 11 attacks.


All the propaganda was collected in the past three months and had managed to elude the platforms' content moderation efforts, Tom Galvin, the executive director of the Digital Citizens Alliance, told PCMag.

Although Facebook, YouTube and Google have since deleted some of the content, other posts have remained online, circulating undetected for anywhere from several weeks to a few years.

At least some of the content also gained a sizeable audience. Galvin pointed to a now-deleted ISIS recruitment video his group found on YouTube that had attracted over 34,000 views. "There is much more of this stuff," he said. "And unfortunately, I think we will find more tomorrow."

The Digital Citizens Alliance announced its findings as both Facebook and YouTube have been talking up how they've taken down millions of videos and posts containing objectionable content. Powering these takedowns are AI systems that can supposedly flag the content before users need to report it.

However, Galvin said the platforms are still struggling to catch large swaths of the bad content. "Something is off here," he said. "Either their systems aren't as good as they say, or it's not a priority as they claim."

Galvin noted that his own group managed to uncover the terrorist content with the help of an AI system and human forensics from the Global Intellectual Property Enforcement Center (GIPEC). The investigators at GIPEC specifically focused on how the ISIS propaganda was being spread over the internet through memes and hashtags in other languages.

GIPEC managed to identify thousands of terrorist-related posts and videos on the platforms, Galvin said. For example, on Instagram, which Facebook owns, you can find numerous posts that appear to be ISIS-related under the hashtag #Islamic country when it's written in Arabic.

Galvin said even though Facebook, Google, and Twitter have been telling the public they're cracking down on the terrorist content, he's doubtful the companies can police themselves. A big reason why is a lack of incentive. "Their business models don't allow them to solve this problem," Galvin said. He pointed to how today's internet giants are geared toward circulating information and monetizing it with ads.

"Why are months-old Jihadi videos and content still proliferating on Google platforms? There seems to be only one possible answer: the business model enables it," the Digital Citizens Alliance said in their report.

Google, which owns YouTube, so far hasn't commented on the findings. But on Thursday, Facebook said: "There is no place for terrorists or content that promotes terrorism on Facebook or Instagram, and we remove it as soon as we become aware of it."

"We take this seriously and are committed to making the environment of our platforms safe," the company added. "We know we can do more, and we've been making major investments to add more technology and human expertise, as well as deepen partnerships to combat this global issue."

This past week, Facebook for the first time released a transparency report on how much objectionable content is reaching the platform. The company is also hiring 10,000 more staffers to focus on "safety and security" on the platform.

Galvin said Facebook has been the tech company most open to feedback and criticism. Nevertheless, the public needs to have a conversation about the major internet platforms and whether they should be regulated, he said. Recent controversies around data privacy, fake news and election propaganda underscore the dangers when these companies fail to police themselves, he added.

This article originally appeared on PCMag.com.