Facebook, YouTube and Twitter were warned on Capitol Hill about an insidious new form of online extremism as they sought to push back on the notion that they aren’t doing enough to rid their platforms of terrorists and hate-filled content.
Clint Watts, a senior fellow at George Washington University’s Center for Cyber and Homeland Security, told lawmakers that the U.S. faces a unique type of threat he dubbed “Anwar Awlaki meets PizzaGate.” Watts’s idea, referencing the U.S.-born militant cleric, is that someone could one day combine the social media prowess displayed by Islamic State with the bizarre case of a man who, believing the debunked PizzaGate conspiracy theory peddled online, brandished a gun inside a Washington, D.C., pizza shop.
“The greatest concern moving forward might likely be a foreign intelligence service, posing as Americans on social media, infiltrating one or both political extremes in the U.S. and then recruiting unwitting Americans to undertake violence against a target of the foreign power’s choosing,” Watts said in his prepared testimony.
In other words, rather than radicalizing Americans to their ideologies, foreign actors could harness existing political divisions in the U.S. to manipulate citizens into committing a terrorist act. With alleged Russian meddling in the 2016 presidential election still firmly in the spotlight, social media is being closely scrutinized.
Each technology giant spent much of its testimony defending its record on battling extremism online.
YouTube, which claims its machine learning technology now enables it to take down “nearly 70 percent of violent extremist content within 8 hours of upload,” has also cracked down on what it calls “borderline videos”: content that espouses hateful or supremacist views but does not technically violate the site’s community guidelines against direct calls to violence. Such videos will now be harder to find, won’t be recommended or monetized, and won’t have features like comments, suggested videos and likes.
Meanwhile, Twitter’s Director of Public Policy and Philanthropy for the U.S. and Canada, Carlos Monje, Jr., told lawmakers the company has plans to safeguard the upcoming midterm congressional elections. Those efforts include verifying major party candidates for all statewide and federal elective offices, monitoring trends and spikes in conversations related to the 2018 elections for signs of manipulation, and bringing more transparency to voters about political advertising on Twitter.
Facebook claims that “counterspeech” is one tool in its arsenal against hate speech and extremism online. The ubiquitous social media company has partnered with a range of nongovernmental organizations and community groups to dissuade people from falling prey to extremist content.
“Although counterspeech comes in many forms, at its core these are efforts to prevent people from pursuing a hate-filled, violent life or convincing them to abandon such a life,” said Monika Bickert, head of product policy and counterterrorism at Facebook.
Watts testified that social media companies need to do more to distinguish between protected speech and speech that “endangers society.”
“Account anonymity today allows nefarious social media personas to shout the online equivalent of ‘fire’ in a movie theater. Bad actors and their fictitious and/or anonymous social media accounts can and have created a threat to public safety,” Watts said. “This is not protected free speech and many social media companies offer no method to hold these anonymous personas accountable.”