The explosion of online antisemitism after Hamas attacked Israel is having real-world consequences, warned one tech expert, who said social media companies were unprepared to respond to the surge of anti-Jewish hate.
"Oct. 7 was the largest hijacking of social media platforms by a terrorist organization," said Tal-Or Cohen Montemayor, the founder of CyberWell, an Israeli tech nonprofit that catalogs antisemitic online speech.
The "horrible, vicious antisemitic terror attack … acted as a catalyst for an outpouring of Jew hatred across the world," she added.
CyberWell uses AI to analyze thousands of posts in real time and flag content that is "highly likely" to be antisemitic. The nonprofit saw an 86% spike in such content after Oct. 7, according to a recent report. Facebook alone saw a 193% increase, but Montemayor said X had a higher "baseline" level of antisemitic content before the terror attacks.
Across the board, the outpouring of anti-Jewish hatred and graphic violence after Hamas' attack caught social media platforms off guard, Montemayor told Fox News. Livestreams showed Hamas kidnapping and executing families, videos circulated of women "being paraded in the streets in Gaza after being violently raped" and "live lynchings of Israeli soldiers" were posted on TikTok, she said.
"The psychological warfare with social media is still happening today," Montemayor added, with Hamas "releasing footage of hostages making statements under duress and then showing their bodies or even torture videos."
Online antisemitism is more rampant now than at any other point in Montemayor's career, she said. The nearest precedent was October 2022, after Kanye West made hostile comments about Jewish people.
"Whenever there's been conflict in the Middle East, we've seen an increase in both antisemitic speech and counter speech," said Adrian Moore, vice president of policy at Reason Foundation, a libertarian think tank.
Companies don’t have a legal or moral obligation to censor speech on their platforms, Moore said, but he pointed out that they are competing for customers, so removing the most offensive content might be in their best interest.
"What these companies do have is an obligation to provide a place where people want to be," he said. "So I think the best defense we have against offensive online speech is a combination of these companies wanting to provide a decent place for people to exchange ideas and information."
Virtual venom "absolutely" correlates to a higher risk of real-life crime, Montemayor said, both in America and across the world.
"You're seeing the intersection of antisemitism and radical ideology, radical Islamic ideology and pro-terror content online that absolutely poses a risk to any Western democracy, including the United States of America," Montemayor said.
Antisemitic incidents surged more than 300% in the month following Hamas' attack on Israel, according to recent Anti-Defamation League data. A separate ADL survey found 70% of participants had been exposed to "misinformation or hate" related to the conflict.
"We advise that companies should proactively create guidelines for better-resourced and more intensive content moderation during crises, including increasing the number of moderators and experts with cultural and subject matter expertise related to the crisis at hand," ADL spokesperson Kevin Altman told Fox News.
CyberWell started monitoring English and Arabic online antisemitic speech in 2022. After Hamas' October attack, Montemayor said the nature of the hate became "much more violent."
An analysis of Arabic-language content showed 62% of posts called for the physical "harming or killing of Jews," she said. "We just saw an inability to effectively deal with this outpouring of hate speech and pro-terror content in Arabic specifically."
Social media companies have long struggled with a lack of moderators in languages other than English. Facebook engineers revealed in a 2020 memo that 60% of Arabic-language content went undetected by the platform's moderation systems, POLITICO previously reported. Arabic-language content on X and other platforms has also gone largely unchecked.
But past debate has focused more on the harm to Arabic speakers than on antisemitic Arabic-language content.
A report commissioned by Meta concluded that Facebook unintentionally violated Palestinian users' freedom of expression during the 2021 Gaza war, which began after Hamas launched rockets at Israel. The company over-enforced its rules on Arabic-language posts and under-enforced them on Hebrew-language content, the report found.
Meta also apologized after it mistakenly banned the Instagram hashtag #AlAqsa, which referred to the Al-Aqsa Mosque in Jerusalem’s Old City. The company's algorithms had confused the holy Islamic site with the terrorist group Al-Aqsa Martyrs Brigade.
Montemayor gave social media platforms worldwide a "big fat F" for content moderation in the wake of Oct. 7. Even so, she said her nonprofit has a good working relationship with Facebook, Instagram and TikTok, which have removed more than 90% of the content CyberWell flagged.
X's removal rate is only 10%, but she's hopeful CyberWell will be able to "work with that platform to get those numbers up."
Ramiro Vargas contributed to the accompanying video.