The surge in antisemitic and violent content on social media after Hamas' attack on Israel has renewed debate over what role, if any, the government should have in policing online speech.

"You're seeing the intersection of antisemitism and radical ideology, radical Islamic ideology and pro-terror content online that absolutely poses a risk to any Western democracy, including the United States of America," said Tal-Or Cohen Montemayor, the founder of CyberWell, an Israeli tech nonprofit that tracks antisemitic online speech.

The recent surge in online antisemitism and "pro-terror content" after Hamas' attack on Israel has some urging more regulation of websites. But free speech advocates say government isn't the answer. (Getty Images)


Montemayor calls Oct. 7 the "largest hijacking of social media platforms by a terrorist organization." Internet users across the world are going to sleep next to their phones and waking up to an onslaught of misinformation, she said.

CyberWell uses AI to analyze thousands of posts in real time and flag content that is "highly likely" to be antisemitic. The nonprofit saw an 86% spike in such content after Oct. 7, according to a recent report.

"The biggest kind of loophole in the legislative space that has kept us in this very toxic relationship with social media over the years" is Section 230, she said.

Section 230 of the Communications Decency Act of 1996 states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

"The biggest kind of loophole in the legislative space that has kept us in this very toxic relationship with social media over the years [is Section 230]"

— Tal-Or Cohen Montemayor, CyberWell

Those 26 words keep online speech free, according to Adrian Moore, vice president of policy at Reason Foundation, a libertarian think tank. In effect, the provision means that social media platforms and other websites can't be held liable for what their users post.

"We see discussions about Section 230 come up pretty much whenever some group of people gets offensive — essentially — online," Moore told Fox News. "People who are offended by that speech start talking about, 'Shouldn't there be a way for us to stop bad things from being posted on the internet?'"


In Moore's opinion, the answer to that question is no. And while Montemayor is a critic of Section 230, she acknowledges that numerous attempts to change the law over the years have failed. She advocates instead for something similar to the European Union's Digital Services Act, passed last year, which requires online platforms to do more to police illegal content and to be more transparent about their data and algorithms.

"If you can't get rid of Section 230, we should at least have transparent access to the data of how this hate and how this terror content is being reported and is being treated by the social media platforms," she said.

Because many social media algorithms feed users content similar to what they have previously engaged with, Montemayor is particularly concerned about potential "silo" effects.

"There's no way you can define what are bad things that isn't subjective"

— Adrian Moore, Reason Foundation

"If you are, for example, anti-Israel or anti-Palestinian or anti-American, [they'll] just show you more of that content," she said, noting the recent popularity of Osama bin Laden's "Letter to America" on TikTok.

The algorithms become a problem "when there is a trend that is quite literally anti-American and is being exposed over and over again to your youngest population between the ages of 18 and 30, and they're relying on that as their primary news source," Montemayor said.


Moore cautioned against increased regulation.

"There's no way you can define what are bad things that isn't subjective," he said. "That means that whoever is in power decides what the bad things are. And whenever the people you don't agree with are in power, they're going to decide the things that you say are bad things."

But that doesn't mean social media platforms need to be troughs of offensive content, he said. Companies have a financial incentive to moderate posts that might drive other users away from their platform.


"Offensive speech doesn't usually help you to grow your market share," Moore said. "So I think the best defense we have against offensive online speech is a combination of these companies wanting to provide a decent place for people to exchange ideas and information … and meanwhile, everybody's been free to push back against all that antisemitic speech."

To hear more from Montemayor and Moore, watch the accompanying video.

Ramiro Vargas contributed to the accompanying video.