
Facebook took down more than 3 billion fake accounts from October to March, according to a report released on Thursday.

That eye-popping number is a record, and the tech giant estimates that about 5 percent of its monthly active users are not real. Approximately 2.4 billion people log into Facebook worldwide each month. However, almost all of the fake accounts were pulled down by the firm's automated systems before users could even see them.

In a separate blog post, the company said it remains "confident" that most of the activity and people on Facebook are real.

The report details how Facebook took action against a wide range of prohibited content.


"For hate speech, we now detect 65 percent of the content we remove, up from 24 percent just over a year ago when we first shared our efforts. In the first quarter of 2019, we took down 4 million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions," Guy Rosen, Facebook's VP of Integrity, said in a blog post.

The company, led by CEO and board chairman Mark Zuckerberg, released its third Community Standards Enforcement Report, which contains metrics across nine areas: adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods, spam, global terrorist propaganda, and violence and graphic content.

Facebook's founder and CEO Mark Zuckerberg speaks to participants during the Viva Technologie show at Parc des Expositions Porte de Versailles on May 24, 2018, in Paris, France.  (Getty Images)

In six of the policy areas included in the report, Facebook says it proactively detected over 95 percent of the content that it took action on before needing someone to report it.

On Thursday, the Facebook Data Transparency Advisory Group -- an independent group of experts established last year -- released its review of how Facebook enforces and reports on its community standards. The group found that the company's system for enforcing community standards and its review process are generally well designed.


But the group did make 15 separate recommendations to Facebook, including asking for more metrics to show the network's enforcement efforts and a better explanation of its current policies. The group also said Facebook should make it easier for users to stay current on policy changes and give them a "greater voice" in what content is allowed on the site.

The company announced that it had already planned to implement some of the recommendations in future reports and that, for others, it's looking at how best to put the group's suggestions into practice. "For a few, we simply don’t think the recommendations are feasible given how we review content against our policies, but we’re looking at how we can address the underlying areas for additional transparency the group rightfully raises," Radha Iyengar Plumb, head of product policy research for Facebook, said in a blog post.

"We thank the Data Transparency Advisory Group for their time, their rigorous review and their thoughtful recommendations that will help inform our efforts as we enforce our standards and bring more transparency to this work," the company said in a blog post.