Facebook removes 4.7 million posts for promoting violence or hate speech

FILE PHOTO: A Facebook logo is displayed on a smartphone in this illustration taken January 6, 2020. REUTERS/Dado Ruvic/Illustration

Facebook isn’t just a social media channel these days. It’s also a decider and enforcer of moral philosophy and an expert on contagious disease outbreaks.

The company is putting all this expertise to work, deciding what the billions of people who use its channel to communicate are and are not allowed to say, since these people are not smart enough to make such decisions themselves.

On May 12, Facebook reported a sharp increase in the number of posts it removed across all its apps for the crimes of promoting violence and “hate speech.” The purge was made possible by new technology that identifies text embedded in images, according to a report by Reuters.
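
Neither Facebook nor Reuters has published details of that system, but the general idea is straightforward: run optical character recognition over an uploaded image, then feed the extracted text through the same checks applied to ordinary text posts. A minimal sketch of that pipeline, using the open-source Tesseract engine (via pytesseract) and a placeholder banned-phrase check standing in for Facebook's actual, unpublished classifiers:

```python
# Sketch of OCR-based moderation of image posts.
# Tesseract (via pytesseract) is an assumed stand-in; Facebook has
# not disclosed which OCR system it uses. The banned-phrase set is
# a hypothetical placeholder for a real hate-speech classifier.
from PIL import Image
import pytesseract

BANNED_PHRASES = {"example banned phrase"}  # placeholder list

def extract_text(image_path: str) -> str:
    """Pull any visible text out of an uploaded image."""
    return pytesseract.image_to_string(Image.open(image_path))

def violates_policy(image_path: str) -> bool:
    """Flag the image if its embedded text matches a banned phrase."""
    text = extract_text(image_path).lower()
    return any(phrase in text for phrase in BANNED_PHRASES)
```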

The world’s biggest social media company trashed about 4.7 million posts on its main app in the first quarter of 2020, up from 1.6 million in the fourth quarter of 2019.

The company also said it put “warning labels” on about 50 million posts related to COVID-19, after banning what it deemed harmful misinformation about the new coronavirus at the start of the pandemic.

“We have a good sense that these warning labels work. Ninety-five percent of the time that someone sees content with a label, they don’t click through to view that content,” Chief Executive Mark Zuckerberg claimed in a press call.

Facebook released the data as part of its fifth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent rules for users, in response to people offended by some of the content on its platforms.

In a blog post announcing the report, Facebook described improvements to its “proactive detection technology,” which uses artificial intelligence to detect forbidden speech as it is posted and remove it before other users can be contaminated by it.
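
Again, the internals are not public, but the description implies a hook at post-creation time: score new content with a trained model and pull anything over a threshold before it ever reaches another user's feed. A rough sketch of that flow, with a stub scoring function and an assumed threshold in place of the real model:

```python
# Sketch of "proactive detection": content is scored as it is
# posted and blocked before other users can see it. The scoring
# function is a stub; Facebook's real models and removal threshold
# are not public, so every value here is illustrative.
from dataclasses import dataclass

REMOVAL_THRESHOLD = 0.9  # assumed value, for illustration only

@dataclass
class Post:
    author: str
    text: str

def score_violation(text: str) -> float:
    """Stand-in for a trained classifier returning P(policy violation)."""
    banned = {"example banned phrase"}  # placeholder vocabulary
    return 1.0 if any(term in text.lower() for term in banned) else 0.0

def submit_post(post: Post) -> bool:
    """Return True if the post is published, False if removed on sight."""
    if score_violation(post.text) >= REMOVAL_THRESHOLD:
        return False  # removed before reaching any feed
    return True       # published normally
```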
