In a new report, the Wall Street Journal said that Facebook’s AI is not consistently successful at removing objectionable content. Last Sunday, in a blog post, Facebook vice president of integrity Guy Rosen wrote that the prevalence of hate speech on the platform had dropped by 50 percent in the past three years.
He wrote: “We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”
The blog post appears to be Facebook’s response to the WSJ’s reporting. According to the WSJ, internal documents show that two years ago the company reduced the time human reviewers spent on hate speech complaints and made other adjustments that lowered the number of complaints.
This helped create the appearance that Facebook’s artificial intelligence had become more successful at enforcing the company’s rules. Last March, however, a team of Facebook employees found that the company’s automated systems were removing posts that generated only 3 to 5 percent of the views of hate speech on the social platform.
Rosen, on the other hand, argued that focusing on content removals alone was “the wrong way to look at how we fight hate speech. We need to be confident that something is hate speech before we remove it.”
According to Rosen, for every 10,000 views of content on Facebook, there were five views of hate speech. “Prevalence tells us what violating content people see because we missed it,” he wrote. “It’s how we most objectively evaluate our progress, as it provides the most complete picture.”
It will be interesting to see what steps Facebook takes after all these allegations, because trading public arguments over social media won’t serve the company for long, especially at a time when its rivals are poised to capture market share with their own offerings.