The New York Times reported that Facebook is finally revealing how it deals with posts involving nudity, violence, hate speech, and other inflammatory content on its platform. The social media giant has long been under pressure from governments, activists, and academics to reveal how it handles such posts.
On Tuesday, Facebook for the first time published numbers detailing how much content it takes down from its platform, and of what kind. The 86-page report disclosed that Facebook had deleted some 865.8 million posts in the first quarter of 2018 alone. Most of these posts were spam; a smaller number involved nudity, violence, terrorism, or hate speech.
According to the report, 97% of the deleted content was spam, 2.4% contained nudity, and an even smaller share contained violence, terrorism, or hate speech.
The social media company also said it had deleted 583 million fake accounts in the first quarter of this year, and estimated that another 3% to 4% of its remaining accounts were fake.
The company’s VP of product management, Guy Rosen, stated that Facebook had massively stepped up its efforts over the last 18 months to flag and remove inappropriate content from its platform. He said this was an inaugural report aimed at helping Facebook’s teams understand what is happening on the site.
One of the main reasons for the increase in content removal was that better AI programs had been put in place to detect and flag inappropriate content. Facebook’s head of data analytics stated that there may come a time when AI can flag every piece of inappropriate content before users even see it.
The company said it hoped to publish this data at least once every six months. Facebook is focusing on bringing more transparency to how things work on its platform, especially since it has faced heavy criticism over the spread of false news, inappropriate content, and divisive posts.
Despite these increased efforts, posts containing graphic violence continue to be shared on the platform, especially in countries such as Myanmar and Sri Lanka, where they are stoking tensions and fueling violence.
Despite its size, the report was not a complete one. Facebook declined to publish examples of the graphic violence or hate speech content it had removed. And while the company announced that it had removed more posts in the first quarter of 2018 than in the last quarter of 2017, it did not provide specific numbers to back up this assertion.
Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, said she was happy with the report, calling it a good and long-awaited move. The next step, she said, needs to be more transparency around how Facebook classifies content and what it will remove in the future.
In the past, Facebook had refused to share details about its content removal process, stating that it had no internal metrics in place. Instead, it published country-by-country data on how many requests it received from governments to hand over user data or to restrict certain types of content from users in a specific country. Those reports did not, however, specify what types of data governments were asking the company for.