Facebook revealed the vast scale of troublesome content it has been dealing with -- including half a billion fake accounts

Facebook CEO Mark Zuckerberg. (Photo: Justin Sullivan/Getty Images)

Facebook banned 583 million fake accounts in the first three months of 2018, the social network has revealed. It’s a major effort towards transparency from Facebook in the wake of the Cambridge Analytica scandal.

The Silicon Valley social networking giant announced on Tuesday that it will publish data about the scale and effectiveness of its content moderation efforts – offering outsiders an unprecedented look at Facebook’s efforts to keep its platform free from objectionable, rule-breaking, and sometimes illegal content.

Here are some of the key numbers for the first three months of 2018:

  • Facebook “took action” on 3.4 million pieces of content containing graphic violence. In the “vast majority” of cases, the “action” consisted of placing a warning over the content rather than removing it. Between 0.22% and 0.27% of all content views were of rule-breaking content showing graphic violence. Note that this doesn’t mean 0.22% of the content posted on Facebook contained graphic violence; rather, graphic content accounted for that share of total views.
  • It took action on 21 million pieces of content containing nudity and sexual activity; 0.07-0.09% of views were of such content.
  • Facebook took action on 1.9 million pieces of content over terrorist propaganda. The company didn’t provide view numbers, but said the figure was “extremely low.”
  • Facebook took action on 2.5 million pieces of content over hate speech, but didn’t provide view numbers, as it is still “developing measurement methods for this violation type.”
  • It took action on 837 million pieces of spam, though it didn’t have view numbers.
  • It disabled 583 million fake accounts. It estimates that overall, 3-4% of its monthly active users are fake.

The data also illustrates where Facebook’s AI moderation systems are effective at identifying and taking down problematic content – and where they still struggle. For example, 99.5% of terrorist content was detected and taken down by Facebook before users reported it, while just 38% of hate speech was detected without user-submitted reports.

The company plans to release the data periodically in the future, the same way it already releases reports on the numbers of government data requests and intellectual property takedown requests it receives.

The new disclosures come after Facebook published for the first time its full “Community Standards”: the internal rules it uses to determine exactly what is and isn’t allowed on the social network. The full guidelines are about 8,500 words long, and go into explicit detail about what is and isn’t acceptable in terms of violent, sexual, or otherwise controversial content, along with hate speech and threatening language.

These releases come in the wake of the Cambridge Analytica scandal, which has left the company battling to restore its reputation with users and developers – though employees have said the decision to release the Community Standards was not driven by recent events.

Do you work at Facebook? Got a tip? Contact the author at [email protected], via Twitter DM at @robaeprice, or via Signal/WhatsApp at 650-636-6268. Anonymity guaranteed.
