Three weeks ago, for the first time, Facebook published the internal guidelines it uses to enforce what it calls community standards. Now it has released the numbers in a Community Standards Enforcement Report.
The report covers Facebook’s enforcement efforts between October 2017 and March 2018 across six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. The numbers indicate:
• How much content people saw that violates Facebook standards
• How much content Facebook removed
• How much content Facebook detected proactively using technology – before people who use Facebook reported it
Most of the action Facebook took to remove bad content was around spam and the fake accounts used to distribute it.
Facebook took down 837 million pieces of spam in Q1 2018, nearly 100% of which it found and flagged before anyone reported it.
Facebook noted the key to fighting spam is taking down the fake accounts that spread it. In Q1, Facebook disabled about 583 million fake accounts, most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts Facebook said it prevents daily from ever registering with Facebook.
Overall, Facebook estimates that around 3% to 4% of the active Facebook accounts on the site during this time period were still fake.
In terms of other types of violating content:
• Facebook took down 21 million pieces of content featuring adult nudity and sexual activity in Q1 2018, 96% of which was found and flagged by its technology before it was reported. Overall, Facebook estimates that out of every 10,000 pieces of content viewed on Facebook, nine to 10 views were of content that violated its adult nudity and pornography standards.
• For graphic violence, Facebook took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018, 86% of which was identified by its technology before it was reported to Facebook.
• For hate speech, Facebook’s technology still doesn’t work that well. Nevertheless, Facebook removed 2.5 million pieces of hate speech in Q1 2018, 38% of which was flagged by its technology before users reported it.
—
Top photo: Shutterstock