The company finally lifted the curtain on what kind of content it boots from its platform.
Facebook removes tons of content from its platform for violating its rules, but it generally doesn’t reveal much about that content’s nature or quantity. On Tuesday, however, the social network offered a rare look at the numbers with a new report on how it enforces its content-removal policies.

The report, which covers the period from October 2017 through March 2018, reveals that Facebook saw an uptick in posts containing graphic violence, as well as posts with sexual content and adult nudity, in the first quarter of 2018 compared with the last three months of 2017. The company says it has been working to improve its ability to respond to and remove various categories of worrisome posts that pollute its platform, like spam, hate speech, posts from fake accounts, and posts that promote terrorism.

In the report, Facebook says it estimates “that fake accounts represented approximately 3% to 4% of monthly active users” during the six months covered. That’s no small amount, and it suggests you may well have interacted with a few fake friends or pages, granting them some access to your profile data in the process.