Facebook says it’s getting better at detecting bullying and harassment on its platforms. Between October and December of last year, the company nearly doubled the number of posts it took down for breaking its rules.
The social network shared the new metrics as part of its quarterly community standards enforcement report, which documents the company’s content takedowns.
During the fourth quarter of last year, Facebook took action on 6.3 million pieces of content, compared with 3.5 million the previous quarter. There was a similar increase on Instagram, where takedowns increased from 2.6 million to 5 million.
Importantly, Facebook also reported a notable improvement in its ability to proactively detect bullying before it’s reported by users. Its “proactive rate” for bullying and harassment nearly doubled, from 26 percent to 49 percent.
Facebook credited its AI detection tools for the improvement, noting that its automated systems are better able to identify bullying and harassment in comments.
“This has historically been a challenge for AI, because determining whether a comment violates our policies often depends on the context of the post it is replying to,” Facebook’s CTO Mike Schroepfer wrote in a blog post.
He said that Facebook’s AI has also gotten better at detecting the context of memes and other forms of “mixed media.”
Hate speech takedowns also rose. Facebook removed 26.9 million pieces of content (up from 22.1 million the previous quarter), while Instagram removed 6.6 million (up slightly from 6.5 million the previous quarter).
The company has been more reliant on its AI moderation tools over the last year, as the coronavirus pandemic has prevented many of its human moderators from working.
“We’re slowly continuing to regain our content review workforce globally, though we anticipate our ability to review content will be impacted by COVID-19 until a vaccine is widely available,” Facebook wrote.