Facebook has more than 35,000 human workers filtering fake information and hate speech from its platform. The company has also invested heavily in an AI-enabled system to remove banned content from the social network. On Wednesday, Facebook said its software is getting better at spotting such content on its own, reducing its reliance on human reviewers.
During a briefing on Facebook’s latest content-enforcement report, CEO Mark Zuckerberg said,
“While we err on the side of free expression, we generally draw the line at anything that could result in real harm. Our efforts are paying off. Systems we built for addressing these issues are more advanced.”
The AI system developed by Facebook can automatically spot banned content before users see it. Facebook employs human teams as reviewers to check whether the AI software was on target. The software now automatically flags 80% of the removed content, a marked improvement over the past few years.
The Facebook CEO notes that detecting hate speech is harder for AI than detecting images or videos with obscene content. Facebook users continue to attempt to share videos of mass attacks, and the AI system is able to block 95% of those attempts.
The AI system is capable of both finding and deleting content on Facebook. The company has also ramped up its efforts to filter content that promotes self-harm, along with abusive content. Using this advanced detection system, Facebook blocks millions of attempts to create fake accounts every day.