AI to help Facebook moderate harmful content on its platform.

Facebook is one of the most loved and widely used social media platforms in the world, helping people in different parts of the world connect. Facebook is built around friends, but sometimes someone who seems to be a friend may turn out to be an enemy: the scams and other issues surrounding Facebook have been immense. The spread of harmful messages, fake news, fake accounts, hacking, and more is increasing day by day. To keep harmful messages from flooding the platform, Facebook is planning to use artificial intelligence.

For effective content moderation, Facebook will use three aspects of the technology: proactive detection, automation, and prioritization. Together, these will transform the content review process on the platform and help prevent harmful messages from spreading across the site. In the first step, proactive detection, AI detects violations on the platform, and this detection can be far more accurate than reports from the site's users. It catches harmful messages at an early stage, before they are seen by hundreds of thousands of people.
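The proactive-detection step described above can be sketched as a simple routing loop: a classifier scores each new post, and anything above a review threshold is flagged before any user reports it. Everything here is illustrative; the threshold value, the function names, and the toy classifier are assumptions, standing in for Facebook's non-public ML models.

```python
REVIEW_THRESHOLD = 0.8  # assumed value; the real threshold is not public


def proactive_scan(posts, classify):
    """Yield (post, score) pairs whose violation score crosses the threshold.

    `classify` is any callable returning a 0-1 violation probability;
    here it is a placeholder for a real content-moderation model.
    """
    for post in posts:
        score = classify(post)
        if score >= REVIEW_THRESHOLD:
            yield post, score


# Toy stand-in classifier: flags posts containing a blocklisted phrase.
def toy_classifier(post):
    return 0.99 if "scam-link" in post else 0.05


flagged = list(proactive_scan(
    ["hello world", "click this scam-link now"], toy_classifier))
print(flagged)
```

In this sketch only the second post is flagged, showing the key property of proactive detection: harmful content is surfaced for review immediately at posting time rather than after users encounter and report it.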

In the second step, automation, the AI systems make automated decisions in areas where content clearly violates the rules. Automation also saves the review team time, since they will not have to review the same item again and again. In the prioritization stage, instead of simply working through reported content in order, AI ranks the most critical content for review first, whether it was flagged by Facebook's proactive systems or reported by users. That ranking is based on virality, severity of harm, and likelihood of violation.
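The prioritization step amounts to keeping a review queue ordered by a score built from the three signals named above. A minimal sketch, assuming a hypothetical multiplicative score (Facebook's actual formula and field names are not public):

```python
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class FlaggedPost:
    # heapq is a min-heap, so the negated score puts the highest-risk post first.
    sort_key: float = field(init=False)
    post_id: str = field(compare=False)
    virality: float = field(compare=False)    # how fast it is spreading, 0-1
    severity: float = field(compare=False)    # estimated harm if left up, 0-1
    likelihood: float = field(compare=False)  # model confidence of violation, 0-1

    def __post_init__(self):
        # Hypothetical weighting: treat the three signals as independent factors.
        self.sort_key = -(self.virality * self.severity * self.likelihood)


queue: list = []
heapq.heappush(queue, FlaggedPost("p1", virality=0.9, severity=0.2, likelihood=0.8))
heapq.heappush(queue, FlaggedPost("p2", virality=0.7, severity=0.9, likelihood=0.9))
heapq.heappush(queue, FlaggedPost("p3", virality=0.1, severity=0.5, likelihood=0.4))

most_urgent = heapq.heappop(queue)
print(most_urgent.post_id)  # "p2": high on all three signals
```

A heap is a natural fit here because reviewers always pull the single most critical item next, and new flagged posts arrive continuously while the queue is being drained.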

There are still areas of the process where human reviewers are needed. For example, discerning whether someone is the target of bullying can be extremely nuanced and contextual. An advantage of this approach is that reviewers see a large amount of content across all types of violations, helping to prevent as much harm as possible.