Facebook has said that it is improving its content moderation tools with enhanced artificial intelligence and machine learning capabilities, which it says will help it combat hate posts and misinformation more effectively. A recent announcement by Ryan Barnes, a product manager on the Facebook community integrity team, revealed that the new system will prioritise content based on a number of key parameters: virality, severity and impact. Until now, posts thought to violate the company's rules were typically flagged to human moderators, both proactively and reactively, and reviewed in roughly chronological order. Facebook now intends to use an improved AI system to queue flagged content so that the most important posts reach human moderators first.
To achieve this, Facebook will reportedly use a combination of machine learning algorithms to sort the queue of flagged posts by the sensitivity of their content. These potentially harmful posts can either be reported by users or, in proactive mode, detected automatically by Facebook's AI based on a number of pre-defined parameters. The improved system aims to better moderate content with 'real world harm' – posts such as fake propaganda that can have serious implications for the socio-economic fabric. Once such posts are passed from the AI level to human moderators and eventually resolved, the system moves on to spam and other less inciting posts. The categories that receive the highest priority include terrorism, child exploitation and self-harm. To do this, Facebook is drawing on its existing AI expertise.
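The prioritisation described above amounts to scoring each flagged post and serving reviewers from a priority queue. Facebook has not published its actual scoring formula, so the sketch below is purely illustrative: the weights, field names and 0–1 signal scale are assumptions; only the three parameters (virality, severity, impact) come from the announcement.

```python
import heapq

# Illustrative weights only -- Facebook has not disclosed how the three
# signals are combined. Severity is weighted highest here on the
# assumption that harm categories (terrorism, child exploitation,
# self-harm) dominate the ranking.
WEIGHTS = {"virality": 0.3, "severity": 0.5, "impact": 0.2}

def priority_score(post: dict) -> float:
    """Combine the three signals (each assumed to be in [0, 1])
    into a single urgency score."""
    return sum(WEIGHTS[key] * post[key] for key in WEIGHTS)

def build_review_queue(flagged_posts: list[dict]) -> list[str]:
    """Return post ids ordered most-urgent-first. heapq is a min-heap,
    so scores are negated to simulate a max-heap."""
    heap = [(-priority_score(p), p["id"]) for p in flagged_posts]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Hypothetical flagged posts with made-up signal values.
flagged = [
    {"id": "spam-123",   "virality": 0.2, "severity": 0.1, "impact": 0.1},
    {"id": "terror-456", "virality": 0.6, "severity": 0.9, "impact": 0.9},
    {"id": "hoax-789",   "virality": 0.9, "severity": 0.4, "impact": 0.5},
]
print(build_review_queue(flagged))  # highest-severity content surfaces first
```

Under this scheme a low-severity spam post sits at the back of the queue even if it was reported first, which matches the shift away from chronological review that the article describes.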
This, though, appears to be a work in progress, and Facebook software engineer Chris Palow notes that the system may still have flaws. The eventual goal is to instil a level of human-like contextual judgement in content recognition models – something that has so far been missing from such AI systems. This could help the models make contextual decisions on the most sensitive moderation calls, which Facebook hopes will cut down on problematic content.
Palow said, "The system is about marrying AI and human reviewers to make less total mistakes."
In the last few years, Facebook has been severely criticised for mishandling hate posts and misinformation across its platforms. During the coronavirus pandemic, the company faced the challenge of dealing with pandemic-related misinformation at scale. Earlier, in October, the social media giant banned QAnon accounts accused of spreading Covid-19-related misinformation from Facebook. The company also took several steps to limit misinformation during the United States presidential election.