In an attempt to combat use of its social media platform by terrorists, Facebook has started using artificial intelligence. Company officials stated in a blog post that, working in coordination with human reviewers, Facebook will use artificial intelligence to identify and remove content from terrorist organisations immediately, before it spreads into other users' feeds. Platforms such as YouTube and Facebook had previously used similar technology to block child pornography; until now, however, Facebook removed objectionable content only after users reported it.
Need for Online Content Policing
The announcement from Facebook officials comes amid growing pressure from governments to identify and prevent the spread of terrorist propaganda and messaging on social media platforms. British Prime Minister Theresa May has called on governments around the world to form an international agreement to curb terrorist propaganda on internet services. Under measures proposed by various governments, companies would be held legally accountable for content posted on their sites. Monika Bickert, director of global policy management, and Brian Fishman, counter-terrorism policy manager, recently stated that people have questioned whether tech companies are taking the initiative to curb terrorism online, and added that social media should not be a voice for terrorist organisations.
Technology Deployed to Keep Check
Apart from artificial intelligence, Facebook will use image-matching techniques that compare videos and photos uploaded by users against known terrorist images and videos. A match typically indicates either that Facebook had previously removed that content, or that it was stored in a database of flagged images that Facebook shares with other platforms, including YouTube, Microsoft and Twitter.
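The general idea behind this kind of image matching can be sketched with perceptual hashing: each known image is reduced to a compact fingerprint, and new uploads are compared against the shared database of fingerprints. The sketch below is purely illustrative; the function names, the 8x8 average-hash scheme, and the distance threshold are assumptions for demonstration, not Facebook's actual (non-public) system.

```python
# Illustrative sketch of hash-based image matching, the general technique
# behind shared "known content" databases. All names and thresholds here
# are hypothetical; the real systems used by Facebook are not public.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is a flat list of 64 brightness values (0-255).
    Each bit is 1 if that pixel is brighter than the image's mean,
    so visually similar images produce similar bit patterns.
    """
    assert len(pixels) == 64
    mean = sum(pixels) / 64
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_known_content(candidate_hash, shared_database, threshold=5):
    """True if the hash is within `threshold` bits of any known hash.

    A small threshold tolerates minor edits (re-encoding, slight crops)
    while rejecting unrelated images.
    """
    return any(hamming_distance(candidate_hash, h) <= threshold
               for h in shared_database)
```

Because the comparison is a cheap bitwise distance rather than a pixel-by-pixel check, a shared database of such fingerprints can be exchanged between platforms without sharing the offending images themselves.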
Facebook is also developing a technique it calls "text-based signals", which learns from posts that were previously removed for praising or supporting terrorist groups. The US-based company added that in rare cases, when officials discover evidence of imminent harm, they immediately notify the relevant authorities for further action. However, because the technology cannot yet comprehend the nuances of human language and context, users and human reviewers will remain in the loop.
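One common way to learn a text-based signal from previously removed posts is a simple bag-of-words classifier. The toy below trains a Naive Bayes model on a handful of labelled examples; the labels, training data, and function names are invented for illustration, and a production system would be far more sophisticated than this sketch.

```python
# A minimal sketch of a text-based signal: a Naive Bayes classifier
# trained on previously removed posts. Illustrative only; the training
# data and labels here are invented, not real moderation data.
import math
from collections import Counter

def train(labeled_posts):
    """labeled_posts: list of (text, label), label in {"flag", "ok"}.

    Returns per-label word counts and per-label document totals.
    """
    counts = {"flag": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in labeled_posts:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the label with the higher log-posterior for `text`."""
    vocab = set(counts["flag"]) | set(counts["ok"])
    best_label, best_logp = None, float("-inf")
    for label in ("flag", "ok"):
        # Prior: fraction of training posts with this label.
        logp = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero the product.
            logp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label
```

A model like this only produces a signal, not a verdict, which is consistent with the article's point that human reviewers remain in the loop for borderline cases.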
Facebook claims to employ more than 150 people whose primary or exclusive focus is countering terrorism on the platform. According to the blog post, this counter-terrorism team includes former prosecutors, ex-law enforcement agents, engineers and analysts.