Under the banner of the ‘Global Internet Forum to Counter Terrorism’, Facebook, YouTube, Microsoft and Twitter have joined forces to create a global forum to fight the spread of online terrorist propaganda.
Extremists have increasingly turned to internet platforms to spread their messages, attempt to radicalize users and inspire heinous acts. In a united attempt to stop this, each tech giant has pledged to make its platform ‘hostile to terrorists and violent extremists’.
Fighting terrorism within online social media platforms is not an easy task. Once the preserve of government and supranational officials, the responsibility of determining what constitutes terrorist propaganda, while upholding freedom of expression and respecting users’ privacy, now falls to tech firms as well.
With Twitter suspending over 350,000 accounts associated with terrorist propaganda between 2015 and 2016, it is clear the world’s leading tech giants have considerable power, and arguably a duty, to fight new-age terrorism, and they certainly plan to do so.
The leading tech companies joining forces have not only partnered with each other but also with civil society groups, academics, governments and supranational bodies such as the United Nations. Arguably one of the most important collaborations, however, is that with smaller tech companies who can influence considerable change through the development of new technology and processes.
Collaborations are not, however, limited to high-ranking professionals and bodies. Google, for example, has introduced the ‘YouTube Heroes’ scheme, in which everyday users are encouraged to report inappropriate content and receive rewards in return. This is an important element of counter-terrorism: although technology contributes enormously, the human eye is still essential.
Rather than combating online terrorism in isolation, these tech giants have created a shared industry knowledge database. This database is made up of ‘hashes’: unique digital fingerprints of extremist posts, which are added to the database once the content has been removed from a site. Shared access to this information enables the other tech giants to identify matching extremist content more readily and quickly remove it from their own sites as well.
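The idea behind such a hash-sharing database can be illustrated with a minimal sketch. All names here are hypothetical, and plain SHA-256 is used purely for illustration: it only matches byte-identical files, whereas the real shared database relies on perceptual hashing techniques that also catch lightly edited copies.

```python
import hashlib

# Hypothetical stand-in for the industry-shared hash database.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Return a unique digital fingerprint ('hash') of the content."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """After one platform removes content, only its hash is shared,
    never the content itself."""
    shared_hashes.add(fingerprint(content))

def is_known_extremist(content: bytes) -> bool:
    """Other platforms check new uploads against the shared database."""
    return fingerprint(content) in shared_hashes
```

Because only fingerprints are exchanged, platforms can flag previously removed material without redistributing the extremist content between companies.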
Facebook and Google now monitor extremist content through the use of algorithms: automated procedures that scan posts and flag those likely to be extremist. Terrorism is also a major source of news for media outlets around the world, so it is important these platforms have processes that distinguish genuine news stories from propaganda.
Another pressing issue is the prevalence of previously banned users creating numerous new accounts. The same technology can autonomously block repeat offenders who attempt to do so, and help distinguish between harmful content and genuine news articles.
The use of artificial intelligence (AI) has been highly successful in the automatic detection of extremist propaganda, including language, images and phrases. Whether identifying lone individuals or groups, AI can recognise specific language, images, audio and video previously used by propagandists, which are fed into a ‘machine learning system’.
Over time, this system learns to detect the same or similar harmful content and remove it swiftly. The technology has proved remarkably effective, so far accounting for approximately 50% of removed content.
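A toy version of such a learning system can be sketched as a word-frequency classifier. This is an illustration only, with invented function names and made-up training examples; production systems use far richer models over text, images, audio and video.

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs. Count how often each
    word appears under each label (the 'learning' step)."""
    counts = {"extremist": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score the text under each label with add-one smoothing
    (a naive Bayes-style calculation) and return the likelier label."""
    vocab = set(counts["extremist"]) | set(counts["benign"])
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        total = sum(words.values()) + len(vocab)
        score = sum(math.log((words[w] + 1) / total)
                    for w in text.lower().split())
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Once trained on previously flagged and benign material, the classifier can score new posts automatically, which is the essence of how such systems improve over time.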
YouTube recently partnered with ‘Jigsaw’, the company behind the ‘Redirect Method’. Under this scheme, users most vulnerable to radicalization are redirected to anti-extremist advertisements. The point of difference of the ‘Redirect Method’ lies in its specific targeting of users who are actively seeking terrorism-related content.
In the first eight weeks of the initiative, over 320,000 individuals were reached, and to date over 500,000 minutes’ worth of video has been viewed.
Research conducted in 2016 revealed that social media essentially acts as a terrorist recruitment platform aimed at vulnerable supporters and sympathizers. Exposure to anything from propagandist images and videos to simple online conversations can escalate into serious acts of terrorism.
In the past, tech firms largely relied on users to flag offensive content, including extremist activity. Now, however, the increased use of modern technology offers a promising way to limit the channels through which terrorists spread their dangerous propaganda.