We need to say less to stop the spread of terror online

Thought Leadership
Campaign recently published this article by our Chief Strategy Officer, Mobbie Nazir, looking at why we need to change our thinking when it comes to shutting down harmful content on social media. They’ve been kind enough to let us reproduce it below.

Until this month, Facebook users were free to talk about “white separatism”. Now, content that supports white nationalism and white separatism has been explicitly banned by the social media giant. But sometimes it’s not support for hateful content that’s responsible for spreading it; sharing our outrage plays a significant role too.

It’s a sad fact that, in today’s online communities, horrific and disturbing content is readily available to those who seek it out. It usually lives in shadier corners of the internet and has to be searched for, but it does occasionally break into mainstream social media and international news headlines.

This was the case a few weeks ago, when a live stream of a mosque shooting in New Zealand appeared on Facebook. It was viewed 4,000 times before it was removed, and while Facebook blocked a further 1.2 million videos and images from being uploaded, hundreds of thousands got through.

Not every person who attempted to upload a video or image of the shooting was a terrorist or a sympathiser; some did so to condemn the attack. But it’s not just the supporters or the silent voyeurs of this type of content who are responsible for its spread. Adding a comment to a post, even a comment of disgust, tells social media’s algorithms that the post is interesting and that more people should see it. Sending a link or a notification over dark social channels such as private messaging spreads the content further, especially when this happens without the content being reported (no users reported the New Zealand shooting live stream to Facebook until after it had ended).
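To make that mechanism concrete, here is a minimal sketch of engagement-weighted ranking. It is not any platform’s actual formula; the signals, weights and field names are invented for illustration. The point is simply that a generic engagement score has no notion of sentiment, so an outraged comment boosts a post’s reach exactly as much as an approving one:

```python
from dataclasses import dataclass

@dataclass
class Post:
    shares: int    # includes shares made to condemn the content
    comments: int  # includes comments expressing disgust
    views: int

def engagement_score(post: Post) -> float:
    """Toy ranking score with illustrative, invented weights.

    Comments and shares are treated as strong interest signals,
    regardless of whether the engagement was positive or hostile.
    """
    return 3.0 * post.shares + 2.0 * post.comments + 0.1 * post.views

# A post that attracts angry comments still climbs the ranking:
condemned = Post(shares=50, comments=400, views=4000)
ignored = Post(shares=50, comments=0, views=4000)
print(engagement_score(condemned))  # 1350.0 -> promoted more widely
print(engagement_score(ignored))    # 550.0
```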

I don’t want to downplay the role of the platforms; New Zealand Prime Minister Jacinda Ardern hit the nail on the head when she said social media platforms were “the publisher, not just the postman. There cannot be a case of all profit, no responsibility.” Live streams have to be better moderated, and if platforms can’t do this in real time, they must add time delays.

Facebook is already considering restricting who can use the format. Aggressive blocking should absolutely be a priority, and there should be severe fines in place for failures. Even Mark Zuckerberg believes that governments need to get over their apparent blind spot when it comes to the internet and force tech companies to act.

The major platforms are attempting to crack down on hateful content through techniques including human moderation, artificial intelligence and audience reporting. But I believe the task of shifting this narrative is bigger than shutting down content. We need, somehow, to help people recognise that by sharing this content, even in outrage, they create demand for it.

It’s a big ask, and law enforcement has a role to play. In the wake of the shootings, New Zealand’s privacy commissioner, John Edwards, called on Facebook to provide police with the names of those who had shared the footage. A potential $10,000 fine or up to 14 years in jail is a pretty big deterrent for most would-be sharers.

If blocking and prosecution are one part of a potential solution, greater awareness is another. Did everyone who watched the stream on Facebook realise that their actions (inadvertent or not) lent credibility to acts of terrorism? As the publisher of the content, is it Facebook’s responsibility to tell them retrospectively? As part of its aforementioned stance on hateful content, Facebook announced that it would start connecting people who search for terms associated with white supremacy to resources focused on helping people leave hate groups behind. It’s a start.

If people don’t know how to report or flag inappropriate content, or don’t understand the importance of doing so, perhaps there should be a mandatory course for all new and existing users to complete, with login blocked until they have done so. Should the likes of Facebook and YouTube invest in awareness campaigns to explain the implications of watching content of this nature? And if people are repeatedly found to search for, watch or share horrific content, especially without reporting it, should they be kicked off social media platforms altogether?

Ardern made a powerful point when she refused to speak the name of the perpetrator of the attack. It’s not about ignoring the problem; it’s about shifting the conversation away from those who are seeking notoriety and back to where it should be: the victims.

Perhaps the way to deal with hateful content online is not by giving it more airtime; it’s by doing our best to put out the flames altogether.