Following a surge in misinformation on social media around the war in Ukraine, UK CEO Jim Coleman explores how the industry needs to take action against the spread of ‘dangerous and out of control’ lies.
When social historians look back on this period, it will be interesting, with the benefit of hindsight, to pinpoint exactly when our global politics stopped being run in parliaments and at hustings and started being run via social media.
Was it the Arab Spring in the early 2010s, when political protesters used social media to organise demonstrations, disseminate information and drum up support locally and internationally?
Perhaps it was when Russian interference combined with Donald Trump’s powerful presence on Twitter to galvanise millions of Americans and produce the most divisive election result of modern times.
Or maybe it was during the Brexit campaign, when estimates suggested one-third of the online conversation was generated by bots promoting pro-Leave hashtags, with a network of 13,000 fake accounts posting 65,000 Brexit-related tweets before disappearing from the site immediately after the vote.
Meanwhile, the press is often led by trending topics, following up stories that appear to have the most traction on Twitter, and consequently doing little to counter the social rhetoric.
There is no doubt social media is playing an increasingly important role in the progression and outcome of different political scenarios – sometimes good but often bad.
Isis is known to have used social media as a recruitment tool, misrepresenting its successes and spreading propaganda, while legal action has been taken against Facebook after its algorithms amplified hate speech around the genocide of Rohingya Muslims in Myanmar and inflammatory posts were not removed.
And today we are seeing a surge in misinformation on social media around the war in Ukraine, peddled by supporters of Russia intent on extending their influence beyond existing followers to those who are open to having their minds – and allegiance – changed.
In a war justified by lies and misinformation, this is neither a huge stretch nor a massive surprise, but what it represents should cause concern.
Propaganda has always been a central part of any war, but digital media has facilitated the widespread unchecked dissemination of lies that is dangerous and out of control.
Last week a deepfake video of Ukrainian President Volodymyr Zelenskyy, in which he supposedly issued a statement asking Ukrainians to “lay down arms”, was quickly removed by Meta, but the damage is done.
The video – unbelievable as it was, with a head that was both too big and badly pixelated compared with the body – is out there and will continue to circulate privately.
However, while we can only urge the platforms to be more vigilant and to crack down on damaging content, there is a counter-move we can make as an industry.
Global brands that have a presence in markets from Russia and Eastern Europe to the US and beyond can harness social media to disassociate themselves from an aggressor or regime, and wield their significant influence to get behind a positive cause or political standpoint.
The temporary withdrawal of major consumer brands from Russia – from Apple and Disney to Starbucks and (eventually) McDonald’s – will have powerful ramifications among consumers. Anyone left in doubt about whose side they should be landing on could well be swayed by these signs of support for Ukraine.
At SXSW in early March there were a number of powerful talks on the subject of digital media and the rise of disinformation.
Author Nicole Perlroth and cybersecurity expert Jonathan Reiber spoke brilliantly about the potential ramping up of what Perlroth described as “total mutual digital destruction” between the US and Russia.
While there’s not a huge amount any of us can do to prevent international hacking scandals (though a strong password and two-step verification may have helped prevent the infamous Colonial Pipeline hack that nearly brought the US to its knees in May 2021), I do believe there is something the industry can do to prevent “softer” forms of digital warfare such as “narrative jacking” – or, more simply, the spread of misinformation.
Nandini Jammi, one of the brains behind adtech monitoring company Check My Ads, managed to defund high-profile hate speech online – cutting 90% of the ad revenue going to Breitbart and Steve Bannon in just two months – by breaking the link between programmatic advertising revenue and publishers.
This led her to realise that the problem was not the publishers or the brands, or even media companies, but the lack of transparency in adtech. She describes disinformation and adtech as a closed ecosystem: one cannot work without the other.
Adtech companies have shown real enthusiasm for finding solutions to the problem but some of these solutions – such as the creation of blocklists – can only exacerbate the situation.
The term “Covid”, for example, made it onto numerous blocklists in 2020, ensuring ads could not be bought programmatically against content that referenced the pandemic. But when the mainstream media was writing about little else, blocklists failed.
Jammi and her partner Claire Atkin estimate that $3.2bn of advertising revenue was withheld from the likes of the Guardian, the New York Times and other reputable national newspapers around the world because of this blocklist “solution” during the pandemic.
At a time when rigorous, fact-based journalism is needed more than ever, it’s unbelievable to see this amount of revenue being withheld from such important publications.
The growth of AI will go some way to helping counter the rise of misinformation, with tech supplementing the role of content moderators, but no AI will be able to keep up with the sheer scale of the problem.
So as misinformation continues to take hold of the tech world, brands need to step up when it comes to combating this issue.
The ad world has a responsibility to educate itself in order to help alleviate the problem; media agencies should be on the hook for policing where their clients’ ads end up, and for doing the due diligence to make sure those placements align with their clients’ values.
Brands have a key responsibility here; it isn’t always easy to separate fact from fiction, but by making it a priority, brands will improve their practices, combat misinformation and ultimately change the landscape for the better.
And maybe, just maybe, if this happens we’ll be able to shift away from the misinformation epidemic we are currently facing.
This article was originally published in Campaign Magazine.