Trump and the future of social media moderation

After his encouragement of an attempted armed insurrection at the US Capitol on January 6, social media companies decided to finally, mercifully, ban Donald Trump from their platforms.

The sitting president was permanently removed from Twitter, while Facebook, Instagram, Snapchat, Pinterest, and Twitch all suspended Trump “indefinitely.” Google and Apple also moved to limit the right-wing Parler and Gab apps in their stores; Amazon Web Services then removed Parler from its servers and, in effect, from the internet.

These are welcome steps toward limiting anyone’s ability to incite real-world violence, but questions remain: Are the platforms committed to this new path forward for content moderation? What took them so long? And as professionals, what is our role in all of this moving forward?

A New, and Hopefully Welcome, Standard
After these moves, the most obvious question is: to what extent will social platforms continue to apply this new standard? For example, we don’t yet know whether it will be applied only after specific, verifiable instances of violence, or whether encouragement and justification of abstract violence will also qualify.

Twitter gave a detailed, albeit disturbing, accounting of how Trump’s tweets violated its Glorification of Violence policy. But as of this writing, Twitter still permits the accounts of various world leaders, governments, and spokespeople who use the platform for what one can only describe as propaganda as cover for autocracy. While the consequences of that speech are far from most of the world’s eyes, and certainly from those of most Americans, the effects are real all the same.

The flip side of this coin is a genuine concern regarding censorship of speech by other users, especially on Twitter. Without getting into the larger conversation regarding the First Amendment and the oft-misunderstood Section 230 provision, this apprehension is born of the fact that social platforms have not fully explained the extent of this new norm. Does it require that a user encourage a specific threat of violence? Or will abstractions be cracked down on as well? How do we distinguish between hypotheticals and hyperbole, and what real-world contexts will be taken into account?

Social platforms need to be transparent about applying this new standard, and they need to do so quickly.

Closing the Barn Door
For years, researchers, journalists, and users called out the rampant abuse, misinformation, and threats on social platforms, but little was done to address them.

An internal Facebook study found that its own recommendations were sending people towards extremist content. And just last week, Twitter suspended the account of a woman who had been reporting and publicizing QAnon-related accounts on the platform; as of this writing, it still has not explained why.

It’s clearer than ever that social platforms need to moderate the extremist content published on their sites, and the public evidence shows that they have the means and knowledge to do so. Taking these steps now is welcome, but it’s difficult to reconcile the swift action taken recently with the repeated inaction and varying standards of the last half-decade. If social media, and this industry, are serious about these efforts, these and even stronger measures will need to come in the very near future.

We Have a Role to Play 
It’s difficult to divorce all of this from the larger political context. The night before the storming of the Capitol, we learned that Democrats would control the White House and both chambers of Congress for the first time in a decade and, by extension, the committees tasked with regulating social platforms.

Democrats have long advocated regulating the platforms to protect users’ privacy and safety, up to and including such drastic steps as breaking up the big tech giants; any of these moves would likely bring advertising restrictions that affect the companies’ bottom lines.

For social professionals, this is scary at first; we know these platforms inside and out, and unknown regulations affecting our day-to-day work aren’t exactly something we’d put on our to-do lists. But by the same token, we can play a role in keeping the platforms honest and in strengthening the positive role of social in people’s lives.

The next couple of years will likely bring changes that will, at first, make our jobs harder; frankly, they should. But if people are safer, if they feel more comfortable sharing their lives on social media and building encouraging, uplifting communities on the internet, and if they can trust that the platforms support them in doing so without a hidden business or data agenda, then not only are their lives better but so are our jobs.

We don’t know exactly where this conversation is going, but the first step to solving a problem is admitting that you have one. It’s past time for the platforms to take content moderation and user safety seriously; as social media professionals, we should be ready and eager to make that happen, and we hope that this can be a small step in getting that ball rolling.

—–

Andy Volosky is a Research & Insights Analyst based in our New York office. You can follow him on Twitter at @andyvolosky.