How Online Hate Content Might In The Future Be Both Created And Countered By Algorithms

Picture the scene: a webpage, online forum, or social media post containing a pile of hate speech content and also a heap of counter-speech content, none of which was created by an actual human being.

Companies like Facebook—consider Mark Zuckerberg’s recent testimony to the Senate—are now talking about the need to develop anti-hate algorithms to identify online hate speech. Once identified, the bots could flag up the content to social media moderators or in-house regulators for removal, or could simply remove the content automatically, without human input. Or the bots could create and publish appropriate forms of counter-speech below the original content, such as content challenging claims about the racial inferiority of certain groups.
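To make that pipeline concrete, here is a minimal sketch of the flag-or-remove logic in Python. The scoring function, thresholds, and action names are illustrative assumptions, not any platform’s actual system; a real deployment would use a trained classifier rather than a keyword list.

```python
# A minimal sketch of the moderation pipeline described above. The scoring
# function, thresholds, and action names are illustrative assumptions.

def score_hate(text: str) -> float:
    """Toy stand-in for a trained hate-speech classifier.

    Returns a score in [0, 1]. A real system would use a trained text
    classifier here, not a keyword list.
    """
    hateful_markers = {"inferior", "menace", "vermin"}
    hits = sum(1 for word in text.lower().split() if word in hateful_markers)
    return min(1.0, hits / 3)

AUTO_REMOVE = 0.9      # very confident: remove without human input
FLAG_FOR_REVIEW = 0.5  # uncertain: route to human moderators
COUNTER_SPEAK = 0.3    # borderline: leave it up, but reply with counter-speech

def moderate(post: str) -> str:
    score = score_hate(post)
    if score >= AUTO_REMOVE:
        return "removed automatically"
    if score >= FLAG_FOR_REVIEW:
        return "flagged for human moderators"
    if score >= COUNTER_SPEAK:
        return "counter-speech posted below the content"
    return "published unchanged"
```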

Of course, it is becoming clear to these companies, and to technical gurus in both academia and private industry, that writing these algorithms is one thing; getting them to work properly is quite another. The algorithms tend both to under-report hate speech, missing genuinely hateful content, and to over-report it, flagging innocuous content as hateful.
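In classification terms, under-reporting and over-reporting are false negatives and false positives, and a system can usually be tuned to trade one against the other but struggles to minimise both at once. A quick illustration with made-up labels:

```python
# Why both error types matter when evaluating a hate-speech classifier.
# The labels and predictions below are made-up toy data.

truth = [1, 1, 1, 0, 0, 0, 0, 1]  # 1 = genuinely hateful, 0 = innocuous
preds = [1, 0, 0, 0, 1, 0, 0, 1]  # the classifier's output

false_negatives = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)
false_positives = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)

print(f"missed hate (under-reporting): {false_negatives}")     # prints 2
print(f"wrongly flagged (over-reporting): {false_positives}")  # prints 1
```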

Human beings might lag behind robots in the rate at which they can churn out content, but hitherto they have been far better at concealing hateful content than automated systems have been at detecting it. In a recent academic article, Tommi Gröndahl et al. (Aalto University) have found, based on their analysis of several anti-hate algorithms, one simple way to beat the bots: add the word ‘love’ to content that is actually all about hate.
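Why would such a crude trick work? Many detectors score a message by the prevalence of hateful tokens in it, so padding the text with benign words dilutes the signal until it slips under the decision threshold. The word-ratio classifier below is a deliberately simple assumption for illustration, not one of the systems Gröndahl et al. analysed:

```python
# Toy illustration of the 'love' evasion: inserting benign words lowers a
# word-ratio classifier's score below its threshold. This simple model is an
# assumption for illustration, not the detectors studied in the paper.

HATE_WORDS = {"inferior", "menace", "vermin"}

def hate_score(text: str) -> float:
    words = text.lower().split()
    hits = sum(1 for word in words if word in HATE_WORDS)
    return hits / max(1, len(words))  # fraction of hateful words

original = "they are vermin and a menace"
evasion = original + " love love love love love"

print(hate_score(original))  # 2/6  = 0.33 -- above a 0.2 threshold
print(hate_score(evasion))   # 2/11 = 0.18 -- slips under the same threshold
```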

But I want to focus on something else, something quite perverse about these recent technological developments in the virtual hate wars.

Hate speakers have always used the latest technologies to spread their messages to as many people as possible, as cheaply as possible, and as anonymously as possible. Lithographs, printed leaflets, mail shots, automated telephone messages—these were just some of the technologies used by white supremacists and anti-Semites in the twentieth century. Suppose these groups start utilising Internet chatbots, that is, automated computer systems, to spew hate speech content, such as content expressing ideas about the racial inferiority of blacks or the menace to society posed by all Muslims, onto social media and Internet chat forums at a volume and speed that cannot be matched by human beings.

The hate chatbots would not know if the content they create is being read by human beings or simply by other chatbots. And the anti-hate chatbots would not know if the original content they are counter-speaking against was created by human beings or by other chatbots.

So picture the scene: a webpage, online forum, or social media post containing a pile of hate speech content and also a heap of counter-speech content, none of which was created by an actual human being. Behind this electronic facade of fake content would sit two algorithm developers: one, perhaps, in an anonymous city apartment block, and the other in a glass and steel office building in Silicon Valley.

The irony is that hate speech itself is sometimes depicted by civil libertarians as a good thing, or at least not something that ought to be regulated, because it is a kind of pressure-release mechanism. It allows people with racist or bigoted beliefs and policy agendas to express themselves or blow off steam, without actually acting on their beliefs. Indeed, when the hate and counter-hate are expressed online, the speakers need not even go to the trouble of leaving their houses.

The bot wars I imagine above take this a step further, however. Now the interlocutors don’t even need to create content or have an online conversation: they can just write an algorithm that does it for them.

What is disturbing about all this? Well, for one thing, it decreases even further the opportunity for people to meet face to face to express their views, to see the whites of the other person’s eyes, to recognise the feelings of pain and fear on the other person’s face, and to feel some empathy or even sympathy for the other person.

More importantly, when the conversation itself is automated, and does not involve an actual human being on either side, it takes away the opportunity for real people to truly learn something they did not know already. Part of being human is the ability to make the kind of volte-face that even the cleverest bots can only dream of.

Dr. Alex Brown is a Reader at the University of East Anglia.
