The Council of Europe recommends an end to the piecemeal regulation of online hate speech across the continent

The Council of Europe is the continent’s leading international human rights organization. Today it published the results of a six-month study looking at emerging forms of governance of online hate speech. The European Commission’s Code of Conduct on Countering Illegal Hate Speech Online, Germany’s NetzDG Act, France’s new Avia Law, the UK’s Online Harms White Paper, and Facebook’s emerging Oversight Board were among the many different innovations examined.

Online hate speech includes propaganda, stigmatization, vilification, and incitement to hatred, discrimination or violence of a racist, xenophobic, homophobic, transphobic or similar nature. Governance of online hate speech includes moderation, oversight, and regulation. Such governance is typically undertaken by governmental authorities, Internet platforms and civil society organizations, often in partnership.

The study draws conclusions and makes recommendations in several policy areas, including (but not limited to): the potential for standardization in the governance of online hate speech across Europe; better collaboration among the stakeholders and partners involved in that governance; public opinion about the governance; and the need for a victim-sensitive approach to it.

First, international organizations like the European Commission (the executive branch of the European Union) and the Council of Europe are pushing for greater standardization across European states. However, the study concludes that common standards for the regulation of online hate speech across Europe need not mean identical regulatory models or tools.

For one thing, rather than having a single European regulator, it makes sense to adopt decentralised regulatory authorities, with each country establishing its own national regulator or devolving more powers to existing regulators.

Furthermore, whilst many states are minded to adopt a common standard on the responsibility of Internet platforms to remove illegal hate speech content within a specified time frame (e.g. 24 hours for obvious or manifest cases and 7 days for grey area cases), each national regulator can and should look to its own local laws to define “illegal hate speech”.

In addition, a common standard on the responsibility of Internet platforms to remove illegal hate speech content within a specified time frame is consistent with each national regulator designing and implementing slightly different exceptions, exemptions and leniency programmes for different Internet platforms under this main rubric. For instance, governmental authorities could exempt from fines those Internet platforms that apply for and are granted “responsible platform” status, for example because they devote reasonable resources to removing illegal hate speech, quite apart from how they perform on particular outcome metrics (such as the percentage of content removed).

Second, the study found evidence of increasing forms of collaboration or multi-stakeholder partnerships between governmental authorities, Internet platforms and civil society organizations in the governance of online hate speech. But it also found that in some instances this can undermine the independence and integrity of the monitoring of Internet platforms.

For example, the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online operates alongside a monitoring regime whereby monitoring organizations—many of whom are trusted flaggers that already work with the big Internet platforms—send reports to the platforms about suspected illegal hate speech content and record whether, and how swiftly, the platforms remove the content. In the various monitoring cycles conducted since 2016 the European Commission has reported a significant increase in the percentage of content removed—one sort of outcome metric—from 28 percent of reported content in 2016 to 72 percent in the most recent monitoring cycle. However, the study identified a problem: Internet platforms are being “made aware”, both formally and informally, of when the monitoring periods are taking place. It is therefore unclear to what extent the increase in removal rates represents a genuine change in practice or simply reflects the platforms’ improved capacity to “game” the monitoring process by significantly increasing removal rates during the monitoring periods only.

Moreover, some trusted flaggers report that during meetings and training sessions set up by big Internet platforms like Facebook, the platforms will typically offer “advertising grants” to trusted flaggers—grants that enable these not-for-profit human rights organizations to run their own campaigns on the platforms free of charge—as a sort of “goodie bag” for participating in the meeting or training session. This sort of “close” working relationship may reduce the independence—or, just as importantly, reduce the appearance of independence—of these organizations as monitoring bodies.

Third, the study commissioned a series of YouGov public opinion polls concerning the governance of online hate speech. The polls revealed that the general public have a balanced or non-absolutist view on how to regulate Internet platforms. For example, the general public in the UK, France and Germany all rated it as important to levy fines on Internet platforms that demonstrate a pattern of failure to remove illegal hate speech content. At the same time, however, the general public in all three countries also rated it as important to grant exemptions from such fines if Internet platforms devote reasonable resources to removing illegal hate speech (“responsible platforms”). They also rated it as important to offer Internet platforms reductions in fines in return for them providing full disclosure about the amounts of illegal hate speech on their platforms (“leniency programmes”).

Finally, the study recommends that when designing and implementing governance tools for online hate speech, governmental authorities, Internet platforms and civil society organizations should adopt a victim-sensitive approach wherever feasible and appropriate. The study sets out guidelines for what this means in general terms but also gives concrete examples of what is required in practice.

At the moderation level (i.e. content moderation done by Internet platforms), key practical recommendations include: notification of moderation decisions must be sent to the victim; notifications sent to the victim should go beyond pro forma communications and standardized explanations to provide messages that contain at least some personalised or semi-personalised content of a suitable form; reporting mechanisms should use plain language and should be made available in multiple languages and formats; reporting forms should enable bulk reporting, such as by allowing victims to highlight and report several pieces of content at the same time; Internet platforms should be proactive in identifying and removing “identical”, “equivalent” (e.g. language translations), or “very similar” content; and moderation should empower the victim, or put them back in control, such as by giving them the power to select between moderation outcomes.
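To make the distinction between “identical” and “very similar” content more concrete, here is a minimal, purely illustrative Python sketch (not taken from the study, and not any platform’s actual method): it treats posts with the same normalised hash as identical and uses a character n-gram overlap threshold as a rough stand-in for “very similar”. The function names and the 0.7 threshold are assumptions for illustration only, and detecting “equivalent” content such as translations would require an additional multilingual step not shown here.

```python
import hashlib

def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial edits don't defeat matching."""
    return " ".join(text.lower().split())

def fingerprint(text: str) -> str:
    """Exact fingerprint: two posts with the same hash are treated as 'identical'."""
    return hashlib.sha256(normalise(text).encode("utf-8")).hexdigest()

def char_ngrams(text: str, n: int = 5) -> set:
    """Character n-grams give a cheap, language-agnostic similarity signal."""
    t = normalise(text)
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap of two n-gram sets, in [0, 1]."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def classify_against_removed(candidate: str, removed_posts: list,
                             similar_threshold: float = 0.7) -> str:
    """Label a new post relative to content already removed as illegal hate speech.

    Returns 'identical', 'very similar', or 'unmatched'. This is an
    illustrative sketch only; 'equivalent' content (e.g. translations)
    would need a separate multilingual step, omitted here.
    """
    cand_fp = fingerprint(candidate)
    cand_grams = char_ngrams(candidate)
    for removed in removed_posts:
        if fingerprint(removed) == cand_fp:
            return "identical"
        if jaccard(cand_grams, char_ngrams(removed)) >= similar_threshold:
            return "very similar"
    return "unmatched"

if __name__ == "__main__":
    removed = ["Example of a previously removed hateful post."]
    # Same text up to case and spacing: matched as 'identical'.
    print(classify_against_removed("example of a previously removed  hateful post.", removed))
    # Lightly edited repost: expected to land in the 'very similar' bucket.
    print(classify_against_removed("EXAMPLE of that previously removed hateful post!!", removed))
```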

At the regulatory level (e.g. dispute settlement organizations, regulatory adjudicators, appeals or complaints bodies, courts), key practical recommendations include: notification of regulatory decisions in particular cases must be sent to the victims concerned; any regulatory mechanisms that require victims to submit information should avoid demands that would run a significant risk of retraumatisation; regulatory mechanisms should not impose unnecessary and detrimental friction on victims, for example through draconian laws or disproportionate sanctions against malicious reporting; complaints bodies should be willing to act as a “one stop shop” for complaints, such as by enabling victims to launch group or “class action” complaints and to launch simultaneous complaints against multiple Internet platforms; and victims should be empowered by regulatory processes, such as by enabling them to play an active role in the case as it progresses through the relevant legal or administrative processes, including by giving evidence or testifying where appropriate, based on consent and with the necessary legal and psychological support.

Dr. Alex Brown is Reader in Political and Legal Theory at the University of East Anglia, and author of A Theory of Legitimate Expectations for Public Administration.
