Dr. Alexander Brown argues that the internet can render people unable to avert their eyes from hate speech, and considers the consequences.
Under US jurisprudential theory it is understood that people travelling to work on public transport are a sort of ‘captive audience’ to speech. Because of this—because they find it hard to ‘avert their eyes’ in these situations—the normal First Amendment rules might not apply. In other words, it may be acceptable to regulate forms of unwanted speech that would otherwise be protected, precisely because people can’t easily escape that speech. In my new article published in the Charleston Law Review, I argue that the Internet—or certain places and spaces on the Internet—can also render people captive audiences to unwanted speech (e.g. hate speech). This holds true partly because it is very difficult today to avert one’s eyes from the Internet. Put simply, the personal and professional costs of logging off or quitting social media are so high that doing so is no longer something people can reasonably be expected to do.
People belonging to groups who are often subjected to online hate speech or cyberhate face a terrible dilemma: either they carry on performing normal online functions and risk suffering hate-based trolling; or else they avoid the trolling only by steering clear of the Internet. This is not a dilemma people should be faced with.
Suppose a fan of a transgender model posts on YouTube a video of the model performing on a catwalk in New York, and adds a positive comment about how good she looks. Soon after, another video appears on YouTube with an almost identical title, showing someone imitating the transgender model but with exaggerated male features, including a deep voice, a beard, and barely disguised male genitalia. A link to the parody video is posted in the comments section under the original video. The transgender model comes across both the original video and the parody video in the course of researching her public profile, with a view to learning what ordinary people think about transgenderism. If checking one’s reputation as a model (professional opportunities), confirming that one is a member of society in good standing (dignitary opportunities), and contributing to the discussion of issues surrounding transgender identity (public discourse opportunities) are components of a normal opportunity range on the Internet, then the transgender model is a captive audience. In order to avoid the cyberhate she’d have to give up using the Internet in the aforementioned normal ways.
What I am suggesting, in other words, is that having to use the Internet can be compelled by the facts of life in the Information Age, and this can leave victims of cyberhate trapped by their tormentors.
No doubt some people believe that regulating online hate speech can be justified fairly easily, simply because of how extreme this type of speech is, so that the question of whether or not online hate speech involves captive audiences makes no difference one way or the other. Conversely, others believe that regulating online hate speech can never be justified, because hate speech is a price some must pay so that we may all enjoy the benefits of free speech—meaning, once again, that the question of whether or not online hate speech involves captive audiences makes no difference. My own view is that existing arguments for regulating online hate speech can be made stronger by appealing to the captive audience doctrine. In that sense, appealing to the captive audience doctrine makes a positive difference to the justificatory score in this highly contested debate.
Dr. Alex Brown is a Reader at the University of East Anglia.