
EU Code of Conduct on Countering Illegal Hate Speech Online: An Analysis


By Rishabh Bajoria

The Code

On 31 May 2016, the European Commission (EC) announced a new Code of Conduct for online intermediaries. The Code was formulated by mutual agreement between the EC and Facebook, Microsoft, Google (including YouTube) and Twitter.[1] It aims at the prompt removal of illegal hate speech online by these intermediaries. The EC stated:

“While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously acted upon by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame.”[2]

The Code further clarifies that a notification must not be “insufficiently precise” or “inadequately substantiated”. Intermediaries are obliged “to review the majority of valid notifications” in “less than 24 hours and remove or disable access” to the content. They must assess notifications against the touchstone of their community rules and guidelines and, wherever necessary, “national laws”.

Reasons

The Code is understood to be a response to rising anti-Semitic and pro-Islamic State commentary on social media. Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said, “The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racist use to spread violence and hatred.”[3]

It is noteworthy that the signatory intermediaries are all American. This could be a way to avoid jurisdictional conflict. For example, in Licra et UEJF v Yahoo! Inc and Yahoo! France, Yahoo! refused to comply with a French Court’s order. The order imposed liability on Yahoo! for its failure to disable access to the sale of Nazi memorabilia on its website, which was a crime in France. However, Yahoo! contended that because its servers were located in the United States, the order was inapplicable. Subsequently, the U.S. District Court for the Southern District of New York in Yahoo! Inc. v. La Ligue Contre Le Racisme et L’Antisemitisme held Yahoo! to be a mere distributor. Hence, it could only be held liable if it had notice of the content.[4]

This Code will supplement Articles 12-14 of the E-Commerce Directive 2000/31/EC, which shield intermediaries from liability if they disable access to content “expeditiously” after receiving a “notice” of it. However, the Directive provides no standards for “expeditious” or “notice”; these ambiguous terms are otherwise defined by domestic legislatures.[5] The Code clarifies them for the intermediaries. Moreover, because the intermediaries have agreed to abide by the E-Commerce Directive and the Code of Conduct, a jurisdictional conflict of the Yahoo! kind will not arise.

Problems

This Code forces intermediaries to judge the legality of content. Once intermediaries are notified of content, they are obliged to investigate and determine whether the speech should be deleted. Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”[6] Such a notice-and-takedown regime is problematic because this distinction is not always “clear”. There remains no universal consensus on the definition of hate speech. To evaluate whether speech falls within this category, Courts across jurisdictions look at a number of factors:

  1. Severity of the speech
  2. Intent of the speaker
  3. Content or form of the speech
  4. Social context in which the speech is made
  5. Extent of the speech (its reach and the size of its audience)
  6. Likelihood or probability of harm occurring[7]

The last two criteria are not analysed for speech which incites hatred, since hate speech is, per se, an inchoate crime. The factors are analysed cumulatively: Courts look to balance the value of the speech against the State’s positive obligations to maintain public order and protect the rights of others. Former UN Special Rapporteur on Freedom of Expression Frank La Rue has argued that private intermediaries should not be forced to carry out censorship, as they are not equipped to account for the various factors involved in determining the legality of speech.[8] Unlike judicial determinations, evaluations by private intermediaries are often opaque and provide none of the legal safeguards a trial does, such as a right to appeal.[9] The mandate to censor speech within 24 hours of notification exacerbates this problem.

Proponents of this Code might argue that intermediaries already engage in self-censorship under their Community Guidelines.[10] On this view, extending that practice into a formal obligation is not harmful. However, intermediaries are profit-oriented private corporations, and the legal obligation placed on them by the Code of Conduct is accompanied by liability if breached. This threat of liability will cause them to err on the side of caution and over-censor speech. Professor Seth Kreimer, a constitutional and human rights law expert, argues that intermediaries know that potential liability will outweigh the additional revenue offered by a user.[11] This is likely to have a chilling effect on online speech.[12] For this reason, the Indian Supreme Court in Shreya Singhal v Union of India[13] rejected the “private notice and takedown” standard, holding that an intermediary will only be liable if it fails to comply with a judicial order declaring the content illegal.

For example, assume someone posts a controversial tweet. Presumably, this would be flagged by users for removal. Even if the notice is “valid”, and not “insufficiently precise”, Twitter will still have to investigate it before taking the tweet down. Big corporations like Twitter usually have a legal team for this. However, under the Code that legal team will have to evaluate, within 24 hours, whether the speech incites violence or hatred. For this, it will have to analyse the speech’s content and severity, the intent of the speaker and the social context, and scrutinise the causal link between the speech and potential violence. This is a nearly impossible task. Moreover, the team cannot know whether the judiciary would reach the same verdict. If Twitter continues to disseminate the speech in good faith, and the judiciary later deems it illegal, Twitter can be held liable. This threat will make it remove speech wherever the “distinction between freedom of expression and conduct that incites violence”[14] is not “clear”[15]. In the face of millions of such requests, intermediaries cannot be expected to make a sound legal evaluation. As a result, society may be deprived of potentially valuable speech.

Thus, this Code effectively mandates private censorship. Intermediaries will not be able to make nuanced evaluations of whether speech incites hatred or violence within 24 hours. Yet they can be held liable if they decline, in good faith, to delete content that a Court later finds impermissible. The fear of this liability will make intermediaries err on the side of caution and over-censor. This Code is therefore a recipe for a chilling effect online: while preventing terrorist propaganda is a legitimate aim, this response will disproportionately restrict freedom of speech and expression online.

[1] Code of Conduct on Countering Illegal Hate Speech Online, available at http://ec.europa.eu/justice/fundamental-rights/files/hate_speech_code_of_conduct_en.pdf; “European Commission’s Hate Speech Deal With Companies Will Chill Speech”, available at https://www.eff.org/deeplinks/2016/06/european-commissions-hate-speech-deal-companies-will-chill-speech.

[2] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[3] Ibid.

[4] Omer, Corey. “Intermediary Liability for Harmful Speech: Lessons from Abroad.” Harv. J.L. & Tech. 28 (2014): 289-593.

[5] Verbiest, Thibault, Gerald Spindler, and Giovanni Maria Riccio. “Study on the liability of internet intermediaries.” Available at SSRN 2575069 (2007).

[6] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[7] Toby Mendel, Study on International Standards Relating to Incitement to Genocide or Racial Hatred, a study for the UN Special Advisor on the Prevention of Genocide, April 2006, available at http://www.concernedhistorians.org/content_files/file/TO/239.pdf; “Towards an interpretation of Article 20 of the ICCPR: Thresholds for the prohibition of incitement to hatred”, available at http://www.ohchr.org/Documents/Issues/Expression/ICCPR/Vienna/CRP7Callamard.pdf.

[8] HRC, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ by Frank La Rue, available at http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

[9] Jack M. Balkin, ‘Old-School/New-School Speech Regulation’, (2014) 127 Harvard Law Review 2296.

[10] Freiwald, Susan. “Comparative Institutional Analysis in Cyberspace: The Case of Intermediary Liability for Defamation.” Harv. J.L. & Tech. 14 (2000): 569; MacKinnon, Rebecca, et al. Fostering Freedom Online: The Role of Internet Intermediaries. UNESCO Publishing, 2015.

[11] Seth F. Kreimer, ‘Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link’ (2006) 155 (11) U. Pa. L. Rev. 2-33.

[12] Chinmayi Arun & Sarvjeet Singh, NoC Online Intermediaries Case Studies Series: Online Intermediaries in India 24, 25 (2015), available at http://ccgtlr.org/wp-content/uploads/2015/02/CCG-at-NLUD-NOC-Online-Intermediaries-Case-Studies.pdf (last visited on July 4, 2015).

[13] (2013) 12 SCC 73 (India).

[14] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[15] Ibid.

(Rishabh is a student at Jindal Global Law School and currently an intern at CCG)
