
Parliamentary Standing Committee on a New Online Hate Speech Provision


(My thanks to Mr. Apar Gupta for providing this lead through his Twitter feed.)

Amidst the noise of the winter session of Parliament last month, a new proposal to regulate online communications was made. On December 7th, the Parliamentary Standing Committee on Home Affairs presented a status report (“Action Taken Report”) to the Rajya Sabha. This report was in the nature of a review of the actions taken by the Central Government on the recommendations and observations contained in another report presented to the Rajya Sabha in February 2014 – the 176th Report on the Functioning of the Delhi Police (“176th Report”). In essence, these reports studied the prevalent law and order situation in Delhi and provided recommendations, legal and non-legal, for fighting crime.

One of the issues highlighted in the 176th Report was the manifest shortcomings in the Information Technology Act. The Report noted that the IT Act needed to be reviewed regularly. One particular suggestion given by the Delhi Police in this regard related to the lack of clarity in the definition of the erstwhile sec. 66A. The police suggested that “[s]everal generalized terms are being used in definition of section 66A of IT Act like annoyance, inconvenience, danger, obstruction, insult, hatred etc. Illustrative definition of each term should be provided in the Act with some explanation/illustration.”[1] Note that this report was published in 2014, more than a year before the Supreme Court’s historic ruling in Shreya Singhal finding sec. 66A unconstitutional.

An important proposition of law laid down in Shreya Singhal was that any restriction of speech under Art. 19(2) must be medium-neutral. Thus, the contours of the doctrines prohibiting speech will be the same over the internet as over any other medium. At the same time, the Court rejected an Art. 14 challenge to sec. 66A, thereby finding that there existed an intelligible differentia between the internet and other media. This has opened the doors for the legislature to make laws to tackle offences that are internet-specific, like, say, phishing.

The Action Taken Report notes that as a result of the striking down of sec. 66A, some online conduct has gone outside the purview of regulation. One such example the report cites is “spoofing”. Spoofing is the dissemination of communications on the internet with a concealed or forged identity. The Report goes on to provide a working definition for “spoofing” and proposes to criminalise it. If this proposal goes through, spoofing will be an instance of an internet-specific offence.

Another example of unjustifiable online conduct that has been exonerated post-Singhal is hate speech. Hate speech law is a broad head that includes all legal regulations proscribing discriminatory expression that is intended to spread hatred or has that effect. The Report states that all online hate speech must be covered under the IT Act through an exclusive provision, and suggests that this provision be worded as follows:

whoever, by means of a computer resource or a communication device sends or transmits any information (as defined under 2(1)(v) of IT Act)

  1. which promotes or attempts to promote, on the ground of religion, race, sex, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between religious, racial, linguistic or regional groups or caste, or communities, or
  2. which carries imputations that any class of persons cannot, by reason of their being members of any religious, racial, linguistic or regional group or caste or community bear true faith and allegiance to constitution of India, as by law established or uphold the sovereignty or integrity of India, or
  3. which counsels advices or propagates that any class of persons shall or should be by reason of their being members of any religious, racial, language or religion group or caste or community or gender be denied or [sic] deprived of their rights as citizens of India, or
  4. carries assertion, appeal, counsel, plea concerning obligation of any class of persons, by reasons of their being members of any religion, racial, language or religion group or caste or community or gender and such assertion, appeal, counsel or plea causes or is likely to cause disharmony or feeling of enmity or hatred or ill-will between such members or other persons.”

shall be punishable with ………”

A mere perusal of these provisions reveals that they are substantially similar to the offences covered under sec. 153A and sec. 153B of the Indian Penal Code, which along with sec. 295A of the IPC form the backbone of penal regulations on hate speech. Against this backdrop, the proposed insertion into the IT Act would appear redundant. The Action Taken Report justifies the inclusion of this proposed provision on the ground that the impact caused by the “fast and wider spread of the online material … may be more severe and damaging. Thus, stricter penalties may be prescribed for the same as against similar sections mentioned in IPC.” However, if the rationale is to apply stricter penalties to online content, then the Report could very well have suggested amendments to sec. 153A and sec. 153B.

What is disconcerting, however, is the assumption that because incendiary content is posted online, its effect will be “more severe and damaging”. Indeed, social media has had a hand in the spread of violence and fear in tense situations over the last few years, from the North East exodus to the Muzaffarnagar riots and up to as recently as the Dadri lynching. Yet the blanket assertion that online content is more damaging does not take into account many variables, such as:

  • the influence of the speaker – A popular public figure with a large following can exercise much more influence on public behaviour in an offline medium than a common man can on social media,
  • the atmospheric differences between viewing online content in your house and listening to speech at a charged rally, or
  • the internal contradictions of online speech, like the influence exerted by a 140 character tweet vis-à-vis a communally sensitive video (note here that the Supreme Court itself has emphatically recognized the difference between motion picture and the written word in stirring emotion in KA Abbas).

The Report could perhaps benefit from a more nuanced understanding of hate speech. A well-recognized effort in that direction is Prof. Susan Benesch’s Dangerous Speech framework. Prof. Benesch has devised a five-point examination of incendiary speech on the basis of the speaker, the audience, the socio-historical context, the speech act, and the means of transmission. This framework situates the alleged hate speech within a more structured analysis, allowing for a more informed adjudication of the possible pernicious effect that the speech might have.

An interesting question of debate could well centre on the proposed enhanced penalty for online hate speech. Would a greater penalty for online speech (as opposed to offline speech) attract the ire of the Court’s doctrinal stance of medium-neutrality? Note that the Court in Shreya Singhal only mentions that the standards for determining speech restrictions must be medium-neutral. Yet the premise of enhanced penalties rests on the greater speed and reach of online speech, which is necessarily internet-specific. Would a court’s adjudication of penalties for criminalised speech amount to such a “standard” or not?

Retweeting akin to Fresh Publication?

The Report also suggests that any person who shares culpable online content “should also be liable for the offence”. This includes those who “innocently” forward such content. Thus, for instance, anyone who retweets an original tweet whose content is later criminalised will also be found liable for the same offence, as if he had originally uploaded the content. According to the Report, “[t]his would act as a deterrent in the viral spread of such content.”

Forwarding of content, originally uploaded by one individual, is a popular feature on social media websites. Twitter’s version is called ‘Retweet’, while Facebook’s version is called ‘Share’. When a person X shares a person Y’s post, it may mean one of two things:

  1. X endorses said opinion and expresses the same, through the mask of Y.
  2. X conveys to his followers the very fact that Y remarked said content. (In fact, many individuals provide a disclaimer on their Twitter profiles that Retweets do not necessarily mean endorsements.)

In an informative academic article, Nandan Kamath, a distinguished lawyer, termed people who forward information as “content sharers”, characterizing them as “a new breed of intermediaries”. Kamath goes on to liken content sharing to linked quotations and not as fresh publications. In doing so, he calls for restricted liabilities to content sharers. Kamath also examines the UK position on prosecution for social media content, which is multi-faceted, requiring “evidential sufficiency” and “public interest”.

The observations of the Action Taken Report appear one-dimensional in their stance of criminalising all content sharing where the expression may be culpable. In doing so, the Report assumes all content sharing to amount to original speech. This approach turns a blind eye to instances where a sharer intends the post as a linked quotation. The Report would do well to take these concerns into account, thereby developing a more nuanced policy.

[1] Para 3.10.2

Original author: Nakul Nayak