
Summary Report: Second Roundtable on Hateful and Harassing Speech Online


by sflc_admin    |    October 7, 2016

On September 6, 2016, SFLC.in organized the second in a series of consultations on the various facets of hateful and harassing speech online, held at the India International Centre, New Delhi. Both roundtables had representation from industry, civil society, and the media, and were instrumental in achieving a deeper, more nuanced understanding of what constitutes harmful speech on an online forum, the challenges that intermediaries face in moderating such content, and the roles and responsibilities of law enforcement agencies. A set of draft best practices that intermediaries could adopt as a self-regulatory measure was proposed by SFLC.in and discussed at length over the course of these two roundtables. A summary report of the first discussion can be accessed here. Please note that both roundtables were held under the Chatham House Rule to facilitate an open exchange of ideas, so no attributions are made as to the sources of the viewpoints discussed below.

Over the course of the second discussion, participants highlighted that it is difficult to capture the contours of online harassment and hate speech in definite terms, as the line between legitimate and abusive exercises of the freedom of expression is subjective. In addition, social media is often the platform of choice for powerful players to further their propaganda, and these positions of power are at times used to popularize certain kinds of opinions at the expense of others. With regard to free speech and expression online, anonymity poses a unique conundrum: on one hand, it facilitates free and open discourse amongst vulnerable groups and minorities; on the other, it is used as a mask by the perpetrators of harassing and abusive speech.

Although the most widely used social media platforms have policies that strongly condemn and restrict the use of their networks for abusive and harassing speech, it was discussed that these terms of service and community standards prove problematic for both the user and the intermediary, owing to the lack of objective criteria for determining the extent of restricted content on particular platforms. This results in a situation where the user is unable to determine whether their opinions would violate the set standards, and the intermediaries are caught between censoring too much and not censoring enough. From a user perspective, it was suggested that, given the large volume of information available about users on certain types of platforms, intermediaries should develop mechanisms to enhance protection for the information they retain, especially about vulnerable groups and minorities. To improve transparency on the part of intermediaries, it was recommended that a comprehensive explanation of the reasons for removal of particular content be provided. For example, if filtration is done through algorithms, the phrases or words in the text, or the graphics in the image, that were flagged by the algorithm should be identified, so that users can better understand the working of community standards and content moderation policies.

It was also pointed out that the policies and standards developed by the platforms are not set in stone, and that the tools for customizing various platforms according to user needs evolve through public consultations with various groups and organizations. However, a lack of user awareness and know-how in the usage of the existing tools for blocking, muting, and reporting was unanimously acknowledged by all stakeholders present, and it was mentioned that awareness campaigns by various platforms on these fronts are ongoing, especially amongst vulnerable groups and rural communities. A suggestion for automated filtering of entire phrases that could constitute harassing and hateful speech was dismissed as obstructive of legitimate free speech as well. To refine their practices for an expanding base of global users, certain intermediaries are engaging language experts to ensure that harassing and abusive content is removed from their platforms. Efforts are thus underway by the platforms to develop tools and improve mechanisms for detecting abusive content, as well as to provide filters and tools for users to employ.

SFLC.in is currently working on a report on hateful and harassing speech online, and these discussions will contribute to the report's findings and analysis.

Original author: sflc_admin
© Republished under the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) licence.
SFLC.IN is a donor-supported legal services organisation that brings together lawyers, policy analysts, technologists, and students to protect freedom in the digital world. SFLC.IN promotes innovation and open access to knowledge by helping developers make great Free and Open Source Software, protects privacy and civil liberties for citizens in the digital world through education and free legal advice, and helps policy makers make informed and just decisions on the use and adoption of technology. SFLC.IN is a society registered under the Societies Registration Act, 1860, operating all over India.