Experts & Views
This post discusses the regulatory challenges that emerge from the changing nature of Facebook and other social networking websites
Facebook has recently faced considerable criticism for circulating fake news and for allegedly suppressing certain user opinions during the 2016 U.S. elections. The social media website has also been criticised for over-censoring content on the basis of its community standards. In light of these issues, this post discusses whether Facebook can still be considered a mere host or transmitter of user-generated content. It also seeks to highlight the new regulatory challenges that emerge from the changing nature of Facebook’s role.
The Changing Nature of Facebook’s Role
Social media websites such as Facebook and Twitter, along with Internet Service Providers, search engines, and e-commerce websites, are all currently regulated as “intermediaries” under Section 79 of the Information Technology Act, 2000 (“IT Act”). An intermediary, “with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record.” Accordingly, intermediaries are not liable for user-generated content or communication as long as they observe due diligence and comply with certain conditions, such as acting promptly on takedown orders issued by the appropriate government or its agency.
Use of Human Editors
While Facebook is currently regarded as an intermediary, some argue that it has ceased to be a mere host of user-generated content and has acquired a unique character as a platform. This argument was bolstered when Facebook’s editorial guidelines were leaked in May 2016. The guidelines demonstrated that apprehensions about Facebook acting in an editorial capacity were true for at least some aspects of the platform, such as its trending topics. Reports suggest that Facebook used human editors to “inject” or “blacklist” stories in the trending topics list. The social media website did not simply rely on algorithms to generate trending topics; instead, it instructed human editors to monitor traditional news media and determine what should be trending.
These editorial guidelines revealed that the editors at Facebook regularly reviewed algorithmically generated topics and added background information, such as videos or summaries, before publishing them as trending topics. Further, the social media website also relied heavily on traditional news media websites to make such assessments. Critics have pointed out that Facebook’s editorial policy is extremely inadequate, as it does not incorporate guidelines relating to checking for accuracy, encouraging media diversity, respecting privacy and the law, or editorial independence.
Months after this revelation, Facebook eliminated human editors from its trending feature and began relying solely on algorithms to filter trending topics. However, this elimination has resulted in the new problem of circulation of fake news. This is especially alarming because increased access to the Internet has meant that a large number of people get their news from social media websites. A recent research report pointed out that nearly 66% of Facebook users in the U.S. get news from Facebook. Similarly, nearly 59% of Twitter users rely on the website for news. In light of this data, eliminating human discretion completely does not appear to be a sensible approach when it comes to filtering politically sensitive content, such as trending news.
Facebook has also been widely criticised for over-censoring content. The social media website blocks accounts and takes down content that is in contravention of its “community standards”. These community standards prohibit, among other categories, hate speech, pornography, and content that praises or supports terrorism. In India, the social media website faced considerable flak for censoring content and blocking users during the unrest that followed the death of Burhan Wani, a member of a Kashmiri militant organisation. Reports suggest that nearly 30 academics, activists and journalists from across the world were restricted from discussing or sharing information regarding the incident on Facebook.
Facebook’s community standards have also been criticised for lacking a nuanced approach to issues such as nudity and hate speech. The blocking of content by private entities on the basis of such “community standards” raises concerns that the standards are too wide, and that their application can have a chilling effect on free speech. As highlighted before, Facebook’s unique position, where it determines what content qualifies as hate speech or praise of terrorism, allows it to throttle alternative voices and influence the online narrative on such issues. The power exercised by Facebook in such instances makes it difficult to identify it as only a host or transmitter of content generated by its users.
The discussion above demonstrates that while Facebook does not behave entirely like a conventional editor, it would be too simplistic to regard it as a mere host of user-generated content.
Facebook is a unique platform that enables content distribution, possesses intimate information about its users, and has the ability to design the space and conditions under which its users can engage with content. It has been argued that Facebook must be considered a “social editor” which “exercises control not only over the selection and organisation of content, but also, and importantly, over the way we find, share and engage with that content.” Consequently, Facebook and other social media websites have been described as “privately controlled public spheres”, i.e., much like traditional media, they have become platforms which provide information and space for political deliberation.
However, if we agree that Facebook is more akin to a “privately controlled public sphere”, we must rethink the regulatory bucket under which we categorise the platform and the limits to its immunity from liability.