Accountability, Not Curbs on Free Speech, is the Answer to Harmful Content Online | The Wire

Op-Eds by Vidhi Karnataka · December 26, 2018
Author(s): Divij Joshi

The draft amendments to India’s regulations covering ‘intermediaries’, suggested by the IT ministry following ‘secret consultations’ with internet companies, are likely to introduce further ambiguity into the already broad and vague legal regime governing online intermediaries.

If passed, they will ease the private censorship and surveillance of speech by powerful companies, while doing little to address the actual problem of undemocratic and unsafe online spaces.

There is an urgent need not only to recall the vague and harmful draft rules, but to ask the government to ensure greater transparency and accountability from online platforms.

Safe harbour and the government’s dilemmas

Section 79 of the Information Technology Act – the so-called ‘safe harbour’ for intermediaries, a broad term encompassing (largely private) internet service providers like telecom companies, as well as online platforms like Facebook and Twitter – is the backbone of one of the most crucial enablers of online freedoms. This provision protects intermediaries from being held directly liable for the words and actions of third parties using their services. Without it, any illegal content posted on or through such a service could potentially invite civil or criminal legal action against the intermediary.

The outcome would be a severely restricted internet – online platforms and services would heavily censor content to avoid liability, which would also require them to monitor all content posted by their users – creating a private surveillance regime.

As per Section 79, the safe harbour is available to intermediaries only if they remove illegal content upon obtaining ‘actual knowledge’ of it, and also observe ‘due diligence’ and comply with the rules made by the executive. These rules were notified in 2011 as the Intermediary Guidelines Rules and, when notified, contained vague provisions including a requirement to take down content which was ‘grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another’s privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner.’

The vague drafting of Section 79 and the 2011 Rules created a regime where intermediaries were unsure of when they could be held liable, prompting them to over-censor and take down any content notified by any private person or government authority, for fear of criminal sanction. However, this regime was overturned by the Supreme Court in Shreya Singhal v Union of India. In this case, the court read down the ‘actual knowledge’ requirement under Section 79 to mean a judicial order or a notification by the ‘appropriate government’. The court noted the difficulty and danger in private parties like Facebook and Google being required to adjudge the legality of content and filter the content on their platforms. Today, this is the law of the land.

Recent events appear to have prompted the Indian government to rethink its intermediary liability regime. The government’s interests appear to be tied particularly to concerns over electoral interference by Cambridge Analytica on Facebook and disinformation on WhatsApp.

With its hands tied by the Supreme Court’s directions, the government now appears to be attempting to curtail ‘unlawful speech’ by requiring proactive censorship by intermediaries. The most concerning aspect is contained in Rule 3(9), which reads that “the Intermediary shall deploy technology-based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.”

Additionally, the draft rules have been amended to require intermediaries to provide ‘traceability’ of the origin of messages – an obvious reference to the end-to-end encryption provided by WhatsApp, which does not permit message traceability.

While the traceability requirement is concerning and ought to be opposed, it must be noted that intermediaries are already subject to a similar obligation under Section 69 of the IT Act and the rules made under that section. However, the overlapping legal provisions do create uncertainty about the precise regime which applies to content decryption, and the draft rules – given their vagueness and the absence of even the minimal safeguards present under Section 69 – are almost certainly unconstitutional and run afoul of the Supreme Court’s position on the fundamental right to privacy.

The more concerning aspect is the requirement of ‘proactive’ censorship of ‘unlawful content’. Firstly, it assumes that intermediaries are, or should be, in a position to determine the legality of content – a determination which must be judicial – and to censor speech without any standards for doing so. Such broad ‘prior restraint’ of speech without a judicial determination of its legality is also likely to be unconstitutional for its tendency towards mass private censorship.

Moreover, the advocacy of the use of ‘automated tools’ assumes that such tools are capable of filtering only unlawful speech, whereas the reality is very different – even the most sophisticated filtering technologies are liable both to censor legal content and to miss illegal content.

Rights without responsibilities?

The draft rules bring to the fore certain tensions within the Indian intermediary liability regime, which both courts and governments have been grappling with. In the aftermath of Shreya Singhal, intermediaries need only comply with judicial orders for content takedown. However, in the absence of an expedited judicial process for takedown (as exists in Chile, for example), this requirement is onerous on users who are at the receiving end of unlawful speech online, and allows intermediaries to neglect unlawful content on their platforms.

In this backdrop, the Supreme Court has already bypassed the requirements of the IT Act to evolve the doctrine of ‘auto blocking’ in specific cases. This doctrine is, in essence, the same as that under the new draft rules, and dangerous for the same reasons.

However, while online platforms like Facebook, Google or Twitter, project themselves as impartial ‘intermediaries’, the reality is that their primary role lies in filtering, censoring, prioritising or disabling certain forms of content.

In this manner, even in the absence of legal requirements, intermediaries are responsible for the private ordering of public speech, a dangerous trend which enables platforms to exercise enormous power over our online lives, without any responsibility towards the same. Facebook’s prioritisation of misleading political advertisements, and Twitter’s failure to act upon violence against marginalised communities is evidence of the need to democratise online platforms and make them more accountable.

The draft IT Rules pose no solution to this lack of transparency and accountability of online platforms – rather, they entrench platforms’ power and make them more culpable in the private censorship of public discourse, along with the government of the day. A smarter legal intervention would promote greater due process in the private practices of intermediaries: requiring them to make their content moderation practices open and transparent, for example by releasing transparency reports, and requiring them to follow due process in moderating content – for instance, by making it easier for users to report abusive or illegal behaviour, and by notifying users of the steps taken to address it.

This would promote safer and more democratic online spaces where online communities, not the government or the executives in online companies, can be at the forefront of battling harmful and illegal speech.

You can make your voice heard by writing to the ministry of information technology until January 15. India must use this opportunity to oppose censorship and advocate for safer and more democratic online communities.

This article was originally published in The Wire.


About Divij Joshi:

Divij Joshi is a Research Fellow at Vidhi, Bengaluru. He graduated from the National Law School of India University, Bangalore, in 2016, and subsequently worked in the dispute resolution team of AZB & Partners in Mumbai. He has also worked as an independent consultant and researcher with civil society organisations, and regularly writes on topics of intellectual property rights and the information society. His areas of interest include access to knowledge, innovation, and law and technology.