The Algorithm, the Intermediary and the (Holy?) State

Balancing Algorithmic Bias, Intermediary Liability, and Constitutional Rights

**Vyomkesh Didwania

Introduction

With the increasing scale of content posted online, content moderation has acquired its own set of problems. With billions of users and millions of posts per day across various platforms, it has become humanly impossible to enforce content moderation guidelines manually. These platforms, therefore, rely heavily on Artificial Intelligence (AI)-based algorithms as content moderators.

However, these algorithms themselves suffer from inherent biases and are also not the most effective when it comes to context-based content moderation. Algorithmic biases can manifest in multiple ways, from allowing actual hate speech to stay and spread unchecked to suppressing legitimate political opinions. 

In the Indian context, given the recent boom in Internet usage, it becomes even more pertinent to understand the effectiveness of the content moderation policies of these social media platforms. In the complex socio-cultural landscape of the country, where communal, caste-based, and gender-based tensions continue to persist, these conflicts often spill over into the digital domain.

The primary point of contention in this article is whether these social media platforms can be subjected to the Fundamental Rights test by extending the application of horizontality and treating them as quasi-public authorities. There are multiple facets to this issue at the intersection of digital platforms and the Constitution.

Algorithmic Moderation and Its Implications

Algorithms based on Artificial Intelligence have been deployed to implement moderation policies on a mammoth scale worldwide. These algorithms work through a diverse array of mechanisms, but the most common one is known as hashing. To put it simply, hashing assigns a distinctive value, a digital fingerprint, to each piece of content posted online; these hash values are then compared against databases of known offensive material to locate and classify such content. Without getting too technical, a host of complex algorithmic operations is performed by social media platforms to ensure that any and all forms of explicit content are immediately taken down. However, this has not always worked out well, and there have been instances where entities such as Meta have failed to take down targeted posts against religious groups in India. Three issues arise with respect to algorithmic content moderation, each taken up in turn below.
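Before turning to those issues, a minimal sketch (in Python, using SHA-256 and an illustrative blocklist; real deployments rely on perceptual hashing systems such as PhotoDNA or PDQ rather than plain cryptographic hashes) makes the hash-matching mechanism described above concrete:

```python
import hashlib

# Known offensive items are stored only as hash values; each new post is
# hashed and checked against that set. The blocklist entries here are
# placeholders, not any platform's actual data.
BLOCKLIST = {
    hashlib.sha256(b"known offensive image bytes").hexdigest(),
    hashlib.sha256(b"known offensive text").hexdigest(),
}

def is_flagged(content: bytes) -> bool:
    """Return True if the content's hash matches a known-bad hash."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

print(is_flagged(b"known offensive text"))  # True: exact re-uploads are caught
print(is_flagged(b"a harmless post"))       # False: unknown content passes
```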

I. Failure of the Algorithm to detect actual hate speech. 

The issue arises when content does not prima facie appear to violate moderation policies and can be termed borderline content. Such posts are made in relation to a particular context: in isolation, the content appears harmless, but it is riddled with innuendo and coded agendas that the algorithm cannot detect. The failure of artificial intelligence and algorithms to flag and take down such content is particularly concerning in the Indian context. Context-based posts of this kind are rarely flagged by the algorithms and continue to circulate. The issue is further compounded when the statements are made in vernacular languages, which the algorithms are not always well trained to handle.
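To illustrate the gap, here is a deliberately simplified sketch of surface-level filtering, using placeholder terms rather than any real moderation lexicon: the explicitly banned term is caught, while a coded phrase carrying the same intent passes untouched.

```python
# Placeholder for an explicitly banned term; real systems maintain large,
# multilingual lexicons and trained classifiers, but the failure mode is
# the same: matching on surface form, not on contextual meaning.
BANNED_TERMS = {"explicit_slur"}

def surface_filter(post: str) -> bool:
    """Flag a post only if it contains an explicitly banned term."""
    return any(term in post.lower() for term in BANNED_TERMS)

print(surface_filter("a post containing explicit_slur"))      # True: flagged
print(surface_filter("a post using coded innuendo instead"))  # False: missed
```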

The scale at which these algorithms operate forces reliance on exact or near-exact matching, which means that a small typo in a sentence, or a watermark on an image or video, can easily bypass the hash-based identification systems that most platforms use. Hence, it becomes abundantly clear that while explicit content is effectively flagged and removed from these platforms, it is the borderline content, the context- or innuendo-based posts that an algorithm cannot objectively label as “hate speech”, that slips under the radar.
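The fragility is simple to demonstrate. In the sketch below (again using SHA-256 as a stand-in for the perceptual hashes production systems use), a single-character edit yields a completely different digest, so the altered post matches nothing on a blocklist:

```python
import hashlib

original = b"a post containing blocklisted material"
altered  = b"a post containing blocklisted materia1"  # one-character "typo"

# The two digests share nothing recognisable in common, so an exact-match
# blocklist built from the original hash will never flag the altered post.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
```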

II. Biases in algorithms stifle differing political opinions under the garb of “hate speech”

Algorithms are not created in a vacuum, and it has been empirically observed that they carry the inherent biases of those who make them. This means that algorithms can perform poorly when implementing moderation policies on information relating to underrepresented groups, including Dalits, women, and religious minorities. This underrepresentation leads to two consequences: firstly, lacking the training data to differentiate between genuine hate speech and a mere difference of opinion, the AI frequently takes down statements made against the incumbent majority.

The matter is further aggravated in the Indian context, where the government itself orders platforms like X (formerly Twitter) to take down such posts. When State authority is used to influence these intermediaries into taking down politically sensitive posts that may not be per se objectionable or “hate speech”, it becomes a serious threat to free speech.

Secondly, the procedure for enforcing content moderation policies on most platforms is highly arbitrary: appellate mechanisms are inadequate, and reasons for takedowns are sometimes not provided at all.

A glaring example of State-influenced censorship on digital platforms arose during the farmers’ protests in the country. In 2021, Twitter was embroiled in a political controversy because it had to comply with the government’s orders to take down certain hashtags, users, images, and other content from its platform. This meant that the moderation systems were specifically reconfigured, on account of the State’s orders, to take down content that had not previously been objectionable in any way.

The fundamental right to freedom of speech and expression under Article 19(1)(a) has been interpreted by the Supreme Court to include the right to protest as a bulwark of democratic governance. Such gag orders by the Government, coupled with the general lack of algorithmic sensitivity towards digital minorities, mean that legitimate expressions of protest often get suppressed under the garb of hate speech.

III. Incentivised misinformation due to algorithmic prioritisation

Algorithms are, in the end, made by private enterprises with profit-maximising motives, and user engagement and post proliferation remain the first priority of any social media algorithm. Posts that attract the most user engagement are prioritised, which in turn gives post-makers a stronger incentive to chase engagement. Many players, news agencies in particular, have habitually resorted to misleading, misquoted headlines, sensational clickbait designed to drive up engagement. And while news outlets have been notorious for sensational headlines, other individuals have resorted to spreading outright misinformation for the sake of traction.
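A toy illustration of this prioritisation logic, with hypothetical weights and no claim to reflect any platform’s actual ranking formula, shows how a purely engagement-driven score surfaces sensational content first:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    # Hypothetical weights: shares spread content furthest, so they weigh most.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Carefully sourced report", likes=120, shares=10, comments=15),
    Post("Misleading clickbait headline", likes=400, shares=250, comments=300),
]

# The feed is ordered by engagement alone; accuracy plays no part in the score.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.text)
```

Nothing in such a score rewards accuracy, which is precisely why misleading but provocative posts win the ranking.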

Fact-checking is a more complex process than the comparatively simple task of hate speech detection, so such posts stay online for longer periods of time. This creates a precarious feedback loop that not only shapes public perception but also undermines the credibility of content shared online. The inherent lag in fact-checking, which is almost always done by human actors within social media platforms, allows untrue narratives to take hold, further eroding public trust and socio-political harmony within the country.

This also has an ancillary “echo chamber” effect, reinforcing confirmation bias and pushing social thought towards extremes. The echo chamber is the largest by-product of traction-driven algorithms: the AI learns a user’s preferences and fills their feed with similar content, which is not always true or accurate.
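A simplified sketch of that mechanic (with illustrative topic labels, not any real recommender’s categories) shows how serving only a user’s dominant preference shuts contrary material out of the feed entirely:

```python
from collections import Counter

# Past engagement is used to infer the user's dominant interest.
engagement_history = ["politics_A", "politics_A", "sports", "politics_A"]

candidate_posts = {
    "politics_A": ["post agreeing with viewpoint A"],
    "politics_B": ["post presenting opposing viewpoint B"],
    "sports": ["match highlights"],
}

# The feed is built exclusively from the dominant topic, so viewpoint B,
# however accurate or important, never reaches the user.
preferred_topic = Counter(engagement_history).most_common(1)[0][0]
feed = candidate_posts[preferred_topic]
print(feed)  # ['post agreeing with viewpoint A']
```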

These three primary issues with algorithm-based moderation and content curation make it abundantly clear that the procedure for regulating social media content needs serious reconsideration, especially in the Indian context, for reasons elaborated upon in the next section.

Social Media Intermediaries and Their “Quasi-Public” Role

The power of social media intermediaries is irrefutable, given their role in shaping public discourse in contemporary times. This enormous concentration of control over the public function of social discourse requires a change in perspective: from viewing public actors, i.e. the State, as the only threat to fundamental constitutional rights, to recognising private entities, particularly such platforms, as comparable threats. The right to freedom of speech and expression naturally flows into the discourse on the powers and liabilities of such intermediaries.

In the context of India, the issue becomes even more distinctive, since the line between public and private in social media platforms is increasingly getting blurred. With the IT Rules 2021, the Government of India has actively pushed itself into the role of social media regulator. These rules were supposedly enacted to provide a “soft touch” oversight mechanism for digital intermediaries and to improve grievance redressal. Moreover, the rules classify larger platforms as Significant Social Media Intermediaries (SSMIs), covering any intermediary whose registered users exceed a notified threshold. Such SSMIs must, in addition to the due diligence expected of all intermediaries, appoint statutory officers, including a Chief Compliance Officer (CCO) and a Resident Grievance Officer (GO), among others. This move by the Government has been criticised, with commentators calling the resulting oversight panels “essentially a government censorship body for social media that will make bureaucrats arbiters of online free speech.”

The manner in which the Government has been using these IT Rules to moderate content has prompted multiple petitions in which the State is arrayed as a respondent. In fact, the substantive IT Rules 2021 have themselves been challenged in a matter before the Supreme Court. While that matter is sub judice, and the validity of the IT Rules lies outside the ambit of this article, it is clear that Fundamental Rights themselves are in question within the realm of digital intermediaries. It is therefore argued that SSMIs should be given the status of “quasi-public” bodies, based on a two-pronged argument:

I. The element of Public Functions being performed by the Intermediaries

Recently, in the RMGL case, the Supreme Court reiterated the concept of a “public law” element in characterising the functions of a private entity. It is also relevant to contextualise the judgment in Zee Telefilms with respect to the issue of SSMIs: there, it was stated that the test for determining “State” should be based on the “public functions” being carried out by the authority. It is argued that social media intermediaries, especially SSMIs, carry out the large public function of mass communication and, given their reach at the present scale, are akin to large broadcasting bodies. The Supreme Court, in Binny Ltd. v Sadasivan, observed that it is difficult to draw a line between public and private functions when they are discharged by a purely private authority: a body performs a “public function” when it seeks to achieve some collective benefit for the public or a section of the public and is accepted by the public or that section of the public as having the authority to do so.

SSMIs perform a public function when they regulate the content posted on their platforms, because the fundamental right to freedom of speech and expression, including the right to protest, enters the equation the moment a moderation decision is unjust, inequitable or arbitrary.

II. The Government now plays a significant role in content moderation by SSMIs

As mentioned above, with the new IT Rules, the Indian Government has taken a more “hands-on” approach to content moderation on social media platforms. The matter was further complicated by the amendments introduced to the IT Rules in 2023, which empowered the Government to set up “Fact Checking Units” (FCUs) to identify and take down fake news on social media platforms. The Bombay High Court struck down these amendments in a writ petition, holding them unconstitutional after an initial split verdict. The Court stated that the creation of an FCU would impose a restriction above and beyond the restrictions constitutionally permitted under Article 19(2), contrary to the ratio laid down in Kaushal Kishor; the State cannot, therefore, add new restrictions on the fundamental right to freedom of speech and expression.

The striking down of this amendment signifies how the Courts have been applying the doctrine of “Horizontality of Fundamental Rights” while dealing with social media platforms.

The striking down of the 2023 amendments to the IT Rules only furthers the assertion that social media content regulation is increasingly being influenced by the Government. And while the provision for setting up FCUs has been struck down, the IT Rules as they stood prior to 2023 continue to be valid.

While the influence of the Government over general content moderation might not be as explicit as the FCUs, the Court’s apprehensions regarding threats to Fundamental Rights continue to hold true, even for algorithm-based content moderation.

The Algorithm, the State and the Intermediary: The Solution in the Trinity

The public law remedy of writ jurisdiction cannot be invoked unless some form of State involvement is established. While the existing jurisprudence has subjected these intermediaries to writ remedies, it has done so on the basis of the State’s potential to impose unreasonable restrictions on the fundamental right to free speech and expression. However, as discussed above, algorithmic bias in itself also poses significant threats to freedom of speech and expression, especially in the Indian context, where caste and religion continue to be heavily contentious issues. Since the Courts have already acknowledged the State’s role in social media moderation, albeit in the form of fact-checking, it is only logical to conclude that the State’s influence continues to pervade the SSMIs through the unamended IT Rules 2021.

Therefore, the State can no longer play a merely ancillary part in content moderation if it genuinely wishes to ensure that social media platforms are democratically equitable. What should follow is a symbiotic relationship between the State and the SSMI, whereby both are implicated in the consequences, intended and unintended, of algorithm-based content moderation.

This has twin benefits. Firstly, the failure of an algorithm to recognise hate speech could be read into the State’s constitutional duty to protect against “internal disturbances”, so that any non-recognition of hate speech becomes directly attributable to the State. Secondly, to prevent the State from misusing this position to suppress dissent under the garb of “public order, morality or tranquillity”, all forms of moderation action should be amenable to judicial review under Fundamental Rights scrutiny.

It could also incentivise the State to address the algorithms’ biases, particularly their casteist and communal insensitivity within the Indian context. The intermediaries are, in the end, private corporations with profit-maximising motives; they have no incentive to amend their algorithms so long as those algorithms generate revenue. The Union, however, being tasked with the duties of a welfare state, should bear the added burden of working to address these algorithmic biases.

In summary, judicial review should be expanded, and if the State wishes to moderate digital content in a bona fide manner, it should be given greater responsibility and held to a higher standard of accountability in doing so. The existing jurisprudence has largely construed the State’s actions to be in bad faith. However, if the State and the Judiciary work harmoniously, they might just succeed in making the digital space a more inclusive and democratic platform for the Indian populace to voice and express themselves.

**Vyomkesh Didwania is a 3rd-year B.A. LL.B. (Hons.) student at NUJS, Kolkata.

**Disclaimer: The views expressed in this blog do not necessarily align with the views of the Vidhi Centre for Legal Policy.