There is need for a legal, organisational framework to regulate bias in algorithms | The Indian Express

Op-Eds by Public Law · February 28, 2019
Author(s): Sohini Chatterjee and Sunetra Ravindran

What is an algorithm, and what is the big deal about permitting it to make decisions? After all, it is merely a set of instructions that can be used to solve a problem. The reasons for the increasing reliance on algorithms are evident. First, an algorithm can make decisions faster and more consistently than human beings can. Second, an algorithm can provide emotional distance — it may feel less “uncomfortable” to let a machine make difficult decisions for you.

However, algorithms are susceptible to bias, and machine learning algorithms especially so. Such bias may remain concealed until it has affected a large number of people. Since algorithms are now used to make evaluative decisions that can adversely affect our daily lives, and even to dictate how scarce social welfare resources are allocated, their potential for bias deserves close scrutiny.

The use of AI in governance in India is still nascent. However, this will soon change: the use of machine learning algorithms in various spheres has either been conceptualised or already commenced. For example, the Maharashtra and Delhi police have taken the lead in adopting predictive policing technologies. Further, the Ministry of Civil Aviation plans to install facial recognition at airports to ease security checks.

The primary source of algorithmic bias is its training data. An algorithm’s predictions are only as good as the data it is fed. A machine learning algorithm is designed to learn from patterns in its source data. Sometimes, such data may be polluted by record-keeping flaws, biased community inputs and historical trends. Other sources of bias include insufficient data, correlation without causation and a lack of diversity in the database. The algorithm thus replicates existing biases, and a vicious circle is created.
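This feedback loop can be illustrated with a minimal Python sketch. The data and the “model” here are entirely hypothetical: a predictor trained on skewed historical decisions simply learns the skew and applies it to every new case.

```python
# Illustrative sketch of bias replication (hypothetical data).
# Each record is a (group, past_decision) pair; decisions for group "B"
# are historically skewed against it.
from collections import defaultdict

history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

# "Training": learn the approval rate per group from the past records.
by_group = defaultdict(list)
for group, decision in history:
    by_group[group].append(decision)
rate = {g: sum(d) / len(d) for g, d in by_group.items()}

# "Prediction": approve whenever the learned rate exceeds 0.5.
# The historical skew is reproduced verbatim for new applicants.
def predict(group):
    return 1 if rate[group] > 0.5 else 0

print(rate, predict("A"), predict("B"))
```

A real machine learning model is far more complex, but the mechanism is the same: with no signal other than the biased record, the model’s “best” fit is the bias itself.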

It is worth remembering that algorithms are designed to differentiate between people, images and documents. Bias can lead algorithms to make unfair decisions that reinforce systemic discrimination. For example, a predictive policing algorithm used to forecast future crimes may disproportionately target poor persons. Similarly, an algorithm used to make a hiring call may favour an upper-caste Hindu man over an equally qualified woman.

The extant law in India is glaringly inadequate. Our framework of constitutional and administrative law is not geared towards assessing decisions made by non-human actors. Further, India has not yet passed a data protection law. The draft Personal Data Protection Bill, 2018, proposed by the Srikrishna Committee, provides the rights to confirmation and access, but not the right to receive explanations of algorithmic decisions. The existing SPDI Rules issued under the IT Act, 2000 do not cover algorithmic bias.

Possible solutions to algorithmic bias could be legal and organisational. The first step to a legal response would be passing an adequate personal data protection law. The draft law of the Srikrishna Committee provides a framework to begin the conversation on algorithmic bias. The right to the logic of automated decisions can be provided to individuals. Such a right will have to balance the need for algorithmic transparency with organisational interests.

Second, a general anti-discrimination and equality legislation can be passed, barring algorithmic discrimination on the basis of gender, caste, religion, sexual orientation, disability etc in both the public and private sectors.

Additionally, organisational measures can be pegged to specific legislation on algorithmic bias. In the interests of transparency, entities ought to shed light on how their algorithms work. This will entail a move away from the current opacity and corporate secrecy. However, given the complexity of most machine learning algorithms, seeking absolute transparency alone may not be practical.

Instead, it is expedient to mandate accountability from developers and users. Developers should design fair algorithms that respect data authenticity and account for representation. Further, organisations could develop internal audit mechanisms to inspect whether an algorithm meets its intended purpose and whether it discriminates between similarly placed individuals. Organisations could also outsource such audits to certified auditors.
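As an illustration of what such an internal audit could check, the Python sketch below applies a disparate-impact style test: it compares favourable-outcome rates across groups and flags the algorithm if the lowest group rate falls below 80 per cent of the highest. The data, group names and threshold are all hypothetical, chosen only to show the shape of the check.

```python
# Hypothetical audit helper: a disparate-impact ("four-fifths") check
# over a log of (group, outcome) decisions, where outcome 1 = favourable.

def selection_rates(decisions):
    """Return the favourable-outcome rate for each group."""
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative decision log skewed against "women".
audit = ([("men", 1)] * 50 + [("men", 0)] * 50 +
         [("women", 1)] * 30 + [("women", 0)] * 70)

ratio = disparate_impact_ratio(audit)
print(f"ratio = {ratio:.2f}, flagged = {ratio < 0.8}")
```

A single ratio is of course a crude measure; a real audit would also look at error rates, similarly placed individuals and the provenance of the training data, but even a check this simple surfaces skews that would otherwise stay hidden.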

Entities relying on evaluative algorithms should have public-facing grievance redressal mechanisms. Here, an individual can confirm that an algorithm has been used to make a decision about them, and learn the factors that prompted it. An aggrieved individual or community should be able to challenge the decision. Finally, the use of algorithms by government agencies may require public notice to enable scrutiny.

Considering their pervasiveness, algorithms cannot be allowed to operate as unaccountable black boxes. The law in India, as well as companies reaping the benefits of AI, must take note and evolve at a suitable pace.

This article first appeared in the print edition on February 28, 2019, under the title ‘Rules for the machine’. The writers are research fellows in public law at the Vidhi Centre for Legal Policy, New Delhi.

About Sunetra Ravindran:

Sunetra is a Research Fellow in the Public Law vertical. Her current projects at Vidhi include reforms in the areas of the digital economy, privacy law and data protection. Sunetra graduated with a B.A., LL.B. (Hons.) from NALSAR University of Law, Hyderabad in 2012. Subsequently, she worked at AZB & Partners, Bangalore for two years, where her work primarily involved general corporate matters, employment law and litigation. Thereafter, she obtained her LL.M. in Intellectual Property Law from George Washington University, Washington D.C. in 2015.