Indian Law Enforcement’s Ongoing Usage of Automated Facial Recognition Technology – Ethical Risks and Legal Challenges
A primer on the key challenges of AFRTs in Indian Law Enforcement
Alongside the growing interest in the potential of AI, concerns have been raised, driven by ideas of responsible and ethical innovation. One of the most contentious uses of intelligent algorithms is the development and deployment of automated facial recognition technology (AFRT), especially for law enforcement and surveillance purposes.
Across the globe, concerns have been flagged regarding design and technical flaws in AFRTs, along with their potential chilling effect on constitutional freedoms and values. In India, government agencies in different states, as well as at the national level, are pursuing the integration of AFRTs into domestic law enforcement practices. This is particularly alarming because most of it has happened with little information or transparency, thereby eluding the necessary public debate around these issues.
This paper, along with the subsequent working papers in this series, will bring to light certain facets that are crucial from the Indian standpoint, yet under-discussed or completely ignored. This paper specifically is a primer on how the issues around AFRTs in Indian law enforcement have been discussed so far, and on some of the key risks and challenges that have emerged. These challenges can broadly be classified into constitutional and legal challenges; ethical risks; and the evidentiary value of AFRTs under the Indian Evidence Act for their usage in the criminal justice system.
The paper concludes by arguing why and how a more elaborate discourse is needed around these issues with AFRTs, in order to achieve India’s stated goal of “responsible AI for all”.
This is the first in a series of papers. Read the second Working Paper here.