India’s Tryst With Predictive Policing

The need for safeguards for greater transparency

In recent years, state governments have increasingly used technology for assistance in several areas, particularly law enforcement. Police forces are using analytical techniques to prevent or solve crime by identifying likely targets. These techniques, collectively known as predictive policing, help law enforcement agencies track data on past crimes, which is then analysed by artificial intelligence (AI) algorithms to predict where, when and how similar crimes might recur.
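
At its simplest, hotspot-style prediction amounts to ranking locations by their crime history and flagging the busiest ones for attention. The sketch below is a deliberately minimal, hypothetical illustration of that idea; the area names and incident log are invented, and real systems use far richer features and statistical models.

```python
# Minimal, hypothetical sketch of hotspot-style prediction: rank areas
# by past incident counts and flag the top ones. All data is invented.
from collections import Counter

past_incidents = [  # (area, offence) pairs from a notional crime log
    ("sector_4", "theft"), ("sector_4", "burglary"),
    ("sector_9", "theft"), ("sector_4", "theft"),
    ("sector_2", "assault"), ("sector_9", "burglary"),
]

incidents_per_area = Counter(area for area, _ in past_incidents)

# "Predict" future hotspots as the areas with the most recorded history.
for area, count in incidents_per_area.most_common(2):
    print(f"{area}: {count} past incidents -> flagged as a hotspot")
```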

The media have often described this technology as capable of ‘stopping crime before it starts’, and much has been written about its benefits, most of which rest on the claim that profiling individuals can enable a safer and more secure environment for citizens. Proponents say it can speed up the identification and analysis of crime patterns that human observation would take far longer to discern, if it could at all.

At the same time, questions about the security of the data collected, the misidentification of persons, and the opaque architecture of the technology have been debated in several forums. Beyond these, the use of AI in fighting crime, though well-intentioned, raises important privacy and human rights concerns. This piece addresses these concerns and suggests checks on the implementation of the technology.

Initial efforts

As early as 2015, the Delhi Police made public its intention to use predictive policing through its Crime Mapping, Analytics and Predictive System (CMAPS), software that accesses real-time data from the city police’s helpline to identify crime hotspots. Though it was introduced to map crime patterns, years later very little is known about the effect of CMAPS, owing to the broad exemptions available to law enforcement agencies under the Right to Information (RTI) Act.

Other states such as Punjab, Uttar Pradesh and Rajasthan have also, over the years, come to use the technology, particularly for facial recognition. A few states, such as Madhya Pradesh, espouse noble intentions such as rehabilitating people who commit crime for a livelihood. However, their methods risk discriminating against the very people they seek to help.

Problems with predictive policing

Risk of being discriminatory: Persons from marginalised communities are more reliant on the social welfare services provided by states. Their information, gathered through welfare schemes, produces databases that over-represent these communities. This skewed demographic set is eventually digitised and used to train the technological tools. When deployed, the technology can entrench exclusionary systems and perpetuate existing human biases under the garb of ‘neutral technological choices’.

In predictive policing, the pitfalls are even more pronounced: disadvantaged communities tend to be more negatively impacted because they are more readily seen as a ‘risk’ to law and order.
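
To make the mechanism concrete, here is a minimal, hypothetical simulation of how a skewed source database distorts supposedly neutral predictions. The population and database shares are invented assumptions, not figures from any real welfare or police dataset.

```python
# Hypothetical illustration: a community that is 20% of the population
# but 50% of a welfare-derived database supplies a disproportionate
# share of the records any model trained on that database will see.
import random

random.seed(42)

POPULATION_SHARE = {"community_a": 0.8, "community_b": 0.2}
DATABASE_SHARE = {"community_a": 0.5, "community_b": 0.5}  # the skew

def sample_records(shares, n):
    """Draw n records from a database with the given community shares."""
    groups = list(shares)
    weights = [shares[g] for g in groups]
    return random.choices(groups, weights=weights, k=n)

training_records = sample_records(DATABASE_SHARE, 10_000)
record_share = {g: training_records.count(g) / len(training_records)
                for g in POPULATION_SHARE}

for group, pop in POPULATION_SHARE.items():
    print(f"{group}: {pop:.0%} of population, "
          f"{record_share[group]:.0%} of training records")
```

Any model trained on such records will treat the smaller community’s outsized footprint as signal, which is how data skew gets laundered into an ostensibly objective ‘risk’ score.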

Diluting citizens’ rights: During protests against the Citizenship Amendment Act, it was reported that the Delhi Police were filming protesters and running the images through the Automated Facial Recognition System (AFRS) to screen the crowd. While this is more an issue of identification than of predictive policing directly, the potential for such footage to be used to judge protesters as future criminals cannot be ignored.

India’s digital domain is also increasingly being used to infringe upon citizens’ rights. The Central government has, in the recent past, turned to social media platforms for moral policing.

Further, Rule 3(5) of the Intermediary Rules 2018, which govern the flow of information across the internet, requires every intermediary to enable the tracing of originators of information on its platform. This could put significant pressure on IT companies to monitor content and hand over data, affecting citizens’ privacy and rights.

Adding to this, India’s reliance on technological tools for governance is expected to grow as the country incorporates more tech-driven solutions. This is bound to have ramifications, often negative ones, for the rights of citizens.

Need for safeguards

The use of AI in law and order in India needs increased oversight, greater transparency, and sufficient mechanisms to contest violations of rights caused by its use. A few solutions are detailed below.

Agencies practising predictive policing should proactively inform citizens about it, making public the types of technology being used and their expected impact.

Government procurement of such technological systems should be subject to algorithmic impact assessments that can identify and mitigate the risks of automated systems explained above. These assessments should include a criterion for examining the data set: how it was collected and how representative it is of the population.
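
As one concrete possibility, an impact assessment could include an automated representativeness check of the kind sketched below. The group names, shares and the 25% over-representation threshold are hypothetical placeholders chosen for illustration.

```python
# Minimal sketch of a representativeness audit: flag any group whose
# share of the training dataset exceeds its population share by more
# than an allowed ratio. All names and numbers are placeholders.

POPULATION_SHARE = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}
DATASET_SHARE = {"group_a": 0.25, "group_b": 0.30, "group_c": 0.45}

MAX_RATIO = 1.25  # flag groups over-represented by more than 25%

def audit_representativeness(dataset, population, max_ratio=MAX_RATIO):
    """Return groups whose dataset share exceeds their population
    share by more than max_ratio, with the offending ratio."""
    flagged = {}
    for group, pop_share in population.items():
        ratio = dataset.get(group, 0.0) / pop_share
        if ratio > max_ratio:
            flagged[group] = round(ratio, 2)
    return flagged

print(audit_representativeness(DATASET_SHARE, POPULATION_SHARE))
# {'group_c': 1.8} -> group_c is 1.8x over-represented; the dataset
# should be re-examined before procurement is cleared.
```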

However, these safeguards cannot come only from the government authorities using the tools; they must also come from the private sector companies creating them.

Conclusion

The problems with predictive policing are accentuated by the inherent fault lines in society that make the targeting of certain categories of individuals acceptable. Private and public sector entities need to make AI more inclusive by addressing issues of bias and by making the technology more open and accessible to the masses.

Right now, the focus of law enforcement agencies is on removing human intervention completely. But human intervention will always remain essential to maintaining law and order, and it should complement technological interventions.

Views are personal. 
