The adoption of Artificial Intelligence (AI) tools for governance has been steadily increasing, and state authorities in India are moving ever closer to a surveillance society. Citizens are tracked under the guise of maintaining peace and security, a practice that has met with resistance from privacy advocates. The unregulated deployment and misuse of AI tools undermine the fundamental rights of citizens.
The need to regulate Artificial Intelligence based tools
The pertinent question that lingers is when and how to regulate AI tools. A prominent proposition is to regulate them either at the design stage or at the development stage. Regulating at either stage would rule out possible misuse but could restrict innovation: developers would adopt a more cautious, restrictive approach, hampering meaningful advances. Regulating the deployment of these tools is therefore the optimal approach, striking a balance between innovation and the protection of rights.
Even a single AI tool is often deployed to serve multiple purposes. For instance, administrations use Automated Facial Recognition Software (AFRS) to identify traffic violations, aid law enforcement, and protect national security. At the same time, they may also use AFRS for racial segregation, denial of basic services, and surveillance of citizens. Meaningful regulation must therefore target the purposes or uses of AI tools.
There have been multiple cases in India where authorities have misused AFRS. In 2021, the Delhi Police deployed AFRS to track protestors, even though the tool was originally developed to locate missing children. They matched facial biometrics against existing databases to conduct investigations and track 1,100 individuals. Notably, the tool is notorious for poor accuracy, with a success rate of only 2%. As per official claims, the proportion of children matched using AFRS was less than 1%, and the system would even match pictures of boys with girls. Deploying this tool without oversight or defined regulatory parameters can negatively affect individuals' rights: it may result in false or mistaken identification even as the tool's security purpose is undermined.
Over the past few years, the Telangana Government has become notorious for deploying various technologies to expand surveillance across the state under the garb of digital governance, leading to a proliferation of CCTV cameras, AI-based solutions, facial recognition, and more in every aspect of a citizen's life. The Telangana Government has also used facial recognition to contain the spread of COVID-19, uploading the biometric data of those entering the state via the Hyderabad airport to the TSCOP app for contact tracing and geotagging.
There have also been reports of AFRS use by the Hyderabad Police, who are building a database of citizens to identify repeat offenders of petty offenses. Citizens' personal details, such as their faces, fingerprints, and identification numbers, are collected arbitrarily and made accessible via TSCOP. Although the Police have attempted to justify this surveillance under the Identification of Prisoners Act, 1920, their contentions are controversial: the Act permits collecting personal details only after an arrest, not merely on the basis of suspicion. Such legal distortions demonstrate the need for purpose-oriented regulation governing how AI tools can be used as well as how they are deployed.
In another development, the Lucknow Police claim to have deployed AI tools that analyze facial expressions to determine whether a woman is in danger; if the tool makes a positive determination, it automatically sends a distress signal. The tool, in other words, detects a woman's 'distressed' or 'unhappy' facial expression and alerts the nearest police station. The Lucknow Police have not satisfactorily explained how 'distressed' and 'unhappy' expressions are defined, and it is unclear how the tool could make accurate contextual judgments of 'distress'. It could send police personnel on wild goose chases, and it could lead to extensive surveillance of women, interference in their private affairs, and wrongful arrests.
Regulatory protection is therefore crucial where governments might use AI tools for profiling and associated purposes. Irrespective of the motive for surveillance, adequate laws are necessary to oversee data collection and processing, ensuring that fundamental rights such as freedom of expression, movement, and religion, and the right to life with dignity, are not compromised in the name of national security. Moreover, the landmark Puttaswamy judgment clearly states that a government may collect citizens' personal data subject to defined data laws.
Legal Issues and Safeguards
To regulate the use of AI tools, India is actively working towards a national strategy. Various governmental authorities, including NITI Aayog and MeitY, have come up with their own sets of policies and guidelines for the responsible use of AI tools. The expert opinion reflected in these documents is that existing laws are sufficient for the regulation of AI.
They propose the following:
(i) a self-regulation model for use of AI,
(ii) conscious development of explainable AI and concepts such as ‘Differential Privacy’ by implementing ‘Federated Learning’, and
(iii) adopting technical best practices based on FATE (fairness, accountability, transparency, and ethics).
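To make one of these proposals concrete: 'Differential Privacy' works by adding calibrated random noise to query results so that no single individual's record can be inferred from published statistics. Below is a minimal, illustrative Python sketch of the standard Laplace mechanism applied to a counting query; the function names `laplace_noise` and `dp_count` are ours for illustration and do not come from any cited policy document.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5              # u in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # max(...) guards against log(0) at the distribution's edge.
    return -scale * sign * math.log(max(1e-12, 1 - 2 * abs(u)))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of records matching predicate.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

For example, `dp_count(range(100), lambda x: x < 50)` would return a value close to, but deliberately not exactly, 50; smaller values of `epsilon` add more noise and give stronger privacy at the cost of accuracy.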
While self-regulation is the core principle, these policy guidelines recommend framing regulations to minimize indirect impacts and malicious use.
They also identify broad guiding principles for self-regulation. These include ensuring safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and protection and reinforcement of positive human values. The elements of effective self-regulation via self-audit are also clearly prescribed.
Such a model of self-regulation, however, may not be sufficient in the long run, as misuse by developers or those associated with them is a real concern. The long-term ineffectiveness of self-regulation has led the EU to draft extensive legislation addressing such loopholes. It proposes banning the use of AI for what it terms "manipulative, addictive, social control and indiscriminate surveillance practices" and imposes restrictions based on the risk level of the AI tool. Many experts have supported this legislation, terming it the end of the free rein of AI usage.
Supporting Privacy Regulations
Experts in India, too, have warned about the ineffectiveness of self-regulation, especially in the absence of a basic data protection law. Ideally, such a law would address the core issues surrounding data collection and processing. However, there are legitimate concerns about the effectiveness of the Indian Personal Data Protection Bill, 2019 (PDPB) in curtailing the misuse of AI tools for governance. Certain sections carving out 'exceptions' to its other provisions are extremely problematic.
Section 35 of the Bill gives the Central Government the authority to exempt any governmental agency from the application of the Bill's data processing provisions if 'deemed necessary'. The predefined conditions for such an exemption from compliance are extremely broad.
An exemption is also provided under Section 36 for data processing in the “interests of prevention, detection, investigation, and prosecution of any offense or any other contravention of any law for the time being in force”, amongst other conditions. The language of this section has the potential to allow law enforcement agencies unmitigated power for surveillance.
Meanwhile, Section 91 of the Bill permits the State to access vast amounts of anonymized data for the purpose of evidence-based policymaking. This data may be used to profile entire communities. In addition, governments might frame targeted policies based on political agendas that are detrimental to individuals and communities, as with the "re-education camps" for Uyghurs in Xinjiang.
The only safeguards that cannot be exempted under the Bill are de-identification and encryption methods. It is also necessary to implement steps preventing misuse of, unauthorized access to, and modification, disclosure, or destruction of personal data. However, the current regulatory framework in India does not do so, leaving much to be desired.
With a growing dependence on AI tools, it is inevitable that governments will misuse them and violate fundamental rights unless kept in check. The lack of any regulation on the use of such tools has allowed the State free rein to collect citizens' personal data without following due process. The Industrial Revolution taught us that we cannot predict the adverse consequences of technology. Even as our legislators must act, we need a more comprehensive discussion involving the legal and tech fraternities.
A self-regulatory regime based on FATE may be beneficial as a precautionary measure in the short term, and making certain uses of AI tools illegal is the first step in this direction. That said, a wait-and-watch strategy for AI use would be counterproductive. Enacting precautionary, watertight regulations governing what is and is not permitted is the need of the hour. Such regulation should also establish a system of checks and balances over the continued use of AI tools.
Regulators could take a leaf out of the playbook of existing regulated domains. The concept of 'post-market surveillance', for instance, mandates that the developer bears responsibility for the use of their technology: developers carry the burden of post-deployment monitoring, which helps them identify adverse consequences and take corrective action.
While the intent behind using AI tools such as AFRS may be to provide a safer society, a lack of regulation on such use could be detrimental to the rights of citizens: without regulations, there are no safeguards. The current debate around AI regulation can light the path for India as it decides what framework to adopt. India has an extremely bright future in the tech domain, and it should not be tainted by poor regulation.