
US law enforcement still using Clearview AI without authorization: Report

Clearview AI

According to a recent BuzzFeed News investigation, US law enforcement personnel are using Clearview AI's facial recognition without authorization. The report indicates that officers are using Clearview AI while the public and higher authorities remain in the dark. Clearview AI is a private US-based enterprise providing facial recognition technology to law enforcement agencies across the globe. It claims to have collected 3+ billion facial images from open sources – news media, social media, and the web – to develop its AI. Irrespective of how Clearview's tool functions, the fact remains that there is little public oversight, transparency of use, or accountability for misuse of data.

Findings of the Investigation

BuzzFeed's investigation shows that more than 7,000 individuals from nearly 2,000 public agencies in the US have used Clearview AI for various purposes – searching for Black Lives Matter protesters, Capitol insurrectionists, or their own family and friends. The agencies using the tool include local and state police, US Immigration and Customs Enforcement, the Air Force, state healthcare organizations, offices of state attorneys general, and public schools. Interestingly, employees of these departments were using the facial recognition AI without the knowledge or sanction of their supervisors. Moreover, responses showed that many supervisors would have regulated or even banned the use of the tool had they been aware of it.

Collection of Data

The investigation is based on records of Clearview AI searches conducted between 2018 and February 2020. It also relied on public records and on information gathered through outreach to the agencies involved. BuzzFeed cross-checked the authenticity of that data with Clearview AI, which refused to confirm or deny it. Additionally, 337 public entities in the dataset confirmed to BuzzFeed News that their employees had used or tested Clearview AI, while 210 denied its use. The majority of the entities (1,159) never responded to BuzzFeed's queries.

The collected data is indicative of Clearview AI's broad reach, extending to federal agencies around the country – so much so that the tool could have been used in a small town with no one the wiser.

Nathan Freed Wessler, a senior staff attorney with the American Civil Liberties Union, told BuzzFeed News: "Protecting privacy means maintaining control of private information that is most revealing of our identities, our activities, our selfhood, and Clearview's business is based on taking that control away from people without their consent... You can change your Social Security number if it is taken without your consent, but you can't change your face."

Response to Clearview in Other Jurisdictions

The use of Clearview by law enforcement and other public agencies has put them at odds with data protection authorities. On February 5, Canadian privacy authorities declared Clearview AI's facial recognition service illegal for collecting highly sensitive biometric data without consent. Following this, in mid-February, Sweden's data protection watchdog fined the local police for unlawful use of Clearview AI. The UK has also ruled certain police facial recognition deployments unlawful.

Accuracy of Clearview AI

BuzzFeed interviewed dozens of officers for its investigation. Ironically, the officers reported that facial recognition searches through Clearview AI frequently yielded ineffective results. Detective Adam Stasinopoulos of the Forest Park Police Department in Illinois told BuzzFeed that the department stopped using Clearview after its free trial expired, citing false positives in search results. Mutale Nkonde, CEO of AI for the People, told BuzzFeed News: "The technology is not built to identify Black people."

BuzzFeed says it asked a person with access to Clearview AI to conduct searches. He ran 30 searches, including some with computer-generated photos. In response to two computer-generated photos – one of a woman of color and one of a girl of color – the tool returned results of real people. It matched an artificial face to an image of a real girl of color in Brazil whose school had posted her picture to Instagram. This shows how adversely Clearview AI's technology can affect people's lives: it can flag innocent, unrelated people as potential suspects and persons of interest.

Conclusion

This investigation reveals how companies are using our social media profiles to offer solutions to law enforcement agencies. It is not a stretch of the imagination to assume law enforcement and intelligence agencies are also creating their own AI systems for domestic surveillance purposes.

In light of these developments and the rapid adoption of facial recognition technology in governance, the lack of regulatory oversight is keenly felt. Without clear guidelines, the use of facial recognition technology can lead to serious violations of privacy. India, too, has not shied away from regularly using the technology.

The CBSE has launched a facial recognition system for accessing documents. The Delhi government is using it in schools. Indian Railways is also investing in such technology: Bengaluru City railway station has put out a procurement notice, and Central Railway has moved to install facial recognition systems for registering employee attendance. The Lucknow police are deploying the technology to identify women in distress and alert police officials.


Do subscribe to our Telegram channel for more resources and discussions on technology law and news. To receive weekly updates and a massive monthly roundup, don't forget to subscribe to our Newsletter.

You can also follow us on Instagram, Facebook, LinkedIn, and Twitter for frequent updates and news flashes about #technologylaw.
