The UK’s Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC) have concluded their joint investigation into the personal information handling practices of Clearview AI. The two authorities opened the investigation in July last year, focusing on the company’s use of ‘scraped’ data and biometrics of individuals.
What is Clearview AI?
Clearview has a facial recognition application that allows users to upload a photo of an individual and match it to photos of the person collected from the internet. The entire Clearview database reportedly consists of more than three billion images that the company has ‘scraped’ from the internet, including social media websites.
Multiple jurisdictions have declared Clearview’s practices illegal. On 5 February, Canada banned Clearview AI’s facial recognition service for collecting highly sensitive biometric data without consent. Following this, in mid-February, Sweden’s data watchdog fined local police for unlawful use of Clearview AI. The UK has also declared unlawful the use of facial recognition that searches for people in public places. Most recently, the EU Parliament has called for a permanent ban on AI-based facial recognition.
One investigative report suggested that U.S. law enforcement is using Clearview AI without authorization.
What does the investigation reveal?
The ICO and OAIC worked together on the evidence-gathering stage of the investigation, according to a press release. However, since the two countries’ data protection regimes differ, each will now separately consider Clearview’s use of the technology.
While the UK ICO is still considering its next steps, the OAIC has determined that Clearview breached the Australian Privacy Act 1988. It found that the company collected sensitive information without consent and by unfair means, failed to notify individuals whose information it collected, and failed to take reasonable steps to ensure compliance with the Act.
Ultimately, the OAIC ordered Clearview to stop collecting facial biometrics and biometric templates from Australians. Further, the company must destroy all existing images and templates that it holds.
Lack of Transparency and Monetisation of Individual Data
The OAIC highlighted the lack of transparency around Clearview AI’s collection practices and its monetisation of individuals’ data for a purpose entirely outside reasonable expectations.
Speaking on the order, OAIC Chair Angelene Falk said that the covert collection of sensitive information is “unreasonably intrusive and unfair”. She added:
“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes.”
Speaking on the adverse impact of the collection, she said that it carries a significant risk of harm to individuals, including vulnerable groups such as children and victims of crime. Further, “biometric identity information cannot be reissued or cancelled” and may create a risk of identity theft or misidentification.
Apart from the risk to privacy, such indiscriminate collection of information may also “adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance”, she said.
Clearview will appeal
As reported by The Guardian, Clearview intends to appeal the decision before the Administrative Appeals Tribunal. It argues that the images it collected were publicly available and that there was therefore no breach of privacy. It further argues that Australian law does not apply, since the company does not operate in Australia and has no Australian users.
You can read the full order here.