Facial recognition: Legal complaints lodged against Clearview AI in five countries

Company has built a business on the faces of 3 billion people scraped from the web without their knowledge

Privacy campaigners filed a series of legal complaints with five European regulators against the US tech firm Clearview AI on Thursday, alleging the company scraped facial images of 3 billion people from the web without their knowledge or permission, in contravention of the GDPR and other regulations.

New York-based Clearview AI sells facial recognition software to law enforcement agencies and businesses. Its customer list includes banks, governments, many US police forces and the London Metropolitan Police.

The company built its machine learning models using data scraped from public websites such as Instagram and Facebook. It has raised a total of $17 million in equity, with early investors including Palantir co-founder Peter Thiel and Kirenaga Partners, but has faced multiple legal challenges.

In 2020, Twitter, Facebook and YouTube threatened legal action against the firm if it did not stop the practice, and Clearview AI has faced multiple privacy lawsuits in the US and elsewhere. Sweden's data protection authority fined the country's police authority for using Clearview's software to identify people, and the company stopped operating in Canada after being told to delete Canadian citizens' images. Clearview claims it does not operate in the EU, but on Thursday four campaigning groups - Privacy International, noyb, the Hermes Centre for Transparency and Digital Human Rights, and Homo Digitalis - filed a series of complaints with four EU regulators and the UK ICO.

The complainants say the harvesting and biometric analysis of photos of faces constitutes mass processing of European residents' personal data, which is illegal under the GDPR, and that Clearview has no lawful basis to collect this data.

They also allege the use of Clearview's tool by law enforcement authorities breaches the EU Law Enforcement Directive, as transposed into law by member states.

"The use of such an invasive, privately developed facial recognition database enabling social media intelligence by law enforcement would not be based on law, nor would it be necessary and proportionate," the claim says, as published on Privacy International's website.

The regulators have three months to respond to the complaints.

This case, and others like it, highlights the gap between common perceptions of the 'public web' and the uses to which information and personal data found there may legally be put.

Lucie Audibert, a legal officer at Privacy International, said: "Clearview seems to misunderstand the internet as a homogeneous and fully public forum where everything is up for grabs. This is plainly wrong. Such practices threaten the open character of the internet and the numerous rights and freedoms it enables."

Clearview CEO Hoan Ton-That said in a statement that the company does not operate in Europe: "Clearview AI has never had any contracts with any EU customer and is not currently available to EU customers."

In April, the EU proposed a sweeping ban on 'high-risk' uses of facial recognition technology, including its deployment in public places.