CBP scanned 23 million faces in 2020, and didn't find a single imposter

Despite scanning millions of people, Customs and Border Protection failed to identify anyone trying to enter the USA under a false ID

US Customs and Border Protection (CBP) used biometric facial recognition tools on more than 23 million travellers in fiscal year 2020, according to its own report.

CBP said the facial comparison technology, deployed at more than 30 entry, exit and Preclearance locations, had a match rate of more than 97 per cent last year. It also revealed that, since 2018, the technology has identified seven imposters at US airports and 285 imposters attempting to enter by land. While CBP presents these figures positively, none of those individuals was identified in the last year. Even if all 292 had been found in 2020, that would represent a hit rate of only about 0.001 per cent (292 out of 23 million travellers).

'Biometrics have proven an effective tool to combat the use of stolen and fraudulent travel and identity documents,' the agency states in its annual report [pdf].

It considers biometric technology to be 'the way of the future', capable of delivering 'faster processing times for travellers'.

According to CBP, facial recognition technology could have a positive impact on the travel industry's ability to resume operations following the Covid-19 pandemic.

The technology has enabled travellers and the travel industry to practise sound public health measures, the agency says, and 'will be a key component in restoring consumer confidence that travel is safe'.

CBP reopened its proposed biometric data collection policy for public comment earlier this week.

The current regulations allow CBP to collect biometric data only at exit, at a limited number of air and sea ports, and from a limited population.

The proposed policy, if approved, would enable the agency to collect biometric data of non-US citizens upon entry to and departure from the US.

It would also allow CBP to collect photos or other biometrics from non-US citizens departing from any authorised point of departure - land, air or sea.

While CBP asserts that facial recognition tools are improving the efficiency and effectiveness of security screenings, not everyone agrees.

Researchers say the technology exhibits racial and gender bias and often produces unreliable results.

The USA's own Government Accountability Office (GAO) identified multiple issues with the CBP programme, including incomplete information in public notices and the absence of audit plans.

In 2019, San Francisco became the first US city to ban its police force from using facial recognition tools. The same year, the UK House of Commons Science and Technology Committee urged the government to suspend 'highly intrusive' trials of facial recognition technology, saying that no trials should take place until a legislative framework had been introduced.

In 2020, Microsoft announced that it would not sell its facial recognition software to police departments until a national law is in place to regulate the technology's use.

Microsoft's decision followed similar moves by IBM and Amazon, over concerns that the technology could be used to promote racial injustice and discrimination.