New university research warning over voice recognition biometric security
Rory Bremner could have a lucrative criminal career cracking voice recognition biometric security systems...
Voice recognition systems used as a form of biometric authentication can be easily fooled by impersonators, according to a new study by researchers from the University of Eastern Finland.
The systems, claim the researchers, aren't accurate enough in recognising voice modifications to provide genuine security.
The study shows that cyber criminals could use different technologies to fool the voice recognition software, such as voice conversion, speech synthesis and replay attacks - or just impersonating the voice of someone else.
Voice impersonation, the university noted, is common in the entertainment industry: professionals and amateurs alike copy the voice characteristics of other speakers, notably public figures.
There's also the issue of voice disguise, where speakers change the way they speak to avoid being recognised. Voice disguise is common in situations that don't involve face-to-face communication.
As a result, people can use disguised voices to blackmail others or make threatening calls. Because of this, there's a need to improve the robustness of voice recognition systems so they cannot be fooled so easily by voice modifications.
In the study, researchers analysed the speech of two professional impersonators mimicking eight Finnish public figures, as well as acted speech from 60 Finnish speakers who took part in recording sessions.
The speakers were asked to disguise their age by changing their voices to sound older or younger, and most of them were able to fool the speech recognition systems.
Tom Harwood, chief product officer and co-founder at Aeriandi, said: "Biometrics technology has been shown to significantly reduce fraud, especially in the financial sector - but it's not the whole solution. Earlier this year, twins tricked the HSBC voice biometrics security system, and this instance showed that no security technology is 100% fool-proof.
"Technology advances have also shown that it is now possible to cheat voice recognition systems. Voice synthesiser technology is a great example.
"It makes it possible to take an audio recording and alter it to include words and phrases the original speaker never spoke, thus making voice biometric authentication insecure."
He added: "The good news is that there is a way to protect against phone fraud beyond biometrics - and that's fraud detection technology. Fraud detection on voice looks at more than the voice print of the user; it considers a whole host of other parameters.
"For example, is the phone number being used legitimate? Where is the caller located? Increasingly, phone fraud attacks on UK banks come from overseas. Voice fraud detection technology has been proven to protect against this as well as domestic threats."
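The multi-parameter approach Harwood describes can be illustrated with a toy sketch. This is not Aeriandi's actual system: all field names, weights, and thresholds below are hypothetical, chosen only to show how a voice-print score might be combined with call metadata (number legitimacy, caller location) into a single risk signal.

```python
# Illustrative sketch only: combining a voice-biometric score with call
# metadata, in the spirit of the fraud detection described in the article.
# All names, weights, and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class CallContext:
    voiceprint_score: float    # 0.0-1.0 similarity to the enrolled voice print
    number_on_allowlist: bool  # is the calling number known and legitimate?
    caller_country: str        # country code inferred from network metadata
    home_country: str = "GB"   # expected country for the account holder


def fraud_risk(call: CallContext) -> float:
    """Combine several weak signals into one risk score (0 = safe, 1 = high risk)."""
    risk = 0.0
    # A biometric match alone should not authenticate the caller,
    # so a weak voice-print score only contributes part of the risk.
    risk += (1.0 - call.voiceprint_score) * 0.5
    if not call.number_on_allowlist:
        risk += 0.25
    # Per the article, many phone fraud attacks on UK banks come from overseas.
    if call.caller_country != call.home_country:
        risk += 0.25
    return min(risk, 1.0)


# A strong voice match from a known domestic number scores low risk;
# a weak match from an unknown overseas number scores high risk.
safe = fraud_risk(CallContext(0.9, True, "GB"))
risky = fraud_risk(CallContext(0.2, False, "RU"))
print(safe < 0.1 < risky)
```

The point of the sketch is the design choice Harwood argues for: no single check decides the outcome, so a spoofed or synthesised voice that passes the biometric test can still be flagged by the metadata checks.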