Technology to convert brain signals into speech developed by Columbia University scientists

System uses a computer algorithm similar to those used in smart assistants, such as Apple Siri and Amazon Echo

Scientists at Columbia University have developed technology that, they claim, is able to convert thoughts into recognisable speech.

The system uses speech synthesisers and artificial intelligence (AI) to translate brain signals into words and could help people with communication problems to regain their ability to converse with others.

When people speak (or imagine speaking), measurable patterns of activity appear in their brain, as shown by earlier studies. Similarly, distinct patterns of brain activity emerge when people listen to (or imagine) someone speaking.

For decades, scientists have been trying to decode such activity patterns in the human brain and convert them into recognisable words. However, the feat has proved highly challenging.

"Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastating," says Dr Nima Mesgarani, associate professor of electrical engineering at Columbia University, and the lead researcher of the study.

"With today's study, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."

In the current study, Mesgarani's team used AI to translate brain activity patterns into words. The experiments were conducted with a group of epilepsy patients undergoing brain surgery by Dr Ashesh Dinesh Mehta at the Northwell Health Physician Partners Neuroscience Institute.

These patients were asked to listen to words spoken by other people while the researchers measured their brain activity patterns.

A computer algorithm, called a vocoder, was used to translate the recorded patterns into words. According to the researchers, this algorithm is similar to those used in smart assistants like Apple's Siri and Amazon's Echo, and can be trained on voice recordings to synthesise speech.

Finally, the words generated by the vocoder were converted into speech through a robotic voice system.
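The two-stage idea described above — decode a sound representation from neural activity, then resynthesise audio from it — can be illustrated with a toy sketch. Everything here is hypothetical: the simulated "neural" data, the linear decoder, and the crude sinusoid-bank resynthesis are stand-ins chosen for brevity, not the study's actual recordings, model, or vocoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-in data (not the study's recordings) ---
n_frames, n_electrodes, n_freq_bins = 200, 64, 32

# Simulated auditory spectrogram of the speech the subject "heard"
true_spec = np.abs(rng.standard_normal((n_frames, n_freq_bins)))

# Simulated neural recordings: a noisy linear mixture of that spectrogram
mixing = rng.standard_normal((n_freq_bins, n_electrodes))
neural = true_spec @ mixing + 0.1 * rng.standard_normal((n_frames, n_electrodes))

# --- Stage 1: decode the spectrogram from neural activity (least squares) ---
weights, *_ = np.linalg.lstsq(neural, true_spec, rcond=None)
decoded_spec = neural @ weights

# --- Stage 2: crude "vocoder" — resynthesise audio as a bank of sinusoids
# whose amplitudes follow the decoded spectrogram (real vocoders are far
# more elaborate) ---
sr, hop = 8000, 80
freqs = np.linspace(100, 3800, n_freq_bins)
t = np.arange(n_frames * hop) / sr
frame_idx = np.minimum(np.arange(t.size) // hop, n_frames - 1)
envelope = decoded_spec[frame_idx]            # (samples, n_freq_bins)
audio = (envelope * np.sin(2 * np.pi * freqs * t[:, None])).sum(axis=1)

# Decoding quality: correlation between true and decoded spectrograms
corr = np.corrcoef(true_spec.ravel(), decoded_spec.ravel())[0, 1]
print(f"spectrogram correlation: {corr:.2f}")
```

Because the toy neural signal is an almost-noiseless linear mixture, the linear decoder recovers the spectrogram well; real intracranial data is far noisier and nonlinear, which is why the study relied on trained neural-network models rather than a simple regression.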

The accuracy of the speech produced was tested on another group of individuals who were asked to listen to the sounds and reveal what they heard. According to Dr Mesgarani, people could understand the sounds about 75 per cent of the time.

Researchers believe the system could be further refined to design a wearable brain-computer interface able to convert an individual's thoughts, such as 'I need a glass of water', into speech or text.

"This would be a game changer," says Dr Mesgarani.

The findings of the study are published in the journal Scientific Reports.