MIT researchers unveil mind-controlled robot

MIT team develops artificial-intelligence algorithms that relay human brain signals to robots

Computer scientists have demonstrated mind-controlled robots that correct their behaviour in response to human brain activity.

In the demonstration, the robots - essentially robot arms performing simple tasks - had to place either a canister of paint or a ball of wire into the appropriate box.

Meanwhile, a human controller wearing an electroencephalography (EEG) cap had their brainwaves monitored for the cognitive signs that the robot was about to make an error.

The readings from the EEG cap were fed through machine-learning algorithms developed by the researchers, which focused on ‘error-related potentials' - signals the brain produces when it detects a mistake. The robot, picking up the output of these algorithms, could then correct its behaviour.

These novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds, according to the team of computer scientists at Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University.
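
To make that latency claim concrete, here is a minimal sketch - in Python, with an assumed channel count, window length and stand-in weights rather than the team's actual model - of how a pre-trained linear classifier could score a short EEG window for an ErrP:

    import numpy as np

    # Illustrative dimensions only; the paper's actual electrode montage
    # and window length are not reproduced here.
    N_CHANNELS = 48        # electrodes on the EEG cap (assumed)
    WINDOW_SAMPLES = 60    # roughly 120 ms of data at 500 Hz (assumed)

    rng = np.random.default_rng(0)
    weights = rng.normal(size=N_CHANNELS * WINDOW_SAMPLES)  # stand-in for learned weights
    bias = 0.0

    def classify_errp(eeg_window):
        """Return True if the window looks like an error-related potential."""
        features = eeg_window.reshape(-1)      # flatten channels x samples
        score = features @ weights + bias      # a single dot product
        return score > 0.0

    window = rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES))
    print("ErrP detected" if classify_errp(window) else "no ErrP")

Because the decision reduces to one dot product over a few thousand features, the computation itself takes well under a millisecond; most of a 10 to 30 millisecond budget goes to accumulating the EEG window, not the maths.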

"Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word," said CSAIL director Daniela Rus. "A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven't even invented yet."

She continued: "As you watch the robot, all you have to do is mentally agree or disagree with what it is doing. You don't have to train yourself to think in a certain way - the machine adapts to you, and not the other way around."

In the past, controlling robots by thought via EEG caps required humans to "think" in a prescribed way that computers could recognise. For example, an operator might have to look at one of two bright lights, each corresponding to a different task for the robot to execute. That approach, however, demands a high degree of concentration.

Instead, the team focused on error-related potentials, or ErrPs, which are generated when the human brain notices what it regards as a mistake. As the robot indicates which choice it plans to make, the system uses ErrPs, read via the machine-learning algorithms, to determine whether the human agrees with the decision.

However, ErrP signals are extremely faint, which means that the system has to be fine-tuned to both classify the signal and incorporate it into the feedback loop for the human operator, according to CSAIL. In addition to monitoring the initial ErrPs, the team also sought to detect ‘secondary errors' that occur when the system doesn't notice the human's original correction.
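
As a rough illustration of that two-stage feedback loop - not the team's actual implementation - the sketch below flips the robot's planned choice when a primary ErrP is detected, and flips it once more if a secondary ErrP indicates the system misread the human; detect_errp() is a hypothetical stand-in for the EEG classifier:

    import random

    def detect_errp():
        """Hypothetical stand-in for classifying a fresh EEG window for an ErrP."""
        return random.random() < 0.3   # placeholder probability, illustrative only

    def choose_box(planned, alternative):
        choice = planned
        if detect_errp():              # primary ErrP: the human disagrees with the plan
            choice = alternative
        # The robot starts acting on `choice`. If the system misread the human,
        # the brain emits a secondary ErrP, and the choice is flipped once more.
        if detect_errp():
            choice = alternative if choice == planned else planned
        return choice

    print(choose_box("paint box", "wire box"))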

"If the robot's not sure about its decision, it can trigger a human response to get a more accurate answer," said Gil. "These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices."

The researchers have published an eight-page paper outlining their work.