New model to identify 'blind spots' in AI systems developed by MIT researchers

Model could be used to improve the safety of AI used by self-driving vehicles and autonomous robots

Artificial intelligence (AI) systems can sometimes miss important details and commit major errors. But researchers from the Massachusetts Institute of Technology (MIT) and Microsoft claim to have developed a new model that can detect such mistakes.

According to the researchers, their assessment model identifies instances where the situations an autonomous system learns from during its training programme do not match the events that happen in the real world.

The research team suggests their model could be used to improve the safety of AI systems such as self-driving vehicles and autonomous robots.

To illustrate how their model works, the researchers give the example of an ambulance and a large white car, which a self-driving car may find very similar in appearance.

If the car has not been trained to distinguish between these two types of vehicle, or is not equipped with the sensors needed to do so, it will not know that it should slow down and pull over when an ambulance approaches with its red lights flashing. Scenarios of this kind are described as blind spots in AI training.

In the new model developed by the researchers, an AI system is first put through simulation training, during which it learns a policy describing the best action to take in each situation it encounters. The system is then run in the real world, where its actions are closely monitored by a human who gives an error signal whenever the system makes, or is about to make, a mistake.
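For illustration only, here is a minimal sketch of the kind of simulation-trained policy described above, using a toy tabular Q-learning loop. The states, actions, rewards and simulator below are invented stand-ins, not the researchers' actual system.

```python
# Toy sketch: learn a policy from simulated experience with tabular Q-learning.
# All names and numbers here are hypothetical illustrations.
import random
from collections import defaultdict

STATES = ["clear_road", "white_car_ahead", "ambulance_ahead"]
ACTIONS = ["keep_going", "slow_down_and_pull_over"]

def simulate_step(state, action):
    """Hypothetical simulator step: returns (reward, next_state)."""
    # In this toy simulator the ambulance never appears, so the agent
    # can never learn to pull over -- the seed of a later blind spot.
    reward = 1.0 if action == "keep_going" else 0.0
    return reward, random.choice(["clear_road", "white_car_ahead"])

def train_policy(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> estimated value
    state = "clear_road"
    for _ in range(episodes):
        # Epsilon-greedy action selection during training.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward, next_state = simulate_step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    # The learned policy: the best-known action for each simulated state.
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

policy = train_policy()
print(policy)  # e.g. {'clear_road': 'keep_going', 'white_car_ahead': 'keep_going', ...}
```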

Humans can input signals to the system through 'corrections' or 'demonstrations'. In corrections, the AI system acts in the real world while the human monitors it. If the system takes an incorrect action, the human gives an error signal to inform the system that its action was unacceptable in that specific situation.
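The corrections mode could be sketched roughly as follows; the function names, the toy policy and the human oracle are hypothetical, chosen only to make the idea concrete.

```python
# Rough sketch of the 'corrections' mode with hypothetical names: the trained
# policy acts in the real world while a human monitor flags any action that is
# unacceptable in the current situation.
def collect_corrections(policy, real_world_states, human_flags_error):
    """Return (state, action, acceptable) records gathered during deployment.

    human_flags_error(state, action) stands in for the human monitor and
    returns True when the chosen action is unacceptable in that situation.
    """
    feedback = []
    for state in real_world_states:
        action = policy.get(state, "keep_going")  # fall back to a default action
        acceptable = not human_flags_error(state, action)
        feedback.append((state, action, acceptable))
    return feedback

# Example: the simulation-trained policy never learned to pull over, and the
# human flags that as an error when a real ambulance appears.
policy = {"clear_road": "keep_going", "ambulance_ahead": "keep_going"}
human = lambda s, a: s == "ambulance_ahead" and a != "slow_down_and_pull_over"
print(collect_corrections(policy, ["clear_road", "ambulance_ahead"], human))
```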

Alternatively, humans can provide demonstrations: the human acts in the real world while the AI system observes. The system then compares the human's actions to what it would have done in the same situation.
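A comparable sketch of the demonstrations mode, again with made-up names and a toy policy, might look like this.

```python
# Rough sketch of the 'demonstrations' mode: the human acts in the real world
# while the system compares each human action to what its own policy would
# have chosen in the same situation.
def collect_demonstrations(policy, human_trajectory):
    """human_trajectory: list of (state, human_action) pairs observed while
    the human acts. Returns (state, planned_action, acceptable) records."""
    feedback = []
    for state, human_action in human_trajectory:
        planned = policy.get(state)
        # A mismatch suggests the policy may be wrong, or blind, in this state.
        acceptable = (planned == human_action)
        feedback.append((state, planned, acceptable))
    return feedback

policy = {"clear_road": "keep_going", "ambulance_ahead": "keep_going"}
demo = [("clear_road", "keep_going"),
        ("ambulance_ahead", "slow_down_and_pull_over")]
print(collect_demonstrations(policy, demo))
```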

The simulation training data is then combined with the human feedback data, and machine-learning techniques are used to build a model that highlights situations in which the AI system needs more information to act correctly in the real world.
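As a rough illustration of this last step, one naive way to aggregate the two feedback streams into a blind-spot estimate is a per-situation error rate; the researchers' model relies on machine-learning techniques rather than this simple tally, so the code below is only a sketch of the idea.

```python
# One simple way (not the researchers' exact method) to combine the feedback:
# estimate how often the policy's action was flagged as unacceptable in each
# situation, and mark high-error situations as candidate blind spots.
from collections import defaultdict

def find_blind_spots(feedback, threshold=0.5):
    """feedback: (state, action, acceptable) records from both corrections and
    demonstrations. Returns the states whose error rate exceeds the threshold."""
    errors, totals = defaultdict(int), defaultdict(int)
    for state, _action, acceptable in feedback:
        totals[state] += 1
        if not acceptable:
            errors[state] += 1
    return {s for s in totals if errors[s] / totals[s] > threshold}

feedback = [("clear_road", "keep_going", True),
            ("ambulance_ahead", "keep_going", False),
            ("ambulance_ahead", "keep_going", False)]
print(find_blind_spots(feedback))  # {'ambulance_ahead'}
```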

The papers describing the findings of the study were presented at the Autonomous Agents and Multi-agent Systems conference last year. These papers will also be presented at the upcoming Association for the Advancement of Artificial Intelligence conference.