University of Washington uses deep learning to mimic advanced MRI body scanning tech

Holds potential to lower the costs of top-notch medical imaging

Researchers at the University of Washington have injected new life into older magnetic resonance imaging (MRI) machines using the power of deep learning.

By applying artificial intelligence (AI) to MRI data, the researchers have been able to mimic the results of newer magnetic resonance angiography (MRA) equipment, which captures more detailed images of blood vessels and enables the detection of aneurysms and other life-threatening conditions.

The university said the discovery holds the potential to lower the costs of such top-notch medical imaging, which would benefit hospitals unable to afford an MRA machine, especially those in rural areas and emerging markets.

Aaron Lee, an assistant professor of ophthalmology at the university, trained deep learning models to make inferences from single-shot structural images from both MRI and OCT (optical coherence tomography) machines, generating angiography imaging from each.

The work is a first of its kind in AI-driven medical image synthesis and could make assessments of a host of vascular diseases more widely available, Lee said.

"The idea that you can take a single snapshot and extrapolate what's happening is kind of mind-boggling," he said, explaining the advances poised to boost older machines.

To complete the research, the team used GPU-accelerated deep learning to train algorithms that map between OCT and OCT angiography (OCTA) images, as well as between MRI and MRA images, as sketched below.
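
The team's code and exact architecture are not described here, but the task is a standard paired image-to-image mapping. The following is a minimal sketch, assuming a PyTorch setup, paired and co-registered structural/angiography slices, and an illustrative convolutional encoder-decoder; none of the layer sizes or the loss function are taken from the study.

# Illustrative sketch (not the team's published code): a small encoder-decoder
# trained to map a structural image (e.g. MRI/OCT) to its angiography
# counterpart (MRA/OCTA), assuming paired, co-registered slices.
import torch
import torch.nn as nn

class Struct2Angio(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),            # downsample
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # downsample
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # back to 1 channel
        )

    def forward(self, x):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Struct2Angio().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise loss between predicted and real angiography

# Dummy stand-ins for a batch of paired (structural, angiography) slices.
structural = torch.randn(8, 1, 128, 128, device=device)
angiography = torch.randn(8, 1, 128, 128, device=device)

for step in range(100):
    pred = model(structural)
    loss = loss_fn(pred, angiography)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()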

The research team had nearly 2TB of data but lacked the computing heft to run its algorithms on it; using CPUs alone, processing the dataset would have taken years, Lee said. By taking advantage of advances in deep learning for computer vision and the application of GPUs, however, the researchers were able to put the dataset to work.

The team used Nvidia Titan X GPUs based on the Pascal architecture to speed up the training of large, deep convolutional neural networks.

"The graphics cards allowed us to do the algebra necessary for deep learning very quickly," Lee added.

The team hopes its methods can be applied to existing libraries of medical images to make screening for a variety of diseases easier.