Research technique generates images from brain activity

Can 'simply' generate accurate, high-resolution images

Researchers propose novel stable diffusion AI technique to generate images from brain activity


Researchers from Osaka University's Graduate School of Frontier Biosciences claim to have devised a novel approach based on a diffusion model (DM) to reconstruct images from functional magnetic resonance imaging (fMRI) data obtained from human brain activity.

Authors Yu Takagi and Shinji Nishimoto claim their proposed technique can simply generate accurate, high-resolution images, eliminating the need for further training and fine-tuning of complex deep-learning models.

Reconstructing visual images from brain activity, measured by fMRI, is a difficult task due to the brain's underlying representations being largely unexplored. In addition, the sample size of brain data is usually relatively small.

Scientists have recently begun to tackle this challenge using deep-learning models and algorithms, such as generative adversarial networks (GANs) and self-supervised learning techniques.

More recent investigations have achieved higher accuracy by explicitly integrating images' semantic content as auxiliary inputs for the reconstruction process.

The Japanese researchers suggest a method for reconstructing visual images from fMRI signals, using a latent diffusion model (LDM) called Stable Diffusion.
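
The key to avoiding heavy model training, per the paper, is that brain activity can be mapped to the diffusion model's latent representations with simple linear (ridge-style) regression. The sketch below illustrates that kind of mapping on synthetic data; all shapes, names, and the regularisation value are illustrative assumptions, not details from the study.

```python
import numpy as np

# Hypothetical shapes: n trials of fMRI responses (voxels) and the
# latent features of the images shown on those trials.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_latent = 200, 500, 64
X = rng.standard_normal((n_trials, n_voxels))         # fMRI responses
W_true = rng.standard_normal((n_voxels, n_latent))
Z = X @ W_true + 0.1 * rng.standard_normal((n_trials, n_latent))  # image latents

# Closed-form ridge regression: W = (X^T X + alpha * I)^-1 X^T Z
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# Latents predicted from new brain activity would then be handed to the
# diffusion model to render an image; here we just check the training fit.
Z_pred = X @ W
print(np.corrcoef(Z.ravel(), Z_pred.ravel())[0, 1])  # high on training data
```

The point of the linear mapping is data efficiency: with only a few thousand fMRI trials per subject, fitting a regularised linear model is far more tractable than fine-tuning a deep network.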

Stable Diffusion is a latent diffusion model: a deep generative network that produces images by starting from random noise and removing that noise step by step.

The algorithm is based on the principles of the diffusion process: during training, a signal is gradually corrupted with noise over many steps, and the model learns to reverse that corruption, recovering structure from noise.
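
As a toy illustration of the forward half of that process, the sketch below repeatedly mixes a signal with Gaussian noise until nothing of the original remains; a diffusion model is trained to undo these steps one at a time. The linear noise schedule is a common DDPM-style choice and an assumption here, not taken from the study.

```python
import numpy as np

# Toy forward diffusion: repeatedly mix a signal with Gaussian noise.
rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, 4 * np.pi, 1000))  # stand-in for an image
x = x0.copy()

# Linear noise schedule over 1,000 steps (assumed, DDPM-style values)
betas = np.linspace(1e-4, 0.02, 1000)
for beta in betas:
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)

# After all steps, x is essentially pure Gaussian noise: it no longer
# correlates with the original signal.
print(abs(np.corrcoef(x0, x)[0, 1]))
```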

Researchers can apply the technique to better understand how different brain regions interact and process information.

The researchers' LDM architecture is trained on an extensive dataset and exhibits "exceptional" text-to-image generative performance, the paper says.

For this project, the researchers utilised the Natural Scenes Dataset (NSD), which includes data collected from a 7-Tesla fMRI scanner across 30-40 sessions.

During these sessions, each participant was exposed to three repetitions of 10,000 images. The researchers examined data from four out of the eight subjects who participated in all the imaging sessions.

They say the results demonstrate a "promising approach" for reconstructing images from human brain activity, and a new framework for understanding diffusion models.

The research also provides a neuroscientific perspective for quantitatively interpreting various LDM components.

Researchers have been utilising AI models to decode information from the human brain for several years.

In 2019, a group of Russian researchers said they had devised a mind-reading neural network with the ability to decipher brainwaves and generate real-time visual representations of what an individual was observing.

A Meta research team, led by Jean-Remi King, has also published a study that extracts text from fMRI data.

Last year, researchers at Radboud University in the Netherlands said they trained a generative AI network, a precursor to Stable Diffusion, on fMRI data gathered from 1,050 distinct faces, and subsequently converted the brain imaging outcomes into real images.

The study showed the AI was capable of performing "unparalleled" stimulus reconstruction.