AMD's next-gen APUs to boost performance with unified memory

The Kaveri APU, due later this year, will let the CPU and GPU access the same memory space

AMD aims to unlock the performance potential of its APU chips by giving the CPU and GPU cores access to a unified memory space, a move it hopes will drive development of applications that use both types of core.

Announced today, heterogeneous Uniform Memory Access (hUMA) will enable both CPU and GPU processes to allocate memory from anywhere within the available memory space.

It is set to debut in AMD's third-generation accelerated processing units (APUs), codenamed Kaveri, due for release in the second half of 2013.

hUMA is intended to make it easier for programmers to create applications that use both types of core, eliminating the need for special APIs, according to AMD. The technology thus moves AMD closer to delivering on the promise of what it calls the Heterogeneous System Architecture (HSA).

All of AMD's APUs since the first generation launched in 2011 have used separate memory spaces for the CPU and GPU cores. To take advantage of the GPU's ability to handle parallel workloads, developers have had to use the CPU to move data into the GPU's memory space, then retrieve the results once the computation finished.
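
For a sense of what that staging dance looks like in practice, the sketch below uses the OpenCL 1.x host API, the most common way to program AMD's GPUs at the time. It is illustrative only: the kernel launch is elided, error handling is omitted, and the buffer size is arbitrary.

    /* The pre-hUMA pattern: stage data into the GPU's memory space,
       run the kernel, then copy the results back out. */
    #define CL_TARGET_OPENCL_VERSION 120
    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    #define N 1024

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

        float *input = malloc(N * sizeof(float));
        float *output = malloc(N * sizeof(float));
        for (int i = 0; i < N; i++) input[i] = (float)i;

        /* 1. Allocate a buffer in the GPU's separate memory space. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                    N * sizeof(float), NULL, &err);

        /* 2. The CPU copies the data over (blocking write). */
        clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, N * sizeof(float),
                             input, 0, NULL, NULL);

        /* ... enqueue a kernel that reads and writes buf ... */

        /* 3. The CPU copies the results back (blocking read). */
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, N * sizeof(float),
                            output, 0, NULL, NULL);

        clReleaseMemObject(buf);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        free(input);
        free(output);
        return 0;
    }

Both transfers cross between memory spaces, and for short kernels the copying can cost more than the computation itself.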

Under hUMA, the CPU can instead pass the GPU a pointer telling it where in the unified memory space to find the data, which simplifies programming and boosts performance by eliminating the need to move the data around.
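
The closest developer-facing expression of this model is the shared virtual memory (SVM) API in OpenCL 2.0, which lets the host hand the GPU a raw pointer into a shared allocation. The sketch below assumes a context, queue, and kernel already set up as in the previous example, and a device that supports fine-grained SVM (so the host can touch the allocation without map/unmap calls); it illustrates the pointer-passing idea rather than any API AMD has announced.

    /* With a unified memory space there is one allocation, visible to
       both CPU and GPU, and the kernel receives a pointer to it. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>

    void run_shared(cl_context ctx, cl_command_queue queue, cl_kernel kernel) {
        size_t n = 1024;

        /* One allocation shared by CPU and GPU; fine-grained SVM means
           no explicit map/unmap is needed for host access. */
        float *data = clSVMAlloc(ctx,
                                 CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                 n * sizeof(float), 0);

        for (size_t i = 0; i < n; i++)
            data[i] = (float)i;                     /* CPU writes in place */

        clSetKernelArgSVMPointer(kernel, 0, data);  /* pass the pointer itself */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(queue);

        float first = data[0];                      /* CPU reads results directly */
        (void)first;

        clSVMFree(ctx, data);
    }

No clEnqueueWriteBuffer or clEnqueueReadBuffer appears anywhere: the data never moves, only the pointer does.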

"HSA's revolutionary memory architecture is a new standard for high-speed GPU access to the system memory and removing the obstacle of having the GPU 'starved for data," said AMD's Sasa Marinkovic, writing on the company's blog.

HSA will empower software developers to innovate and unleash new levels of performance and functionality on modern devices, Marinkovic added, and will lead to powerful new experiences such as visually rich, intuitive, human-like interactivity.

The new approach also improves cache coherency, according to AMD, as both the CPU and GPU caches can see an up-to-date view of data across the memory space.