What's the point of secure software if you can't trust your CPU?

Modern chip designs make it increasingly difficult for developers to be sure that systems are not compromised, says security expert Joanna Rutkowska

What's the point of developing secure software when the infrastructure it runs on cannot be trusted?

That's the question Joanna Rutkowska posed in a presentation to the Chaos Computer Club in Germany last month. Rutkowska is a security researcher and head of Invisible Things Lab, which develops the security-focused operating system Qubes OS.

The components that run a PC or laptop, each with its own firmware, have always been fairly opaque in how they operate and interact, but that opacity is increasing for reasons that are part and parcel of developments in CPU design. Logical functions that used to be carried out by other components on the motherboard, or by the operating system and application layers, are increasingly being moved into the CPU package. There are solid engineering reasons for this shift, but the downside is a loss of the visibility and fine-grained control developers once had.

Rutkowska gave the example of the Intel x86 processors that power many of the world's personal computers. These, she said, are now far more than just a central processing unit: they integrate graphics, memory, PCIe, SATA and USB controllers as well as SPI flash modules and something called the Management Engine (ME), which is an embedded microcomputer with access to many other components as well as its own memory and storage.
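The ME is not entirely invisible, however: on a typical Linux machine it shows up as a PCI "communication controller" (the MEI, formerly HECI, host interface) and, when the kernel driver is loaded, as a /dev/mei device node. The short Python sketch below is purely illustrative and not something from Rutkowska's talk; it simply looks for those traces, and device naming varies from platform to platform.

```python
#!/usr/bin/env python3
"""Rough sketch: look for traces of the Intel Management Engine's host
interface (MEI/HECI) on a Linux machine. Assumes pciutils (lspci) is
installed; device names vary between platforms and kernel versions."""

import os
import subprocess


def mei_pci_lines():
    """Return lspci output lines that mention the ME host interface."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    keywords = ("MEI", "HECI", "Management Engine")
    return [line for line in out.splitlines() if any(k in line for k in keywords)]


def mei_device_nodes():
    """Return /dev/mei* nodes created by the Linux 'mei' driver, if present."""
    return sorted("/dev/" + n for n in os.listdir("/dev") if n.startswith("mei"))


if __name__ == "__main__":
    for line in mei_pci_lines():
        print("PCI device: ", line.strip())
    for node in mei_device_nodes():
        print("Device node:", node)
```

Finding the interface proves only that the ME is there; what the firmware behind it actually does remains, as Rutkowska argues, unverifiable.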

While there is no indication that these components have been used for any purpose other than efficient operation and cost-effective manufacturing, Rutkowska's contention is that some of the processes that are now handled exclusively by the CPU could be hijacked to introduce backdoors and malware. If this were to occur, we would have no way of knowing, since the firmware is proprietary, its operations impenetrable, and data stored in the flash elements of the CPU is encrypted.

"Intel wants to eliminate all the logic that touches data in the apps and the operating system layers and move it to ME," she said. "Intel ME is also an operating system, but no one knows how it works. Any functionality it offers is fully controlled by Intel."

This lack of openness makes the system vulnerable to bugs, she argued, which could be exploited by malware writers.

"If I was to imagine an ideal environment for writing rootkits, I couldn't imagine anything better than ME, because it has access to everything that is important."

Potentially, ME could access, store and transmit disk encryption keys and personally identifiable information, she said, adding that initial experiments by her group suggest that ME can also bypass Intel's built-in virtualisation technology, VT-d, which enables the "security by isolation" capabilities deployed by Qubes OS.
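Whether the IOMMU that VT-d provides is actually enabled is at least something a user can check. The Linux-only Python sketch below is a rough illustration, not part of Qubes OS or Rutkowska's research: when the IOMMU is active, the kernel exposes isolation groups under /sys/kernel/iommu_groups, and it is this kind of device isolation that Qubes relies on.

```python
#!/usr/bin/env python3
"""Rough, Linux-only sketch: check whether an IOMMU (e.g. Intel VT-d) is
active by counting the isolation groups the kernel exposes in sysfs.
The sysfs path is an assumption about a typical Linux layout, not a
Qubes-specific API."""

from pathlib import Path

IOMMU_GROUPS = Path("/sys/kernel/iommu_groups")


def iommu_group_count():
    """Number of IOMMU groups, or 0 if the IOMMU is disabled or absent."""
    return len(list(IOMMU_GROUPS.iterdir())) if IOMMU_GROUPS.is_dir() else 0


if __name__ == "__main__":
    count = iommu_group_count()
    if count:
        print(f"IOMMU active: {count} isolation groups")
    else:
        print("No IOMMU groups found: VT-d may be disabled in firmware or the kernel")
```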

Furthermore, she added, as Intel and other chipmakers move to incorporate logic currently handled by applications and operating systems into their processors, there is a danger of the "zombification" of general-purpose computing, as control, visibility and the ability to verify firmware and operations are progressively taken away from developers.

Even if Intel were to release the source code for ME, Rutkowska went on, the architecture of modern processors is so complex that even the most expert developers would struggle to understand the system. It is impossible, for example, to know what is being written to the various flash modules where the BIOS, wireless and other controller firmware resides, so users and software developers have no option but to trust that boot processes have not been compromised.
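In practice, about the best a determined user can do is dump the flash image with a tool such as flashrom (for example, flashrom -p internal -r firmware.bin) and compare it with a hash recorded earlier, which falls well short of real verification. The Python sketch below illustrates only that comparison step and assumes a dump obtained separately; on many boards the ME region of the flash cannot even be read from the host, which rather underlines Rutkowska's point.

```python
#!/usr/bin/env python3
"""Illustrative sketch: compare a firmware dump against a previously
recorded "known good" SHA-256 hash. The dump file is assumed to come from
an external tool such as flashrom; on many boards the ME region is
read-protected, so even this partial check may not be possible."""

import hashlib
import sys


def sha256_of(path):
    """Return the SHA-256 hex digest of a (possibly large) firmware dump."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    dump_path, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(dump_path)
    print("firmware matches recorded hash" if actual == expected
          else f"MISMATCH: {actual}")
```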

A possible solution, Rutkowska said, is a new architecture that removes all the "persistent state" elements (anything that retains data after shutdown or writes it to disk or memory) from the processor and places them on a removable stick, putting control back in the hands of the user. Such "stateless hardware" would be more secure as it would "eliminate firmware infections, remove places where stolen secrets can be stored, and provide a reliable way to verify and choose firmware", she said.

However, she conceded that there is little commercial imperative for chipmakers to do this, and said it is up to engineers to start working to provide secure open source hardware.

The world of open source hardware and its associated firmware is decades behind its software equivalent. Rutkowska namechecked projects like OpenSSD, but said that an open-source alternative to x86 and to the equivalent architectures from AMD and ARM is still at least five years away. Until then, users should assume that their personal computers are ultimately compromised.

"Personal computers are extensions of our brains yet they are insecure and untrustworthy," she said. "But if the world runs on computers, shouldn't it be engineers [rather than business interests] that decide how it runs?"
