Peter Cochrane: What we need to understand about the rise of distributed AI

In the world of hardware we have become expert at creating extremely reliable machines and artefacts from components of limited capability and resilience. Achieving the same in software is a much bigger challenge, and it is harder still in those spaces that mix the two.

Right from the off, in its most juvenile stage, software and computing provided many surprises and unexpected mysteries. And despite all our efforts to componentise software and replicate our hardware successes, we have failed. It is as if software refuses to migrate above some molecular analogy to any more solid and understandable form.

Today, the vast majority of operational surprises are attributed to software failures, but in reality we can no longer fully test the hardware, the software or, indeed, our networks. The dynamic, combinatorial complexity involved at every level renders exhaustive testing and characterisation fundamentally impossible, and it looks destined to remain so for some considerable time.

With chip technologies now seeing feature sizes down to 5nm or less, over 100 billion transistors per chip, and operating code often running to tens or even hundreds of thousands of lines, there seems little hope of ever fully 'understanding' what causes occasional spurious outcomes and behavioural anomalies. And all this can only get worse with the rollout of the IoT and of smart materials with embedded AI.
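To make the scale of the testing problem concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption rather than a measurement of any real chip: even a system with just 1,000 binary state elements, a vanishingly small fraction of a 100-billion-transistor device, has a state space that no test regime could ever enumerate.

```python
# Back-of-the-envelope sketch: why exhaustive testing is infeasible.
# All figures are illustrative assumptions, not measurements of real hardware.

state_bits = 1_000                 # a tiny fraction of a modern chip's state elements
states = 2 ** state_bits           # distinct internal states to cover
tests_per_second = 1e12            # wildly optimistic: one trillion tests per second

seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = states / tests_per_second / seconds_per_year

print(f"{states:.2e} states would take about {years_to_exhaust:.2e} years to test exhaustively")
```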

Best estimates of the total number of IoT devices to be deployed vary widely year-on-year and currently span some 50 to 500 billion devices by 2030. In contrast, predictions regarding smart materials (SMARTS) are not yet meaningful as the technologies have yet to reach critical mass, but we might expect another 5,000 billion intelligent entities joining our networks directly or indirectly by 2040.
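As a rough sanity check on those projections, the sketch below works out the compound annual growth rates they imply. The 2023 installed base of roughly 15 billion IoT devices is an assumption of mine for illustration; the other figures are the projections quoted above, and all outputs are speculative.

```python
# Rough growth-rate sketch using the projection figures quoted in the text.
# The 2023 installed base is an assumed, illustrative figure.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start count, end count and elapsed years."""
    return (end / start) ** (1 / years) - 1

iot_2023 = 15e9                              # assumed current IoT installed base
iot_2030_low, iot_2030_high = 50e9, 500e9    # the 2030 range quoted above
smarts_2040 = 5_000e9                        # intelligent entities suggested for 2040

print(f"IoT to 2030: {cagr(iot_2023, iot_2030_low, 7):.0%} to {cagr(iot_2023, iot_2030_high, 7):.0%} per year")
print(f"On to 2040:  {cagr(iot_2030_high, smarts_2040, 10):.0%} per year from the high 2030 estimate")
```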

Things that think want to link, and things that link want to think

What is happening here? Device and connection counts appear to be shadowing Moore's Law in steep exponential growth. These trends are certainly related, but we don't know exactly how. It seems reasonable to assume, however, that the number of surprise events will track this exponential growth in some way.

But there now comes another behavioural likelihood: the spontaneous emergence of intelligence(s). After all, life and intelligence are no more than emergent properties of networked sensors, processors and memory on a relatively modest scale. So, in brief, a fundamental mechanism is likely to kick in: "Things that think want to link, and things that link want to think".

Early theoretical models, experiments and operational network mysteries tend to support this hypothesis. It is already thought to be happening in our mobile networks, but we have not been smart enough to spot, catch or record it yet. Clearly, rogue properties have been reasonably contained so far, but in the context of Industry 4.0, the SMARTS, AI and the IoT are destined to become a vital nervous system for the planet - and we need to understand it.

We also need to address our ability to ensure a reasonable equilibrium among the many mixed uncertainties by design. That is, we have to be able to predict, or at least contain, outcomes. If we cannot achieve this understanding, we will have to relinquish direct control in favour of well-defined governance bounds and boundary conditions, both in the individual domains and in the concatenated whole. At this point the naysayers will raise their voices in protest without due consideration of the benefits on offer.

We have a good history of risk assessment and management in comparable biological fields. The reality is that we have no choice but to push on if we are to create sustainable societies.

Peter Cochrane OBE is professor of sentient systems at the University of Suffolk, UK