How cloud computing - and other new technology - could lead to the destruction of humanity

Cloud computing is one of a slew of emerging technologies that, together, could render humanity redundant

Back in 2000, technologist Bill Joy, one of the co-founders of Sun Microsystems, penned a feature for Wired magazine that caused a storm. Although Joy could boast a CV packed with technology breakthroughs, "Why the future doesn't need us" saw him cast as a neo-Luddite.

In it, he postulated that, far from ushering in an age of comfort and leisure, new and advanced technology posed a potential threat to humanity. He cited nanotechnology with "uncontrolled replicators", genetic engineering and robotics as three technologies that, separately or collectively, posed a mortal threat to humanity if misused or misapplied.

If Joy were writing the same article today, he would no doubt add cloud computing to the list of threats: networks of servers controlled by a small group of companies - which will only get smaller with consolidation - that will increasingly communicate with each other so that organisations can run applications across disparate clouds.

"We used to have debates at lunch times in BT: how would we know if the network was conscious? Because the total number of components in the [telecoms] network was more than the total number of components in the brain," says futurologist Ian Pearson, who for years was BT's own futurologist from 1991 to 2007. Their conclusion? "You probably wouldn't notice until it was too late," he adds.

Pearson, who founded Futurizon when he left BT, believes that there is still some way to go before the cloud could acquire Matrix-style sentience.

What it will require, he says, is evolutionary algorithms running on adaptive analogue neurons in order for the cloud to emulate - rather than simulate - consciousness. Evolutionary algorithms were all the rage in the early 1990s, but research died out as experts in artificial intelligence struggled to emulate nature as precisely as they had hoped.
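The idea behind an evolutionary algorithm is straightforward: keep a population of candidate solutions, discard the weakest, and breed mutated copies of the survivors. A minimal, purely illustrative sketch (the "one-max" bit-counting problem and all parameters here are toy assumptions, not anything from Pearson's work):

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(fitness, genome_len=8, pop_size=20, generations=50, mutation_rate=0.1):
    """Minimal evolutionary algorithm: evolve a bit-string to maximise `fitness`."""
    # Start with a random population of bit-strings.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population unchanged.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction: copy each survivor with random bit-flip mutations.
        children = [[1 - g if random.random() < mutation_rate else g
                     for g in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness function: the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(best)  # converges towards the all-ones genome
```

Because the fittest half survives each generation unchanged, the best solution found so far is never lost - a simple form of elitism.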

However, research into neuromorphic computing continues, with such initiatives as the US Defense Advanced Research Projects Agency (DARPA) SyNAPSE programme.

This provides funds to develop electronic neuromorphic machine technology that can "scale to biological levels". Or to put it more simply, "it is an attempt to build a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats", according to Artificial Brains, the private website of James Pearn, chief technology officer of news site TheLocal.com.

Under the SyNAPSE programme, DARPA has funnelled more than $100m to computer giant IBM and to HRL Laboratories, a technology research organisation co-owned by General Motors and Boeing.

"The ultimate aim is to build an electronic microprocessor system that matches a mammalian brain in function, size, and power consumption. It should recreate 10 billion neurons, 100 trillion synapses, consume one kilowatt (the same as a small electric heater), and occupy less than two litres of space," says Pearn.

Should IBM and its partners in the project prove successful, it would almost certainly spell the end of conventional computing, with today's silicon-based electronics replaced by such devices - in the home, in your pocket and in the cloud.

Partners in crime

It isn't just about cloud computing, though, but big data and other emerging technologies too. Big data is the name given to a new breed of analytic technologies that can take disparate bits and bytes of data and make links that conventional analytic software cannot - something that will dovetail well with neuromorphic computing. And it can do this in real time: there is no need to extract alpha-numeric information from an operational database into a separate system, because it can analyse the data in seconds, on the fly.
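"On the fly" here means processing each record once as it arrives, keeping only small running aggregates, rather than first loading everything into a separate analytic database. A minimal sketch of that pattern, with an entirely hypothetical event stream:

```python
from collections import defaultdict

def stream_stats(events):
    """Aggregate a stream of (key, value) events on the fly: a single pass,
    constant memory per key, and no separate analytic database required."""
    count = defaultdict(int)
    total = defaultdict(float)
    for key, value in events:
        count[key] += 1
        total[key] += value
    # Running mean per key, available at any point in the stream.
    return {k: total[k] / count[k] for k in count}

# Hypothetical event stream: sensor readings arriving in real time.
events = [("temp", 20.0), ("load", 0.4), ("temp", 22.0), ("load", 0.6)]
print(stream_stats(events))  # {'temp': 21.0, 'load': 0.5}
```

Real big-data platforms distribute this kind of single-pass aggregation across many machines, but the principle is the same: the analysis travels to the data as it flows, instead of the data travelling to the analysis.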

Perhaps that is why the CIA is so interested in big data. We don't need an Edward Snowden-style whistleblower to spill the beans about the intelligence community's deep interest in big data. We already know this simply by examining the activity of the CIA's own in-house venture capital company, In-Q-Tel.

"We identify, adapt, and deliver innovative technology solutions to support the missions of the Central Intelligence Agency and broader US intelligence community," claims In-Q-Tel on its home page.

It also publishes its portfolio, which includes a high proportion of big data start-ups. Take Adaptive Computing, for example, a company responsible for high-performance computing workload management software, which "manages the world's largest private cloud computing environment with Moab, a self-optimising, dynamic cloud-management solution".

In-Q-Tel also helps fund Skytree, a machine-learning technology platform that "gives organisations the power to discover deep analytic insights, predict future trends, make recommendations and reveal untapped markets and customers".

And big data isn't just about analysing structured data, such as database records and figures, but unstructured data too, such as video, sound and images. NerVve Technologies' NVSS system can "detect anything the user wants to find at extreme speeds, enabling automatic extraction and alerts of the most relevant or interesting items, even for large volumes of data".

It claims its technology can analyse 24 hours' worth of video footage in less than five minutes - and the technology can only get faster and more accurate.

In other words, all the various elements are already being put in place, and it will only require a "spark of lightning", metaphorically speaking, to start the process.

Another feature required of sentience, perhaps, is the ability to move from the static computer out to the physical world. Again, there are a number of developments happening here that are just reaching fruition.

Google's self-driving cars, for example, are not just about mobility, but about making computers that can understand and respond to their physical environment in order to be able to negotiate it. The technology need not be limited to cars and commercial vehicles, but could be built into robots, too, or drones.

Then there are 3D printers, which will enable virtually anything to be made from a CAD file. In science-fiction films, the robots always need to be maintained by a human being for some reason, but in the future, robots will not only be able to maintain themselves, but also design and make their own spare parts - no human required.

Bill Joy also believes that nanotechnology, or nano-scale molecular electronics, has more potential to be used for malign purposes, by accident or design, than good - and could be used to engineer human destruction. "As with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones," he wrote in his Wired article in 2000.

He continued: "Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device - such devices can be built to be selectively destructive."

Take, for example, nanotech plants with artificial leaves. These could out-compete conventional plants (even Japanese knotweed), leaving behind a biosphere of inedible foliage. Nanotech bacteria might not just out-compete conventional biological bacteria, but could spread more swiftly and replicate even more quickly - and we would have no immunity against them. "We have enough trouble controlling viruses and fruit flies," says Joy.

In other words, all the ingredients are slowly being put in place whereby the cloud - with the aid of big data, robotics, 3D printers, self-driving technology, nanotech and other technology - may not only be able to become sentient, but will have all the tools at hand to rapidly replace us.