Death of the keyboard and mouse: How human-computer interfaces are evolving

The keyboard and mouse are already giving way to human-computer interfaces that could once have been seen only in sci-fi films, writes Nic Fearn

For decades, the primary human-computer interface has been the keyboard, augmented only by the introduction of the mouse in the mid-1980s. But as technology continues to advance, the way in which we interact with our devices is set to change dramatically.

In the past ten years, tech giants such as Apple, Google and Amazon have increasingly added reliable voice recognition to their products. And last year, Facebook purchased a firm called CTRL-labs, which specialises in a form of 'mind-reading'. It has developed a wristband that can pick up electrical impulses from muscle fibres, interpret these pulses and translate them into movements on a screen.
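
CTRL-labs has not published its pipeline, which in practice relies on machine-learning models trained on real electromyography (EMG) data. But the basic idea of turning muscle-fibre impulses into on-screen movement can be sketched with a toy example: measure the amplitude of two hypothetical muscle channels and map the difference to a cursor move. All names, signals and numbers below are illustrative.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of muscle-signal samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def emg_to_cursor_delta(flexor_window, extensor_window, gain=5.0, threshold=0.1):
    """Map two hypothetical muscle channels to a horizontal cursor move.

    Stronger 'extensor' activity nudges the cursor right, stronger
    'flexor' activity nudges it left; near-equal activity counts as rest.
    """
    activation = rms(extensor_window) - rms(flexor_window)
    if abs(activation) < threshold:
        return 0.0  # below the dead zone: treat as rest, no movement
    return gain * activation

quiet = [0.01, -0.02, 0.015, -0.01]  # resting channel
burst = [0.40, -0.50, 0.45, -0.35]   # active channel
print(emg_to_cursor_delta(quiet, burst))  # positive value: move right
```

A real system replaces the hand-tuned threshold and gain with a trained decoder, but the shape is the same: raw electrical signal in, interpreted movement out.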

Interactions between humans, such as speech, gestures, facial expression, body language, tone of voice and even smell, can now be detected and analysed by a computer system

US tech billionaire Elon Musk, who founded Tesla and SpaceX, has also advocated new forms of human-computer interaction. Previously, he called for volunteers to test out a brain-machine interface being developed by Neuralink - one of his pioneering technology companies.

But what other technology is being developed, and by which companies? What form will it take, and will the future human-computer interface be intrusive or subtle? And how will the interface between human and computer be protected, and privacy assured?

New modes of interaction

While controlling computers by voice or even thought may have sounded like something out of a sci-fi movie not so long ago, this is quickly becoming a reality. Andrew Rogoyski, innovation director at Roke Manor Research, notes how the scope of human-computer interactions has radically increased in recent times.

He says: "Interactions between humans, such as speech, gestures, facial expression, body language, tone of voice and even smell, can now be detected and analysed by a computer system. Computer systems are already being created that make use of this kind of information to alter their interaction with you, for example, if you seem tired, stressed or just uninterested."

Rogoyski believes that one aspect of human-computer interaction that has improved is maintaining the continuity of interaction. "Counter-intuitively, continuity across multiple devices is easier, for example, where your mobile phone may know that you were browsing for a specific restaurant on your laptop and bring up a phone number on your mobile for you to make a booking," he says.

Measuring brainwaves in isolation is unlikely to form any meaningful way of exchanging information

"This kind of continuity is already happening. Continuity with a single device, for example, having a conversation with a digital assistant or a chatbot where each interaction builds on the last, is much harder but is also starting to appear."

But he argues that interactions must be two-way and that technologists must find new ways to get information over to humans. "This includes extraordinary visualisations that make complex data meaningful, augmented and virtual realities that allow you to explore synthetic worlds, and natural language speech and even subtle effects like haptics which simulate a sense of touch."

Over the next few years, Rogoyski expects that human traits will be increasingly ascribed to computers and robots. "This can be useful if you're trying to create a social robot that will care for the elderly, but it's potentially dangerous if a child expects a robot to provide the same level of care and attention that a human caregiver would," he says.

"Digital assistants that use natural speech, such as Siri, Alexa, Cortana and Google Assistant, are becoming ever more capable. Already genuinely useful at finding information, in the future they will become invaluable assistants that help organise every aspect of your life."

Ian Hughes, an analyst at 451 Research, agrees that human-computer interaction is quickly evolving. He says: "Advances in artificial intelligence are creating ranges of computing tools, such as cooperative robotics or 'cobots'. In the truest sense, these are intelligent workmates that understand our movements or our intentions to assist in a task.

"As such they have no traditional user interface. General-purpose computing is on an evolutionary path to augmented reality (the user interface for IoT) with digital assets represented within the physical world, again, to meet our needs as people.

"A longer-term future could lead to the combination of cobots and augmented reality more directly connected to our physical bodies, as advances in medical prosthetics start to be used by choice rather than necessity."

The newest digital technologies in the age of the algorithm are compelling, but they also need to provide a level of comfort

Huge potential

Current human-computer interfaces usually measure only a single information source. But in the future, our technology could process a variety of information sources simultaneously.

John Yardley, CEO of Threads Software, says: "We process speech based not just on the acoustic sound, but on literally hundreds of other factors, for example, the speaker's body movements, mood, age, sex, English grammar, past interactions and past experience and so on.

"Surprisingly, random strings of words are often recognised better by a computer than by a human. But the human outperforms the computer because he or she uses a variety of other information sources to understand the speech rather than recognise the words. This additional information provides context for the interaction."

He suggests that although measuring brainwaves in isolation is unlikely to form any meaningful way of exchanging information in the near future, doing so in the context of a spoken word, viewed image or touched object could help identify inputs that would be impossible from a single source of information.

The future is in combining as much information as can be gathered and aggregating it

"The future is in combining as much information as can be gathered and aggregating it - even if it appears irrelevant or the reason it correlates with the action is unknown," says Yardley.

But, he adds: "Neither Apple, Google nor Amazon has added increasingly reliable voice recognition. What they have is speech recognition. Voice recognition is about who is speaking, whereas speech recognition is about what they are saying."
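
The aggregation Yardley describes can be sketched in code. A minimal, hypothetical example: a speech recogniser produces candidate transcripts with acoustic scores, and a second contextual score (standing in for mood, gestures, past interactions and so on) is blended in before picking a winner. The function, hypotheses and weights here are all illustrative, not any vendor's actual method.

```python
def rescore(acoustic_scores, context_scores, weight=0.5):
    """Blend per-hypothesis acoustic scores with contextual scores
    and return the highest-scoring transcript."""
    combined = {
        text: (1 - weight) * score + weight * context_scores.get(text, 0.0)
        for text, score in acoustic_scores.items()
    }
    return max(combined, key=combined.get)

# The acoustics alone slightly favour the mishearing, but context
# (say, the user has been dictating notes about speech technology)
# flips the decision.
acoustic = {"wreck a nice beach": 0.60, "recognise speech": 0.55}
context = {"wreck a nice beach": 0.20, "recognise speech": 0.90}
print(rescore(acoustic, context))  # prints: recognise speech
```

Even in this toy form, the contextual signal resolves an ambiguity the acoustic signal alone gets wrong, which is exactly the advantage Yardley attributes to human listeners.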

Challenges

Of course, technology that can read our minds and take into account other aspects of our bodies won't be without shortcomings, even if it can be perfected. For a start, it may require the kind of implant that will, at the very least, put off the most squeamish.

Indeed, Rob Brown, vice-president of Cognizant's Center for the Future of Work, agrees: "While mind-reading technology is being discussed, the last thing the tech industry needs is the perception of another 'scary technology' on the horizon.

"The newest digital technologies in the age of the algorithm are compelling, but they also need to provide a level of comfort."

Medical insurers are looking very carefully at whether to provide cover for a surgeon who implants an untried neural interface

Privacy-related laws will need to evolve and change, according to Brown. He continues: "Leading legal scholars have argued for, and evangelised, increased consumer protections along the lines of the client confidentiality laws applied to the healthcare and legal professions.

"CTRL-labs and Neuralink would be in a similar relationship with consumers, as they would have knowledge about, and therefore power over, their users - and thus should be considered 'information fiduciaries'.

"Ultimately, safety, security and ethics are of the utmost importance. End-users of all technology need to feel safe when they are using it, so privacy and ethical 'guardrails' of use are necessary, proper and essential concerns that are everyone's responsibility. Any developing technology needs to get the right ethics baked into these new models early - while there is still time."

Tamara Quinn, partner at law firm Osborne Clarke, says that companies looking to sell implantable devices must already deal with a range of legal issues, so the idea of using such technology as a computing interface faces a number of regulatory challenges.

There's no doubt that in another ten years, human-computer interaction will change again

"There are the obvious regulatory constraints in areas such as manufacture and marketing of medical devices, advertisements to the public, use of animals for experiments, and data privacy, to name but a few."

There are also issues around liability. "Who is liable to pay compensation if an implant fails, or has side effects, or cripples the user? Is it the manufacturer of the device, the company which provided the software, the medical team that implanted it - or all of them?" she says.

Such an eventuality would no doubt only be resolved as a result of an expensive and long-running legal dispute.

"We can be certain that medical insurers are looking very carefully at whether to provide cover for a surgeon who implants an untried neural interface to allow a healthy person to do something for fun or convenience, rather than to alleviate serious illness."

In the past decade alone, technology has evolved rapidly. While traditional computers still play an important role in the lives of many, the rise of touchscreen devices has fundamentally changed the way we use technology. There's no doubt that in another ten years, human-computer interaction will change again.