Two decades on from The Matrix, AI is removing drudgery, not seeking domination

No need to take the red pill just yet, says PagerDuty's Steve Barrett

This year marks the 20th anniversary of the seminal science fiction movie ‘The Matrix'. I'm a fan, like many of my fellow IT professionals, so I thought it fitting to address some of the film's themes to see how they hold up today.

Ray Bradbury once described science fiction as "any idea that occurs in the head and doesn't exist yet, but soon will". Bradbury could certainly have been talking about The Matrix, which posits a future relationship between humans and machines, forged through the power of artificial intelligence (AI).

Unrealised potential

In the movie, AI is hardly a force for good, but in reality we have high hopes for the more benign potential of AI in its many forms.

Two decades after the film's premiere, that potential is only just starting to be realised. Voice recognition and imaging capabilities power digital assistants such as Siri and Alexa, and first-generation self-driving cars, but they have a long way to go. The user interfaces for AI may resemble the level of sophistication presented in movies like The Matrix, but the underlying foundation is still very elementary. That's not a bad thing, but it can lead to disappointment when the actual capabilities don't match what the movies have led us to believe they should be. Try asking Alexa a more complex question or request, such as "start my car and map the quickest route to work", and I assure you that it won't meet your expectations.

Glitches in the matrix

For me as an IT professional, one of the more memorable elements of the film was the idea of ‘glitches in the matrix'. Who among us hasn't experienced those short periods when an IT incident makes system vulnerabilities all too apparent and disrupts the user experience?

These glitches may have been tolerable even a few short years ago, but today, the demands of the always-on digital business put pressure on IT professionals to manage incidents and associated data more quickly than ever before. Traditional command-and-control methods no longer make sense because they result in over-notification and process inefficiencies.

So here's where machine learning, one small subset of AI, can work for us.

When an IT incident occurs, alerts come in from multiple sources. Machine learning can be used to identify commonalities among those alerts and group related ones together, reducing ‘alert blindness'.
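
To make the idea concrete, here is a minimal sketch in Python of how related alerts might be grouped by textual similarity. It is purely illustrative: the Alert structure, the example messages and the 0.7 threshold are assumptions, and it uses simple string matching rather than a learned model, which a production system would replace with far richer features.

from dataclasses import dataclass
from difflib import SequenceMatcher

# Hypothetical alert record; real monitoring tools emit much richer payloads.
@dataclass
class Alert:
    source: str
    message: str

def similarity(a: str, b: str) -> float:
    # Cheap text similarity; a learned model would use more than raw text.
    return SequenceMatcher(None, a, b).ratio()

def group_alerts(alerts: list[Alert], threshold: float = 0.7) -> list[list[Alert]]:
    # Greedily cluster alerts whose messages look alike, so responders see one
    # grouped incident instead of a flood of near-duplicate notifications.
    groups: list[list[Alert]] = []
    for alert in alerts:
        for group in groups:
            if similarity(alert.message, group[0].message) >= threshold:
                group.append(alert)
                break
        else:
            groups.append([alert])
    return groups

# Example: two alerts about the same database host collapse into one group.
alerts = [
    Alert("db-monitor", "Connection timeout on db-01"),
    Alert("app-server", "Connection timeout on db-01 (retrying)"),
    Alert("network", "Packet loss on switch-7"),
]
for group in group_alerts(alerts):
    print(len(group), "alert(s):", group[0].message)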

Operational complexity is increasing and the time available to deal with it is not. Triaging incidents can be particularly painful - especially when doing so from scratch. We'd prefer to know whether a similar incident has happened before and, if so, how it played out. We want to learn from historical incidents. Machine learning has the potential to provide us with that insight - drawing on data from both within our own organisations and thousands of others around the world. It puts context directly into the hands of those closest to the incident.
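
As a rough illustration of that kind of triage aid, the sketch below ranks past incidents by how similar their free-text summaries are to a new one, using a simple bag-of-words comparison. The incident records, field names and example text are hypothetical, and a real system would learn from far richer signals than a short summary.

import math
from collections import Counter

# Hypothetical history; a real system would draw on structured incident data.
HISTORY = [
    {"summary": "checkout service latency spike after deploy",
     "resolution": "rolled back release 4.2.1"},
    {"summary": "database connection pool exhausted on db-01",
     "resolution": "raised the pool size and restarted the service"},
    {"summary": "TLS certificate expired on the api gateway",
     "resolution": "renewed the certificate and reloaded the gateway"},
]

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(new_summary: str) -> dict:
    # Return the past incident whose summary best matches the new one, so
    # responders start with that context instead of triaging from scratch.
    query = bag_of_words(new_summary)
    return max(HISTORY, key=lambda inc: cosine(query, bag_of_words(inc["summary"])))

match = most_similar("connection pool exhausted on db-01 again")
print("Similar past incident:", match["summary"])
print("How it was resolved:  ", match["resolution"])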

Will jobs be lost?

Will machine learning ultimately render our roles obsolete - a risk cited by some of AI's opponents? Will IT operations jobs be lost and skills atrophy from lack of use? Will machines rather than humans be on the end of that midnight support call? I doubt it. (But I'll leave it to you to decide whether that is a good thing). Humans will always be central to machine learning. I wouldn't worry about AI taking over humanity's jobs anytime soon.

Of course, with potential comes risk. Much has been written about the fact that data can be manipulated - either deliberately, for political or criminal gain, or unintentionally. Bias can be introduced during machine ‘training', and privacy can be invaded. But, to borrow a phrase from The Matrix, with AI and machine learning we can be confident that "the answer is out there".

Fortunately, the hellish, machine-dominated world of the Matrix - where humans have been reduced to little more than a power source - remains far-fetched.

Steve Barrett is vice president EMEA at PagerDuty