Artificial intelligence: the potential and the reality

A look at AI and machine learning in the UK today

Artificial intelligence is a prime contender for this year's big tech buzzphrase. All of a sudden it's everywhere. But its history goes back a long way, as far back as the 1940s and 1950s in fact. At that time an intense interest in human psychology in general, and human intelligence in particular, led scientists to speculate whether human learning could be simulated using machines. They were confident it could.

One of AI's early movers and shakers was Marvin Minsky, whose work includes the first randomly wired neural network learning machine, which he built in 1951. Confidence was running high. In 1967 Minsky predicted that "within a generation... the problem of creating 'artificial intelligence' will substantially be solved."

But he was wrong about that. Not only had Minsky and others in the field underestimated the complexity of simulating human-like intelligence, but the computers of the day were simply not up to the job. Even a generation later, with the advent of supercomputers in the '80s and '90s, most attempts at AI resulted in disappointment.

Over time, though, Moore's Law largely solved the problem of inadequate computing power, and alongside much more powerful machines, vast volumes of data became available for training algorithms. That started to change the game: in certain complex parallel-processing tasks at least, AI began to overtake humans.

In 1997, IBM's Deep Blue beat chess grandmaster Garry Kasparov, pitting raw number-crunching computing power against human genius. Was this AI? Many argued that Deep Blue's victory was more about brute force than intelligence, but it was a sign of things to come. Nearly twenty years later, this time using deep neural networking techniques, Google DeepMind's AlphaGo surprised many by vanquishing champion Lee Sedol at the much more computationally challenging game of Go. And this certainly was a victory for AI.

These were impressive accomplishments, no doubt, but something of the grand early vision had been lost. Much of today's AI, including these examples, is narrowly focused on specific tasks rather than being generally applicable.

But things are beginning to change. The volume of data available to tech giants like Google, Amazon and Facebook, and China's Tencent and Alibaba, together with newer neural networking models and open-source frameworks, means that data scientists are now able to begin joining the dots between the different AI domains. So a more general-purpose AI is at least starting to emerge.

Definition creep

In the last few years AI has certainly turned a corner, but the task of understanding how far it has gone is not helped by the fact that the phrase has come to mean pretty much anything that IT marketers want it to mean. You can now buy all sorts of gizmos labelled 'powered by AI', including cameras and even an AI-powered suitcase.

This definition creep is exasperating to anyone working in the field. "The next time someone mentions AI, ask them what they're really talking about," one practitioner advised in a recent Computing story.

Even the word 'intelligence' is open to many interpretations. But this simple definition has found traction in the field of AI:

Intelligence is 'goal-directed adaptive behaviour' (Sternberg & Salter, 1982).

But rather than pondering the semantics of AI, a more practical approach is to consider how machines that adapt their behaviour might be useful. What do we want to use AI for? What information do we want it to give us, and how do we want to act on it? What are we using our data for currently? Are we looking backwards in time to see what has happened, as in classic business intelligence (BI), or do we want to use models to foresee what is likely to happen, in the manner of predictive analytics, so we can perhaps automate the next step? Predictive analytics is very closely aligned with machine learning (ML).
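
To make that distinction concrete, here is a minimal sketch in Python using scikit-learn and made-up monthly sales figures. The descriptive step simply summarises what has already happened; the predictive step fits a model to the history and projects it forward.

```python
# Illustrative contrast between BI (look back) and predictive analytics
# (fit a model to history and project forward). The data is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)           # Jan..Dec
sales = np.array([102, 108, 115, 119, 127, 131,    # steadily growing
                  140, 144, 152, 157, 166, 171])

# Classic BI: a descriptive summary of the past
print("average monthly sales:", sales.mean())

# Predictive analytics: learn the trend, forecast month 13
model = LinearRegression().fit(months, sales)
print("forecast for month 13:", model.predict([[13]])[0].round(1))
```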

We asked 200 Computing readers from organisations of all sectors and sizes, all of whom are involved in AI in some way, to give us a rough split between BI and predictive analytics at their firm. The results: 70 per cent BI, 30 per cent predictive analytics (which, if anything, seems to overstate the predictive side of things).

How does that translate into maturity with AI? Well, only eight per cent said they had implemented AI and machine learning in production, but another 25 per cent were experimenting with pilot studies. Remember, this is an audience with an interest in AI, so even these numbers almost certainly overestimate the true number of AI deployments. For most, then, it's early days, but there is definitely a head of steam building behind AI and ML, and for those with the right use case, early days could mean early opportunities.

So what are the use cases where AI is a good fit?

Currently, the main driver is making existing processes better and more efficient - exactly as you'd expect with any new technology.

At the top we have business intelligence and analytics. Potentially AI can help businesses move from descriptive through predictive and ultimately prescriptive analytics, where machines take actions without first consulting their human masters.

Then there are the various types of process automation. Typically this means handing over repetitive on-screen tasks to so-called soft bots that are able to quickly learn what is required of them.

Much of the focus of that activity is on the customer: providing a better customer experience, by learning what customers like and giving them more of it, and better customer service, by improving the responsiveness of the organisation using chatbots, for example.

Then there's cybersecurity. Machine learning systems can be trained to recognise what is normal on the network and to spot abnormal behaviour, either alerting those in charge or, increasingly, acting on the causes of the anomaly themselves.
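
As an illustration of the principle rather than of any particular product, the sketch below trains scikit-learn's IsolationForest on simulated 'normal' traffic and flags a deviation. The features, figures and threshold are assumptions made up for the example.

```python
# Minimal sketch: learn 'normal' network behaviour, then flag anomalies.
# The feature set (MB sent, connection count, distinct ports) is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical telemetry gathered during normal operation:
# one row per host per hour -> [mb_sent, connections, distinct_ports]
normal_traffic = rng.normal(loc=[50, 200, 12], scale=[10, 40, 3],
                            size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New observations: the second row mimics data exfiltration
# (huge upload, few connections, many ports touched).
new_obs = np.array([[52, 210, 11],
                    [900, 15, 300]])

for row, verdict in zip(new_obs, model.predict(new_obs)):
    status = "ANOMALY - alert security team" if verdict == -1 else "normal"
    print(row, status)
```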

About a quarter of our respondents had introduced or were looking to introduce robotic process automation (RPA).

RPA is often considered the most straightforward type of AI in that it doesn't usually require vast quantities of training data. Also, its use cases are the easiest to identify.

Software robots can respond to emails, process transactions, and watch for events 24 hours a day seven days a week. They are good at things humans are not good at, namely simple, standardised repetitive tasks which they can do at great speed and with low rates of error. Unlike some humans, soft bots are low-maintenance. They generally require little integration, and - also unlike some humans - they can be relied upon to do the right thing time after time after time.
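
To give a flavour of how simple such a bot can be, here is a hypothetical sketch in Python using the standard imaplib module: it polls a mailbox around the clock and applies a routing rule. The host, credentials and rule are placeholders, not a real deployment.

```python
# Hypothetical 'soft bot' sketch: poll a mailbox, act on simple rules.
import imaplib
import email
import time

def process_unread(host: str, user: str, password: str) -> None:
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select("INBOX")
    _, data = conn.search(None, "UNSEEN")      # IDs of unread messages
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        subject = msg.get("Subject", "")
        # A rule the bot might be configured with:
        if "invoice" in subject.lower():
            print(f"Routing invoice to accounts: {subject}")
        else:
            print(f"No rule matched, leaving for a human: {subject}")
    conn.logout()

while True:                          # watch for events 24/7
    process_unread("imap.example.com", "bot@example.com", "secret")
    time.sleep(300)                  # poll every five minutes
```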

The reasons for deploying RPA? No surprises really: cost reduction, improved productivity and a lower risk of human error were the main ones. Slightly less obvious is improved data quality and accuracy. Because bots can be relied on to do the same thing in the same way time after time, a very useful side effect can be an improvement in the quality of the company's core data.

So far so good - unless, of course, you are one of the people who might be put out of a job by a soft bot. In which case you might take comfort from a recent report by McKinsey, which found that many RPA rollouts have failed to deliver, proving more complex to implement than anticipated due to the unpredictable nature of many business processes, unexpected side effects of automation and variable data quality. Soft bots, it seems, aren't necessarily so low-maintenance after all.

Indeed, half of those that have gone ahead with RPA in our survey said they'd experienced more integration problems than expected.

As with Marvin Minsky's over-optimism, it pays to note that even with relatively simple AI, achieving the desired outcomes can be far more complicated than might first appear.

AI by sector

Our respondents came from a variety of sectors so we asked about AI use cases in a few particular areas.

In the finance and insurance sector we found some early interest in intelligent anti-fraud systems that look for suspicious patterns of behaviour.

Another was actuarial modelling: pulling in all sorts of data from many different sources in order to quantify the probability that a property will be prone to flooding or damaged by fire or subsidence.

AI-based techniques could also speed the introduction of individualised insurance on demand, something a lot of insurers are looking at.

The health sector is another that's often mentioned in conjunction with AI. Already there are some specialised precision operations that are performed or assisted by robots. However, robotic surgery was a little way down the list of priorities for our medically focused AI interviewees.

Automated diagnostics from medical imaging - identifying tumours in x-rays and scans through pattern recognition - topped the list.
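
As a rough illustration of the pattern-recognition approach, the sketch below defines a small convolutional network in Keras of the kind used for binary image classification. The input size and layer sizes are arbitrary assumptions; a real diagnostic system would involve far larger models, curated clinical datasets and regulatory validation.

```python
# Illustrative sketch only: a tiny CNN for binary classification
# (e.g. tumour vs no tumour on greyscale scan patches).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),               # greyscale patches
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(tumour)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training would use labelled scans, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```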

That was followed by patient monitoring, both in hospitals and outside. Trials are already under way in many locations in which elderly people's flats are fitted with sensors and systems that learn their behaviour - what time they get up, how many times they fill the kettle or flush the toilet - so that an alert can be raised if the pattern changes. Perhaps they have had a fall, are at risk of dehydration, or cannot get out of bed.
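
The underlying idea can be sketched very simply: learn a per-resident baseline for a daily activity count and raise an alert when a new reading deviates sharply from it. The figures and threshold below are invented for illustration.

```python
# Behavioural monitoring in miniature: compare today's activity count
# against a learned baseline. Threshold and data are illustrative.
import statistics

def check_activity(history: list[int], today: int,
                   z_threshold: float = 3.0) -> str:
    """history: e.g. kettle fills per day over the past weeks."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against zero spread
    z = (today - mean) / stdev
    if abs(z) > z_threshold:
        return f"ALERT: {today} fills vs usual ~{mean:.0f} - check on resident"
    return "pattern normal"

kettle_fills = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 5, 6, 5, 4]
print(check_activity(kettle_fills, today=5))   # pattern normal
print(check_activity(kettle_fills, today=0))   # possible fall or dehydration
```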

Drug discovery is another area where a lot of hope is being pinned on AI. Certainly the pharmaceutical companies are chucking a lot of money at it. For example, Pfizer is using IBM Watson to power its search for immuno-oncology drugs.

The largest proportion of current use cases in our research was found among those in manufacturing, logistics and agriculture. This is where big data and the internet of things intersect, where ever-increasing volumes of data must be processed, often at the edge of networks, and the results acted upon by autonomous devices.

We're already familiar with automated production lines and warehouses staffed with robots and autonomous vehicles, but what about agriculture, where the water and nutritional needs of vineyards are tended to automatically and crops are monitored by drones?

Then we have the emergence of smart grids, where power generated from renewables is automatically routed to where it's needed in the most efficient way, reducing the need for baseload generation.

How about being able to buy things with your face? That's something that's already being rolled out in South Korea and China, and it will surely turn up in the UK sooner rather than later. Indeed, some British supermarkets are reported to be introducing age-checking cameras next year to vet people wanting to buy alcohol and cigarettes.

Among our retail respondents, however, most attention is currently being paid to personalised marketing and advertising.

Grunt work

So those are some of the main uses for AI and machine learning mentioned in our research. Most are early-stage projects though. So what are the hurdles?

Well, a lot of it is in the grunt work. Collecting, cleaning, de-duplicating, reformatting, serialising and verifying data was the time- and effort-consuming task most mentioned by our respondents. Without good data, the 'intelligence' part of AI just won't happen.
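
A small taste of that grunt work, sketched with pandas on an invented customer extract: trimming and normalising names, parsing dates, coercing numeric fields, and dropping the duplicates that normalisation reveals.

```python
# Data 'grunt work' in miniature. Column names and values are made up.
import pandas as pd

raw = pd.DataFrame({
    "customer": ["Acme Ltd", "ACME LTD ", "Bravo plc", None],
    "joined":   ["2017-03-01", "2017-03-01", "2018-11-15", "2019-06-30"],
    "spend":    ["1,200", "1200", "850", "430"],
})

clean = raw.dropna(subset=["customer"]).copy()           # verify: no blanks
clean["customer"] = clean["customer"].str.strip().str.title()  # 'Acme Ltd'
clean["joined"] = pd.to_datetime(clean["joined"])        # reformat dates
clean["spend"] = clean["spend"].str.replace(",", "").astype(float)
clean = clean.drop_duplicates(subset=["customer", "joined"])  # de-duplicate

print(clean)
```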

The second most mentioned practical difficulty was training the models. With machine learning this can take weeks or months of iteration, tweaking parameters to eliminate bias and error and to cover gaps in the data.
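
In miniature, that tuning loop often looks like the scikit-learn sketch below: a grid of candidate parameters is evaluated with cross-validation and the best combination retained. The dataset and parameter grid are illustrative assumptions.

```python
# The tuning loop in miniature: grid search with cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a real training set
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation guards against overfitting
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best CV score:  ", round(search.best_score_, 3))
```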

While models may start life in IT, at some stage they need to make contact with the real world of production engineers and end users. Interdisciplinary and cross-departmental collaboration is another mountain that must be climbed.

Other familiar bottlenecks were mentioned too, including integrating with legacy systems, a shortage of skills - and gaining acceptance from those who might fear the introduction of AI and what it could mean for their jobs and livelihoods.

The bigger picture

Indeed, 18 per cent of those we asked thought that their sector would see a net loss of jobs because of AI. On the other hand, 20 per cent thought AI would create more jobs than it displaces. Numbers were fairly low for both though, suggesting a degree of uncertainty about what the future will bring.

Indeed, the overall impact of AI is all but impossible to predict, although for certain professions, such as lorry drivers and warehouse staff, the outlook is easier to gauge than for others. Change will be profound and it will arrive at a rate that will make it hard to adapt to. Twenty-nine per cent of our respondents felt that insufficient thought has been given to the ethical side: the effects of AI on society.

However, more saw a happy alliance between humans and technology, with AI enabling people to work more efficiently, although one third said that for these efficiencies to be realised a radical restructuring of the workplace will be necessary.

On the question of the potential for organisational advancement, 12 per cent of our respondents said that AI is already helping to differentiate their company from the competition; a further 29 per cent predicted that would be the case in three years' time. Certainly many see the potential of machine learning both to drive efficiencies and to serve as the basis of new products and services.

There will always be a need for human qualities though, and it will be a long time before machines can emulate empathy. The largest number agreed with the statement: 'AI is not appropriate for all jobs and probably never will be'.

Justified or not, AI is generating a degree of trepidation. It's easy to scoff at fears of Terminator-style killer robots, but what is happening right now in China with the Social Credit system - in which citizens are given a ranking that depends not just on their own behaviour but also on that of their friends, family and social media connections - shows that fictional dystopias like the Black Mirror episode Nosedive are already uncomfortably close to reality.

In the main our respondents were optimistic though, with 10 per cent believing society will be 'much better' as a result of increased automation and 54 per cent saying 'better'. Eight per cent said it will be 'worse', while a pessimistic two per cent are presumably going off grid and digging drone-proof bunkers in the back garden. Nine per cent didn't provide an opinion.

In conclusion

The main conclusion is that AI is hard. It's difficult to understand, challenging to implement and tricky to integrate with existing systems. For most organisations deploying AI it's early days, and most efforts are still at the experimental stage.

In the main, AI is still narrow and task-specific. It is an additional capability that can be bolted onto existing processes, rather than a new product that can be bought off the shelf.

That said, ML and AI are already having a big impact in multiple areas. The easy availability of algorithms and frameworks, plus new IoT data sources, means that things are changing fast and progress will likely continue to accelerate. Those companies that are able to make something of it now are getting in on the ground floor.

John Leonard
