Examples of AI as a force for social good

Nesta in Scotland showcases a number of projects in which humans and AI work as a team

News stories about AI tend to dwell on dystopian projections, be they job losses, ever-expanding surveillance, growing alienation or, at the apocalyptic extreme, robots throwing off their shackles and doing away with puny humanity altogether.

Perhaps this is inevitable. All new technology creates anxiety, and AI's ability to 'learn' and emulate life makes it seem particularly creepy. But, without making light of the ways it can be used for ill effect, many assumptions about the technology are false. AI doesn't learn in the same way we do: it is very bad at things we pick up naturally from a young age, but superb at some tasks we are bad at, especially those that require sifting through huge quantities of data to find subtle patterns.

In partnership, therefore, people and AI can do great things, particularly in the public sector where needs are great and funds are short.

In a video meetup this week, Kyle Usher, programme manager at innovation charity Nesta in Scotland, showcased a number of projects supported by Nesta in which AI is being used for the public good. Some are at the development stage, others already in production, but all demonstrate how AI and machine learning can complement human qualities of empathy and morality, potentially changing how public services are delivered in a big way.

Some of the projects are run out of higher education establishments. A team at Heriot-Watt University's Interaction Lab is using AI to eliminate gender bias in virtual assistants by designing and testing different personas and adapting their responses to biased behaviour. Meanwhile, at City of Glasgow College, researchers have come up with a solution to help adult literacy apps get to grips with Scottish accents, which voice recognition technology famously fails to parse. Users can train the app quickly by repeating a few words, from which the AI learns the accent and is able to infer the rest of the word library.
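
Neither team has published its code, but the adaptation idea can be sketched in a few lines. In the minimal Python below everything is invented for illustration (the templates and the `enrol` and `recognise` helpers are hypothetical): it learns a per-speaker feature offset from two enrolment words and uses it to recognise words the speaker never recorded.

```python
# Minimal sketch of few-shot accent adaptation via a learned feature
# offset. The real City of Glasgow College system is not public; this
# only illustrates the idea of learning a per-speaker correction from
# a handful of enrolment words.
import numpy as np

rng = np.random.default_rng(0)

# Reference MFCC-style templates for a small word library
# (in practice these would come from a real feature extractor).
templates = {w: rng.normal(size=13) for w in ["cat", "dog", "house", "tree"]}

def enrol(enrolment_features):
    """Estimate the speaker's average feature offset from a few
    labelled enrolment words."""
    offsets = [feats - templates[word] for word, feats in enrolment_features]
    return np.mean(offsets, axis=0)

def recognise(feats, speaker_offset):
    """Normalise the speaker's features, then pick the nearest template."""
    normalised = feats - speaker_offset
    return min(templates, key=lambda w: np.linalg.norm(normalised - templates[w]))

# Simulate a speaker whose accent shifts every feature by a fixed amount.
accent_shift = rng.normal(scale=0.5, size=13)
offset = enrol([(w, templates[w] + accent_shift) for w in ["cat", "dog"]])

# Words the speaker never enrolled are now recognised correctly too.
print(recognise(templates["house"] + accent_shift, offset))  # -> "house"
```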

Then there was Blackwood Homes and Care, a provider of residential units for independent living, which uses AI on smart meter data to identify changes in usage patterns that might indicate an elderly person has had a fall or failed to get out of bed, or even to reveal the slow degenerative effects of chronic illness. This has been particularly useful during the pandemic, when contact has been limited, Usher said.
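
Blackwood has not detailed its method, but a deviation-from-personal-baseline check of the kind described might look something like this sketch, with data and thresholds entirely invented:

```python
# Illustrative only: flag unusual daily energy-use patterns against a
# resident's own rolling baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# 60 days of simulated morning kettle/heating usage (kWh, 6-9am).
usage = pd.Series(rng.normal(1.2, 0.15, 60),
                  index=pd.date_range("2021-01-01", periods=60))
usage.iloc[-1] = 0.1  # a morning with almost no activity

baseline = usage.rolling(14).mean().shift(1)  # the person's recent norm
spread = usage.rolling(14).std().shift(1)
zscore = (usage - baseline) / spread

# Flag mornings that deviate sharply from the resident's own pattern,
# e.g. no kettle use might mean they have not got out of bed.
alerts = usage[zscore.abs() > 3]
print(alerts)
```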

Sticking with medical applications, Red Star AI has set its algorithms to work on the unenviable task of sifting through doctors' notes to find small clues in people's health histories that might signal the onset of dangerous or debilitating conditions. Working through 100,000 pieces of medical history to spot patterns would be impossible for a human, Usher noted.
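
Red Star AI's actual pipeline is not public. As a rough illustration of pattern-spotting in clinical free text, though, a simple bag-of-words classifier over (fabricated) notes could be assembled like this:

```python
# Toy sketch: score clinical notes for early-warning follow-up using
# TF-IDF features and logistic regression. Notes and labels below are
# fabricated; a real system would need far more data and validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports increasing thirst and frequent urination",
    "routine check-up, no concerns raised",
    "complains of fatigue, blurred vision at times",
    "minor sports injury, otherwise healthy",
]
developed_condition = [1, 0, 1, 0]  # did the patient later fall ill?

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, developed_condition)

# Probability that a new note merits a closer look.
print(model.predict_proba(["tired all day, drinking a lot of water"])[:, 1])
```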

And AI could be a godsend for users of prosthetic limbs, such as those being developed at Edinburgh University, with artificial fingers able to work out the right pressure to apply when picking up an egg or holding a child's hand, and realistic haptic feedback making them much easier and more natural to use.
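
One common approach to finding "just enough" pressure, which may or may not be what the Edinburgh team uses, is slip-based control: tighten the grip only while the object is slipping. A toy simulation, with all numbers invented:

```python
# Hugely simplified sketch of slip-based grip control. The loop
# converges on the lightest force that stops the object slipping,
# rather than applying a fixed, potentially crushing grip.
def simulate_grip(required_force=2.0, steps=200, dt=0.01, gain=5.0):
    force = 0.2  # start with a gentle grip (arbitrary units)
    for _ in range(steps):
        # Simulated tactile sensor: slip is proportional to how far
        # below the holding force we are, zero once the grip is firm.
        slip = max(0.0, required_force - force)
        force += gain * slip * dt  # proportional slip controller
    return force

# An egg needs little force, a heavier cup needs more; the same loop
# settles on "just enough" in both cases.
print(round(simulate_grip(required_force=0.5), 2))  # light grip
print(round(simulate_grip(required_force=3.0), 2))  # firmer grip
```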

Finally, Space Intelligence CEO Murray Collins demonstrated the processing of satellite data to create land use maps for the charity Scottish Wildlife, filling a large gap in knowledge about exactly what habitat exists, where it is and how it is changing over time, so that interventions to increase biodiversity can be planned and CO2 emissions targets met.
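
At its simplest, such mapping is per-pixel classification of spectral bands. The sketch below uses fabricated reflectance values rather than real imagery and shows only the broad technique; production pipelines like Space Intelligence's rely on labelled field-survey data and far richer features.

```python
# Sketch of per-pixel land-cover classification from satellite bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Fake training pixels: (red, near-infrared) reflectance per pixel.
# Vegetation reflects strongly in near-infrared, water barely at all.
woodland = rng.normal([0.05, 0.45], 0.03, size=(100, 2))
grassland = rng.normal([0.10, 0.30], 0.03, size=(100, 2))
water = rng.normal([0.03, 0.02], 0.01, size=(100, 2))

X = np.vstack([woodland, grassland, water])
y = ["woodland"] * 100 + ["grassland"] * 100 + ["water"] * 100

clf = RandomForestClassifier(n_estimators=50).fit(X, y)

# Classify every pixel of a (tiny) new scene into a habitat map.
scene = rng.normal([0.05, 0.45], 0.03, size=(4, 2))
print(clf.predict(scene))
```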

These days many AI tools are free to use and models are available off the shelf, Collins said, but the trick is knowing how to implement them in practice, and understanding why you are doing what you are doing.
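
Collins' point is easy to demonstrate: with an openly available library such as Hugging Face's transformers, running a pretrained model really does take a couple of lines. Choosing the right model for the problem, and interpreting its output responsibly, is the hard part he describes.

```python
# An off-the-shelf pretrained model in two lines (downloads a default
# sentiment model on first run). The example sentence is our own.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The new service made booking appointments much easier."))
```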

The field is maturing fast, and there should be nothing stopping public sector and charitable organisations from taking a first step to see how it might be applied. Indeed, to ignore AI at this stage is an ethical judgement in itself, Usher argued.

"New, emergent technology often brings moral questions. Should we be using it to help society, or leave it to those who want to use it purely for financial gain or to corner a market? There are implications in using new technologies in this way, but there are also implications in not."

Usher continued: "Let's say you work in healthcare and you knew you could diagnose conditions earlier and not only [improve health] but also save costs and provide a better service. If you knew how to do that and didn't, it could be seen as professional neglect."