The age of mistrust, and why it’s AI’s biggest problem

Right now, the benefits of AI for business and individuals are far from clear

Organisations are racing to embed AI into everyday life. But people don’t trust institutions and private enterprise to do this in a way that benefits them - and that’s a problem.

Whom do you trust? Hundreds of years ago, people limited their trust to perhaps close family (not always wisely) or God (ditto). Over the nineteenth and twentieth centuries people collectively learned to trust more widely, putting their faith in governments, institutions like the police and the judiciary, doctors, teachers - and brands.

Along came social media, which initially seemed to widen our circles of trust. We could build trusting relationships virtually with people we’d never meet in the real world and have a richer, more informed perspective for it. But then came the misinformation, conspiracy theories, political polarisation and the rise of the algorithmically fuelled, rage-farming, industrial grievance-nurturing hate machine.

The only thing it seems we can all agree on now is that nobody trusts institutions or governments anymore. The most recent Edelman Trust Barometer survey found that business is trusted more as an institution (62%) than charities/NGOs (58%), government (52%) and media (52%). Those are global averages, and trust varies by geography and the degree to which you nurse a sense of grievance. But wherever you were starting from, the chances are that your sense of trust in these institutions has diminished in the last five years.

All of which presents a challenging marketplace for the businesses and public sector organisations trying to transform themselves with AI and persuade us, as customers, private citizens and employees, to engage more with the technology.

Who benefits?

For any technological advancement to have a transformative economic and societal impact, people need to understand both how to use it and what’s in it for them. For that process to occur, the technology’s end user needs to trust its makers. Trust acts as a force multiplier.

For inventions like the printing press, the television and the internet, it’s easy to see what was in it for the masses – and the masses could see it for themselves.

The challenge faced by institutions seeking to lead with AI is that the benefits for anyone who isn’t a tech CEO are far from clear. Satya Nadella admitted as much recently. Individuals really can’t see what’s in it for them at all. In the same Edelman survey referenced above, an average of 45.3% of respondents said that they trusted AI, but only 40.3% were comfortable with the use of AI by business.

Gaia Marcus is Director at the Ada Lovelace Institute, an independent research organisation focused on data and AI. Speaking at the recent launch of a Parliamentary One-Pager by Lord Holmes on the necessity of cross-industry AI legislation, Marcus set out some stark research findings.

“AI is no longer a technical, low-salience topic, something that only data people think about,” she said. “It's very much a kitchen table issue, from misinformation to even just our energy and water prices and other problems that are on the horizon.

“We published UK public polling that showed 91% of the UK public feel it's important that AI systems are developed and used in ways that treat people fairly. When presented with trade-offs, people want fairness to be prioritised over economic gains, speed and innovation, and even international competition.”

The problem Marcus pinpointed was that the public feel, correctly in most cases, shut out from decision making about a technology that has the potential to transform their lives – for good or for ill.

“The public feel disenfranchised and excluded from AI decision making. Eighty-four percent of the public fear that when regulating AI, the government will prioritise its partnerships with large technology companies over the public interest. That is not somewhere that we want to be in the UK in 2026.”

Doom loop

Public distrust is a rational response to the behaviour of some of the companies developing AI, the largest of which have stolen data and intellectual property to build their models. Big Tech has proven, time and again, that it is utterly unwilling to regulate its own activities responsibly.

“People do not trust private companies to self-regulate,” Marcus continued. “They do not trust private companies to mark their own homework. Eighty-one percent support an independent regulator for AI equipped with important powers.”

Self-regulation has failed. People are right not to trust the people who, having spent the last 15 years denying the increasingly obvious harms their products and platforms cause, are now spending billions of dollars in a race to automate them. The problem is that the public don’t trust the government either, as Hannah Perry, interim Director at Demos Digital, a cross-party thinktank, explained.

“At Demos, we believe that we're living in a democratic emergency fuelled by the breakdown of relationships between the state and the citizen,” she said. “What we're describing is a democratic doom loop of mistrust, disengagement and, sadly, political ineffectiveness, inhibiting the government's ability to deliver on its democratic promises, which is in turn further damaging trust in government.”

Demos believes that innovation and AI could be part of the solution to reversing our democratic decline, and we will explore that idea in the second part of this two-part article.

It’s possible that lurking behind much of the public distrust of AI is a fear that businesses, governments and other institutions will use it to avoid taking responsibility for decisions that are complicated, expensive or morally fraught.

Again, it’s a rational response to the way that AI, and automation more widely, has been adopted by organisations across all these groups as a barrier to access. If you’re a job seeker, you must navigate hundreds of automated screening systems and even AI interviews; medical reception is now often completely automated; and if you have a customer service query of any type, your first port of call is likely to be a chatbot.

A recent YouGov poll of 2,200 UK consumers, commissioned by Pegasystems, examined consumer perceptions of how businesses use AI. It found that 68% are either "not very confident" or "not at all confident" in the way businesses use GenAI when interacting with them, while more than half (54%) lack confidence that organisations use GenAI responsibly.

Whatever service you’re trying to access in 2026, the chances are you’ll have to navigate an AI obstacle first. Given how enthusiastically AI has been deployed as a barrier, it’s not difficult to see why people might have trouble trusting it as an enabler.

Lots of businesses don’t trust AI either

Are private citizens right not to trust business use of AI? Does business itself trust AI? In speaking to and surveying many Computing members over the last two years, we have found that many companies are AI-curious, but there’s an inertia, partly fuelled by risk aversion.

Erin Young is Head of Tech Policy at the Institute of Directors. Speaking at the same event as Hannah Perry and Gaia Marcus, she shared some results of a survey of its 20,000 members that the IoD carried out last year.

“There is enthusiasm, but it is tempered by a deep uncertainty and scepticism. Boards are really concerned about accuracy and reliability of systems, what's happening to their data, IP, ethics, security risks. There’s a pushback against the AI hype, and this lack of trust is really stalling responsible adoption.

“The fragmentation across sectoral regulation is a huge risk for mainstream business. We have different regulators moving at different speeds and directors are incredibly concerned that this leaves them open to commercial risk, legal risk, financial risks, reputational risks when adopting AI.”

At Computing, we are currently reviewing submissions for our inaugural AI Leadership Index, which celebrates the individuals innovating with AI. When asked about the challenges, many submissions talk of the lack of trust from employees at the beginning of the process and explain how they have overcome this distrust with education, collaboration and transparency.

It is possible to build – and rebuild – trust in AI between individuals, employers, institutions and the state.

The next feature in this series discusses how we break the doom loop, with examples of organisations, in both private enterprise and the public sector, which are doing just that.