Breaking the doom loop: How to rebuild trust in AI

Because it’s the only path to sustainable AI adoption

Image: Humans made the AI doom loop, but we can break it

Rebuilding public trust in AI requires meaningful citizen engagement, transparent governance, and robust legislation. By involving communities and prioritising effective oversight, institutions can break the cycle of mistrust and enable sustainable AI adoption.

Part One of this article discussed why the age of mistrust poses such a challenge for public and private institutions trying to persuade us, as consumers, citizens and employees, to engage with AI. We now turn to how these institutions can educate and collaborate to break the doom loop of mistrust.

As we saw in the first article, technology itself is not the problem. The issue is that few people trust institutions to deploy it wisely and for their benefit. The first step, then, is to answer the following question:

What’s in it for me?

Speaking at the launch of a Parliamentary One-Pager in January, Hannah Perry, Director at Demos Digital (Interim), summarised how Demos is working to break the “democratic doom loop”.

“Innovation and AI-based technology could be part of the solution to reversing democratic decline,” she said, citing Demos’ experiments with AI-based deliberative technology to build policy and relationships between MPs, councils and communities.

Deliberative technology refers to digital tools designed to facilitate thoughtful discussion and informed decision-making – the emphasis being on the ‘thoughtful’ and ‘informed’.

Perry argued that government should involve citizens meaningfully and equitably at key points in the AI governance cycle. Participatory approaches, she suggested, would help navigate divisive and heated debates that have stalled policy developments around AI and copyright, chatbot safety standards and digital ID.

Lord Hague recently made a similar case for public engagement in the development of AI-enabled systems. Writing in The Times, he said:

“We need to make democracy healthy and even save its life. Give citizens power over technology, rather than the other way around.”

Amplify citizens’ voices rather than automate decisions

A strong example of this approach appeared among submissions to Computing’s AI Leadership Index from the London Borough of Camden. Chief Digital & Information Officer Tariq Khan described two challenges: avoiding the perception of AI being “done to people rather than with them”, and overcoming polarised, misinformed debate that makes nuanced discussion difficult.

Khan took a deliberative approach, positioning citizens as partners in the design of AI and data systems. He explains:

“At Camden, this meant working with residents to co-define the principles, rules and safeguards governing how data and AI are used, rather than relying solely on technical or legal expertise. Through long-running resident panels and partnerships with organisations such as the Alan Turing Institute and Involve, we co-created the Camden Data Charter, grounded in the belief that data rights are human rights.”

Image: Tariq Khan, Chief Digital & Information Officer, London Borough of Camden

Khan’s team designed Waves – an AI-powered platform, developed in partnership between the London Borough of Camden and Demos, that aims to transform how citizens are involved in public decision-making and which “enables meaningful deliberation at a scale that traditional engagement methods struggle to achieve.”

Khan explains further:

“The platform is designed to amplify citizen voice rather than automate decisions. AI is used as an assistive tool, supporting human judgement and transparency rather than replacing it. Strong governance, explainability and bias mitigation are embedded by design, aligning the platform with emerging global standards for trustworthy AI.”

More details will be available when the AI Leadership Index is published, but this project demonstrates that it is possible to rebuild both democratic trust and trust in AI, despite the challenges we collectively face.

Legislate

It isn’t just local government that should be taking responsibility for breaking the doom loop and rebuilding trust in how the state and its institutions use AI.

Lord Holmes of Richmond is campaigning for cross-industry, cross-sector AI legislation, arguing that the current ‘wait and see’ approach is failing to cultivate the trust, among both private enterprise and individuals, that is necessary to empower digital innovation.

Lord Holmes’s argument is compelling. A principles-based AI Bill can strengthen the obligations of transparency and accountability and enshrine meaningful public engagement in law.

It may not be sufficient to rebuild public trust in AI, but it is a necessary condition.

Innovation needs guardrails

Business leaders must also understand what’s in it for them. Headlines often frame AI as a route to job cuts and profit growth, but Erin Young, Head of Tech Policy at the Institute of Directors, argues the reality is more complex, particularly for smaller firms, which employ most of the UK workforce.

“Directors are responsible for risk compliance and long-term value creation, but many boards feel like they're being asked to decide very quickly on new AI strategies and approve AI use without a clear sense of what AI is for them and what guardrails are available,” she said.

“Businesses are currently relying on this patchwork of existing laws like data protection, which are being stretched to fit AI.”

Young rejects the idea that governance hampers innovation.

“Good governance doesn't stop innovation. It enables innovation. Done well, it means creating the conditions for safe experimentation, helping to build and maintain the trust of stakeholders and the wider public in business activities. It avoids costly missteps and encourages board level clarity and thus quicker investment decisions and quicker investment cycles.

“Effective governance is how boards give their organisations confidence to adopt AI rather than delaying or avoiding it, which is what we're seeing happening at the moment.”

The message is clear. If we want individuals and businesses to trust AI, we must legislate for it.

Client-centric AI

As Young pointed out, larger organisations can better absorb regulatory uncertainty. One example is global insurer AXA Group, where building trust sits at the heart of its AI strategy.

What’s in it for AXA customers? Use cases include agentic AI deployed to personalise claims journeys, with a view to making claims shorter and easier. This model stands in contrast to the “chatbot as a barrier to access” approach, often adopted as a consequence of radical headcount reductions in customer service teams.

Other use cases, like AXA Wildfire, empower customers with information about risks so that they can take action to mitigate them.

For employees, trust is maintained by keeping human expertise central. Cali Wood, Head of Data & AI Strategy and Culture at AXA, has emphasised that “domain expertise is non-negotiable” when building trust in AI systems.

A collaborative approach whereby data scientists work shoulder-to-shoulder with domain experts ensures that the AI supplements rather than replaces professional knowledge. This institutional expertise is, according to Matthieu Caillat, AXA Group Chief Technology & AI Officer and AXA Group Operations CEO, crucial in the retention of AXA’s competitive edge.

Trust is also reinforced through careful governance, with AI tools vetted by dedicated teams.

Caillat, speaking earlier this year, explained that maintaining architectural independence is critical for resilience and data protection – both essential conditions for AXA’s customers to continue to trust the insurer with their data.

"Locking in with a few partners is easier and faster, “he said. “But that is not the path we will take."

Trust is the precondition for breaking the technical, economic and democratic doom loop that we’re all trapped in. With trust earned through transparency, participation, robust governance and human-centred design, AI can strengthen institutions, businesses and democracy itself.

The choice facing leaders in business and the state is not simply how far and how fast they are prepared to go with AI, but how deliberately they are prepared to act to build and sustain the trust on which its legitimacy and success ultimately depend.