Rolling out AI that works: The trust factor

The trust deficit is one of the most serious blockers to an AI rollout, but it can be overcome with attention to safeguards

AI is new, fast-moving, hyped to the heavens - and potentially revolutionary. Small wonder, then, that most organisations feel compelled to adopt it yet are struggling to make it work.

There are several pull factors for the C-suite, including potential productivity gains, efficiency boosts and jumping ahead of the competition. But against this are the drag factors: immaturity of the technology, technical debt, and as we explored recently, a lack of trust in both the potential outcomes and the AI companies themselves.

This trust deficit is one of the most serious blockers to an AI rollout. People who are fearful, suspicious or uncertain are unlikely to step forward to help make it work.

"If you want employees to adopt AI, you need assurance in every step," says David Girvin, AI security researcher at Sumo Logic. "Most importantly, try to stress how these moves will support the organisation to do a better job. If employees don’t trust their leaders, why would they trust their AI tools to deliver either?"

Image: David Girvin, Sumo Logic

Responsible by design

AI safety may be out of fashion in some circles, but it’s precisely what's required for a successful rollout.

Employees need assurance that the tool they are being asked to use is not going to leak their private information, cause harmful outcomes because of bias or do them out of a job. Managers are extremely mindful of the reputational and financial risks that could result from an unpredictable AI running amok.

Despite this, AI safety and ethical AI can seem like vague concepts, complex and hard to implement.

In fact, it is entirely possible to innovate with AI in ethical and safe ways, says Heather Dawe, chief data scientist and head of responsible AI at tech transformation company UST. More than a nice-to-have, AI with transparency, explainability and human oversight baked in will deliver higher success rates.

Dawe works with financial institutions, healthcare providers, consumer product designers and other businesses, helping them identify where AI can add value and working to scale prototypes that realise that value. Value is not just about efficiencies and stripping out layers; improvements in quality are another important measure of success.

Trough of disillusionment

Currently, AI is in the “trough of disillusionment” phase of the hype cycle, says Dawe, but the negative headlines should diminish as it beds in.

Image: Heather Dawe, UST

“The whole industry is learning how to gain value with AI. There's a lot of negativity about it but there's a whole learning process going on, and in five years' time we will be in quite a different place.”

But that won’t happen on its own. Achieving those quality, efficiency and productivity improvements requires both a culture of continuous learning and making AI responsible by design.

“Responsible by design” is a shorthand for embedding ethical considerations at the architectural level, tracking metrics for explainability, fairness, privacy and robustness, and enforcing policy through automated management systems and guardrails.

It’s about creating systems that, with a high degree of confidence, will always operate within acceptable limits.
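To make that concrete: a responsible-by-design pipeline can encode its acceptable limits as an automated release gate, so a model that drifts outside them never ships. The sketch below is a minimal illustration in Python; the metric names and thresholds are hypothetical, not drawn from any particular standard.

```python
from dataclasses import dataclass

# Hypothetical policy limits, for illustration only; real thresholds would
# come from the organisation's own AI governance standards.
@dataclass
class ResponsibleAIPolicy:
    min_fairness: float = 0.80        # e.g. disparate-impact ratio
    min_explainability: float = 0.70  # e.g. share of outputs with a rationale
    max_privacy_leaks: int = 0        # PII incidents found during evaluation
    min_robustness: float = 0.90      # accuracy retained under perturbation

def release_gate(metrics: dict, policy: ResponsibleAIPolicy) -> bool:
    """Allow release only if every tracked metric sits within policy limits."""
    return (
        metrics["fairness"] >= policy.min_fairness
        and metrics["explainability"] >= policy.min_explainability
        and metrics["privacy_leaks"] <= policy.max_privacy_leaks
        and metrics["robustness"] >= policy.min_robustness
    )

evaluation = {"fairness": 0.75, "explainability": 0.90,
              "privacy_leaks": 0, "robustness": 0.95}
print(release_gate(evaluation, ResponsibleAIPolicy()))  # False: fairness too low
```

The point is the pattern rather than the numbers: acceptable limits are written down once, in policy, and checked automatically on every release.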

With responsible AI, a big hurdle to adoption is lowered.

“Build guardrails into any AI project with security so employees aren't scared of using the tools or inadvertently leaking data,” advises Girvin.

Right tool for the job

But stepping back for a moment: where is AI most likely to add value at scale?

The answer will vary from organisation to organisation, but a good candidate will be a task or workflow that takes a lot of time and/or effort each week to achieve, where the desired outcome is nonetheless clear, and where the underlying process is sound. If the underlying process is complex or compromised, AI is unlikely to help.

“If you slap AI on junk, it is still junk, just faster junk,” remarks Girvin. “There is a huge missing gap: understanding what AI products actually do and if they are beneficial to your use cases.”

Selecting the right tool for the job is also vital. While the generative branch is currently hogging the AI limelight, it should not be all about the new and shiny; there’s also “traditional” machine learning, which is better suited to optimisation and prediction problems.

“Blending together the more classic styles of machine learning with the newer methods is yielding even more benefit,” says Dawe.
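As a rough illustration of that blend, the sketch below pairs a classical scikit-learn model, which makes the prediction, with a stubbed generative step, which would draft the plain-language explanation. The churn scenario, feature set and draft_explanation helper are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))    # hypothetical features, e.g. tenure, usage, complaints
y = (X[:, 2] > 0.5).astype(int)  # hypothetical label, e.g. churned or not

# "Traditional" machine learning handles the prediction...
model = GradientBoostingClassifier().fit(X, y)

def draft_explanation(prob: float, importances: np.ndarray) -> str:
    """Placeholder for a generative-model call that would write the narrative."""
    top = int(importances.argmax())
    return (f"Estimated churn risk {prob:.0%}; strongest driver is feature {top}. "
            "(In practice an LLM would draft this summary for the account team.)")

# ...while the generative step turns the result into plain language.
customer = rng.normal(size=(1, 3))
prob = model.predict_proba(customer)[0, 1]
print(draft_explanation(prob, model.feature_importances_))
```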

Start with governance

Responsible by design means you should start with governance, perhaps aligning with ready-made standards such as ISO/IEC 42001:2023 or UNESCO’s Recommendation on the Ethics of AI, or with relevant regulations such as the EU AI Act. Attention should be paid to how the governance will scale without becoming a blocker. This is likely to mean automating guardrails, for example with systems that automatically parse prompts and outputs.
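For illustration, a guardrail of that kind can be as simple as a scanner that sits in front of the model and refuses to pass prompts or outputs containing obvious personal data. The patterns below are a minimal, hypothetical Python sketch, not an exhaustive privacy control.

```python
import re

# Illustrative patterns only; a production guardrail would use a far more
# thorough detector and sit on both the prompt and the response path.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def guardrail(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the message if any pattern matches."""
    findings = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(text)]
    return (not findings, findings)

allowed, findings = guardrail("Summarise the ticket from jo.bloggs@example.com")
print(allowed, findings)  # False ['email'] -> redact or route for human review
```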

Under the EU AI Act’s risk-based approach, high-risk applications carry stricter compliance obligations, while low-risk uses are more lightly regulated.

Issues of bias in models and model drift should be identified and addressed early in the process, with diverse input to ensure proper representation.

“You've got developer bias. It's not just in the people who are coding the AI - or who are using Gen AI to code their AI - product developers and others will have bias, and there’s a built-in bias in the data,” says Dawe.

It’s well known that AI systems can have biases against women and minorities.

“Fifty percent of humans are women, so [we should] push towards 50% of the workforce in any space. That’s the right thing to do, but also you're going to get AI and AI services that are much more reflective of society.”
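One common early check is the disparate-impact ratio: compare the rate of positive model decisions between demographic groups and flag the system if the ratio falls below the widely used four-fifths rule of thumb. Below is a minimal sketch with illustrative data and group labels; real audits would track many more metrics.

```python
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-decision rates between two groups (lower / higher)."""
    rate_a = decisions[group == "a"].mean()
    rate_b = decisions[group == "b"].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative data: group "a" is approved 60% of the time, group "b" 20%.
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["a"] * 5 + ["b"] * 5)
print(f"{disparate_impact(decisions, group):.2f}")  # 0.33, well below 0.8
```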

Strong data governance underpins AI governance, and improving this foundation will produce multiple benefits, as well as helping the AI intervention to scale safely.

Another vital consideration is accountability, with responsibilities clearly defined and owned by an AI governance board or a named member of staff.

Creating an AI-ready culture

It’s a cliché that the tech is the easy part of any transformation, but that’s because it’s true. The hard part is building a receptive culture. If people are not receptive, no intervention will scale and deliver value.

Building an ethical AI culture requires leadership commitment, employee training, and the establishment of AI governance committees or the appointment of accountable executives.

As a technology and AI company, UST tries ideas in-house before exporting them, Dawe explains.

“We’re training everyone in new AI skills, encouraging people to use it safely and responsibly. We run things like innovation days and competitions and hackathons, and because we're an AI company we need to stay ahead of the curve, and we need to really help our people to change with it.”

The bigger picture

The issues of AI viability, culture and trust go beyond individual organisations. The last couple of years have seen a rise in the number of international AI conferences, notably the recent AI Impact Summit in New Delhi, in which tech companies, academics, governments and user groups meet to align their efforts and shape the future. There is also increasing recognition of the importance of open technologies in democratising responsible AI so it can deliver real value.

Image: Amanda Brock, OpenUK

“Open tech is inevitable. Eventually it became the dominant force in software and over time the same will happen in AI. It just needs a bunch of stars to align,” states OpenUK CEO Amanda Brock.

She points to the example of the Model Context Protocol (MCP), which was created by Anthropic and is now housed in a foundation to safeguard its future.

“This gives reassurance that it is not only open now but always will be open and allows innovators to rely on it to build and create interoperability. Interoperability in our general-purpose AI and agentic AI will become increasingly critical with the passage of time.

“We must learn from our past mistakes in recent digital history and not end up locked into a few big providers.”

Brock continues: “If AI is to be safe, inclusive and sustainable we must see local structures to hold IP and projects, enabling their good governance and co-creation. There must be the funding for open communities, AI ops tools that are open source in a commons, where we can all use them to innovate, develop skills and access compute.

“We don’t need laws - we need tools, access to innovation and compute.”