UK workers say AI agents are 'unreliable'
When AI agents fail, responsibility is unclear
British employees are sounding the alarm over the rise of AI agents in the workplace, warning that the tools are unreliable, unresponsive to feedback, and in some cases creating more work instead of reducing it.
That's according to the Global State of AI at Work 2025 Report, commissioned by work management platform vendor Asana.
The study surveyed 2,025 workers worldwide, including 1,021 in the UK, and paints a picture of a workforce eager to embrace AI but held back by trust and oversight gaps.
Employees expect to hand off almost a third (32%) of their workload to AI within a year, and 41% within three years. Yet today, only one in four (25%) say they feel ready.
Even as use grows, confidence is lagging. Nearly two-thirds (64%) of workers describe AI agents as unreliable. More than half say the tools confidently produce incorrect results or ignore feedback altogether. Without clear accountability, mistakes go unresolved, undermining both efficiency and trust.
Still, adoption is accelerating, with 74% of workers already using AI agents in some form and 76% considering them a transformative change in the way work is carried out.
Employees are most likely to use them for administrative tasks such as organising documents (30%), locating files (29%), and scheduling meetings (26%).
In fact, 68% would rather delegate these jobs to AI than to colleagues.
However, the report warns that without stronger guardrails, adoption could stall at the "admin level," leaving organisations unable to unlock deeper productivity gains.
Accountability vacuum
Responsibility for failures remains murky. Some workers blame end users (20%), others point to IT teams (18%) or creators (9%), but the largest share (39%) say no one is accountable, or that they simply don't know.
Few organisations have introduced safeguards. Just 10% have ethical frameworks for AI agents, 10% use structured deployment processes, and 9% review employee-created tools.
Oversight is inconsistent too: more than a quarter (26%) of companies let staff build AI agents without approval, and only 12% have formal rules on what should be handled by humans versus AI.
Even basic error tracking is rare. Only 18% of organisations measure AI mistakes, despite 63% of workers ranking accuracy as the top metric.
Without such systems, mistakes are repeated, contributing to what Asana calls "AI debt" – the mounting costs of unreliable systems, poor data, and weak oversight.
Nearly four in five (79%) organisations say they expect to accumulate this burden.
Lack of context is a key frustration. Nearly half (49%) believe agents don't understand their team's work, while 47% say they often pursue the wrong priorities.
Widening training gap
Workers also point to a lack of preparation. While 82% believe training is essential for effective AI use, only 32% of organisations have provided it. More than half (53%) want clearer boundaries between human and AI responsibilities, and 58% are calling for formal usage guidelines.
Crucially, workers remain open to collaboration, preferring to delegate some tasks to AI agents over human colleagues, but they stress that guardrails, context, and training must come first.
The report concludes that organisations must start treating AI agents as teammates rather than simple tools.
That means embedding feedback loops, defining responsibilities, supplying context, and putting accuracy at the centre of evaluation.
The fact that Asana sells tools purporting to rectify some of these issues should prompt a degree of scepticism, but the findings do chime with other recent reports on attitudes to AI in the workplace.
A report by the Tony Blair Institute this week found that more Britons believe that AI poses an economic threat than see it as a driver of opportunity. The TBI called for regulation that makes AI "trustworthy" and visible campaigns that demonstrate tangible benefits.
Earlier this month, a survey revealed that British workers are concealing their use of AI at work, with many fearing that revealing reliance on the technology could damage their reputation.