Business and policy leaders join peers in renewed call for cross-industry AI legislation
Uncertainty is stalling adoption and damaging public trust
Peers, business leaders and policy experts are calling for cross-industry AI legislation, warning that the government’s “wait and see” approach is failing business, citizens and the economy.
UK businesses are being urged to adopt AI at speed to drive productivity and growth, yet the absence of cross-industry, outcomes-focused legislation is leaving boards exposed to legal, financial and reputational risk. This is according to speakers at yesterday’s launch of a new Parliamentary One Pager (POP) by Lord Holmes of Richmond calling for comprehensive AI legislation.
At the Westminster event, hosted by Lord Holmes, peers, policy experts, business leaders and senior figures from UK think tanks and academia warned that the government’s “wait and see” approach to AI governance is not tenable, and that regulatory uncertainty is actively holding back responsible AI deployment across private enterprise and the public sector.
Introducing the initiative, Lord Holmes said the purpose of the one-page parliamentary briefing was “to push the government to introduce cross-sector, cross-economy AI legislation.”
He continued: “This government in opposition were quite keen on legislation. Since coming into office, wait and see has become the approach,” describing it as ill-suited to a technology already shaping decisions across society and the economy.
Uncertainty is slowing AI adoption
While much of the debate has focused on citizen harm and democratic risk, Erin Young, Head of Tech Policy at the Institute of Directors, argued that mainstream UK business is increasingly expected to deploy AI rapidly, while absorbing the consequences if systems fail.
“On one hand, AI is critical for growth. You’ve got to adopt AI as quickly as possible,” she said. “But on the other hand, you’re responsible if it all goes wrong.”
Young, whose organisation represents around 20,000 directors and senior business leaders, said the current fragmented regulatory landscape is creating deep anxiety in boardrooms.
“Uncertainty makes boards very, very nervous,” she said, noting that directors are being asked to approve AI strategies “without a clear sense of what AI is for them, what guardrails are available, or even if there are any guardrails at all.”
In the absence of clear AI-specific law, businesses are relying on “a patchwork of existing laws like data protection,” which she said are being “stretched to fit AI,” leaving “huge gaps” around liability, accountability and good practice.
“Who is liable if an AI system adopted by a company causes harm?” Young asked. “What does good practice actually mean?”
She warned that this ambiguity disproportionately affects small and medium-sized enterprises, which lack the legal resources of large firms. “While large firms with big legal budgets can absorb this uncertainty, SMEs will face disproportionate consequences,” she said, particularly outside the tech sector.
Young argued that the government’s emphasis on AI-driven growth sits uneasily with weak governance. “If companies are reticent to employ AI systems because the governance isn’t there, then we won’t have adoption at the scale that we want or need,” she said.
Her conclusion was blunt: “Businesses don’t fear regulation. They fear uncertainty.”
Other speakers reinforced the case for a cross-sector framework. Lord Clement-Jones, former chair of the Lords AI Select Committee, said that voluntary codes and sector-by-sector approaches were insufficient. “It’s not about voluntary codes and industry self-regulation. It’s about binding legislation,” he said, rooted in long-standing principles of “transparency, accountability, fairness and human oversight.”
Trust in AI
Gaia Marcus, Director of the Ada Lovelace Institute, warned that the current piecemeal responses risked turning into a “whack-a-mole approach” to AI harms. “We will not get the benefits of technology if we do not manage its risks,” she said, pointing to polling the Institute published last year which found that 91% of the public want AI to be used fairly.
She continued, “When presented with trade-offs, they want fairness to be prioritised over economic gains, speed and innovation, and even international competition.
“The public feel disenfranchised and excluded from AI decision making. 84% of the public fear that when regulating AI, the government will prioritise its partnerships with large technology companies over the public interest. 81% support an independent AI regulator.”
Hannah Perry, interim Director at Demos Digital, took the argument further, contending that effective AI governance could help rebuild trust between citizens, the state and institutions, a relationship currently in a parlous state.
“At Demos, we believe that we’re living in a democratic emergency fuelled by the breakdown of relationships between the state and the citizen,” Perry said, calling for “binding and enforceable, cross-sector, human rights-based AI regulation.”
“This has to be at the heart of a new deal between state and citizen, providing obligations for the design, deployment and ongoing evaluation of AI technologies across all sectors by state, non-state and private actors.”
For business leaders, the message of the event was clear: without legal clarity, responsible adoption will stall. As Erin Young put it, “Good governance doesn’t stop innovation. It enables innovation,” giving boards “something to govern against” and the confidence to invest.
Summing up, Lord Holmes said the status quo is failing on every front. “It’s not working for citizens, it’s not working for our society, it’s not working for our communities, and it’s not working for business.”