Caught in regulatory crossfire – how diverging data laws are threatening trade
We don’t need lighter tech regulation; what we need is regulation that is clear
As the EU presses ahead with its artificial intelligence and data agenda, backlash at home and divergence abroad are raising concerns about the bloc’s ability to sustain its regulatory leadership. Can Europe hold that position while shaping the future of AI and data governance?
With the EU pursuing comprehensive, prescriptive regulation, the UK leaning toward principles-based, market-friendly rules and the US prioritising innovation acceleration under the Trump administration, the global digital economy is increasingly fractured along jurisdictional lines.
While Europe hopes to position itself as the global standard-setter on data rights and AI through comprehensive legislation, the reality on the ground is increasingly complex: legislation overlaps and businesses struggle with compliance. Inaction risks leaving citizens and society unprotected; overreaction risks leaving the continent’s economy trailing its competitors.
Divergence
The EU AI Act, which is being rolled out gradually ahead of the 2027 compliance deadline, adopts a risk-based approach, categorising AI systems by their potential for harm and imposing stringent obligations on high-risk uses. By contrast, the UK has proposed a pro-innovation regulatory framework for AI, relying on existing law and sectoral guidance. Across the Atlantic, the US has recently published its AI Action Plan, which signals a clearly deregulatory direction, prioritising speed and market leadership.
Meanwhile, Asia is charting its own course. Singapore’s voluntary AI governance framework focuses on transparency, accountability and human-centric principles without creating a heavy compliance burden. This flexible model has positioned Singapore as a hub for responsible AI innovation. Japan and South Korea are following similar paths, favouring agile frameworks and industry collaboration over exhaustive legislative detail.
The UK’s data adequacy agreement with the EU, last renewed in 2021, is now up for review. If the UK’s shift to a looser “not materially lower” test under its Data Use and Access Act fails to meet the EU’s “essential equivalence” standard, adequacy could be revoked. The fallout would be serious: barriers to data flows would hit sectors like digital services, cloud computing, finance and e-commerce, and could make the UK less attractive to global businesses.
Some firms might even choose to relocate. Meanwhile, some observers argue the UK is hedging its bets, eyeing closer ties with the US as Washington ramps up pressure on Brussels to ease constraints on American Big Tech.
The regulatory maze – unclear, overlapping and increasingly complex
Europe’s ambition to harness AI for economic growth and societal good is laudable. But its regulatory landscape is becoming unwieldy.
The EU must pursue that ambition in a way that protects citizens' rights, ensures fairness and maintains public trust. And even if the UK retains adequacy, the EU’s rules remain complicated and present real financial challenges for businesses.
Across the EU, businesses, civil society groups and even national regulators are raising concerns about the current regulatory landscape. Organisations must now navigate the GDPR, the Digital Markets Act, the Digital Services Act, the Data Governance Act, the Data Act and, soon, the AI Act. Overlap is inevitable; clarity is not.

The Open Data Institute’s (ODI) European Data and AI Policy Manifesto aims to help address these challenges. It calls on everyone in the European data ecosystem to leverage the Union's collective strength and pioneer a distinctly European, data-centric and people-centric model of AI development – one that balances innovation with responsibility, economic growth with societal well-being, and technological advancement with unwavering respect for human rights and European values.
In an open letter published on 3rd July 2025, 44 CEOs from firms including Airbus, BNP Paribas, Carrefour, Lufthansa and Philips warned that “unclear, overlapping and increasingly complex EU regulations jeopardise not only the development of European champions, but also the ability of all industries to deploy AI at the scale required by global competition.”
Compliance costs are soaring, and smaller firms face the steepest barriers. As Michael Pisa of the Center for Global Development notes, “Large tech firms can pay the compliance price, but for smaller players, it’s a major barrier.” Contrast this with Singapore, where principles-based regulation evolves with technology and regulators actively engage with industry – an approach that fosters trust while keeping innovation alive.
The Commission has acknowledged these concerns. Its 2025 work programme, A Simpler and Faster Europe, commits to an assessment of the digital acquis, including GDPR. The objective is to clarify contradictions, streamline rules, close gaps and improve enforcement, particularly for SMEs. But will this rationalisation come fast enough?
Simpler laws won’t spark innovation on their own – access to capital, a supportive entrepreneurial culture and effective enforcement all matter just as much. Some EU laws get the balance right, but others risk doing more harm than good if they're poorly implemented or unevenly enforced.
For UK firms, the challenge is even more acute: how do they stay compliant with multiple diverging frameworks? The real threat isn’t overregulation; it’s fragmentation – a digital trade environment so fractured that only the largest players can navigate it, while smaller innovators are priced out.
The risk of complex overregulation should set alarm bells ringing in Brussels. As the Draghi report made clear, the economic gap between the EU and the US is reflected in declining European productivity and living standards. While the AI revolution offers a chance to revitalise Europe's economy by integrating intelligent technologies across traditional industries, it can only do so if the complexity of its legislation doesn’t act as a barrier to innovation.
And although the UK is no longer an EU member, its economic performance is still tied to European markets and regulatory frameworks; any decline in EU competitiveness will also be felt on this side of the Channel, particularly by firms with supply chains or clients on the continent.
Charting a way forward
Although frustration continues to mount over the complexity of Europe’s digital rulebook, the EU still has an opportunity to lead in the global data and AI landscape – if it can adjust its course. The ODI’s European Data and AI Policy Manifesto presents a set of practical suggestions based on the ODI's six key principles for a coherent and trustworthy data and AI ecosystem.
These principles emphasise building strong, open data infrastructure; fostering trust through inclusive participation and robust assurance mechanisms; enabling independent scrutiny; promoting equity and inclusivity; and advancing skills and data literacy. The manifesto stresses the importance of balancing innovation with responsibility, ensuring robust protection of rights and preserving European values while positioning Europe as a leader in data-driven technology and AI.
Other regions are already moving fast. Singapore and Japan have been prioritising adaptable, collaborative governance. California, home to some of the most innovative firms in the world, demonstrates that targeted legislation like the Delete Act (effective 2026) can coexist with dynamic tech ecosystems.
Europe doesn’t need lighter regulation; it needs regulation that is clear, consistent and streamlined. Adding new layers on top of old ones isn’t the answer. Simplification, not expansion, should be the mantra.
The ODI believes that the foundation of AI innovation is a strong, ethical data ecosystem – one that enables data to flow seamlessly while upholding our democratic rights. Central to this is transparency: people need to know how data about them is used, who is making decisions and what safeguards are in place.
Equipping people with the skills to understand and engage with data is another priority, alongside strengthening international partnerships to tackle shared challenges. At its core, the manifesto promotes an approach to data and AI that is open, participatory and focused on real-world outcomes, ensuring these technologies work for people, not just for markets.
The road ahead
The Commission’s digital acquis review offers a critical opportunity to reset the agenda. Rather than layering yet more new laws on top of old ones, there is a chance to clarify what works, fix what doesn’t and ensure data governance is inclusive and ethical. We don’t need lighter tech regulation; we need regulation that is clear, well-structured and free of overlap, and is therefore applied consistently. Equally important, those rules must have teeth: strong enforcement is essential if compliance is to deliver real benefits rather than become an expensive box-ticking exercise.
While the ODI is based outside the EU, it shares many of the same aspirations, including fostering innovation, safeguarding fundamental rights, ensuring responsible data stewardship and supporting the development of AI that serves society. In this spirit, the ODI would like to see the rules that companies must follow simplified.
There is still time to get this right. But delay and fragmentation come with a price – our prosperity and credibility are on the line.
Resham Kotecha is Global Head of Policy at the Open Data Institute