EU approves AI Act

Aims to regulate high-risk AI applications, without harming innovation

Negotiators agreed the wording of the Act late last year, and MEPs have now voted to adopt it. However, the Act will not come into force until May 2025.

The Act takes a comprehensive approach to regulating artificial intelligence. It encompasses binding regulations covering aspects from risk assessment to copyright protection.

The AI Act places the EU at the forefront of AI regulation, although some critics have raised fears that it could harm innovation.

Bruna de Castro e Silva, AI governance specialist at Saidot, said, "While some seek to present any AI regulation in a negative light, the final text of the EU AI Act is an example of responsible and innovative legislation that prioritises technology's impact on people.

"When the EU AI Act comes into force, 20 days after its publication in the official journal, it will enhance Europe's position as a leader in responsible AI development, establishing a model for the rest of the world to follow."

Although the UK is no longer part of the EU, trade body techUK has called on the government to implement its own "industrial strategy" aimed at accelerating AI adoption.

Antony Walker, deputy CEO of techUK, said, "We live with the fundamental issue that AI adoption and uptake is not straightforward. If we want to win the AI race, we need to look at the nuts and bolts of the deployment and adoption process."

How does it work?

The AI Act mandates that AI systems be evaluated based on their potential risks. Some systems, like applications that pose a "clear risk to fundamental rights," will be banned entirely.

Low-risk services like spam filters will face minimal regulation, but high-risk applications - those used in critical infrastructure, law enforcement or healthcare, for example - will be subject to stricter obligations.

Unlike the rules for general-purpose AI, these obligations will not apply until three years after the AI Act comes into force, probably in early-to-mid 2027.

High-risk systems will fall under the oversight of national authorities, and it is now up to each EU member state to set up these agencies; they have 12 months to do so.

Generative and general-purpose AI is not exempt from the law. Producers of these systems will need to be transparent about the data they use to train models, and must comply with EU copyright law.

Several firms, including ChatGPT developer OpenAI, are facing lawsuits over their use of copyrighted material to train AI language models.