EU reveals proposals to regulate AI technology

AI systems that pose a clear threat, including those that manipulate human behaviour, opinions or decisions or enable 'social scoring', will be prohibited under the guidelines

European Union lawmakers on Wednesday presented the bloc's first-ever legal framework for regulating high-risk applications of artificial intelligence (AI) technology within its single market.

The EU said that it wanted to achieve "proportionate and flexible rules" to address the risks of AI and to strengthen Europe's position in setting the highest standards for regulating AI technology.

"Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centred Artificial Intelligence, and the use of it," said European Commission EVP, Margrethe Vestager at a press conference.

Under the new proposals, most AI uses will not face any regulation, although a subset of so-called "high-risk" uses will be subject to regulatory requirements.

The proposals, presented in a white paper, state that high-risk AI systems, for example those used in law enforcement, critical infrastructure and healthcare, must be traceable, transparent and placed under human oversight.

AI systems that pose a clear threat to the livelihoods, safety and rights of people will be banned. These include systems that manipulate people's behaviour, opinions or decisions in ways that affect their access to public and private services.

Systems that enable "social scoring" of individuals based on their personality or behaviour would also be prohibited under the new AI guidelines.

Additionally, the EU regulators want vendors to provide unbiased data sets for their AI applications. They also want to see an EU-wide discussion over the use of facial recognition technology, which has become a leading point of concern among privacy advocates across the world.

However, there will be several exemptions for national security and other purposes. The new rules will not cover military applications of AI, and exemptions will also apply to public security and anti-terrorism activities.

The guidelines would apparently build on the data privacy protections introduced by the EU's General Data Protection Regulation (GDPR).

Companies that violate the rules may face fines of up to four per cent of their global turnover.

The European Commission said that it is putting forward the proposed regulatory framework with the aim of ensuring that AI systems placed on the EU market are safe and respect fundamental rights, while providing legal certainty to encourage investment and innovation in AI.

Google and other big tech firms, which have invested heavily in developing AI technology, are said to be taking the EU's AI framework very seriously.

Tech industry group Dot Europe - whose members include Apple, Airbnb, Google, Microsoft, Facebook and other tech firms - welcomed the release of the new proposals.

However, some civil liberties activists complain that the framework does not go far enough in restricting potential abuses of technologies that look set to have an extensive impact on everyday life.

"It still allows some problematic uses, such as mass biometric surveillance," said Orsolya Reich of umbrella group Liberties. "The EU must take a stronger position... and ban indiscriminate surveillance of the population without allowing exceptions."