EU unveils 'human centric' artificial intelligence data strategy

New strategy is intended both to fuel the development of AI and data-driven business across the EU, and to regulate it

The European Union has published its European data strategy [PDF], intended to provide the framework for what it describes as human-centric artificial intelligence.

The white paper, said President of the European Commission, Ursula von der Leyen, is intended to "shape Europe's digital future". She continued: "It covers everything from cybersecurity to critical infrastructures, digital education to skills, democracy to media. I want that digital Europe reflects the best of Europe - open, fair, diverse, democratic, and confident."

However, while the strategy has been pitched as boosting the EU's technology sector and preparing the bloc for a shift to an ever-more data-driven economy, increasingly governed by AI, it is also driven by a desire to regulate artificial intelligence and data platforms before they take off. Indeed, back in December, von der Leyen promised GDPR-style legislation around AI across the European Union.

"Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole," the report asserts.

AI, it continues, could help achieve the EU's Sustainable Development Goals and support "the democratic process and social rights", but it adds that "the environmental impact of AI systems needs to be duly considered throughout their lifecycle and across the entire supply chain".

Furthermore, it adds, "a common European approach to AI is necessary to reach sufficient scale and avoid the fragmentation of the single market. The introduction of national initiatives risks to endanger legal certainty, to weaken citizens' trust and to prevent the emergence of a dynamic European industry".

The white paper, therefore, is intended to provide "policy options to enable a trustworthy and secure development of AI in Europe".

The strategy's main building blocks will be:

* A policy framework intended to align the strategy across Europe "in partnership between the private and the public sector". The aim is to provide what the white paper describes as "an ecosystem of excellence along the entire value chain";

* A regulatory framework for AI in the EU in order to create "an ecosystem of trust" that "ensure(s) compliance with EU rules". This regulatory framework will be based on the Communication on Building Trust in Human-Centric AI, as well as taking into account input from Ethics Guidelines prepared by the EU's High-Level Expert Group on AI.

Commenting on the new strategy, John Buyers, head of International AI at law firm Osborne Clarke, picked his words carefully: "Getting regulation right around a fast-changing, very powerful emerging technology is not easy and the Commission's horizontal, one-size-fits-all approach is very ambitious. A lot of industries will be concerned that the right balance has been struck between enabling a vibrant European market in these new technologies and protecting the rights of EU citizens."

He indicated, though, that Brexit might present an opportunity for the UK to pitch itself as a more tech-friendly locale, although much uncertainty remains: "For post-Brexit UK, this initiative is highly significant - we know that the government is actively considering regulatory divergence where it would serve UK interests. Data and AI are areas where we can't assume the UK will opt for alignment. So this White Paper sets a clear threshold for UK regulatory bodies to work with in deciding the right direction for the UK AI industry. Which direction are we going to take? The decision could prove to be highly determinative."

Jake Moore, a cyber security specialist with ESET, meanwhile, argued that AI in particular needs regulating urgently: "AI is moving at such a pace that we need to regulate it before it gets out of control. However, one of the major issues is that when creating AI there is a huge input by humans, and humans are naturally prone to having all sorts of bias.

"Machine learning, which is at the heart of AI, includes external factors such as opinions and feelings, which all help influence the decision makers. AI and machine learning data is only as good as the data you feed it, so the regulation of this needs to start at the earliest stages... Regulation, however, will be extremely difficult to follow through with, especially as much of this won't be relevant to the rest of the world."
