Experts debate moral issues of artificial intelligence at ESOF 2018

Delegates heard that the digital age 'desperately needs' regulation around technologies like AI and machine learning

A professor of ethics and technology has told scientists and policymakers that digital technologies like artificial intelligence 'desperately' need to be regulated through an institutional framework and system of values.

Jeroen van den Hoven, of Delft University of Technology in the Netherlands, told delegates at the EuroScience Open Forum (ESOF) 2018 in France that people are becoming aware that the digital age is not neutral.

"People are becoming aware that this digital age is not neutral," he said. "It is presented to us mainly by big corporations who want to make some profit."

Van den Hoven, a member of the European Group on Ethics in Science and New Technologies (EGE), said: "We need to think about governance, inspection, monitoring, testing, certification, classification, standardisation, education, all of these things. They are not there. We need to desperately, and very quickly, help ourselves to it."

He also spoke about the need for a Europe-wide network of institutions that could provide a set of values, based on the EU's Charter of Fundamental Rights, which the technology industry could use to inform future work on AI.

The EGE published a statement on AI, robotics and autonomous systems earlier this year, which highlighted the moral issues around the topic and encouraged the development of a structured framework.

Following the release of the document, the European Commission announced in June that it had formed a group of 52 people from academia, science and industry, with the aim of developing guidelines for the EU's AI policy. The group will present its conclusions early next year.

Ethics in robotics

Ethical issues were a hot topic at the ESOF conference, Phys.org reports. One of the major points discussed was the lack of transparency in AI and machine learning, especially around neural networks. In such a system, humans can see the data going in and the answers coming out, but not how those conclusions are reached.

Maaike Harbers, of Rotterdam University, pointed out the ramifications of such opacity for military drones:

"In the military domain, a very important concept is meaningful human control," she said. "We can only control or direct autonomous machines if we understand what is going on."

Other topics of concern included the effect of companion robots on children, which may impact their social relationships, and autonomous cars, discussed through the well-known moral quandary of which is the greater evil: a self-driving car that hits a group of people, or one that swerves to avoid the group and hits a single person.

Ebru Burcu Dogan, of France's Vedecom Institute, said that while research has shown most people favour the pragmatic solution (hitting fewer people), they would not want to buy or ride in a car programmed to act that way.