AI & ML latest: Google disbands another AI ethics committee

John Leonard

Tricky stuff, ethics

Google's attempts to assemble expert bodies to discuss the ethical aspects of AI seem to have run into further problems.

As reported in this blog, two weeks ago Google shut down its Advanced Technology External Advisory Council (ATEAC), which it had set up only days before to look at facial recognition, fairness in machine learning and other ethical issues. The closure came after some members objected to the makeup of the council, which included a fossil fuel lobbyist and the CEO of a military drone manufacturer (see below).

Now the Wall Street Journal reports that Google has "quietly disbanded another AI review board", which it had set up in the UK to look into AI in health care. This time the problems arose from disputes between panel members and Google-owned DeepMind over access to information, the WSJ reports, citing unnamed sources. Panel members were also unhappy that the board's recommendations carried no binding power, and about the closeness of DeepMind to its parent company.

The panel was established in 2016 alongside Google's AI healthcare project DeepMind Health, two years after the company acquired London-based AI startup DeepMind. The problems had reportedly been rumbling on for some time, with Google planning reforms, but it seems the company has now decided to cut its losses.

Losing one AI ethics board might be unlucky, but two is surely careless given the ethical hot water the company has found itself in lately.

10/04/2019 EC lists seven requirements for ethical AI

A few months ago (see below) we reported that the European Commission (EC) High-Level Expert Group was seeking feedback on its draft AI ethics guidelines. It has now published the results of the consultations as "seven key requirements that AI systems should meet in order to be deemed trustworthy".

These requirements, summarised, are as follows:

  • Human agency and oversight: AI systems should empower human beings while at the same time guaranteeing human oversight and ensuring that intervention is possible.

  • Technical robustness and safety: AI systems need to be resilient and secure, as well as accurate, reliable and reproducible, and there needs to be a backup plan in case things go wrong. "That is the only way to ensure that also unintentional harm can be minimised and prevented," the Group says.

  • Privacy and data governance: this requirement encompasses respect for privacy and data protection, governance mechanisms to ensure data quality, and controls to ensure legitimate access to data.

  • Transparency: briefly, the workings of models should be understandable and decisions made by AI explainable in terms that can be understood by the affected parties. Moreover, people should be made aware of the fact that they are dealing with AI, with the purpose and any limitations of the system explained.

  • Diversity, non-discrimination and fairness: AI systems should be free from bias and accessible by all groups.

  • Societal and environmental well-being: this states that systems should benefit all human beings, including future generations, so must be sustainable and environmentally friendly.

  • Accountability: mechanisms need to be in place to enforce responsibility and accountability for AI systems and their outcomes. The systems should be auditable, with mechanisms in place for redress, in case of bad outcomes.

The Group is now seeking further feedback to advance the conversation on the basis of the seven points above. The report concludes that Europe's citizen-centric approach to AI presents it with unique opportunities.


05/04/2019 Google abandons AI ethics board after controversy over membership

Google has closed down its new AI ethics board just a few days after it was announced.

The Advanced Technology External Advisory Council (ATEAC) was created by Google to look at ethical problems with AI including "facial recognition and fairness in machine learning," according to VP of global affairs, Kent Walker, but it immediately ran into controversy over the make-up of its eight-member board.

One of the eight was Kay Coles James, president of the US lobby group the Heritage Foundation which campaigns for the fossil fuel industry and against measures to mitigate climate change. James also riled Google employees over her stated views on trans people, according to Vox, leading thousands of them to petition against her inclusion on the council.

Dyan Gibbens, CEO of drone company Trumbull Unmanned, was another unpopular choice because of the use of the company's hardware by the US military.

The make up of the panel led Alessandro Acquisti, a professor of IT at Carnegie Mellon University, to decline Google's invitation to join ATEAC. "I don't think this is the right forum for me to engage in this important work," he tweeted. Other council members were also pressured on social media to withdraw.

On Thursday, Google bowed to the inevitable.

"It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics," the company said in a statement.

Google was recently forced to publicly abandon its secret work on a censored search engine for the Chinese government after a revolt by members of staff.

22/03/2019 Fetch.AI reveals first details of its decentralised consensus protocol

Cambridge-based AI startup Fetch.AI has created a new consensus protocol for its decentralised market ecosystem.

In the absence of a central authority, the means by which nodes in a decentralised system come to an agreement about events on the network is critical in defining its performance and security. Consensus protocols are therefore a big part of networks' DNA.

The best known decentralised consensus protocol is the bitcoin blockchain's Proof of Work (PoW) mechanism, which governs the order in which transactions are added to the immutable distributed ledger and protects against double-spend. There are a number of other methods for arriving at a ‘single version of the truth' in a decentralised network, each with its own strengths and weaknesses. Bitcoin's PoW has proved effective against attacks, but consensus never reaches 100 per cent (finality), it is slow, and its energy consumption is notoriously profligate.
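
To illustrate the basic PoW idea (a toy sketch, not Bitcoin's actual implementation), a miner repeatedly hashes a candidate block with different nonces until the digest falls below a difficulty target; any node can then verify the result with a single hash:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest has
    roughly `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # cheap to verify, expensive to find
        nonce += 1

print(mine(b"block header"))
```

The asymmetry between finding a nonce and checking one is what secures the ledger, and also what makes PoW so energy-hungry.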

Fetch.AI's platform is a virtual world in which intelligent agents trade autonomously with each other using an internal cryptocurrency, each seeking the best deal for itself according to predefined criteria. For scalability and rapid transactions the network itself is based on a directed acyclic graph (DAG) rather than a blockchain, but since the agents are trading with each other they require a ledger to record all the transactions. So after being held temporarily within the DAG, transactions are written permanently to a blockchain.

The new protocol is called the Minimal Agency Consensus Scheme. Computing was shown a draft of a white paper which the firm will publish shortly. The ‘minimal agency' bit indicates that each node has only a small say over which transactions are written to the ledger in order to reduce the opportunity for bad behaviour (plus MACS makes for a nice acronym). Fetch.AI is pitching it as a halfway house between a DAG and a blockchain that overcomes some of the cons of each.

The protocol is based on a Proof-of-Stake (PoS) consensus, which is faster and more energy-efficient than proof of work. It includes a system of incentives designed to reduce the ‘rich get richer' problem with PoS, which can lead to consolidation of power in a few hands. It features a DAG to store transactions temporarily, a randomised selection algorithm called a Decentralised Random Beacon (DRB) to choose the nodes that verify a particular block of transactions, coordination layers and, finally, a blockchain for permanent storage.
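
The white paper is still in draft, but the DRB selection step can be pictured roughly as follows: a shared random beacon value seeds a deterministic, stake-weighted draw, so every node computes the same verification committee without a central coordinator. This is a hypothetical sketch, not Fetch.AI's actual algorithm:

```python
import random

def select_committee(stakes: dict, beacon: int, size: int) -> list:
    """Stake-weighted committee selection seeded by a shared beacon.
    Identical inputs yield identical committees on every node."""
    rng = random.Random(beacon)           # same beacon -> same draw everywhere
    nodes = list(stakes)
    weights = [stakes[n] for n in nodes]
    committee = []
    while len(committee) < size:
        pick = rng.choices(nodes, weights=weights)[0]
        if pick not in committee:         # sample without replacement
            committee.append(pick)
    return committee

print(select_committee({"node-a": 50, "node-b": 30, "node-c": 20}, beacon=0xBEEF, size=2))
```

Because the beacon cannot be tampered with or biased, and each node has only a small chance of selection, no single participant controls which transactions are written - the ‘minimal agency' of the scheme's name.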

The company claims this is an answer to the 'blockchain trilemma', where typically systems need to choose two out of security, decentralisation and scalability rather than all three, something which it says similar DAG-based systems such as IOTA and SPECTRE have failed to fully achieve so far.

"Our take on the trilemma is to use Proof of Stake because we want it to be as efficient as possible, and to achieve scalability by using cryptographic source of randomness that you cannot tamper with and you cannot bias," explained lead cryptographer at Fetch.AI, David Galindo.

"And very importantly to have a system that is incentive compatible so that if nodes want to attack the system it's much more likely that if they don't follow the protocol they are not going to make money."

07/02/2019 AI and ML latest news: NewRelic acquires startup SignifAI to bring 'applied intelligence' to DevOps

NewRelic, the cloud-based application analytics firm, has added machine learning to its armoury in the shape of SignifAI, an Israeli-US startup focused on event intelligence - or sorting signal from noise in the fast-moving field of DevOps.

SignifAI, according to its white paper, is a 'SaaS-based correlation engine that leverages AI and machine learning to break down the data silos found in complex enterprise IT environments.'

It integrates via APIs with around 60 commonly used monitoring and collaboration tools, including Splunk, AppDynamics, Slack, AWS, Nagios - and NewRelic itself - and promises to help developers identify the root cause of errors by sifting through events, logs, time-series data and infrastructure alerts, ranking the most probable causes and outputting them to a user console or through the monitoring tool. Users can also train the model to improve the accuracy of the correlations.
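
SignifAI's correlation logic is proprietary, but the core idea of a correlation engine can be sketched simply: group alerts that fire close together in time, then rank the clusters by how widely they span the toolchain. The structures below are hypothetical:

```python
def correlate(alerts: list, window: float = 60.0) -> list:
    """Toy time-window correlation: alerts within `window` seconds of the
    previous one join the same cluster; clusters touching more distinct
    tools rank higher as likely systemic incidents. Real engines also
    weigh topology, severity and learned patterns."""
    alerts = sorted(alerts, key=lambda a: a["ts"])
    clusters, current = [], []
    for alert in alerts:
        if current and alert["ts"] - current[-1]["ts"] > window:
            clusters.append(current)
            current = []
        current.append(alert)
    if current:
        clusters.append(current)
    return sorted(clusters, key=lambda c: len({a["source"] for a in c}), reverse=True)
```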

AIOps is a relatively new addition to the blizzard of AI and DevOps-related phrases. It refers to the use of machine learning to analyse data collected from various IT operations tools and devices in order to spot and react to issues in real time.

In the company's blog, NewRelic's chief product officer Jim Gochee comes down in favour of the term ‘applied intelligence' to describe the application of machine learning to the software development process.

"We decided to use the term 'Applied Intelligence' to describe our philosophy and approach to bringing artificial intelligence (AI) / machine learning (ML) to our space. We chose 'Applied Intelligence' to remind ourselves to continuously reflect on the individual words and their meanings, with the goal of keeping us on track to deliver truly meaningful customer value with our AI efforts," he writes.

04/02/2019 Heathrow trials AI system for landing planes in bad weather

In bad weather, visibility from Heathrow's 87-metre tall control tower can be drastically reduced as low clouds obscure the views of the runways below.

When this happens, air traffic controllers need to rely on radar to ensure that a plane that has just landed has cleared the runway, and a margin of error must be built in for safety. This causes delays and backlogs and a 20 per cent loss of landing capacity, according to Airport World.

This is an area where AI - together with ultra HD 4K cameras - can help, believes air traffic control service NATS. The organisation has begun a trial using 20 such cameras installed in the tower together with a machine learning system called Aimee which has been developed by Canadian vendor Searidge Technologies. A similar trial is underway at Singapore's Changi Airport.

Once trained, Aimee should be able to track aircraft from the time they land until they are clear of the runway, even when the cameras' images are difficult for the human eye to interpret. It will then alert the air traffic controller, who will make the final decision on whether to clear the next arriving flight for landing. As such, Aimee is very much an aid to human decision making rather than an autonomous control system.
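
Searidge has not published Aimee's internals, but the human-in-the-loop pattern described here is straightforward to sketch: the system raises an advisory only when its tracking confidence is high, and the clearance decision always stays with the controller. The threshold and names below are hypothetical:

```python
CONFIDENCE_THRESHOLD = 0.98  # illustrative value; real thresholds would be set by NATS

def advise_controller(track_confidence: float, runway_occupied: bool) -> str:
    """Decision aid, not autopilot: the system never clears an aircraft
    itself, it only flags when it believes the runway is vacated."""
    if not runway_occupied and track_confidence >= CONFIDENCE_THRESHOLD:
        return "ADVISORY: runway appears clear - controller to confirm next landing"
    return "NO ADVISORY: continue monitoring (radar fallback applies)"
```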

The behaviour of 50,000 arriving aircraft will be studied over the next 12 months to see how the system responds to real-world situations.

AI is likely to play an increasingly important role in the management of air traffic in the future, including for Heathrow's planned third runway, said Heathrow's operations director, Kathryn Leahy.

"We'll be keeping a close eye on this trial, as the technology could have a major role as we prepare for the expanded airport. We will watch how AI and digital towers could be used to monitor all three of the expanded airport's runways in future," she said.

The results of the trial will be presented to the UK Civil Aviation Authority next March.

28/01/2019 Amazon open-sources its SageMaker Neo machine learning optimisation software

Amazon has open-sourced SageMaker Neo, its software for training machine learning models and optimising the way they run on different types of devices.

SageMaker Neo is designed to ensure that ML models run as efficiently as possible on a variety of machines and environments. While models are generally trained and honed on high-powered machines, at the inference stage, where the model makes predictions based on new data, they may be running on much lowlier devices.

The environment can have a big effect on the amount of time it takes for a model to infer a result and on the number of such calculations that can be run in parallel. On edge devices in the IoT - a single board computer like a Raspberry Pi for example - latency and limited concurrency can be severe impediments. ML models are far from device-agnostic.

Most devices can be optimised to speed up the inference process, but this generally involves a fair bit of manual tinkering and trial and error. The original SageMaker was a model training framework, but SageMaker Neo, introduced last November, combines that with an optimisation stage, taking the fully trained model and generating an executable that's specific to the target device - be that a GPU or a Pi.

"Amazon SageMaker Neo automatically optimises machine learning models to perform at up to twice the speed with no loss in accuracy. You start with a machine learning model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then you choose your target hardware platform from Intel, Nvidia, or Arm. With a single click, SageMaker Neo will then compile the trained model into an executable," the company says on its website, adding that the compiler uses neural networking techniques to analyse the target platform so that optimisations can be applied.

The Neo-AI project is available on GitHub under the Apache Software License, allowing developers to tailor the code to suit their own needs.

Other cloud firms, including Microsoft and Google, are also working on applying AI to the ‘intelligent edge'.

07/01/2019 Huawei announces 'highest performance ARM-based CPU' aimed at AI workloads

Huawei has announced what it claims is the world's best performing ARM-based processor.

Unveiled on Sunday, the Kunpeng 920 is purpose-built for AI workloads that involve processing large volumes of data utilising distributed storage.

In SPECint benchmarking tests, the Kunpeng 920 scored more than 930, or almost 25 per cent higher than the industry benchmark, while using 30 per cent less power than competitors, Huawei claims.

The Huawei-designed processor is based on the ARM architecture and manufactured on a 7-nanometre process. It has 64 cores clocked at 2.6GHz and supports 8-channel DDR4 memory.

The enhanced performance is primarily due to optimised branch prediction algorithms and an increased number of OP units, along with an improved memory subsystem architecture, according to the firm.

"Today, with Kunpeng 920, we are entering an era of diversified computing embodied by multiple cores and heterogeneity. Huawei has invested patiently and intensively in computing innovation to continuously make breakthroughs. We will work with our customers and partners to build a fully connected, intelligent world," said William Xu, Huawei's chief strategy marketing officer.

Huawei also announced three new servers in its TaiShan range that will be powered by Kunpeng 920 processors. These are aimed at corporate data centres for big data tasks requiring high concurrency and low power consumption.

The Kunpeng 920 announcement follows hot on the heels of the AI-focused Ascend AI IP and chip series unveiled in October (see earlier in this blog).

With its emphasis on in-house design, Huawei is becoming less reliant on non-Chinese chipset suppliers such as Intel, Qualcomm, AMD and Nvidia.

The company has been enmeshed in controversy in recent months, with a number of countries banning its products in their networking infrastructure, claiming they are a security risk.

The company has close connections to the Chinese government whose ‘Made in China 2025' strategy targets ten advanced technology areas including AI, robotics, renewable energy and biotechnology. The US has claimed this strategy is a "real existential threat to US technological leadership".

