AI & ML latest: Google disbands another AI ethics committee

Tricky stuff, ethics

Google's attempts to assemble expert bodies to discuss the ethical aspects of AI seem to have run into further problems.

As reported in this blog, two weeks ago Google shut down its Advanced Technology External Advisory Council (ATEAC), which it had set up only days earlier to look at facial recognition, fairness in machine learning and other ethical issues, after some members objected to the make-up of the council, which included a fossil fuel lobbyist and the head of a military drone manufacturer (see below).

Now the Wall Street Journal reports that Google has "quietly disbanded another AI review board" which it had set up in the UK to look into AI in health care. This time, problems arose after disputes between panel members and Google-owned DeepMind over access to information, the WSJ reports, citing unnamed sources. Panel members were also unhappy about the limited binding power of the board's recommendations and the closeness of DeepMind to its parent company.

The panel was created in 2016 when Google launched its AI healthcare project DeepMind Health, two years after it acquired London-based AI startup DeepMind. The problems have reportedly been rumbling on for some time, with Google planning to implement reforms, but it seems the company has now decided to cut its losses.

Losing one AI ethics board might be unlucky, but two is surely careless given the ethical hot water the company has found itself in lately.

10/04/2019 EC lists seven requirements for ethical AI

A few months ago (see below) we reported that the European Commission (EC) High-Level Expert Group was seeking feedback on its draft AI ethics guidelines. It has now published the results of the consultations as "seven key requirements that AI systems should meet in order to be deemed trustworthy".

These requirements, summarised, are as follows: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.

The Group is now seeking further feedback to advance the conversation on the basis of the seven points above. The report concludes that AI presents unique opportunities for Europe because of its citizen-centric foundations.

The AI and Machine Learning Awards are coming! In July this year, Computing will be recognising the best work in AI and machine learning across the UK. Do you have research or a project that you think deserves wider recognition? Enter the awards today - entry is free.

05/04/2019 Google abandons AI ethics board after controversy over membership

Google has closed down its new AI ethics board just a few days after it was announced.

The Advanced Technology External Advisory Council (ATEAC) was created by Google to look at ethical problems with AI including "facial recognition and fairness in machine learning," according to VP of global affairs, Kent Walker, but it immediately ran into controversy over the make-up of its eight-member board.

One of the eight was Kay Coles James, president of the US lobby group the Heritage Foundation which campaigns for the fossil fuel industry and against measures to mitigate climate change. James also riled Google employees over her stated views on trans people, according to Vox, leading thousands of them to petition against her inclusion on the council.

Another unpopular choice was Dyan Gibbens, CEO of drone company Trumbull Unmanned, because of the use of the company's hardware by the US military.

The make-up of the panel led Alessandro Acquisti, a professor of IT at Carnegie Mellon University, to decline Google's invitation to join ATEAC. "I don't think this is the right forum for me to engage in this important work," he tweeted. Other council members were also pressured on social media to withdraw.

On Thursday, Google bowed to the inevitable.

"It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics," the company said in a statement.

Google was recently forced to publicly abandon its secret work on a censored search engine for the Chinese government after a revolt by members of staff.

22/03/2019 Fetch.AI reveals first details of its decentralised consensus protocol

Cambridge-based AI startup Fetch.AI has created a new consensus protocol for its decentralised market ecosystem.

In the absence of a central authority, the means by which nodes in a decentralised system come to an agreement about events on the network is critical in defining its performance and security. Consensus protocols are therefore a big part of networks' DNA.

The best-known decentralised consensus protocol is the bitcoin blockchain's Proof of Work (PoW) mechanism, which governs the order in which transactions are added to the immutable distributed ledger and protects against double-spending. However, there are a number of other methods for arriving at a ‘single version of the truth' in a decentralised network, each with its own strengths and weaknesses. Bitcoin's PoW has been effective against attacks, but consensus never reaches 100 per cent certainty (finality), it is slow, and its energy consumption is notoriously profligate.
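As a rough illustration of the proof-of-work idea (a toy sketch, not Bitcoin's actual implementation), a miner repeatedly hashes the block data with a changing nonce until the digest meets a difficulty target, while any node can verify the result with a single hash:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256 of
    (block_data + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is cheap - a single hash - however costly mining was."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("tx1;tx2;tx3", difficulty=4)
assert verify("tx1;tx2;tx3", nonce, difficulty=4)
```

Raising the difficulty makes mining exponentially more expensive, which is where PoW's notorious energy consumption comes from.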

Fetch.AI's platform is a virtual world in which intelligent agents trade autonomously with each other using an internal cryptocurrency, each seeking the best deal for itself according to predefined criteria. For scalability and rapid transactions the network itself is based on a directed acyclic graph (DAG) rather than a blockchain, but since the agents are trading with each other they require a ledger to record all the transactions. So after being held temporarily within the DAG, transactions are written permanently to a blockchain.

The new protocol is called the Minimal Agency Consensus Scheme. Computing was shown a draft of a white paper which the firm will publish shortly. The ‘minimal agency' bit indicates that each node has only a small say over which transactions are written to the ledger in order to reduce the opportunity for bad behaviour (plus MACS makes for a nice acronym). Fetch.AI is pitching it as a halfway house between a DAG and a blockchain that overcomes some of the cons of each.

The protocol is based on Proof-of-Stake (PoS) consensus, which is faster and more energy-efficient than proof of work. It includes a system of incentives designed to reduce the ‘rich get richer' problem with PoS, which can lead to the consolidation of power in a few hands. It features a DAG to store transactions temporarily, a randomised selection algorithm called a Decentralised Random Beacon (DRB) to choose the nodes that verify a particular block of transactions, coordination layers, and finally a blockchain for permanent storage.
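The general pattern of using a shared random beacon for validator selection can be sketched as follows. This is an illustrative simplification only; the function names and the stake-weighting scheme are assumptions, not details of Fetch.AI's actual DRB:

```python
import hashlib
import random

def select_committee(stakes: dict, beacon: bytes, size: int) -> list:
    """Deterministically pick `size` distinct validators, stake-weighted.
    Every node that knows `beacon` computes the same committee, but
    nobody can predict it before the beacon value is revealed."""
    # Seed a PRNG from the shared random beacon so the draw is
    # reproducible across nodes.
    seed = int.from_bytes(hashlib.sha256(beacon).digest(), "big")
    rng = random.Random(seed)
    pool = dict(stakes)
    committee = []
    for _ in range(min(size, len(pool))):
        nodes = list(pool)
        weights = [pool[n] for n in nodes]
        chosen = rng.choices(nodes, weights=weights, k=1)[0]
        committee.append(chosen)
        del pool[chosen]  # no node is selected twice in one round
    return committee

stakes = {"a": 50, "b": 30, "c": 15, "d": 5}
print(select_committee(stakes, b"round-42-beacon", size=2))
```

Because selection probability is proportional to stake, this sketch also shows where the ‘rich get richer' concern comes from, which is what MACS's incentive system is said to counteract.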

The company claims this is an answer to the 'blockchain trilemma', where typically systems need to choose two out of security, decentralisation and scalability rather than all three, something which it says similar DAG-based systems such as IOTA and SPECTRE have failed to fully achieve so far.

"Our take on the trilemma is to use Proof of Stake because we want it to be as efficient as possible, and to achieve scalability by using a cryptographic source of randomness that you cannot tamper with and you cannot bias," explained David Galindo, lead cryptographer at Fetch.AI.

"And very importantly to have a system that is incentive compatible so that if nodes want to attack the system it's much more likely that if they don't follow the protocol they are not going to make money."

07/02/2019 AI and ML latest news: New Relic acquires startup SignifAI to bring 'applied intelligence' to DevOps

New Relic, the cloud-based application analytics firm, has added machine learning to its armoury in the shape of SignifAI, an Israeli-US startup focused on event intelligence - or sorting signal from noise in the fast-moving field of DevOps.

SignifAI, according to its white paper, is a 'SaaS-based correlation engine that leverages AI and machine learning to break down the data silos found in complex enterprise IT environments.'

It integrates via APIs to around 60 commonly used monitoring and collaboration tools including Splunk, AppDynamics, Slack, AWS, Nagios - and NewRelic itself - and promises to help developers identify the root cause of errors by sifting through events, logs, time-series data and infrastructure alerts, ranking and outputting the most probable causes to a user console or through the monitoring tool. Users can also train the model to improve the accuracy of the correlations.

AIOps is a relatively new addition to the blizzard of AI and DevOps-related phrases. It refers to the use of machine learning to analyse data collected from various IT operations tools and devices in order to spot and react to issues in real time.

In the company's blog, New Relic's chief product officer Jim Gochee comes down in favour of the term ‘applied intelligence' to describe the application of machine learning to the software development process.

"We decided to use the term 'Applied Intelligence' to describe our philosophy and approach to bringing artificial intelligence (AI) / machine learning (ML) to our space. We chose 'Applied Intelligence' to remind ourselves to continuously reflect on the individual words and their meanings, with the goal of keeping us on track to deliver truly meaningful customer value with our AI efforts," he writes.

04/02/2019 Heathrow trials AI system for landing planes in bad weather

In bad weather, visibility from Heathrow's 87-metre tall control tower can be drastically reduced as low clouds obscure the views of the runways below.

When this happens, air traffic controllers need to rely on radar to ensure that a plane that has just landed has cleared the runway, and a margin of error must be built in for safety. This causes delays and backlogs and a 20 per cent loss of landing capacity, according to Airport World.

This is an area where AI - together with ultra HD 4K cameras - can help, believes air traffic control service NATS. The organisation has begun a trial using 20 such cameras installed in the tower together with a machine learning system called Aimee which has been developed by Canadian vendor Searidge Technologies. A similar trial is underway at Singapore's Changi Airport.

Once trained, Aimee should be able to track the aircraft from the time they land until they are clear of the runway, even when the cameras' images are difficult to interpret by the human eye. It will then alert the air traffic controller who will make the final decision on whether to clear the next arriving flight for landing. As such, Aimee is very much an aid to human decision making rather than being an autonomous control system.

The behaviour of 50,000 arriving aircraft will be studied over the next 12 months to see how the system responds to real-world situations.

AI is likely to play an increasingly important role in the management of air traffic in the future, including for Heathrow's planned third runway, said Heathrow's operations director, Kathryn Leahy.

"We'll be keeping a close eye on this trial, as the technology could have a major role as we prepare for the expanded airport. We will watch how AI and digital towers could be used to monitor all three of the expanded airport's runways in future," she said.

The results of the trial will be presented to the UK Civil Aviation Authority next March.

28/01/2019 Amazon open-sources its SageMaker Neo machine learning optimisation software

Amazon has open-sourced SageMaker Neo, its software for training machine learning models and optimising the way they run on different types of devices.

SageMaker Neo is designed to ensure that ML models run as efficiently as possible across a variety of machines and environments. Models are generally honed on high-powered machines during the training stage, but in the inference stage, where the model makes predictions based on new data, they may be running on much lowlier devices.

The environment can have a big effect on the amount of time it takes for a model to infer a result and on the number of such calculations that can be run in parallel. On edge devices in the IoT - a single board computer like a Raspberry Pi for example - latency and limited concurrency can be severe impediments. ML models are far from device-agnostic.

Most devices can be optimised to speed up the inference process, but this generally involves a fair bit of manual tinkering and trial and error. The original SageMaker was a model training framework, but SageMaker Neo, introduced last November, combines that with an optimisation stage, taking the fully trained model and generating an executable that's specific to the target device - be that a GPU or a Pi.

"Amazon SageMaker Neo automatically optimises machine learning models to perform at up to twice the speed with no loss in accuracy. You start with a machine learning model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then you choose your target hardware platform from Intel, Nvidia, or Arm. With a single click, SageMaker Neo will then compile the trained model into an executable," the company says on its website, adding that the compiler uses neural networking techniques to analyse the target platform so that optimisations can be applied.

The Neo-AI project is available on GitHub under the Apache Software License, allowing developers to tailor the code to suit their own needs.

Other cloud firms, including Microsoft and Google, are also working on applying AI to the ‘intelligent edge'.

07/01/2019 Huawei announces 'highest performance ARM-based CPU' aimed at AI workloads

Huawei has announced what it claims is the world's best performing ARM-based processor.

Unveiled on Sunday, the Kunpeng 920 is purpose-built for AI workloads that involve processing large volumes of data utilising distributed storage.

In SPECint benchmarking tests, the Kunpeng 920 scored more than 930, or almost 25 per cent higher than the industry benchmark, while using 30 per cent less power than competitors, Huawei claims.

The Huawei-designed processor is based on the ARM architecture and manufactured on a 7-nanometer process. It has 64 cores clocked at 2.6GHz and eight-channel DDR4 memory.

The enhanced performance is primarily due to optimised branch prediction algorithms and an increased number of OP units, along with an improved memory subsystem architecture, according to the firm.

"Today, with Kunpeng 920, we are entering an era of diversified computing embodied by multiple cores and heterogeneity. Huawei has invested patiently and intensively in computing innovation to continuously make breakthroughs. We will work with our customers and partners to build a fully connected, intelligent world," said William Xu, Huawei's chief strategy marketing officer.

Huawei also announced three new servers in its TaiShan range that will be powered by Kunpeng 920 processors. These are aimed at corporate data centres for big data tasks requiring high concurrency and low power consumption.

The Kunpeng 920 announcement follows hot on the heels of the AI-focused Ascend AI IP and chip series unveiled in October (see earlier in this blog).

With its emphasis on in-house design, Huawei is becoming less reliant on non-Chinese chipset suppliers such as Intel, Qualcomm, AMD and Nvidia.

The company has been enmeshed in controversy in recent months, with a number of countries banning its products in their networking infrastructure, claiming they are a security risk.

The company has close connections to the Chinese government whose ‘Made in China 2025' strategy targets ten advanced technology areas including AI, robotics, renewable energy and biotechnology. The US has claimed this strategy is a "real existential threat to US technological leadership".

02/01/2019 Researchers use machine learning to diagnose dyslexia

A pair of researchers have achieved promising results from a machine learning study of dyslexia among school children, diagnosing the condition successfully in around 80 per cent of cases without any human intervention.

Alex Frid and Larry Manevitz from the University of Haifa, Israel, ran a series of tests on 32 school children, 17 of whom had previously been diagnosed with dyslexia. The children performed a Lexical Decision Task (LDT) in which they were asked to judge whether strings of letters appearing on a screen were meaningful or not.

During this task, the eye movements of subjects were monitored and at the same time electrical activity in different areas of the brain was recorded using electrodes placed on the scalp.

After a preprocessing stage, the researchers used the ReliefF algorithm to extract meaningful features from the results in an attempt to classify them. Many valuable features were in fact discovered in areas of the signal traditionally considered to be noise.

By training their ML algorithm using the 60 ‘best' features extracted from the results, the researchers were able to diagnose dyslexia with a 79 per cent success rate; using just the ten best features, a success rate of 70 per cent was still achieved.
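The rank-and-select approach described above can be sketched as below. This is a minimal illustration using a crude class-separation score as a stand-in - the researchers' actual feature ranking came from ReliefF, which also accounts for feature interactions - and the toy data is made up:

```python
def feature_scores(samples: list, labels: list) -> list:
    """Score each feature by how far apart the two class means sit.
    (A crude stand-in for ReliefF, used here only to illustrate the
    rank-then-select pattern.)"""
    n_features = len(samples[0])
    scores = []
    for j in range(n_features):
        pos = [s[j] for s, y in zip(samples, labels) if y == 1]
        neg = [s[j] for s, y in zip(samples, labels) if y == 0]
        scores.append(abs(sum(pos) / len(pos) - sum(neg) / len(neg)))
    return scores

def top_k_features(samples, labels, k):
    """Indices of the k best-separating features, best first."""
    scores = feature_scores(samples, labels)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 1 separates the classes, features 0 and 2 don't.
X = [[1.0, 9.0, 5.0], [1.1, 8.5, 5.2], [0.9, 1.0, 5.1], [1.0, 1.5, 4.9]]
y = [1, 1, 0, 0]
print(top_k_features(X, y, k=1))  # → [1]
```

The trade-off the study reports - 79 per cent accuracy with 60 features against 70 per cent with ten - is exactly what varying `k` in a scheme like this explores.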

However, the goal was not just to use ML for diagnosis.

One theory of the cause of dyslexia, which affects an estimated ten per cent of the population, is that the different parts of the brain involved in decoding written texts operate asynchronously, so signals can arrive in the wrong order. The greater the asynchrony, the more difficult reading will be for the subject.

The greatest difference between the dyslexic and other readers was observed in activity in the left hemisphere of the brain, particularly the left temporal node which has long been thought to be important in the process of interpreting the written word.

The paper is published on arxiv.org.

19/12/2018 EU expert group draws up draft AI ethics guidelines, seeks feedback

The European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) has released draft ethics guidelines on trustworthy AI. The 37-page document covers proposals to ensure AI is always human-centric and deployed with an "ethical purpose". It includes discussions around limiting bias and maximising inclusiveness, as well as ensuring that AI is robust in design and implementation so that it does not cause unintentional harm.

AI HLEG boils down the core requirements for trustworthy AI into ten broad areas: Accountability; Data governance; Design for all; Governance of AI autonomy (human oversight); Non-discrimination; Respect for (and enhancement of) human autonomy; Respect for privacy; Robustness; Safety; and Transparency.

It looks at each of these topics from technical and non-technical standpoints with a view to creating guidance on how each might be ensured.

"Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation, and core principles, ensuring 'ethical purpose'; and (2) it should be technically robust and reliable," the report notes in its conclusion.

"However, even with the best of intentions, the use of AI can result in unintentional harm. Therefore ... AI HLEG has developed a framework to actually implement Trustworthy AI, offering concrete guidance on its achievement."

The group is seeking feedback on its draft proposals before 18 January. In March a final version will be presented to the Commission.

10/12/2018 Pharma giant Merck in deal to use AI for drug discovery

The use of AI in drug discovery - identifying promising molecules and testing their suitability using models rather than patients - has been a focus of the pharmaceutical industry for some time. Typically drugs require years (and a huge amount of money) to move from the lab into production.

In the latest move, pharmaceutical giant Merck is to license an AI-based drug screening platform from Canadian biotechnology firm Cyclica as part of a year-long trial.

Cyclica describes its Ligand Express drug screening tool as a "cloud-based platform that screens small-molecule drugs against repositories of structurally-characterized proteins or ‘proteomes' to determine polypharmacological profiles."

By understanding how a small-molecule drug will interact with all the proteins in the body, scientists can prioritise candidate molecules for drugs, understand possible side effects, and identify genetic variations which might affect the binding of the proposed drug to the target protein. Another use is investigating new applications for existing drugs.

Small-molecule drugs typically target specific proteins associated with disease, but once in the body they may bind to other proteins too, and the side effects are hard to predict in the lab, leading to lengthy R&D and trial procedures. But AI-based techniques can help build a full picture of the likely interactions of a particular molecule.

Friedrich Rippmann, Computational Chemistry & Biology director at Merck said the goal is to identify promising molecules and test them as quickly as possible.

"Assessing new technologies is central to how we will advance our discovery programmes, and artificial intelligence applications like Ligand Express will provide important insights to enhance how we think about target identification to support phenotypic screening and off-target profiling in general," Rippmann said.

"Artificial intelligence has the power to make the previously unimaginable a reality - we are eager to harness these new possibilities to help drive the discoveries that can transform the lives of people affected by difficult-to-treat diseases," added Belén Garijo, member of the executive board and CEO Healthcare at Merck.

07/12/2018 AI cracks text-based captchas

A team of computer scientists has developed a new machine learning program that can crack, in a fraction of a second, the text-based captchas still used by many websites to protect against bots.

The program was developed by scientists from Lancaster University in the UK and from Northwest University and Peking University in China. It uses deep learning techniques and its makers claim a much higher success rate than other captcha attack methods.

It uses a Generative Adversarial Network (GAN) to decode the captchas, creating synthetic captchas that are used to train the base solver algorithm, which is then fine-tuned using a smaller set of real captchas. This saves time and effort, with just 500 genuine captchas required to train the attack program.

In tests on 33 captcha schemes, some used by the likes of Wikipedia, Google, eBay and Microsoft, the program was able to solve a captcha within 0.05 seconds when run on a desktop graphics processing unit (GPU).

It was also able to evade advanced security features and demonstrated a high level of accuracy.

"It allows an adversary to launch an attack on services, such as Denial of Service attacks or sending spam or phishing messages, to steal personal data or even forge user identities," said Guixin Ye, a researcher on the project, which was presented at the ACM Conference on Computer and Communications Security (CCS) 2018 in Toronto.

"Given the high success rate of our approach for most of the text captcha schemes, websites should be abandoning captchas."

14/11/2018 Facebook, Google and Twitter use AI to track illegal drug dealing

During Mark Zuckerberg's rather feeble 'grilling' by US Congress over Cambridge Analytica and other topics related to the influence of his social networks, one question that did elicit a (relatively) direct response was about the illegal sale of drugs on Facebook and Instagram - presumably because he could reach for his favourite stock answer of throwing some more AI at the problem.

The opioid crisis is a raw issue in the US: 72,000 people died from overdoses in 2017 alone.

Now Facebook has said it is actively using AI to track down dealers, coordinating its efforts with experts including forensic labs and local and national organisations. Google and Twitter are also involved in the collaboration.

Facebook's vice president of US public policy Kevin Martin laid out the company's thinking in a blog post.

"We want to make vital resources for treatment easier to find. When people search for information about opioids on Facebook and Instagram, we direct them to SAMHSA's [Substance Abuse and Mental Health Services Administration] National Helpline Information Page and other resources for free and confidential treatment and education," he said.

"We've also begun to roll out proactive detection on Facebook and Instagram to take down more content that violates our policies before people may see or report it. Our technology is able to detect content that includes images of drugs and depicts the intent to sell with information such as price, phone numbers or usernames for other social media accounts.

"By catching more posts automatically, this technology allows our team to use their expertise instead to investigate accounts, Pages, Groups and hashtags, as well as work with experts to spot the next trends."

Susan Molinari, vice president for public policy at Google, said that 50,000 searches about specific opioid drugs were made each day on the search engine.

"We know overwhelmingly the people who are searching for help are the parents or family members of opioid users, so we know that if we can push them toward organisations like the Partnership for Drug-Free Kids, then we're getting them instant connections," Molinari said.

The efforts so far are all about blocking content and nudging users towards getting help rather than reporting transgressors to law enforcement.

24/10/2018 Oracle acquires AI-driven company-intelligence firm DataFox

Oracle has acquired DataFox, a SaaS AI firm that crunches large volumes of data on public and private businesses and feeds the results into an AI engine to create company-intelligence that customers can add to their CRM.

The San Francisco-based startup received initial funding from Google Ventures in 2014 and counts Bain Capital, NetApp and Goldman Sachs among its customers. Co-founder and CEO Bastiaan Janmaat was a growth equity analyst at Goldman Sachs before founding the firm, and the investment bank has a stake in DataFox.

In a letter to DataFox customers and partners, Steve Miranda, executive VP applications development at Oracle, said: "The combination of Oracle and DataFox will enhance Oracle Cloud Applications with an extensive set of AI-derived company-level data and signals, enabling customers to reach even better decisions and business outcomes."

DataFox pulls information on millions of businesses from multiple sources including news articles, digital properties and 'unique signals' and analyses them to provide real-time information on when a company's fortunes might be about to change. Oracle says it plans to "enrich" its cloud applications such as ERP, CX, HCM and SCM with "AI-driven company-level data". Presumably the idea is to steal a march on cloud competition from the likes of Salesforce.

Terms of the acquisition have not been disclosed.

19/10/2018 UK supermarkets to trial AI checkouts for age-verification

Facial recognition technology is to be trialled by UK supermarkets for age verification purposes, with a few as-yet-unidentified stores rolling out the scanning tech at self-service checkouts this year and more widely in 2019.

The rollout is being led by US vendor NCR, which makes self-service checkout machines for Tesco and Asda among others. The company will integrate an 'AI-powered camera' (whatever that may be) into the checkout machines, which will be able to estimate the age of shoppers when they are buying restricted items like cigarettes and alcohol.

19/10/2018 AI - where does the liability lie?

The arguments regarding liability in the event of error or incident are beginning to expand. As developments continue and the use of AI becomes more mainstream, there will increasingly be cases which call into question who has liability for the systems in use.

So says Emma Stevens, associate solicitor - dispute resolution, at law firm Coffin Mew in this article for Computing.

The majority of the existing legislation and case law in relation to liability and duty in cases of negligence significantly pre-dates the ongoing robotics revolution. It is clear that the legal system has a lot of ground to cover before it can effectively regulate such advances and the existing law will need to be translated to apply to situations where considering the role and impact of AI and robotics was not previously necessary.

Businesses would be sensible to make themselves aware of the technological advances in the sectors in which they operate, to ensure that their contracts are clear regarding liability (both generally and for AI) and that they have adequate insurance in place for any systems used, where appropriate.

16/10/2018 It's big companies that are making the running in machine learning, survey finds

A survey of data scientists, software engineers, architects and senior management has found that large organisations are taking the lead in their experiments with machine learning, with respondents in large organisations more likely to consider their efforts as 'sophisticated' and to have their early successes rewarded by increasing budgets than those in smaller firms.

About half of the respondents were located in the US, a quarter in Asia with the remainder based elsewhere. The survey was conducted by Algorithmia, a US company offering a marketplace for machine-learning models.

Across the entire sample, the main drivers for deploying machine learning models were generating customer insights and intelligence and improving the customer experience. However, in large enterprises improving customer loyalty topped the list, mentioned by 59 per cent. Large enterprises were also more likely to mention cutting costs as being a motivating force.

Just 10 per cent of companies counted themselves as sophisticated in their use of AI and machine learning. The report notes that the sort of companies that pioneered big data techniques a few years back also have a head start when it comes to deploying machine learning models. They have the data, the infrastructure and the skills required to build proprietary internal platforms - or 'AI layers' - on which to deploy. Examples include Facebook's FB Learner, Netflix's Notebook Data Platform and Twitter's BigHead. It seems likely that this lead will widen as investment follows success.

A statistic that demonstrates the general immaturity of the field is the fact that 55 per cent of efforts are driven by IT compared with 37 per cent by the business.

12/10/2018 China will overtake the US in AI, predicts former president of Google China Kai-Fu Lee

Kai-Fu Lee, head of VC firm Sinovation Ventures and former president of Google China, says that AI's influence will be hugely disruptive to everything from the geopolitical power balance to the job market and people's individual feelings of self-worth. While some of the changes will be for the better, many will not, he says, warning against the techno-utopianism common in Silicon Valley.

The speed of the coming AI revolution makes parallels with the job creation that accompanied the proliferation of electrical power and the industrial revolution redundant, Lee argued.

"Those earlier technological revolutions took a century or longer," Lee explained, in a fascinating if discomfiting interview with IEEE Spectrum. "That gave people time to grow, and develop, and invent new jobs. But we have basically one generation with AI, and that's a lot less time."

"We've opened Pandora's box," Lee went on, contrasting AI with other technological threats. "We did, as humans, control the proliferation of nuclear weapons, but that technology was secret and required a huge amount of capital investment. In AI, the algorithms are well known to many people, and it's not possible to forbid people to use them. College students are using them to start companies."

Lee believes the fact that the algorithms are easily available means that the nations with the most computing power - and the most centralised command structures - will make the running, ultimately exporting their innovations to others that might try to slow the tide to cushion its impacts. China has big advantages over the current leader, the US, he said, as companies such as Tencent, which has close connections to the Chinese government, have the data, the infrastructure and a workforce that's quite prepared to get stuck into the more humdrum parts of developing AI.

"Chinese entrepreneurs find areas where there's enough data and a commercially viable application of AI, and then they work really hard to make the application work. It's often very hard, dirty, ugly work. The data isn't handed to you on a silver platter."

Much of the learning data for the ML algorithms comes from applications like Tencent's all-encompassing WeChat app, which is "Facebook, Twitter, iMessage, Uber, Expedia, Evite, Instagram, Skype, PayPal, GrubHub, LimeBike, WebMD, Fandango, YouTube, Amazon and eBay" rolled into one. Detailed information about a large proportion of China's huge population resides in one place. And size matters when you're training neural networks.

These factors, together with the Chinese government's ability to squash any opposition to developments like driverless trucks, are all in the country's favour as it seeks to become the dominant force in AI.

Whichever power bloc ultimately takes the lead, the real challenge, says Lee, will be managing societies characterised by increasing inequality and the loss, over the coming decades, of up to 50 per cent of current jobs, many of which offer no obvious alternative role for those who hold them.

11/10/2018 New AI-focused announcements from Nvidia and Huawei

Graphics processing firm Nvidia has announced an open-source GPU-acceleration platform called Rapids which is aimed squarely at data scientists who need to crunch large volumes of data. Nvidia claims that for machine learning-type use cases Rapids has proved to be 50 times faster than CPU-only systems.

Unveiled yesterday in Munich, Rapids is a two-year-old collaboration between Nvidia engineers and Python contributors, building on Apache Arrow, Pandas and Scikit-learn. It is released at rapids.ai under the Apache 2.0 open-source licence.

"Rapids connects the data science ecosystem by bringing together popular capabilities from multiple libraries and adding the power of GPU acceleration." the firm says in its blog.

Meanwhile, Huawei has unveiled two AI-focused chips of its own. "As part of its full-stack AI portfolio, Huawei today unveiled the Ascend AI IP and chip series, the world's first AI IP and chip series that natively serves all scenarios, providing optimal TeraOPS per watt," proclaims the company's press release. "Their unified architecture also makes it easy to deploy, migrate, and interconnect AI applications across different scenarios," it says.

Alibaba, the "Chinese Amazon", which is investing heavily in AI capabilities, is also reported to be developing a new AI chip for release next year.

10/10/2018 Apple buys machine learning firm Spektral

Apple had kept its $30m acquisition of virtual reality (VR) firm Spektral last year a secret until Danish newspaper Børsen got hold of the story, reports Apple Insider.

Spektral, whose founders have now joined Apple, was a startup specialising in computer vision, using deep learning techniques and GPU hardware to improve the real-time processing of images and video directly from the camera. Apple is known to be keen to get ahead in the field of augmented reality (AR), and Apple Insider speculates that this may be behind the acquisition. Apple recently changed the design of iPhone cameras to better support AR and VR, it notes.

10/10/2018 What's new in Spark and machine learning?

Creating useful machine learning models is a tough job, but making models robust enough to support business processes operationally is far tougher still. This is why the web giants build their own platforms to support their data scientists and engineers. Matei Zaharia and Andy Konwinski of Databricks told Computing about two open source projects designed to bring such capabilities within the reach of mere mortals. MLflow is a framework for standardising and packaging workflows and models, while Project Hydrogen improves the integration of popular deep learning frameworks such as TensorFlow and PyTorch with Apache Spark. Read the full story here.
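The packaging side of MLflow centres on a small project descriptor that declares an environment and parameterised entry points, so a training run can be reproduced on another machine. A hedged sketch of such a file (the project name, script and parameter are invented for illustration):

```yaml
# MLproject - hypothetical descriptor in MLflow's packaging format
name: churn-model
conda_env: conda.yaml          # environment spec shipped with the project
entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.01}
    command: "python train.py --alpha {alpha}"
```

Running `mlflow run . -P alpha=0.05` would then execute the entry point in the declared environment, which is the sort of standardisation the project is aiming at.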

09/10/2018 Autonomous agents are the next phase of enterprise AI, claims Fetch.AI

What if AI could take on complex negotiation tasks without requiring human intervention? This is where we are going next, according to Cambridge-based startup Fetch.AI, which recently partnered with Clustermarket, a booking platform for loaning scientific equipment. Using the system, instruments are represented by autonomous agents which navigate a virtual landscape seeking the best possible deal for themselves and optimising availability and price overall. Read more here.