AI & ML latest: Google disbands another AI ethics committee

John Leonard
31 min read

Tricky stuff, ethics

02/01/2019 Researchers use machine learning to diagnose dyslexia

A pair of researchers have achieved promising results from a machine learning study of dyslexia among school children, diagnosing the condition successfully in around 80 per cent of cases without any human intervention.

Alex Frid and Larry Manevitz from the University of Haifa, Israel, ran a series of tests on 32 school children, 17 of whom had previously been diagnosed with dyslexia. The children performed a Lexical Decision Task (LDT) in which they were asked to judge whether strings of letters appearing on a screen were meaningful or not.

During this task, the eye movements of subjects were monitored and at the same time electrical activity in different areas of the brain was recorded using electrodes placed on the scalp.

After a preprocessing stage, the researchers used the ReliefF algorithm to extract meaningful features from the recordings, which were then used to classify the subjects. Many valuable features were in fact discovered in areas of the signal traditionally regarded as noise.

By training their ML algorithm using the 60 'best' features extracted from the results, the researchers were able to diagnose dyslexia with a 79 per cent success rate; using just the ten best features, a success rate of 70 per cent was still achieved.
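
For a flavour of the approach, here is a minimal sketch of the train-on-the-top-k-features idea using scikit-learn on synthetic data; the study itself used ReliefF on EEG and eye-tracking features, for which SelectKBest with mutual information is only a rough stand-in.

```python
# Sketch of ranking features and classifying with only the top k of them.
# Synthetic data stands in for the EEG / eye-tracking features; SelectKBest
# with mutual information is a simple stand-in for the ReliefF algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 32 "children", 200 candidate features, binary label (dyslexic or not)
X, y = make_classification(n_samples=32, n_features=200, n_informative=15, random_state=0)

for k in (60, 10):   # train on the 60 best features, then on just the ten best
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=k), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=4).mean()
    print(f"top {k} features: mean cross-validated accuracy {acc:.2f}")
```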

However, the goal was not just to use ML for diagnosis.

One theory of the cause of dyslexia, which affects an estimated ten per cent of the population, is that the different parts of the brain involved in decoding written texts operate asynchronously, so signals can arrive in the wrong order. The greater the asynchrony, the more difficult reading will be for the subject.

The greatest difference between the dyslexic and other readers was observed in activity in the left hemisphere of the brain, particularly the left temporal lobe, which has long been thought to be important in the process of interpreting the written word.
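
One simple way to quantify that asynchrony is the lag at which activity in two regions is most strongly cross-correlated. A toy sketch with NumPy, using synthetic signals in place of real EEG channels:

```python
# Estimate the delay between two signals via cross-correlation - a toy proxy
# for the "asynchrony" between brain regions discussed above.
import numpy as np

fs = 250                                    # sample rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
region_a = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
region_b = np.roll(region_a, 12)            # same activity, delayed by 12 samples

corr = np.correlate(region_a - region_a.mean(), region_b - region_b.mean(), mode="full")
lag_samples = np.argmax(corr) - (t.size - 1)    # negative value: region_b lags region_a
print(f"estimated lag: {lag_samples} samples ({1000 * lag_samples / fs:.1f} ms)")
```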

The paper is published on arxiv.org.

19/12/2018 EU expert group draws up draft AI ethics guidelines, seeks feedback

The European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) has released draft ethics guidelines on trustworthy AI. The 37-page document covers proposals to ensure AI is always human-centric and deployed with an "ethical purpose" grounded in fundamental rights. It includes discussions around limiting bias and maximising inclusiveness, as well as ensuring that AI is robust in design and implementation so that it does not cause unintentional harm.

AI HLEG boils down the core requirements for trustworthy AI into ten broad areas: Accountability; Data governance; Design for all; Governance of AI autonomy (human oversight); Non-discrimination; Respect for (and enhancement of) human autonomy; Respect for privacy; Robustness; Safety; and Transparency.

It looks at each of these topics from technical and non-technical standpoints with a view to creating guidance on how each might be ensured.

"Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation, and core principles,ensuring 'ethical purpose'; and (2) it should be technically robust and reliable," the report notes in its conclusion.

"However, even with the best of intentions, the use of AI can result in unintentional harm. Therefore ... AI HLEG has developed a framework to actually implement Trustworthy AI, offering concrete guidance on its achievement."

The group is seeking feedback on its draft proposals before 18 January. In March a final version will be presented to the Commission.

10/12/2018 Pharma giant Merck in deal to use AI for drug discovery

The use of AI in drug discovery - identifying promising molecules and testing their suitability using models rather than patients - has been a focus of the pharmaceutical industry for some time. Typically drugs require years (and a huge amount of money) to move from the lab into production.

In the latest move, pharmaceutical giant Merck is to license an AI-based drug screening platform from Canadian biotechnology firm Cyclica as part of a year-long trial.

Cyclica describes its Ligand Express drug screening tool as a "cloud-based platform that screens small-molecule drugs against repositories of structurally-characterized proteins or 'proteomes' to determine polypharmacological profiles."

By understanding how a small-molecule drug will interact with all proteins in the body, scientists can prioritise candidate molecules for drugs, understand possible side effects, and identify genetic variations which might affect the binding of the proposed drug to the target protein. Another use is investigating new uses for existing drugs.

Small-molecule drugs typically target specific proteins associated with disease, but once in the body they may bind to other proteins too, and the side effects are hard to predict in the lab, leading to lengthy R&D and trial procedures. But AI-based techniques can help build a full picture of the likely interactions of a particular molecule.
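
As a purely illustrative sketch of the prioritisation step (not Cyclica's actual method), candidate molecules could be ranked by predicted affinity for the disease target minus a penalty for predicted off-target binding; all names and scores below are invented:

```python
# Toy prioritisation of drug candidates: rank by predicted affinity for the
# disease target while penalising predicted binding to off-target proteins.
# All molecule names, protein names and scores are invented for illustration.
import numpy as np

proteins = ["TARGET", "OFF_A", "OFF_B", "OFF_C"]
candidates = {                      # predicted interaction scores (0..1) per protein
    "mol-001": [0.92, 0.10, 0.05, 0.40],
    "mol-002": [0.85, 0.70, 0.60, 0.20],
    "mol-003": [0.78, 0.05, 0.08, 0.02],
}

def priority(scores, off_target_penalty=0.5):
    on_target, off_targets = scores[0], np.array(scores[1:])
    return on_target - off_target_penalty * off_targets.max()

ranked = sorted(candidates, key=lambda m: priority(candidates[m]), reverse=True)
for mol in ranked:
    print(mol, round(priority(candidates[mol]), 3))
```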

Friedrich Rippmann, Computational Chemistry & Biology director at Merck, said the goal is to identify promising molecules and test them as quickly as possible.

"Assessing new technologies is central to how we will advance our discovery programmes, and artificial intelligence applications like Ligand Express will provide important insights to enhance how we think about target identification to support phenotypic screening and off-target profiling in general," Rippman said.

"Artificial intelligence has the power to make the previously unimaginable a reality - we are eager to harness these new possibilities to help drive the discoveries that can transform the lives of people affected by difficult-to-treat diseases," added Belén Garijo, member of the executive board and CEO Healthcare at Merck.

07/12/2018 AI cracks text-based captchas

A team of computer scientists has developed a new machine learning program that can crack the text-based captchas still used by many websites to protect against bots in a fraction of a second.

The program was developed by scientists from Lancaster University in the UK, Northwest University in the US and Peking University in China. It uses deep learning techniques and its makers claim a much higher success rate compared to other captcha attack methods.

It uses a Generative Adversarial Network (GAN) to decode the captchas, creating synthetic captchas that are used to train the base solver algorithm. This is then fine-tuned using a smaller set of real captchas. This saves time and effort, with just 500 genuine captchas required to train the attack program.
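
The pre-train-then-fine-tune pattern the researchers describe looks roughly like the sketch below, here simplified to a single-character classifier in PyTorch, with random tensors standing in for the synthetic and real captcha images:

```python
# Sketch of the transfer-learning pattern described above: train a solver on
# plentiful synthetic captchas, then fine-tune on a small set of real ones.
# Random tensors stand in for captcha images; labels are single characters (a-z, 0-9).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def fake_captchas(n):
    return TensorDataset(torch.randn(n, 1, 32, 96), torch.randint(0, 36, (n,)))

synthetic_loader = DataLoader(fake_captchas(2000), batch_size=64)   # GAN-generated in the paper
real_loader = DataLoader(fake_captchas(500), batch_size=32)         # ~500 genuine captchas

solver = nn.Sequential(                                             # tiny CNN stand-in for the base solver
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 48, 36),                                    # 36 classes: a-z plus 0-9
)
loss_fn = nn.CrossEntropyLoss()

def train(loader, epochs, lr):
    opt = torch.optim.Adam(solver.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(solver(images), labels).backward()
            opt.step()

train(synthetic_loader, epochs=2, lr=1e-3)   # pre-train on plentiful synthetic captchas
train(real_loader, epochs=2, lr=1e-4)        # fine-tune on the small real set
```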

In tests on 33 captcha schemes, some used by the likes of Wikipedia, Google, eBay and Microsoft, the program was able to solve a captcha within 0.05 seconds when run on a desktop graphics processing unit (GPU).

It was also able to evade advanced security features and demonstrated a high level of accuracy.

"It allows an adversary to launch an attack on services, such as Denial of Service attacks or spending spam or phishing messages, to steal personal data or even forge user identities," said Guixin Ye, a researcher on the project, which was presented at the ACM Conference on Computer and Communications Security (CCS) 2018 in Toronto.

"Given the high success rate of our approach for most of the text captcha schemes, websites should be abandoning captchas."

14/11/2018 Facebook, Google and Twitter use AI to track illegal drug dealing

During his rather feeble 'grilling' by US Congress over Cambridge Analytica and other topics related to the influence of his social networks, one question that did elicit a (relatively) direct response from Mark Zuckerberg concerned the illegal sale of drugs on Facebook and Instagram - presumably because he could fall back on his favourite stock answer of throwing some more AI at the problem.

The opioid crisis is a raw issue in the US: 72,000 people died from overdoses in 2017 alone.

Now Facebook has said it is actively using AI to track down dealers, coordinating its efforts with experts including forensic labs and local and national organisations. Google and Twitter are also involved in the collaboration.

Facebook's vice president of US public policy Kevin Martin laid out the company's thinking in a blog post.

"We want to make vital resources for treatment easier to find. When people search for information about opioids on Facebook and Instagram, we direct them to SAMSHA's [Substance Abuse and Mental Health Services Administration] National Helpline Information Page and other resources for free and confidential treatment and education," he said.

"We've also begun to roll out proactive detection on Facebook and Instagram to take down more content that violates our policies before people may see or report it. Our technology is able to detect content that includes images of drugs and depicts the intent to sell with information such as price, phone numbers or usernames for other social media accounts.

"By catching more posts automatically, this technology allows our team to use their expertise instead to investigate accounts, Pages, Groups and hashtags, as well as work with experts to spot the next trends."

Susan Molinari, vice president for public policy at Google, said that 50,000 searches about specific opioid drugs were made each day on the search engine. 

"We know overwhelmingly the people who are searching for help are the parents or family members of opioid users, so we know that if we can push them toward organisations like the Partnership for Drug-Free Kids, then we're getting them instant connections," Molinari said.

The efforts so far are all about blocking content and nudging users towards getting help rather than reporting transgressors to law enforcement.

24/10/2018 Oracle acquires AI-driven company-intelligence firm DataFox

Oracle has acquired DataFox, a SaaS AI firm that crunches large volumes of data on public and private businesses and feeds the results into an AI engine to create company-intelligence that customers can add to their CRM.

The San Francisco-based startup received initial funding from Google Ventures in 2014 and counts Bain Capital, NetApp and Goldman Sachs among its customers. Co-founder and CEO Bastiaan Janmaat was a growth equity analyst at Goldman Sachs before founding the firm, and the investment bank has a stake in DataFox.

In a letter to DataFox customers and partners, Steve Miranda, executive VP applications development at Oracle, said: "The combination of Oracle and DataFox will enhance Oracle Cloud Applications with an extensive set of AI-derived company-level data and signals, enabling customers to reach even better decisions and business outcomes."

DataFox pulls information on millions of businesses from multiple sources including news articles, digital properties and 'unique signals' and analyses them to provide real-time information on when a company's fortunes might be about to change. Oracle says it plans to "enrich" its cloud applications such as ERP, CX, HCM and SCM with "AI-driven company-level data". Presumably the idea is to steal a march on cloud competitors such as Salesforce.

Terms of the acquisition have not been disclosed.

19/10/2018 UK supermarkets to trial AI checkouts for age-verification

Facial recognition technology is to be trialled by UK supermarkets for age verification purposes, with a few as-yet-unidentified stores rolling out the scanning tech at self-service checkouts this year and more widely in 2019.

The rollout is being led by US vendor NCR, which makes self-service checkout machines for Tesco and Asda among others. The company will integrate an 'AI-powered camera' (whatever that may be) into the checkout machines, which will be able to estimate the age of shoppers when they are buying restricted items like cigarettes and alcohol. Read more on this story here.

19/10/2018 AI - where does the liability lie?

The arguments regarding liability in the event of error or incident are beginning to expand. As developments continue, and the use of AI becomes more mainstream, there will increasingly be cases which call into question who has liability for the systems in use.

So says Emma Stevens, associate solicitor - dispute resolution, at law firm Coffin Mew in this article for Computing.

The majority of the existing legislation and case law in relation to liability and duty in cases of negligence significantly pre-dates the ongoing robotics revolution. It is clear that the legal system has a lot of ground to cover before it can effectively regulate such advances and the existing law will need to be translated to apply to situations where considering the role and impact of AI and robotics was not previously necessary.

Businesses would be sensible to make themselves aware of the technological advances in the sectors in which they operate, to ensure that their contracts are clear regarding liability (both generally and for AI) and that they have adequate insurance in place for any systems used, where appropriate.

16/10/2018 It's big companies that are making the running in machine learning, survey finds

A survey of data scientists, software engineers, architects and senior management has found that large organisations are taking the lead in their experiments with machine learning. Respondents in large organisations were more likely to consider their efforts 'sophisticated', and to have their early successes rewarded with increased budgets, than those in smaller firms.

About half of the respondents were located in the US, a quarter in Asia with the remainder based elsewhere. The survey was conducted by Algorithmia, a US company offering a marketplace for machine-learning models.

Across the entire sample, the main drivers for deploying machine learning models were generating customer insights and intelligence and improving the customer experience. However, in large enterprises improving customer loyalty topped the list, mentioned by 59 per cent. Large enterprises were also more likely to mention cutting costs as being a motivating force.

Just 10 per cent of companies counted themselves as sophisticated in their use of AI and machine learning. The report notes that the sort of companies that pioneered big data techniques a few years back also have a headstart when it comes to deploying machine learning models. They have the data, the infrastructure and the skills required to build proprietary internal platforms - or 'AI layers' - on which to deploy. Examples include Facebook's FB Learner, Netflix's Notebook Data Platform and Twitter's BigHead. It seems likely that this lead will widen as investment follows success.

A statistic that demonstrates the general immaturity of the field is the fact that 55 per cent of efforts are driven by IT compared with 37 per cent by the business.

12/10/2018 China will overtake the US in AI, predicts former president of Google China Kai-Fu Lee

Kai-Fu Lee, head of VC firm Sinovation Ventures and former president of Google China, says that AI's influence will be hugely disruptive to everything from the geopolitical power balance to the job market and people's individual feelings of self-worth. While some of the changes will be for the better, many will not, he says, warning against the techno-utopianism common in Silicon Valley.

The speed of the coming AI revolution makes parallels with the job creation that accompanied the proliferation of electrical power and the industrial revolution redundant, Lee argued.

"Those earlier technological revolutions took a century or longer," Lee explained, in a fascinating if discomfiting interview with IEEE Spectrum. "That gave people time to grow, and develop, and invent new jobs. But we have basically one generation with AI, and that's a lot less time."

"We've opened Pandora's box," Lee went on, contrasting AI with other technological threats. "We did, as humans, control the proliferation of nuclear weapons, but that technology was secret and required a huge amount of capital investment. In AI, the algorithms are well known to many people, and it's not possible to forbid people to use them. College students are using them to start companies."

Lee believes the fact that the algorithms are easily available means that the nations with the most computing power - and the most centralised command structures - will make the running, ultimately exporting their innovations to others that might try to slow the tide to cushion its impacts. China has big advantages over current leader the USA, he said, as companies such as Tencent, which has close connections to the Chinese government, have the data, the infrastructure and a workforce that's quite prepared to get stuck into the more humdrum parts of developing AI.

"Chinese entrepreneurs find areas where there's enough data and a commercially viable application of AI, and then they work really hard to make the application work. It's often very hard, dirty, ugly work. The data isn't handed to you on a silver platter."

Much of the learning data for the ML algorithms comes from applications like Tencent's all-encompassing WeChat app, which is "Facebook, Twitter, iMessage, Uber, Expedia, Evite, Instagram, Skype, PayPal, GrubHub, LimeBike, WebMD, Fandango, YouTube, Amazon and eBay" rolled into one. Detailed information about a large proportion of China's huge population resides in one place. And size matters when you're training neural networks.

These factors, together with the Chinese government's ability to squash any opposition to developments like driverless trucks, are all in the country's favour as it seeks to become the dominant force in AI.

Whichever power bloc ultimately takes the lead, the real challenge, says Lee, will be how to manage societies characterised by increasing inequality and the loss over the coming decades of up to 50 per cent of current jobs, many with no obvious alternative role for those who hold them.

11/10/2018 New AI-focused announcements from Nvidia and Huawei

Graphics processing firm Nvidia has announced an open-source GPU-acceleration platform called Rapids which is aimed squarely at data scientists who need to crunch large volumes of data. Nvidia claims that for machine learning-type use cases Rapids has proved to be 50 times faster than CPU-only systems.

Unveiled yesterday in Munich, Rapids is a two-year-old collaboration between Nvidia engineers and Python contributors, building on Apache Arrow, Pandas and Scikit-learn. It is released at rapids.ai under the Apache 2.0 open-source licence.

"Rapids connects the data science ecosystem by bringing together popular capabilities from multiple libraries and adding the power of GPU acceleration." the firm says in its blog.

Meanwhile, Huawei has unveiled two AI-focused chips of its own. "As part of its full-stack AI portfolio, Huawei today unveiled the Ascend AI IP and chip series, the world's first AI IP and chip series that natively serves all scenarios, providing optimal TeraOPS per watt," proclaims the company's press release. "Their unified architecture also makes it easy to deploy, migrate, and interconnect AI applications across different scenarios," it says.

Alibaba, the "Chinese Amazon", which is investing heavily in AI capabilities, is also reported to be developing a new AI chip for release next year.

10/10/2018 Apple buys machine learning firm Spektral

Apple had kept its $30m acquisition of virtual reality (VR) firm Spektral last year a secret until Danish newspaper Børsen got hold of the story, reports Apple Insider.

Spektral, whose founders have now joined Apple, was a startup specialising in computer vision, using deep learning techniques and GPU hardware to improve the real-time processing of images and video directly from the camera. Apple is known to be keen to get ahead in the field of augmented reality (AR), and Apple Insider speculates that this may be behind the acquisition. Apple recently changed the design of iPhone cameras to better support AR and VR, it notes.

10/10/2018 What's new in Spark and machine learning?

Creating useful machine learning models is a tough job, but making models robust enough to support business processes in production is far tougher still. This is why the web giants build their own platforms to support their data scientists and engineers. Matei Zaharia and Andy Konwinski of Databricks told Computing about two open source projects designed to bring such capabilities within the reach of mere mortals. MLflow is a framework for standardising and packaging workflows and models, while Project Hydrogen improves the integration of popular deep learning frameworks such as TensorFlow and PyTorch with Apache Spark. Read the full story here.
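
For context, the heart of MLflow's tracking API is a handful of calls for logging parameters, metrics and a trained model against a run. A minimal sketch, assuming mlflow and scikit-learn are installed:

```python
# Minimal MLflow tracking sketch: log parameters, a metric and a model for one run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)

with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=500).fit(X, y)
    mlflow.log_param("C", C)                                  # record the hyperparameter
    mlflow.log_metric("train_accuracy", model.score(X, y))    # record a result
    mlflow.sklearn.log_model(model, "model")                  # package the model for later deployment
```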

09/10/2018 Autonomous agents are the next phase of enterprise AI, claims Fetch.AI

What if AI could take on complex negotiation tasks without requiring human intervention? This is where we are going next, according to Cambridge-based startup Fetch.AI, which recently partnered with Clustermarket, a booking platform for loaning scientific equipment. Using the system, instruments are represented by autonomous agents which navigate a virtual landscape seeking the best possible deal for themselves and optimising availability and price overall. Read more here.
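
The flavour of the idea can be captured with a toy matching loop in which instrument agents quote prices and experiment agents book the cheapest available one; this is an invented illustration, not Fetch.AI's actual protocol:

```python
# Toy illustration of autonomous agents matching supply and demand: each
# instrument agent quotes a price, each experiment books the cheapest available
# instrument. Invented for illustration; not Fetch.AI's or Clustermarket's system.
import random

class InstrumentAgent:
    def __init__(self, name, base_price):
        self.name, self.base_price, self.booked = name, base_price, False

    def quote(self):
        # idle instruments discount themselves to attract bookings
        return self.base_price * (1.0 if self.booked else 0.8)

instruments = [InstrumentAgent(f"spectrometer-{i}", random.uniform(80, 120)) for i in range(3)]

for experiment in ["assay-A", "assay-B"]:
    available = [a for a in instruments if not a.booked]
    best = min(available, key=lambda a: a.quote())
    price = best.quote()
    best.booked = True
    print(f"{experiment} books {best.name} at {price:.2f}")
```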

 
