Covid-19 and contracting for AI: don't skip the fine print in the rush for solutions

AI is different. Contractual mistakes now could have critical and expensive consequences, both in terms of human health and business viability

The use of AI to help combat the global crisis we find ourselves living through highlights a fundamental purpose of this technology: to assist humans, not to replace them.

The emergence of Covid-19 has seen a huge collaborative effort by the tech industry to combat the pandemic, much of it enabled by AI: from tracking infectious cases and modelling available data to get ahead of the spread of the infection, to searching for vaccines and treatments, triaging patients, and even analysing the potential for food shortages.

The pressing nature of the emergency means collaborations are rapidly being put in place between public and private sector entities, with AI technology providers making their models, solutions and expertise available to government organisations, large commercial enterprises and SMEs. Often, multiple players are involved, and frequently the collaborations have a multinational aspect to them as well.

Given the urgency of the situation, some may feel they lack the time for a disciplined approach to contracting, while others may simply drop the ball. But mistakes now could have critical and expensive consequences, both in terms of human health and business viability. Here then are some of the key commercial legal principles that every company should address in their collaboration deals, even when moving at pace.

Intellectual property (IP)

In most collaborations, it's important for each player to ensure that the knowledge, data and technology (collectively referred to here as ‘IP') they bring to the project remain theirs afterwards and that their ability to continue using that IP outside or after the collaboration is not lost. This is achieved using contractual provisions dealing with ‘background IP' (setting out clearly what it is, what it isn't, and who owns what). The results of AI projects often incorporate IP from several of the collaborating entities, which can lead to problematic issues of co-ownership, as well as the potential for the inadvertent transfer of rights between the parties. It's therefore critical that there is clarity about how IP created during the project will be owned.

In more standard customer and supplier relationships, it's typical for customers to approach transactions expecting to own all IP generated by the supplier, but this isn't always appropriate. For example, in addition to improvements to a customer's background IP, new IP coming out of the project could be a completely new algorithm or model that has little to do with the customer's data or business and which the supplier would expect - and potentially need - to be able to use in future projects. Suppliers will therefore want to own these technical improvements, but it's common to see customers resist this in negotiations, nervous that they will be giving suppliers rights to their IP and the project output.

Getting this right is important when deploying or using AI. If not dealt with properly, it can be a difficult and expensive mistake to correct.

Data privacy

The training and deployment of AI present new and challenging data protection considerations, including in cases where, on the face of it, no personal data is made available to the supplier or fed to the AI solution. For example, if a health service provider made available anonymised data about patients who have been admitted to hospital with Covid-19 in order for AI to predict trends associated with the spread and impact of the disease, reidentification of individuals could nevertheless accidentally occur.

As the inherent nature of AI is to learn and adapt, it is possible for AI to learn to pull information from other sources or make highly accurate inferences which, in combination with the anonymised data, could lead to the identification of individuals. It is therefore advisable that, where reidentification is at least a foreseeable possibility, even anonymised data be treated as though it is, or could be, personally identifiable data in the context of a project implementing AI.
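To make the risk concrete, here is a minimal, hypothetical sketch (in Python, with entirely made-up column names and records) of how joining an ‘anonymised' dataset to an auxiliary source on a few quasi-identifiers can single out a named individual; it is an illustration of the general linkage problem, not a description of any real dataset or system:

```python
import pandas as pd

# 'Anonymised' hospital extract: no names or patient numbers,
# but it still carries quasi-identifiers. All values are invented.
anonymised_admissions = pd.DataFrame({
    "age": [34, 71, 52],
    "postcode_district": ["SW1A", "LS6", "M14"],
    "admission_date": ["2020-04-02", "2020-04-03", "2020-04-03"],
    "covid_positive": [True, True, False],
})

# Auxiliary information an AI system (or an attacker) could draw on,
# e.g. a public post mentioning a hospital admission. Also invented.
public_posts = pd.DataFrame({
    "name": ["A. Example"],
    "age": [71],
    "postcode_district": ["LS6"],
    "admission_date": ["2020-04-03"],
})

# Joining on the shared quasi-identifiers links a named individual
# to a supposedly anonymous health record.
reidentified = anonymised_admissions.merge(
    public_posts, on=["age", "postcode_district", "admission_date"]
)
print(reidentified)
```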

More fundamentally, all entities involved in any AI project that makes use of data that relates to individuals should ensure that they cover off their data protection compliance obligations in their contract and that they actually abide by them operationally, too.

Liability

It is established practice that software provided as a service in the business-to-business context is offered ‘as is', and it's unusual for suppliers to take responsibility for how customers use the software, or for any damage to others caused as a result of the customer's use of it. However, as soon as you start to look at this through an AI lens, it's easy to see why customers may want to attribute responsibility to their supplier for harm caused to third parties.

For instance, if AI made available to the NHS resulted in damage to a third party or, even worse, harm to a human being, the NHS would want the supplier held liable. AI is an incredibly complex, specialist technology, and non-experts are unlikely to consider themselves capable of assessing what risks and losses might arise from the use of the technology when they can't fully understand what it does, how it works, or what it's capable of doing as it ‘learns'.

Such predictions are, surely, more suitably left to the minds of the engineers who created the AI (or their employers). But suppliers will be hesitant about accepting liability when the accuracy and effectiveness of the AI they develop could be materially impacted by the data provided by the client - if the data is biased or skewed, the output of the AI is likely to be biased or skewed, too. Similarly, suppliers providing AI algorithms or models for customers to use going forwards are not going to want to accept broad and open-ended liability for their customers' future use.

There's no right answer to all of this. The key point is that these issues must be considered upfront and the parties need to reach a commercial agreement on how they wish to deal with them. It will be interesting to see how EU (and UK) regulators address many of these points as AI-specific regulation develops.

That's not to say, though, that the players involved in AI projects fighting Covid-19 today shouldn't be thinking about these key issues. For now, it comes down to the usual principles of negotiation: how the parties to a contract balance and allocate between them the risks and potential losses that could arise from the use of AI.

David Naylor and Charlie Lyons-Rothbart are partners at Wiggin LLP