Who owns agentic AI?

Developers must be aware of developing case law around artificial intelligence

Agentic AI is evolving quickly but there are significant legal questions still to be answered, writes Dr Harry Strange.

In many ways, agentic AI is a natural extension of generative AI, made possible by the continuous development of large language models (LLMs). Agentic AI models use LLMs’ advanced natural language processing capabilities to solve problems and act autonomously, making decisions and executing tasks with a predetermined goal in mind.

The potential end uses for agentic AI are almost endless, but there are areas where it is likely to be more readily accepted. For example, if you asked an AI agent to book you a holiday and gave it some dates, it would be capable of finding a flight, booking it, then finding and booking a hotel, and sorting your airport transfers too.

In the above application, agentic AI is unlikely to do much harm, but even here, there might be a need for human touchpoints to avoid the model making decisions you might not like.

In an industrial setting, agentic AI models could help to improve operational efficiency. For example, an AI agent might be capable of analysing sensor data to predict when a piece of machinery is likely to fail, and take action to prevent that from happening by scheduling maintenance. In a healthcare setting, an agentic model might be used to optimise staff rotas autonomously, making adjustments for unplanned absenteeism and ensuring the right cover is provided.

Developers need to be sensitive to the current limitations of agentic AI and the challenges this field of development could bring in the future. For example, the ethical considerations surrounding the use of AI agents to perform safety-critical tasks – and the developer’s inability to provide formal guarantees of how it will work and what it will do – are a significant barrier that is yet to be resolved. In the meantime, some end uses might be regarded as off-limits for agentic AI.

In any agentic AI application, there could be concerns about using an autonomous system to solve problems by making decisions and taking action, without any way of ensuring it sticks to specific terms of use. For example, an agent could start scraping data from a website that has blocked it from doing so, which could give rise to copyright infringement claims in the future.
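One concrete safeguard against the scraping risk described above is to have the agent check a site's robots.txt rules before fetching anything. A minimal sketch using Python's standard-library `urllib.robotparser` is shown below; the agent name and the in-memory robots.txt content are illustrative assumptions, and a real deployment would fetch the live file and respect the site's terms of use as well.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content (a real agent would fetch the
# live file from https://<host>/robots.txt).
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

def may_fetch(agent_name: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    # Parse the robots.txt rules and check whether the named agent
    # is permitted to fetch the given URL.
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent_name, url)

# Example: the hypothetical "HolidayAgent" may read public pages
# but is blocked from the /private/ path.
print(may_fetch("HolidayAgent", "https://example.com/public/page"))   # True
print(may_fetch("HolidayAgent", "https://example.com/private/data"))  # False
```

Note that robots.txt compliance is a courtesy convention, not a legal shield in itself, but building such checks into the agent's fetch path demonstrates the kind of behavioural constraint discussed here.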

Developers can’t afford to be complacent in this area; they must ensure that their AI agent adheres to the right behaviours. This can be achieved by using guard rails and locks, and by putting in place a modern AI audit framework that monitors both the agent’s intent and its actions in real time.
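As a rough illustration of what such a guard rail and audit layer might look like, the sketch below wraps every proposed action in a check against an allowlist and records both the stated intent and the concrete action in an audit log. The allowlist contents, function names, and log format are all hypothetical; production frameworks would be far richer, but the core pattern (check before execute, log everything) is the same.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical allowlist: the only tools this agent may invoke.
ALLOWED_ACTIONS = {"search_flights", "book_flight", "book_hotel"}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, intent: str, action: str, permitted: bool) -> None:
        # Record the agent's stated intent alongside the concrete action,
        # so behaviour can be reviewed, and blocked, in real time.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "intent": intent,
            "action": action,
            "permitted": permitted,
        })

def guarded_execute(intent: str, action: str, log: AuditLog) -> bool:
    # Guard rail: refuse any action outside the allowlist,
    # but log the attempt either way.
    permitted = action in ALLOWED_ACTIONS
    log.record(intent, action, permitted)
    return permitted

log = AuditLog()
guarded_execute("book a holiday", "book_flight", log)   # permitted
guarded_execute("book a holiday", "delete_files", log)  # blocked, but logged
```

The point of logging refused attempts, not just successful ones, is that an audit trail of what the agent *tried* to do is often the most useful evidence when investigating unexpected behaviour.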

Assuming developers are keen to commercialise their agentic AI model, for example by licensing it to third parties, it might be sensible to seek patent protection for the nuts and bolts, including some of the underlying algorithms.

It is sometimes wrongly assumed that software is not patentable, even though both the UK Intellectual Property Office (UKIPO) and the European Patent Office (EPO) have been clear that in many cases it can meet the requirement for patentable subject matter. However, there could be specific challenges for agentic AI.

For example, a patent directed to an agentic AI that utilises multiple disparate systems can be difficult to enforce. Also, an agentic AI directed primarily to a business use case, such as a financial method or an HR improvement, is unlikely to be deemed a technical application and so would struggle to meet the requirements of patentable subject matter. This means applications must be prepared with care.

Developers should also bear in mind that patent protection for their innovation provides a robust mechanism for preventing others from commercialising it without permission and, as such, strengthens any commercial potential it might have.

The legislative landscape for AI is still taking shape and there is still a great deal of uncertainty for developers, but they can’t afford to disregard legal considerations altogether. From an IP perspective, there are a number of cases in progress that have been brought against large tech companies for the alleged theft of copyright-protected material, and other liability issues could arise for developers in the future. It is therefore vital that new AI models, whether generative or agentic, adhere to current legislation.

To ensure peace of mind, developers of AI agents should make sure there is a legal dimension to the development process at every stage. Understanding current legislation and how to apply it is critical, but it is also important to understand where legal issues might arise in the future. This could help developers to avoid litigation risks and create models with longer term commercial potential.

Dr Harry Strange is a partner and patent attorney specialising in AI, computing and consumer electronics at European IP firm Withers & Rogers.