Artificial Intelligence regulation bill introduced in House of Lords

Lord Holmes of Richmond argues UK must lead on ethical AI

Image: The time to act on AI safety and ethics is now

The Artificial Intelligence (Regulation) Bill will be introduced as a Private Member's Bill in the House of Lords this afternoon by Lord Holmes of Richmond. He argues that time is running out to regulate AI for the best outcomes.

With the government last week putting the regulation of AI very firmly on the back burner, Lord Holmes of Richmond will later today introduce a private member's bill (PMB) in the House of Lords.

The PMB would allow for the establishment of a new AI Authority, which would seek to ensure that regulators from different sectors of the economy are aligned, and that any regulatory gaps are identified.

The AI Authority would have other functions too, including monitoring economic risks arising from AI, horizon-scanning for developing technologies, facilitating sandbox initiatives to allow the testing of new AI models, and accrediting AI auditors.

Image: Lord Holmes of Richmond MBE

The bill would introduce a set of regulatory principles governing the development and usage of AI. These include safety, security and robustness, transparency, fairness, accountability, governance, contestability and redress.

There are clauses in the Bill on “meaningful, long-term public engagement” on the opportunities and risks of AI, as well as transparency around the use of third-party data and the principle of informed consent on the use of intellectual property in training datasets.

Why now?

Lord Holmes originally introduced the Bill right at the end of 2023, following the AI Safety Summit at Bletchley Park. The Bill fell when Parliament was dissolved before last year's election. Lord Holmes explained to Computing his concern about what looks like government backsliding on AI safety.

“Weeks after the AI Safety Summit and declaration, I introduced my private members bill – the AI (Regulation) Bill to Parliament. I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it. The intention of my Bill is to do precisely what was promised in that declaration, to ensure the ‘human-centric, trustworthy and responsible’ use of AI.

“This technology is evolving at such a pace, far faster than our legislative processes are used to and we must do better. My AI Bill fell when parliament was dissolved for the general election. The party that won that election promised us binding regulation on AI but since then the government seems to have changed tack, siding instead with the US and Big Tech and there is no sign of the promised regulation on AI.”

Public discourse about generative AI often ends up catastrophising and focusing on the existential risks posed by the technology. Indeed, cynics would argue that such risks have been talked up by Big Tech, eager to prevent greater transparency in the name of safety and keep the dollars flowing.

Whilst the Bill does acknowledge the risks the technology could theoretically pose, its focus is on the day-to-day reality of how AI is already affecting people's lives. Lord Holmes published a report last week which sets out this reality and explains why the UK urgently needs an approach to AI that has humanity at its heart. He says:

“In my report I consider the current reality of eight individuals at the sharp end of this under-regulated part of our modern world. Whether it’s discrimination and bias in AI algorithms, disinformation from synthetic imagery, scams using voice mimicking technology, copyright theft or unethical chatbot responses we are already facing a host of problems from existing AI.”

For each of the examples, the report sets out the challenge and explains how the proposed AI Bill would address it.

For the jobseeker, for example, the report sets out that AI is increasingly being deployed in recruitment, but that there are no specific laws regulating the use of AI in employment decisions.

Computing covered some of these implications in a recent article in our series on the tech skills crisis. The report considers risks such as women being discriminated against because training data has been distorted by years of male-skewed hiring decisions, as well as problems with data collection and a lack of transparency.

The remaining seven examples highlighted by the report are the teacher, the teenager, the scammed, the benefit claimant, the creative, the voter, and the transplant patient. Lord Holmes continues:

“We desperately, urgently need action and I am, today, introducing my AI Bill to parliament in the hope that action, taken now, will help the UK lead on regulating AI in a way that focuses on the potential to transform and enhance human wellbeing, peace and prosperity.”