Technologists told: Engage or be ignored
AI laws are coming – but who’s writing them?
Policy makers at the State of OpenCon 2025 urged technologists to get involved in shaping legislation.
Regulating a technology like AI is never going to be easy, and while progress is being made, technologists still feel they don’t have a voice in setting incoming standards.
That was the sense from attendees at OpenUK’s State of OpenCon this week, where the open source community came together to share, learn and geek out.
Not many conferences have a Star Wars pun as their WiFi password, or Stephen Fry reading Douglas Adams replacing the background muzak.
But, while there was plenty of celebration, there were also big discussions with tech policy makers.
“The challenge [for technologists],” said Chi Onwurah, MP for Newcastle upon Tyne Central and West, “is that the creative industries communicate [their problems with AI legislation] well, but the tech community doesn’t.”
Onwurah, who has 20 years’ experience as a technologist, was referring to the creative industries’ campaign against AI companies’ perceived IP theft. While media organisations tend to be very good at communicating such grievances and know whom to lobby in Parliament, it isn’t a familiar process for techies.
“There are two routes to communicate with Parliamentarians,” she explained: Under the Microscope and innovator sessions. “Get in touch with your MP,” she urged.
Lord Tim Clement-Jones, who leads the AI Select Committee, said most MPs and Lords don’t know what is actually happening on the ground in tech, and told delegates to reach out.
Onwurah added, “Unless that level of debate happens, we are not going to be taking decisions based on your interests. I would say to all of you, engage with the political process; engage with politicians, engage with Select Committees.”
A week is a long time in politics, but not in tech
There is a problem, though: the government moves at a glacial pace compared with the fast-changing tech sector.
“I’m on the open standards committee and we haven’t met for 18 months,” OpenUK CEO Amanda Brock remarked.
Attendees told us this holds them back from working with government: they can’t trust that Parliamentarians will understand what they’re talking about.
It’s a problem Dr Laura Gilbert, who founded the government’s AI incubator in 2020, is all too familiar with.
“Trust in government is very low [in the UK]. When we announced on Twitter that we were building an AI incubator we got two types of responses: one, that the government isn’t competent enough to work with AI; and the other, that we would be using AI to steal people’s benefits.”
Despite the distrust, the incubator has had its own wins. One example is a tool, tested with the End of Life Bill, that can analyse speeches in Parliament and an MP’s voting record to predict if proposed legislation will make it through.
However, the kind of transparency the incubator is committed to brings its own dangers:
“We don’t want large multinationals taking our [open source] work, repackaging it and selling it back into government,” Laura said. Her team are starting to think about a licensing model: use in the public interest might be allowed, but not commercial exploitation.
As that would take the incubator further away from its open source goal, the debate is ongoing – with Laura remaining committed to “radical transparency.”