MPs want to see what's behind the curtain in government’s OpenAI deal
Putting the fox in charge of the henhouse, say critics
MPs, campaigners and academics say the government’s MoU with OpenAI is “crazy” and have called for more transparency over what is included.
Chi Onwurah, chair of the House of Commons Science, Innovation and Technology Committee, has warned that the government’s new agreement with ChatGPT developer OpenAI is “very thin on detail,” and called for guarantees around data security.
OpenAI signed the memorandum of understanding with technology secretary Peter Kyle this week to explore the deployment of AI models across government, in areas including justice, security and education.
Just like the government’s deal with Google announced earlier this month, the agreement falls short of including KPIs, metrics, or indeed any way to judge success. According to the text of the MoU, that is because it is “voluntary, not legally binding.”
Still, Labour is positioning the agreement as a big win for its AI plans. The government will work with OpenAI to identify areas where “advanced AI models” can be deployed in the public and private sectors to improve civil servants’ efficiency and service effectiveness.
However, Onwurah has said: “We want assurance that there will be transparency over what public data OpenAI will have access to for training and that it will remain in the UK and within the UK’s data protection framework and legislation.
“It’s important for public trust that the government is more transparent about how this relationship will work. The public is certainly not convinced that the tech giants are on their side or that AI is on their side. They need to have confidence that the government is on their side.”
She cited previous public sector procurement failures, like the Horizon scandal, and said the committee hopes the government has learned its lesson.
A spokesperson for the Department for Science, Innovation and Technology said: “This partnership does not give OpenAI access to government datasets. We always comply with data protection legislation, and any future decisions in this space, including around procurement, would need to follow the usual robust processes.”
Onwurah was not the only critic to raise concerns. Martha Dark, executive director of the Foxglove campaign group, said: “The British government has a treasure trove of public data that would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT.
“This is yet more evidence of this government’s credulous approach to big tech’s increasingly dodgy sales pitch. Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty.”
Wayne Holmes, professor of critical studies of artificial intelligence and education at University College London's Knowledge Lab, was more damning. He told The Register:
“Policymakers and idiots around the world are just getting sucked into this hype-fest, believing the nonsense that these people are saying, that this is going to sort everything, can help solve all the problems of the world, and cancer is going to be solved in three weeks, poverty in five weeks.”
He added that the hype around artificial intelligence is “utter, utter drivel and neoliberal nonsense,” and to sign an MoU with an AI company is “just crazy.”