Amanda Brock, CEO of the influential not-for-profit OpenUK, which advocates for open technology, will be delivering a keynote on AI, open source software and cybersecurity at the IT Leaders Summit in the autumn. We do hope you'll be able to join us.
In the meantime, we caught up with Amanda and asked her to whet our appetites by providing us with some background to her talk - without giving too much away of course.
Computing: How did you first become interested in Open Source software?
Amanda Brock: As with many things in life, it was a coincidence. I was approached by a company called Canonical, which is the commercial sponsor of the Ubuntu operating system. I joined on a three-month contract in 2008 to scope the legal function and help recruit the staff. Then I was due to go on to Amazon to work on their new electrical retail device. I joined, and within weeks had fallen in love with this area of tech. The values it represents match my own, and the people I was working with were some of the brightest and most interesting I had known. It was somehow irresistible to me.
What would you say are the main strengths of the open source approach?
I have a very big picture view on this, but fundamentally I see it as a correction of the history of software that has shifted tech from the hands of the few to the many, allowing better innovation.
In the beginning, developers in universities were the early coders and they collaborated with each other. However, lawyers and regulators extended copyright law to cover code, with the consequence that, to use another's code or to enable another to use your code, you needed a licence. This led to companies building vast armouries of copyrighted code which they kept secret; they built business models and generated revenue from that secret sauce. This was all down to the application of copyright.
Had this not been the case, we can only imagine how our technology industry would have evolved differently. I like to think it would have been collaborative and that we would have seen code as something society as a whole benefitted from. There would still be companies around it but it would be very different. It's not about removing capitalism, but supporting a different form of capitalism.
The open source software movement 30 years ago started a shift where individual developers chose to freely license their code, to allow others to use and modify it, to share the secret sauce and to enable the democratisation of technology. Even 10 years ago, those of us in this space could never have imagined that today the majority of code created would be open source software. I think of that as the societal correction of the mistake the lawyers and regulators made in applying copyright to code.
That may be the case - open source is everywhere - but many decision makers still don't really get it. Why do you think that is?
I already mentioned that the scale of production and indeed the utilisation today could not have been imagined a decade ago, perhaps not even five years ago. What we have seen happen is that the pace of learning has not matched the pace of adoption. That's unsurprising when it's happened so fast.
We need decision makers and users to understand that adopting open source is different from using proprietary code, and that as a user you must exercise discernment in your choice of which code to use. Either you, your staff or a third party on your behalf must manage the code to ensure it is well "curated." In this context, curation means having good technical hygiene and governance in place for the code. This requires an understanding that risk management shifts away from contracts towards a software or open source policy, and that you have appropriate procedures that your engineers will follow as good practice.
You're going to be talking about AI in this context in your keynote. Are open source and AI a good match?
This is a difficult question. In theory, there is no reason why AI software components could not be distributed on open source licences - that is, standard licences approved by the Open Source Initiative (OSI) which meet the Open Source Definition and which allow anyone to use the software for any purpose.
However, as regulators call for some level of "responsibility" around AI distribution, it appears that we will see something along the lines of a code of conduct or acceptable use requirements imposed on its distribution. Those restrictions or qualifiers curtail the absolute freedom of open source distribution, which would allow anyone to use the software for any purpose. The OSI has recognised the need for a new "Open Source AI" definition to meet this need and is currently consulting on it.
The data used to train AI and produced by it forms a significant component of AI overall, and we need to see openness there, allowing us to understand the data used. Open source software licences alone, even without the requirement for responsibility, would not be enough to cover this.
Meta rightly framed the release of Llama 2 as "open innovation" and I believe that AI and open innovation are a good match, allowing for the regulatory requirements to "do no harm" to be met and bringing open data into the mix too.
I don't believe that concerns about bad actors operating in the open are legitimate. Open innovation, like open source software, is built on transparency, and this engenders trust. I am far more concerned that we would see bad actors in a closed proprietary "black box."
Regulators might not agree, fearing an uncontrollable free-for-all. What's your view?
I think that the balance here lies in the acknowledgement that there had to be some give and this is met through the "do no harm" acceptable use or code of conduct approach.
Tell us a little about your keynote at the IT Leaders Summit. Who do you hope to reach?
For many years I have been so lucky to travel the world speaking as part of our open source software community. However, my speaking in the UK has been much more limited, and my speaking to tech folk outside my open source world has been more limited still. So, I am really looking forward to the opportunity to speak with this broader audience here in the UK, where our open source community has tended to sit under the radar, and to help improve understanding.
I spent 25 working years as a lawyer, across a broad sweep of the technology sector, before I shifted into the world of not-for-profits as a CEO five years ago. Almost 20 of those years were spent in companies, and there are few roles where you get more into the weeds of tech than when you run a company's legal function. So my experience is much broader than open source, and there are few things across tech that I don't have an opinion on, some of which I am looking forward to sharing.
Amanda Brock will be delivering her keynote, "AI, Open Source Software and Cybersecurity: Is open source software the answer to AI?", on 5th October at the IT Leaders Summit 2023. The ITLS is a two-day event at Down Hall in Essex, bringing together the most senior and influential IT leaders in the UK. Register today.