BSI releases international standard for responsible AI management

Crucial step toward ensuring the safe, ethical, and responsible use of AI

The British Standards Institution (BSI) has taken a step towards shaping the responsible use of AI with the release of a new international standard: BS ISO/IEC 42001.

Designed to guide organisations in developing and deploying AI technology safely, the standard addresses the growing need for guidelines in the AI space and aims to bridge the "AI confidence gap."

It also addresses concerns such as non-transparent automated decision-making and the increasing reliance on machine learning in system design.

"As the adoption of AI systems becomes more widespread, ISO/IEC 42001:2023 provides a certifiable AI management system framework within which AI products can be developed as part of an AI assurance ecosystem," the BSI says.

"The endgame is to help businesses and society get the most benefit from AI, while also reassuring stakeholders that systems are being developed responsibly."

The BSI's recent Trust in AI Poll, involving 10,000 individuals across nine countries, revealed a global demand for international guidelines: 61% of respondents worldwide, and 62% in the UK, expressed a desire for such standards.

The poll found that nearly 38% of respondents worldwide use AI daily at work, with 62% expecting their industries to do the same by 2030.

Closing the AI confidence gap and building trust in the technology were identified as crucial steps to unlocking its benefits for society and the world.

With BS ISO/IEC 42001, BSI - which serves as the UK's national standards body and representative at the International Organization for Standardization (ISO) - aims to empower organisations to responsibly manage AI technology.

Its guidance outlines how organisations can establish, implement, maintain, and continually improve an AI management system.

BSI's intended audience for BS ISO/IEC 42001 includes consultants specialising in AI, personnel responsible for AI policy, senior staff seeking to integrate or customise AI within their business, and providers of AI solutions and services - spanning machine learning, natural language processing (NLP) and computer vision. The BSI also has AI researchers and developers of AI standards, among others, in its sights.

The new standard is seen as a crucial step toward ensuring the safe, ethical, and responsible use of AI, as acknowledged by its prominent reference in the UK Government's National AI Strategy.

Susan Taylor Martin, CEO of BSI, highlighted the transformational nature of AI and the critical role of trust in its widespread acceptance.

"For it to be a powerful force for good, trust is critical. The publication of the first international AI management system standard is an important step in empowering organizations to responsibly manage the technology," she said, adding that the move offers the opportunity to leverage AI for accelerating progress towards a better future and a sustainable world.

Scott Steedman, Director General, Standards, BSI, acknowledged the urgent need for guidelines in the AI space, especially as technologies such as self-driving cars, digital assistants and medical diagnostics increasingly rely on AI.

The release of BS ISO/IEC 42001 aligns with ongoing AI regulatory developments in the UK.

The Information Commissioner's Office (ICO) recently announced a consultation on how AI developers comply with data protection laws.

The UK government is also poised to publish benchmarks for new legislation regulating AI technology.