Third draft of EU AI code of practice published
Not everyone agrees on what constitutes ‘systemic risk’
The third draft of the EU General-Purpose AI (GPAI) Code of Practice has been published, marking the final consultation phase before its expected completion in May 2025.
The revised Code introduces refined commitments and implementation measures, with a particular focus on transparency, copyright obligations and risk assessment for providers of AI models classified as posing systemic risks.
The third draft features a more streamlined structure, consolidating transparency and copyright commitments that apply to all providers of general-purpose AI models, alongside safety and security obligations specific to providers of models deemed to pose systemic risks.
A notable addition is a model documentation form, designed to simplify the process of documenting AI models for regulatory compliance. The draft also clarifies that certain open-source AI providers may be exempt from transparency obligations, aligning with AI Act provisions.
Systemic risk requirements and industry concerns
For AI models deemed to carry systemic risk, the Code introduces stricter requirements, including risk assessments, model evaluations, incident reporting and cybersecurity obligations. The AI Office has indicated that further guidance will be issued to clarify responsibilities across the AI value chain, particularly for downstream actors who modify or fine-tune existing models. This has drawn mixed reactions from industry stakeholders. While some view these measures as essential for ensuring AI safety, others argue that the lack of clear definitions around what constitutes “systemic risk” could lead to inconsistent enforcement and unnecessary burdens for AI companies.
The evolving nature of AI technology presents an ongoing challenge in balancing regulatory oversight with the need for innovation. The Chairs of the Code stress the importance of maintaining flexibility, allowing regulations to adapt alongside technological advancements. However, some industry voices warn that frequent regulatory changes could create uncertainty for businesses investing in AI development.
Additional guidance from the AI Office
Beyond the Code of Practice, the AI Office is separately working on a public summary template for training data transparency. It has also committed to publishing additional guidance on key issues, including clarifying the definition of general-purpose AI models, establishing the responsibilities of providers and downstream actors, and determining how the rules apply to models launched before August 2025.
Stakeholders can submit written feedback on the third draft until 30 March 2025, with further discussions planned through working groups and dedicated workshops. Civil society organisations and downstream AI users are also being invited to participate, potentially broadening the range of perspectives influencing the final draft.
Industry pressure and international influence
The EU’s AI regulation efforts have faced criticism, particularly from US officials and industry leaders concerned about overregulation. At the Paris AI Action Summit, US Vice President JD Vance warned that stringent regulations could stifle innovation, contrasting the EU’s approach with the US administration’s focus on “AI opportunity.”
Meanwhile, European AI firms, including Mistral, have raised concerns about what they describe as an increasing European legislative burden.