The GPAI Code of Practice: a long journey, not yet over

  1. The Code of Practice

The AI Act’s provisions on general-purpose AI (GPAI) were inserted into the regulation “at the last minute” to put a regulatory Band-Aid on the gap opened by a new technology that started to become popular in 2022, one year after the first version of the AI Act proposal was published. Due to the urgency of the matter, the new rules had little time to reach political consensus among EU Member States, and a compromise was necessary.

Therefore, we now have a bare-minimum standard of protection against GPAI risks, including the so-called systemic ones. As a result, much of the actual content is left to secondary sources: soft law, guidance from the European Commission, and case-law interpretation.

One of these “out-of-the-law” solutions is the Code of Practice, a voluntary instrument with legal effects, instituted by Article 56 of the AI Act.

Table 1 – Article 56 of the AI Act

Article 56
Codes of practice
1.   The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches.
2.   The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues:

(a) the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of market and technological developments;

(b) the adequate level of detail for the summary about the content used for training;

(c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;

(d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain.

3.   The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.
4.   The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level.
5.   The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants.
6.   The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice.

The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

7.   The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.
8.   The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in light of emerging standards. The AI Office shall assist in the assessment of available standards.
9.   Codes of practice shall be ready at the latest by 2 May 2025. The AI Office shall take the necessary steps, including inviting providers pursuant to paragraph 7.

If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

Adherence to the Code has the legal effect of granting signatories a presumption of compliance with the AI Act. For this reason, it represents a crucial step towards the actual implementation of the AI Act for providers.

However, the drafting process did not go exactly as envisioned in Article 56. Several obstacles slowed the process and hindered its effectiveness, ultimately leading to a three-month delay and protests from several stakeholders.

  2. The Drafting Process

The drafting process started with a public consultation launched in July 2024, which received numerous responses from stakeholders. These submissions formed the basis for the first draft, written entirely by the Chairs and Vice-Chairs with the support of the AI Office and published on November 14th, 2024.

In parallel, a pool of relevant stakeholders and experts was selected to form the four working groups participating in the drafting process.

After the publication of the first draft, the members of the groups could give their input in four ways:

  • Participating in the working groups’ Webex meetings and answering online polls
  • Asking to speak during a meeting (preselected speakers only, 10 per meeting)
  • Submitting questions through an online form
  • Providing feedback in an online questionnaire

The working groups started their operations in October 2024, following a tight schedule.

On the basis of the feedback received from the participants, the Chairs and Vice-Chairs prepared the second draft, which was published on December 19th, 2024.

The feedback process was repeated, and the third draft was published on March 11th, 2025. However, due to the many comments and criticisms directed at this version of the Code, the drafting process paused for a while. Consultations with providers took place, and the final draft was released on July 10th, 2025.

The final Code consists of twelve commitments, grouped across three chapters: Transparency, Copyright, and Safety and Security. The Transparency and Copyright Chapters apply to all GPAI providers, while the Safety and Security Chapter is tailored specifically for those whose models pose systemic risk. Each chapter contains detailed measures that interpret and implement the legal requirements of the AI Act.

  3. Obstacles and objections during the drafting

Despite the good intentions and hard work of the Chairs and Vice-Chairs, the process was at a disadvantage from the start: although four working groups were formed, each with about 300 participants, their members were not invited to take part in the actual writing.

The whole process has been criticized by several parties for its lack of democratic participation, rigid structure, and tight schedule, elements that raise serious concerns about the legitimacy, inclusivity, and transparency of the process that led to the approved Code. The criticisms reflect a broader tension in AI governance between technocratic power and democratic legitimacy.

The most problematic issues are the following:

  1. At no point during the process were stakeholders invited to ad hoc meetings or workshops, or to participate in the actual writing of the Code. The writing was carried out entirely by the Chairs and Vice-Chairs.
  2. AI providers were granted the opportunity to speak in dedicated workshops with the Chairs, the AI Office, and policymakers, while civil society organizations, independent experts, academic institutions, and affected individuals were not invited.
  3. The space for giving feedback and discussing the Code’s provisions was rather limited, raising doubts about democratic participation in the process.
  4. The Chairs and Vice-Chairs were not all EU citizens or affiliated with EU institutions, sparking debate over foreign influence in the EU’s democratic processes.
  5. The very tight schedule was not inclusive, as it benefitted large organizations with ample human resources while penalizing marginalized categories such as disabled persons, single mothers, and others who could not dedicate their full working day to reviewing and commenting.

The first issue (the exclusion of working group participants from the actual drafting process) highlights a disconnect between the symbolic formation of multi-stakeholder groups and their real influence. Despite the formation of four working groups with hundreds of participants, the writing was carried out exclusively by individuals appointed through a political choice. This top-down approach severely limits participatory legitimacy and contradicts the AI Act’s own emphasis on stakeholder engagement under Article 56, which requires that codes “take due account of the needs and interests of all interested parties.” Such involvement should not be merely formal or superficial; it should be meaningful.

The asymmetry in stakeholder access, favoring AI providers in dedicated workshops while sidelining civil society, academics, and individuals directly affected by AI, is equally troubling. It calls into question the neutrality of the process and risks creating the perception that the Code reflects the interests of the most powerful actors rather than a balanced consensus. This is especially problematic in the context of AI regulation, where the societal impacts are vast and cross-sectoral.

Additionally, the limited opportunity for feedback and the compressed timeline may have unintentionally – or, more accurately, structurally – excluded marginalized voices. An inclusive regulatory framework cannot function if participation is contingent on institutional support, financial stability, or full-time availability. This favors well-resourced corporate actors and disadvantages civil society, independent researchers, and those from underrepresented communities – ironically, the very people whose interests AI legislators often claim to protect.

The concern over the nationality and institutional affiliation of the Chairs and Vice-Chairs adds another layer of complexity. While expertise should always be a driving criterion in such roles, the lack of a clear EU institutional anchor, especially in a process meant to shape compliance under EU law, can reasonably prompt questions about democratic accountability and geopolitical influence.

  4. The Commission’s opinion on the Code

On August 1st, 2025, the European Commission issued an opinion assessing the General-Purpose AI Code of Practice under Article 56. This assessment aimed to evaluate whether the Code adequately supported the proper application of the Regulation concerning the obligations set forth in Articles 53 and 55, which govern transparency requirements and the management of systemic risks associated with general-purpose AI (GPAI) models.

The Commission found that the Code’s Transparency Chapter contributes effectively to the proper application of Article 53(1)(a) and (b) of the AI Act, as it includes a “Model Documentation Form” that allows providers to document and update critical information about their AI models in a consistent and meaningful way, ensuring that regulators and downstream providers receive accurate, up-to-date data to assess the capabilities and limitations of the models. The measures also address how this information should be maintained and shared, taking into account model updates and confidentiality provisions.

Similarly, the Commission believes that the Copyright Chapter supports compliance with Article 53(1)(c) by requiring providers to adopt and implement a copyright policy, including measures to avoid infringing on protected content, such as excluding known piracy websites from data collection and respecting rights reservation protocols like robots.txt. The Code explicitly states that adherence does not equate to compliance with broader Union copyright law, which remains governed by national implementation of EU directives.
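To illustrate the mechanics of such a rights reservation, the sketch below shows how a crawler respecting robots.txt might check whether a page may be collected, using only the Python standard library. The “GPTBot” user-agent token and the example rules are assumptions for illustration, not requirements taken from the Code.

    # A minimal sketch (Python standard library only) of how a compliant
    # crawler might check a rights reservation expressed via robots.txt.
    # The "GPTBot" token and the rules below are hypothetical examples;
    # the Code does not name specific crawler user agents.
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt in which a rightsholder opts out of AI
    # training crawls while still permitting ordinary indexing.
    rules = [
        "User-agent: GPTBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    # A crawler honouring the reservation skips the site entirely...
    print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
    # ...while other crawlers may still access it.
    print(parser.can_fetch("SearchBot", "https://example.com/article")) # True

In line with the Copyright Chapter’s measures, a signatory’s data-collection crawlers would be expected to read and follow machine-readable reservations of this kind; the snippet merely illustrates how the protocol distinguishes between crawlers.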

Regarding the obligations of Article 55, which apply to GPAI models with systemic risk, the Commission found that the Safety and Security Chapter of the Code provides adequate guidance as it includes commitments to assess, mitigate, and document systemic risks before and after releasing a model. Providers must evaluate their models according to standards, implement cybersecurity measures, and report serious incidents promptly. These provisions are supported by detailed appendices that explain how to carry out evaluations, define systemic risks, and implement technical and governance-related safeguards.

The Commission also considered whether the Code meets the procedural standards set out in Article 56. It concluded that the Code clearly defines its objectives to serve as a compliance guide and facilitate oversight by the AI Office. Although it does not currently include key performance indicators (KPIs), the Commission deemed this acceptable at present, given the early stage of implementation and the varied capacities of participating organizations. Nevertheless, it encouraged participants to consider developing KPIs in the future to enhance transparency and accountability.

In terms of inclusivity and stakeholder engagement, the Code was found to take due account of the needs of affected parties across the EU. The reasoning behind this assessment notes that documentation obligations are proportionate to the size and role of the provider, and that the copyright measures account for both provider feasibility and rightsholder protections. The Commission also points out that the systemic risk provisions reflect the realities of the AI value chain, requiring participants to consider how their models will interact with downstream systems and affect end users, including vulnerable groups.

Although the Commission’s opinion asserts that the Code fulfills its legal function, it largely sidesteps the procedural issues identified in section 3. A process that lacks transparency, inclusivity, and equity cannot be expected to produce a framework that enjoys lasting public trust or robust compliance.

  5. The road ahead

It is important to note that the Code does not preclude future enforcement action or revisions, nor does it bind the Commission in its interpretation of the AI Act. For this reason, it is possible that the AI Office will correct and supplement the Code after the careful consideration that was not really possible under the previous time constraints. The Commission itself stated that the Code would not remain static: the AI Office and the European AI Board are tasked with regularly reviewing its effectiveness and encouraging updates in response to technological, legal, and societal developments. Updates may also be triggered by the emergence of new threats, novel capabilities, or large-scale incidents that challenge existing safeguards.

For the Code to achieve both legal robustness and democratic legitimacy, however, future modifications should include structural reforms to the participatory process: open and transparent writing procedures, equitable stakeholder access, accessible consultation timelines, and firm safeguards against disproportionate influence by any one interest group. Without such changes, the Code risks becoming a technocratic artefact – formally adequate, but politically and socially fragile.
