Balancing AI Innovation and Risk: Inside the GPAI Code of Practice

This is the first in a series of blog posts curated by Chiara Gallese and the MediaLaws Team, dedicated to unpacking the emerging legal and policy framework surrounding the Code of Practice on General Purpose AI. Through this series, we aim to shed light on the operational, ethical, and regulatory responsibilities shaping the future of GPAI development and deployment in Europe.

  1. Introduction

General-Purpose AI (GPAI) models mark a paradigm shift in artificial intelligence, offering exceptional versatility while introducing significant risks. Unlike specialized AI systems, GPAI models can operate across multiple domains, performing diverse tasks without requiring customized programming for each application. The ability of GPAI models to seamlessly transition across tasks and domains introduces groundbreaking opportunities but also necessitates careful regulatory intervention to address emerging risks.

To address these concerns, the EU AI Act adopts a risk-based regulatory approach, classifying GPAI models based on their potential for systemic harm. As part of this framework, the legislation mandates the development of a Code of Practice, overseen by the AI Office under Commission supervision.

  2. Objectives of the GPAI Code of Practice

The GPAI Code of Practice serves a dual purpose: i) providing structured risk assessment guidance to operators; ii) offering presumption of conformity protections during the transitional phase before unified technical standards are established.

Developed through a multi-stakeholder approach, the Code of Practice integrates diverse perspectives and international best practices. The AI Office fosters collaboration among GPAI providers, national competent authorities, civil society organizations, and independent experts and expert bodies, including the Scientific Panel.

This inclusive process ensures that the Code effectively addresses two key topics in particular: i) the general obligations for all GPAI providers; ii) the specific systemic risk requirements, including a comprehensive risk taxonomy identifying the types, nature, and sources of AI-related risks at a Union level.

  3. Structure and key commitments
  • Transparency Obligations

Under the Artificial Intelligence Act, providers of General Purpose AI (GPAI) models are subject to a range of transparency obligations aimed at ensuring responsible innovation and downstream compliance. First and foremost, these providers must maintain up-to-date technical documentation detailing the training, testing, and evaluation processes that underpin their models. This documentation must be sufficiently detailed to enable regulatory scrutiny and facilitate integration by downstream deployers. Additionally, GPAI providers are expected to furnish comprehensive information to those who incorporate these models into their own AI systems, ensuring traceability and enabling them to comply with their own legal obligations.

  • Copyright Obligations

The regulatory framework also introduces specific copyright-related duties for GPAI providers, which reflect growing concerns over the use of protected content in AI training processes. Providers are required to adopt and internalize responsible development practices that include the creation of explicit copyright compliance policies. These policies must feature transparent documentation and version control mechanisms, enabling oversight and auditability. Moreover, lawful access to copyrighted content is not optional: providers must ensure that any data used for training purposes respects intellectual property rights and complies with applicable licensing conditions.

To that end, the scraping or crawling of infringing websites, or the disregard of copyright reservations expressed through metadata or machine-readable notices (such as opt-outs), is explicitly discouraged. In addition, providers must proactively engage with the public by disclosing relevant information and offering a dedicated point of contact for rights holders seeking to raise copyright-related concerns.

  • Systemic Risk Management for High-Risk GPAI Models

For GPAI models that may pose systemic risks, such as those used in critical infrastructure, democratic processes, or public safety, the Act requires a higher level of vigilance. Providers in this category must develop a Risk Taxonomy Roadmap to classify and continuously update the spectrum of potential harms associated with their models. This roadmap is complemented by a robust Safety and Security Framework, which integrates model risk assessments across the entire AI lifecycle, from development to post-deployment monitoring.

To ensure that these safeguards are effective, external risk evaluations must be conducted both before the model enters the market and on an ongoing basis thereafter. Furthermore, providers are required to implement mechanisms for reporting serious incidents and protecting whistleblowers, thus reinforcing internal accountability structures and facilitating early detection of harmful effects.

Together, these obligations reflect a shift in the EU’s AI governance paradigm, one that imposes proactive responsibilities not only on deployers and end-users, but also on foundational model providers whose technologies shape the broader AI ecosystem.

  4. Conclusion

The GPAI Code of Practice, scheduled for completion by August 2, 2025, represents a dynamic and evolving regulatory framework. Its focus on risk assessment methodologies and targeted mitigation measures ensures that AI governance remains adaptive to technological advancements and emerging challenges.

By addressing core commitments, including transparency, copyright compliance, and risk management, the Code provides structured guidance for AI providers, ensuring that GPAI models uphold responsible development practices. It acts as a bridge between technological progress and regulatory oversight, ensuring that AI innovation continues to thrive within ethical and legal boundaries.

 

Author: Petruta Pirvan, Lawyer & Founder of EU Digital Partners, member of the Working Group on General Purpose AI Code of Practice at the EU AI Office
