
Our thoughts on the final version of the GPAI Code of Practice

The EU General-Purpose AI Code of Practice is finally out (download the copyright chapter as a PDF file). The Code of Practice—drafted by independent experts with input from over 1,000 stakeholders—should have been ready “at the latest by 2 May 2025” (Article 56(9) AI Act). However, as a result of several stakeholders pushing back against the Code, the final version was only released on July 10.

The EU AI Office and the Board are expected to assess the adequacy of the GPAI Code of Practice pursuant to Article 56(6) AI Act by August 2. If approved by the AI Office and the Board, GPAI model providers will be able to demonstrate compliance with the obligations provided for in Articles 53 and 55 AI Act—including the copyright-related obligations—by voluntarily adhering to the Code.

In this blog post we look into the fourth and final version of the copyright chapter of the Code, in particular the copyright-related measures that have been the focus of our attention throughout the consultations: (1) measures addressing copyright compliance at the input level; (2) measures addressing copyright compliance at the output level; and (3) copyright transparency measures. For an analysis of the first draft, the second draft and the third draft, check our blog.

Copyright compliance at the input level

One of the main points of contention in the working group discussions on copyright was the group of measures aimed at ensuring compliance with rights reservations. Throughout the various drafting rounds, the Chair insisted that the Code could only require model providers to commit to complying with the Robot Exclusion Protocol (REP), as the REP was the only international standard available at the time. For other rights reservation approaches, they argued, the Code could only impose a “best efforts” obligation.
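In practice, a rights reservation under the REP is expressed in a website's robots.txt file, which tells crawlers which paths they may access. A minimal sketch (the crawler token “ExampleAIBot” is a hypothetical placeholder; actual token names vary by AI provider):

```
# robots.txt at the site root — sketch of a TDM opt-out via the
# Robot Exclusion Protocol (RFC 9309).
# "ExampleAIBot" is a hypothetical crawler token used for illustration.
User-agent: ExampleAIBot
Disallow: /

# All other crawlers remain unaffected.
User-agent: *
Allow: /
```

Because the REP operates at the level of crawler access to URLs on a given host, the reservation does not travel with the content itself, which is one reason we consider it unsuitable for works distributed through third-party platforms.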

COMMUNIA opposed this understanding, as did the rightholders. We argued that the Code should be future-proofed and require a full commitment with regard to all standardised machine-readable means of expressing rights reservations that may emerge over time. This approach would conflict with neither the CDSM Directive nor the AI Act, and would be more adequate, as the REP has a number of conceptual shortcomings that make it unsuitable as an expression of rights reservations, particularly for content that is distributed through third-party platforms, such as music and audio-visual materials.

In the final version of the Code, the experts finally answered our calls, strengthening the level of commitment with regard to all appropriate machine-readable rights reservation protocols. Under Measure I.2.3, paragraph (1)(b), Signatories commit to:

identify and comply with other appropriate machine-readable protocols to express rights reservations pursuant to Article 4(3) of Directive (EU) 2019/790, for example through asset-based or location-based metadata, that either have been adopted by international or European standardisation organisations, or are state-of-the-art, including technically implementable, and widely adopted by rightsholders, considering different cultural sectors, and generally agreed through an inclusive process based on bona fide discussions to be facilitated at EU level with the involvement of rightsholders, AI providers and other relevant stakeholders as a more immediate solution, while anticipating the development of standards.
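One candidate for such a protocol is the TDM Reservation Protocol (TDMRep), drafted by a W3C Community Group, which can express a location-based reservation via an HTTP response header or a well-known JSON file. A sketch, with field names based on our reading of the TDMRep draft and the policy URL purely illustrative:

```
# Option 1: HTTP response header served with the content
tdm-reservation: 1
tdm-policy: https://example.com/tdm-policy.json

# Option 2: /.well-known/tdmrep.json covering paths on the host
[
  {
    "location": "/*",
    "tdm-reservation": 1,
    "tdm-policy": "https://example.com/tdm-policy.json"
  }
]
```

Asset-based approaches instead embed the reservation in the work's own metadata (for instance in embedded image or audio metadata), so the reservation travels with the file across third-party platforms.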

Another development within this group of measures refers to third-party datasets. In the last session of the Code of Practice Plenary, the Chair mentioned that they were considering dropping the proposed due diligence obligations entirely due to criticism from all categories of stakeholders, and indeed these measures have been deleted from the final version of the Code.

Copyright compliance at the output level

From a users’ rights perspective, the group of measures that raised the most concerns throughout the drafting rounds were those aimed at preventing copyright-infringing outputs. These measures were subject to significant changes from one round to the next. An initial proposal to implement downstream compliance measures was heavily criticised by AI model providers and civil society organisations. System-level measures to prevent output similarity entailed non-negligible risks to fundamental rights, as they would effectively require the use of output filters—we therefore welcomed their deletion in the second draft. That draft still targeted output similarity, albeit at the model level, and for this reason we continued to criticise it. By the third version of the Code, the Chair and Vice-Chair had abandoned output similarity, focusing instead on the issue of “memorisation”, a change welcomed by COMMUNIA.

Ultimately, this direction was also abandoned. In the last session of the Code of Practice Plenary, the Chair mentioned that they were reconsidering that measure because “memorisation” is a technical term, not a legal term, and therefore there is no legal definition for it. In fact, the final version of the Code no longer contains a model-level measure targeting “memorisation”. Instead, model providers are required to implement appropriate technical safeguards to mitigate the risk of output infringement. Measure I.4, paragraph (a), reads as follows:

In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:

a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner

It’s also important to note that the fourth version of the Code kept the previous improvements related to open source AI. Open source GPAI model providers are not subject to the commitment to “prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents”—a contractual limitation that they would not be able to introduce in their free and open source licenses. Measure I.4, paragraph (b), only requires them “to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.”

Copyright transparency

The last set of issues targeted by our advocacy efforts relates to copyright transparency in both its forms: AI training data transparency and transparency regarding opt-out compliance. The former will be dealt with in the template for a summary of the training data (see Article 53(1)(d) AI Act), the final version of which is yet to be released by the AI Office. The latter is addressed by the Code and was significantly watered down in the third draft round.

The third draft replaced the public disclosure commitments related to the application of Article 53(1)(c) of the AI Act with bilateral obligations between model providers and rightholders: model providers were no longer required “to provide public transparency about their policy to comply with Union copyright law” and “to making public adequate information about the measures they adopt to identify and comply with rights reservations”, but only encouraged to publicise a summary of their internal copyright policy and to enable rightsholders to obtain information about their rights reservation compliance measures. The final version of the Code maintained this approach, leaving behind all public disclosure commitments (see Measure I.1, paragraph (2) and Measure I.3, paragraph (4)).

In our opinion, this is a missed opportunity to set new transparency standards that would be more effective at supporting the current opt-out framework than the existing AI Act obligation to simply disclose a summary of the AI training data.
