Submission to the CSPLA questionnaire on AI-generated outputs

Last week, COMMUNIA submitted its response (PDF file) to the CSPLA Task Force on the Protection of Content Generated Using Generative AI. Our contribution builds on our Policy Paper #22 against new exclusive rights for AI-generated outputs, stressing that EU copyright law is fully equipped to deal with AI-generated outputs, that the existing EU originality standard remains the correct legal test, and that no new exclusive rights are needed for synthetic outputs that do not meet the bar for copyright protection.

We also underlined that the appropriate response to address harms arising from non-consensual digital imitations lies in strengthening personality rights and ensuring effective enforcement mechanisms under the Digital Services Act. This overall approach avoids unnecessary exclusive rights expansion, preserves the coherence of EU copyright law, and protects the Public Domain.

Human creativity remains the cornerstone of copyright protection

Our core message to this questionnaire is that EU copyright law already provides a complete framework for assessing the protectability of AI-assisted creativity. Copyright applies only when the final expression reflects free and creative choices made by a natural person. Depending on how AI systems are used, human involvement may range from minimal to extensive, and this variation directly determines whether the resulting output can be protected. Some AI-assisted works will qualify when the user exercises meaningful control over the expressive outcome, while many others will not, particularly in highly automated contexts where the human input does not shape expression in a legally relevant way.

We also highlighted that most related rights lack an originality threshold. As a result, some non-original AI outputs may already benefit from strong protection as phonograms, broadcasts or other related rights subject matter. This outcome is problematic, because related rights were designed to reward investment and effort, not the automated production of content. The fact that non-original AI outputs may fall within the scope of related rights demonstrates that the current system is poorly adapted to the realities of generative technology.

For these reasons, we oppose the creation of any new exclusive right for non-original AI-generated outputs. Introducing such rights would further diminish the Public Domain without any demonstrated societal or economic justification. Outputs that do not meet the originality threshold should remain in the Public Domain.

Copyright compliance at the output level: infringement requires copying, not mere resemblance

In our response, we emphasised that copyright infringement requires the reproduction of protected expression, not mere resemblance. Stylistic similarity alone has no relevance in copyright law. A synthetic output infringes only when it reproduces protectable elements of an existing work, is not an independent creation that merely resembles that work, and is not covered by an exception or limitation.

This understanding aligns with the position we defended throughout the EU GPAI Code of Practice consultations. We argued successfully that filtering mechanisms aimed at suppressing similarity would interfere with users' rights and lawful uses of protected materials. The final version of the Code correctly abandoned approaches that linked output similarity to copyright infringement, and instead adopted targeted measures focused on actual infringement.

We also reiterated that Article 53(1)(c) of the AI Act imposes obligations on model providers only. It does not extend compliance duties to downstream deployers, and it therefore cannot be read as requiring system-level content monitoring and filtering by downstream deployers. Imposing such obligations without adequate safeguards would pose unjustifiable risks to users' rights and freedom of expression, and would contradict the express intent of the AI Act.

Personality rights can address harms caused by synthetic imitations of individuals

The questionnaire also considers situations where synthetic content targets identifiable individuals. In this context, COMMUNIA takes the view that AI-generated imitations of a person’s voice or image can be effectively addressed through personality rights. These rights protect autonomy, dignity and control over one’s identity, and offer an appropriate response to non-consensual digital imitations and deceptive synthetic representations.

In our view, personality rights also form a strong basis for invoking notice and action procedures under the Digital Services Act, and we encouraged policymakers to examine whether people face practical barriers when asserting personality rights in cases involving synthetic media. Where difficulties arise, the appropriate response would be to clarify and strengthen procedural mechanisms, including improving notice and action pathways, and ensuring that remedies are fast and accessible in cases involving reputational harm or risks to democratic discourse.
