The Third Draft of the EU Code of Practice under the AI Act is fundamentally imbalanced and overwhelmingly favours AI developers at the expense of creators, cultural industries, and legal certainty, despite the extensive evidence submitted by rightsholders during the consultation process.
The EU’s shared culture is built on the contributions of the creative and media sectors; it fosters social cohesion, common values, and democratic dialogue. Yet if this Code is not substantially amended, its adoption will have profound negative consequences.
What the Third Draft Will Mean in Practice:
- No sustainable future for a free and fair press: Journalists’ work will continue to be scraped, reused, and monetised by AI companies without licensing or transparency, and with growing risks of misinformation through AI hallucinations.
- Severe harm to Europe’s creative industries: A multi-billion-euro sector will be left unprotected as AI systems reproduce its content without permission or remuneration, and without an effective mechanism for rightsholders to license works or assert their rights.
- The erosion of centuries of copyright protection: Without enforceable obligations, musicians, artists, authors, journalists, film producers, sports bodies, and other rightsholders will lose the ability to defend their rights or seek legal redress in court.
Why the Third Draft of the Code of Practice fails its objectives
The Third Draft represents a serious regression compared to the Second Draft. Key safeguards have been weakened or removed:
1. No Enforcement for Internal Copyright Policies
- GPAI providers are no longer required to publish their copyright compliance policies, and the internal policy requirements that remain are vague and lack any form of enforceability: there are no training obligations, internal audits, or accountability mechanisms.
2. Severe Weakening of Third-Party Dataset Compliance
- The shift from requiring "assurances" to merely checking a dataset provider’s website is a significant downgrade, eliminating any meaningful due diligence obligations. AI providers can now rely on self-serving statements from third-party dataset suppliers without independent verification.
3. Complete Removal of the General Lawful Access Requirement
- This is particularly dangerous. The previous draft still had language reinforcing that AI models should only be trained on lawfully accessed works. The removal of the lawful access requirement makes Measure I.2.2 even weaker than its previous version.
- This shift creates the impression that any content not paywalled is fair game for AI training, even though copyright law does not support this interpretation.
4. Robots.txt Given Privileged Status While Other Opt-Outs Are Downgraded
- The arbitrary preference for robots.txt remains unchanged, even though the protocol was never designed as a copyright opt-out tool (see the sketch after this list).
- Worse still, metadata-based opt-outs have been explicitly downgraded to “best efforts” compliance, making them effectively unenforceable.
- The phrasing around "protocols" now implicitly excludes natural-language opt-outs, despite the EU Copyright Directive recognising them as a valid mechanism.
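To illustrate the first point above: robots.txt implements the Robots Exclusion Protocol, a crawler access-control convention, and the tools that consume it reflect that. The minimal sketch below (using Python’s standard urllib.robotparser, with a hypothetical crawler name "ExampleBot" and example.com URLs purely for illustration) shows that the protocol can only answer whether a given user agent may fetch a given path; it has no vocabulary for copyright, licensing, or text-and-data-mining reservations.

```python
# Minimal sketch: how robots.txt is actually consumed by crawlers.
# "ExampleBot" and the example.com URLs are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleBot
Disallow: /articles/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The protocol expresses crawl permissions per user agent and path, nothing more:
print(parser.can_fetch("ExampleBot", "https://example.com/articles/story"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/articles/story"))    # True
```

Treating such a file as a copyright opt-out therefore rests on convention, not on anything the protocol itself can express.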
5. Eliminating Transparency: No More Reporting on Rights Reservation Compliance
- The Second Draft at least required some level of transparency regarding AI providers' compliance with opt-out mechanisms.
- The Third Draft removes that entirely, meaning AI companies will not be required to disclose whether or how they honour opt-out requests.
6. Weaker Protections Against Copyright-Infringing Outputs
- The new language on output restrictions is full of loopholes.
- Even where a model repeatedly generates infringing outputs, the obligation is limited to merely mitigating infringement risks; by setting the bar at repeated infringement, the provision also implies that single-instance infringements may be permissible, despite their potential to cause significant harm.
7. Key Performance Indicators (KPIs) Removed
- The previous draft at least contained KPIs to track compliance, even if they were weak. These have now been removed, making it nearly impossible to measure whether AI providers are fulfilling their commitments.
As it stands, the Third Draft offers fewer protections and more loopholes than its predecessor. It places the burden entirely on rightsholders, while asking almost nothing of AI developers.
CEPIC firmly maintains that a flawed Code is worse than no Code at all. Unless the EU substantially revises the text to establish clear, binding obligations and real accountability for AI providers, the Code risks legitimising systematic infringement and setting a deeply harmful precedent for the future of European copyright, creativity, culture, and a free and fair media.