As the late Amy Winehouse almost said, "What kind of [fakery] is this?" Making AI-generated materials identifiable as such has long been a debated issue, and we are therefore delighted to host this guest contribution from Katfriend Natalia Basałaj (Hansberry Tomkiel) on the EU's first draft code of practice on transparency of AI-generated content. Over to Natalia:
The Code aims to assist AI system providers and deployers in meeting the transparency obligations set out in Article 50 of the AI Act. However, the Draft highlights that practical implementation will be challenging, as it does not identify any universal technical solutions ensuring effective and abuse-resistant content labelling.
Work on the Code is ongoing. Comments may be submitted until 23 January 2026, with further drafts expected in March and June 2026. The obligation to label AI-generated or modified content will apply from August 2026.
Structure of Obligations
The Draft differentiates the obligations of future signatories according to their role in the AI ecosystem, distinguishing between AI system providers and the entities deploying such systems in practice.
1. AI system providers should, in particular:
- apply multi-layered content-labelling mechanisms, covering both user-visible labels and machine-readable solutions;
- make publicly available tools enabling verification of content provenance;
- ensure interoperability of labels between different systems and platforms;
- document the resilience of the applied labels to removal, modification, or other forms of manipulation;
- implement compliance frameworks covering testing, monitoring, and ongoing assessment of the effectiveness of the applied solutions.
2. Entities deploying AI systems will be required to:
- clearly label deepfake content from the moment of the recipient’s first contact with the material;
- use a uniform EU-wide icon signalling the use of AI;
- label public content that has been generated or significantly modified using AI systems.
Ad. 1 (AI system providers)
The Draft Code assumes that reliable identification of AI-generated content requires a multi-layered technical approach, combining visible and invisible labelling methods that complement one another (e.g. watermarks linked to metadata or other system layers). This is intended to increase the resilience of labels against removal, alteration or loss during further processing or reuse of content.
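To make the multi-layered idea concrete, here is a minimal Python sketch (not a solution prescribed by the Draft): it stamps a user-visible label onto an image and embeds a machine-readable disclosure, signed over the pixel data, in the PNG metadata. The key, the metadata field name and the JSON fields are all invented for illustration; real-world systems would build on open provenance standards such as C2PA.

```python
import hashlib
import hmac
import json

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

SIGNING_KEY = b"demo-key"  # purely illustrative; a real provider would use proper key management

# Layer 1: a user-visible label drawn directly onto the image.
img = Image.new("RGB", (512, 512), "white")
ImageDraw.Draw(img).text((10, 10), "AI-generated", fill="black")

# Layer 2: a machine-readable disclosure stored in the PNG metadata,
# signed over the pixel bytes so that edits to the visible content
# invalidate it. The field names below are invented for this sketch.
disclosure = {
    "ai_generated": True,
    "generator": "example-model-v1",
    "signature": hmac.new(SIGNING_KEY, img.tobytes(), hashlib.sha256).hexdigest(),
}
meta = PngInfo()
meta.add_text("ai-disclosure", json.dumps(disclosure))
img.save("labelled.png", pnginfo=meta)
```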
AI system providers are also expected to prohibit the removal or manipulation of labels in their terms, policies or technical documentation, and to ensure access to publicly available verification tools (such as APIs or detectors) enabling users to verify the provenance of content, including data from third-party sources.
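A matching sketch shows what a simple public verification tool could look like, reusing the hypothetical key and metadata field from the example above: it recomputes the signature over the pixel bytes, so a stripped or tampered label fails the check. An actual API or detector would of course involve proper key distribution and far more robust detection techniques.

```python
import hashlib
import hmac
import json

from PIL import Image

SIGNING_KEY = b"demo-key"  # must match the embedding side; illustrative only


def verify_disclosure(path: str) -> bool:
    """Toy verifier: reads the 'ai-disclosure' metadata written in the
    previous sketch and recomputes the signature over the pixel bytes.
    Returns False if the label is missing, stripped or tampered with."""
    img = Image.open(path)
    raw = getattr(img, "text", {}).get("ai-disclosure")  # PNG tEXt chunks
    if raw is None:
        return False
    expected = hmac.new(SIGNING_KEY, img.tobytes(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(json.loads(raw).get("signature", ""), expected)


print(verify_disclosure("labelled.png"))  # True for the file created above
```

The fragility the Draft worries about is also visible here: merely re-encoding the file as a JPEG would silently discard the metadata, which is precisely why no single layer is considered sufficient.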
Although the Draft refers to the principle of proportionality for SMEs, small mid-cap companies and startups, the scope of these simplifications remains unclear, and the uniform organisational obligations may in practice be disproportionately burdensome for smaller entities.
Law Does Not Keep Pace with Technology
The Draft Code represents a further step towards greater transparency in the labelling of AI-generated content, but it confirms that no single technical solution is capable of ensuring uniform, effective and abuse-resistant implementation across the market. As expressly stated in the Draft, “no single active marking technique is sufficient to meet all legal requirements.” In practice, this means that compliance will be assessed not by reference to the use of a specific tool, but by the overall robustness of the measures adopted, including internal processes, documentation, testing procedures and audit readiness.
Ad. 2 (Entities deploying AI systems)
According to the Draft, the obligation to inform that content has been generated or modified using AI arises at the recipient’s first point of contact with such content, regardless of whether it takes the form of text, audio or video. This obligation applies not only to AI system providers, but also to deployers, including publishers and editorial teams.
Labelling methods should be adapted to the format and context of publication and, in the case of artistic, fictional or satirical content, should not interfere with reception or diminish creative value. However, the Draft does not yet propose a technical solution that would fully reconcile transparency requirements with the needs of the creative sector.
Signatories are expected to use a common “AI icon” to label deepfake content and AI-generated or modified text, placing it in a fixed and clearly visible location within the publication.
In practice, the obligations relating to training, human oversight and compliance processes may entail significant costs and organisational burdens for businesses deploying AI, particularly in the absence of uniform technical tools and established market standards. At the same time, some measures may contribute to information overload for recipients and, for creators and artists, risk affecting freedom of expression and the perception of their work.
One European AI Icon
The Draft Code ultimately envisages the introduction of a uniform, interactive EU icon indicating whether content has been wholly or partly generated or modified by AI. Annex I to the Draft presents four illustrative labelling icons for AI-generated or AI-assisted content.
One example, prepared using the standard version of ChatGPT in December 2025, features a minimalist design and a two-colour “AI” label distinguishing fully automated content from content merely assisted by artificial intelligence.
A more sophisticated proposal, developed by the Artifact studio, differentiates between three categories of content: H (human-created), AI (fully AI-generated) and AI-H (hybrid). In the hybrid model, a circular visual indicator displays the proportion of human and AI contribution, with a muted blue colour scheme intended to reinforce trust and credibility.
At present, there is no single public platform or initiative providing comprehensive, EU-wide labelling of AI-generated content. Existing solutions remain fragmented and privately driven, while the common, uniform EU icon is still under development.
National Market Surveillance Authority in Poland – Possible Solutions
Signatories to the Code will be required to cooperate with market surveillance authorities and other stakeholders, including by applying relevant technical standards (such as ETSI EN 301 549 on ICT accessibility) and ensuring human oversight of the labelling of AI-generated or AI-modified content.
In Poland, the competent authorities for supervising AI systems have not yet been formally designated. Draft legislation on AI systems (UC71) provides for the establishment of the Commission for the Development and Security of Artificial Intelligence (KRiBS) as the central national market surveillance authority and contact point under the AI Act. KRiBS would be responsible, inter alia, for monitoring the AI market, cooperating with EU authorities, issuing explanations and publicly available binding opinions, handling complaints regarding AI-related infringements, and supporting the safe use of AI technologies.
The draft provides that KRiBS will include representatives of key public bodies, including the President of the Personal Data Protection Office (UODO) and the President of the Office of Competition and Consumer Protection (UOKiK). These authorities will primarily exercise their competences within KRiBS, while retaining the ability to conduct separate proceedings within their statutory powers.
Summary: The European AI Content Market Enters an Era of Transparency
From August 2026, content generated or modified by AI in the EU will have to be clearly labelled. The first Draft of the EU Code of Practice sets out a general framework for transparency, responsibility, and legal certainty in the generative AI ecosystem, supporting providers and users in preparing for the requirements of Article 50 of the AI Act.
The Draft provides for various labelling technologies – watermarks, metadata, or verification tools – but none of them guarantees full resistance to manipulation or full interoperability. Many issues remain open: the lack of a clear interpretation of Article 50 of the AI Act for hybrid content and artistic works, the voluntary nature of the Draft, and the absence of a link between labelling rules and administrative sanctions or civil liability.
For the digital content market and intellectual property law, this creates new challenges in licensing, verifying the originality of works, and determining the liability of entities publishing AI-generated content.