Insights

European Commission Publishes Draft Code of Practice on AI Labelling and Transparency

In Short

The Situation: Article 50 of the EU Artificial Intelligence Act ("AI Act") introduces new transparency obligations for AI-generated and AI-manipulated content, including deepfakes and certain AI-generated publications informing the public. These obligations will apply as of August 2026.

The Development: On December 17, 2025, the European Commission, through the EU AI Office, published the first draft of the Code of Practice on Transparency of AI-Generated Content. The code is intended to provide practical guidance to providers and deployers of generative AI systems on how to comply with Article 50.

Looking Ahead: Although voluntary, the Code of Practice on Transparency of AI-Generated Content is likely to become a key reference point for regulators and courts when assessing compliance with the AI Act's transparency obligations. A further draft is expected in March 2026, with a final code anticipated in June 2026, ahead of the date from which Article 50 applies in August 2026.

Legal and Regulatory Context

The AI Act establishes a horizontal set of transparency obligations aimed at mitigating risks of deception, manipulation, and misinformation arising from generative AI.

Article 50 requires that:

  • Outputs of generative AI systems be identifiable as AI-generated or manipulated; and
  • Users be informed where content constitutes a deepfake or where AI-generated text is published to inform the public on matters of public interest.

To support consistent implementation across Member States, the AI Act expressly provides for the development of voluntary codes of practice, which may be relied upon by economic operators to demonstrate compliance. In 2025, the European Commission published the final version of the General-Purpose AI Code of Practice, which aims to help providers comply with the AI Act's legal obligations on safety, transparency, and copyright of general-purpose AI models (see our Commentary, "European Commission Publishes General-Purpose AI Code of Practice").

The draft Code of Practice on Transparency of AI-Generated Content ("Draft Labelling Code") is the first initiative addressing marking, detection, and labelling of AI-generated content.

Scope and Structure

The Draft Labelling Code mirrors the structure of Article 50 and distinguishes between obligations applicable to providers of generative AI systems and those applicable to deployers of such systems.

Providers of Generative AI Systems

Providers are expected to ensure that outputs generated by their systems—across text, audio, image, and video formats—are:

  • Marked in a machine-readable manner and detectable as artificially generated or manipulated; and
  • Supported by technical solutions that are effective, interoperable, robust, and reliable, taking into account the state of the art, content-specific constraints, and implementation costs.

The Draft Labelling Code explicitly rejects the idea that a single technical solution could satisfy Article 50 in all cases. Instead, it promotes a multilayered approach combining visible disclosures with invisible or machine-readable techniques (such as metadata or watermarking), in order to improve resilience against removal or manipulation.
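The multilayered approach described above can be sketched, purely for illustration, as a visible disclosure combined with a machine-readable provenance record, each verifiable independently. The function names, manifest fields, and label text below are hypothetical and are not drawn from the Draft Labelling Code, the AI Act, or any technical standard; real implementations would rely on established provenance or watermarking specifications.

```python
import hashlib
import json

# Hypothetical visible disclosure prefix (not an official EU label).
VISIBLE_LABEL = "[AI-generated content] "

def mark_content(text: str, generator: str) -> tuple[str, str]:
    """Apply two layers of marking to AI-generated text:
    a visible disclosure and a machine-readable JSON manifest
    (shown here as a sidecar string; in practice this could be
    embedded metadata or an invisible watermark)."""
    labelled = VISIBLE_LABEL + text
    manifest = json.dumps({
        "ai_generated": True,
        "generator": generator,
        # The hash binds the manifest to the exact labelled output,
        # so tampering with either layer becomes detectable.
        "sha256": hashlib.sha256(labelled.encode("utf-8")).hexdigest(),
    }, sort_keys=True)
    return labelled, manifest

def verify(labelled: str, manifest: str) -> bool:
    """Check both layers: visible label present and hash intact."""
    data = json.loads(manifest)
    return (
        labelled.startswith(VISIBLE_LABEL)
        and data.get("ai_generated") is True
        and data.get("sha256")
            == hashlib.sha256(labelled.encode("utf-8")).hexdigest()
    )

content, meta = mark_content("Synthetic summary of today's news.", "example-model")
assert verify(content, meta)  # both layers intact
# Stripping the visible label also breaks the hash binding:
assert not verify(content.replace(VISIBLE_LABEL, "", 1), meta)
```

The point of the two layers is resilience: removing the visible disclosure invalidates the machine-readable record, which is the kind of robustness against removal or manipulation that the Draft Labelling Code contemplates.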

In addition to technical measures, providers are expected to implement organizational safeguards, including:

  • Internal frameworks for testing, monitoring, and periodically assessing the effectiveness of labelling solutions;
  • Documentation demonstrating the robustness and limitations of the chosen measures; and
  • Contractual or policy-based prohibitions on the removal or manipulation of labels.

Providers are also encouraged to make available verification tools (for example, detectors or APIs) enabling users and third parties to assess the provenance of content.

Deployers of Generative AI Systems

The Draft Labelling Code imposes distinct obligations on entities deploying generative AI systems, reflecting their proximity to end users and audiences.

Deployers are expected to:

  • Clearly label AI-generated content no later than at the time of the natural person's first interaction with, or exposure to, that content;
  • Disclose the use of AI where text has been generated or significantly modified for the purpose of informing the public on matters of public interest, unless the content has undergone meaningful human review and is subject to editorial responsibility; and
  • Apply a common icon to disclose deepfakes and AI-generated or AI-manipulated text publications, placed in a visible and consistent manner at first exposure.

Pending the development of a uniform EU-wide icon, signatories may rely on an interim two-letter acronym icon (e.g., "AI," "KI," or "IA," depending on the language), while committing to support the future rollout of a common interactive EU icon.

The Draft Labelling Code emphasizes that transparency obligations apply regardless of whether the content is disseminated online or offline, and across all relevant formats. At the same time, it recognizes that labelling should be adapted to context, including for artistic, fictional, or satirical works, so as not to unnecessarily interfere with creative expression or audience reception.

Governance Process and Timeline

The AI Office has emphasized that the Draft Labelling Code is intended to provide direction on the likely structure and content of the final code, while further work continues on specific commitments and measures.

According to the Commission's indicative timeline:

  • The first draft was published in December 2025 and is open for stakeholder feedback until January 23, 2026;
  • Further drafts are expected in the first half of 2026; and
  • The final Code of Practice is expected to be approved ahead of the date from which the Article 50 obligations apply, currently slated for August 2026. This timeline, however, may be affected by the proposed EU AI Digital Omnibus initiative, which could alter or delay certain aspects of the AI Act framework (see our Commentary, "EU Digital Omnibus: How EU Data, Cyber, and AI Rules Will Shift").

In parallel, the Commission has indicated that it will issue nonbinding guidelines clarifying key concepts and interpretative questions under Article 50.

Practical Implications

Although voluntary, the Draft Labelling Code is likely to become a benchmark for regulatory expectations under the AI Act. Organizations that do not align with the code, or an equivalent framework, may face closer scrutiny from the EU AI Office and national authorities.

Therefore, providers and deployers of generative AI systems should begin preparing by:

  • Mapping where and how AI-generated content is produced or disseminated;
  • Assessing the adequacy of existing labelling, disclosure, and provenance mechanisms; and
  • Integrating AI transparency obligations into broader governance, editorial, and compliance frameworks.

Early engagement will be critical, particularly given the interaction between Article 50 and other EU regulations, including the Digital Services Act, media regulation, and copyright law.

Three Key Takeaways

  1. Once finalized and endorsed, the Draft Labelling Code is expected to become a central reference for regulators and courts when assessing whether providers and deployers have met their transparency obligations for AI-generated and AI-manipulated content.
  2. The Draft Labelling Code confirms that no single labelling or marking technique is sufficient in all cases. Instead, organizations will be expected to demonstrate the overall robustness of their technical and organizational measures, including documentation, testing, monitoring, and governance processes.
  3. Providers and deployers of generative AI systems should begin assessing their content labelling and disclosure practices now, while monitoring future versions of the Draft Labelling Code, which are expected to include more granular and detailed compliance measures.

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our "Contact Us" form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.