The European Commission has just released its draft guidelines on Article 50 of the AI Act. A walkthrough for lawyers, DPOs, communications and product teams.
Why this article matters, even if you “don’t do AI”
On 2 August 2026, in about 80 days, Article 50 of Regulation (EU) 2024/1689 (the “AI Act”) becomes applicable. Four distinct transparency obligations will trigger simultaneously for virtually anything that resembles, even remotely, an AI system in contact with a human being.
On 8 May 2026, the European Commission (AI Office, DG CONNECT) opened a consultation on a 40-page set of draft guidelines that finally answers the question lawyers and product teams actually care about: in practice, what is enough and what is not?
Caveat: at this stage, this is a draft, non-binding, open for public consultation. The final text may evolve. The present analysis reflects the current state of the draft and will be updated once the final version is adopted.
Here is what the draft says, translated into operational English, with the five traps already emerging.
1. Four obligations, not one — and they can stack
Article 50 is not just about deep fakes. It contains four distinct obligations, which can apply cumulatively to the same system.
| Article | Who? | What? | Triggered by |
|---|---|---|---|
| 50(1) | Provider | Inform the person that they are interacting with an AI | Interactive systems (chatbot, voice assistant, AI agent, conversational robot) |
| 50(2) | Provider | Mark content in a machine-readable format + provide a detection tool | Generation or manipulation of image, audio, video or text |
| 50(3) | Deployer | Inform the person exposed to the system | Emotion recognition or biometric categorisation |
| 50(4) | Deployer | Label content clearly and perceptibly | Deep fakes + AI-generated text published to inform the public on matters of public interest |
Important: the same service can fall under several regimes. The provider of the system will, depending on the case, carry the obligations of 50(1) and 50(2). The company using or publishing the outputs may, in turn, be a deployer under 50(3) or 50(4) — or even a provider if it places the system on the market or puts it into service under its own name or trademark.
Article 50(5) adds a horizontal layer: information must be clear, distinguishable, provided no later than the time of the first interaction or exposure, and compliant with applicable accessibility requirements (notably Directive 2019/882).
2. The five traps to avoid right now
Trap no. 1 — Thinking the mention in T&Cs is enough
This is the natural reflex for lawyers: three lines added at the bottom of the contract, problem solved.
What the Commission says (§35 of the draft): a disclosure included only in T&Cs, in a URL or in documentation does not fulfil the obligation under Article 50(1). It may complement a visible notification, never replace it.
The test to apply: if the user does not perceive the disclosure at the time of the interaction, it does not meet the AI Act’s objective.
Trap no. 2 — Writing “this system uses an LLM”
This is the mistake software publishers make when they think they are being transparent about their stack.
What the Commission says (§35 of the draft): technical or capability-based descriptions (“this system uses LLMs”) do not fulfil the obligation. They explain neither the system’s function, nor its implications for the user, nor — most importantly — its artificial, non-human origin.
The rule: the user must understand they are interacting with an AI, not which model you have chosen.
Trap no. 3 — Placing the label at the end
Many AI-assisted audiovisual productions add a disclaimer in the end credits or in a notice at the foot of the page.
What the Commission says (§132 and examples of the draft): disclosure must take place at the latest at the time of the first interaction or exposure. A mention in end credits or at the end of the conversation does not comply with Article 50(5). For continuous broadcasts (live, podcast), the initial disclosure must be complemented by periodic reminders for viewers joining mid-stream.
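For product teams, the timing rule reduces to a simple pattern: emit the notice in the interaction channel itself, before any AI output. A minimal sketch in Python, where `handle_message()` and `generate_reply()` are hypothetical placeholders rather than any real API:

```python
# A minimal sketch of "disclose no later than the first interaction".
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_reply(user_message: str) -> str:
    """Hypothetical stand-in for the actual model call."""
    return f"(model answer to: {user_message})"

def handle_message(session: dict, user_message: str) -> list[str]:
    replies = []
    if not session.get("disclosed"):
        # Art. 50(1) + 50(5): the notice precedes any AI output and sits
        # in the channel the user actually perceives, not in T&Cs,
        # a URL or documentation (draft §35).
        replies.append(DISCLOSURE)
        session["disclosed"] = True
    replies.append(generate_reply(user_message))
    return replies

session = {}
print(handle_message(session, "What are your opening hours?"))  # disclosure first
print(handle_message(session, "And on Sundays?"))               # then not repeated
```

For a live stream or podcast, the same flag would be reset on a timer to produce the periodic reminders the draft asks for.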
Trap no. 4 — Believing that machine-readable marking is enough
A cryptographically signed C2PA manifest, clean metadata embedding: technically elegant. For Article 50(2), this is exactly what is required. But for Article 50(1)?
What the Commission says (§35 of the draft): machine-readable marking is not perceivable by users at the point of interaction. It cannot, therefore, fulfil the information obligation under Article 50(1). A textual, audio or visual disclosure, directly perceivable, is required.
And on Article 50(2) itself, the Commission drops a bombshell (§78 of the draft):
“Under the current state of the art, no single marking and detection technique meets all four requirements [effectiveness, reliability, robustness, interoperability] at the legally required level simultaneously.”
Direct consequence: you must combine several techniques (watermark, cryptographic signature, metadata, fingerprinting). An invisible watermark alone will not hold up to scrutiny.
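To make "combine several techniques" concrete, here is a minimal sketch, assuming Pillow is installed; the `ai-declaration` keys and the HMAC scheme are illustrative conventions of this article, not C2PA or any adopted standard:

```python
# Two stacked machine-readable signals on a PNG: signed metadata
# declaring AI origin, plus a pixel-level content fingerprint.
import hashlib
import hmac
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(in_path: str, out_path: str, signing_key: bytes) -> str:
    img = Image.open(in_path)

    # Technique 1: fingerprint over raw pixel data. Survives container
    # re-packaging, breaks as soon as the pixels are edited.
    fingerprint = hashlib.sha256(img.tobytes()).hexdigest()

    # Technique 2: embedded metadata declaring AI origin, HMAC-signed
    # so that tampering with the declaration is detectable.
    payload = json.dumps(
        {"ai_generated": True, "fingerprint": fingerprint}, sort_keys=True
    )
    signature = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()

    meta = PngInfo()
    meta.add_text("ai-declaration", payload)
    meta.add_text("ai-declaration-sig", signature)
    img.save(out_path, pnginfo=meta)
    return fingerprint
```

Each signal fails differently: platforms routinely strip metadata on upload, and any re-render changes the pixels. That asymmetry is precisely why the draft concludes that no single technique suffices.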
Trap no. 5 — Underestimating “banner blindness”
The draft (§36) explicitly acknowledges the phenomenon: overly intrusive or repeated notifications degrade the effectiveness of transparency and create habituation that defeats the regulatory objective.
The implicit message: compliance is not a copy-paste of a cookie-banner v2. It is an exercise in experience design, calibrated to the use context and the target audience. The Commission explicitly recommends (§34) a multimodal approach: combine text, audio and visual cues for sensitive contexts.
3. The exceptions that change everything (and those that save no one)
For professional chatbots: the “obviousness” exception
A coding assistant available only to professional developers does not need to disclose itself as an AI (§42). Same for a medical-diagnosis support tool restricted to trained healthcare professionals, or for an NPC in a video game.
By contrast, a robotic companion pet (closely resembling a real animal), a realistic human avatar in an immersive environment, or a helpdesk chatbot do not benefit from the exception: they must disclose.
For the press: the “editorial control” exception
This is the key point for media. Article 50(4), second subparagraph, exempts AI-generated or manipulated text where two cumulative conditions are met (§125-128):
- The text has undergone human review or editorial control by a person with the relevant competence and professional judgement — not a mere spell-check.
- A natural or legal person assumes editorial responsibility, with identity and contact details publicly accessible.
A “superficial editorial approval” or the mere existence of an editorial policy is not enough. And editorial responsibility must be interpreted in line with the European Media Freedom Act (Regulation (EU) 2024/1083, Art. 2(8)).
For private individuals: the “purely personal use” exception
The Commission’s example is a gem (§17):
- Christmas card featuring a deep fake of family members → excluded from the AI Act, no labelling required.
- Deep fake of the mayor posted on social media to criticise a local decision → not covered by the exception; the impact on public debate strips away the “purely personal” character.
The exclusion only covers deployer obligations. The provider of the system itself remains bound to mark in a machine-readable format.
4. The crossovers not to miss
AI Act × GDPR
Article 50 does not replace the GDPR’s information obligations (Articles 13-14) or Article 22 on automated individual decision-making. The Commission has even announced (footnotes 24 and 29 of the draft) the preparation of joint guidelines with the EDPB on the interplay between the AI Act and EU data protection law.
Concrete case: an emotion-recognition system deployed in a retail store will require:
- Article 50(3) AI Act information (you are exposed to an emotion-recognition system);
- GDPR Articles 13-14 information (controller, purpose, legal basis, retention period, rights);
- If biometric data is processed to uniquely identify a person: GDPR Article 9;
- In the workplace or educational settings: prohibition under Article 5(1)(f) AI Act.
The draft allows these notices to be merged into a single privacy notice where appropriate (§104).
AI Act × DSA
For very large online platforms (VLOPs/VLOSEs), Article 35(1)(k) DSA provides a complementary labelling mechanism for manipulated content. Crucially, unmarked content in violation of Article 50 AI Act may qualify as illegal content within the meaning of Article 3(h) DSA, opening the door to notice-and-takedown mechanisms (§91 of the draft).
AI Act × related rights, trademarks, image rights
The attenuated regime for artistic, satirical, creative or fictional works (§116) does not displace other legislation: copyright, trademark, image and voice rights. A deep fake that is “lawful” under the AI Act may still be unlawful under national personality rights.
5. What to do in the coming days
A reasonable roadmap for a law firm, a software publisher or a product team:
- Mapping. Identify every AI system in direct or indirect contact with a human, internal and external. Distinguish provider vs deployer role (Article 50 assigns obligations differently depending on the role).
- Qualification. For each one: 50(1), 50(2), 50(3), 50(4)? Several stacked?
- Exception testing. “Obviousness” applicable? Law-enforcement carve-out? Purely personal use? Assistive “standard editing” function?
- Notification design. Clear text, first interaction, accessible, multimodal in sensitive contexts. No T&C-only disclosure. No technical description. No end-credit labels.
- Technique selection. For 50(2), combine several techniques on the basis of recognised or emerging standards and practices: C2PA / Content Credentials, signed metadata, watermarking, fingerprinting. Favour the standards that will be retained or reflected in the Commission’s Code of Practice on marking and labelling, the preferred route to demonstrating compliance (a minimal detection sketch follows this list).
- Gap analysis and documentation. Non-signatories to the Code of Practice will have to document their own technical choices and justify them to market surveillance authorities.
- GDPR alignment. Align Article 50 notifications with GDPR information obligations. Anticipate the joint Commission/EDPB guidelines.
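Article 50(2) also requires that marked content be detectable. Continuing the illustrative conventions of the marking sketch in section 2 (and only for PNGs whose text chunks survived distribution), the detection side could look like this:

```python
# Verify both signals embedded by mark_ai_generated() above.
import hashlib
import hmac
import json

from PIL import Image

def verify_ai_declaration(path: str, signing_key: bytes) -> bool:
    img = Image.open(path)
    text = getattr(img, "text", {})  # PNG text chunks; absent on other formats
    payload = text.get("ai-declaration")
    sig = text.get("ai-declaration-sig")
    if payload is None or sig is None:
        return False  # metadata stripped: fall back to other techniques
    expected = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # declaration tampered with
    declared = json.loads(payload)["fingerprint"]
    return declared == hashlib.sha256(img.tobytes()).hexdigest()  # pixels intact
```

A negative result here proves nothing on its own; it only means this particular signal is gone, which is exactly why the roadmap calls for a documented combination of techniques.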
Closing thoughts
Article 50 of the AI Act is not a distant, abstract or technical obligation. Eighty days from its application date, it concerns virtually any organisation deploying a conversational assistant, generating visual or audio content with AI assistance, or publishing analyses on matters of public interest with algorithmic support.
The public consultation on the draft is open. It is time to contribute — or, failing that, to seriously prepare for 2 August.
Sources
- European Commission (AI Office), Draft Guidelines on the implementation of the transparency obligations for certain AI systems under Article 50 of Regulation (EU) 2024/1689, May 2026 (in consultation).
- Regulation (EU) 2024/1689 of 13 June 2024 (AI Act), CELEX 32024R1689, Articles 3, 5, 6, 50, 85, 96, 113; Recitals 132 to 136.
- Regulation (EU) 2016/679 of 27 April 2016 (GDPR), Articles 13, 14, 22, 35.
- Regulation (EU) 2022/2065 of 19 October 2022 (DSA), Articles 3(h), 16, 34, 35.
- Regulation (EU) 2024/1083 of 11 April 2024 (European Media Freedom Act), Article 2(8).
- Directive 2005/29/EC of 11 May 2005 (Unfair Commercial Practices Directive).
- Directive (EU) 2019/882 of 17 April 2019 (European Accessibility Act).
- European Commission, Guidelines on prohibited artificial intelligence practices, C(2025) 5052.
Jeoffrey Vigneron is a member of the Brussels Bar and founder of Lawgitech, the first Belgian law firm specialising in artificial intelligence law. He advises companies and institutions on the AI Act, the GDPR and cybersecurity regulation.
This article reflects the state of the law as of 12 May 2026 and is based on a draft set of guidelines not yet adopted. It does not constitute legal advice.