Introduction: The End of Black Boxes in Europe?
Since the rapid adoption of LLMs at the end of 2022, European regulators have become aware of a major regulatory gap: the lack of clear information about how so-called "general-purpose" AI models (GPAI) work. The result? A race against time to legally regulate their design, deployment and, above all, their transparency.
On 10 July 2025, the AI Office published the final version of the GPAI Code of Practice, a voluntary tool structured around three pillars: transparency, copyright, and safety. The corresponding obligations become legally applicable from 2 August 2025, under Articles 53 and 55 of the AI Act. Providers with models already on the market have a transitional period until 2 August 2027. Major players such as Google, OpenAI and Anthropic have signed the Code to benefit from recognised compliance, while some companies (Meta, xAI) have partially or fully refused its terms.
These provisions require standardised technical documentation covering the model’s objectives, training data, performance metrics and known limitations — through two separate instruments (Annex XI for authorities, Annex XII for integrators).
For the professional audience (businesses or authorities), this transparency materialises through a "Model Card" dossier:

- Technical Form (Annex XI): technical documentation for providers of general-purpose AI models
- Information Block (Annex XII): technical documentation for providers of general-purpose AI models, intended for downstream providers integrating the model into their AI system
For truly open-source models (weights, code, architecture and information on the model's use all made publicly available), an exemption is provided under Article 53(2), unless the model poses a systemic risk.
Importantly, this exemption covers only the technical documentation, not the documentation relating to training data and copyright!
But let us first revisit the basics: the AI Act does not define the concept of a model (which could be described as an autonomous software entity, trained, capable of generating inferences based on mathematical criteria) but rather that of a general-purpose AI model (GPAIM, Art. 3(63)): "an AI model, including where such an AI model is trained by using a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market".
With this distinction established, what are the implications for these GPAI?
Why This Transparency Has Become Essential
- Provider accountability: no more "black-box models". The provider must act as an informative partner for professional integrators, reflecting the rise of the accountability-by-design standard.
- Sectoral compliance: like the manual for critical equipment, these Model Cards become essential for public authorities or businesses integrating external AI components into their products, services or activities.
- Meeting governance expectations: certain organisations (public or private) already include these elements in their market access criteria (e.g. DPIA or technical specifications). Providing clear documentation thus becomes a lever for trust and a condition for accessing certain markets.
What Is a Compliant "Model Card"?
The GPAI Code of Practice, published by the AI Office in July 2025, proposes a validated template for structuring the required documentation (Measure 1.1). The template must be kept up to date across successive versions, with mandatory archiving for 10 years after the model is placed on the market.
It must include:

- Information for authorities (Annex XI):
  - Model description (version, developer, purpose, architecture).
  - Training data: sources, licences, anti-poisoning and illegal-scraping measures.
  - Evaluation results (metrics, known biases, adversarial robustness).
  - Incident and risk management.
- Information for integrators (Annex XII):
  - Main capabilities and operational limitations.
  - Human oversight recommendations, discouraged prompts, "out-of-scope" biases.
  - Usage domains (e.g. healthcare, finance) and cases to avoid.
- Technical best practices:
  - Specify the test datasets used (benchmarks, arbitration).
  - Report malfunctions (e.g. misinterpreted sentences, typical hallucinations).
  - Describe prompt-injection constraints or contamination vectors.
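To make the Annex XI/XII structure above concrete, a provider might maintain the dossier in machine-readable form alongside the prose documents. The sketch below is purely illustrative: the field names, example values and serialisation are assumptions, not the official AI Office template.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative schema only: field names are assumptions,
# not the official Annex XI/XII template.

@dataclass
class AnnexXI:
    """Information for authorities."""
    model_name: str
    version: str
    developer: str
    purpose: str
    architecture: str
    training_data_sources: list[str]
    evaluation_results: dict[str, float]
    known_biases: list[str]

@dataclass
class AnnexXII:
    """Information for downstream integrators."""
    capabilities: list[str]
    limitations: list[str]
    oversight_recommendations: list[str]
    intended_domains: list[str]
    out_of_scope_uses: list[str]

@dataclass
class ModelCard:
    annex_xi: AnnexXI
    annex_xii: AnnexXII

# Hypothetical example values for a fictitious model.
card = ModelCard(
    annex_xi=AnnexXI(
        model_name="example-gpai",
        version="1.0.0",
        developer="Example Lab",
        purpose="General-purpose text generation",
        architecture="Decoder-only transformer",
        training_data_sources=["licensed public web corpus"],
        evaluation_results={"benchmark_accuracy": 0.71},
        known_biases=["under-represents low-resource languages"],
    ),
    annex_xii=AnnexXII(
        capabilities=["summarisation", "translation"],
        limitations=["unreliable numeric reasoning"],
        oversight_recommendations=["human review of legal outputs"],
        intended_domains=["customer support"],
        out_of_scope_uses=["medical diagnosis"],
    ),
)

# Serialise to JSON so the dossier can be versioned and diffed.
dossier_json = json.dumps(asdict(card), indent=2)
```

Keeping the dossier in a structured format like this makes it straightforward to version, diff between successive model releases, and export into whatever layout the official template requires.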
How Businesses and Authorities Should Prepare
1. Preparatory analysis
- Identify whether your model is a GPAI (cf. Art. 3(63)).
- Map the intended downstream uses and define a documentation chain.
2. Building documentary governance
- Use the official AI Office template.
- Refer to the "Model Card Regulatory Check" tool offered by the OECD to standardise legal/technical checklists.
3. Internal legal-technical coordination
- Work in lawyer / R&D pairings on data, architectures, fraud detection and usage catalogues (favouring use-case cards inspired by the UML / AI Cards model).
4. Review cycle: security and quality control
- Maintain the quality of information to ensure the traceability, integrity and consistency of recorded data, notably through versioning, hash fingerprint controls and internal logging.
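The versioning and hash-fingerprint controls mentioned in the review cycle can be automated. The sketch below assumes the documentation files live in a local directory of Markdown files; the directory layout and log format are illustrative, not prescribed by the Code of Practice.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

# Sketch, assuming the dossier is a directory of Markdown files.
# Paths and the log-entry format are illustrative assumptions.

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of one documentation file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def log_version(doc_dir: Path, version: str) -> dict:
    """Build one version-log entry: release label, UTC timestamp,
    and a per-file hash fingerprint of the dossier."""
    return {
        "version": version,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "files": {p.name: fingerprint(p) for p in sorted(doc_dir.glob("*.md"))},
    }
```

Re-computing the fingerprints at each review cycle and comparing them against the logged entry detects any silent modification of the recorded documentation.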
This "Model Card" dossier is now the keystone of compliance for any general-purpose AI model under the European regime. Regulatory technical documentation (Annexes XI/XII) is no longer a voluntary practice: it is a necessity for every professional provider and public administration.
Here is the template to help you with your compliance efforts.
As a specialised law firm, Lawgitech can assist you with:
- Managing the audit of your existing documentation
- Co-designing the model version (managing confidential information)
- Training your teams (technical and legal) on the documentation chain
- Supporting you in case of requests from the AI Office or a national authority