The European Commission has just published the 2026 edition of its Guidelines on the Ethical Use of Artificial Intelligence and Data in Teaching and Learning for Educators.
Why ethical AI in education is at the heart of these guidelines
Since the first version published in 2022, the use of artificial intelligence in education has surged, driven notably by public access to generative AI. According to data cited in the document, 87% of Europeans believe that all teachers should be trained in the use of AI, 85% consider digital skills necessary to use generative AI safely, and 75% think that everyone will need to be "AI literate" by 2030.
This update arrives in a profoundly changed regulatory landscape: the AI Act (Regulation (EU) 2024/1689) has now entered into force, and its obligations concerning high-risk AI systems in education will apply from 2 August 2026. The 2022 version of these guidelines could not incorporate this legal framework, as the regulation had not yet been adopted. The 2026 edition is therefore the first to combine the legal requirements of the AI Act and the GDPR with ethical considerations and practical tools.
It is important to note, as the document itself states, that these guidelines are not binding and do not constitute an implementation guide for the AI Act. They offer a framework for ethical reflection that goes beyond legal obligations alone.
Document structure: three pillars
The guidelines are organised around three complementary axes:
- Fundamental principles and legal context — AI Act, GDPR and ethical considerations underpinning the responsible use of AI in education.
- Guiding questions and practical scenarios — concrete examples of applying these principles in the classroom and at the institutional level.
- Reference resources — technical definitions, competence frameworks (DigComp 3.0, AI Literacy Framework defined in Art. 3(56) AI Act) and policy context.
The 5 fundamental ethical considerations
The document identifies five ethical pillars, directly inspired by the Ethics Guidelines for Trustworthy AI from the Commission’s High-Level Expert Group:
- Human dignity — The human-centred approach must prevail: individuals are not data objects. Respect for privacy, autonomy and human agency.
- Fairness — Equity, inclusion, non-discrimination and fair distribution of rights and responsibilities in the use of AI tools.
- Trust and trustworthiness — A reliable AI tool is transparent about how it works, respects privacy, avoids bias and supports learning in accordance with the values of the school community.
- Academic integrity — Fostering a culture where critical thinking, values and human agency coexist with technological innovation.
- Justified choice — Collective decisions must be based on transparency, participation and explainability.
AI Act and education: what schools need to know
The document devotes a detailed section to the European regulatory framework for AI applied to education.
Prohibited practices (Art. 5 AI Act)
Article 5(1)(f) of the AI Act prohibits emotion recognition systems in workplaces and educational institutions. A system that detects students’ emotions in an educational context is prohibited, even if the stated objective is pedagogical. Exception: systems used for medical or safety reasons.
The document provides useful nuances: eye-tracking software to detect cheating is not prohibited, as long as it does not infer emotions. However, emotion recognition during an admissions test is prohibited. The prohibition applies to all levels of education and training.
High-risk systems (Annex III, point 3)
Education is classified as a high-risk domain. Annex III, point 3, identifies four cases: (a) access or admission, (b) assessment of learning outcomes, (c) assessment of the appropriate level of education, (d) monitoring of prohibited behaviour during examinations. These systems are subject to strict obligations: risk assessment, high-quality data, logging, documentation, human oversight, cybersecurity.
An AI system may escape the high-risk classification when it does not materially influence the outcome of decision-making (recital 53 AI Act). These obligations will apply from 2 August 2026.
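This classification logic can be sketched as a first-pass triage. The following Python snippet is purely illustrative and not legal advice: the function and dictionary names are hypothetical, and the recital 53 exemption always requires a documented case-by-case assessment.

```python
# Illustrative triage of the four educational use cases listed in
# Annex III, point 3 of the AI Act. Names are hypothetical placeholders.

ANNEX_III_EDUCATION_USE_CASES = {
    "access_or_admission": "(a) determining access or admission",
    "learning_outcomes": "(b) assessing learning outcomes",
    "education_level": "(c) assessing the appropriate level of education",
    "exam_monitoring": "(d) monitoring prohibited behaviour during exams",
}

def triage(use_case: str, materially_influences_decision: bool = True) -> str:
    """Rough first-pass triage of an educational AI system."""
    if use_case not in ANNEX_III_EDUCATION_USE_CASES:
        return "not listed in Annex III, point 3 - check other obligations"
    if not materially_influences_decision:
        # Recital 53: a system that does not materially influence the
        # outcome of decision-making may escape the classification.
        return "potentially exempt - document the assessment"
    return "high-risk - obligations apply from 2 August 2026"

print(triage("learning_outcomes"))
```

A real-world assessment would of course involve a lawyer, not a lookup table; the sketch only shows that the regulation's structure lends itself to a systematic checklist.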
Transparency obligations (Art. 50) and right to explanation (Art. 86)
Transparency obligations include informing users when interacting with an AI system (Art. 50(1)) and labelling AI-generated content (Art. 50(2)). Article 86 introduces a right to explanation for persons affected by a decision of a high-risk AI system. Article 3(56) defines AI literacy and Article 4 requires a sufficient level of AI competence for the staff of providers and deployers.
GDPR and education
The GDPR applies in full. Educational institutions must communicate clearly about data processing (Art. 12-15 GDPR) and conduct a data protection impact assessment (DPIA, Art. 35 GDPR) before deploying high-risk AI systems. Article 27 of the AI Act adds the obligation to carry out a fundamental rights impact assessment (FRIA) for public deployers of high-risk systems.
Concrete guiding questions for the field
The guiding questions framework is organised into 8 themes: human agency and oversight, transparency and explainability, diversity and inclusion, fairness and non-discrimination, societal and environmental well-being, privacy and data governance, technical robustness and safety, and accountability.
10 illustrated practical scenarios
The guide offers ten concrete scenarios, each accompanied by five priority ethical questions. The approach is non-binary: a negative answer does not prohibit the use of the tool, but signals that complementary action is needed.
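The non-binary logic described above can be modelled very simply: a "no" answer does not veto the tool, it triggers a follow-up action. In this sketch the question texts and actions are hypothetical placeholders, not quotations from the guidelines.

```python
# Illustrative model of the guide's non-binary scenario review:
# each question is paired with a complementary action that is
# flagged whenever the answer is "no". Contents are hypothetical.

QUESTIONS = [
    ("Is it clear to students when they are interacting with AI?",
     "Add a visible AI-use notice (Art. 50 AI Act transparency)."),
    ("Can a teacher override the system's output?",
     "Introduce a human-in-the-loop review step."),
    ("Has a DPIA been carried out?",
     "Run a DPIA before deployment (Art. 35 GDPR)."),
]

def review(answers: list[bool]) -> list[str]:
    """Return the follow-up actions triggered by each negative answer."""
    return [action for (_, action), ok in zip(QUESTIONS, answers) if not ok]

# Two "yes" answers and one "no": the tool is not prohibited,
# but one complementary action is flagged.
print(review([True, True, False]))
```

The point of the design is that the output is a to-do list rather than a verdict, which mirrors the guide's stance that a negative answer signals remediation, not prohibition.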
Our analysis: ethical AI in education in practice
This updated guide on ethical AI in education is a valuable tool, and not only for the education sector. It offers a methodology transferable to any organisation deploying AI systems: the guiding questions framework, the interplay between the AI Act and the GDPR, and the risk-based approach are reflexes that every DPO, compliance officer or executive should adopt.
A few critical observations: the document remains soft law and does not replace a legal compliance analysis. The guiding questions would benefit from being complemented by more structured self-assessment tools, along the lines of the CCB's CyFun framework for NIS2. And the issue of data transfers outside the EU to US-based EdTech platforms is not addressed in depth.
At Lawgitech, we support organisations — including educational institutions — in their compliance with the AI Act and the GDPR. If you would like to adapt these guidelines to your context or train your teams, contact us.
Download the document
The full document (46 pages, PDF, CC BY 4.0 licence) is available for free download:
View and download the Guidelines on the Ethical Use of AI in Education (2026)
Source: European Commission, "Guidelines on the Ethical Use of Artificial Intelligence and Data in Teaching and Learning for Educators – Updated edition", 2026, ISBN 978-92-68-33189-7, doi:10.2766/7967834. Document published under CC BY 4.0 licence.