Developing a Human Rights and Democracy Framework for AI Governance in European Educational Settings
Abstract
This study develops an operational governance framework for the deployment of artificial intelligence in the European education sector, ensuring compliance with human rights, the rule of law, and democratic principles established by the Council of Europe. Specifically, the framework addresses obligations arising from the European Convention on Human Rights (ECHR), the Council of Europe's Recommendation CM/Rec(2020)1 on algorithmic system impacts, and the European Union's Artificial Intelligence Act (2024), which designates educational AI systems as high-risk applications. Employing a mixed-methods approach combining Delphi methodology with legal and ethical consultation, this research engaged a multidisciplinary panel of 22 European experts specializing in educational law, AI ethics, democratic theory, digital pedagogy, and human rights adjudication. Through three iterative Delphi rounds, the study systematically identified high-risk AI scenarios in educational contexts, established normative consensus regarding implicated human rights and democratic values, and formulated operationally specific, ethically binding, and technically implementable governance guidelines. The resulting framework, the European Educational AI Governance Framework (EEAGF), comprises four foundational pillars: (1) Dignity and Non-Discrimination, addressing prevention of automated systems reproducing structural inequalities; (2) Transparency and Explainability, safeguarding stakeholders' rights to understand AI-assisted decisions affecting educational trajectories; (3) Democratic Accountability, establishing institutional oversight mechanisms; and (4) Pedagogical Integrity, ensuring AI deployment serves rather than distorts education's fundamental democratic purposes. The Delphi process identified automated student assessment, behavioural surveillance systems, and AI-assisted admissions as the three highest-risk application categories.
Panel consensus revealed that existing educational data protection frameworks, including GDPR Article 22 and national educational legislation, are substantively inadequate for governing the rights implications of these systems without sector-specific supplementary guidance. The EEAGF provides this critical guidance, offering European education stakeholders operationally feasible mechanisms for implementing AI governance aligned with fundamental European values.
References
Apple, M. W. (2019). On doing critical policy analysis. Educational Policy, 33(1), 276–287. https://doi.org/10.1177/0895904818807307
Bates, J., Lin, Y. W., & Willson, M. (2020). Governance of AI in public services. Information, Communication and Society, 23(6), 798–811. https://doi.org/10.1080/1369118X.2020.1736513
Biesta, G. (2010). Good education in an age of measurement: Ethics, politics, democracy. Paradigm Publishers.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Council of Europe. (2020). Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems. Council of Europe.
Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225). Council of Europe Treaty Series.
European Commission. (2021). Digital Education Action Plan 2021–2027: Resetting education and training for the digital age. European Commission.
European Court of Human Rights. (2007). D.H. and Others v. Czech Republic [GC], Application No. 57325/00, Judgment of 13 November 2007.
European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Habermas, J. (1996). Between facts and norms: Contributions to a discourse theory of law and democracy (W. Rehg, Trans.). MIT Press.
Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008–1015. https://doi.org/10.1046/j.1365-2648.2000.01567.x
Heersmink, R. (2017). The internet, cognitive enhancement, and the values of cognition. Minds and Machines, 26(4), 389–407. https://doi.org/10.1007/s11023-016-9404-7
Linstone, H. A., & Turoff, M. (Eds.). (2002). The Delphi method: Techniques and applications. Addison-Wesley.
OECD. (2022). OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots. OECD Publishing. https://doi.org/10.1787/589b283f-en
Perrotta, C., & Selwyn, N. (2020). Deep learning goes to school: Toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251–269. https://doi.org/10.1080/17439884.2020.1686012
Selwyn, N. (2019). What is digital sociology? Polity Press.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO.
Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365. https://doi.org/10.1080/13562517.2020.1748811
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
DOI: https://doi.org/10.51817/jas.v6i2.443

This work is licensed under a Creative Commons Attribution 4.0 International License.