What the EU AI Act Means for Education, Ethics and our Future

In August 2024, the European Union made history: the AI Act, the world’s first comprehensive legal framework for artificial intelligence, entered into force. Aimed at fostering trustworthy, human-centric AI, this legislation marked a major step in ensuring that technology develops in ways that protect, rather than undermine, human rights, safety and dignity. 

At Semio-Semantics, where we explore the intersections of language, meaning and human connection in an age increasingly shaped by AI, the AI Act feels like a philosophical moment as much as a regulatory one. It is a sign that even in our rapidly moving world of automation, values still matter. 

Why Regulate AI at All? 

While most AI systems are low-risk (e.g. spam filters or voice-to-text tools), some systems, especially those used in education, healthcare, hiring and policing, carry high stakes. If a chatbot marks a student unfairly or an automated CV filter screens out applicants based on biased data, the result is not just a technical error but a moral failure. 

The AI Act’s risk-based model reflects a widely shared ethical principle: context matters. As Aristotle (the ancient Greek philosopher and polymath) observed, what’s right or fair often depends on the situation and the people involved, not just on a universal rule. Not all technologies are equally dangerous, so the law divides AI systems into four categories (illustrated in the short sketch after this list): 

  • Unacceptable Risk: banned outright, such as social scoring or real-time facial recognition in public spaces (with narrow law-enforcement exceptions) 
  • High Risk: subject to strict oversight, like AI used to grade students or assess job applications 
  • Limited Risk: systems requiring transparency, like chatbots that must disclose they are not human 
  • Minimal Risk: no restrictions, such as AI in video games or email filters 
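
To make the tiering concrete, here is a minimal Python sketch of how the four categories might be modelled in software. Everything here (the RiskTier enum, EXAMPLE_SYSTEMS and the sample use cases) is a hypothetical illustration, not a classification defined by the Act itself; real classification requires legal analysis of each system in context.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict oversight required"
    LIMITED = "transparency obligations"
    MINIMAL = "no restrictions"

# Hypothetical examples only; where a real system lands depends on
# how and where it is deployed, not just on what it does.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "automated essay grading": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} risk ({tier.value})")
```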

The Classroom and the Code 

Education sits at a pivotal point in this framework. AI tools used to assess students, recommend educational paths or support learning (especially those affecting access to education) fall into the high-risk category. 

This includes systems that: 

  • Rank or filter students for admissions 
  • Evaluate student performance in ways that impact their educational trajectory 
  • Monitor students during assessments through AI proctoring 

Schools and teachers using such tools are required to: 

  • Undergo conformity assessments 
  • Ensure data quality and fair use 
  • Disclose when students are interacting with AI 
  • Maintain human oversight 
  • Log and monitor the AI’s decisions (a minimal logging sketch follows this list) 
  • Complete a fundamental rights impact assessment 
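
What might logging and human oversight look like in practice? The Python sketch below imagines an auditable decision record for an AI grading tool. It is an assumption-laden illustration: the class, field names and model version are all invented for this post, not drawn from the Act or from any real product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GradingDecision:
    """One logged decision by a (hypothetical) AI grading tool."""
    student_id: str
    ai_score: float
    model_version: str
    explanation: str                  # why the system scored as it did
    reviewed_by_human: bool = False   # human-oversight flag
    final_score: float | None = None  # set by the reviewing teacher
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision_log: list[GradingDecision] = []

def record_decision(decision: GradingDecision) -> None:
    """Append to an auditable log so decisions can be traced later."""
    decision_log.append(decision)

# A teacher reviews and confirms (or overrides) the AI's suggestion:
# the human judgement, not the machine's, is final.
d = GradingDecision(
    student_id="s-042",
    ai_score=6.5,
    model_version="essay-scorer-0.3",
    explanation="Strong structure; limited use of source evidence.",
)
d.reviewed_by_human = True
d.final_score = 7.0
record_decision(d)
```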

Why is this fundamentally a good thing? Because teaching today involves data processing as much as emotional labour, ethical guidance and the slow work of meaning-making. A system that scores a student should be explainable, transparent and fair. The EU’s insistence on quality data, human oversight and clear documentation reflects a growing understanding that learning is not neutral. 

As the philosopher Hannah Arendt (a 20th-century political philosopher known for her deep reflections on totalitarianism, responsibility and the human condition) once wrote:

"Education is the point at which we decide whether we love the world enough to assume responsibility for it."

In this light, regulation is not just red tape; it is care in action. 

AI, Language and Connection 

In language learning, where so much depends on nuance, empathy and cultural context, the stakes are particularly high. AI tools can now generate fluent French paragraphs or grade short responses, but what do they know of voice, identity or motivation? If we’re not careful, we risk building systems that confuse fluency with understanding. 

The AI Act requires that high-risk systems in education be robust, traceable and accountable. However, as we know, meeting legal requirements is not the same as meeting human needs. As educators, designers and citizens, we must remember to go beyond the question of compliance to ask: Is this responsible? Does it support learning and human dignity?

Designing for Dignity 

The Act’s emphasis on transparency, safety and fundamental rights echoes the values championed by teachers and educators across the UK. For instance, the Bell Foundation highlights the importance of inclusive assessment and clear communication in supporting learners with EAL (English as an Additional Language), students who may be unfairly disadvantaged by opaque or culturally biased AI systems. Similarly, the Education Endowment Foundation (EEF) provides evidence-based guidance on reducing barriers to learning for disadvantaged pupils, including the need for explicit instruction and accessible materials. If AI tools reinforce existing inequalities, whether through flawed training data or decisions that lack transparency, we’ve failed not only technologically but also morally. These frameworks remind us that inclusion is not an optional extra in digital education; it is the foundation of ethical design. 

This means recognising that even “limited risk” tools, like chatbots or AI writing assistants, require transparency (a minimal sketch follows the list below). Teachers are therefore obligated to: 

  • Disclose to students when they are interacting with AI 
  • Label AI-generated content, especially when it informs public understanding 
  • Avoid using generative AI for high-risk purposes like grading 
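
As a small illustration, transparency for these tools can be as simple as a disclosure attached at the point of interaction. The Python sketch below is hypothetical; the function names and the wording of the notices are assumptions for this post, and real disclosures would follow institutional policy.

```python
def disclose_chatbot(greeting: str) -> str:
    """Open every chatbot session by stating that it is not human."""
    return f"You are chatting with an automated assistant. {greeting}"

def label_ai_content(text: str, tool_name: str) -> str:
    """Prepend a plain-language disclosure to AI-generated text."""
    notice = f"[Generated with the help of {tool_name}; reviewed by a teacher.]"
    return f"{notice}\n\n{text}"

print(disclose_chatbot("How can I help with your French homework?"))
print(label_ai_content("Voici un paragraphe d'exemple.", "a writing assistant"))
```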

Semio-Semantics and the Age of Accountability 

Semio-semantics refers to how meaning is made and shared, especially through symbols, language and systems. The AI Act is itself a kind of symbolic act: a declaration that ethics must scale alongside innovation. As we build and deploy learning technologies, we are shaping not only student outcomes but ultimately the kind of society we want to live in. 

We could say that what makes AI trustworthy is good code and good questions, such as: 

  • Who gets to decide what knowledge matters? 
  • Whose language, accent or phrasing is seen as “correct”? 

These are curriculum questions as well as political and philosophical ones. 

As Søren Kierkegaard (a Danish philosopher and writer widely regarded as the father of existentialism, whose work explores individual freedom, choice, anxiety and the meaning of life) wrote:

“Life can only be understood backwards; but it must be lived forwards.”

In this vein, where teachers have the headspace and time, and feel trusted to make professional decisions, these EU regulations can help us look back with wisdom and plan forward with care. 

A Future Worth Designing 

The AI Act provides us with an opportunity to make technology serve human learning instead of replacing it. 

To educators, it says: our judgement, care and expertise matter. Our daily choices shape how tools are used and whether students are included, empowered or overlooked. 

To developers and designers, it says: please slow down, think deeply and build with intention. Not just because the law says so, but because human lives and learning environments deserve such a level of thoughtfulness. 

This law will not answer every question, but it can open the door to deeper dialogue and more thoughtful, critical questions among the people working in pedagogy, programming and ethics, and about the machines they build. 

This conversation is worth having, and at Semio-Semantics it is one we plan to keep showing up for. 
