Earlier this year, the European Parliament approved the EU Artificial Intelligence Act (the “AI Act”) by a sweeping majority, making it the world’s first comprehensive set of rules for artificial intelligence (see our more detailed updates here and here).
As part of these new regulations, the AI Act sets out a list of prohibited AI practices that pose an “unacceptable risk.” The prohibition focuses on AI systems that present an unacceptable level of risk to people’s safety, or that are intrusive or discriminatory.
Most of the provisions of the AI Act will not bite until August 2026, but the ban on AI systems that pose an unacceptable risk comes into force on February 2, 2025.
To assist employers in their preparation, we have summarized the banned AI systems below.
Banned AI Systems
- AI systems that manipulate individuals' decisions subliminally or deceptively, causing or reasonably likely to cause significant harm.
- AI systems that exploit vulnerabilities like age, disability, or socio-economic status to influence behavior, causing or reasonably likely to cause significant harm.
- AI systems that evaluate or classify individuals based on their social behavior or personality characteristics, causing detrimental or unfavorable treatment.
- AI systems that assess or predict the risk of an individual committing a criminal offence based on their personality traits and characteristics.
- AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- AI systems that infer emotions in workplaces or educational institutions (except where this is needed for medical or safety reasons).
- AI systems that categorize individuals based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
- AI systems that carry out “real-time” remote biometric identification in publicly accessible spaces for the purposes of law enforcement (except in very limited circumstances).
Employers should carefully consider whether any of their AI tools fall within these prohibited uses, for example as part of their recruitment or employment practices.
Scope and Penalties
As a reminder, the AI Act will have extra-territorial scope, and international companies that are not based in the EU may still find themselves subject to it.
Furthermore, the penalties for non-compliance are significant, reaching up to EUR 35 million (USD 38 million) or 7% of the company's global annual turnover in the previous financial year, whichever is higher.
Therefore, global businesses should keep this deadline in mind if they are developing, deploying or marketing AI systems in the EU from now onwards.