On June 14, 2023, the European Parliament voted to adopt its negotiating position on the AI Act, the world’s first comprehensive law regulating artificial intelligence (“AI”). Although the law has not yet been passed and may be subject to further amendment, the vote marks a significant milestone in the European legislative process. The Act now enters the final phase of the legislative process, in which the text will be negotiated and finalized over the coming months before adoption as law.
What Will the Legislation Cover?
The European Parliament’s stated aim for the Act is to make sure that AI systems used in the EU are “safe, transparent, traceable, non-discriminatory and environmentally friendly.” The European Parliament has said, “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
The AI Act sets out a broad definition of AI, which would apply across all 27 EU Member States and likely covers both predictive AI (AI used to make predictions or decisions about future outcomes) and generative AI (AI used to generate new outputs based upon patterns in the data on which it has been trained, such as ChatGPT). The technological scope is not, however, entirely clear. Machine learning (“ML”) techniques are squarely within scope; the open question is whether a far broader swath of techniques and systems will also be covered, including Bayesian inferential models and other so-called “statistical approaches,” as well as so-called “logic- and knowledge-based approaches,” none of which necessarily carries the same risks as more autonomous, black-box ML. As the voted-on draft notes, these are “simpler techniques,” but may still result in “legal gaps” worth addressing.
The new AI Act would impose obligations on both providers (those that develop AI systems) and deployers (organizations under whose authority an AI system is used, a category that includes employers), calibrated to the level of risk the AI poses.
How Will the AI Act Operate?
The AI Act follows a risk-based approach to the regulation of AI systems, imposing obligations proportionate to the risk the AI poses to individuals. The greater the potential risk, the stricter the rules:
- Unacceptable risk: The Act sets out a list of AI practices that pose an “unacceptable risk” and would therefore be prohibited, such as social scoring, subliminal behavioral manipulation, and most uses of biometric identification in public spaces. This list was the subject of much debate, but it focuses on AI systems that pose an unacceptable level of risk to people’s safety or that are intrusive or discriminatory. In practice, these uses of AI are unlikely to arise routinely in an employment context.
- High risk: The Act also identifies a category of AI systems deemed “high risk,” which would be subject to significant regulatory oversight, including a range of detailed compliance requirements for both providers and deployers, with the majority of the obligations falling on providers. Of particular note to employers, AI systems used in “employment, workers management and access to self-employment” that pose a significant level of risk to individuals would be classified as high risk, since “those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights.” The compliance requirements for employers deploying high-risk AI systems may include detailed technical documentation, record keeping, transparency measures, human oversight, and ensuring the accuracy, robustness, and security of the AI system.
- Limited risk and minimal risk: The draft regulation also envisages two lower-risk categories of AI system, “limited risk” and “minimal risk,” which would be subject to lighter obligations relating primarily to transparency. These categories are less likely to be relevant to employers.
Scope of Liability for High-risk AI Tools
For high-risk systems, the draft Act places significant responsibility on providers to ensure that their products comply with the Act, requiring them to implement a risk management system, to thoroughly test and assess the compliance of the AI tool (including via a “conformity assessment”), and to provide detailed policies, procedures, and instructions for compliant use of the AI tool. However, the Act also shifts responsibility in certain instances from providers to importers, distributors, and deployers (i.e., employers and other end users). For example, if a downstream entity substantially modifies the functioning or use of an off-the-shelf system, the upstream provider is exempted from liability resulting from that modified system. On the other hand, if the downstream entity merely places its name or trademark on an unmodified system, both it and the upstream provider remain jointly liable.
Territorial Scope of the Act
As with the EU General Data Protection Regulation (“GDPR”), the AI Act would have extraterritorial scope. It would apply both to AI systems established and operating in the EU and to systems established outside the EU that affect individuals in the EU. As a result, non-European companies that deploy AI in the EU could find themselves subject to the AI Act regardless of their geographic location.
Penalties for Non-compliance
The potential fines for non-compliance are high and have increased significantly as the legislation has progressed. The maximum penalty for violating the AI Act is the greater of EUR 40 million (USD 43.5 million) or 7% of the company’s total worldwide annual turnover for the preceding financial year. By way of comparison, the GDPR maximum is EUR 20 million or 4% of worldwide annual turnover, so the AI Act ceiling is roughly double.
Next Steps
The three European institutions (the Parliament, the Council, and the Commission) will now negotiate the final text of the Act in “trilogue” discussions; it is anticipated that they will reach agreement by the end of the year. Once finalized, the AI Act would take the form of an EU Regulation, binding in its entirety and directly applicable in all Member States without the need for national implementing legislation.
It is anticipated that there will be a two-year implementation period, although given the speed at which this technology is being developed and integrated, there is pressure for the Act to take effect sooner.
As the world’s first comprehensive legal framework for AI, the AI Act is likely to be influential as other countries consider how to regulate AI. As with the GDPR, it may set the de facto international standard for AI compliance among global technology companies.
This legislation is likely to impact employers that make use of AI in the EU. We will continue to monitor the progress of this legislation, particularly since the text approved by the European Parliament has not yet been published.
The AI Act is a complex piece of legislation, and many areas remain open to interpretation or may require further guidance. Employers should consult the latest version of the AI Act and seek legal advice on how it applies to their specific circumstances.