Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.
On September 24, 2024, the U.S. Department of Labor (DOL) announced publication of its “AI & Inclusive Hiring Framework” website, described as “a new tool designed to support the inclusive use of artificial intelligence in employers’ hiring technology and increase benefits to disabled job seekers.” Previously, DOL released AI Principles, with guidance for employers and AI developers about designing and implementing AI systems in the workplace. DOL issued these AI Principles at the direction of President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The AI Executive Order and AI Principles did not specify what practices run afoul of the AI Principles, but DOL’s AI Framework seeks to “chart[] a clear course for employers to navigate” implementation of AI systems in the workplace.
The AI Framework was not published by the DOL but by the Partnership on Employment & Accessible Technology (PEAT), which DOL’s Office of Disability Employment Policy (ODEP) funded to develop the AI Framework. Even though PEAT is funded by ODEP, the organization is managed by a private company. According to PEAT’s website, “PEAT material does not necessarily reflect the views or policies of” ODEP or DOL and is not endorsed by the federal government. PEAT’s mission is “to foster collaborations in the technology space that build inclusive workplaces for people with disabilities,” with a vision of “a future where new and emerging technologies are accessible to the workforce by design.” PEAT previously provided input to federal agencies, including the EEOC, on guidance related to AI and the employment of people with disabilities.
According to DOL’s press release, ODEP and PEAT “developed the [AI Framework] with input from disability advocates, AI experts, government and industry leaders and the public at large.” The AI Framework concludes a process first discussed during DOL and PEAT’s virtual think tank held on April 17, 2023, where attendees included “federal agencies, technology innovators, disability organizations and civil rights groups.” Notably absent from the attendees were employers, even though the website expressly states that the “Framework’s primary audience is employers who deploy artificial intelligence (AI) hiring technology.” While employers were consulted during group stakeholder sessions, they did not have an opportunity to provide direct feedback on the draft guidance.
The AI Framework consists of ten focus areas: (1) identify legal requirements; (2) establish staff roles; (3) inventory technology; (4) work with vendors; (5) assess impacts; (6) provide accommodations; (7) use explainable AI; (8) ensure human oversight; (9) manage incidents; and (10) monitor regularly. Although these focus areas appear distinct from prior DOL and other agency guidance or initiatives on AI, they largely echo the AI Principles and President Biden’s AI Executive Order. For instance, all three urge implementation of AI with human oversight and in an ethical, transparent manner that employees and applicants can understand. Additionally, all three center on employee rights. These similarities underscore two points. First, the federal government encourages the use of AI for workers’ benefit. Second, federal agencies are not providing clear guidance or rules relating to employers’ implementation of AI systems.
Within the AI Framework website, however, PEAT attempts to provide employers with resources to consider when implementing AI systems. For instance, within the first focus area, the AI Framework lists two major practices and four major considerations related to legal requirements. Within the four major considerations, PEAT references various laws, such as the Americans with Disabilities Act, the Rehabilitation Act of 1973, Title VII of the Civil Rights Act of 1964, and other laws, along with official regulatory and policy guidance from federal agencies.
Despite being modeled on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, PEAT’s guidance may be lacking in practical applicability. Unlike the NIST framework, PEAT’s guidance leans heavily on third-party auditing, even though there are no consensus standards for AI auditing in general, much less for disability discrimination. Additionally, PEAT’s guidance refers to the Center for Democracy and Technology’s “Civil Rights Standards,” a policy document and model legislation not designed as practical guidance for employers and vendors.
Key Takeaways for Employers
Employers should regularly assess their AI use and the impact of AI systems in the workplace, not only for employees but also for applicants. Agencies are expected to issue more AI-focused guidance and publications going forward. To date, these publications primarily emphasize avoidance of disability discrimination. Until Congress enacts new legislation, federal agencies appear to be applying existing laws to AI systems. Although the AI Framework may seem like a repetition of information provided by other agencies and prior guidance, it is critical that employers and their counsel pay close attention to current and developing legal authority concerning AI.