Employers using algorithmic decision-making tools in recruiting, hiring, and reviewing applicants and employees should take careful note of the EEOC’s position on where these tools may run afoul of the ADA.
New York City is deferring enforcement of its first-in-the-nation regulation of the use of AI-driven hiring tools (Local Law 144 of 2021), which was initially slated to go into effect on January 1, 2023.
On October 17, 2022, President Biden signed the AI Training Act into law. The purported purpose of the Act is to ensure that the federal government’s workforce understands how artificial intelligence (AI) works, along with its benefits and risks.
The NLRB General Counsel announced that she will urge the Board to adopt a new framework holding employers accountable for the use of “omnipresent surveillance and other algorithmic-management tools” when those tools tend to impair the exercise of §7 rights.
On September 23, 2022, the New York City Department of Consumer and Worker Protection (DCWP) proposed additional rules relating to Local Law 144 of 2021, which will regulate the use of automated employment decision tools starting January 1, 2023.
On May 12, 2022, the U.S. Equal Employment Opportunity Commission (EEOC) issued a “Technical Assistance” document addressing compliance with ADA requirements and agency policy when using AI and other software to hire and assess employees.
Littler’s tenth annual survey – completed by nearly 1,300 in-house lawyers, C-suite executives and HR professionals – provides a window into how U.S. employers are managing labor and employment issues and where their principal concerns lie.
Two noteworthy developments have occurred since the California Fair Employment & Housing Council released draft revisions to the state’s employment non-discrimination regulations, which address the nascent body of law surrounding the use of AI in employment.