Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.
Artificial intelligence (AI) continues to dominate headlines and even the most recent Super Bowl advertisements. The use of AI in the workplace is rapidly expanding throughout the hiring process in a wide variety of ways, from tools that scan and filter resumes to AI-driven video interviews that assess candidates. If appropriately designed and applied, AI can help people find their most rewarding jobs and match companies with their most valuable and productive employees. Equally important, AI has been shown to advance diversity, inclusion, and accessibility in the workplace.
AI tools can help increase diversity in several noteworthy ways. First and foremost, AI tools can remove the human element and thus, at least in theory, the subjectivity involved in hiring and other employment decisions, helping ensure that candidates whose skills and experience are the best match for a role advance through the selection process. Notably, AI can anonymize certain information about candidates, helping ensure that characteristics that may be associated with protected classes, such as a candidate’s name, are not considered during the evaluation process. AI tools have also been shown to increase the diversity of candidates interviewed by displaying biographical information only after candidates pass certain skills tests or meet certain metrics. AI-powered virtual interviews can also help increase diversity by standardizing interviews so that bias does not seep into the process: a human interviewer is more inclined to deviate from the interview script, whereas an AI tool is more likely to follow it. In addition, AI-powered chatbots help advance diversity by providing consistent answers to applicant questions, so that all candidates are on a level playing field and have equal access to critical information and resources during the hiring process, regardless of their backgrounds.
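The anonymization step described above can be sketched in a few lines of code. The example below is a minimal illustration only; the field names and the approach of dropping identifying fields before screening are assumptions made for the sketch, not any vendor’s actual implementation.

```python
# Minimal sketch of pre-screening anonymization: identifying fields are
# removed before a candidate record reaches the evaluation step, so the
# screener sees only job-relevant information. Field names are illustrative.

IDENTIFYING_FIELDS = {"name", "email", "photo_url", "address", "graduation_year"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["python", "sql"],
    "years_experience": 6,
}

screened = anonymize(candidate)
# screened retains only the job-relevant fields:
# {"skills": ["python", "sql"], "years_experience": 6}
```

In practice, the harder problem is that remaining "neutral" fields can still act as proxies for protected characteristics, which is why anonymization alone does not eliminate discrimination risk.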
Despite the benefits of using AI during the hiring process to advance diversity goals, its use is not without challenges. Legal risks arise if AI tools discriminate against protected classes, whether intentionally or unintentionally; in most cases, the greater risk is unintentional discrimination. As noted above, AI tools can remove the human element, but there are concerns that the tools will simply inherit (or even worsen) existing biases. If an AI tool is trained on data that reflect past discriminatory decision-making, it may unintentionally perpetuate those biases, even if the goal is to promote diversity. This is known as the “garbage in, garbage out” problem. Another legal risk arises when AI tools apply neutral criteria in a way that disproportionately impacts protected classes. For instance, after the COVID-19 pandemic, many employers wanted to recruit applicants who lived closer to the office, reasoning that a shorter commute made in-office attendance more likely. But limiting an applicant pool by geography can result in unintentional discrimination: zip codes, for example, are often highly correlated with racial and/or ethnic groups.
Another concern is the evolving legal landscape surrounding AI tools. Indeed, the steady adoption and rapid development of AI tools have led to a growing number of proposals for increased oversight, including measures regulating the use of AI in the employment context at the state and local level. For instance, New York City now has a broad AI employment law (Local Law 144) regulating employers’ use of AI tools in hiring and promotion decisions. The law requires employers that use AI to screen candidates for a job or a promotion in New York City to conduct an annual bias audit, publish a summary of the audit results, inform candidates that AI is being used, and give them the option of requesting an accommodation or an alternative selection process. The law is meant to promote transparency and give employers the opportunity to detect and correct unintentional bias in the candidate screening process.
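A bias audit of the kind described above typically compares selection rates across demographic groups. One widely used statistic is the impact ratio: each group’s selection rate divided by the rate of the most-selected group, with results below 0.8 (the “four-fifths” rule of thumb from federal selection guidelines) often prompting closer review. The sketch below illustrates the arithmetic with invented numbers; it is not the statutory audit methodology.

```python
# Sketch of the impact-ratio calculation often used in bias audits:
# compute each group's selection rate, then divide by the highest rate.
# The group names and counts below are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(groups: dict) -> dict:
    """groups maps group name -> (selected, applicants)."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

groups = {
    "group_a": (48, 100),   # 48% selection rate
    "group_b": (30, 100),   # 30% selection rate
}

ratios = impact_ratios(groups)
# group_a -> 1.0; group_b -> 0.30 / 0.48 = 0.625, below the 0.8
# threshold that often triggers closer review.
```

A ratio below the threshold does not by itself establish discrimination, but it flags a disparity that the employer should investigate and, where appropriate, correct.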
Employers can take steps to increase diversity in hiring by using AI tools in ways that mitigate employment discrimination risks. Employers can begin by identifying specific diversity goals that AI tools can help achieve, such as enlarging and diversifying the applicant pool; these goals will guide the selection of the appropriate tool, which should be one known for its transparency and fairness. Training, not only on how to use the AI tool but also on the limitations involved in its application, can be an effective way to guard against associated risks. Employers can monitor and audit AI uses and processes to proactively identify intentional misuse or potentially discriminatory outcomes. Employers should also track AI legislation and litigation, because this is a rapidly developing area; situational awareness of the evolving legal environment will only become more important.
Finally, employers should consider implementing AI policies and practices to ensure there are proper guardrails in place and that AI is used in a responsible and legally compliant way. The challenge will be balancing AI’s potential to foster a more diverse workplace against the nuances that can only be captured and contextualized by human judgment. A prophylactic, multi-pronged approach that leverages policies, recommended practices, and training is critical to preserving objectivity and keeping unintentional bias out of AI-driven processes.