Considerations for Artificial Intelligence Policies in the Workplace

  • Because the use of AI in the workplace can present serious risks to an organization, particularly risks involving security, intellectual property, confidentiality, and labor and employment law, employers should consider adopting an AI policy to ensure that their use of AI is responsible, ethical, and legally compliant.
  • AI policies can help employers comply with regulations, reduce liabilities, and manage AI risks.
  • Regular audits, employee training, and policy updates help ensure that AI is used in a responsible and legally compliant manner, especially because of the rapidly evolving legal landscape.

In recent years, many organizations have implemented new policies on artificial intelligence (AI) use to help prevent bias, plagiarism, or use of AI tools that produce inaccurate or misleading information. Meanwhile, many courts and state bars across the country have introduced AI usage policies to ensure that AI is properly used in the practice of law, including policies requiring attorneys to certify that generative AI did not draft any portion of a filing. Employers should consider similar measures, as the widespread use of generative AI programs such as ChatGPT and its newer iterations increases the risks associated with the use of AI. Indeed, because AI will continue to have an increasingly significant role throughout the workplace and at all stages of the employment lifecycle, organizations should strongly consider implementing policies to ensure that AI is used properly in the workplace.

Why Employers Should Adopt an AI Usage Policy

AI usage policies can help minimize legal, business, and regulatory risks by ensuring compliance with applicable laws and regulations. They are also valuable in navigating the evolving regulatory landscape, preemptively establishing a framework that helps mitigate risk. Having a policy in place before engaging in high-risk uses of AI (such as AI systems used in HR processes to evaluate job candidates or make decisions affecting the employment relationship) is critical for businesses to protect themselves from open-ended liability.

In many cases, companies engage third-party vendors that offer AI-powered algorithms to perform HR tasks. Having an AI usage policy can also improve employers’ relationships with third-party software vendors by establishing clear expectations and guidelines.

What to Include in an AI Usage Policy

At the outset, employers should identify areas where they do not want AI to be used and set clear guidelines accordingly. To accomplish this, it is important to identify potential risks associated with AI usage and tailor the policy to address those particular areas. These risks include AI tools that undermine data security, exhibit bias, or generate inaccurate or misleading information. To identify the potential risks, an employer needs to determine which tools will be approved for use and what tasks those tools are capable of performing. Only by understanding what the tools can do can an employer begin to understand the risks that might flow from their use.

In most cases, general AI usage policy templates should be avoided because the specific needs of the employer must be accounted for. Accordingly, employers should consider the following categories while creating tailored policies for their organizations.

1. Purpose or Mission Statement: An effective AI usage policy should open with a purpose or mission statement that clearly explains why the policy exists. This helps promote trust, credibility, and a greater awareness and appreciation of the merits of AI systems, while the absence of such a statement will likely undermine those benefits. An effective AI usage policy allows a company to monitor AI use and encourage innovation while ensuring that AI is used only to augment internal work and only with appropriate data. Generally, the basic purpose of an AI policy is to provide clear guidelines for the acceptable use of AI tools, thereby ensuring consistently compliant behavior by all employees.

2. Define AI and the AI Tools Covered: Another critical component is a section providing key definitions, including how the employer defines AI for purposes of the policy. Defining AI is often challenging, in large part because of the multitude and ever-growing variety of use cases. However, with AI being widely incorporated into other tools, it is important to delineate what is and isn’t covered by the policy in plain, non-technical language to eliminate any doubts among employees and others. This section should also specify which AI tools are approved and covered by the policy. Specific generative AI tools such as ChatGPT, Copilot or DALL-E can be included in this section, as applicable. Although generative AI has recently been the star of the AI world, a comprehensive AI policy must address all potential applications of AI. While the policy does not need to specifically identify every tool that is not approved for use, the policy should make clear that any AI system not explicitly approved in the policy is expressly prohibited.
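
Where an organization keeps its approved-tool inventory in machine-readable form, the default-deny principle just described can be mirrored directly in code. The following Python sketch is illustrative only; the tool names and the `APPROVED_AI_TOOLS` registry are hypothetical, and real enforcement would typically run through identity and access management rather than a standalone script.

```python
# Illustrative sketch only: the tool names and registry below are
# hypothetical examples, not recommendations.
APPROVED_AI_TOOLS = {
    "chatgpt-enterprise": {"drafting", "summarization"},
    "copilot": {"code-assistance"},
}

def is_use_permitted(tool: str, use: str) -> bool:
    """Default deny: any AI system not explicitly approved is prohibited."""
    return use in APPROVED_AI_TOOLS.get(tool, set())

# An unlisted tool, or an unlisted use of a listed tool, is denied.
assert is_use_permitted("copilot", "code-assistance")
assert not is_use_permitted("some-new-ai-app", "drafting")
assert not is_use_permitted("chatgpt-enterprise", "hiring-decisions")
```

The design point is simply that the registry of approved tools, not the employee's judgment in the moment, determines what is permitted, which matches the policy language above.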

3. Specify Who the AI Usage Policy Applies to: An effective AI usage policy should explain to whom it applies, including employees, independent contractors, and others. It is critical that the policy cover anyone who might have access to the employer's AI tools or systems.

4. Scope of the Policy: An effective AI usage policy must clearly define the scope of its applicability. A policy may allow open use, or it may prohibit or limit certain AI uses. For example, an AI usage policy may specify that the human resources department may not use AI in recruitment, given the risk of bias and the evolving legal landscape in this area. Or a policy may specify that employees are not to provide customer information to publicly available AI tools because of the data security risks involved.

The scope of the policy may differ based on several factors. For example, employees in different job roles are likely to need AI for different tasks, or may need different tools entirely. While some positions may require open-ended access to AI tools, others may only need AI for specifically delineated job functions. The policy should be scoped to appropriately control potential use by every group and individual with access to any AI tools or systems.

5. Data Security and Risk Management: It is also important for an AI usage policy to establish guidelines for data collection, storage, processing, and deletion. Addressing how AI technologies will handle personal and sensitive information ensures compliance with data protection laws and safeguards against unauthorized access or data breaches. An effective AI usage policy must also address an employer's sensitive, proprietary, and confidential information. For example, employers should consider prohibiting such information from being uploaded to or used with ChatGPT and other publicly available generative AI programs. Similarly, employers should consider prohibiting the use of any company or third-party proprietary information, personal information, or customer data as an input to AI tools. Employers need to be intimately familiar with the data security guarantees made by any AI vendors, and should have a clear understanding of how those guarantees operate with respect to the employer's data, its employees' data, and any customer data being used. And while it may be outside the scope of the AI usage policy itself, employers should take steps to communicate with individuals and customers whose data may be processed, providing notice and securing consent whenever possible.
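
Where prompts to publicly available generative AI tools pass through a company-controlled gateway, one way to operationalize these prohibitions is a pre-submission screen. The sketch below is a minimal illustration under that assumption; the regular expressions and pattern names are hypothetical, and production systems generally rely on dedicated data loss prevention tooling rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for data the policy bars from public AI tools;
# real deployments would use dedicated data loss prevention software.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality_marker": re.compile(r"(?i)\b(confidential|proprietary)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Summarize this CONFIDENTIAL customer memo.")
if violations:
    # Block the submission and log the event for later review.
    print("Prompt blocked; policy flags:", violations)
```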

6. Training: Employers should also consider addressing training and awareness in their AI usage policies. More specifically, employers should provide training to ensure employees are well-informed about the AI tools they’ll have available, the AI usage policy in general, and how the tools impact their roles and responsibilities. Employers should consider training managers on how the individuals they supervise should and shouldn’t be using AI, and on how managers can help to monitor for any usage of unapproved AI tools or for any misuse of approved AI tools. Training and awareness can help reinforce fairness, transparency, and accountability. Training can help ensure that employees remain vigilant about the potential for AI to produce inaccurate or incomplete information or perpetuate or magnify historical biases. Because of the pace of AI-driven technology developments and the evolving legal framework, it is important for organizations to routinely review and update training materials to stay current.

7. Vendor Guidelines: An AI usage policy can also establish guidelines for evaluating and selecting vendors, and outline responsibilities for maintaining compliance with the AI usage policy. Some vendors may impose their own limitations on the use of their AI products that may need to be incorporated or otherwise addressed by an employer’s AI usage policy.

8. Additional Guardrails: Employers should also consider including additional guardrails within the AI usage policy. Notably, an employer may designate certain point people who can approve AI use, and others who oversee AI usage and troubleshoot problems as they arise. Another possible guardrail is a section within the policy describing potential disciplinary actions for non-compliance. Employers should also strongly consider whether certain tools should be blocked by IT at the domain level to prevent employees from accessing them altogether.

One guardrail that is key to observe at all stages of AI selection, deployment, and use is human oversight. Everyone interacting with AI systems needs to appreciate the overwhelming importance of keeping a human in the loop when utilizing these systems at work. An effective AI policy should specify that AI tools, including generative AI tools, cannot be used to make a final decision of any kind, including but not limited to any business or employment decision, without independent human judgment and oversight.
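
For internal systems that consume AI recommendations, this human-in-the-loop requirement can be made structural rather than merely advisory. The Python sketch below shows one hypothetical way to do so; the `Recommendation` type and `record_decision` function are illustrative inventions, and the point is only that nothing is finalized without an identified human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str        # the matter the AI tool was asked about
    ai_suggestion: str  # the tool's proposed outcome
    rationale: str      # the tool's stated reasoning, for the reviewer

def record_decision(rec: Recommendation, reviewer: str | None,
                    approved: bool) -> dict:
    """Refuse to finalize any AI-suggested outcome without a named
    human reviewer, per the human-oversight requirement."""
    if not reviewer:
        raise PermissionError("AI output cannot become a final decision "
                              "without independent human review.")
    return {"subject": rec.subject, "outcome": rec.ai_suggestion,
            "approved": approved, "reviewed_by": reviewer}

rec = Recommendation("vendor selection", "Vendor B", "lowest projected cost")
decision = record_decision(rec, reviewer="jdoe", approved=True)
```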

9. Know the Rapidly Evolving Regulatory Landscape: Knowing the landscape allows employers to put safeguards in place ahead of time, so that when new laws go into effect they are already prepared. Because many U.S. states are looking to European and other international standards, it is also important to account for international AI developments, especially the European Union's AI Act. For instance, the Colorado AI Act was largely modeled on the EU AI Act, as was the bill being considered in Texas. In the absence of comprehensive federal AI legislation, state regulations are likely to continue proliferating, leading to further inconsistencies.

10. Understand the Interplay with Other Applicable Policies: Awareness of the inherent risks of AI usage is key to understanding the potential interplay between an AI usage policy and an employer's other policies, and to ensuring alignment among them. For example, algorithmic bias, or systematic errors that disadvantage individuals or groups based on a protected characteristic, is often cited as a leading concern for AI tools, especially in the recruiting context. Even generative AI tools designed to create images, videos, or music may be alleged to contribute to a hostile work environment. Thus, employers would be well served to cross-reference other applicable policies (e.g., anti-discrimination and anti-harassment policies) in their AI usage policies.

After the AI Usage Policy Is in Place

To ensure the guardrails are maintained, companies should conduct periodic audits of compliance with their AI usage policy. Employers should also continue to reinforce training and awareness around the policy after it is adopted. Because AI tools are being seamlessly integrated into existing software and devices, including computers and phones, in ways that may obscure the fact that the underlying technology is AI-driven, companies should cultivate awareness of the AI capabilities of the technology platforms they use in the workplace to avoid inadvertent or unknowing use of AI tools. In addition, employers should openly communicate how AI is used in the workplace to build trust, enhance credibility, and promote a deeper appreciation of its benefits. Without transparency, accountability, and clarity, even properly implemented AI may fail to deliver its full advantages.
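
Periodic audits are far easier when AI use leaves a trail. As a minimal sketch, assuming approved tools are reached through internal wrappers the employer controls, each invocation can be logged with who used which tool and for what purpose; the file name and field names below are hypothetical.

```python
import json
import time

AUDIT_LOG = "ai_usage_audit.jsonl"  # hypothetical append-only log file

def log_ai_use(user: str, tool: str, purpose: str) -> None:
    """Append one audit record per AI tool invocation so that periodic
    compliance reviews have concrete data to work from."""
    record = {"timestamp": time.time(), "user": user,
              "tool": tool, "purpose": purpose}
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("jdoe", "chatgpt-enterprise", "summarize meeting notes")
```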

Finally, employers should regularly review and update their AI usage policy to keep pace with evolving legal requirements and industry best practices. To continuously improve the policy, employers should strongly encourage feedback from employees and other stakeholders.

Conclusion

Properly tuned to an employer’s specific circumstances, the components above provide a strong initial framework for an AI usage policy. Each section needs to be appropriately tailored to address the specific issues that AI tools will present, and those issues will depend on the nature of the employer’s business. A clear and effective policy can enable employers to take advantage of the benefits that properly leveraged AI tools can provide while helping to mitigate risks and minimize potential liabilities that can arise from the use of those tools.  

Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.