Artificial Intelligence (AI) and other automated decision-making tools are becoming increasingly common across all sectors and are used for a variety of purposes. These tools include anything that helps automate decision-making, such as generative AI, machine learning models, statistical models, and decision trees. Because of their growing popularity, particularly in hiring, the New Jersey Division on Civil Rights (DCR) has issued guidance on how the New Jersey Law Against Discrimination (LAD) applies to these technologies. The DCR notes that while not all automated decision-making tools cause discrimination, employers who are not careful in creating or using them may engage in unlawful discrimination that violates the LAD.
Under the LAD, employers are prohibited from engaging in discriminatory conduct during the hiring process. This includes making employment decisions based on protected characteristics, whether actual or perceived, such as race, religion, color, national origin, sexual orientation, pregnancy, gender identity, or disability. The new guidance from the DCR does not impose any new requirements on employers, but it clarifies that an employer can be liable under the LAD if it engages in Algorithmic Discrimination by creating or using AI or another tool that disadvantages applicants or employees based on protected characteristics.
Algorithmic Discrimination can appear in a few ways, including making or using an automated decision-making tool:
- with the intent to treat members of protected classes differently;
- that is biased;
- in a way that disadvantages certain groups; or
- that interferes with or fails to account for an individual’s reasonable accommodations.
For example, an AI tool might analyze applicant names to infer each applicant's race and then use that perceived race to rank the applicants' qualifications for the position. If an employer acts on the tool's decisions without further review, it could be liable for discrimination under the LAD regardless of whether it made the tool or intended to discriminate. Algorithmic Discrimination in the form of failing to account for or offer reasonable accommodations could include using a tool that is incompatible with certain software or external tools, such as the assistive technology an applicant relies on.
Algorithmic Discrimination does not require the employer to have developed the tool; simply using a tool in these ways can create liability. To limit their risk, employers should look for tools that are carefully designed to reduce bias and should review all recommendations made by automated decision-making tools.
StraightforWARD Legal Advice:
Employers must not engage in discrimination in the workplace, and this is particularly important when using AI and other automated decision-making tools. Employers using these tools must review both the tools themselves and the tools' recommendations to limit the possibility of discrimination. Employers should also train the employees who operate these tools on the risks of Algorithmic Discrimination, and should alert applicants that these tools are in use so that applicants can request appropriate accommodations if a tool cannot support them. An employer that fails to monitor and review its tools may be subject to liability under the NJ LAD.
If you have questions or concerns regarding employment discrimination, hiring practices, reasonable accommodation policies, or legal obligations in the workplace, please contact attorney Renee Harris at (215) 647-6616 or rharris@thewardlaw.com for guidance and support.