HR Tip: EEOC warns employers of disability bias risks in AI

Employers should review their artificial intelligence tools to ensure they are not violating the Americans with Disabilities Act (ADA), according to new guidance released by the Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ).

Employers can begin by inventorying all AI or algorithm-based tools used for HR functions and ensuring that they do not disadvantage workers or applicants with disabilities or members of other protected groups. Such tools can inadvertently violate the ADA in several ways. For example, a tool might exclude someone with a vision disability because they failed to make good eye contact or could not complete an online game that measures memory, or a chatbot might screen out an applicant who says they cannot stand for 30 minutes without giving the applicant the chance to explain that they can do the work from a wheelchair.

Employers should not rely on AI exclusively; alternative tests should be available to determine whether applicants or workers can perform the essential functions of the job. The DOJ guidance explains, “If a test or technology eliminates someone because of disability when that person can actually do the job, an employer must instead use an accessible test that measures the applicant’s job skills, not their disability, or make other adjustments to the hiring process so that a qualified person is not eliminated because of a disability.”

Another way AI or algorithmic tools can violate the ADA is by requiring applicants or employees to provide information about disabilities or medical conditions; this may result in prohibited disability-related inquiries or medical exams.

The EEOC document makes clear that employers are responsible under the ADA for their use of these tools even if the tools are administered by another entity, such as a vendor. It suggests that employers ask the vendor whether the tool was developed with individuals with disabilities in mind, and it provides a list of possible questions about how the tool was developed. Employers need to understand how the tools work, what they measure, and how they may affect different employees.

The document also says the tool should indicate the availability of reasonable accommodations, provide clear instructions for requesting them in advance of the assessment, and be transparent, giving all applicants and employees as much information about the tool as possible. Applicants should have sufficient notice that they may need to request an accommodation.

Cities and states are also starting to regulate the use of AI tools. Colorado bans insurers from using AI that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. Illinois enacted the Artificial Intelligence Video Interview Act, which mandates notice, consent, sharing, deletion, and reporting obligations for employers that “use[] an artificial intelligence analysis of … applicant-submitted videos” in the hiring process. Maryland prohibits employers from using facial recognition technology during pre-employment job interviews without the applicant’s consent.

A New York City law that takes effect in 2023 bans AI use in making employment decisions unless the technology has first been subject to a “bias audit.” California’s Fair Employment & Housing Council also released draft modifications to employment regulations regarding “automated-decision systems” in March.
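To give a concrete sense of what a “bias audit” can involve, the short Python sketch below shows one common piece of adverse-impact arithmetic: computing each group’s selection rate and comparing it to the highest group’s rate, with the EEOC’s four-fifths (80%) rule of thumb as the comparison point. The groups, numbers, and threshold here are hypothetical illustrations only; the actual audit methodology is defined by the New York City law and its implementing rules, not by this example.

    # Illustrative sketch of adverse-impact arithmetic (hypothetical data).
    # A real bias audit under the NYC law follows its own defined methodology.

    # Hypothetical hiring outcomes per group: (number selected, number of applicants)
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }

    # Selection rate for each group
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}

    # Impact ratio: each group's rate relative to the highest group's rate
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "flag for review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

Running this on the hypothetical numbers flags group_b, whose impact ratio (0.63) falls below the four-fifths threshold, illustrating the kind of disparity an audit is designed to surface before a tool is used in hiring decisions.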