The California Fair Employment and Housing Council has published draft regulations governing the use of AI by employers and their staffing agencies to make hiring decisions. Because AI can screen out members of protected classes during the hiring process, companies that use AI hiring algorithms are responsible for making sure the software they use isn't illegally discriminating against any protected class.
To save time and resources, many companies use artificial intelligence (AI) algorithms in their hiring practices. AI recruiting software doesn't replace face-to-face interviews, but it helps narrow the candidate pool before your hiring team gets started. The software screens candidates' skills and behaviors, then loads those who meet your criteria into a database, from which you can reach out by phone or email to schedule a video chat or in-person interview.
Who is using AI-powered hiring platforms? Some big names like L'Oreal, Oracle, Unilever, and Wendy's are on the list. The AI can screen candidates by their facial expressions, body language, speech patterns, tone of voice, and even vocabulary level. That in itself can lead to problems. If you use AI hiring algorithms, are you sure the AI isn't inadvertently using discriminatory practices that screen out qualified candidates?
Government Actions Aim to Stop Discrimination
This summer, the Equal Employment Opportunity Commission and the U.S. Department of Justice issued guidance for human resources and hiring teams in government agencies and businesses to consider when using AI. The goal is to make sure AI algorithms comply with the Americans with Disabilities Act. While this guidance does not yet carry the force of law, it gives hiring teams a framework for preventing discrimination.
The first suggestion is that companies should make it apparent when AI is being used to evaluate applicants. It’s also important for a company to disclose what traits the algorithms are using for its evaluation process.
It's best to get used to these changes now, as an AI Bill of Rights is in the works. The framework for this bill was expected by June, but it has been delayed. While no one is certain when it will be ready, companies should start thinking about how AI factors into their hiring practices. If AI is being used to determine your standing for a job, you should be able to opt out if desired, have the right to correct inaccurate information, and be protected against biased data sets. If AI is impacting your civil liberties, you should be made aware of that fact.
The U.S. Equal Employment Opportunity Commission (EEOC) established the Artificial Intelligence and Algorithmic Fairness Initiative in 2021. It aims to help AI vendors, employers, and job applicants better understand how to address fairness issues in algorithms used for employment decisions, and to compile information on the adoption, design, and impact of AI hiring tools.
Discriminatory Practices With AI
AI must follow the same anti-discrimination rules as any human hiring team. This includes avoiding discrimination or bias against applicants based on their color, national origin, race, religion, or sex.
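For employers who want a rough numerical check on their screening tools, the EEOC's long-standing "four-fifths rule" is one common test for adverse impact: if a group's selection rate falls below 80% of the highest group's rate, that disparity may be evidence of discrimination. Below is a minimal sketch of that calculation; the group names and pass counts are hypothetical illustration data, not figures from any real system.

```python
# Minimal sketch of the EEOC "four-fifths rule" check for adverse impact.
# Group labels and counts below are hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold`
    (four-fifths) of the highest group's rate, with their ratios."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

outcomes = {
    "group_a": (48, 100),  # 48% pass the AI screen
    "group_b": (30, 100),  # 30% pass the AI screen
}
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it gets flagged.
print(adverse_impact(outcomes))
```

A check like this is only a starting point; a flagged ratio calls for a closer legal and statistical review, not an automatic conclusion of discrimination.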
Suppose the AI algorithm being used requires applicants to complete a pre-screening assessment. The test requires applicants to watch a video, listen to the speaker, and answer questions. There's no closed-captioning option, so an applicant with hearing loss has no way to access the content. That can be discriminatory, as no alternative was given to someone who cannot hear the speaker.
One way AI can discriminate is by failing to account for "reasonable accommodations." If the software automatically screens out someone with a disability, that's discriminatory; the applicant may be fully able to do the job if an accommodation is made. AI needs to factor in the accommodations that could help a qualified applicant perform the job.
When this happens, the applicant has a valid complaint. The AI eliminated the applicant's chances without even considering possible accommodations. That applicant may have been well qualified for the job but was turned down illegally, which could lead to a lawsuit against your company. Vision impairment is another common reason AI eliminates candidates, but on its own it isn't a valid one. That person may be fully qualified for the job with the help of braille materials or speech recognition software.
The same is true if AI rules out a person who has a stutter and doesn't answer a question in time during a video interview, or if a chatbot rules out an applicant because of a one-year gap in their work history. Had you interviewed that applicant personally, you might have learned they took the year off to care for a parent with Alzheimer's. That's clearly not a reason to turn an applicant down.
Employers should use software that doesn't immediately eliminate candidates who are unable to complete a pre-screening test because of a hearing disability. If you're hearing impaired and the audio portions of a test lack closed-captioning, you should be offered an alternative to that pre-screening test.
If you need extra time to read questions because of dyslexia and keep running out of time, that's discriminatory. Rigidly timed tests are unfair in that situation, and you should be offered an alternative, such as having the questions read aloud to you by the AI.
Did the pre-screening application ask questions about your age, gender, religion, or nationality? Those questions could be used to screen out specific groups, and that's discriminatory. You shouldn't have to answer any pre-screening questions that can introduce unconscious or implicit bias.
If you experienced issues of this nature but are certain you're well qualified for the job, talk to the company's HR department and ask why your application wasn't considered. You have the right to learn what prevented you from getting an interview.
If it's determined the software inadvertently ruled you out, you may be offered an interview. That's a good outcome, but the company should also use it as a chance to go back and correct the software's discriminatory algorithm so it doesn't happen again. The HR or hiring team should then review the other eliminated candidates to make sure they weren't erroneously screened out for the same reason you were.
What do you do if you believe AI has affected your rights and the hiring team doesn't seem to care? Contact an employment law specialist for advice. You shouldn't lose out on a job because of errors or discriminatory practices in an AI hiring algorithm. Shegerian Conniff is a highly recommended employment law firm that specializes in workplace discrimination. Reach out to the team to schedule a free consultation.