AI in the Workplace – Challenges Lie Ahead

By Mary Moffatt

Artificial Intelligence (AI) is getting a lot of attention these days. On November 8, 2023, Hollywood actors reached an agreement to end their strike, which was driven largely by the threat of the industry using AI in lieu of live actors.

On October 30, 2023, President Biden signed an Executive Order aimed at establishing safety and security standards for AI. But the federal government’s attention to AI is nothing new. On January 1, 2021, the National Defense Authorization Act (NDAA) became law (enacted over President Trump’s veto), and it included the National AI Initiative Act of 2020 (NAIA).

WHAT IS AI? 

Perhaps like Justice Stewart’s famous comment regarding pornography (hard to define, but “I know it when I see it”), there is no single definition of AI. However, the NAIA defines artificial intelligence as follows:

“…a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—

(A) perceive real and virtual environments;

(B) abstract such perceptions into models through analysis in an automated manner; and

(C) use model inference to formulate options for information or action.”

15 U.S.C. § 9401(3).

That definition seems like a good place to start, but what does all this mean for employers? This article explores resources available to employers regarding AI in the workplace, as well as some of the risks AI presents.

EEOC TARGETS AI  

In 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative to address the use of software, including artificial intelligence, machine learning, and other technologies, in making hiring and other employment decisions. The aim of the initiative was to ensure that these tools, and the resulting employment decisions, did not violate the federal civil rights laws the EEOC enforces.

As part of its initiative, the EEOC issued a technical assistance document entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.”  https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence

In May of 2023, the EEOC released a second technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial

The EEOC makes several recommendations in these technical assistance documents, which are written in a Q&A format. For example, the EEOC asks:

 Q: Is an employer responsible under the ADA for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor? 

A: In many cases, yes. For example, if an employer administers a pre-employment test, it may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor.

EEOC, The ADA and Use of Software, etc., Question 3

The EEOC considers an individual to be “screened out” due to a disability when the disability prevents the person from meeting, or lowers their performance on, a selection criterion, and the individual loses a job opportunity as a result. Consider, for example, a chatbot used to engage in a “conversation” with a job applicant: if the applicant has a speech impediment the chatbot cannot discern, the result could be discrimination based on a disability. (See EEOC, Question 8.)

Another Q&A, this one from the Title VII technical assistance document:

Q: Can employers assess their use of an algorithmic decision-making tool for adverse impact in the same way that they assess more traditional selection procedures for adverse impact? 

A: As the Guidelines explain, employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is “substantially” less than the selection rate for individuals in another group. If use of an algorithmic decision-making tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, or on individuals with a particular combination of such characteristics (e.g., a combination of race and sex, such as for applicants who are Asian women), then use of the tool will violate Title VII unless the employer can show that such use is “job related and consistent with business necessity” pursuant to Title VII.

EEOC, Assessing Adverse Impact in Software, etc. Question 2 
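
The “substantially less” comparison the EEOC describes reduces to simple arithmetic. The Uniform Guidelines supply a rule of thumb, the “four-fifths rule”: a selection rate for one group that is less than 80% of the rate for the most-selected group is generally regarded as evidence of adverse impact, though the EEOC cautions this is only a rule of thumb and not a legal safe harbor. A minimal sketch in Python follows; the applicant counts are invented purely for illustration:

    # Four-fifths rule of thumb: flag possible adverse impact when one
    # group's selection rate is less than 80% of the highest group's rate.
    # All counts below are hypothetical.
    applicants = {"group_a": 100, "group_b": 100}   # applicants per group
    selected   = {"group_a": 60,  "group_b": 30}    # selections per group

    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")

Here group_b’s 30% selection rate is half of group_a’s 60% rate (a ratio of 0.50, well below 0.80), so use of the tool would warrant scrutiny even before any formal statistical analysis.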

THE DOJ, FTC, CFPB & EEOC

In April 2023, the Department of Justice, the Federal Trade Commission, the Consumer Financial Protection Bureau, and the EEOC issued a Joint Statement pledging to confront bias and discrimination in AI. https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems The Joint Statement included commitments from the agencies to monitor the development and use of AI and to protect individual rights under the laws each agency enforces.

As part of the release, Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division stated:

As…employers, and other businesses that choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result…This is an all hands on deck moment and the Justice Department will continue to work with our government partners to investigate, challenge, and combat discrimination based on automated systems. 

RECENT ACTIVITY

In May 2022, the EEOC brought suit against iTutorGroup, Inc., a China-based company that provides online English-language tutoring services to students in China. The EEOC alleged that the company’s use of AI violated the Age Discrimination in Employment Act (ADEA) because its software was intentionally programmed to automatically reject female applicants 55 or older and male applicants 60 or older. (Civil Action No. 1:22-cv-02565 (E.D.N.Y.)). As a result, over 200 qualified applicants in the United States were rejected based on their age. After several months of litigation, the matter was resolved by settlement, with iTutorGroup agreeing to pay $365,000 to be distributed to the rejected applicants.
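
To appreciate how little machinery such a screen requires, consider a hypothetical sketch of the kind of auto-reject rule alleged in that case. The field names and logic below are invented for illustration and are not drawn from the actual software at issue:

    from datetime import date

    # Hypothetical illustration only; all names and details are invented.
    AGE_CUTOFFS = {"female": 55, "male": 60}   # reject at or above these ages

    def age_on(dob: date, as_of: date) -> int:
        """Age in whole years as of a given date."""
        had_birthday = (as_of.month, as_of.day) >= (dob.month, dob.day)
        return as_of.year - dob.year - (0 if had_birthday else 1)

    def auto_screen(sex: str, dob: date, as_of: date) -> str:
        """Reject automatically, with no human review, based on age and sex."""
        cutoff = AGE_CUTOFFS.get(sex)
        if cutoff is not None and age_on(dob, as_of) >= cutoff:
            return "reject"
        return "advance"

    # A 57-year-old female applicant is rejected before anyone reviews her résumé.
    print(auto_screen("female", date(1966, 1, 15), date(2023, 6, 1)))  # reject

The point is that a facially trivial filter, run against every incoming application, can generate hundreds of rejections with no human in the loop.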

In another lawsuit, Derek Mobley filed suit against Workday, Inc., seeking to initiate a class action in the U.S. District Court for the Northern District of California. Mobley v. Workday, Inc., Case No. 4:23-cv-00770 (N.D. Cal. 2023). In the Complaint, Mobley alleges that Workday unlawfully offers an algorithm-based applicant screening system that determines whether an employer should accept or reject an application for employment based on the individual’s race, age, and/or disability. Workday has filed a Motion to Dismiss the Complaint on various legal grounds; the Motion is currently pending.

LEGISLATION

Several states have enacted legislation addressing the use of AI in the workplace. In Illinois, employers using AI analysis in video interviews must give applicants advance notice of how the AI tool works and what characteristics it will use to evaluate them.

In New York City, an employer may not use AI to screen candidates or employees unless (1) the tool has undergone a bias audit no more than one year prior to its use, (2) a summary of the most recent bias audit is made publicly available, and (3) notice of the AI use, and an opportunity to request an alternative selection process, are provided.
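
At its core, the arithmetic of such a bias audit is an “impact ratio”: each category’s selection rate divided by the selection rate of the most-selected category. The sketch below illustrates one common formulation; the categories and counts are hypothetical, and the NYC rules should be consulted for the required categories and reporting format:

    # Impact ratios: each category's selection rate divided by the
    # highest category's selection rate. All counts are hypothetical.
    audit_data = {
        # category: (applicants_screened, applicants_advanced)
        "category_1": (400, 120),
        "category_2": (300, 60),
        "category_3": (250, 75),
    }

    rates = {c: adv / total for c, (total, adv) in audit_data.items()}
    top = max(rates.values())

    for category, rate in sorted(rates.items()):
        print(f"{category}: selection rate {rate:.0%}, impact ratio {rate / top:.2f}")

A published audit summary would report figures of this kind by category, making low ratios (here, category_2’s 0.67) visible to candidates and regulators.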

Legislation addressing AI is pending in numerous states. Stay tuned for developments, and be careful to check the laws of each state in which your company does business to ensure compliance.

ACTION STEPS TO CONSIDER 

AI’s capabilities are vast. AI can conduct job interviews, with chatbots detecting favorable or unfavorable characteristics for “job fit”; scan résumés and prioritize applications using keywords; evaluate employees for promotion; select participants for a reduction in force; monitor employees; assess performance criteria; evaluate accommodation requests; and determine compensable time.

This article has only scratched the surface of the many ways that unchecked use of AI in a business can create liability. Whether the issue is extension of credit, trademark or copyright infringement, violation of privacy laws, or hiring and employment decision-making, businesses should consider the following steps to address these risks proactively:

  • Develop an internal team to assess if and where AI is being used (you may be surprised);
  • Be aware, and make sure others who may use AI are aware, that the Company may be liable for AI decision-making even if the tool was designed and administered by a third party;
  • Assess ways to minimize risk, and conduct audits of the AI program(s) to confirm the outcomes do not suggest a violation of applicable employment laws;
  • Ensure that those using AI are trained on the risks, and on the fact that “AI did it” is not a defense to those risks;
  • Consider ongoing assessments and third-party audits of AI programs and of employee and contractor policies;
  • Consider engaging counsel for internal policy development and best-practices guidance;
  • Review AI vendor contracts for protective provisions, such as verification of validation audits.

By taking a proactive approach to AI, employers can hopefully reduce these risks and avoid winding up lost in space thanks to AI, like Frank Poole in 2001: A Space Odyssey.

Mary C. Moffatt, Member 
Wimberly Lawson Wright Daves & Jones PLLC
Knoxville, Tennessee office
[email protected]