Briefly Put: The Evolving Legal Framework of AI-Based Employment Decisions
February 18, 2026
Employers are increasingly utilizing AI-powered tools to screen resumes, analyze video interviews, and predict a candidate’s potential for success within the organization. While these tools offer unprecedented efficiency and data-driven insights, they also carry substantial legal risks, which are explored in more detail below.
Kistler et al. v. Eightfold AI Inc.: The Privacy & Transparency Battle
The Allegations: Filed in early 2026, this class-action lawsuit takes a new approach to challenging the use of AI in employment decisions. Rather than focusing on discrimination, the plaintiffs allege that Eightfold AI (“Eightfold”) creates “consumer reports” to evaluate job applicants without complying with longstanding federal and state requirements. Specifically, the plaintiffs claim that Eightfold’s AI technology searches the internet to build secretive and unreliable third-party reports that are used to make employment decisions without an applicant’s knowledge or consent.
Key Legal Issues: The core question is whether Eightfold acts as a “consumer reporting agency.” If so, it is subject to the federal Fair Credit Reporting Act (“FCRA”) and the California Investigative Consumer Reporting Agencies Act. Under these laws, companies must provide applicants with a clear, written disclosure that a consumer report will be obtained for employment purposes, obtain written authorization before procuring the report, and allow applicants to dispute inaccuracies in the report.
Impact on Employers: If the court finds that AI vendors like Eightfold are consumer reporting agencies, companies using AI technology to create such reports would be required to comply with the FCRA and similar state laws. This could fundamentally change how automated “knockout” criteria are used.
Mobley v. Workday, Inc.: Challenging the “Agent” of Discrimination
The Allegations: The plaintiff, an African American applicant over age 40, filed this suit alleging that Workday Inc.’s (“Workday”) AI-based applicant recommendation system discriminated against job applicants on the basis of race, age, and disability. The plaintiff alleges that he was rejected from over 100 positions with companies using Workday, sometimes within minutes, despite being qualified or over-qualified for the jobs. The plaintiff alleges the AI relies on biased training data that reinforces and exacerbates historical discrimination against older and non-white applicants.
Key Legal Issues: This is a landmark case because it addresses “agent” liability. Workday argued it is merely a software provider and not an “employer.” However, the court ruled that a vendor can be held liable as an agent if an employer delegates traditional hiring functions to the software. In May 2025, the judge preliminarily certified the lawsuit as a nationwide collective action.
Impact on Employers: This case shatters the defense that responsibility lies solely with the vendor or solely with the employer. It warns service providers and employers alike that they can be held liable for the decisions made by their software. Courts are now willing to view an AI tool as a “unified policy” of the company, requiring employers to perform regular bias audits rather than rubber-stamp an algorithm’s recommendations.
Harper v. Sirius XM Radio, LLC: The “Proxy” Discrimination Battle
The Allegations: Filed in August 2025, this case involves an African American IT professional who applied for approximately 150 positions at Sirius XM Radio, LLC (“Sirius XM”) and was rejected for all but one position, despite meeting or exceeding the qualifications needed for most of them. The plaintiff alleges that the company’s AI screening tool used data points like zip codes, educational institutions, and employment history as “proxies” for race, effectively filtering out African American candidates under the guise of neutral criteria.
Key Legal Issues: The case brings disparate impact and disparate treatment theories to the forefront of AI-based hiring litigation, asking whether Sirius XM engaged in a systematic pattern of discrimination against African Americans. Unlike Mobley, which targets the vendor, Harper focuses squarely on the employer’s choice to use a tool that relies on non-job-related variables correlating strongly with protected characteristics.
Impact on Employers: This case serves as a warning against “proxy variables.” Employers can no longer assume a tool is safe simply because it does not ask for “race” or “age.” It further underscores the necessity of “job-relatedness” for every data point the AI evaluates.
Illinois Law Regulating AI in Employment Decisions
As of January 1, 2026, Illinois stands as one of the most strictly regulated jurisdictions in the country for the use of automated decision-making in employment. Illinois now prohibits the use of AI that results in a disparate impact on protected classes and explicitly bans the use of zip codes as a proxy for race or other protected characteristics. It also mandates that employers provide clear, timely notice to applicants and employees whenever AI is used to influence employment decisions, including hiring, promotions, discipline, and discharge. These requirements build upon Illinois’ Artificial Intelligence Video Interview Act, which continues to require prior consent and transparency for any AI analysis of video interviews. Together, these laws shift the legal burden to employers to ensure their digital tools are both transparent and bias-free.
The message for employers is clear: efficiency does not excuse liability. Between the transparency demands in Kistler, the agency theory in Mobley, and the proxy warnings in Harper, employers are now being held to the same standards for their software as they are for their human managers.
The information in this newsletter is provided for educational purposes only and does not constitute legal advice or create an attorney–client relationship.
