The Federal Government Removed Its AI Hiring Guidance. Four States Acted.


In January 2025, the EEOC removed its AI employment guidance from eeoc.gov. Several outlets reported this at the time — Bloomberg Law on February 20, 2025, KMK Law in a detailed analysis shortly after, and K&L Gates in a January 31, 2025, article in the National Law Review. A separate February 7, 2025, National Law Review article covered the broader rollback of Biden-era AI policies.

That initial reporting was brief and reactive. The EEOC removed some pages. It was noted. The news cycle moved on.

Over a year later, we went back to check. The pages are still down. And in the time since they were removed, four states have enacted AI employment laws, a landmark AI vendor-liability lawsuit has cleared class certification, and the current EEOC chair has publicly outlined enforcement priorities that do not include AI.

This article is a systematic look at where things stand now — not what was removed in early 2025, but what has and has not been built in its place.

What We Verified

On March 21, 2026, we visited eeoc.gov/ai. The page returned “The requested page could not be found.” We visited eeoc.gov/ai-and-algorithmic-fairness-initiative. Same result.

The Wayback Machine confirms that eeoc.gov/ai existed: 158 archived captures span June 11, 2022, to March 13, 2026. The most recent snapshot showing the page live with full content is from December 23, 2024. The page title was “Artificial Intelligence and Algorithmic Fairness Initiative,” and it contained the technical assistance documents, listening session records, joint enforcement statements, and links to guidance that the agency had published since launching the initiative in October 2021.

All of that content is gone.

One page technically survives: “Artificial Intelligence and the ADA.” But it is functionally a shell. The detailed ADA guidance document it links to — “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” — returns a 404. The “Tips for Workers” resource returns a 404. The link to the Algorithmic Fairness Initiative returns a 404. What remains is a landing page with broken links and a YouTube video.

The EEOC’s technical guidance document — formally titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” — is no longer accessible on the agency’s website.

What the Law Still Says

This is the most important distinction in this entire story: removing guidance does not repeal law.

Title VII of the Civil Rights Act of 1964 (42 U.S.C. Section 2000e et seq.) remains fully in force. It prohibits both disparate treatment and disparate impact in employment. It applies to AI hiring tools the same way it applies to any other selection procedure.

The Uniform Guidelines on Employee Selection Procedures (UGESP), codified at 29 C.F.R. Part 1607 and adopted jointly by the EEOC, DOL, DOJ, and OPM in 1978, also remain in force. UGESP requires that any selection procedure producing an adverse impact be validated as job-related and consistent with business necessity. “Selection procedure” is defined broadly enough to encompass AI-driven screening tools.
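UGESP's adverse-impact analysis is conventionally operationalized with the “four-fifths rule” from 29 C.F.R. § 1607.4(D): a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that arithmetic, using illustrative group names and counts (not data from any real audit), looks like this:

```python
# Hedged sketch of the UGESP "four-fifths rule" (29 C.F.R. 1607.4(D)).
# Group labels and counts below are illustrative only.

def selection_rates(applicants: dict[str, int], hires: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flags_adverse_impact(ratios: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, r in ratios.items() if r < threshold]

# Illustrative numbers only.
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 30}

rates = selection_rates(applicants, hires)   # group_a: 0.30, group_b: 0.20
ratios = impact_ratios(rates)                # group_b: 0.20 / 0.30 ≈ 0.667
print(flags_adverse_impact(ratios))          # ['group_b']
```

A ratio below 0.8 is a screening signal, not a legal conclusion: under UGESP, a flagged procedure must then be validated as job-related and consistent with business necessity, and agencies and courts also consider statistical significance and sample size.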

The EEOC’s own Employment Tests and Selection Procedures fact sheet — which references UGESP and explains how it applies to employer testing — is still live on eeoc.gov as of March 2026.

And the EEOC’s Strategic Enforcement Plan for FY 2024-2028 — also still live — explicitly identifies “technology-related employment discrimination” as an enforcement priority, including “the use of software that incorporates algorithmic decision-making or machine learning, including artificial intelligence.” As Cooley LLP noted in February 2025, this plan can only be modified by a quorum vote of commissioners.

What was removed were the technical assistance documents that explained how these laws apply specifically to AI. Those documents were non-binding — they did not create new legal obligations. They explained existing ones. The obligations remain. The explainer is gone.

What Has Changed: The State Patchwork

While the federal explainer was being removed, four states enacted AI employment laws. Each uses a different legal standard for liability.

California finalized FEHA regulations on automated decision systems, effective October 1, 2025. The regulations apply a disparate impact framework to any computational process that makes or facilitates employment decisions. Employers cannot use an automated decision system that discriminates on the basis of a protected characteristic, whether through disparate treatment or disparate impact. Evidence of anti-bias testing is expressly relevant to discrimination claims. The regulations extend liability to AI vendors as employer “agents.” Employers must retain records related to automated decision systems for four years.

Illinois amended the Illinois Human Rights Act through HB 3773 (Public Act 103-0804, effective January 1, 2026). The standard is disparate impact — it is a violation if an employer uses AI “that has the effect of” discriminating on the basis of a protected class. Employers must notify employees and applicants that AI is being used. Unlike the other three states, Illinois provides a private right of action through the Illinois Human Rights Commission. Penalties can reach $70,000 per violation for repeat offenders.

Texas enacted the Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149, effective January 1, 2026). The Texas standard is fundamentally different. Under Section 552.056, it is unlawful to develop or deploy an AI system “with the intent to unlawfully discriminate against a protected class in violation of state or federal law.” This is a general prohibition — not limited to employment — but it applies to employment among other contexts. Section 552.056(c) explicitly states that “a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.” An employer could have an AI tool producing discriminatory outcomes in Texas and face no liability under state law if intent cannot be shown.

Colorado enacted SB 24-205 (Consumer Protections for Artificial Intelligence, effective June 30, 2026, after being delayed from February 1, 2026, by SB 25B-004). Colorado uses a “reasonable care” standard for deployers of high-risk AI systems making “consequential decisions” — a category that includes employment alongside lending, healthcare, housing, insurance, education, and legal services. Deployers must implement risk management programs, complete impact assessments, and provide consumer notices. The law also creates an affirmative defense for businesses following the NIST AI Risk Management Framework.

Four states. Four legal theories: disparate impact with vendor liability (California), disparate impact with private right of action (Illinois), intent-only (Texas), and reasonable care with affirmative defense (Colorado). An employer using AI hiring tools across these states must satisfy all four simultaneously, with no federal framework to unify the approach.

What Is Happening in the Courts

While state legislatures are building divergent frameworks and federal guidance has been removed, the judiciary is moving on its own.

In Mobley v. Workday, Inc. (N.D. Cal., Case No. 23-cv-00770), a federal court is testing a question no legislature has answered: whether an AI software vendor — not the employer that uses the tool, but the company that built it — can be held liable for discriminatory hiring outcomes.

Derek Mobley filed suit alleging that Workday’s AI-powered applicant screening system discriminated against him on the basis of race, age, and disability. He was not employed by Workday. He applied to over 100 jobs through employer portals using the Workday platform and was consistently screened out. He alleges that Workday functions as an “agent” of its employer-clients and that its AI tools incorporate and perpetuate historical patterns of discrimination.

In July 2024, the court denied Workday’s motion to dismiss. In May 2025, the court granted conditional certification of ADEA claims for a nationwide collective of applicants age 40 and older. In court filings, Workday disclosed that 1.1 billion applications were rejected using its software — meaning the certified collective could include hundreds of millions of members.

The EEOC itself filed an amicus brief in the case in April 2024, supporting the plaintiff’s legal theories. That brief was filed before the agency removed its AI guidance.

Separately, the enforcement track record for existing AI hiring laws is not encouraging. A December 2025 audit by the New York State Comptroller examined enforcement of NYC Local Law 144 — the nation’s first law requiring bias audits of AI hiring tools, in effect since July 2023. The audit found that the city’s Department of Consumer and Worker Protection received only two complaints in the entire two-year period. When auditors reviewed 32 employer websites, they identified at least 17 instances of potential non-compliance that DCWP had missed.

The Current Enforcement Picture

EEOC Chair Andrea Lucas has publicly outlined her enforcement priorities. They include combating “unlawful DEI-motivated race and sex discrimination,” protecting American workers from “anti-American national origin discrimination,” and defending “the biological and binary reality of sex.” AI-related employment discrimination does not appear among her stated priorities.

The removal of the AI guidance pages appears to be part of the broader rescission of Biden-era policies following Executive Order 14179. It was not framed as a standalone EEOC policy decision on AI enforcement. As KMK Law reported in February 2025, the EEOC, under new leadership, also began reviewing its guidelines on whether employers can be held responsible under Title VII for discriminatory AI tools.

The Strategic Enforcement Plan listing AI as a priority remains in force, unchanged on the EEOC’s website. Whether it will translate into enforcement activity under the current leadership is an open question the agency has not addressed publicly.

What Employers Should Do

The federal guidance vacuum is real, but the legal obligations have not changed. Here is what matters for employers using AI in hiring and other employment decisions.

The law did not change. Title VII’s disparate impact prohibition and UGESP’s validation requirements apply to AI-driven selection procedures regardless of whether the EEOC publishes guidance explaining how. The guidance was helpful. Its absence does not create a safe harbor.

Document everything about every AI tool you use in employment decisions. What the tool does, what data it uses, what outputs it produces, and how those outputs influence actual decisions. This is the foundation under every state standard and remains the baseline expectation under Title VII.

Conduct vendor due diligence. The Mobley v. Workday litigation underscores that the relationship between employers and AI vendors carries legal weight. Ask every AI vendor what bias testing they perform, what demographic data they use, what their impact ratio results show, and what they will provide if a regulator or court asks. Get the answers in writing.

Know which state laws apply to you. California, Illinois, Texas, and Colorado each impose distinct obligations on employers using AI. These are not interchangeable. Illinois allows individuals to file complaints. Texas requires proof of intent. California extends liability to vendors. Colorado creates an affirmative defense for following NIST. If you operate across state lines, you need a compliance program that addresses each state’s standards specifically. Our analysis of how to build a unified governance framework across all enacted state AI laws walks through the practical approach.

Use the NIST AI Risk Management Framework as a portable technical anchor. The NIST AI RMF is voluntary, not law — but it is the closest thing to a neutral federal technical standard that still exists. Colorado’s law references it explicitly. Texas’s TRAIGA references NIST-aligned risk management. Building your AI governance around a recognized framework gives you a defensible methodology to point to regardless of jurisdiction. See our NIST AI RMF compliance guide for a practical breakdown.

Watch Mobley v. Workday. If the court ultimately holds that an AI vendor can be liable as an employer’s “agent” for discriminatory outcomes, it will reshape how every company in the country evaluates its AI hiring tools and vendor relationships. The conditional certification of a class that could include hundreds of millions of applicants means the stakes are not theoretical.

The information gap left by the EEOC’s removed guidance is not going to be filled by the federal government any time soon. What fills it instead is the combination of your documentation, your audit records, your vendor agreements, your state-law compliance materials, and your awareness that the legal obligations never went away — even when the explainer did.


