States Expand AI Hiring Rules as Federal Action Lags


In reaction to the rapid adoption of artificial intelligence (AI) in hiring and workforce management, states are racing to regulate AI-driven employment tools. AI advancements have far outpaced the development of a federal regulatory framework, leaving states to fill the void with an increasingly complex patchwork of laws that HR leaders must navigate now. As of spring 2026, HR leaders face a landscape in which multiple state and local jurisdictions impose distinct obligations around bias audits, impact assessments, employee notice, and anti-discrimination enforcement tied to algorithmic employment tools. For organizations that recruit or employ workers across state lines, understanding and operationalizing these requirements is no longer optional; it is a core compliance imperative.

No Federal AI Hiring Law Leaves States to Fill the Gap

While existing anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA), apply regardless of whether employment decisions are made by humans or algorithms, no federal law specifically addresses AI governance in the employment context. The Equal Employment Opportunity Commission (EEOC) has issued guidance clarifying that its enforcement authority extends to AI-driven hiring tools, but that guidance does not carry the force of new legislation.

An attempt at federal preemption surfaced in mid-2025, when the One Big Beautiful Bill Act initially included a proposed ten-year moratorium on state and local AI regulation. That provision faced bipartisan opposition, however, and was removed from the final bill. In December 2025, Executive Order 14365 directed the U.S. Department of Justice (DOJ) to establish an AI Litigation Task Force to challenge “burdensome” state AI laws, and the White House published a National AI Legislative Framework in March 2026 recommending broad federal preemption, but neither has yet produced binding legal change.

Since then, however, the limited April 2026 release of Anthropic’s Mythos model, which demonstrated an unprecedented ability to identify and exploit cybersecurity vulnerabilities in widely used software, has prompted the White House to reverse course on AI regulation. In early May 2026, the administration began circulating a draft executive order that would, for the first time, establish a federal review process for advanced AI models prior to public release; convene an interagency working group of government officials and industry leaders (including Anthropic, Google, and OpenAI) to design oversight procedures; and direct the White House cyber office, in coordination with the Department of Defense, to develop a mandatory safety-testing framework for models deployed by federal, state, and local agencies. While no policy has been finalized and any announcement is expected to come directly from the President, the shift marks a significant departure from the administration’s prior laissez-faire posture and signals that targeted federal AI oversight, at least for frontier models posing national security risks, is now actively under consideration.

How Three States Are Defining AI Rules for Employers

Three jurisdictions, Illinois, Colorado, and New York City, represent the leading edge of state and local AI employment regulation, though each takes a different approach.

Illinois HB 3773, effective January 1, 2026, amends the Illinois Human Rights Act (IHRA) to expressly prohibit employers from using AI that has the effect of subjecting employees or applicants to discrimination based on a protected class across the full employment life cycle, from recruitment through termination. The law’s broad definition of artificial intelligence encompasses any machine-based system that influences employment decisions. Critically, Illinois also requires employers to notify applicants and employees whenever AI is used in employment-related decisions. Illinois has been described as a “plaintiff’s blueprint” state because the law creates a private right of action tied to discriminatory AI use and failures to disclose. Unlike Colorado and New York City, Illinois does not require bias audits or impact assessments, relying instead on the state’s existing disparate impact enforcement framework. In early 2026, the Illinois Department of Human Rights (IDHR) released draft implementing regulations that detail the content, timing, and posting requirements for AI-use notices, impose a four-year recordkeeping obligation for AI-related notices and disclosures, and confirm that failure to provide the required notice is itself an IHRA violation; the draft rules remain subject to the formal rulemaking process.

Colorado’s Artificial Intelligence Act takes a governance-heavy approach to “deployers” of high-risk AI systems used for consequential employment decisions and imposes duties including the implementation of risk management policies, annual impact assessments, consumer-facing transparency disclosures, and notification to the state Attorney General within 90 days of discovering algorithmic discrimination. The Colorado Attorney General holds exclusive enforcement authority; there is no private right of action. The Act’s status, however, is in significant flux. On April 9, 2026, xAI filed suit in the U.S. District Court for the District of Colorado (X.AI LLC v. Weiser, No. 1:26-cv-01515) seeking declaratory and injunctive relief on First Amendment and other constitutional grounds, and the U.S. Department of Justice intervened in its first litigation effort targeting state AI regulation. On April 27, 2026, the federal court stayed enforcement pending the litigation and potential legislative revisions, leaving the Act unenforceable for the time being.

New York City Local Law 144, in effect since July 5, 2023, requires employers using an automated employment decision tool (AEDT) to conduct an independent bias audit before deployment, publish audit results, and provide candidates with notice, including a description of the job qualifications the AEDT will assess, at least ten business days before the tool is used. Enforcement lies with the New York City Department of Consumer and Worker Protection, and noncompliance can result in civil penalties.

Beyond these headline jurisdictions, the regulatory field is widening. Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), also effective January 1, 2026, prohibits developing or deploying AI with the intent to discriminate, though it limits liability to intentional discrimination and provides a 60-day cure period. Several other states, including California and New York, continue to consider expansions of their regulatory frameworks as well.

Litigation Signals the Stakes for AI in Hiring

The evolving regulatory environment is matched by a growing litigation docket. Mobley v. Workday, Inc., filed in the U.S. District Court for the Northern District of California, is the first major judicial test of whether an AI vendor, and not just an employer, can be held directly liable for algorithmic hiring bias under federal anti-discrimination statutes. In that case, the plaintiffs allege that Workday’s AI-powered applicant screening tools systematically discriminated against the named plaintiff and others similarly situated based on age, race, sex, and disability. A separate suit against AI hiring vendor Eightfold AI raises the novel theory that AI employment tools should be subject to the Fair Credit Reporting Act (FCRA). Whether employers and their AI vendors share joint and several liability will likely remain an ongoing issue in litigation, making it critical for all parties to document the measures they take to protect against bias.

Navigating this patchwork demands a proactive, well-documented compliance strategy. The regulatory trajectory is unmistakable: state and local oversight of AI in employment decisions is expanding rapidly, enforcement is intensifying, and the compliance window is closing. Employers should act now to: (i) inventory all AI and algorithmic tools used across the employment lifecycle, from sourcing through performance management; (ii) conduct and retain bias audits and impact assessments that satisfy the most stringent applicable standard; (iii) implement candidate and employee notice, consent, and accommodation protocols that meet emerging state requirements; (iv) negotiate vendor contracts that allocate risk through robust representations, audit rights, indemnification, and cooperation obligations; and (v) establish cross-functional governance spanning HR, legal, and IT with ongoing monitoring, documentation, and oversight.
