NYC Local Law 144 Explained: AI Bias Audit Requirements for Employers and Vendors
TL;DR:
- NYC Local Law 144 requires annual independent bias audits for any AI tool that “substantially assists” hiring or promotion decisions — and it’s been in effect since July 2023.
- A December 2025 Comptroller audit found DCWP enforcement “ineffective,” signaling stricter crackdowns ahead.
- Penalties run $500–$1,500 per violation per day — and states like Illinois, Colorado, and New Jersey are building on NYC’s model.
NYC Local Law 144 is the most mature AI hiring regulation in the United States. It’s been enforceable since July 5, 2023, and if your organization uses automated tools to screen resumes, rank candidates, or evaluate video interviews for NYC-based roles, you’re covered.
But here’s the catch: a December 2025 audit by New York State Comptroller Thomas DiNapoli found that the agency tasked with enforcement — the NYC Department of Consumer and Worker Protection (DCWP) — has been doing a poor job of it. The Comptroller’s team reviewed 32 companies and found 17 instances of potential non-compliance that DCWP missed. DCWP found just one.
That gap won’t last. The audit’s 13 recommendations are pushing DCWP toward proactive enforcement, and other states are watching NYC’s model closely. If you’ve been treating LL 144 as optional, 2026 is the year that changes.
What Counts as an AEDT Under Local Law 144?
The law defines an Automated Employment Decision Tool (AEDT) as:
“Any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” — NYC Local Law 144 of 2021
Two critical phrases here:
“Substantially assist or replace.” If a human makes the final call but heavily relies on the tool’s score or ranking to narrow the pool, that’s covered. The tool doesn’t need to make the decision autonomously.
“Simplified output.” A score, classification, or recommendation. If your ATS ranks candidates on a 1–100 scale, flags them as “strong/weak,” or auto-advances the top tier — that’s simplified output.
What’s Covered (and What Isn’t)
| Covered | Not Covered |
|---|---|
| AI resume screening that scores or ranks candidates | Job board search algorithms |
| Video interview analysis (behavioral/sentiment) | Calendar scheduling tools |
| Chatbot assessments that score/classify | Simple keyword matching filters without scoring |
| ML-based candidate matching with ranked output | HR analytics dashboards (no individual decisions) |
| Automated skills testing with pass/fail classification | Background check services (regulated separately) |
The key test: does the tool produce a score, classification, or recommendation that substantially assists a hiring or promotion decision? If yes, it’s an AEDT under LL 144.
The Three Core Requirements
1. Annual Independent Bias Audit
Before using an AEDT — and annually thereafter — employers must have it audited by an independent auditor. The law doesn’t define “independent” beyond requiring the auditor not be the tool’s developer, but the DCWP final rules effective July 5, 2023, clarify the audit must be “an impartial evaluation.”
What the audit must include:
- Disparate impact testing across the EEO-1 Component 1 categories: race/ethnicity and sex, including intersectional combinations
- Impact ratio calculations — the selection rate (or scoring rate) for each demographic group divided by the rate for the most-selected group
- The audit can use historical data from the AEDT’s actual use or test data if historical data isn’t available
- Categories representing less than 2% of the audit data may be excluded from impact ratio calculations at the auditor’s discretion
The impact ratio mirrors the EEOC’s four-fifths rule: an impact ratio below 0.80 (80%) for any group signals potential adverse impact.
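As a quick illustration of that threshold, here is a minimal sketch with hypothetical selection rates (not real audit data):

```python
# Hypothetical selection counts for two groups (illustrative only).
group_a_selected, group_a_total = 60, 100  # selection rate 0.60 (most-selected group)
group_b_selected, group_b_total = 42, 100  # selection rate 0.42

rate_a = group_a_selected / group_a_total
rate_b = group_b_selected / group_b_total

# Impact ratio: each group's rate divided by the most-selected group's rate.
impact_ratio = rate_b / rate_a
print(round(impact_ratio, 2))  # 0.7 -- below 0.80, signaling potential adverse impact
```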
Critical nuance: The law does not require specific remedial action based on audit results. DCWP has confirmed that LL 144 “does not require any specific actions based on the results of the bias audit.” But publishing results that show disparate impact creates ammunition for discrimination lawsuits under Title VII, the New York City Human Rights Law, and state anti-discrimination statutes.
2. Public Disclosure of Audit Results
Employers must publish a summary of the most recent bias audit on their website. The summary must include:
- Date of the most recent bias audit
- Distribution date of the AEDT
- Source and explanation of data used
- Number of individuals in the “unknown” demographic category
- Selection rates, scoring rates, and impact ratios for all categories
This is a transparency mechanism, but it’s also a legal exposure point. Publishing impact ratios below 0.80 doesn’t violate LL 144 — but it’s discoverable evidence in a disparate impact lawsuit.
3. Candidate and Employee Notice
Before using an AEDT, employers must notify affected individuals by:
- Posting notice on the employment section of their website at least 10 business days before use
- Providing written notice to the candidate (email is acceptable)
- Identifying the job qualifications and characteristics the AEDT assesses
- Informing candidates of their right to request an alternative selection process or reasonable accommodation
Candidates who request an alternative must be provided one. The law doesn’t specify what the alternative looks like, but it must be a genuine option — not just re-running the same tool.
Penalties: The Math That Gets Expensive
LL 144 penalties are enforced by DCWP:
- First violation: Up to $500
- Same-day additional violations: Up to $500 each
- Subsequent violations: $500–$1,500 each
Each day an AEDT is used without a required bias audit or proper notice constitutes a separate violation. If you’re running a non-compliant AEDT against 100 candidates per day for 30 days, the math gets large fast.
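To make that concrete, here is a hedged sketch of the exposure under one simple reading: a single violation per day, with the first capped at $500 and subsequent violations at $500–$1,500 each. How violations are actually counted (per day, per candidate, per requirement) is fact-specific, so `penalty_range` is an illustrative assumption, not a legal calculation.

```python
def penalty_range(days: int, first: int = 500,
                  later_min: int = 500, later_max: int = 1500) -> tuple[int, int]:
    """Return (low, high) exposure assuming one violation per day of
    non-compliant use: first violation up to $500, later ones $500-$1,500."""
    if days <= 0:
        return (0, 0)
    low = first + later_min * (days - 1)
    high = first + later_max * (days - 1)
    return (low, high)

print(penalty_range(30))  # (15000, 44000)
```

Thirty days of non-compliance lands between $15,000 and $44,000 even under this conservative one-violation-per-day assumption; a per-candidate count would multiply those figures.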
The Comptroller’s audit noted that DCWP received only two AEDT complaints during the entire July 2023–June 2025 period. But the audit also revealed that DCWP’s complaint routing was fundamentally broken: when auditors made 12 test calls to NYC’s 311 system, only 3 of 12 calls (25%) were correctly routed to DCWP. Eight were misdirected to the NYS Department of Labor. One was sent back to the employer.
Low complaint volume doesn’t mean low non-compliance. It means the enforcement mechanism hasn’t been working.
The Comptroller’s Audit: What It Means for 2026
The December 2025 Comptroller audit is the most important LL 144 development since the law took effect. Key findings:
| Finding | Detail |
|---|---|
| Enforcement rated | “Ineffective” |
| Audit period | July 2023 – June 2025 |
| Companies reviewed by DCWP | 32 |
| Non-compliance found by DCWP | 1 instance |
| Non-compliance found by Comptroller | 17 instances in the same 32 companies |
| AEDT complaints received | Only 2 in two years |
| 311 test calls correctly routed | 3 of 12 (25%) |
The Comptroller made 13 recommendations, including fixing complaint routing, conducting proactive compliance reviews beyond complaint-based enforcement, and actually using the technical resources the NYC Office of Technology and Innovation (OTI) created for DCWP.
DCWP agreed to adopt the majority of recommendations. Translation: enforcement is about to get more aggressive.
What this means for you: The window of lax enforcement is closing. If you’ve been relying on the fact that DCWP hasn’t been actively policing LL 144, the Comptroller just put a spotlight on that gap. Budget for compliance now.
How to Conduct a Bias Audit: Step by Step
Step 1: Determine If You Have an AEDT
Map your hiring technology stack. For each tool, ask:
- Does it use ML, statistical modeling, or AI?
- Does it produce a score, classification, or recommendation?
- Does it substantially assist or replace a hiring/promotion decision?
If all three are yes, it’s an AEDT.
Step 2: Engage an Independent Auditor
The auditor must be independent of the AEDT developer. Options include:
- Third-party audit firms specializing in algorithmic audits (BABL AI, Holistic AI, DCI Consulting, etc.)
- Consulting firms with AI audit practices (Deloitte, PwC, etc.)
- Academic researchers with relevant expertise
Budget: Third-party bias audits typically run $10,000–$75,000+ depending on the tool’s complexity and the auditor’s depth of analysis.
Step 3: Prepare Data
The auditor needs either:
- Historical data: Actual candidate data with demographic information and AEDT outcomes (selection/scoring)
- Test data: If insufficient historical data exists, synthetic or representative test data
The biggest practical challenge: demographic data gaps. Race, ethnicity, and sex disclosure is voluntary, so your historical data likely has significant missing values — especially for candidates who dropped off early in the process.
Step 4: Calculate Impact Ratios
For each EEOC category (race/ethnicity intersected with sex):
For selection-rate AEDTs (pass/fail, advance/reject):
Impact Ratio = (Selection rate of group) ÷ (Selection rate of most-selected group)
For scoring-rate AEDTs (continuous scores):
Scoring Rate = Proportion of group scoring above the median
Impact Ratio = (Scoring rate of group) ÷ (Scoring rate of highest-scoring group)
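The Step 4 calculation for a scoring-rate AEDT can be sketched as follows. Group names and scores here are hypothetical placeholders, and pooling all scores to take a single overall median is one common reading of the rule:

```python
from statistics import median

# Hypothetical per-group scores from an AEDT (illustrative only).
scores_by_group = {
    "group_a": [72, 81, 90, 65, 88, 77],
    "group_b": [58, 74, 69, 83, 61, 70],
}

# Median taken over all scores pooled together.
all_scores = [s for scores in scores_by_group.values() for s in scores]
overall_median = median(all_scores)

# Scoring rate: proportion of each group scoring above the overall median.
scoring_rates = {
    g: sum(s > overall_median for s in scores) / len(scores)
    for g, scores in scores_by_group.items()
}

# Impact ratio: each group's scoring rate over the highest group's rate.
top_rate = max(scoring_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in scoring_rates.items()}

for g, ratio in impact_ratios.items():
    flag = "potential adverse impact" if ratio < 0.80 else "ok"
    print(f"{g}: impact ratio {ratio:.2f} ({flag})")
```

With these numbers the overall median is 73, group_a scores above it at a rate of 4/6 and group_b at 2/6, so group_b's impact ratio of 0.50 falls well below the 0.80 threshold.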
Step 5: Publish and Notify
- Post the audit summary on your careers/employment page
- Update candidate communication templates to include AEDT disclosure
- Document the alternative selection process for candidates who opt out
Step 6: Set the Annual Calendar
Mark the audit anniversary. The next audit must be completed within one year of the prior audit — before you continue using the AEDT.
What Other States Are Copying From NYC
LL 144 is the template for a wave of state AI hiring laws:
Illinois (HB 3773 — effective January 1, 2026): Amends the Illinois Human Rights Act to prohibit employers from using AI in ways that produce discriminatory effects. Requires notice to employees and candidates when AI is used in employment decisions. Enforced through the Illinois Department of Human Rights — which means administrative complaints and potential civil rights investigations, not just DCWP-style fines.
Colorado (SB 24-205 — effective June 30, 2026): The broadest state AI law yet. Covers all “high-risk AI systems” used in “consequential decisions” including employment, lending, insurance, and housing. Requires impact assessments, consumer notification, and a risk management program. Attorney General has exclusive enforcement authority.
New Jersey (Assembly Bill 4909 — proposed): Would require annual bias audits of AEDTs before they can be sold in the state — notably targeting vendors, not just employers. Mirrors LL 144’s disparate impact analysis framework.
New York State (Assembly Bill A00567 — proposed): Would expand LL 144’s approach statewide, requiring bias audits of AEDTs used anywhere in New York.
The pattern is clear: NYC was the pilot. These states are scaling it. If you’re a multi-state employer, building a single algorithmic fairness audit framework that satisfies the strictest requirements now saves you from playing catch-up in every jurisdiction.
Common Compliance Gaps (and How to Fix Them)
Gap 1: “We don’t think our tool is an AEDT.” Many employers argue their ATS isn’t covered because a human makes the final decision. The “substantially assist” standard is broad. If the tool’s ranking materially influences who gets interviews, it’s likely covered. Get a legal opinion — don’t self-determine.
Gap 2: “We published the audit but didn’t update the notice.” The audit and the notice are separate requirements. You need both: published audit results AND individual candidate notification. Many employers post the audit but forget the 10-business-day advance notice and opt-out mechanism.
Gap 3: “Our vendor handles compliance.” LL 144 applies to employers and employment agencies, not vendors. Even if HireVue or Pymetrics conducts their own bias audit, the employer using the tool in NYC has an independent obligation to ensure compliance. Vendor audits can satisfy the requirement, but the employer must verify and publish.
Gap 4: “The audit showed no adverse impact so we’re fine.” Today’s audit is a snapshot. Model drift, changing applicant pools, and updated algorithms can shift impact ratios quarter to quarter. Annual audits are the minimum — quarterly monitoring is the best practice.
So What?
NYC Local Law 144 isn’t perfect. The Comptroller’s audit proved that enforcement has been toothless for nearly three years. But that’s changing — and the bigger risk isn’t DCWP fines anyway.
The real exposure is discrimination litigation. Published bias audit results that show impact ratios below 0.80 are discoverable evidence. Plaintiffs’ attorneys and the EEOC are watching this space. A published audit that reveals disparate impact without documented remediation is an invitation to litigation.
Meanwhile, Illinois, Colorado, and proposed laws in New Jersey and New York State are all building on LL 144’s foundation. The organizations that build a compliant bias audit framework now will be ready when those laws take effect. The ones that wait will be scrambling.
Need a framework for assessing AI risk — including hiring tools? Our AI Risk Assessment Template & Guide includes a structured assessment methodology that maps to LL 144’s requirements, the Colorado AI Act, and NIST AI RMF.
FAQ
Does NYC Local Law 144 apply to remote jobs?
If the job is based in NYC or the employer is hiring for a position that will be performed in NYC, LL 144 applies. For fully remote positions, the analysis depends on whether the position has a nexus to NYC — this is an evolving area. If any candidates in your pipeline are NYC-based, the safest approach is to comply.
Who qualifies as an “independent auditor” for the bias audit?
The law requires the auditor be independent of the AEDT developer. The DCWP final rules define independence as someone who exercises “objective and impartial judgment.” There’s no licensing requirement or certification needed — but using an auditor with expertise in algorithmic fairness, statistics, and employment law strengthens the audit’s defensibility.
What happens if our bias audit reveals disparate impact?
LL 144 does not require you to stop using the tool or take specific remedial action based on audit results. However, publishing results showing impact ratios below 0.80 creates litigation risk under Title VII and state anti-discrimination laws. Best practice: document that you reviewed the results, investigated root causes, and took reasonable steps to mitigate — even if the law doesn’t mandate it.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.