Employers should proactively analyze AI hiring processes for adverse impact

Summary

Benjamin Shippen of BRG warns that AI‑driven hiring tools are drawing heightened legal scrutiny after the Mobley v. Workday case, which alleges that applicant‑screening algorithms disproportionately excluded workers over 40. The suit — now conditionally certified as a collective action — could shift liability and encourage plaintiffs to target vendors and their customers. Shippen argues employers must map every AI touchpoint in the applicant flow, apply statistical tests (for example, logistic regression or Fisher's exact test) to detect disparate impact by age, gender or race/ethnicity, and ensure explainability, documentation and ongoing monitoring to catch model drift. Examples include AI scoring of video interviews and candidate retrieval tools that may unintentionally favour certain demographics. The bottom line: don't treat hiring AI as a black box — test and monitor it now to reduce compliance and legal risk.

Key Points

  • Mobley v. Workday alleges algorithms disproportionately screen out applicants over 40; the case could be a defining moment for AI in HR.
  • AI is used across nearly every stage of hiring (screening, ranking, video analysis, candidate recall), improving scale but obscuring where bias can occur.
  • Employers should map the applicant flow to identify every point where AI makes or influences decisions.
  • Use statistical analyses (e.g., logistic regression, Fisher's exact test) and engage labour economists and legal counsel to test for disparate impact.
  • Document model design and validation, and implement continuous monitoring to detect drift as data or hiring practices change.
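As a rough illustration of the kind of statistical check described above, the sketch below compares selection rates between a protected group and a reference group using Fisher's exact test (one of the tests the article names) alongside the familiar four‑fifths selection‑rate ratio. This is a minimal, hypothetical example: the function name, the example counts, and the use of the four‑fifths ratio as a screening heuristic are assumptions, not the article's own methodology, and any real analysis should involve labour economists and legal counsel as the article advises.

```python
# Minimal adverse-impact screen: selection-rate ratio plus Fisher's exact test.
# Group labels, counts, and thresholds here are hypothetical.
from scipy.stats import fisher_exact


def adverse_impact_check(hired_a, total_a, hired_b, total_b):
    """Compare selection rates for group A (e.g., applicants over 40)
    against reference group B (e.g., applicants under 40).

    Returns (ratio, p_value): the selection-rate ratio used in the
    four-fifths rule of thumb, and the two-sided Fisher's exact p-value
    for the 2x2 hired/not-hired table.
    """
    table = [
        [hired_a, total_a - hired_a],  # group A: hired, not hired
        [hired_b, total_b - hired_b],  # group B: hired, not hired
    ]
    _, p_value = fisher_exact(table)
    ratio = (hired_a / total_a) / (hired_b / total_b)
    return ratio, p_value


# Hypothetical applicant-flow data for one AI screening stage.
ratio, p = adverse_impact_check(hired_a=10, total_a=100,
                                hired_b=30, total_b=100)
print(f"selection-rate ratio: {ratio:.2f}, Fisher p-value: {p:.4f}")
if ratio < 0.8:
    print("flag: below the four-fifths threshold; investigate further")
```

A ratio below 0.8 does not by itself establish unlawful disparate impact, and a ratio above it does not rule it out; in practice one would follow up with the kind of logistic regression the article mentions, controlling for legitimate job‑related factors.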

Context and relevance

This article is important because regulators and courts — including the EEOC and state courts — are increasingly examining whether automated hiring tools produce unlawful disparate impact. A successful claim against a major vendor could change how employers and procurement teams approach AI: demanding explainability, audit rights and stronger vendor assurances. HR, compliance and procurement teams implementing or buying hiring AI should view this as part of the broader push toward stronger AI governance and bias mitigation.

Why should I read this?

Quick heads up: if your organisation uses AI to screen, grade or recall candidates, this could cost you. The Workday case could make vendors and their clients liable. Read this to get practical, immediate steps you can start doing this week — map the flow, test outcomes, document everything and keep an eye on model drift.

Author style

Punchy: Shippen delivers a legal wake‑up call. He’s blunt about the risks and gives actionable steps that HR and compliance teams can follow — this isn’t academic; it’s a practical call to act now.

Source

Source: https://www.hrdive.com/news/audit-hiring-AI-bias/806422/
