Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions
Summary
This story matters: it shows the human cost behind the polished AI features many of us use daily.
More than 200 contractors who helped evaluate and refine Google’s AI products (including Gemini and AI Overviews) were laid off in at least two rounds of cuts. The workers, employed via GlobalLogic and other outsourcing firms, say the layoffs followed efforts to organise around pay, job security and poor working conditions. Many of those roles required advanced degrees and specialised skills; workers claim pay was inconsistent between cohorts and that third-party hires were paid far less for similar work.
Key Points
- Over 200 contracted AI raters were dismissed across at least two rounds of layoffs, according to worker accounts.
- These contractors, many holding master’s degrees or PhDs, rated and rewrote AI outputs to improve quality for Google products such as Gemini and AI Overviews.
- Workers allege pay disparities: original GlobalLogic hires earned substantially more than some third-party contractors doing the same tasks.
- Attempts to organise and discuss pay and conditions were reportedly stifled; social chat spaces were banned during work hours, and some organisers say they faced retaliation and dismissals.
- Internal documents indicate the company may be training models to automate some of the raters’ work, raising concerns about replacement by AI.
- Two workers have filed complaints with the National Labor Relations Board alleging unfair dismissal tied to organising and wage-transparency advocacy.
- Google says the workers are employed by suppliers (GlobalLogic and subcontractors) and that suppliers are accountable for employment conditions; GlobalLogic declined to comment.
Content summary
Google outsourced much of its AI rating work to GlobalLogic and other contractors. Initially, a team of “super raters” with advanced degrees was hired to evaluate AI-generated summaries and rewrite outputs to be more reliable and human-sounding. Problems emerged as hiring expanded through third-party contractors brought on at lower rates. Workers describe mounting pressures: strict time metrics that prioritise quantity over quality, mandatory return-to-office requirements that disadvantaged some employees, and restrictions on the internal social channels used to organise and share experiences.
Organising efforts grew through informal groups and contact with the Alphabet Workers Union; membership increased after a pay-and-conditions survey was shared. Soon after, workers report that the company clamped down on social spaces and disciplined or dismissed organisers. Workers claim GlobalLogic used the layoffs to quash dissent, and that some internal processes appear aimed at automating rater tasks, potentially replacing the very workforce the company has relied on.
Context and relevance
This story sits at the intersection of AI development practices and labour rights. The quality and safety of AI systems depend on human evaluators; how those people are treated affects model reliability, bias mitigation and the ethical deployment of AI. Outsourcing and contractor models are common across tech, and this account highlights recurring issues: pay disparity, precarious contract work, anti-organising risks, and the tension between human oversight and automation. Globally, similar organising efforts among AI annotators and content moderators suggest this is part of a broader labour trend.
Why should I read this?
Because it’s the messy, behind-the-scenes stuff nobody usually tells you: the people teaching chatbots how to behave are underpaid, precarious and apparently getting sacked when they try to push back. If you care about AI accuracy, ethics or workers’ rights — or you want to know what happens when companies outsource critical human oversight — this is worth five minutes of your time.