California AI labor laws leave workers at risk


By Alberto Rocha, Special to CalMatters

This commentary was originally published by CalMatters. Sign up for their newsletters.


When Gov. Gavin Newsom vetoed Senate Bill 7, the “No Robo-Boss Act,” which would have required human verification before an algorithm could fire or discipline a California worker, the governor’s message last year was unmistakable: protecting livelihoods from automated systems would put too much of a “burden” on innovation.

This was no minor political disagreement. It was a signal that Sacramento was willing to allow algorithmic systems — many built and controlled by out-of-state tech giants — to make life-changing decisions for Californians without meaningful safeguards.

Over the past two years, California lawmakers have taken up more than 30 AI-related bills, earning headlines for the state’s leadership in safety, transparency and consumer protection. Yet the laws that survived lobbying pressure and gubernatorial vetoes share a common flaw: They rely almost entirely on delayed documentation — summaries of training data, incident reports and audits that arrive long after the damage has been done.

When an algorithm quietly denies someone a job, demotes them, or terminates their employment, the harm is immediate and personal. Waiting months or years for a redacted transparency report does nothing to prevent this harm or hold anyone accountable when it happens.

This is not hypothetical. Major AI-powered recruiting platforms are already influencing decisions at Fortune 500 companies with operations in California. Lawsuits filed in 2025 and early 2026 alleged that some of these systems generated opaque results that excluded older workers or perpetuated racial bias — but the underlying rationale remained hidden from affected individuals and regulators.

Last year, some of the big tech companies spent more than $4.6 million lobbying in California. The result: most of the strongest protections in the technology bills were watered down or postponed to distant effective dates, some not until 2030. By then, algorithmic decision-making models will be deeply embedded in the state’s economy.

We don’t need more delayed disclosure. We need architectural authority: engineered constraints that make discriminatory or arbitrary outcomes impossible at the moment of decision.

One promising path forward comes from the Luevano standard, a framework that modernizes the lessons of the landmark Luevano v. Campbell consent decree, a court-approved settlement that ended a discriminatory federal hiring test in the 1980s.

The standard requires algorithmic hiring decisions to be predictable and tied to job-related criteria, not hidden statistical correlations. It also mandates runtime enforcement, meaning that legal and ethical rules are continuously checked by the system itself to block unlawful actions before they happen.

Finally, the standard requires forensic auditability, so that each decision creates a clear technical record of how it was reached, enabling accountability without reverse-engineering proprietary designs.
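To make the two requirements above concrete, here is a minimal sketch of what runtime enforcement and forensic auditability might look like in code. This is a hypothetical illustration, not any vendor’s or the Luevano standard’s actual implementation; every name, rule and threshold below is an assumption.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical rule sets: which inputs a hiring algorithm may and may not use.
PROTECTED_ATTRIBUTES = {"age", "race", "gender", "zip_code"}
JOB_RELATED_CRITERIA = {"years_experience", "certifications", "skills_score"}

audit_log = []  # forensic trail, one record per decision

def decide(candidate: dict, score_fn) -> dict:
    """Gate an algorithmic hiring decision at the moment it is made."""
    # Runtime enforcement: score only on job-related criteria,
    # stripping protected attributes before the model ever sees them.
    features = {k: v for k, v in candidate.items() if k in JOB_RELATED_CRITERIA}
    excluded = set(candidate) & PROTECTED_ATTRIBUTES
    if not features:
        raise ValueError("no job-related criteria supplied")

    decision = "advance" if score_fn(features) >= 0.5 else "review"

    # Forensic auditability: record what was used, what was excluded,
    # and the outcome, hash-chained so the trail is tamper-evident.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "features_used": sorted(features),
        "protected_excluded": sorted(excluded),
        "decision": decision,
        "prev_hash": audit_log[-1]["hash"] if audit_log else None,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(record)
    return record

# Example: a trivial scoring rule based only on experience.
result = decide(
    {"years_experience": 6, "age": 52, "skills_score": 0.8},
    lambda f: min(f["years_experience"] / 10, 1.0),
)
print(result["decision"], result["protected_excluded"])
```

The point of the sketch is the ordering: the legality check happens before the decision takes effect, and the audit record is written as part of the decision itself rather than reconstructed months later.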

This is not anti-innovation. It’s the opposite. Verifiable constraints would create a safe harbor for responsible companies and protect Californians from unchallengeable black-box decisions.

California’s proposed Algorithmic Accountability and Fairness Act — as described in the Luevano standard — could make these requirements mandatory for high-stakes systems used in employment, lending, housing and insurance. Without that kind of structural change, Sacramento’s current approach risks becoming a hollow victory: lots of press releases, very little defense.

Californians deserve more than token legislation. When an algorithm can end a career in a millisecond and the state’s response is to wait five years for a report, the message is clear: some people’s livelihoods are worth less than some companies’ convenience.

It’s time for lawmakers and the governor to go beyond promises of future transparency. Workers, families and communities are being judged by machines right now. They need real guarantees today, not in 2030.

This article was originally published on CalMatters and is republished under Creative Commons Attribution-NonCommercial-No Derivatives license.
