Sunday, August 17, 2025

Innovate then litigate?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5383935

The Litigation Solution: Why Courts, Not Code Mandates, Should Address AI Discrimination

As artificial intelligence systems increasingly influence decisionmaking in high-stakes sectors, policymakers have focused on regulating model design to combat algorithmic bias. Drawing on examples from the European Union's AI Act and recent state legislation, this Article critiques the emerging "fairness by design" paradigm. It argues that design mandates rest on a flawed premise: that bias can be objectively defined and mitigated ex ante without compromising competing values such as accuracy, privacy, or innovation. In reality, efforts to engineer fairness through prescriptive regulation risk distorting markets, entrenching incumbents, and stifling technological advancement. Moreover, the opaque, evolving nature of AI systems—especially generative models—makes it difficult to anticipate or eliminate future biases through design alone, often creating tradeoffs that regulators are ill-equipped to manage.

Rather than regulating AI inputs, the Article advocates for a litigation-first approach that focuses on AI outputs and leverages existing antidiscrimination law to address harms as they arise. By applying traditional disparate treatment and disparate impact frameworks to AI-assisted decisions, courts can assess when biased outcomes rise to the level of unlawful discrimination—without prematurely constraining innovation or imposing rigid mandates. This model mirrors America’s historical preference for permissive innovation, allowing technology to evolve while holding bad actors accountable under general principles of law. The result is a more flexible, targeted regulatory regime that fosters AI development while safeguarding civil rights.


