Sunday, March 23, 2025

Interesting question. Do we have an answer?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5179224

Artificial Intelligence and the Discrimination Injury

For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge that artificial intelligence poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination today is a species of tort, concerned with rectifying individual injuries, rather than a law aimed at broadly improving social or economic equality. As a result, the doctrine centers blameworthiness and individualized notions of injury. But it is also a strange sort of tort that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decisionmaking event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: If a defendant did not act improperly, then no liability attaches because a discrimination event did not occur. Injury is tied to the single decision event and there is no room for recognizing discrimination injury without liability.

This formulation directly affects regulation of AI discrimination for two reasons: First, AI decisionmaking is distributed; it is a combination of software development, its configuration, and its application, all of which are completed at different times and usually by different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.

The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to its effects, for a failure either to comparison shop or to fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech-savvy, and the likelihood that they purchased the technology on the promise that it was non-discriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI—the developers themselves. The Article therefore argues for supplementing discrimination law with a combination of consumer protection, product safety, and products liability—all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.





Can AI help?

https://taapublications.com/tijsrat/article/view/453

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON LEGAL SYSTEMS

Artificial Intelligence (AI) is transforming sectors across the world, and the legal sector is no different. The emerging use of AI technologies in the judiciary carries enormous potential and serious issues. AI holds the potential to enhance the efficacy of legal processes by automating routine tasks like document searching, legal analysis, and contract analysis, reducing the cost of legal services, and making legal services more accessible. AI technologies such as predictive analytics also possess the ability to facilitate better decision-making by identifying trends in case law and predicting outcomes. However, the widespread application of AI also presents serious ethical, legal, and privacy concerns. Among these is the possibility of algorithmic bias, which can lead to biased or discriminatory judicial decisions. Another problem is the lack of transparency in AI decision-making, which can make it difficult to explain how algorithms reach their decisions. AI also poses problems for traditional legal systems, as they were designed to accommodate passive systems, not active ones. This paper discusses the advantages as well as the challenges of AI in legal systems, taking a detailed look at their implications for lawyers, clients, and lawmakers. Through this analysis, the paper emphasizes the need for a balanced regulatory environment to ensure the ethical use of AI while protecting individuals' rights and upholding justice in the legal system.





Perspective.

https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/18975

Authoritarian Surveillance: An Introduction

Authoritarian surveillance is no longer an exceptional or rare practice. In many parts of the world, we are witnessing an increase in pervasive government monitoring, the curtailing of privacy protections, stringent control of information flows, and intimidation toward self-censorship. These hallmarks of authoritarian surveillance are not confined to authoritarian or undemocratic regimes. In a political landscape that favours strong-arm authoritarian leaders, the boundaries between authoritarian and democratic regimes, the liberal and the illiberal, are blurrier than ever. The increasing availability of advanced technologies for analyzing (big) data, particularly when integrated with artificial intelligence (AI), has heightened the temptation for governments to adopt authoritarian surveillance tools and practices—and has amplified the potential dangers involved.

This Dialogue section introduces the multiple dimensions of contemporary authoritarian surveillance, going beyond a dichotomy between “democratic” and “authoritarian” regimes to identify and map authoritarian surveillance in diverse geographical and political contexts. We focus on surveillance beyond the exceptional and beyond the rule of law to examine an increasingly mundane but dangerous practice undermining the limited democratic spaces that remain in our world. The seven articles in this special Dialogue section explore different angles of authoritarian surveillance—the technologies that facilitate it, the laws that govern it, and the legacies that precede it or linger thereafter—and the social and political consequences that emerge as a result. Together, this collection revisits existing literature on authoritarian surveillance, calls for a renewed scholarly focus on its consequences, and proposes new directions for future research.

Vol. 23 No. 1 (2025): Open Issue





Perspective.

https://sites.duke.edu/lawfire/2025/03/22/podcast-lt-gen-jack-shanahan-usaf-ret-on-the-military-uses-of-artificial-intelligence/

Podcast: Lt Gen Jack Shanahan, USAF (Ret.) on “The Military Uses of Artificial Intelligence”

Want to get caught up on the latest about artificial intelligence (AI) in the armed forces? Then today’s video of Prof Gary Corn’s Fireside Chat with Lt Gen John N.T. “Jack” Shanahan, USAF (Ret.), the former Director of the Department of Defense’s Joint Artificial Intelligence Center, on “The Military Uses of Artificial Intelligence” is for you.


