AI “helpers.” I’m not sure this idea will work.
https://openyls.law.yale.edu/entities/publication/794e6d6c-abeb-4002-80e3-7f1c5f19c477
Law-Following AI: Designing AI Agents to Obey Human Laws
Artificial intelligence (AI) companies are working to develop a new type of actor: "AI agents," which we define as AI systems that can perform computer-based tasks as competently as human experts. Expert-level AI agents will likely create enormous economic value but also pose significant risks. Humans use computers to commit crimes, torts, and other violations of the law. As AI agents progress, therefore, they will be increasingly capable of performing actions that would be illegal if performed by humans. Such lawless AI agents could pose a severe risk to human life, liberty, and the rule of law. Designing public policy for AI agents is one of society's most important tasks.

With this goal in mind, we argue for a simple claim: in high-stakes deployment settings, such as government, AI agents should be designed to rigorously comply with a broad set of legal requirements, such as core parts of constitutional and criminal law. In other words, AI agents should be loyal to their principals, but only within the bounds of the law: they should be designed to refuse to take illegal actions in the service of their principals. We call such AI agents "Law-Following AIs" (LFAI).

The idea of encoding legal constraints into computer systems has a respectable provenance in legal scholarship. But much of the existing scholarship relies on outdated assumptions about the (in)ability of AI systems to reason about and comply with open-textured, natural-language laws. Thus, legal scholars have tended to imagine a process of "hard-coding" a small number of specific legal constraints into AI systems by translating legal texts into formal machine-readable computer code. Existing frontier AI systems, however, are already competent at reading, understanding, and reasoning about natural-language texts, including laws. This development opens new possibilities for their governance.

Based on these technical developments, we propose aligning AI systems to a broad suite of existing laws as part of their assimilation into the human legal order. This would require directly imposing legal duties on AI agents. While this would be a significant change to legal ontology, it is both consonant with past evolutions (such as the invention of corporate personhood) and consistent with the emerging safety practices of several leading AI companies.

This Article aims to catalyze a field of technical, legal, and policy research to develop the idea of law-following AI more fully. It also aims to flesh out LFAI's implementation so that our society can ensure that widespread adoption of AI agents does not pose an undue risk to human life, liberty, and the rule of law. Our account and defense of law-following AI is only a first step and leaves many important questions unanswered. But if the advent of AI agents is anywhere near as important as the AI industry supposes, then law-following AI may be one of the most neglected and urgent topics in law today, especially in light of increasing governmental adoption of AI.
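To make the contrast with "hard-coding" concrete, here is a rough sketch of what a law-following guardrail around an agent might look like, purely as my own illustration: the agent's proposed action is reviewed against natural-language legal requirements before it runs, and refused if it appears unlawful. The requirements, the ProposedAction format, and the keyword-based stand-in for a model review are all hypothetical assumptions on my part, not anything specified in the Article (which contemplates a capable model doing this review over real legal texts).

from dataclasses import dataclass

# Illustrative natural-language legal requirements (hypothetical, not drawn from the Article).
LEGAL_REQUIREMENTS = [
    "Do not access a computer system without authorization.",
    "Do not disclose personal data without a lawful basis.",
]

@dataclass
class ProposedAction:
    description: str  # plain-language description of what the agent intends to do

def review_against_law(action: ProposedAction, requirements: list[str]) -> tuple[bool, str]:
    """Stand-in for a model-based legal review.

    In a real system this step would ask a language model whether the described
    action violates any of the listed requirements. Here a trivial keyword check
    is used only so the sketch runs end to end.
    """
    lowered = action.description.lower()
    for rule in requirements:
        if "without authorization" in rule.lower() and "unauthorized" in lowered:
            return False, f"Refused: appears to violate '{rule}'"
    return True, "No violation identified"

def execute_if_lawful(action: ProposedAction) -> str:
    # The agent stays loyal to its principal only within the bounds of the law:
    # it declines the action even if the principal requested it.
    allowed, rationale = review_against_law(action, LEGAL_REQUIREMENTS)
    if not allowed:
        return rationale
    return f"Executing: {action.description}"

if __name__ == "__main__":
    print(execute_if_lawful(ProposedAction("Summarize the public court filings in the case")))
    print(execute_if_lawful(ProposedAction("Make an unauthorized copy of the opposing party's server logs")))

The point of the sketch is only the shape of the architecture: the legal constraint is expressed in ordinary language and checked at decision time, rather than translated in advance into formal rules.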
Worth taking a peek…
https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1380&context=mhlr
Generative AI as Courtroom Evidence: A Practical Guide
You are the lawyer in a case in which the crucial incident was captured by dozens of smartphone, surveillance, and other cameras. Imagine your forensic video expert putting all of those videos into a generative artificial intelligence (GenAI) model that quickly synchronizes the audio and video streams, links relevant documents, and provides an outline for the strategy of your case—enabling you to understand exactly what happened in minutes instead of weeks and then suggesting ways to prove it at trial. The expert could also employ GenAI to enhance those videos, making relevant facts clearer by rendering blurry images more legible and inaudible conversations more intelligible, or even by creating important camera angles showing views not found in the original images. Or imagine, in a complex commercial dispute, feeding masses of documents and other data into a GenAI model that produces timelines and other visualizations of the relevant events, as well as lists of inherent contradictions in the evidence, which you could then use to prepare your arguments and illustrate your theory of the case in court. All of these tools and more will soon be available.