Sunday, March 03, 2024

Can we improve poor writing skills with hallucinations?

https://www.crimrxiv.com/pub/c5lj2rmy/release/1

Large Language Models and Artificial Intelligence for Police Report Writing

Large Language Models (LLMs), such as ChatGPT, are advanced artificial intelligence systems capable of understanding and generating human-like text. They are trained on vast amounts of textual data, enabling them to comprehend context, answer questions, generate summaries, and even engage in meaningful conversations. As these models continue to evolve, their potential applications in various industries, including law enforcement, are becoming more apparent, as are the potential threats. One particularly promising application of LLMs in policing is report writing. As many police executives know, not all officers possess strong writing skills, which can lead to inaccurate or incomplete reports. This can have serious consequences for criminal prosecutions and can expose departments to civil liability. Implementing LLMs like ChatGPT for report-writing assistance may help address these issues. Even where agencies have not formally adopted them, officers across the country are already using these tools to help write their reports. Given the stakes, it is wise for agencies to develop a sophisticated view of, and policy on, these tools. This paper introduces practitioners to LLMs for report writing, considers the implications of using such tools, and suggests a template-based approach to deploying the technology to patrol officers.





“Just the facts, Ma’am.”

https://royalsocietypublishing.org/doi/full/10.1098/rsta.2023.0162

AI and the nature of disagreement

Litigation is a creature of disagreement. Our essay explores the potential of artificial intelligence (AI) to help reduce legal disagreements. In any litigation, parties disagree over the facts, the law, or how the law applies to the facts. The source of the parties' disagreement matters: it may determine the extent to which AI can help resolve their disputes. AI is helpful in clarifying the parties' misunderstanding of how well-defined questions of law apply to their facts. But AI may be less helpful when parties disagree on questions of fact where the prevailing facts dictate the legal outcome. The private nature of the information underlying these factual disagreements typically falls outside the strengths of AI's computational leverage over publicly available data. A further complication: parties may disagree about which rule should govern the dispute, a disagreement that can arise irrespective of whether they agree on the facts. Accordingly, while AI can provide clarity about legal precedent, it often may be insufficient to provide clarity in legal disputes.





Slow lawyers…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4735389

The Legal Ethics of Generative AI

The legal profession is notoriously conservative when it comes to change. From email to outsourcing, lawyers have been slow to embrace new methods and quick to point out potential problems, especially ethics-related concerns.

The legal profession’s approach to generative artificial intelligence (generative AI) is following a similar pattern. Many lawyers have readily identified the legal ethics issues associated with generative AI, often citing the New York lawyer who cut and pasted fictitious citations from ChatGPT into a federal court filing. Some judges have gone so far as to issue standing orders requiring lawyers to disclose when they use generative AI, or banning the use of most kinds of artificial intelligence (AI) outright. Bar associations are chiming in on the subject as well, though they have (so far) taken an admirably open-minded approach.

Part II of this essay explains why the Model Rules of Professional Conduct (Model Rules) do not pose a regulatory barrier to lawyers’ careful use of generative AI, just as the Model Rules did not ultimately prevent lawyers from adopting many now-ubiquitous technologies. Drawing on my experience as the Chief Reporter of the ABA Commission on Ethics 20/20 (Ethics 20/20 Commission), which updated the Model Rules to address changes in technology, I explain how lawyers can use generative AI while satisfying their ethical obligations. Although this essay does not cover every possible ethics issue that can arise or all of generative AI’s law-related use cases, the overarching point is that lawyers can use these tools in many contexts if they employ appropriate safeguards and procedures.

Part III describes some recent judicial standing orders on the subject and explains why they are ill-advised.

The essay closes in Part IV with a potentially provocative claim: the careful use of generative AI is not only consistent with lawyers’ ethical duties, but the duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.


