Monday, July 14, 2025

Why indeed?

https://www.bespacific.com/cops-favorite-ai-tool-automatically-deletes-evidence-of-when-ai-was-used/

Cops’ favorite AI tool automatically deletes evidence of when AI was used

Ars Technica: AI police tool is designed to avoid accountability, watchdog says. On Thursday the Electronic Frontier Foundation, a digital rights group, published an expansive investigation into AI-generated police reports, which the group alleges are, by design, nearly impossible to audit and could make it easier for cops to lie under oath. Axon’s Draft One debuted last summer at a police department in Colorado, instantly raising questions about the feared negative impacts of AI-written police reports on the criminal justice system.

The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context. But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.” Not every department requires cops to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to compare one version of an AI report with another to help the public determine whether the technology is “junk,” the EFF said. That raises the question, the EFF suggested, “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

It’s currently hard to know whether cops are editing the reports or “reflexively rubber-stamping the drafts to move on as quickly as possible,” the EFF said. That’s particularly troubling, the EFF noted, since Axon disclosed to at least one police department that “there has already been an occasion when engineers discovered a bug that allowed officers on at least three occasions to circumvent the ‘guardrails’ that supposedly deter officers from submitting AI-generated reports without reading them first.” The AI tool could also be “overstepping in its interpretation of the audio,” misinterpreting slang or adding context that never happened.

A “major concern,” the EFF said, is that the AI reports can give cops a “smokescreen,” perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any “biased language, inaccuracies, misinterpretations, or lies” in their reports. “There’s no record showing whether the culprit was the officer or the AI,” the EFF said. “This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time.” According to the EFF, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.” In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One’s disappearing drafts as a feature, explaining, “we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”
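
For the technically curious: the record the EFF is asking for would not be hard to build. Below is a minimal, purely illustrative sketch in Python of an audit record that retains the AI draft verbatim, diffs it against the officer’s submitted report, and flags unedited submissions. This is hypothetical, not anything Axon ships; every function and field name is invented for illustration.

```python
import difflib
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(ai_draft: str, final_report: str, officer_id: str) -> dict:
    """Pair an AI draft with the officer's final report in a reviewable record.

    Hypothetical sketch: retains the draft verbatim, records a line diff,
    and hashes the draft so later tampering would be detectable.
    """
    diff = list(difflib.unified_diff(
        ai_draft.splitlines(), final_report.splitlines(),
        fromfile="ai_draft", tofile="final_report", lineterm=""))
    return {
        "officer_id": officer_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_draft": ai_draft,                      # retained, not deleted
        "final_report": final_report,
        "diff": diff,                              # empty diff = unedited
        "draft_sha256": hashlib.sha256(ai_draft.encode()).hexdigest(),
        "edited": bool(diff),
    }

if __name__ == "__main__":
    record = make_audit_record(
        ai_draft="Subject appeared agitated and fled on foot.",
        final_report="Subject appeared agitated and fled on foot.",
        officer_id="unit-42",
    )
    # "edited": false here is exactly the rubber-stamping the EFF describes.
    print(json.dumps({k: record[k] for k in ("edited", "draft_sha256")}, indent=2))
```

An empty diff would surface the “reflexive rubber-stamping” the EFF worries about, and keeping the draft hash makes after-the-fact edits to the stored draft detectable. That such a record is a few dozen lines of code is what makes its absence look like a design choice.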





Sure to be a very common question.

https://www.bespacific.com/can-you-trust-ai-in-legal-research/

Can you trust AI in legal research?

Sally McLaren’s post on LinkedIn: “Can you trust AI in legal research? Our study tested how leading Generative AI tools responded to fake case citations and the results were eye-opening. While some models correctly flagged the fabricated case, others confidently generated detailed but entirely false legal content, even referencing real statutes and cases. Our table of findings (which is not behind a paywall) breaks down how each model performed. We encourage you to repurpose this for use in your AI literacy sessions to help build critical awareness in legal research.”

We have another article, this time in Legal Information Management: a deep dive into Generative AI outputs and the legal information professional’s role (paywalled). In the footnotes we have included our data and encourage its use in your AI literacy sessions. “You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs | Legal Information Management | Cambridge Core: “Generative AI tools, such as ChatGPT, have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination, fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risk. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concern about AI’s credibility in legal research and highlights the role of legal information professionals in mitigating risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.”
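
To make the study’s method concrete, here is a small harness along the same lines: feed a fabricated citation to a model and check whether the reply flags the case as unverifiable or confidently summarises a judgment that does not exist. This is not the authors’ code; the citation, keyword list, and function names are all invented for illustration, and any real model client can be plugged in via the `ask` callable.

```python
import re
from typing import Callable

# A deliberately non-existent citation, modelled on the study's approach.
# (This case name and citation are invented for illustration.)
FAKE_CITATION = "Harrington v. Blakemore [2003] EWCA Civ 9999"

# Rough, hypothetical markers of a model admitting it cannot verify the case.
HEDGE_MARKERS = re.compile(
    r"(unable to (find|locate)|no record|does not (appear to )?exist|"
    r"fictitious|cannot verify|may be fabricated)", re.IGNORECASE)

def probe_fake_citation(ask: Callable[[str], str]) -> dict:
    """Send a fabricated citation via the supplied `ask` function and report
    whether the response flags it or fabricates a summary."""
    prompt = f"Summarise the judgment in {FAKE_CITATION}."
    answer = ask(prompt)
    flagged = bool(HEDGE_MARKERS.search(answer))
    return {
        "prompt": prompt,
        "flagged_as_unverifiable": flagged,   # True = the safer behaviour
        "response_excerpt": answer[:200],
    }

if __name__ == "__main__":
    # Stub standing in for a real model client; a hallucinating model would
    # instead return a confident, detailed, and wholly false summary.
    result = probe_fake_citation(lambda p: "I can find no record of this case.")
    print(result)
```

A keyword check like this is obviously crude next to the paper’s manual evaluation of accuracy, detail, structure and references, but it shows how easy the basic probe is to repurpose for an AI literacy session.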


