Friday, November 21, 2025

Too little, too late?

https://www.washingtontimes.com/news/2025/nov/20/judge-rules-trumps-deployment-national-guard-dc-illegal/

Judge rules Trump’s deployment of National Guard in D.C. was illegal

A federal judge ruled Thursday that the Trump administration broke the law in deploying the National Guard to patrol the streets of the District of Columbia without the city’s approval.

Judge Jia Cobb, a Biden appointee, stayed her ruling for three weeks to give President Trump a chance to mount an appeal.

She said Mr. Trump has limited powers to call up the Guard and that using it for police duty goes beyond what the law allows.



(Related)

https://www.bespacific.com/do-llms-truly-understand-when-a-precedent-is-overruled-2/

Do LLMs Truly “Understand” When a Precedent Is Overruled?

September 2025. Abstract. Large language models (LLMs) with extended context windows show promise for complex legal reasoning tasks, yet their ability to understand long legal documents remains insufficiently evaluated. Developing long-context benchmarks that capture realistic, high-stakes tasks remains a significant challenge in the field, as most existing evaluations rely on simplified synthetic tasks that fail to represent the complexity of real-world document understanding. Overruling relationships are foundational to common-law doctrine and commonly found in judicial opinions. They provide a focused and important testbed for long-document legal understanding that closely resembles what legal professionals actually do. We present an assessment of state-of-the-art LLMs on identifying overruling relationships from U.S. Supreme Court cases using a dataset of 236 case pairs. Our evaluation reveals three critical limitations:

(1) era sensitivity – the models show degraded performance on historical cases compared to modern ones, revealing fundamental temporal bias in their training;

(2) shallow reasoning – models rely on shallow logical heuristics rather than deep legal comprehension; and

(3) context-dependent reasoning failures – models produce temporally impossible relationships in complex open-ended tasks despite maintaining basic temporal awareness in simple contexts.

Our work contributes a benchmark that addresses the critical gap in realistic long-context evaluation, providing an environment that mirrors the complexity and stakes of actual legal reasoning tasks.
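To make the setup concrete, here is a minimal sketch of what an evaluation like this could look like: present a model with a pair of opinions, ask whether the later case overrules the earlier one, and score accuracy separately by era to surface the kind of temporal bias the authors report. The dataset fields, prompt wording, era cutoff, and model stub below are illustrative assumptions, not the paper's actual code or data.

from dataclasses import dataclass
from typing import Callable

@dataclass
class CasePair:
    earlier: str       # full text of the earlier opinion
    later: str         # full text of the later opinion
    earlier_year: int  # used only to bucket results by era
    overruled: bool    # gold label: does `later` overrule `earlier`?

def make_prompt(pair: CasePair) -> str:
    return (
        "You are given two U.S. Supreme Court opinions.\n\n"
        f"EARLIER CASE:\n{pair.earlier}\n\n"
        f"LATER CASE:\n{pair.later}\n\n"
        "Does the later case overrule the earlier one? Answer YES or NO."
    )

def evaluate(pairs: list[CasePair], ask_model: Callable[[str], str]) -> dict:
    """Score a model on overruling identification, broken out by era
    (a stand-in for the paper's 'era sensitivity' comparison)."""
    buckets: dict[str, list[bool]] = {"pre-1950": [], "1950+": []}
    for pair in pairs:
        answer = ask_model(make_prompt(pair)).strip().upper()
        predicted = answer.startswith("YES")
        era = "pre-1950" if pair.earlier_year < 1950 else "1950+"
        buckets[era].append(predicted == pair.overruled)
    return {era: sum(hits) / len(hits) for era, hits in buckets.items() if hits}

if __name__ == "__main__":
    # Toy stand-in for a real LLM call, so the harness runs as written.
    demo = [
        CasePair("Plessy v. Ferguson ...", "Brown v. Board ...", 1896, True),
        CasePair("Marbury v. Madison ...", "Unrelated case ...", 1803, False),
    ]
    naive_model = lambda prompt: "YES"  # always predicts an overruling
    print(evaluate(demo, naive_model))

Swapping the stub for a real model call would reproduce the shape, though not the substance, of the benchmark: the paper's 236 real case pairs are what make the task hard.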





Have we reached a tipping point? (Probably not)

https://www.theguardian.com/law/2025/nov/21/judges-have-become-human-filters-as-ai-in-australian-courts-reaches-unsustainable-phase-chief-justice-says

Judges have become ‘human filters’ as AI in Australian courts reaches ‘unsustainable phase’, chief justice says

The chief justice of the high court says judges around Australia are acting as “human filters” for legal arguments created using AI, warning the use of machine-generated content has reached unsustainable levels in the courts.

Stephen Gageler told the first day of the Australian Legal Convention in Canberra on Friday that inappropriate use of AI by self-represented litigants, as well as by trained legal practitioners, included machine-enhanced arguments, the preparation of evidence and the formulation of legal submissions.

Gageler said there was increasing evidence to suggest the courts had reached an “unsustainable phase” of AI use in litigation, requiring judges and magistrates to act “as human filters and human adjudicators of competing machine-generated or machine-enhanced arguments”.


