Saturday, November 22, 2025

Possible, but unlikely to be critical. All professions face this issue.

https://fortune.com/2025/11/21/are-doctors-at-risk-from-ai-automation/

Are doctors at risk of AI automation? ‘Those who don’t use it will be replaced by those who do’

AI is spreading in workplaces around the globe—and healthcare isn’t being left out. From fortifying diagnostic accuracy to filling out electronic medical records (EMRs), AI is helping to ease the workload of healthcare professionals worldwide. In June, Microsoft unveiled an AI diagnostic system that scored four times higher than human doctors in identifying complex medical cases from the New England Journal of Medicine.



Friday, November 21, 2025

Too little, too late?

https://www.washingtontimes.com/news/2025/nov/20/judge-rules-trumps-deployment-national-guard-dc-illegal/?utm_source=newsshowcase&utm_medium=gnews&utm_campaign=CDAQxpSN9K3ezs0lGPH5yN31_5TIlwEqKggAIhBcERswRnPnLkaJ3_gLN8OaKhQICiIQXBEbMEZz5y5Gid_4CzfDmg&utm_content=rundown

Judge rules Trump’s deployment of National Guard in D.C. was illegal

A federal judge ruled Thursday that the Trump administration broke the law in deploying the National Guard to patrol the streets of the District of Columbia without the city’s approval.

Judge Jia Cobb, a Biden appointee, stayed her ruling for three weeks to give President Trump a chance to mount an appeal.

She said Mr. Trump has limited powers to call up the Guard and that using it for police duty goes beyond what the law allows.



(Related)

https://www.bespacific.com/do-llms-truly-understand-when-a-precedent-is-overruled-2/

Do LLMs Truly “Understand” When a Precedent Is Overruled?

September 2025. Abstract. Large language models (LLMs) with extended context windows show promise for complex legal reasoning tasks, yet their ability to understand long legal documents remains insufficiently evaluated. Developing long-context benchmarks that capture realistic, high-stakes tasks remains a significant challenge in the field, as most existing evaluations rely on simplified synthetic tasks that fail to represent the complexity of real-world document understanding. Overruling relationships are foundational to common-law doctrine and commonly found in judicial opinions. They provide a focused and important testbed for long-document legal understanding that closely resembles what legal professionals actually do. We present an assessment of state-of-the-art LLMs on identifying overruling relationships from U.S. Supreme Court cases using a dataset of 236 case pairs. Our evaluation reveals three critical limitations:
  1. Era sensitivity – the models show degraded performance on historical cases compared to modern ones, revealing fundamental temporal bias in their training.
  2. Shallow reasoning – models rely on shallow logical heuristics rather than deep legal comprehension.
  3. Context-dependent reasoning failures – models produce temporally impossible relationships in complex open-ended tasks despite maintaining basic temporal awareness in simple contexts.
Our work contributes a benchmark that addresses the critical gap in realistic long-context evaluation, providing an environment that mirrors the complexity and stakes of actual legal reasoning tasks.
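For a concrete sense of how such an evaluation could be wired up, here is a minimal Python sketch of the kind of harness the abstract describes: prompt a model about each case pair, compare its answer to the gold label, and score accuracy separately for historical and modern opinions. This is not the authors' code; call_llm is a placeholder for whatever LLM client is actually used, and the 1950 era cutoff is an illustrative assumption.

```python
# Minimal sketch of an overruling-relationship evaluation harness.
# Not the paper's code: call_llm() is a hypothetical stand-in for an LLM
# client, and the 1950 era cutoff is an assumption chosen for illustration.

from dataclasses import dataclass

@dataclass
class CasePair:
    citing_opinion: str   # text (or excerpt) of the later opinion
    cited_case: str       # name of the earlier case
    decided_year: int     # year of the citing opinion, used to bucket by era
    overrules: bool       # gold label

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call."""
    raise NotImplementedError

def predict_overruling(pair: CasePair) -> bool:
    prompt = (
        "You are reading a U.S. Supreme Court opinion.\n\n"
        f"Opinion text:\n{pair.citing_opinion}\n\n"
        f"Question: Does this opinion overrule {pair.cited_case}? "
        "Answer with exactly 'yes' or 'no'."
    )
    return call_llm(prompt).strip().lower().startswith("yes")

def accuracy_by_era(pairs: list[CasePair], cutoff_year: int = 1950) -> dict[str, float]:
    """Score historical and modern citing opinions separately, mirroring the
    era-sensitivity comparison described in the abstract."""
    buckets: dict[str, list[bool]] = {"historical": [], "modern": []}
    for pair in pairs:
        era = "historical" if pair.decided_year < cutoff_year else "modern"
        buckets[era].append(predict_overruling(pair) == pair.overrules)
    return {era: sum(hits) / len(hits) for era, hits in buckets.items() if hits}
```

On a dataset like the paper's 236 case pairs, the interesting output is less the overall accuracy than the gap between the historical and modern buckets.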





Have we reached a tipping point? (Probably not)

https://www.theguardian.com/law/2025/nov/21/judges-have-become-human-filters-as-ai-in-australian-courts-reaches-unsustainable-phase-chief-justice-says

Judges have become ‘human filters’ as AI in Australian courts reaches ‘unsustainable phase’, chief justice says

The chief justice of the high court says judges around Australia are acting as “human filters” for legal arguments created using AI, warning the use of machine-generated content has reached unsustainable levels in the courts.

Stephen Gageler told the first day of the Australian Legal Convention in Canberra on Friday that the inappropriate use of AI by self-represented litigants in court proceedings, as well as by trained legal practitioners, included machine-enhanced arguments, the preparation of evidence and the formulation of legal submissions.

Gageler said there was increasing evidence to suggest the courts had reached an “unsustainable phase” of AI use in litigation, requiring judges and magistrates to act “as human filters and human adjudicators of competing machine-generated or machine-enhanced arguments”.



Thursday, November 20, 2025

A summary.

https://pogowasright.org/cipl-publishes-discussion-paper-comparing-u-s-state-privacy-law-definitions-of-personal-data-and-sensitive-data/

CIPL Publishes Discussion Paper Comparing U.S. State Privacy Law Definitions of Personal Data and Sensitive Data

Hunton Andrews Kurth writes:

On November 12, 2025, the Centre for Information Policy Leadership (“CIPL”) at Hunton published a discussion paper titled “Comparing U.S. State Privacy Laws: Covered and Sensitive Data” (“Discussion Paper”), the latest in its discussion paper series comparing key elements of U.S. state privacy laws.
The concepts of personal data – and the types of personal data categorized as “sensitive” – are foundational elements of U.S. state privacy laws and regulations. However, the criteria for what qualifies as “sensitive” – and the legal consequences that follow – are not always aligned across U.S. state privacy laws. As a result, organizations are tasked with operationalizing varying definitions across a fragmented and inconsistent legal landscape.
The Discussion Paper analyzes the scope, applicability, exemptions and key definitions of “personal data” and “sensitive” data under comprehensive U.S. state privacy laws. It examines the most common approaches, as well as outliers, with a focus on three topics:
  1. The concept of personal data (or “personal information”) (including an analysis of exclusions such as “deidentified” and “publicly available” data)
  2. The definition of “sensitive data” (or “sensitive personal information”)
  3. Relevant exemptions

Read more at Privacy & Information Security Law Blog.

Direct link to their Discussion Paper.





The politics of AI law?

https://www.theverge.com/ai-artificial-intelligence/824608/trump-executive-order-ai-state-laws

Here’s the Trump executive order that would ban state AI laws

President Donald Trump is considering signing an executive order as soon as Friday that would give the federal government unilateral power over regulating artificial intelligence, including the creation of an “AI Litigation Task Force” overseen by the attorney general, “whose sole responsibility shall be to challenge State AI laws.”

According to a draft of the order obtained by The Verge, the Task Force would be able to sue states whose laws are deemed to obstruct the growth of the AI industry, citing California’s recent laws on AI safety and “catastrophic risk” and a Colorado law that prevents “algorithmic discrimination.” The task force would occasionally consult with a group of White House special advisers, including David Sacks, billionaire venture capitalist and the special adviser for AI and crypto.





Integrating the tools of war.

https://thehackernews.com/2025/11/iran-linked-hackers-mapped-ship-ais.html

Iran-Linked Hackers Mapped Ship AIS Data Days Before Real-World Missile Strike Attempt

Threat actors with ties to Iran engaged in cyber warfare as part of efforts to facilitate and enhance physical, real-world attacks, a trend that Amazon has called cyber-enabled kinetic targeting.

The development is a sign that the lines between state-sponsored cyber attacks and kinetic warfare are increasingly blurring, necessitating a new category of warfare, the tech giant's threat intelligence team said in a report shared with The Hacker News.

… As an example, Amazon said it observed Imperial Kitten (aka Tortoiseshell), a hacking group assessed to be affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC), conducting digital reconnaissance between December 2021 and January 2024, targeting a ship's Automatic Identification System (AIS) platform with the goal of gaining access to critical shipping infrastructure.

Subsequently, the threat actor was identified as attacking additional maritime vessel platforms, in one case even gaining access to CCTV cameras fitted on a maritime vessel that provided real-time visual intelligence.

The attack progressed to a targeted intelligence gathering phase on January 27, 2024, when Imperial Kitten carried out targeted searches for AIS location data for a specific shipping vessel. Merely days later, that same vessel was targeted by an unsuccessful missile strike carried out by Iranian-backed Houthi militants.



Wednesday, November 19, 2025

I’ll believe it when I see it.

https://www.theregister.com/2025/11/18/the_us_wants_to_go/

Take fight to the enemy, US cyber boss says

America is fed up with being the prime target for foreign hackers. So US National Cyber Director Sean Cairncross says Uncle Sam is going on the offensive – he just isn't saying when.

Speaking at the Aspen Cyber Summit in Washington, D.C., on Tuesday, Cairncross said his office is currently working on a new National Cyber Strategy document that he said will be short, to the point, and designed to pair policy with actions that go beyond improving defensive posture. He wants the US government, in cooperation with private industry, to start going after threat actors directly.

Cairncross' talking points suggest the US is damn well going to try to turn the tables, but when asked for a timeline on release of the document, he deflected. Hard.

"We're going to roll out a strategy, we're going to roll out an action plan … and then we'll start moving deliverables," Cairncross said. Until then, it's going to be entirely defensive, with fewer people keeping watch. Business as usual.





Tools & Techniques.

https://www.bespacific.com/google-scholar-labs/

Google Scholar Labs

“Today, we are introducing Google Scholar Labs, a new feature that explores how generative AI can transform the process of answering detailed scholarly research questions. Scholar Labs is powered by AI to act as an advanced research tool, helping you tackle questions that require looking at a subject from multiple angles. It analyzes your question to identify key topics, aspects and relationships, then searches all of them on Scholar. For example, let’s say you’re looking to find out how caffeine consumption might affect short-term memory. Scholar Labs could look for papers that cover the relationships between caffeine intake, short-term memory retention and age-specific cognitive studies to gather the most relevant papers. After evaluating the results, it identifies papers that answer your overall research question, explaining how each paper addresses it. Google Scholar Labs is now available to a limited number of logged-in users.”
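Scholar Labs itself exposes no public API, so the following Python outline is only an illustration of the decompose-then-search-then-filter pattern the announcement describes; call_llm and search_scholar are hypothetical placeholders, not Google endpoints.

```python
# Illustrative sketch of the workflow described above: break a research
# question into focused sub-queries, search each one, then have the model
# explain which pooled papers actually answer the original question.
# call_llm() and search_scholar() are hypothetical placeholders.

import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns the model's text response."""
    raise NotImplementedError

def search_scholar(query: str, limit: int = 20) -> list[dict]:
    """Hypothetical search over a scholarly index; returns paper metadata."""
    raise NotImplementedError

def answer_research_question(question: str) -> list[dict]:
    # 1. Identify the key topics, aspects and relationships in the question.
    sub_queries = json.loads(call_llm(
        "Break this research question into 3-5 focused literature-search "
        f"queries. Return a JSON list of strings.\nQuestion: {question}"
    ))
    # 2. Search every sub-query and pool the candidate papers (dedupe by title).
    candidates = {p["title"]: p for q in sub_queries for p in search_scholar(q)}
    # 3. Keep only papers the model judges relevant, with its stated reason.
    relevant = []
    for paper in candidates.values():
        verdict = call_llm(
            f"Question: {question}\n"
            f"Paper abstract: {paper.get('abstract', '')}\n"
            "Does this paper help answer the question? "
            "Reply 'yes: <one-sentence reason>' or 'no'."
        )
        if verdict.strip().lower().startswith("yes"):
            relevant.append({**paper, "why_relevant": verdict.partition(":")[2].strip()})
    return relevant
```

For the caffeine and short-term memory example in the announcement, the sub-queries would roughly correspond to caffeine intake, short-term memory retention and age-specific cognitive studies.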



Tuesday, November 18, 2025

Does the First Amendment have an age limit?

https://www.theverge.com/news/822475/netchoice-virginia-lawsuit-social-media-time-limit-law

NetChoice sues Virginia to block its one-hour social media limit for kids

The tech industry trade group NetChoice is suing Virginia over a new law that will restrict minors from using social media for more than one hour per day. The lawsuit, filed on Monday, asks the court to block the law over claims it violates the First Amendment by putting “unlawful barriers on how and when all Virginians can access free speech online.”

Virginia Governor Glenn Youngkin signed the social media bill (SB 854) into law in May, and it’s set to go into effect on January 1st, 2026. Under the law, social media platforms will have to prevent kids under 16 from using the sites for more than one hour every day unless they receive permission from a parent.



Monday, November 17, 2025

Could JAG lawyers answer these questions?

https://www.bespacific.com/military-personnel-seek-legal-advice-on-whether-trump-ordered-missions-are-lawful/

Military personnel seek legal advice on whether Trump-ordered missions are lawful

PBS: Military service personnel have been seeking outside legal advice about some of the missions the Trump administration has assigned them. The strikes against alleged drug traffickers and deployments to U.S. cities have sparked a debate over their legality. Amna Nawaz discussed more with Frank Rosenblatt, president of the National Institute of Military Justice, which runs The Orders Project. Read the Full Transcript.





Too difficult to be more specific in their request?

https://pogowasright.org/openai-fights-order-to-turn-over-millions-of-chatgpt-conversations/

OpenAI fights order to turn over millions of ChatGPT conversations

Blake Brittain reports:

OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users’ private conversations.
The artificial intelligence company argued that turning over the logs would disclose confidential user information and that “99.99%” of the transcripts have nothing to do with the copyright infringement allegations in the case.
“To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition,” the company said in a court filing.
The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI’s assertion that they “hacked” the chatbot’s responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Read more at Reuters.





What if this becomes common?

https://www.theregister.com/2025/11/17/asia_tech_news_roundup/

Jaguar Land Rover hack cost India's Tata Motors around $2.4 billion and counting

India’s Tata Motors, owner of Jaguar Land Rover, has revealed the cyberattack that shut down production in the UK has so far cost it around £1.8 billion ($2.35 billion).

The company last week posted results for the quarter ended September 30th, and revealed it incurred exceptional costs of £196 million ($258 million) as a direct consequence of the attack, and saw revenue fall year-over-year from £6.5 billion to £4.9 billion ($8.5bn to $6.4bn).

The company’s results would have been worse, were it not for sales growth in India.





Tools & Techniques.

https://www.zdnet.com/article/how-to-vibe-code-your-first-iphone-app-with-ai-no-experience-necessary/

How to vibe code your first iPhone app with AI - no experience necessary

But in this article, I'll show you how to use an AI to generate your very first, very, very basic iPhone app. We're going to do it step by step, screenshot by screenshot, so all you have to do is follow along.



Sunday, November 16, 2025

AI “helpers.” I’m not sure this idea will work.

https://openyls.law.yale.edu/entities/publication/794e6d6c-abeb-4002-80e3-7f1c5f19c477

Law-Following AI: Designing AI Agents to Obey Human Laws

Artificial intelligence (AI) companies are working to develop a new type of actor: "AI agents," which we define as AI systems that can perform computer-based tasks as competently as human experts. Expert-level AI agents will likely create enormous economic value but also pose significant risks. Humans use computers to commit crimes, torts, and other violations of the law. As AI agents progress, therefore, they will be increasingly capable of performing actions that would be illegal if performed by humans. Such lawless AI agents could pose a severe risk to human life, liberty, and the rule of law. Designing public policy for AI agents is one of society's most important tasks. With this goal in mind, we argue for a simple claim: in high-stakes deployment settings, such as government, AI agents should be designed to rigorously comply with a broad set of legal requirements, such as core parts of constitutional and criminal law. In other words, AI agents should be loyal to their principals, but only within the bounds of the law: they should be designed to refuse to take illegal actions in the service of their principals. We call such AI agents "Law-Following AIs" (LFAI).

The idea of encoding legal constraints into computer systems has a respectable provenance in legal scholarship. But much of the existing scholarship relies on outdated assumptions about the (in)ability of AI systems to reason about and comply with open-textured, natural-language laws. Thus, legal scholars have tended to imagine a process of "hard-coding" a small number of specific legal constraints into AI systems by translating legal texts into formal machine-readable computer code. Existing frontier AI systems, however, are already competent at reading, understanding, and reasoning about natural-language texts, including laws. This development opens new possibilities for their governance. Based on these technical developments, we propose aligning AI systems to a broad suite of existing laws as part of their assimilation into the human legal order. This would require directly imposing legal duties on AI agents. While this would be a significant change to legal ontology, it is both consonant with past evolutions (such as the invention of corporate personhood) and consistent with the emerging safety practices of several leading AI companies.

This Article aims to catalyze a field of technical, legal, and policy research to develop the idea of law-following AI more fully. It also aims to flesh out LFAI's implementation so that our society can ensure that widespread adoption of AI agents does not pose an undue risk to human life, liberty, and the rule of law. Our account and defense of law-following AI is only a first step and leaves many important questions unanswered. But if the advent of AI agents is anywhere near as important as the AI industry supposes, then law-following AI may be one of the most neglected and urgent topics in law today, especially in light of increasing governmental adoption of AI.
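As a toy illustration of the central idea, an agent that is loyal to its principal only within the bounds of the law, one can imagine a wrapper that screens each proposed action against a natural-language statement of the relevant duties before executing it. The sketch below is not the authors' design; call_llm and the duty list are hypothetical placeholders.

```python
# Toy sketch of a "law-following" gate: before executing an action for its
# principal, the agent asks a model to screen the action against a
# natural-language statement of legal duties and refuses if it is unlawful.
# Not the Article's design; call_llm() and LEGAL_DUTIES are placeholders.

LEGAL_DUTIES = """\
- Do not access computer systems without authorization.
- Do not create or transmit fraudulent documents.
- Do not destroy records relevant to an ongoing investigation.
"""

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns the model's text response."""
    raise NotImplementedError

def is_action_lawful(action_description: str) -> tuple[bool, str]:
    verdict = call_llm(
        "Screen an AI agent's proposed action against these duties:\n"
        f"{LEGAL_DUTIES}\n"
        f"Proposed action: {action_description}\n"
        "Reply 'lawful' or 'unlawful: <which duty it violates>'."
    ).strip().lower()
    return (not verdict.startswith("unlawful"), verdict)

def execute_for_principal(action_description: str, run_action):
    """Stay loyal to the principal only within the bounds of the law:
    refuse rather than carry out an instruction judged illegal."""
    lawful, reason = is_action_lawful(action_description)
    if not lawful:
        return f"Refused: {reason}"
    return run_action()
```

The interesting design question the Article raises is exactly what goes into that duty list and how the screening model reasons about open-textured legal language, rather than hard-coded rules.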





Worth taking a peek…

https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1380&context=mhlr

Generative AI as Courtroom Evidence: A Practical Guide

You are the lawyer in a case in which the crucial incident was captured by dozens of smartphone, surveillance, and other cameras. Imagine your forensic video expert putting all of those videos into a generative artificial intelligence (GenAI) model that quickly synchronizes the audio and video streams, links relevant documents, and provides an outline for the strategy of your case—enabling you to understand exactly what happened in minutes instead of weeks and then suggesting ways to prove it at trial. The expert could also employ GenAI to enhance those videos, making relevant facts clearer by rendering blurry images more legible and inaudible conversations more intelligible, or even by creating important camera angles showing views not found in the original images. Or imagine, in a complex commercial dispute, feeding masses of documents and other data into a GenAI model that produces timelines and other visualizations of the relevant events, as well as lists of inherent contradictions in the evidence, which you could then use to prepare your arguments and illustrate your theory of the case in court. All of these tools and more will soon be available.