Good to see someone thinking about this…
https://ojs.stanford.edu/ojs/index.php/grace/article/view/4337
Regulating LLMs in Warfare: A U.S. Strategy for Military AI Accountability
Large language models (LLMs) are rapidly entering military workflows that shape intelligence synthesis, operational planning, logistics, cyber operations, and information activities, yet U.S. governance has not kept pace with their distinct risk profile. This memo argues that existing frameworks remain ill-suited to LLM-enabled decision support: international efforts under the UN Convention on Certain Conventional Weapons focus primarily on lethal autonomous weapons, while U.S. policy relies on high-level ethical principles that have not been operationalized into enforceable requirements for evaluation, monitoring, logging, and lifecycle control. The paper identifies four core risks arising from LLM deployment in high-consequence contexts: inadvertent escalation driven by overconfident or brittle recommendations under uncertainty; scalable information operations and disinformation; expanded security vulnerabilities, including data poisoning, prompt injection, and sensitive-data leakage; and accountability gaps when human actors defer responsibility to opaque model outputs. In response, the memo proposes a U.S. regulatory framework organized around four pillars: (1) human decision rights and escalation controls, including documented authorization for crisis-sensitive uses; (2) mandatory human review and traceability for information-operations content; (3) baseline security, data governance, and continuous adversarial testing for training and deployment pipelines; and (4) accountability mechanisms, including auditable logs and incident reporting overseen by an independent Military AI Oversight Committee. The memo concludes that LLM-specific guardrails complement, rather than displace, existing weapons-autonomy policy and would strengthen U.S. credibility in shaping international norms for responsible military AI. This paper was submitted to Dr. Cynthia Bailey's course CS121, Equity and Governance for Artificial Intelligence, at Stanford University.
More general than this lawyerly orientation.
https://mrquarterly.org/index.php/ojs/article/view/46
Artificial Intelligence and the Transformation of Legal Practice: From Automation to Augmented Lawyering
The rapid rise of artificial intelligence (AI) is transforming the legal profession worldwide. Rather than replacing lawyers, AI reshapes legal workflows, automating routine tasks such as research, document review, and contract analysis, while enhancing human judgment, ethics, and strategic decision-making. This article examines these changes through theoretical and empirical lenses, focusing on the French legal system. It highlights organizational shifts in law firms, including new governance structures, multidisciplinary teams, and AI management practices ensuring ethical compliance and data security. The article concludes that the future of law lies in human–machine collaboration, where AI augments lawyers' professional values of responsibility, trust, and justice: from automation to augmented lawyering.
Lawyers should have been doing this, right?
https://sd34.senate.ca.gov/news/reuters-california-senate-passes-bill-regulating-lawyers-use-ai
Reuters - California Senate passes bill regulating lawyers' use of AI
A bill passed on Thursday by the California Senate would require lawyers in the state to verify the accuracy of all materials produced using artificial intelligence, including case citations and other information in court filings.
The measure, which appears to be one of the first pending in a state legislature on the use of AI by lawyers, has gone to the State Assembly for consideration.
In addition to governing California lawyers' use of AI, the bill prohibits arbitrators presiding over out-of-court disputes from delegating decision-making to generative AI and from relying on information produced by AI outside case records without first telling the parties involved.
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB574