Honest cops?
https://brooklynworks.brooklaw.edu/blr/vol91/iss2/6/
Police
and AI: When Abundantly Helpful Becomes Intrinsically Harmful
Artificial
intelligence (AI) has rapidly crept into nearly all aspects of life,
including in government, the criminal justice system, and policing.
While Supreme Court Due Process jurisprudence has outlined certain
boundaries for police interrogations, much police conduct is left for
the states to regulate. Such regulation is sporadic and less
restrictive than the public might assume, especially in the realm of
police deception. Across jurisdictions, courts
allow police to deceptively inform suspects that a witness
identified the suspect as the perpetrator of a crime, or that the
suspect’s fingerprints, DNA, or shoe prints were found at the
scene of the crime. Police can even present fake evidence to
suspects in an interrogation, including falsified lab reports,
photographs, and more. With AI’s use expanding into law
enforcement, there is a clear need to regulate police deception in
interrogations before constitutional rights are infringed. This Note
argues that while courts have long permitted various deceptive police
tactics, the increasing sophistication and accessibility of AI tools
pose unprecedented risks such as false confessions, bias, and
potentially unwarranted public reprimand. Through an analysis of
case law, the evolution of Miranda and Due Process jurisprudence, and
emerging AI applications in policing, this Note demonstrates how
AI-enabled deception could exacerbate Due Process violations,
undermine public trust, and increase wrongful convictions. It
concludes by urging state legislatures to preemptively prohibit the
use of AI to create false evidence in interrogations, advocating for
a state-by-state legislative approach as the most effective means to
safeguard constitutional protections in a rapidly evolving world.
Law is a
Matrix?
https://scholarworks.uark.edu/arlnlaw/31/
Prompt
Engineering For Lawyers: Blue Pill Or Red Pill: Hallucinations Risks
And An Introduction To Prompt Engineering
In The Matrix,
Neo’s choice between the blue pill and the red pill is essentially
a choice between a comfortable illusion and an unsettling reality.
Lawyers now face a similar decision with artificial intelligence.
They can take the blue pill: ignore artificial intelligence or treat
it like just another search engine, clinging to the comfortable
illusion that the new technology will not transform the practice of law. Or
lawyers can take the red pill: acknowledge that artificial
intelligence will transform the practice of law and learn how to use
it competently, ethically, and effectively.
This Article
is for those who choose the red pill. It begins with the problem of
hallucinations, which make blind reliance on artificial intelligence
a professional hazard, and then turns to the first step in using
artificial intelligence productively: understanding how it differs
from Googling. When artificial intelligence is approached as a
role-playing collaborator, such as a litigator, contract drafter, or
judge, lawyers can enhance the accuracy, tone, and usefulness of the
responses it provides.
Outside the
box?
https://ojs.scipub.de/index.php/MSC/article/view/8331
THE
PROBLEM OF THE CONSTITUTIONAL AND LEGAL REGULATION OF ARTIFICIAL
INTELLIGENCE
This article
examines the constitutional and legal problems arising against the
background of the rapid development of artificial intelligence (AI),
as well as the new realities generated by digital transformation. It
offers a comparative analysis of the advanced constitutional
practices of countries such as Chile, Greece, Mexico, and Brazil in
the regulation of AI.
Referring to
the theoretical concepts of prominent international scholars such as
Lawrence Lessig, Frank Pasquale, and Mireille Hildebrandt, the
article explores the principles of “code as law” and “legal
protection by design.”
At the same
time, it interprets the fundamental threats posed by AI in the
spheres of algorithmic discrimination, the privacy of personal data,
and neuro-rights.
The article
proposes the application of a strict liability model within the civil
law system of Azerbaijan for the compensation of damage caused by AI
and suggests recognizing AI as an “autonomous source of risk.”
In conclusion, it advances strategic solutions aimed at ensuring that
national legislation evolves on the basis of the principles of
digital constitutionalism and that the supremacy of human will over
program code is preserved.
Maury Nichols
points me to another interesting article.
https://www.straitstimes.com/multimedia/graphics/2026/04/ai-chatbots-privacy-risk/index.html?ref=thefuturist
Marcus
asks AI chatbots various questions. The questions seem entirely
harmless, but they can tell the chatbots a lot about him.
Modern war.
https://carnegieendowment.org/research/2026/04/ukraine-russia-war-changing-warfare-practice-military-strategy
The
New Revolution in Military Affairs
How
Ukraine is driving doctrinal change in modern warfare.