AI
weapons: We’ve had them all along.
https://techcrunch.com/2024/01/13/anthropic-researchers-find-that-ai-models-can-be-trained-to-deceive/
Anthropic
researchers find that AI models can be trained to deceive
Most
humans learn the skill of deceiving other humans. So can AI models
learn the same? Yes, the answer seems — and terrifyingly, they’re
exceptionally good at it.
A
recent study
co-authored
by researchers at Anthropic, the well-funded
AI
startup, investigated whether models can be trained to deceive, like
injecting exploits into otherwise secure computer code.
The
research team hypothesized that if they took an existing
text-generating model — think a model like OpenAI’s GPT-4 or
ChatGPT — and fine-tuned it on examples of desired behavior (e.g.
helpfully answering questions) and deception (e.g. writing malicious
code), then built “trigger” phrases into the model that
encouraged the model to lean into its deceptive side, they could get
the model to consistently behave badly.
We’re
going to use them.
https://ojs.journalsdg.org/jlss/article/view/2443
Criminal
Responsibility for Errors Committed by Medical Robots: Legal and
Ethical Challenges
This
study aims to know Criminal Responsibility for Errors Committed by
Medical Robots, where the use of robots in healthcare and medicine
has been steadily growing in recent years. Robotic surgical systems,
robotic prosthetics, and other assistive robots are being into
patient care. However, these autonomous systems also carry risks of
errors and adverse events resulting from mechanical failures,
software bugs, or other technical issues. When such errors occur and
lead to patient harm, it raises complex questions around legal and
ethical responsibility
Traditional
principles of criminal law have not been designed to address the
issue of liability for actions committed by artificial intelligence
systems and robots. There are open questions around whether
autonomous medical robots can or should be held criminally
responsible for errors that result in patient injury or death. If
criminal charges cannot be brought against the robot itself, legal
responsibility could potentially be attributed to manufacturers,
operators, hospitals, or software programmers connected to the robot.
However, proving causation and intent in such cases can be very
difficult.
Hacking the
Terminator. (Or an autonomous drone?)
https://academic.oup.com/jcsl/advance-article/doi/10.1093/jcsl/krad016/7512115
Can
Autonomous Weapon Systems be Seized? Interactions with the Law of
Prize and War Booty
The
military has often been used as a proving ground for advances in
technology. With the advent of machine learning, algorithms and
artificial intelligence, there has been a slew of scholarship around
the legal and ethical challenges of applying those technologies to
the military. Nowhere has the debate been fiercer than in examining
whether international law is resilient enough to impose individual
and State responsibility for the misuse of these autonomous weapon
systems (AWSs). However, by introducing increasing levels of
electronic and digital components into weapon systems, States are
also introducing opportunities for adversaries to hack, suborn or
take over AWSs in a manner unthinkable compared to conventional
weaponry. Yet, no academic discussion has considered how the law of
prize and war booty might apply to AWSs that are captured in such a
way. This article seeks to address this gap.
Perspective.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4688156
The
Interplay Between Artificial Intelligence and the Law and the Future
of the Law-Machine Interface
Since the
early 1970s, and especially in the last decade, commentators have
widely explored how artificial intelligence (AI) will affect the
legal system. Will intelligent machines replace—or at least
displace—judges, lawyers, prosecutors and law enforcement
personnel? Will computers powered by ever-improving AI technology
pass bar exams? Will lawyers use this new technology in daily
practice to save time and money even when it may "hallucinate"—or,
more precisely, when it may cite wrong or non-existent cases? Will
greater AI deployment affect the future development of law and legal
institutions—if so, how? Will such deployment drastically reduce
legal costs and thereby improve access to justice? Or will it
instead undermine democratic governance and the rule of law?
Finally, are we heading toward what one commentator has called "legal
singularity"—or, worse, what another has referred to as the
"end of law"?
A few years
ago, I wrote a couple of law review articles discussing whether AI
systems can be effectively deployed to analyze whether an
unauthorized use of a copyrighted work would constitute fair use.
Based on these analyses, I further explored whether we could draw
some useful lessons on the interplay between AI and the law and what
I termed the "law-machine interface." A focus on this
interface is important because we are increasingly functioning in a
hybrid world in which humans and machines work alongside each other.
Commissioned for the Research Handbook on the Law of Artificial
Intelligence, this chapter collects those lessons that are relevant
to the future development of law and legal institutions. The chapter
specifically discusses the interplay between AI and the law in
relation to law, the legislature, the bench, the bar and academe.