Could any evaluation of an attack
happen before the attack?
https://www.tandfonline.com/doi/full/10.1080/15027570.2025.2569130
Robocop
Reimagined: Harnessing the Power of AI for LOAC Compliance
This article
is intended as a contribution to the growing literature on the
potential benefits of military applications of AI for ensuring
compliance with the Law of Armed Conflict. Drawing on foundational
notions of the philosophy of mind and legal philosophy, the article
proposes introducing a secondary LOAC-compliance
software, the “e-JAG”, to police the results
offered by primary targeting software while
remaining always under human control. Overall, this can be
considered a positive redundancy: an additional
guardrail strengthening the precautions in attack that militaries
are legally obligated to implement.
Perhaps we are
not ready for an automated legal system.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5585290
Law,
Justice, and Artificial Intelligence
Judges,
physicians, human resources managers, and other decision-makers often
face a tension between adhering to rules and taking into account the
specific circumstances of the case at hand. Increasingly, such
decisions are supported by—and soon may even be made by—artificial
intelligence, including large language models (LLMs). How LLMs
resolve these tensions is therefore of paramount importance.
Specifically,
little is known about how LLMs navigate the tension between applying
legal rules and accounting for justice. This study compares the
decisions of GPT-4o, Claude Sonnet 4, and Gemini 2.5 Flash with those
of laypersons and legal professionals, including judges, across six
vignette-based experiments comprising about 50,000 decisions.
We find that,
unlike humans, LLMs do not
balance law and equity: when instructed to follow the law, they
largely ignore justice in both their decisions and reasoning; when
instructed to decide based on justice, they disregard legal rules.
Moreover, in contrast to humans, requiring reasons or providing
precedents has little effect on their responses. Prompting LLMs to
consider litigant sympathy, or asking them to predict judicial
decisions rather than make them, somewhat reduces their formalism, but
they remain far more rigid than humans.
Beyond their
formalism, LLMs exhibit far less variability ("noise") than
humans. While greater consistency is generally a virtue in
decision-making, the article discusses its shortcomings as well. The
study introduces a methodology for evaluating current and future LLMs
in settings where no single demonstrably correct answer exists.
Prosecuting
the Terminator?
https://www.pzhfars.ir/article_231761_en.html?lang=fa
Civil
Liability for Robots and Artificial Intelligence: Legal Challenges
and Solutions in the Age of New Technologies
New
technologies, especially robots and artificial intelligence, have
created extensive transformations in social, economic, and industrial
life. However, the rapid development of these technologies has
created numerous challenges in the field of civil liability, which
traditional legal systems cannot easily adapt to. The main question
is how to explain and regulate civil liability arising from damages
or injuries attributed to robots and artificial intelligence systems
based on current civil law. This
research aims to investigate the legal challenges of civil liability
for robots and artificial intelligence and to provide innovative
legal solutions. The research method was
descriptive-analytical and comparative, and the topic was analyzed by
reviewing legal sources, international documents, and comparative
studies. The research findings show that ambiguities related to
determining the culprit, proving fault, and direct liability of
robots are among the most important legal issues in this field and
require the development of specific rules and regulations based on
the characteristics of intelligent technologies. The innovative
aspect of this research lies in providing a comparative framework and
proposing solutions tailored to technological developments
and the country's legal system. Finally, to guarantee the rights of
individuals and protect the public interest, it is necessary to amend
and update civil liability laws, and legal and judicial institutions
must do their utmost in this regard.
Searching for
evidence of a specific crime, or anything that looks suspicious?
https://www.cnet.com/home/security/amazons-ring-cameras-push-deeper-into-police-and-government-surveillance/
Amazon's
Ring Cameras Push Deeper Into Police and Government Surveillance
Less
than two years after removing
a feature that
made it easier for law enforcement agencies to request footage from
owners of Ring doorbells and other security products, Amazon has
partnered with two companies that will help facilitate the same kinds
of requests.
Two
weeks after rolling
out a new product line for 2025,
Ring, owned by Amazon, announced
a partnership with Flock Safety
as part of its expansion of the Community
Requests feature in the Ring Neighbors app.
Atlanta-based Flock is a police technology company that sells
surveillance technology, including drones, license-plate reading
systems and other tools. The announcement follows a partnership Ring
entered into with Axon,
previously Taser International, which also builds tools for police
and military applications.