Is the assumption that technology
has never replaced humans?
https://thenextweb.com/news/china-court-ai-layoffs-illegal-labor-law
A Chinese court has ruled that firing a worker because an AI can do their job is
illegal. No Western country has done the same.
Dealing with
AI as evidence…
https://jurnalius.ac.id/ojs/index.php/jurnalIUS/article/view/1880
Admissibility
of Artificial Intelligence as Electronic Evidence: Comparative
Perspectives from Indonesia, the United States, and Japan
Artificial
Intelligence (AI) is increasingly integrated into digital forensic
and evidentiary processes, raising unresolved doctrinal questions in
criminal procedure law. In Indonesia, although electronic evidence
is formally recognized, the law does not yet provide specific
admissibility standards for AI-based materials, particularly
regarding authenticity, methodological reliability, process
traceability, explainability, and accountability. This study
examines the admissibility of AI as electronic evidence in Indonesia
and compares it with legal approaches in the United States and Japan.
It employs a normative juridical method using statutory,
conceptual, and comparative approaches to analyze the evidentiary
frameworks of the three jurisdictions. The findings show that the
United States emphasizes expert gatekeeping and digital
authentication, while Japan adopts a softer regulatory model centered
on traceability, documentation, and actor accountability. By
contrast, Indonesia still lacks specific procedural standards for
assessing AI-generated outputs beyond the general recognition of
electronic evidence. This article argues that the key legal issue is
no longer whether electronic evidence is admissible in general, but
how AI-based evidence should be evaluated in a legally reliable and
accountable manner. The scientific contribution of this study lies
in proposing a five-parameter evaluative model for AI
admissibility—covering authenticity and integrity, process
traceability, model performance, identity verification, and
accountability. This model is offered as a normative reference for
future reform of the Criminal Procedure Code and the Electronic
Information and Transactions Law, while safeguarding legal certainty
and justice.
Too much data?
Trust AI to find the interesting bits?
https://ijlr.iledu.in/wp-content/uploads/2026/04/V6I555.pdf
ARTIFICIAL
INTELLIGENCE AS A TOOL FOR EVIDENCE AND INVESTIGATION IN
INTERNATIONAL CRIMINAL LAW
Artificial
intelligence (AI) is changing how international criminal
investigators collect, sort, authenticate, and present evidence. The
shift is driven by the digital turn in atrocity documentation:
conflicts now generate enormous volumes of user-generated videos,
social-media posts, satellite images, geolocation data, intercepted
communications, and sensor-derived material. International criminal
law (ICL), however, remains anchored in fair-trial guarantees,
adversarial testing, and cautious evidentiary assessment. This
article examines AI as a practical investigative tool rather than as
a substitute decision-maker. It argues that AI is most valuable in
five functions: triage of large datasets, pattern detection, linkage
analysis, authenticity checks, and courtroom visualization. Drawing
on recent ICC practice, open-source investigation standards, and
contemporary scholarship, the article shows that AI can strengthen
accountability when deployed inside a rigorous legal framework. Yet
it also identifies serious risks: bias in training data, black-box
outputs, synthetic media, privacy intrusions, chain-of-custody gaps,
and unequal technological
capacities between prosecution and defense. [Isn’t that always the
case? Bob] The central claim is that AI should be used
as an assistive layer under strong human oversight. In ICL, the
measure of success is not whether AI is impressive, but whether it
produces evidence that is reliable, explainable, contestable, and
consistent with the rights of the accused and the interests of
victims.
Feel the heat?
https://www.reuters.com/legal/litigation/us-judge-says-senior-lawyers-must-pay-mistakes-by-subordinates-using-ai-tools-2026-05-01/
US
judge says senior lawyers must pay for mistakes by subordinates using
AI tools
A
federal judge has sanctioned the manager of a California law firm
over a junior attorney's artificial intelligence-assisted court brief
that contained a false case citation, saying the responsibility for
such errors extends to supervising lawyers.
U.S.
Magistrate Judge Peter Kang in San Francisco said in an order on Tuesday
that the attorney, Lenden Webb, should have exercised greater oversight
of a lawyer in his small law office who said she used AI to help
craft the brief.