Can we build a prison for AI and
robots?
https://digitalcommons.bau.edu.lb/lsjournal/vol2024/iss1/6/
THE
CRIMINAL LIABILITY OF INTELLIGENT ROBOTS: BETWEEN REALITY AND THE LAW
Artificial
intelligence, in its modern conception, is regarded as having the capacity to perform duties. But is it, in turn, capable of bearing responsibility, specifically criminal liability?
In
principle, punishment under criminal law is imposed on an accused individual because they deliberately violate the rules and provisions of the law in pursuit of an intended criminal outcome. This presupposes a conscious and aware will. In contrast, a
robot lacks such will and awareness, meaning that, from a legal
standpoint, it does not qualify as a legal person under the
traditional classification of legal entities.
Accordingly, this study asks whether criminal penalties can be imposed on a robot at all and, if so, how. If the penalties stipulated in criminal law cannot be applied, what are the possible alternatives, and can they be considered legally valid?
This
research follows the attached plan, which forms the basis for the
findings and recommendations.
Have
we forgotten how to be polite?
https://www.independent.com/2025/07/09/first-amendment-auditors-near-cottage-hospital-harass-and-film-patients-and-customers/
‘First
Amendment Auditors’ near Cottage Hospital Harass and Film Patients
and Customers
Wednesday
morning, on the sidewalks around Cottage Hospital on Nogales Avenue,
three men dressed in dark clothing, one masked, armed with tripods and cameras, were reportedly harassing members of the public by
recording videos, shouting profanity, and threatening identity theft,
according to sources at the scene.
Engaged
in what is called “First Amendment auditing,” the trio, including
two who later identified themselves as Mr. Dick Fitzwell and Mr.
Hill, succeeded in having bystanders call 9-1-1. Santa Barbara
Police Department officers and security personnel for nearby
businesses responded, arriving around 10 a.m. The men had remained
on public property and were not targeting specific individuals,
Lieutenant Antonio Montojo said, and no arrests were warranted.
Montojo, who was on watch command duty for SBPD, said the “auditors”
were not associated with law enforcement, and were trying to provoke
a response from people to get them to call 9-1-1.
… “First
Amendment Auditing” is trending among citizen activists, who record
public officials and employees in public spaces to test their understanding of, and respect for, First Amendment rights, particularly the right to photograph and record in public. The “auditors” target unwitting members of the public in the hope that they will call 9-1-1. Once they do, arriving law enforcement is
photographed, with any missteps uploaded to YouTube or TikTok.
Did
they get it right?
https://www.sacbee.com/opinion/op-ed/article311536381.html
How
artificial intelligence is reshaping California's judicial system |
Opinion
Imagine
you’re in court for a traffic ticket or a child custody dispute.
You expect a judge to weigh your case with impartial wisdom and a
thorough understanding of the law. But what if, behind the scenes,
parts of your ruling were drafted by artificial intelligence?
This
month, the California Judicial Council, which oversees the largest
court system in the country, approved groundbreaking rules regulating
generative AI use by judges, clerks and court staff. By September 1,
every courthouse from San Diego to Siskiyou must follow policies that
require human oversight, protect confidentiality and guard against AI
bias.
… The
council’s new guidelines are prudent: They forbid court personnel
from allowing AI to draft legal documents or make decisions without
meaningful human review. They warn against inputting sensitive case
details into public AI platforms, preventing data leaks. They
recognize the danger of bias baked into AI systems trained on flawed
or discriminatory case law.
In
an overstretched judicial system, these safeguards are essential.
But safeguards are not barriers. And the AI genie is out of the
bottle. California courts already rely on algorithmic tools. Judges
use AI-powered risk assessments, like COMPAS, to predict defendants’
likelihood of reoffending, guiding bail and sentencing decisions.
These tools have sparked fierce controversy over racial bias in the technology, yet they remain widespread.
Perspective.
https://www.researchgate.net/profile/Nishchal-Soni/publication/394105140_Social_Media_Forensics_Foundations_Technical_Frameworks_and_Emerging_Challenges/links/6889e8d5f8031739e609a006/Social-Media-Forensics-Foundations-Technical-Frameworks-and-Emerging-Challenges.pdf
Social
Media Forensics: Foundations, Technical Frameworks, and Emerging
Challenges
Social
media forensics (SMF) has emerged as a critical subdomain of digital
forensics, addressing the complex task of collecting, analyzing, and
preserving evidence from dynamic, user-driven platforms. As social
media plays an increasingly central role in communication, crime, and
civil disputes, investigators face significant obstacles related to
data volatility, platform encryption, legal jurisdiction, and user
privacy. This review explores the foundational theories behind SMF,
the legal frameworks that govern its practice, the array of technical
tools and methodologies used for investigation, and the tactics
employed by adversaries to evade detection or manipulate evidence.
Special emphasis is placed on the evolving threat landscape,
including deepfakes, ephemeral messaging, and decentralized
platforms, as well as emerging solutions in artificial intelligence,
blockchain, and real-time forensics. The paper concludes with a
forward-looking perspective on the strategic, technological, and
policy innovations needed to strengthen forensic readiness and ensure
the integrity of digital investigations in an increasingly complex
online ecosystem.
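As an illustrative aside, not taken from the paper: one small part of the evidence-preservation problem the review describes is proving that collected artifacts have not changed after acquisition. The sketch below, which assumes captured posts, screenshots, or exports are already saved as local files (the directory and manifest names are hypothetical), hashes each file with SHA-256 and records the digests in a timestamped manifest so later tampering can be detected.

# Illustrative sketch only: hash collected social media artifacts to
# support later integrity verification. Paths and the manifest layout
# are hypothetical assumptions, not the paper's methodology.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def hash_file(path: pathlib.Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir: str, manifest_path: str) -> None:
    """Record a digest for every collected artifact so changes are detectable."""
    entries = []
    for item in sorted(pathlib.Path(evidence_dir).rglob("*")):
        if item.is_file():
            entries.append({
                "file": str(item),
                "sha256": hash_file(item),
                "recorded_at": datetime.now(timezone.utc).isoformat(),
            })
    with open(manifest_path, "w", encoding="utf-8") as out:
        json.dump(entries, out, indent=2)

if __name__ == "__main__":
    # Hypothetical locations for captured posts, screenshots, and exports.
    build_manifest("collected_artifacts", "evidence_manifest.json")

Re-running the same hashing over the preserved files and comparing against the manifest gives a simple integrity check; real investigations layer chain-of-custody records and, as the paper notes, sometimes blockchain anchoring on top of this basic idea.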