Unique approach? (Is there an algorithmic version of mens rea?)
https://www.atlantis-press.com/proceedings/icsliai-26/126022002
When AI Breaks the Law: Rethinking Mens Rea in the Age of Autonomous Systems
Artificial Intelligence (AI) has quickly shifted from the periphery of fantasy to the core of human decision-making, challenging the very principles on which criminal law has always been built. At the crux of this tension is mens rea, the guilty mind on which intention, awareness, and moral agency are premised. AI systems, however, lack consciousness and emotion, yet their autonomous actions can cause real-world damage for which a human agent cannot easily be identified. This conflict reveals a growing fault line in the law: how can a system built to judge human culpability respond when the culpability is the algorithm's?
The paper examines how the disruptions caused by the rise of AI affect the traditional principles of criminal liability under Indian law, especially the Bhartiya Nyaya Sanhita, 2023. It analyses the conceptual impossibility of assigning mens rea to machines, the practical ineffectiveness of existing laws, and the dangers of uncontrolled and unregulated autonomy. Drawing on the comparative experience of other jurisdictions, the paper charts a way forward through reform, including the clarification of AI-specific definitions, the creation of risk-based negligence standards, and the development of accountability models that locate responsibility in human actors rather than constraining innovation. Finally, it argues that the criminal justice system in India must evolve wisely, without sacrificing fairness, accountability, and the rule of law, in a world in which algorithms increasingly define human destinies.
Our AI rights…
https://digitalcommons.law.mercer.edu/jour_mlr/vol77/iss2/7/
From Phone Booths to Digital Booths: Rethinking Fourth Amendment Privacy in the Age of Open Source Intelligence
The use of
Open Source Intelligence (“OSINT”) by the U.S. intelligence
community marks a paradigm shift in national security practices,
leveraging vast troves of publicly available and commercially
acquired data. Yet this shift raises urgent constitutional questions
regarding the applicability of the Fourth Amendment’s protections
in the digital age. As OSINT practices increasingly rely on
sophisticated aggregation techniques and artificial intelligence
tools, the line between publicly available information and
constitutionally protected privacy interests begins to blur. This
Article critically examines whether certain forms of OSINT collection
and analysis, particularly those that aggregate digital data at scale
or use predictive algorithms, may constitute an unreasonable search
or seizure under the Fourth Amendment.
Relying on an
evolving body of caselaw, this Article argues that the long‑standing
Third Party Doctrine is increasingly ill‑suited for the
realities of the modern digital age. It explores how the aggregation
of seemingly public data can reveal deeply private patterns,
behaviors, and insights, thereby implicating a reasonable expectation
of privacy under the Fourth Amendment. To help courts, the
intelligence community, and policymakers navigate this complex legal
terrain, this Article introduces a three‑part framework to
assess when OSINT practices risk constitutional infringement: (1)
whether the government obtains aggregated data, including
commercially available information, of a type and volume that
implicates a reasonable expectation of privacy; (2) whether advanced
technologies are employed to extract digital information that would
otherwise be unknowable through conventional means; and (3) whether
such technologies are used to enhance insights into areas where
courts have recognized a reasonable expectation of privacy. This
Article concludes by urging a more balanced approach that reflects
both the operational needs of the intelligence community and civil
liberties. As technology evolves and OSINT capabilities grow, courts,
the intelligence community, and policymakers must act to ensure that
the Fourth Amendment remains a meaningful safeguard, not an obsolete
artifact, in the digital era.
The first AI war?
https://www.researchgate.net/profile/Zaza-Tsotniashvili/publication/401535600_Algorithmic_Warfare_in_the_Iran_Conflict_AI-Driven_Decision_Compression_the_Erosion_of_Human_Oversight_and_Accountability_Gaps_in_Contemporary_Military_Operations/links/69a7e6ebceb31f79ab23081c/Algorithmic-Warfare-in-the-Iran-Conflict-AI-Driven-Decision-Compression-the-Erosion-of-Human-Oversight-and-Accountability-Gaps-in-Contemporary-Military-Operations.pdf
Algorithmic Warfare in the Iran Conflict: AI-Driven Decision Compression, the Erosion of Human Oversight, and Accountability Gaps in Contemporary Military Operations
Introduction/Background:
The joint United States–Israeli military offensive against Iran that commenced on February 28, 2026 (Operation Epic Fury/Operation Roaring Lion) produced an unprecedented operational tempo: nearly 900 strikes within the first twelve hours. What made this possible was
not merely superior firepower but the deep integration of artificial
intelligence (AI) into every phase of the kill chain. The Iran
conflict has thus emerged as the first large-scale armed
confrontation in which AI functioned not as a supporting analytical
tool but as a core operational component of military decision-making,
compressing targeting cycles from days to minutes and systematically
marginalizing substantive human deliberation.
Methods: This
article employs a critical analytical framework drawing on OSINT-based investigative reporting on Operation Epic Fury, the academic
literature on AI-enabled military targeting, documented AI
deployments in prior conflicts (Gaza, Ukraine), emerging scholarship
on the Iran-Israeli confrontation, international humanitarian law,
and analysis of corporate governance tensions between leading AI
developers and defense establishments.
Results: The
Iran conflict demonstrates three interlocking phenomena: first,
AI-driven decision compression that reduced multi-day planning cycles
to hours; second, the structural transformation of human oversight
into a performative 'rubber stamp', a formal authorization with no
substantive deliberative content; and third, the collapse of
corporate AI ethics under competitive military procurement pressure,
illustrated most sharply by the simultaneous events of February 28, 2026, when Anthropic was blacklisted by the Pentagon for refusing to remove constraints on autonomous weapons while its model was already embedded in Iran strike operations, and OpenAI immediately assumed its defense contracts.
Conclusions:
Current governance frameworks are structurally inadequate to address
the accountability gaps created by AI-assisted targeting. The Iran
conflict has rendered urgent the development of binding international
instruments that operationalize meaningful human control not as a
nominal designation but as an enforceable behavioral standard,
anchored in minimum deliberative time requirements and technical transparency mandates for AI decision-support systems (AI-DSS) used in lethal-force decisions.
(Related)
https://www.theguardian.com/world/2026/mar/07/it-means-missile-defence-on-data-centres-drone-strikes-raises-doubts-over-gulf-as-ai-superpower
‘It means missile defence on datacentres’: drone strikes raise doubts over Gulf as AI superpower
It is believed
to be a first: the deliberate targeting of a commercial datacentre by
the armed forces of a country at war.
At 4.30am on
Sunday morning, an Iranian Shahed 136 drone struck an Amazon Web
Services datacentre in the United Arab Emirates, setting off a
devastating fire and forcing a shutdown of the power supply. Further
damage was inflicted as attempts were made to suppress the flames
with water.
Soon after, a second datacentre owned by the US tech company was hit. Then a third was said to be in trouble, this time in Bahrain, after an Iranian suicide drone turned to a fireball on striking land nearby.
… Millions
of people in Dubai and Abu Dhabi woke up on Monday unable to pay for
a taxi, order a food delivery, or check their bank balance on their
mobile apps.
Whether there
was a military impact is unclear – but the strikes swiftly brought
the war directly into the lives of 11 million people in the UAE, nine
out of 10 of whom are foreign nationals. Amazon has advised its
clients to secure their data away from the region.
An Iranian hack or normal government incompetence?
https://www.theverge.com/policy/890904/trump-administration-cbp-tariff-refunds-technology-issues
The Trump administration says it can’t process tariff refunds because of computer problems
US Customs and Border Protection (CBP) says it currently can’t comply with an order
to process billions of dollars in refunds stemming from tariffs
imposed by President Donald Trump. In a filing on Friday, CBP
executive director Brandon Lord says the agency’s digital import
processing system is “not well suited to a task of this scale,”
as reported earlier by CNBC.
The CBP’s
admission comes after the Supreme Court struck down the tariffs
imposed by Trump under the International Emergency Economic Powers
Act (IEEPA) last month. This week, the International Trade Court
ruled that importers impacted by the tariffs are entitled to refunds
with interest. The CBP estimates that it collected around $166
billion in IEEPA duties as of March 4th, 2026.