If it exists, it’s taxable.
https://pogowasright.org/the-irs-says-your-digital-life-is-not-your-property/
The IRS Says Your Digital Life Is Not Your Property
Brent Skorup and Laura Bondank write:
When the IRS secretly demands your financial records and private
information from a third party, without a warrant, what rights do you
still have?
That’s the question at the heart of Harper v. O’Donnell, which is before the Supreme Court.
New Hampshire resident Jim Harper is fighting back against the IRS
after discovering he was swept up in a massive digital dragnet. The
case could redefine how the Fourth Amendment applies in the age of
cloud storage—and it may determine whether your emails, location
history, search queries, and financial records that tech companies
store on your behalf are treated as your property.
In 2016, the IRS ordered the cryptocurrency exchange Coinbase to hand
over transaction records of over 14,000 customers. Harper was among
them and only learned of the government’s records grab after the
IRS sent him a warning letter, mistakenly suggesting he’d
underreported his cryptocurrency income. He soon discovered the IRS
had his transaction logs, wallet addresses, and public keys—allowing
the agency to monitor any future transactions he made.
Harper hadn’t done anything wrong. He’d simply used a legal platform to
buy and sell cryptocurrency. But his digital footprint became
visible to the government overnight.
Read more at Reason.
Sorry, the AI says we shouldn’t waste time treating you.
https://www.researchgate.net/profile/John-Mathew-26/publication/391318390_Predictive_AI_Models_for_Emergency_Room_Triage/links/68121727ded43315573f521a/Predictive-AI-Models-for-Emergency-Room-Triage.pdf
Predictive AI Models for Emergency Room Triage
Emergency room (ER) triage is a critical process that prioritizes patients
based on the severity of their conditions, aiming to ensure timely
care in high-pressure environments. However, traditional triage
methods are often subjective and may lead to delays in treatment,
overcrowding, and suboptimal patient outcomes. This paper explores
the role of predictive Artificial Intelligence (AI) models in
enhancing ER triage by providing data-driven, real-time insights to
optimize decision-making, improve patient prioritization, and
streamline resource allocation. We examine various AI techniques,
including machine learning (ML), deep learning (DL), and natural
language processing (NLP), highlighting their application in
analyzing structured and unstructured data such as electronic health
records (EHRs), patient vital signs, medical imaging, and clinical
notes. The paper also discusses the importance of data
preprocessing, including handling missing values, data normalization,
and feature selection, to ensure accurate model predictions. Through
case studies and clinical implementations, we demonstrate how AI
models have been successfully integrated into real-world ER settings
to predict patient acuity, early deterioration, and patient outcomes.
Ethical, legal, and practical considerations such as data privacy,
algorithmic bias, and model transparency are also addressed. The
paper concludes with a discussion on the future directions of AI in
ER triage, including the integration of multimodal data, real-time
monitoring, and personalized care. Predictive AI has the potential
to significantly enhance ER efficiency and improve patient care,
making it a valuable tool for modern healthcare systems.
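For readers who want a concrete sense of what such a pipeline involves, here is a minimal, purely illustrative Python sketch (not from the paper; it assumes scikit-learn and invents synthetic vital-sign features and a toy acuity label) showing the preprocessing steps the abstract describes: imputing missing values, normalizing features, and selecting features before fitting a classifier.

# Hypothetical sketch of a triage-acuity classifier on synthetic vital-sign data.
# Feature names, thresholds, and the label rule are illustrative assumptions,
# not taken from the paper.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Synthetic vitals: heart rate, respiratory rate, systolic BP, SpO2, temperature, age
X = np.column_stack([
    rng.normal(90, 20, n),     # heart rate (bpm)
    rng.normal(18, 5, n),      # respiratory rate (breaths/min)
    rng.normal(120, 25, n),    # systolic blood pressure (mmHg)
    rng.normal(96, 3, n),      # oxygen saturation (%)
    rng.normal(37.0, 0.8, n),  # temperature (C)
    rng.integers(18, 90, n),   # age (years)
]).astype(float)

# Toy "high acuity" label: tachycardia combined with low SpO2 (purely illustrative)
y = ((X[:, 0] > 110) & (X[:, 3] < 94)).astype(int)

# Simulate missing vitals, as is common in real ER records
mask = rng.random(X.shape) < 0.05
X[mask] = np.nan

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # handle missing values
    ("scale", StandardScaler()),                    # normalize features
    ("select", SelectKBest(f_classif, k=4)),        # keep most informative features
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))

A real deployment would replace the synthetic arrays with structured EHR fields and vital-sign streams, and the evaluation would need clinical validation rather than a simple held-out split.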
AI is no big deal?
https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=1508&context=ncjolt
Liability for AI Agents
Artificial intelligence (“AI”) is becoming integral to modern life, fueling
innovation while presenting complex legal challenges. Unlike
traditional software, AI operates with a degree of autonomy,
producing outcomes that its developers or deployers cannot fully
anticipate. Advances in underlying technology have further enhanced
this autonomy, giving rise to AI agents: systems capable of
interacting with their environment independently, often with minimal
or no human oversight. As AI decision-making—like that of
humans—is inherently imperfect, its increasing deployment
inevitably results in instances of harm, prompting the critical
question of whether developers and deployers should be held liable as
a matter of tort law.
This question is frequently answered in the negative. Many scholars,
adopting a framework of technological exceptionalism, assume AI to be
uniquely disruptive. Citing the lack of transparency and
unpredictability of AI models, they contend that AI challenges
conventional notions of causality, rendering existing liability
regimes inadequate.
This Article offers the first comprehensive normative analysis of the
liability challenges posed by AI agents through a law-and-economics
lens. It begins by outlining an optimal AI liability framework
designed to maximize economic and societal benefits. Contrary to
prevailing assumptions about AI’s disruptiveness, this analysis
reveals that AI largely aligns with traditional products. While AI
presents some distinct challenges—particularly in its complexity,
opacity, and potential for benefit externalization—these factors
call for targeted refinements to existing legal frameworks rather
than an entirely new paradigm.
This holistic approach underscores the resilience of traditional legal
principles in tort law. While AI undoubtedly introduces novel
complexities, history shows that tort law has effectively navigated
similar challenges before.
For example, AI’s causality issues closely resemble those in medical
malpractice cases, where the impact of treatment on patient recovery
can be uncertain. The legal system has already addressed these
issues, providing a clear precedent for extending similar solutions
to AI. Likewise, while the traditional distinction between design
and manufacturing defects does not map neatly onto AI, there is a
compelling case for classifying inadequate AI training data as a
manufacturing defect—aligning AI liability with established legal
doctrine.
Taken together, this Article argues that AI agents do not necessitate a
fundamental overhaul of tort law but rather call for targeted,
nuanced refinements. This analysis offers essential guidance on how
to effectively apply existing legal standards to this evolving
technology.
Who really done it?
https://ijlr.iledu.in/wp-content/uploads/2025/04/V5I723.pdf
ARTIFICIAL INTELLIGENCE, LEGAL PERSONHOOD, AND DETERMINATION OF CRIMINAL LIABILITY
The broad adoption of artificial intelligence (AI) across vital domains, ranging from autonomous vehicles and financial markets to healthcare diagnostics and legal analytics, has exposed significant gaps in our legal systems when AI-driven errors or malfunctions cause harm. Autonomous systems often involve multiple stakeholders (hardware suppliers, software developers, sensor manufacturers, and corporate overseers), making it difficult to pinpoint who is responsible for a system’s failure. The 2018 Uber autonomous-vehicle crash in Tempe, Arizona, where a pedestrian was repeatedly misclassified by the AI’s perception module and the emergency braking function was disabled, underscores this challenge: with safety overrides turned off and state oversight minimal, liability became entangled among engineers, operators, and corporate policy, not the machine alone.
Traditional criminal law doctrines rest on actus reus (the guilty act) and mens
rea (the guilty mind), both premised on human agency and intent. AI
entities, however, can execute complex decision-making without
consciousness or moral awareness, creating a “responsibility gap”
under current frameworks. To bridge this gap, scholars like Gabriel
Hallevy have proposed three liability models—perpetration-via-another
(holding programmers or users accountable), the
natural-probable-consequence model (liability for foreseeable harms),
and direct liability (attributing responsibility to AI itself if it
meets legal thresholds for actus reus and an analogue of mens rea).
Each model offers insight but struggles with AI’s semi-autonomous
nature and opacity.
This paper argues against prematurely conferring legal personhood on AI, an approach that risks absolving human actors and diluting
accountability. Instead, it advocates for a human-centric policy
framework that combines clear oversight duties, mandated
explainability measures, and calibrated negligence or
strict-liability standards for high-risk AI applications. Such
reforms are especially urgent in jurisdictions like India, where AI
governance remains nascent. By anchoring liability in human
oversight and regulatory clarity rather than on machines themselves,
we can ensure that accountability evolves in step with AI’s growing
capabilities, safeguarding both innovation and public safety.