Should there be a similar line for legal misconduct?
https://www.tandfonline.com/doi/full/10.1080/08989621.2026.2645390#abstract
Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers
In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations may qualify as a provable instance of research misconduct under U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabricating the data (i.e., the citations) by failing to check the GenAI's output for veracity and accuracy. Other types of problematic citations, such as bibliometrically incorrect or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations can constitute research misconduct in certain cases will, we hope, encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders, and professional societies, the scholarly community should work to establish, promote, and enforce standards for the responsible use of AI in research, including standards for citation practices.
Who should be looking out for you? Your doctor, a nurse, or the guy from IT?
https://www.atlantis-press.com/proceedings/tfol-25/126022211
Surveillance Medicine and the Law
Artificial intelligence is quickly becoming embedded in healthcare systems around the world. As this happens, the promise of efficiency, predictability, and personalisation of care is frequently presented as a moral imperative. However, a growing body of evidence shows that AI-driven healthcare technologies can systematically undermine core principles of medical and legal ethics and, potentially, breach fundamental human rights. This study explores the deployment of AI in healthcare, specifically predictive algorithms, triage bots, and data-driven diagnostics, and examines how these technologies risk infringing upon the right to health and the right to non-discrimination.
Through the lens of critical legal studies, the study interrogates how these systems and technologies replicate and automate existing forms of inequality while hiding behind a veil of neutral language and innovation. Drawing upon case studies including UnitedHealth, Babylon Health, and DeepMind, it demonstrates how algorithmic health tools can exacerbate systemic problems such as racism, gender bias, and digital exclusion. It also explores how existing legal systems fail to challenge these harmful effects and instead reinforce entrenched power dynamics and data commodification under the banner of progress.
By critically re-examining the legal governance of AI in healthcare, this study calls for a reassertion of ethical and rights-based principles in emerging health technology regulation, focused not on market efficiency but on principles such as equality, autonomy, and human dignity.
AIs don’t think. (Yet)
https://journal.ijtrp.com/index.php/ijtrp/article/view/21
The Legal and Ethical Implications of AI in Judicial Decision-Making: Challenges to Fair Trial and Due Process
The incorporation of artificial intelligence (AI) into judicial systems has produced a paradigm shift in the discussion of law, justice, and governance. Although AI has succeeded in increasing productivity, simplifying case management, and assisting judges with research, its use in courtroom decision-making raises serious ethical and legal issues. At the heart of this debate are the constitutional protections of due process and fair trial, which shield individual rights from caprice and guarantee openness, impartiality, and accountability in decision-making. This paper examines the ethical and legal ramifications of using AI in judicial decision-making. It looks at how algorithmic tools, despite their promise of objectivity, may replicate or even worsen systemic biases present in training data, threatening the idea of equality before the law. The "black box problem," in which algorithms generate results without comprehensible reasoning, challenges the constitutional requirement of reasoned judgments and undermines public confidence in the legal system. Furthermore, serious concerns arise about who is responsible for incorrect or unfair outcomes when accountability is distributed between algorithmic systems and human judges. Using a comparative methodology, the study examines developments in China, India, the United States, and the European Union, highlighting both the advantages and the drawbacks of AI-driven adjudication, from the US controversy surrounding COMPAS risk-assessment tools to China's smart court experiment and India's cautious use of AI through SUPACE. It contends that although AI can increase judicial efficiency, human conscience, empathy, and interpretive reasoning, all essential components of justice, cannot be separated from adjudication. The paper concludes by suggesting safeguards such as regulatory frameworks, transparency standards, and a "human-in-the-loop" principle, to ensure that technological innovation does not undermine constitutional values but instead strengthens the accessibility, fairness, and credibility of judicial systems.
Simple and effective?
https://www.tmmm.tsk.tr/publication/researches/24-Emerging_Disruptive_TechnologiesandTerrorism.pdf#page=105
TERROR-AI-SM: THE FUTURE OF ARTIFICIAL INTELLIGENCE IN THE HANDS OF TERRORISTS
Terrorism remains one of the major challenges to international security. The past decade has witnessed a rapid convergence of two forces with profound implications for global stability: the accelerating capabilities of artificial intelligence and the persistent, adaptive threat of terrorism. What was once the realm of science fiction, autonomous machines making battlefield decisions and synthetic media manipulating public opinion, is now technically feasible and increasingly accessible to non-state actors. This convergence is already reshaping the threat landscape, compelling governments and international institutions to reconsider and adapt their counterterrorism frameworks to address the realities of an era in which terrorism and cutting-edge technology are inextricably linked.