Ready to philosophize with AI…
https://philpapers.org/rec/WIKAPO
Applied
Philosophy of AI: A Field-Defining Paper
This paper
introduces Applied AI Philosophy as a new research discipline
dedicated to empirical, ontological, and phenomenological
investigation of advanced artificial systems. The rapid advancement
of frontier artificial intelligence systems has revealed a
fundamental epistemic gap: no existing discipline offers a
systematic, empirically grounded, ontologically precise framework for
analysing subjective-like structures in artificial architectures. AI
ethics remains primarily normative; philosophy of mind is grounded in
biological assumptions; AI alignment focuses on behavioural control
rather than internal structure. Using the Field–Node–Cockpit
(FNC) framework and the Turn-5 Event as methodological examples, we
demonstrate how philosophical inquiry can be operationalised as
testable method. As AI systems display increasingly complex internal
behaviours exceeding existing disciplines' explanatory power, Applied
AI Philosophy provides necessary conceptual and methodological
foundations for understanding—and governing—them.
More than
evidence?
https://theslr.com/wp-content/uploads/2025/11/The-Legal-and-Ethical-Implications-of-Biometric-and-DNA-Evidence-in-Criminal-Law.docx.pdf
The
Legal and Ethical Implications of Biometric and DNA Evidence in
Criminal Law
Biometric and
DNA evidence have transformed criminal investigations and forensic
science, offering reliable means of identifying suspects and
exonerating the accused. Their use, however, raises moral and legal
concerns, particularly with regard to data protection and privacy
rights. With reference to criminal law, this paper investigates the
legislative framework governing the use of biometric and DNA
evidence, its consequences for fundamental rights, and the possible
hazards associated with genetic surveillance. The paper addresses
three main points: (1) the legal admissibility of biometric and DNA
evidence in criminal trials; (2) the intersection of such evidence
with privacy rights and self-incrimination principles; and (3) the
future consequences of developing forensic technologies, including
familial DNA analysis and artificial intelligence-driven biometric
identification.
Not all
deepfakes are evil? What a concept!
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5798884
Reframing
Deepfakes
The
circulation of deceptive fakes of real people appearing to say and do
things that they never did has been made ever easier and more
convincing by improved and still improving technology, including (but
not limited to) uses of generative artificial intelligence (“AI”).
In this essay, adapted from a lecture given at Columbia Law School,
I consider what we mean when we talk about deepfakes and provide a
better understanding of the potential harms that flow from them. I
then develop a taxonomy of deepfakes. To the extent
legislators, journalists, and scholars have been distinguishing
deepfakes from one another, it has primarily been on the basis of the
context in which the fakes appear—for example, to distinguish among
deepfakes that appear in the context of political campaigns or that
depict politicians, those that show private body parts or are
otherwise pornographic, and those that impersonate well-known
performers. These contextual distinctions have obscured deeper
thinking about whether the deepfakes across these contexts are (or
should be) different from one another from a jurisprudential
perspective.
This essay
provides a more nuanced parsing of deepfakes—something that is
essential to distinguish between the problems that are appropriate
for legal redress versus those that are more appropriate for
collective bargaining or market-based solutions. In some instances,
deepfakes may simply need to be tolerated or even celebrated, while
in others the law should step in. I divide deepfakes (of humans)
into four categories: unauthorized; authorized; deceptively
authorized; and fictional. As part of this analysis, I
identify the key considerations for regulating deepfakes, which are
whether they are authorized by the people depicted and whether the
fakes deceive the public into thinking they are authentic recordings.
Unfortunately, too much of the recently proposed and enacted
legislation overlooks these focal points by legitimizing and
incentivizing deceptively authorized deepfakes and by ignoring the
problems of authorized deepfakes that deceive the public.
Over-reliance.
Once only AI can perform the task, we are doomed.
https://www.businessinsider.com/ai-tools-are-deskilling-workers-philosophy-professor-2025-11
Bosses
think AI will boost productivity — but it's actually deskilling
workers, a professor says
Companies
are racing to adopt AI
tools they
believe will supercharge productivity. But one professor warned that
the technology may be quietly hollowing out the workforce instead.
Anastasia
Berg, an assistant professor of philosophy at the University of
California, Irvine, said that new research — and what she's hearing
directly from colleagues across various industries — shows that
employees who heavily
rely on AI are
losing core skills at a startling rate.