Another security tool outflanked by the advance of technology.
https://www.theguardian.com/technology/2023/mar/16/voice-system-used-to-verify-identity-by-centrelink-can-be-fooled-by-ai
AI can fool voice recognition used to verify identity by Centrelink and Australian tax office
A voice identification system used by the Australian government for millions of people has a serious security flaw, a Guardian Australia investigation has found.
Centrelink and the Australian Taxation Office (ATO) both give people the option of using a “voiceprint”, along with other information, to verify their identity over the phone, allowing them to then access sensitive information from their accounts.
But following reports that an AI-generated voice trained to sound like a specific person could be used to access phone-banking services overseas, Guardian Australia has confirmed that the voiceprint system can also be fooled by an AI-generated voice.
(Related) Technology fights back.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4384122
Detecting Deep Fake Evidence with Artificial Intelligence: A Critical Look from a Criminal Law Perspective
The widespread use of deep fakes is particularly worrisome for criminal justice, as it risks eroding trust in video, image or audio evidence. Since detecting deep fakes is a challenging task for humans, several projects are currently investigating the use of AI-based methods to identify manipulated content. This paper critically assesses the use of “deep fake detectors” from the perspective of criminal evidence and procedural law. It contends that whilst the use of AI detectors is beneficial (if not inevitable), the risks arising from their use in criminal proceedings must not be underestimated. After a brief introduction to deep fake technology and detection methods, the paper analyses three key issues, namely accuracy, scientific validity, and fair access to detection tools between the parties. In light of these challenges, the paper argues that the introduction of deep fake detectors in criminal trials must comply with the same standards required for expert evidence. To that end, greater transparency and increased collaboration with software providers are needed.
A new term to further define the problem…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4384541
Privacy Nicks: How the Law Normalizes Surveillance
Privacy law is failing to protect individuals from being watched and exposed, despite stronger surveillance and data protection rules. The problem is that our rules look to social norms to set thresholds for privacy violations, but people can get used to being observed. In this article, we argue that by ignoring de minimis privacy encroachments, the law is complicit in normalizing surveillance. Privacy law helps acclimate people to being watched by ignoring smaller, more frequent, and more mundane privacy diminutions. We call these reductions “privacy nicks,” like the proverbial “thousand cuts” that lead to death.
Privacy nicks come from the proliferation of cameras and biometric sensors on doorbells, glasses, and watches, and the drift of surveillance and data analytics into new areas of our lives like travel, exercise, and social gatherings. Under our theory of privacy nicks as the Achilles heel of surveillance law, invasive practices become routine through repeated exposures that acclimate us to being vulnerable and watched in increasingly intimate ways. With acclimation comes resignation, and this shift in attitude biases how citizens and lawmakers view reasonable measures and fair tradeoffs.
Because the law looks to norms and people’s expectations to set thresholds for what counts as a privacy violation, the normalization of these nicks results in a constant re-negotiation of privacy standards to society’s disadvantage. When this happens, the legal and social threshold for rejecting invasive new practices keeps getting redrawn, excusing ever more aggressive intrusions. In effect, the test of what privacy law allows is whatever people will tolerate. There is no rule to stop us from tolerating everything. This article provides a new theory and terminology to understand where privacy law falls short and suggests a way to escape the current surveillance spiral.
Do we need a ‘Top Gun’ for robots?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4388065
Autonomous Weapons
This chapter examines autonomous weapon systems (AWS) from an international law perspective. It demystifies AWS, presents the landscape around them by identifying the main legal issues AWS might pose, and suggests solutions wherever possible. To be of use to both the layperson and the expert looking for an overview of the increasingly convoluted debate about AWS, it begins with a lexicon of terms and concepts key to understanding the field, and provides an overview of the state of the art of autonomy in military equipment. Having set the scene, the chapter continues with a critical examination of the ongoing international regulatory debate about AWS and its reception in scholarship. The analysis closes with a reflection on the attribution of responsibility for internationally wrongful conduct resulting from the combat employment of AWS, considered by some to be the pivotal legal peril ostensibly resulting from weapon autonomy.
My AI questions why this is limited to robots (AIs) with bodies. Would HAL be covered?
https://researchportal.helsinki.fi/en/publications/robotics-ai-and-criminal-law-crimes-against-robots
Robotics, AI and Criminal Law: Crimes Against Robots
This book offers a phenomenological perspective on the criminal law debate on robots. Today, robots are protected in some form by criminal law: a robot is a person’s property and is protected as property. This book presents the different rationales for protecting robots beyond the property justification, based on the phenomenology of human-robot interactions. By focusing on robots that have bodies and act in the physical world in social contexts, the work provides an assessment of the issues that emerge from human interaction with robots, going beyond perspectives focused solely on artificial intelligence (AI). Here, a phenomenological approach does not replace ontological concerns, but complements them. The book addresses the following key areas: regulation of robots and AI; ethics of AI and robotics; and philosophy of criminal law.
It will be of interest to researchers and academics working in the areas of Criminal Law, Technology and Law, and Legal Philosophy.
Conclusions seem hard to come by…
https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/asi.24750
ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing
This article discusses OpenAI's ChatGPT, a generative pre-trained transformer that uses natural language processing to fulfill text-based user requests (i.e., a “chatbot”). The history and principles behind ChatGPT and similar models are discussed, and the technology is then considered in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT-3, the underlying technology behind ChatGPT, and their use by academics and researchers are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.
I wish I could make a backup copy!
https://www.insidehighered.com/views/2023/03/17/librarians-should-stand-internet-archive-opinion
The Internet Archive Is a Library
The Internet Archive, a nonprofit library in San Francisco, has become one of the most important cultural institutions of the modern age. What began in 1996 as an audacious attempt to archive and preserve the World Wide Web has grown into a vast library of books, musical recordings and television shows, all digitized and available online, with a mission to provide “universal access to all knowledge.”
Right now, we are at a pivotal stage in a still-pending copyright infringement lawsuit against the Internet Archive, brought by four of the biggest for-profit publishers in the world, who have been trying to shut down core programs of the archive since the start of the pandemic. For the sake of libraries and library users everywhere, let’s hope they don’t succeed.
You’ve probably heard of Internet Archive’s Wayback Machine, which archives billions of webpages from across the globe. Fewer are familiar with its other extraordinary collections, which include 41 million digitized books and texts, with more than three million books available to borrow. To make this possible, Internet Archive uses a practice known as “controlled digital lending,” “whereby a library owns a book, digitizes it, and loans either the physical book or the digital copy to one user at a time.”