Interesting. I’m sure he has not identified everything, but this is a good start.
https://www.schneier.com/blog/archives/2023/11/ten-ways-ai-will-change-democracy.html
Ten Ways AI Will Change Democracy
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.
… Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride.
A question that really needs an answer.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4623126
Criminal Liability of Artificial Intelligence
Artificial intelligence is a new and extremely quickly developing technology, which is expected, and perhaps even feared, to bring enormous changes to every aspect of our society. Even though this technology is still comparatively underdeveloped, we already hand it a multitude of everyday tasks. For now, AI is mostly used to take over tasks that are often perceived as “annoying” or highly time-consuming; its purpose is therefore, first and foremost, to enhance productivity. It is expected to perform many of these tasks even better than human beings, at least in the future. Some of these tasks, such as autonomous driving, are quite dangerous, bearing the potential to infringe people’s protected rights and even cause physical harm and death to human beings. Obviously, such technology needs a solid and reliable legal basis, especially in terms of liability, if the inevitable happens and the technology causes events that were not intended to happen. However, a well-developed set of rules should not concern private law alone. Especially when such technology causes harm or even death to human beings, the question of a criminal act arises, for example in the sense of criminal negligence. Future criminal law must be prepared, and probably adjusted, to effectively tackle any questions concerning the criminal liability of artificial intelligence.
Are we evolving toward an AI lawyer?
http://192.248.104.6/handle/345/6771
Impact of Artificial Intelligence on Legal Practice in Sri Lanka
Artificial Intelligence (AI), a machine-based system used to ease the human workload, has become popular globally, and its influence can be seen even in developing countries like Sri Lanka. Although it has dominated areas such as machine problem detection, calculation, and speech recognition, it is questionable whether this sophisticated technology can take on the traditional roles of legal practice. The research aims to explore the positive and negative influence of AI in the legal field while determining the degree to which this technology should be incorporated into the legal sector in Sri Lanka. The research was carried out as a literature survey with a comparative analysis of other jurisdictions. Currently, many countries, including the USA, use AI-based tools such as LawGeex, Ross Intelligence, eBrevia and Leverton in legal practice due to their efficiency, accuracy and ease of use. Findings revealed that AI can be used even in Sri Lanka for legal research, preliminary legal drafting and codification of law. But given the prevailing economic and social background of Sri Lanka, it would be discriminatory to rely entirely on an AI-driven legal system, since it may create barriers to equal access to legal support for the common masses. Also, excessive dependency on AI would be a barrier to innovative legal actions such as public interest litigation, since it would not assess the humanitarian aspect. Hence, it is concluded that AI should be used in Sri Lankan legal practice only with limitations.
Thoughtful. Something for Con-Law at last!
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4626235
AI Outputs and the Limited Reach of the First Amendment
Not all communications are “constitutional speech”: determining whether machine-generated outputs qualify for First Amendment protection requires some work. To do so, we first explore aspects of both linguistic and communication theories, and then ask under what circumstances communication can become First Amendment speech.
We reach the bounds of the First Amendment from two directions. Working from a linguistic definition of speech, we capture non-linguistic forms of protected speech. Using communication theory, we reach a divide between human-sender communication and non-human-sender communication. Together these approaches support the location of a constitutional frontier. Within it we find all instances of recognized First Amendment effectiveness. Outputs of non-human autonomous senders (e.g. AI) are outside it and constitute an unexamined case.
“Speech” under the First Amendment requires both a human sender and a human receiver. Concededly, many AI outputs will be speech, due to the human factor in the mix. But just because a human programmed the AI, or set its goals, does not mean the AI’s output is substantially the human’s message. Nor does the fact that a human receives the output make it speech, for listeners’ First Amendment rights arise only where actual speech occurs. Thus, we resist the claim that all AI outputs are necessarily speech. Indeed, most AI outputs are not speech.
For those who object to the challenge we pose – determining which AI outputs are speech and which are not – we respectfully note that there will be additional constitutional work to be done. We are confident that our courts will be up to this challenge.
Whether AI outputs are First Amendment speech has profound implications. If they are, then state and federal regulation is severely hobbled, limited to the few categories of speech that have been excluded by the Supreme Court from strong constitutional protection.
With limited exception, neither the sponsors/developers of AI, nor the AI itself, nor the end users have rights under the First Amendment in the machine’s output. We express no opinion on other rights they may have, or on what types of regulations state and federal governments should adopt; only that they may constitutionally do so.
(Related)
They may have put a finger on the problem. AI output is based on the data it has scanned.
https://ojs.journalsdg.org/jlss/article/view/1965
The Impact of Developments in Artificial Intelligence on Copyright and other Intellectual Property Laws
Objective: The objective of this study is to investigate the impact of AI breakthroughs on copyright and the challenges faced by intellectual property legal protection systems. Specifically, the study aims to analyze the implications of AI-generated works in the context of copyright law in Indonesia.
… Result: The research findings reveal that, according to Law Number 28 of 2014 in Indonesia, AI-generated works do not meet the originality standards required for copyright protection.
This interests me because of the years I spent auditing computer systems.
https://link.springer.com/article/10.1007/s44206-023-00074-y
Auditing of AI: Legal, Ethical and Technical Approaches
AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society’s topical collection on ‘Auditing of AI’, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers’ governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available—and complementary—approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.
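To make the technology-oriented/process-oriented distinction concrete: a technology-oriented audit typically includes automated checks on a system’s outputs. The Python sketch below computes a demographic-parity gap over a batch of binary decisions; the choice of metric, the toy data, and the 0.10 flag threshold are my own illustrative assumptions, not anything the article prescribes.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    # Largest difference in positive-decision rates between any
    # two demographic groups; 0.0 means perfectly equal rates.
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy audit run: eight decisions across two groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative threshold, not a regulatory standard
    print("flag for follow-up, e.g. a process-oriented review")

A process-oriented audit, by contrast, would examine who runs checks like this, how often, and what happens when one fires; the point of the sketch is only that the two audit types look at different objects.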
(Related)
You mean I can generate my own version of the evidence!
https://iplab.dmi.unict.it/mfs/user/pages/03.publications/2024_an%20Overview%20of%20Deepfake%20Technologies%20from%20Creation%20to%20Detection%20in%20Forensics.pdf
An Overview of Deepfake Technologies: from Creation to Detection in Forensics
Advancements in Artificial Intelligence (AI) techniques have given rise to significant challenges in the field of Multimedia Forensics, particularly with the emergence of the Deepfake phenomenon. Deepfakes are images, videos, and audio generated or altered by powerful generative models such as Generative Adversarial Networks (GANs) [5] and Diffusion Models (DMs) [12]. While GANs have long been recognized for their ability to generate high-quality images, DMs offer distinct advantages, providing better control over the generative process and the ability to create images with a wide range of styles and content [2]. In fact, DMs have shown the potential to produce even more realistic images than GANs. AI-generated content spans diverse domains, including films, photography, video games, and virtual reality productions. A major concern of the Deepfake phenomenon is its application to important people such as politicians and celebrities to spread misinformation. However, the most alarming aspect is the misuse of GANs and DMs to create pornographic Deepfakes, posing a serious security threat. Notably, a staggering 96% of Deepfakes available on the internet fall into this pornographic category. The malicious use of Deepfakes extends to issues such as misinformation, cyberbullying, and privacy violation. In addition, Deepfakes have been applied in the fields of art and entertainment, sparking ethical discussions about the limits of creativity and authenticity. To counteract the illicit use of this powerful technology, novel forensic detection techniques are required to identify whether multimedia data has been manipulated or altered using GANs and DMs. Among state-of-the-art image deepfake detection methods, the primary focus lies in binary detection, distinguishing between real and AI-generated images [14, 16]. Notably, some methods in the state of the art have already demonstrated the ability to effectively differentiate between various GAN architectures [4, 7, 6, 15] and several DM engines [13, 1, 9]. These studies showed that generative models leave unique fingerprints in the generated multimedia data, which can be used not only to identify Deepfakes but also to recognize the specific architecture used during the creation process [11]. This can be extremely important in forensics in order to reconstruct the history of the multimedia data under analysis (forensic ballistics) [8]. In order to create increasingly sophisticated deepfake detection solutions, several challenges have been proposed by the scientific community, such as the Deepfake Detection Challenge (DFDC) [3] and the Face Deepfake Detection Challenge [10]. The latter has also launched a new challenge among researchers in the field: reconstructing the original image from deepfakes, a task that can be extremely important in forensics.
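The “fingerprint” idea in that abstract is easy to sketch. Below is a minimal, hypothetical Python illustration under toy assumptions: a high-frequency residual (image minus a median-blurred copy) stands in for the denoising filters used in practice, averaged residuals act as per-generator fingerprints, and attribution is done by simple correlation. Everything here, including the function names and random stand-in images, is my own simplification, not the paper’s method.

import numpy as np
from scipy.ndimage import median_filter

def residual(img):
    # High-frequency residual: the image minus a median-blurred copy.
    # A crude stand-in for the denoising filters used in practice.
    return img - median_filter(img, size=3)

def fingerprint(images):
    # Average the residuals of many images from one known generator.
    return np.mean([residual(im) for im in images], axis=0)

def attribute(img, fingerprints):
    # Attribute an image to the source whose stored fingerprint
    # correlates best with the image's own residual.
    r = residual(img).ravel()
    scores = {name: float(np.corrcoef(r, fp.ravel())[0, 1])
              for name, fp in fingerprints.items()}
    return max(scores, key=scores.get), scores

# Toy usage: random 64x64 arrays stand in for decoded images.
rng = np.random.default_rng(0)
sources = {"gen_a": [rng.normal(size=(64, 64)) for _ in range(20)],
           "gen_b": [rng.normal(size=(64, 64)) for _ in range(20)]}
fps = {name: fingerprint(imgs) for name, imgs in sources.items()}
label, scores = attribute(sources["gen_a"][0], fps)
print(label, scores)

The same shape of pipeline, with real images and learned features instead of raw residuals, is what lets the cited methods distinguish individual GAN architectures and DM engines rather than just real from fake.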