Interesting approach…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4432941
From Ethics to Law: Why, When, and How to Regulate AI
The past decade has seen a proliferation of guides, frameworks, and principles put forward by states, industry, and inter- and non-governmental organizations to address matters of AI ethics. These diverse efforts have led to a broad consensus on what norms might govern AI. Far less energy has gone into determining how these might be implemented, or whether they are even necessary. This chapter focuses on the intersection of ethics and law, discussing in particular why regulation is necessary, when regulatory changes should be made, and how regulation might work in practice. Two specific areas for law reform are proposed, addressing the weaponization and victimization of AI. Regulations aimed at general AI are particularly difficult in that they confront many ‘unknown unknowns’, but the threat of uncontrollable or uncontainable AI became more widely discussed with the spread of large language models such as ChatGPT in 2023. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims, comparable, perhaps, to animal cruelty laws.
(Related)
https://read.dukeupress.edu/the-minnesota-review/article-abstract/2023/100/118/351618/Co-creating-with-AI
Co-creating with AI
The concept of “co-creation” is particularly timely because it reframes the ethics of who creates, how, and why, not only interpreting the world but seeking to change it through a lens of equity and justice. An expansive notion, co-creation embraces a constellation of methods, frameworks, and feedback systems in which projects emerge out of process and evolve from within communities and with people, rather than being made for or about them. Co-creation, we contend, offers a hands-on heuristic to explore the expressive capacities and possible forms of agency in systems that have already been marked as candidates for some form of consciousness. In this article, we ask whether humans can co-create with nonhuman systems and, more specifically, with artificial intelligence (AI) systems. To find out, we interviewed more than thirty artists, journalists, curators, and coders, asking specifically about their relationships with the AI systems with which they work. Their answers often reflected a broader spectrum of co-creation, expanding the social conversation and complicating issues of agency and nonagency, technology and power, for the sake of human and nonhuman futures alike.
Does this require personhood? Can you punish a tool?
https://link.springer.com/chapter/10.1007/978-3-031-29860-8_6
Punishing the Unpunishable: A Liability Framework for Artificial Intelligence Systems
Artificial Intelligence (AI) systems are increasingly taking over day-to-day human activities as part of the technological revolution set in motion ever since we, as a species, began harnessing the potential these systems offer. Although legal research on AI is not a new phenomenon, the increasing “legal injuries” arising from the commercialization of AI have made the need for a legal framework governing the accountability of these artificial entities a pressing issue. This paper investigates the possibility of attaching civil as well as criminal liability to AI systems by analysing whether mens rea can be attributed to AI entities and, if so, what legal framework or model(s) could support such culpability. The paper acknowledges the limitations of the law in general, and of criminal law in particular, when it comes to holding AI systems criminally responsible. It also discusses the liability model(s) that could be employed to extend culpability to AI entities, and considers what forms of “punishment” or sanctions would make sense for these entities.
Eventually we will need to address the Constitution and all the amendments.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4431251
Artificial Intelligence and the First Amendment
Artificial intelligence (AI), including generative AI, is not human, but restrictions on the activity or use of AI, or on the dissemination of material by or from AI, might raise serious First Amendment issues if those restrictions (1) apply to or affect human speakers and writers or (2) apply to or affect human viewers, listeners, and readers. Here as elsewhere, it is essential to distinguish among viewpoint-based restrictions, content-based but viewpoint-neutral restrictions, and content-neutral restrictions. Much of free speech law, as applied to AI, is in the nature of “the law of the horse”: established principles of multiple kinds applied to a novel context. But imaginable cases raise unanswered questions, including (1) whether AI as such has constitutional rights, (2) whether and which person or persons might be named as a defendant if AI is acting in some sense autonomously, and (3) whether and in what sense AI has a right to be free from (for example) viewpoint-based restrictions, or whether it would be better, and correct, to say that human viewers, listeners, and readers have the relevant rights, even if no human being is speaking. Most broadly, it remains an open question whether the First Amendment protects the rights of human viewers, listeners, and readers who seek to see, hear, or read something from AI.
An AI is people?
https://scholarlycommons.law.wlu.edu/wlulr-online/vol80/iss6/1/
The Perks of Being Human
The power of artificial intelligence has recently entered the public consciousness, prompting debates over numerous legal issues raised by use of the tool. Among the questions that need to be resolved is whether to grant intellectual property rights to copyrightable works or patentable inventions created by a machine, where there is no human intervention sufficient to grant those rights to a human. Both the U.S. Copyright Office and the U.S. Patent and Trademark Office have taken the position that where there is no human author or inventor, there is no right to copyright or patent protection. That position has recently been upheld by a federal court. This article argues that the Constitution and current statutes do not compel that result, that the denial of protection will hinder innovation, and that if intellectual property rights are to be limited to human innovators, that policy decision should be made by Congress, not by an administrative agency or a court.
In other words, will there ever be a robot Pope?
https://journals.sagepub.com/doi/full/10.1177/09539468231172006
Could a Conscious Machine Deliver Pastoral Care?
Could Artificial Intelligence (AI) play an active role in delivering pastoral care? The question rests not only on whether an AI could be considered an autonomous agent, but on whether such an agent could sustain the depth of relationship with humans that is essential to genuine pastoral care. Theological consideration of the status of human-AI relations is heavily influenced by Noreen Herzfeld, who draws on Karl Barth's I-Thou encounters to conclude that we will never be able to relate meaningfully to a computer, since it would not share our relationship to God. In this article, I examine Barth's anthropology in greater depth to establish a more comprehensive and permissive foundation for human-machine encounter than Herzfeld provides, with the key assumption that, at some stage, computers will become conscious. This allows the discussion to shift focus to the challenges that the alterity of a conscious computer brings, rather than dismissing it as a non-human object. If we can relate as an I to a Thou with a computer, then we can consider the types of pastoral care such machines could provide.