I see it as a mere swing of the pendulum. I trust it will swing back.
https://re.public.polimi.it/handle/11311/1225493
Is ethics evaporating in the cyber era? Part 2: Feeling Framed
In continuation of Part 1, published in this volume, this part discusses the oversupply of information and addresses the rights we are surrendering in order to enjoy digital technology. Instead of serving as a space for the free exchange of ideas, the Internet is being used as a tool for supervision, management, and control. Artificial intelligence and machine learning are increasingly merged into every sector for analysing, optimizing, and even framing humans. Our digital “buddies” take note of our everyday life: our itinerary, our health parameters, our messages, and our content. Big data centres and computer farms are the new “caveau” (bank vaults) full of “our” data.
Do we ask the AI if it meant to commit a crime? Can we believe its answer?
https://repository.uchastings.edu/hastings_science_technology_law_journal/vol14/iss1/2/
The Artificially Intelligent Trolley Problem: Understanding Our Criminal Law Gaps in a Robot Driven World
Not only is Artificial Intelligence (AI) present everywhere in people’s
lives, but the technology is also now capable of making unpredictable
decisions in novel situations. AI poses issues for the United
States’ traditional criminal law system because this system
emphasizes mens rea’s importance in determining criminal liability.
When AI makes unpredictable decisions that lead to crimes, it will
be impractical to determine what mens rea to ascribe to the human
agents associated with the technology, such as AI’s creators,
owners, and users. To solve this issue, the
United States’ legal system must hold AI’s creators, owners, and
users strictly liable for their AI’s actions and also
create standards that can provide these agents immunity from strict
liability. Although other legal scholars have proposed solutions
that fit within the United States’ traditional criminal law system,
these proposals fail to strike the right balance between encouraging
AI’s development and holding someone criminally liable when AI
causes harm.
This Note illuminates this issue by exploring an artificially intelligent
trolley problem. In this problem, an AI-powered self-driving car
must decide between running over and killing five pedestrians or
swerving out of the way and killing its one passenger; ultimately,
the AI decides to kill the five pedestrians. This Note explains why
the United States’ traditional criminal law system would struggle
to hold the self-driving car’s owner, programmers, and creator
liable for the AI’s decision, because of the numerous human agents
this problem brings into the criminal liability equation, the
impracticality of determining these agents’ mens rea, and the
difficulty in satisfying the purposes of criminal punishment.
Looking past the artificially intelligent trolley problem, these
issues can be extended to most criminal laws that require a mens rea
element. Criminal law serves as a powerful method of regulating new
technologies, and it is essential that the United States’ criminal
law system adapts to solve the issues that AI poses.
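The Note's core difficulty can be made concrete with a toy sketch (purely hypothetical, not taken from the Note or any real autonomous-driving system): a self-driving policy that simply selects the maneuver with the lowest cost predicted by a trained model. The "decision" is an argmin over opaque numbers; there is no mental state anywhere to which a court could ascribe mens rea.

```python
# Hypothetical illustration: an AI "decision" as an argmin over learned costs.
# The numbers below are made-up model outputs, not moral judgments, which is
# precisely why asking what the system "intended" is so hard.

def choose_maneuver(costs: dict[str, float]) -> str:
    """Return the maneuver with the lowest predicted cost."""
    return min(costs, key=costs.get)

# Made-up predicted costs for the two options in the trolley scenario.
learned_costs = {
    "stay_course": 4.7,  # model's predicted cost of continuing straight
    "swerve": 5.1,       # model's predicted cost of swerving
}

print(choose_maneuver(learned_costs))  # -> stay_course
```

Neither the programmer nor the owner chose "stay_course"; the outcome depends on training data and model weights fixed long before the situation arose, which is what makes ascribing a particular mens rea to any human agent impractical.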
Good technology used poorly.
https://www.vice.com/en/article/5d3edx/apple-airtag-stalking-police-family-court
The Legal System Is Completely Unprepared for Apple AirTag Stalking
… Apple has been under fire for the stalking capabilities of its AirTag tracking devices for almost the entire lifetime of the device, and
this week, two
women brought a lawsuit against Apple, claiming that the devices
make it easy for stalkers to track victims. One of the women claims
that her ex-boyfriend placed an AirTag in the wheel well of her car
to track her. The other’s story is similar to Dozier’s: her
estranged husband, she claimed, placed an AirTag in their child’s
backpack in order to follow her.
Cynthia Godsoe, a professor of law at Brooklyn Law
School, told me that the role of technology in family law is becoming
more and more prevalent. Where someone used to have to hire a
private investigator to follow someone around to build evidence
against them in a custody or divorce case, she said, they can now use
something like a tracking device—or even just Facebook posts to
make a case against their ex.
Will there be liability for failure to speak?
https://ir.lawnet.fordham.edu/flr/vol91/iss3/5/
Let's Get Real: Weak Artificial Intelligence Has Free Speech Rights
The right to free speech is a strongly protected
constitutional right under the First Amendment to the U.S.
Constitution. In 2010, the U.S. Supreme Court significantly expanded
free speech protections for corporations in Citizens United v. FEC.
This case prompted the question: could other nonhuman actors also be
eligible for free speech protection under the First Amendment? This
inquiry is no longer a mere intellectual exercise: sophisticated
artificial intelligence (AI) may soon be capable of producing speech.
As such, there are novel and complex questions surrounding the
application of the First Amendment to AI. Some commentators argue
that AI should be granted free speech rights because AI speech may
soon be sufficiently comparable to human speech. Others disagree and
argue that First Amendment rights should not be extended to AI
because there are traits in human speech that AI speech could not
replicate.
This Note explores the application of First
Amendment jurisprudence to AI. Introducing relevant philosophical
literature, this Note examines theories of human intelligence and
decision-making in order to better understand the process that humans
use to produce speech, and whether AI produces speech in a similar
manner. In light of the legal and philosophical literature, as well
as the Supreme Court’s current First Amendment jurisprudence, this
Note proposes that some types of AI are eligible for free speech
protection under the First Amendment.
Not yet ready to replace all those judges…
https://lawresearchmagazine.sbu.ac.ir/article_102915.html?lang=en
The Challenges in Employing of AI Judge in Civil Proceedings
Artificial intelligence (AI), one of the most important human achievements of the 21st century, is expanding its dominance in science, technology, industry, art, and beyond; this technology is spreading its shadow over various jobs in these fields. The field of law, and specifically proceedings and courtrooms, is reluctantly being influenced by this technology. This article aims to explain
the challenges of employing this modern technology as a substitute
for civil court judges. Despite all of AI’s achievements and the
opportunities it can bring to the judiciary, it seems this technology
faces severe challenges in matters such as legal reasoning,
impartiality, and public acceptance. This research, with a
descriptive-analytical method, while explaining the shortcomings of
AI in the field of judgment, reveals that AI, with its current
capabilities, cannot be considered as a complete substitute for a
human judge. This means that it would be more effective to use AI as
a tool in the service of judges, helping them in handling and
resolving disputes faster and more accurately. These challenges are compounded in Iranian law, which is shaped by Feqh (Islamic jurisprudence) with respect to the qualifications of judges, and by the obstacles the Iranian legal system faces, compared with other legal systems, in employing new technologies such as AI.
A sure-fire conversation starter?
https://www.tandfonline.com/doi/abs/10.1080/13600834.2022.2154050
Artificially intelligent sex bots and female slavery: social science and Jewish legal and ethical perspectives
In this paper, we shed light on the question of
whether it is morally permissible to enslave artificially intelligent
entities by looking at up-to-date research from the social sciences –
as well as the ancient lessons from Jewish law. The first part of
the article looks at general ethical questions surrounding the ethics
of AI and slavery by looking at contemporary social science research
and the moral status of ‘Sex Bots’ – AI entities that are built
for the purpose of satisfying human sexual desires. The second part
presents a Jewish perspective on the
obligation to protect artificially intelligent entities from abuse
and raises the issue of the use of such entities in the context of
sex therapy. This is followed by a review of slavery and in
particular, female slavery in Jewish law and ethics. In the
conclusions, we argue that both perspectives provide justification
for the ‘Tragedy of the Master’ – that in enslaving AI we risk
doing great harm to ourselves. This has significant and negative
consequences for us – as individuals, in our relationships, and as
a society that strives to value the dignity, autonomy, and moral
worth of all sentient beings.