Is this the path to AI personhood? (Could you sentence an AI to “Life?”)
https://cyberleninka.ru/article/n/criminal-liability-for-actions-of-artificial-intelligence-approach-of-russia-and-china
CRIMINAL LIABILITY FOR ACTIONS OF ARTIFICIAL INTELLIGENCE: APPROACH OF RUSSIA AND CHINA
In the era of artificial intelligence (AI), it is necessary not only to define precisely in national legislation the extent of protection for personal information and the limits of its rational use by others, to improve data algorithms, and to create ethics committees to control risks, but also to establish precise liability (including criminal liability) for violations related to AI agents. Under the existing criminal law of Russia and of the People’s Republic of China, AI crimes can be divided into three types: crimes that can be regulated under existing criminal laws; crimes that are regulated inadequately under existing criminal laws; and crimes that cannot be regulated under existing criminal laws. The solution to the problem of criminal liability for AI crimes should depend on the capacity of the AI agent to influence a human’s ability to understand the public danger of an action and to govern his own conduct or omission. If a machine integrates with an individual but does not influence his ability to recognize or to make decisions, the individual is liable to prosecution. If a machine partially influences a human’s ability to recognize or to make decisions, the engineers, designers, and the combined human-machine unit should be prosecuted under a principle of relatively strict liability. And when an AI machine integrates with an individual and controls his ability to recognize or to make decisions, the individual should be released from criminal prosecution.
Has the pendulum swung too far?
https://www.wsj.com/articles/trial-of-former-uber-executive-has-security-officials-worried-about-liability-for-hacks-11662638107?mod=djemalertNEWS
Trial of Former Uber Executive Has Security Officials Worried About Liability for Hacks
Joe Sullivan, a former federal prosecutor, is accused of helping to cover up a security breach, a charge he denies
The federal trial of a former Uber Technologies Inc. executive over a 2016 hack has raised concerns among cybersecurity professionals about the liability they might face as they confront attackers or seek to negotiate with them.
Joseph Sullivan, the former executive, is facing criminal obstruction charges in a trial that began Wednesday in San Francisco for his role in paying hackers who claimed to have discovered a security vulnerability within Uber’s systems.
Federal prosecutors have charged Mr. Sullivan with criminal obstruction, alleging that he helped orchestrate a cover-up of the security breach and sought to conceal it to avoid required disclosures.
AI will not be as friendly (or as smart) as Judge Judy!
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4206664
AI, Can You Hear Me? Promoting Procedural Due Process in Government Use of Artificial Intelligence Technologies
This Article explores the constitutional implications of algorithms, machine learning, and Artificial Intelligence (AI) in legal processes and decision-making, particularly under the Due Process Clause. Measured against Judge Henry J. Friendly’s procedural due process principles under the U.S. Constitution, decisions produced using AI appear to violate all but one or two of them. For instance, AI systems may provide the right to present evidence and notice of the proposed action, but they do not provide any opportunity for meaningful cross-examination, knowledge of opposing evidence, or the true reasoning behind a decision. Notice can also be inadequate or even incomprehensible. This Article analyzes the challenges of complying with procedural due process when employing AI systems, explains constraints on computer-assisted legal decision-making, and evaluates policies for fair AI processes in other jurisdictions, including the European Union (EU) and the United Kingdom (UK). Building on existing literature, it explores the various stages in the AI development process, noting the different points at which bias may occur, thereby undermining procedural due process principles. Furthermore, it discusses the key variables at the heart of AI machine learning models and proposes a framework for responsible AI design. Finally, this Article concludes with recommendations to promote the interests of justice in the United States as the technology develops.
People for the Ethical Treatment of Fish? Automating bias: determining outcomes by how the salmon looks?
https://www.wageningenacademic.com/doi/abs/10.3920/978-90-8686-939-8_73
Ethics through technology – individuation of farmed salmon by facial recognition
One fundamental element of our moral duties to sentient animals, according to some central ethical approaches, is to treat them as individuals that are morally significant for their own sake. This is impossible in large-scale industrial salmon aquaculture due to the number of animals and their inaccessibility beneath the surface. Reducing the numbers to ensure individual care would make salmon farming economically unfeasible. Technology may provide alternative solutions. FishNet is an emerging facial recognition technology that allows caretakers to monitor the behaviour and health of individual fish. We argue that FishNet may be a solution for ensuring adequate animal welfare by overcoming current obstacles to monitoring and avoiding the stress caused by physical interaction with humans. This surveillance can also expand our knowledge of farmed fish behaviour and their physical and social needs. Thus, we may learn to perceive them as fellow creatures deserving of individual care and respect, ultimately altering industry practices. However, the technology may also serve as a deflection, covering up how individual salmon are doomed to adverse and abnormal behaviour. This may strengthen a paradigm of salmon as biomass, preventing the compassion required for moral reform, in which the understanding of fish welfare is restricted to the prevention of suffering as a means of ensuring quality products. Whether FishNet will contribute to meeting the moral duty to recognize and treat farmed fish as individuals requires reflection upon the ethical dualities of this technology, which simultaneously enables and constrains our moral perceptions and freedom to act. We will discuss the conditions for realizing the ethical potential of this technology.
I wonder how easily this translates to humans listening to misinformation? (Should we test every AI?) Could we design ‘self-testing’ into our AI?
https://www.the-sun.com/tech/6158380/psychopath-ai-scientists-content-dark-web/
‘Psychopath AI’ created by scientists who fed it content from ‘darkest corners of web’
PSYCHOPATHIC AI was created by scientists who fed it dark content from the web, a resurfaced study reveals.
In 2018, MIT scientists developed an AI dubbed 'Norman', after the character Norman Bates in Alfred Hitchcock’s cult classic Psycho, per BBC.
The aim of this experiment was to see how training AI on data from "the dark corners of the net" would alter its viewpoints.
'Norman' was fed a continuous stream of image captions from macabre Reddit groups that share death and gore content.
And this resulted in the AI meeting traditional 'psychopath' criteria, per psychiatrists.
… This led to insight for the MIT scientists behind 'Norman', who said that if an AI displays bias, it's not the program that's at fault. [I think it is! Bob]
"The
culprit is often not the algorithm itself but the biased data that
was fed into it,” the team explained.
Some headlines just catch your eye.
https://www.washingtonexaminer.com/news/justice/how-trump-fbi-raid-may-have-exposed-lawyers
‘Make Attorneys Get Attorneys’: How Trump FBI raid may have exposed MAGA lawyers