Because
people?
Why
contact tracing may be a mess in America
Dozens
of states across the US are pinning their hopes on contact tracing to
control the spread of the coronavirus and enable regions to reopen
without sparking major resurgences of the outbreak.
… Contact
tracing is a proven tool in containing outbreaks of highly infectious
diseases. But this particular virus could pose significant
challenges to tracing programs in the US, based on new studies and
emerging evidence from initial efforts. Stubbornly high new
infection levels in some areas, the continued shortage of tests, and
American attitudes toward privacy could all hamstring the
effectiveness of such programs.
How do I program ‘reasonable’?
How
Can I Tell If My Algorithm Was Reasonable?
Self-learning algorithms are coming to dominate ever more aspects of our lives, performing tasks and reaching decisions that were once reserved exclusively for human beings. Indeed, in certain contexts their decision-making performance has been shown to be superior to that of humans. Yet however superior they may be, self-learning algorithms (also referred to as artificial
intelligence (AI) systems, “smart robots”, or “autonomous
machines”, among other terms) can also cause damage.
When
determining the liability of a human tortfeasor who causes damage, the
applicable legal framework is generally that of negligence. To be
found negligent, the tortfeasor must have acted in a manner not
compliant with the standard of “the reasonable person”. Given
the growing similarity of self-learning algorithms to humans in the
nature of decisions they make and the type of damages they may cause,
several scholars have proposed the development of a “reasonable
algorithm” standard, to be applied to self-learning systems.
To date, however, the literature has not addressed the practical questions of how such a standard might be applied to algorithms and what the analysis ought to contain. Such a standard must promote tort law's goals of safety and victims' compensation on the one hand, while balancing those goals against the encouragement of beneficial new technologies on the other.
This
paper analyses the “reasonableness” standard used in tort law, as
well as the unique qualities, weaknesses and strengths of algorithms
versus humans, and examines whether the reasonableness standard is
at all compatible with self-learning algorithms. Concluding that it generally is, the paper offers as its main contribution a concrete “reasonable algorithm” standard that decision-makers could apply in practice. Said standard accounts for
the differences between human and algorithmic decision-making, and
allows the application of the reasonableness standard to algorithms
in a manner that promotes the aims of tort law and at the same time
avoids a dampening effect on the development and usage of new,
beneficial technologies.
Interesting
conclusions. Are we doomed?
The
Threat of AI and Our Response: The AI Charter of Ethics in South
Korea.
Abstract:
Artificial Intelligence (AI) is already changing our lives, and its effectiveness is rarely disputed. However, there have been active discussions about how to minimize AI's side effects and use it responsibly, and the publication of AI Charters of Ethics (AICEs) is one result. This study examines how our society is responding to threats that AI may pose in the future by analyzing the various AICEs published in the Republic
of Korea. First, we summarize seven AI threats and classify
these into three categories: AI's value judgment, malicious use of
AI, and human alienation. Second, from Korea's seven AICEs,
we draw fourteen topics based on three categories: protection of
social values, AI control, and fostering digital citizenship.
Finally, we review these topics against the seven AI threats to
evaluate any gaps between the threats and our responses. The
analysis indicates that Korea has not yet been able to properly
respond to the threat of AI's usurpation of human occupations (jobs).
In addition, although Korea's AICEs present appropriate responses to
lethal AI weapons, these provisions will be difficult to realize
because the competition for AI weapons among military powers is
intensifying.
Canada is at
least looking at AI and the law.
References
to Artificial Intelligence in Canada's Court Cases
Artificial
intelligence (AI) is a widely discussed topic in many fields
including law. Legal studies scholars, particularly in the domain of
technology and internet law, have expressed their hopes and concerns
regarding AI. This project aims to study how Canada's courts have
referred to AI, given the importance of justices' reasoning to the policymakers who will determine society's rules for the use of AI in the future. Decisions from all levels of both Canada's provincial
and federal courts are used as the data sources for this research.
The findings indicate that AI has been referred to in Canadian case law in four legal contexts: legal research, investment tax credits, trademarks, and access to government records. In this article, the authors use these
findings to make suggestions for legal information management
professionals on how to develop collections and reference services
that are in line with the new information needs of their users
regarding AI and the rule of law.
Substitute
beer for coffee and you have my isolation philosophy.