Is
they is or is they ain’t the same? Isn’t that the goal?
https://link.springer.com/chapter/10.1007/978-94-6265-523-2_2
Artificial
Intelligence Versus Biological Intelligence: A Historical Overview
The
discipline of artificial intelligence originally aimed to replicate
human-level intelligence in a machine. It could be argued that the
best way to replicate the behavior of a system is to emulate the
mechanisms producing this behavior. But whether we should try to
replicate the human brain or the cognitive faculties to accomplish
this is unclear. Early symbol-based AI systems paid little regard to
neuroscience and were rather successful. However, since the 1980s,
artificial neural networks have become a powerful AI technique that
show remarkable resemblance to what we know about the human brain.
In this chapter, we highlight some of the similarities and
differences between artificial and human intelligence, the history of
their interconnection, what they both excel at, and what the future
may hold for artificial general intelligence.
I’ll
keep gathering ideas...
https://link.springer.com/article/10.1007/s43681-022-00194-0
Reconsidering
the regulation of facial recognition in public spaces
This paper
contributes to the discussion on effective regulation of facial
recognition technologies (FRT) in public spaces. In response to the
growing tendency in the United States and Europe to treat FRT as a
merely intrusive technology, we propose to distinguish scenarios in
which the ethical and social risks of using FRT are unacceptable from
other scenarios in which FRT can be adjusted to improve our everyday
lives. We suggest that a general ban on FRT in public
spaces is not an inevitable solution. Instead, we advocate for a
risk-based approach
with emphasis on different use-cases, one that weighs moral risks and
identifies appropriate countermeasures. We introduce four use-cases
that focus on the presence of FRT at entrances to public spaces:
(1) checking identities at airports; (2) authorisation to enter office
buildings; (3) checking visitors at stadiums; and (4) monitoring
passers-by on open streets, to illustrate the diverse ethical and social
concerns and possible responses to them. Based on the different
levels of ethical and societal risks and applicability of respective
countermeasures, we call for a distinction of public spaces between
semi-open public spaces and open public spaces. We suggest that this
distinction could not only support more effective regulation and
assessment of FRT in public spaces, but also that knowledge of the
different risks and countermeasures will lead to better transparency
and public awareness of FRT in diverse scenarios.
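To make the proposed framework concrete, here is a minimal sketch (my illustration, not the authors’) of a risk-based mapping from FRT use-cases to space types, risk levels, and countermeasures. The risk tiers and countermeasures are hypothetical placeholders.
```python
# Hypothetical sketch of the paper's risk-based framing: each FRT
# use-case maps to a space type, a risk tier, and countermeasures.
# Tiers and countermeasures are illustrative assumptions, not the
# authors' actual classifications.
FRT_USE_CASES = {
    "airport identity checks": {
        "space": "semi-open public space",
        "risk": "moderate",
        "countermeasures": ["purpose limitation", "data retention limits"],
    },
    "office building entry": {
        "space": "semi-open public space",
        "risk": "low",
        "countermeasures": ["opt-in consent", "local-only matching"],
    },
    "stadium visitor checks": {
        "space": "semi-open public space",
        "risk": "moderate",
        "countermeasures": ["notice at entrances", "independent audits"],
    },
    "open-street monitoring": {
        "space": "open public space",
        "risk": "high",
        "countermeasures": ["presumptive prohibition"],
    },
}

for use_case, profile in FRT_USE_CASES.items():
    print(f"{use_case}: {profile['risk']} risk ({profile['space']})")
```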
(Related)
https://ojs.victoria.ac.nz/wfeess/article/view/7645
Ethics
of Facial Recognition Technology in Law Enforcement: A Case Study
Facial
Recognition Technology (FRT) has promising applications in law
enforcement due to its efficiency and cost-effectiveness. However,
this technology poses significant
ethical concerns that overshadow its benefits.
Responsible use of FRT requires consideration of these ethical
concerns, which existing legislation fails to cover. This study
investigates the ethical
issues of FRT use and relevant ethical frameworks and principles
designed to combat these issues. Drawing on this, we propose and
discuss a code of ethics for FRT to ensure its ethical use in the
context of New Zealand law enforcement.
Similar to
what ICE and TSA are doing?
https://obiter.mandela.ac.za/article/view/14254
THE
LEGAL ISSUES REGARDING THE USE OF ARTIFICIAL INTELLIGENCE TO SCREEN
SOCIAL MEDIA PROFILES FOR THE HIRING OF PROSPECTIVE EMPLOYEES
The fourth
industrial revolution has introduced advancements in technology that
have affected many commercial sectors in South Africa, and the
employment sector is no exception. One of these advancements is the
creation of artificial intelligence technologies that can help
humans complete everyday tasks more quickly and efficiently. It
has become common for organisations to screen social media profiles
in order to gain information about a prospective employee.
With the aid of artificial intelligence, employers can use such
systems to sift easily through social media profiles and access the
data they need. Although these technological creations have many
successful outcomes, artificial intelligence systems can also have
drawbacks, such as inadvertently discriminating against certain
groups of people when data is collected, processed and stored.
Issues surrounding privacy breaches are also raised where artificial
intelligence systems seek to access personal information from social
media profiles. Prospective employees will need to be informed that
their social media profiles are being screened, and the
artificial intelligence system must be programmed properly
to ensure that data is collected and processed correctly and fairly.
The Terminator
on trial?
https://link.springer.com/chapter/10.1007/978-94-6265-523-2_8
Prosecuting
Killer Robots: Allocating Criminal Responsibilities for Grave
Breaches of International Humanitarian Law Committed by Lethal
Autonomous Weapon Systems
The
rapid development of highly automated and autonomous weapon
systems has become one of the most controversial sources of
discussion in the international sphere. One of the many concerns
that surface with this technology is the existence of an
accountability gap. This fear stems from the complexity of holding a
human operator criminally responsible for a potential failure of the
weapon system. Thus arises the question of who is to be held criminally
liable for grave breaches of international humanitarian law when
these crimes are not intentional. This chapter explains how
we will need to rethink the responsibilities, command structure, and
everyday operations within our military when engaging in the use of
fully autonomous weapon systems to allow our existing legal framework
to assign criminal responsibility. For this purpose, this chapter
analyses the different types of criminal responsibilities that
converge in the process of employing lethal autonomous weapons and
determines which of them is the most appropriate for grave breaches of
international humanitarian law in this case.
(Related) Who
programmed the Terminator?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4159762
Are
programmers in or 'out of' control? The individual criminal
responsibility of programmers of autonomous weapons and self-driving
cars
The increasing
use of autonomous systems technology in cars and weapons could lead
to a rise in harmful incidents on the roads and on the battlefield,
potentially amounting to crimes. Such a rise has led to questions as
to who is criminally responsible for these crimes – be it the users
or the programmers? This chapter seeks to clarify the role of
programmers in crimes committed with autonomous systems by focusing
on the use of autonomous vehicles and autonomous weapons. In
assessing whether a programmer could be criminally responsible for
crimes committed with autonomous technology, it is necessary to
determine whether the programmer had control over this technology.
Risks inherent in the use of these autonomous technologies may allow
a programmer to escape criminal liability, but some risks may be
foreseeable and thus considered to be under the programmer’s control.
The central question is
whether programmers exercise causal control over a chain of events
leading to the commission of a crime. This chapter
contends that programmers’ control begins at the initial stage of
the autonomous system development process but continues in the use
phase, extending to the behaviour and effects of autonomous systems
technology. Based on criminal responsibility requirements and
causation theories, this chapter develops a notion of meaningful
human control (MHC) that may serve to trace responsibility back to
the programmers who could understand, foresee, and anticipate the
risk of a crime being committed with autonomous systems technology.
It’s not my
fault, the computer did it!
https://link.springer.com/chapter/10.1007/978-94-6265-523-2_14
Contractual
Liability for the Use of AI under Dutch Law and EU Legislative
Proposals
In this
chapter, the contractual liability of a company (the ‘user’)
using an AI system to perform its contractual obligations is analysed
from a Dutch law and EU law perspective. In particular, we discuss
three defences
which, in the event of a breach, the user can put forward against the
attribution of that breach to such user and which relate to the
characteristics of AI systems, especially their capacity for
autonomous activity and self-learning:
(1) the AI system was state-of-the-art when deployed,
(2) the user had no control over the AI system, and
(3) an AI system is not a tangible object and its use in the performance
of contractual obligations can thus not give rise to strict liability
under Article 6:77 of the Dutch Civil Code.
Following a
classical legal analysis of these defences under Dutch law and in
light of EU legislative proposals, the following conclusions are
reached. Firstly, the user is strictly liable, subject to an
exception based on unreasonableness, if the AI system was unsuitable
for the purpose for which it was deployed as at the time of
deployment. Advancements in scientific knowledge play no role in
determining suitability. Secondly, a legislative proposal by the
European Parliament allows the user to escape liability for damage
caused by a non-high-risk AI system if the user took due care with
respect to the selection, monitoring and maintenance of that system.
Thirdly, the defence that the user is not liable because an AI system
is not a tangible object is unlikely to hold.
Bigger must
mean better?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4152035
Big
Data Policing Capacity Measurement
Big data, algorithms, and computing
technologies are revolutionizing policing. Cell phone data.
Transportation data. Purchasing data. Social media and internet
data. Facial recognition and biometric data. Use of these and other
forms of data to investigate, and even predict, criminal activity is
law enforcement’s
presumptive future. Indeed, law enforcement agencies in several
major cities have already begun to develop a big data policing
mindset, and new forms of data have played a central role in
high-profile matters featured in the “Serial” and “To Live and
Die in LA” podcasts, as well as in the Supreme Court’s leading
Carpenter v. U.S. opinion. Although the ascendancy of big data
policing appears inevitable, important empirical questions on local
law enforcement agency capacity remain insufficiently answered. For
example, do agencies have adequate capacity to facilitate big data
policing? If not, how can policymakers best target resources to
address capacity shortfalls? Are certain categories of agencies in a
comparatively stronger position in terms of capacity? Answering
questions such as these requires empirical measurement of phenomena
that are notoriously difficult to measure. This Article presents a
novel multidimensional measure of big data policing capacity in U.S.
local law enforcement agencies: the Big Data Policing Capacity Index
(BDPCI). Analysis of the BDPCI provides three principal
contributions. First, it offers an overall summary of big data
policing capacity shortfalls across more than 2,000 local agencies,
using a large-N dataset. Second, it identifies the factors driving
lack of capacity in agencies. Third, it illustrates how
differences between groups of agencies might be analyzed based on
size and location, including an illustrative ranking of the fifty
U.S. states. This Article is meant to inform stakeholders on
agencies’ current positions, advise on how best to improve such
positions, and drive further research into empirical measurement and
big data policing.
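The abstract doesn’t spell out how the index is built, but a composite like the BDPCI is typically constructed by normalizing each capacity dimension across agencies and averaging. A minimal sketch under those assumptions, with invented dimensions and data:
```python
# Hypothetical sketch of a multidimensional capacity index in the
# spirit of the BDPCI. The dimensions, weights, and data are
# assumptions for illustration; the Article's actual construction
# is not reproduced here.
from dataclasses import dataclass

@dataclass
class AgencySurvey:
    name: str
    personnel: float   # e.g. analysts per 100 officers
    technology: float  # e.g. count of data systems in use
    training: float    # e.g. hours of data-related training

def min_max(values):
    """Rescale a list of raw scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def bdpci(agencies):
    """Equal-weight composite of normalized dimension scores."""
    dims = [
        min_max([a.personnel for a in agencies]),
        min_max([a.technology for a in agencies]),
        min_max([a.training for a in agencies]),
    ]
    return {
        a.name: sum(d[i] for d in dims) / len(dims)
        for i, a in enumerate(agencies)
    }

agencies = [
    AgencySurvey("Agency A", personnel=2.0, technology=5, training=40),
    AgencySurvey("Agency B", personnel=0.5, technology=2, training=10),
    AgencySurvey("Agency C", personnel=1.0, technology=8, training=25),
]
# Rank agencies from highest to lowest composite capacity score.
for name, score in sorted(bdpci(agencies).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```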
Should your CPO be an AI?
https://aisel.aisnet.org/amcis2022/sig_sec/sig_sec/8/
Exploring
the Characteristics and Needs of the Chief Privacy Officer in
Organizations
Over the past two decades, the growth
in technology (e.g. social networking, big data, smartphones,
the Internet of Things, and artificial intelligence) and the increased
collection of customer data, combined with various data breaches, has
increased the need to focus more on information privacy. Various
laws and regulations have been established, such as the GDPR in
Europe and various state level regulations in the United States, to
ensure the protection of customers and their data. The Chief Privacy
Officer (CPO) role was established in the 1990s and attracted strong
research attention in the early 2000s. However, little
attention has been given to the role of the CPO in the past decade.
Due to the increases in technology, private data collections,
breaches, and privacy regulations, there is a need to reevaluate the
role of the CPO and the evolving responsibilities it entails.
Looking at what we’re looking at.
https://link.springer.com/chapter/10.1007/978-94-6265-523-2_23
Ask
the Data: A Machine Learning Analysis of the Legal Scholarship on
Artificial Intelligence
In recent decades, the study of the
legal implications of artificial intelligence (AI) has increasingly
attracted the attention of the scholarly community. The
proliferation of articles on the regulation of algorithms has gone
hand in hand with the acknowledgment of substantial
risks associated with current applications of AI. These relate to
the widening of inequality, the deployment of discriminatory
practices, the potential breach of fundamental rights such as
privacy, and the use of AI-powered tools to surveil people and
workers. This chapter aims to map the existing legal debate on AI
and robotics by means of bibliometric analysis and unsupervised
machine learning. By using structural topic modeling (STM) on
abstracts of 1,298 articles published in peer-reviewed legal journals
from 1982 to 2020, the
chapter explores what the dominant topics of discussion are and how
the academic debate on AI has evolved over the years. The
analysis yields 13 topics of interest
among legal scholars, revealing research trends and potential areas
for future research.
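For a sense of what such an analysis looks like in practice, here is a minimal sketch of unsupervised topic modeling over abstracts. It uses scikit-learn’s LDA as a stand-in for structural topic modeling (STM proper is usually run via R’s stm package); the toy corpus is invented, and only the topic count of 13 comes from the abstract.
```python
# Minimal sketch: unsupervised topic modeling over article abstracts,
# using scikit-learn's LDA as a rough stand-in for the structural
# topic modeling (STM) the chapter actually applies.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented examples; the chapter analyses 1,298 abstracts (1982-2020).
abstracts = [
    "liability for autonomous vehicles under tort law",
    "facial recognition surveillance and the right to privacy",
    "algorithmic discrimination in automated hiring decisions",
    # ... one string per abstract
]

# Bag-of-words representation, dropping stop words and near-ubiquitous terms.
vectorizer = CountVectorizer(stop_words="english", max_df=0.95)
dtm = vectorizer.fit_transform(abstracts)

# Fit 13 topics, matching the number reported in the abstract.
lda = LatentDirichletAllocation(n_components=13, random_state=0)
lda.fit(dtm)

# Print the highest-weighted words for each topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:8]]
    print(f"Topic {i}: {', '.join(top)}")
```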