I hope the answer is “no” because connecting this technology to killer drones would be truly scary.
https://helda.helsinki.fi/handle/10138/351705
Can a deep neural network predict the political affiliation from facial images of Finnish left and right-wing politicians?
This master's thesis seeks to conceptually replicate psychologist Michal Kosinski's study, published in 2021 in Scientific Reports, in which he trained a cross-validated logistic regression model to predict political orientation from facial images. Kosinski reported that his model achieved an accuracy of 72%, significantly higher than the 55% accuracy measured in humans on the same task. Kosinski's research attracted a huge amount of attention, as well as accusations of pseudoscience.
Whereas Kosinski trained his model on facial features carrying information about, for example, head position and emotions, in this thesis I use a deep convolutional neural network for the same task. I also train my model on Finnish data, consisting of photos of the faces of Finnish left- and right-wing candidates from the 2021 municipal elections.
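As a rough illustration of the Kosinski-style baseline the abstract describes (not the thesis's or Kosinski's actual code), a cross-validated logistic regression over precomputed facial-feature vectors could be sketched as follows; the feature dimension, sample count, fold count, and the random stand-in data are all assumptions made for the sake of a self-contained example:

```python
# Illustrative sketch only: cross-validated logistic regression over
# precomputed "facial feature" vectors. Real studies derive these
# vectors from face photos (head pose, expression descriptors, etc.);
# here random data stands in so the script runs on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cross_validated_accuracy(features, labels, folds=5):
    """Mean held-out accuracy of a logistic regression classifier."""
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, features, labels,
                             cv=folds, scoring="accuracy")
    return scores.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))   # 200 "faces", 128-dim feature vectors
y = rng.integers(0, 2, size=200)  # binary left/right labels
# With random features and labels, accuracy hovers around chance (0.5);
# the 72% Kosinski reports refers to real facial-feature data.
print(f"mean cross-validated accuracy: {cross_validated_accuracy(X, y):.2f}")
```

Swapping the random vectors for features extracted from labelled candidate photos would reproduce the evaluation protocol described above; the thesis's CNN approach instead learns its features directly from the images.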
Ye olde ethics.
https://www.researchgate.net/profile/Sven-Nyholm/publication/366119799_Artificial_Intelligence_Ethics_of/links/6391ee70e42faa7e75a8c5ba/Artificial-Intelligence-Ethics-of.pdf
Artificial Intelligence, Ethics of
The idea of artificial intelligence (AI) predates the introduction of the term “artificial intelligence”. Moreover, the observation that AI raises ethical and social questions predates the current development of the field of AI ethics. Notably, the ancient Greeks already imagined animated instruments that could take over the work they thought human slaves were needed for. They even reflected on what the introduction of artificial intelligence might mean for human society – as is shown in a well-known quote from Aristotle’s Politics, in which Aristotle says that “if each instrument could do its own work, at the word of command, or by intelligent anticipation, like the statues of Daedalus or the tripods of Hephaestus […] managers would not need subordinates and masters would not need slaves” (Aristotle 1996: 15). When Alan Turing later wrote his famous essays in the early 1950s, he discussed whether machines can think, and famously suggested that it is better to reflect on whether machines can imitate intelligent human behavior (Turing 1950). Notably, Turing also raised questions about what this might mean for society, as when he wrote that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control.” (Turing 2004: 475) This is an early statement of the so-called “control problem”, a topic of ongoing interest within discussions of the future ethical implications of AI. The term “artificial intelligence” was eventually introduced in 1955 – in a proposal for a research workshop that took place at Dartmouth College in New Hampshire, in the summer of 1956. In that proposal, AI is premised on the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” (McCarthy et al. 1955: 1) In general, then, artificial intelligence is the idea of technologies that can either really be intelligent (whatever that would mean), that could imitate human intelligent behavior, or that could simulate human intelligence.
Worth considering even if we don’t like it?
https://openresearch.surrey.ac.uk/esploro/outputs/bookChapter/Morally-Repugnant-Weaponry-Ethical-Responses-to/99698166502346
Morally Repugnant Weaponry? Ethical Responses to the Prospect of Autonomous Weapons
In this chapter, political philosopher Alex Leveringhaus asks whether lethal autonomous weapons systems (AWS) are morally repugnant and whether this entails that they should be prohibited by international law. To this end, Leveringhaus critically surveys three prominent ethical arguments against AWS: firstly, that AWS create ‘responsibility gaps’; secondly, that their use is incompatible with human dignity; and thirdly, that AWS replace human agency with artificial agency. He argues that some of these arguments fail to show that AWS are morally different from more established weapons. However, he concludes that AWS are currently problematic due to their lack of predictability.
So, you’re saying we need more laws?
https://cadmus.eui.eu/handle/1814/75116
The impact of facial recognition technology empowered by artificial intelligence on the right to privacy
Facial recognition technology (FRT) is of great interest due to its potential use in many sectors, including aviation, healthcare, marketing, education, the military, and security. Moreover, the addition of AI to FRT means that this technology might have an even greater impact. However, FRT brings potential legal challenges, including privacy, fairness, and accountability, to name a few. These challenges have fed negative attitudes towards the technology: many stakeholders, privacy advocates, industry members, and even the European Commission have raised reservations about FRT. This chapter aims to set out the current FRT-AI legal scene by conceptualising the technology and the main problems it entails from the fairness, accountability, data protection, and privacy perspectives, and by proposing some solutions to these conundrums based on regulatory instruments from the EU and the US. The chapter argues that answering such legal challenges requires a mixed alliance between law and computer science.
Is a ban likely? Perhaps more complex regulation instead; the technology may be too useful to ban.
https://www.elgaronline.com/configurable/content/book$002f9781803925899$002fbook-part-9781803925899-9.xml?t:ac=book%24002f9781803925899%24002fbook-part-9781803925899-9.xml
Chapter 4: The politics of facial recognition bans in the United States
https://www.elgaronline.com/configurable/content/book$002f9781803925899$002fbook-part-9781803925899-11.xml?t:ac=book%24002f9781803925899%24002fbook-part-9781803925899-11.xml
Chapter 6: Rising global opposition to face surveillance
Is the machine’s word good enough to convict?
https://lmulawreview.scholasticahq.com/article/56556.xml
MAN VS. MACHINE: FACIAL RECOGNITION TECHNOLOGY REPLACING EYEWITNESS IDENTIFICATIONS
As they intersect...
https://link.springer.com/chapter/10.1007/978-981-19-4574-8_8
AI Ethics and Rule of Law
Since the start of the twenty-first century, the two most significant technological changes have been artificial intelligence and genetic engineering. The two domains have a lot in common. Firstly, their applications have a great diffusion effect and will fundamentally change how some industries and economic institutions operate. Secondly, their applications also bring enormous ethical risks: genetic engineering will undoubtedly change the boundaries of social equity and the basic laws of nature, while AI is changing the ethics of human cognition and the corresponding legal ethics. Thirdly, both technologies are shaping the future of human civilization. Almost every novel or monograph that predicts humanity's future discusses genetic engineering and AI to some degree. Understanding the boundaries of these two fields is therefore key to comprehending the prospects of human imagination in the future.
Interesting. Which is the intellectual property? The output from the AI or the AI itself?
https://www.tandfonline.com/doi/abs/10.1080/13600834.2022.2154049
Artificial intelligence, inventorship and the myth of the inventing machine: Can a process be an inventor?
Institutional and academic debates have intensified over recent efforts to claim AI inventorship in patent applications, most notably in the cases of Thaler v Comptroller (‘DABUS’) examined in various jurisdictions. The pertinent question that has emerged is whether artificial intelligence systems can independently produce patentable subject matter. What must be examined first is the preliminary question of what the claim of producing inventions ‘autonomously’ can possibly mean from a technological perspective, an essential stage in the debate that is usually bypassed in legal commentary. Once such a technological explanation has been provided, a legal question can reasonably arise as to whether an AI process, such as software, may make a contribution that merits a patent. AI inventions are approached and analysed legally as processes and in terms of their relationship with their direct products. Thus, where a process (AI) ‘creates’ or ‘makes’ a product, the focus is reasonably put on whether, and to what extent, disclosing the product provides a contribution separate from that already provided by the process that created it. It is stressed that the current push for AI-generated products bypasses this key question, which is essential in assessing the invention.