Any poorly defined goal can lead to a poorly optimized process.
https://philpapers.org/rec/EDIAAT-3
AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths?
This chapter provides a short undergraduate introduction to the ethical and philosophical complexities surrounding the law's attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the "PCM") were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a threat. Humans could turn the machine off at any point, and then it wouldn't be able to make as many paperclips as possible! Taken to the logical extreme, the result is quite grim: the PCM might even start using humans as raw material for paperclips.

The predicament only deepens once we realize that Bostrom's thought experiment overlooks a key player. The PCM and algorithms like it do not arise spontaneously (at least, not yet). Most likely, some corporation, say Office Corp., designed, owns, and runs the PCM. The more paperclips the PCM manufactures, the more profit Office Corp. makes, even if that entails converting some humans (but preferably not customers!) into raw materials. Less dramatically, Office Corp. may also make more money when the PCM engages in other socially suboptimal behaviors that would otherwise violate the law, like laundering money, sourcing materials from endangered habitats, manipulating the market for steel, or colluding with competitors over prices. The consequences are predictable and dire. If Office Corp. isn't held responsible, it will not stop with the PCM; it would have every incentive to develop more maximizers, say, for papers, pencils, and protractors. This chapter issues a challenge for tech ethicists, social ontologists, and legal theorists: how can the law help mitigate algorithmic harms without overly compromising the potential that AI has to make us all healthier, wealthier, and wiser? The answer is far from straightforward.
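The misalignment is easy to state in code. Below is a toy sketch (my own illustration, not anything from the chapter) of an optimizer whose objective counts paperclips and nothing else; every resource name and yield figure is invented:

    # Toy illustration, not from the chapter: a maximizer with a single
    # proxy objective and no side constraints. All resource names and
    # yield numbers are invented.
    YIELD = {"scrap_metal": 100, "mined_ore": 40, "humans": 1}  # clips per unit

    def paperclip_maximizer(world: dict) -> int:
        """Greedily convert every available resource into paperclips.

        The objective counts paperclips and nothing else, so nothing in
        the loop distinguishes "humans" from "scrap_metal"; the
        misalignment lives in the objective, not in the search.
        """
        paperclips = 0
        # Consume the highest-yield resources first, as any optimizer would.
        for resource in sorted(world, key=YIELD.get, reverse=True):
            paperclips += world[resource] * YIELD[resource]
            world[resource] = 0  # resource exhausted, humans included
        return paperclips

    print(paperclip_maximizer({"scrap_metal": 50, "mined_ore": 200, "humans": 8}))

Nothing in the loop is malicious; the harm enters entirely through what the objective omits, which is why the chapter's question of who writes (and profits from) the objective matters.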
We want to protect our children in the worst way…
https://reason.com/2022/10/06/a-california-law-designed-to-protect-childrens-digital-privacy-could-lead-to-invasive-age-verification/
A California Law Designed To Protect Children's Digital Privacy Could Lead to Invasive Age Verification
While the California Age-Appropriate Design Code Act was hailed as a victory for digital privacy, critics warn of a litany of unintended consequences.
The California Age-Appropriate Design Code Act was signed last month by California Gov. Gavin Newsom (D). The law requires that online businesses create robust privacy protections for users under 18. However, critics of the law have raised concerns about its vague language, which leaves unclear what kinds of businesses might be subject to the law's constraints and what specific actions companies must take to comply. For instance, because of the law's strict age requirements, online businesses may resort to invasive age verification regimes, such as face-scanning or checking government-issued IDs.
They are coming. Think about dealing with them.
https://elibrary.verlagoesterreich.at/article/10.33196/ealr202201003201
The Use of Autonomous Weapons – fortunium dei?
Weapons were created to inflict damage, whether to property or to people. With the "help" of innovation and digitalisation, there has been a rapid increase in the development and promotion of autonomous weapons. Among other things, pictures and videos of people are used to feed the artificial intelligence with material, allowing it to process that material and develop itself further.

This paper describes the issues raised by the use of such weapons under international humanitarian and human rights law. It is divided into four chapters, beginning with a description of how conventional and autonomous weapons function and the types of such armament. The arguments for and against these systems are then examined in detail within the scope of international human rights and humanitarian law, and more briefly against philosophical standards. Lastly, the conclusion briefly summarises the results, along with suggestions for improvement and concerns about the future use of such weapons. Because the paper focuses on the extent to which the use of autonomous weapon systems is permissible under human rights and international humanitarian law, it intentionally does not address liability and immunity, which would go beyond the scope of this essay.
Ethics for Designers, a basic class for computer scientists?
https://journals.sagepub.com/doi/abs/10.1177/09716858221119546
Artificial Intelligent Systems and Ethical Agency
The article examines the challenges involved in developing artificial ethical agents. The process involves the creators or design professionals, the procedures used to develop an ethical agent, and the artificial systems themselves. There are two possibilities for creating artificial ethical agents: (a) programming ethical guidance into artificial intelligence (AI)-equipped machines and/or (b) allowing AI-equipped machines to learn ethical decision-making by observing humans. However, both possibilities are difficult to fulfil because of the subjective nature of ethical decision-making. The challenge related to the developers is that they themselves lack training in ethics. The creators who develop an artificial ethical agent should be able to foresee the ethical issues and have knowledge of ethical decision-making so as to improve the ethical use of AI-equipped machines. The suggestion is that the focus should be on training the professionals involved in developing these artificial systems in ethics, rather than on developing artificial ethical agents and thereby attributing ethical agency to them.
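The article's two routes can be made concrete with a toy sketch (my own illustration, not the article's; every action and label is invented). The rule-based route fails wherever its authors failed to foresee a case, while the observational route simply reproduces the judgments, and the biases, of whoever it watched:

    # Toy sketch, not from the article: the two routes to an "artificial
    # ethical agent". All actions and labels are invented.

    # Route (a): ethics programmed in as explicit, hand-written rules.
    FORBIDDEN = {"deceive", "harm"}

    def rule_based_agent(action: str) -> bool:
        """Permit an action unless a hand-written rule forbids it.

        Coverage is limited by the rule authors' foresight: any case
        they did not anticipate is silently permitted.
        """
        return action not in FORBIDDEN

    # Route (b): ethics "learned" by observing human decisions.
    OBSERVED = [("deceive", False), ("deceive", False), ("deceive", True),
                ("assist", True)]

    def learned_agent(action: str) -> bool:
        """Mimic the majority judgment among observed human decisions.

        The agent inherits whatever inconsistency or bias the observed
        humans had, which is the subjectivity problem the article raises.
        """
        votes = [ok for act, ok in OBSERVED if act == action]
        return sum(votes) > len(votes) / 2 if votes else True  # unseen: permit

    print(rule_based_agent("deceive"))  # False: an explicit rule applies
    print(learned_agent("deceive"))     # False here, but only by a 2-1 vote

Neither route escapes the human judgment behind it, which is the article's case for training the developers rather than chasing agent-level ethics.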