Sunday, April 03, 2022

Kill ‘em all, let AI sort ‘em out.

https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780198857815.001.0001/oxfordhb-9780198857815-e-29

The Ethics of Weaponized AI

This chapter presents an overview of the major ethical arguments for and against the use of autonomous weapons systems (AWS). More specifically, it examines both the contingent and the in-principle arguments for and against their use. After summarizing these views, the chapter argues that AWS pose no novel ethical problems. If we think an AWS makes actual decisions in the ‘strong AI’ sense, then by virtue of being a decision-maker, that entity would have rights and interests worthy of our moral concern. If, however, we think an AWS does not make actual decisions, but is instead an institutional proxy for the collective set of human decisions comprising it, then we ought to treat an AWS, both morally and metaphysically, as we would any other collective action problem.



(Related) It’s ‘human law’ too.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4066781

Humans in the Loop

From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.
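
To make the pattern concrete, here is a minimal sketch of one common human-in-the-loop design (entirely hypothetical; the function names and the 0.9 confidence threshold are illustrative, not drawn from the paper), in which only the algorithm's low-confidence decisions are escalated to a person:

from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(case: Any,
           model: Callable[[Any], Tuple[str, float]],
           human_review: Callable[[Any, str], str],
           threshold: float = 0.9) -> Decision:
    # The algorithm proposes a label with a confidence score.
    label, confidence = model(case)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    # Human in the loop: a person sees the model's suggestion
    # and makes the final call on low-confidence cases.
    final = human_review(case, label)
    return Decision(final, confidence, decided_by="human")

Note that the hybrid system now has failure modes of its own (automation bias, rubber-stamping), which is exactly the authors' point about regulating the role the human is meant to play rather than merely requiring a human.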

Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement and law may indirectly encourage or discourage human involvement, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward.





This could be a very useful concept.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4066845

Algorithmic Destruction

Contemporary privacy law does not go far enough to protect our privacy interests, particularly where artificial intelligence and machine learning are concerned. While many have written on problems of algorithmic bias or deletion, this article introduces the novel concept of the “algorithmic shadow,” the persistent imprint of data in a trained machine learning model, and uses the algorithmic shadow as a lens through which to view the failures of data deletion in dealing with the realities of machine learning. This article is also the first to substantively critique the novel privacy remedy of algorithmic disgorgement, also known as algorithmic destruction.

What is the algorithmic shadow? Simply put, when you train a machine learning model on a set of data, that data shapes the resulting model. Even if you later delete the data from the training set, the already-trained model still contains a persistent “shadow” of the deleted data. The algorithmic shadow is this persistent imprint of the data that was fed into and used to refine the machine learning system.
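
A toy numpy sketch (hypothetical data and variable names, none of it from the article) makes the shadow concrete: deleting a record from the stored training set does nothing to a model that was already fit on it, and only retraining from scratch removes the record's influence.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 records, the first of which is the one
# a user later asks to have deleted.
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=100)

# Fit a linear model on the full dataset (ordinary least squares).
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Delete" the record from the stored training data...
X_deleted, y_deleted = X[1:], y[1:]

# ...but the already-trained weights are untouched: they still
# encode the deleted record. That residue is the algorithmic shadow.
w_retrained, *_ = np.linalg.lstsq(X_deleted, y_deleted, rcond=None)

print("trained with the record:   ", w_trained)
print("retrained without it:      ", w_retrained)
print("shadow (weight difference):", w_trained - w_retrained)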

The failure of data deletion to resolve the privacy losses caused by algorithmic shadows highlights the ineffectiveness of data deletion as a right and a remedy. Algorithmic destruction (deletion of models or algorithms trained on misbegotten data) has emerged as an alternative, or perhaps supplement, to data deletion. While algorithmic destruction or disgorgement may resolve some of the failures of data deletion, this remedy and potential right is also not without its own drawbacks.

This article has three goals. First, it introduces and defines the concept of the algorithmic shadow, a novel concept that has so far evaded significant legal scholarly discussion, despite its importance to future debates about artificial intelligence and privacy law. Second, it explains why the algorithmic shadow exposes and exacerbates existing problems with data deletion as a privacy right and remedy. Finally, it examines algorithmic destruction as a potential algorithmic right and remedy, comparing it with data deletion, particularly in light of algorithmic shadow harms.





Facing horses?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4019105

Has the Horse Bolted? Dealing with Legal and Practical Challenges of Facial Recognition

Facial recognition is a technology widely used by both individuals and authorities. Whilst it has great potential, especially in law enforcement, it may lead to unpredictable outcomes. This is why the European Union (EU) has begun, within the framework of the AI Act, to consider how to regulate the technology so as to avoid cases similar to Clearview AI. Analysing the EU approach to biometric identification systems and the case of Clearview AI, this article explores the legal and practical challenges that facial recognition poses.





How to commit ethics…

https://arxiv.org/abs/2203.13494

Big data ethics, machine ethics or information ethics? Navigating the maze of applied ethics in IT

Digitalization efforts are rapidly spreading across societies, raising challenging and important new ethical issues that arise from technological development. Software developers, designers and managerial decision-makers are increasingly expected to consider ethical values and conduct normative evaluations when building digital products. Yet when one looks for guidance in the academic literature, one encounters a plethora of branches of applied ethics. Depending on the context of the system to be developed, subfields like big data ethics, machine ethics, information ethics, AI ethics or computer ethics (to name only a few) may present themselves. In this paper we offer assistance to any member of a development team by giving a clear and brief introduction to two fields of ethical endeavor (normative ethics and applied ethics), describing how they relate to each other, and providing an ordering of the different branches of applied ethics (big data ethics, machine ethics, information ethics, AI ethics, computer ethics, etc.) that have gained traction over recent years. Finally, we discuss an example of facial recognition software in the domain of medicine to illustrate how this process of normative analysis might be conducted.





TED talk about an interesting challenge.

https://www.youtube.com/watch?v=1Rr-pZoftho

Self-Assembling Robots and the Potential of Artificial Evolution

What if robots could build and optimize themselves – with little to no help from humans? Computer scientist Emma Hart is working on a new technology that could make "artificial evolution" possible. She explains how the three ingredients of biological evolution can be replicated digitally to build robots that can self-assemble and adapt to any environment – from the rocky terrain of other planets to the darkest depths of the ocean – potentially ushering in a new generation of exploration.
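
The three ingredients she alludes to are usually taken to be variation, selection, and heredity. A toy Python sketch (purely illustrative, with a stand-in fitness function in place of the robot-in-simulation scoring she describes) shows how the digital analogue works:

import random

def fitness(genome):
    # Stand-in objective; a real system would score a robot
    # design in physical simulation. Here: sum close to 10.
    return -abs(sum(genome) - 10.0)

def evolve(pop_size=50, genome_len=5, generations=200, sigma=0.5):
    population = [[random.uniform(-5, 5) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Heredity + variation: children inherit a parent's genome
        # with small random mutations.
        children = [[g + random.gauss(0, sigma) for g in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best])
print("fitness:", round(fitness(best), 4))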



