Sunday, June 05, 2022

A bold claim!

https://dash.harvard.edu/handle/1/37371736

Automated Kantian Ethics

AI is beginning to make decisions without human supervision in increasingly consequential contexts like healthcare, policing, and driving. These decisions are inevitably ethically tinged, but most AI systems in use today are not explicitly guided by ethics. Regulators, philosophers, and computer scientists are raising the alarm about the dangers of unethical artificial intelligence, from lethal autonomous weapons to criminal sentencing algorithms prejudiced against people of color. These warnings are spurring interest in automated ethics, or the development of machines that can perform ethical reasoning. Prior work in automated ethics rarely engages with philosophical literature, despite its relevance to the development of responsible AI. If automated ethics draws on philosophical literature, its decisions will be more nuanced, precise, and consistent, but automating ethical theories is difficult in practice. Faithfully translating a complex ethical theory from natural language to the rigid syntax of a computer program is technically and philosophically challenging.

In this thesis, I present an implementation of automated Kantian ethics that is faithful to the Kantian philosophical tradition. Given minimal factual background, my system can judge a potential action as morally obligatory, permissible, or prohibited. To accomplish this, I formalize Kant’s categorical imperative, or moral rule, in deontic logic, implement this formalization in the Isabelle/HOL theorem prover, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics, as established in the literature. This testing framework demonstrates that my system outperforms two other potential implementations of automated Kantian ethics. I also use my system to derive philosophically sophisticated and nuanced solutions to two central controversies in Kantian literature: the permissibility of lying (a) in the context of a joke and (b) to a murderer asking about the location of their intended victim. Finally, I examine my system’s philosophical implications, demonstrating that it can not only guide AI, but it can also help academic philosophers make philosophical progress and augment the everyday ethical reasoning that we all perform as we navigate the world. Ultimately, I contribute a working proof-of-concept implementation of automated Kantian ethics capable of performing philosophical reasoning more mature than anything previously automated. My work serves as one step towards the development of responsible, trustworthy artificial intelligence.
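The three verdicts the abstract mentions (obligatory, permissible, prohibited) correspond to the standard operators of deontic logic. As a rough illustration only — the thesis's actual formalization in Isabelle/HOL may use a richer system than this minimal sketch — the three judgments can all be defined from a single obligation operator $O$:

```latex
% Standard monadic deontic operators (illustrative sketch only;
% not necessarily the exact logic used in the thesis)
\begin{align*}
  O\,A  &\qquad \text{``$A$ is obligatory''} \\
  P\,A  &\;\equiv\; \neg O\,\neg A \qquad \text{``$A$ is permissible''
          (its omission is not obligatory)} \\
  F\,A  &\;\equiv\; O\,\neg A \qquad \text{``$A$ is prohibited''
          (its omission is obligatory)}
\end{align*}
```

Embedding such a logic in a theorem prover then lets the system mechanically check, from formalized premises, which of the three verdicts holds for a given action.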





Overview...

https://gjil.scholasticahq.com/article/36067-big-brother-back-again-facial-recognition-technology-and-the-need-for-further-legal-protections

BIG BROTHER BACK AGAIN: FACIAL RECOGNITION TECHNOLOGY AND THE NEED FOR FURTHER LEGAL PROTECTIONS

This paper discusses how the increase in use of Facial Recognition Technology has been used by government officials and law enforcement to further national security interests, and how there is a need to improve legal protections for citizens as the use of this technology progresses. This paper will explain what Facial Recognition Technology is and how it is used, the competing values of national security and privacy concerns, regional and federal laws that have been implemented to combat privacy concerns in the United States and Europe, and finally, some suggestions on how to combat this issue. Facial Recognition Technology has significant promise in protecting national security, but without further legal protections, there is great opportunity for abuse that infringes upon human rights. Many nations have competing values of national security and individual privacy that make this technology difficult to promulgate, and further legal protections are needed in navigating the future of Facial Recognition Technology.





The age of the online mug shot?

https://www.fundarfenix.com.br/_files/ugd/9b34d5_a274685010d945e8ae4e8062679ee5db.pdf#page=111

Control and surveillance: the rise of facial recognition in criminal policy

Criminal policy has become increasingly dependent on technological devices for its implementation, especially those with deep social penetration and great potential for violating fundamental rights such as intimacy and privacy. From this perspective, long identified by contemporary authors such as Bernard Harcourt, Byung-Chul Han and Shoshana Zuboff, among others, the present study analyzes the growing use of facial recognition technologies as a technique of biopolitical, social and penal control, constituting one of the most acute expressions of the worldwide phenomenon of everyday surveillance.





Take that, Clearview!

https://ir.lawnet.fordham.edu/iplj/vol32/iss4/4/

Face the Facts, or Is the Face a Fact?: Biometric Privacy in Publicly Available Data

Recent advances in biometric technologies have caused a stir among the privacy community. Specifically, facial recognition technologies facilitated through data scraping practices have called into question the basic precepts we had around exercising biometric privacy. Yet, in spite of emerging case law on the permissibility of data scraping, comparatively little attention has been given to the privacy implications endemic to such practices.

On the one hand, privacy proponents espouse the view that manipulating publicly available data from, for example, our social media profiles, derogates from users’ expectations around the kind of data they share with platforms (and the obligations such platforms have for protecting users from illicit uses of that data). On the other hand, free speech absolutists take the stance that, to the extent that biometric data is readily apparent in publicly available data, any restrictions on its secondary uses are prior restraints on speech.

This Note proposes that these principles underlying privacy and free speech are compatible. Wholesale bans on biometric technologies misapprehend their legitimate uses for actually preserving privacy. Despite the dearth of protections for biometric privacy across the United States, current battles to preserve the few regulations on these data practices illuminate the emerging frontier for privacy and free speech debates.

As this Note concludes, existing regulations on biometric data practices withstand First Amendment scrutiny, and strike the appropriate balance between speech and privacy regulations.





Better to gather too much information than too little?

http://classic.austlii.edu.au/au/journals/LawTechHum/2022/4.html

Retail Analytics: Smart-Stores Saving Bricks-and-Mortar Retail or a Privacy Problem?

This article contends that large-scale data-gathering and processing by bricks-and-mortar retailers, known as ‘retail analytics’, can be a significant privacy problem in the way it normalises surveillance and the datafication of daily life. It argues that there is a disconnect between the legitimate commercial objectives of retailers and shopping centres and the extent of the impact on an individual’s privacy, as well as the erosion of privacy at a societal level. The article contributes to the literature by outlining retail analytics practices and their purposes with the aim of promoting greater awareness of them. It further highlights the importance of considering privacy in any decision-making about the implementation of data-gathering and processing technologies. In particular, it argues that there is an overreach by retailers in their data-gathering activities—that there is a disproportionate approach adopted when the objective of the retailer is greater customer convenience or engagement, but the result is a widespread surveillance system in bricks-and-mortar retail outlets. It argues that any consideration of privacy needs to honour privacy’s value and importance in order to attribute appropriate weight in decisions around the appropriateness of particular retail analytics practices and their implementation.


