Sunday, July 03, 2022

It’s Congress, it will never make sense…

https://www.proquest.com/openview/bd67e526c0c95cd6a676608946742b2d/1?pq-origsite=gscholar&cbl=18750&diss=y

Why Congress Has Not Passed Facial Recognition Technology Legislation for Public Spaces

Facial recognition technology (FRT) in public spaces has been a political and social concern for more than 30 years. Conflict exists between the use of FRT for safety and security measures and its possible violation of the First, Fourth, and Fourteenth Amendments. Additional controversial issues surrounding the use of FRT in public spaces include technological development without standardization or regulations; biometric algorithms developed with bias; and the social issues of privacy intrusion, gender and racial bias, data security, and accuracy. Researchers have concurred that a national policy is needed to address FRT issues but have not explained why Congress has been unsuccessful. The purpose of this qualitative case study was to explore the factors explaining this phenomenon. The narrative policy framework was used as the theoretical paradigm for this inquiry. Using Saldaña’s method of coding, categorizing, and theming descriptive narratives, transcripts from hearings conducted by the U.S. House of Representatives Committee on Oversight and Reform tasked with formulating FRT legislation were analyzed. The result of the analysis was the emergence of 10 factors identifying why FRT legislation was stalemated in Congress. The summative assertion from the factors revealed members of the committee were overwhelmed with the complexities of FRT. Several strategies were recommended that may advance the passage of a national FRT policy. If Congress employed these strategies and passed a national policy that alleviated FRT issues to the extent possible, positive social change regarding FRT usage in public spaces may occur.





They do the hard part…

https://arxiv.org/abs/2206.11922

Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance

In the last decade, a great number of organizations have produced documents intended to standardize, in the normative sense, and guide our recent and rapid AI development. However, the full content and divergence of ideas presented in these documents have not yet been analyzed, except for a few meta-analyses and critical reviews of the field. In this work, we seek to expand on the work done by past researchers and create a tool for better data visualization of the contents and nature of these documents. We also provide our critical analysis of the results acquired by applying our tool to a sample of 200 documents.





Interesting

https://www.brookings.edu/wp-content/uploads/2022/06/FP_20220621_surveillance_exports_peterson_hoffman_v2.pdf

Geopolitical Implications of AI and Digital Surveillance Adoption

To continue addressing these policy challenges, this brief provides five recommendations for democratic governments and three for civil society.





Always amusing when a book kicks off heated rebuttals.

https://www.amazon.com/Reasonable-Robot-Artificial-Intelligence-Law/dp/1108459021

The Reasonable Robot: Artificial Intelligence and the Law



(Related)

https://academic.oup.com/jrls/article-abstract/25/1/1/6619260

The Law of AI: A Renegotiation or a Reproduction? Commentary on Ryan Abbott, The Reasonable Robot

Ryan Abbott’s “The Reasonable Robot: Artificial Intelligence and the Law” offers a functional approach to the law of Artificial Intelligence (AI) robots. Soon, if not already, AIs will compete with human employees in a variety of tasks and fields—from manufacturing and transportation to programming and medicine—prompting the fourth industrial revolution. But Abbott does not find this to be as alarming as it may appear. Instead, while the transition period requires some state intervention, he argues, “[h]istory has shown [that] fears [of technological unemployment] were misplaced, at least in regard to concerns about long-term unemployment” (p. 5).

What Abbott is concerned with is that competition between AIs and humans would be on equal footing. This concern is highlighted in the main theme of the book, the “principle of AI legal neutrality asserting that the law should not discriminate between AI and human behavior.”



(Related)

https://academic.oup.com/jrls/article-abstract/25/1/18/6619262

The Interesting Robot: A Reply to Professor Abbott: Comments on The Reasonable Robot: Artificial Intelligence in the Law by Professor Ryan Abbott

I had a philosophy professor in college, Bob Fogelin, who sorted classical arguments into four categories: (1) interesting and right, (2) interesting and wrong, (3) uninteresting and right, and (4) uninteresting and wrong. Very few arguments that stand the test of time fall in the last category. Even fewer fall into the first. Most worthwhile work falls somewhere in the middle.

Ryan Abbott has written a wonderful, interesting book about artificial intelligence and the law that happens to be wrong. The book is interesting (and ambitious) in that Abbott manages to articulate a nuanced conceptual framework for AI that spans at least four distinct legal contexts. Along the way, he makes any number of fascinating and trenchant observations regarding the interaction of law and technology. The book is wrong, or at least incomplete, insofar as it argues for a concept—“AI legal neutrality”—without satisfactory criteria of application.



(Related)

https://academic.oup.com/jrls/article-abstract/25/1/24/6619250

The Relational Robot: A Normative Lens for AI Legal Neutrality—Commentary on Ryan Abbott, The Reasonable Robot

Artificial Intelligence (AI), we are told, is poised to disrupt almost every facet of our lives and society. From industrial labor markets to daily commutes, and from policing tactics to personal assistants, AI brings with it the usual promise and perils of change. How that change will unfold, however, and whether it will ultimately bestow upon us more benefits than harms, remains to be determined. A significant factor in setting the course for AI’s inevitable integration into society will be the legal framework within which it is developed and operationalized. Who will AI displace? What will it replace? What improvements will it bring? What damage will it do? The law has the power to shape the answers, but is it up to the task?


