Sunday, May 22, 2022

Want always comes before need?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4112042

AI-Powered Public Surveillance Systems: Why We (Might) Need Them and How We Want Them

In this article, we address the introduction of AI-powered surveillance systems into our society by looking at the deployment of real-time facial recognition technologies (FRT) in public spaces and public health surveillance technologies, in particular contact tracing applications. Both kinds of surveillance technology assist public authorities in enforcing the law by allowing the tracking of individual movements and extrapolating the results toward monitoring and predicting social behavior. They are therefore considered potentially useful tools in response to societal crises, such as those generated by crime and health-related pandemics. To assess the potential and the threats of such tools, we offer a framework with three dimensions: a function dimension, which examines the type, quality, and quantity of data the system needs to work effectively; a consent dimension, which considers the user’s right to be informed about and to reject the use of surveillance, questioning whether consent is achievable and whether the user can decide fully autonomously and independently; and a societal dimension, which frames vulnerabilities and the impact of the increased empowerment of established political regimes through new means to control populations based on data surveillance. Our analysis framework can assist public authorities in deciding how to design and deploy public surveillance tools in a way that enables compliance with the law while highlighting individual and societal tradeoffs.





Probably not probable?

https://scholarlycommons.law.wlu.edu/wlulr/vol79/iss2/7/

The Computer Got It Wrong: Facial Recognition Technology and Establishing Probable Cause to Arrest

Facial recognition technology (FRT) is a popular tool among police, who use it to identify suspects using photographs or still images from videos. The technology is far from perfect. Recent studies highlight that many FRT systems are less effective at identifying people of color, women, older people, and children. These race, gender, and age biases arise because FRT is often “trained” using non-diverse faces. As a result, police have wrongfully arrested Black men based on mistaken FRT identifications. This Note explores the intersection of facial recognition technology and probable cause to arrest.

Courts rarely, if ever, examine FRT’s role in establishing probable cause. This Note suggests a framework for how courts can evaluate FRT and probable cause. Case law about drug-sniffing dogs provides a starting point for assessing what role an FRT identification should play in probable cause determinations. But drug dogs are not a perfect analogue for FRT. Two important differences between these two policing tools warrant treating FRT with greater scrutiny than drug dogs. First, FRT has baked-in racial, gender, and age biases that drug dogs lack. Second, FRT is a digital policing tool, which recent Supreme Court precedent suggests merits more judicial scrutiny than non-digital police tools like dogs.

Giving FRT a closer look leads to the conclusion that an FRT identification alone is insufficient to establish probable cause. FRT relies on flawed inputs (non-diverse data) that lead to flawed outputs (demographic discrepancies in misidentifications). These problematic inputs and outputs provide complementary reasons why an FRT identification alone cannot provide probable cause.



(Related)

https://arxiv.org/abs/2205.07299

Regulating Facial Processing Technologies: Tensions Between Legal and Technical Considerations in the Application of Illinois BIPA

Harms resulting from the development and deployment of facial processing technologies (FPT) have been met with increasing controversy. Several states and cities in the U.S. have banned the use of facial recognition by law enforcement and governments, but FPT are still being developed and used in a wide variety of contexts where they are regulated primarily by state biometric information privacy laws. Among these laws, the 2008 Illinois Biometric Information Privacy Act (BIPA) has generated a significant amount of litigation. Yet, with most BIPA lawsuits reaching settlements before there have been meaningful clarifications of relevant technical intricacies and legal definitions, there remains a great degree of uncertainty as to how exactly this law applies to FPT. What we have found through applications of BIPA in FPT litigation so far, however, points to potential disconnects between the technical and legal communities. This paper analyzes what we know based on BIPA court proceedings and highlights these points of tension: areas where the technical operationalization of BIPA may create unintended and undesirable incentives for FPT development, as well as areas where BIPA litigation can bring to light the limitations of solely technical methods in achieving legal privacy values. These factors are relevant for (i) reasoning about biometric information privacy laws as a governing mechanism for FPT, (ii) assessing the potential harms of FPT, and (iii) providing incentives for the mitigation of these harms. By illuminating these considerations, we hope to empower courts and lawmakers to take a more nuanced approach to regulating FPT and developers to better understand privacy values in the current U.S. legal landscape.





This is a scary app…

https://ieeexplore.ieee.org/abstract/document/9773277

A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening Civil Liberties with Non-Invasive AI Lie Detection

Imagine an app on your phone or computer that can tell if you are being dishonest, just by processing affective features of your facial expressions, body movements, and voice. People could ask about your political preferences or your sexual orientation and immediately determine which of your responses are honest and which are not. In this paper we argue why artificial intelligence-based, non-invasive lie detection technologies are likely to experience rapid advancement in the coming years, and why it would be irresponsible to wait any longer before discussing their implications. To understand the perspective of a “reasonable” person, we conducted a survey of 129 individuals and identified accuracy and consent as the critical factors. In our analysis, we distinguish two types of lie detection technologies: “truth metering” and “thought exposing.” We generally find that truth metering is already largely within the scope of existing US federal and state laws, albeit with some notable exceptions. In contrast, we find that current regulation of thought exposing technologies is ambiguous and inadequate to safeguard civil liberties. In order to rectify these shortcomings, we introduce the legal concept of “mental trespass” and use this concept as the basis for proposed legislation.





Convinced it will, or afraid it will?

https://www.military.com/daily-news/2022/05/21/milley-tells-west-point-cadets-technology-will-transform-war.html

Milley Tells West Point Cadets Technology Will Transform War

The top U.S. military officer challenged the next generation of Army soldiers on Saturday to prepare America's military to fight future wars that may look little like the wars of today.

Army Gen. Mark Milley, chairman of the Joint Chiefs of Staff, painted a grim picture of a world that is becoming more unstable, with great powers intent on changing the global order. He told graduating cadets at the U.S. Military Academy at West Point that they will bear the responsibility to make sure America is ready.



(Related)

https://warontherocks.com/2022/05/is-artificial-intelligence-made-in-humanitys-image-lessons-for-an-ai-military-education/

Is Artificial Intelligence Made in Humanity’s Image? Lessons for an AI Military Education

Artificial intelligence is not like us. For all of AI’s diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations.

Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition — cognitive science.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4109202

Aspects of Realizing (Meaningful) Human Control: A Legal Perspective

The concept of ‘meaningful human control’ (MHC) has progressively emerged as a key frame of reference to conceptualize the difficulties posed by military applications of artificial intelligence (AI), and to identify solutions to mitigate these challenges. At the same time, this notion remains relatively indeterminate and difficult to operationalize. If MHC is to support the existing framework of international law applicable to military AI, it needs to be clarified in order to deal with the challenges of AI broadly construed, not limited to ‘autonomous weapons systems’ (AWS). This chapter seeks to refine the notion of MHC by exploring its nature and purpose, and reflecting on how MHC relates to core concepts of human agency and responsibility. Building on this analysis, we propose ways to operationalize MHC, in particular by putting greater emphasis on pre-deployment stages. A legal ‘compliance by design’ approach is advanced by the authors as a means to address the complex realities when military decision-making processes are mediated by AI-enabled technologies.


