Sunday, September 18, 2022

I would suggest that the police car ‘talk’ to the autonomous vehicle and the two exchange information. Why would they need to stop? (A toy sketch of such an exchange follows the abstract below.)

https://link.springer.com/chapter/10.1007/978-3-031-16474-3_7

Traffic Stops in the Age of Autonomous Vehicles

Autonomous vehicles have profound implications for laws governing police, searches and seizures, and privacy. Complicating matters, manufacturers are developing these vehicles at varying rates. Each level of vehicle automation, in turn, poses unique issues for law enforcement. Semi-autonomous (Levels 2 and 3) vehicles make it extremely difficult for police to distinguish between dangerous distracted driving and safe use of a vehicle’s autonomous capabilities. [Ask the car! Bob] Fully autonomous (Levels 4 and 5) vehicles solve this problem but create a new one: the ability of criminals to use these vehicles to break the law with a low risk of detection. How and whether we solve these legal and law enforcement issues depends on the willingness of nations to adapt legal doctrines. This article explores the implications of autonomous vehicle stops and six possible solutions, including: (1) restrictions on visibility obstructions, (2) restrictions on the use and purchase of fully autonomous vehicles, (3) laws requiring that users provide implied consent for suspicion-less traffic stops and searches, (4) creation of government checkpoints or pull-offs requiring autonomous vehicles to submit to brief stops and dog sniffs, (5) surveillance of data generated by these vehicles, and (6) opting to do nothing and allowing the coming changes to recalibrate the existing balance between law enforcement and citizens.
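
The toy sketch promised above: a purely hypothetical message exchange in which a patrol unit queries an autonomous vehicle instead of pulling it over. Every message type and field here is invented for illustration; no real V2X standard (DSRC, C-V2X) or police protocol is being modeled.

```python
# Hypothetical "talk instead of stop" exchange between a patrol car and an
# autonomous vehicle. All names and fields are invented for illustration.

import json

def build_police_query(officer_id: str, reason: str) -> str:
    """Request a patrol unit might broadcast to a nearby AV."""
    return json.dumps({
        "type": "CREDENTIAL_REQUEST",
        "officer_id": officer_id,
        "reason": reason,
    })

def av_respond(query: str) -> str:
    """The AV answers with registration data instead of pulling over."""
    request = json.loads(query)
    if request["type"] != "CREDENTIAL_REQUEST":
        return json.dumps({"type": "REFUSED"})
    return json.dumps({
        "type": "CREDENTIAL_RESPONSE",
        "vin": "EXAMPLE-VIN-000",    # placeholder identifier
        "automation_level": 4,       # self-reported SAE level
        "registration_valid": True,
        "autonomy_engaged": True,    # answers "who is driving right now?"
    })

print(av_respond(build_police_query("unit-12", "equipment check")))
```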





Forgetting for privacy? Should machines forget just because humans do?

https://ui.adsabs.harvard.edu/abs/2022arXiv220902299N/abstract

A Survey of Machine Unlearning

Computer systems hold large amounts of personal data over decades. On the one hand, such data abundance allows breakthroughs in artificial intelligence (AI), especially machine learning (ML) models. On the other hand, it can threaten the privacy of users and weaken the trust between humans and AI. Recent regulations require that private information about a user can be removed from computer systems in general and from ML models in particular upon request (e.g., the "right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Existing adversarial attacks have proved that private membership or attributes of the training data can be learned from trained models. This phenomenon calls for a new paradigm, machine unlearning, to make ML models forget particular data. Recent work on machine unlearning has not yet solved the problem completely, owing to a lack of common frameworks and resources. In this survey paper, we seek to provide a thorough investigation of machine unlearning in its definitions, scenarios, mechanisms, and applications. Specifically, as a categorical collection of state-of-the-art research, we hope to provide a broad reference for those seeking a primer on machine unlearning and its various formulations, design requirements, removal requests, algorithms, and uses in a variety of ML applications. Furthermore, we hope to outline key findings and trends in the paradigm as well as highlight new areas of research that have yet to see the application of machine unlearning but could nonetheless benefit immensely. We hope this survey provides a valuable reference for ML researchers as well as those seeking to innovate privacy technologies. Our resources are at https://github.com/tamlhp/awesome-machine-unlearning.
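
For intuition, here is a minimal, purely illustrative sketch of "exact" unlearning via sharded retraining, loosely in the spirit of SISA-style approaches (Bourtoule et al.); it is not the survey's method. The shard count, model, and toy data are assumptions chosen for brevity.

```python
# "Exact" machine unlearning via sharded retraining: each model sees only
# its shard, so forgetting a record means retraining just one shard.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

N_SHARDS = 4

# Toy data standing in for user records.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
shards = np.array_split(np.arange(len(X)), N_SHARDS)

def train_shard(idx):
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shards]

def predict(x):
    # Majority vote over the per-shard models.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return max(set(votes), key=votes.count)

def forget(sample_id):
    # "Right to be forgotten": drop the record and retrain only the
    # shard that ever saw it; the other models are untouched.
    for s, idx in enumerate(shards):
        if sample_id in idx:
            shards[s] = idx[idx != sample_id]
            models[s] = train_shard(shards[s])
            return

forget(42)  # sample 42 no longer influences any model's parameters
print(predict(X[0]))
```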





Words have power…

https://onlinelibrary.wiley.com/doi/full/10.1111/beer.12479

Ethical implications of text generation in the age of artificial intelligence

We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents such as GPT-3 that can generate large-scale, highly refined content that appears to have been written by a human. Yet a discussion of the ethical issues related to the blurring of the roles between humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on agenda-setting theory and stakeholder theory, we challenge the current debate on the ethics of AI and aim to stimulate studies that develop research around three new challenges of AI text agents: automated mass manipulation and disinformation (i.e., the fake agenda problem), massive low-quality content production (i.e., the lowest-denominator problem), and the creation of a growing buffer in the communication between stakeholders (i.e., the mediation problem).





Both must be ethical?

https://link.springer.com/article/10.1007/s00146-022-01545-5

AI and society: a virtue ethics approach

Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us. Likewise, the advent of AI-driven social robots appears to be shifting the meaning of numerous long-standing values associated with interpersonal relationships, such as friendship. Furthermore, powerful actors’ and institutions’ increasing reliance on AI for decisions that affect how people live their lives may have a significant impact on privacy, while also raising issues of algorithmic transparency and human control. In this paper, building and expanding on previous works, we will look at how the deployment of Artificial Intelligence technology may lead to changes in identity, security, and other crucial values (such as friendship, fairness, and privacy). We will discuss what challenges we may face in the process, while critically reflecting on whether such changes may be desirable. Finally, drawing on a series of considerations underlying virtue ethics, we will formulate a set of preliminary suggestions which, we hope, can be used to more carefully guide the future rollout of AI technologies for human flourishing; that is, for social and moral good.





The best we can do is a Code of Conduct?

https://link.springer.com/article/10.1007/s10506-022-09330-x

Policing based on automatic facial recognition

Advances in technology have transformed and expanded the ways in which policing is conducted. One new manifestation is the mass acquisition and processing of private facial images via automatic facial recognition by the police: what we conceptualise as AFR-based policing. However, there is still a lack of clarity on the manner and extent to which this largely unregulated technology is used by law enforcement agencies, and on its impact on fundamental rights. Social understanding and involvement remain insufficient in the context of AFR technologies, which in turn affects social trust in, and the legitimacy and effectiveness of, intelligent governance. This article delineates the function creep of this new concept, identifying the individual and collective harms it engenders. A technological, contextual perspective on the function creep of AFR in policing evidences the comprehensive creep of training datasets and learning algorithms, which have bypassed an uninformed public. We thus argue that individual harms to dignity, privacy, and autonomy combine to constitute a form of cultural harm, impacting directly on individuals and society as a whole. While recognising the limitations of what the law can achieve, we conclude by considering options for redress and the creation of an enhanced regulatory and oversight framework model, or Code of Conduct, as a means of encouraging cultural change from prevailing police indifference to enforcing respect for the human rights potentially violated. The imperative will be to strengthen the top-level design and technical support of AFR policing, imbuing it with the values implicit in the rule of law, democratisation, and scientisation, to enhance public confidence and trust in AFR social governance and to promote civilised social governance in AFR policing.





Perspective. The field is much broader than facial recognition.

https://ieeexplore.ieee.org/abstract/document/9881926

Machine Vision

This chapter describes machine vision, with a focus on object recognition: teaching a machine to recognize objects and react differently depending on the object class. Object recognition is further divided into image classification, object localization, and object detection. Compared with the traditional algorithmic approach in computer vision, a convolutional neural network does not require hand-defined object features or one-to-one matching; it offers better feature extraction and matching than algorithmic strategies. A gesture-based interface allows the user to control different devices using hand or body motion. The chapter introduces several important machine vision applications in different areas, including medical diagnosis, retail, and airport security. Retail benefits from machine vision models taught to recognize items in images and videos, while facial recognition is an important airport security application, especially for passenger processing.
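
As a concrete companion to the abstract, here is a minimal sketch of CNN-based image classification with a pretrained network, illustrating the point that a convolutional network supplies learned features rather than hand-defined ones. It assumes PyTorch and torchvision (0.13+) are installed; "photo.jpg" is a placeholder filename, not anything from the chapter.

```python
# Image classification with a pretrained CNN: no hand-crafted features,
# no one-to-one template matching, just learned convolutional filters.

import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, normalize as trained

img = Image.open("photo.jpg").convert("RGB")  # placeholder image
batch = preprocess(img).unsqueeze(0)          # add batch dimension

with torch.no_grad():
    logits = model(batch)

# Map the top prediction back to a human-readable class label.
class_id = logits.argmax(dim=1).item()
print(weights.meta["categories"][class_id])
```

Object localization and detection extend the same idea, predicting bounding boxes in addition to class labels.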





Only a matter of time?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4213543

Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that more fruitfully describes their abilities than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics, and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important as humans become more like machines (cyborgs, cobots) and machines become more like humans (neural networks, robots with biological material). The essay draws a line that separates human from machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.


