Sunday, April 02, 2023

I robot! You human?

https://journals.rudn.ru/law/article/view/34061

Theoretical aspects of identifying legal personality of artificial intelligence: cross-national analysis of the laws of foreign countries

The research analyzes the issues of determining the legal status of artificial intelligence. As artificial intelligence (AI) systems become more sophisticated and play an increasingly important role in society, the argument that they should have some form of legal personality is becoming increasingly relevant. The research argues that most legal systems could create a new category of legal persons. Also in focus are innovative trends in law enforcement practice and the establishment of general provisions on liability for criminal acts committed due to technical failures of artificial intelligence without human participation or intervention. The article presents the results of a philosophical-legal and ontological analysis relevant not only to the current state of artificial intelligence but also to its prospective future modifications. It outlines the results of a comparative analysis of the laws regulating artificial intelligence in a number of foreign countries, along with a retrospective analysis of some historical stages in the development of the legal regulation of artificial intelligence.





A new Right?

https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1746820&dswid=4124

Chapter 2. To be a face in the crowd: Surveillance, facial recognition, and a right to obscurity

This chapter examines how facial recognition technology reshapes the philosophical debate over the ethics of video surveillance. When video surveillance is augmented with facial recognition, the data collected is no longer anonymous, and the data can be aggregated to produce detailed psychological profiles. I argue that – as this non-anonymous data of people’s mundane activities is collected – unjust risks of harm are imposed upon individuals. In addition, this technology can be used to catalogue all who publicly participate in political, religious, and socially stigmatised activities, and I argue that this would undermine central interests of liberal democracies. I examine the degree to which the interests of individuals and the societal interests of liberal democracies to maintain people’s obscurity while in public coincide with privacy interests, as popularly understood, and conclude that there is a practical need to articulate a novel right to obscurity to protect the interests of liberal democratic societies.





Good questions should lead to good answers.

https://link.springer.com/article/10.1007/s11948-023-00433-5

Machine Ethics: Do Androids Dream of Being Good People?

Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely “following a moral code”. In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.





What would Napoleon do?

https://verfassungsblog.de/big-brother-is-watching-the-olympic-games-and-everything-else-in-public-spaces/

Big Brother is Watching the Olympic Games – and Everything Else in Public Spaces

The French National Assembly is currently debating the law on the 2024 Olympic and Paralympic Games. Despite its name, the law has more to do with security than sports. In particular, Article 7 of the law creates a legal basis for algorithmic video surveillance (AVS), that is, video surveillance that relies on artificial intelligence to process the images and audio from video surveillance cameras in order to identify human beings, objects, or specific situations. In other words, video surveillance cameras in France’s public spaces would now be able to identify you and detect whether your behaviour is suspicious. Admittedly, this is already the case in several French cities (for instance in Toulouse since 2016) and in some railway services, but without any legal basis.

France is infamous for its attachment to surveillance, with the highest administrative court even deciding to ignore a CJEU ruling concerning its mass surveillance measures on the ground that the protection of national security is part of the “national identity” of the country. However, Article 7 represents a major step in the direction of general biometric mass surveillance and should be of concern to everyone. In fact, the risks posed by AVS are so high that the current discussions on the European Regulation on Artificial Intelligence envision a formal ban.

The legal basis for AVS provided by Article 7 of the new French law is especially worrisome from two perspectives. First, it would legitimise a practice that violates France’s human rights obligations. Second, adopting this law would make France the first EU member state to grant a legal basis to algorithmic video surveillance, thus creating a worrisome precedent and normalising biometric mass surveillance.





Perspective.

https://bridges.monash.edu/articles/journal_contribution/Taming_the_Electronic_Genie_Can_Law_Regulate_the_Use_of_Public_and_Private_Surveillance_/22337425

Taming the Electronic Genie: Can Law Regulate the Use of Public and Private Surveillance?

The fear that our social and legal institutions are being subtly but inexorably eroded by the growth in surveillance is as common in academic literature as it is in the popular imagination. While large corporations harness the powers of Big Data for the wholesale harvesting of personal data, the government utilises its coercive powers to conduct increasingly intrusive surveillance of members of the public. The article considers the major issues arising from private surveillance, particularly the breaches of privacy inherent in the collection or harvesting of personal information. It then analyses selected issues arising from public surveillance, including data retention and sharing by government, the use of surveillance techniques such as facial recognition technology in criminal investigation, and the invocation of national security concerns to justify invasions of privacy. It considers what legal regime is best suited to regulate mass public and private surveillance, including the tort of privacy, the adoption of international regimes such as the General Data Protection Regulation, and the expansion of fiduciary principles. We argue that the concept of ‘information fiduciary’ should be added to the current range of measures designed to ensure the accountability of both public and private data collectors.


