Sunday, August 14, 2022

I’m fascinated by this argument. So is my AI.

https://www.taylorfrancis.com/chapters/edit/10.4324/9780429356797-10/law-martin-clancy

You Can Call Me Hal: AI and Music IP

This chapter outlines how legal arguments can be constructed whereby a nonhuman legal person – an AI – could be capable of corporate immortality and how such radical positions challenge the human-centred design of intellectual property (IP). The chapter begins by noting the global movement to harmonise IP law and establishes the centrality of music copyright to the music industry’s economy. The historical development of fundamental legal theories is presented so that essential concepts in music copyright such as creativity and originality can be assessed in relation to AI. To comprehend the legal perplexities of music generated by unsupervised machine learning, DeepMind’s WaveNet AI is considered. Supporting legal challenges, including the potential of AI being granted legal personhood, are noted, and the chapter concludes that the protection offered by music copyright law is fragile when its case law has its roots in the precomputerised age. In a chapter addendum, David Hughes, Chief Technology Officer at the Recording Industry Association of America (RIAA) 2006–2021, provides high-level music industry reflection on chapter themes.





Continuing the study.

https://link.springer.com/article/10.1007/s10506-022-09327-6

Thirty years of artificial intelligence and law: the third decade

The first issue of Artificial Intelligence and Law journal was published in 1992. This paper offers some commentaries on papers drawn from the Journal’s third decade. They indicate a major shift within Artificial Intelligence, both generally and in AI and Law: away from symbolic techniques to those based on Machine Learning approaches, especially those based on Natural Language texts rather than feature sets. Eight papers are discussed: two concern the management and use of documents available on the World Wide Web, and six apply machine learning techniques to a variety of legal applications.





“Business” exists to take risk. Knowing what all those risks are is a good thing.

https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/artificial-intelligence-autonomous-drones-and-legal-uncertainties/BDB89BEC2266D1ABF17316A53AA93480

Artificial Intelligence, Autonomous Drones and Legal Uncertainties

Drones represent a rapidly developing industry. Devices initially designed for military purposes have evolved into a new area with a plethora of commercial applications. One of the biggest hindrances in the commercial development of drones is legal uncertainty concerning the legal regimes applicable to the multitude of issues that arise with this new technology. This is especially prevalent in situations concerning autonomous drones (ie drones operating without a pilot). This article provides an overview of some of these uncertainties. A scenario based on the fictitious but plausible event of an autonomous drone falling from the sky and injuring people on the ground is analysed from the perspectives of both German and English private law. This working scenario is used to illustrate the problem of legal uncertainty facing developers, and the article provides valuable knowledge by mapping real uncertainties that impede the development of autonomous drone technology alongside providing multidisciplinary insights from law as well as software, electronic, and computer engineering.





God, politics and AI. Some early thinking about AI…

https://link.springer.com/article/10.1007/s43545-022-00458-w

Spinoza, legal theory, and artificial intelligence: a conceptual analysis of law and technology

This paper sets out to show the relevance of Benedict Spinoza’s (1632–1677) views on law to the contemporary legal discourse on law and technology. I will do this by using some of the reactions toward the use of Artificial Intelligence (AI) in legal practices as illustrative examples of the continued relevance of the debate on law’s nature with which Spinoza was concerned in the fourth chapter of the Theological-Political Treatise. I will argue that the problem of how to make laws efficient is being manifested in legal debates on how to regulate social and scientific practices that involve the use of certain—especially advanced—AI. As such, these debates are based on the idea that AI technology complicates the valid application of law in so far as it challenges the legal idea of the individual who corresponds with the unlimited legal subject. This complication is manifested, for instance, when we consider the rule of law criteria (predictability and transparency) for valid law-making and application in light of the fact that self-learning machines and autonomous AI hold an intentionality that lies beyond the scope of the lawmaker’s cognition. My discussion will lead to the suggestion that Spinoza’s legal theory may help us make sense of the problems perceived by legal discourses on AI and law as illustrations of a conceptual paradox embedded within the concept of law, rather than problems caused by the technological development of new forms of intentionalities.





When your client is an AI… (Okay, not really)

https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=2106&context=facultypub

Assuming the Risks of Artificial Intelligence

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI, first-adopters who are often eager to “voluntarily” use the new technology but simultaneously often lacking in “knowledge” about AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation also can help in other assumption of risk analyses and tort law generally to better address the evolving innovation-risk-consent trilemma.
