Sunday, May 01, 2022

If autonomous weapons are outlawed, only outlaws (and AI) will have autonomous weapons.

https://researchers.cdu.edu.au/en/publications/weaponized-artificial-intelligence-ai-and-the-laws-of-armed-confl

Weaponized Artificial Intelligence (AI) and the Laws of Armed Conflict (LOAC) - the RAILE Project

Much has already been written about Artificial Intelligence (AI), robotics and autonomous systems, in particular the increasingly prevalent autonomous vehicles: cars, trucks, trains and, to a lesser extent, aeroplanes. This article looks at an emerging technology that has a fundamental impact on our society, namely the use of artificial intelligence (AI) in lethal autonomous weapon systems (LAWS), i.e. weaponized AI, as used by the armed forces. It specifically approaches the question of how laws and policy for this specific form of emerging technology, the military application of autonomous weapon systems (AWS), could be developed. The article focuses on how potential solutions may be found rather than on the well-established issues. Currently, there are three main streams in the debate around how to deal with LAWS: the ‘total ban’, the ‘wait and see’ and the ‘pre-emptive legislation’ paths. The recent increase in the development of LAWS has led Human Rights Watch (HRW) to take a strong stance against ‘killer robots’, promoting a total ban. This causes its own legal issues at the very first stage, the definition of autonomous weapons, which is inconsistent but often refers to the HRW three-step listing: human-in/on/out-of the loop. However, the fact remains that LAWS already exist and continue to be developed, which raises the question of how to deal with them. From a civilian perspective, the initial legal focus has been on liability for accidents. On the military side, international legislation has been, and still is, striving through a series of treaties between states to regulate the behaviour of troops in armed conflict. These treaties, referred to at times as the Laws of Armed Conflict (LOAC) and at times as International Humanitarian Law (IHL), share four fundamental core principles: distinction, proportionality, humanity and military necessity. With LAWS an unavoidable fact on today’s fields of armed conflict and rules governing troop behaviour already existing in the form of international treaties, what is the next step? This article presents a short description of each debate stream, drawing on relevant literature and a selection of arguments raised by prominent authors in the field of AWS and international law. The question for this article is: how do we achieve AWS/AI programming that adheres to the LOAC/IHL’s intent behind the core principles of distinction, proportionality, humanity and military necessity?



(Related)

https://www.mdpi.com/2078-2489/13/5/215/htm

Editorial for the Special Issue on Meaningful Human Control and Autonomous Weapons Systems

The legality and ethics of using Artificial Intelligence (AI) technology in warfare, particularly autonomous weapon systems (AWS), continue to be hotly debated worldwide. Despite the push for a ban on these types of systems, unanimous agreement remains out of reach. Much of the disagreement stems from the lack of a common understanding of what it means for these types of systems to be autonomous. Similarly, there is a dispute over what it means, if it is possible at all, for humans to have meaningful control over these systems.





Is liability liable to change in law?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4080883

Bridging the liability gaps: why AI challenges the existing rules on liability and how to design human-empowering solutions

This chapter explores the so-called ‘liability gaps’ that occur when, in applying existing contractual, extracontractual, or strict liability rules to harms caused by AI, the inherent characteristics of AI may produce unsatisfactory outcomes, in particular for the injured party. The chapter explains the liability gaps, investigating which features of AI challenge the application of traditional legal solutions and why. It then explores the challenges connected to the different possible solutions, including contract law, extracontractual law, product liability, mandatory insurance, company law, and the idea of granting legal personhood to AI and robots. The analysis uses hypothetical scenarios to highlight both the abstract and practical implications of AI, based on the roles and interactions of the various parties involved. In conclusion, the chapter offers an overview of the fundamental principles and guidelines that should be followed to elaborate a comprehensive and effective strategy to bridge the liability gaps. The argument made is that the guiding principle in designing legal solutions to the liability gaps must be the protection of individuals, particularly their dignity, rights and interests.





Automating lawyers… (Will all AI lawyers be called ‘Sue’?)

https://cadmus.eui.eu/handle/1814/74443

Data protection and judicial automation

The words "judicial automation" invoke a broad range of images, ranging from time-saving tools to decision-aiding tools or even quixotic ideas of robot judges. As the development of artificial intelligence technologies expands the range of possible automation, it also raises questions about the extent to which automation is admissible in judicial contexts and the safeguards required for the safe use of AI in judicial contexts. This chapter argues that these applications raise specific challenges for data protection law, as the use of personal data for judicial automation requires the adoption of safeguards against risks to the right to a fair trial. The chapter discusses current and proposed uses of judicial automation, identifying how they use personal data in their operation and the issues that arise from this use, such as algorithmic biases and system opacity. By connecting these issues to the safeguards required for automated decision-making and data protection by design, the chapter shows how data protection law may contribute to a fair trial in contexts of judicial automation and highlights open research questions in the interface between procedural rights and data protection.





A topic that may yet lead to AI personhood…

https://scholarship.law.uc.edu/cgi/viewcontent.cgi?article=1043&context=ipclj

The Patentability of Inventions with Artificial Intelligence Listed as an Inventor Following Thaler v. Hirshfeld

Computers have become an integral part of daily life for people in the United States and around the world. For many, computers create ease and improve quality of life through the variety of functions they provide. From allowing individuals to communicate across the globe, to providing a medium for work and learning, to many things never previously imaginable, computers have transformed many aspects of daily life. Since the development of computers, inventors and developers have consistently looked for ways to make them faster, better, smarter, and able to solve problems. This drive eventually led to the creation of Artificial Intelligence (“AI”). AI essentially uses computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

AI can serve a wide range of purposes and applications, from various types of speech recognition to customer service, among many others. Some people are even harnessing the power of AI to help create new inventions, solve problems, and devise new ways of improving society. For example, AI has been used to detect defects in pharmaceutical products, to develop new compositions for green technology products, and to analyze biological samples in the manufacturing process, along with many other applications. As a result, when seeking intellectual property protection for their new inventions, specifically patent protection, some inventors have chosen to list the artificial intelligence as the inventor on the patent application.





...because your car will become a risk.

https://ieeexplore.ieee.org/abstract/document/9762777

Security and Privacy Issues in Autonomous Vehicles: A Layer-Based Survey

Artificial Intelligence (AI) is changing every technology we are used to dealing with. Autonomy has long been a sought-after goal in vehicles, and now more than ever we are very close to that goal. Major auto manufacturers are investing billions of dollars to produce Autonomous Vehicles (AVs). This new technology has the potential to provide more safety for passengers, less crowded roads, congestion alleviation, optimized traffic, fuel savings, less pollution, and an enhanced travel experience, among other benefits. But this paradigm shift comes with newly introduced privacy issues and security concerns. Vehicles were once dumb mechanical devices; now they are becoming smart, computerized, and connected. They collect huge troves of information, which need to be protected from breaches. In this work, we investigate security challenges and privacy concerns in AVs. We examine different attacks in a layer-based approach, conceptualizing the architecture of AVs as a four-layered model. We then survey security and privacy attacks and some of the most promising countermeasures to tackle them. Our goal is to shed light on the open research challenges in the area of AVs and to offer directions for future research.


