Sunday, August 22, 2021

Does the government have an obligation of timely notification?

https://www.databreaches.net/u-s-state-department-recently-hit-by-a-cyber-attack-fox-news/

U.S. State Department recently hit by a cyber attack – Fox News

Reuters reports:

The U.S. State Department was recently hit by a cyber attack, and notifications of a possible serious breach were made by the Department of Defense Cyber Command, a Fox News reporter tweeted https://bit.ly/3z7RTH7 on Saturday.
It is unclear when the breach was discovered, but it is believed to have happened a couple of weeks ago, according to the Fox News reporter’s Twitter thread.

Read more on Reuters.





“It’s not my fault, the computer did it.” An argument for AI personhood? Perhaps we need an AI to act as legal/ethical counsel for autonomous weapons?

https://www.sciencedirect.com/science/article/abs/pii/S0267364921000376

Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of Rome statute

Artificial intelligence (AI), at its current level of development, has become a scientific reality studied in law, political science, and other social sciences as well as in computer and software engineering. AI systems that performed relatively simple tasks in the early stages of development are expected to become fully or largely autonomous in the near future. As a result, AI, which encompasses machine learning, deep learning, and autonomy, has begun to play an important role in producing and using smart arms. However, questions about AI-Based Lethal Weapon Systems (AILWS) and the attacks that such systems can carry out have not been fully answered from a legal standpoint. In particular, it remains controversial who will be responsible for the actions an AILWS has committed. In this article, we discuss whether an AILWS can commit an offense in the context of the Rome Statute, examine the applicable law regarding the responsibility of AILWS, and assess whether these systems can be held responsible in the context of international law, the crime of aggression, and individual responsibility. We find that international legal rules, including the Rome Statute, can be applied to responsibility for an act/crime of aggression caused by an AILWS. However, no matter how advanced the cognitive capacity of AI software becomes, it will not be possible to resort to the personal responsibility of such a system, since it has no legal personality at all. In that case, responsibility remains with the actors who design, produce, and use the system. Last but not least, since no AILWS software today has specific codes of conduct capable of legal and ethical reasoning, the study recommends that states and non-governmental organizations, together with manufacturers, establish the necessary ethical rules written into software programs to prevent these systems from committing unlawful acts and to develop mechanisms that restrain AI from operating outside human control.



(Related)

https://www.tandfonline.com/doi/full/10.1080/0952813X.2021.1964003

The risks associated with Artificial General Intelligence: A systematic review

Artificial General Intelligence (AGI) offers enormous benefits for humanity, yet it also poses great risk. The aim of this systematic review was to summarise the peer-reviewed literature on the risks associated with AGI. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Sixteen articles were deemed eligible for inclusion. Article types included in the review were classified as philosophical discussions, applications of modelling techniques, and assessments of current frameworks and processes in relation to AGI. The review identified a range of risks associated with AGI, including AGI removing itself from the control of human owners/managers, AGI being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values, inadequate management of AGI, and existential risks. Several limitations of the AGI literature base were also identified, including a limited number of peer-reviewed articles and modelling techniques focused on AGI risk, a lack of risk research specific to the domains in which AGI may be implemented, a lack of specific definitions of AGI functionality, and a lack of standardised AGI terminology. Recommendations to address the identified issues with AGI risk research are required to guide AGI design, implementation, and management.





Another “We want to sue them all” argument?

https://digitalcommons.law.byu.edu/lawreview/vol46/iss6/7/

Medical Device Artificial Intelligence: The New Tort Frontier

The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues for its clearance processes. If the FDA cannot reasonably reduce the risk of injury for AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments Act (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost completely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation to promote innovation and reduce costs.

Although preemption remains an important mechanism for balancing injury risks with device availability, the introduction of AI software dramatically changes the risk profile for medical devices. Due to the inherent opacity and changeability of AI algorithms powering AI machines, it is nearly impossible to predict all potential safety hazards a faulty AI system might pose to patients. This Article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. This Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that will create a more comprehensive and complementary safety and compensatory model.



(Related)

https://www.taylorfrancis.com/chapters/edit/10.4324/9781003080596-1/contract-tort-law-digital-age-zvonimir-slakoper-ivan-tot

Contract and tort law in the digital age

The new and emerging digital technologies of the overlapping third and fourth industrial revolutions are raising various challenges to the law of obligations. The central question is whether the existing contract and tort law rules and doctrines are well equipped to meet these new challenges or whether an appropriate modification, reinterpretation, or creation of entirely new legal solutions is needed to that purpose. This introductory chapter addresses contract and tort law issues related to the following main topics of the book: liability of internet intermediaries for illegal third-party content, liability of collaborative economy platforms, liability for artificial intelligence and other emerging digital technologies, and contract law challenges of blockchain-based smart contracts. The chapter provides a brief overview of the European Union regulatory framework and introduces the chapters that follow.





Perspective.

https://www.emerald.com/insight/content/doi/10.1108/JEET-04-2021-0016/full/html

The ethical implications of 4IR

This paper aims to highlight the ethical implications of the adoption of Fourth Industrial Revolution (4IR) technologies, particularly artificial intelligence (AI), for humanity. It proposes a virtues approach to resolving ethical dilemmas.

The research is based on a review of the relevant literature and empirical evidence for how AI is impacting individuals and society. It uses a taxonomy of human attributes against which potential harms are evaluated.

The technologies of the 4IR are being adopted at a fast pace, posing numerous ethical dilemmas. This study finds that the adoption of these technologies, driven by an Enlightenment view of progress, is diminishing key aspects of humanity – moral agency, human relationships, cognitive acuity, freedom and privacy and the dignity of work. The impact of AI algorithms, in particular, is shown to be distorting the view of reality and threatening democracy, in part due to the asymmetry of power between Big Tech and users. To enable humanity to be masters of technology, rather than controlled by it, a virtues-based approach should be used to resolve ethical dilemmas, rather than utilitarian ethics.





Hummm.

https://www.elgaronline.com/view/edcoll/9781800377158/9781800377158.00007.xml

Technology and Corporate Law

In light of the overwhelming impact of technology on modern life, this thought-provoking book critically analyses the interaction of innovation, technology and corporate law. It highlights the impact of artificial intelligence and distributed ledgers on corporate governance and form, examining the extent to which technology may enhance or displace conventional theories and practices concerning corporate governance and regulation. Expert contributors from multiple jurisdictions identify themes and challenges that transcend national boundaries and confront the international community as a whole.


