Sunday, March 20, 2022

Liability and programming. Did I program it exactly as you specified or did I ‘interpret’ your specification? (Or is the user using it wrong?)

https://link.springer.com/article/10.1365/s43439-022-00047-w

Artificial intelligence as a challenge for the law: the example of “Doctor Algorithm”

Society increasingly relies on algorithms to improve decisions with the aid of artificial intelligence. This is particularly true in the field of medicine where, based on algorithms, ever-increasing volumes of data are analysed to provide the basis for tailored medical treatment. This leads to the question of how the law should react to the adoption of such algorithm-based decisions. Will new regulatory approaches be required if algorithms are employed for treatment decisions, or can one resort to the basic principles applied to determine the standard of medical treatment? Who will be liable if the use of algorithms leads to malpractice, in particular if it is the programming of the algorithms that has caused the fault?



(Related)

https://www.statnews.com/2022/03/16/google-translate-health-care-english/

Doctors often turn to Google Translate to talk to patients. They want a better option

Like many health systems, the hospital complied with federal requirements for meaningful access to language services by staffing in-person interpreters for frequent needs like Spanish, and could call up interpreters for less commonly spoken languages. But it was an imperfect system — there were sometimes delays, or a dialect for which it was difficult to track down a translator — and Google Translate came to serve as a fallback.

Google Translate has become a ubiquitous, if under-examined, part of patient care. “It’s sort of [used] under the table,” said Elaine Khoong, an internist and assistant professor of medicine at the University of California, San Francisco. The practice is hidden in part because it is formally discouraged by health systems and state medical registration boards that see it as a liability. There’s a growing push by Khoong and other researchers to bring it to the surface — both to study Google Translate’s use and risk in the clinic, and to build better versions to backstop traditional language services.





Leading to the ‘plug-in lawyer?’

http://library.bseu.by:8080/bitstream/edoc/92839/2/%D0%A1%D0%B1%D0%BE%D1%80%D0%BD%D0%B8%D0%BA%202021.pdf#page=348

E-JUSTICE IN CHINA

The opportunities and challenges that information technology brings to justice are unprecedented. Big data, cloud computing, artificial intelligence, blockchain, 5G, and other information technologies are not only new tools, new thinking, and new methods; they have also given rise to many types of disputes with new characteristics and new trends.

While information technology has profoundly changed people's production and daily life, it has also brought unprecedented opportunities and challenges to justice. Facing technological progress, rule-of-law needs, and the expectations of the people, the People's Court of China keeps pace with these developments and, based on the actual conditions of the country, vigorously explores new judicial models for the Internet era, promotes the comprehensive and in-depth integration of information technology with judicial work, and advances the modernization of the judicial system and trial capacity.

The traditional trial process has shifted from offline to online, and data and information have moved from paper to the "cloud" or the "chain". Profound changes have taken place in litigation stages such as case filing, mediation, service of process, court hearings, presentation of evidence, and cross-examination, and corresponding online litigation rules need to be established [1]. At the same time, the people's courts have tried new types of Internet-related cases in accordance with the law and established a series of governance rules through typical case judgments. The organic unity of these models and rules is what we call e-justice.



(Related)

https://link.springer.com/article/10.1007/s10506-022-09310-1

Smart criminal justice: exploring the use of algorithms in the Swiss criminal justice system

In the digital age, the use of advanced technology is becoming a new paradigm in police work, criminal justice, and the penal system. Algorithms promise to predict delinquent behaviour, identify potentially dangerous persons, and support crime investigation. Algorithm-based applications are often deployed in this context, laying the groundwork for a ‘smart criminal justice’. In this qualitative study based on 32 interviews with criminal justice and police officials, we explore why, and to what extent, such a smart criminal justice system has already been established in Switzerland, and the benefits perceived by users. Drawing upon this research, we address the spread, application, technical background, institutional implementation, and psychological aspects of the use of algorithms in the criminal justice system. We find that the Swiss criminal justice system is already significantly shaped by algorithms, a change motivated by political expectations and demands for efficiency. Until now, algorithms have been used only at a low level of automation and technical complexity, and the perceived levels of benefit vary. This study also identifies the need for critical evaluation and research-based optimization of the implementation of advanced technology. Societal implications, as well as the legal foundations of the use of algorithms, are often insufficiently taken into account. By discussing the main challenges to and issues with algorithm use in this field, this work lays the foundation for further research and debate regarding how to guarantee that ‘smart’ criminal justice is actually carried out smartly.





What is good ethics?

https://www.emerald.com/insight/content/doi/10.1108/JICES-12-2020-0125/full/html

An exploratory qualitative analysis of AI ethics guidelines

The article presents four key findings: existing ethics guidelines (1) promote a broad spectrum of values; (2) focus principally on AI, followed by (Big) Data and algorithms; (3) do not adequately define the term “ethics” and related terms; and (4) have most frequent recourse to the values of “transparency,” “privacy,” and “security.” Based on these findings, the article argues that the guidelines corpus exhibits discernible utilitarian tendencies; guidelines would benefit from greater reflexivity with respect to their ethical framework; and virtue ethical approaches have a valuable contribution to make to the process of guidelines development.





The problem with ‘do your best.’

https://dilbert.com/strip/2022-03-20


