Something interesting for my students to debate.
https://venturebeat.com/2020/11/28/ethical-ai-isnt-the-same-as-trustworthy-ai-and-that-matters/
Ethical AI isn’t the same as trustworthy AI, and that matters
… In lockstep with ethics comes the topic of trust. Ethics are the guiding rules for the decisions we make and actions we take. These rules of conduct reflect our core beliefs about what is right and fair. Trust, on the other hand, reflects our belief that another person — or company — is reliable, has integrity and will behave in the manner we expect. Ethics and trust are discrete, but often mutually reinforcing, concepts.
… Certainly, unethical systems create mistrust. It does not follow, however, that an ethical system will be categorically trusted. To further complicate things, not trusting a system doesn’t mean it won’t get used.
How to be ethical.
https://link.springer.com/chapter/10.1007/978-3-030-64148-1_21
Ethical Guidelines for Solving Ethical Issues and Developing AI Systems
Artificial intelligence (AI) has become a fast-growing trend. Increasingly, organizations are interested in developing AI systems, but many of them have realized that the use of AI technologies can raise ethical questions. The goal of this study was to analyze what kinds of ethical guidelines companies have for resolving potential ethical issues of AI and for developing AI systems. This paper presents the results of a case study conducted in three companies. The ethical guidelines defined by the case companies focused on resolving potential ethical issues such as accountability, explainability, fairness, privacy, and transparency. To analyze different viewpoints on critical ethical issues, two of the companies recommended using multi-disciplinary development teams. The companies also considered defining the purposes of their AI systems and analyzing their impacts to be important practices. Based on the results of the study, we suggest that organizations develop and use ethical guidelines to prioritize the critical quality requirements of AI. The results also indicate that transparency, explainability, fairness, and privacy can be critical quality requirements of AI systems.
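To make the paper's point about fairness as a "quality requirement" concrete, here is a minimal sketch (my illustration, not from the paper) of how a team might turn a fairness guideline into an automated acceptance check. The metric, groups, model outputs, and tolerance below are all hypothetical assumptions.

```python
# Hypothetical sketch: treating "fairness" as a testable quality requirement
# rather than an abstract principle. We check that a model's positive-decision
# rates across demographic groups stay within an agreed tolerance.
# The groups, outputs, and the 0.25 tolerance are illustrative assumptions.

def positive_rate(decisions):
    """Fraction of cases receiving the positive decision (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# An acceptance test a team might attach to its ethical guidelines:
model_outputs = {
    "group_a": [1, 0, 1, 1, 0, 1],  # 1 = positive decision
    "group_b": [1, 0, 1, 1, 0, 0],
}
gap = demographic_parity_gap(model_outputs)
assert gap <= 0.25, f"fairness requirement violated: parity gap {gap:.2f}"
print(f"demographic parity gap: {gap:.2f}")
```

Demographic parity is only one of several competing fairness notions; the point is that a guideline becomes a quality requirement once it is phrased as a check a build can fail.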
(Related)
AI virtues -- The missing link in putting AI ethics into practice
Several seminal ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, widespread criticism has pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach and the many shortcomings associated with it. This paper proposes a different approach. It defines four basic AI virtues, namely justice, honesty, responsibility, and care, all of which represent specific motivational settings that constitute the very precondition for ethical decision making in the AI field. Moreover, it defines two second-order AI virtues, prudence and fortitude, that bolster achievement of the basic virtues by helping to overcome bounded ethicality, that is, the many hidden psychological forces that impair ethical decision making and that have hitherto been completely disregarded in AI ethics. Lastly, the paper describes measures for successfully cultivating the mentioned virtues in organizations dealing with AI research and development.
My AI would rather not deal with mere humans.
Artificial Intelligence and Keeping Humans “in the Loop”
Artificial intelligence (AI) technology has evolved through a number of developmental phases, from its beginnings in the 1950s to modern machine learning, expert systems and “neural networks” that mimic the structure of biological brains. AI now exceeds our performance in many activities once held to be too complex for any machine to master, such as the game Go and game shows. Nonetheless, human intellect still outperforms AI on many simple tasks, given AI’s present inability to recognize more than schematic patterns in images and data. As AI evolves, the pivotal question will be to what degree AI systems should be granted autonomy, to take advantage of this power and precision, or remain subordinate to human scrutiny and supervision, to guard against unexpected failure. That is to say, as we anticipate technological advances in AI, to what degree must humans remain “in the loop”?
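As one concrete reading of "in the loop" (my sketch, not from the article): a common arrangement lets a model act autonomously only when its confidence clears a threshold, and defers everything else to a human reviewer. The classifier, threshold, and data below are hypothetical stand-ins.

```python
# Hypothetical human-in-the-loop gate: the model decides only when it is
# confident; low-confidence cases are escalated to a human reviewer.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Decision:
    label: str         # final decision
    decided_by: str    # "model" or "human"
    confidence: float  # model's confidence in its top label

def human_in_the_loop(
    predict: Callable[[dict], Tuple[str, float]],  # returns (label, confidence)
    ask_human: Callable[[dict], str],              # human review fallback
    case: dict,
    threshold: float = 0.9,  # autonomy knob: higher means more human oversight
) -> Decision:
    label, confidence = predict(case)
    if confidence >= threshold:
        return Decision(label, "model", confidence)
    # Below the threshold, the machine stays subordinate to human scrutiny.
    return Decision(ask_human(case), "human", confidence)

# Toy usage with stand-in functions:
fake_predict = lambda case: ("approve", 0.72)  # low-confidence model output
fake_review = lambda case: "reject"            # the human overrides
print(human_in_the_loop(fake_predict, fake_review, {"id": 1}))
# Decision(label='reject', decided_by='human', confidence=0.72)
```

The threshold is where the article's question lives: raising it keeps humans in the loop more often; lowering it grants the system more autonomy.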
We need legal geeks?
https://osf.io/preprints/lawarxiv/zfkr3/
Education for the Provision of Technologically Enhanced Legal Services
Legal professionals increasingly rely on digital technologies when they provide legal services. The most advanced technologies such as artificial intelligence (AI) promise great advancements of legal services, but lawyers are traditionally not educated in the field of digital technology and thus cannot fully unlock the potential of such technologies in their practice. In this paper, we identify five distinct skills and knowledge gaps that prevent lawyers from implementing AI and digital technology in the provision of legal services and suggest concrete models for education and training in this area. Our findings and recommendations are based on a series of semi-structured interviews, design and delivery of an experimental course in ‘Law and Computer Science’, and an analysis of the empirical data in view of wider debates in the literature concerning legal education and 21st century skills.
Perspective.
US Lags Behind Both Russia & China In These Critical Domains That Will Define The Future of War
In its “2020 Military Power Report”, the Pentagon acknowledges that the US is falling behind China in key military innovations. The report says that China’s strategy is to complete its military modernization program by 2035 and transform the PLA into a “world-class” military by 2049.
… Both Russia and China have surpassed the US in many critical military technologies, a development that military analysts widely read as the beginning of the end of the United States' position as the sole dominant power.
… The three areas in which the US is being surpassed by the two superpowers are:
Hypersonic Weapons
Artificial Intelligence
Blockchain in the Military
A strange conundrum: We rely on psychologists who cannot predict suicide to train a machine that can?
https://www.nytimes.com/2020/11/23/health/artificial-intelligence-veterans-suicide.html
Can an Algorithm Prevent Suicide?
… “The fact is, we can’t rely on trained medical experts to identify people who are truly at high risk,” said Dr. Marianne S. Goodman, a psychiatrist at the Veterans Integrated Service Network in the Bronx, and a clinical professor of medicine at the Icahn School of Medicine at Mount Sinai. “We’re no good at it.”