Sunday, October 25, 2020

The Jack Benny question?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3713650

Your Data or Your Life

Data, in particular personal data, is often described as the ‘new oil’ powering the information economy. It’s an attractive metaphor — evoking transformations underway in a fourth industrial revolution, heralding a world of artificial intelligence and limitless possibilities. Unfortunately, that metaphor is wrong in almost every way. Oil is finite, companies pay millions to extract it, and each barrel can be used only once. Data, by contrast, is infinite, consumers give it to you for free, and you can keep using it for as long as you like.

This essay considers the nature of personal data and recent proposals to amend Singapore's Personal Data Protection Act.





Can’t hurt.

https://www.techradar.com/news/microsoft-wants-to-make-sure-we-dont-fall-victim-to-murderous-ai

Microsoft wants to make sure we don't fall victim to murderous AI

Anyone worried about the threat of a Skynet-esque rise of the machines may be able to rest a little easier after the release of new protective measures designed to avoid a potential AI uprising.

The nonprofit MITRE Corporation has teamed up with 12 top technology companies, including the likes of Microsoft, IBM and Nvidia to launch the Adversarial ML Threat Matrix.

The group says the system is an open framework created to help security analysts spot, alert, respond to and address threats targeting machine learning (ML) systems.

https://github.com/mitre/advmlthreatmatrix
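The Adversarial ML Threat Matrix follows the ATT&CK pattern of organizing attacker goals (tactics) into concrete ways of achieving them (techniques). A minimal sketch of that shape, assuming illustrative tactic and technique names drawn from public descriptions of the matrix rather than its official schema:

```python
# Illustrative sketch, not the official MITRE schema: tactics (columns of
# the matrix) map to lists of techniques (cells). Names are examples only.
THREAT_MATRIX = {
    "Reconnaissance": ["Acquire public ML artifacts", "Search victim's website"],
    "ML Attack Staging": ["Craft adversarial data", "Train proxy model"],
    "Exfiltration": ["Extract model via inference API"],
}

def techniques_for(tactic: str) -> list[str]:
    """Return the techniques listed under a tactic, or an empty list."""
    return THREAT_MATRIX.get(tactic, [])
```

An analyst triaging an alert could walk such a structure to map an observed behavior back to a tactic, which is the framework's stated purpose.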





Collecting thinking.

https://www.semanticscholar.org/paper/A-Comparative-Analysis-of-Industry-Human-AI-Wright-Wang/e55360f0cf3932c90dfa8e0897ff46c14ab1c4dd

A Comparative Analysis of Industry Human-AI Interaction Guidelines

With the recent release of AI interaction guidelines from Apple, Google, and Microsoft, there is clearly interest in understanding the best practices in human-AI interaction. However, industry standards are not determined by a single company, but rather by the synthesis of knowledge from the whole community. We have surveyed all of the design guidelines from each of these major companies and developed a single, unified structure of guidelines, giving developers a centralized reference. We have then used this framework to compare each of the surveyed companies to find differences in areas of emphasis. Finally, we encourage people to contribute additional guidelines from other companies, academia, or individuals, to provide an open and extensible reference of AI design guidelines at https://ai-open-guidelines.readthedocs.io/.



(Related)

https://www.semanticscholar.org/paper/Good-AI-for-the-Present-of-Humanity-Correa/51699de84962ecaf7d74c316962904293d0796bb

Good AI for the Present of Humanity

There is a link between critical theory and some genres of literature that may be of interest to the current debate on AI ethics. While critical theory generally points out deficiencies in the present in order to criticize it, futurology and literary genres such as cyberpunk extrapolate our current condition into possible dystopian futures to criticize the status quo. Given the advance of the AI industry in recent years, a growing number of ethical issues have been raised and debated, and we have converged on a handful of principles, such as accountability, explainability, and privacy, as the main ones. But certainly this can't be all. While most current debates around AI ethics revolve around making AI "good", we see little effort made to make AI good for everyone. This raises questions such as: what do published ethical guidelines fail to cover? As critical theory and literature warn us, what kind of future are we creating, and at whose expense? Does AI governance occur inclusively and diversely? In this study, I present two aspects omitted or barely mentioned in the current debate on AI ethics: the present humanitarian costs of our new automated industrial revolution, and the lack of diversity in this whole modernization process.





Automating judges?

https://www.semanticscholar.org/paper/AI-lead-Court-Debate-Case-Investigation-Ji-Zhu/cbbb56a8f9e883d5e3c0457d60bf7dcd248ae083

AI-lead Court Debate Case Investigation

The multi-role judicial debate among the plaintiff, defendant, and judge is an important part of a judicial trial. Unlike other types of dialogue, questions are raised by the judge, and the plaintiff, the plaintiff's agent, the defendant, and the defendant's agent then debate them so that the trial can proceed in an orderly manner. Question generation is an important task in natural language generation. In a judicial trial, it can help the judge raise efficient questions and gain a clearer understanding of the case. In this work, we propose an innovative end-to-end question generation model, the Trial Brain Model (TBM), which generates the questions the judge wants to ask from the historical dialogue between the plaintiff and the defendant. Unlike prior efforts in natural language generation, our model can learn the judge's questioning intention through predefined knowledge. Experiments on real-world datasets show that our model generates more accurate questions in the multi-role court debate setting.



(Related)

https://www.dovepress.com/the-increasing-role-of-artificial-intelligence-in-health-care-will-rob-peer-reviewed-article-IJGM

The Increasing Role of Artificial Intelligence in Health Care: Will Robots Replace Doctors in the Future?

Abstract: Artificial intelligence (AI) pertains to the ability of computers or computer-controlled machines to perform activities that demand the cognitive function and performance level of the human brain. The use of AI in medicine and health care is growing rapidly, significantly impacting areas such as medical diagnostics, drug development, treatment personalization, supportive health services, genomics, and public health management. AI offers several advantages; however, its rampant rise in health care also raises concerns regarding legal liability, ethics, and data privacy. Technological singularity (TS) is a hypothetical future point in time when AI will surpass human intelligence. If it occurs, TS in health care would imply the replacement of human medical practitioners with AI-guided robots and peripheral systems. Considering the pace at which technological advances are taking place in the arena of AI, and the pace at which AI is being integrated with health care systems, it is not unreasonable to believe that TS in health care might occur in the near future and that AI-enabled services will profoundly augment the capabilities of doctors, if not completely replace them. There is a need to understand the associated challenges so that we may better prepare the health care system and society to embrace such a change – if it happens.





The robot did it!

https://scholarship.law.edu/lawreview/vol69/iss2/9/

Where We’re Going, We Don’t Need Drivers: Autonomous Vehicles and AI-Chaperone Liability

The future of mainstream autonomous vehicles is approaching in the rearview mirror. Yet, the current legal regime for tort liability leaves an open question of how tortious Artificial Intelligence (AI) devices and systems capable of machine learning will be held accountable. To understand the potential answer, one may simply go back in time and see how this question would be answered under traditional torts. This Comment tests whether an incident involving an autonomous vehicle hitting a pedestrian is covered under traditional torts, argues that they are incapable of solving this novel problem, and ultimately proposes a new strict liability tort: AI-Chaperone Liability. Because advancement in technology requires advancement in the law, AI-Chaperone Liability is a step forward in uncharted territory.





We’re not there yet.

https://telrp.springeropen.com/articles/10.1186/s41039-020-00141-9

Making context the central concept in privacy engineering

There is a gap between people’s online sharing of personal data and their concerns about privacy. Until now, this gap has been addressed by attempting to match individual privacy preferences with service providers’ options for data handling. This approach has ignored the role different contexts play in data sharing. This paper aims to give privacy engineering a new direction by putting context centre stage and exploiting the affordances of machine learning in handling contexts and negotiating data sharing policies. This research is explorative and conceptual, representing the first development cycle of a design science research project in privacy engineering. The paper offers a concise understanding of data privacy as a foundation for design, extending the seminal contextual integrity theory of Helen Nissenbaum. That theory started out as a normative theory describing the moral appropriateness of data transfers. In our work, the contextual integrity model is extended to a socio-technical theory that could have practical impact in the era of artificial intelligence. New conceptual constructs such as ‘context trigger’, ‘data sharing policy’ and ‘data sharing smart contract’ are defined, and their application is discussed at an organisational and technical level. The constructs and design are validated through expert interviews; contributions to design science research are discussed, and the paper concludes by presenting a framework for further privacy engineering development cycles.
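The paper's core move is Nissenbaum's contextual integrity: whether a data flow is appropriate depends on the context, the data category, and the recipient, not on a global preference. A minimal sketch of a context-aware 'data sharing policy' check under that idea; all names and the deny-by-default rule are illustrative assumptions, since the paper describes concepts rather than an implementation:

```python
from dataclasses import dataclass

# Hypothetical illustration of the paper's constructs: a detected context
# (what a 'context trigger' would supply) plus a data category and recipient
# are checked against explicit sharing policies. Deny unless allowed.
@dataclass(frozen=True)
class SharingPolicy:
    context: str        # e.g. "healthcare", "e-commerce"
    data_category: str  # e.g. "diagnosis", "shipping_address"
    recipient: str      # e.g. "physician", "advertiser"
    allowed: bool

POLICIES = [
    SharingPolicy("healthcare", "diagnosis", "physician", True),
    SharingPolicy("healthcare", "diagnosis", "advertiser", False),
]

def may_share(context: str, data_category: str, recipient: str) -> bool:
    """Contextual-integrity check: a flow is permitted only if some
    policy explicitly allows it for this exact context/category/recipient."""
    return any(
        p.allowed
        and (p.context, p.data_category, p.recipient)
        == (context, data_category, recipient)
        for p in POLICIES
    )
```

Note how the same data category flips from permitted to forbidden when only the recipient changes, which is exactly the contextual behavior a preference-matching approach misses.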


