Sunday, October 18, 2020

Perhaps “consider” should be part of all app development?

https://www.databreaches.net/california-ag-settlement-suggests-privacy-and-security-practices-of-digital-health-apps-may-provide-fertile-ground-for-enforcement-activity/

California AG Settlement Suggests Privacy and Security Practices of Digital Health Apps May Provide Fertile Ground for Enforcement Activity

Libbie Canter, Anna D. Kraus, and Rebecca Yergin of Covington & Burling write:

California Attorney General Xavier Becerra (“AG”) announced in September a settlement against Glow, Inc., resolving allegations that the fertility app had “expose[d] millions of women’s personal and medical information.” In the complaint, the AG alleged violations of certain state consumer protection and privacy laws, stemming from privacy and security “failures” in Glow’s mobile application (the “Glow App”). The settlement, which remains subject to court approval, requires Glow to comply with relevant consumer protection and privacy laws (including California’s medical privacy law), mandates “a first-ever injunctive term that requires Glow to consider how privacy or security lapses may uniquely impact women,” and imposes a $250,000 civil penalty.

Read more on InsidePrivacy.





AI complicates things…

https://edpl.lexxion.eu/article/edpl/2020/3/8

Forgetful AI: AI and the Right to Erasure under the GDPR

Artificial Intelligence and, specifically, Machine Learning, depends on data for its development and continuous evolution. Frequently, the information used to train Machine Learning algorithms is personal data and, thereby, subject to the rules contained within the GDPR. If the necessary requirements are fulfilled, Article 17 of the GDPR grants to the data subject the right to request from the controller the erasure of personal data concerning him/her. In this paper we will study the impact of the right to erasure under the GDPR in the development of Artificial Intelligence in the European Union. We will assess whether datasets, mathematical models and the results of applying such models to new data need to be erased, pursuant to a valid request from the data subject. We will also analyse the challenges created by this erasure, how they can be minimized and the most adequate legal interpretations to ensure seamless AI development that is also compatible with the principles of privacy and data protection currently in force within the European Union.





Face it.

https://vps5.cloudfarm.it/handle/20.500.11825/1794

Framing the picture: a human rights-based study on AI, the case of facial recognition technology

From science-fiction novels and dystopian literary scenarios, Artificial Intelligence (AI) has become a distinguishing feature of our times. AI-based technologies have the potential to decrease the mortality caused by car accidents or serious diseases, and the detrimental effects of climate change. Yet, all that glisters is not gold. We live surrounded by security cameras, unconsciously caught by the lenses of private smartphones and dashcams integrated into vehicles, and regularly overflown by drones and orbiting satellites. Among these various forms of surveillance, Facial Recognition Technology (FRT) plays a central role. The present thesis aims at investigating, analysing and discussing several threats FRT can pose to human rights, democracy and the rule of law. To do so, its uses by law enforcement authorities will be “framed” by adopting the European human rights law framework. This research will show that the risks connected to the deployment of FRT are heightened when it is advocated for the pursuit of “public security”. Based on the analysis performed, it can be concluded that, whilst proper regulation would mitigate the adverse effects generated by FRT, the general public should also become more sensitive to data protection and privacy issues in order to enable an environment for “human flourishing”.





Will the US follow China’s lead?

https://bhxb.buaa.edu.cn/Jwk3_bhsk/EN/10.13766/j.bhsk.1008-2204.2019.0224

Reform of Higher Education of Law in the Era of Artificial Intelligence

Abstract: The development of artificial intelligence has brought challenges to the field of law. Higher legal education in China should cultivate students' legal practice ability, big data thinking ability and computational thinking ability to cope with the new requirements for legal talent brought by artificial intelligence. However, in current practice, legal education in China struggles to meet the training needs for legal talent in the era of artificial intelligence in terms of training objectives, curriculum system and practical conditions. It is therefore suggested that, firstly, the cultivation of students' ability to apply the law should be promoted through both teachers and courses; secondly, a course system for artificial intelligence law should be created covering both theoretical and practical aspects; and finally, traditional teaching methods should be reformed with the aid of artificial intelligence legal systems.





Improving the scrutability?

https://www.semanticscholar.org/paper/How-to-Support-Users-in-Understanding-Intelligent-Eiband-Buschek/449afa3d3cd56e628f8e9c1e376df41eb64fabe7

How to Support Users in Understanding Intelligent Systems? Structuring the Discussion

The opaque nature of many intelligent systems violates established usability principles and thus presents a challenge for human-computer interaction. Research in the field therefore highlights the need for transparency, scrutability, intelligibility, interpretability and explainability, among others. While all of these terms carry a vision of supporting users in understanding intelligent systems, the underlying notions and assumptions about users and their interaction with the system often remain unclear. We review the literature in HCI through the lens of implied user questions to synthesise a conceptual framework integrating user mindsets, user involvement, and knowledge outcomes to reveal, differentiate and classify current notions in prior work. This framework aims to resolve conceptual ambiguity in the field and provides researchers with a thinking tool to clarify their assumptions and become aware of those made in prior work. We thus hope to advance and structure the dialogue in the HCI research community on supporting users in understanding intelligent systems.





A lot to think about…

https://link.springer.com/article/10.1007/s00146-020-01079-8

Ethics of engagement

In this volume, AI&Society authors critically reflect on ethics of engagement. The narratives range from societal sustainability, Surveillance Capitalism, Machine theology, Social jurisdiction, Covid-19, EU GDPR consent mechanisms, Strategic Health Initiative, Watson for Oncology, Recommender Systems, and Socio-technological systems. The discussion and arguments range from Artificial wisdom; Artificial moral agents; Crisis of moral passivity; Smart phones on wheels; Disengagement and re-engagement with roboethics; roboaesthetics; interpersonal interaction and perceived legitimacy; Value conflicts, Nudging traps and algorithmic bias, Digital Fake News, Social anxiety, and Dysfunctional impacts of automation on social and political stability; Regulatory frameworks and EU GDPR consent mechanisms; Legal, political, and bureaucratic decision-making; Implication of autonomous decision making on judgment making during COVID-19 pandemic; AI, medicine and ethics; Global supply chain dependency and Global concordance; Narrative of entanglement; AI and shared human motivations; Cognitive-architecture for autonomy, intentionality and emotion as prerequisites for creativity; Turing’s vision and cooperative challenge of language use; and Theistic AI narratives.


