Sunday, December 04, 2022

‘Intimate,’ as we normally define it. Is that different enough from ‘personal’ that we need a new law?

https://www.pogowasright.org/the-right-to-intimate-privacy-an-interview-with-danielle-citron/

The Right to Intimate Privacy: An Interview with Danielle Citron

Julia Angwin’s newsletter has a great interview with Danielle Citron, privacy law scholar and advocate for privacy rights. She starts by providing a brief recap of some of Citron’s credentials and accomplishments in the field:

In her new book, “The Fight for Privacy: Protecting Dignity, Identity and Love in the Digital Age,” Danielle Citron calls for a new civil right to be established protecting intimate privacy. This is my second newsletter interviewing Danielle, who is the leading legal scholar in the emerging field of cyber civil rights. Two years ago, I interviewed her about efforts to reform Section 230 of the Communications Decency Act—a law sometimes referred to as the Magna Carta of the internet.
Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law and Caddell and Chapman Professor of Law at the University of Virginia, where she is the director of the school’s LawTech Center. In 2019, Citron was named a MacArthur Fellow based on her work on cyberstalking and intimate privacy.

Here’s a snippet from the interview:

Angwin: You call for a civil right to intimate privacy. What does that mean?
Citron: Modern civil rights laws protect against invidious discrimination and rightly so. I want us also to conceive of civil rights as both a commitment for all to enjoy and something that provides special protection against discrimination. Because who is most affected and harmed by the sharing of intimate information? Women, non-White people, and LGBTQ+ individuals, many of whom often have more than one vulnerable identity.
Currently, the law woefully underprotects intimate privacy.

Go read the whole interview at The Markup.





First responder tools have to get it right! (The police tool belt?)

https://bulletin.cepol.europa.eu/index.php/bulletin/article/view/540

Mobile Forensics and Digital Solutions

Mobile devices have become an indispensable part of modern society and are used throughout the world on a daily basis. The proliferation of such devices has rendered them a crucial part of criminal investigations and has led to the rapid advancement of the scientific field of Mobile Forensics. The forensic examination of mobile devices provides essential information for authorities in the investigation of cases, and its importance grows as more evidence and traces of criminal activity can be acquired through the analysis of the corresponding forensic artifacts. Data related to the device user, call logs, text messages, contacts, image and video files, notes, communication records, networking activity and application-related data, among others, with correct technical interpretation and correlation through expert analysis, can significantly contribute to the successful completion of digital criminal investigations. The above underlines the necessity for advanced forensic tools that will utilize the most prominent achievements in Data Science. In this paper, the current status of Mobile Forensics as a branch of Digital Forensics is examined by exploring the most important challenges that digital forensic examiners face and investigating whether Artificial Intelligence and Machine Learning solutions can revolutionize the daily practice of digital forensics investigations. The utilization of these emerging technologies provides crucial tools and enhances the professional expertise of digital forensic scientists, paving the way to overcoming the critical challenges of digital criminal investigations.
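
As a concrete (and purely illustrative) reading of the artifact categories the abstract lists, here is a minimal sketch of what a per-device data model might look like. Every class, field, and function name below is my assumption, not something taken from the paper or from any real forensic tool.

```python
# Hypothetical sketch of the artifact categories enumerated in the abstract.
# None of these names come from the paper or from any real forensic product.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CallRecord:
    number: str
    timestamp: str          # ISO 8601, e.g. "2022-12-04T10:15:00Z"
    duration_seconds: int
    direction: str          # "incoming" or "outgoing"


@dataclass
class DeviceExtraction:
    """Container for the artifact types the abstract enumerates."""
    device_owner: str
    call_logs: List[CallRecord] = field(default_factory=list)
    text_messages: List[str] = field(default_factory=list)
    contacts: List[str] = field(default_factory=list)
    media_files: List[str] = field(default_factory=list)        # image and video paths
    notes: List[str] = field(default_factory=list)
    network_activity: List[str] = field(default_factory=list)   # e.g. Wi-Fi join events
    app_data: Dict[str, list] = field(default_factory=dict)     # per-application records


def calls_on_date(extraction: DeviceExtraction, date_prefix: str) -> List[CallRecord]:
    """Correlate one artifact type against a date, e.g. calls_on_date(x, "2022-12-04")."""
    return [c for c in extraction.call_logs if c.timestamp.startswith(date_prefix)]
```

The point is only that the “correct technical interpretation and correlation” the abstract mentions boils down to joining artifact types like these, which is where AI and ML tooling is being proposed to help.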





Another overview…

https://link.springer.com/chapter/10.1007/978-3-031-19039-1_5

AI Ethics and Policy

With the rapid growth and adoption of emerging technologies like AI, ethical use of such technologies becomes paramount. Many such ethical principles also get formally codified into law and policy. We begin this chapter by differentiating between AI and digital ethics, the key difference being that the latter tends to have broader scope than the former. We then dive into the philosophy of ethics, followed by a discussion of how AI ethics is being incorporated into policy. Several countries are used as real-world examples, but to illustrate such policies in depth, we provide two case studies. The first of these is on the influential and much-discussed General Data Protection Regulation (GDPR) enacted in the European Union in the last decade. Although it is still controversial, and perhaps too early to say, whether enforcement of the GDPR has been sufficiently strong or effective, the regulation has already been used to levy a number of fines and penalties on large corporations like British Airways and Marriott. The second case study is on the US-based National Defense Authorization Act (NDAA). We close the chapter with a discussion of AI ethics in higher education and research.





Over-monitoring, or over-reaction to a little monitoring?

http://global-workplace-law-and-policy.kluwerlawonline.com/2022/11/23/governing-data-governing-technology-complementary-approaches-for-the-future-of-work/

Governing data, governing technology? Complementary approaches for the future of work

An employer today can learn about interactions among employees or with customers via sensors and a vast variety of software. Is the tone of voice friendly enough with customers? How much time is spent on emailing or away from the assigned desk? Scores and ‘idle’ or silent buttons are making the workplace a place where data is constantly accumulated and processed through Artificial Intelligence (AI) and the Internet of Things (IoT). Breaks can lead to penalties, from reduced bonuses to more serious sanctions [1]. These are just examples, but they represent strong evidence in the labour law debate: the recourse to data is changing organisational models and increasing employers’ capability to monitor the workforce [2]. Thus, the self-determination and purpose limitation principles offered by the current General Data Protection Regulation (EU Reg 2016/679) are now under the magnifying glass: can they preserve the order of powers in subordinate employment that datafication is disrupting? Or does guaranteeing individual rights against a vast and complex surveillance society risk creating an unequal David-and-Goliath conflict? [3]

This contribution suggests that data protection law at work is and will be crucial in ensuring labour protection in datafied workplaces. The present focus, however, which is dominated by AI and IoT, needs to be complemented with the governance of technologies (thus not only of data flows) that place structural limitations on employees’ fundamental freedoms. This complementary approach can already be recognised in the European Commission’s Industry 5.0 strategy, with the proposal for a regulation on artificial intelligence as one of the main (yet problematic) developments [4].
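
To make the kind of ‘idle’ scoring the article describes a little more tangible, here is a purely hypothetical sketch. The threshold, the event model, and the scoring rule are my assumptions, not taken from the article or from any real monitoring product.

```python
# Hypothetical sketch of idle-time scoring of the sort the article criticises.
# The threshold, field names, and scoring rule are invented for illustration only.
from datetime import datetime, timedelta
from typing import List

IDLE_THRESHOLD = timedelta(minutes=5)   # assumed: gaps longer than this count as "idle"


def idle_minutes(activity_timestamps: List[datetime]) -> float:
    """Sum the gaps between consecutive activity events that exceed the threshold."""
    ordered = sorted(activity_timestamps)
    idle = timedelta()
    for earlier, later in zip(ordered, ordered[1:]):
        gap = later - earlier
        if gap > IDLE_THRESHOLD:
            idle += gap
    return idle.total_seconds() / 60


# Example: three keyboard/sensor events with a long gap in the middle.
events = [
    datetime(2022, 12, 4, 9, 0),
    datetime(2022, 12, 4, 9, 2),
    datetime(2022, 12, 4, 9, 45),   # 43-minute gap -> counted as idle time
]
print(f"Idle minutes: {idle_minutes(events):.0f}")  # -> 43
```

A score like this, fed from sensors and application logs, is exactly the sort of automated inference that the authors argue data protection law alone struggles to constrain.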





Do we need AI cops?

https://www.researchgate.net/profile/Ariadna_Ochnio/publication/365470169_What_are_the_main_problems_facing_EU_criminal_law_today/links/637641a354eb5f547cde7753/What-are-the-main-problems-facing-EU-criminal-law-today.pdf

What are the main problems facing EU criminal law today?

EU criminal law has to confront a number of issues affecting almost every area of social life, starting with the use of cutting-edge technology based on artificial intelligence, through accelerating climate change, environmental degradation and refugee flows, and ending with the recurring crises of the rule of law. The policy of solving these problems will influence the direction of the development of EU criminal law in the near future. However, such a wide array of problems makes it difficult to discuss them all in one collective study. It is easier, rather, to identify some thematic circles around which the EU is now focusing its criminal justice strategies. For this reason, this book deals with a selected number of issues of EU criminal law.





Only six?

https://ora.ox.ac.uk/objects/uuid:9ed3716e-8aba-44fc-a70c-e6f0488cf130

Six human-centered artificial intelligence grand challenges

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 27 experts in the field of human-centered artificial intelligence. In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.


