Sunday, April 17, 2022

We can, therefore we must. Location tracking may suggest guilt; can it also “prove” innocence?

https://www.virginiamercury.com/2022/04/06/virginia-police-routinely-use-secret-gps-pings-to-track-peoples-cell-phones/

Virginia police routinely use secret GPS pings to track people’s cell phones

Police never described Durvin as a suspect in the search warrant application they submitted seeking permission to track him, and court records show he has not been charged with any crimes in Virginia since police took out the warrant.

Instead, officers wrote that they had found voicemails from Durvin on the overdose victim’s phone and thought tracking his location might help them figure out who supplied the deadly dose of heroin, noting that Durvin had been with the man during what appeared to be a prior overdose in Richmond.

The warrants are limited by law to 30 days but can be — and often are — renewed monthly by a judge.





The privacy hurdle...

https://www.pogowasright.org/announce-privacy-harms-final-published-version-solove-citron/

Announce: Privacy Harms – Final Published Version (Solove & Citron)

Two resources of note this week.

First, as seen on Teach Privacy, Daniel Solove’s site:

I’m delighted to announce that the final published version of my article, Privacy Harms, is now out in print!
Privacy Harms, 102 B.U. L. Rev. 793 (2022) (with Danielle Keats Citron)
Abstract:
The requirement of harm has significantly impeded the enforcement of privacy law. In most tort and contract cases, plaintiffs must establish that they have suffered harm. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins and TransUnion v. Ramirez, the U.S. Supreme Court ruled that courts can override congressional judgment about cognizable harm and dismiss privacy claims.
Caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm.
Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations result in negative consequences, the effects are often small – frustration, aggravation, anxiety, inconvenience – and dispersed among a large number of people. When these minor harms are suffered at a vast scale, they produce significant harm to individuals, groups, and society. But these harms do not fit well with existing cramped judicial understandings of harm.
This article makes two central contributions. The first is the construction of a typology for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various different types, which to date have been recognized by courts in inconsistent ways. Our typology of privacy harms elucidates why certain types of privacy harms should be recognized as cognizable.
The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of enforcement goals and remedies. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

You can download the article, for free, at https://ssrn.com/abstract=3782222. Once again, this site is grateful for all the free resources Dan Solove has made available to privacy law scholars, professionals in the privacy space, and interested members of the public.



(Related)

https://www.pogowasright.org/announce-fight-for-privacy-protecting-dignity-identity-and-love-in-the-digital-age-citron/

Announce: The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (Citron)

Law professor Danielle Keats Citron’s new book is out.

Danielle has done important work in the privacy space, tackling thorny issues like privacy harms (in collaboration with Daniel Solove) as well as, in her own work, stalking and harassment in cyberspace and the role of state attorneys general in promoting and enforcing privacy-protective legislation. I look forward to reading her newest book.

One of the thorniest issues her work raises is what some call “content moderation” and others call “censorship” on social media platforms. If you’ve read Jeff Kosseff’s work on anonymous speech and on Section 230, you will recognize where Kosseff and Citron disagree, but both are well worth reading and considering if you are interested in privacy and protecting it.





We’ve been doing it all wrong?

https://arxiv.org/abs/2204.05151

Metaethical Perspectives on 'Benchmarking' AI Ethics

Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to facial recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI. We conclude by highlighting a number of possible ways forward for the field as a whole, and we advocate for different approaches towards more value-aligned AI research.





A book worth reading?

https://www.degruyter.com/document/isbn/9781479812547/html?lang=en

Virtual Searches

A host of technologies—among them digital cameras, drones, facial recognition devices, night-vision binoculars, automated license plate readers, GPS, geofencing, DNA matching, datamining, and artificial intelligence—have enabled police to carry out much of their work without leaving the office or squad car, in ways that do not easily fit the traditional physical search and seizure model envisioned by the framers of the Constitution. Virtual Searches develops a useful typology for sorting through this bewildering array of old, new, and soon-to-arrive policing techniques. It then lays out a framework for regulating their use that expands the Fourth Amendment’s privacy protections without blindly imposing its warrant requirement, and that prioritizes democratic over judicial policymaking.

The coherent regulatory regime developed in Virtual Searches ensures that police are held accountable for their use of technology without denying them the increased efficiency it provides in their efforts to protect the public. Whether policing agencies are pursuing an identified suspect, constructing profiles of likely perpetrators, trying to find matches with crime scene evidence, collecting data to help with these tasks, or using private companies to do so, Virtual Searches provides a template for ensuring their actions are constitutionally legitimate and responsive to the polity.





For my students?

https://ieeexplore.ieee.org/abstract/document/9755237

Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms

Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms with the remit to validate artificial intelligence, machine learning, and associated algorithms.


