Sunday, September 06, 2020

Potential solutions?

https://www.technologyreview.com/2020/09/04/1008164/ai-biometric-face-recognition-regulation-amba-kak/

Eight case studies on regulating biometric technology show us a path forward

A new report from the AI Now Institute reveals how different regulatory approaches work or fall short in protecting communities from surveillance.

Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn’t long, Kak remembers, before stories about its devastating consequences began to spread. “We were suddenly hearing reports of how manual laborers who work with their hands—how their fingerprints were failing the system, and they were then being denied access to basic necessities,” she says. “We actually had starvation deaths in India that were being linked to the barriers that these biometric ID systems were creating. So it was a really crucial issue.”

On September 2, Kak, who is now the director of global strategy and programs at the New York–based AI Now Institute, released a new report detailing eight case studies of how biometric systems are regulated around the world.





Facial recognition that does not use faces? Trust the machine?

https://link.springer.com/article/10.1007/s11042-020-09391-7

Faceless identification based on temporal strips

This paper first presents a novel approach to modelling facial features, Local Directional Texture (LDT), which exploits the unique directional information in image textures for the problem of face recognition. A variant of LDT with privacy-preserving temporal strips (TS) is then considered to achieve faceless recognition with a higher degree of privacy while maintaining high accuracy. The TS approach uses two strips of pixel blocks from the temporal planes, XT and YT, for face recognition. By removing the reliance on spatial context (i.e., the XY plane) for this task, the proposed method withholds facial appearance information from public view; only one-dimensional temporal information, which varies across time, is extracted for recognition. Thus, privacy is assured without impeding the facial recognition task, which is vital for many security applications such as street surveillance and perimeter access control.
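
The abstract is terse, but the underlying geometry is simple: stack the video frames into a three-dimensional volume and slice it along the time axis instead of the spatial plane. Below is a minimal sketch in Python/NumPy of that slicing, assuming a grayscale video stored as a (T, H, W) array; the function name, the single-row and single-column strips, and the strip positions are hypothetical simplifications, since the paper works with strips of pixel blocks and computes its LDT descriptor on top of them.

import numpy as np

def extract_temporal_strips(video: np.ndarray, row: int, col: int):
    """Slice two temporal planes out of a video volume.

    video: grayscale frames stacked over time, shape (T, H, W)
    row:   y-position of the XT strip (hypothetical choice)
    col:   x-position of the YT strip (hypothetical choice)

    Returns the XT strip of shape (T, W) and the YT strip of
    shape (T, H). No complete XY frame is ever retained.
    """
    xt_strip = video[:, row, :]  # XT plane: one pixel row traced through time
    yt_strip = video[:, :, col]  # YT plane: one pixel column traced through time
    return xt_strip, yt_strip

# Example: a 100-frame clip of 64x64 crops around a detected face region
video = np.random.rand(100, 64, 64)  # stand-in for real frames
xt, yt = extract_temporal_strips(video, row=32, col=32)
print(xt.shape, yt.shape)  # (100, 64) (100, 64)

Because no full frame is ever stored or transmitted, anyone inspecting the strips sees only how individual pixel rows and columns vary over time, which is exactly the privacy property the paper claims.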





Mind the gap!

https://link.springer.com/chapter/10.1007/978-3-030-52470-8_3

Hacking by Law-Enforcement: Investigating with the Help of Computational Models and AI Methods

This Chapter addresses the shift that the digital turn has generated in criminal investigations. In Western legal culture, home and correspondence have traditionally been considered the beacons of privacy against intrusion by public authorities. For this reason, interference with an individual’s privacy for investigative purposes has been considered legitimate only under specific and strict conditions, often regulated by international bills of rights and domestic constitutions. With the digital turn, the core of an individual’s private life is no longer necessarily confined to home and correspondence: personal, sensitive information is now almost always stored on digital devices, which may prove easily vulnerable to external intrusion. In this context, Law Enforcement Agencies are taking advantage of an unprecedented ‘regulation gap’ that allows easy and effective interference in people’s lives. Fundamental rights seem to be the basis for filling that gap, developed through an evolutive interpretation of traditional concepts such as home and correspondence.



(Related) Ask an AI?

https://link.springer.com/chapter/10.1007/978-3-030-52470-8_4

Equality of Arms and Automatedly Generated Evidence

This Chapter addresses the trial dimension of evidence generated in an automated way. In Chap. 3 the focus was on digital intrusion into individuals’ private lives for the purpose of investigation; here the spotlight moves to the courtroom, where evidence is presented, discussed, and challenged by the parties and evaluated by the court. In this scenario, reconstructing how evidence was generated is crucial to evaluating its reliability and, ultimately, to allowing the judge to rely on it in her decision. But what if evidence is generated by a black box? How do the parties’ basic positions at trial change if there is no chance to challenge the counterpart’s evidence because it was generated by an inaccessible algorithm? Fundamental rights other than privacy come into play, in particular the primary and crucial condition for a fair trial: the equality of arms between the parties.


