Sunday, December 05, 2021

Ain’t technology wonderful?

https://www.phonearena.com/news/car-thieves-use-apple-air-tags-to-track-targeted-cars_id136876

Canadian police find a new use for AirTags that Apple will never promote

In Canada, a new use has been discovered for Apple's item-tracking AirTags, although it isn't something you'll see Apple advertising. A press release from the York Regional Police (via Cult of Mac) warns residents that investigators have discovered "a new method being used by thieves to track and steal high-end vehicles across York Region." Starting in September, the York Regional Police investigated small tracking devices placed on high-end vehicles so that thieves could later locate and steal a car they had spotted earlier in the day.

The cops say that brand-name "AirTags" are placed in out-of-sight locations on high-end automobiles parked in high-traffic areas like malls and parking lots. The vehicles are then tracked back to the owners' residences, where they are stolen right out of the driveway. The thieves use a screwdriver to enter a targeted car via the driver or passenger door.

Once inside, the thieves plug in an electronic diagnostic device like the kind your friendly mechanic uses and adjust the car's settings so that it will accept a key they have brought with them. Once that is done, they simply drive away.



Legitimizing a technology used inappropriately?

https://www.engadget.com/clearview-ai-facial-recognition-patent-222347603.html

Clearview AI will get a US patent for its facial recognition tech

Clearview AI is about to get formal acknowledgment for its controversial facial recognition technology. Politico reports Clearview has received a US Patent and Trademark Office "notice of allowance" indicating officials will approve a filing for its system, which scans faces across public internet data to find people from government lists and security camera footage. The company just has to pay administrative fees to secure the patent.

As you might imagine, there's a concern the USPTO is effectively blessing Clearview's technology and giving the company a chance to grow despite widespread objections to the technology's very existence. Critics charge that Clearview is building image databases without targets' knowledge or permission, and multiple governments (including Australia and the UK) believe its facial recognition violates data-protection laws. The tech could theoretically be used to stifle political dissent or, in private hands, to stalk people. And that's not counting worries about possible gender and racial biases in facial recognition as a whole.


(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974243

A call for more explainable AI in law enforcement

The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethics guidelines. This paper firstly analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how the explanation of the functioning of AI and its outcomes needs to be provided. Secondly, it explores some legal obstacles to ensuring the desired explainability of AI technologies, namely, the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.


(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3970518

Non-Asimov Explanations: Regulating AI Through Transparency

An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: What happened (or might happen) to cause this failure? And why did (or might) it happen? These are disguised normative questions – they really ask what ought to have happened, and how the humans involved ought to have behaved.

If we ask the same questions about AI systems we run into two difficulties. The first is what might be described as the ‘black box’ problem, which lawyers have begun to investigate. Some modern AI systems are highly complex, so that even their makers might be unable to understand their workings fully, [Why would you program an AI to make a completely random choice? Bob] and thus answer the what and why questions. Technologists are beginning to work on this problem, aiming to use technology to explain the workings of autonomous systems more effectively, and also to produce autonomous systems which are easier to explain.

But the second difficulty is so far underexplored, and is a more important one for law and regulation. This is that the kinds of explanation required by law and regulation are not, at least at first sight, the kinds of explanation which AI systems can currently provide. To answer the normative questions, law and regulation seeks a narrative explanation, a story. Humans usually explain their decisions and actions in narrative form (even if the work of psychologists and neuroscientists tells us that some of the explanations are devised ex post, and may not accurately reflect what went on in the human mind).

At present, we seek these kinds of narrative explanation from AI technology, because as humans we seek to understand technology’s working through constructing a story to explain it. Our cultural history makes this inevitable – authors like Asimov, writing narratives about future AI technologies like intelligent robots, have told us that they act in ways explainable by the narrative logic which we use to explain human actions and so they can also be explained to us in those terms. This is, at least currently, not true.

This chapter argues that we can only solve this problem by working from both sides. Technologists will need to find ways to tell us stories which law and regulation can use. But law and regulation will also need to accept different kinds of narratives, which tell stories about fundamental legal and regulatory concepts like fairness and reasonableness that are different from those we are used to.



And transcripts can be used against them since they were informed that calls were being recorded?

https://www.theregister.com/2021/12/04/in_brief_ai/

Prisons transcribe private phone calls with inmates using speech-to-text AI

Prisons around the US are installing AI speech-to-text models to automatically transcribe conversations with inmates during their phone calls.

A series of contracts and emails from eight different states revealed how Verus, an AI application developed by LEO Technologies and based on a speech-to-text system offered by Amazon, was used to eavesdrop on prisoners’ phone calls.

In a sales pitch, LEO’s CEO James Sexton told officials working for a jail in Cook County, Illinois, that one of its customers in Calhoun County, Alabama, uses the software to protect prisons from getting sued, according to an investigation by the Thomson Reuters Foundation.

"(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists," Sexton said. Verus transcribes phone calls and finds certain keywords discussing issues like COVID-19 outbreaks or other complaints about jail conditions.

Prisons, however, said the tool was used to catch crime. In one case, it allegedly found one inmate illegally collecting unemployment benefits. But privacy advocates aren’t impressed. "The ability to surveil and listen at scale in this rapid way – it is incredibly scary and chilling," said Julie Mao, deputy director at Just Futures Law, an immigration legal group.
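For a sense of how such a pipeline fits together, here is a minimal sketch of a transcribe-then-flag loop built on Amazon Transcribe, the AWS speech-to-text service the article says Verus is based on. This is not Verus itself; the bucket, job name, and keyword watchlist are hypothetical placeholders.

```python
# A minimal sketch (not Verus) of a transcribe-then-flag pipeline on
# Amazon Transcribe. Bucket, job name, and watchlist are hypothetical.
import json
import time
import urllib.request

import boto3

KEYWORDS = {"covid", "outbreak", "lawsuit", "unemployment"}  # hypothetical watchlist

transcribe = boto3.client("transcribe")

def transcribe_call(job_name: str, audio_uri: str) -> str:
    """Submit a recorded call to Amazon Transcribe and return the transcript text."""
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": audio_uri},  # e.g. s3://example-bucket/call-1234.wav
        MediaFormat="wav",
        LanguageCode="en-US",
    )
    while True:  # poll until the job finishes
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(5)
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(uri) as resp:
        result = json.load(resp)
    return result["results"]["transcripts"][0]["transcript"]

def flag_keywords(transcript: str) -> set:
    """Return the watchlist terms that appear in the transcript."""
    return KEYWORDS & set(transcript.lower().split())

text = transcribe_call("call-1234", "s3://example-bucket/call-1234.wav")
hits = flag_keywords(text)
if hits:
    print(f"Flagged for review: {sorted(hits)}")
```

Even this toy version makes the scale problem obvious: once transcripts are machine-readable, every call can be searched for any term, retroactively and in bulk.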



It’s coming. Deal with it.

http://eskup.kpu.edu.rs/dar/article/view/273

CHALLENGES OF CONTEMPORARY PREDICTIVE POLICING

Big data algorithms developed for predictive policing are increasingly present in the everyday work of law enforcement. Such technologies are used to predict crimes, likely crime scenes, profiles of perpetrators, and more. In this way, police officers receive assistance that increases their efficiency or entirely replaces them in specific tasks. However technologically advanced it may be, policing involves force and arrest, so prediction algorithms can have significantly different, and far more drastic, consequences than similar technologies would produce in agriculture, industry, or healthcare. For the further development of predictive policing, it is necessary to have a clear picture of the problems it can cause. This paper discusses modern predictive policing from the perspective of the challenges that negatively affect its application.
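To make the concern concrete, here is a toy risk-scoring sketch of the kind of model at issue; it is not any vendor's system, and every feature name and data point is synthetic.

```python
# A toy sketch of per-grid-cell "risk scoring" for predictive policing.
# Not any vendor's system; all features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-cell features: [incidents in last 30 days,
# calls for service, distance to nightlife district in km]
X = rng.random((500, 3)) * [10, 50, 5]
# Synthetic label: did the cell record an incident the following week?
y = (X[:, 0] + 0.1 * X[:, 1] - X[:, 2] + rng.normal(0, 2, 500) > 5).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # one "risk score" per grid cell
print("Highest-risk cells:", np.argsort(risk)[-5:])
```

The paper's worry is visible even here: if patrols concentrate where the score is high, those cells record more incidents, which raises the score further, a feedback loop with no analogue in agriculture or healthcare.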



...and we don’t have a federal privacy law yet, let alone AI law.

https://venturebeat.com/2021/12/04/the-metaverse-needs-aggressive-regulation/

The metaverse needs aggressive regulation

Thirty years ago, while working at Air Force Research Laboratory, I built the first interactive augmented reality system, enabling users to reach out and engage a mixed world of real and virtual objects. I was so inspired by the reactions people had when they experienced those early prototypes, I founded one of the first VR companies in 1993, Immersion Corp, and later founded an early AR technology company, Outland Research. Yes, I’ve been an enthusiastic believer in the metaverse for a very long time.

I’ve also been a longtime critic of the field, issuing warnings about the negative impacts that AR and VR could have on society, especially when combined with the power of artificial intelligence. It’s not the technology I fear, but the fact that large corporations can use the infrastructure of the metaverse to monitor and manipulate the public at levels that make social media seem quaint. That’s because these platforms will not just track what you click on, but where you go, what you do, what you look at, even how long your gaze lingers. They will also monitor your facial expressions, vocal inflections, and vital signs (as captured by your smartwatch), all while intelligent algorithms predict changes in your emotional state.
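To ground the "how long your gaze lingers" point, here is a minimal sketch of dwell-time telemetry; the gaze-sample format and object names are hypothetical, not any platform's actual API.

```python
# A minimal sketch of gaze dwell-time telemetry. The (timestamp, object)
# sample format and object names below are hypothetical.
from collections import defaultdict

def dwell_times(samples):
    """samples: list of (timestamp_seconds, object_id) gaze fixations,
    sorted by time. Returns total seconds spent looking at each object."""
    totals = defaultdict(float)
    for (t0, obj), (t1, _) in zip(samples, samples[1:]):
        totals[obj] += t1 - t0  # attribute the interval to the fixated object
    return dict(totals)

gaze = [(0.0, "ad_banner"), (0.4, "ad_banner"), (0.8, "avatar_2"), (1.5, "ad_banner")]
print(dwell_times(gaze))  # {'ad_banner': 0.8, 'avatar_2': 0.7}
```

A few lines like these, run continuously on a headset's eye tracker, turn attention itself into a logged, sellable signal.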



If I recognize your face, is that not the same as a machine recognizing your face?

https://www.igi-global.com/article/a-meta-analysis-of-privacy/285580

A Meta-Analysis of Privacy: Ethical and Security Aspects of Facial Recognition Systems

Facial recognition systems use advanced computing to capture facial information and compare it against proprietary databases for validation. The emergence of data-capturing intermediaries and open-access image repositories has compounded the need for a holistic perspective on the privacy and security challenges associated with FRS. The study presents the results of a bibliometric analysis conducted on the topic of the privacy, ethical and security aspects of FRS. It presents the level of academic discussion on the topic using bibliometric performance analysis, along with the results of a bibliographic coupling analysis to identify research hotspots. The results also include a systematic literature review of 148 publications distributed across seven themes. Both the bibliometric and systematic analyses showed that privacy and security in FRS require a holistic perspective that cuts across privacy, ethical, security, legal, policy and technological aspects.
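As a concrete reference point for the pipeline the abstract describes, here is a minimal sketch of the matching step: comparing a probe face embedding against an enrolled database. The embedding dimensionality, threshold, and data are placeholders; real systems use learned embedding models.

```python
# A minimal sketch of the FRS match step: nearest enrolled identity above a
# similarity threshold. Embeddings here are random placeholders; real systems
# compute them with a trained face-embedding model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """database: identity -> enrolled embedding.
    Returns (best identity, score) above threshold, else (None, threshold)."""
    best_id, best_score = None, threshold
    for identity, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

rng = np.random.default_rng(1)
db = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = db["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-capture of alice
print(identify(probe, db))  # expected: ('alice', score close to 1.0)
```

Note that the privacy question sits entirely outside the match code: it is decided by who populates the database and with whose images, which is exactly the holistic framing the paper argues for.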



Does this sound like an AI hater? Or is it just a fussy legal argument?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219

AI as an Inventor: Has the Federal Court of Australia Erred in DABUS?

The emergence of advanced Artificial Intelligence (AI) technologies has caused an international debate as to whether inventions generated by AI technology without human intervention should be protected under patent law and who should own them. These questions have been discussed in a recent Federal Court of Australia decision in Thaler v Commissioner of Patents. In that judgment, Beach J recognised that some AI has the ability to autonomously invent and that such AI-generated inventions could be protected under patent law. His Honour held that, in such instances, an AI system could and should be listed as an inventor in a patent application. This article challenges the decision by arguing that, even in the case of the most sophisticated AI systems, these systems are not autonomous in the inventive process as humans provide significant contributions to the very system that leads to the inventive output. Secondly, I contend that the discussion on the need for patent protection for AI-generated inventions (if it were possible at all) is misplaced and not sufficiently comprehensive. Finally, the expanded application of the Patents Act 1990 (Cth), and especially s 15(1), to accommodate ‘AI inventors’, is an overreach that is not consistent with the current Australian patent law.



The world is changing.

https://philpapers.org/rec/MACTIO-49

The impact of artificial intelligence on jobs and work in New Zealand

Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 AI and the employment relationship; Chapter 4 Consumers, professions and society. The report includes recommendations to the New Zealand Government.



Another perspective?

https://philpapers.org/rec/MACACG

A Citizen's Guide to Artificial Intelligence

A concise but informative overview of AI ethics and policy. Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields ranging from computer science and law to philosophy and cognitive science, this book offers a concise overview of moral, political, legal and economic implications of AI. It covers the basics of AI's latest permutation, machine learning, and considers issues such as transparency, bias, liability, privacy, and regulation. Both business and government have integrated algorithmic decision support systems into their daily operations, and the book explores the implications for our lives as citizens. For example, do we take it on faith that a machine knows best in approving a patient's health insurance claim or a defendant's request for bail? What is the potential for manipulation by targeted political ads? How can the processes behind these technically sophisticated tools ever be transparent? The book discusses such issues as statistical definitions of fairness, legal and moral responsibility, the role of humans in machine learning decision systems, “nudging” algorithms and anonymized data, the effect of automation on the workplace, and AI as both regulatory tool and target.



Should your AI write like Stephen King or Dr. Seuss?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3973961

A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law

Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the NYTimes to Reddit boards. And so, it comes as no surprise that researchers have already documented instances of bias where GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law.

This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access to justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs.

As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about pros and cons of using AI to ensure the ethical use of this emerging technology.
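For readers who haven't touched it, here is roughly what driving GPT-3 in a particular voice looked like through OpenAI's 2021-era completion API. A minimal sketch only: the prompt, engine choice, and settings are illustrative assumptions, not anything drawn from the Article.

```python
# A minimal sketch, assuming OpenAI's 2021-era Python client for GPT-3.
# The prompt and parameters are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # placeholder; load from the environment in practice

prompt = (
    "Rewrite the following clause in plain English for a client letter:\n\n"
    '"The party of the first part shall indemnify and hold harmless the '
    'party of the second part against all claims arising hereunder."\n\n'
    "Plain English version:"
)

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model available at the time
    prompt=prompt,
    max_tokens=80,
    temperature=0.3,    # low temperature for sober legal prose
)
print(response.choices[0].text.strip())
```

Change the prompt's framing ("in the voice of a Supreme Court opinion," "as Dr. Seuss") and the same call produces radically different registers, which is precisely the property the Article expects lawyers to find attractive and the Model Rules to struggle with.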

