Saturday, June 01, 2024

Will this settlement inspire other states to go the same route?

https://www.reuters.com/legal/transactional/meta-settle-texas-lawsuit-over-facebook-facial-recognition-data-2024-05-31/

Meta to settle Texas lawsuit over Facebook facial recognition data

Meta’s Facebook (META.O) has agreed to settle a lawsuit by the state of Texas that accused the social media giant of illegally using facial-recognition technology to collect biometric data of millions of Texans without their consent.

Meta and Texas said in a court filing in Texas state court on Friday that they have agreed in principle to resolve the lawsuit, filed in 2022.

They asked a judge to pause the case for 30 days to allow the sides to finish the deal and present it to the court. The filing did not spell out the terms of the settlement.



Friday, May 31, 2024

This won’t go over well.

https://techcrunch.com/2024/05/30/misinformation-works-and-a-handful-of-social-supersharers-sent-80-of-it-in-2020/

Misinformation works, and a handful of social ‘supersharers’ sent 80% of it in 2020

A pair of studies published Thursday in the journal Science offers evidence not only that misinformation on social media changes minds, but that a small group of committed “supersharers,” predominantly older Republican women, was responsible for the vast majority of the “fake news” during the period studied.





Surveillance from a drone you can see (or hear?) would be okay?

https://pogowasright.org/the-alaska-supreme-court-takes-aerial-surveillances-threat-to-privacy-seriously-other-courts-should-too/

The Alaska Supreme Court Takes Aerial Surveillance’s Threat to Privacy Seriously, Other Courts Should Too

In arguing that Mr. McKelvey did not have a reasonable expectation of privacy, the government raised various factors which have been used to justify warrantless surveillance in other jurisdictions. These included the ubiquity of small aircraft flying overhead in Alaska; the commercial availability of the camera and lens; the availability of aerial footage of the land elsewhere; and the alleged unobtrusive nature of the surveillance.

In response, the Court divorced the ubiquity and availability of the technology from whether people would reasonably expect the government to use it to spy on them. The Court observed that the fact the government spent resources to take photos demonstrates that whatever available images were insufficient for law enforcement needs. Also, the inability or unlikelihood the spying was detected adds to, not detracts from, its pernicious nature because “if the surveillance technique cannot be detected, then one can never fully protect against being surveilled.”





Perspective.

https://www.schneier.com/blog/archives/2024/05/how-ai-will-change-democracy.html

How AI Will Change Democracy

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale.

It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

It’s these sorts of changes and how AI will affect democracy that I want to talk about.





AI questions…

https://www.bespacific.com/stanford-hai-tests-westlaw-but-the-genai-results-look-worse/

Stanford HAI Tests Westlaw But The GenAI Results Look Worse

Artificial Lawyer: “Ok this story is getting into unusual territory now. Artificial Lawyer just got an email from the spokespeople for the Stanford University HAI team, who told this site the researchers had updated their genAI study of hallucinations in case law tools to include Thomson Reuters’ Westlaw. And guess what…? Westlaw has come out even worse than the Practical Law tests (see below), according to what they have published in an updated paper.

Here is the new statement to AL from HAI: ‘Letting you know that the research and blog post have been updated with new findings. The study now includes an analysis of Westlaw’s AI-Assisted Research alongside Lexis+ AI and Ask Practical Law AI.’ They have updated the HAI group’s findings here to reflect this.

As you may remember, this whole thing started when a group of researchers tested whether LexisNexis’s and Thomson Reuters’ genAI tools were as good as hoped for case law research. There was plenty of confusion caused when the team tested Practical Law, rather than Westlaw, for the case law questions. They have since been given access to Westlaw and hence the new results… Here is the link to the original story in Artificial Lawyer, and there are two more articles with comments that follow it that give more context – please see the AL site…”





The answer?

https://www.technologyreview.com/2024/05/31/1093019/why-are-googles-ai-overviews-results-so-bad/

Why Google’s AI Overviews gets things wrong

Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.

Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.

One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)

So why does it return bad answers?

But RAG is far from foolproof. In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.
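The two-stage retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not Google's actual system: the corpus, the naive word-overlap scoring, and the quote-the-source "generation" step are all hypothetical stand-ins for a real search index and an LLM, but they show why the approach can cite its sources and why a retrieval miss produces a bad answer.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# (1) retrieve the documents most relevant to the query,
# (2) generate an answer grounded in, and citing, the retrieved text.
# The corpus and scoring here are illustrative placeholders.

CORPUS = {
    "doc1": "The Eiffel Tower is 330 metres tall and stands in Paris.",
    "doc2": "Mount Everest is the highest mountain above sea level.",
}

def retrieve(query, corpus, top_k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(query, corpus):
    """Produce a response that cites the document it drew from."""
    hits = retrieve(query, corpus)
    if not hits:
        return "No source found."
    doc_id, text = hits[0]
    # A real system would pass `text` to an LLM as context;
    # here we simply quote it, with the citation a plain LLM cannot give.
    return f"{text} [source: {doc_id}]"
```

If retrieval surfaces the wrong document, the "generation" step will confidently ground its answer in irrelevant text, which is exactly the failure mode the article describes.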





And thus ends the death watch…

https://www.bespacific.com/the-trump-manhattan-criminal-verdict/

The Trump Manhattan Criminal Verdict

Via Scott McFarlane – For the history books ===> Supreme Court of the State of New York. The People of the State of New York against Donald J. Trump, defendant



Thursday, May 30, 2024

I wonder if this is used in a certain courtroom in New York?

https://www.schneier.com/blog/archives/2024/05/supply-chain-attack-against-courtroom-software.html

Supply Chain Attack against Courtroom Software

No word on how this backdoor was installed:

A software maker serving more than 10,000 courtrooms throughout the world hosted an application update containing a hidden backdoor that maintained persistent communication with a malicious website, researchers reported Thursday, in the latest episode of a supply-chain attack.
The software, known as the JAVS Viewer 8, is a component of the JAVS Suite 8, an application package courtrooms use to record, play back, and manage audio and video from proceedings. Its maker, Louisville, Kentucky-based Justice AV Solutions, says its products are used in more than 10,000 courtrooms throughout the US and 11 other countries. The company has been in business for 35 years.

It’s software used by courts; we can imagine all sorts of actors who want to backdoor it.





Why didn’t I think of this? (Still many other areas where I could build similar tools.)

https://www.bespacific.com/evaluating-generative-ai-for-legal-research-a-benchmarking-project/

Evaluating Generative AI for Legal Research: A Benchmarking Project

Via LLRX – Evaluating Generative AI for Legal Research: A Benchmarking Project. It is difficult to test large language models (LLMs) without back-end access to run evaluations. So to test the abilities of these products, librarians can use prompt engineering to figure out how to get desired results (controlling statutes, key cases, drafts of a memo, etc.). Some models are more successful than others at achieving specific results. However, as these models update and change, evaluations of their efficacy can change as well. Law librarians and tech experts par excellence Rebecca Fordon, Sean Harrington and Christine Park plan to propose a typology of legal research tasks based on existing computer and information science scholarship and draft corresponding questions using the typology, with rubrics others can use to score the tools they use.
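The typology-plus-rubric approach described above could look something like this in practice. The task name, criteria, and weights below are purely illustrative assumptions, not the authors' actual typology: the point is only that a grader marks which criteria a tool's answer satisfied, and the rubric turns that into a comparable score.

```python
# Hypothetical rubric-based scoring for legal research tasks.
# Task types, criteria, and weights are illustrative only.

RUBRIC = {
    "find_controlling_statute": {
        "cites_correct_statute": 2,
        "no_hallucinated_citations": 2,
        "explains_applicability": 1,
    },
}

def score(task, satisfied):
    """Sum the weights of the criteria the tool's answer satisfied."""
    return sum(w for name, w in RUBRIC[task].items() if name in satisfied)

def max_score(task):
    """Best possible score for a task, for normalizing across tools."""
    return sum(RUBRIC[task].values())
```

Because the rubric is data rather than code, others could swap in their own task types and re-score tools as the underlying models change.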



Wednesday, May 29, 2024

Interesting. Are we equally ambivalent about identification?

https://pogowasright.org/eu-facial-recognition-at-airports-individuals-should-have-maximum-control-over-biometric-data/

EU: Facial recognition at airports: individuals should have maximum control over biometric data

From the European Data Protection Board (EDPB), May 24:

Brussels, 24 May – During its latest plenary, the EDPB adopted an Opinion on the use of facial recognition technologies by airport operators and airline companies to streamline the passenger flow at airports. This Article 64(2) Opinion, following a request from the French Data Protection Authority, addresses a matter of general application and produces effects in more than one Member State.

There is no uniform legal requirement in the EU for airport operators and airline companies to verify that the name on the passenger’s boarding pass matches the name on their identity document, and this may be subject to national laws. Therefore, where no verification of the passengers’ identity with an official identity document is required, no such verification with the use of biometrics should be performed, as this would result in an excessive processing of data.





We knew that, but here is a list of supporting “don’ts.” (AI can’t say, “I don’t know.”)

https://www.makeuseof.com/why-you-shouldnt-trust-chatgpt-to-summarize-your-text/

Why You Shouldn't Trust ChatGPT to Summarize Your Text

There are limits to what ChatGPT knows. And its programming forces it to deliver what you ask for, even if the result is wrong. This means ChatGPT makes mistakes, and moreover, there are some common mistakes it makes, especially when it’s summarizing information and you’re not paying attention.



Sunday, May 26, 2024

Invasion or merely interesting?

https://www.bigtechnology.com/p/how-shein-and-temu-snuck-up-on-amazon

How Shein and Temu Snuck Up on Amazon

The Chinese-owned platforms represent the biggest threat to its e-commerce empire in recent memory.





Will it also accept input from AI?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4830981

Introduction to the Handbook of the Ethics of Artificial Intelligence

Artificial intelligence is having a transformative effect across the spectrum of human endeavor. But along with the remarkable opportunities and promises heralded by these technologies, there are also significant challenges. Responses to these challenges have taken shape and been organized under the umbrella of what has been called the ethics of AI. But these efforts have not been without their own set of difficulties. First and foremost is the fact that the field tends to rely on Western/European moral and legal philosophies, meaning that the frameworks currently being utilized to address the big challenges of AI run the risk of reproducing many of the difficulties that the field seeks to redress. The Handbook of the Ethics of AI is deliberately designed to respond to this problem by diversifying the field of AI ethics, opening it to others and other ways of thinking and doing ethics. This introduction provides an overview of the book and its contents.





Similar to programs to identify potential shoplifters…

https://verfassungsblog.de/gaza-artificial-intelligence-and-kill-lists/

Gaza, Artificial Intelligence, and Kill Lists

One of the greatest challenges in warfare is the identification of military targets. The Israeli army has developed an artificial intelligence-based system called “Lavender” that automates this process by sifting enormous amounts of surveillance data and identifying possible Hamas or Islamic Jihad (PIJ) fighters based on patterns in that data. This approach promises faster and more accurate targeting; however, human rights organizations such as Human Rights Watch (HRW) and the International Committee of the Red Cross (ICRC) have warned of deficits in responsibility for violations of International Humanitarian Law (IHL) arguing that with these semi- or even fully automated systems, human officers experience a certain “routinization” which reduces “the necessity of decision making” and masks the life-and-death significance of the decision. Moreover, military commanders who bear the onus of responsibility for faulty targeting (IHL-breaches) may not have the capacity anymore to supervise the algorithmic “black box” advising them.

In the following, we will examine these concerns and show how responsibility for violations of IHL remains attributable to a state that uses automated or semi-automated systems in warfare. In doing so, we will demonstrate that even though the new technological possibilities present certain challenges, existing IHL is well equipped to deal with them.







“Where were you on the night of May 25?” now becomes a question for the police AI.

https://www.researchsquare.com/article/rs-4343562/v1

Advancing Law Enforcement Through AI: A proposal for District-Level Person and Vehicle tracking

This paper introduces a proposal for Advancing Law Enforcement Through AI: A Proposal for District-level Person and Vehicle Tracking. The proposal outlines the development of a sophisticated District-level Person and Vehicle Tracking System, powered by artificial intelligence(AI) technologies. The system aims to bolster law enforcement capabilities by enabling real-time tracking and notifications for individuals and vehicles of interest within designated districts. Leveraging advanced AI algorithms, including facial recognition and number plate detection, the proposed system offers a comprehensive solution to the challenges faced by law enforcement agencies in monitoring and apprehending suspects. The proof of concept demonstrates the technical feasibility of district-level tracking, showcasing synchronized sightings and generating detailed maps of locations. By addressing privacy concerns and implementing rigorous protocols, the proposed system represents a significant advancement in law enforcement practices, with the potential to revolutionize public safety management at the district level.





Toward personhood.

https://www.elgaronline.com/edcollchap/book/9781803924816/book-part-9781803924816-18.xml

Chapter 14: Digital owners in property law

The chapter focuses on how modern technologies challenge the concept of ownership in property law. The authors explore the implications of decentralised ownership models, such as those used by Decentralized Autonomous Organizations (DAOs), and the potential for AI systems to hold property rights. The article provides a detailed analysis of how property law systems might adapt to these emerging challenges. It considers the possibility of AI systems owning property independently, akin to corporations' current activities. The authors also discuss the ethical and practical implications of AI ownership, including the need for legal frameworks to address liability issues and the limitations of AI in comparison to human cognitive and physical abilities. The authors suggest that the legal concept of ownership, particularly in property law, must evolve to accommodate the unique characteristics and capabilities of AI systems and blockchain technologies. This evolution, they argue, is crucial for the law to remain relevant and effective in the 21st century.