Saturday, May 10, 2025

How valuable must this data be to justify risking billion-dollar fines?

https://pogowasright.org/google-agrees-to-pay-texas-1-4-billion-data-privacy-settlement/

Google agrees to pay Texas $1.4 billion data privacy settlement

CNBC reports:

Google agreed to pay nearly $1.4 billion to the state of Texas to settle allegations of violating the data privacy rights of state residents, Texas Attorney General Ken Paxton said Friday.
Paxton sued Google in 2022 for allegedly unlawfully tracking and collecting the private data of users.
The attorney general said the settlement, which covers allegations in two separate lawsuits against the search engine and app giant, dwarfed all past settlements by other states with Google for similar data privacy violations.
Google’s settlement comes nearly 10 months after Paxton obtained a $1.4 billion settlement for Texas from Meta, the parent company of Facebook and Instagram, to resolve claims of unauthorized use of biometric data by users of those popular social media platforms.

Read more at CNBC.





Do you need permission to leave the country? This looks like a step in that direction.

https://pogowasright.org/us-customs-and-border-protection-plans-to-photograph-everyone-exiting-the-us-by-car/

US Customs and Border Protection Plans to Photograph Everyone Exiting the US by Car

Caroline Haskins reports:

United States Customs and Border Protection plans to log every person leaving the country by vehicle by taking photos at border crossings of every passenger and matching their faces to their passports, visas, or travel documents, WIRED has learned.
The escalated documentation of travelers could be used to track how many people are self-deporting, or leaving the US voluntarily, which the Trump administration is fervently encouraging among people in the country illegally.
CBP exclusively tells WIRED, in response to an inquiry to the agency, that it plans to mirror the current program it’s developing—photographing every person entering the US and matching their faces with their travel documents—in the outbound lanes going to Canada and Mexico. The agency currently does not have a system that monitors people leaving the country by vehicle.

Read more at WIRED.
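If you are wondering what “matching their faces to their passports” amounts to under the hood, the usual building block is one-to-one face verification: compute an embedding of the live photo, compute an embedding of the document photo, and accept the match only if their similarity clears a threshold. Below is a minimal illustrative sketch of that idea, not CBP’s actual system; the embedding size and threshold are assumptions, and the toy vectors merely stand in for real face embeddings.

# Illustrative sketch only -- not CBP's system. Assumes face embeddings
# (e.g., 512-dimensional vectors from some face-recognition model) already exist.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_traveler(live_embedding: np.ndarray,
                    document_embedding: np.ndarray,
                    threshold: float = 0.6) -> bool:
    """One-to-one verification: does the live photo match the document photo?
    The 0.6 threshold is a placeholder; real systems tune it against
    false-match and false-non-match rates."""
    return cosine_similarity(live_embedding, document_embedding) >= threshold

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
doc_photo = rng.normal(size=512)
live_photo = doc_photo + rng.normal(scale=0.1, size=512)  # slightly perturbed "same face"
print(verify_traveler(live_photo, doc_photo))  # True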





The questionable fight continues. MAIA (Make America Insecure Again?)

https://pogowasright.org/florida-bill-requiring-encryption-backdoors-for-social-media-accounts-has-failed/

Florida bill requiring encryption backdoors for social media accounts has failed

Zack Whittaker reports:

A Florida bill that would have required social media companies to provide an encryption backdoor allowing police to access user accounts and private messages has failed to pass into law.
The Social Media Use by Minors bill was “indefinitely postponed” and “withdrawn from consideration” in the Florida House of Representatives earlier this week. Lawmakers in the Florida Senate had already voted to advance the legislation, but a bill must pass both legislative chambers before it can become law.
The bill would have required social media firms to “provide a mechanism to decrypt end-to-end encryption when law enforcement obtains a subpoena,” and such subpoenas are typically issued by law enforcement agencies themselves, without judicial oversight.

Read more at TechCrunch.



Friday, May 09, 2025

The goal is to replace lawyers with AI, right?

https://www.bespacific.com/artificial-intelligence-and-law-an-overview-of-recent-technological-changes-in-large-language-models-and-law/

Artificial Intelligence and Law – An Overview of Recent Technological Changes in Large Language Models and Law

Surden, Harry, Artificial Intelligence and Law – An Overview of Recent Technological Changes in Large Language Models and Law (February 12, 2025). 96 Colorado Law Review pp. 376 – 411 (2025), U of Colorado Law Legal Studies Research Paper No. 25-8, Available at SSRN:  https://ssrn.com/abstract=5135305 or http://dx.doi.org/10.2139/ssrn.5135305

This article, based upon a keynote address by Professor Harry Surden, provides an in-depth overview of the recent advancements in artificial intelligence (AI) since 2022, particularly the rise of highly capable large language models (LLMs) such as OpenAI’s GPT-4, and their implications for the field of law. The talk begins with a historical perspective, tracing the evolution of AI from early symbolic systems to the modern deep learning era, highlighting the breakthroughs that have enabled AI to process and generate human language with unprecedented sophistication. The address explores the technical foundations of contemporary AI models, including the transition from rule-based systems to data-driven machine learning, the role of deep learning, and the emergence of transformers, which have significantly enhanced AI’s ability to understand and generate text. It then examines the capabilities and limitations of GPT-4, emphasizing its strengths in legal research, document drafting, and analysis while also identifying key concerns, such as hallucinations, biases, and the sensitivity of AI outputs to user prompts. The article also considers the potential risks of using AI in legal decision-making, particularly in judicial settings, where AI-generated legal reasoning may appear authoritative yet embed subtle interpretive choices. It argues that while AI can assist in legal tasks, it should not be treated as a neutral arbiter of law. It concludes by addressing near-term trends in AI, including improvements in model accuracy, interpretability, and integration into legal workflows, and emphasizes the need for AI literacy among legal professionals.”



(Related)

https://www.bespacific.com/measuring-the-rapidly-increasing-use-of-artificial-intelligence-in-legal-scholarship/

Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship

Conklin, Michael and Houston, Christopher, Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship (March 23, 2025). Available at SSRN:  https://ssrn.com/abstract=5190385  or http://dx.doi.org/10.2139/ssrn.5190385

The rapid advancement of artificial intelligence (AI) has had a profound impact on nearly every industry, including legal academia. As AI-driven tools like ChatGPT become more prevalent, they raise critical questions about authorship, academic integrity, and the evolving nature of legal writing. While AI offers promising benefits—such as improved efficiency in research, drafting, and analysis—it also presents ethical dilemmas related to originality, bias, and the potential homogenization of legal discourse. One of the challenges in assessing AI’s influence on legal scholarship is the difficulty of identifying AI-generated content. Traditional plagiarism-detection methods are often inadequate, as AI does not merely copy existing text but generates novel outputs based on probabilistic language modeling. This first-of-its-kind study uses the existence of an AI idiosyncrasy to measure the use of AI in legal scholarship. This provides the first-ever empirical evidence of a sharp increase in the use of AI in legal scholarship, thus raising pressing questions about the proper role of AI in shaping legal scholarship and the practice of law. By applying a novel framework to highlight the rapidly evolving challenges at the intersection of AI and legal academia, this Essay will hopefully spark future debate on the careful balance in this area.”
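The abstract above does not say which idiosyncrasy the authors track, so the marker phrase in the sketch below is a made-up placeholder; the code only illustrates the general shape of such a measurement: count how often a telltale phrase appears in a corpus of articles, grouped by publication year.

# Illustrative sketch of the general approach, not the study's actual method.
# MARKER is a hypothetical stand-in for whatever AI idiosyncrasy the authors
# measured; the corpus is a toy list of (year, text) pairs.
from collections import Counter

MARKER = "as an ai language model"  # placeholder phrase, not from the study

corpus = [
    (2021, "Doctrinal analysis of standing in data-breach suits ..."),
    (2023, "As an AI language model, it is worth noting that ..."),
    (2024, "As an AI language model, the statute can be read ..."),
]

hits_per_year = Counter()
articles_per_year = Counter()
for year, text in corpus:
    articles_per_year[year] += 1
    if MARKER in text.lower():
        hits_per_year[year] += 1

for year in sorted(articles_per_year):
    share = hits_per_year[year] / articles_per_year[year]
    print(f"{year}: {hits_per_year[year]}/{articles_per_year[year]} articles ({share:.0%})")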



Wednesday, May 07, 2025

Perspective.

https://thedailyeconomy.org/article/three-dimensional-trade-chess-explained/

Three-Dimensional Trade Chess, Explained

Supporters of tariffs claim that mainstream economists don’t understand the game. This “game” has been described by Peter Navarro as three-dimensional chess. I want to take that claim at face value, break down its dimensions, and then evaluate whether Trump’s actions are likely to lead to his desired outcomes. And the answer is “no,” because the administration is playing three quite different, inconsistent games all at once. That isn’t a strategy at all; rather, it’s a scheme for certain failure.

Trump and his advisers claim that their policies have three justifications. For simplicity, I will call these (1) security, (2) reciprocity, and (3) revenue. I’ll present these (as the lawyers put it) arguendo, meaning that for the sake of argument, we will just consider the case for Trump’s actions, and mostly put aside the counterarguments.



Tuesday, May 06, 2025

We knew this was coming.

https://www.bespacific.com/from-help-to-harm-how-the-government-is-quietly-repurposing-everyones-data-for-surveillance/

From help to harm: How the government is quietly repurposing everyone’s data for surveillance

The Conversation, Nicole M. Bennett: “A whistleblower at the National Labor Relations Board reported an unusual spike in potentially sensitive data flowing out of the agency’s network in early March 2025 when staffers from the Department of Government Efficiency, which goes by DOGE, were granted access to the agency’s databases. On April 7, the Department of Homeland Security gained access to Internal Revenue Service tax data. These seemingly unrelated events are examples of recent developments in the transformation of the structure and purpose of federal government data repositories. I am a researcher who studies the intersection of migration, data governance and digital technologies. I’m tracking how data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement. Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades. These data-sharing arrangements often take place outside public scrutiny, driven by national security justifications, fraud prevention initiatives and digital modernization efforts. The result is that the structure of government is quietly transforming into an integrated surveillance apparatus, capable of monitoring, predicting and flagging behavior at an unprecedented scale. Executive orders signed by President Donald Trump aim to remove remaining institutional and legal barriers to completing this massive surveillance system…”





What age is the target? 13 or 16 or 18?

https://pogowasright.org/virginia-governor-signs-into-law-bill-restricting-minors-use-of-social-media/

Virginia Governor Signs into Law Bill Restricting Minors’ Use of Social Media

Hunton Andrews Kurth writes:

On May 2, 2025, Virginia Governor Glenn Youngkin signed into law a bill that amends the Virginia Consumer Data Protection Act (“VCDPA”) to impose significant restrictions on minors’ use of social media. The bill comes on the heels of recent children’s privacy amendments to the VCDPA that took effect on January 1, 2025.
The bill amends the VCDPA to require social media platform operators to (1) use commercially reasonable methods (such as a neutral age screen) to determine whether a user is a minor under the age of 16 and (2) limit a minor’s use of the social media platform to one hour per day, unless a parent consents to increase the daily limit.
The bill prohibits social media platform operators from using the information collected to determine a user’s age for any other purpose.

Read more at Privacy & Information Security Law Blog.
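For the curious, the one-hour provision boils down to a very simple check. Here is a rough sketch using invented field names and the defaults described in the excerpt; it is not code from any actual platform or from the bill itself.

# Illustrative sketch of a daily usage cap with a parental override,
# loosely modeled on the provision described above; field names are invented.
from dataclasses import dataclass
from typing import Optional

DEFAULT_DAILY_LIMIT_MINUTES = 60  # the bill's one-hour default

@dataclass
class MinorAccount:
    minutes_used_today: int
    parent_approved_limit_minutes: Optional[int] = None  # set if a parent consented

    def daily_limit(self) -> int:
        return self.parent_approved_limit_minutes or DEFAULT_DAILY_LIMIT_MINUTES

    def may_continue(self) -> bool:
        return self.minutes_used_today < self.daily_limit()

print(MinorAccount(minutes_used_today=45).may_continue())                  # True
print(MinorAccount(minutes_used_today=75).may_continue())                  # False
print(MinorAccount(75, parent_approved_limit_minutes=120).may_continue())  # True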



Monday, May 05, 2025

Any chance this will stop Trump? (Nope, I don’t think so either.)

https://www.bespacific.com/the-ruling-against-trumps-law-firm-order-shows-how-to-respond-in-this-moment/

The ruling against Trump’s law firm order shows how to respond in this moment

Law Dork – Chris Geidner – “…Judge Howell’s decision striking down the order targeting Perkins Coie can serve as a framework for addressing Trump. On Friday evening [May 2, 2025], U.S. District Judge Beryl Howell took a stab at summarizing what it means to exist in this moment — and how courts need to respond to it — with lessons for the rest of us as well. “[S]ettling personal vendettas by targeting a disliked business or individual for punitive government action is not a legitimate use of the powers of the U.S. government or an American President,” Howell wrote in her decision striking down the first of President Donald Trump’s several executive orders targeting a law firm with broad reprisals due to his personal grievances with the firm. Although Howell’s statement might have been an obvious one when she took the bench nearly 15 years ago, Trump proved why such a statement was necessary less than 48 hours after she issued it. Asked on Meet the Press whether he has to uphold the Constitution, Trump replied, “I don’t know,” later, at least, adding that his “brilliant lawyers … are going to obviously follow what the Supreme Court said.” In a moment when the president is explicitly shoving his oath of office to the wayside without a second thought, the other branches, the states, and the people will have to step up… In her 102-page decision finding that Trump’s attack on the Perkins Coie law firm [Note – I added this link to Perkins Coie site with all documents relevant to this case] was unconstitutional on several grounds, rendering the executive order “null and void,” Howell, an Obama appointee, laid out the facts — many of them undisputed — about how Trump and the Trump administration are acting without regard to basic constitutional protections. She then expanded a temporary restraining order that had blocked significant parts of the order to a permanent injunction blocking the entirety of the executive order. Like the subsequent law firm executive orders, the Perkins Coie one included a “purpose” section that recounted Trump’s grievances, followed by sections addressing security clearances, government contracting (including the firms’ clients’ contracts), diversity efforts, and personnel (which includes building access restrictions and hiring restrictions). Quoting from key U.S. Supreme Court decisions from recent years — and one seminal case — addressing fundamental limits on government power, Howell concluded her opinion by stating:

Government officials, including the President, may not “subject[] individuals to ‘retaliatory actions’ after the fact for having engaged in protected speech.”  Hous. Cmty. Coll. Sys., 595 U.S. at 474 (quoting Nieves, 587 U.S. at 398). They may neither “use the power of the State to punish or suppress disfavored expression,” Vullo, 602 U.S. at 188, nor engage in the use of “purely personal and arbitrary power,” Yick Wo, 118 U.S. at 370. In this case, these and other foundational protections were violated by EO 14230. On that basis, this Court has found that EO 14230 violates the Constitution and is thus null and void.



Sunday, May 04, 2025

If it exists, it’s taxable.

https://pogowasright.org/the-irs-says-your-digital-life-is-not-your-property/

The IRS Says Your Digital Life Is Not Your Property

Brent Skorup and Laura Bondank write:

When the IRS secretly demands your financial records and private information from a third party, without a warrant, what rights do you still have?
That’s the question at the heart of Harper v. O’Donnell, which is before the Supreme Court. New Hampshire resident Jim Harper is fighting back against the IRS after discovering he was swept up in a massive digital dragnet. The case could redefine how the Fourth Amendment applies in the age of cloud storage—and it may determine whether your emails, location history, search queries, and financial records that tech companies store on your behalf are treated as your property.
In 2016, the IRS ordered the cryptocurrency exchange Coinbase to hand over transaction records of over 14,000 customers. Harper was among them and only learned of the government’s records grab after the IRS sent him a warning letter, mistakenly suggesting he’d underreported his cryptocurrency income. He soon discovered the IRS had his transaction logs, wallet addresses, and public keys—allowing the agency to monitor any future transactions he made.
Harper hadn’t done anything wrong. He’d simply used a legal platform to buy and sell cryptocurrency. But his digital footprint became visible to the government overnight.

Read more at Reason.
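Why do wallet addresses and public keys matter so much? Because public blockchains expose every transaction, anyone holding a list of addresses can simply filter new transactions for them, indefinitely. A toy sketch of that idea follows, with fabricated addresses and transactions; real monitoring would read transaction data from a blockchain node or explorer.

# Illustrative sketch: once an address is known, new public transactions that
# touch it can be flagged. The addresses and transactions below are made up.
watched_addresses = {"1ExampleWatchedAddr"}

new_transactions = [
    {"txid": "a1", "inputs": ["1SomeoneElse"], "outputs": ["1AnotherAddr"]},
    {"txid": "b2", "inputs": ["1ExampleWatchedAddr"], "outputs": ["1ExchangeAddr"]},
]

def touches_watched(tx: dict) -> bool:
    return any(addr in watched_addresses for addr in tx["inputs"] + tx["outputs"])

flagged = [tx["txid"] for tx in new_transactions if touches_watched(tx)]
print(flagged)  # ['b2']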





Sorry, the AI says we shouldn’t waste time treating you.

https://www.researchgate.net/profile/John-Mathew-26/publication/391318390_Predictive_AI_Models_for_Emergency_Room_Triage/links/68121727ded43315573f521a/Predictive-AI-Models-for-Emergency-Room-Triage.pdf

Predictive AI Models for Emergency Room Triage

Emergency room (ER) triage is a critical process that prioritizes patients based on the severity of their conditions, aiming to ensure timely care in high-pressure environments. However, traditional triage methods are often subjective and may lead to delays in treatment, overcrowding, and suboptimal patient outcomes. This paper explores the role of predictive Artificial Intelligence (AI) models in enhancing ER triage by providing data-driven, real-time insights to optimize decision-making, improve patient prioritization, and streamline resource allocation. We examine various AI techniques, including machine learning (ML), deep learning (DL), and natural language processing (NLP), highlighting their application in analyzing structured and unstructured data such as electronic health records (EHRs), patient vital signs, medical imaging, and clinical notes. The paper also discusses the importance of data preprocessing, including handling missing values, data normalization, and feature selection, to ensure accurate model predictions. Through case studies and clinical implementations, we demonstrate how AI models have been successfully integrated into real-world ER settings to predict patient acuity, early deterioration, and patient outcomes. Ethical, legal, and practical considerations such as data privacy, algorithmic bias, and model transparency are also addressed. The paper concludes with a discussion on the future directions of AI in ER triage, including the integration of multimodal data, real-time monitoring, and personalized care. Predictive AI has the potential to significantly enhance ER efficiency and improve patient care, making it a valuable tool for modern healthcare systems.
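For a sense of what the pipeline described in this abstract looks like in miniature (impute missing vitals, normalize features, fit a classifier to predict acuity), here is a toy sketch on synthetic data. The features, labels, and model choice are illustrative assumptions, not a clinically validated triage system.

# Toy sketch of the pipeline described above: impute missing values, normalize,
# and fit a classifier to predict high vs. low acuity. All data is synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Columns: heart rate, systolic BP, SpO2, age (NaN = missing measurement).
X = np.array([
    [110.0,  90.0, 92.0, 67.0],
    [ 72.0, 120.0, 98.0, 34.0],
    [130.0, np.nan, 88.0, 71.0],
    [ 80.0, 118.0, 97.0, 25.0],
    [125.0,  85.0, np.nan, 59.0],
    [ 68.0, 125.0, 99.0, 41.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = high acuity, 0 = low acuity (synthetic labels)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # handle missing vitals
    ("scale", StandardScaler()),                   # normalize features
    ("clf", LogisticRegression()),
])
model.fit(X, y)

new_patient = [[118.0, 95.0, 91.0, 63.0]]
print(model.predict(new_patient), model.predict_proba(new_patient))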





AI is no big deal?

https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=1508&context=ncjolt

Liability for AI Agents

Artificial intelligence (“AI”) is becoming integral to modern life, fueling innovation while presenting complex legal challenges. Unlike traditional software, AI operates with a degree of autonomy, producing outcomes that its developers or deployers cannot fully anticipate. Advances in underlying technology have further enhanced this autonomy, giving rise to AI agents: systems capable of interacting with their environment independently, often with minimal or no human oversight. As AI decision-making—like that of humans—is inherently imperfect, its increasing deployment inevitably results in instances of harm, prompting the critical question of whether developers and deployers should be held liable as a matter of tort law.

This question is frequently answered in the negative. Many scholars, adopting a framework of technological exceptionalism, assume AI to be uniquely disruptive. Citing the lack of transparency and unpredictability of AI models, they contend that AI challenges conventional notions of causality, rendering existing liability regimes inadequate.

This Article offers the first comprehensive normative analysis of the liability challenges posed by AI agents through a law-and-economics lens. It begins by outlining an optimal AI liability framework designed to maximize economic and societal benefits. Contrary to prevailing assumptions about AI’s disruptiveness, this analysis reveals that AI largely aligns with traditional products. While AI presents some distinct challenges—particularly in its complexity, opacity, and potential for benefit externalization—these factors call for targeted refinements to existing legal frameworks rather than an entirely new paradigm.

This holistic approach underscores the resilience of traditional legal principles in tort law. While AI undoubtedly introduces novel complexities, history shows that tort law has effectively navigated similar challenges before.

For example, AI’s causality issues closely resemble those in medical malpractice cases, where the impact of treatment on patient recovery can be uncertain. The legal system has already addressed these issues, providing a clear precedent for extending similar solutions to AI. Likewise, while the traditional distinction between design and manufacturing defects does not map neatly onto AI, there is a compelling case for classifying inadequate AI training data as a manufacturing defect—aligning AI liability with established legal doctrine.

Taken together, this Article argues that AI agents do not necessitate a fundamental overhaul of tort law but rather call for targeted, nuanced refinements. This analysis offers essential guidance on how to effectively apply existing legal standards to this evolving technology.





Who really done it?

https://ijlr.iledu.in/wp-content/uploads/2025/04/V5I723.pdf

ARTIFICIAL INTELLIGENCE, LEGAL PERSONHOOD, AND DETERMINATION OF CRIMINAL LIABILITY

The broad adoption of artificial intelligence (AI) across vital domains ranging from autonomous vehicles and financial markets to healthcare diagnostics and legal analytics has exposed significant gaps in our legal systems when AI-driven errors or malfunctions cause harm. Autonomous systems often involve multiple stakeholders (hardware suppliers, software developers, sensor manufacturers, and corporate overseers), making it difficult to pinpoint who is responsible for a system’s failure. The 2018 Uber autonomous-vehicle crash in Tempe, Arizona, where a pedestrian was misclassified repeatedly by the AI’s perception module and the emergency braking function was disabled, underscores this challenge: with safety overrides turned off and state oversight minimal, liability became entangled among engineers, operators, and corporate policy, not the machine alone.

Traditional criminal law doctrines rest on actus reus (the guilty act) and mens rea (the guilty mind), both premised on human agency and intent. AI entities, however, can execute complex decision-making without consciousness or moral awareness, creating a “responsibility gap” under current frameworks. To bridge this gap, scholars like Gabriel Hallevy have proposed three liability models—perpetration-via-another (holding programmers or users accountable), the natural-probable-consequence model (liability for foreseeable harms), and direct liability (attributing responsibility to AI itself if it meets legal thresholds for actus reus and an analogue of mens rea). Each model offers insight but struggles with AI’s semi-autonomous nature and opacity.

This paper argues against prematurely conferring legal personhood on AI, an approach that risks absolving human actors and diluting accountability. Instead, it advocates for a human-centric policy framework that combines clear oversight duties, mandated explainability measures, and calibrated negligence or strict-liability standards for high-risk AI applications. Such reforms are especially urgent in jurisdictions like India, where AI governance remains nascent. By anchoring liability in human oversight and regulatory clarity rather than on machines themselves, we can ensure that accountability evolves in step with AI’s growing capabilities, safeguarding both innovation and public safety.