Thursday, April 10, 2025

Does that make it simple?

https://pogowasright.org/what-is-sufficient-consent/

What is Sufficient Consent?

Odia Kagan of FoxRothschild writes:

The following is sufficient consent for the Video Privacy Protection Act and the California Invasion of Privacy Act, according to a recent decision in the U.S. District Court for the Northern District of California.
Cookie banner + checkbox with hyperlink to privacy notice at account creation + checkbox with hyperlink to privacy notice at each purchase + privacy notice that details data sharing with third parties via trackers.
The privacy notice in this case included the ability to:
  • refuse cookies or request their deletion.
  • obtain the list of partners who are permitted to store and/or access these cookies.
  • adjust your browser settings to understand when cookies are stored on your device or to disable the cookies.
  • object and withdraw consent to the sharing of data.





Tools & Techniques.

https://www.bespacific.com/openalex/

OpenAlex

OpenAlex is a free and open catalog of the world’s scholarly research system. It is a nonprofit project that indexes and links over 250 million scholarly works from 250 thousand sources. You can search, analyze, and export the data for free, or upgrade to access more features and support the project. OpenAlex offers an open replacement for industry-standard scientific knowledge bases like Elsevier’s Scopus and Clarivate’s Web of Science. Compared to these paywalled services, OpenAlex offers significant advantages in terms of inclusivity, affordability, and availability. OpenAlex is a map of the world’s research ecosystem, linking components (like papers, institutions, journals, topics, SDGs, authors, etc.) to one another. Research outputs are the main artery of the system.
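OpenAlex exposes its catalog through a public REST API. As a minimal sketch of building a query URL (the entity names and the comma-separated `filter` syntax follow the OpenAlex API documentation as best I recall; verify against the current docs before relying on them):

```python
from urllib.parse import urlencode

OPENALEX_BASE = "https://api.openalex.org"

def build_openalex_url(entity, search=None, filters=None, per_page=25):
    """Build a query URL for the OpenAlex REST API.

    `entity` is an OpenAlex entity type such as "works", "authors",
    "sources", or "institutions".
    """
    params = {"per-page": per_page}
    if search:
        params["search"] = search
    if filters:
        # OpenAlex encodes filters as comma-separated key:value pairs.
        params["filter"] = ",".join(f"{k}:{v}" for k, v in filters.items())
    return f"{OPENALEX_BASE}/{entity}?{urlencode(params)}"

# Search open-access-related works published in 2024.
url = build_openalex_url("works", search="open access",
                         filters={"publication_year": 2024})
```

The response is plain JSON, so the free tier really does support the search-analyze-export workflow described above with nothing more than an HTTP client.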



Wednesday, April 09, 2025

With great power comes great responsibility…

https://www.bespacific.com/artificial-intelligence-and-aggregate-litigation/

Artificial Intelligence and Aggregate Litigation

Wilf-Townsend, Daniel, Artificial Intelligence and Aggregate Litigation (March 01, 2025). 103 Wash. U. L. Rev. __ (forthcoming 2026), Available at SSRN:  https://ssrn.com/abstract=5163640  or  http://dx.doi.org/10.2139/ssrn.5163640

The era of AI litigation has begun, and it is already clear that the class action will have a distinctive role to play. AI-powered tools are often valuable because they can be deployed at scale. And the harms they cause often exist at scale as well, pointing to the class action as a key device for resolving the correspondingly numerous potential legal claims. This article presents the first general account of the complex interplay between aggregation and artificial intelligence. First, the article identifies a pair of effects that the use of AI tools is likely to have on the availability of class actions to pursue legal claims. While the use of increased automation by defendants will tend to militate in favor of class certification, the increased individualization enabled by AI tools will cut against it. These effects, in turn, will be strongly influenced by the substantive laws governing AI tools—especially by whether liability attaches “upstream” or “downstream” in a given course of conduct, and by the kinds of causal showings that must be made to establish liability. After identifying these influences, the article flips the usual script and describes how, rather than merely being a vehicle for enforcing substantive law, aggregation could actually enable new types of liability regimes. AI tools can create harms that are only demonstrable at the level of an affected group, which is likely to frustrate traditional individual claims. Aggregation creates opportunities to prove harm and assign remedies at the group level, providing a path to address this difficult problem. Policymakers hoping for fair and effective regulations should therefore attend to procedure, and aggregation in particular, as they write the substantive laws governing AI use.





What if the AI hates me?

https://www.politico.com/newsletters/digital-future-daily/2025/04/08/the-worries-about-ai-in-trumps-social-media-surveillance-00279255

The worries about AI in Trump’s social media surveillance

As the Trump administration goes after immigrants for allegedly posing national security threats, social media posts have taken a prominent role in the story — coming up in the Department of Homeland Security’s allegations against Palestinian activist Mahmoud Khalil, the Georgetown University researcher Badar Khan Suri and alleged gang member Jerce Reyes Barrios.

It’s not clear what tools the government is using to collect and analyze social media posts, and DHS didn’t respond to a direct request about how it is surveilling online platforms.

Earlier social media monitoring tools functioned more like a search engine, surfacing and ranking results based on relevancy, but AI tools take on a more deterministic role, Rachel Levinson-Waldman, the managing director of the Brennan Center’s Liberty and National Security Program, told DFD.

“AI is starting to be used, not just to streamline the process, which already brings its own significant concerns, but to augment or replace the judgment,” said Levinson-Waldman, who studies social media monitoring tools.

… “There are real concerns that AI is being used to automate target selection, and potentially initiating surveillance without adequate human review,” Kia Hamadanchy, a senior policy counsel for the American Civil Liberties Union, told the committee.

The use of AI in social media surveillance also creates greater potential for what experts call automation bias. The term describes a tendency to trust technology to deliver accurate information — an issue that has surfaced in healthcare, aviation and law enforcement.



(Related)

https://www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill

UK creating ‘murder prediction’ tool to identify people most likely to kill

The UK government is developing a “murder prediction” programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.

Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.

The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety but campaigners have called it “chilling and dystopian”.



Tuesday, April 08, 2025

Dealing with tariffs.

https://www.reuters.com/technology/micron-impose-tariff-related-surcharge-some-products-april-9-sources-say-2025-04-08/

Exclusive: Micron to impose tariff-related surcharge on some products from April 9, sources say



(Related)

https://timesofindia.indiatimes.com/technology/mobiles-tabs/how-apple-flew-5-flights-full-of-iphones-from-india-and-china-in-3-days-to-beat-trump-tariffs/articleshow/120044321.cms

How Apple 'flew' 5 flights full of iPhones from India and China in 3 days to beat Trump tariffs





But explain that to the average citizen…

https://newrepublic.com/post/193677/donald-trump-tariffs-deficit-understanding

Trump Exposes Own Kindergarten-Level Understanding of Economics

… “I spoke to a lot of leaders—European, Asian—from all over the world. They are dying to make a deal, but I said, ‘We’re not gonna have deficits with your country,’” Trump told reporters on board Air Force One Sunday. “We’re not gonna do that, because to me a deficit is a loss. We’re gonna have surpluses or at worst we’re gonna be breaking even.”

A trade deficit isn’t a “loss,” regardless of what Trump thinks. A trade deficit simply means that one country spends more on goods from another country than that country spends on goods from them.

Crucially, economists say that having a trade deficit is not an inherently bad thing at all, because the U.S. simply can’t and shouldn’t make everything. Trump’s insistence that the U.S. is being taken for a ride betrayed a fundamental misunderstanding of economics that is built on a dislike of other countries and a desire to be the dealmaker responsible for a new world order.
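The definition the article gives reduces to simple subtraction. A minimal sketch with hypothetical figures (the $70B/$100B numbers are illustrative, not from the article):

```python
def trade_balance(exports_to, imports_from):
    """Bilateral trade balance: positive = surplus, negative = deficit."""
    return exports_to - imports_from

# Hypothetical: the US sells $70B of goods to country X and buys
# $100B from X, giving a $30B bilateral deficit.
balance = trade_balance(exports_to=70e9, imports_from=100e9)

# Note the deficit is not money "lost": the $100B spent bought
# $100B of goods in return.
```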





Tools & Techniques. (An overview)

https://www.nature.com/articles/d41586-025-01069-0

AI for research: the ultimate guide to choosing the right tool

Curious about using artificial intelligence to boost your research? Here are the programs you shouldn’t miss.



Monday, April 07, 2025

Too late?

https://www.schneier.com/blog/archives/2025/04/dirnsa-fired.html

DIRNSA Fired

In “Secrets and Lies” (2000), I wrote:

It is poor civic hygiene to install technologies that could someday facilitate a police state.

It’s something a bunch of us were saying at the time, in reference to the NSA’s vast surveillance capabilities.

I have been thinking of that quote a lot as I read news stories of President Trump firing the Director of the National Security Agency, General Timothy Haugh.

A couple of weeks ago, I wrote:

We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

The NSA already spies on Americans in a variety of ways. But that’s always been a sideline to its main mission: spying on the rest of the world. Once Trump replaces Haugh with a loyalist, the NSA’s vast surveillance apparatus can be refocused domestically.

Giving that agency all those powers in the 1990s, in the 2000s after the terrorist attacks of 9/11, and in the 2010s was always a mistake. I fear that we are about to learn how big a mistake it was.

Here’s PGP creator Phil Zimmerman in 1996, spelling it out even more clearly:

The Clinton Administration seems to be attempting to deploy and entrench a communications infrastructure that would deny the citizenry the ability to protect its privacy. This is unsettling because in a democracy, it is possible for bad people to occasionally get elected—sometimes very bad people. Normally, a well-functioning democracy has ways to remove these people from power. But the wrong technology infrastructure could allow such a future government to watch every move anyone makes to oppose it. It could very well be the last government we ever elect.
When making public policy decisions about new technologies for the government, I think one should ask oneself which technologies would best strengthen the hand of a police state. Then, do not allow the government to deploy those technologies. This is simply a matter of good civic hygiene.



Sunday, April 06, 2025

If not now, when?

https://www.igi-global.com/chapter/is-ai-a-legal-person/373407

Is AI a Legal Person?: Redefining the Limits of Legal Personhood

The traditional legal principles of personhood, which have historically been reserved for humans and, in certain situations, companies, are being challenged by the rapid breakthroughs in artificial intelligence (AI). It is necessary to grant legal personhood to AI systems as their decision-making abilities increase. Legal personhood, according to proponents, would guarantee accountability and provide legal redress for harm brought about by AI's acts (Tretyakova, 2021; Hárs, 2022). Critics argue that AI is devoid of fundamental characteristics that are important to conventional notions of personality, such as consciousness and moral agency (Forrest, 2021; Schneider, 2019). This proposal examines frameworks, including Asimov's Three Laws of Robotics and Hohfeldian jural relations, to suggest approaches where AI can assume legal responsibilities without full personhood status. The study aims to develop regulatory strategies that balance innovation with ethical and societal safeguards.





E-book.

https://kuscholarworks.ku.edu/entities/publication/2c576502-ce94-4a0a-9c73-0e7f7f36da44

Artificial Intelligence for Lawyers: Navigating Novel Methods and Practices for the Future of Law

In an era marked by rapid technological advancements, the legal profession stands at the precipice of a transformative evolution. The integration of Artificial Intelligence (AI) into legal research and practice heralds a new dawn, promising unprecedented efficiency, accuracy, and strategic insight. This book, "Artificial Intelligence for Lawyers: Navigating Novel Methods and Practices for the Future of Law," delves into the profound impact of AI on the legal landscape, offering a comprehensive guide to understanding and leveraging these cutting-edge technologies.

https://kuscholarworks.ku.edu/server/api/core/bitstreams/6dc4b9d6-ccaa-43e4-9515-8eddcb8ebb7d/content





Making AI fit.

https://www.researchgate.net/profile/Georgios-Feretzakis/publication/390271448_GDPR_and_Large_Language_Models_Technical_and_Legal_Obstacles/links/67e6a6ce49e91c0feac1be64/GDPR-and-Large-Language-Models-Technical-and-Legal-Obstacles.pdf

GDPR and Large Language Models: Technical and Legal Obstacles

Large Language Models (LLMs) have revolutionized natural language processing but present significant technical and legal challenges when confronted with the General Data Protection Regulation (GDPR). This paper examines the complexities involved in reconciling the design and operation of LLMs with GDPR requirements. In particular, we analyze how key GDPR provisions—including the Right to Erasure, Right of Access, Right to Rectification, and restrictions on Automated Decision-Making—are challenged by the opaque and distributed nature of LLMs. We discuss issues such as the transformation of personal data into non-interpretable model parameters, difficulties in ensuring transparency and accountability, and the risks of bias and data over-collection. Moreover, the paper explores potential technical solutions such as machine unlearning, explainable AI (XAI), differential privacy, and federated learning, alongside strategies for embedding privacy-by-design principles and automated compliance tools into LLM development. The analysis is further enriched by considering the implications of emerging regulations like the EU’s Artificial Intelligence Act. In addition, we propose a four-layer governance framework that addresses data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight, thereby offering a practical roadmap for GDPR alignment in LLM systems. Through this comprehensive examination, we aim to bridge the gap between the technical capabilities of LLMs and the stringent data protection standards mandated by GDPR, ultimately contributing to more responsible and ethical AI practices.
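Of the technical mitigations the abstract lists, differential privacy is the easiest to illustrate. A minimal sketch of the standard Laplace mechanism (a textbook technique, not the paper's own implementation), which calibrates noise to sensitivity divided by the privacy budget epsilon:

```python
import random

def laplace_noise(scale, rng):
    # The difference of two independent exponentials with mean `scale`
    # is distributed Laplace(0, scale).
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def dp_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a count with Laplace noise of scale sensitivity/epsilon.

    For a counting query, one person can change the result by at most
    1, so sensitivity defaults to 1.0.
    """
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

Smaller epsilon means larger noise and stronger privacy. Machine unlearning and federated learning, which target the erasure and data-locality problems respectively, involve training-pipeline changes and are not sketched here.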





Law in the future?

https://journals.rcsi.science/1026-9452/article/view/285439

The Digital or Information Code: prospects for legislative regulation

The article addresses topical issues in the legal regulation of information relations amid the rapid development of digital technologies and a growing conflict of interests between states and global IT corporations. The author emphasizes the complexity of legal regulation, especially regarding data encryption and access to users’ confidential information. Special attention is paid to conflicts between states and IT corporations over control of digital technologies.

In particular, the need for improved information law is emphasized, including the enshrinement of digital rights such as the right to delete personal information, the right to a pseudonym, arbitration in online disputes, and the right to appeal decisions involving artificial intelligence. Important steps are also proposed to strengthen administrative and legal liability for digital offenses such as wrongful disclosure of confidential information, spamming and digital bullying.

The author focuses on the need to systematize information legislation and emphasizes that government measures should take into account the rights and freedoms of users, limiting government intervention where it is inappropriate. Legislation should be aimed at protecting civil rights, but at the same time leave room for the development of the IT sector and innovations.

The article presents a comprehensive view of the problem of legal regulation of the digital industry in the context of Administrative Law, proposing specific steps to create a balanced legislative framework that takes into account the interests of both the state and society in the context of ongoing digital transformation.