Friday, March 20, 2026

Due care (do we care?)

https://pogowasright.org/are-warrants-enough/

Are Warrants Enough?

Privacy law scholar Professor Daniel Solove writes:

Are Warrants Enough?

Why Fourth Amendment Warrants Can’t Meet the Moment

This year, in Chatrie v. United States, the U.S. Supreme Court will decide whether geofence warrants are valid under the Fourth Amendment. The geofence warrant at issue in the case allowed the government to obtain from Google account data on hundreds of millions of users. It’s the equivalent of a digital dragnet, which I’ve long argued contravenes the core purpose of the Fourth Amendment. The Framers of the Constitution hated dragnet searches . . . actually, to be more precise, HATED them.
If the Supreme Court doesn’t find geofence warrants to be invalid, then it’s hard to imagine much left of the already-desiccated Fourth Amendment. But Chatrie is just the tip of the iceberg. Regular warrants under the Fourth Amendment—those that are properly circumscribed based on particularized suspicion—are also not strong enough for our times.

Read more at DanielSolove.substack.com






Governments frequently want to “do something” but this ain’t the way.

https://www.theregister.com/2026/03/20/jlr_bailout_cmc/

Jaguar Land Rover's cyber bailout sets worrying precedent, watchdog warns

Lack of clear criteria risks encouraging firms to lean on state support instead of worrying about insurance



Thursday, March 19, 2026

AI will always choose vanilla?

https://www.bespacific.com/homogenizing-effect-of-large-language-models-on-human-expression-and-thought/

The homogenizing effect of large language models on human expression and thought

Sourati Z, S. Ziabari A, Dehghani M. The homogenizing effect of large language models on human expression and thought. Trends in Cognitive Sciences, 2026; Online March 11, 2026. No paywall.

Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet, as large language models (LLMs) become deeply embedded in people’s lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts. Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.





Learn how to use tools before trying them out.

https://nypost.com/2026/03/18/tech/dancing-robot-bounced-from-restaurant-after-scaring-patrons/

Dancing robot seen dragged away by panicked restaurant staff after going haywire in bizarre video: ‘Actually scary’

This machine rages against you.

The rise of the machines could be closer than you think. A humanoid bot had to be “bounced” from a California restaurant after smashing tableware during a dance routine gone awry, as seen in viral X footage.

The smashing machine had reportedly been tasked with performing for patrons at the Haidilao hotpot restaurant in San Jose.





Just for fun…

https://www.adamsmith.org/blog/even-more-useful-maxims

Even More Useful Maxims



Wednesday, March 18, 2026

Apparently drone warfare is here to stay.

https://www.bloomberg.com/news/articles/2026-03-17/ai-drone-software-stock-jumps-700-in-best-ipo-since-newsmax?embedded-checkout=true

AI Drone Software Stock Jumps 520% in Best IPO Since Newsmax

Swarmer Inc. shares skyrocketed as much as 700% on Tuesday, making the artificial intelligence drone software company’s debut the best trading by a US stock since Newsmax Inc.’s blockbuster entry nearly a year ago.

Swarmer is a software company, not a drone manufacturer. The company’s artificial intelligence technology enables drones to deploy and coordinate in swarms, like a bird flock, at scale. Its platform has been deployed in Ukraine in more than 100,000 real-world missions in active combat environments since April 2024, according to its regulatory filing.

Swarmer’s opening-day rally comes as investors weigh their bets on defense spending and the industry sees the emergence of software-driven, autonomous, unmanned systems, reflecting a broader move in modern warfare toward low-cost weapons.





Tools & Techniques.

https://techcrunch.com/2026/03/02/nearby-glasses-new-app-alerts-you-wearing-smart-glasses-surveillance-meta-snap-bluetooth/

A new app alerts you if someone nearby is wearing smart glasses

One of the chief problems with “luxury surveillance” devices, like smart glasses with baked-in video recording cameras, is that they often look indistinguishable from regular eyewear, meaning you might be recorded without knowing it.

But now there is an app that can detect and alert you when someone nearby is wearing smart glasses, or potentially other always-recording tech.

The Android app, aptly named Nearby Glasses, constantly scans for signals emitted by nearby Bluetooth-enabled tech, such as wearable devices made by Meta (and Oakley) and Snap.
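The detection idea is simple to sketch: BLE advertisements carry manufacturer-specific data keyed by a 16-bit Bluetooth SIG company identifier, so a scanner can flag nearby devices whose company ID is on a watchlist of wearable makers. A minimal sketch in Python, with the caveat that the company IDs and names below are illustrative placeholders (not the real identifiers assigned to Meta, Oakley, or Snap), and that a real app would feed this from a platform BLE scanning API:

```python
# Illustrative sketch of watchlist-based BLE advertisement filtering.
# The company IDs here are made-up placeholders, NOT real assignments.

WATCHLIST = {
    0x1234: "HypotheticalGlassesCo",  # placeholder company identifier
    0x5678: "ExampleWearables",       # placeholder company identifier
}

def flag_devices(advertisements):
    """Given {device_address: {company_id: payload_bytes}} gathered from
    BLE scans, return the addresses whose manufacturer-specific data
    matches a watchlisted company identifier."""
    flagged = {}
    for address, mfg_data in advertisements.items():
        hits = [WATCHLIST[cid] for cid in mfg_data if cid in WATCHLIST]
        if hits:
            flagged[address] = hits
    return flagged
```

In practice detection is best-effort: devices can randomize their Bluetooth addresses, vary what they advertise, or simply not transmit, so an app like this can alert you but never guarantee a clean room.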





Tools & Techniques.

https://www.zdnet.com/article/optmeowt-free-privacy-tool-stop-sites-selling-data/

This free privacy tool makes it super easy to see which sites are selling your data

There's a service called Global Privacy Control that offers extensions and links to browsers and apps that support the cause. The service began in 2020 and was inspired by the California Consumer Privacy Act, which gives California residents the right to opt out of the sale of their data by any business. Currently, GPC is available for:


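Under the hood, Global Privacy Control is a simple signal: per the GPC specification, a participating browser or extension attaches the request header `Sec-GPC: 1` (and exposes the same signal to page scripts as `navigator.globalPrivacyControl`), and a compliant site treats it as an opt-out of sale or sharing for that request. A minimal server-side sketch of honoring the signal:

```python
# Sketch of honoring the Global Privacy Control opt-out signal.
# Per the GPC spec, only the exact header value "1" signals an opt-out.

def gpc_opt_out(headers):
    """Return True if the request headers carry a GPC opt-out signal.
    HTTP header names are case-insensitive, so normalize before lookup."""
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"
```

In a real application this check would sit in request middleware, suppressing third-party sale/sharing whenever it returns True; under the CCPA, businesses are expected to treat the signal as a valid opt-out request from California residents.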

Tuesday, March 17, 2026

Just because…

https://www.adamsmith.org/blog/useful-maxims

Useful maxims

https://www.adamsmith.org/blog/more-useful-maxims

More Useful Maxims





Modern war? (Or do hackers just see an opportunity?)

https://www.theregister.com/2026/03/16/cybercrime_iran_war_245_percent_rise/

Cybercrime has skyrocketed 245% since the start of the Iran war

However, not all of the malicious traffic originated from Iran. The embattled theocracy accounted for only 14 percent of the source IPs, compared to Russia (35 percent) and China (28 percent). This doesn't necessarily mean that the threat groups carrying out the cyber activities are based in these two countries. Both China and Russia have historically turned a blind eye toward digital-crime networks and services operating out of their countries – just as long as the attacks don't target Chinese and Russian government agencies or organizations.



Monday, March 16, 2026

Action, faster.

https://www.theverge.com/ai-artificial-intelligence/895030/palantirs-maven-smart-system-is-an-ai-powered-kanban-board-for-killing-people

Palantir’s Maven Smart System is an AI-powered Kanban board for killing people.

The company recently hosted a series of speakers at AIPCon, including Cameron Stanley, the Department of War’s Chief Digital and Artificial Intelligence Officer, who gave a chilling demo of Palantir’s Maven Smart System, where anyone or anything can be targeted for a military strike with a “Left click, right click, left click.”



Sunday, March 15, 2026

Ethics for all actually.

https://scholarworks.uark.edu/arlnlaw/27/

Ethics Of Artificial Intelligence For Lawyers: Resistance Is Futile: Candor, Supervision, And Fees

In Star Trek: The Next Generation, the Borg deliver their iconic warning to every species they encounter: “Resistance is futile.” The line resonates because it conveys the inevitability that once the Borg arrive, escape is no longer an option.

For lawyers, the duties of candor, supervision, and fairness in fees are just as inescapable. ABA Formal Opinion 512 (“ABA Opinion”) makes clear that, regardless of how powerful artificial intelligence becomes, it cannot relieve attorneys of their obligations. Attorneys must verify what they file, oversee how their colleagues use the technology, and ensure that clients are charged fairly. This installment examines those three pillars, showing how courts and the ABA are making plain that ethical rules still govern.



(Related)

https://scholarworks.uark.edu/arlnlaw/26/

Ethics Of Artificial Intelligence For Lawyers: You Will Be Assimilated: Best Practices For Lawyers Using Artificial Intelligence

This installment explores the best practices for responsible adoption: protecting client confidentiality, addressing AI openly in engagement letters, learning the skill of prompt engineering, and preparing for the workforce changes AI will accelerate. Assimilation may be inevitable, but the terms of assimilation (ethical, careful, client-centered) are still within the control of the profession.





A scary thought that I ain’t thunk yet.

https://scholarship.law.ufl.edu/jtlp/vol30/iss1/2/

Python Hunting: How Laws that Protect the Everglades from the Invasive Burmese Python, Including Eradication Programs, Can Inform the Regulation of Objects Controlled by Artificial Intelligence

This Article explores the surprisingly apt analogy between the Burmese python problem in the Florida Everglades and abandoned objects that are controlled by artificial intelligence (AI). With few natural predators, the invasive Burmese python, which was likely introduced to the Everglades through abandonment by pet owners, has threatened native species with extinction. Objects controlled by AI, with which we will likely increasingly share our environment, such as autonomous taxis and food delivery robots, as well as a variety of objects used by the military, may be abandoned by their owners and continue to operate. Over time, these objects may be given increasing levels of agency and learn from their environments, making them potentially more dangerous. These objects are likely to create material losses if allowed to run amok. The Burmese python similarly has agency and has run amok.

Beyond the superficial analogy between these two paradigms, this Article provides an interesting thought journey aimed at finding a precedent to cling to when we predict and analyze a problem that hasn’t fully emerged but is likely on the horizon. Borrowing frameworks from other areas of law when writing atop a blank slate is a time-honored tradition in American law. What is old can be new again, and we have seen—and wrestled with—the essence of this problem before. Unfortunately, we seem to be fighting a losing battle against the pythons in the Everglades. Hopefully, creative solutions, technology and the dedication of resources will cause the tide to turn. Sounding the alarm now about autonomous AI objects can help us predict problems in advance and create mechanisms for the mitigation of losses and ultimate redress when harm occurs, unlike the situation in the Everglades.





For want of a nail…

https://finance.yahoo.com/news/iran-war-could-wreak-havoc-on-farmers-create-a-potential-bottleneck-for-the-entire-ai-story-171240723.html

Iran war could wreak havoc on farmers, create a potential 'bottleneck for the entire AI story'

Earlier this month, Qatar shut down one of the world's largest energy hubs due to drone attacks. That halted production of liquefied natural gas and helium, a byproduct of natural gas extraction. The disruption accounts for about one-third of the global helium supply, according to Bloomberg estimates.

Helium has essential uses, including in magnetic resonance imaging (MRI) and welding, as well as electronics and semiconductor manufacturing, which consumes a large portion of the world's supply. It's crucial for rapidly cooling chips during fabrication to prevent overheating and defects.





It’s like…

https://academic.oup.com/jiplp/advance-article/doi/10.1093/jiplp/jpag018/8509416?guestAccessKey=

Metaphors we judge (AI) by: a rhetorical analysis of artificial copyright disputes

  • This article is a ‘metaphorical’ guide to today’s most pressing artificial intelligence (AI) copyright questions, focusing in particular on the EU and the USA. Is unauthorized training on copyright-protected works permitted? Can AI models copy? And is AI-generated output itself protected? As this article demonstrates, debates on these questions can all be traced back to a handful of crucial metaphors.

  • After all, generative AI is hardly comprehensible without the extensive use of metaphors and analogies. Most notably, AI is systematically conceptualized in human terms such as ‘neural networks’ that ‘learn’, ‘know’ or ‘memorize’. This article aims to demonstrate how such metaphors (unconsciously) influence legal evaluations and even judicial decisions in copyright law.

  • The resulting analysis is particularly relevant to lawyers, judges and artists interested in copyright and its intersection with AI. Yet, it may also appeal to those interested in AI, legal reasoning and language more generally, as metaphors and their (rhetorical) effects are by no means unique to copyright and may be equally relevant in fields such as privacy law and (legal) philosophy.





The whole book.

https://www.researchgate.net/profile/Sayed-Mahbub-Hasan-Amiri-2/publication/401660183_The_AI_Classroom_How_Artificial_Intelligence_Will_Reshape_Teaching_and_Learning/links/69ac6250bff9750ad9c95e3e/The-AI-Classroom-How-Artificial-Intelligence-Will-Reshape-Teaching-and-Learning.pdf

The AI Classroom: How Artificial Intelligence Will Reshape Teaching and Learning



Saturday, March 14, 2026

Think of it as ‘double secret probation.’

https://www.eff.org/deeplinks/2026/03/eff-launches-new-fight-free-law

EFF Launches New Fight to Free the Law

EFF has filed a new lawsuit against the Consumer Product Safety Commission (CPSC) to ensure that the public has full access to the laws that govern us.

Our client Public.Resource.Org (Public Resource), a tiny non-profit founded by open records advocate Carl Malamud, has a mission that’s both simple and powerful: to make government information more accessible. Public Resource acquires and makes available online a wide variety of public documents such as tax filings, government-produced videos, and federal rules about safety and product designs. Those rules are initially created through private standards organizations and later incorporated into federal law. Such documents are often difficult to access otherwise, meaning the public cannot read, share, or comment on them. 

Working with Harvard Law School’s Cyberlaw Clinic, Public Resource has been submitting Freedom of Information Act requests to the CPSC requesting copies of the legally binding safety codes for children’s products—an area of law of intense interest to child safety advocates and consumer advocates, not to mention the families who use those products. But CPSC says it can’t release the codes, because the private association that coordinated their initial development insists that it retains copyright in them even after they have been adopted into law. That’s like saying a lobbyist who drafted a new tax law gets to control who reads it or shares it, even after it becomes a legal mandate.