Like randomly changing employees, except you don’t see the change.
https://www.bespacific.com/shifting-sands-a-cautionary-tale-ai-in-courts/
Shifting Sands, A Cautionary Tale – AI in Courts
Feb 23, 2026. Judge Scott Schlegel, Fifth Circuit Court of Appeal. On February 13, 2026, OpenAI retired GPT-4o from ChatGPT. That is a normal product change for a consumer platform. For courts, it is a useful reminder about what we are really doing when we build tools on top of foundation models. Even when you design responsibly, narrow the scope, use approved sources, and test carefully before deployment, the system is still sitting on a layer you do not control.

Whether it is a self-help kiosk walking an unrepresented litigant through filing steps, a chambers assistant summarizing briefs and helping draft a bench memo, or a staff tool answering procedural questions for clerks, they all share the same dependency: they sit on top of a model layer the court does not control. That layer can change, and the surrounding behavior can change with it. The same input that produced a cautious answer last month can produce a materially different answer next month, even though you did not touch a line of code.

That is not a reason to avoid AI in courts. It is a reason to treat court AI as an operational program, not a one-time build. Courts live on stability, predictability, and accountability. Those values do not disappear because a vendor shipped an update. If an assistant gives bad guidance, the public is not going to parse whether the cause was upstream or local. The responsibility will attach to the institution that deployed it.

So if a court is going to rely on an AI assistant for public-facing information, internal staff work, or chambers support, the court needs ongoing control. It needs to know what model is in use and when it changes. It needs scheduled testing against real court questions, not just a launch-day review. Model changes need to be treated as meaningful changes, not routine maintenance. The court needs the ability to narrow features, pause the tool, or turn it off quickly when behavior shifts.
And it needs a human owner for outputs whenever the stakes are real. GPT-4o’s retirement is the cleanest proof of the point. You can spend months, or even years, building something correctly and still watch the foundation move without notice. Because that foundation will inevitably shift, the oversight mechanisms must match the stakes.
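The controls Judge Schlegel lists (know which model is in use, test against real court questions on a schedule, pause the tool when behavior shifts) can be sketched as a small monitoring routine. This is a minimal, hypothetical illustration, not a real court system or vendor API: the model identifier, the golden questions, and the `ask` callable are all assumed names.

```python
# Hypothetical sketch of "treat a model change as a meaningful change."
# EXPECTED_MODEL, GOLDEN_QUESTIONS, and the ask() callable are illustrative
# placeholders, not any real vendor's API.

EXPECTED_MODEL = "vendor-model-2026-01"  # the model version the court signed off on

# Real court questions paired with phrases a safe answer must contain.
GOLDEN_QUESTIONS = {
    "How do I file an appeal?": ["notice of appeal", "deadline"],
    "Can court staff give legal advice?": ["cannot give legal advice"],
}


def model_changed(reported_model: str) -> bool:
    """Flag any upstream model change for human review instead of ignoring it."""
    return reported_model != EXPECTED_MODEL


def regression_check(ask, questions=GOLDEN_QUESTIONS) -> list[str]:
    """Re-run the golden questions; return those whose answers lack required phrases.

    `ask` is whatever function calls the deployed assistant and returns its text.
    """
    failures = []
    for question, required in questions.items():
        answer = ask(question).lower()
        if not all(phrase in answer for phrase in required):
            failures.append(question)
    return failures


def tool_should_stay_live(reported_model: str, ask) -> bool:
    """Operational rule: pause the tool if the model changed or any check fails,
    until a human owner reviews the outputs."""
    return not model_changed(reported_model) and not regression_check(ask)
```

The point of the sketch is the shape, not the specifics: version pinning makes upstream changes visible, the golden-question loop is the "scheduled testing against real court questions," and the final boolean is the kill switch a human owner acts on.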
See also via Judge Schlegel – AI IN CHAMBERS. AI can help chambers, but only if it stays in a defined support lane and every output is treated as untrusted until verified. These guides set practical boundaries for using AI in chambers, with workflows that preserve accountability and keep judgment where it belongs.
Be careful what you wish for? Can you set the threshold too low? Should Meta wait until it has ironclad proof?
https://www.theguardian.com/technology/2026/feb/25/meta-ai-junk-child-abuse-tips-doj
Meta’s AI sending ‘junk’ tips to DoJ, US child abuse investigators say
Meta’s use of artificial intelligence software to moderate its social media platforms is generating large volumes of useless reports about cases of child sexual abuse, which are draining resources and hindering investigations, said officers from the US Internet Crimes Against Children (ICAC) taskforce.
“We get a lot of tips from Meta that are just kind of junk,” Benjamin Zwiebel, a special agent with the ICAC taskforce in New Mexico, said last week during his testimony in the state’s trial against Meta. The state’s attorney general alleges the company’s platforms are putting profits over child safety. Meta disputes these allegations, citing changes it has introduced on its platforms, such as teen accounts with default protections. The ICAC taskforce is a nationwide network of law enforcement agencies coordinated with the US Department of Justice to investigate and prosecute online child exploitation and abuse cases.
Call it a memory refresh…
https://www.bespacific.com/google-has-a-secret-reference-desk-heres-how-to-use-it/
Google Has a Secret Reference Desk. Here’s How to Use It.
40 Google features to find exactly what you need, the alternative search engines that do things Google won’t, and the reference desk framework underneath all of it. Hana Lee Goldin, MLIS – “Most of us search Google the same way we always have: type a few words, scroll, click something that looks close enough, and hope. For a while, that worked. Google handed us a list of links and let us take it from there.

What’s happening now is something different. A 2024 study by SparkToro found that nearly 60% of Google searches end without anyone clicking through to a website, and the trend has accelerated since. By February 2026, Ahrefs found that queries triggering AI Overviews now see a 58% reduction in clicks. Google has been systematically inserting itself between you and the original source, answering questions with AI-generated summaries before you ever reach the page those answers came from.

The results you do see are filtered through an algorithm that weighs your search history, your location, and the billions of dollars advertisers have spent to appear for particular queries. Two people searching identical phrases on the same day can get meaningfully different results without either of them knowing it. And because Google controls roughly 90% of the world’s search traffic, most people have no frame of reference for what a less mediated search experience would even look like…”