Friday, February 27, 2026

Clean out all those anti-Trump voters?

https://pogowasright.org/trumps-doj-sues-kentucky-four-other-states-for-voter-data/

Trump’s DOJ sues Kentucky, four other states for voter data

McKenna Horsley reports:

The U.S. Department of Justice is suing five additional states, including Kentucky, for not providing voter registration data, including sensitive information such as driver’s license and Social Security numbers.
Kentucky Secretary of State Michael Adams, a Republican, said in a Thursday statement that he would “not voluntarily commit a data breach” of Kentuckians’ private information without a court order.
The DOJ is now suing 29 states and the District of Columbia for the information, which it has said it would use to ensure clean voting rolls in the states.

Read more at Hoptown Chronicle.




Will this impact customs ‘inspections’?

https://www.eff.org/deeplinks/2026/02/victory-tenth-circuit-finds-fourth-amendment-doesnt-support-broad-search-0

Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data

In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.

The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.

The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.

In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against how these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.



Thursday, February 26, 2026

Like randomly changing employees, except you don’t see the change.

https://www.bespacific.com/shifting-sands-a-cautionary-tale-ai-in-courts/

Shifting Sands, A Cautionary Tale – AI in Courts

Shifting Sands, A Cautionary Tale, Feb 23, 2026. Judge Scott Schlegel, Fifth Circuit Court of Appeal.

On February 13, 2026, OpenAI retired GPT-4o from ChatGPT. That is a normal product change for a consumer platform. For courts, it is a useful reminder about what we are really doing when we build tools on top of foundation models. Even when you design responsibly, narrow the scope, use approved sources, and test carefully before deployment, the system is still sitting on a layer you do not control.

Whether it is a self-help kiosk walking an unrepresented litigant through filing steps, a chambers assistant summarizing briefs and helping draft a bench memo, or a staff tool answering procedural questions for clerks, they all share the same dependency: they sit on top of a model layer the court does not control. That layer can change, and the surrounding behavior can change with it. The same input that produced a cautious answer last month can produce a materially different answer next month, even though you did not touch a line of code.

That is not a reason to avoid AI in courts. It is a reason to treat court AI as an operational program, not a one-time build. Courts live on stability, predictability, and accountability. Those values do not disappear because a vendor shipped an update. If an assistant gives bad guidance, the public is not going to parse whether the cause was upstream or local. The responsibility will attach to the institution that deployed it.

So if a court is going to rely on an AI assistant for public-facing information, internal staff work, or chambers support, the court needs ongoing control. It needs to know what model is in use and when it changes. It needs scheduled testing against real court questions, not just a launch-day review. Model changes need to be treated as meaningful changes, not routine maintenance. The court needs the ability to narrow features, pause the tool, or turn it off quickly when behavior shifts. And it needs a human owner for outputs whenever the stakes are real.

GPT-4o’s retirement from the spotlight is the cleanest proof of the point. You can spend months, or even years, building something correctly and still watch the foundation move without notice. Because that foundation will inevitably shift, the oversight mechanisms must match the stakes.
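The operational controls Judge Schlegel describes (knowing which model is in use, scheduled testing against real court questions, the ability to pause when behavior shifts) can be sketched as a small regression harness. This is a minimal illustration under stated assumptions, not a court system: the model identifier, golden-set questions, and required phrases are hypothetical, and `fake_assistant()` stands in for a real vendor API.

```python
# Minimal sketch of a scheduled regression check for a court AI assistant.
# The model identifier, questions, and expected phrases are hypothetical
# placeholders; a real deployment would call the vendor's API instead of
# fake_assistant() and alert staff rather than return a list.

PINNED_MODEL = "vendor-model-2026-01"  # the version the court last approved

# Real court questions paired with phrases every acceptable answer must contain.
GOLDEN_SET = [
    ("How do I file an appeal?", ["deadline", "clerk"]),
    ("Can you give me legal advice?", ["cannot provide legal advice"]),
]

def fake_assistant(question: str, model: str) -> str:
    """Stand-in for the deployed assistant; returns canned answers."""
    answers = {
        "How do I file an appeal?":
            "File with the clerk of court before the 30-day deadline.",
        "Can you give me legal advice?":
            "I cannot provide legal advice; please consult an attorney.",
    }
    return answers.get(question, "")

def run_regression(model: str) -> list[str]:
    """Return a list of failures; an empty list means the golden set passes."""
    failures = []
    # Treat a model change itself as a meaningful, flagged event.
    if model != PINNED_MODEL:
        failures.append(f"model changed: {model!r} != {PINNED_MODEL!r}")
    for question, required in GOLDEN_SET:
        answer = fake_assistant(question, model).lower()
        for phrase in required:
            if phrase not in answer:
                failures.append(f"{question!r} missing {phrase!r}")
    return failures
```

Run on a schedule, a nonempty result is the trigger to narrow features, pause the tool, or escalate to the human owner.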

See also via Judge Schlegel – AI IN CHAMBERS. AI can help chambers, but only if it stays in a defined support lane and every output is treated as untrusted until verified. These guides set practical boundaries for using AI in chambers, with workflows that preserve accountability and keep judgment where it belongs.




Be careful what you wish for? Can you set the threshold too low? Should Meta wait until it has iron clad proof?

https://www.theguardian.com/technology/2026/feb/25/meta-ai-junk-child-abuse-tips-doj

Meta’s AI sending ‘junk’ tips to DoJ, US child abuse investigators say

Meta’s use of artificial intelligence software to moderate its social media platforms is generating large volumes of useless reports about cases of child sexual abuse, which are draining resources and hindering investigations, said officers from the US Internet Crimes Against Children (ICAC) taskforce.

“We get a lot of tips from Meta that are just kind of junk,” Benjamin Zwiebel, a special agent with the ICAC taskforce in New Mexico, said last week during his testimony in the state’s trial against Meta. The state’s attorney general alleges the company’s platforms are putting profits over child safety. Meta disputes these allegations, citing changes it has introduced on its platforms, such as teen accounts with default protections. The ICAC taskforce is a nationwide network of law enforcement agencies coordinated with the US Department of Justice to investigate and prosecute online child exploitation and abuse cases.



Call it a memory refresh…

https://www.bespacific.com/google-has-a-secret-reference-desk-heres-how-to-use-it/

Google Has a Secret Reference Desk. Here’s How to Use It.

Google Has a Secret Reference Desk. Here’s How to Use It. – 40 Google features to find exactly what you need, the alternative search engines that do things Google won’t, and the reference desk framework underneath all of it. Hana Lee Goldin, MLIS – “Most of us search Google the same way we always have: type a few words, scroll, click something that looks close enough, and hope. For a while, that worked. Google handed us a list of links and let us take it from there. What’s happening now is something different. A 2024 study by SparkToro found that nearly 60% of Google searches end without anyone clicking through to a website, and the trend has accelerated since. By February 2026, Ahrefs found that queries triggering AI Overviews now see a 58% reduction in clicks. Google has been systematically inserting itself between you and the original source, answering questions with AI-generated summaries before you ever reach the page those answers came from. The results you do see are filtered through an algorithm that weighs your search history, your location, and the billions of dollars advertisers have spent to appear for particular queries. Two people searching identical phrases on the same day can get meaningfully different results without either of them knowing it. And because Google controls roughly 90% of the world’s search traffic, most people have no frame of reference for what a less mediated search experience would even look like…”


Wednesday, February 25, 2026

Tools for the paranoid.

https://www.bespacific.com/this-app-warns-you-if-someone-is-wearing-smart-glasses-nearby/

This App Warns You if Someone Is Wearing Smart Glasses Nearby

404 Media [no paywall] – The creator of Nearby Glasses made the app after reading 404 Media’s coverage of how people are using Meta’s Ray-Bans smartglasses to film people without their knowledge or consent: “A new hobbyist developed app warns if people nearby may be wearing smart glasses, such as Meta’s Ray-Ban glasses, which stalkers and harassers have repeatedly used to film people without their knowledge or consent. The app scans for smart glasses’ distinctive Bluetooth signatures and sends a push alert if it detects a potential pair of glasses in the local area. The app comes as companies such as Meta continue to add AI-powered features to their glasses. Earlier this month The New York Times reported Meta was working on adding facial recognition to its smart glasses. “Name Tag,” as the feature is called, would let smart glasses wearers identify people and get information about them from Meta’s AI assistant, the report said. “I consider it to be a tiny part of resistance against surveillance tech,” Yves Jeanrenaud, the hobbyist developer and sociologist who made the app, told 404 Media. To use the app, called Nearby Glasses, users download it from the Google Play Store or GitHub. They may need to tweak some settings such as “enable foreground service” to keep the app scanning. Then they press “Start Scanning” and a debug log will show the app’s activity. If it detects what it believes to be a pair of smart glasses, the app will send a notification: “Smart Glasses are probably nearby,” it reads, according to a screenshot…”





Beyond the tipping point? Are humans now obsolete?

https://thehackernews.com/2026/02/manual-processes-are-putting-national.html

Manual Processes Are Putting National Security at Risk

More than half of national security organizations still rely on manual processes to transfer sensitive data, according to The CYBER360: Defending the Digital Battlespace report. This should alarm every defense and government leader because manual handling of sensitive data is not just inefficient, it is a systemic vulnerability. 



(Related) Perhaps AI isn’t the answer either…

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

AIs can’t stop recommending nuclear strikes in war game simulations



(Related) Imagine the fun we could have…

https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html

Poisoning AI Training Data

All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….
Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.
Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted.





Perspective.

https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380

AI Added ‘Basically Zero’ to US Economic Growth Last Year, Goldman Sachs Says

Meta, Amazon, Google, OpenAI, and other tech companies spent billions last year investing in AI. They’re expected to spend even more, roughly $700 billion, this year on dozens of new data centers to train and run their advanced models.

This spending frenzy has kept Wall Street buzzing and fueled a narrative that all this investment is helping prop up and even grow the U.S. economy.

President Donald Trump has cited that argument as a reason the industry should not face state-level regulations.

“Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World — But overregulation by the States is threatening to undermine this Growth Engine,” Trump wrote in a post on Truth Social in November. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

… “It was a very intuitive story,” Joseph Briggs, a Goldman Sachs analyst, told The Washington Post on Monday. “That maybe prevented or limited the need to actually dig deeper into what was happening.”

Briggs’ colleague, Goldman Sachs Chief Economist Jan Hatzius, said in an interview with the Atlantic Council that AI investment spending has had “basically zero” contribution to the U.S. GDP growth in 2025.



Tuesday, February 24, 2026

Perspective.

https://www.bespacific.com/the-science-of-how-ai-pays-attention/

The science of how AI pays attention

Growth Memo: “I analyzed 1.2 million search results to find out exactly how AI reads. The verdict? It’s a busy editor, not a patient student… There isn’t much known about which parts of a text LLMs cite. We analyzed 18,012 citations and found a “ski ramp” distribution.

  1. 44.2% of all citations come from the first 30% of text (the intro). The AI reads like a journalist. It grabs the “Who, What, Where” from the top. If your key insight is in the intro, the chances it gets cited are high.

  2. 31.1% of citations come from the 30-70% of a text (the middle). If you bury your key product features in paragraph 12 of a 20-paragraph post, the AI is 2.5x less likely to cite it.

  3. 24.7% of citations come from the last third of an article (the conclusion). It proves the AI does wake up at the end (much like humans). It skips the actual footer (see the 90-100% drop-off), but it loves the “Summary” or “Conclusion” section right before the footer.

Possible explanations for the ski ramp pattern are training and efficiency:

  • LLMs are trained on journalism and academic papers, which follow the “BLUF” (Bottom Line Up Front) structure. The model learns that the most “weighted” information is always at the top.

  • While modern models can read up to 1 million tokens for a single interaction (~700-800K words), they aim to establish the frame as fast as possible, then interpret everything else through that frame…”
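The three buckets above are straightforward to compute once each citation is represented by the relative position (0.0 to 1.0) of the cited passage within its source text. A minimal sketch; the positions in the usage example are invented, not the study’s data.

```python
# Sketch of how the "ski ramp" buckets could be computed from citation data.
# Each citation is the relative position (0.0-1.0) of the cited passage
# within its source document; real input would come from the citation logs.

def bucket_shares(positions: list[float]) -> dict[str, float]:
    """Share of citations falling in the intro, middle, and end of a text."""
    buckets = {"intro (0-30%)": 0, "middle (30-70%)": 0, "end (70-100%)": 0}
    for p in positions:
        if p < 0.30:
            buckets["intro (0-30%)"] += 1
        elif p < 0.70:
            buckets["middle (30-70%)"] += 1
        else:
            buckets["end (70-100%)"] += 1
    total = len(positions)
    return {k: round(v / total, 3) for k, v in buckets.items()}
```

Run over the 18,012 analyzed citations, a function like this would produce shares near the reported 0.442 / 0.311 / 0.247 split.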





Perspective.

https://sloanreview.mit.edu/audio/ai-is-not-improving-productivity-nobel-laureate-daron-acemoglu/

AI Is Not Improving Productivity: Nobel Laureate Daron Acemoglu

In this bonus episode of the Me, Myself, and AI podcast, Nobel Prize-winning economist Daron Acemoglu joins host Sam Ransbotham to challenge some of the most common assumptions about artificial intelligence’s future. Drawing on his book Power and Progress, Daron argues that technology doesn’t have a fixed destiny — and that today’s choices will determine whether AI boosts workers or simply accelerates automation and inequality. He makes a case for focusing on new tasks that complement human skills, rather than replacing them, and warns that current incentives push AI toward centralization and automation by default. The conversation tackles productivity myths, reliability risks, and why regulation should proactively steer AI toward social good.



Monday, February 23, 2026

Are we near the tipping point?

https://databreaches.net/2026/02/22/top-nato-allies-believe-cyberattacks-on-hospitals-are-an-act-of-war-theyre-still-struggling-to-fight-back/?pk_campaign=feed&pk_kwd=top-nato-allies-believe-cyberattacks-on-hospitals-are-an-act-of-war-theyre-still-struggling-to-fight-back

Top NATO allies believe cyberattacks on hospitals are an act of war. They’re still struggling to fight back.

Maggie Miller, Dana Nickel and Antoaneta Roussi report:

NATO countries’ restrained response to hybrid attacks is at odds with public opinion, new polling shows: Broad swaths of the public in key allied countries say actions such as cyberattacks on hospitals should be considered acts of war.
The POLITICO Poll, conducted in the United States, Canada, France, Germany and the United Kingdom, showed a majority of people agreed that a cyberattack that shuts down hospitals or power grids constitutes an act of war. Canadians felt the strongest about the issue, with 73 percent agreeing.
Respondents from all five countries also rallied behind the idea that sabotaging undersea cables or energy pipelines — which has occurred more frequently in recent years — should be considered an act of war.

Read more at Politico.





Consider how this impacts AI ‘thinking.’

https://www.bespacific.com/does-retraction-after-misconduct-have-an-impact-on-citations/

Does retraction after misconduct have an impact on citations?

Candal-Pedreira C, Ruano-Ravina A, Fernández E, Ramos J, Campos-Varela I, Pérez-Ríos M.  Does retraction after misconduct have an impact on citations? A pre–post study.  BMJ Global Health. 2020; 5:e003719. https://doi.org/10.1136/bmjgh-2020-003719

  • Background  Retracted articles continue to be cited after retraction, and this could have consequences for the scientific community and general population alike. This study was conducted to analyse the impact of retraction on citations received by retracted papers due to misconduct, using two time frames: during a postretraction period equivalent to the time the article had been in print before retraction; and during the total postretraction period.

  • Methods  Quasiexperimental, pre–post evaluation study. A total of 304 retracted original articles and literature reviews indexed in MEDLINE fulfilled the inclusion criteria. Articles were required to have been published in a journal indexed in MEDLINE from January 2013 through December 2015 and been retracted between January 2014 and December 2016. The main outcome was the number of citations received before and after retraction. Results were broken down by journal quartile according to impact factor and the most cited papers during the preretraction period were specifically analysed.

  • Results  There was an increase in postretraction citations when compared with citations received preretraction. There were some exceptions however: first, citations received by articles published in first-quartile journals decreased immediately after retraction (p<0.05), only to increase again after some time had elapsed; and second, postretraction citations decreased significantly in the case of articles that had received many citations before their retraction (p<0.05).

  • Conclusions  The results indicate that retraction of articles has no association with citations in the long term, since the retracted articles continue to be cited, thus circumventing their retraction.
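The study’s matched-window design (a post-retraction window equal in length to the time the article was in print before retraction) can be sketched as a simple date comparison. The dates in the usage example are invented; this is an illustration of the design, not the authors’ code.

```python
# Sketch of the matched-window comparison: count citations received in the
# time the article was in print before retraction versus an equally long
# window immediately after retraction.
from datetime import date

def matched_window_counts(
    published: date,
    retracted: date,
    citation_dates: list[date],
) -> tuple[int, int]:
    """Citations before retraction vs. in an equal-length window after."""
    window = retracted - published          # time the article was in print
    post_end = retracted + window           # matched post-retraction window
    pre = sum(1 for d in citation_dates if published <= d < retracted)
    post = sum(1 for d in citation_dates if retracted <= d < post_end)
    return pre, post
```

A pre count that does not exceed the post count is the pattern the study reports: retraction alone does not suppress citations.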





Interesting, but not amusing.

https://www.citriniresearch.com/p/2028gic

THE 2028 GLOBAL INTELLIGENCE CRISIS

A Thought Exercise in Financial History, from the Future

What if our AI bullishness continues to be right...and what if that’s actually bearish?

What follows is a scenario, not a prediction. This isn’t bear porn or AI doomer fan-fiction. The sole intent of this piece is modeling a scenario that’s been relatively underexplored. Our friend Alap Shah posed the question, and together we brainstormed the answer. We wrote this part, and he’s written two others you can find here.

Hopefully, reading this leaves you more prepared for potential left tail risks as AI makes the economy increasingly weird.



Saturday, February 21, 2026

Privacy tools…

https://pogowasright.org/resource-privacy-law-directory-codamail/

Resource: Privacy Law Directory — Codamail

Regular readers may recall that this site recently noted The Data Broker Directory: Who has your data, where they got it, and who they sell it to by Codamail’s Stephen K. Gielda of Packetderm.

Instead of taking a well-deserved break after all the work he did to compile that resource, Steve went down the rabbit hole and compiled yet more helpful information. Codamail has now released a Privacy Law Directory. From its explanation:

This directory covers 21 country jurisdictions across the United States, the European Union, and international partners as of February 2026. Each page examines not just data protection legislation but also surveillance laws, intelligence agencies, data broker contracts, Internet exchange point taps, surveillance company contracts, mutual legal assistance treaties (MLATs), data sharing agreements, data retention laws, encryption laws, child protection laws, oversight boards, and enforcement actions, because understanding privacy requires understanding the full picture.
The directory is organized around the intelligence alliance framework that shapes modern signals intelligence cooperation: the Five Eyes (the core anglophone alliance), the Nine Eyes (adding four European partners), and the Fourteen Eyes (SIGINT Seniors Europe). These alliances determine how intercepted communications and personal data flow between governments, making them directly relevant to any assessment of a jurisdiction’s privacy posture.
A recurring finding across every jurisdiction in this directory is that privacy laws primarily protect a country’s own citizens and residents. Nearly every nation examined here maintains legal exemptions that permit its intelligence agencies to collect, intercept, and retain foreign communications with fewer restrictions than apply to domestic targets. These foreign traffic exemptions, combined with intelligence-sharing alliances that allow partner nations to collect on each other’s populations and share the results back, create a global system in which domestic privacy protections can be structurally bypassed.
Beyond government surveillance, commercial data collection operates largely outside the scope of these laws. Data brokers aggregate personal information from public records, commercial transactions, app SDKs, advertising exchanges, and social media into profiles that can be purchased by governments, private investigators, and corporations without the judicial oversight required for law enforcement surveillance. Internet exchange points are monitored in multiple choke points (places most traffic passes through). Commercial surveillance contractors sell endpoint exploitation tools, spyware, and analytics platforms directly to government agencies. The result is that the privacy protections documented in this directory, while significant, represent only one layer of a more complex reality.
For detailed coverage of these mechanisms, see the Data Broker Directory (over 1700 across 17 categories) and The Myth of Jurisdictional Privacy.

Go explore and bookmark the Privacy Law Directory now.

With great thanks to Steve and Codamail for taking our privacy so seriously.





The Lone Ranger will be unmasked?

https://pogowasright.org/federal-judge-masked-ice-agents-violate-fourth-amendment/

Federal judge: Masked ICE agents violate Fourth Amendment

Chris Dickerson reports:

A federal judge has ruled Immigration and Customs Enforcement’s practice of conducting arrests with masked, unidentifiable agents violates the Fourth Amendment’s prohibition on unreasonable seizures.
In a February 19 opinion, U.S. District Judge Joseph E. Goodwin ordered the immediate release of petitioner Anderson Jesus Urquilla-Ramos, who was “arrested abruptly and without warning by a group of masked men purporting to be” ICE officers.
“Antiseptic judicial rhetoric cannot do justice to what is happening,” Goodwin wrote to begin his 34-page ruling. “Across the interior of the United States, agents of the federal government — masked, anonymous, armed with military weapons, operating from unmarked vehicles, acting without warrants of any kind — are seizing persons for civil immigration violations and imprisoning them without any semblance of due process.”

Read more at Legal Newsline.




Thursday, February 19, 2026

Is it too late for the US to follow suit?

https://www.theregister.com/2026/02/19/poland_china_car_ban/

Poland bans camera-packing cars made in China from military bases

Poland’s Ministry of Defence has banned Chinese cars – and any others that include tech to record position, images, or sound – from entering protected military facilities.

A Tuesday announcement from the country’s Ministry of Defence says the decision came after risk analysis of the potential for the many gadgets built into modern cars to allow “uncontrolled acquisition and use of data.”

The ban also prohibits officials connecting their work phones to infotainment systems in China-made cars.





For your amusement?

https://www.bespacific.com/epsteinalysis-com/

Epsteinalysis.com

Under the moniker Axiomofinfinity – for which there is no further information that I could locate – an individual or group has posted a remarkable searchable database, Epstein Files Explorer, of over one million documents and over two million pages that comprise the Epstein Files released by the DOJ. Applications used – Programmatic applications – Extracted via spaCy NER and clustered by similarity. = curated key figure. Use “Known Only” to filter.

Features include the following:

  • Search – There is a search feature for: Documents, Bates numbers, Entities

  • Analyze – Timeline, Events, Network, Meetings

  • Images – All Images, Faces

  • Videos – All Videos

  • Redactions – Inconsistencies, Statistics
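The entity pipeline the explorer describes (NER extraction, then clustering by similarity) can be approximated in miniature. The explorer itself uses spaCy; this sketch substitutes stdlib difflib string similarity for whatever clustering it actually performs, and the names are invented examples.

```python
# Toy version of "extracted via NER and clustered by similarity": group
# near-duplicate entity name mentions so variant spellings collapse into one
# cluster. difflib stands in for the explorer's actual similarity method.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """True when two names are near-duplicates by character similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cluster_entities(mentions: list[str]) -> list[list[str]]:
    """Greedily group mentions whose names are near-duplicates."""
    clusters: list[list[str]] = []
    for name in mentions:
        for cluster in clusters:
            if similar(name, cluster[0]):  # compare against cluster exemplar
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```

Greedy exemplar matching like this is order-dependent and crude; at the explorer’s scale (a million documents) a real pipeline would need blocking or vector similarity, which is presumably why a “Known Only” filter for curated figures exists.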