Saturday, January 24, 2026

Conflict ahead?

https://pogowasright.org/several-state-ai-laws-set-to-go-into-effect-in-2026-despite-federal-governments-push-to-eliminate-state-level-ai-regulations/

Several State AI Laws Set to Go into Effect in 2026, Despite Federal Government’s Push to Eliminate State-Level AI Regulations

Corey Bartkus of Barnes & Thornburg LLP writes:

Illinois, Texas, and Colorado are each set to implement laws governing the use of artificial intelligence (AI) in the workforce in 2026, all while the federal government has signaled its intent to eliminate state-level regulations on AI.
On Dec. 11, 2025, President Donald Trump signed an executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the federal government to review state laws that are deemed “inconsistent” with its plans to implement a national policy framework for AI.
Meanwhile, new AI laws in Illinois and Texas went into effect on Jan. 1. Illinois’ new law, H.B. 3773, amends the state’s human rights act to make clear that the statute is triggered when discrimination emanates from an employer’s use of AI to make decisions on hiring, firing, discipline, tenure, and training. Under H.B. 3773, companies must notify workers when AI is integrated into any of the aforementioned workplace decisions. Furthermore, companies are barred from using ZIP codes in the AI model when evaluating candidates. Because these new protections were implemented as part of Illinois’ existing human rights code, they come with a private right of action.

Read more at The National Law Review.





A different kind of security risk. Not sure training is available to address this in most companies.

https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html

Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

AI agents are accelerating how work gets done. They schedule meetings, access data, trigger workflows, write code, and take action in real time, pushing productivity beyond human speed across the enterprise.

Then comes the moment every security team eventually hits:

“Wait… who approved this?”

Unlike users or applications, AI agents are often deployed quickly, shared broadly, and granted wide access permissions, making ownership, approval, and accountability difficult to trace. What was once a straightforward question is now surprisingly hard to answer.

AI Agents Break Traditional Access Models

AI agents are not just another type of user. They fundamentally differ from both humans and traditional service accounts, and those differences are what break existing access and approval models.

Human access is built around clear intent. Permissions are tied to a role, reviewed periodically, and constrained by time and context. Service accounts, while non-human, are typically purpose-built, narrowly scoped, and tied to a specific application or function.

AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous and persistent, moving across systems and data sources to complete tasks end-to-end.
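A note on mechanics: the traceability gap described above is largely a missing-metadata problem. Here is a minimal sketch in Python of what an agent inventory record might capture (every field name is a hypothetical of mine, an illustration rather than anything the article prescribes):

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentRecord:
    """One inventory entry per deployed AI agent (hypothetical schema)."""
    agent_id: str
    owner: str               # an accountable human, not a shared mailbox
    approved_by: str         # who signed off on the deployment
    acts_for: list[str]      # users/teams whose authority it borrows
    scopes: list[str]        # permissions, kept to the narrowest workable set
    expires: datetime        # forces periodic re-approval

    def needs_review(self) -> bool:
        # An expired record should block the agent until a human re-approves it.
        return datetime.utcnow() >= self.expires

agent = AgentRecord(
    agent_id="sched-bot-01",
    owner="alice@example.com",
    approved_by="secteam@example.com",
    acts_for=["sales-team"],
    scopes=["calendar:write", "crm:read"],
    expires=datetime.utcnow() + timedelta(days=90),
)
print(agent.needs_review())  # False until the 90-day window lapses

With a record like this, “who approved this agent?” becomes a lookup rather than an investigation.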





Perspective.

https://theconversation.com/is-ai-hurting-your-ability-to-think-how-to-reclaim-your-brain-272834

Is AI hurting your ability to think? How to reclaim your brain

The retirement of West Midlands police chief Craig Guildford is a wake-up call for those of us using artificial intelligence (AI) tools at work and in our personal lives. Guildford lost the confidence of the home secretary after it was revealed that the force used incorrect AI-generated evidence in their controversial decision to ban Israeli football fans from attending a match.

This is a particularly egregious example, but many people may be falling victim to the same phenomenon – outsourcing the “struggle” of thinking to AI.

As an expert on how new technology reshapes society and the human experience, I have observed a growing phenomenon which I and other researchers refer to as “cognitive atrophy”.

Essentially, AI is replacing tasks many people have grown reluctant to do themselves – thinking, writing, creating, analysing. But when we don’t use these skills, they can decline.

We also risk getting things very, very wrong. Generative AI works by predicting likely words from patterns trained on vast amounts of data. When you ask it to write an email or give advice, its responses sound logical. But it does not understand or know what is true.



Friday, January 23, 2026

Perhaps another storage plan is worth considering?

https://pogowasright.org/microsoft-gave-fbi-keys-to-unlock-encrypted-data-exposing-major-privacy-concern/

Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Concern

Thomas Brewster reports:

Early last year, the FBI served Microsoft with a search warrant, asking it to provide recovery keys to unlock encrypted data stored on three laptops. Federal investigators in Guam believed the devices held evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds.
The data was protected with BitLocker, software that’s automatically enabled on many modern Windows PCs to safeguard all the data on the computer’s hard drive. BitLocker scrambles the data so that only those with a key can decode it.
It’s possible for users to store those keys on a device they own, but Microsoft also recommends BitLocker users store their keys on its servers for convenience. While that means someone can access their data if they forget their password, or if repeated failed attempts to log in lock the device, it also makes them vulnerable to law enforcement subpoenas and warrants.
In the Guam case, it handed over the encryption keys to investigators.

Read more at Forbes.
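The tradeoff in the excerpt is easy to state in code. A minimal sketch of key escrow in general (using the third-party Python cryptography package as a stand-in; BitLocker’s actual scheme is far more involved): whoever holds a copy of the key can decrypt, so where the recovery key lives determines whom a warrant can reach.

# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # analogous to a recovery key
ciphertext = Fernet(key).encrypt(b"case files")

escrow = {"provider_copy": key}              # cloud escrow: the provider can comply with a warrant
print(Fernet(escrow["provider_copy"]).decrypt(ciphertext))  # b'case files'

# "Another storage plan": keep the only copy offline yourself.
# Without the key, the ciphertext alone is useless to anyone, provider included.

The convenience Microsoft cites is real (forgotten passwords, lockouts); the cost is that the provider’s copy is reachable by legal process that never touches the user.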





I may have mentioned this problem a few times…

https://www.zdnet.com/article/ai-is-poisoning-itself-model-collapse-cure/

AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

According to tech analyst firm Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That’s because organizations’ AI systems and large language models (LLMs) are flooded with unverified, AI-generated content that cannot be trusted.

You know this better as AI slop. While annoying to you and me, it’s deadly to AI because it poisons LLMs with fake data. The result is what AI circles call “model collapse.” AI company Aquant describes the trend: “In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality.”
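Model collapse is easy to demonstrate in miniature. A toy sketch of the feedback loop (my illustration, not from the ZDNET piece): fit a Gaussian to samples drawn from the previous generation’s fit, resample, and repeat; estimation error compounds, so the learned distribution drifts away from the original and its tails erode.

import random
import statistics

mu, sigma = 0.0, 1.0                        # generation 0: the real data
for generation in range(1, 31):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.mean(samples)           # "retrain" on model output alone
    sigma = statistics.stdev(samples)
    print(f"gen {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# In a typical run the mean wanders and the spread decays: each generation
# inherits the previous one's sampling error instead of fresh real data.

The GIGO framing above implies the countermeasure: keep verified, human-generated data flowing into every training round instead of recycling model output.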





Something to keep in mind?

https://techcrunch.com/2026/01/22/google-now-offers-free-sat-practice-exams-powered-by-gemini/

Google now offers free SAT practice exams, powered by Gemini

Prepping for the SAT is nobody’s idea of fun, but Google aims to make it less stressful with AI. The company announced that it’s now focusing its AI education efforts on standardized testing with free SAT practice exams powered by Gemini. 

Students can prompt Gemini by typing “I want to take a practice SAT test,” and the AI will provide them with a free practice exam. Gemini then analyzes the results, highlighting strengths and identifying areas that need further review. It also offers detailed explanations for any incorrect answers. 



Thursday, January 22, 2026

Law by memo? (We want to, therefore we can?)

https://www.bespacific.com/immigration-officers-assert-sweeping-power-to-enter-homes-without-a-judges-warrant/

Immigration officers assert sweeping power to enter homes without a judge’s warrant

AP: Federal immigration officers are asserting sweeping power to forcibly enter people’s homes without a judge’s warrant, according to an internal Immigration and Customs Enforcement memo obtained by The Associated Press, marking a sharp reversal of longstanding guidance meant to respect constitutional limits on government searches. The memo authorizes ICE officers to use force to enter a residence based solely on a narrower administrative warrant to arrest someone with a final order of removal, a move that advocates say collides with Fourth Amendment protections and upends years of advice given to immigrant communities.

The shift comes as the Trump administration dramatically expands immigration arrests nationwide, deploying thousands of officers under a mass deportation campaign that is already reshaping enforcement tactics in cities such as Minneapolis.

For years, immigrant advocates, legal aid groups and local governments have urged people not to open their doors to immigration agents unless they are shown a warrant signed by a judge. That guidance is rooted in Supreme Court rulings that generally prohibit law enforcement from entering a home without judicial approval. The ICE directive directly undercuts that advice at a time when arrests are accelerating under the administration’s immigration crackdown.

The memo itself has not been widely shared within the agency, according to a whistleblower complaint, but its contents have been used to train new ICE officers who are being deployed into cities and towns to implement the president’s immigration crackdown. New ICE hires and those still in training are being told to follow the memo’s guidance instead of written training materials that contradict it, according to the whistleblower disclosure. It is unclear how broadly the directive has been applied in immigration enforcement operations.

On Jan. 11, The Associated Press witnessed ICE officers in Minneapolis, wearing heavy tactical gear and with their rifles drawn, ramming through the front door of the home of Garrison Gibson, a Liberian man with a deportation order from 2023. Documents reviewed by The AP revealed that the agents had only an administrative warrant — meaning no judge had authorized the raid on private property.

The change is almost certain to meet legal challenges and stiff criticism from advocacy groups and immigrant-friendly state and local governments that have spent years successfully urging people not to open their doors unless ICE shows them a warrant signed by a judge.

The Associated Press obtained the memo and whistleblower complaint from an official in Congress, who shared them on condition of anonymity to discuss sensitive documents. The AP verified the authenticity of the accounts in the complaint.

The memo, signed by the acting director of ICE, Todd Lyons, and dated May 12, 2025, says: “Although the U.S. Department of Homeland Security (DHS) has not historically relied on administrative warrants alone to arrest aliens subject to final orders of removal in their place of residence, the DHS Office of the General Counsel has recently determined that the U.S. Constitution, the Immigration and Nationality Act, and the immigration regulations do not prohibit relying on administrative warrants for this purpose…”





Might be interesting to include an order for disclosure of the search terms the FBI was using…

https://pogowasright.org/judge-prevents-feds-from-going-through-reporters-materials-seized-by-fbi/

Judge prevents feds from going through reporter’s materials seized by FBI

Sydney Haulenbeek reports:

 A magistrate judge on Wednesday blocked the government from examining data that it seized from a Washington Post reporter last week.
In a two-page order, Magistrate Judge William B. Porter granted the Post’s request for a standstill order, halting the federal government from reviewing any of the materials they seized from the property of Post reporter Hannah Natanson last week.
The government must preserve the documents that it collected during its search, Porter wrote, but not review them. The judge also scheduled an oral argument for early February.
Natanson, who covers President Donald Trump’s reshaping of the government, had her home searched by FBI agents early last Wednesday morning. The agents, who had a search warrant, seized a phone she used for work, two laptops — one of which was owned by the Post — a recorder, a hard drive and a Garmin watch.

Read more at Courthouse News.



Wednesday, January 21, 2026

Perspective. (1984-like?)

https://pogowasright.org/how-to-give-the-government-new-power-to-un-person-someone-in-three-easy-steps/

How to Give the Government New Power to “Un-Person” Someone, in Three Easy Steps

Jay Stanley writes:

The big push for state digital driver’s licenses that we’ve been warning about is effectively a movement to increase the power of big companies and government to control individuals. One feature of the licenses most states are adopting that may prove to be particularly dangerous is revocation — how and when people’s IDs can be canceled. Want to give the government powers that are brand new in human history? Just follow these three easy steps:
  • Build A Digital Driver’s License With Centralized Revocation Capability
  • Allow Those IDs To Be Used For Everything So People Can’t Function Without Them
  • Let Government Officials Yank People’s IDs Out Of Their Wallets

Read more at ACLU.
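Why step one is the load-bearing one: with centralized revocation, every verification can round-trip to the issuer, who can flip one bit to cancel the credential everywhere at once. A minimal sketch (a hypothetical protocol of mine, not any state’s actual mobile driver’s license design):

revoked: set[str] = set()        # issuer-controlled status list

def verify(credential_id: str, signature_valid: bool) -> bool:
    # Offline check: does the cryptography hold up?
    if not signature_valid:
        return False
    # Online check: the issuer gets a veto (and a log entry) on every use.
    return credential_id not in revoked

print(verify("dl-123", True))    # True: the holder can buy, board, enter
revoked.add("dl-123")            # one administrative action...
print(verify("dl-123", True))    # False: ...and the ID stops working everywhere

A paper license, by contrast, keeps working until someone physically takes it away; that difference is the new power the ACLU piece is warning about.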





Perspective.

https://theconversation.com/ai-cannot-automate-science-a-philosopher-explains-the-uniquely-human-aspects-of-doing-research-272477

AI cannot automate science – a philosopher explains the uniquely human aspects of doing research

While AI can assist in tasks that are part of the scientific process, it is still far away from automating science – and may never be able to. As a philosopher who studies both the history and the conceptual foundations of science, I see several problems with the idea that AI systems can “do science” without or even better than humans.





Hackers become hatters?

https://boingboing.net/2026/01/20/nycs-new-high-tech-subway-turnstiles-defeated-by-hat.html

NYC's new high tech subway turnstiles defeated by hat

New York City's newest attempt to prevent subway fare evasion can be beaten by simply tossing a hat. As seen in this video, dropping a hat triggers the motion sensor that opens the gate for exiting passengers. The gate not only opens without a fare being paid, but stays open long enough for the enterprising fellow's entire crew to slip through. An alarm sounds, but no one seems to care, including police who likely already know about the flaws.



Tuesday, January 20, 2026

How much “Oops!” can we tolerate?

https://pogowasright.org/ices-facial-recognition-app-misidentified-a-woman-twice/

ICE’s Facial Recognition App Misidentified a Woman. Twice

Joseph Cox reports:

When authorities used Immigration and Customs Enforcement’s (ICE) facial recognition app on a detained woman in an attempt to learn her identity and immigration status, it returned two different and incorrect names, raising serious questions about the accuracy of the app ICE is using to determine who should be removed from the United States, according to testimony from a Customs and Border Protection (CBP) official obtained by 404 Media.
ICE has told lawmakers the app, called Mobile Fortify, provides a “definitive” determination of someone’s immigration status, and should be trusted over a birth certificate. The incident, which happened last year in Oregon, casts doubt on that claim.
“ICE has treated Mobile Fortify like it’s a 100% accurate record retrieval system of everybody’s immigration status for the entire population of the U.S. when this is obviously not true, and could never be true from a technical perspective,” Cooper Quintin, a security researcher and senior public interest technologist at the Electronic Frontier Foundation, told 404 Media.

Read more at 404 Media.





Perspective.

https://sloanreview.mit.edu/audio/connecting-language-and-artificial-intelligence-princetons-tom-griffiths/

Connecting Language and (Artificial) Intelligence: Princeton’s Tom Griffiths

In this bonus episode of the Me, Myself, and AI podcast, Princeton University professor and artificial intelligence researcher Tom Griffiths joins host Sam Ransbotham to unpack The Laws of Thought, his new book exploring how math has been used for centuries to understand how minds — human and machine — actually work. Tom walks through three main frameworks shaping intelligence today — rules and symbols, neural networks, and probability — and he explains why modern AI only makes sense when you see how those pieces fit together. The conversation connects cognitive science, large language models, and the limits of human versus machine intelligence. Along the way, Tom and Sam dig into language, learning, and what humans still do better — like judgment, curation, and metacognition.





Perspective.

https://thedailyeconomy.org/article/the-price-of-greenland-and-the-cost-of-attacking-sovereignty/

The Price of Greenland — and the Cost of Attacking Sovereignty

President Donald Trump’s renewed push to acquire Greenland is now framed not as a novelty or negotiating stunt, but as a foreign policy and national security imperative. Administration officials argue that Greenland’s Arctic location, proximity to emerging shipping lanes, and potential role in countering Russian and Chinese influence make US control strategically essential. 

That framing has now been paired with explicit economic pressure: in a social media post on Saturday, January 17, 2026, Mr. Trump announced that Denmark — the sovereign power over Greenland — will face a 10 percent tariff on all goods exported to the United States beginning February 1, with the rate rising to 25 percent on June 1 if Denmark does not agree to a “Complete and Total purchase of Greenland.” He further stated that Norway, Sweden, France, Germany, Britain, the Netherlands, and Finland — NATO allies that have expressed solidarity with Denmark — will be subjected to the same escalating tariffs unless they relent.

Even granting the strategic premise, the proposal collapses under basic economic reasoning. The problem is not subtle. It lies in valuation, incentives, and the institutional foundations that make both markets and geopolitics workable.





Perspective.

https://www.aljazeera.com/news/2026/1/20/how-a-year-of-trump-reshaped-the-world-in-seven-charts

How a year of Trump reshaped the world in seven charts





Perspective.

https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

AI companies will fail. We can salvage something from the wreckage

AI is asbestos in the walls of our tech society, stuffed there by monopolists run amok. A serious fight against it must strike at its roots



Monday, January 19, 2026

Already in your neighborhood schools?

https://www.forbes.com/sites/thomasbrewster/2025/12/16/ai-bathroom-monitors-welcome-to-americas-new-surveillance-high-schools/

AI Bathroom Monitors? Welcome To America’s New Surveillance High Schools

Inside a white stucco building in Southern California, video cameras compare faces of passersby against a facial recognition database. Behavioral analysis AI reviews the footage for signs of violent behavior. Behind a bathroom door, a smoke detector-shaped device captures audio, listening for sounds of distress. Outside, drones stand ready to be deployed and provide intel from above, and license plate readers from $8.5 billion surveillance behemoth Flock Safety ensure the cars entering and exiting the parking lot aren’t driven by criminals.

This isn't a high-security government facility. It's Beverly Hills High School.





Thus spake IBM.

https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/enterprise-2030

The enterprise in 2030

AI isn’t just enhancing the business model. By 2030, it will be the business model.

Study: https://www.ibm.com/downloads/documents/us-en/1550f812c451680b



Sunday, January 18, 2026

If we can do it to a potential foe, they can do it to us.

https://databreaches.net/2026/01/17/us-cyberattack-blacks-out-venezuela-leads-to-maduros-capture-in-2026/

US Cyberattack Blacks Out Venezuela, Leads to Maduro’s Capture in 2026

Julian E. Barnes and Anatoly Kurmanaev report:

The cyberattack that plunged Venezuela’s capital into darkness this month demonstrated the Pentagon’s ability not just to turn off the lights, but also to allow them to be turned back on, according to U.S. officials briefed on the operation.
The Jan. 3 operation was one of the most public displays of offensive U.S. cybercapabilities in recent years. It showed that at least with a country like Venezuela, whose military does not have sophisticated defenses against cyberattacks, the United States could use cyberweapons with powerful and precise effects.
The U.S. military also used cyberweapons to interfere with air defense radar, according to people briefed on the matter, who discussed sensitive details of the operation on the condition of anonymity. (Venezuela’s most powerful radar was not functional, however.)

Read more at The New York Times.





Keeping up...

https://pogowasright.org/u-s-biometric-laws-pending-legislation-tracker-january-2026/

U.S. Biometric Laws & Pending Legislation Tracker – January 2026

Lauren Caisman and Amy de La Lama of BCLP provide a useful summary of existing and proposed biometric laws by state.

Read their write-up on BCLP.




Not just to improve vision but to see.

https://pogowasright.org/the-hidden-legal-minefield-compliance-concerns-with-ai-smart-glasses-part-4-data-security-breach-notification-and-third-party-ai-processing-risks/

The Hidden Legal Minefield: Compliance Concerns with AI Smart Glasses, Part 4: Data Security, Breach Notification, and Third-Party AI Processing Risks

Joseph Lazzarotti of JacksonLewis writes:

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.
  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.
In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity and data security risk more broadly pose another major and often underestimated exposure from this technology.
The Risk
AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically transmitting it to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.
Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.
Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Read more at Workplace Privacy, Data Management & Security Report.





Not just lawyers.

https://scholarworks.uark.edu/arlnlaw/23/

Ethics of Artificial Intelligence for Lawyers: Shall We Play a Game? The Rise of Artificial Intelligence and the First Cases

In the 1983 movie WarGames, a young computer hacker accidentally accesses a United States military supercomputer programmed to run nuclear war simulations. Four decades after WarGames, lawyers are now facing similar challenges of learning to use and communicate with artificial intelligence — hopefully without destroying the world. Artificial intelligence tools, such as ChatGPT, Claude, and Gemini, are quickly being incorporated into legal practice. These systems can draft documents, perform analysis, and support other legal tasks. While lawyers adjust to these new technologies, courts and regulatory authorities are actively developing appropriate frameworks to guide and supervise the use of these tools within the sector.

This first installment in the series lays the foundation with a brief history of artificial intelligence, the rise of generative models, and the problem of “hallucinations” that make these tools especially dangerous for lawyers. It also surveys the first wave of cases, where courts sanctioned attorneys and pro se litigants for relying on hallucinated citations, imposed new procedural safeguards, and began confronting broader disputes over evidence, intellectual property, education, and government transparency. The next installments will shift from cases to rules by examining the American Bar Association’s Formal Opinion 512. Formal Opinion 512 is expansive, so it will be examined in two parts, first through its guidance on competence, confidentiality, and communication, and then through its treatment of candor, supervision, and fees. From there, the series will turn to the rapidly evolving regulatory landscape, surveying federal inaction, California’s aggressive framework, the European Union’s AI Act, and Arkansas’s initial steps. The final entries in the series will focus on practice by outlining best practices that lawyers can adopt today and previewing the new skills that will define the next frontier of lawyer competence.





Let’s gang up on AI…

http://gmp-pub.com/index.php/ILDJ/article/view/19

The Rise of Artificial Intelligence and Its Implications for International Legal Accountability

This study examines the profound challenges posed by the rapid rise of Artificial Intelligence (AI) to the framework of international legal accountability. As AI systems become increasingly autonomous, complex, and opaque, existing international legal norms historically designed for human and state actors struggle to provide adequate regulatory guidance or mechanisms for responsibility attribution. The research identifies four interconnected problem areas: legal gaps in governing AI, difficulties in assigning accountability for autonomous AI decisions, human rights and humanitarian law implications, and structural imbalances within global governance. Current international law lacks coherent provisions that address the unique characteristics of AI, including unpredictability, machine learning opacity, and cross-border impacts. These gaps complicate the process of determining liability when AI systems cause harm, especially in contexts such as algorithmic discrimination, surveillance practices, and autonomous weapons deployment. Moreover, AI amplifies risks to fundamental rights, including privacy, freedom of expression, and due process, while also challenging the principles of distinction and proportionality in armed conflict. At the global governance level, power asymmetries between technologically advanced states, developing countries, and dominant private technology corporations hinder the creation of inclusive and effective regulatory standards. Consequently, the governance of AI is fragmented, slow, and heavily influenced by actors with disproportionate technological and economic power. This study argues that a comprehensive, adaptive, and multilateral legal framework is essential to ensure accountability, protect human rights, and promote equitable global governance in the AI era. Strengthening international institutions, harmonizing global standards, and expanding oversight over non-state actors are crucial steps toward achieving a balanced and just international AI governance system.