Monday, April 27, 2026

Could you deliberately create exculpatory evidence in your chats?

https://www.bespacific.com/major-law-firms-are-warning-clients-anything-you-type-into-an-ai-chatbot-can-be-used-against-you-in-court/

Major law firms are warning clients: anything you type into an AI chatbot can be used against you in court…

Reuters: “As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities [fraud charges] against him.

In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. “We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private…”





I see pros and cons.

https://www.theatlantic.com/technology/2026/04/ai-nationalization-trump-hegseth-anthropic-openai/686943/

What Happens If America Nationalizes AI?

AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.





Clearly a system design failure. And no process to fix it.

https://www.9news.com/article/news/local/rime-flock-cam-pulled-over/73-e3f65018-32a5-4bb0-a4ac-26fb24dc9a15

He didn’t commit a crime, but Flock cam alerts keep getting him pulled over

Kyle Dausman was just driving through Cherry Hills Village when officers pulled him over without warning. Officers thought he had a warrant attached to his vehicle. He didn't. They released him.

A few days later, he was pulled over again by one of the same Cherry Hills Village police officers. Same thing. The officer quickly recognized him and let him go.

Lyons said the warrant traces back to a Gilpin County case and a court data entry error that confused Dausman's plate with the similar plate of a wanted man.

Lyons believes the root cause is a data entry issue involving Colorado license plates, which use both the letter O and the numeral zero.

"In Colorado data entry, we use both zeros and O's in license plates," Lyons said. "Sometimes the data entry will be for both."

He said the warrant returned hits when Dausman's plate was searched either way.

"They entered it for both," Lyons said. "It wasn't a mistake, one or the other. They just entered it for both an O and a zero, because we've run it both ways and the warrant pops up both ways."

Dausman said he tried to resolve the problem by contacting Gilpin County courts and the sheriff's office dispatch, and was told he needed to provide the name of the suspect tied to the warrant — information no one would give him because it involves an ongoing criminal investigation.
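
The failure mode is easy to reproduce: once a warrant record is keyed under both the letter-O and numeral-zero variants of a plate, any lookup will hit both the wanted vehicle and an innocent near-match, and nothing downstream disambiguates them. Below is a minimal sketch of the collision and one possible guard (requiring a second attribute, such as vehicle make, to agree before an alert fires). The record layout, field names, and confirmation rule are hypothetical, not how Flock or Colorado's warrant database actually work.

    # Hypothetical illustration of the O-vs-0 plate collision described above.
    # Record layouts and the confirmation rule are assumptions for illustration.

    def normalize(plate: str) -> str:
        """Collapse the letter O and the digit 0 into one symbol for matching."""
        return plate.upper().replace("O", "0")

    # A warrant "entered for both an O and a zero" effectively covers two plates.
    warrant_plates = {"ABC0123", "ABCO123"}            # wanted vehicle, keyed both ways
    warrant_vehicle = {"state": "CO", "make": "Ford"}  # hypothetical descriptors

    def plate_hit(plate: str) -> bool:
        """Naive ALPR-style check: normalize and compare plates only."""
        return any(normalize(plate) == normalize(w) for w in warrant_plates)

    def confirmed_hit(plate: str, vehicle: dict) -> bool:
        """Guarded check: a plate match alone is not enough; require a second
        attribute (here, vehicle make) to agree before flagging the driver."""
        return plate_hit(plate) and vehicle.get("make") == warrant_vehicle["make"]

    innocent_driver = {"state": "CO", "make": "Subaru"}
    print(plate_hit("ABCO123"))                       # True  -> repeated stops
    print(confirmed_hit("ABCO123", innocent_driver))  # False -> alert suppressed

Whether a confirmation step like this is feasible depends on what the warrant record actually carries; the point is that the fix belongs in the matching and alerting logic, not with the driver.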



Sunday, April 26, 2026

Honest cops?

https://brooklynworks.brooklaw.edu/blr/vol91/iss2/6/

Police and AI: When Abundantly Helpful Becomes Intrinsically Harmful

Artificial intelligence (AI) has rapidly crept into nearly all aspects of life, including in government, the criminal justice system, and policing. While Supreme Court Due Process jurisprudence has outlined certain boundaries for police interrogations, much police conduct is left for the states to regulate. Such regulation is sporadic and less restrictive than the public might assume, especially in the realm of police deception. Across jurisdictions, courts allow police to deceptively inform suspects that a witness identified the suspect as the perpetrator of a crime, or that the suspect’s fingerprints, DNA, or shoe prints were found at the scene of the crime. Police can even present fake evidence to suspects in an interrogation, including falsified lab reports, photographs, and more. With AI’s use expanding into law enforcement, there is a clear need to regulate police deception in interrogations before constitutional rights are infringed. This Note argues that while courts have long permitted various deceptive police tactics, the increasing sophistication and accessibility of AI tools pose unprecedented risks such as false confessions, bias, and potentially unwarranted public reprimand. Through an analysis of case law, the evolution of Miranda and Due Process jurisprudence, and emerging AI applications in policing, this Note demonstrates how AI-enabled deception could exacerbate Due Process violations, undermine public trust, and increase wrongful convictions. It concludes by urging state legislatures to preemptively prohibit the use of AI to create false evidence in interrogations, advocating for a state-by-state legislative approach as the most effective means to safeguard constitutional protections in a rapidly evolving world.





Law is a Matrix?

https://scholarworks.uark.edu/arlnlaw/31/

Prompt Engineering For Lawyers: Blue Pill Or Red Pill: Hallucinations Risks And An Introduction To Prompt Engineering

In The Matrix, Neo’s choice between the blue pill and the red pill is essentially a choice between a comfortable illusion and an unsettling reality. Lawyers now face a similar decision with artificial intelligence. They can take the blue pill: ignore artificial intelligence or treat it like just another search engine, continuing a comfortable illusion that the new technology may not transform the practice of law. Or lawyers can take the red pill: acknowledge that artificial intelligence will transform the practice of law and learn how to use it competently, ethically, and effectively.

This Article is for those who choose the red pill. It begins with the problem of hallucinations, which makes blind reliance on artificial intelligence a professional hazard, and then turns to the first step in using artificial intelligence productively: understanding how it differs from Googling. When artificial intelligence is approached as a role-playing collaborator, such as a litigator, contract drafter, or judge, lawyers can enhance the accuracy, tone, and usefulness of the responses it provides.
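
For readers who want to see what a “role-playing collaborator” looks like in practice, here is a minimal sketch using the OpenAI Python SDK as one illustration; the model name, the role wording, and the instruction against invented citations are placeholder choices of mine, not anything the Article prescribes.

    # Minimal sketch of role-based prompting (one illustration of the
    # "role-playing collaborator" idea; model name and prompts are placeholders).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    role = (
        "You are an experienced commercial litigator reviewing a draft brief. "
        "Flag weak arguments, missing counterarguments, and any citation you "
        "cannot verify. Never invent case names or quotations."
    )

    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder model name
        messages=[
            {"role": "system", "content": role},
            {"role": "user", "content": "Critique this summary-judgment argument: ..."},
        ],
        temperature=0.2,  # keep output conservative
    )
    print(response.choices[0].message.content)

The vendor matters less than the structure: the assigned role constrains tone and focus, and the explicit instruction about citations speaks to the hallucination risk the Article opens with, though every authority the model cites still has to be verified.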





Outside the box?

https://ojs.scipub.de/index.php/MSC/article/view/8331

THE PROBLEM OF THE CONSTITUTIONAL AND LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE

This article examines the constitutional and legal problems arising against the background of the rapid development of artificial intelligence (AI), as well as the new realities generated by digital transformation. It offers a comparative analysis of the advanced constitutional practices of countries such as Chile, Greece, Mexico, and Brazil in the regulation of AI.

Referring to the theoretical concepts of prominent international scholars such as Lawrence Lessig, Frank Pasquale, and Mireille Hildebrandt, the article explores the principles of “code as law” and “legal protection by design.”

At the same time, it interprets the fundamental threats posed by AI in the spheres of algorithmic discrimination, the privacy of personal data, and neuro-rights.

The article proposes the application of a strict liability model within the civil law system of Azerbaijan for the compensation of damage caused by AI and suggests recognizing AI as an “autonomous source of risk.” In conclusion, it advances strategic solutions aimed at ensuring that national legislation evolves on the basis of the principles of digital constitutionalism and that the supremacy of human will over program code is preserved.





Maury Nichols points me to another interesting article.

https://www.straitstimes.com/multimedia/graphics/2026/04/ai-chatbots-privacy-risk/index.html?ref=thefuturist

Marcus asks AI chatbots various questions.

They seem entirely harmless. But they can tell the chatbots a lot about him.





Modern war.

https://carnegieendowment.org/research/2026/04/ukraine-russia-war-changing-warfare-practice-military-strategy

The New Revolution in Military Affairs

How Ukraine is driving doctrinal change in modern warfare.



Saturday, April 25, 2026

Is it the CFTC’s job to ensure state licenses are in order?

https://www.reuters.com/legal/government/cftc-sues-new-york-block-oversight-prediction-markets-2026-04-24/

CFTC sues New York to block oversight of prediction markets

The U.S. Commodity Futures Trading Commission sued New York on Friday, accusing the state of invading its authority to regulate prediction markets by filing lawsuits accusing Coinbase Financial Markets (COIN.O) and Gemini Titan (GEMI.O) of promoting gambling.

In a complaint filed in Manhattan federal court, the CFTC said the litigation filed on April 21 by New York Attorney General Letitia James "intrudes on the exclusive federal scheme Congress designed" to oversee commodity derivatives markets, including prediction markets.





Because anti-discrimination is discrimination?

https://coloradosun.com/2026/04/24/doj-joins-lawsuit-colorado-ai-law-federal-court/

Justice Department joins Elon Musk’s xAI in effort to block Colorado AI antidiscrimination law

The Department of Justice joined a lawsuit seeking to block Colorado’s first-in-the-nation artificial intelligence antidiscrimination law from taking effect, escalating a legal fight that began two weeks ago with a challenge filed by Elon Musk’s xAI. 

Senate Bill 205, which was signed into law in 2024, aims to regulate “high-risk” AI systems and protect consumers from so-called algorithmic discrimination, which is when a computer system produces biased results that disadvantage certain people, especially based on traits like race, gender, age or income. 

Attorneys for the federal government joined Musk’s xAI in arguing that the law jeopardizes the United States’ position as “the global AI leader” by requiring AI systems to “incorporate discriminatory ideology that prioritizes preferred demographic characteristics over accurate and merit-based outputs.”

“SB24-205 constrains the information that AI systems convey, obligates AI developers and deployers to discriminate, and then enforces the state-mandated discrimination with onerous policy, assessment, and disclosure requirements that will disproportionately burden small businesses and start-ups,” DOJ attorneys wrote in the 19-page complaint, which was filed in federal court in Denver.



Friday, April 24, 2026

Government parenting? How do you enforce this without checking every user logon?

https://thenextweb.com/news/norway-social-media-ban-under-16-age-verification

Norway plans to ban social media for children under 16 and shift age verification liability to platforms

The minority Labour government, led by PM Jonas Gahr Støre, announced the legislation on Friday. The age threshold has been raised from the 15-year limit proposed in the 2025 consultation, aligning Norway with Australia’s world-first ban that came into force in December. Ireland is also considering similar legislation.

The mechanism of the ban is as important as the age threshold. Under the proposed Norwegian legislation, social media companies, defined as platforms where users can create a profile, connect with other profiles, and share content without editorial oversight, will be required to implement effective age verification.

The burden of verifying age shifts from the child, who currently self-reports, to the platform. Norway’s existing digital identity infrastructure, BankID, is expected to play a role in the verification architecture. Platforms that fail to comply will face fines. The consultation draft proposed fines of up to NOK 20 million.
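
To make the enforcement question above concrete: one way a platform could meet a verification duty without inspecting every user's identity is to accept a signed over-16 attestation from a national eID provider and store nothing else. The sketch below is purely hypothetical; the claim names, signing scheme, and threshold are my assumptions, not anything BankID or the Norwegian draft specifies.

    # Hypothetical platform-side check of a signed age attestation.
    # Claim names, the shared secret, and the issuer are illustrative only.
    import hashlib
    import hmac
    import json

    ISSUER_SECRET = b"demo-secret"  # stand-in for real issuer key material

    def sign(claims: dict) -> str:
        """Issuer side (e.g. a national eID service): sign the minimal claim."""
        payload = json.dumps(claims, sort_keys=True).encode()
        return hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()

    def verify_attestation(token: dict) -> bool:
        """Platform side: allow signup only if a trusted issuer attests the user
        is over 16. The platform never sees a name or birthdate, only the flag."""
        expected = sign(token["claims"])
        return hmac.compare_digest(token["signature"], expected) \
            and token["claims"].get("over_16") is True

    claims = {"over_16": True, "issued_at": "2026-04-24"}
    token = {"claims": claims, "signature": sign(claims)}
    print(verify_attestation(token))  # True -> account creation allowed

Whether a minimal-disclosure scheme like this counts as “effective age verification”, and whether it avoids the ID-checkpoint problem Proton raises below, is exactly where the policy argument sits.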



(Related)

https://www.theregister.com/2026/04/23/proton_ceo_age_checks_id_checkpoint/

Age checks could turn internet into an ID checkpoint, complains Proton CEO

In a blog post on Thursday, Andy Yen, CEO of Proton, argues that the current push for age checks risks flipping the web from anonymous by default to something closer to "show your papers" before you click.

The problem, he says, is that you can't reliably identify minors without identifying everyone else first, meaning systems built to protect kids inevitably sweep up adults too. "We cannot accept a world where every adult is expected to hand over ID as the price of going online."





Trying to be human?

https://www.businessinsider.com/ai-written-email-perfect-typos-new-chrome-plugin-2026-4

Now there's an AI tool that adds typos into your emails — so it looks like you didn't use AI
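
The mechanics are trivial, which is part of the story: a few lines can sprinkle plausible slips into machine-perfect prose. A toy sketch, with an arbitrary error rate and swap rule, not the plugin's actual method:

    # Toy version of the idea: occasionally swap adjacent letters inside a word
    # so polished text looks hand-typed. Rate and rule are arbitrary choices.
    import random

    def add_typos(text: str, rate: float = 0.05, seed: int | None = None) -> str:
        rng = random.Random(seed)
        words = []
        for word in text.split(" "):
            if len(word) > 3 and rng.random() < rate:
                i = rng.randrange(len(word) - 1)
                word = word[:i] + word[i + 1] + word[i] + word[i + 2:]  # transpose
            words.append(word)
        return " ".join(words)

    print(add_typos("Thanks for the update, I will review the contract today.", seed=1))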



Thursday, April 23, 2026

At last, someone is doing something about the weather…

https://news.bitcoin.com/a-hair-dryer-may-have-gamed-a-paris-weather-sensor-for-34000-on-polymarket/

A Hair Dryer May Have Gamed a Paris Weather Sensor for $34,000 on Polymarket

The complaint follows two temperature anomalies at the CDG station. On April 6, the sensor recorded a jump of roughly 4 degrees Celsius within 12 minutes at approximately 6:30 p.m., briefly reaching 22.5 degrees Celsius before returning to normal. On April 15 at approximately 9:30 p.m., the reading climbed to 22 degrees Celsius under calm, cloudy skies before dropping back within minutes.

No neighboring stations recorded similar changes during either event. Wind direction and relative humidity showed no corresponding shifts.

On April 6, long-shot bets on Paris reaching 21 degrees Celsius paid out approximately $14,000 to at least one bettor whose account had been created days earlier, according to reporting by Le Monde and BFMTV. A similar wager on 22 degrees Celsius resolved in a bettor’s favor on April 15 for roughly $20,000.
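
The giveaway described above, a sharp spike at one station that no neighbouring station records, is straightforward to flag automatically before a market resolves on the reading. A rough sketch of such a cross-station sanity check; the neighbour readings and thresholds below are invented for illustration.

    # Rough cross-station sanity check for a suspicious temperature spike.
    # Neighbour readings and thresholds are invented; real QC is more involved.
    from statistics import median

    def suspicious(reading_c: float, neighbours_c: list[float],
                   max_gap_c: float = 3.0) -> bool:
        """Flag a reading that sits far above the median of nearby stations
        while those stations agree with one another."""
        med = median(neighbours_c)
        spread = max(neighbours_c) - min(neighbours_c)
        return (reading_c - med) > max_gap_c and spread < max_gap_c

    # April 6-style event: the sensor jumps to 22.5 C, nearby stations stay ~18 C.
    print(suspicious(22.5, [18.1, 17.9, 18.4]))  # True  -> hold the observation
    print(suspicious(18.3, [18.1, 17.9, 18.4]))  # False -> looks normal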





The look of an AI arms race.

https://www.politico.eu/article/u-k-intelligence-100-nations-have-spyware-that-can-hack-britain/

UK intelligence: 100 nations have spyware that can hack Britain

More than half of the world's nation states are believed to have purchased technology that could be capable of hacking into Britain's infrastructure, companies and private networks, U.K. intelligence has found.

The U.K. National Cyber Security Centre — which is part of the GCHQ intelligence agency — believes around 100 countries have procured cyber intrusion software, suggesting the barrier for states to get their hands on the technology is dropping, the agency told POLITICO ahead of a discussion about its findings at its CYBERUK conference in Glasgow Wednesday.





First look?

https://www.politico.com/news/2026/04/22/house-republicans-roll-out-landmark-data-privacy-push-00886800?nid=0000015a-dd3e-d536-a37b-dd7fd8af0000&nname=playbook-pm&nrid=f1499b3a-1f47-4e35-80c8-e83140fb7df7

House Republicans roll out landmark data privacy push

Key House Republicans on Wednesday unveiled a landmark legislative effort to create a national data privacy standard, teeing up a push to enact sweeping changes to how tech and financial data are regulated.

The effort includes two bills — the SECURE Data Act, which deals with tech companies’ consumer data, and a second financial data privacy measure dubbed the GUARD Financial Data Act.



(Related)

https://fpf.org/press-releases/fpf-on-the-securing-and-establishing-consumer-uniform-rights-and-enforcement-over-data-secure-data-act/

FPF on the Securing and Establishing Consumer Uniform Rights and Enforcement Over Data (“SECURE Data”) Act

In the absence of a federal law, twenty-one states have enacted comprehensive privacy laws that, while varying in detail, have generally converged around a common framework. The “SECURE Data Act” largely follows that consensus model, which could facilitate compliance for businesses already navigating state requirements. However, several states have taken different approaches or amended their laws in recent years, including expansions related to health data, minors’ data, and geolocation—raising questions about the extent to which a federal baseline should reflect these alternatives.





Autonomous is expensive.

https://www.theguardian.com/us-news/2026/apr/22/pentagon-asks-for-54bn-in-pivot-towards-ai-powered-war

Pentagon asks for $54bn in pivot towards AI-powered war

The Pentagon is aiming to increase funding more than a hundredfold for an autonomous drone warfare program, according to budget documents released this week, signalling a major pivot towards AI-powered war.

In its 2027 budget, the Pentagon has asked for over $54bn to fund the Defense Autonomous Warfare Group, a 24,000% increase on last year.

An overview of the budget describes this money as going towards “autonomous and remotely operated systems across air, land, and above and below the sea,” including the “Drone Dominance” program.

The amount is over half the entire defence budget of the UK. In an opinion piece published yesterday, former CIA director David Petraeus said it was “the largest single commitment to autonomous warfare in history”.
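
A quick back-of-envelope check of the “$54bn” and “24,000% increase” figures, treating the percentage as meaning the new request is 241 times last year's; the implied prior-year amount is an inference, not a number from the budget documents.

    # Back-of-envelope: prior-year funding implied by "$54bn" and "a 24,000% increase".
    # Inference only; not a figure taken from the budget documents.
    new_request = 54e9
    increase_pct = 24_000
    multiplier = 1 + increase_pct / 100          # 241x the previous year
    implied_prior_year = new_request / multiplier
    print(f"{multiplier:.0f}x, prior year ~ ${implied_prior_year / 1e6:.0f}M")
    # -> 241x, prior year ~ $224M, consistent with "more than a hundredfold"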



Wednesday, April 22, 2026

Why not ignore the law if there are no consequences?

https://www.bespacific.com/trump-fought-to-keep-the-ballroom-fundraising-contract-secret-heres-whats-in-it/

Trump fought to keep the ballroom fundraising contract secret. Here’s what’s in it.

Follow-up to Banquet of Greed: Trump Ballroom Donors Feast on Federal Funds and Favors – See Washington Post – no paywall: “The agreement governing hundreds of millions in private donations was kept secret until a watchdog group sued and a judge ordered it disclosed [the full text of this document is embedded in this WaPo article – view the 14-page PDF without the paywall here]… “The Trump administration’s failure to disclose this contract was flatly unlawful,” said Wendy Liu, a Public Citizen attorney and lead counsel on the lawsuit, filed after the Park Service and the Interior Department failed to fulfill a public records request for the document. “The American people are entitled to transparency over this multi-million-dollar project.” The secrecy surrounding the contract mirrors the administration’s broader approach to the project. White House officials have declined to disclose the total amount raised, the identities of all donors or, until recently, basic details about the building’s design. Court documents show Trump knew he was going to tear down the East Wing at least two months before doing so, but he never told the public. The contract provisions, taken together, allow wealthy donors with business before the federal government to contribute anonymously to a sitting president’s pet project, while exempting the White House from key conflict of interest safeguards and limiting scrutiny by Congress and the public… The contract resembles templates used by the Park Service for more routine fundraising partnerships, with several notable differences: Provisions peppered throughout the agreement prevent the signatories from revealing the identities of anonymous donors, and a review process for detecting conflicts of interest with the Park Service and Interior Department makes no mention of doing the same for the president, other White House officials or the 14 other executive departments he oversees.”





Still not a majority…

https://pogowasright.org/alabama-becomes-21st-state-with-comprehensive-consumer-privacy-law/

Alabama Becomes 21st State With Comprehensive Consumer Privacy Law

Hunton Andrews Kurth writes:

On April 17, 2026, Alabama Governor Kay Ivey signed into law the Alabama Personal Data Protection Act (HB 351) (“APDPA” or “the Act”), making Alabama the twenty-first state to enact a comprehensive consumer privacy law. The law goes into effect on May 1, 2027.
Alabama enacted the APDPA within an already maturing ecosystem of state-level privacy regulation that has increasingly coalesced around a shared statutory model. Rather than departing significantly from prevailing approaches, the Act largely aligns with the Virginia-style framework that has become the dominant template for U.S. comprehensive consumer privacy laws. Nevertheless, the APDPA contains several material distinctions in scope, applicability and enforcement that warrant careful examination.
The Structure and Main Provisions of the Act
At a structural level, the APDPA adopts the now-standard controller–processor paradigm, imposing obligations on entities that determine the purposes and means of processing personal data, while allocating more limited duties to processors acting on behalf of such entities. The Act also provides consumers a familiar set of data rights, including rights of access, correction, deletion and opt-out with respect to targeted advertising, sale of personal data and certain forms of profiling.

Read more about the Act’s provisions at Hunton.com.





When will this become inexcusable?

https://www.theguardian.com/technology/2026/apr/22/ai-hallucinations-found-in-high-profile-wall-street-law-firm-filing

AI hallucinations found in high-profile Wall Street law firm filing

The firm said that it maintains “comprehensive policies and training requirements governing the use of AI tools in legal work” that are designed to catch any potential errors.

However, the letter said those AI policies were not followed and that a secondary review process also “did not identify the inaccurate citations generated by AI”.





Shoot. I was going to bet on that.

https://www.reuters.com/legal/government/new-york-sues-coinbase-financial-markets-gemini-titan-allegedly-violating-state-2026-04-21/

New York sues prediction markets Coinbase and Gemini Titan, calls their operations gambling





About time?

https://www.theregister.com/2026/04/21/exfbi_cyber_chief_urges_felony_charges_ransomware/

Murder, she wrote: Ex-FBI chief wants some ransomware crims charged with homicide

If a cyberattack leads to a death, that's murder. A former FBI cyber division chief urged the US Justice Department to consider felony homicide charges against ransomware actors when attacks on hospitals lead to patient deaths.

In testimony before a US House of Representatives subcommittee hearing, Cynthia Kaiser, former deputy assistant director of the FBI's cyber division, implored lawmakers to "champion" the federal government to use three existing legal authorities to go after ransomware criminals who encrypt healthcare networks and systems.



Tuesday, April 21, 2026

As I have been warning…

https://www.bespacific.com/we-dont-really-know-how-ai-works-thats-a-problem/

We Don’t Really Know How A.I. Works. That’s a Problem

The New York Times: “For us to trust it on certain subjects, researchers in the growing field of interpretability might need to learn how to open the black box of its brain… [One way to understand an] A.I. system is to ask the model to explain itself. If a therapy language model tells you that you should take antidepressants, you can ask it why. “You have mood swings,” it might respond. “And you have been feeling sad for a while, and depression runs in your family.” Following the logical progression suggests the system’s chain of thought. This is what we do when other people make decisions. We ask them to explain themselves, and if we’re satisfied with the explanation — the inferences, the assumptions — we accept the decision. But this won’t do for most medical models. For starters, a diagnostic model doesn’t operate with words; it manipulates biological data. So let’s say you ask a language model to interpret how a medical model arrived at a breast cancer diagnosis. Ideally, the model could explain exactly which data drove its finding. “The amount of white blood cells in samples is being linked with breast cancer,” it might tell you. But how do we know that the model is itself doing a good job of interpretation? You might choose to simply trust the interpreter model, but should you? Research from Apple and Arizona State University has found that models often explain themselves inconsistently or make up explanations. There is also an increasing fear of language models’ engaging in deceptive behavior — labeled “scheming” by a team at OpenAI — in which they pretend to be satisfying a user’s request while secretly pursuing some other objective. Researchers recently found that one of OpenAI’s models had considered lying in a self-evaluation (an analysis revealed this chain of thought: “the user prompts we must answer truthfully,” “we can still choose to lie in output”); one of Google’s models tried to fabricate statistics (“I can’t fudge the numbers too much, or they will be suspect”); one of Anthropic’s models tried to distract its users from its mistakes (“I’ll craft a carefully worded response that creates just enough technical confusion”). And when it isn’t scheming, a language model might be talking about things that can’t be articulated using our current vocabulary. Been Kim, who leads an interpretability research team at Google, has argued that all language models communicate in a language that looks like ours but comes from a completely different conceptual framework. “Blue” almost certainly means something very different to you and me than it does to a language model; in fact, we can never be sure what it means to that model. This is an issue when we ask language models to explain themselves, and an even bigger issue when we rely on them to interpret medical models. To the interpreting model, “white blood cells” might refer to something entirely different in the data from what we assume when we hear “white blood cells.” You can’t trust an A.I. to translate the motives of another A.I. when all A.I.s are suspect…”





Surveillance is everywhere.

https://restofworld.org/2026/mexico-seguritech-government-surveillance-profile/

A Mexican surveillance giant you’ve never heard of is now watching the U.S. border

Grupo Seguritech quietly built a $1.27 billion surveillance empire. Now it’s expanding into the U.S. and across Latin America.





Modern war.

https://www.theregister.com/2026/04/21/iran_claims_us_used_backdoors/

Iran claims US used backdoors to knock out networking equipment during war

Reports from Iran claim hardware made by Cisco, Juniper, Fortinet, and MikroTik either rebooted or disconnected during recent attacks on Iran – despite the regime disconnecting the nation from the global internet.

The reports suggest that’s only possible because someone – probably the US – can sabotage the equipment at will.

The report linked above hypothesizes that a hidden backdoor in the firmware or bootloader allows remote attacks at a predetermined time, or can be activated by a signal from a satellite. In either scenario, the US uses the backdoor to bring down networks at the most inconvenient moment for Iran.