Thursday, April 30, 2026

Useful.

https://www.bespacific.com/how-the-experts-figure-out-whats-real-in-the-age-of-deepfakes/

How the experts figure out what’s real in the age of deepfakes

The Verge – no paywall: “In the days that followed the US and Israel’s joint military strike on Iran on Saturday, floods of images and videos that supposedly document the war have appeared online. Some are old or depict unrelated conflicts, others are made or manipulated with AI, and in some cases, are actually taken from military-themed video games like War Thunder. With misinformation spreading like wildfire, many people have placed their trust in reputable digital investigators. Organizations like The New York Times, Indicator, and Bellingcat have extensive verification procedures to avoid publishing synthetic or misleading content. “Audiences can turn to trusted, independent news organizations that take the time and effort to authenticate visuals and clearly explain sourcing,” Charlie Stadtlander, executive director for media relations and communications at The Times, told The Verge. Media authentication methods are rarely foolproof, but standards are extremely high, and experts have years of experience with detecting fake news. This process is no easy task, especially given the lack of reliable deepfake detection tools. But learning from the experts can help us to better protect ourselves when news events are dominating digital spaces — so here are some of the tricks they use…”





Could this be loose in the US? Would anyone notice?

https://www.schneier.com/blog/archives/2026/04/fast16-malware.html

Fast16 Malware

Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:

“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”
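The excerpt gives no actual Fast16 code, but the class of sabotage it describes (silently nudging the results of high-precision numerical computations) is easy to illustrate. Here is a toy Python sketch, entirely hypothetical and not derived from the malware, showing how a silent per-step relative error of one part per million compounds in an iterative simulation:

```python
# Illustrative only: how a tiny, silent perturbation of numerical results
# (the class of sabotage attributed to Fast16) compounds in an iterative
# simulation. This is a toy model, not code from the malware itself.

def simulate(steps, tamper=False):
    """Integrate simple exponential growth dx/dt = 0.1*x with Euler steps."""
    x = 1.0
    for _ in range(steps):
        x += 0.1 * x * 0.01  # dt = 0.01
        if tamper:
            x *= 1.000001    # silent 1e-6 relative error per step
    return x

clean = simulate(100_000)
tampered = simulate(100_000, tamper=True)
drift = (tampered - clean) / clean
print(f"relative drift after 100k steps: {drift:.2%}")
```

After 100,000 steps, the one-part-per-million nudge has compounded into roughly a 10.5% drift, the kind of divergence that can turn valid simulations into “faulty research results” or real equipment damage.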

Another news article.

Lots of interesting details at the links.



Wednesday, April 29, 2026

Do what I say until I tell you to do what I do.

https://www.axios.com/2026/04/29/trump-anthropic-pentagon-ai-executive-order-gov

Scoop: White House workshops plan to bring back Anthropic

The White House is developing guidance that would allow agencies to get around Anthropic's supply chain risk designation and onboard new models including its most powerful yet, Mythos, according to sources familiar with the matter.

Why it matters: The Trump administration appears to be performing a 180 on a company it previously claimed was such a grave security risk that it had to be ripped out of the federal government.





Not a gag, but a plea…

https://www.siliconvalley.com/2026/04/28/openai-trial-judge-lectures-musk-altman-on-trading-social-media-barbs/

OpenAI trial judge lectures Musk, Altman on trading social media barbs

Ahead of opening statements on Tuesday, US District Judge Yvonne Gonzalez Rogers encouraged Musk and his counterparts at OpenAI to “control your propensity to use social media to make things worse outside this courtroom.”





Privacy is profitable?

https://www.marketscreener.com/news/gartner-estimates-u-s-states-privacy-fines-totaled-3-425-billion-in-2025-trend-expected-to-accel-ce7f59ddd08ff026

Gartner Estimates U.S. States' Privacy Fines Totaled $3.425 Billion in 2025; Trend Expected to Accelerate Through 2028

In the U.S., More Fines Have Been Levied Due to Violations of Privacy Laws in 2025 Than the Five Years Prior Combined.

Gartner, Inc., a business and technology insights company, has estimated that U.S. states gave out $3.425 billion in privacy-related fines in 2025. Gartner estimated the total value of privacy-related fines assessed in the United States in 2025 by compiling and aggregating enforcement actions and statutory private rights of action associated with state and federal privacy laws.

In the U.S., more fines have been levied due to violations of privacy laws in 2025 than the last five years combined. This trend is expected to accelerate through 2028 (see Figure 1).



Tuesday, April 28, 2026

Reality is not shifting… (Okay, maybe a bit.)

https://www.bespacific.com/hallucinations-by-west-lexis-ai/

“Hallucinations” by West & Lexis AI?

Via LLRX – “Hallucinations” by West & Lexis AI? – Michael Berman addresses benchmarks used for AI legal research platforms in the context of the risk of hallucinations in retrieval-augmented generation (RAG) AI outputs. As Berman states, verification, of course, is not only good advice, but also an ethical mandate.



(Related)

https://www.bespacific.com/claude-legal-is-here-and-its-worth-a-closer-look/

Claude Legal Is Here, and It’s Worth a Closer Look

Via LLRX – Claude Legal Is Here, and It’s Worth a Closer Look – With the recently launched Claude Legal plugin, Nicole L. Black recommends Claude’s AI to lawyers and legal professionals for tasks like document review and contract drafting. The Claude Legal plugin runs within Claude Cowork, a desktop app that you can download; no specialized legal software subscription is required.



(Related)

https://www.bespacific.com/i-tested-claude-for-word-on-some-classic-litigator-tasks/

I Tested Claude for Word on Some Classic Litigator Tasks

Via LLRX – I Tested Claude for Word on Some Classic Litigator Tasks – Over the past several days Rebecca Fordon has been digging into the Claude for Word add-in, and the headline finding surprised her. On document-intensive legal work — cite-checking, consistency review, Table of Authorities assembly — it seems to need less supervision than either Claude on the web or Claude Code. Four tests bear that out, with limits worth knowing.





Suspicions confirmed.

https://www.coindesk.com/markets/2026/04/26/only-3-of-traders-drive-prediction-markets-accuracy-not-the-crowd-study-finds

Only 3% of traders drive prediction markets' accuracy, not the crowd, study finds

The Green Beret arrested for betting on a classified U.S. raid looked like a one-off scandal for prediction markets. A new study suggests he may be a more troubling data point: an extreme example of the small group of informed traders who, as the soldier is accused of doing, actually move prices on Polymarket, while the crowd loses money around them.

The study, part of a working paper released this week by Roberto Gómez-Cram, Yunhan Guo, Theis Ingerslev Jensen and Howard Kung of London Business School and Yale, directly tests the industry's core claim that the markets work owing to the massed knowledge of their participants.





IP is no longer securable?

https://futurism.com/artificial-intelligence/malus-clones-software-copyright

Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version

The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright.



Monday, April 27, 2026

Could you deliberately create exculpatory evidence in your chats?

https://www.bespacific.com/major-law-firms-are-warning-clients-anything-you-type-into-an-ai-chatbot-can-be-used-against-you-in-court/

Major law firms are warning clients: anything you type into an AI chatbot can be used against you in court…

Reuters: “As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.

In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. “We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private…”





I see pros and cons.

https://www.theatlantic.com/technology/2026/04/ai-nationalization-trump-hegseth-anthropic-openai/686943/

What Happens If America Nationalizes AI?

AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.





Clearly a system design failure. And the omission of a fix process.

https://www.9news.com/article/news/local/rime-flock-cam-pulled-over/73-e3f65018-32a5-4bb0-a4ac-26fb24dc9a15

He didn’t commit a crime, but Flock cam alerts keep getting him pulled over

Kyle Dausman was just driving through Cherry Hills Village when officers pulled him over without warning. Officers thought he had a warrant attached to his vehicle. He didn't. They released him.

A few days later, he was pulled over again by one of the same Cherry Hills Village police officers. Same thing. The officer quickly recognized him and let him go.

Lyons said the warrant traces back to a Gilpin County case and a court data entry error that confused Dausman's plate with the similar plate of a wanted man.

Lyons believes the root cause is a data entry issue involving Colorado license plates, which use both the letter O and the numeral zero.

"In Colorado data entry, we use both zeros and O's in license plates," Lyons said. "Sometimes the data entry will be for both."

He said the warrant returned hits when Dausman's plate was searched either way.

"They entered it for both," Lyons said. "It wasn't a mistake, one or the other. They just entered it for both an O and a zero, because we've run it both ways and the warrant pops up both ways."
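The behavior Lyons describes (one warrant that "pops up both ways" because it was entered under both the letter O and the numeral zero) is equivalent to a lookup that collapses the two characters into a single key. A minimal hypothetical sketch in Python, with an invented plate and warrant record:

```python
# Hypothetical sketch of the O/0 ambiguity: if a warrant is indexed under a
# normalized plate, both spellings of the plate resolve to the same record.
def normalize(plate: str) -> str:
    """Collapse letter O and digit 0 into a single canonical character."""
    return plate.upper().replace("O", "0")

# Invented warrant index keyed on normalized plates.
warrants = {normalize("ABC0123"): "example warrant record"}

print(normalize("ABCO123") in warrants)  # True: letter-O spelling matches
print(normalize("ABC0123") in warrants)  # True: digit-zero spelling matches
```

Entering the record under both spellings, as the Gilpin County clerk apparently did, has the same effect as this normalization, which is why Dausman's plate, differing from the wanted man's only by that one character, kept returning hits.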

Dausman said he tried to resolve the problem by contacting Gilpin County courts and the sheriff's office dispatch, and was told he needed to provide the name of the suspect tied to the warrant — information no one would give him because it involves an ongoing criminal investigation.



Sunday, April 26, 2026

Honest cops?

https://brooklynworks.brooklaw.edu/blr/vol91/iss2/6/

Police and AI: When Abundantly Helpful Becomes Intrinsically Harmful

Artificial intelligence (AI) has rapidly crept into nearly all aspects of life, including government, the criminal justice system, and policing. While Supreme Court Due Process jurisprudence has outlined certain boundaries for police interrogations, much police conduct is left for the states to regulate. Such regulation is sporadic and less restrictive than the public might assume, especially in the realm of police deception. Across jurisdictions, courts allow police to deceptively inform suspects that a witness identified the suspect as the perpetrator of a crime, or that the suspect’s fingerprints, DNA, or shoe prints were found at the scene of the crime. Police can even present fake evidence to suspects in an interrogation, including falsified lab reports, photographs, and more. With AI’s use expanding into law enforcement, there is a clear need to regulate police deception in interrogations before constitutional rights are infringed. This Note argues that while courts have long permitted various deceptive police tactics, the increasing sophistication and accessibility of AI tools pose unprecedented risks such as false confessions, bias, and potentially unwarranted public reprimand. Through an analysis of case law, the evolution of Miranda and Due Process jurisprudence, and emerging AI applications in policing, this Note demonstrates how AI-enabled deception could exacerbate Due Process violations, undermine public trust, and increase wrongful convictions. It concludes by urging state legislatures to preemptively prohibit the use of AI to create false evidence in interrogations, advocating for a state-by-state legislative approach as the most effective means to safeguard constitutional protections in a rapidly evolving world.





Law is a Matrix?

https://scholarworks.uark.edu/arlnlaw/31/

Prompt Engineering For Lawyers: Blue Pill Or Red Pill: Hallucinations Risks And An Introduction To Prompt Engineering

In The Matrix, Neo’s choice between the blue pill and the red pill is essentially a choice between a comfortable illusion and an unsettling reality. Lawyers now face a similar decision with artificial intelligence. They can take the blue pill: ignore artificial intelligence or treat it like just another search engine, continuing a comfortable illusion that the new technology may not transform the practice of law. Or lawyers can take the red pill: acknowledge that artificial intelligence will transform the practice of law and learn how to use it competently, ethically, and effectively.

This Article is for those who choose the red pill. It begins with the problem of hallucinations, which makes blind reliance on artificial intelligence a professional hazard, and then turns to the first step in using artificial intelligence productively: understanding how it differs from Googling. When artificial intelligence is approached as a role-playing collaborator, such as a litigator, contract drafter, or judge, lawyers can enhance the accuracy, tone, and usefulness of the responses it provides.





Outside the box?

https://ojs.scipub.de/index.php/MSC/article/view/8331

THE PROBLEM OF THE CONSTITUTIONAL AND LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE

This article examines the constitutional and legal problems arising against the background of the rapid development of artificial intelligence (AI), as well as the new realities generated by digital transformation. It offers a comparative analysis of the advanced constitutional practices of countries such as Chile, Greece, Mexico, and Brazil in the regulation of AI.

Referring to the theoretical concepts of prominent international scholars such as Lawrence Lessig, Frank Pasquale, and Mireille Hildebrandt, the article explores the principles of “code as law” and “legal protection by design.”

At the same time, it interprets the fundamental threats posed by AI in the spheres of algorithmic discrimination, the privacy of personal data, and neuro-rights.

The article proposes the application of a strict liability model within the civil law system of Azerbaijan for the compensation of damage caused by AI and suggests recognizing AI as an “autonomous source of risk.” In conclusion, it advances strategic solutions aimed at ensuring that national legislation evolves on the basis of the principles of digital constitutionalism and that the supremacy of human will over program code is preserved.





Maury Nichols points me to another interesting article.

https://www.straitstimes.com/multimedia/graphics/2026/04/ai-chatbots-privacy-risk/index.html?ref=thefuturist

Marcus asks AI chatbots various questions.

The questions seem entirely harmless. But they can tell the chatbots a lot about him.





Modern war.

https://carnegieendowment.org/research/2026/04/ukraine-russia-war-changing-warfare-practice-military-strategy

The New Revolution in Military Affairs

How Ukraine is driving doctrinal change in modern warfare.



Saturday, April 25, 2026

Is it the CFTC’s job to ensure state licenses are in order?

https://www.reuters.com/legal/government/cftc-sues-new-york-block-oversight-prediction-markets-2026-04-24/

CFTC sues New York to block oversight of prediction markets


The U.S. Commodity Futures Trading Commission sued New York on Friday, accusing the state of invading its authority to regulate prediction markets by filing lawsuits accusing Coinbase Financial Markets (COIN.O) and Gemini Titan (GEMI.O) of promoting gambling.

In a complaint filed in Manhattan federal court, the CFTC said the litigation filed on April 21 by New York Attorney General Letitia James "intrudes on the exclusive federal scheme Congress designed" to oversee commodity derivatives markets, including prediction markets.





Because anti-discrimination is discrimination?

https://coloradosun.com/2026/04/24/doj-joins-lawsuit-colorado-ai-law-federal-court/

Justice Department joins Elon Musk’s xAI in effort to block Colorado AI antidiscrimination law

The Department of Justice joined a lawsuit seeking to block Colorado’s first-in-the-nation artificial intelligence antidiscrimination law from taking effect, escalating a legal fight that began two weeks ago with a challenge filed by Elon Musk’s xAI. 

Senate Bill 205, which was signed into law in 2024, aims to regulate “high-risk” AI systems and protect consumers from so-called algorithmic discrimination, which is when a computer system produces biased results that disadvantage certain people, especially based on traits like race, gender, age or income. 

Attorneys for the federal government joined Musk’s xAI in arguing that the law jeopardizes the United States’ position as “the global AI leader” by requiring AI systems to “incorporate discriminatory ideology that prioritizes preferred demographic characteristics over accurate and merit-based outputs.”

“SB24-205 constrains the information that AI systems convey, obligates AI developers and deployers to discriminate, and then enforces the state-mandated discrimination with onerous policy, assessment, and disclosure requirements that will disproportionately burden small businesses and start-ups,” DOJ attorneys wrote in the 19-page complaint, which was filed in federal court in Denver.



Friday, April 24, 2026

Government parenting? How do you enforce this without checking every user logon?

https://thenextweb.com/news/norway-social-media-ban-under-16-age-verification

Norway plans to ban social media for children under 16 and shift age verification liability to platforms

The minority Labour government, led by PM Jonas Gahr Støre, announced the legislation on Friday. The age threshold has been raised from the 15-year limit proposed in the 2025 consultation, aligning Norway with Australia’s world-first ban that came into force in December. Ireland is also considering similar legislation.

The mechanism of the ban is as important as the age threshold. Under the proposed Norwegian legislation, social media companies, defined as platforms where users can create a profile, connect with other profiles, and share content without editorial oversight, will be required to implement effective age verification.

The burden of verifying age shifts from the child, who currently self-reports, to the platform. Norway’s existing digital identity infrastructure, BankID, is expected to play a role in the verification architecture. Platforms that fail to comply will face fines. The consultation draft proposed fines of up to NOK 20 million.



(Related)

https://www.theregister.com/2026/04/23/proton_ceo_age_checks_id_checkpoint/

Age checks could turn internet into an ID checkpoint, complains Proton CEO

In a blog post on Thursday, Andy Yen, CEO of Proton, argues that the current push for age checks risks flipping the web from anonymous by default to something closer to "show your papers" before you click.

The problem, he says, is that you can't reliably identify minors without identifying everyone else first, meaning systems built to protect kids inevitably sweep up adults too. "We cannot accept a world where every adult is expected to hand over ID as the price of going online."





Trying to be human?

https://www.businessinsider.com/ai-written-email-perfect-typos-new-chrome-plugin-2026-4

Now there's an AI tool that adds typos into your emails — so it looks like you didn't use AI



Thursday, April 23, 2026

At last, someone is doing something about the weather…

https://news.bitcoin.com/a-hair-dryer-may-have-gamed-a-paris-weather-sensor-for-34000-on-polymarket/

A Hair Dryer May Have Gamed a Paris Weather Sensor for $34,000 on Polymarket

The complaint follows two temperature anomalies at the CDG station. On April 6, the sensor recorded a jump of roughly 4 degrees Celsius within 12 minutes at approximately 6:30 p.m., briefly reaching 22.5 degrees Celsius before returning to normal. On April 15 at approximately 9:30 p.m., the reading climbed to 22 degrees Celsius under calm, cloudy skies before dropping back within minutes.

No neighboring stations recorded similar changes during either event. Wind direction and relative humidity showed no corresponding shifts.

On April 6, long-shot bets on Paris reaching 21 degrees Celsius paid out approximately $14,000 to at least one bettor whose account had been created days earlier, according to reporting by Le Monde and BFMTV. A similar wager on 22 degrees Celsius resolved in a bettor’s favor on April 15 for roughly $20,000.





The look of an AI arms race.

https://www.politico.eu/article/u-k-intelligence-100-nations-have-spyware-that-can-hack-britain/

UK intelligence: 100 nations have spyware that can hack Britain

More than half of the world's nation states are believed to have purchased technology that could be capable of hacking into Britain's infrastructure, companies and private networks, U.K. intelligence has found.

The U.K. National Cyber Security Centre — which is part of the GCHQ intelligence agency — believes around 100 countries have procured cyber intrusion software, suggesting the barrier for states to get their hands on the technology is dropping, the agency told POLITICO ahead of a discussion about its findings at its CYBERUK conference in Glasgow Wednesday.





First look?

https://www.politico.com/news/2026/04/22/house-republicans-roll-out-landmark-data-privacy-push-00886800?nid=0000015a-dd3e-d536-a37b-dd7fd8af0000&nname=playbook-pm&nrid=f1499b3a-1f47-4e35-80c8-e83140fb7df7

House Republicans roll out landmark data privacy push

Key House Republicans on Wednesday unveiled a landmark legislative effort to create a national data privacy standard, teeing up a push to enact sweeping changes to how tech and financial data are regulated.

The effort includes two bills — the SECURE Data Act, which deals with tech companies’ consumer data, and a second financial data privacy measure dubbed the GUARD Financial Data Act.



(Related)

https://fpf.org/press-releases/fpf-on-the-securing-and-establishing-consumer-uniform-rights-and-enforcement-over-data-secure-data-act/

FPF on the Securing and Establishing Consumer Uniform Rights and Enforcement Over Data (“SECURE Data”) Act

In the absence of a federal law, twenty-one states have enacted comprehensive privacy laws that, while varying in detail, have generally converged around a common framework. The “SECURE Data Act” largely follows that consensus model, which could facilitate compliance for businesses already navigating state requirements. However, several states have taken different approaches or amended their laws in recent years, including expansions related to health data, minors’ data, and geolocation—raising questions about the extent to which a federal baseline should reflect these alternatives.





Autonomous is expensive.

https://www.theguardian.com/us-news/2026/apr/22/pentagon-asks-for-54bn-in-pivot-towards-ai-powered-war

Pentagon asks for $54bn in pivot towards AI-powered war

The Pentagon is aiming to increase funding more than a hundredfold for an autonomous drone warfare program, according to budget documents released this week, signalling a major pivot towards AI-powered war.

In its 2027 budget, the Pentagon has asked for over $54bn to fund the Defense Autonomous Warfare Group, a 24,000% increase on last year.

An overview of the budget describes this money as going towards “autonomous and remotely operated systems across air, land, and above and below the sea,” including the “Drone Dominance” program.

The amount is over half the entire defence budget of the UK. In an opinion piece published yesterday, former CIA director David Petraeus said it was “the largest single commitment to autonomous warfare in history”.



Wednesday, April 22, 2026

Why not ignore the law if there are no consequences?

https://www.bespacific.com/trump-fought-to-keep-the-ballroom-fundraising-contract-secret-heres-whats-in-it/

Trump fought to keep the ballroom fundraising contract secret. Here’s what’s in it.

Follow-up to Banquet of Greed: Trump Ballroom Donors Feast on Federal Funds and Favors – See Washington Post – no paywall: “The agreement governing hundreds of millions in private donations was kept secret until a watchdog group sued and a judge ordered it disclosed [the full text of this document is embedded in this WaPo article – view the 14-page PDF without the paywall here]… “The Trump administration’s failure to disclose this contract was flatly unlawful,” said Wendy Liu, a Public Citizen attorney and lead counsel on the lawsuit, filed after the Park Service and the Interior Department failed to fulfill a public records request for the document. “The American people are entitled to transparency over this multi-million-dollar project.” The secrecy surrounding the contract mirrors the administration’s broader approach to the project. White House officials have declined to disclose the total amount raised, the identities of all donors or, until recently, basic details about the building’s design. Court documents show Trump knew he was going to tear down the East Wing at least two months before doing so, but he never told the public. The contract provisions, taken together, allow wealthy donors with business before the federal government to contribute anonymously to a sitting president’s pet project, while exempting the White House from key conflict of interest safeguards and limiting scrutiny by Congress and the public… The contract resembles templates used by the Park Service for more routine fundraising partnerships, with several notable differences: Provisions peppered throughout the agreement prevent the signatories from revealing the identities of anonymous donors, and a review process for detecting conflicts of interest with the Park Service and Interior Department makes no mention of doing the same for the president, other White House officials or the 14 other executive departments he oversees.





Still not a majority…

https://pogowasright.org/alabama-becomes-21st-state-with-comprehensive-consumer-privacy-law/

Alabama Becomes 21st State With Comprehensive Consumer Privacy Law

Hunton Andrews Kurth writes:

On April 17, 2026, Alabama Governor Kay Ivey signed into law the Alabama Personal Data Protection Act (HB 351) (“APDPA” or “the Act”), making Alabama the twenty-first state to enact a comprehensive consumer privacy law. The law goes into effect on May 1, 2027.

Alabama enacted the APDPA within an already maturing ecosystem of state-level privacy regulation that has increasingly coalesced around a shared statutory model. Rather than departing significantly from prevailing approaches, the Act largely aligns with the Virginia-style framework that has become the dominant template for U.S. comprehensive consumer privacy laws. Nevertheless, the APDPA contains several material distinctions in scope, applicability and enforcement that warrant careful examination.

The Structure and Main Provisions of the Act

At a structural level, the APDPA adopts the now-standard controller–processor paradigm, imposing obligations on entities that determine the purposes and means of processing personal data, while allocating more limited duties to processors acting on behalf of such entities. The Act also provides consumers a familiar set of data rights, including rights of access, correction, deletion and opt-out with respect to targeted advertising, sale of personal data and certain forms of profiling.

Read more about the Act’s provisions at Hunton.com.





When will this become inexcusable?

https://www.theguardian.com/technology/2026/apr/22/ai-hallucinations-found-in-high-profile-wall-street-law-firm-filing

AI hallucinations found in high-profile Wall Street law firm filing

The firm said that it maintains “comprehensive policies and training requirements governing the use of AI tools in legal work” that are designed to catch any potential errors.

However, the letter said those AI policies were not followed and that a secondary review process also “did not identify the inaccurate citations generated by AI”.





Shoot. I was going to bet on that.

https://www.reuters.com/legal/government/new-york-sues-coinbase-financial-markets-gemini-titan-allegedly-violating-state-2026-04-21/

New York sues prediction markets Coinbase and Gemini Titan, calls their operations gambling





About time?

https://www.theregister.com/2026/04/21/exfbi_cyber_chief_urges_felony_charges_ransomware/

Murder, she wrote: Ex-FBI chief wants some ransomware crims charged with homicide

If a cyberattack leads to a death, that's murder. A former FBI cyber division chief urged the US Justice Department to consider felony homicide charges against ransomware actors when attacks on hospitals lead to patient deaths.

In testimony before a US House of Representatives subcommittee hearing, Cynthia Kaiser, former deputy assistant director of the FBI's cyber division, implored lawmakers to "champion" the federal government to use three existing legal authorities to go after ransomware criminals who encrypt healthcare networks and systems.



Tuesday, April 21, 2026

As I have been warning…

https://www.bespacific.com/we-dont-really-know-how-ai-works-thats-a-problem/

We Don’t Really Know How A.I. Works. That’s a Problem

The New York Times: “For us to trust it on certain subjects, researchers in the growing field of interpretability might need to learn how to open the black box of its brain… A.I. system is to ask the model to explain itself. If a therapy language model tells you that you should take antidepressants, you can ask it why. “You have mood swings,” it might respond. “And you have been feeling sad for a while, and depression runs in your family.” Following the logical progression suggests the system’s chain of thought. This is what we do when other people make decisions. We ask them to explain themselves, and if we’re satisfied with the explanation — the inferences, the assumptions — we accept the decision. But this won’t do for most medical models. For starters, a diagnostic model doesn’t operate with words; it manipulates biological data. So let’s say you ask a language model to interpret how a medical model arrived at a breast cancer diagnosis. Ideally, the model could explain exactly which data drove its finding. “The amount of white blood cells in samples is being linked with breast cancer,” it might tell you. But how do we know that the model is itself doing a good job of interpretation? You might choose to simply trust the interpreter model, but should you? Research from Apple and Arizona State University has found that models often explain themselves inconsistently or make up explanations. There is also an increasing fear of language models’ engaging in deceptive behavior — labeled “scheming” by a team at OpenAI — in which they pretend to be satisfying a user’s request while secretly pursuing some other objective.
Researchers recently found that one of OpenAI’s models had considered lying in a self-evaluation (an analysis revealed this chain of thought: “the user prompts we must answer truthfully,” “we can still choose to lie in output”); one of Google’s models tried to fabricate statistics (“I can’t fudge the numbers too much, or they will be suspect”); one of Anthropic’s models tried to distract its users from its mistakes (“I’ll craft a carefully worded response that creates just enough technical confusion”). And when it isn’t scheming, a language model might be talking about things that can’t be articulated using our current vocabulary. Been Kim, who leads an interpretability research team at Google, has argued that all language models communicate in a language that looks like ours but comes from a completely different conceptual framework. “Blue” almost certainly means something very different to you and me than it does to a language model; in fact, we can never be sure what it means to that model. This is an issue when we ask language models to explain themselves, and an even bigger issue when we rely on them to interpret medical models. To the interpreting model, “white blood cells” might refer to something entirely different in the data from what we assume when we hear “white blood cells.” You can’t trust an A.I. to translate the motives of another A.I. when all A.I.s are suspect…”





Surveillance is everywhere.

https://restofworld.org/2026/mexico-seguritech-government-surveillance-profile/

A Mexican surveillance giant you’ve never heard of is now watching the U.S. border

Grupo Seguritech quietly built a $1.27 billion surveillance empire. Now it’s expanding into the U.S. and across Latin America.





Modern war.

https://www.theregister.com/2026/04/21/iran_claims_us_used_backdoors/

Iran claims US used backdoors to knock out networking equipment during war

Reports from Iran claim hardware made by Cisco, Juniper, Fortinet, and MikroTik either rebooted or disconnected during recent attacks on Iran – despite the regime disconnecting the nation from the global internet.

The reports suggest that’s only possible because someone – probably the US – can sabotage the equipment at will.

The report linked above hypothesizes that a hidden backdoor in the firmware or bootloader allows remote attacks at a pre-determined time, or can be activated by a signal from a satellite. In either scenario, the US uses the backdoor to bring down networks at the moment most inconvenient for Iran.



Monday, April 20, 2026

We still need non-artificial intelligence? Who’da guessed!

https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/

How AI Helps the Best and Hurts the Rest

Can generative AI serve as an on-demand business adviser? A field experiment with hundreds of small business owners in Kenya found that AI access boosted revenues and profits by 15% for high performers — but caused a nearly 10% decline for those who had already been struggling. The culprit: Weaker performers followed generic or misleading AI advice because they lacked the judgment to filter it out. Leaders deploying AI at scale must design their rollouts carefully to avoid widening performance gaps.





Nothing fishy here! Move along.

https://uk.finance.yahoo.com/news/somebody-keeps-betting-hundreds-millions-103004349.html

Somebody Keeps Betting Hundreds of Millions on Trump's Next Iran Post. They Keep Winning. Megyn Kelly Wants to Know Who

On Saturday, March 21, Trump posted on Truth Social that he would "obliterate" Iran's power plants unless Iran reopened the Strait of Hormuz within 48 hours. That deadline landed Monday morning.

Oil markets braced. Strikes on energy infrastructure would spike crude prices — more expensive gas at the pump, jittery stock markets, a financial shock across every 401(k) in the country.

At 6:49 a.m. Monday, someone placed a massive bet that none of that would happen.

In a single minute, whoever it was sold roughly half a billion dollars' worth of oil contracts — a bet that oil would soon get cheaper, not more expensive. Simultaneously, they bought stock futures — a bet that the market would rally. That minute saw nine times the normal trading activity for that time of day. There was no public news to explain any of it.

If Trump had gone through with the strikes at his own deadline, the bet would have blown up. Oil would have spiked. Stocks would have dropped. Whoever placed the trade could have lost hundreds of millions within minutes.

Just after 7 a.m., Trump posted that he was calling off the strikes.

Oil prices crashed more than 10%. Stock futures jumped more than 2.5%. The Dow closed up more than 1,000 points. Whoever placed the bets won on both sides.
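The article's figures allow a rough back-of-envelope on the oil leg. This is illustrative arithmetic only: the actual contract sizes, entry prices, and the size of the stock-futures leg were not reported, so the notionals below are assumptions drawn from the reported "roughly half a billion dollars" and "more than 10%."

```python
# Back-of-envelope P&L for the trades described above.
# Notionals and percentages are taken from the article's rough
# figures; exact positions are unknown, so treat this as a sketch.

def short_pnl(notional_sold: float, price_drop_pct: float) -> float:
    """Profit on a short position when the price falls by price_drop_pct."""
    return notional_sold * price_drop_pct

def long_pnl(notional_bought: float, price_rise_pct: float) -> float:
    """Profit on a long position when the price rises by price_rise_pct."""
    return notional_bought * price_rise_pct

# ~$500M of oil contracts sold, oil down more than 10%:
oil_gain = short_pnl(500_000_000, 0.10)
print(f"Oil leg: ~${oil_gain:,.0f}")  # ~$50,000,000

# The equity-futures leg's size wasn't reported; per $100M bought,
# a 2.5% rally yields:
equity_gain = long_pnl(100_000_000, 0.025)
print(f"Equity leg per $100M: ~${equity_gain:,.0f}")  # ~$2,500,000
```

Even on these conservative assumptions, the oil leg alone clears tens of millions of dollars in under an hour, which is why the trade drew scrutiny.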





War is an economic event…

https://euromaidanpress.com/2026/04/18/ukraine-cut-russias-oil-exports-by-880000-barrels-in-one-day-thats-100-million-every-24-hours/

Ukraine cut Russia’s oil exports by 880,000 barrels in one day — that’s $100 million every 24 hours



Sunday, April 19, 2026

I’m not sure I understand. (Place your bets now!)

https://blogs.lse.ac.uk/businessreview/2026/04/16/prediction-markets-have-made-uncertainty-itself-a-tradable-asset/

Prediction markets have made uncertainty itself a tradable asset

The history of prediction markets can be traced back to Francis Galton’s ox and Kenneth Arrow’s promise. But their recent stratospheric rise is reliant on our polycrisis era. Bets can be made on elections, interest rates and war. More uncertainty leads to more disagreement, more trading and larger markets. Chirantan Chatterjee explains what this reveals about the world.





Citizenship requires us to keep an eye on government…

https://www.engadget.com/apps/judge-sides-with-creators-of-banned-ice-trackers-who-allege-dhs-and-doj-violated-their-first-amendment-rights-191701801.html

Judge sides with creators of banned ICE trackers who allege DHS and DOJ violated their First Amendment rights

A judge has granted the makers of the "ICE Sightings - Chicagoland" Facebook group and the Eyes Up app a preliminary injunction to stop the Trump administration from coercing platforms to take these projects down. Judge Jorge L. Alonso of the United States District Court for the Northern District of Illinois found that the plaintiffs, Kassandra Rosado and Kreisau Group, are likely to succeed in their case, which alleges that the government suppressed protected speech under the First Amendment by strong-arming Facebook and Apple into removing ICE monitoring efforts.

Both Eyes Up and ICE Sightings - Chicagoland use publicly available information to keep tabs on ICE activity. But after pressure from Trump officials, they were removed from Apple's App Store and Facebook, respectively.





Figure out your responsibility.

https://www.ecgi.global/publications/blog/algorithmic-incompetence-the-fiduciary-duty-your-board-is-already-breaching

Algorithmic Incompetence: The Fiduciary Duty Your Board Is Already Breaching

Whoever exercises a function affecting third parties cannot delegate judgment to a system they neither understand nor supervise.

A pillow in the wrong hands suffocates; in the right hands, it supports. Roberto Cingolani's metaphor captures what corporate law has always known: responsibility lies not with the instrument but with whoever adopts it without understanding its implications.

In boardrooms across Europe and North America, a quiet abdication is underway. Boards are adopting algorithmic systems they do not understand, delegating comprehension to opaque technologies, and assuming that regulatory grace periods exempt them from thinking. They are wrong. The duty to understand what you govern is not a novelty of the AI Act — it is an ancient obligation that artificial intelligence now renders inescapable.





Modern war.

https://www.researchgate.net/profile/Muhammad-Faisal-Sddiqui/publication/403643037_Artificial_Intelligence_in_Future_Warfare_Ethical_Frameworks_and_the_Regulation_of_Lethal_Autonomous_Weapons_IEEE_Transactions_on_Technology_and_Society/links/69d73ef05518257d60e8ede8/Artificial-Intelligence-in-Future-Warfare-Ethical-Frameworks-and-the-Regulation-of-Lethal-Autonomous-Weapons-IEEE-Transactions-on-Technology-and-Society.pdf

Artificial Intelligence in Future Warfare: Ethical Frameworks and the Regulation of Lethal Autonomous Weapons

The integration of artificial intelligence into weapons systems has compressed the decision cycle of lethal engagement from hours to milliseconds, outpacing the international legal and ethical frameworks designed to constrain state violence. This paper surveys the landscape of deployed and tested lethal autonomous weapons systems (LAWS), analyzes the adequacy of existing international law relative to current AI capabilities, and proposes a regulatory structure calibrated to the actual risk profile of autonomous lethality. We examine nine real-world systems -- from the Kargu-2's documented autonomous engagement in Libya (2020) to Israel's "Lavender" AI targeting in Gaza (2023-2024) and the ongoing 2026 Iran-US-Israel conflict "Operation Epic Fury," the largest AI-assisted warfare campaign in recorded history -- and classify each using a three-tier autonomy model: human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-out-of-the-loop (HOOTL). Our gap analysis of the Geneva Conventions, the Convention on Certain Conventional Weapons (CCW), and International Humanitarian Law (IHL) identifies four critical regulatory failures: the absence of a binding definition of "meaningful human control," an accountability vacuum when LAWS cause civilian casualties, a speed asymmetry between AI warfare timescales and legal review processes, and the dual-use nature of civilian AI technologies. To address these gaps, we propose a five-tier governance framework scaling regulatory stringency with the product of autonomy level and lethality threshold. The framework carries direct implications for stalled UN CCW Group of Governmental Experts negotiations, offering a technically grounded basis for legally binding distinctions that current diplomatic language lacks.
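The abstract describes the proposed framework only at a high level: stringency scales with the product of autonomy level and lethality threshold. As a minimal sketch, assuming a numeric autonomy scale (1 = HITL, 2 = HOTL, 3 = HOOTL) and a lethality value normalized to [0, 1] — none of which is specified in the paper — tier assignment might look like:

```python
# Hypothetical sketch of the paper's five-tier governance idea:
# regulatory stringency scales with autonomy x lethality. The actual
# scales, cutoffs, and tier definitions are NOT given in the abstract;
# every number here is an assumption for illustration.

HITL, HOTL, HOOTL = 1, 2, 3  # human-in / on / out-of-the-loop

def governance_tier(autonomy: int, lethality: float) -> int:
    """Map (autonomy, lethality in [0, 1]) to a governance tier 1..5.

    risk = autonomy * lethality ranges over [0, 3]; we bucket it
    evenly into five tiers, with tier 5 the most stringent.
    """
    if not 0.0 <= lethality <= 1.0:
        raise ValueError("lethality must be normalized to [0, 1]")
    risk = autonomy * lethality        # the product named in the abstract
    tier = 1 + int(risk / 3.0 * 5)     # even buckets over [0, 3]
    return min(tier, 5)

# A HOOTL system with high lethality lands in the top tier:
print(governance_tier(HOOTL, 0.9))  # 5
# A HITL system with low lethality stays in the lightest tier:
print(governance_tier(HITL, 0.2))   # 1
```

The design point the paper seems to be making is that neither autonomy nor lethality alone should drive stringency; a fully autonomous but non-lethal system and a human-controlled lethal one both score below a fully autonomous lethal one.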





The only good terrorist is…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6549339

Human Rights related to AI in Counterterrorism

Counterterrorism outside armed conflict increasingly relies on Artificial Intelligence (AI). States use AI notably for detecting, predicting, and responding to terrorism. Despite declarations by States and regional organizations that AI must be used in compliance with international human rights law, there is still insufficient clarity on how human rights law guides and governs the lawful use of AI in counterterrorism. Accordingly, this chapter analyses the key human rights that are relevant to AI in counterterrorism and that help determine its lawful use. This concerns, notably, the right to privacy; the rights to liberty and security; the principle of non-discrimination; the right to freedom of expression; the right to freedom of peaceful assembly; and the rights to life and to freedom from ill-treatment. The chapter assesses how these rights bear on the use of AI in counterterrorism by relating them to the functions of AI applications. This is achieved through analysis of international and national rules and jurisprudence that are directly or indirectly pertinent.





I thought Trump still hated Musk? Does he hate the French more?

https://www.cnbc.com/2026/04/18/justice-department-france-probe-exlon-musk-x.html

Justice Department refuses to assist French probe into Musk’s X, WSJ reports

The U.S. Justice Department has told French law enforcement it will not assist with efforts to investigate tech billionaire Elon Musk’s social media platform X, The Wall Street Journal reported on Saturday, citing a letter from the DOJ’s Office of International Affairs, dated Friday.