Sunday, May 03, 2026

Is the assumption that technology has never replaced humans?

https://thenextweb.com/news/china-court-ai-layoffs-illegal-labor-law

China has decided that firing a worker because an AI can do their job is illegal. No Western country has done the same.





Dealing with AI as evidence…

https://jurnalius.ac.id/ojs/index.php/jurnalIUS/article/view/1880

Admissibility of Artificial Intelligence as Electronic Evidence: Comparative Perspectives from Indonesia, the United States, and Japan

Artificial Intelligence (AI) is increasingly integrated into digital forensic and evidentiary processes, raising unresolved doctrinal questions in criminal procedure law. In Indonesia, although electronic evidence is formally recognized, the law does not yet provide specific admissibility standards for AI-based materials, particularly regarding authenticity, methodological reliability, process traceability, explainability, and accountability. This study examines the admissibility of AI as electronic evidence in Indonesia and compares it with legal approaches in the United States and Japan. It employs a normative juridical method using statutory, conceptual, and comparative approaches to analyze the evidentiary frameworks of the three jurisdictions. The findings show that the United States emphasizes expert gatekeeping and digital authentication, while Japan adopts a softer regulatory model centered on traceability, documentation, and actor accountability. By contrast, Indonesia still lacks specific procedural standards for assessing AI-generated outputs beyond the general recognition of electronic evidence. This article argues that the key legal issue is no longer whether electronic evidence is admissible in general, but how AI-based evidence should be evaluated in a legally reliable and accountable manner. The scientific contribution of this study lies in proposing a five-parameter evaluative model for AI admissibility—covering authenticity and integrity, process traceability, model performance, identity verification, and accountability. This model is offered as a normative reference for future reform of the Criminal Procedure Code and the Electronic Information and Transactions Law, while safeguarding legal certainty and justice.





Too much data? Trust AI to find the interesting bits?

https://ijlr.iledu.in/wp-content/uploads/2026/04/V6I555.pdf

ARTIFICIAL INTELLIGENCE AS A TOOL FOR EVIDENCE AND INVESTIGATION IN INTERNATIONAL CRIMINAL LAW

Artificial intelligence (AI) is changing how international criminal investigators collect, sort, authenticate, and present evidence. The shift is driven by the digital turn in atrocity documentation: conflicts now generate enormous volumes of user-generated videos, social-media posts, satellite images, geolocation data, intercepted communications, and sensor-derived material. International criminal law (ICL), however, remains anchored in fair-trial guarantees, adversarial testing, and cautious evidentiary assessment. This article examines AI as a practical investigative tool rather than as a substitute decision-maker. It argues that AI is most valuable in five functions: triage of large datasets, pattern detection, linkage analysis, authenticity checks, and courtroom visualization. Drawing on recent ICC practice, open-source investigation standards, and contemporary scholarship, the article shows that AI can strengthen accountability when deployed inside a rigorous legal framework. Yet it also identifies serious risks: bias in training data, black-box outputs, synthetic media, privacy intrusions, chain-of-custody gaps, and unequal technological capacities between prosecution and defense. [Isn’t that always the case? Bob] The central claim is that AI should be used as an assistive layer under strong human oversight. In ICL, the measure of success is not whether AI is impressive, but whether it produces evidence that is reliable, explainable, contestable, and consistent with the rights of the accused and the interests of victims.





Feel the heat?

https://www.reuters.com/legal/litigation/us-judge-says-senior-lawyers-must-pay-mistakes-by-subordinates-using-ai-tools-2026-05-01/

US judge says senior lawyers must pay for mistakes by subordinates using AI tools

A federal judge has sanctioned the manager of a California law firm over a junior attorney's artificial intelligence-assisted court brief that contained a false case citation, saying the responsibility for such errors extends to supervising lawyers.


U.S. Magistrate Judge Peter Kang in San Francisco in an order on Tuesday said the attorney, Lenden Webb, should have exercised greater oversight of a lawyer in his small law office who said she used AI to help craft the brief.



Saturday, May 02, 2026

Worth incorporating...

https://cyberscoop.com/cisa-nsa-five-eyes-guidance-secure-deployment-ai-agents/

US government, allies publish guidance on how to safely deploy AI agents

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. In order for this software to function it needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.
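The first and fifth categories (privilege and accountability) share a simple mitigation the guidance points toward: scope each agent to an explicit allowlist of tools and log every call. A minimal sketch of that idea in Python — the class, tool names, and logging scheme here are hypothetical illustrations, not anything from the agencies' document:

```python
# Hypothetical sketch of least-privilege tool gating for an AI agent.
# The agent can only invoke tools on an explicit allowlist, and every
# attempt (allowed or denied) lands in an append-only audit log, so a
# failure can be traced afterward -- addressing the accountability risk.

class ToolGate:
    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.audit_log = []  # append-only record of every call attempt

    def call(self, tool_name, func, *args):
        if tool_name not in self.allowed:
            self.audit_log.append(("DENIED", tool_name))
            raise PermissionError(f"agent not permitted to use {tool_name!r}")
        self.audit_log.append(("ALLOWED", tool_name))
        return func(*args)

# An agent scoped to read-only tools cannot alter files, change access
# controls, or delete audit trails -- the concrete harms the agencies cite.
gate = ToolGate(allowed_tools={"read_file", "search"})
```

The point of the sketch is the narrow blast radius: compromising this agent yields only the tools it was granted, not everything its host process can reach.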



Friday, May 01, 2026

Government by hallucination?

https://www.bespacific.com/over-80-percent-us-government-agencies-already-use-ai-agents/

Over 80% of US government agencies already use AI agents – and it’s only the beginning

ZDNET: “According to IDC research focused on public-sector readiness, agentic AI is no longer in the experimental phase for government; it is a leadership mandate. IDC finds that while many government agencies are implementing agent-driven workflows, few have moved beyond pilots. The rate of agentic AI adoption in government is driven by several factors:

  • Budgetary pressures

  • Sovereignty and compliance, including requirements for data residency, algorithmic transparency, and accountability

  • Workforce disruption, which points to skill gaps in cybersecurity and machine learning operations

  • Citizen expectations for faster, more personalized, and equitable services…”





I bet someone bet they’d do that…

https://www.cnbc.com/2026/04/30/senate-prediction-markets-trading-ban-kalshi-polymarket.html

U.S. senators ban themselves from prediction markets trading



(Related)

https://www.politico.com/news/2026/04/30/gillibrand-mccormick-team-up-to-crack-down-on-prediction-markets-00901671

Gillibrand, McCormick team up to crack down on prediction markets

Sens. Kirsten Gillibrand and Dave McCormick want to bar federal lawmakers from placing wagers on prediction markets and tighten rules around insider trading on the platforms.

The bipartisan duo introduced a bill Thursday to crack down on the wildly popular, freewheeling platforms, just hours after the Senate unanimously approved a resolution that would bar members of the upper chamber and their staffers from trading on prediction markets.

McCormick and Gillibrand’s bill would go further and prohibit the president, vice president and senior executive branch officials from trading on prediction markets. It would also require the Commodity Futures Trading Commission to determine whether its regulations need to be updated to prevent insider trading on the platforms and act on those findings.





Logical it is not.

https://thenextweb.com/news/eu-finance-ministers-mythos-anthropic-cybersecurity

Europe’s finance ministers are about to discuss an AI model none of them can access

Euro-area finance ministers will discuss Anthropic’s Mythos AI model with banking supervisors on Monday, according to a senior EU official. The technology that will be on the agenda is one that no government in the European Union has access to, built by a company that the United States Pentagon has designated a national security supply chain risk, and which the White House is simultaneously using through the National Security Agency while blocking its creator from expanding access to others. The ministers are expected to return to the topic after Monday’s discussion once they gather more information. The problem is that gathering information is precisely what they cannot do. As the senior EU official put it, governments are only hearing rumours about its capabilities.





Wish this was amusing…

https://newrepublic.com/article/209775/transcript-trump-rage-jim-comey-backfires-case-goes-off-rails

Transcript: Trump’s Rage at Jim Comey Backfires as Case Goes Off Rails

As Trump’s efforts to jail his enemies start looking truly buffoonish, a former federal prosecutor explains why the case targeting James Comey is already looking like a spectacular flop.



Thursday, April 30, 2026

Useful.

https://www.bespacific.com/how-the-experts-figure-out-whats-real-in-the-age-of-deepfakes/

How the experts figure out what’s real in the age of deepfakes

The Verge – no paywall: “In the days that followed the US and Israel’s joint military strike on Iran on Saturday, floods of images and videos that supposedly document the war have appeared online. Some are old or depict unrelated conflicts, others are made or manipulated with AI, and some are actually taken from military-themed video games like War Thunder. With misinformation spreading like wildfire, many people have placed their trust in reputable digital investigators. Organizations like The New York Times, Indicator, and Bellingcat have extensive verification procedures to avoid publishing synthetic or misleading content. “Audiences can turn to trusted, independent news organizations that take the time and effort to authenticate visuals and clearly explain sourcing,” Charlie Stadtlander, executive director for media relations and communications at The Times, told The Verge. Media authentication methods are rarely foolproof, but standards are extremely high, and experts have years of experience with spotting fake news. This process is no easy task, especially given the lack of reliable deepfake detection tools. But learning from the experts can help us to better protect ourselves when news events are dominating digital spaces — so here are some of the tricks they use…”





Could this be loose in the US? Would anyone notice?

https://www.schneier.com/blog/archives/2026/04/fast16-malware.html

Fast16 Malware

Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:

“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”
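To see why this class of sabotage is so hard to notice, consider a toy iterative calculation. This is purely illustrative of the attack class described above, not Fast16's actual mechanism: a relative error of one part per million injected into each step is invisible in any single step, yet compounds into roughly a 1% error over ten thousand steps.

```python
# Illustrative only: a toy model of silent computational sabotage.
# Injecting a one-part-per-million relative error into each step of an
# iterative calculation is undetectable per step but compounds to about
# a 1% error after 10,000 steps -- enough to corrupt a simulation result.

def iterate(steps, per_step_tamper=0.0):
    x = 1.0
    for _ in range(steps):
        x *= 1.001 * (1.0 + per_step_tamper)  # one "simulation" step
    return x

clean = iterate(10_000)
tampered = iterate(10_000, per_step_tamper=1e-6)
drift = tampered / clean - 1.0  # relative error, roughly 0.01 (about 1%)
```

For real high-precision engineering simulations, a silently compounded error of that size is the difference between faulty research results and equipment designed to fail.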

Another news article.

Lots of interesting details at the links.



Wednesday, April 29, 2026

Do what I say until I tell you to do what I do.

https://www.axios.com/2026/04/29/trump-anthropic-pentagon-ai-executive-order-gov

Scoop: White House workshops plan to bring back Anthropic

The White House is developing guidance that would allow agencies to get around Anthropic's supply chain risk designation and onboard new models including its most powerful yet, Mythos, according to sources familiar with the matter.

Why it matters: The Trump administration appears to be performing a 180 on a company it previously claimed was such a grave security risk that it had to be ripped out of the federal government.





Not a gag, but a plea…

https://www.siliconvalley.com/2026/04/28/openai-trial-judge-lectures-musk-altman-on-trading-social-media-barbs/

OpenAI trial judge lectures Musk, Altman on trading social media barbs

Ahead of opening statements on Tuesday, US District Judge Yvonne Gonzalez Rogers encouraged Musk and his counterparts at OpenAI to “control your propensity to use social media to make things worse outside this courtroom.”





Privacy is profitable?

https://www.marketscreener.com/news/gartner-estimates-u-s-states-privacy-fines-totaled-3-425-billion-in-2025-trend-expected-to-accel-ce7f59ddd08ff026

Gartner Estimates U.S. States' Privacy Fines Totaled $3.425 Billion in 2025; Trend Expected to Accelerate Through 2028

In the U.S., More Fines Have Been Levied Due to Violations of Privacy Laws in 2025 Than the Five Years Prior Combined.

Gartner, Inc., a business and technology insights company, has estimated that U.S. states gave out $3.425 billion in privacy-related fines in 2025. Gartner estimated the total value of privacy-related fines assessed in the United States in 2025 by compiling and aggregating enforcement actions and statutory private rights of action associated with state and federal privacy laws.

In the U.S., more fines have been levied due to violations of privacy laws in 2025 than the last five years combined. This trend is expected to accelerate through 2028 (see Figure 1).



Tuesday, April 28, 2026

Reality is not shifting… (Okay, maybe a bit.)

https://www.bespacific.com/hallucinations-by-west-lexis-ai/

“Hallucinations” by West & Lexis AI?

Via LLRX – “Hallucinations” by West & Lexis AI? – Michael Berman addresses benchmarks used for AI legal research platforms in the context of the risk of hallucinations in retrieval-augmented generation (RAG) AI outputs. As Berman states, verification, of course, is not only good advice, but also an ethical mandate.



(Related)

https://www.bespacific.com/claude-legal-is-here-and-its-worth-a-closer-look/

Claude Legal Is Here, and It’s Worth a Closer Look

Via LLRX – Claude Legal Is Here, and It’s Worth a Closer Look – With the recently launched Claude Legal plugin, Nicole L. Black recommends Claude’s AI to lawyers and legal professionals for tasks like document review and contract drafting. The Claude Legal plugin runs within Claude Cowork, a desktop app that you can download, and no specialized legal software subscription is required.



(Related)

https://www.bespacific.com/i-tested-claude-for-word-on-some-classic-litigator-tasks/

I Tested Claude for Word on Some Classic Litigator Tasks

Via LLRX – I Tested Claude for Word on Some Classic Litigator Tasks – Over the past several days Rebecca Fordon has been digging into the Claude for Word add-in, and the headline finding surprised her. On document-intensive legal work — cite-checking, consistency review, Table of Authorities assembly — it seems to need less supervision than either Claude on the web or Claude Code. Four tests bear that out, with limits worth knowing.





Suspicions confirmed.

https://www.coindesk.com/markets/2026/04/26/only-3-of-traders-drive-prediction-markets-accuracy-not-the-crowd-study-finds

Only 3% of traders drive prediction markets' accuracy, not the crowd, study finds

The Green Beret arrested for betting on a classified U.S. raid looked like a one-off scandal for prediction markets. A new study suggests he may be a more troubling data point: an extreme example of the small group of informed traders who, as the soldier is accused of doing, actually move prices on Polymarket, while the crowd loses money around them.

The study, part of a working paper released this week by Roberto Gómez-Cram, Yunhan Guo, Theis Ingerslev Jensen and Howard Kung of London Business School and Yale, directly tests the industry's core claim that the markets work owing to the massed knowledge of their participants.





IP is no longer secure-able?

https://futurism.com/artificial-intelligence/malus-clones-software-copyright

Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version

The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright.



Monday, April 27, 2026

Could you deliberately create exculpatory evidence in your chats?

https://www.bespacific.com/major-law-firms-are-warning-clients-anything-you-type-into-an-ai-chatbot-can-be-used-against-you-in-court/

Major law firms are warning clients: anything you type into an AI chatbot can be used against you in court…

Reuters: “As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.

In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases. “We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim. People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private…”





I see pros and cons.

https://www.theatlantic.com/technology/2026/04/ai-nationalization-trump-hegseth-anthropic-openai/686943/

What Happens If America Nationalizes AI?

AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.





Clearly a system design failure, compounded by the absence of any process to fix it.

https://www.9news.com/article/news/local/rime-flock-cam-pulled-over/73-e3f65018-32a5-4bb0-a4ac-26fb24dc9a15

He didn’t commit a crime, but Flock cam alerts keep getting him pulled over

Kyle Dausman was just driving through Cherry Hills Village when officers pulled him over without warning. Officers thought he had a warrant attached to his vehicle. He didn't. They released him.

A few days later, he was pulled over again by one of the same Cherry Hills Village police officers. Same thing. The officer quickly recognized him and let him go.

Lyons said the warrant traces back to a Gilpin County case and a court data entry error that confused Dausman's plate with the similar plate of a wanted man.

Lyons believes the root cause is a data entry issue involving Colorado license plates, which use both the letter O and the numeral zero.

"In Colorado data entry, we use both zeros and O's in license plates," Lyons said. "Sometimes the data entry will be for both."

He said the warrant returned hits when Dausman's plate was searched either way.

"They entered it for both," Lyons said. "It wasn't a mistake, one or the other. They just entered it for both an O and a zero, because we've run it both ways and the warrant pops up both ways."

Dausman said he tried to resolve the problem by contacting Gilpin County courts and the sheriff's office dispatch, and was told he needed to provide the name of the suspect tied to the warrant — information no one would give him because it involves an ongoing criminal investigation.