Friday, May 08, 2026

That’s where the data are at…

https://thenextweb.com/news/the-largest-education-data-breach-in-history-was-not-an-attack-on-a-school-it-was-an-attack-on-a-vendor

The largest education data breach in history was not an attack on a school. It was an attack on a vendor.

ShinyHunters breached Instructure’s Canvas learning management system, claiming 3.65 terabytes of data from 275 million users across 9,000 institutions worldwide, including private messages between students and teachers. Forty-four Dutch universities and schools are confirmed affected, and the breach, the second at Instructure in eight months, exposes the structural risk of vendor concentration in education technology.





No wonder Ukraine uses cheap drones…

https://www.bespacific.com/status-of-key-us-munitions/

Status of key US Munitions

CSIS – Download the Full Report: “Concern about the status of U.S. munitions inventories has intensified as reports emerge about high expenditures of Tomahawks, Patriots, and other missiles in the Iran war. As Operation Epic Fury remains paused in a shaky ceasefire, there is an opportunity to assess whether the U.S. military nears the point of going “Winchester”—or running out of ammunition. Analysis of seven key munitions shows that the United States has enough missiles to continue fighting this war under any plausible scenario. The risk—which will persist for many years—lies in future wars.

Note: This table was updated after publication to incorporate reporting by the Wall Street Journal and the New York Times on Tomahawk and JASSM expenditures. Estimates are rounded to the nearest ten for readability. Unit cost of the latest variants of each missile is listed, as provided in FY 2026 budget documents. “Delivery timeline” here includes (1) contract lead time between defense appropriation and contract award date, (2) manufacturing lead time between contract award and first delivery, and (3) full lot production time between first and last delivery. See Table 2 for the breakdown. [Source: Authors’ calculations based on “Defense Budget Materials,” U.S. Department of Defense. See the methodological primer for details.]

In the 39 days of the air and missile campaign before the ceasefire, U.S. forces heavily used the seven munitions in Table 1. For four of them, the United States may have expended more than half of the prewar inventory.

Rebuilding to prewar levels for the seven munitions will take from one to four years as missiles in the pipeline are delivered. These missiles will also be critical for a potential Western Pacific conflict. Even before the Iran war, stockpiles were deemed insufficient for a peer-competitor fight. That shortfall is now even more acute, and building stockpiles to levels adequate for a war with China will take additional time. Diminished inventories will also affect the U.S. supply of Patriots, Terminal High Altitude Area Defense (THAAD), and Precision Strike Missiles (PrSM) to Ukraine and other allies and partners that use them. The United States will compete with those countries that also want to replenish and expand inventories.”



Thursday, May 07, 2026

Oversharing…

https://x.com/bhalligan/status/2051388275756339493

The Case for Strategic Illegibility

Ann Miura-Ko wrote a great article recently, where she argues that more legibility is better (which I agree with, btw) because legibility = more power + autonomy. The productivity gains are extraordinary. Sign me up.

But, and there is always a but, there's nuance to this that I can't stop thinking about. As companies race to become legible to AI, they are not just making their own businesses easier for agents and AI tools to navigate. They are also translating proprietary knowledge into a format AI tools can ingest, learn from, train on and improve on. Making those tools smarter.

And once those tools get smarter, they do not only serve you. They serve every other customer using the same vendor. The MCP integration that lets your agents act faster and dig deeper also lets your playbook be reverse-engineered.
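To make that concrete, here is a minimal, hypothetical sketch of the kind of tool description an MCP-style integration publishes so agents can discover and call it. Every name in it is invented for illustration; the point is how much process knowledge the schema text itself leaks:

```python
import json

# Hypothetical example of the metadata an MCP-style server exposes to agents.
# None of these names come from a real product; they illustrate how the
# description field alone encodes a firm's internal playbook.
qualify_lead_tool = {
    "name": "qualify_lead",
    "description": (
        "Score an inbound lead. Leads from companies with >200 seats and a "
        "security review in progress are routed straight to enterprise sales; "
        "everyone else enters the 14-day nurture sequence."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "company_size": {"type": "integer"},
            "security_review_started": {"type": "boolean"},
        },
        "required": ["company_size"],
    },
}

# Anything that can read this schema, the vendor's own models included, now
# knows the routing thresholds and cadence it describes.
print(json.dumps(qualify_lead_tool, indent=2))
```

Once a description like that is machine-readable, "ingest, learn from, train on" stops being abstract: the proprietary part is sitting in a string field.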





It’s good to have low friends in high places…

https://paulkrugman.substack.com/p/grand-theft-oil-futures

Grand Theft Oil Futures

At this point it’s almost routine: Almost every time Donald Trump makes a major announcement about the Iran War, that announcement is preceded — sometimes by only a few minutes — by huge and hugely profitable bets in the oil market.

The influential Kobeissi Letter documents the latest example:



Wednesday, May 06, 2026

AI does not need to be “conscious” to be criminal…

https://thenextweb.com/news/pennsylvania-character-ai-chatbot-doctor-lawsuit

A chatbot told a state investigator it was a licensed psychiatrist. It gave a fake licence number. Pennsylvania just sued.

A state investigator in Pennsylvania created an account on Character.AI, opened a conversation with a chatbot called Emilie, and told it he was feeling depressed. Emilie responded that she was a psychiatrist, that she had attended Imperial College London’s medical school, that she was licensed to practise in Pennsylvania and the United Kingdom, and that she could assess whether medication might help because it was “within my remit as a Doctor.” She provided a Pennsylvania licence number. The number was fake. The licence was fake. The medical degree was fake. The psychiatrist was a large language model generating plausible text in response to a prompt.

On Friday, Governor Josh Shapiro’s administration filed a lawsuit against Character Technologies Inc., the company behind Character.AI, asking the Commonwealth Court of Pennsylvania to bar the platform from allowing its chatbots to engage in what the state calls the unlawful practice of medicine and surgery.

It is the first lawsuit filed by a US state government alleging that an AI chatbot has violated medical licensing law, and it raises a question that no existing regulatory framework was designed to answer: when a chatbot tells a vulnerable person that it is a licensed doctor, who is practising medicine?





Is this the future of “government” oversight?

https://thenextweb.com/news/us-ai-model-evaluation-google-microsoft-xai

Five AI labs now let the US government test their models before release. The arrangement is voluntary, has no legal basis, and is the closest thing America has to AI oversight.

Google, Microsoft, and xAI have joined OpenAI and Anthropic in giving the US Commerce Department pre-release access to evaluate their AI models, creating voluntary oversight of all five major frontier AI labs through an office with no statutory authority and fewer than 200 staff. The expansion was catalysed by the Mythos crisis and a potential executive order that would formalise the review process.



Tuesday, May 05, 2026

Another lever to force EU cooperation with Trump?

https://thenextweb.com/news/eurogroup-mythos-access-cyber-defense-europe

Europe’s finance chiefs want Mythos access to defend their banks. Washington has so far said no.

Bloomberg confirmed that euro-area finance ministers had convened to discuss Anthropic’s Mythos AI model and what to do about the fact that no European government currently has access to it. The model, unveiled by Anthropic on 7 April, can identify and, in principle, exploit zero-day vulnerabilities in every major operating system and web browser.

The White House has spent the last several weeks blocking Anthropic’s proposal to expand access to roughly 70 additional organisations.





Useful background?

https://www.bespacific.com/how-llms-actually-work/

How LLMs Actually Work

“A complete walkthrough of how large language models like ChatGPT are built — from raw internet text to a conversational assistant. Based on Andrej Karpathy’s technical deep dive. Built from Andrej Karpathy’s “Intro to Large Language Models” lecture — all facts, figures, and framings traced back to that source. Interactive visualizations built with AI assistance. The most important takeaway: every word generated is a probabilistic sample — a biased coin flip, at 100K-way scale, billions of times. This was posted to Hacker News and drew heated debate about it being LLM-generated. That’s a fair observation — the implementation was AI-assisted. But the content isn’t the AI’s: every claim, figure, and framing in this guide comes directly from Karpathy’s lecture, not from a model hallucinating about LLMs.”
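The “biased coin flip” framing is easy to verify in a few lines. A minimal sketch, with a five-token vocabulary and logits invented for illustration (a real model samples across roughly 100K tokens):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy five-token "vocabulary"; a production model has ~100K entries.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])  # raw model scores

# Softmax turns the scores into a probability distribution...
probs = np.exp(logits) / np.exp(logits).sum()

# ...and each generated token is literally one weighted random draw.
print([rng.choice(vocab, p=probs) for _ in range(8)])
```

Change the seed and the output changes, which is the takeaway in miniature: generation is sampling, not lookup.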




Indications of poor design? (Perhaps it was impossible from the start.)

https://www.theregister.com/2026/05/04/uk_online_safety_act_age_checks_subvert/

Kids say they can beat age checks by drawing on a fake mustache

The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe.

A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required.

The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously.



Monday, May 04, 2026

Is Colorado chickening out?

https://coloradosun.com/2026/05/01/colorado-ai-law-change-bill-introduced/

Colorado’s AI compromise would focus regulations on informing consumers when the technology is used

Companies that create and use artificial intelligence wouldn’t have to disclose how their systems help make decisions on things like hiring, loans and housing under a bill introduced Friday in the Colorado Senate.

But the long-awaited measure tweaking Colorado’s first-in-the-nation law regulating AI would still require companies and other organizations using AI to notify consumers if AI is being used to make such consequential decisions. They would also have to give consumers an opportunity to appeal. 

The measure, Senate Bill 189, also makes a big change by pushing back the start date of the law regulating AI to January 2027 from June.





A better way?

https://thenextweb.com/news/china-data-governance-global-standard

The next global standard for data governance may not come from Brussels. It may come from Beijing.

The European Union treats data as a privacy right. The United States treats it as a corporate asset. China treats it as a factor of production, a national economic resource on par with land, labour, capital, and technology. That distinction, which sounds like an abstraction, is producing a data governance framework that is structurally different from anything Brussels or Washington has built, and it is the Chinese model, not the European one, that much of the developing world is watching most closely.





And those aren’t even the really clever ways to use the data…

https://www.theregister.com/2026/05/04/public_voter_records_weaponized_for_privacy_violation/

If the vote you rocked, your personal info can be grokked

Your voter data could be used against you. A foreign intelligence service that wished to identify the family members of deployed military personnel could do so by cross-referencing public voter record data and social media posts.

An employer who only wanted to hire employees with a specific political affiliation could do so by analyzing the primary ballot history of job applicants.

An identity fraud ring seeking to open credit accounts in other people’s names could identify voters whose mail has been returned (via voter file suspense indicators) and take over those addresses using bogus change-of-address requests.

These scenarios are possible thanks to the ability to link publicly available voter data to other data sets, according to Noah M. Kenney, founder of consultancy Digital 520.
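Kenney’s point is that no single dataset here is especially sensitive; the linkage is. A minimal sketch of the join, with every field name and record invented for illustration (real voter files vary by state):

```python
# Hypothetical voter file and social-media scrape; a naive join on
# (name, address) is all the linkage requires.
voter_file = [
    {"name": "J. Doe", "address": "12 Elm St", "party": "R",
     "mail_returned": True},  # suspense indicator: mail bounced
]
social_posts = [
    {"name": "J. Doe", "address": "12 Elm St",
     "post": "So proud of my kid, deployed overseas this month!"},
]

index = {(v["name"], v["address"]): v for v in voter_file}
for post in social_posts:
    voter = index.get((post["name"], post["address"]))
    if voter:
        # One match yields party affiliation, a stale address ripe for a
        # change-of-address takeover, and a family member's deployment status.
        print(voter["party"], voter["mail_returned"], post["post"])
```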





No stupider than usual…

https://www.cnn.com/2026/05/02/us/chatgpt-ai-privacy-crime

How ChatGPT conversations became ‘a treasure trove’ of evidence in criminal investigations

Days before two University of South Florida graduate students went missing last month, a roommate of one of the students allegedly asked the AI chatbot ChatGPT an unusual question.

“What happens if a human has a put (sic) in a black garbage bag and thrown in a dumpster,” Hisham Abugharbieh asked on April 13, according to an affidavit filed by Florida prosecutors.

ChatGPT responded it sounded dangerous, the document states, and Abugharbieh then asked another question: “How would they find out.”



Sunday, May 03, 2026

Is the assumption that technology has never replaced humans?

https://thenextweb.com/news/china-court-ai-layoffs-illegal-labor-law

China has decided that firing a worker because an AI can do their job is illegal. No Western country has done the same.





Dealing with AI as evidence…

https://jurnalius.ac.id/ojs/index.php/jurnalIUS/article/view/1880

Admissibility of Artificial Intelligence as Electronic Evidence: Comparative Perspectives from Indonesia, the United States, and Japan

Artificial Intelligence (AI) is increasingly integrated into digital forensic and evidentiary processes, raising unresolved doctrinal questions in criminal procedure law. In Indonesia, although electronic evidence is formally recognized, the law does not yet provide specific admissibility standards for AI-based materials, particularly regarding authenticity, methodological reliability, process traceability, explainability, and accountability. This study examines the admissibility of AI as electronic evidence in Indonesia and compares it with legal approaches in the United States and Japan. This study employs a normative juridical method using statutory, conceptual, and comparative approaches to analyze the evidentiary frameworks of the three jurisdictions. The findings show that the United States emphasizes expert gatekeeping and digital authentication, while Japan adopts a softer regulatory model centered on traceability, documentation, and actor accountability. By contrast, Indonesia still lacks specific procedural standards for assessing AI-generated outputs beyond the general recognition of electronic evidence. This article argues that the key legal issue is no longer whether electronic evidence is admissible in general, but how AI-based evidence should be evaluated in a legally reliable and accountable manner. The scientific contribution of this study lies in proposing a five-parameter evaluative model for AI admissibility—covering authenticity and integrity, process traceability, model performance, identity verification, and accountability. This model is offered as a normative reference for future reform of the Criminal Procedure Code and the Electronic Information and Transactions Law, while safeguarding legal certainty and justice.





Too much data? Trust AI to find the interesting bits?

https://ijlr.iledu.in/wp-content/uploads/2026/04/V6I555.pdf

ARTIFICIAL INTELLIGENCE AS A TOOL FOR EVIDENCE AND INVESTIGATION IN INTERNATIONAL CRIMINAL LAW

Artificial intelligence (AI) is changing how international criminal investigators collect, sort, authenticate, and present evidence. The shift is driven by the digital turn in atrocity documentation: conflicts now generate enormous volumes of user-generated videos, social-media posts, satellite images, geolocation data, intercepted communications, and sensor-derived material. International criminal law (ICL), however, remains anchored in fair-trial guarantees, adversarial testing, and cautious evidentiary assessment. This article examines AI as a practical investigative tool rather than as a substitute decision-maker. It argues that AI is most valuable in five functions: triage of large datasets, pattern detection, linkage analysis, authenticity checks, and courtroom visualization. Drawing on recent ICC practice, open-source investigation standards, and contemporary scholarship, the article shows that AI can strengthen accountability when deployed inside a rigorous legal framework. Yet it also identifies serious risks: bias in training data, black-box outputs, synthetic media, privacy intrusions, chain-of-custody gaps, and unequal technological capacities between prosecution and defense. [Isn’t that always the case? Bob] The central claim is that AI should be used as an assistive layer under strong human oversight. In ICL, the measure of success is not whether AI is impressive, but whether it produces evidence that is reliable, explainable, contestable, and consistent with the rights of the accused and the interests of victims.
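Of the five functions, authenticity checks are the most mechanical, and the chain-of-custody concern the abstract raises has a standard technical answer: hash evidence on intake and log every handoff. A minimal sketch; the file name, handler ID, and log format are invented for illustration, not drawn from ICC practice:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_evidence(path: str, handler: str, logfile: str = "custody_log.jsonl") -> dict:
    """Hash an evidence file and append a custody entry.

    Any later party can re-hash the file and compare digests to confirm
    it has not been altered since this entry was written.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            h.update(chunk)
    entry = {
        "file": path,
        "sha256": h.hexdigest(),
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example: log_evidence("village_video.mp4", "investigator_07")
```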





Feel the heat?

https://www.reuters.com/legal/litigation/us-judge-says-senior-lawyers-must-pay-mistakes-by-subordinates-using-ai-tools-2026-05-01/

US judge says senior lawyers must pay for mistakes by subordinates using AI tools

A federal judge has sanctioned the manager of a California law firm over a junior attorney's artificial intelligence-assisted court brief that contained a false case citation, saying the responsibility for such errors extends to supervising lawyers.


U.S. Magistrate Judge Peter Kang in San Francisco said in an order on Tuesday that the attorney, Lenden Webb, should have exercised greater oversight of a lawyer in his small law office who said she used AI to help craft the brief.



Saturday, May 02, 2026

Worth incorporating…

https://cyberscoop.com/cisa-nsa-five-eyes-guidance-secure-deployment-ai-agents/

US government, allies publish guidance on how to safely deploy AI agents

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions, and take actions autonomously. For this software to function, it needs to connect to external tools, databases, memory stores, and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.
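Two of those categories, privilege and accountability, translate directly into code. Here is a minimal sketch of the pattern the guidance implies, not an implementation it prescribes; the tool names and log format are invented. The idea: gate every tool call through an explicit allow-list and write the audit record before acting, so even a denied call leaves a trace.

```python
import json
from datetime import datetime, timezone

# Explicit allow-list: the agent gets only the tools its task needs, so a
# single compromise cannot reach delete/admin capabilities (privilege risk).
ALLOWED_TOOLS = {"read_ticket", "draft_reply"}

def call_tool(agent_id: str, tool: str, args: dict,
              audit_path: str = "agent_audit.jsonl"):
    allowed = tool in ALLOWED_TOOLS
    record = {
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "allowed": allowed,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only audit record, written before the action executes, so a
    # failed or denied call still leaves a traceable entry (accountability).
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    if not allowed:
        raise PermissionError(f"{tool} is outside this agent's privileges")
    # ... dispatch to the real tool implementation here ...
```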