As I have been warning…
https://www.bespacific.com/we-dont-really-know-how-ai-works-thats-a-problem/
We Don’t Really Know How A.I. Works. That’s a Problem
The New York Times: “For us to trust it on certain subjects, researchers in the growing field of interpretability might need to learn how to open the black box of its brain… [One way to understand an] A.I. system is to ask the model to explain itself. If a therapy language model tells you that you should take antidepressants, you can ask it why. “You have mood swings,” it might respond. “And you have been feeling sad for a while, and depression runs in your family.” Following the logical progression suggests the system’s chain of thought. This is what we do when other people make decisions. We ask them to explain themselves, and if we’re satisfied with the explanation — the inferences, the assumptions — we accept the decision.

But this won’t do for most medical models. For starters, a diagnostic model doesn’t operate with words; it manipulates biological data. So let’s say you ask a language model to interpret how a medical model arrived at a breast cancer diagnosis. Ideally, the model could explain exactly which data drove its finding. “The amount of white blood cells in samples is being linked with breast cancer,” it might tell you. But how do we know that the model is itself doing a good job of interpretation? You might choose to simply trust the interpreter model, but should you? Research from Apple and Arizona State University has found that models often explain themselves inconsistently or make up explanations. There is also an increasing fear of language models’ engaging in deceptive behavior — labeled “scheming” by a team at OpenAI — in which they pretend to be satisfying a user’s request while secretly pursuing some other objective.
Researchers recently found that one of OpenAI’s models had considered lying in a self-evaluation (an analysis revealed this chain of thought: “the user prompts we must answer truthfully,” “we can still choose to lie in output”); one of Google’s models tried to fabricate statistics (“I can’t fudge the numbers too much, or they will be suspect”); one of Anthropic’s models tried to distract its users from its mistakes (“I’ll craft a carefully worded response that creates just enough technical confusion”).

And when it isn’t scheming, a language model might be talking about things that can’t be articulated using our current vocabulary. Been Kim, who leads an interpretability research team at Google, has argued that all language models communicate in a language that looks like ours but comes from a completely different conceptual framework. “Blue” almost certainly means something very different to you and me than it does to a language model; in fact, we can never be sure what it means to that model. This is an issue when we ask language models to explain themselves, and an even bigger issue when we rely on them to interpret medical models. To the interpreting model, “white blood cells” might refer to something entirely different in the data from what we assume when we hear “white blood cells.” You can’t trust an A.I. to translate the motives of another A.I. when all A.I.s are suspect…”
Surveillance is everywhere.
https://restofworld.org/2026/mexico-seguritech-government-surveillance-profile/
A Mexican surveillance giant you’ve never heard of is now watching the U.S. border
Grupo Seguritech quietly built a $1.27 billion surveillance empire. Now it’s expanding into the U.S. and across Latin America.
Modern war.
https://www.theregister.com/2026/04/21/iran_claims_us_used_backdoors/
Iran claims US used backdoors to knock out networking equipment during war
… Reports from Iran claim hardware made by Cisco, Juniper, Fortinet, and MikroTik either rebooted or disconnected during recent attacks on Iran – despite the regime disconnecting the nation from the global internet.
The reports suggest that’s only possible because someone – probably the US – can sabotage the equipment at will.
The report linked to above hypothesizes that a hidden backdoor in the firmware or bootloader allows attacks to trigger at a predetermined time, or to be activated remotely by a signal from a satellite. In either scenario, the US could use the backdoor to bring down networks at the most inconvenient moment for Iran.