Is this the strategy for defeating AI?
AI Perfected Chess. Humans Made It Unpredictable Again
Artificial intelligence drove chess toward perfect play, leading to more draws at top tournaments. Now grandmasters are winning by making less optimal moves.
Observations on articles I read to keep current about technology. My interests are: Privacy, security, business, the computer industry, and geeky stuff that catches my eye.
I don't think I have an agenda beyond my own amusement.
Note that I lump all my comments into a single post. This is not a typical blog technique; it's just an indication that I'm lazy.
Perhaps we’ll get a Donald avatar…
Trump White House launches own app after cryptic social media teases
The Trump administration announced the launch of the White House app on Friday, promising news “straight from the source, no filter.”
The administration announcement followed a series of social media teases in recent days, causing frenzied speculation about what was coming.
Upon opening the app, users were greeted with a short video featuring snippets of President Trump at work. From there, technical difficulties took over.
… The app includes sections labeled “news,” “live,” “social,” and “gallery” — all of which were empty at launch on Friday morning.
The news section features press releases from the Trump administration and links to articles from outside news sources. The gallery contains photos from recent events, including first lady Melania Trump’s summit with world spouses and the president’s meeting with the Japanese prime minister.
Tactical shortage or strategic problem? (How will China or North Korea view a US with no weapons?)
"Alarmingly Low": Pentagon Scrambles After US Fires 850 Tomahawks At Iran
It can take up to 2 years to build a Tomahawk, costing $3.6 million apiece, according to the report. Moreover, last year's budget had included only 57 of them.
The US Army has fired over 850 Tomahawk missiles in four weeks during its war with Iran. Only a few hundred Tomahawk missiles are manufactured every year, and the rate of firing has alarmed some Pentagon officials, who are in talks about how to make more of the missiles available, The Washington Post reported.
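The arithmetic behind the alarm is easy to check. A back-of-envelope sketch using only the figures quoted above; the annual production number is my assumption, since the report says only "a few hundred":

```python
# Back-of-envelope replenishment math from the article's figures.
fired = 850              # Tomahawks fired in four weeks, per the report
unit_cost = 3.6e6        # dollars apiece
annual_production = 200  # ASSUMED midpoint of "a few hundred per year"

print(f"Replacement cost: ${fired * unit_cost / 1e9:.2f} billion")
print(f"Years to rebuild stock at {annual_production}/yr: {fired / annual_production:.1f}")
# -> Replacement cost: $3.06 billion
# -> Years to rebuild stock at 200/yr: 4.2
```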
When you get serious…
https://www.bespacific.com/prompt-catalog-2026-for-artificial-intelligence/
Prompt Catalog 2026 for Artificial Intelligence
Via LLRX – Marcus P. Zillman's extensive bibliography covers numerous subject-matter-specific AI prompt resources, guides, templates, methodologies, and best practices that take you across applications, and leans into expert sources from LinkedIn.
Only the beginning…
Traffic Violation! License Plate Reader Mission Creep Is Already Here
A new report from 404 Media sheds light on how automated license plate readers (ALPRs) could be used beyond the press releases and glossy marketing materials put out by law enforcement agencies and ALPR vendors. In December 2025, Georgia State Patrol ticketed a motorcyclist for holding a cell phone in his hand. According to the report, the ticket read, “CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.”
Tools & Techniques.
https://fpf.org/blog/2026-chatbot-legislation-tracker/
2026 Chatbot Legislation Tracker
With nearly 100 chatbot-specific bills introduced across states in 2026, a complex and increasingly fragmented compliance landscape is quickly emerging. This tracker helps stakeholders understand that landscape by highlighting chatbot legislation advancing through initial chambers in state legislatures and Congress, and organizing key provisions across proposals to show what is coming and how requirements may vary across jurisdictions. The tracker is updated on Thursdays to reflect legislative movement and amendments.
https://fpf.org/2026-chatbot-legislation-tracker/
This tracker highlights chatbot-related legislation advancing through U.S. state legislatures and Congress in 2026. It includes bills that have passed at least one legislative chamber and is updated weekly to reflect movement and amendments. This tracker reflects a subset of FPF’s broader legislative tracking work. FPF members receive access to comprehensive tracking across the full AI policy landscape, including all chatbot and AI-related legislation. To learn more about corporate membership, visit FPF’s Become a Member page.
Worth considering…
https://thenextweb.com/news/ai-amplifies-whatever-you-feed-it-including-confusion
AI amplifies whatever you feed it, including confusion
Most organizations are not failing at AI because of technology. They are failing because they do not know which data actually matters, and they are scaling that confusion faster than ever. At a time when investment continues to surge, the expectation is that more intelligence will naturally follow. Instead, many teams are finding themselves overwhelmed. The issue is the inability to distinguish between signal and noise in a way that leads to confident decisions.
The broader landscape makes this tension hard to ignore. According to the State of Enterprise AI 2026, global spending is projected to reach $2.52 trillion, yet only 14% of CFOs report measurable returns. At the same time, 42% of companies abandoned most of their AI pilots in 2025. These point to a systemic disconnect between ambition and execution. As boards demand accountability and leaders look for proof of value, many organizations are confronting a difficult reality: they invested in capability without first ensuring clarity.
The usual explanation is that the data is not clean enough. That is not wrong, but it misses something more fundamental. Clean data has limited value if it is not relevant, connected, or usable in the context of real decisions. Over time, organizations have accumulated dashboards, reports, and tracking systems that create the appearance of visibility while leaving critical questions unresolved. Teams often cannot explain why a metric moves, how it connects to outcomes, or what action should follow. That gap between information and understanding is where progress stalls.
For my nerd friends…
https://www.bespacific.com/sorting-algorithms/
Sorting algorithms
tools.simonwillison.net colophon – Watch how different algorithms organize data, step by step. Explore and compare different sorting algorithms through interactive animated visualizations that display how each algorithm organizes data in real-time. The tool allows you to adjust dataset size and animation speed, run individual algorithms step-by-step or continuously, and race multiple algorithms simultaneously to see which performs best. Each algorithm includes detailed complexity analysis and visual indicators for comparisons, swaps, and sorted elements.
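For a sense of what those counters track, here is a minimal sketch (not the tool's actual code, which runs in the browser): a bubble sort instrumented to tally the comparisons and swaps the animation highlights.

```python
# Bubble sort instrumented to count the comparisons and swaps a
# visualizer would animate; this naive version always makes the full
# O(n^2) pass of n*(n-1)/2 comparisons.
def bubble_sort_with_stats(data):
    a = list(data)
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, comparisons, swaps

print(bubble_sort_with_stats([5, 3, 8, 1, 9, 2]))
# -> ([1, 2, 3, 5, 8, 9], 15, 8)
```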
(Ditto)
https://www.bespacific.com/anything-counter/
Anything Counter
What is AnythingCounter? A live dashboard that shows real-time estimates of what happens every second in the digital world. You see numbers for AI hallucinations, deepfakes, phishing, e-waste, jobs lost to automation, and more. We take global statistics and turn them into counters that update continuously so you can watch digital activity in real time.
How are the statistics calculated? We use published yearly or periodic stats from international bodies, research institutes, and cybersecurity reports. Those figures get converted into per-second or per-day rates and shown as live counters. Full sources and how we calculate everything: our methodology page.
Are the numbers real or estimated? They’re estimates based on real data. We take published statistics and convert them into real-time rates to show the scale of digital activity. Each counter links to its sources so you can check the underlying data.
How often do the counters update? The numbers tick every second in your browser. The rates behind them come from the latest research and reports we could find; we update those when new data is published.
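The underlying math is as simple as the FAQ suggests. A hedged sketch, with a made-up yearly figure standing in for a published statistic:

```python
# Convert a published yearly total into a per-second rate and tick it
# in real time. The 1.2e9 "events per year" is a made-up placeholder,
# not a real statistic.
import time

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def per_second_rate(yearly_total: float) -> float:
    return yearly_total / SECONDS_PER_YEAR

rate = per_second_rate(1.2e9)  # hypothetical annual figure
start = time.time()
for _ in range(3):
    time.sleep(1)
    print(f"~{rate * (time.time() - start):,.0f} events since page load")
```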
A preview of things to come in the US?
https://www.theregister.com/2026/03/26/brit_law_maker_fails_to/
Brit lawmaker targeted by AI deepfake fails to get answers from US Big Tech
… Last autumn, Freeman was the subject of an AI-created fake that falsely claimed he had defected to a rival party, Reform. This was plausible enough, given several genuine Conservative defections in recent months, but entirely fabricated.
Not only was it damaging to his reputation, but allowing political misinformation to continue to spread unchecked could end the democratic process in the UK, he argued. Freeman said platforms spreading the content are failing to respond. "There's no redress. There was no statement or principle that it was a problem," he said in Parliament yesterday, labeling the event a "serious disruption to democratic representation."
(Related)
As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters
In December, President Trump signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.
Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives.
Did Iran cross this line?
https://www.theregister.com/2026/03/25/whats_scarier_than_a_swarm/
Only Trump can decide when cyberwar turns into real war
There's a theoretical red line with cyber warfare. Cross it, and the US will respond with a physical attack like missile strikes. And that line "is whatever the President says it is," according to former NSA boss retired General Paul Nakasone.
Nakasone, speaking during an RSA Conference keynote on Wednesday with three other former NSA directors and commanders of US Cyber Command, argued that there shouldn't be a well-defined red line. "The president should have a lot of leeway in which he determines whether or not the nation's going to respond kinetically."
Retired US Navy Admiral Mike Rogers, on the other hand, said he thinks there should be a "series of minimums, like loss of life, loss of infrastructure associated with health and well being."
This argument would seem to apply to any AI system.
https://www.bespacific.com/eff-sues-for-answers-about-medicares-ai-experiment/
EFF Sues for Answers About Medicare’s AI Experiment
EFF – Little Is Known About AI That Could Affect Millions of Seniors’ Care: The Electronic Frontier Foundation (EFF) today filed a Freedom of Information Act (FOIA) lawsuit against the Centers for Medicare & Medicaid Services (CMS) seeking records about a multi-state program that is using AI to evaluate requests for medical care.
“Tasking an algorithm with making determinations about treatment can create unwarranted—and even discriminatory—delays or denials of necessary medical care,” said Kit Walsh, EFF’s Director of AI and Access-to-Knowledge Legal Projects. “Given these serious risks, the public requires transparency that it hasn’t gotten. We’re suing to get badly needed answers about how Medicare’s AI experiment works.”
Announced by CMS Administrator Dr. Mehmet Oz last year, the pilot program known as WISeR (Wasteful and Inappropriate Service Reduction) uses AI to assess prior authorization requests from Medicare beneficiaries. Previously rare in original Medicare, prior authorization requires medical providers to obtain advance approval from a patient’s health insurer before delivering certain treatments or services as a condition of coverage.
Unfortunately, there is little information about how the AI algorithms used in WISeR work, including what training data they rely on. It remains unclear whether WISeR has any safeguards against systemic flaws such as algorithmic bias, privacy violations, and wrongful denials of care. Healthcare experts, care providers, and lawmakers have all raised alarms that WISeR may cause serious harm to patients by relying on AI unless it has the necessary safeguards. Despite this widespread criticism, WISeR was rolled out in six states in January, potentially affecting as many as 6.4 million Medicare beneficiaries, according to one estimate…”
For the complaint: https://www.eff.org/document/complaint-eff-v-cms-medicare-wiser-foia
A tool for wholesale hallucinations?
https://www.bespacific.com/claude-meets-westlaw-and-lexis/
Claude Meets Westlaw and Lexis
Seth Chandler – “Something remarkable has happened in the last few months, and most of the legal academy has not noticed. Anthropic’s Claude—the AI assistant many of us have experimented with for drafting, brainstorming, and analysis—can now directly control a web browser. That means Claude can log into Westlaw or Lexis, run searches, read cases, pull up law review articles and treatises, and synthesize what it finds into polished work product, autonomously, in minutes, while you watch.
Subscribers to this blog already know about tools like Midpage AI, which provides a dedicated connector between Claude and a legal database. I have described Midpage—rightly—as The Killer App. Its technology is sound: it uses modern MCP protocols and direct API calls, which are fast and reliable. A browser agent, by contrast, relies on primitive point-and-click methods developed in the 1970s that depend on visual interpretation of a webpage—something trivial for most humans but slower and more error-prone for computers.
That disadvantage, however, is now offset in two important ways. First, browser access unlocks the far larger compendium of materials held by the legacy giants. Westlaw and Lexis maintain vast repositories of foreign-nation materials, far broader coverage of agency decisions, and enormous collections of secondary sources—law review articles and treatises whose utility one can question in the abstract but that in practice periodically prove invaluable. Second, the pay structure of legal database access works in your favor. Most ABA-accredited law schools provide Westlaw and Lexis access to faculty and students at no additional charge; there is no marginal cost per query—at least until Westlaw and Lexis move to shut down external agentic AI access to their repositories. Why pay $25 a month for a separate legal database subscription when Claude can navigate the ones you already have?
In short, The Killer App is now even deadlier…”
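For readers wondering what "browser agent" access looks like mechanically, here is a hedged sketch using Playwright. It is not Anthropic's or Midpage's actual code; the URL and selectors are hypothetical placeholders, and a real agent interprets the page visually rather than relying on fixed selectors.

```python
# Hedged illustration only: scripted point-and-click browser access.
# The URL and CSS selectors below are hypothetical; real Westlaw/Lexis
# pages differ, and a vision-based agent would not hardcode selectors.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)  # watch it work
    page = browser.new_page()
    page.goto("https://legal-database.example.com/signon")  # placeholder
    page.fill("#username", "your_id")            # hypothetical selector
    page.fill("#password", "your_password")
    page.click("button[type=submit]")
    page.fill("#searchBox", "adverse possession Oregon")  # run a search
    page.keyboard.press("Enter")
    page.wait_for_load_state("networkidle")
    print(page.title())  # an agent would now read and synthesize results
    browser.close()
```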
Life lessons?
https://www.adamsmith.org/blog/still-more-useful-maxims
Still More Useful Maxims
Why do otherwise intelligent lawyers fall for AI? I doubt it’s AI’s fault.
Oregon attorney slapped with record fine after citing case law hallucinated by AI
Another Oregon attorney has been bamboozled by the incorrect output of artificial intelligence — and the state’s appellate court has slapped him with a record fine.
The Oregon Court of Appeals issued a $10,000 fine to Bill Ghiorso, a Salem-based civil attorney, after determining he signed his name to a legal brief containing 15 bogus citations and nine quotes “that had been contrived from thin air.”
Ghiorso challenged the fine, arguing he didn’t “knowingly” include false material in his filings, but instead had relied on a paralegal’s research. But the appellate court rejected that argument.
“Counsel at least should have known… that submitting a brief with unchecked and ultimately fabricated citations may breach an attorney’s duties of professionalism, truthfulness and candor to the court,” Presiding Judge Scott Shorr wrote in the March 18 opinion.
(Related)
https://www.bespacific.com/the-ai-law-professor-ai-make-lawyers-work-more-not-less/
The AI Law Professor – AI makes lawyers work more, not less
Thomson Reuters, Tom Martin – “At every legal technology conference, the same promise rings out: AI will automate the drudgery so lawyers can focus on what really matters. While it’s a seductive vision, it’s also contradicted by the best research we have on what actually happens when knowledge workers adopt these tools.
Key points:
The productivity promise is largely wrong — Emerging research shows that AI doesn’t reduce work — it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.
Self-reinforcing acceleration is the real risk — AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.
Purposeful integration is the antidote — Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.
If you’ve attended a legal technology conference anytime over the past two years, you’ve heard the pitch: Automate the mundane and elevate the meaningful. A study published [subscription needed] in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking — AI tools didn’t reduce work; rather, they intensified it. According to the study, the tech employees studied were shown to work faster, take on broader responsibilities, extend their hours into evenings and weekends, and multitask more aggressively — all without being asked to do so. The promise of liberation became a reality of acceleration and overwork. For those of us in the legal profession, this should be a wake-up call…”
Another intrusion.
https://www.bespacific.com/this-company-is-secretly-turning-your-zoom-meetings-into-ai-podcasts/
This Company Is Secretly Turning Your Zoom Meetings into AI Podcasts
404 Media no paywall: “WebinarTV, a company that bills itself as “a search engine for the best webinars,” is secretly scanning the internet for Zoom meeting links, recording the calls, and turning them into AI-generated podcasts for profit. In some cases, people only found out that their Zoom calls were recorded once WebinarTV reached out to them directly to say their call was turned into a podcast in an attempt to promote WebinarTV’s services. WebinarTV claims to host more than 200,000 webinars. It’s not clear how it’s recording so many Zoom calls without permission, but in some cases the stolen videos posted to WebinarTV can put call participants at risk.
Tom Rademacher, a teacher and editor, told me he organized a Zoom call for educators and education advocates in the months after Donald Trump was elected to discuss keeping kids safe from ICE. “I very intentionally did not record the webinar since we’d be talking politics and there were some local electeds and district leaders that were on,” Rademacher told me. “There were definitely people on there who it would have been bad politically and professionally to be, especially at the time, linked to being anti-Trump in an education space.”
Rademacher received an email on October 7, 2025, from WebinarTV VP of communications Sarah Blair, whose profile image appears to be AI-generated and who has no online presence. “Your webinar is featured on the Phil & Amy Show,” Blair said in her email. “They talk about the highlights from your webinar – without giving away too much – to entice viewers. To listen to the show, click Highlights tab on the OnDemand page or click here.”
The link sent Rademacher to a page on WebinarTV.us which featured a full recording of the Zoom meeting, an AI-generated video summary of the meeting, “chapters” that sent the viewers to different parts of the meeting, and an AI-generated episode of the “Phil & Amy Show,” in which two AI-generated personalities discuss the content of the call, including quips and rapport between Phil and Amy. “By suddenly having the whole meeting be public so you could see what [participants] were saying, after all the talk about safe spaces, it just felt super gross,” Rademacher told me. Rademacher asked Blair how she got the recording of the meeting and asked that WebinarTV take it down, which it did…
People who complained about WebinarTV on LinkedIn also speculated that WebinarTV was finding the meetings by scraping the web for Zoom links. Freedom of the Press Foundation speculated that WebinarTV is using a Zoom API to scrape for public webinars, but noted that this would probably violate Zoom’s terms of service, which don’t allow people to use the API “To scrape, build databases, or otherwise create copies of any data accessed or obtained using the Zoom APIs by your Application.”…
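If that speculation is right, the harvesting step could be as simple as pattern-matching scraped pages. A speculative sketch, with fabricated sample HTML:

```python
# Speculative sketch of the harvesting step the LinkedIn commenters
# guessed at: regex-match scraped pages for public Zoom join links.
# The sample HTML below is fabricated for illustration.
import re

ZOOM_LINK = re.compile(r"https?://(?:[\w-]+\.)?zoom\.us/(?:j|w|my)/[\w?=&.-]+")

sample_html = """
<p>Join us: https://us02web.zoom.us/j/1234567890?pwd=abc123</p>
<a href="https://zoom.us/my/example.room">Weekly call</a>
"""

for link in ZOOM_LINK.findall(sample_html):
    print(link)
# -> https://us02web.zoom.us/j/1234567890?pwd=abc123
# -> https://zoom.us/my/example.room
```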
Ready, fire, aim?
https://www.nytimes.com/2026/03/24/us/politics/trump-iran-power-stations.html
Trump’s Threat to Iran Crosses a Line, Rights Experts Say
Intentionally targeting the country’s energy infrastructure could constitute a war crime under international law.
Still amusing…
https://www.adamsmith.org/blog/yet-more-useful-maxims
Yet More Useful Maxims
A dialog with politicians? Scary! (Imagine an AI Trump!)
https://www.schneier.com/blog/archives/2026/03/team-mirai-and-democracy.html
Team Mirai and Democracy
Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.
In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.
Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.
Should there be a similar line for legal misconduct?
https://www.tandfonline.com/doi/full/10.1080/08989621.2026.2645390#abstract
Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers
In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations might qualify as a provable instance of research misconduct under the U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabrication of the data (i.e. citations) because they did not check the GenAI’s output for veracity and accuracy. Other types of problematic citations such as bibliometrically incorrect citations, or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior, but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations could be regarded as research misconduct in certain cases will hopefully encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders and professional societies, the scholarly community should work on establishing, promoting, and enforcing standards for responsible use of AI in research, including standards pertaining to citation practices.
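The veracity check the authors describe is cheap to automate. A minimal sketch against the public Crossref REST API; the second DOI below is deliberately fake:

```python
# Sketch of the check the authors say researchers skip: before
# submitting, confirm each cited DOI actually resolves. Crossref's
# public API returns 404 for unregistered DOIs.
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    try:
        with urllib.request.urlopen(
            f"https://api.crossref.org/works/{doi}", timeout=10
        ) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 -> no such DOI registered with Crossref

for doi in ("10.1038/nature12373", "10.9999/fake.citation.2026"):
    print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND")
```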
Who should be looking out for you? Your doctor, a nurse, or the guy from IT?
https://www.atlantis-press.com/proceedings/tfol-25/126022211
Surveillance Medicine and the Law
Artificial intelligence is quickly becoming embedded in healthcare systems around the world. As this happens, the promise of efficiency, predictability, and personalisation of care is frequently presented as a moral imperative. However, there remains a growing body of evidence that AI-driven healthcare technologies can systematically undermine core principles of medical and legal ethics and, potentially, breach fundamental human rights. This study is an exploration of the deployment of AI in healthcare - specifically predictive algorithms, triage bots, and data-driven diagnostics - and how these technologies risk infringing upon the right to health and the right to non-discrimination.
This study aims, through the lens of critical legal studies, to interrogate how these systems and technologies replicate and automate existing forms of inequality, while hidden by the veil of neutral language and innovation. Drawing upon case studies including UnitedHealth, Babylon Health, and DeepMind, the study demonstrates how algorithmic health tools can exacerbate systemic issues such as racism, gender biases and digital exclusion. It also aims to explore how existing legal systems fail to challenge these harmful effects and perpetually reinforce power dynamics and data commodification under the veil of progress.
By critically re-examining the legal governance of AI in healthcare, this study calls for a reassertion of ethical and rights-based principles in emerging health technology regulation, focused not on market efficiency, but on ethical principles like equality, autonomy and human dignity.
AIs don’t think. (Yet)
https://journal.ijtrp.com/index.php/ijtrp/article/view/21
The Legal and Ethical Implications of AI in Judicial Decision-Making: Challenges to Fair Trial and Due Process
A paradigm shift in the discussion of law, justice, and governance has resulted from the incorporation of artificial intelligence (AI) into judicial systems. Even though AI has been successful in increasing productivity, simplifying case management, and helping judges with research, using it to make decisions in court presents serious ethical and legal issues. The constitutional protections of due process and fair trial, which protect individual rights from caprice and guarantee openness, impartiality, and accountability in decision-making, are at the heart of this discussion. The ethical and legal ramifications of using AI in court decision-making are examined in this paper. It looks at how the idea of equality before the law may be threatened by algorithmic tools that, despite their promise of objectivity, may replicate or even worsen systemic biases present in training data. The constitutional requirement of reasoned judgments is challenged by the "black box problem," in which algorithms generate results without comprehensible reasoning, undermining public confidence in the legal system. Furthermore, there are serious concerns about who is responsible for incorrect or unfair results when accountability is distributed between algorithmic systems and human judges. The study examines developments in China, India, the United States, and the European Union using a comparative methodology. Both the advantages and disadvantages of AI-driven adjudication are highlighted in the study, ranging from the US controversy surrounding COMPAS risk-assessment tools to China's smart court experiment and India's cautious use of AI through SUPACE. It contends that although artificial intelligence (AI) can increase judicial efficiency, human conscience, empathy, and interpretive reasoning—all of which are essential components of justice—cannot be separated from adjudication. In order to ensure that technological innovation does not undermine constitutional values but rather strengthens the accessibility, fairness, and credibility of judicial systems, the paper ends by suggesting safeguards such as regulatory frameworks, transparency standards, and a "human-in-the-loop" principle.
Simple and effective?
TERROR-AI-SM: THE FUTURE OF ARTIFICIAL INTELLIGENCE IN THE HANDS OF TERRORISTS
Terrorism remains one of the major challenges to international security. The past decade has witnessed a rapid convergence of two forces with profound implications for global stability: the accelerating capabilities of artificial intelligence and the persistent, adaptive threat of terrorism. What was once the realm of science fiction — autonomous machines making battlefield decisions, synthetic media manipulating public opinion — is now technically feasible and increasingly accessible to non-state actors. This convergence is already reshaping the threat landscape, compelling governments and international institutions to reconsider and adapt their counterterrorism frameworks in order to address the realities of an era where terrorism and cutting-edge technology are inextricably linked.