Saturday, March 29, 2025

AI may be killing its food supply.

https://www.bigtechnology.com/p/as-ai-takes-his-readers-a-leading

As AI Takes His Readers, A Leading History Publisher Wonders What’s Next

Late last year, Jan van der Crabben’s AI fears materialized. His World History Encyclopedia — the world’s second most visited history website — showed up in Google’s AI Overviews, synthesized and presented alongside other history sites. Then, its traffic cratered, dropping 25% in November.

Van der Crabben, the website’s CEO and founder, knew he was getting a preview of what many online publishers may soon experience. His site built a sizable audience with plenty of help from Google, which still accounts for 80% of its traffic. But as AI search and bots like ChatGPT ingest and summarize the web’s content, that traffic is starting to disappear. Now, his path forward is beginning to look murky.





Many, many years ago (1983) I attended a lecture by Grace Hopper. One point I remember clearly was her insistence that COBOL was obsolete and should be replaced immediately. So yes, this is overdue and no, I doubt it can be done “in a matter of months.”

https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/

DOGE Plans to Rebuild SSA Code Base in Months, Risking Benefits and System Collapse

The so-called Department of Government Efficiency (DOGE) is starting to put together a team to migrate the Social Security Administration’s (SSA) computer systems entirely off one of its oldest programming languages in a matter of months, potentially putting the integrity of the system—and the benefits on which tens of millions of Americans rely—at risk.



Friday, March 28, 2025

Perspective.

https://www.bespacific.com/when-dietrich-bonhoeffer-a-german-pastor-theorized-how-stupidity-enabled-the-rise-of-the-nazis/

When Dietrich Bonhoeffer, a German Pastor, Theorized How Stupidity Enabled the Rise of the Nazis

Open Culture: “Two days after Adolf Hitler became Chancellor of Germany, the Lutheran pastor Dietrich Bonhoeffer took to the airwaves. Before his radio broadcast was cut off, he warned his countrymen that their führer could well be a verführer, or misleader. Bonhoeffer’s anti-Nazism lasted until the end of his life in 1945, when he was executed by the regime for association with the 20 July plot to assassinate Hitler. Even while imprisoned, he kept thinking about the origins of the political mania that had overtaken Germany. The force of central importance to Hitler’s rise was not evil, he concluded, but stupidity.

“Stupidity is a more dangerous enemy of the good than malice,” Bonhoeffer wrote in a letter to his co-conspirators on the tenth anniversary of Hitler’s accession to the chancellorship. “One may protest against evil; it can be exposed and, if need be, prevented by use of force. Evil always carries within itself the germ of its own subversion in that it leaves behind in human beings at least a sense of unease. Against stupidity we are defenseless.” When provoked, “the stupid person, in contrast to the malicious one, is utterly self-satisfied and, being easily irritated, becomes dangerous by going on the attack….

You can see Bonhoeffer’s theory of stupidity explained in the illustrated Sprouts video above, and you can learn more about the man himself from the documentary Bonhoeffer. Or, better yet, read his collection, Letters and Papers from Prison. Though rooted in his time, culture, and religion, his thought remains relevant wherever humans follow the crowd. “The fact that the stupid person is often stubborn must not blind us to the fact that he is not independent,” he writes, which held as true in the public squares of wartime Europe as it does on the social-media platforms of today. “In conversation with him, one virtually feels that one is dealing not at all with a person, but with slogans, catchwords and the like, that have taken possession of him.” Whatever would surprise Bonhoeffer about our time, he would know exactly what we mean when we call stupid people “tools.”





Tools & Techniques.

https://www.infodocket.com/2025/03/27/research-tools-allen-institute-for-artificial-intelligence-introduces-ai2-paper-finder/

Research Tools: Allen Institute for Artificial Intelligence Introduces Ai2 Paper Finder

Note: Ai2 is also the provider of the wonderful Semantic Scholar database. The new research tool discussed below (Paper Finder) is free to access.

Today we release Ai2 Paper Finder, an LLM-powered literature search system.

We believe that AI-based literature search should follow the research and thought process that a human researcher would use when looking for relevant papers in their field. Ai2 Paper Finder is built on this philosophy, and it excels at locating papers that are hard to find using existing search tools.



Thursday, March 27, 2025

Inevitable in the camera rich UK.

https://www.theregister.com/2025/03/27/uk_facial_recognition/

UK's first permanent facial recognition cameras installed in South London

The two cameras will be installed in the city center in an effort to combat crime and will be attached to buildings and lamp posts on North End and London Road. According to the police they will only be turned on when officers are in the area and in a position to make an arrest if a criminal is spotted.

The installation follows a two-year trial in the area during which police vans fitted with the cameras have been patrolling the streets, matching passersby against a database of suspects or criminals, leading to hundreds of arrests. The Met claims the system can alert them in seconds if a wanted wrong'un is spotted, and if the person gets the all-clear, the image of their face will be deleted.





Clear as mud? The “Oops!” keeps getting bigger.

https://www.politico.com/news/2025/03/26/gabbard-signal-government-devices-cybersecurity-00250731

Gabbard says Signal comes ‘pre-installed’ on government devices

Director of National Intelligence Tulsi Gabbard testified to House Intelligence Committee members Wednesday that encrypted messaging app Signal comes “pre-installed” on government devices — a potentially major shift in official communications on the heels of a massive Chinese government-linked hack of U.S. telecommunications networks last year.

The app has been largely unauthorized for use on government-issued devices in the past. The Defense Department Office of the Inspector General issued multiple reports condemning a top Pentagon official in 2021 for using Signal to communicate, and the National Security Agency reportedly warned employees last month of the vulnerabilities of using Signal, stating that the app was “a high value target to intercept sensitive information.”

Cybersecurity experts told POLITICO earlier this week that the app should not be used to discuss classified information and stressed the need for government officials to use authorized and more secure means of communication.



(Related)

https://databreaches.net/2025/03/26/private-data-and-passwords-of-senior-u-s-security-officials-found-online/

Private Data and Passwords of Senior U.S. Security Officials Found Online

This will likely come as no surprise to many, but Spiegel International reports:

Donald Trump’s most important security advisers used Signal to discuss an imminent military strike. Now, reporting by DER SPIEGEL has found that the contact data of some of those officials, including mobile phone numbers, is freely accessible on the internet.

According to reporting by Patrick Beuth, Jörg Diehl, Roman Höfner, Roman Lehberger, Friederike Röhreke and Fidelius Schmid:

DER SPIEGEL reporters were able to find mobile phone numbers, email addresses and even some passwords belonging to the top officials.
To do so, the reporters used commercial people search engines along with hacked customer data that has been published on the web. Those affected by the leaks include National Security Adviser Mike Waltz, Director of National Intelligence Tulsi Gabbard and Secretary of Defense Pete Hegseth.
Most of these numbers and email addresses are apparently still in use, with some of them linked to profiles on social media platforms like Instagram and LinkedIn. They were used to create Dropbox accounts and profiles in apps that track running data. There are also WhatsApp profiles for the respective phone numbers and even Signal accounts in some cases.
As such, the reporting has revealed an additional grave, previously unknown security breach at the highest levels in Washington.

Read more at DER SPIEGEL.





Easy for kids to get a prepaid credit card?

https://www.cnbc.com/2025/03/26/utah-adopts-child-safety-law-requiring-apple-google-to-verify-user-ages.html

Utah governor signs online child safety law requiring Apple, Google to verify user ages

The App Store Accountability Act, or S.B. 142, could also kick off a wave of other states, including South Carolina and California, passing similar legislation.

Apple and Google will need to request age verification checks when someone makes a new account in the state. That will most likely have to be done using credit cards, according to Weiler. If someone under 18 opens an app store account, Apple or Google will have to link it to a parent’s account or request additional documentation. Parents will have to consent to in-app purchases.
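The flow described above (adult accounts pass directly; minor accounts must be linked to a parent account, and the parent must consent to in-app purchases) can be sketched as a simple eligibility check. This is a hypothetical illustration only: the `Account` fields and function name below are invented, and the statute's actual mechanics will depend on how Apple and Google implement it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    age: int
    parent_account_id: Optional[str] = None  # link to a parent's account, if any
    parental_consent: bool = False           # parent has approved purchases

def can_make_in_app_purchase(acct: Account) -> bool:
    """Sketch of the S.B. 142 flow as described above: adults pass;
    minors need both a linked parent account and parental consent."""
    if acct.age >= 18:
        return True
    return acct.parent_account_id is not None and acct.parental_consent
```

Under this sketch, a 16-year-old with a linked parent account but no recorded consent would still be blocked from purchasing.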





Massive effort that should be reviewed by security and AI teams.

https://www.schneier.com/blog/archives/2025/03/a-taxonomy-of-adversarial-machine-learning-attacks-and-mitigations.html

A Taxonomy of Adversarial Machine Learning Attacks and Mitigations

NIST just released a comprehensive taxonomy of adversarial machine learning attacks and countermeasures.



Wednesday, March 26, 2025

Bias in, bias out.

https://www.bespacific.com/people-tend-to-choose-search-terms-that-will-confirm-their-beliefs/

People tend to choose search terms that will confirm their beliefs

Ars Technica: “Forcing the use of general search terms can help people change their minds. People are often quite selective about the information they’ll accept, seeking out sources that will confirm their biases, while discounting those that will challenge their beliefs. In theory, search engines can potentially change that. By prioritizing results from high-quality, credible sources, a search engine could ensure that people found accurate information more frequently, potentially opening them to the possibility of updating their beliefs.

Obviously, that hasn’t worked out on the technology side, as people quickly learned how to game the algorithms used by search engines, meaning that the webpages that get returned have been created by people with no interest in quality or credibility. But a new study is suggesting that the concept fails on the human side, too, as people tend to devise search terms that are specific enough to ensure that the results of the search will end up reinforcing their existing beliefs. The study showed that invisibly swapping search terms to something more general can go a long way toward enabling people to change their mind.

Searching for affirmation. The new work was done by two researchers at Tulane, Eugina Leung and Oleg Urminsky. Much of their study focuses on a simple question that people might turn to a search engine to answer: is caffeine good or bad for you? If you wanted to search for that, you could potentially ask “what are the health effects of caffeine?” which should get you a mixture of the pros and cons. But people could also ask it in less neutral terms, such as, “is caffeine bad for you?” These more specific searches are likely to pull up a more biased selection of results than the general, neutral terms.”
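The intervention the article describes, invisibly swapping a belief-laden query for a more general one, can be illustrated with a toy rewrite rule. Everything below (the rule list, the `generalize` function) is a hypothetical sketch for illustration, not the researchers' actual method.

```python
import re

# Illustrative rewrite rules: map belief-laden query shapes onto a
# neutral, general form. A real system would need far broader coverage.
RULES = [
    (re.compile(r"^is (.+?) (?:bad|good|harmful|healthy) for you\??$", re.I),
     r"what are the health effects of \1?"),
]

def generalize(query: str) -> str:
    """Return a neutral version of a loaded query, or the query unchanged."""
    for pattern, neutral in RULES:
        if pattern.match(query):
            return pattern.sub(neutral, query)
    return query  # already general enough
```

For example, `generalize("is caffeine bad for you?")` yields the neutral form `"what are the health effects of caffeine?"`, while a query with no loaded framing passes through untouched.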





Ignoring laws they find bothersome?

https://databreaches.net/2025/03/26/american-oversight-v-hegseth-gabbard-ratcliffe-bessent-rubio-and-nara-regarding-military-actions-planned-on-signal-messaging-app/

American Oversight v. Hegseth, Gabbard, Ratcliffe, Bessent, Rubio, and NARA Regarding Military Actions Planned on Signal Messaging App

Docket Number 25-0883 in District Court for the District of Columbia.

Lawsuit filed against Defense Secretary Pete Hegseth, DNI Tulsi Gabbard, CIA Director John Ratcliffe, Treasury Secretary Scott Bessent, Secretary of State and acting Archivist Marco Rubio, and the U.S. National Archives and Records Administration concerning news reports that journalist Jeffrey Goldberg had been added to a Signal group chat containing potentially classified information about active military operations.

Read the complaint on American Oversight. This has to do with the Federal Records Act and the Administrative Procedure Act requiring the preservation and recovery of records created using Signal for a group-chat discussion of planned and active military operations from March 11, 2025, through March 15, 2025.





If I were a student, I’d probably have a few accounts my school didn’t know about. Will schools tell students when their accounts have been flagged?

https://www.theverge.com/news/634977/instagram-school-partners-prioritize-reports

Instagram is giving schools a faster way to get students’ posts taken down

Instagram is rolling out a new program to fast-track moderation reports made by school districts. After a district joins the new Schools Partnership program, any post or account they flag for potentially violating Instagram’s rules will “be automatically prioritized for review.”



Tuesday, March 25, 2025

I have to wonder if this really was a mistake.

https://databreaches.net/2025/03/24/the-trump-administration-accidentally-texted-me-its-war-plans/

The Trump Administration Accidentally Texted Me Its War Plans

This is an absolutely mind-boggling breach. How could anyone looking at the list of participants in a Signal chat not question what a reporter was doing in a war-plans chat? Jeffrey Goldberg reports:

The world found out shortly before 2 p.m. eastern time on March 15 that the United States was bombing Houthi targets across Yemen.
I, however, knew two hours before the first bombs exploded that the attack might be coming. The reason I knew this is that Pete Hegseth, the secretary of defense, had texted me the war plan at 11:44 a.m. The plan included precise information about weapons packages, targets, and timing.
This is going to require some explaining.

Read more at The Atlantic.





What impact will training on false data have?

https://arstechnica.com/ai/2025/03/cloudflare-turns-ai-against-itself-with-endless-maze-of-irrelevant-facts/

Cloudflare turns AI against itself with endless maze of irrelevant facts

On Wednesday, web infrastructure provider Cloudflare announced a new feature called "AI Labyrinth" that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.

Instead of simply blocking bots, Cloudflare's new system lures them into a "maze" of realistic-looking but irrelevant pages, wasting the crawler's computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler's operators that they've been detected.

"When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them," writes Cloudflare. "But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources."
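Cloudflare has not published its implementation, but the link-maze idea quoted above can be sketched: every decoy path deterministically produces a page of links to further decoy paths, so a crawler that follows them wanders indefinitely while the real site stays untouched. The sketch below is a minimal, hypothetical version (function and path names invented; the AI-generated filler text that makes the pages "convincing" is omitted).

```python
import hashlib

def maze_page(path: str, n_links: int = 5) -> str:
    """Generate a deterministic decoy page for a given URL path.

    Hashing the path seeds the page, so repeated visits see consistent
    content, and each page links to n_links deeper maze paths.
    """
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = "".join(
        f'<a href="/maze/{seed[i * 8:(i + 1) * 8]}">related</a>\n'
        for i in range(n_links)
    )
    return f"<html><body><p>Article {seed[:12]}</p>\n{links}</body></html>"
```

Determinism matters here: a crawler that revisits a maze URL gets the same page back, so the decoy space looks like a stable site rather than random noise, while every link it follows only leads deeper into generated content.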



Monday, March 24, 2025

Keeping up.

https://www.bespacific.com/artificial-intelligence-ai-legislation/

Artificial Intelligence (AI) Legislation

Multistate: Artificial Intelligence (AI) Legislation. “Lawmakers are increasingly addressing AI through legislation.  As AI technologies have burst on the scene, state lawmakers have responded by addressing concerns with this ubiquitous technology through public policy. In 2023, we saw less than 200 bills introduced across state legislatures addressing the issue of AI. But that shifted in 2024 when MultiState tracked over 600 AI-related bills with nearly 100 enacted into law. We expect 2025 to be a big year for AI legislation. Keep an eye on the bills we’re tracking with the state-by-state bill tracking map below. For a comprehensive view of current state laws related to AI, see our state-by-state AI policy overviews.”





One of many questions…

https://www.llrx.com/2025/03/does-the-government-decide-what-your-law-firm-will-do/

Does the Government Decide What Your Law Firm Will Do?

“If anyone’s going to speak up, it should be law firms. If anyone’s going to take a stand, it should be law firms.”

As you may have heard, Donald Trump is punishing law firm Covington & Burling for giving pro bono representation to former special counsel Jack Smith. His executive order canceled security clearances for members of the firm and directed government agencies to review all contracts with the firm for legal work.

I had hopes that other law firms might issue statements of support for C&B, though those hopes weren’t very high. Judging from this article, I was right to be skeptical: “Some firm leaders, citing corporate clients threatening to walk if they get crosswise with Trump, have rejected outright or put up roadblocks to partners seeking approval to represent DOJ lawyers, FBI agents, and other civil servants who’ve faced various forms of attack.”

But the quote that led off this post didn’t come from the Covington articles. It came from this one about law firms downplaying or removing mention of their DEI policies since Trump and Elon Musk began their anti-DEI crusade. The article singles out K&L Gates, which “removed references to its diversity initiatives from its website, including mentions of the Mansfield Pledge and key demographic statistics.” This is what “obeying in advance” looks like, by the way.



Sunday, March 23, 2025

Interesting question. Do we have an answer?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5179224

Artificial Intelligence and the Discrimination Injury

For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge that artificial intelligence poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination today is a species of tort, concerned with rectifying individual injuries, rather than a law aimed at broadly improving social or economic equality. As a result, the doctrine centers blameworthiness and individualized notions of injury. But it is also a strange sort of tort that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decisionmaking event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: If a defendant did not act improperly, then no liability attaches because a discrimination event did not occur. Injury is tied to the single decision event and there is no room for recognizing discrimination injury without liability.

This formulation directly affects regulation of AI discrimination for two reasons: First, AI decisionmaking is distributed; it is a combination of software development, its configuration, and its application, all of which are completed at different times and usually by different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.

The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to its effects, for a failure to either comparison shop or fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech savvy, and that they likely purchased the technology on the promise of it being non-discriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI—the developers themselves. The Article therefore argues for supplementing discrimination law with a combination of consumer protection, product safety, and products liability—all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.





Can AI help?

https://taapublications.com/tijsrat/article/view/453

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON LEGAL SYSTEMS

Artificial Intelligence (AI) is transforming sectors around the world, and the legal sector is no different. The emerging use of AI technologies in the judiciary carries enormous potential as well as serious issues. AI holds the potential to enhance the efficacy of legal processes by automating routine tasks like document searching, legal analysis, and contract analysis, reducing the cost of legal services and making legal services more accessible. AI technologies such as predictive analytics can also facilitate better decision-making by identifying trends in case law and predicting outcomes. However, the universal application of AI also presents firm ethical, legal, and privacy concerns. Among these is the possibility of algorithmic bias, which can lead to biased or discriminatory judicial decisions. Another problem is a lack of transparency in AI decision-making, which can make it difficult to explain how algorithms reach their conclusions. AI also poses problems for traditional legal systems, which were designed around passive tools rather than active ones. This paper discusses the advantages as well as the challenges of AI in legal systems, with a detailed look at their implications for lawyers, clients, and lawmakers. Through this analysis, the paper emphasizes the need for a balanced regulatory environment to ensure the ethical use of AI while protecting individuals’ rights and upholding justice in the legal system.





Perspective.

https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/18975

Authoritarian Surveillance: An Introduction

Authoritarian surveillance is no longer an exceptional or rare practice. In many parts of the world, we are witnessing an increase of pervasive government monitoring, of curtailing privacy protections, of stringent control of information flows, and of intimidation towards self-censorship. These hallmarks of authoritarian surveillance are not confined to authoritarian or undemocratic regimes. In a political landscape that favours strongarm authoritarian leaders, the boundaries between authoritarian and democratic regimes, the liberal and the illiberal ones, are blurrier than ever. The increasing availability of advanced technologies for analyzing (big) data, particularly when integrated with artificial intelligence (AI), has heightened the temptation for governments to adopt authoritarian surveillance tools and practices—and has amplified the potential dangers involved.

This Dialogue section introduces the multiple dimensions of contemporary authoritarian surveillance, going beyond a dichotomy between “democratic” and “authoritarian” regimes to identify and map authoritarian surveillance in diverse geographical and political contexts. We focus on surveillance beyond the exceptional and beyond the rule of law to examine an increasingly mundane but dangerous practice undermining the limited democratic spaces that remain in our world. The seven articles in this special Dialogue section explore different angles of authoritarian surveillance—the technologies that facilitate it, the laws that govern it, and the legacies that precede it or linger thereafter—and the social and political consequences that emerge as a result. Together, this collection revisits existing literature on authoritarian surveillance, calls for a renewed scholarly focus on its consequences, and proposes new directions for future research.

Vol. 23 No. 1 (2025): Open Issue





Perspective.

https://sites.duke.edu/lawfire/2025/03/22/podcast-lt-gen-jack-shanahan-usaf-ret-on-the-military-uses-of-artificial-intelligence/

Podcast: Lt Gen Jack Shanahan, USAF (Ret.) on “The Military Uses of Artificial Intelligence”

Want to get caught up on the latest about artificial intelligence (AI) in the armed forces? Then today’s video of Prof Gary Corn’s Fireside Chat with Lt Gen John N.T. “Jack” Shanahan, USAF (Ret.), the former Director of the Department of Defense’s Joint Artificial Intelligence Center, on “The Military Uses of Artificial Intelligence” is for you.