Saturday, October 18, 2025

It was the best of reasons, it was the worst of reasons…

https://pogowasright.org/claro-and-town-of-dover-nj-launch-ai-video-analytics-to-transform-public-safety/

Claro and Town of Dover, NJ Launch AI Video Analytics to Transform Public Safety

From a press release by Claro:

The Town of Dover has taken a bold step forward in public safety by partnering with Claro to deploy advanced AI-driven surveillance technology across its municipal buildings. This initiative, which enhances both security and operational efficiency, is already being recognized as a model for smart city innovation.
With the help of Claro’s expertise, the Town of Dover was able to integrate AI Video Analytics use cases, such as visible weapons detection and facial recognition, into its existing camera system – avoiding the significant cost and disruption of a full infrastructure overhaul.
“As a small municipality, we don’t have the budget for constant law enforcement presence,” said Mayor James Dodd. “Claro gave us the ability to enhance safety with cutting-edge technology that works with what we already have.”

Read more of the press release.

Claro and the town obviously think this is a good thing, but those of us who appreciate the First Amendment and Fourth Amendment may not agree. Here’s the version of their headline Joe Cadillic sent us:

“Town of Dover NJ to use Claro facial recognition in libraries, municipal buildings and to ID people in crowds”

Libraries? Privacy in libraries is especially sensitive, precisely so that no one can observe what books people are reading or checking out. How will Claro be used in libraries?

Yes, there is supposedly no reasonable expectation of privacy in public, but in this day and age of citizens and immigrants being rounded up by ICE, or otherwise at risk of civil liberties violations, do we really need MORE surveillance? Will immigrants need to be fearful if they try to read up on immigration law in the Dover library? We hope not.



Friday, October 17, 2025

If you’re a writer, learn to write right.

https://www.zdnet.com/article/this-free-google-ai-course-could-transform-how-you-research-and-write-but-act-fast/

This free Google AI course could transform how you research and write - but act fast

Take a free 4-module course that begins next week.

The course is called "Google AI Tools for Journalists: Optimizing Editorial Workflow, Content Creation, and Audience Engagement." It's obviously intended for journalists, but it could also be valuable to bloggers, influencers, teachers, students, YouTubers, or any other communicator.





Concern. AI is now trained on AI-generated content, which is at least partly hallucinated...

https://www.zdnet.com/article/more-than-half-of-new-content-is-ai-generated-now-report-finds/

More than half of new content is AI-generated now, report finds

You aren't imagining it: there is more AI slop on the internet. According to SEO firm Graphite, more than half of the written content on the internet is now AI-generated.



Thursday, October 16, 2025

Fall Privacy Foundation Lunch/Seminar.

Sturm College of Law

Friday, October 17th from 12:00 to 2:00

Topic: Identity Theft

To register contact:

Kristen Dermyer, Academic Programs Coordinator: Kristen.dermyer@du.edu, 303-871-6487





Perspective.

https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/

How People Around the World View AI

More are concerned than excited about its use, and more trust their own country and the EU to regulate it than trust the U.S. or China

A median of 34% of adults across these countries have heard or read a lot about AI, while 47% have heard a little and 14% say they’ve heard nothing at all, according to a spring 2025 Pew Research Center survey.

But many are worried about AI’s effects on daily life. A median of 34% of adults say they are more concerned than excited about the increased use of AI, while 42% are equally concerned and excited. A median of 16% are more excited than concerned.





Another perspective.

https://www.bespacific.com/ai-is-not-popular-and-ai-users-are-unpleasant-asshats/

AI is not popular, and AI users are unpleasant asshats

Pivot to AI – It can’t be that stupid, you must be prompting it wrong: “Some AI papers have a finding so good you just want to quote it everywhere. But many turn out to be trash. They’ve got bad statistics, they’ve got bad methodology, they’re really sales pieces, or they just don’t prove their claimed result. If you see a finding you love, you can’t skip reading the paper before you post about it.

Today’s is “Evaluating Artificial Intelligence Use and Its Psychological Correlates via Months of Web-Browsing Data.” It’s a peer-reviewed journal article, published in Cyberpsychology, Behavior, and Social Networking, September 2025. [Liebertarchive, PDF] The researchers measured 14 million website visits by 499 students and 455 members of the general public over a 90-day period.

Firstly, nobody used AI very much — 1% of student web-browsing was AI, 0.44% of the general public study. Secondly, the AI users were not very nice people:

The most consistent predictors of AI use across studies were aversive personality traits (e.g., Machiavellianism, narcissism, psychopathy).

So AI is not actually popular, and AI users are unpleasant assholes. Now you might go “yeah, figures.” But let’s dive in and see how well it backs it up. The first thing the researchers did was not trust the users to self-report. They measured their web browsing by getting 90 days of browser history from Chrome on a desktop — so no mobile or app usage. They did collect the users’ self-reports, which were just incorrect:

we observed that self-reported AI use and actual AI use were only moderately correlated (ρ = 0.329).

If the users went to a chatbot site, that’s obviously AI. For other sites, the researchers picked the Interactive Advertising Bureau category by … running the addresses through a chatbot, GPT-4o. That’s not so great, though they tested GPT-4o against a random 200 of the addresses and it was correct on all but one. So they figured it’d do. The researchers were surprised how low AI usage actually was. The lead author, Emily McKinley, said: [PsyPost]

We were genuinely surprised by how infrequent AI use was, even among students who typically serve as early adopters of emerging technologies.


Wednesday, October 15, 2025

My AI made me double down? (Note that opposing counsel did take the time to check.)

https://www.bespacific.com/lawyer-caught-using-ai-while-explaining-to-court-why-he-used-ai/

Lawyer Caught Using AI While Explaining to Court Why He Used AI

404 Media [no paywall]: “An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month. New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff’s attorneys’ request for sanctions that the defendant’s counsel, Michael Fourte’s law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff’s motion for sanctions, but also included “multiple new AI-hallucinated citations and quotations” in the process of opposing the motion.  “In other words,” the judge wrote, “counsel relied upon unvetted AI — in his telling, via inadequately supervised colleagues — to defend his use of unvetted AI.” The case itself centers on a dispute between family members and a defaulted loan. The details of the case involve a fairly run-of-the-mill domestic money beef, but Fourte’s office allegedly using AI that generated fake citations, and then inserting nonexistent citations into the opposition brief, has become the bigger story. The plaintiff and their lawyers discovered “inaccurate citations and quotations in Defendants’ opposition brief that appeared to be ‘hallucinated’ by an AI tool,” the judge wrote in his decision to sanction Fourte. After the plaintiffs brought this issue to the Court’s attention, the judge wrote, Fourte submitted a response where the attorney “without admitting or denying the use of AI, ‘acknowledge[d] that several passages were inadvertently enclosed in quotation’ and ‘clarif[ied] that these passages were intended as paraphrases or summarized statements of the legal principles established in the cited authorities.’”…





Perspective.

https://theconversation.com/russias-permanent-test-is-pushing-europe-to-the-brink-of-war-heres-what-moscow-actually-wants-266826

Russia’s ‘permanent test’ is pushing Europe to the brink of war – here’s what Moscow actually wants

The scenes have become grimly familiar: Russian tanks rolling into Georgia in 2008, the seizure of Crimea in 2014, the invasion of Ukraine in 2022, Russian military jets violating European airspace, and now mysterious drone sightings closing airports across Europe.

While these may seem like disconnected events, in reality they are but chapters in a singular, focused and evolving strategy. Russia’s aim is to wield military power when necessary, engage in “grey-zone” war tactics when possible, and exert political pressure everywhere. Moscow has been doing all this for decades, with one objective in mind: to redraw Europe’s security map without triggering direct war with Nato.



Tuesday, October 14, 2025

Just a “heads up” for the Fall Privacy Foundation Lunch/Seminar.

Sturm College of Law

Friday, October 17th from 12:00 to 2:00

Topic: Identity Theft





Being social is a health risk?

https://www.politico.com/news/2025/10/13/california-health-warning-labels-social-media-00606066?utm_campaign=mb&utm_medium=newsletter&utm_source=morning_brew

Social media must warn users of ‘profound’ health risks under new California law

Gov. Gavin Newsom on Monday signed a law mandating health warning labels for social media, making California the latest U.S. state to wield a rule originally designed to curb tobacco addiction as a digital safety feature.

The new law is part of a national push to combat social media’s potential health risks that has grown since former President Joe Biden’s surgeon general first advocated the labels. Recent research has linked the technology to increased anxiety, body dysmorphia and sleep interruption in children, among other impacts.



(Related)

https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/

California becomes first state to regulate AI companion chatbots

California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.

The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.





Tools & Techniques.

https://www.bespacific.com/promptlaw/

PromptLaw

Via Greg Siskind:

What is Prompt.law?  Prompt.law is an online platform where lawyers can find, create, and share AI-generated legal task prompts. Think of it as a “cookbook” of AI prompts designed specifically for legal professionals to streamline their work.

Who can use Prompt.law?  Prompt.law is designed for legal professionals, including attorneys, law firm staff, legal tech developers, and in-house counsel who want to leverage AI for legal tasks.

How does Prompt.law work?  Users can browse a library of AI prompts tailored for legal tasks, submit their own prompts, and customize prompts for specific use cases. Prompts are categorized by practice area and legal function for easy discovery.

Is Prompt.law free to use?  Yes. Prompt.law is free. However, in order to use Prompt.law, users must register for an account.



Sunday, October 12, 2025

New definitions. Could they apply to all media?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5561522

The New Art Forgers

The “substantial similarity” between a copyrighted work and an unauthorized derivative has formed the bedrock of copyright infringement jurisprudence since the mid-nineteenth century. Recent technological developments, however, are destabilizing these conceptual foundations. In May, the Copyright Office suggested that the use of copyrighted works to train AI models may constitute infringement even if model outputs are not “substantially similar” to model inputs if they nevertheless “dilute the market” for similar works. One month later, Judge Chhabria of the Northern District of California argued that AI outputs do not have to be “substantially similar” to copyrighted training data in order to be infringing. The plaintiff’s incentives are sufficiently harmed, Judge Chhabria argued, when the market is flooded with “similar enough” AI-generated works.

These developments should be read as early warning signs of a disturbing doctrinal shift from “substantial similarity” to a new and dubious threshold for actionable infringement: “substitutive similarity”, where the substitutability of the defendant’s work, rather than the similarity of protected expression, provides the cause of action. This novel theory of harm, if widely adopted, would impose dangerous restrictions on downstream creativity. Any new work that was “similar enough” to existing works would be treated as potentially infringing, despite the absence of substantially similar expression. This would corrupt what is essentially a question of fact – whether the defendant copied “enough” of the plaintiff’s work to constitute unlawful appropriation – with deontic considerations of the wrongfulness of free-riding.

At the same time, artists are understandably rattled by the speed and scale of AI generation. AI models can produce “new” works in the style of established artists in a matter of seconds, dramatically undercutting the market for their work. AI style mimicry makes it difficult for artists to control their personal brands and for consumers to locate authentic works by their favorite artists. Copyright is responsible for protecting artists’ creative incentives, but its legal tests were not designed to handle the scale of imitation enabled by AI.

This Article offers a way out of this jurisprudential morass. Instead of lowering the burden of proof for infringement, Congress should strengthen the attribution rights of existing creators. Low-protectionists have long advocated for attribution rights as a way of protecting authors’ interests without expanding the scope of their economic entitlements. Proper attribution allows creators to capture the full reputational benefits of their labor without stifling downstream creativity. For example, Congress could enact an AI-specific attribution right that requires the disclosure of copyrighted training data in output metadata. This would mitigate the labor-displacing effects of generative AI by directing consumers to the original creators of a popular style or aesthetic.



Generative AI places copyright jurisprudence at a critical crossroads. Indulging Judge Chhabria’s novel theory of harm would effectively inaugurate a new standard for infringement – “substitutive similarity” – that would stifle not just AI innovation but human creativity more broadly. The stakes for protecting free expression through careful guardianship of longstanding doctrine could not be higher. This Article guides readers through this critical inflection point with new terminology for the jurisprudential lexicon as well as practical proposals for reform.





Interesting idea.

https://www.proquest.com/openview/f49bcfbaea46db396599409c08492adf/1?pq-origsite=gscholar&cbl=18750&diss=y

The Upcoming Moral Crisis in Primitive Artificial Intelligence

As we continue to develop artificially intelligent systems, there is an increasingly high chance that we will develop a system that is both conscious and capable of suffering. Furthermore, it is likely that the development of this conscious machine will be entirely unintentional. While this machine will have moral status, identifying it will be extremely difficult, leading to it being treated the same as its inert predecessors. For these reasons I believe that a crisis in ethics is looming. This paper aims to argue that it is possible for a machine to have moral status, that the first such machine will likely be produced unintentionally, and that identifying this machine will involve significant difficulties.





At least they are thinking about it…

https://www.reuters.com/legal/government/new-york-court-system-sets-rules-ai-use-by-judges-staff-2025-10-10/

New York court system sets rules for AI use by judges, staff

The New York state court system on Friday set out a new policy on the use of artificial intelligence by judges and other court staff, joining at least four other U.S. states that have adopted similar rules in the past year.


The interim policy, which applies to all judges, justices and nonjudicial employees in the New York Unified Court System, limits the use of generative AI to approved products and mandates AI training.