Saturday, June 28, 2025

Perspective. (Before you can read, jump through these hoops.)

https://www.eff.org/deeplinks/2025/06/todays-supreme-court-decision-age-verification-tramples-free-speech-and-undermines

Today's Supreme Court Decision on Age Verification Tramples Free Speech and Undermines Privacy

Today’s decision in Free Speech Coalition v. Paxton is a direct blow to the free speech rights of adults. The Court ruled that “no person—adult or child—has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” This ruling allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.





Perspective.

https://www.zdnet.com/article/how-the-senates-ban-on-state-ai-regulation-imperils-internet-access/

How the Senate's ban on state AI regulation imperils internet access

The issue is twofold: if passed, the rule would both constitutionally prohibit states from enforcing AI legislation and put often critical funding for internet access at risk.



Friday, June 27, 2025

Will this spread to other countries?

https://www.ft.com/content/4a5235c5-acd0-4e81-9d44-2362a25c8eb3

Brazil supreme court rules digital platforms are liable for users’ posts

Brazil’s supreme court has ruled that social media platforms can be held legally responsible for users’ posts, in a decision that tightens regulation on technology giants in the country.

Companies such as Facebook, TikTok and X will have to act immediately to remove material such as hate speech, incitement to violence or “anti-democratic acts”, even without a prior judicial takedown order, as a result of the decision in Latin America’s largest nation late on Thursday.





Could I sue my twin brother?

https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence

Denmark to tackle deepfakes by giving people copyright to their own features

The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.





New tool…

https://www.404media.co/ice-is-using-a-new-facial-recognition-app-to-identify-people-leaked-emails-show/

ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show

Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field.

The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas. The document also shows how biometric systems built for one reason can be repurposed for another, a constant fear and critique from civil liberties proponents of facial recognition tools.





Can a non-person speak?

https://www.thefire.org/news/fire-court-ai-speech-still-speech-and-first-amendment-still-applies

FIRE to court: AI speech is still speech — and the First Amendment still applies

This week, FIRE filed a “friend-of-the-court” brief in Garcia v. Character Technologies urging immediate review of a federal court’s refusal to recognize the First Amendment implications of AI-generated speech.

The plaintiff in the lawsuit is the mother of a teenage boy who committed suicide after interacting with an AI chatbot modeled on the character Daenerys Targaryen from the popular fantasy series Game of Thrones. The suit alleges the interactions with the chatbot, one of hundreds of chatbots hosted on defendant Character Technologies’ platform, caused the teenager’s death. 

Character Technologies moved to dismiss the lawsuit, arguing among other things that the First Amendment protects chatbot outputs and bars the lawsuit’s claims. A federal district court in Orlando denied the motion, and in doing so stated it was “not prepared to hold that the Character A.I. LLM's output is speech.” 

FIRE’s brief argues the court failed to appreciate the free speech implications of its decision, which breaks with a well-established tradition of applying the First Amendment to new technologies with the same strength and scope as applies to established communication methods like the printing press or even the humble town square. The significant ramifications of this error for the future of free speech make it important for higher courts to provide immediate input.

Contrary to the court’s uncertainty about whether “words strung together by an LLM” are speech, assembling words to convey messages and information is the essence of speech. And, save for a limited number of carefully defined exceptions, the First Amendment protects speech — regardless of the tool used to create, produce, or transmit it.  



(Related)

https://cdt.org/insights/cdt-and-eff-urge-court-to-carefully-consider-users-first-amendment-rights-in-garcia-v-character-technologies-inc/

CDT and EFF Urge Court to Carefully Consider Users’ First Amendment Rights in Garcia v. Character Technologies, Inc.

On Monday, CDT and EFF sought leave to submit an amicus brief urging the U.S. District Court for the Middle District of Florida to grant an interlocutory appeal to the Eleventh Circuit to ensure adequate review of users’ First Amendment rights in Garcia v. Character Technologies, Inc. The case involves the tragic suicide of a child that followed his use of a chatbot and the complex First Amendment questions that accompany whether and how plaintiffs can appropriately recover damages alleged to stem from chatbot outputs.

CDT and EFF’s brief discusses how First Amendment-protected expression may be implicated throughout the design, delivery, and use of chatbot LLMs and urges the court to prioritize users’ interests in accessing chatbot outputs in its First Amendment analysis. The brief documents the Supreme Court’s long-standing precedent holding that the First Amendment’s protections for speech extend not just to speakers but also to people who seek out information. A failure to appropriately consider users’ First Amendment rights in relation to seeking information from chatbots, the brief argues, would open the door for unprecedented governmental interference in the ways that people can create, seek, and share information. 

Read the full brief.



Thursday, June 26, 2025

Constraining surveillance?

https://www.404media.co/flock-removes-states-from-national-lookup-tool-after-ice-and-abortion-searches-revealed/

Flock Removes States From National Lookup Tool After ICE and Abortion Searches Revealed

Flock, the automatic license plate reader (ALPR) company with a presence in thousands of communities across the U.S., has stopped agencies across the country from searching cameras inside Illinois, California, and Virginia, 404 Media has learned. The dramatic moves come after 404 Media revealed that local police departments were repeatedly performing lookups around the country on behalf of ICE, that a Texas officer searched cameras nationwide for a woman who self-administered an abortion, and that a new law was recently signed in Virginia. Ordinarily Flock allows agencies to opt into a national lookup database, where agencies in one state can access data collected in another as long as they also share their own data. This practice violates multiple state laws that bar ALPR data from being shared out of state or accessed for immigration or healthcare purposes.
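
A rough way to picture the opt-in model described above is as a policy check applied to each lookup. The sketch below is purely illustrative; the state list, purpose labels, and function names are assumptions, not Flock's actual system. It only renders the rules the article describes: reciprocal sharing, state-law limits on out-of-state access, and bans on immigration or healthcare lookups.

    # Hypothetical sketch of an ALPR national-lookup policy check.
    # Not Flock's code; the states, purposes, and rules below are assumptions
    # drawn only from the restrictions the article describes.

    RESTRICTED_STATES = {"IL", "CA", "VA"}               # laws bar out-of-state ALPR sharing
    PROHIBITED_PURPOSES = {"immigration", "healthcare"}  # lookup reasons barred by state law

    def lookup_allowed(requesting_state: str, camera_state: str,
                       shares_own_data: bool, purpose: str) -> bool:
        """Return True if a national-lookup query should be permitted under these rules."""
        if not shares_own_data:                 # reciprocity: share your data to search others'
            return False
        if camera_state in RESTRICTED_STATES and requesting_state != camera_state:
            return False                        # no out-of-state access to restricted states
        if purpose.lower() in PROHIBITED_PURPOSES:
            return False                        # barred purposes
        return True

    # A Texas agency searching Illinois cameras is refused even for a routine purpose.
    print(lookup_allowed("TX", "IL", shares_own_data=True, purpose="stolen vehicle"))  # False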



Wednesday, June 25, 2025

Okay, but promise you won’t pirate anything…

https://www.jurist.org/news/2025/06/us-federal-judge-makes-landmark-ruling-on-ai-copyright-law/

US federal judge issues landmark ruling on AI copyright law

A federal judge in California issued a landmark ruling on Monday in one of the first major court decisions addressing artificial intelligence (AI) training and copyright law. In a mixed ruling, the court found that training large language models (LLMs) on copyrighted books is legal under the doctrine of fair use, while also holding that AI platforms cannot use pirated materials to train their systems.

The lawsuit, filed by three authors, challenged the use of copyrighted materials by Anthropic, a leading AI company. Anthropic reported over $1 billion in annualized recurring revenue at the end of 2024, and is most known for its platform “Claude.” The authors contend that Anthropic used copyrighted books without permission to train Claude’s family of LLMs. The lawsuit shows that Anthropic has used several million books to train its systems — some were purchased in print form and then digitally scanned, while others were pirated from online sources.

Writing for the US District Court for the Northern District of California, Judge William Alsup ruled that converting legally purchased books to a digital format for the purpose of training LLMs does not infringe on copyright protections because it was merely a format change, and “was not done for purposes trenching upon the copyright owner’s rightful interests.” The ruling notes that Anthropic’s use of the copyrighted materials falls under the fair use doctrine because of its “transformative” nature. 




Tuesday, June 24, 2025

The very definition of ‘open source intelligence.’

https://www.bespacific.com/whats-the-pizza-meter-and-how-did-it-become-a-meme/

What’s The ‘Pizza Meter’ And How Did It Become A Meme?

The Pentagon Pizza Orders Conspiracy Theory Explainer. “The bigger the crisis and the more time government staffers are stuck in their offices, the more pizza they eat. At least that’s what the internet thinks, according to the “Pizza Meter” theory, which some believe could even predict the beginning of a possible World War III. The relatively old conspiracy theory resurfaced in early August this year after X user @RealBenGeller tweeted about the Pizza Meter being “off the charts” and the “bars” in Washington, D.C. being empty, sparking more predictions of a possible war or alarming controversy building inside the U.S. government. How could pizza deliveries indicate a possible threat of war? Allow us to explain! The Pizza Meter, also known as the Pentagon Pizza Orders Theory, proposes that upticks in pizza orders received by restaurants near the Pentagon can predict international conflicts and times of crisis in the U.S. government. The concept originated in the early 1990s, after Frank Meeks, a Domino’s Pizza franchise owner in Northern Virginia near the Pentagon, told newspapers that he saw a noticeable uptick in business before major national security events…”





Why?

https://www.wired.com/story/elon-musk-computer-sam-altman/

Elon Musk’s Lawyers Claim He ‘Does Not Use a Computer’

The claim appeared in a court filing related to Elon Musk’s ongoing lawsuit against Sam Altman and OpenAI. The Tesla and xAI owner has posted about his laptop numerous times in the past year.





Perspective.

https://www.eff.org/deeplinks/2025/06/no-fakes-act-has-changed-and-its-so-much-worse

The NO FAKES Act Has Changed – and It’s So Much Worse

A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations, balanced against the need to protect legitimate speech such as parodies and satires, the original NO FAKES just federalized an image-licensing system.

The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance, meaning adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”



Monday, June 23, 2025

Tools & Techniques? Interesting in many ways.

https://www.businessinsider.com/how-lawyer-used-ai-help-win-case-clearbrief-2025-6

A family was awarded $1.5 million after US border officers wrongfully detained their 9-year-old. Their lawyer shares which AI tool helped him win the case.

Some lawyers continue to fall for made-up cases generated by artificial intelligence. Others are quietly finding ways to make the technology work for them.

Joseph McMullen, a San Diego civil rights and criminal defense attorney, is one of them. Last year, he said, he used AI-powered legal software to help him win a major case by sifting through evidence and making his filings more persuasive.

He approached tools like ChatGPT with deep skepticism. In one of his early tests, the chatbot surfaced a case that seemed perfect — until he realized it didn't exist. "That was it. Never again," he said.

Barely a month went by without another story of a lawyer getting burned by bogus case law. Judges were catching on. A public database maintained by legal data analyst Damien Charlotin lists 120 cases where courts caught lawyers using fake or hallucinated citations. Most of the cases were in the US in the past 18 months.

Another attorney recommended Clearbrief, a tool that integrates with Microsoft Word and lets lawyers link every factual claim to the underlying evidence. The plugin recognizes citations using natural language processing and automatically generates links to relevant case law or documents.

When an attorney files a brief using Clearbrief, a judge or any recipient can open a hyperlinked version in Word or a browser. Each citation becomes interactive: Clicking on one pulls up the exact source text side-by-side with the brief, allowing the reader to verify claims faster without digging through exhibits or databases.
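
To illustrate the underlying idea (this is a hypothetical sketch, not Clearbrief's implementation; the citation pattern and lookup URL are invented for the example), a citation-linking tool might scan a draft for reporter-style case citations and wrap each one in a hyperlink to a source lookup:

    import re

    # Hypothetical sketch of citation linking, not Clearbrief's actual code.
    # Matches simple reporter-style citations like "598 U.S. 617" or "141 S. Ct. 2190".
    CITATION_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|S\. Ct\.|F\.\dd|F\. Supp\. \dd)\s+(\d{1,4})\b")

    def link_citations(text: str, base_url: str = "https://example-caselaw.test/search?q=") -> str:
        """Wrap each detected citation in an HTML hyperlink to a (hypothetical) source lookup."""
        def to_link(m: re.Match) -> str:
            citation = m.group(0)
            query = citation.replace(" ", "+")
            return f'<a href="{base_url}{query}">{citation}</a>'
        return CITATION_RE.sub(to_link, text)

    draft = "The standard was reaffirmed in 598 U.S. 617 and applied in 141 S. Ct. 2190."
    print(link_citations(draft))

A real tool would of course resolve each citation against an actual case-law database and handle far messier citation formats; the point here is only the mechanism of turning detected citations into verifiable links.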



(Related)

https://www.bespacific.com/deep-background-gpt-released/

Deep Background GPT Released

The world’s best AI fact-checking tool is now available to all as a completely free GPT. “People rightly asked me for an introduction for people who have never used SIFT Toolbox. Here it is — I just released a non-hallucinating rigorous AI-based fact-checker that anyone can use for free. And I don’t say that lightly: I literally co-wrote the book on using the internet to verify things. All you do is log into ChatGPT, click the link below, and put in a sentence or paragraph for it to fact check.” https://chatgpt.com/g/g-684fa334fb0c8191910d50a70baad796-deep-background-fact-checks-and-context?model=o3



Sunday, June 22, 2025

Automating legal stuff…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5309575

Generative Misinterpretation

In a series of provocative experiments, a loose group of scholars, lawyers, and judges has endorsed generative interpretation: asking large language models (LLMs) like ChatGPT and Claude to resolve interpretive issues from actual cases. With varying degrees of confidence, they argue that LLMs are (or will soon be) able to assist, or even replace, judges in performing interpretive tasks like determining the meaning of a term in a contract or statute. A few go even further and argue for using LLMs to decide entire cases and to generate opinions supporting those decisions.

We respectfully dissent. In this Article, we show that LLMs are not yet fit for purpose for use in judicial chambers. Generative interpretation, like all empirical methods, must bridge two gaps to be useful and legitimate. The first is a reliability gap: are its methods consistent and reproducible enough to be trusted in high-stakes, real-world settings? Unfortunately, as we show, LLM proponents' experimental results are brittle and frequently arbitrary. The second is an epistemic gap: do these methods measure what they purport to? Here, LLM proponents have pointed to (1) LLMs' training processes on large datasets, (2) empirical measures of LLM outputs, (3) the rhetorical persuasiveness of those outputs, and (4) the assumed predictability of algorithmic methods. We show, however, that all of these justifications rest on unstated and faulty premises about the nature of LLMs and the nature of judging.

The superficial fluency of LLM-generated text conceals fundamental gaps between what these models are currently capable of and what legal interpretation requires to be methodologically and socially legitimate. Put simply, any human or computer can put words on a page, but it takes something more to turn those words into a legitimate act of legal interpretation. LLM proponents do not yet have a plausible story of what that "something more" comprises.
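
The “reliability gap” the authors describe is easy to make concrete: ask a model the same interpretive question several times, under lightly rephrased prompts, and measure how often its answers agree. The sketch below is a minimal illustration of that kind of check, not anything from the paper; ask_model is a stub standing in for whatever LLM API one would actually call, and the contract question is invented.

    from collections import Counter

    # Minimal sketch of a reproducibility check for LLM-based interpretation.
    # ask_model() is a stub; replace it with a call to an actual LLM API.
    def ask_model(prompt: str) -> str:
        return "yes"  # placeholder answer so the sketch runs end to end

    PARAPHRASES = [
        "In this contract, does 'vehicle' include bicycles? Answer yes or no.",
        "Under the contract's terms, would a bicycle count as a 'vehicle'? Answer yes or no.",
        "Is a bicycle a 'vehicle' within the meaning of the contract? Answer yes or no.",
    ]

    def consistency(prompts, runs_per_prompt: int = 5) -> float:
        """Fraction of all answers that agree with the most common answer."""
        answers = [ask_model(p).strip().lower() for p in prompts for _ in range(runs_per_prompt)]
        return Counter(answers).most_common(1)[0][1] / len(answers)

    # Near 1.0 means stable answers across rephrasings; low scores are the kind of
    # brittleness the article argues undermines generative interpretation.
    print(consistency(PARAPHRASES))  # 1.0 with the stub; real models may vary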





Not to mention “bunker busters.”

https://thecrsss.com/index.php/Journal/article/view/611

Revisiting the Geneva Conventions: Are the Four Core Conventions Sufficient for 21st Century Warfare?

The Geneva Conventions of 1949, along with their Additional Protocols, form the foundation of international humanitarian law (IHL) governing armed conflict. However, the evolution of warfare in the 21st century, characterized by cyber operations, autonomous weapons, urban warfare, and the involvement of non-state actors, has raised serious questions about the adequacy of these core legal instruments. This study critically examines whether the four Geneva Conventions remain sufficient to address the humanitarian and legal challenges posed by modern warfare. Using a doctrinal legal research methodology, the study analyses treaty texts, customary international law, scholarly commentary, and case studies involving emerging conflict scenarios. It finds that while the Geneva Conventions continue to provide essential humanitarian safeguards, they lack specificity in addressing new forms of warfare, particularly in domains such as cyberspace and artificial intelligence. The article concludes that without reinterpretation or the adoption of supplementary legal frameworks, the protective regime of IHL may fall short in ensuring civilian protection and accountability in future conflicts. Thus, the study recommends a proactive re-evaluation of the Conventions through dynamic interpretation, the development of new protocols, and enhanced international cooperation.

The Geneva Conventions, widely regarded as the cornerstone of IHL, have guided the conduct of war and the protection of victims since their adoption in 1949. However, the evolution of warfare in the 21st century, marked by cyber conflicts, autonomous weapons systems, and non-state actors, has challenged the efficacy and comprehensiveness of these treaties. This article critically examines the relevance and sufficiency of the four Geneva Conventions in regulating modern armed conflicts. It contends that while the Conventions remain foundational, significant legal and normative gaps necessitate either expansive reinterpretation or supplementary legal instruments to ensure continued humanitarian protection.