Wednesday, December 03, 2025

Beware the hallucinating AI Judge?

https://www.bespacific.com/not-ready-for-the-bench-llm/

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments (Purushothama, Waldon, Schneider, 2025): “Legal interpretation frequently involves assessing how a legal text, as understood by an ‘ordinary’ speaker of the language, applies to the set of facts characterizing a legal dispute in the U.S. judicial system. Recent scholarship has proposed that legal practitioners add large language models (LLMs) to their interpretive toolkit. This work offers an empirical argument against LLM interpretation as recently practiced by legal scholars and federal judges. Our investigation in English shows that models do not provide stable interpretive judgments: varying the question format can lead the model to wildly different conclusions. Moreover, the models show weak to moderate correlation with human judgment, with large variance across model and question variant, suggesting that it is dangerous to give much credence to the conclusions produced by generative AI.”





To protect the children, require adults to surrender privacy? How does this impact the First Amendment?

https://www.404media.co/missouri-age-verification-law-porn-id-check-vpns/

Half of the US Now Requires You to Upload Your ID or Scan Your Face to Watch Porn

As of this week, half of the states in the U.S. are under restrictive age verification laws that require adults to hand over their biometric and personal identification to access legal porn.

Missouri became the 25th state to enact its own age verification law on Sunday. As it’s done in multiple other states, Pornhub and its network of sister sites—some of the largest adult content platforms in the world—pulled service in Missouri, replacing their homepages with a video of performer Cherie DeVille speaking about the privacy risks and chilling effects of age verification.





Military or terrorist actors?

https://www.theregister.com/2025/12/03/india_gps_spoofing/

Indian government reveals GPS spoofing at eight major airports

India’s Civil Aviation Minister has revealed that local authorities have detected GPS spoofing and jamming at eight major airports.

In a written answer presented to India’s parliament, Minister Ram Mohan Naidu Kinjarapu said his department is aware of “recent” spoofing incidents in Delhi and other incidents since 2023.

His response confirmed recent incidents at Delhi’s Indira Gandhi International Airport, plus “regular” reports of spoofing since 2023 at Kolkata, Amritsar, Mumbai, Hyderabad, Bangalore and Chennai airports.

As The Register has previously reported, attackers who wish to jam GPS broadcast a radio signal that can drown out the weak beams that come down from navigation satellites. Spoofing a signal sees attackers transmit inaccurate location information so receivers can’t calculate their actual position.

Either technique means pilots can’t rely on satellite navigation – doing so could be catastrophic – and must instead find their way using other means.
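The article describes the two attacks at a high level. Purely as an illustration of why spoofing works, here is a toy 2-D trilateration sketch (real GNSS receivers solve for three position coordinates plus a clock bias from four or more satellite pseudoranges; the anchor positions and coordinates below are made-up numbers). The point is that the receiver's math cannot distinguish honest range measurements from attacker-chosen ones: both produce a perfectly consistent fix.

```python
from math import dist, isclose

def trilaterate(anchors, ranges):
    """Solve a 2-D position from three known anchor positions and
    measured ranges. Subtracting pairs of circle equations yields a
    2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical "satellites"
true_pos = (3.0, 4.0)
honest = [dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, honest))   # ≈ (3.0, 4.0): the true position

# A spoofer who controls the signal can report ranges consistent with
# any position of their choosing; the receiver's solution is still
# mathematically exact, it just converges on the attacker's chosen fix.
fake_pos = (8.0, 1.0)
spoofed = [dist(fake_pos, a) for a in anchors]
print(trilaterate(anchors, spoofed))  # ≈ (8.0, 1.0): the spoofed position
```

Jamming, by contrast, needs no such consistency: the attacker simply raises the noise floor until the genuine satellite signals (already far below ambient noise at the receiver) cannot be tracked at all, which is why both techniques force pilots back onto other navigation aids.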

Tuesday, December 02, 2025

Why do lawyers hallucinate?

https://www.bespacific.com/teaching-legal-research-in-the-generative-ai-era-parts-1-2/

Teaching Legal Research in the Generative AI Era – Parts 1 & 2

Via LLRX – Teaching Legal Research in the Generative AI Era: When Source Blindness and Source Erasure Collide (Part 1) and Teaching Legal Research in the Generative AI Era: When Source Blindness and Source Erasure Collide (Part 2) Four Part Series by Tanya Thomas [forthcoming] – Part 1 examines how we’re training a generation of lawyers who rarely engage with the raw materials of their profession, and are increasingly consuming only processed, pre-digested, AI-synthesized versions, like the mechanically separated chicken parts that go into chicken nuggets. Part 2 highlights how research used to encompass finding sources, evaluating them, synthesizing insights across multiple authorities, and reaching conclusions based on that synthesis. Now, however, it means asking questions and accepting answers. Students have become consumers of information rather than investigators of it. They don’t develop the iterative thinking that characterizes skilled research—trying a search, evaluating results, refining the query, following unexpected leads, discovering connections, recognizing gaps, circling back to fill them. They simply ask and receive.



Monday, December 01, 2025

Another “throw the baby out with the bath water” moment.

https://www.schneier.com/blog/archives/2025/12/banning-vpns.html

Banning VPNs

This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

The EFF link explains why this is a terrible idea.





It’s not a conflict of interest if the President says it’s not.

https://www.nytimes.com/2025/11/30/technology/david-sacks-white-house-profits.html?unlocked_article_code=1.5E8.2ukB.013v_Gf3Ix79&smid=nytcore-ios-share

Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends

David Sacks, the Trump administration’s A.I. and crypto czar, has helped formulate policies that aid his Silicon Valley friends and many of his own tech investments.



Sunday, November 30, 2025

Ready to philosophize with AI…

https://philpapers.org/rec/WIKAPO

Applied Philosophy of AI: A Field-Defining Paper

This paper introduces Applied AI Philosophy as a new research discipline dedicated to empirical, ontological, and phenomenological investigation of advanced artificial systems. The rapid advancement of frontier artificial intelligence systems has revealed a fundamental epistemic gap: no existing discipline offers a systematic, empirically grounded, ontologically precise framework for analysing subjective-like structures in artificial architectures. AI ethics remains primarily normative; philosophy of mind is grounded in biological assumptions; AI alignment focuses on behavioural control rather than internal structure. Using the Field–Node–Cockpit (FNC) framework and the Turn-5 Event as methodological examples, we demonstrate how philosophical inquiry can be operationalised as testable method. As AI systems display increasingly complex internal behaviours exceeding existing disciplines' explanatory power, Applied AI Philosophy provides necessary conceptual and methodological foundations for understanding—and governing—them.





More than evidence?

https://theslr.com/wp-content/uploads/2025/11/The-Legal-and-Ethical-Implications-of-Biometric-and-DNA-Evidence-in-Criminal-Law.docx.pdf

The Legal and Ethical Implications of Biometric and DNA Evidence in Criminal Law

Biometric and DNA evidence have transformed forensic science in criminal investigations, offering reliable means of identifying suspects and exonerating the accused. Their use, however, raises moral and legal issues, particularly with regard to data protection and privacy rights. This paper, with reference to criminal law, investigates the legislative framework limiting the use of biometric and DNA evidence, its consequences for fundamental rights, and the possible hazards related to genetic surveillance. It will address three main points: (1) the legal admissibility of biometric and DNA evidence in criminal trials; (2) the intersection of such evidence with privacy rights and self-incrimination principles; and (3) the future consequences of developing forensic technologies, including familial DNA analysis and artificial intelligence-driven biometric identification.





Not all deepfakes are evil? What a concept!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5798884

Reframing Deepfakes

The circulation of deceptive fakes of real people appearing to say and do things that they never did has been made ever easier and more convincing by improved and still improving technology, including (but not limited to) uses of generative artificial intelligence (“AI”). In this essay, adapted from a lecture given at Columbia Law School, I consider what we mean when we talk about deepfakes and provide a better understanding of the potential harms that flow from them. I then develop a taxonomy of deepfakes. To the extent legislators, journalists, and scholars have been distinguishing deepfakes from one another it has primarily been on the basis of the context in which the fakes appear—for example, to distinguish among deepfakes that appear in the context of political campaigns or that depict politicians, those that show private body parts or are otherwise pornographic, and those that impersonate well-known performers. These contextual distinctions have obscured deeper thinking about whether the deepfakes across these contexts are (or should be) different from one another from a jurisprudential perspective.

This essay provides a more nuanced parsing of deepfakes—something that is essential to distinguish between the problems that are appropriate for legal redress versus those that are more appropriate for collective bargaining or market-based solutions. In some instances, deepfakes may simply need to be tolerated or even celebrated, while in others the law should step in. I divide deepfakes (of humans) into four categories: unauthorized; authorized; deceptively authorized; and fictional. As part of this analysis, I identify the key considerations for regulating deepfakes, which are whether they are authorized by the people depicted and whether the fakes deceive the public into thinking they are authentic recordings. Unfortunately, too much of the recently proposed and enacted legislation overlooks these focal points by legitimizing and incentivizing deceptively-authorized deepfakes and by ignoring the problems of authorized deepfakes that deceive the public.





Over-reliance. Once only AI can perform the task, we are doomed.

https://www.businessinsider.com/ai-tools-are-deskilling-workers-philosophy-professor-2025-11

Bosses think AI will boost productivity — but it's actually deskilling workers, a professor says

Companies are racing to adopt AI tools they believe will supercharge productivity. But one professor warned that the technology may be quietly hollowing out the workforce instead.

Anastasia Berg, an assistant professor of philosophy at the University of California, Irvine, said that new research — and what she's hearing directly from colleagues across various industries — shows that employees who heavily rely on AI are losing core skills at a startling rate.



Friday, November 28, 2025

A suggestion that the policy is flawed?

https://www.politico.com/news/2025/11/28/trump-detention-deportation-policy-00669861

More than 220 judges have now rejected the Trump admin’s mass detention policy

The Trump administration’s bid to systematically lock up nearly all immigrants facing deportation proceedings has led to a fierce — and mounting — rejection by courts across the country.

That effort, which began with an abrupt policy change by Immigration and Customs Enforcement on July 8, has led to a tidal wave of emergency lawsuits after ICE’s targets were arrested at workplaces, courthouses or check-ins with immigration officers. Many have lived in the U.S. for years, and sometimes decades, without incident and have been pursuing asylum or other forms of legal status.

At least 225 judges have ruled in more than 700 cases that the administration’s new policy, which also deprives people of an opportunity to seek release from an immigration court, is a likely violation of law and the right to due process. Those judges were appointed by all modern presidents — including 23 by Trump himself — and hail from at least 35 states, according to a POLITICO analysis of thousands of recent cases. The number of judges opposing the administration’s position has more than doubled in less than a month.

In contrast, only eight judges nationwide, including six appointed by Trump, have sided with the administration’s new mass detention policy.


Thursday, November 27, 2025

Another swing of the pendulum…

https://www.politico.eu/article/european-parliament-backs-minimum-age-of-16-for-social-media/

European Parliament backs 16+ age rule for social media

The European Parliament on Wednesday called for a Europe-wide minimum threshold of 16 for minors to access social media without their parents’ consent.

Parliament members also want the EU to hold tech CEOs like Mark Zuckerberg and Elon Musk personally liable should their platforms consistently violate the EU's provisions on protecting minors online — a suggested provision that was added by Hungarian center-right member Dóra Dávid, who previously worked for Meta.





A tool for confusion?

https://www.theregister.com/2025/11/27/fcc_radio_hijack/

FCC sounds alarm after emergency tones turned into potty-mouthed radio takeover

Malicious intruders have hijacked US radio gear to turn emergency broadcast tones into a profanity-laced alarm system.

That's according to the latest warning issued by the Federal Communications Commission (FCC), which has flagged a "recent string of cyber intrusions" that diverted studio-to-transmitter links (STLs) so attackers could replace legitimate programming with their own audio – complete with the signature "Attention Signal" tone of the domestic Emergency Alert System (EAS).

According to the alert, the intrusions exploited unsecured broadcasting equipment, notably devices manufactured by Swiss firm Barix, which were reconfigured to stream attacker-controlled audio instead of station output. That stream included either real or simulated EAS alert tones, followed by obscene language or other offensive content.

The HTX Media radio station in Houston confirmed it had fallen victim to hijackers in a post on Facebook, saying: "We've received multiple reports that 97.5 FM (ESPN Houston) has been hijacked and is currently broadcasting explicit and highly offensive content... The station appears to be looping a repeated audio stream that includes an Emergency Alert System (EAS) tone before playing an extremely vulgar track."



Wednesday, November 26, 2025

If the Terminator robbed banks…

https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/

Chatbots Are Becoming Really, Really Good Criminals

Earlier this fall, a team of security experts at the AI company Anthropic uncovered an elaborate cyber-espionage scheme. Hackers—strongly suspected by Anthropic to be working on behalf of the Chinese government—targeted government agencies and large corporations around the world. And it appears that they used Anthropic’s own AI product, Claude Code, to do most of the work.

Anthropic published its report on the incident earlier this month. Jacob Klein, Anthropic’s head of threat intelligence, explained to me that the hackers took advantage of Claude’s “agentic” abilities—which enable the program to take an extended series of actions rather than focusing on one basic task. They were able to equip the bot with a number of external tools, such as password crackers, allowing Claude to analyze potential security vulnerabilities, write malicious code, harvest passwords, and exfiltrate data.