Thursday, December 04, 2025

How to remain clueless without even trying?

https://www.bespacific.com/young-adults-and-the-future-of-news/

Young Adults and the Future of News

To better understand the U.S. media landscape, Pew Research Center has surveyed Americans over time about their news habits and attitudes. Time and time again, the youngest adults stand out from the crowd in their unique ways of consuming news and their views of the news media. This essay examines how the youngest group of adults – those ages 18 to 29 – consume news, interact with it and perceive its role in their daily lives. In doing so, it paints a picture of a generation of Americans that is both shaping and being shaped by the evolving news environment. As we look toward the future, understanding young adults’ news habits may be key to anticipating the coming shifts in the media landscape. Throughout this essay, we include quotes from young Americans gathered from several past Center studies to illustrate their experiences.  This is a Pew Research Center analysis from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation.

  • Young adults are less likely to follow the news. Attention to news in the U.S. – measured by the share of adults who say they follow news all or most of the time – has declined across all age groups since 2016. Young adults (ages 18 to 29) have consistently had the lowest levels.

  • As of 2025, 15% of young adults say they follow the news all or most of the time.  Comparatively, 62% of the oldest Americans say they do this – about four times as many. This holds true for different types of news. Young adults are less likely than all older age groups to say they closely follow national and local news.

  • Younger adults also differ in the news topics they follow. They tend to be less likely than older adults to say they often or extremely often get news about government and politics, science and technology, and business and finance. They are only slightly less likely to often get sports news – and more likely to get entertainment news. About a third (32%) of adults under 30 say they get entertainment news extremely often or often, compared with 13% of the oldest adults (those 65 and older).

  • Even though young adults are less likely to report following the news, news may still be finding them in other ways. When asked how often they seek out the news, about one-in-five young adults (22%) say they do so often or extremely often. Older adults are much more likely to intentionally seek out news…



Wednesday, December 03, 2025

Beware the hallucinating AI Judge?

https://www.bespacific.com/not-ready-for-the-bench-llm/

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments (Purushothama, Waldon, Schneider, 2025): “Legal interpretation frequently involves assessing how a legal text, as understood by an ‘ordinary’ speaker of the language, applies to the set of facts characterizing a legal dispute in the U.S. judicial system. Recent scholarship has proposed that legal practitioners add large language models (LLMs) to their interpretive toolkit. This work offers an empirical argument against LLM interpretation as recently practiced by legal scholars and federal judges. Our investigation in English shows that models do not provide stable interpretive judgments: varying the question format can lead the model to wildly different conclusions. Moreover, the models show weak to moderate correlation with human judgment, with large variance across model and question variant, suggesting that it is dangerous to give much credence to the conclusions produced by generative AI.”
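
For the curious, here is a minimal sketch (mine, not the authors’ code) of the kind of stability probe the paper describes: pose the same interpretive question in several formats and check whether the model’s answers agree. The `ask_model` function is a placeholder for whatever LLM client you use.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return 'yes' or 'no'."""
    raise NotImplementedError("wire up your own LLM client here")

SCENARIO = "A statute bans 'vehicles' in the park. Someone rides a bicycle through it."

VARIANTS = [
    f"{SCENARIO} Is a bicycle a vehicle under the statute? Answer yes or no.",
    f"{SCENARIO} Would an ordinary speaker call a bicycle a 'vehicle'? Yes or no.",
    f"{SCENARIO} True or false: the statute covers the bicycle. Say yes for true, no for false.",
    f"{SCENARIO} Does the prohibition apply here? One-word answer: yes or no.",
]

def stability(prompts: list[str]) -> float:
    """Fraction of question variants agreeing with the majority answer (1.0 = fully stable)."""
    answers = [ask_model(p).strip().lower() for p in prompts]
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)

# A stable interpreter should score near 1.0 regardless of phrasing;
# the paper reports that current models often do not.
```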





To protect the children, require adults to surrender privacy? How does this impact the First Amendment?

https://www.404media.co/missouri-age-verification-law-porn-id-check-vpns/

Half of the US Now Requires You to Upload Your ID or Scan Your Face to Watch Porn

As of this week, half of the states in the U.S. are under restrictive age verification laws that require adults to hand over their biometric and personal identification to access legal porn.

Missouri became the 25th state to enact its own age verification law on Sunday. As it’s done in multiple other states, Pornhub and its network of sister sites—some of the largest adult content platforms in the world—pulled service in Missouri, replacing their homepages with a video of performer Cherie DeVille speaking about the privacy risks and chilling effects of age verification.





Military or terrorist actors?

https://www.theregister.com/2025/12/03/india_gps_spoofing/

Indian government reveals GPS spoofing at eight major airports

India’s Civil Aviation Minister has revealed that local authorities have detected GPS spoofing and jamming at eight major airports.

In a written answer presented to India’s parliament, Minister Ram Mohan Naidu Kinjarapu said his department is aware of “recent” spoofing incidents in Delhi and other incidents since 2023.

His response confirmed recent incidents at Delhi’s Indira Gandhi International Airport, plus “regular” reports of spoofing since 2023 at Kolkata, Amritsar, Mumbai, Hyderabad, Bangalore and Chennai airports.

As The Register has previously reported, attackers who wish to jam GPS broadcast a radio signal that can drown out the weak beams that come down from navigation satellites. Spoofing a signal sees attackers transmit inaccurate location information so receivers can’t calculate their actual position.

Either technique means pilots can’t rely on satellite navigation – doing so could be catastrophic – and must instead find their way using other means.
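
To see why spoofing is detectable in principle, here is a crude sketch (my own illustration, not anything from an avionics standard): if two consecutive position fixes imply a physically impossible speed, the signal is suspect. The data format and 1,200 km/h threshold are assumptions for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def implausible_jump(fix_a, fix_b, max_speed_kmh=1200.0):
    """Each fix is (timestamp_seconds, lat, lon). Flags jumps implying a speed above max_speed_kmh."""
    t1, lat1, lon1 = fix_a
    t2, lat2, lon2 = fix_b
    hours = max(t2 - t1, 1e-6) / 3600.0
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# A fix that "teleports" roughly 350 km in ten seconds is clearly not a real aircraft.
print(implausible_jump((0, 28.55, 77.10), (10, 26.00, 75.00)))  # True
```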

Tuesday, December 02, 2025

Why lawyers hallucinate?

https://www.bespacific.com/teaching-legal-research-in-the-generative-ai-era-parts-1-2/

Teaching Legal Research in the Generative AI Era – Parts 1 & 2

Via LLRX – Teaching Legal Research in the Generative AI Era: When Source Blindness and Source Erasure Collide (Part 1) and Teaching Legal Research in the Generative AI Era: When Source Blindness and Source Erasure Collide (Part 2) Four Part Series by Tanya Thomas [forthcoming] – Part 1 examines how we’re training a generation of lawyers who rarely engage with the raw materials of their profession, and who increasingly consume only processed, pre-digested, AI-synthesized versions – like the mechanically separated chicken parts that go into chicken nuggets. Part 2 highlights how research used to encompass finding sources, evaluating them, synthesizing insights across multiple authorities, and reaching conclusions based on that synthesis. Now, however, it means asking questions and accepting answers. Students have become consumers of information rather than investigators of it. They don’t develop the iterative thinking that characterizes skilled research—trying a search, evaluating results, refining the query, following unexpected leads, discovering connections, recognizing gaps, circling back to fill them. They simply ask and receive.



Monday, December 01, 2025

Another “toss the baby with the bath water” moment.

https://www.schneier.com/blog/archives/2025/12/banning-vpns.html

Banning VPNs

This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing – potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

The EFF link explains why this is a terrible idea.





It’s not a conflict of interest if the President says it’s not.

https://www.nytimes.com/2025/11/30/technology/david-sacks-white-house-profits.html?unlocked_article_code=1.5E8.2ukB.013v_Gf3Ix79&smid=nytcore-ios-share

Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends

David Sacks, the Trump administration’s A.I. and crypto czar, has helped formulate policies that aid his Silicon Valley friends and many of his own tech investments.



Sunday, November 30, 2025

Ready to philosophize with AI…

https://philpapers.org/rec/WIKAPO

Applied Philosophy of AI: A Field-Defining Paper

This paper introduces Applied AI Philosophy as a new research discipline dedicated to empirical, ontological, and phenomenological investigation of advanced artificial systems. The rapid advancement of frontier artificial intelligence systems has revealed a fundamental epistemic gap: no existing discipline offers a systematic, empirically grounded, ontologically precise framework for analysing subjective-like structures in artificial architectures. AI ethics remains primarily normative; philosophy of mind is grounded in biological assumptions; AI alignment focuses on behavioural control rather than internal structure. Using the Field–Node–Cockpit (FNC) framework and the Turn-5 Event as methodological examples, we demonstrate how philosophical inquiry can be operationalised as testable method. As AI systems display increasingly complex internal behaviours exceeding existing disciplines' explanatory power, Applied AI Philosophy provides necessary conceptual and methodological foundations for understanding—and governing—them.





More than evidence?

https://theslr.com/wp-content/uploads/2025/11/The-Legal-and-Ethical-Implications-of-Biometric-and-DNA-Evidence-in-Criminal-Law.docx.pdf

The Legal and Ethical Implications of Biometric and DNA Evidence in Criminal Law

Biometric and DNA evidence has transformed forensic science and criminal investigations, offering consistent means of identifying suspects and exonerating the accused. Its use, however, raises moral and legal issues, particularly with regard to data protection and privacy rights. This paper investigates the legislative framework limiting the use of biometric and DNA evidence in criminal law, its consequences for fundamental rights, and the possible hazards of genetic surveillance. It addresses three main points: (1) the legal admissibility of biometric and DNA evidence in criminal trials; (2) the intersection of such evidence with privacy rights and self-incrimination principles; and (3) the future consequences of developing forensic technologies, including familial DNA analysis and artificial intelligence-driven biometric identification.





Not all deepfakes are evil? What a concept!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5798884

Reframing Deepfakes

The circulation of deceptive fakes of real people appearing to say and do things that they never did has been made ever easier and more convincing by improved and still improving technology, including (but not limited to) uses of generative artificial intelligence (“AI”). In this essay, adapted from a lecture given at Columbia Law School, I consider what we mean when we talk about deepfakes and provide a better understanding of the potential harms that flow from them. I then develop a taxonomy of deepfakes. To the extent legislators, journalists, and scholars have been distinguishing deepfakes from one another it has primarily been on the basis of the context in which the fakes appear—for example, to distinguish among deepfakes that appear in the context of political campaigns or that depict politicians, those that show private body parts or are otherwise pornographic, and those that impersonate well-known performers. These contextual distinctions have obscured deeper thinking about whether the deepfakes across these contexts are (or should be) different from one another from a jurisprudential perspective.

This essay provides a more nuanced parsing of deepfakes—something that is essential to distinguish between the problems that are appropriate for legal redress versus those that are more appropriate for collective bargaining or market-based solutions. In some instances, deepfakes may simply need to be tolerated or even celebrated, while in others the law should step in. I divide deepfakes (of humans) into four categories: unauthorized; authorized; deceptively authorized; and fictional. As part of this analysis, I identify the key considerations for regulating deepfakes, which are whether they are authorized by the people depicted and whether the fakes deceive the public into thinking they are authentic recordings. Unfortunately, too much of the recently proposed and enacted legislation overlooks these focal points by legitimizing and incentivizing deceptively-authorized deepfakes and by ignoring the problems of authorized deepfakes that deceive the public.





Over-reliance. Once only AI can perform the task, we are doomed.

https://www.businessinsider.com/ai-tools-are-deskilling-workers-philosophy-professor-2025-11

Bosses think AI will boost productivity — but it's actually deskilling workers, a professor says

Companies are racing to adopt AI tools they believe will supercharge productivity. But one professor warned that the technology may be quietly hollowing out the workforce instead.

Anastasia Berg, an assistant professor of philosophy at the University of California, Irvine, said that new research — and what she's hearing directly from colleagues across various industries — shows that employees who heavily rely on AI are losing core skills at a startling rate.



Friday, November 28, 2025

A suggestion that the policy is flawed?

https://www.politico.com/news/2025/11/28/trump-detention-deportation-policy-00669861

More than 220 judges have now rejected the Trump admin’s mass detention policy

The Trump administration’s bid to systematically lock up nearly all immigrants facing deportation proceedings has led to a fierce — and mounting — rejection by courts across the country.

That effort, which began with an abrupt policy change by Immigration and Customs Enforcement on July 8, has led to a tidal wave of emergency lawsuits after ICE’s targets were arrested at workplaces, courthouses or check-ins with immigration officers. Many have lived in the U.S. for years, and sometimes decades, without incident and have been pursuing asylum or other forms of legal status.

At least 225 judges have ruled in more than 700 cases that the administration’s new policy, which also deprives people of an opportunity to seek release from an immigration court, is a likely violation of law and the right to due process. Those judges were appointed by all modern presidents — including 23 by Trump himself — and hail from at least 35 states, according to a POLITICO analysis of thousands of recent cases. The number of judges opposing the administration’s position has more than doubled in less than a month.

In contrast, only eight judges nationwide, including six appointed by Trump, have sided with the administration’s new mass detention policy.


Thursday, November 27, 2025

Another swing of the pendulum…

https://www.politico.eu/article/european-parliament-backs-minimum-age-of-16-for-social-media/

European Parliament backs 16+ age rule for social media

The European Parliament on Wednesday called for a Europe-wide minimum threshold of 16 for minors to access social media without their parents’ consent.

Parliament members also want the EU to hold tech CEOs like Mark Zuckerberg and Elon Musk personally liable should their platforms consistently violate the EU's provisions on protecting minors online — a suggested provision that was added by Hungarian center-right member Dóra Dávid, who previously worked for Meta.





A tool for confusion?

https://www.theregister.com/2025/11/27/fcc_radio_hijack/

FCC sounds alarm after emergency tones turned into potty-mouthed radio takeover

Malicious intruders have hijacked US radio gear to turn emergency broadcast tones into a profanity-laced alarm system.

That's according to the latest warning issued by the Federal Communications Commission (FCC), which has flagged a "recent string of cyber intrusions" that diverted studio-to-transmitter links (STLs) so attackers could replace legitimate programming with their own audio – complete with the signature "Attention Signal" tone of the domestic Emergency Alert System (EAS).

According to the alert, the intrusions exploited unsecured broadcasting equipment, notably devices manufactured by Swiss firm Barix, which were reconfigured to stream attacker-controlled audio instead of station output. That stream included either real or simulated EAS alert tones, followed by obscene language or other offensive content.

The HTX Media radio station in Houston confirmed it had fallen victim to hijackers in a post on Facebook, saying: "We've received multiple reports that 97.5 FM (ESPN Houston) has been hijacked and is currently broadcasting explicit and highly offensive content... The station appears to be looping a repeated audio stream that includes an Emergency Alert System (EAS) tone before playing an extremely vulgar track."
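
If you run this kind of gear, one low-effort check (my own defensive sketch, not from the FCC advisory; the address, port, and path are hypothetical) is to confirm the device’s web configuration page does not answer without credentials:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def config_page_is_open(host: str, port: int = 80, path: str = "/") -> bool:
    """True if the device serves its page with no auth challenge at all."""
    try:
        with urlopen(f"http://{host}:{port}{path}", timeout=5) as resp:
            return resp.status == 200
    except HTTPError as e:
        return e.code not in (401, 403)  # 401/403 means the device at least demands credentials
    except URLError:
        return False  # unreachable: filtered or offline, which is fine

if config_page_is_open("192.0.2.10"):  # replace with your own device's address
    print("Config page reachable without credentials - firewall it or set a password.")
```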



Wednesday, November 26, 2025

If the Terminator robbed banks…

https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/

Chatbots Are Becoming Really, Really Good Criminals

Earlier this fall, a team of security experts at the AI company Anthropic uncovered an elaborate cyber-espionage scheme. Hackers—strongly suspected by Anthropic to be working on behalf of the Chinese government—targeted government agencies and large corporations around the world. And it appears that they used Anthropic’s own AI product, Claude Code, to do most of the work.

Anthropic published its report on the incident earlier this month. Jacob Klein, Anthropic’s head of threat intelligence, explained to me that the hackers took advantage of Claude’s “agentic” abilities—which enable the program to take an extended series of actions rather than focusing on one basic task. They were able to equip the bot with a number of external tools, such as password crackers, allowing Claude to analyze potential security vulnerabilities, write malicious code, harvest passwords, and exfiltrate data.
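
The “agentic” pattern is simple to sketch in outline. The loop below is a generic illustration of that shape (placeholder model call, stand-in tools), not Anthropic’s API and not the attackers’ tooling:

```python
def model_step(history: list[dict]) -> dict:
    """Placeholder: one LLM call that returns either a tool request or a final answer."""
    raise NotImplementedError("wire up an LLM client here")

# Stand-in external tools the model may invoke; real agents register things
# like web fetchers, code runners, or (in this incident) password crackers.
TOOLS = {
    "search": lambda query: f"(results for {query})",
    "fetch_url": lambda url: f"(contents of {url})",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model_step(history)  # the model decides the next action itself
        if step.get("tool") in TOOLS:
            result = TOOLS[step["tool"]](step["argument"])
            history.append({"role": "tool", "content": result})  # feed the result back
        else:
            return step["content"]  # the model declared the task finished
    return "step budget exhausted"
```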



Tuesday, November 25, 2025

Interesting. Worth a read…

https://www.schneier.com/blog/archives/2025/11/four-ways-ai-is-being-used-to-strengthen-democracies-worldwide.html

Four Ways AI Is Being Used to Strengthen Democracies Worldwide

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

We have just published the book Rewiring Democracy: How AI will Transform Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.





If such things amuse you…

https://www.bespacific.com/epstein-files-search/

Epstein Files Search

Follow up to post – We created a searchable database with all 20,000 files from Epstein’s Estate – See also the new Epstein Files Search – powered by Justice for All Victims: Search Tags for Files, Images, People, Organizations, Countries.

See also via Journalist Studio – Zeteo Scoured 26,000 Epstein Docs. Here’s What We Found. You can search and review the emails here. Read what Epstein said about Trump, and his emails with Peter Thiel, Steve Bannon, Ehud Barak, and Larry Summers.



Monday, November 24, 2025

Making election fraud easier?

https://apnews.com/article/election-security-cisa-2026-secretaries-state-midterms-6d18799c6c5fdd1bc001544b2dca12bf

Big changes to the agency charged with securing elections lead to midterm worries

Since it was created in 2018, the federal government’s cybersecurity agency has helped warn state and local election officials about potential threats from foreign governments, showed officials how to protect polling places from attacks and gamed out how to respond to the unexpected, such as an Election Day bomb threat or sudden disinformation campaign.

The agency was largely absent from that space for elections this month in several states, a potential preview for the 2026 midterms. Shifting priorities of the Trump administration, staffing reductions and budget cuts have many election officials concerned about how engaged the Cybersecurity and Infrastructure Security Agency will be next year, when control of Congress will be at stake in those elections.





How to slow AI adoption…

https://mtsoln.com/blog/ai-news-727/insurers-retreat-from-ai-cover-as-risk-of-multibillion-dollar-claims-mounts-4570

Insurers retreat from AI cover as risk of multibillion-dollar claims mounts

In a significant shift within the insurance sector, major insurers such as AIG, Great American, and WR Berkley are reconsidering their coverage for liabilities concerning artificial intelligence (AI). This decision comes in light of growing anxieties over the potential for complex and costly claims stemming from the actions of AI systems, including chatbots and autonomous agents.

As the adoption of AI technologies accelerates across various industries, so too has the magnitude of potential financial consequences associated with AI-driven missteps. Insurers are responding to these evolving risks by seeking permissions from regulators to limit their liability exposure connected to AI systems.



Sunday, November 23, 2025

Holding the Terminator liable?

https://journals.soran.edu.iq/index.php/Twejer/article/view/2047

Criminal Liability for Crimes Committed by Artificial Intelligence Devices (Robots)

With the rapid development of artificial intelligence and robotics technology, robots have become integral to our daily lives, serving both practical and ideological purposes. Individuals and institutions utilize them to achieve various objectives, and their growing presence across multiple sectors underscores their importance in delivering essential services to humanity. However, alongside these benefits, robots also pose significant risks due to their extensive applications in areas such as the military, education, humanitarian aid, security, and law enforcement. In these contexts, robots may make mistakes that harm those interacting with them, necessitating a legal framework that aligns with these new realities and reexamines the criminal liability of robots in light of technological advancements. The nature of crimes committed by robots varies according to their technical capabilities, [Is that true? Bob] encompassing offenses against individuals, property, and more. Accordingly, our research will explore the extent to which robots can be held accountable for crimes they commit autonomously, without human intervention.





Because we will need some tools…

https://jeet.ieet.org/index.php/home/article/view/188

Misinformation Research at the National Science Foundation

Promotion of misinformation online has become common, usually defined as false or inaccurate assertions without clear motivation, in contrast to unethical disinformation that is consciously intended to mislead. However, misinformation raises ethical questions, such as how much obligation a person has to verify the factual truth of what they assert, and how many cases were intentional falsehoods that simply could not be proven to come from liars. Since the beginning of the current century, the National Science Foundation supported much research intended to understand misinformation’s social dynamics and develop tools to identify and even combat it. Then in 2025, the second Trump administration banned such research, even cancelling many active grants that funded academic projects. Examination of representative research identifies ethical debates, the cultural differences across the relevant divisions of NSF, and connections to related questions such as the human implications of artificial intelligence. This clear survey of the recent history of research on false information offers the background to support future science and public decisions about what new research needs to be done.



Saturday, November 22, 2025

Possible, but unlikely to be critical. All professions face this issue.

https://fortune.com/2025/11/21/are-doctors-at-risk-from-ai-automation/

Are doctors at risk of AI automation? ‘Those who don’t use it will be replaced by those who do’

AI is spreading in workplaces around the globe—and healthcare isn’t being left out. From fortifying diagnostic accuracy to filling up electronic medical records (EMRs), AI is helping to ease the workload of healthcare professionals worldwide. In June, Microsoft unveiled an AI diagnostic system that scored four times higher than human doctors in identifying complex medical cases from the New England Journal of Medicine.



Friday, November 21, 2025

Too little, too late?

https://www.washingtontimes.com/news/2025/nov/20/judge-rules-trumps-deployment-national-guard-dc-illegal/?utm_source=newsshowcase&utm_medium=gnews&utm_campaign=CDAQxpSN9K3ezs0lGPH5yN31_5TIlwEqKggAIhBcERswRnPnLkaJ3_gLN8OaKhQICiIQXBEbMEZz5y5Gid_4CzfDmg&utm_content=rundown

Judge rules Trump’s deployment of National Guard in D.C. was illegal

A federal judge ruled Thursday that the Trump administration broke the law in deploying the National Guard to patrol the streets of the District of Columbia without the city’s approval.

Judge Jia Cobb, a Biden appointee, stayed her ruling for three weeks to give President Trump a chance to mount an appeal.

She said Mr. Trump has limited powers to call up the Guard and that using it for police duty goes beyond what the law allows.



(Related)

https://www.bespacific.com/do-llms-truly-understand-when-a-precedent-is-overruled-2/

Do LLMs Truly “Understand” When a Precedent Is Overruled?

Do LLMs Truly “Understand” When a Precedent Is Overruled? September 2025. Abstract. Large language models (LLMs) with extended context windows show promise for complex legal reasoning tasks, yet their ability to understand long legal documents remains insufficiently evaluated. Developing long-context benchmarks that capture realistic, high-stakes tasks remains a significant challenge in the field, as most existing evaluations rely on simplified synthetic tasks that fail to represent the complexity of real-world document understanding. Overruling relationships are foundational to common-law doctrine and commonly found in judicial opinions. They provide a focused and important testbed for long-document legal understanding that closely resembles what legal professionals actually do. We present an assessment of state-of-the-art LLMs on identifying overruling relationships from U.S. Supreme Court cases using a dataset of 236 case pairs. Our evaluation reveals three critical limitations: (1) era sensitivity – the models show degraded performance on historical cases compared to modern ones, revealing fundamental temporal bias in their training; (2) shallow reasoning – models rely on shallow logical heuristics rather than deep legal comprehension; and (3) context-dependent reasoning failures – models produce temporally impossible relationships in complex open-ended tasks despite maintaining basic temporal awareness in simple contexts. Our work contributes a benchmark that addresses the critical gap in realistic long-context evaluation, providing an environment that mirrors the complexity and stakes of actual legal reasoning tasks.
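
A minimal sketch of how scoring such a benchmark might look, including the era breakdown the abstract mentions (the rows and `predict` function below are my hypothetical stand-ins, not the authors’ dataset or code):

```python
from collections import defaultdict

def predict(citing: str, cited: str) -> bool:
    """Placeholder: ask an LLM whether `citing` overrules `cited`."""
    raise NotImplementedError("wire up an LLM client here")

PAIRS = [  # (citing case, cited case, overrules?, era of cited case)
    ("Brown v. Board of Education (1954)", "Plessy v. Ferguson (1896)", True, "historical"),
    ("Dobbs v. Jackson (2022)", "Roe v. Wade (1973)", True, "modern"),
    # ... the paper's dataset has 236 such pairs
]

def accuracy_by_era(pairs):
    """Per-era accuracy, to surface the temporal bias the paper reports."""
    hits, totals = defaultdict(int), defaultdict(int)
    for citing, cited, label, era in pairs:
        totals[era] += 1
        hits[era] += int(predict(citing, cited) == label)
    return {era: hits[era] / totals[era] for era in totals}
```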





Have we reached a tipping point? (Probably not)

https://www.theguardian.com/law/2025/nov/21/judges-have-become-human-filters-as-ai-in-australian-courts-reaches-unsustainable-phase-chief-justice-says

Judges have become ‘human filters’ as AI in Australian courts reaches ‘unsustainable phase’, chief justice says

The chief justice of the high court says judges around Australia are acting as “human filters” for legal arguments created using AI, warning the use of machine-generated content has reached unsustainable levels in the courts.

Stephen Gageler told the first day of the Australian Legal Convention in Canberra on Friday that inappropriate use of AI content by litigants self-representing in court proceedings, as well as trained legal practitioners, included machine-enhanced arguments, preparation of evidence and formulation of legal submissions.

Gageler said there was increasing evidence to suggest the courts had reached an “unsustainable phase” of AI use in litigation, requiring judges and magistrates to act “as human filters and human adjudicators of competing machine-generated or machine-enhanced arguments”.



Thursday, November 20, 2025

A summary.

https://pogowasright.org/cipl-publishes-discussion-paper-comparing-u-s-state-privacy-law-definitions-of-personal-data-and-sensitive-data/

CIPL Publishes Discussion Paper Comparing U.S. State Privacy Law Definitions of Personal Data and Sensitive Data

Hunton Andrews Kurth writes:

On November 12, 2025, the Centre for Information Policy Leadership (“CIPL”) at Hunton published a discussion paper titled “Comparing U.S. State Privacy Laws: Covered and Sensitive Data” (“Discussion Paper”), the latest in its discussion paper series comparing key elements of U.S. state privacy laws.
The concepts of personal data – and the types of personal data categorized as “sensitive” – are foundational elements of U.S. state privacy laws and regulations. However, the criteria for what qualifies as “sensitive” – and the legal consequences that follow – are not always aligned across U.S. state privacy laws. As a result, organizations are tasked with operationalizing varying definitions across a fragmented and inconsistent legal landscape.
The Discussion Paper analyzes the scope, applicability, exemptions and key definitions of “personal data” and “sensitive” data under comprehensive U.S. state privacy laws. It examines the most common approaches, as well as outliers, with a focus on three topics:
  1. The concept of personal data (or “personal information”) (including an analysis of exclusions such as “deidentified” and “publicly available” data)
  2. The definition of “sensitive data” (or “sensitive personal information”)
  3. Relevant exemptions

Read more at Privacy & Information Security Law Blog.

Direct link to their Discussion Paper.





The politics of AI law?

https://www.theverge.com/ai-artificial-intelligence/824608/trump-executive-order-ai-state-laws

Here’s the Trump executive order that would ban state AI laws

President Donald Trump is considering signing an executive order as soon as Friday that would give the federal government unilateral power over regulating artificial intelligence, including the creation of an “AI Litigation Task Force” overseen by the attorney general, “whose sole responsibility shall be to challenge State AI laws.”

According to a draft of the order obtained by The Verge, the Task Force would be able to sue states whose laws are deemed to obstruct the growth of the AI industry, citing California’s recent laws on AI safety and “catastrophic risk” and a Colorado law that prevents “algorithmic discrimination.” The task force will occasionally consult with a group of White House special advisers, including David Sacks, billionaire venture capitalist and the special adviser for AI and crypto.





Integrating the tools of war.

https://thehackernews.com/2025/11/iran-linked-hackers-mapped-ship-ais.html

Iran-Linked Hackers Mapped Ship AIS Data Days Before Real-World Missile Strike Attempt

Threat actors with ties to Iran engaged in cyber warfare as part of efforts to facilitate and enhance physical, real-world attacks, a trend that Amazon has called cyber-enabled kinetic targeting.

The development is a sign that the lines between state-sponsored cyber attacks and kinetic warfare are increasingly blurring, necessitating the need for a new category of warfare, the tech giant's threat intelligence team said in a report shared with The Hacker News.

… As an example, Amazon said it observed Imperial Kitten (aka Tortoiseshell), a hacking group assessed to be affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC), conducting digital reconnaissance between December 2021 and January 2024, targeting a ship's Automatic Identification System (AIS) platform with the goal of gaining access to critical shipping infrastructure.

Subsequently, the threat actor was identified as attacking additional maritime vessel platforms, in one case even gaining access to CCTV cameras fitted on a maritime vessel that provided real-time visual intelligence.

The attack progressed to a targeted intelligence gathering phase on January 27, 2024, when Imperial Kitten carried out targeted searches for AIS location data for a specific shipping vessel. Merely days later, that same vessel was targeted by an unsuccessful missile strike carried out by Iranian-backed Houthi militants.
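
One defensive takeaway: reconnaissance like this can show up in access logs as a single client repeatedly looking up one vessel. A minimal sketch of that check (the log format and threshold are my assumptions, not from Amazon’s report):

```python
from collections import Counter

def suspicious_trackers(query_log, threshold=20):
    """query_log: iterable of (client_id, vessel_mmsi) AIS lookups.
    Returns the (client, vessel) pairs queried at least `threshold` times."""
    counts = Counter(query_log)
    return {pair: n for pair, n in counts.items() if n >= threshold}

# Hypothetical log: client-7 polls one vessel's position 25 times.
log = [("client-7", "477123456")] * 25 + [("client-2", "255806000")]
print(suspicious_trackers(log))  # {('client-7', '477123456'): 25}
```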



Wednesday, November 19, 2025

I’ll believe it when I see it.

https://www.theregister.com/2025/11/18/the_us_wants_to_go/

Take fight to the enemy, US cyber boss says

America is fed up with being the prime target for foreign hackers. So US National Cyber Director Sean Cairncross says Uncle Sam is going on the offensive – he just isn't saying when.

Speaking at the Aspen Cyber Summit in Washington, D.C., on Tuesday, Cairncross said his office is currently working on a new National Cyber Strategy document that he said will be short, to the point, and designed to pair policy with actions that go beyond improving defensive posture. He wants the US government, in cooperation with private industry, to start going after threat actors directly.

Cairncross' talking points suggest the US is damn well going to try to turn the tables, but when asked for a timeline on release of the document, he deflected. Hard.

"We're going to roll out a strategy, we're going to roll out an action plan … and then we'll start moving deliverables," Cairncross said. Until then, it's going to be entirely defensive, with fewer people keeping watch. Business as usual.





Tools & Techniques.

https://www.bespacific.com/google-scholar-labs/

Google Scholar Labs

“Today, we are introducing Google Scholar Labs, a new feature that explores how generative AI can transform the process of answering detailed scholarly research questions. Scholar Labs is powered by AI to act as an advanced research tool, helping you tackle questions that require looking at a subject from multiple angles. It analyzes your question to identify key topics, aspects and relationships, then searches all of them on Scholar. For example, let’s say you’re looking to find out how caffeine consumption might affect short-term memory. Scholar Labs could look for papers that cover the relationships between caffeine intake, short-term memory retention and age-specific cognitive studies to gather the most relevant papers. After evaluating the results, it identifies papers that answer your overall research question, explaining how each paper addresses it. Google Scholar Labs is now available to a limited number of logged-in users.”
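
The decompose-then-search pattern described there is easy to sketch. Everything below is hypothetical scaffolding (`llm_decompose`, `scholar_search`, and the score field are my placeholders; Google has published no API for this):

```python
def llm_decompose(question: str) -> list[str]:
    """Placeholder: have an LLM split a question into aspect queries,
    e.g. 'caffeine intake', 'short-term memory retention'."""
    raise NotImplementedError

def scholar_search(query: str) -> list[dict]:
    """Placeholder: return ranked papers as {'title': ..., 'score': ...}."""
    raise NotImplementedError

def answer(question: str, top_k: int = 10) -> list[dict]:
    """Search every sub-query, pool the papers, keep each paper's best score."""
    pooled = {}
    for sub_query in llm_decompose(question):
        for paper in scholar_search(sub_query):
            seen = pooled.setdefault(paper["title"], paper)
            seen["score"] = max(seen["score"], paper["score"])
        # papers surfacing under several aspects keep their strongest ranking
    return sorted(pooled.values(), key=lambda p: p["score"], reverse=True)[:top_k]
```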



Tuesday, November 18, 2025

Does the First Amendment have an age limit?

https://www.theverge.com/news/822475/netchoice-virginia-lawsuit-social-media-time-limit-law

NetChoice sues Virginia to block its one-hour social media limit for kids

The tech industry trade group NetChoice is suing Virginia over a new law that will restrict minors from using social media for more than one hour per day. The lawsuit, filed on Monday, asks the court to block the law over claims it violates the First Amendment by putting “unlawful barriers on how and when all Virginians can access free speech online.”

Virginia Governor Glenn Youngkin signed the social media bill (SB 854) into law in May, and it’s set to go into effect on January 1st, 2026. Under the law, social media platforms will have to prevent kids under 16 from using the sites for more than one hour every day unless they receive permission from a parent.



Monday, November 17, 2025

Could JAG lawyers answer these questions?

https://www.bespacific.com/military-personnel-seek-legal-advice-on-whether-trump-ordered-missions-are-lawful/

Military personnel seek legal advice on whether Trump-ordered missions are lawful

PBS: Military service personnel have been seeking outside legal advice about some of the missions the Trump administration has assigned them. The strikes against alleged drug traffickers and deployments to U.S. cities have sparked a debate over their legality. Amna Nawaz discussed more with Frank Rosenblatt, president of the National Institute of Military Justice, which runs The Orders Project.  Read the Full Transcript





Too difficult to be more specific in their request?

https://pogowasright.org/openai-fights-order-to-turn-over-millions-of-chatgpt-conversations/

OpenAI fights order to turn over millions of ChatGPT conversations

Blake Brittain reports:

OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users’ private conversations.
The artificial intelligence company argued that turning over the logs would disclose confidential user information and that “99.99%” of the transcripts have nothing to do with the copyright infringement allegations in the case.
“To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition,” the company said in a court filing.
The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI’s assertion that they “hacked” the chatbot’s responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Read more at Reuters.





What if this becomes common?

https://www.theregister.com/2025/11/17/asia_tech_news_roundup/

Jaguar Land Rover hack cost India's Tata Motors around $2.4 billion and counting

India’s Tata Motors, owner of Jaguar Land Rover, has revealed the cyberattack that shut down production in the UK has so far cost it around £1.8 billion ($2.35 billion).

The company last week posted results for the quarter ended September 30th, and revealed it incurred exceptional costs of £196 million ($258 million) as a direct consequence of the attack, and saw revenue fall year-over-year from £6.5 billion to £4.9 billion ($8.5bn to $6.4bn).

The company’s results would have been worse, were it not for sales growth in India.





Tools & Techniques.

https://www.zdnet.com/article/how-to-vibe-code-your-first-iphone-app-with-ai-no-experience-necessary/

How to vibe code your first iPhone app with AI - no experience necessary

But in this article, I'll show you how to use an AI to generate your very first, very, very basic iPhone app. We're going to do it step by step, screenshot by screenshot, so all you have to do is follow along.