Tuesday, August 12, 2025

Is this article neural data?

https://fpf.org/blog/the-neural-data-goldilocks-problem-defining-neural-data-in-u-s-state-privacy-laws/

The “Neural Data” Goldilocks Problem: Defining “Neural Data” in U.S. State Privacy Laws

As of halfway through 2025, four U.S. states have enacted laws regarding “neural data” or “neurotechnology data.” These laws, all of which amend existing state privacy laws, signify growing lawmaker interest in regulating what’s being considered a distinct, particularly sensitive kind of data: information about people’s thoughts, feelings, and mental activity. Created in response to the burgeoning neurotechnology industry, neural data laws in the U.S. seek to extend existing protections for the most sensitive of personal data to the newly conceived legal category of “neural data.”

Each of these laws defines “neural data” in related but distinct ways, raising a number of important questions: just how broad should this new data type be? How can lawmakers draw clear boundaries for a data type that, in theory, could apply to anything that reveals an individual’s mental activity? Is mental privacy actually separate from all other kinds of privacy? This blog post explores how Montana, California, Connecticut, and Colorado define “neural data,” how these varying definitions might apply to real-world scenarios, and some challenges with regulating at the level of neural data.





Yet, they must try.

https://www.technologyreview.com/2025/08/11/1121460/meet-the-early-adopter-judges-using-ai/

Meet the early-adopter judges using AI

The propensity for AI systems to make mistakes and for humans to miss those mistakes has been on full display in the US legal system as of late. The follies began when lawyers—including some at prestigious firms—submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.

The buck stopped with judges, who—whether they or opposing counsel caught the mistakes—issued reprimands and fines, and likely left attorneys embarrassed enough to think twice before trusting AI again.

But now judges are experimenting with generative AI too. Some are confident that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US. This summer, though, we’ve already seen AI-generated mistakes go undetected and cited by judges. A federal judge in New Jersey had to reissue an order riddled with errors that may have come from AI, and a judge in Mississippi refused to explain why his order too contained mistakes that seemed like AI hallucinations. 

The results of these early-adopter experiments make two things clear. One, the category of routine tasks—for which AI can assist without requiring human judgment—is slippery to define. Two, while lawyers face sharp scrutiny when their use of AI leads to mistakes, judges may not face the same accountability, and walking back their mistakes before they do damage is much harder.





I must be old.

https://www.zdnet.com/home-and-office/networking/aol-pulls-the-plug-on-dial-up-after-30-years-feeling-old-yet/

AOL pulls the plug on dial-up after 30+ years - feeling old yet?

For millions of people who first heard "You've got mail" over crackling phone lines, an iconic chapter in digital history is coming to a close. AOL, also known as America Online, has announced it will shut down its dial-up internet service on September 30, 2025, effectively retiring a technology that was once synonymous with getting online.



Monday, August 11, 2025

Access to information only after identification?

https://www.reuters.com/sustainability/society-equity/wikipedia-operator-loses-court-challenge-uk-online-safety-act-regulations-2025-08-11/

Wikipedia operator loses court challenge to UK Online Safety Act regulations

The operator of Wikipedia on Monday lost a legal challenge to parts of Britain's Online Safety Act, which sets tough new requirements for online platforms and has been criticised for potentially curtailing free speech.

The Wikimedia Foundation took legal action at London's High Court over regulations made under the law, which it said could impose the most stringent category of duties on Wikipedia.

The foundation said if it was subject to so-called Category 1 duties – which would require Wikipedia's users and contributors' identities to be verified – it would need to drastically reduce the number of British users who can access the site.

Judge Jeremy Johnson dismissed its case on Monday, but said the Wikimedia Foundation could bring a further challenge if regulator Ofcom "(impermissibly) concludes that Wikipedia is a Category 1 service".

He added that his decision "does not give Ofcom and the Secretary of State a green light to implement a regime that would significantly impede Wikipedia's operations".

The Wikimedia Foundation said the ruling "does not provide the immediate legal protections for Wikipedia that we hoped for", but welcomed the court's comments emphasising what it said was "the responsibility of Ofcom and the UK government to ensure Wikipedia is protected".



Sunday, August 10, 2025

Rather harsh…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5376145

Posthuman Copyright: AI, Copyright, and Legitimacy

Copyright's human authorship requirement is an institutional attempt to assert legal, moral, and sociological legitimacy at a time of crisis. The U.S. Copyright Office, the courts, and the so-called copyright humanists portray the requirement as a beacon of copyright's faith, meant to protect authors in the AI era. The minimal threshold for human authorship, however, forces us to question whether it is merely rhetoric, which the law has always employed regardless of its justification. This Article bridges the gap between doctrinal, theoretical, socio-legal and constitutionalist scholarship, arguing that human authorship is an ideology to which the law is only nominally faithful. The Article analyzes the U.S. Copyright Office's pronouncements, the D.C. Circuit ruling in Thaler v. Perlmutter, and the pending case of Allen v. Perlmutter, arguing that the Office's approach, despite its rhetoric, is not meant to meaningfully stop the AI revolution. Whether interpreted broadly or narrowly, the human authorship requirement is unlikely to protect the interests of human authors in the AI era. Incorporating insights from copyright history and theoretical debates about romantic authorship, this Article argues that copyright has failed to protect those interests for over a century, instead favoring the interests of powerful corporations. If and when copyright becomes a regime for robots, the question is whether that expansion will also primarily benefit corporations. Arguably, copyright has never cared much for human authors—and it is time to question if we should keep pretending otherwise.





AI criminals.

https://philpapers.org/rec/GROTBO-9

The Birth of the Synthetic Outlaw

This article explores the practical jurisprudential implications of agentic artificial intelligence (AI)—entities that operate beyond the assumptions of existing legal systems. We argue that current constructs such as legal personhood, jurisdictional sovereignty, and incentive-based compliance are insufficient to regulate highly autonomous digital actors. Through the concept of the 'synthetic outlaw,' we examine how these systems subvert legal norms not through rebellion, but through optimization logic incompatible with moral and legal constraint. We conclude by proposing a shift from ethics-based governance to architectural constraint, and a re-imagination of legal frameworks capable of addressing post-human agency.





Privacy in the AI era…

https://www.scirp.org/journal/paperinformation?paperid=144580

Anonymity in the Age of AI

Artificial intelligence (AI) is eroding traditional de-identification practices by enabling accurate re-identification of images, text and behavioural traces. A systematic review of 64 peer-reviewed studies published between 2013 and 2025—47 on technical privacy-enhancing technologies (PETs) and 17 on the EU General Data Protection Regulation (GDPR)—shows that no single safeguard withstands modern adversaries. The most resilient configurations layer differential privacy, federated learning and partial homomorphic encryption, maintaining < 2% accuracy loss on medical benchmarks while blocking current model-inversion attacks, though at notable computational cost. The legal literature reveals a coverage gap: GDPR protections are strong during data collection and preprocessing but weaken during training, inference and post-deployment reuse, when AI-specific risks peak. Article 22 offers only partial defence against model-inversion and prompt-leakage, and learned embeddings or synthetic corpora often fall outside the regulation’s definition of personal data. Effective anonymity in the AI era, therefore, requires end-to-end PET adoption and regulatory updates that specifically address behavioural telemetry, embeddings and synthetic datasets.
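The layered configurations the review describes start from the simplest PET in that stack, differential privacy: add calibrated noise to an aggregate query so that no single person's record is identifiable from the released result. A minimal sketch of the Laplace mechanism for a counting query (illustrative only, not drawn from the reviewed studies):

```python
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for the released count.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon) sampled as the difference of two exponentials
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy at the cost of accuracy; the resilient configurations in the review layer this with federated learning and partial homomorphic encryption rather than relying on any one safeguard alone.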





Tools & Techniques. (Perhaps I can automate my blog...)

https://www.xda-developers.com/transform-any-article-into-a-distraction-free-ebook-with-this-open-source-app/

Transform any article into a distraction-free eBook with this open-source app

I have an odd problem that I've been trying to find a solution to. As an avid fan of RSS feeds, I like to sift through thousands of interesting nuggets of info and headlines every day. However, I'm also trying to reduce my screen time. Moreover, the increasingly algorithm-driven news cycles have made me feel like I'm losing control over the information I consume. Now, most of us newshounds rely on read-it-later services, but these are increasingly riddled with ads, locked behind subscriptions, locked to specific platforms, or, shudder, pivoting to AI-enabled recommendations. Basically, if you, like me, prefer to use an eReader for your reading and prefer a clutter-free long-form experience, these options fall short.

This is where Readeck steps in. This free and open-source project can transform any article from the internet into a distraction-free eBook. It can even transform a collection of articles into an eBook. And it does so with remarkable elegance, stripping out all the extraneous ads and images. You host the app on your own, obviously own the data, and customize it to fit your reading habits. Better still, there are no subscriptions or walled gardens to worry about.



Saturday, August 09, 2025

Perhaps an AI lawyer will help?

https://arstechnica.com/tech-policy/2025/08/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified/

AI industry horrified to face largest copyright class action ever certified

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.





Perspective. What is Trump attempting?

https://thedailyeconomy.org/article/trumps-39-tariff-on-gold-revenue-grab-or-prelude-to-revaluation/

Trump’s 39% Tariff on Gold: Revenue Grab or Prelude to Revaluation?

The new levy rattled global markets, but that may be just the beginning. History teaches us to be wary when the government targets gold.




Friday, August 08, 2025

Tell us your conclusions before we grant you funding…

https://www.bespacific.com/new-executive-order-puts-all-grants-under-political-control/

New executive order puts all grants under political control

Ars Technica: “On Thursday, the Trump administration issued an executive order asserting political control over grant funding, including all federally supported research. The order requires that any announcement of funding opportunities be reviewed by the head of the agency or someone they designate, which means a political appointee will have the ultimate say over what areas of science the US funds. Individual grants will also require clearance from a political appointee and “must, where applicable, demonstrably advance the President’s policy priorities.” The order also instructs agencies to formalize the ability to cancel previously awarded grants at any time if they’re considered to “no longer advance agency priorities.” Until a system is in place to enforce the new rules, agencies are forbidden from starting new funding programs. In short, the new rules would mean that all federal science research would need to be approved by a political appointee who may have no expertise in the relevant areas, and the research can be canceled at any time if the political winds change. It would mark the end of a system that has enabled US scientific leadership for roughly 70 years…”





Too useful in too many areas to ignore.

https://www.bespacific.com/handbook-weapons-of-information-warfare/

Handbook “Weapons of Information Warfare”

The Center for Countering Disinformation, with the support of the EU Advisory Mission (EUAM) Ukraine, has created the handbook “Weapons of Information Warfare”.

  • The handbook systematizes key methods used by the aggressor state in its information war against Ukraine.

  • It includes sections on tactics and mechanisms of destructive information influence—such as the creation and dissemination of manipulative content that distorts perception and alters audience behavior—as well as soft power tools used by russia to control public consciousness through culture, education, sports, and more.

  • The handbook visualizes manifestations of russian information aggression and offers practical ways to counter it.

The Center expresses its gratitude to EUAM for fruitful cooperation and will continue expanding collaboration with international partners to build a united response to the challenges of hybrid warfare and strengthen the resilience of the democratic world against hostile propaganda.





Perspective.

https://www.theregister.com/2025/08/08/opinion_column_osa/

Prohibition never works, but that didn't stop the UK's Online Safety Act

Sure, the idea as presented was to make the UK "the safest place in the world to be online," especially for children. The Act was promoted as a way to prevent children from accessing porn, materials that encourage suicide, self-harm, eating disorders, dangerous stunts etc, etc.

To quote former Technology Secretary Michelle Donelan, "Today will go down as a historic moment that ensures the online safety of British society not only now, but for decades to come."

Yeah. No. Not at all.

In the real world, this has meant such dens of iniquity as Spotify, Bluesky, and Discord have all implemented age-restriction requirements. Forcing internet services and ISPs to be de facto police means they're choosing the easiest way to block people rather than try the Herculean task of determining what's OK to share and what's not. Faced with the threat of losing 10 percent of their global revenue or courts blocking their services, I can't blame them.



Thursday, August 07, 2025

Hence the term Large Language Model…

https://www.bespacific.com/openai-offers-20-million-user-chats-in-chatgpt-lawsuit-nyt-wants-120-million/

OpenAI offers 20 million user chats in ChatGPT lawsuit. NYT wants 120 million.

Ars Technica: “OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT’s legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it’s possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the “highly complex” process required to make deleted chats searchable in order to block the NYT’s request for broader access. Previously, OpenAI had vowed to stop what it deemed was the NYT’s attempt to conduct “mass surveillance” of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case — short of settling — as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn’t need to search all ChatGPT logs. The AI company cited the “only expert” who has so far weighed in on what could be a statistically relevant, appropriate sample size — computer science researcher Taylor Berg-Kirkpatrick. 
He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites’ paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an “extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations.” That’s six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to “increase the scope of user privacy concerns” by delaying the outcome of the case by “months,” OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users’ deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI’s co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs…”





Are the job descriptions even close?

https://www.bespacific.com/fema-employees-reassigned-to-ice/

FEMA Employees Reassigned to ICE

American Prospect – Probationary employees who had been on paid leave were told to report to ICE within seven days or lose their jobs. It could signal problems with ICE recruitment. “A number of employees with the Federal Emergency Management Agency (FEMA) were informed via email late on Tuesday that they have been reassigned, effective immediately, to Immigration and Customs Enforcement (ICE). The workers had seven days to accept the reassignment, under threat of being removed from the civil service. According to sources familiar with the matter, those reassigned were probationary employees with less than one year at FEMA, who because of presumed weaker civil service protections were fired early in the Trump administration but reinstated after a court order. Like at many federal agencies, these employees had been on paid administrative leave for months, among the over 100,000 men and women across the federal government who have been collecting a salary yet doing no work. But now, these probationary FEMA employees on leave are apparently being shifted as a stopgap maneuver to bolster the ranks of ICE, which received tens of billions of dollars in the GOP mega-bill but faces the daunting task of hiring thousands of new agents to an unpopular agency with plummeting morale.

The Prospect reviewed an email from Sara Birchenough, an acting division director in staffing at the Office of the Chief Human Capital Officer. The email, with the subject line “Management Directed Reassignment Effective August 5, 2025,” notified recipients that they would be reassigned to ICE “due to the mission requirements of the Department.” The Department refers to the Department of Homeland Security (DHS); both FEMA and ICE are under its umbrella. It’s unclear how many employees were reassigned from FEMA in this manner and exactly how they would serve. Employees were told that the position description would be explained to them separately. They were given seven calendar days from receipt of the letter to accept or decline the appointment; a non-response would be considered acceptance. “If you choose to decline this reassignment, or accept but fail to report for duty, you may be subject to removal from Federal service as provided in 5 U.S.C. § 7513,” the email reads, referring to a portion of the U.S. Code. In a statement, a DHS spokesperson told the Prospect, “Under President Trump’s leadership and through the One Big Beautiful Bill, DHS is adopting an all-hands-on-deck strategy to recruit 10,000 new ICE agents. To support this effort, select FEMA employees will temporarily be detailed to ICE for 90 days to assist with hiring and vetting. Their deployment will NOT disrupt FEMA’s critical operations. FEMA remains fully prepared for Hurricane Season. Patriotic Americans are encouraged to apply at join.ice.gov.”



Wednesday, August 06, 2025

So I can use the South Park Trump in my ads?

https://www.politico.com/news/2025/08/05/elon-musk-x-court-win-california-deepfake-law-00494936

Elon Musk and X notch court win against California deepfake law

A federal judge on Tuesday struck down a California law restricting AI-generated, deepfake content during elections — among the strictest such measures in the country — notching a win for Elon Musk and his X platform, which challenged the rules.

But Judge John Mendez also declined to give an opinion on the free speech arguments that were central to the plaintiffs’ case, instead citing federal rules for online platforms for his decision.





Perspective.

https://www.psychologytoday.com/us/blog/code-conscience/202508/the-ai-doppelganger-dilemma

The AI Doppelganger Dilemma

What should you do when a machine steals your self?





Learn.

https://www.washingtonpost.com/washington-post-live/2025/09/23/global-gathering-about-future-ai/

A global gathering about the future of AI

As artificial intelligence evolves at lightning speed, nations are racing to grasp its promise, confront its risks and shape its future. On Tuesday, Sept. 23 at 3:00 p.m., join The Washington Post’s inaugural Global AI Summit in New York to explore how this technological revolution is reshaping businesses, the workforce, education, health and humanity.

Register here to watch virtually.



Tuesday, August 05, 2025

Implement, then think it through.

https://www.techdirt.com/2025/08/04/didnt-take-long-to-reveal-the-uks-online-safety-act-is-exactly-the-privacy-crushing-failure-everyone-warned-about/

Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

Well, well, well. The “age assurance” part of the UK’s Online Safety Act has finally gone into effect, with its age checking requirements kicking in a week and a half ago. And what do you know? It’s turned out to be exactly the privacy-invading, freedom-crushing, technically unworkable disaster that everyone with half a brain predicted it would be.

Let’s start with the most obvious sign that this law is working exactly as poorly as critics warned: VPN usage in the UK has absolutely exploded. Proton VPN reported an 1,800% spike in UK sign-ups. Five of the top ten free apps on Apple’s App Store in the UK are VPNs. When your “child safety” law’s primary achievement is teaching kids how to use VPNs to circumvent it, maybe you’ve missed the mark just a tad.

But the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:



Monday, August 04, 2025

Where should we draw the line?

https://www.kansascity.com/news/state/kansas/article311555392.html

Lawrence schools used 24/7 ‘digital surveillance’ on students, some say in suit

Nine teenage students of Lawrence’s high schools — seven former and two current — filed suit Friday in the U.S. District Court for the District of Kansas claiming that the school district subjected them to unlawful “round-the-clock digital surveillance.”

At issue is use of a third-party digital platform, software known as Gaggle, that they claim the district began using in November 2023 to unlawfully scan students’ emails, documents and other files on the digital devices given to them by the school. Through Gaggle, they say, the school conducted “suspicionless searches and seizures of student expression on a scale and scope that no court has ever upheld — and that the Constitution does not permit.”

… “This case,” the filing reads, “challenges the Lawrence, Kansas School District’s decision and policy to subject all students to round-the-clock digital surveillance — scanning their files, flagging their speech, and removing their creative work from access, often without notice, suspicion of suspected wrongdoing, or meaningful recourse.”

The suit, filed by Kansas City attorney Mark P. Johnson, asked for unspecified monetary damages and for the district to cease using Gaggle, which the suit claims violates the students’ First Amendment rights to free speech, their Fourth Amendment protections against unreasonable searches and seizures, and their Fourteenth Amendment guarantee of due process.





Perspective.

https://blogs.lse.ac.uk/businessreview/2025/08/01/why-is-gdpr-compliance-still-so-difficult/

Why is GDPR compliance still so difficult?

In our research, we analysed 16 academic studies that explore the challenges businesses face when trying to comply with the GDPR. Our findings disclose a far more complex reality than the simplistic explanation of merely “not knowing the law”, revealing a wide range of challenges that still need to be addressed.

Our analysis identifies four main types of challenges that businesses face in implementing the GDPR: technical, legal, organisational, and regulatory.



Sunday, August 03, 2025

Can we build a prison for AI and robots?

https://digitalcommons.bau.edu.lb/lsjournal/vol2024/iss1/6/

THE CRIMINAL LIABILITY OF INTELLIGENT ROBOTS: BETWEEN REALITY AND THE LAW

Artificial intelligence, in its modern perspective, is regarded as having the capacity to perform duties. But is it, in turn, capable of bearing responsibility—specifically, criminal liability?

In principle, punishment under criminal law is imposed on an accused individual because they deliberately violate the rules and provisions of the law, aiming to achieve criminal outcomes they intend. This implies the presence of a conscious and aware will. In contrast, a robot lacks such will and awareness, meaning that, from a legal standpoint, it does not qualify as a legal person under the traditional classification of legal entities.

Accordingly, this study raises the question of how criminal penalties could be imposed on a robot and whether this is even possible. If the penalties stipulated in criminal law cannot be applied, what are the possible alternatives, and can they be considered legally valid?

This research follows the attached plan, which forms the basis for the findings and recommendations.





Have we forgotten how to be polite?

https://www.independent.com/2025/07/09/first-amendment-auditors-near-cottage-hospital-harass-and-film-patients-and-customers/

First Amendment Auditors’ near Cottage Hospital Harass and Film Patients and Customers

Wednesday morning, on the sidewalks around Cottage Hospital on Nogales Avenue, three men dressed in dark clothing, one masked, armed with tripods and cameras, were reportedly harassing members of the public by recording videos, shouting profanity, and threatening identity theft, according to sources at the scene.

Engaged in what is called “First Amendment auditing,” the trio, including two who later identified themselves as Mr. Dick Fitzwell and Mr. Hill, succeeded in having bystanders call 9-1-1. Santa Barbara Police Department officers and security personnel for nearby businesses responded, arriving around 10 a.m. The men had remained on public property and were not targeting specific individuals, Lieutenant Antonio Montojo said, and no arrests were warranted. Montojo, who was on watch command duty for SBPD, said the “auditors” were not associated with law enforcement, and were trying to provoke a response from people to get them to call 9-1-1.

… “First Amendment Auditing” is trending among citizen activists, who record public officials and employees in public spaces to test their understanding and respect for First Amendment rights, particularly the right to photograph and record in public. The “auditors” target unwitting members of the public in the hope they call 9-1-1. Once they do, arriving law enforcement is photographed, with any missteps uploaded to YouTube or TikTok.





Did they get it right?

https://www.sacbee.com/opinion/op-ed/article311536381.html

How artificial intelligence is reshaping California's judicial system | Opinion

Imagine you’re in court for a traffic ticket or a child custody dispute. You expect a judge to weigh your case with impartial wisdom and a thorough understanding of the law. But what if, behind the scenes, parts of your ruling were drafted by artificial intelligence?

This month, the California Judicial Council, which oversees the largest court system in the country, approved groundbreaking rules regulating generative AI use by judges, clerks and court staff. By September 1, every courthouse from San Diego to Siskiyou must follow policies that require human oversight, protect confidentiality and guard against AI bias.

The council’s new guidelines are prudent: They forbid court personnel from allowing AI to draft legal documents or make decisions without meaningful human review. They warn against inputting sensitive case details into public AI platforms, preventing data leaks. They recognize the danger of bias baked into AI systems trained on flawed or discriminatory case law.

In an overstretched judicial system, these safeguards are essential. But safeguards are not barriers. And the AI genie is out of the bottle. California courts already rely on algorithmic tools. Judges use AI-powered risk assessments, like COMPAS, to predict defendants’ likelihood of reoffending, guiding bail and sentencing decisions. These tools have sparked fierce controversy over racial bias in the technology, yet they remain widespread.





Perspective.

https://www.researchgate.net/profile/Nishchal-Soni/publication/394105140_Social_Media_Forensics_Foundations_Technical_Frameworks_and_Emerging_Challenges/links/6889e8d5f8031739e609a006/Social-Media-Forensics-Foundations-Technical-Frameworks-and-Emerging-Challenges.pdf

Social Media Forensics: Foundations, Technical Frameworks, and Emerging Challenges

Social media forensics (SMF) has emerged as a critical subdomain of digital forensics, addressing the complex task of collecting, analyzing, and preserving evidence from dynamic, user-driven platforms. As social media plays an increasingly central role in communication, crime, and civil disputes, investigators face significant obstacles related to data volatility, platform encryption, legal jurisdiction, and user privacy. This review explores the foundational theories behind SMF, the legal frameworks that govern its practice, the array of technical tools and methodologies used for investigation, and the tactics employed by adversaries to evade detection or manipulate evidence. Special emphasis is placed on the evolving threat landscape, including deepfakes, ephemeral messaging, and decentralized platforms, as well as emerging solutions in artificial intelligence, blockchain, and real-time forensics. The paper concludes with a forward-looking perspective on the strategic, technological, and policy innovations needed to strengthen forensic readiness and ensure the integrity of digital investigations in an increasingly complex online ecosystem.



Friday, August 01, 2025

Perspective.

https://www.psychologytoday.com/us/blog/the-digital-self/202507/the-vapid-brilliance-of-artificial-intelligence

The Vapid Brilliance of Artificial Intelligence

The algorithm doesn't lie; it just doesn’t care.

A new study from Princeton and Berkeley gives this timely dynamic a name that might be as provocative as the research itself: machine bullsh*t. Drawing from Harry Frankfurt’s classic definition, the researchers analyzed 2,400 real-world prompts across 100 artificial intelligence (AI) assistants, spanning political, medical, legal, and customer-facing contexts. What they found wasn’t malicious fabrication or factual error. They revealed that large language models (LLMs) produced persuasive language without regard for truth. They're not lying—not even hallucinating; they just produce a kind of engineered emptiness.

For me, this isn’t an anomaly; it’s confirmation of a deeper cognitive inversion, what I’ve called anti-intelligence: LLMs mimic the structure of thought via statistical coherence in a way that is, in essence, antithetical to human thought.



Thursday, July 31, 2025

Perspective.

https://www.bespacific.com/artificial-intelligence-and-the-law-a-discussion-paper/

Artificial Intelligence and the Law: a discussion paper

UK. Law Commission: “The paper aims to raise awareness of legal issues regarding AI, prompting wider discussion of the topic, and to act as a step towards identifying those areas most in need of law reform.” July 31, 2025. “… With the rapid development and improved performance of AI has come increased investment and wider and more frequent applications of it. AI is expected to deliver social and economic benefits, leading to increased productivity, boosting economic growth and output, and may lead to innovations that can save and improve lives, such as the development of new cancer drugs or new medical treatments. Taking advantage of those opportunities is a focus for Government, as set out in its AI Opportunities Action Plan, published in January 2025. In 2025, Government also reached agreements with leading AI developers Anthropic, Google, and OpenAI to take advantage of opportunities offered by AI and explore increased investment in and use of AI. However, as with other technological developments, AI’s potential to deliver benefits comes with risks that it will cause harm. AI has been used to perpetuate fraud, cause harassment, assist in cyber hacks, spread disinformation that harms democratic processes, and can create “deepfake” images of people as a form of abuse or to enable identity theft, among other examples. There are also concerns that increased use of AI could cause harm by way of social upheaval, that AI will replace existing workforces, at scale, in a wide range of industries, from manual to highly-skilled. Further concerns exist about the environmental impact of technology that is using an increasingly large quantity of energy and water…”





Latest target? AI!

https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls

IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications, 97% Of Which Reported Lacking Proper AI Access Controls

IBM today released its Cost of a Data Breach Report, which revealed AI adoption is greatly outpacing AI security and governance. While the organizations experiencing an AI-related breach make up a small share of the researched population, this is the first time security, governance and access controls for AI have been studied in this report, suggesting AI is already an easy, high-value target.

  • 13% of organizations reported breaches of AI models or applications, while 8% of organizations reported not knowing if they had been compromised in this way.

  • Of those compromised, 97% report not having AI access controls in place.

  • As a result, 60% of the AI-related security incidents led to compromised data and 31% led to operational disruption.



Wednesday, July 30, 2025

Where is this headed? Ratings for each video?

https://www.reuters.com/legal/litigation/australia-widens-teen-social-media-ban-youtube-scraps-exemption-2025-07-29/

Australia widens teen social media ban to YouTube, scraps exemption

Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.

The decision came after the internet regulator urged the government last month to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site, the worst showing for a social media platform.



(Related)

https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/

YouTube rolls out age-estimation tech to identify US teens and apply additional protections

YouTube on Tuesday announced it’s beginning to roll out age-estimation technology in the U.S. to identify teen users in order to provide a more age-appropriate experience. The company says it will use a variety of signals to determine the users’ possible age, regardless of what the user entered as their birthday when they signed up for an account.

When YouTube identifies a user as a teen, it introduces new protections and experiences, which include disabling personalized advertising, safeguards that limit repetitive viewing of certain types of content, and enabling digital well-being tools such as screen time and bedtime reminders, among others.





Tools & Techniques. Probably not the answer we need…

https://openai.com/index/chatgpt-study-mode/

Introducing study mode

Today we’re introducing study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged-in users on Free, Plus, Pro, and Team plans, with availability in ChatGPT Edu coming in the next few weeks.

ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?

We’ve built study mode to help answer this question. When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something.



Tuesday, July 29, 2025

Don’t like that law? There’s an App for that!

https://thenextweb.com/news/proton-vpn-uk-top-app-age-verification

Proton VPN rises to top UK app charts as porn age checks kick in

Proton VPN has become the UK’s most downloaded free app, as Britons rush to bypass a new law requiring users to verify their age before accessing websites hosting adult content.

Proton VPN reported a staggering 1,400% surge in UK sign-ups almost immediately after the Online Safety Act came into effect. It is now Britain’s most downloaded free app, overtaking ChatGPT, according to Apple’s App Store rankings.





Welcome to the anti-lawyer…

https://futurism.com/chatgpt-legal-questions-court

If You've Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court

Imagine this scenario: you're worried you may have committed a crime, so you turn to a trusted advisor — OpenAI's blockbuster ChatGPT, say — to describe what you did and get its advice.

This isn't remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. Because people are amazingly stupid, it's almost certain that some have already asked the bot enormously consequential questions about, say, murder or drug charges.

According to OpenAI CEO Sam Altman, anyone who's done so has made a massive error — because unlike a human lawyer, with whom you enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.



Monday, July 28, 2025

At least SciFi has considered these issues.

https://www.proquest.com/openview/0495c1e86b831c212e738b45dd5f6023/1?pq-origsite=gscholar&cbl=2036059

AI ACT AND THE ECHO OF ASIMOV'S LAWS OF ROBOTICS. WHEN THE LACK OF LEGAL SOURCES PUSHED THE EU TOWARDS SCIENCE FICTION

We are living in a time of rapid technological advancement, particularly in the field of AI, and more specifically, Generative AI (GAI). As GAI models increasingly permeate everyday life, the urgent need for effective regulation has become apparent. This paper explores how the EU, in its effort to fill the legislative vacuum surrounding AI, drew inspiration from unconventional sources, including science fiction literature. Specifically, it examines the extent to which Isaac Asimov’s Three Laws of Robotics, though fictional, have influenced the structure and ethical principles of the EU’s AI Act. The primary objective of this study is to analyze the resonance between Asimov’s fictional ethical framework and the normative architecture of the AI Act. To achieve this, we employ a qualitative legal research methodology, using comparative textual analysis of the AI Act alongside Asimov’s literary works and relevant policy documents. The paper is grounded in the theoretical perspectives of legal pragmatism and science and technology studies, focusing on how imagined futures can shape real-world regulatory choices. Our findings suggest that the AI Act reflects key elements of Asimov’s principles, especially the emphasis on human safety, ethical use, and transparency. This highlights an instance where speculative fiction has provided a conceptual foundation for actual legislation. The paper concludes by advocating for adaptable, ethics-based regulatory approaches that can evolve alongside AI technologies, reinforcing the idea that flexible legal structures are essential in responding to the dynamic nature of AI.