Tuesday, August 12, 2025

Is this article neural data?

https://fpf.org/blog/the-neural-data-goldilocks-problem-defining-neural-data-in-u-s-state-privacy-laws/

The “Neural Data” Goldilocks Problem: Defining “Neural Data” in U.S. State Privacy Laws

As of mid-2025, four U.S. states have enacted laws regarding “neural data” or “neurotechnology data.” These laws, all of which amend existing state privacy laws, reflect growing lawmaker interest in regulating what is increasingly treated as a distinct, particularly sensitive kind of data: information about people’s thoughts, feelings, and mental activity. Created in response to the burgeoning neurotechnology industry, neural data laws in the U.S. seek to extend existing protections for the most sensitive categories of personal data to the newly conceived legal category of “neural data.”

Each of these laws defines “neural data” in related but distinct ways, raising a number of important questions: just how broad should this new data type be? How can lawmakers draw clear boundaries for a data type that, in theory, could apply to anything that reveals an individual’s mental activity? Is mental privacy actually separate from all other kinds of privacy? This blog post explores how Montana, California, Connecticut, and Colorado define “neural data,” how these varying definitions might apply to real-world scenarios, and some challenges with regulating at the level of neural data.





Yet, they must try.

https://www.technologyreview.com/2025/08/11/1121460/meet-the-early-adopter-judges-using-ai/

Meet the early-adopter judges using AI

The propensity for AI systems to make mistakes and for humans to miss those mistakes has been on full display in the US legal system as of late. The follies began when lawyers—including some at prestigious firms—submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.

The buck stopped with judges, who—whether they or opposing counsel caught the mistakes—issued reprimands and fines, and likely left attorneys embarrassed enough to think twice before trusting AI again.

But now judges are experimenting with generative AI too. Some are confident that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US. This summer, though, we’ve already seen AI-generated mistakes go undetected and cited by judges. A federal judge in New Jersey had to reissue an order riddled with errors that may have come from AI, and a judge in Mississippi refused to explain why his order, too, contained mistakes that seemed like AI hallucinations.

The results of these early-adopter experiments make two things clear. One, the category of routine tasks—for which AI can assist without requiring human judgment—is slippery to define. Two, while lawyers face sharp scrutiny when their use of AI leads to mistakes, judges may not face the same accountability, and walking back their mistakes before they do damage is much harder.





I must be old.

https://www.zdnet.com/home-and-office/networking/aol-pulls-the-plug-on-dial-up-after-30-years-feeling-old-yet/

AOL pulls the plug on dial-up after 30+ years - feeling old yet?

For millions of people who first heard "You've got mail" over crackling phone lines, an iconic chapter in digital history is coming to a close. AOL, also known as America Online, has announced it will shut down its dial-up internet service on September 30, 2025, effectively retiring a technology that was once synonymous with getting online.


