Saturday, August 09, 2025

Perhaps an AI lawyer will help?

https://arstechnica.com/tech-policy/2025/08/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified/

AI industry horrified to face largest copyright class action ever certified

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit brought by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.
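For scale, the exposure described can be sanity-checked with quick arithmetic, using only the figures quoted above (the $150,000 statutory maximum per work and up to 7 million claimants; actual awards, if any, would be set at trial and could be far lower):

```python
# Back-of-the-envelope upper bound on statutory damages exposure.
# Figures are the ones cited in the article, not a prediction.
claimants = 7_000_000          # "up to seven million potential claimants"
max_statutory_award = 150_000  # statutory maximum per work, in USD

worst_case = claimants * max_statutory_award
print(f"${worst_case:,}")  # $1,050,000,000,000
```

Even a modest fraction of that worst case lands in the "hundreds of billions of dollars" range Anthropic cites.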





Perspective. What is Trump attempting?

https://thedailyeconomy.org/article/trumps-39-tariff-on-gold-revenue-grab-or-prelude-to-revaluation/

Trump’s 39% Tariff on Gold: Revenue Grab or Prelude to Revaluation?

The new levy rattled global markets, but that may be just the beginning. History teaches us to be wary when the government targets gold.




Friday, August 08, 2025

Tell us your conclusions before we grant you funding…

https://www.bespacific.com/new-executive-order-puts-all-grants-under-political-control/

New executive order puts all grants under political control

Ars Technica: “On Thursday, the Trump administration issued an executive order asserting political control over grant funding, including all federally supported research. The order requires that any announcement of funding opportunities be reviewed by the head of the agency or someone they designate, which means a political appointee will have the ultimate say over what areas of science the US funds. Individual grants will also require clearance from a political appointee and “must, where applicable, demonstrably advance the President’s policy priorities.” The order also instructs agencies to formalize the ability to cancel previously awarded grants at any time if they’re considered to “no longer advance agency priorities.” Until a system is in place to enforce the new rules, agencies are forbidden from starting new funding programs. In short, the new rules would mean that all federal science research would need to be approved by a political appointee who may have no expertise in the relevant areas, and the research can be canceled at any time if the political winds change. It would mark the end of a system that has enabled US scientific leadership for roughly 70 years…”





Too useful in too many areas to ignore.

https://www.bespacific.com/handbook-weapons-of-information-warfare/

Handbook “Weapons of Information Warfare”

The Center for Countering Disinformation, with the support of the EU Advisory Mission (EUAM) Ukraine, has created the handbook “Weapons of Information Warfare”.

  • The handbook systematizes key methods used by the aggressor state in its information war against Ukraine.

  • It includes sections on tactics and mechanisms of destructive information influence—such as the creation and dissemination of manipulative content that distorts perception and alters audience behavior—as well as soft power tools used by russia to control public consciousness through culture, education, sports, and more.

  • The handbook visualizes manifestations of russian information aggression and offers practical ways to counter it.

The Center expresses its gratitude to EUAM for fruitful cooperation and will continue expanding collaboration with international partners to build a united response to the challenges of hybrid warfare and strengthen the resilience of the democratic world against hostile propaganda.





Perspective.

https://www.theregister.com/2025/08/08/opinion_column_osa/

Prohibition never works, but that didn't stop the UK's Online Safety Act

Sure, the idea as presented was to make the UK "the safest place in the world to be online," especially for children. The Act was promoted as a way to prevent children from accessing porn, materials that encourage suicide, self-harm, eating disorders, dangerous stunts, and so on.

To quote former Technology Secretary Michelle Donelan, "Today will go down as a historic moment that ensures the online safety of British society not only now, but for decades to come."

Yeah. No. Not at all.

In the real world, this has meant such dens of iniquity as Spotify, Bluesky, and Discord have all implemented age-restriction requirements. Forcing internet services and ISPs to be de facto police means they're choosing the easiest way to block people rather than try the Herculean task of determining what's OK to share and what's not. Faced with the threat of losing 10 percent of their global revenue or courts blocking their services, I can't blame them.



Thursday, August 07, 2025

Hence the term Large Language Model…

https://www.bespacific.com/openai-offers-20-million-user-chats-in-chatgpt-lawsuit-nyt-wants-120-million/

OpenAI offers 20 million user chats in ChatGPT lawsuit. NYT wants 120 million.

Ars Technica: “OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT’s legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it’s possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the “highly complex” process required to make deleted chats searchable in order to block the NYT’s request for broader access. Previously, OpenAI had vowed to stop what it deemed was the NYT’s attempt to conduct “mass surveillance” of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case — short of settling — as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn’t need to search all ChatGPT logs. The AI company cited the “only expert” who has so far weighed in on what could be a statistically relevant, appropriate sample size — computer science researcher Taylor Berg-Kirkpatrick. 
He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites’ paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an “extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations.” That’s six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to “increase the scope of user privacy concerns” by delaying the outcome of the case “by months,” OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users’ deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI’s co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs…”
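Why would 20 million logs be "sufficient" when 120 million exist? For estimating how often a behavior occurs in a corpus, the margin of error of a simple random sample shrinks with the square root of the sample size, largely independent of the corpus size. A generic statistics sketch (this is a textbook approximation, not a reconstruction of Berg-Kirkpatrick's actual analysis):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of n items (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare the two sample sizes in dispute.
for n in (20_000_000, 120_000_000):
    print(f"n = {n:>11,}: margin of error about ±{margin_of_error(n):.4%}")
```

At 20 million logs the worst-case margin is already a few hundredths of a percentage point; sampling six times as many logs only shrinks it by a factor of √6, which is the statistical core of OpenAI's objection to the broader request.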





Are the job descriptions even close?

https://www.bespacific.com/fema-employees-reassigned-to-ice/

FEMA Employees Reassigned to ICE

American Prospect – Probationary employees who had been on paid leave were told to report to ICE within seven days or lose their jobs. It could signal problems with ICE recruitment. “A number of employees with the Federal Emergency Management Agency (FEMA) were informed via email late on Tuesday that they have been reassigned, effective immediately, to Immigration and Customs Enforcement (ICE). The workers had seven days to accept the reassignment, under threat of being removed from the civil service. According to sources familiar with the matter, those reassigned were probationary employees with less than one year at FEMA, who because of presumed weaker civil service protections were fired early in the Trump administration but reinstated after a court order. Like at many federal agencies, these employees had been on paid administrative leave for months, among the over 100,000 men and women across the federal government who have been collecting a salary yet doing no work. But now, these probationary FEMA employees on leave are apparently being shifted as a stopgap maneuver to bolster the ranks of ICE, which received tens of billions of dollars in the GOP mega-bill but faces the daunting task of hiring thousands of new agents to an unpopular agency with plummeting morale.

The Prospect reviewed an email from Sara Birchenough, an acting division director in staffing at the Office of the Chief Human Capital Officer. The email, with the subject line “Management Directed Reassignment Effective August 5, 2025,” notified recipients that they would be reassigned to ICE “due to the mission requirements of the Department.” The Department refers to the Department of Homeland Security (DHS); both FEMA and ICE are under its umbrella. It’s unclear how many employees were reassigned from FEMA in this manner and exactly how they would serve. Employees were told that the position description would be explained to them separately. They were given seven calendar days from receipt of the letter to accept or decline the appointment; a non-response would be considered acceptance. “If you choose to decline this reassignment, or accept but fail to report for duty, you may be subject to removal from Federal service as provided in 5 U.S.C. § 7513,” the email reads, referring to a portion of the U.S. Code. In a statement, a DHS spokesperson told the Prospect, “Under President Trump’s leadership and through the One Big Beautiful Bill, DHS is adopting an all-hands-on-deck strategy to recruit 10,000 new ICE agents. To support this effort, select FEMA employees will temporarily be detailed to ICE for 90 days to assist with hiring and vetting. Their deployment will NOT disrupt FEMA’s critical operations. FEMA remains fully prepared for Hurricane Season. Patriotic Americans are encouraged to apply at join.ice.gov.”



Wednesday, August 06, 2025

So I can use the South Park Trump in my ads?

https://www.politico.com/news/2025/08/05/elon-musk-x-court-win-california-deepfake-law-00494936

Elon Musk and X notch court win against California deepfake law

A federal judge on Tuesday struck down a California law restricting AI-generated, deepfake content during elections — among the strictest such measures in the country — notching a win for Elon Musk and his X platform, which challenged the rules.

But Judge John Mendez also declined to give an opinion on the free speech arguments that were central to the plaintiffs’ case, instead citing federal rules for online platforms for his decision.





Perspective.

https://www.psychologytoday.com/us/blog/code-conscience/202508/the-ai-doppelganger-dilemma

The AI Doppelganger Dilemma

What should you do when a machine steals your self?





Learn.

https://www.washingtonpost.com/washington-post-live/2025/09/23/global-gathering-about-future-ai/

A global gathering about the future of AI

As artificial intelligence evolves at lightning speed, nations are racing to grasp its promise, confront its risks and shape its future. On Tuesday, Sept. 23 at 3:00 p.m., join The Washington Post’s inaugural Global AI Summit in New York to explore how this technological revolution is reshaping businesses, the workforce, education, health and humanity.




Tuesday, August 05, 2025

Implement, then think it through.

https://www.techdirt.com/2025/08/04/didnt-take-long-to-reveal-the-uks-online-safety-act-is-exactly-the-privacy-crushing-failure-everyone-warned-about/

Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

Well, well, well. The “age assurance” part of the UK’s Online Safety Act has finally gone into effect, with its age checking requirements kicking in a week and a half ago. And what do you know? It’s turned out to be exactly the privacy-invading, freedom-crushing, technically unworkable disaster that everyone with half a brain predicted it would be.

Let’s start with the most obvious sign that this law is working exactly as poorly as critics warned: VPN usage in the UK has absolutely exploded. Proton VPN reported an 1,800% spike in UK sign-ups.  Five of the top ten free apps on Apple’s App Store in the UK are VPNs. When your “child safety” law’s primary achievement is teaching kids how to use VPNs to circumvent it, maybe you’ve missed the mark just a tad.

But the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:



Monday, August 04, 2025

Where should we draw the line?

https://www.kansascity.com/news/state/kansas/article311555392.html

Lawrence schools used 24/7 ‘digital surveillance’ on students, some say in suit

Nine teenage students of Lawrence’s high schools — seven former and two current — filed suit Friday in the U.S. District Court for the District of Kansas claiming that the school district subjected them to unlawful “round-the-clock digital surveillance.”

At issue is use of a third-party digital platform, software known as Gaggle, that they claim the district began using in November 2023 to unlawfully scan students’ emails, documents and other files on the digital devices given to them by the school. Through Gaggle, they say, the school conducted “suspicionless searches and seizures of student expression on a scale and scope that no court has ever upheld — and that the Constitution does not permit.”

… “This case,” the filing reads, “challenges the Lawrence, Kansas School District’s decision and policy to subject all students to round-the-clock digital surveillance — scanning their files, flagging their speech, and removing their creative work from access, often without notice, suspicion of suspected wrongdoing, or meaningful recourse.”

The suit, filed by Kansas City attorney Mark P. Johnson, asked for unspecified monetary damages and for the district to cease using Gaggle, which the suit claims violates the students’ First Amendment rights to free speech, their Fourth Amendment protections against unreasonable searches and seizures, and their Fourteenth Amendment guarantee of due process.





Perspective.

https://blogs.lse.ac.uk/businessreview/2025/08/01/why-is-gdpr-compliance-still-so-difficult/

Why is GDPR compliance still so difficult?

In our research, we analysed 16 academic studies that explore the challenges businesses face when trying to comply with the GDPR. Our findings disclose a far more complex reality than the simplistic explanation of merely “not knowing the law”, revealing a wide range of challenges that still need to be addressed.

Our analysis identifies four main types of challenges that businesses face in implementing the GDPR: technical, legal, organisational, and regulatory.



Sunday, August 03, 2025

Can we build a prison for AI and robots?

https://digitalcommons.bau.edu.lb/lsjournal/vol2024/iss1/6/

THE CRIMINAL LIABILITY OF INTELLIGENT ROBOTS: BETWEEN REALITY AND THE LAW

Artificial intelligence, in its modern perspective, is regarded as having the capacity to perform duties. But is it, in turn, capable of bearing responsibility—specifically, criminal liability?

In principle, punishment under criminal law is imposed on an accused individual because they deliberately violate the rules and provisions of the law, aiming to achieve criminal outcomes they intend. This implies the presence of a conscious and aware will. In contrast, a robot lacks such will and awareness, meaning that, from a legal standpoint, it does not qualify as a legal person under the traditional classification of legal entities.

Accordingly, this study raises the question of how criminal penalties could be imposed on a robot and whether this is even possible. If the penalties stipulated in criminal law cannot be applied, what are the possible alternatives, and can they be considered legally valid?

This research follows the attached plan, which forms the basis for the findings and recommendations.





Have we forgotten how to be polite?

https://www.independent.com/2025/07/09/first-amendment-auditors-near-cottage-hospital-harass-and-film-patients-and-customers/

First Amendment Auditors’ near Cottage Hospital Harass and Film Patients and Customers

Wednesday morning, on the sidewalks around Cottage Hospital on Nogales Avenue, three men dressed in dark clothing, one masked and armed with tripods and cameras, were reportedly harassing members of the public by recording videos, shouting profanity, and threatening identity theft, according to sources at the scene.

Engaged in what is called “First Amendment auditing,” the trio, including two who later identified themselves as Mr. Dick Fitzwell and Mr. Hill, succeeded in having bystanders call 9-1-1. Santa Barbara Police Department officers and security personnel for nearby businesses responded, arriving around 10 a.m. The men had remained on public property and were not targeting specific individuals, Lieutenant Antonio Montojo said, and no arrests were warranted. Montojo, who was on watch command duty for SBPD, said the “auditors” were not associated with law enforcement, and were trying to provoke a response from people to get them to call 9-1-1.

… “First Amendment Auditing” is trending among citizen activists, who record public officials and employees in public spaces to test their understanding and respect for First Amendment rights, particularly the right to photograph and record in public. The “auditors” target unwitting members of the public in the hope they call 9-1-1. Once they do, arriving law enforcement is photographed, with any missteps uploaded to YouTube or TikTok.





Did they get it right?

https://www.sacbee.com/opinion/op-ed/article311536381.html

How artificial intelligence is reshaping California's judicial system | Opinion

Imagine you’re in court for a traffic ticket or a child custody dispute. You expect a judge to weigh your case with impartial wisdom and a thorough understanding of the law. But what if, behind the scenes, parts of your ruling were drafted by artificial intelligence?

This month, the California Judicial Council, which oversees the largest court system in the country, approved groundbreaking rules regulating generative AI use by judges, clerks and court staff. By September 1, every courthouse from San Diego to Siskiyou must follow policies that require human oversight, protect confidentiality and guard against AI bias.

The council’s new guidelines are prudent: They forbid court personnel from allowing AI to draft legal documents or make decisions without meaningful human review. They warn against inputting sensitive case details into public AI platforms, preventing data leaks. They recognize the danger of bias baked into AI systems trained on flawed or discriminatory case law.

In an overstretched judicial system, these safeguards are essential. But safeguards are not barriers. And the AI genie is out of the bottle. California courts already rely on algorithmic tools. Judges use AI-powered risk assessments, like COMPAS, to predict defendants’ likelihood of reoffending, guiding bail and sentencing decisions. These tools have sparked fierce controversy over racial bias in the technology, yet they remain widespread.





Perspective.

https://www.researchgate.net/profile/Nishchal-Soni/publication/394105140_Social_Media_Forensics_Foundations_Technical_Frameworks_and_Emerging_Challenges/links/6889e8d5f8031739e609a006/Social-Media-Forensics-Foundations-Technical-Frameworks-and-Emerging-Challenges.pdf

Social Media Forensics: Foundations, Technical Frameworks, and Emerging Challenges

Social media forensics (SMF) has emerged as a critical subdomain of digital forensics, addressing the complex task of collecting, analyzing, and preserving evidence from dynamic, user-driven platforms. As social media plays an increasingly central role in communication, crime, and civil disputes, investigators face significant obstacles related to data volatility, platform encryption, legal jurisdiction, and user privacy. This review explores the foundational theories behind SMF, the legal frameworks that govern its practice, the array of technical tools and methodologies used for investigation, and the tactics employed by adversaries to evade detection or manipulate evidence. Special emphasis is placed on the evolving threat landscape, including deepfakes, ephemeral messaging, and decentralized platforms, as well as emerging solutions in artificial intelligence, blockchain, and real-time forensics. The paper concludes with a forward-looking perspective on the strategic, technological, and policy innovations needed to strengthen forensic readiness and ensure the integrity of digital investigations in an increasingly complex online ecosystem.