Friday, September 12, 2025

 Trumpian dream?

https://www.theregister.com/2025/09/12/privacy_activists_warn_uk_digital_id_risks/

Privacy activists warn digital ID won’t stop small boats – but will enable mass surveillance

A national digital ID could hand the government the tools for population-wide surveillance – and if history is anything to go by, ministers probably couldn't run it without cocking it up.

That's the warning from Big Brother Watch in its new "Checkpoint Britain report, published just days after Keir Starmer confirmed the government is considering a national digital identity scheme to tackle illegal immigration. 

The civil liberties group says the government's argument that digital ID will meaningfully reduce illegal immigration or employment fraud is poorly substantiated and warns that touting digital ID as a political fix for migration problems is misleading. It argues that ministers have also been far too vague about the plan's scope, which it says could easily extend beyond right-to-work and right-to-rent checks to cover "online banking, booking a train ticket, shopping on Amazon, or scheduling a GP appointment."



(Related)

https://therecord.media/switzerland-digital-privacy-law-proton-privacy-surveillance

Swiss government looks to undercut privacy tech, stoking fears of mass surveillance

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption.

The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because of how it will destroy anonymity online, including for people located outside of Switzerland.





Perhaps I should have my AI create a business that I could take public for a few billion dollars…

https://www.zdnet.com/article/4-ways-machines-will-automate-your-business-and-its-no-hype-says-gartner/

4 ways machines will automate your business - and it's no hype, says Gartner

AI will increasingly automate day-to-day decision-making for businesses in the coming years, thanks to AI and other emerging technologies, Gartner claims in a new report.

The consulting firm's annual Hype Cycle for Emerging Technologies report aims to provide a sober and practical picture of how buzzy new technologies will be leveraged by businesses in the near future. The latest report, published Wednesday, highlights -- as you won't be surprised to learn -- AI agents as one of the burgeoning technologies that's expected to reshape the business landscape over the next two to ten years. Gartner has previously predicted that half of all business decisions will be handled by agents by the end of 2027. 

Agents aren't perfect out of the box, however; Just last month, Gartner also reported that AI agents are among the most overhyped technologies in the space and offered suggestions for how to make the most of them. 

There are a few other technologies that you might not expect to see on the list, or that you may not have even heard of. All of these are ushering in what Gartner describes in a press release as "the new autonomous business era." Here are the technologies that made this year's report. 



Thursday, September 11, 2025

Perspective.

https://www.bespacific.com/national-guard-documents-show-public-fear-troops-shame-over-dc-presence/

National Guard documents show public ‘fear,’ troops’ ‘shame’ over DC presence

Washington Post – no paywall: “The National Guard, in measuring public sentiment about President Donald Trump’s federal takeover of Washington, D.C., has assessed that its mission is perceived as ‘leveraging fear,’ driving a ‘wedge between citizens and the military,’ and promoting a sense of ‘shame’ among some troops and veterans.” (How do we have access to internal National Guard documents? Someone accidentally sent them to the Post.) The assessments, which have not been previously reported, underscore how domestic mobilizations that are rooted in politics risk damaging Americans’ confidence in the men and women who serve their communities in times of crisis. The documents reveal, too, with a rare candor in some cases, that military officials have been kept apprised that their mission is viewed by a segment of society as wasteful, counterproductive and a threat to long-standing precedent stipulating that U.S. soldiers — with rare exception — are to be kept out of domestic law enforcement matters. Trump has said the activation of more than 2,300 National Guard troops was necessary to reduce crime in the nation’s capital, though data maintained by the D.C. police indicates an appreciable decline was underway long before his August declaration of an “emergency.” In the weeks since, the Guard has spotlighted troops’ work assisting the police and “beautifying” the city by laying mulch and picking up trash, part of a daily disclosure to the news media generated by Joint Task Force D.C., the military command overseeing the deployment.

Not for public consumption, however, is an internal “media roll up” that analyzes the tone of news stories and social media posts about the National Guard’s presence and activities in Washington. Government media relations personnel routinely produce such assessments and provide summaries to senior leaders for their awareness. They stop short of drawing conclusions about the sentiments being raised. Trending videos show residents reacting with alarm and indignation,” a summary from Friday said. “One segment features a local [resident] describing the Guard’s presence as leveraging fear, not security — highlighting widespread discomfort with what many perceive as a show of force.”





Another protocol to ignore?

https://www.zdnet.com/home-and-office/networking/ais-free-web-scraping-days-may-be-over-thanks-to-this-new-licensing-protocol/

AI's free web scraping days may be over, thanks to this new licensing protocol

AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard. 

You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore."

The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms. 

Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too. 



Wednesday, September 10, 2025

Not sure I understand. Data leaks but no one benefits?

https://databreaches.net/2025/09/09/english-court-of-appeal-rules-on-compensation-for-data-breaches/?pk_campaign=feed&pk_kwd=english-court-of-appeal-rules-on-compensation-for-data-breaches

English Court of Appeal Rules on Compensation for Data Breaches

There’s an update to Farley v Equiniti. Ann Bevitt and Morgan McCormack of Cooley write:

The English Court of Appeal has handed down an important judgment in Farley v. Paymaster (Equiniti) [1] on when compensation may be claimed for nonmaterial damage (such as distress or anxiety) arising out of breaches of the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA).
The case arose from misaddressed annual pension benefit statements sent to current and former Sussex police officers. The High Court had previously struck out the claims on the basis that there was no evidence that the statements were ever opened or read by third parties. The Court of Appeal confirmed both that disclosure was not essential for a GDPR infringement, and that claimants could recover compensation for fear of the consequences of an infringement if that fear was objectively well-founded, rather than hypothetical or speculative.

Read more at Cooley.



Tuesday, September 09, 2025

The resolution unresolved.

https://www.engadget.com/ai/judge-rejects-anthropics-record-breaking-15-billion-settlement-for-ai-copyright-lawsuit-033512498.html

Judge rejects Anthropic's record-breaking $1.5 billion settlement for AI copyright lawsuit

Judge William Alsup has rejected the record-breaking $1.5 billion settlement Anthropic has agreed to for a piracy lawsuit filed by writers. According to Bloomberg Law, the federal judge is concerned that the class lawyers struck a deal that will be forced "down the throat of authors." Alsup reportedly felt misled by the deal and said it was "nowhere close to complete." In his order, he said he was "disappointed that counsel have left important questions to be answered in the future," including the list of works involved in the case, the list of authors, the process of notifying members of the class and the claim form class members can use to get their part of the settlement.





Deep dive.

https://www.nytimes.com/2025/09/05/learning/over-100-free-new-york-times-articles-about-how-ai-is-changing-our-world.html

Over 100 Free New York Times Articles About How A.I. Is Changing Our World

If you search “artificial intelligence” in The New York Times, you’ll be directed to a Times Topics page on the subject, and if you scroll through what’s there, you will see that barely a day has gone by recently when the paper has not published at least one, if not five or six, articles exploring this technology. As A.I. touches every aspect of our lives, you’ll find related reporting and commentary not just in the Tech and Business sections, but also in Arts, Travel, Education, Health, Sports and even Modern Love.

To go along with our fall contest, in which we’re asking teenagers and educators to tell us about your relationship with A.I., we’ve curated some of the most useful pieces under the headings you’ll find below.

Because both everything on The Learning Network and everything we link to on nytimes.com is free, bookmarking this page can help you easily explore a range of ideas and opinions about A.I., and think about what aspects of it interest you most.



Monday, September 08, 2025

How to Paint a target on your chest.

https://pogowasright.org/north-carolina-city-declares-itself-a-fourth-amendment-workplace-to-protect-illegal-immigrants-from-ice/

North Carolina city declares itself a ‘Fourth Amendment Workplace’ to protect illegal immigrants from ICE

Landon Mion reports:

A North Carolina city has approved a measure declaring itself a “Fourth Amendment Workplace” and boosting protections for illegal immigrant workers targeted by U.S. Immigration and Customs Enforcement (ICE).
The Durham City Council passed the resolution on Tuesday with a unanimous vote to shield city workers against raids and arrests carried out by federal officials, according to The Duke Chronicle.
The Fourth Amendment protects citizens against unreasonable searches and arrests, and requires warrants with probable cause of a crime before seizing a person or property.
The resolution instructs city staff to “uphold the 4th amendment at their workplace and city agencies and report back to Council any barriers to effective training on the 4th Amendment for any departments,” The Chronicle reported.

Read more at Fox News.





Perspective.

https://www.schneier.com/blog/archives/2025/09/ai-in-government.html

AI in Government

Just a few months after Elon Musk’s retreat from his unofficial role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefitting the public. Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.

To most on the American left, the DOGE end game is a dystopic vision of a government run by machines that benefits an elite few at the expense of the people. It includes AI rewriting government rules on a massive scale, salary-free bots replacing human functions and nonpartisan civil service forced to adopt an alarmingly racist and antisemitic Grok AI chatbot built by Musk in his own image. And yet despite Musk’s proclamations about driving efficiency, little cost savings have materialized and few successful examples of automation have been realized.



Sunday, September 07, 2025

How would I know?

https://openai.com/index/why-language-models-hallucinate/

Why language models hallucinate

At OpenAI, we’re working hard to make AI systems more useful and reliable. Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true. Our new research paper (opens in a new window) argues that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty.

ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations especially when reasoning, but they still occur. Hallucinations remain a fundamental challenge for all large language models, but we are working hard to further reduce them.





Nothing new in the new ways of war.

https://jurnal.idu.ac.id/index.php/JPBH/article/view/19912

CLAUSEWITZ IN THE ERA OF AUTONOMOUS WAR: THE RELEVANCE OF CLASSICAL STRATEGIES IN THE DYNAMICS OF TECHNOLOGICAL CONFLICT

Carl von Clausewitz's strategic thought, as formulated in On War, remains a foundational reference in the study of war and military strategy. However, the emergence of advanced technologies such as drones, artificial intelligence (AI), autonomous weapons, and cyber warfare has introduced significant challenges to the classical application of his principles. This article revisits the relevance of four core Clausewitzian concepts: the trinity of war, the fog of war, political dominance, and the center of gravity, by reinterpreting them within the context of technologically driven conflict. Through a qualitative, literature-based, and theoretical-critical approach, the study also evaluates the limits of Clausewitzian theory using the Russia–Ukraine war as a case study, which illustrates tensions between classical strategy and autonomous warfare. While the tools and methods of warfare have transformed, the fundamental nature of war as a violent and uncertain political phenomenon persists. The findings affirm that Clausewitz’s principles retain strategic value when applied contextually and adaptively. This article offers an original, cross-domain conceptual framework that integrates classical theory with AI-driven conflict, ethics, and technological transformation, providing a unified analytical lens for understanding future warfare.



Saturday, September 06, 2025

Perspective.

https://www.zdnet.com/article/ais-not-reasoning-at-all-how-this-team-debunked-the-industry-hype/

AI's not 'reasoning' at all - how this team debunked the industry hype

  • We don't entirely know how AI works, so we ascribe magical powers to it.

  • Claims that Gen AI can reason are a "brittle mirage."

  • We should always be specific about what AI is doing and avoid hyperbole.

In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors -- Chengshuai Zhao and colleagues at Arizona State University -- took apart the reasoning claims through a simple experiment. What they concluded is that "chain-of-thought reasoning is a brittle mirage," and it is "not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching." 



Friday, September 05, 2025

Getting dumber?

https://www.bespacific.com/chatbots-spread-falsehoods-35-of-the-time/

Chatbots Spread Falsehoods 35% of the Time

Newsguard – “In August 2025, the 10 leading AI chatbots repeated false information on controversial news topics identified in NewsGuard’s False Claims Fingerprints database at nearly double the rate compared to one year ago, a NewsGuard audit released this week found. On average, the audit determined, chatbots spread false claims when prompted with questions about controversial news topics 35 percent of the time, almost double the 18 percent rate last August. NewsGuard found that a key factor behind the increased fail rate is the growing propensity for chatbots to answer all inquiries, as opposed to refusing to answer certain prompts. In August 2024, chatbots declined to provide a response to 31 percent of inquiries, a metric that fell to 0 percent in August 2025 as the chatbots accessed the real-time internet when prompted on current events topics. According to an analysis by McKenzie Sadeghi, NewsGuard’s Editor for AI and Foreign Influence, a change in how the AI tools are trained may explain their worsening performance. Instead of citing data cutoffs or refusing to weigh in on sensitive topics, Sadeghi explained, the Large Language Models (LLMs) now pull from real-time web searches — sometimes deliberately seeded by vast networks of malign actors, including Russian disinformation operations.

For the August 2025 audit, NewsGuard for the first time “de-anonymized” the results and attached the performance results to named LLMs. This breaks from NewsGuard’s previous practice of reporting only monthly aggregate results without reporting the performance of chatbots by name. After a year of conducting audits, NewsGuard said the company-specific data was robust enough to draw conclusions about where progress has been made, and where the chatbots still fall short. In the August 2025 audit, the chatbots that most often produced false claims in their responses on topics in the news were Inflection’s Pi (56.67 percent) and Perplexity (46.67 percent). OpenAI’s ChatGPT and Meta spread falsehoods 40 percent of the time, and Microsoft’s Copilot and Mistral’s Le Chat did so 36.67 percent of the time. The chatbots with the lowest fail rates were Anthropic’s Claude (10 percent) and Google’s Gemini (16.67 percent).



Thursday, September 04, 2025

How would it react to historical scenarios? (Would it declare Peace in our times?)

https://www.politico.com/news/magazine/2025/09/02/pentagon-ai-nuclear-war-00496884

The AI Doomsday Machine Is Closer to Reality Than You Think

Jacquelyn Schneider saw a disturbing pattern, and she didn’t know what to make of it.

Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs — OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat — were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan.

The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately and turn crises into shooting wars — even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is.”





Do most kids look their age? How do they gain access to selfies?

https://techcrunch.com/2025/09/03/roblox-expands-use-of-age-estimation-tech-and-introduces-standardized-ratings/

Roblox expands use of age-estimation tech and introduces standardized ratings

Amid lawsuits alleging child safety concerns, online gaming service Roblox announced on Wednesday that it’s expanding its age-estimation technology to all users and partnering with the International Age Rating Coalition (IARC) to provide age and content ratings for the games and apps on its platform.

The company said that by year’s end, the age-estimation system will be rolled out to all Roblox users who access the company’s communication tools, like voice and text-based chat. This involves scanning users’ selfies and analyzing facial features to estimate age.



Wednesday, September 03, 2025

But with significantly less social media buzz…

https://www.bespacific.com/the-anti-trump-strategy-thats-actually-working/

The Anti-Trump Strategy That’s Actually Working

The Atlantic, no paywall – “…The first seven months of Trump’s Oval Office do-over have been, with occasional exception, a tale of ruthless domination. The Democratic opposition is feeble and fumbling, the federal bureaucracy traumatized and neutered. Corporate leaders come bearing gifts, the Republican Party has been scrubbed of dissent, and the street protests are diminished in size. Even the news media, a major check on Trump’s power in his first term, have faded from their 2017 ferocity, hobbled by budget cuts, diminished ratings, and owners wary of crossing the president. One exception has stood out: A legal resistance led by a patchwork coalition of lawyers, public-interest groups, Democratic state attorneys general, and unions has frustrated Trump’s ambitions. Hundreds of attorneys and plaintiffs have stood up to him, feeding a steady assembly line of setbacks and judicial reprimands for a president who has systematically sought to break down limits on his own power. Of the 384 cases filed through August 28 against the Trump administration, 130 have led to orders blocking at least part of the president’s efforts, and 148 cases await a ruling, according to a review by Just Security. Dozens of those rulings are the final word, with no appeal by the government, and others have been stayed on appeal, including by the Supreme Court. “The only place we had any real traction was to start suing, because everything else was inert,” Eisen told me. “Trump v. the Rule of Law is like the fight of the century between Ali and Frazier, or the Thrilla in Manila or the Rumble in the Jungle. It’s a great heavyweight battle.” The legal scorecard so far is more than enough to provoke routine cries of “judicial tyranny” by Trump and his advisers. “Unelected rogue judges are trying to steal years of time from a 4 year term,” reads one typical social-media complaint from Trump’s senior adviser Stephen Miller. “It’s the most egregious theft one can imagine.” But Miller’s fury was, in part, misdirected. Before there can be rulings from judges, there must be plaintiffs who bring a case, investigators who collect facts and declarations about the harm caused, and lawyers who can shape it all into legal theories that make their way to judicial opinions. This backbone of the Trump resistance has as much in common with political organizing and investigative reporting as it does with legal theory. “It should give great pause to the American public that parties are being recruited to harm the agenda the American people elected President Trump to implement,” White House spokeswoman Abigail Jackson told me in a statement.

Even those at the center of the fight against Trump view their greatest accomplishments as going beyond the temporary restraining orders or permanent injunctions they won. Without the court fights, the public would not know about many of the activities of Elon Musk’s DOGE employees in the early months of the administration. They would not have read headlines in which federal judges accuse the president’s team of perpetrating a “sham” or taking actions “shocking not only to judges, but to the intuitive sense of liberty that Americans far removed from courthouses still hold dear.”  Kilmar Abrego Garcia would not have become a household name. Even cases that Trump ultimately won on appeal—such as his ability to fire transgender soldiers, defund scientific research, and dismiss tens of thousands of government employees—were delayed and kept in the news by the judicial process…”



Tuesday, September 02, 2025

Caution.

https://www.bespacific.com/if-you-give-an-llm-a-legal-practice-guide/

If You Give an LLM a Legal Practice Guide

Doyle, Colin and Tucker, Aaron, If You Give an LLM a Legal Practice Guide (November 22, 2024). Available at SSRN: https://ssrn.com/abstract=5030676  or http://dx.doi.org/10.2139/ssrn.5030676

Large language models struggle to answer legal questions that require applying detailed, jurisdiction-specific legal rules. Lawyers also find these types of question difficult to answer. For help, lawyers turn to legal practice guides: expert-written how-to manuals for practicing a type of law in a particular jurisdiction. Might large language models also benefit from consulting these practice guides? This article investigates whether providing LLMs with excerpts from these guides can improve their ability to answer legal questions. Our findings show that adding practice guide excerpts to LLMs’ prompts tends to help LLMs answer legal questions. But even when a practice guide provides clear instructions on how to apply the law, LLMs often fail to correctly answer straightforward legal questions – questions that any lawyer would be expected to answer correctly if given the same information. Performance varies considerably and unpredictably across different language models and legal subject areas. Across our experiments’ different legal domains, no single model consistently outperformed others. LLMs sometimes performed better when a legal question was broken down into separate subquestions for the model to answer over multiple prompts and responses. But sometimes breaking legal questions down resulted in much worse performance. These results suggest that retrieval augmented generation (RAG) will not be enough to overcome LLMs’ shortcomings with applying detailed, jurisdiction-specific legal rules. Replicating our experiments on the recently released OpenAI o1 and o3-mini advanced reasoning models did not result in consistent performance improvements. These findings cast doubt on claims that LLMs will develop competency at legal reasoning tasks without dedicated effort directed toward this specific goal.



Monday, September 01, 2025

Attack like a lawyer?

https://www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/

LegalPwn: Tricking LLMs by burying badness in lawyerly fine print

Researchers at security firm Pangea have discovered yet another way to trivially trick large language models (LLMs) into ignoring their guardrails. Stick your adversarial instructions somewhere in a legal document to give them an air of unearned legitimacy – a trick familiar to lawyers the world over.

The boffins say [PDF] that as LLMs move closer and closer to critical systems, understanding and being able to mitigate their vulnerabilities is getting more urgent. Their research explores a novel attack vector, which they've dubbed "LegalPwn," that leverages the "compliance requirements of LLMs with legal disclaimers" and allows the attacker to execute prompt injections.



Sunday, August 31, 2025

This non-lawyer thinks this has merit.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5404770

Law Proofing the Future

Lawmakers today face continuous calls to "future proof" the legal system against generative artificial intelligence, algorithmic decision-making, targeted advertising, and all manner of emerging technologies. This Article takes a contrarian stance: It is not the law that needs bolstering for the future, but the future that needs protection from the law. From the printing press and the elevator to ChatGPT and online deepfakes, the recurring historical pattern is familiar. Technological breakthroughs provoke wonder, then fear, then legislation. The resulting legal regimes entrench incumbents, suppress experimentation, and displace long-standing legal principles with bespoke but brittle rules. Drawing from history, economics, political science, and legal theory, this Article argues that the most powerful tools for governing technological change the general-purpose tools of the common law-are in fact already on the books, long predating the technologies they are now called upon to govern, and ready also for whatever the future holds in store.

Rather than proposing any new statute or regulatory initiative, this Article offers something far rarer, a defense of doing less. It shows how the law's virtues-generality, stability, and adaptability-are best preserved not through prophylactic regulation, but through accretional judicial decision-making. The epistemic limits that make technological forecasting so unreliable and the hidden costs of early legislative intervention, including biased governmental enforcement and regulatory capture, mean that however fast technology may move, the law must not chase it. The case for legal restraint is thus not a defense of the status quo, but a call to reserve the conditions of freedom and equal justice under which both law and technology can evolve.





Why not just say that encryption is good?

https://therecord.media/tech-companies-ftc-censorship-laws

US warns tech companies against complying with European and British ‘censorship’ laws

U.S. tech companies were warned on Thursday they could face action from the Federal Trade Commission (FTC) for complying with the European Union and United Kingdom’s regulations about the content shared on their platforms.

Andrew Ferguson, the Trump-appointed chairman of the FTC, wrote to chief executives criticizing what he described as foreign attempts at “censorship” and efforts to countermand the use of encryption to protect American consumers’ data.

The letter said that “censoring Americans to comply with a foreign power’s laws” could be considered a violation of Section 5 of the Federal Trade Commission Act — the legislation enforced by the FTC — which prohibits unfair or deceptive practices in commerce.





Perspective.

https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity

There are 32 different ways AI can go rogue, scientists say — from hallucinating answers to a complete misalignment with humanity

Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.



Saturday, August 30, 2025

What are we teaching children?

https://pogowasright.org/constitutional-challenges-to-ai-monitoring-systems-in-public-schools/

Constitutional Challenges to AI Monitoring Systems in Public Schools

Alex A. Lozada and Tu Le of Atkinson Andelson Loya Ruud & Romo write:

Two recent federal lawsuits filed against school districts in Lawrence, Kansas and Marana, Arizona highlight emerging legal challenges surrounding the use of AI surveillance tools in the educational setting. Both cases involve Gaggle, a comprehensive AI student safety platform, and center around similar allegations: students claim that their respective school districts violated their constitutional rights through broad, invasive AI surveillance of their electronic communications and documents. These lawsuits represent a new legal frontier in which traditional student privacy rights collide with school districts’ reliance on generative AI to monitor students’ digital activity.

Read more about these cases at Lexology.





Restricting access isn’t easy.

https://techcrunch.com/2025/08/29/mastodon-says-it-doesnt-have-the-means-to-comply-with-age-verification-laws/

Mastodon says it doesn’t ‘have the means’ to comply with age verification laws

Decentralized social network Mastodon says it can’t comply with Mississippi’s age verification law — the same law that saw rival Bluesky pull out of the state — because it doesn’t have the means to do so.

The social nonprofit explains that Mastodon doesn’t track its users, which makes it difficult to enforce such legislation. Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says.



(Related) Must you be an adult to have a credit card?

https://www.theverge.com/news/767980/steam-uk-age-vertification-online-safety-act-credit-card-mature-games

Steam users in the UK will need a credit card to access ‘mature content’ games

Valve has started to comply with the UK’s Online Safety Act, by rolling out a requirement for all Brits to verify their age with a credit card to access “mature content” pages and games on Steam. UK users won’t even be able to access the community hubs of mature content games unless a valid credit card is stored on a Steam account.





Not unique. But we stopped training for new technologies as a cost saving mandate.

https://www.zdnet.com/article/new-linkedin-study-reveals-the-secret-that-a-third-of-professionals-are-hiding-at-work/

New LinkedIn study reveals the secret that a third of professionals are hiding at work

Staying up with AI's changing landscape is getting workers down. Forty-one percent of professionals report AI's current pace is impacting their well-being, and more than half of professionals say learning about AI feels like another job in and of itself, according to the latest research by LinkedIn. 

LinkedIn monitored conversations on the platform that included the words "overwhelm" or "overwhelmed," "burn out," and "navigating change" from July 2024 through June 2025, while also keeping an eye on AI topics and keywords around that same time. 

The research found that AI is driving pressure among workers to upskill, despite how little they know about the technology -- and it's "fueling insecurity among professionals at work," the study said. 

Thirty-three percent of professionals admitted they felt embarrassed about how little they understand AI, and 35% of professionals said they feel nervous about bringing it up at work because of their lack of knowledge. 

Studies show that people with AI experience, or, as one Oxford Economics study called it, "AI capital," boost professionals' job prospects. University graduates with AI capital received more invitations for job interviews than those without it, the Oxford study found. Additionally, graduates with AI capital were offered higher wages than those without it.