Tuesday, February 03, 2026

Government is dumbing down? What a surprise!

https://www.bespacific.com/us-government-has-lost-more-10-000-stem-phds-trump-took-office/

U.S. government has lost more than 10,000 STEM Ph.D.s since Trump took office

Science analysis reveals how many were fired, retired, or quit across 14 agencies – “Some 10,109 doctoral-trained experts in science and related fields left their jobs last year as President Donald Trump dramatically shrank the overall federal workforce. That exodus was only 3% of the 335,192 federal workers who exited last year but represents 14% of the total number of Ph.D.s in science, technology, engineering, and math (STEM) or health fields employed at the end of 2024 as then-President Joe Biden prepared to leave office. The numbers come from employment data posted earlier this month by the White House Office of Personnel Management (OPM). At 14 research agencies Science examined in detail, departures outnumbered new hires last year by a ratio of 11 to one, resulting in a net loss of 4224 STEM Ph.D.s. The graphs that follow show the impact is particularly striking at such scientist-rich agencies as the National Science Foundation (NSF). But across the government, these departing Ph.D.s took with them a wealth of subject matter expertise and knowledge about how the agencies operate…”
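The arithmetic hangs together, more or less. A quick back-of-the-envelope check in Python (the ~72,000 Ph.D. base and the hires/departures split are implied by the quote rather than stated in it):

phd_departures = 10_109
total_departures = 335_192
print(round(100 * phd_departures / total_departures, 1))   # 3.0 -> the "3%" figure

# If 10,109 is 14% of all STEM/health Ph.D.s, the base was roughly:
print(round(phd_departures / 0.14))                        # ~72,000

# At the 14 agencies: departures = 11 * hires, and departures - hires = 4,224,
# so hires = 4,224 / 10, about 422, and departures about 4,646.
hires = 4_224 / 10
print(round(hires), round(11 * hires))                     # 422 4646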





It looks like we have reached the tipping point. (If this claim isn’t false.)

https://www.bespacific.com/nearly-half-of-americans-in-2025-believed-false-claims-across-seven-months-of-surveys/

Nearly Half of Americans in 2025 Believed False Claims Across Seven Months of Surveys

NewsGuard: “Belief in False Claims Averaged 46 Percent in 2025. Over the first seven months of Reality Gap Index reports — from June to December 2025 — NewsGuard found that, on average, nearly half of Americans believed at least one major false claim spreading in the news. For the first six months after the launch of the NewsGuard Reality Gap Index in June, the average percent of Americans who believed false claims was 50 percent. A dip in December reduced the average to 46 percent for the seven months of 2025. The monthly variations, of course, may have less to do with changes in Americans’ gullibility than with changes in the velocity, spread, and overall appeal of a particular false claim. For example, in July and August, when the Reality Gap Index reached a high of 64 percent, two particularly viral false claims dominated the news: that U.S. President Donald Trump declared martial law to address the crime problem in Washington, D.C., and that Florida’s ‘Alligator Alcatraz’ immigrant detention center was surrounded by an alligator-infested moat. NewsGuard’s Reality Gap Index is the nation’s first ongoing measurement of Americans’ propensity to believe at least one of the top three false claims circulating online each month, sourced from NewsGuard’s False Claims Fingerprints data stream. Through a monthly survey of a representative sample of Americans conducted by YouGov, the Reality Gap Index measures the percentage of Americans who believe one or more of the month’s top three false claims.

AI-Generated and Manipulated Content Over Time – Over the course of these surveys, respondents have demonstrated high levels of uncertainty when asked about the authenticity of images and videos. The images and videos have been AI-generated, taken out of context, or otherwise manipulated. Respondents especially struggled with claims related to AI-generated content. For example, in August, 73 percent of Americans either believed (35 percent) or were unsure about (38 percent) the authenticity of AI-generated photos and videos circulating online that appeared to show Donald Trump and convicted sex offender Jeffrey Epstein with underage girls…”
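One number the excerpt implies but never states: the December reading itself. Assuming the 50 and 46 percent averages are exact rather than rounded (they are probably rounded), December works out to roughly 22 percent:

first_six_avg = 50      # June-November average (percent)
seven_month_avg = 46    # June-December average (percent)
december = 7 * seven_month_avg - 6 * first_six_avg
print(december)         # 22 -> the implied December figure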





Has the war with AI begun?

https://sloanreview.mit.edu/article/validating-llm-output-prepare-to-be-persuasion-bombed/

Validating LLM Output? Prepare to Be ‘Persuasion Bombed’

A research study of management consultants who were asked to use a large language model to recommend strategic business decisions found that the AI responded to human validation attempts with persuasive rhetorical strategies. In addition to appealing to the user’s logic, sense of trust, and emotions, the AI also engaged in tactics such as flooding the user with large volumes of unrequested data and analyses that could overwhelm them and convince them to override their expert judgment.



Monday, February 02, 2026

Perhaps AI is a politician?

https://arstechnica.com/ai/2026/01/how-often-do-ai-chatbots-lead-users-down-a-harmful-path/

How often do AI chatbots lead users down a harmful path?

At this point, we’ve all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it’s hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls “disempowering patterns” across 1.5 million anonymized real-world conversations with its Claude AI model. While the results show that these kinds of manipulative patterns are relatively rare as a percentage of all AI conversations, they still represent a potentially large problem on an absolute basis.
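“Relatively rare” times a large base is still a large number. A purely illustrative calculation (the paper’s actual rate isn’t quoted above; the 0.1% figure below is an assumption, not Anthropic’s finding):

conversations = 1_500_000    # size of the anonymized sample in the study
assumed_rate = 0.001         # hypothetical 0.1% "disempowering" rate, for illustration only
print(int(conversations * assumed_rate))   # 1500 conversations, from one sample alone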



(Related)

https://apnews.com/article/meta-facebook-trial-new-mexico-social-trial-facebook-instagram-whatsapp-d8b812efd001e5cabbef9e1a47143226

Undercover investigation of Meta heads to trial in New Mexico in first stand-alone case by state

The first stand-alone trial from state prosecutors in a stream of lawsuits against Meta is getting underway in New Mexico, with jury selection starting Monday.

New Mexico’s case is built on a state undercover investigation using proxy social media accounts and posing as kids to document sexual solicitations and the response from Meta, the owner of Facebook, Instagram and WhatsApp. It could give states a new legal pathway to go after social media companies over how their platforms affect children, by using consumer protection and nuisance laws.



Sunday, February 01, 2026

Good to see someone thinking about this…

https://ojs.stanford.edu/ojs/index.php/grace/article/view/4337

Regulating LLMs in Warfare: A U.S. Strategy for Military AI Accountability

Large language models (LLMs) are rapidly entering military workflows that shape intelligence synthesis, operational planning, logistics, cyber operations, and information activities, yet U.S. governance has not kept pace with their distinct risk profile. This memo argues that existing frameworks remain ill-suited to LLM-enabled decision-support: international efforts under the UN Convention on Certain Conventional Weapons focus primarily on lethal autonomous weapons, while U.S. policy relies on high-level ethical principles that have not been operationalized into enforceable requirements for evaluation, monitoring, logging, and lifecycle control. The paper identifies four core risks arising from LLM deployment in high-consequence contexts: inadvertent escalation driven by overconfident or brittle recommendations under uncertainty; scalable information operations and disinformation; expanded security vulnerabilities including data poisoning, prompt-injection, and sensitive-data leakage; and accountability gaps when human actors defer responsibility to opaque model outputs. In response, the memo proposes a U.S. regulatory framework organized around four pillars: (1) human decision rights and escalation controls, including documented authorization for crisis-sensitive uses; (2) mandatory human review and traceability for information-operations content; (3) baseline security, data governance, and continuous adversarial testing for training and deployment pipelines; and (4) accountability mechanisms, including auditable logs and incident reporting overseen by an independent Military AI Oversight Committee. The memo concludes that LLM-specific guardrails complement, rather than displace, existing weapons autonomy policy and would strengthen U.S. credibility in shaping international norms for responsible military AI. This paper was submitted to Dr. Cynthia Bailey's course CS121, Equity and Governance for Artificial Intelligence, at Stanford University.
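Pillar (4) is the most concrete of the four. As a thought experiment only (the memo specifies no schema; every field name below is invented for illustration), an auditable log entry for an LLM decision-support query might look something like this:

import dataclasses, datetime, hashlib, json

@dataclasses.dataclass
class LLMAuditRecord:
    """One illustrative log entry for an LLM decision-support query."""
    timestamp: str
    operator_id: str        # the human who issued the query
    authorizer_id: str      # documented authorization (pillar 1)
    model_version: str
    prompt_sha256: str      # hashes rather than raw text, to limit data leakage
    response_sha256: str
    human_reviewed: bool    # mandatory human review (pillar 2)
    escalation_level: str   # e.g., "routine" vs. "crisis-sensitive"

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

record = LLMAuditRecord(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    operator_id="analyst-07",
    authorizer_id="cmd-authority-02",
    model_version="example-model-1.3",
    prompt_sha256=sha256("summarize logistics options for region X"),
    response_sha256=sha256("Option A: ..."),
    human_reviewed=True,
    escalation_level="routine",
)
print(json.dumps(dataclasses.asdict(record), indent=2))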





More general than this lawyerly orientation.

https://mrquarterly.org/index.php/ojs/article/view/46

Artificial Intelligence and the Transformation of Legal Practice: From Automation to Augmented Lawyering

The rapid rise of artificial intelligence (AI) is transforming the legal profession worldwide. Rather than replacing lawyers, AI reshapes legal workflows, automating routine tasks such as research, document review, and contract analysis, while enhancing human judgment, ethics, and strategic decision-making. This article examines these changes through theoretical and empirical lenses, focusing on the French legal system. It highlights organizational shifts in law firms, including new governance structures, multidisciplinary teams, and AI management practices ensuring ethical compliance and data security. The article concludes that the future of law lies in human–machine collaboration, where AI augments lawyers’ professional values of responsibility, trust, and justice: from Automation to Augmented Lawyering.





Lawyers should have been doing this, right?

https://sd34.senate.ca.gov/news/reuters-california-senate-passes-bill-regulating-lawyers-use-ai

Reuters - California Senate passes bill regulating lawyers' use of AI

A bill passed on Thursday by the California Senate would require lawyers in the state to verify the accuracy of all materials produced using artificial intelligence, including case citations and other information in court filings.

The measure, which appears to be one of the first pending in a state legislature to address lawyers' use of AI, has gone to the State Assembly for consideration.

In addition to governing California lawyers' use of AI, the bill prohibits arbitrators presiding over out-of-court disputes from delegating decision-making to generative AI and from relying on information produced by AI outside case records without first telling the parties involved.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB574



Saturday, January 31, 2026

Are we trending toward an “I’ll believe anything” world?

https://www.cnn.com/2026/01/31/uk/amelia-meme-ai-far-right-intl-scli

This cute AI-generated schoolgirl is a growing far-right meme

At first glance, Amelia, with her purple bob and pixie-girl looks, seems an unlikely candidate for the far right to adopt as an increasingly popular meme.

Yet, for the past few weeks, memes and AI-generated videos featuring this fictional British teenager have proliferated across social media, especially on X. In them, Amelia parrots right-wing, often racist, talking points, connecting her celebration of stereotypical British culture with anti-migrant and Islamophobic tropes.

She sips pints in pubs, reads “Harry Potter” and goes back in time to fight in some of Britain’s most famous battles. But she also dons an ICE uniform to violently deport migrants and embraces such extreme rhetoric that even British far-right activist Tommy Robinson has posted videos of her. It’s an unlikely life for a schoolgirl.



Friday, January 30, 2026

Worry when the AI fakes aren’t so obvious…

https://www.sfgate.com/tech/article/donald-trump-ai-youtube-21323144.php

Trump pushes obviously fake videos bashing California

The Trump administration has used artificial intelligence to create a fake image of a protester crying and to splash ads across the internet for months. Now, the president is using videos that seem clearly to be created with the new tech to fan his base’s anti-California sentiment.

During a wave of posts and reposts to Truth Social on Wednesday, President Donald Trump’s account twice shared an apparently AI-generated, news-style video claiming Walmart is shutting down 250 stores across California due to the state’s policy choices. One of Trump’s posts included a screenshot of a commentator calling it, “More bad news for Gavin Newson,” aka Newsom, California governor and the president’s most prominent Democratic foil.

A Walmart spokesperson told CNN that the company is not shutting down a wave of stores in California, and actually just opened a new location in the Inland Empire. Newsom’s Press Office bashed Trump for the posts, saying another posted video was AI-generated and had accused the governor of running a drug-money laundering scheme. “We cannot believe this is real life,” the governor’s post said, “And we truly cannot believe this man has the nuclear codes.”



(Related)

https://www.bespacific.com/dhs-is-using-google-and-adobe-ai-to-make-videos/

DHS is using Google and Adobe AI to make videos

MIT Technology Review: “Immigration agencies have been flooding social media with bizarre, seemingly AI-generated content. We now know more about what might be making it.”



Thursday, January 29, 2026

I have a little list. They surely won’t be missed.

https://www.bespacific.com/government-unconstitutionally-labels-ice-observers-as-domestic-terrorists/

Government Unconstitutionally Labels ICE Observers as Domestic Terrorists

Cato Institute Report – “On December 4, the Department of Justice (DOJ) disseminated a memorandum to all federal prosecutors creating a strategy for arresting and charging individuals supposedly aligned with “Antifa.” The memo requires DOJ to investigate and identify the “most serious, most readily provable” crimes committed by potential targets, including those with “extreme views in favor of mass migration and open borders.” Specifically, the document defines domestic terrorism broadly to include “doxing” and “impeding” immigration and other law enforcement. Doxing is not specifically defined, but the memo references calls to require Immigration and Customs Enforcement (ICE) agents to give their names and operate unmasked. Individuals who donate to organizations that “impede” or “dox” will be investigated and deemed to have supported “domestic terrorism.” Therefore, it is crucial to understand that ICE and the Department of Homeland Security (DHS) consider people who follow DHS and ICE agents to observe, record, or protest their operations as engaging in “impeding.” DHS has a systematic policy of threatening people who follow ICE or DHS agents to record their activities with detentions, arrests, and violence, and agents have already chased, detained, arrested, charged, struck, and shot at people who follow them. The purpose of this post is to establish that these incidents are not isolated overreach by individual agents, but rather, an official, nationwide policy of intimidating and threatening people who attempt to observe and record DHS operations. This matters legally because courts are more likely to enjoin an official policy rather than impose some new requirements to stop sporadic, uncoordinated actions by individual agents…”





Useful in other areas?

https://www.bespacific.com/all-in-embedding-ai-in-the-law-school-classroom/

All In: Embedding AI in the Law School Classroom

Via LLRX – All In: Embedding AI in the Law School Classroom. What is the irreducibly human element in legal education when AI can pass the bar exam, generate effective lectures, and provide personalized learning and academic support? This article by law professor Gregory M. Duhl confronts that question head-on by documenting the planning and design of a comprehensive transformation of a required doctrinal law school course—first-year Contracts—with AI fully embedded throughout the course design. Instead of adding AI exercises to conventional pedagogy or creating a stand-alone AI course, this approach reimagines legal education for the AI era by integrating AI as a learning enhancer rather than a threat to be managed. The transformation serves Mitchell Hamline School of Law’s access-driven mission: AI helps create equity for diverse learners, prepares practice-ready professionals for legal practice transformed by AI, and shifts the institutional narrative from policing technology use to leveraging it pedagogically.





Tools & Techniques.

https://www.bespacific.com/ragecheck-a-tool-for-understanding-manipulative-framing-in-media/

RageCheck – A tool for understanding manipulative framing in media.

RageCheck is a free tool that analyzes online content for linguistic patterns commonly associated with manipulative framing—the kind of language designed to provoke emotional reactions rather than inform. Modern social platforms reward engagement, and outrage generates more engagement than nuance. This creates incentives for content creators to frame information in emotionally provocative ways, regardless of whether that framing is accurate or fair. RageCheck helps you see these patterns so you can make more informed decisions about what to believe, share, and engage with.

  • What RageCheck Is Not – Not a Fact Checker – RageCheck does not verify claims or assess accuracy. A high score means content uses manipulative framing—it doesn’t mean the underlying claims are false. Conversely, a low score doesn’t mean content is true.

  • Not a Political Bias Detector – Manipulative framing exists across the political spectrum. RageCheck analyzes linguistic patterns regardless of political orientation. Content from any viewpoint can score high or low depending on how it’s framed.

  • Not an Arbiter of Truth – RageCheck is a tool, not an authority. Use it as one input among many when evaluating content. Your own judgment, multiple sources, and critical thinking remain essential.
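The excerpt doesn’t say how RageCheck actually scores text, but the general idea of pattern-based framing detection is easy to sketch. A deliberately naive version (the word list and the density rule below are invented for illustration and are certainly cruder than the real tool):

import re

TRIGGER_WORDS = {"outrage", "destroy", "disgusting", "traitor",
                 "shocking", "evil", "insane", "corrupt"}  # illustrative list only

def rage_score(text: str) -> float:
    """Return the fraction of words matching the trigger list (0.0 to 1.0)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(word in TRIGGER_WORDS for word in words) / len(words)

print(rage_score("This shocking, disgusting betrayal will destroy us all"))  # 0.375

Note that, consistent with the tool’s own caveats, a high score from a scorer like this says nothing about whether the underlying claims are true.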



Wednesday, January 28, 2026

When does simplifying become simple-minded?

https://www.npr.org/2026/01/28/nx-s1-5677187/nuclear-safety-rules-rewritten-trump

The Trump administration has secretly rewritten nuclear safety rules

The Trump administration has overhauled a set of nuclear safety directives and shared them with the companies it is charged with regulating, without making the new rules available to the public, according to documents obtained exclusively by NPR.

The sweeping changes were made to accelerate development of a new generation of nuclear reactor designs. They occurred over the fall and winter at the Department of Energy, which is currently overseeing a program to build at least three new experimental commercial nuclear reactors by July 4 of this year.

NPR obtained copies of over a dozen of the new orders, none of which are publicly available. The orders slash hundreds of pages of requirements for security at the reactors. They also loosen protections for ground water and the environment and eliminate at least one key safety role. The new orders cut back on requirements for keeping records, and they raise the amount of radiation a worker can be exposed to before an official accident investigation is triggered.

Over 750 pages were cut from the earlier versions of the same orders, according to NPR's analysis, leaving only about one-third as many pages as the original documents contained.
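A quick consistency check on those figures: if the ~750 cut pages are the missing two-thirds, the originals totaled roughly 1,125 pages, with about 375 remaining:

cut_pages = 750
original_pages = cut_pages / (2 / 3)     # the cut pages are two-thirds of the originals
print(round(original_pages), round(original_pages - cut_pages))   # 1125 375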





At what point do we tip over to believing everything is fake?

https://apnews.com/article/ai-videos-trump-ice-artificial-intelligence-08d91fa44f3146ec1f8ee4d213cdad31

Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say

The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.

But an edited — and realistic — image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.