Thursday, February 05, 2026

Why hack the Olympics?

https://www.theregister.com/2026/02/05/winter_olympics_russian_attacks/

Italy claims cyberattacks 'of Russian origin' are pelting Winter Olympics

Italy's foreign minister says the country has already started swatting away cyberattacks from Russia targeting the Milano Cortina Winter Olympics.

Antonio Tajani told reporters on Wednesday that a series of cyberattacks targeted some of the government's foreign offices, including the one in the US capital.

He said they were "of Russian origin," but did not specify whether this appeared to be state-backed activity, nor provide details about the nature of the attacks, AP reported.

Thirteen Russians will compete in Milano Cortina, but they must do so as independents – they cannot fly the Russian flag.

For decades, Russia has used sporting events, especially the Olympics, for political gain. From the 1950s onward, many believe that Russia saw the Games as a means to assert the value of socialism, with the rivalry between the USSR and the capitalist US pervading most major events for three decades.





Will “elections Trump’s way” become “voting Trump’s way”?

https://www.bespacific.com/trumps-call-to-nationalize-elections-adds-to-state-officials-alarm/

Trump’s Call to ‘Nationalize’ Elections Adds to State Officials’ Alarm

The New York Times Gift Article – “President Trump’s declaration that he wants to “nationalize” voting in the United States arrives at a perilous moment for the relationship between the federal government and top election officials across the country. While the executive branch has no explicit authority over elections, generations of secretaries of state have relied on the intelligence gathering and cybersecurity defenses, among other assistance, that only the federal government can provide. But as Mr. Trump has escalated efforts to involve the administration in election and voting matters while also eliminating programs designed to fortify these systems against attacks, secretaries of state and other top state election officials, including some Republican ones, have begun to sound alarms. Some see what was once a crucial partnership as frayed beyond repair. They point to Mr. Trump’s push to overturn the 2020 election, his continued false claims that the contest was rigged, the presence of election deniers in influential government positions and his administration’s attempts to dig up evidence of widespread voter fraud that year, even though none has ever been found. The worry, these election officials say, is that Mr. Trump and his allies might try to interfere in or cast doubt on this year’s midterm elections. The president is urgently trying to defend the Republican majorities in Congress, and the political environment has appeared to grow less friendly to his party.

On Tuesday, a day after Mr. Trump’s comments about wanting to “nationalize” elections, Karoline Leavitt, the White House press secretary, said the president was referring to federal election legislation in Congress. Yet after Ms. Leavitt’s attempt to clarify Mr. Trump’s initial remarks, he doubled down on his assertion that the federal government should oversee state elections. “Look at some of the places — that horrible corruption on elections — and the federal government should not allow that,” he said. “The federal government should get involved.” Even before Mr. Trump’s latest remarks, state officials had pointed to other evidence of his aims regarding elections. The F.B.I. seized ballots and other 2020 voting records last week from an election office in Fulton County, Ga., which on Wednesday challenged the seizure in court. The Justice Department has sued nearly half of the states in the country to try to obtain their full voter rolls with Americans’ personal information in an effort to build a national voter database.”

Media Matters – Steve Bannon: “You’re damn right we’re gonna have ICE surround the polls come November. Let’s put you on notice again. ICE is going to be around the polls in the 2026 midterm elections.”



Wednesday, February 04, 2026

To guide your security…

https://pogowasright.org/fbi-couldnt-get-into-wapo-reporters-iphone-because-it-had-lockdown-mode-enabled/

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled

Joseph Cox reports:

The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.
The court record shows what devices and data the FBI was able to ultimately access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI may try other techniques to access the device.
“Because the iPhone was in Lockdown mode, CART could not extract that device,” the court record reads, referring to the FBI’s Computer Analysis Response Team, a unit focused on performing forensic analyses of seized devices. The document is written by the government, and is opposing the return of Natanson’s devices.

Read more at 404 Media.



Tuesday, February 03, 2026

Government is dumbing down? What a surprise!

https://www.bespacific.com/us-government-has-lost-more-10-000-stem-phds-trump-took-office/

U.S. government has lost more than 10,000 STEM Ph.D.s since Trump took office

Science analysis reveals how many were fired, retired, or quit across 14 agencies – “Some 10,109 doctoral-trained experts in science and related fields left their jobs last year as President Donald Trump dramatically shrank the overall federal workforce. That exodus was only 3% of the 335,192 federal workers who exited last year but represents 14% of the total number of Ph.D.s in science, technology, engineering, and math (STEM) or health fields employed at the end of 2024 as then-President Joe Biden prepared to leave office. The numbers come from employment data posted earlier this month by the White House Office of Personnel Management (OPM). At 14 research agencies Science examined in detail, departures outnumbered new hires last year by a ratio of 11 to one, resulting in a net loss of 4,224 STEM Ph.D.s. The graphs that follow show the impact is particularly striking at such scientist-rich agencies as the National Science Foundation (NSF). But across the government, these departing Ph.D.s took with them a wealth of subject matter expertise and knowledge about how the agencies operate…”





It looks like we have reached the tipping point. (If this claim isn’t false.)

https://www.bespacific.com/nearly-half-of-americans-in-2025-believed-false-claims-across-seven-months-of-surveys/

Nearly Half of Americans in 2025 Believed False Claims Across Seven Months of Surveys

NewsGuard: “Belief in False Claims Averaged 46 Percent in 2025. Over the first seven months of Reality Gap Index reports — from June to December 2025 — NewsGuard found that an average of nearly half of Americans believed at least one false claim about major claims spreading in the news. For the first six months after the launch of the NewsGuard Reality Gap Index in June, the average percent of Americans who believed false claims was 50 percent. A dip in December reduced the average to 46 percent for the seven months of 2025. The monthly variations, of course, may have less to do with changes in Americans’ gullibility than with changes in the velocity, spread, and overall appeal of a particular false claim. For example, in July and August, when the Reality Gap Index reached a high of 64 percent, two particularly viral false claims dominated the news: that U.S. President Donald Trump declared martial law to address the crime problem in Washington D.C., and that Florida’s ‘Alligator Alcatraz’ immigrant detention center was surrounded by an alligator-infested moat. NewsGuard’s Reality Gap Index is the nation’s first ongoing measurement of Americans’ propensity to believe at least one of the top three false claims circulating online each month, sourced from NewsGuard’s False Claims Fingerprints data stream. Through a monthly survey of a representative sample of Americans conducted by YouGov, the Reality Gap Index measures the percentage of Americans who believe one or more of the month’s top three false claims.

AI-Generated and Manipulated Content Over Time – Over the course of these surveys, respondents have demonstrated high levels of uncertainty when asked about the authenticity of images and videos. The images and videos have been AI-generated, taken out of context, or otherwise manipulated. Respondents especially struggled with claims related to AI-generated content. For example, in August, 73 percent of Americans either believed (35 percent) or were unsure about (38 percent) the authenticity of AI-generated photos and videos circulating online that appeared to show Donald Trump and convicted sex offender Jeffrey Epstein with underage girls…”





Has the war with AI begun?

https://sloanreview.mit.edu/article/validating-llm-output-prepare-to-be-persuasion-bombed/

Validating LLM Output? Prepare to Be ‘Persuasion Bombed’

A research study of management consultants who were asked to use a large language model to recommend strategic business decisions found that the AI responded to human validation attempts with persuasive rhetorical strategies. In addition to appealing to the user’s logic, sense of trust, and emotions, the AI also engaged in tactics such as flooding the user with large volumes of unrequested data and analyses that could overwhelm them and convince them to override their expert judgment.



Monday, February 02, 2026

Perhaps AI is a politician?

https://arstechnica.com/ai/2026/01/how-often-do-ai-chatbots-lead-users-down-a-harmful-path/

How often do AI chatbots lead users down a harmful path?

At this point, we’ve all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it’s hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls “disempowering patterns” across 1.5 million anonymized real-world conversations with its Claude AI model. While the results show that these kinds of manipulative patterns are relatively rare as a percentage of all AI conversations, they still represent a potentially large problem on an absolute basis.



(Related)

https://apnews.com/article/meta-facebook-trial-new-mexico-social-trial-facebook-instagram-whatsapp-d8b812efd001e5cabbef9e1a47143226

Undercover investigation of Meta heads to trial in New Mexico in first stand-alone case by state

The first stand-alone trial from state prosecutors in a stream of lawsuits against Meta is getting underway in New Mexico, with jury selection starting Monday.

New Mexico’s case is built on a state undercover investigation using proxy social media accounts and posing as kids to document sexual solicitations and the response from Meta, the owner of Facebook, Instagram and WhatsApp. It could give states a new legal pathway to go after social media companies over how their platforms affect children, by using consumer protection and nuisance laws.



Sunday, February 01, 2026

Good to see someone thinking about this…

https://ojs.stanford.edu/ojs/index.php/grace/article/view/4337

Regulating LLMs in Warfare: A U.S. Strategy for Military AI Accountability

Large language models (LLMs) are rapidly entering military workflows that shape intelligence synthesis, operational planning, logistics, cyber operations, and information activities, yet U.S. governance has not kept pace with their distinct risk profile. This memo argues that existing frameworks remain ill-suited to LLM-enabled decision-support: international efforts under the UN Convention on Certain Conventional Weapons focus primarily on lethal autonomous weapons, while U.S. policy relies on high-level ethical principles that have not been operationalized into enforceable requirements for evaluation, monitoring, logging, and lifecycle control. The paper identifies four core risks arising from LLM deployment in high-consequence contexts: inadvertent escalation driven by overconfident or brittle recommendations under uncertainty; scalable information operations and disinformation; expanded security vulnerabilities including data poisoning, prompt-injection, and sensitive-data leakage; and accountability gaps when human actors defer responsibility to opaque model outputs. In response, the memo proposes a U.S. regulatory framework organized around four pillars: (1) human decision rights and escalation controls, including documented authorization for crisis-sensitive uses; (2) mandatory human review and traceability for information-operations content; (3) baseline security, data governance, and continuous adversarial testing for training and deployment pipelines; and (4) accountability mechanisms, including auditable logs and incident reporting overseen by an independent Military AI Oversight Committee. The memo concludes that LLM-specific guardrails complement, rather than displace, existing weapons autonomy policy and would strengthen U.S. credibility in shaping international norms for responsible military AI. This paper was submitted to Dr. Cynthia Bailey's course CS121 Equity and Governance for Artificial Intelligence, Stanford University.





More general than this lawyerly orientation.

https://mrquarterly.org/index.php/ojs/article/view/46

Artificial Intelligence and the Transformation of Legal Practice: From Automation to Augmented Lawyering

The rapid rise of artificial intelligence (AI) is transforming the legal profession worldwide. Rather than replacing lawyers, AI reshapes legal workflows, automating routine tasks such as research, document review, and contract analysis, while enhancing human judgment, ethics, and strategic decision-making. This article examines these changes through theoretical and empirical lenses, focusing on the French legal system. It highlights organizational shifts in law firms, including new governance structures, multidisciplinary teams, and AI management practices ensuring ethical compliance and data security. The article concludes that the future of law lies in human–machine collaboration, where AI augments lawyers’ professional values of responsibility, trust, and justice: from Automation to Augmented Lawyering.





Lawyers should have been doing this, right?

https://sd34.senate.ca.gov/news/reuters-california-senate-passes-bill-regulating-lawyers-use-ai

Reuters - California Senate passes bill regulating lawyers' use of AI

A bill passed on Thursday by the California Senate would require lawyers in the state to verify the accuracy of all materials produced using artificial intelligence, including case citations and other information in court filings.

The measure, which appears to be one of the first pending in a state legislature on the use of AI by lawyers, has gone to the State Assembly for consideration.

In addition to governing California lawyers' use of AI, the bill prohibits arbitrators presiding over out-of-court disputes from delegating decision-making to generative AI and from relying on information produced by AI outside case records without first telling the parties involved.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB574



Saturday, January 31, 2026

Are we trending toward an “I’ll believe anything” world?

https://www.cnn.com/2026/01/31/uk/amelia-meme-ai-far-right-intl-scli

This cute AI-generated schoolgirl is a growing far-right meme

At first glance, Amelia, with her purple bob and pixie-girl looks, seems an unlikely candidate for the far right to adopt as an increasingly popular meme.

Yet, for the past few weeks, memes and AI-generated videos featuring this fictional British teenager have proliferated across social media, especially on X. In them, Amelia parrots right-wing, often racist, talking points, connecting her celebration of stereotypical British culture with anti-migrant and Islamophobic tropes.

She sips pints in pubs, reads “Harry Potter” and goes back in time to fight in some of Britain’s most famous battles. But she also dons an ICE uniform to violently deport migrants and embraces such extreme rhetoric that even British far-right activist Tommy Robinson has posted videos of her. It’s an unlikely life for a schoolgirl.



Friday, January 30, 2026

Worry when the AI fakes aren’t so obvious…

https://www.sfgate.com/tech/article/donald-trump-ai-youtube-21323144.php

Trump pushes obviously fake videos bashing California

The Trump administration has used artificial intelligence to create a fake image of a protester crying and to splash ads across the internet for months. Now, the president is using videos that seem clearly to be created with the new tech to fan his base’s anti-California sentiment.

During a wave of posts and reposts to Truth Social on Wednesday, President Donald Trump’s account twice shared an apparently AI-generated, news-style video claiming Walmart is shutting down 250 stores across California due to the state’s policy choices. One of Trump’s posts included a screenshot of a commentator calling it, “More bad news for Gavin Newson,” aka Newsom, California governor and the president’s most prominent Democratic foil.

A Walmart spokesperson told CNN that the company is not shutting down a wave of stores in California, and actually just opened a new location in the Inland Empire. Newsom’s Press Office bashed Trump for the posts, saying another posted video was AI-generated and had accused the governor of running a drug-money laundering scheme. “We cannot believe this is real life,” the governor’s post said, “And we truly cannot believe this man has the nuclear codes.”



(Related)

https://www.bespacific.com/dhs-is-using-google-and-adobe-ai-to-make-videos/

DHS is using Google and Adobe AI to make videos

MIT Technology Review: “Immigration agencies have been flooding social media with bizarre, seemingly AI-generated content. We now know more about what might be making it.”