Friday, November 28, 2025

A suggestion that the policy is flawed?

https://www.politico.com/news/2025/11/28/trump-detention-deportation-policy-00669861

More than 220 judges have now rejected the Trump admin’s mass detention policy

The Trump administration’s bid to systematically lock up nearly all immigrants facing deportation proceedings has led to a fierce — and mounting — rejection by courts across the country.

That effort, which began with an abrupt policy change by Immigration and Customs Enforcement on July 8, has led to a tidal wave of emergency lawsuits after ICE’s targets were arrested at workplaces, courthouses or check-ins with immigration officers. Many have lived in the U.S. for years, and sometimes decades, without incident and have been pursuing asylum or other forms of legal status.

At least 225 judges have ruled in more than 700 cases that the administration’s new policy, which also deprives people of an opportunity to seek release from an immigration court, is a likely violation of law and the right to due process. Those judges were appointed by all modern presidents — including 23 by Trump himself — and hail from at least 35 states, according to a POLITICO analysis of thousands of recent cases. The number of judges opposing the administration’s position has more than doubled in less than a month.

In contrast, only eight judges nationwide, including six appointed by Trump, have sided with the administration’s new mass detention policy.


Thursday, November 27, 2025

Another swing of the pendulum…

https://www.politico.eu/article/european-parliament-backs-minimum-age-of-16-for-social-media/

European Parliament backs 16+ age rule for social media

The European Parliament on Wednesday called for a Europe-wide minimum threshold of 16 for minors to access social media without their parents’ consent.

Parliament members also want the EU to hold tech CEOs like Mark Zuckerberg and Elon Musk personally liable should their platforms consistently violate the EU's rules on protecting minors online, a provision added at the suggestion of Hungarian center-right member Dóra Dávid, who previously worked for Meta.





A tool for confusion?

https://www.theregister.com/2025/11/27/fcc_radio_hijack/

FCC sounds alarm after emergency tones turned into potty-mouthed radio takeover

Malicious intruders have hijacked US radio gear to turn emergency broadcast tones into a profanity-laced alarm system.

That's according to the latest warning issued by the Federal Communications Commission (FCC), which has flagged a "recent string of cyber intrusions" that diverted studio-to-transmitter links (STLs) so attackers could replace legitimate programming with their own audio – complete with the signature "Attention Signal" tone of the domestic Emergency Alert System (EAS).

According to the alert, the intrusions exploited unsecured broadcasting equipment, notably devices manufactured by Swiss firm Barix, which were reconfigured to stream attacker-controlled audio instead of station output. That stream included either real or simulated EAS alert tones, followed by obscene language or other offensive content.
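
The common thread is internet-reachable STL gear whose web configuration pages answer without credentials. As a rough, defense-only illustration (hypothetical hostnames and a generic HTTP probe, not any specific Barix interface), a station engineer might sweep their own equipment for config pages that load with no authentication challenge:

# Hypothetical sketch: check a station's OWN STL/codec devices for web config
# pages that respond without any authentication challenge. The hostnames and
# the plain HTTP probe are illustrative; this is not a specific vendor's API.
import urllib.request
import urllib.error

DEVICES = ["stl-encoder-1.example.net", "stl-decoder-1.example.net"]  # your own gear only

def config_page_unprotected(host: str, timeout: float = 5.0) -> bool:
    """Return True if the device serves its web UI with no auth challenge."""
    try:
        with urllib.request.urlopen(f"http://{host}/", timeout=timeout) as resp:
            return resp.status == 200       # page served, no 401/403 challenge
    except urllib.error.HTTPError:
        return False                        # auth required, or some other HTTP error
    except (urllib.error.URLError, OSError):
        return False                        # unreachable from here; nothing confirmed

for host in DEVICES:
    if config_page_unprotected(host):
        print(f"WARNING: {host} serves its config UI without authentication")

Even a check this crude would flag the misconfiguration the FCC describes; the durable fixes are strong passwords, current firmware, and keeping STL links off the public internet.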

HTX Media, a Houston radio station, confirmed in a Facebook post that it had fallen victim to the hijackers, saying: "We've received multiple reports that 97.5 FM (ESPN Houston) has been hijacked and is currently broadcasting explicit and highly offensive content... The station appears to be looping a repeated audio stream that includes an Emergency Alert System (EAS) tone before playing an extremely vulgar track."



Wednesday, November 26, 2025

If the Terminator robbed banks…

https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/

Chatbots Are Becoming Really, Really Good Criminals

Earlier this fall, a team of security experts at the AI company Anthropic uncovered an elaborate cyber-espionage scheme. Hackers—strongly suspected by Anthropic to be working on behalf of the Chinese government—targeted government agencies and large corporations around the world. And it appears that they used Anthropic’s own AI product, Claude Code, to do most of the work.

Anthropic published its report on the incident earlier this month. Jacob Klein, Anthropic’s head of threat intelligence, explained to me that the hackers took advantage of Claude’s “agentic” abilities—which enable the program to take an extended series of actions rather than focusing on one basic task. They were able to equip the bot with a number of external tools, such as password crackers, allowing Claude to analyze potential security vulnerabilities, write malicious code, harvest passwords, and exfiltrate data.
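
"Agentic" in this context simply means the model runs in a loop: it proposes an action, a harness executes the matching tool, and the result is fed back into the model's context for the next step. A minimal, generic sketch of such a loop (stub tools and a stand-in call_model(), not Claude Code's actual architecture) looks roughly like this:

# Generic sketch of an agentic tool-use loop: the model proposes an action,
# the harness runs the matching tool, and the result is appended to the
# transcript for the next step. The tools and call_model() are stand-ins.
from typing import Callable

def scan_ports(target: str) -> str:
    return f"open ports on {target}: 22, 443"   # stub; a real agent would shell out to an external tool

def read_file(path: str) -> str:
    return f"(contents of {path})"              # stub

TOOLS: dict[str, Callable[[str], str]] = {"scan_ports": scan_ports, "read_file": read_file}

def call_model(transcript: list[str]) -> dict:
    """Stand-in for an LLM API call that returns the next action to take."""
    if len(transcript) < 2:
        return {"tool": "scan_ports", "arg": "192.0.2.10"}  # documentation-range IP
    return {"done": True}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    transcript = [f"GOAL: {goal}"]
    for _ in range(max_steps):                  # the loop is what makes it "agentic"
        action = call_model(transcript)
        if action.get("done"):
            break
        result = TOOLS[action["tool"]](action["arg"])
        transcript.append(f"{action['tool']}({action['arg']}) -> {result}")
    return transcript

print(run_agent("inventory services on the lab test host"))

Nothing in the loop itself cares whether the attached tools are build scripts or password crackers, which is why the meaningful controls sit in which tools an account may attach, in usage monitoring, and in the model's own refusals.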



Tuesday, November 25, 2025

Interesting. Worth a read…

https://www.schneier.com/blog/archives/2025/11/four-ways-ai-is-being-used-to-strengthen-democracies-worldwide.html

Four Ways AI Is Being Used to Strengthen Democracies Worldwide

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

We have just published the book Rewiring Democracy: How AI Will Transform Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies, and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.





If such things amuse you…

https://www.bespacific.com/epstein-files-search/

Epstein Files Search

Follow up to post – We created a searchable database with all 20,000 files from Epstein’s Estate – See also the new Epstein Files Search – powered by Justice for All Victims: Search Tags for Files, Images, People, Organizations, Countries.

See also via Journalist Studio – Zeteo Scoured 26,000 Epstein Docs. Here’s What We Found. You can search and review the emails here. Read what Epstein said about Trump, and his emails with Peter Thiel, Steve Bannon, Ehud Barak, and Larry Summers.



Monday, November 24, 2025

Making election fraud easier?

https://apnews.com/article/election-security-cisa-2026-secretaries-state-midterms-6d18799c6c5fdd1bc001544b2dca12bf

Big changes to the agency charged with securing elections lead to midterm worries

Since it was created in 2018, the federal government’s cybersecurity agency has helped warn state and local election officials about potential threats from foreign governments, showed officials how to protect polling places from attacks, and gamed out how to respond to the unexpected, such as an Election Day bomb threat or a sudden disinformation campaign.

The agency was largely absent from that space for elections this month in several states, a potential preview for the 2026 midterms. Shifting priorities of the Trump administration, staffing reductions and budget cuts have many election officials concerned about how engaged the Cybersecurity and Infrastructure Security Agency will be next year, when control of Congress will be at stake in those elections.





How to slow AI adoption…

https://mtsoln.com/blog/ai-news-727/insurers-retreat-from-ai-cover-as-risk-of-multibillion-dollar-claims-mounts-4570

Insurers retreat from AI cover as risk of multibillion-dollar claims mounts

In a significant shift within the insurance sector, major insurers such as AIG, Great American, and WR Berkley are reconsidering their coverage for liabilities concerning artificial intelligence (AI). This decision comes in light of growing anxieties over the potential for complex and costly claims stemming from the actions of AI systems, including chatbots and autonomous agents.

As the adoption of AI technologies accelerates across industries, so does the magnitude of the potential financial consequences of AI-driven missteps. Insurers are responding to these evolving risks by seeking permission from regulators to limit their liability exposure connected to AI systems.



Sunday, November 23, 2025

Holding the Terminator liable?

https://journals.soran.edu.iq/index.php/Twejer/article/view/2047

Criminal Liability for Crimes Committed by Artificial Intelligence Devices (Robots)

With the rapid development of artificial intelligence and robotics technology, robots have become integral to our daily lives, serving both practical and ideological purposes. Individuals and institutions utilize them to achieve various objectives, and their growing presence across multiple sectors underscores their importance in delivering essential services to humanity. However, alongside these benefits, robots also pose significant risks due to their extensive applications in areas such as the military, education, humanitarian aid, security, and law enforcement. In these contexts, robots may make mistakes that harm those interacting with them, necessitating a legal framework that aligns with these new realities and reexamines the criminal liability of robots in light of technological advancements. The nature of crimes committed by robots varies according to their technical capabilities, [Is that true? Bob] encompassing offenses against individuals, property, and more. Accordingly, our research will explore the extent to which robots can be held accountable for crimes they commit autonomously, without human intervention.





Because we will need some tools…

https://jeet.ieet.org/index.php/home/article/view/188

Misinformation Research at the National Science Foundation

Promotion of misinformation online has become common. Misinformation is usually defined as false or inaccurate assertions made without a clear motive to deceive, in contrast to unethical disinformation, which is consciously intended to mislead. However, misinformation still raises ethical questions, such as how much obligation a person has to verify the factual truth of what they assert, and how many apparent cases were intentional falsehoods that simply could not be proven to come from liars. Beginning early in the current century, the National Science Foundation supported much research intended to understand misinformation’s social dynamics and to develop tools to identify and even combat it. Then, in 2025, the second Trump administration banned such research, even cancelling many active grants that funded academic projects. Examination of representative research identifies ethical debates, the cultural differences across the relevant divisions of NSF, and connections to related questions such as the human implications of artificial intelligence. This clear survey of the recent history of research on false information offers the background to support future scientific and public decisions about what new research needs to be done.