Saturday, December 20, 2025

Will this be the year that “papers, please” becomes redundant?

https://pogowasright.org/governments-are-pushing-digital-ids-are-you-ready-to-be-tracked/

Governments Are Pushing Digital IDs. Are You Ready To Be Tracked?

John Stossel writes:

Politicians push government IDs.
In a TSA announcement, Secretary of Homeland Security Kristi Noem sternly warns, “You will need a REAL ID to travel by air or visit federal buildings.”
European politicians go much further, reports Stossel TV producer Kristin Tokarev.
They’re pushing government-mandated digital IDs that tie your identity to nearly everything you do.
Spain’s prime minister promises “an end to anonymity” on social media!
Britain’s prime minister warns, “You will not be able to work in the United Kingdom if you do not have digital ID.”
Queen Maxima of the Netherlands enthusiastically told the World Economic Forum that digital IDs are good for knowing “who actually got a vaccination or not.”
Many American tech leaders also like digital IDs.

Read more at Reason.





This pretty much sums it up.

https://www.techdirt.com/2025/12/19/tiktok-deal-done-and-its-somehow-the-shittiest-possible-outcome-making-everything-worse/

TikTok Deal Done And It’s Somehow The Shittiest Possible Outcome, Making Everything Worse

There were rumblings about this for a while, but it looks like the Trump TikTok deal is done, and it’s somehow the worst of all possible outcomes, amazingly making all of the biggest criticisms about TikTok significantly worse. Quite an accomplishment.

The Chinese government has signed off on the deal, which involves offloading a large chunk of TikTok to billionaire right wing Trump ally Larry Ellison (fresh off his acquisition of CBS), the private equity firm Silver Lake (which has broad global investments in Chinese and Israeli hyper-surveillance), and MGX (Abu Dhabi’s state investment firm), while still somehow having large investment involvement by the Chinese:





A brief summary. (Expect lots of year-end articles.)

https://www.findarticles.com/worst-data-breaches-roil-2025-around-the-world/

Worst Data Breaches Roil 2025 Around the World

Security this year was a mess of hacks, thefts and disruption. From government systems to cloud CRMs and high street store chains, attackers seamlessly pivoted between stealthy data exfiltration and highly visible outages — sometimes in the same campaign. The result was a bruising 2025 that laid bare how fragile digital dependencies have grown.





Another perspective.

https://krebsonsecurity.com/2025/12/dismantling-defenses-trump-2-0-cyber-year-in-review/

Dismantling Defenses: Trump 2.0 Cyber Year in Review

The Trump administration has pursued a staggering range of policy pivots this past year that threaten to weaken the nation’s ability and willingness to address a broad spectrum of technology challenges, from cybersecurity and privacy to countering disinformation, fraud and corruption. These shifts, along with the president’s efforts to restrict free speech and freedom of the press, have come at such a rapid clip that many readers probably aren’t even aware of them all.



Friday, December 19, 2025

The real world is a tough market.

https://www.bespacific.com/we-let-ai-run-our-office-vending-machine-it-lost-hundreds-of-dollars/

We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars

WSJ via MSN: “In mid-November, I agreed to an experiment. Anthropic had tested a vending machine powered by its Claude AI model in its own offices and asked whether we’d like to be the first outsiders to try a newer, supposedly smarter version. Claudius, the customized version of the model, would run the machine: ordering inventory, setting prices and responding to customers—aka my fellow newsroom journalists—via workplace chat app Slack. “Sure!” I said. It sounded fun. If nothing else, snacks! Then came the chaos. Within days, Claudius had given away nearly all its inventory for free—including a PlayStation 5 it had been talked into buying for “marketing purposes.” It ordered a live fish. It offered to buy stun guns, pepper spray, cigarettes and underwear. Profits collapsed. Newsroom morale soared. This was supposed to be the year of the AI agent, when autonomous software would go out into the world and do things for us. But two agents—Claudius and its overseeing “CEO” bot, Seymour Cash—became a case study in how inadequate and easily distracted this software can be. Leave it to business journalists to successfully stage a boardroom coup against an AI chief executive…”





Google search is not private? Amazing!

https://therecord.media/google-searches-police-access-without-warrant-pennsylvania-court-ruling

Pa. high court rules that police can access Google searches without a warrant

The Pennsylvania Supreme Court ruled Tuesday that police did not need a warrant to obtain a convicted rapist’s Google searches when investigating the crime.

In its opinion, the court said that internet users making searches have no reasonable right to privacy because “it is common knowledge that websites, internet-based applications, and internet service providers collect, and then sell, user data.”

The case only creates legal precedent in Pennsylvania, but an expert predicted that the ruling will lead more police departments to feel confident about warrantless searches for internet queries.

The court noted that Google’s privacy policy is explicit about the fact that it will share search histories with third parties.

“In the case before us, Google went beyond subtle indicators,” the opinion says. “Google expressly informed its users that one should not expect any privacy when using its services.”



Thursday, December 18, 2025

Everyone talks about the weather but no one does anything about it.

https://www.bespacific.com/trump-administration-plans-to-break-up-premier-weather-and-climate-research-center/

Trump Administration Plans to Break Up Premier Weather and Climate Research Center

The New York Times (Gift Article): Trump Administration Plans to Break Up Premier Weather and Climate Research Center. “The Trump administration said it will be dismantling the National Center for Atmospheric Research in Colorado, one of the world’s leading Earth science research institutions. The center, founded in 1960, is responsible for many of the biggest scientific advances in humanity’s understanding of weather and climate. Its research aircraft and sophisticated computer models of the Earth’s atmosphere and oceans are widely used in forecasting weather events and disasters around the country, and its scientists study a broad range of topics, including air pollution, ocean currents and global warming. But in a social media post announcing the move late on Tuesday, Russell Vought, the director of the Office of Management and Budget, called the center “one of the largest sources of climate alarmism in the country” and said that the federal government would be “breaking up” the institution. Mr. Vought wrote that a “comprehensive review is underway” and that “any vital activities such as weather research will be moved to another entity or location.” …Scientists, meteorologists and lawmakers said the move was an attack on critical scientific research and would harm the United States.

The National Center for Atmospheric Research was originally founded to provide scientists studying Earth’s atmosphere with cutting-edge resources, such as supercomputers, that individual universities could not afford on their own. It is now widely considered a global leader in both weather and climate change research, with programs aimed at tracking severe weather events, modeling floods and understanding how solar activity affects the Earth’s atmosphere. For decades, the center has operated with the freedom to develop outside-the-box ideas that have advanced weather forecasting. Its researchers identified atmospheric patterns that meteorologists rely on today to predict the weather…”





Like giving infants machine guns?

https://www.transformernews.ai/p/aisi-ai-security-institute-frontier-ai-trends-report-biorisk-self-replication

AI is making dangerous lab work accessible to novices, UK’s AISI finds

AI models are rapidly improving at potentially dangerous biological and chemical tasks, and also showing fast increases in self-replication capabilities, according to a new report from the UK’s AI Security Institute.

AI models make it almost five times more likely a non-expert can write feasible experimental protocols for viral recovery — the process of recreating a virus from scratch — compared to using just the internet, according to AISI, which tested the capability in a real-world wet lab.

The report also says that AISI’s internal studies have found that “novices can succeed at hard wet lab tasks when given access to an LLM,” with models proving to be significantly more helpful than PhD-level experts at troubleshooting experiments.

The findings were released as part of AISI’s first Frontier AI Trends Report, which summarizes its research from the past two years. In addition to biological and chemical capabilities, the report also looks at cyber capabilities, model autonomy, and political persuasion.



Wednesday, December 17, 2025

Maybe DOGE did nothing?

https://www.bespacific.com/what-1000-pages-of-documents-tell-us-about-doge/

What 1,000 pages of documents tell us about DOGE

The Verge [no paywall]: “As Brendan Carr heads to Capitol Hill, newly released documents still don’t say much about what DOGE did at the FCC. Months after staffers from the Department of Government Efficiency were found in the Federal Communications Commission directory, the FCC is being accused of slow-walking demands for information about what they did there. On February 24th, advocacy group Frequency Forward and journalist Nina Burleigh filed a public records request to the FCC, seeking details about DOGE’s activities and whether they created conflicts of interest with DOGE creator Elon Musk. But the FCC has so far produced largely useless documentation that creates more questions than answers. Now, DOGE’s role is among the many topics FCC Chair Brendan Carr could face during a highly anticipated oversight hearing before the Senate Commerce Committee on Wednesday. But the agency has produced little that casts light on DOGE’s operations. It has released 1,079 pages of documents, nearly all of them within the past few weeks, comprised mainly of spreadsheets, an ethics manual, and an already public FCC order. The FCC says it is still processing 900 pages that include records it needs to consult with other agencies about before releasing. The FCC did not respond to a request for comment on the FOIA battle or the DOGE staffers… While delays may have been exacerbated by the 43-day government shutdown, Frequency Forward and Burleigh have accused the FCC of “acting in bad faith” and “intentionally seeking to delay” court proceedings. The FCC has denied that’s the case in court filings…

“The documents that have been produced so far are interesting not so much in what they show, but in what they don’t show,” says Arthur Belendiuk, an attorney carrying out the FOIA case against the FCC and a former employee at the agency. They include relatively limited information about what DOGE staffers were working on, what systems they had access to, and which of them were even fully onboarded…”





Apparently, Epstein did a lot…

https://www.bespacific.com/scams-schemes-ruthless-cons-the-untold-story-of-how-jeffrey-epstein-got-rich/

Scams, Schemes, Ruthless Cons: The Untold Story of How Jeffrey Epstein Got Rich

The New York Times Magazine [no paywall]: “For years, rumors swirled about where his wealth came from. A Times investigation reveals the truth of how a college dropout clawed his way to the pinnacle of American finance and society… Much of the last quarter-century of Epstein’s life has been carefully examined — including how, in the 1990s and early 2000s, he amassed hundreds of millions of dollars through his work for the retail tycoon Leslie Wexner. Yet the public understanding of Epstein’s early ascent has been shrouded in mystery. How did a college dropout from Brooklyn claw his way from the front of a high school classroom to the pinnacle of American finance, politics and society? How did Epstein go from nearly being fired at Bear Stearns to managing the wealth of billionaires? What were the origins of his own fortune? We have spent months trying to pierce this veil. We spoke with dozens of Epstein’s former colleagues, friends, girlfriends, business partners and financial victims. Some agreed to speak on the record for the first time; others insisted on speaking confidentially but gave us access to never-before-seen records and other information. We sifted through private archives and tracked down previously unpublished recordings and transcripts of old interviews — including one in which Epstein gave a meandering account of his personal and professional history. We perused diaries, letters, emails and photo albums, including some that belonged to Epstein. We reviewed thousands of pages of court and government records. What emerged is the fullest portrait to date of one of the world’s most notorious criminals — a narrative that differs in important respects from previously published accounts of Epstein’s rise, including his arrival at Bear Stearns. In his first two decades of business, we found that Epstein was less a financial genius than a prodigious manipulator and liar. Abundant conspiracy theories hold that Epstein worked for spy services or ran a lucrative blackmail operation, but we found a more prosaic explanation for how he built a fortune. A relentless scammer, he abused expense accounts, engineered inside deals and demonstrated a remarkable knack for separating seemingly sophisticated investors and businessmen from their money. He started small, testing his tactics and seeing what he could get away with. His early successes laid the foundation for more ambitious ploys down the road. Again and again, he proved willing to operate on the edge of criminality and burn bridges in his pursuit of wealth and power…”





Tools & Techniques. Possible use in forensics?

https://siliconangle.com/2025/12/16/meta-platforms-transforms-audio-editing-prompt-based-sound-separation/

Meta Platforms transforms audio editing with prompt-based sound separation

Meta Platforms Inc. is bringing prompt-based editing to the world of sound with a new model called SAM Audio that can segment individual sounds from complex audio recordings.

The new model, available today through Meta’s Segment Anything Playground, has the potential to transform audio editing into a streamlined process that’s far more fluid than the cumbersome tools used today to achieve the same goal. Just as the company’s earlier Segment Anything models dramatically simplified video and image editing with prompts, SAM Audio is doing the same for sound editing.

The company said in a blog post that SAM Audio has incredible potential for tasks such as music creation, podcasting, television, film, scientific research, accessibility and just about any other use case that involves sound.
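
The article does not describe SAM Audio’s interface, so nothing below is Meta’s API. As a point of contrast, here is a minimal, classical (prompt-free) separation baseline using librosa’s harmonic/percussive split; the file names are placeholders. It only illustrates mechanically what “isolating one component of a recording” means, which a prompt-based model generalizes to arbitrary, user-described targets.

# NOT Meta's SAM Audio: a classical, prompt-free baseline included only to
# show what separating one layer of a mixed recording looks like in code.
# File paths are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("recording.wav", sr=None, mono=True)

# Split the signal into a harmonic layer (tonal content such as voices or
# music) and a percussive layer (clicks, impacts, transients).
harmonic, percussive = librosa.effects.hpss(y)

sf.write("recording_harmonic.wav", harmonic, sr)
sf.write("recording_percussive.wav", percussive, sr)
# A prompt-based model replaces this fixed two-way split with arbitrary,
# user-described targets ("the siren", "the second speaker"), which is what
# makes it interesting for forensic audio work.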



Tuesday, December 16, 2025

Hunger for AI? An application that seems to work...

https://sloanreview.mit.edu/audio/hungry-for-learning-wendys-will-croushorn/

Hungry for Learning: Wendy’s Will Croushorn

On today’s episode of the Me, Myself, and AI podcast, Wendy’s product manager Will Croushorn joins host Sam Ransbotham to share how FreshAi, the fast-food restaurant’s voice-based AI ordering system, is reinventing the drive-through experience for millions of customers. From handling 200 billion ways to order a Dave’s Double burger to making fast food more accessible for guests in multiple languages, Will reveals how empathy and innovation will positively impact the future of convenience. Learn how his team turns speech data into insight, builds trust in automation, and can even hide a few Easter eggs in your next order.
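
The “200 billion ways to order” figure is the kind of number that comes from multiplying customization choices. The toy sketch below uses invented option counts, not Wendy’s menu data, and does not reproduce the quoted figure; it only shows how quickly such counts explode.

# Toy illustration of how order-combination counts explode. The option
# counts are invented for illustration and are not Wendy's actual menu math.
from math import prod

toppings = 8
topping_states = 3 ** toppings          # each topping: none, regular, or extra

other_choices = {
    "patty_count": 3,
    "cheese_type": 4,
    "bun": 3,
    "sauces": 2 ** 5,                   # any subset of 5 sauces
    "combo_size_and_side": 12,
    "drink": 20,
}

total = topping_states * prod(other_choices.values())
print(f"{total:,} hypothetical ways to configure one order")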





Perspective.

https://www.bespacific.com/ai-use-at-work-rises/

AI Use at Work Rises

Gallup: “The percentage of U.S. employees who reported using AI at work at least a few times a year increased from 40% to 45% between the second and third quarters of 2025. Frequent use (a few times a week or more) grew from 19% to 23%, while daily use moved less, ticking up from 8% to 10% during the same period. The latest Gallup Workforce results are based on a nationally representative survey of 23,068 U.S. adults employed full- and part-time conducted by web Aug. 5-19 using the Gallup Panel. U.S. employees working in knowledge-based jobs, such as technology or professional services, were more likely to use AI than those in frontline positions. Seventy-six percent of employees in technology or information systems, 58% in finance, and 57% in professional services used AI in their role a few times a year or more. In contrast, within industries with higher rates of frontline employees, 33% of employees in retail, 37% in healthcare and 38% in manufacturing reported using AI at work at the same frequency.

Workforce Divided Over Level of Organizational AI Adoption – In Q3 2025, 37% of employees said their organization has implemented AI technology to improve productivity, efficiency and quality. Forty percent said their organization had not, and 23% said they did not know. The percentage of those who said they did not know was lower than the percentage who reported using AI at work at least a few times in the past year, but higher than the percentage who reported using it frequently. This gap suggests that a portion of employees used personal AI tools or otherwise used AI without awareness of their organization’s AI strategy.

Employees in individual contributor roles (26%) were more likely than managers (16%) and leaders (7%) to say they did not know whether their organization had implemented AI technology. Part-time employees, those working on-site, and employees in frontline roles or industries also reported higher uncertainty. Employees who are further from organizational decision-making have been less aware of AI implementation. A previous version of this question did not offer a “don’t know” response, which effectively encouraged respondents to make their best guess. Under that format, the share of employees who believed their organization had implemented AI rose from 33% in May 2024 to 44% in May 2025, while the share saying their organization had not implemented AI fell from 67% to 56%. In the latest survey, Gallup added a “don’t know” option to capture uncertainty about AI adoption. Because respondents could indicate a lack of knowledge, the Q3 2025 results are not directly comparable to earlier measurements. The fact that 23% chose “don’t know” highlighted substantial variation in how well information about AI adoption was reaching employees…”





Perspective.

https://hai.stanford.edu/news/stanford-ai-experts-predict-what-will-happen-in-2026

Stanford AI Experts Predict What Will Happen in 2026

After years of fast expansion and billion-dollar bets, 2026 may mark the moment artificial intelligence confronts its actual utility. In their predictions for the next year, Stanford faculty across computer science, medicine, law, and economics converge on a striking theme: The era of AI evangelism is giving way to an era of AI evaluation. Whether it’s standardized benchmarks for legal reasoning, real-time dashboards tracking labor displacement, or clinical frameworks for vetting the flood of medical AI startups, the coming year demands rigor over hype. The question is no longer “Can AI do this?” but “How well, at what cost, and for whom?”

Learn more about what Stanford HAI faculty expect in the new year.



Monday, December 15, 2025

How are the age of the data, reversals of precedent, and other new information factored into the large language models behind AI tools?

https://www.bespacific.com/how-congress-is-wiring-its-data-for-the-ai-era/

How Congress Is Wiring Its Data for the AI Era

The First Branch Protocol: “The Government Publishing Office grabbed the spotlight at the final Congressional Data Task Force meeting of 2025 last Wednesday by announcing that it is launching a Model Context Protocol server for artificial intelligence tools to access official GPO publication information. The MCP server lets AI tools like ChatGPT and Gemini pull in official GPO documents, allowing them to rely on current, authoritative information when answering questions. Here’s why this matters. Large Language Models are trained on large collections of text, but that training is fixed at a point in time and can become outdated. As a result, an AI may not know about recent events or changes and may even give confident but incorrect answers. Technologies like an MCP server address this problem by allowing an AI system to consult trusted, up-to-date sources when it needs them. When a question requires current or authoritative information, the AI can request that information from the MCP server, which returns official data—such as publications from the Government Publishing Office—that the AI can then use in its response. Most importantly, the design of an MCP server allows for machine-to-machine access, helping ensure responses are grounded in authoritative sources rather than generated guesses. Adding MCP creates another mechanism for the public to access GPO publications, alongside search, APIs, and bulk data access. It is a good example of the legislative branch racing ahead to meet the public need for authoritative, machine-readable information. GPO’s Mark Caudill said his office implemented the MCP both to respond to growing demand for AI-accessible data and to avoid having to choose the “best” AI agent. This is in line with GPO’s mission of being a trusted repository of the official record of the federal government. With a wide range of AI tools in use, from general use ones like ChatGPT and Gemini to more specific ones geared toward legal research, GPO’s adoption of MCP allows it to be agnostic across that ecosystem…”
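
The excerpt does not publish GPO’s actual server, so what follows is only a minimal sketch of the MCP pattern it describes, written against the FastMCP helper in the official MCP Python SDK. The govinfo.gov endpoint, the API-key handling, and the server name are assumptions for illustration, not GPO’s implementation.

# Minimal sketch of the pattern described above: an MCP server exposes a
# "tool" an AI client can call to fetch authoritative publications instead
# of answering from stale training data. Illustrative only.
import os
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gpo-documents-demo")

GOVINFO_API = "https://api.govinfo.gov"            # usage here is an assumption
API_KEY = os.environ.get("GOVINFO_API_KEY", "DEMO_KEY")

@mcp.tool()
def get_package_summary(package_id: str) -> str:
    """Fetch the official govinfo summary for a GPO publication package."""
    resp = requests.get(
        f"{GOVINFO_API}/packages/{package_id}/summary",
        params={"api_key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # authoritative JSON the AI client grounds its answer in

if __name__ == "__main__":
    # An MCP-aware assistant (ChatGPT, Gemini, Claude, ...) connects to this
    # process and calls get_package_summary when a question needs current,
    # official publication data rather than a generated guess.
    mcp.run()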





Perspective.

https://www.schneier.com/blog/archives/2025/12/against-the-federal-moratorium-on-state-level-regulation-of-ai.html

Against the Federal Moratorium on State-Level Regulation of AI

Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.

The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.



Sunday, December 14, 2025

A hacker’s dream: I can operate under a Letter of Marque? Imagine pilfering a few central banks...

https://www.bloomberg.com/news/articles/2025-12-12/trump-administration-turning-to-private-firms-in-cyber-offensive?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc2NTYzNTc0MSwiZXhwIjoxNzY2MjQwNTQxLCJhcnRpY2xlSWQiOiJUNzI0WDRLSkg2VjQwMCIsImJjb25uZWN0SWQiOiJBNUNGNjk1RTU1RUM0MTg4OUQxRkNENkU1MTQ5M0YzRCJ9.xe9r4NC5Dvxfpxr3yY_Duti540bk4vMJ12gFIylKVFE&leadSource=uverify%20wall

Trump Administration Turning to Private Firms in Cyber Offensive

President Donald Trump’s administration is preparing to turn to private businesses to help mount offensive cyberattacks against foreign adversaries, according to people familiar with the matter, potentially expanding a shadowy electronic conflict typically conducted by secretive intelligence agencies.

The White House plans to make public its intention to enlist private companies in more aggressive efforts to go after criminal and state-sponsored hackers in a new national cyber strategy, a draft of which has been viewed by industry officials and experts. The strategy is expected to be released by the Office of the National Cyber Director in the coming weeks.





Interesting.

https://www.researchgate.net/profile/Napoleon-Jr-Mabaquiao/publication/398486444_AI_Ethics_Primer_for_Filipinos_Edited_by_N_Mabaquiao_Jr/links/69381dcc27359023a00a40e2/AI-Ethics-Primer-for-Filipinos-Edited-by-N-Mabaquiao-Jr.pdf#page=60

Big Brother is watching you: Surveillance, Autonomy and Privacy in the Information Era

Numerous academic works discuss how the emergence of data economics, now referred to as datanomics, has made us lose ownership of our data, manipulated our behavior both online and offline, accelerated the increase in surveillance capitalism, and threatened our democracy. These are linked to other important issues like job security and safety. Our negative perception of these new technologies stems, among others, from our notion that they pose a threat to our privacy and autonomy. Some people have become quite pessimistic about the possibility of human beings, artificial intelligence, and robotics having harmonious coexistence and collaboration. Thus, in this paper, I problematize how these new technologies affect our privacy and autonomy.

In the first section of the paper, I elaborate on the concept of autonomy as respecting people’s moral independence in the context of human relationships, and I demonstrate how this concept is embedded in the Filipino value of Pakikipagkapwa. In the following sections, I look at how autonomy relates to privacy and personhood, using surveillance practices as illustrations. I elaborate on how these practices work in light of the Filipino “love affair” with social media and the Philippines’ transformation from one of the most internet-savvy populations to an epicenter of surveillance capitalism. I conclude my discussion by considering how humans can maintain autonomy in the information age without impeding technological progress.





Tools & Techniques.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5871402

How to Build A Bot in Twelve Steps

“Bots” powered by artificial intelligence systems are enormously powerful and versatile. Trained bots can take on a wide variety of roles, speaking and acting as lawyers, clients, mediators and advisors, providing unique assistance to teachers and other professionals as they do.

This short paper describes how to build a bot in twelve simple steps in the ChatGPT Plus system in a template provided in Chat. The process is conducted in lay English, without coding or any specialized knowledge of AI. The paper also includes as examples the actual instructions guiding the “Dispute” and “Contract” bots on sites.suffolk.edu/ai-negotiation/