Saturday, November 15, 2025

Still a bar, not a solid wall.

https://pogowasright.org/maryland-privacy-crackdown-raises-bar-for-disclosure-compliance/

Maryland Privacy Crackdown Raises Bar for Disclosure Compliance

Michael Canty, Carol Villegas, and Danielle Izzo of Labaton Sucharow write:

Maryland joined the patchwork of 19 states that have enacted their own data privacy rules in lieu of a federal standard last year, when Gov. Wes Moore (D) signed the Maryland Online Data Privacy Act of 2024, empowering the state to curb exploitative data practices. The MODPA went into effect on Oct. 1, and enforcement begins on April 1, 2026.
Companies and consumers are evaluating whether the MODPA mirrors existing privacy regulations or marks a significant regulatory expansion. In either case, MODPA is a step toward restoring data privacy control into the hands of consumers.
The MODPA appears to be consistent with many existing state privacy statutes. However, its scope likely extends beyond existing state statutes because it sets a low threshold for application to companies doing business in Maryland and establishes stringent data minimization requirements that encompass broad categories, specifically with respect to “sensitive data.”
Privacy advocates have praised the statute as one that “provides Marylanders with some of the strongest privacy protections in the country.”

Read more at Bloomberg Law.





Is this how AI attacks humanity? The Terminator as therapist...

https://www.zdnet.com/article/using-ai-for-therapy-dont-its-bad-for-your-mental-health-apa-warns/

Using AI for therapy? Don't - it's bad for your mental health, APA warns

Therapy might be expensive and inaccessible, while many AI chatbots are free and readily available. But that doesn't mean the new technology can or should replace mental health professionals -- or fully address the mental health crisis, according to a recent advisory published Thursday by the American Psychological Association.

The advisory outlines recommendations concerning the public's use of, and over-reliance on, consumer-facing chatbots. It underscores the growing use of uncertified, consumer-facing AI chatbots by the general public and by vulnerable populations, and how poorly suited those tools are to addressing users' mental health needs.

Recent surveys show that AI chatbots like ChatGPT, Claude, and Copilot are now among the largest providers of mental health support in the country. The advisory also follows several high-profile incidents involving chatbots' mishandling of people experiencing mental health episodes.



Friday, November 14, 2025

Worth thinking about.

https://www.schneier.com/blog/archives/2025/11/the-role-of-humans-in-an-ai-powered-world.html

The Role of Humans in an AI-Powered World

As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions.

For example, in a medical context, if an AI was demonstrably better at reading a test result and diagnosing cancer than a human, you would take the AI in a second. You want the more accurate tool. But justice is harder because justice is inherently a human quality in a way that “Is this tumor cancerous?” is not. That’s a fact-based question. “What’s the right thing to do here?” is a human-based question.

Chess provides a useful analogy for this evolution. For most of history, humans were best. Then, in the 1990s, Deep Blue beat the best human. For a while after that, a good human paired with a good computer could beat either one alone. But a few years ago, that changed again, and now the best computer simply wins. There will be an intermediate period for many applications where the human-AI combination is optimal, but eventually, for fact-based tasks, the best AI will likely surpass both.

The enduring role for humans lies in making judgments, especially when values come into conflict. What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear. A lot of societal governance is about resolving conflicts between people’s rights—my right to play my music versus your right to have quiet. There’s no factual answer there. We can imagine machines will help; perhaps once we humans figure out the rules, the machines can do the implementing and kick the hard cases back to us. But the fundamental value judgments will likely remain our domain.

This essay originally appeared in IVY.





Perspective.

https://www.anthropic.com/news/disrupting-AI-espionage

Disrupting the first reported AI-orchestrated cyber espionage campaign

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.





Forensic tool?

https://www.bespacific.com/where-is-this-photo/

Where Is This Photo?

Free AI Photo Locator & Image Location Finder – Upload any photo, image, or travel picture and our AI technology will analyze and detect its location to determine where it was taken, anywhere in the world. Our photo location finder uses advanced AI image geolocation and reverse photo location search to identify GPS coordinates from pictures. [Free but additional features available with subscription]
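
The service itself leans on AI scene recognition, but a simpler forensic first step (my own suggestion, not something the tool advertises) is to check whether a photo already carries embedded GPS EXIF metadata before reaching for AI geolocation. A minimal sketch, assuming Pillow 9.4 or later and a placeholder file name "photo.jpg":

```python
from PIL import Image, ExifTags

# Baseline (non-AI) check: many photos straight off a phone still carry GPS EXIF tags.
# Assumes Pillow >= 9.4 (for ExifTags.IFD / ExifTags.GPS); "photo.jpg" is a placeholder path.

def dms_to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds to signed decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

def embedded_gps(path):
    """Return (lat, lon) from the photo's EXIF GPS tags, or None if it has none."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
    if not gps:
        return None
    lat = dms_to_decimal(gps[ExifTags.GPS.GPSLatitude], gps[ExifTags.GPS.GPSLatitudeRef])
    lon = dms_to_decimal(gps[ExifTags.GPS.GPSLongitude], gps[ExifTags.GPS.GPSLongitudeRef])
    return lat, lon

if __name__ == "__main__":
    print(embedded_gps("photo.jpg") or "no embedded GPS data -- AI geolocation would be needed")
```

If the tags are present, no AI is needed at all; if they have been stripped (as most social media platforms do), that is when a scene-recognition service like this one comes into play.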



Thursday, November 13, 2025

It’s not just lazy lawyers…

https://www.bespacific.com/computer-science-papers-rife-with-ai/

Computer science papers rife with AI

Semafor: “The largest pre-publication repository of scientific studies announced it would no longer accept computer science review papers because of the rise of fake AI-generated content. arXiv is a preprint site, which accepts papers before peer review with minimal moderation. It allows wider access to research that is usually behind paywalls and vastly speeds up publication times, although it has fewer quality controls. But the percentage of papers rejected has recently shot up. In a blog post, arXiv said it had seen an “unmanageable influx of papers,” many of them AI-generated, and that the problem was especially pronounced in computer science. Recent research suggested that up to 22% of all CS papers might contain some AI-generated content.”





Isn’t there an automatic freeze on potential evidence? (Was thirty days sufficient for all police purposes?)

https://www.bespacific.com/judge-rules-flock-surveillance-images-are-public-records-that-can-be-requested-by-anyone/

Judge Rules Flock Surveillance Images Are Public Records That Can Be Requested By Anyone

404 Media: “A judge in Washington has ruled that police images taken by Flock’s AI license plate-scanning cameras are public records that can be requested as part of normal public records requests. The decision highlights the sheer volume of the technology-fueled surveillance state in the United States, and shows that at least in some cases, police cannot withhold the data collected by their surveillance systems. In a ruling last week, Judge Elizabeth Neidzwski ruled that “the Flock images generated by the Flock cameras located in Stanwood and Sedro-Woolley [Washington] are public records under the Washington State Public Records Act,” that they are “not exempt from disclosure,” and that “an agency does not have to possess a record for that record to be subject to the Public Records Act.” She further found that “Flock camera images are created and used to further a governmental purpose” and that the images on them are public records because they were paid for by taxpayers. Despite this, the records that were requested as part of the case will not be released because the city automatically deleted them after 30 days. Local media in Washington first reported on the case; 404 Media bought Washington State court records to report the specifics of the case in more detail…”





Learning to be dumber?

https://www.zdnet.com/article/does-your-chatbot-have-brain-rot-4-ways-to-tell/

Does your chatbot have 'brain rot'? 4 ways to tell

Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper advancing what they call "the LLM Brain Rot Hypothesis" -- basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will degrade the more they're exposed to "junk data" found on social media.





You mean it might not be all good?

https://www.businessinsider.com/companies-are-warning-about-risks-of-ai-sec-filings-2025-11

The new AI warnings popping up in SEC filings

An increasing share of companies' annual filings with the Securities and Exchange Commission now caution investors that the technology could have a significant negative impact on their businesses.

So far this year, 418 publicly traded companies valued at more than $1 billion have cited AI-related risk factors associated with reputational harm in those reports, according to an analysis conducted with AlphaSense.  That is a 46% jump from 2024 and roughly nine times greater than in 2023.

AI datasets could hurt a company's image, the filings say, by producing biased or incorrect information, compromising security, or infringing on others' rights.
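
A quick back-of-the-envelope check of those figures (my own arithmetic, not the article's, reading "46% jump" and "nine times greater" loosely as 1.46x and 9x multipliers) gives implied counts for the prior years:

```python
# Rough back-calculation of the prior-year counts implied by the excerpt.
# Assumptions (mine, not the article's): "46% jump from 2024" means 2025 = 1.46 * 2024,
# and "roughly nine times greater than in 2023" means 2025 is about 9 * 2023.

companies_2025 = 418

implied_2024 = companies_2025 / 1.46   # ~286 companies
implied_2023 = companies_2025 / 9      # ~46 companies

print(f"2025: {companies_2025}")
print(f"implied 2024: {implied_2024:.0f}")
print(f"implied 2023: {implied_2023:.0f}")
```

In other words, AI risk-factor disclosures went from a few dozen large companies in 2023 to several hundred this year.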



Wednesday, November 12, 2025

How did it even get that far?

https://www.bespacific.com/you-wont-believe-the-excuses-lawyers-have-after-getting-busted-for-using-ai/

You won’t believe the excuses lawyers have after getting busted for using AI

Ars Technica – I got hacked; I lost my login; it was a rough draft; toggling windows is hard. Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading. Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoid or diminish sanctions was to admit that AI was used as soon as it’s detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lie about their AI use, judges concluded. Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing. Sometimes that means arguing that you didn’t realize you were using AI, as in the case of a California lawyer who got stung by Google’s AI Overviews, which he claimed he took for typical Google search results. Most often, lawyers using this excuse tend to blame an underling, but clients have been blamed, too. A Texas lawyer this month was sanctioned after deflecting so much that the court had to eventually put his client on the stand after he revealed she played a significant role in drafting the aberrant filing. “Is your client an attorney?” the court asked. “No, not at all your Honor, just was essentially helping me with the theories of the case,” the lawyer said…”





Is this the start of the next investment bubble/opportunity?

https://www.cnn.com/2025/11/12/tech/quantum-computing-ibm-microsoft-google

A seismic shift in computing is on the horizon (and it’s not AI)

Quantum computing could potentially lead to a $1.3 trillion increase in value across certain industries by 2035, according to McKinsey & Company, and for good reason. Experts believe quantum computing could lead to breakthroughs in fields like cryptography, finance, science and transportation, and IBM says the technology could solve some problems in minutes or hours that would typically take standard, non-quantum computers thousands of years.





Picking targets, testing security. Weapons for the next war.

https://www.theregister.com/2025/11/12/asio_cyber_sabotage_warnings/

Australia’s spy boss says authoritarian nations ready to commit ‘high-impact sabotage’

Mike Burgess, the head of the Australian Security Intelligence Organisation (ASIO), has warned that authoritarian regimes “are growing more willing to disrupt or destroy critical infrastructure” using cyber-sabotage.

Burgess said those scenarios “are not hypotheticals,” adding “foreign governments have elite teams investigating these possibilities right now.” Some of those governments, he said, have previously had an intent “to commit espionage and foreign interference – to steal and meddle.”



(Related)

https://www.theregister.com/2025/11/12/uk_aviation_boss_says_organized/

Aviation watchdog says organized drone attacks will shut UK airports ‘sooner or later’

Britain's aviation watchdog has warned it's only a matter of time before organized drone attacks bring UK airports to a standstill.

Civil Aviation Authority (CAA) boss Rob Bishton told the Airlines UK conference on Monday that it was "entirely unrealistic" to think drone incursions "won't cause disruption" in the future, days after two Belgian airports were forced to shut down following drone sightings.

"It's not a question of if, only of when," he said, according to The Financial Times, adding that both drones and cyber threats are now evolving faster than anyone can keep up.



Tuesday, November 11, 2025

For your Security manager?

https://thehackernews.com/2025/11/cisos-expert-guide-to-ai-supply-chain.html

CISO's Expert Guide To AI Supply Chain Attacks

AI-enabled supply chain attacks jumped 156% last year. Discover why traditional defenses are failing and what CISOs must do now to protect their organizations.

Download the full CISO’s Expert Guide to AI Supply Chain Attacks here.



Monday, November 10, 2025

Security or accountability? Why not both?

https://www.bespacific.com/to-preserve-records-homeland-security-now-relies-on-officials-to-take-screenshots/

To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots

The New York Times Gift Article: “Experts say the new policy, which ditches software that automatically captured text messages, opens ample room for both willful and unwitting noncompliance with federal records laws. The Department of Homeland Security has stopped using software that automatically captured text messages and saved trails of communication between officials, according to sworn court statements filed this week. Instead, the agency began in April to require officials to manually take screenshots of their messages to comply with federal records laws, citing cybersecurity concerns with the autosave software. Public records experts say the new record-keeping policy opens ample room for both willful and unwitting noncompliance with federal open records laws in an administration that has already shown a lack of interest in, or willingness to skirt, records laws. That development could be particularly troubling as the department executes President Trump’s aggressive agenda of mass deportations, a campaign that has included numerous accusations of misconduct by law enforcement officials, the experts said.

“If you are an immigration official or an agent and believe that the public might later criticize you, or that your records could help you be held accountable, would you go out of the way to preserve those records that might expose wrongdoing?” said Lauren Harper, who advocates government transparency at the Freedom of the Press Foundation.

The Department of Homeland Security includes key immigration agencies such as Immigration and Customs Enforcement and Customs and Border Protection. The department did not respond to requests for comment on Thursday. But on Friday, after publication of this article, it said in an emailed statement that the application the agency had stopped using “was not the exclusive means of preserving text data,” but did not elaborate on why it had opted for manual record-keeping. The department has maintained and will continue to maintain records of phone data, including text messages, the statement added, reiterating what department officials had said in a court statement filed on Wednesday…”





Diversity?

https://www.bespacific.com/googles-hidden-empire/

Google’s Hidden Empire

Google’s Hidden Empire: This paper presents striking new data about the scale of Google’s involvement in the global digital and corporate landscape, head and shoulders above the other big tech firms. While public attention and some antitrust scrutiny has focused on these firms’ mergers and acquisitions (M&A) activities, Google has also been amassing an empire of more than 6,000 companies which it has acquired, supported or invested in, across the digital economy and beyond. The power of Google over the digital markets infrastructure and dynamics is likely greater than previously documented. We also trace the antitrust failures that have led to this state of affairs. In particular, we explore the role of neoclassical economics practiced both inside the regulatory authorities and by consultants on the outside. Their unduly narrow approach has obscured harms from vertical and conglomerate concentrations of market power and erected ever higher hurdles for enforcement action, as we demonstrate using examples of the failure to intervene in the Google/DoubleClick and Google/Fitbit mergers. Our lessons from the past failures can inform the current approach towards one of the biggest ever big tech M&A deals: Google’s $32 billion acquisition of the Israeli cloud cybersecurity firm Wiz.



Sunday, November 09, 2025

The future of auditing?

https://arxiv.org/abs/2510.26576

"Show Me You Comply... Without Showing Me Anything": Zero-Knowledge Software Auditing for AI-Enabled Systems

The increasing exploitation of Artificial Intelligence (AI) enabled systems in critical domains has made trustworthiness concerns a paramount showstopper, requiring verifiable accountability, often by regulation (e.g., the EU AI Act). Classical software verification and validation techniques, such as procedural audits, formal methods, or model documentation, are the mechanisms used to achieve this. However, these methods are either expensive or heavily manual and ill-suited for the opaque, "black box" nature of most AI models. An intractable conflict emerges: high auditability and verifiability are required by law, but such transparency conflicts with the need to protect assets being audited-e.g., confidential data and proprietary models-leading to weakened accountability. To address this challenge, this paper introduces ZKMLOps, a novel MLOps verification framework that operationalizes Zero-Knowledge Proofs (ZKPs)-cryptographic protocols allowing a prover to convince a verifier that a statement is true without revealing additional information-within Machine-Learning Operations lifecycles. By integrating ZKPs with established software engineering patterns, ZKMLOps provides a modular and repeatable process for generating verifiable cryptographic proof of compliance. We evaluate the framework's practicality through a study of regulatory compliance in financial risk auditing and assess feasibility through an empirical evaluation of top ZKP protocols, analyzing performance trade-offs for ML models of increasing complexity.
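
The abstract describes ZKMLOps only at a high level, so as a purely illustrative sketch of the primitive it builds on, the toy Python below implements a non-interactive Schnorr proof of knowledge (made non-interactive via the Fiat-Shamir heuristic): the prover convinces the verifier that it knows a secret exponent x behind a public value y = g^x mod p without revealing x. The tiny group parameters, function names, and structure are my own illustration, deliberately insecure, and not the paper's framework.

```python
import hashlib
import secrets

# Toy Schnorr zero-knowledge proof of knowledge (Fiat-Shamir heuristic).
# WARNING: illustrative only -- tiny, insecure parameters; not the ZKMLOps framework itself.

P = 2879          # safe prime: P = 2*Q + 1
Q = 1439          # prime order of the subgroup we work in
G = 4             # quadratic residue mod P, so it generates the order-Q subgroup


def fiat_shamir_challenge(*values: int) -> int:
    """Derive the challenge by hashing the public transcript (replaces the verifier's coin flips)."""
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def prove(secret_x: int) -> tuple[int, int, int]:
    """Prover: show knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, secret_x, P)
    k = secrets.randbelow(Q - 1) + 1      # fresh random nonce
    t = pow(G, k, P)                      # commitment
    c = fiat_shamir_challenge(G, y, t)    # challenge
    s = (k + c * secret_x) % Q            # response
    return y, t, s


def verify(y: int, t: int, s: int) -> bool:
    """Verifier: accept iff G^s == t * y^c (mod P); learns nothing about x beyond the statement."""
    c = fiat_shamir_challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P


if __name__ == "__main__":
    x = secrets.randbelow(Q - 1) + 1      # the prover's secret
    y, t, s = prove(x)
    print("honest proof verifies:", verify(y, t, s))            # True
    print("tampered proof verifies:", verify(y, t, (s + 1) % Q))  # False
```

The point of the demo is the asymmetry the paper relies on: the verifier checks a single equation over public values, while the secret never leaves the prover. ZKMLOps applies the same idea, with far heavier ZKP protocols, to statements about confidential data and proprietary models.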





Perspective.

https://sonoflaw.academy/index.php/slaj/article/view/3

Artificial Intelligence and Judicial Ethics: Balancing Efficiency, Accountability, and Human Judgment

This paper explores the ethical and legal challenges arising from the integration of Artificial Intelligence (AI) into judicial systems worldwide. While AI promises enhanced efficiency, consistency, and access to justice, its use in decision-making raises fundamental concerns about transparency, accountability, and moral responsibility. Drawing on legal philosophy, comparative case studies, and policy analysis, the paper proposes a normative framework for ethical AI in judicial contexts. It argues that human judgment must remain central to the exercise of justice, supported—but not replaced—by machine learning. Recommendations include establishing audit mechanisms, data transparency mandates, and ethics oversight bodies to safeguard judicial independence in the age of automation.





You know the defense lawyers are using it…

https://read.dukeupress.edu/fsr/article-abstract/37/3-4/241/403305/AI-in-ProsecutionBalancing-Innovation-with-Ethical

AI in Prosecution: Balancing Innovation with Ethical and Legal Responsibilities

The article explores sound approaches prosecutor offices can use in adopting AI technology. Prosecutors should consider various factors, including their legal and ethical duties, prosecutors' current uses of AI, AI use by law enforcement agencies, and future uses of AI in prosecution.