Saturday, October 11, 2025

Perspective.

https://pogowasright.org/article-the-great-scrape-the-clash-between-scraping-and-privacy-2/

Article: The Great Scrape: The Clash Between Scraping and Privacy

Abstract

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping”—the automated extraction of large amounts of data from the internet. A great deal of scraped data contains people’s personal information. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archiving of records, and meaningful scientific research, scraping for AI can also be objectionable and even harmful to individuals and society.
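
For readers who want a concrete picture of what “automated extraction” looks like in practice, here is a minimal sketch of a scraper using only the Python standard library; the target URL and the fields it pulls out are purely illustrative and are not anything described in the article.

# A minimal scraping sketch using only the Python standard library.
# The URL and the extracted fields are hypothetical examples, not from the article.
from urllib.request import Request, urlopen
from html.parser import HTMLParser

class LinkAndTextCollector(HTMLParser):
    """Collects hyperlinks and visible text fragments from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        stripped = data.strip()
        if stripped:
            self.text_chunks.append(stripped)

def scrape(url):
    # Identify the client; a responsible crawler would also honor robots.txt and rate limits.
    req = Request(url, headers={"User-Agent": "example-scraper/0.1"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkAndTextCollector()
    parser.feed(html)
    return parser.links, parser.text_chunks

if __name__ == "__main__":
    links, text = scrape("https://example.com/")
    print(f"Found {len(links)} links and {len(text)} text fragments")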

Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles of privacy laws, including fairness, individual rights and control, transparency, consent, purpose specification and secondary use restrictions, data minimization, onward transfer, and data security. Scraping ignores the data protection laws built around these requirements.

Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.

This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation.

Citation and Free Download of Article at:

Solove, Daniel J. and Hartzog, Woodrow, The Great Scrape: The Clash Between Scraping and Privacy (July 03, 2024). 113 California Law Review 1521 (2025), Available at SSRN: https://ssrn.com/abstract=4884485 or http://dx.doi.org/10.2139/ssrn.4884485





Make the raw materials more expensive and limit who we can sell to. Do I have that right?

https://www.cnbc.com/2025/10/10/trump-trade-tariffs-china-software.html

Trump puts extra 100% tariff on China imports, adds export controls on ‘critical software’

President Donald Trump on Friday said the United States would impose new tariffs of 100% on imports from China “over and above any Tariff that they are currently paying,” starting on Nov. 1.

Trump also said that the U.S., on that same date, would impose export controls on “any and all critical software.”



Friday, October 10, 2025

Tools & Techniques. (A couple with forensic applications.)

https://www.makeuseof.com/make-windows-better-with-these-free-microsoft-store-apps/

8 free Microsoft Store apps that make Windows better



Thursday, October 09, 2025

Resistance is futile…

https://www.jalopnik.com/1982690/police-flock-cameras-sued-for-tracking-man-526-times/

Police Used Flock Cameras To Track One Driver Over 500 Times. Now They're Being Sued

Yet a Flock Safety spokesperson told NBC News, "Fourth Amendment case law overwhelmingly shows that LPRs do not constitute a warrantless search because they take point-in-time photos of cars in public and cannot continuously track the movements of any individual." This sounds like a lovely little linguistic loophole: The cameras don't continuously track people's movements, they're stationary! But they "capture detailed data about license plates," which enables "quick and efficient action." That sounds a lot like tracking.

Virginia Gov. Glenn Youngkin signed a law in May limiting the circumstances where license plate data can be accessed, and it goes into effect in January 2026. Under the law, authorities will only be able to obtain data during active criminal investigations, missing and endangered persons cases, and in cases where a car or license plate has been stolen. 

You may notice that this means the cameras will still be recording. All this law does is tell law enforcement when they can access the footage. The Schmidt-Arrington case is about being recorded at all, and as the Kansas police chief case shows, that's the crux of the issue. Taking the pictures but restricting access for the moment is like requiring everyone to get a mug shot in case they commit a crime someday.





Tools & Techniques.

https://www.bespacific.com/the-art-of-ai-prompting-in-law-and-dispute-resolution-practice/

The Art of AI Prompting in Law and Dispute Resolution Practice

Lande, John, The Art of AI Prompting in Law and Dispute Resolution Practice (September 29, 2025). University of Missouri School of Law Legal Studies Research Paper No. 2025-46, 43 Alternatives to the High Cost of Litigation (forthcoming November 2025), Available at SSRN: https://ssrn.com/abstract=5544018 or http://dx.doi.org/10.2139/ssrn.5544018

This short article offers a practical guide for using AI tools to improve the judgment and efficiency of lawyers, mediators, and arbitrators. It cites ABA Ethics Opinion 512, which describes lawyers’ ethical duty of technological competence under the ABA Model Rules. The article encourages practitioners to begin by selecting AI tools appropriate to their tasks, such as general-purpose platforms or specialized tools listed in the article. It explains how to write effective prompts, use follow-up questions to refine outputs, and apply professional judgment when reviewing results. It includes a list of suggested follow-up prompts. Getting useful results from AI tools requires skills similar to those used in legal and dispute resolution work – skills that people can improve with practice. The article includes numerous examples of AI prompts that can be used throughout the life of a case, from preparation to post-session reflection. It cautions that AI tools can produce inaccurate, misleading, or fabricated content, and urges users to exercise human oversight and to independently verify results.
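
As a generic illustration of the prompt-then-follow-up workflow the article describes, a practitioner might structure a session along the lines sketched below; the prompt text is hypothetical and is not drawn from the article’s own examples.

# Hypothetical illustration of an initial prompt plus follow-up questions;
# none of this text comes from the article itself.
initial_prompt = (
    "You are assisting a mediator preparing for a commercial contract dispute. "
    "Summarize the key factual disagreements in the two position statements below "
    "and list questions the mediator should ask each side in the opening session.\n\n"
    "[position statements omitted]"
)
follow_ups = [
    "Which of those disagreements could likely be resolved with documents alone?",
    "Draft three neutral reframings of the strongest claim on each side.",
    "What assumptions did you make, and what information would change your answer?",
]

# In practice the user sends initial_prompt, reviews the reply with professional
# judgment, then iterates with the follow-ups to refine the output.
conversation = [{"role": "user", "content": initial_prompt}] + [
    {"role": "user", "content": q} for q in follow_ups
]
print(f"Prepared {len(conversation)} user turns for an AI assistant")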



Wednesday, October 08, 2025

Unstoppable. Probably.

https://www.theregister.com/2025/10/07/gen_ai_shadow_it_secrets/

Employees regularly paste company secrets into ChatGPT

Employees could be opening up to OpenAI in ways that put sensitive data at risk. According to a study by security biz LayerX, a large number of corporate users paste Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers right into ChatGPT, even if they're using the bot without permission.

In its Enterprise AI and SaaS Data Security Report 2025, LayerX blames the growing, largely uncontrolled usage of generative AI tools for exfiltrating personal and payment data from enterprise environments.

With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI.
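
Taken at face value, those nested percentages combine into a rough back-of-the-envelope estimate. This is a sketch of the arithmetic only, not a figure the report itself publishes, and the 22 percent applies to paste operations rather than to individual users.

# Rough combination of the LayerX figures quoted above (illustrative only).
genai_users = 0.45        # share of enterprise employees using generative AI tools
pasters = 0.77            # share of those users who copy and paste data into chatbots
sensitive_pastes = 0.22   # share of paste operations that include PII/PCI

share_of_employees_pasting = genai_users * pasters   # ~0.35
print(f"~{share_of_employees_pasting:.0%} of all employees paste data into AI tools")
# The 22% figure is per paste operation, not per employee, so it cannot simply be
# multiplied through to estimate how many employees paste sensitive data.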

"With 82 percent of pastes coming from unmanaged personal accounts, enterprises have little to no visibility into what data is being shared, creating a massive blind spot for data leakage and compliance risks," the report says.





Perspective.

https://www.bespacific.com/consumer-reports-study-finds-surge-in-texting-and-messaging-scams/

Consumer Reports study finds surge in texting and messaging scams

Consumer Reports “(CR), along with Aspen Digital and the Global Cyber Alliance, released the fourth annual Consumer Cyber Readiness Report today, marking the beginning of Cybersecurity Awareness Month. The report examines American consumer attitudes on digital privacy and security and the steps they are taking to protect themselves from potential threats. The report found that over the past year, consumers have seen a significant increase in text messaging-based scams, especially among younger American consumers aged 18 to 29. In addition to revealing disparities based on age, this year’s data showed stark inequities regarding the groups most vulnerable to digital scams. For example, while the percentage of Americans who reported losing money from a digital scam remains the same as last year, at 1 in 10, this year’s report revealed for the first time that, among those who had encountered a scam attempt, people with the lowest household incomes were three times as likely to report financial losses due to scams as people in the highest-income households (29 percent compared to 10 percent). The report also found continued racial disparity in financial losses related to scams: 37 percent of Black Americans who encountered a scam lost money, compared to only 15 percent of white Americans. These figures were similar to those reported last year. This echoes similar findings made by other organizations in the recent past, including the Federal Trade Commission…”



Tuesday, October 07, 2025

It’s not just a lawyer problem…

https://www.pcmag.com/news/deloitte-refunds-portion-of-440k-report-over-ai-hallucinations

Deloitte Refunds Portion of $440K Report Over AI Hallucinations

Deloitte recently signed a deal with Anthropic to give all of its employees access to the Claude chatbot and dramatically expand the use of AI across its business, but the consulting firm might want to keep a close eye on what Claude is churning out.

The Australian government recently ordered Deloitte to refund a portion of the $440,000 it was paid for a compliance report after discovering several errors, most of which were phony citations, The Guardian reports.





This is how the Terminator actually works.

https://thehackernews.com/2025/10/new-research-ai-is-already-1-data.html

New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

For years, security leaders have treated artificial intelligence as an "emerging" technology, something to keep an eye on but not yet mission-critical. A new Enterprise AI and SaaS Data Security Report by AI & Browser Security company LayerX proves just how outdated that mindset has become. Far from a future concern, AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.



Monday, October 06, 2025

Oh, the horror!

https://www.techspot.com/news/109738-following-cyberattack-japan-days-away-running-out-country.html

Following cyberattack, Japan is days away from running out of country's favorite beer

Country's largest brewery suffers ransomware attack





Perspective.

https://futurism.com/artificial-intelligence/yale-study-ai-job-impact

New Yale Study Finds AI Has Had Essentially Zero Impact on Jobs

Ever since OpenAI’s ChatGPT was introduced in November 2022, experts and executives have been predicting that it and other AI models will eliminate untold jobs — forecasts that seem, at first glance, to have been borne out by the plethora of tech sector layoffs in the wake of its debut.

But a new study from Yale University found quite the opposite in the United States, which should give anxious workers some relief, as it goes against the hyped-up prognostications of many tech CEOs.



Sunday, October 05, 2025

Are we headed in this direction?

https://scindeks.ceon.rs/article.aspx?artid=0353-90082566369B&lang=en

Artificial intelligence as an instrument of political surveillance in post-conflict societies

This paper explores the profound implications of the application of artificial intelligence (AI) as an instrument of political surveillance in post-conflict societies, with a special focus on the cases of China, Israel, and the countries of the former SFRY, including the sensitive context of Kosovo and Metohija. The analysis adopts a multidisciplinary approach that combines security studies, political theory, digital ethics, and sociological perspectives to understand how AI transforms mechanisms of control, monitoring, and political decision-making. The paper argues that in post-conflict societies, AI is not merely a technological tool but a mechanism for restructuring state sovereignty, narrative control, and institutional dominance over political opponents. Special attention is given to the phenomena of digital authoritarianism, biometric surveillance expansion, algorithmic discrimination, and geopolitical use of data in the context of either stabilization or repression. Through a comparative analysis of China and Israel as leading exporters of AI-based surveillance systems and their diffusion into other parts of the world, including the Balkans, the paper highlights risks to human rights, democracy, and social cohesion. The paper concludes with recommendations for the development of ethical frameworks, regulatory policies, and international standards to curb the misuse of AI in fragile democracies and post-conflict zones.





Perspective.

https://moderndiplomacy.eu/2025/10/05/from-swords-to-algorithms-reimagining-sun-tzu-in-the-age-of-ai/

From Swords to Algorithms: Reimagining Sun Tzu in the Age of AI

In Sun Tzu’s time, victory depended on terrain: rivers, mountains, and fortresses. In the era of AI, the decisive terrain is digital, and the decisive resource of modern war is data. Whoever controls data flows, network infrastructure, and algorithmic dominance sets the stage of the war. The People’s Liberation Army (PLA) has proclaimed this shift explicitly in its doctrine of intelligent warfare, which centers on integrating AI into every facet of military operations, from logistics to autonomous weapons. For China, superiority in algorithms matters more than superiority in weaponry.

The U.S. reflects the same logic in Joint All-Domain Command and Control (JADC2), which aims to fuse information from land, sea, air, space, and cyber into a single AI-assisted system. JADC2 seeks to compress the OODA loop (observe, orient, decide, act) until decisions outpace adversaries’ ability to react.

Sun Tzu’s concept of controlling terrain now means guaranteeing data integrity, cyber dominance, and information superiority. Algorithms shape supply lines just as rivers once did, and algorithms determine who sees, decides, and acts first.