Saturday, October 25, 2025

Is this better or worse?

https://blogs.lse.ac.uk/businessreview/2025/10/23/politics-more-than-race-or-gender-is-a-source-of-workplace-bias/

Politics is a source of workplace bias more than race or gender

The pattern we uncovered is clear. Workers are about 60 per cent more likely to be employed by an owner who shares their political affiliation than by one who doesn’t. This “assortative matching” (when people pair up with others who share similar characteristics) is more pronounced for political identity than for gender or race.
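
To make "60 per cent more likely" concrete, here is a quick back-of-the-envelope sketch. It assumes two equal-sized political affiliations and random matching as the 50/50 baseline; those assumptions are mine for illustration, not the study's.

```python
# Back-of-the-envelope reading of "60 per cent more likely", assuming
# two equal-sized political affiliations and random matching as the
# 50/50 baseline. Illustrative only; not numbers from the LSE study.
relative_likelihood = 1.6  # co-partisan match vs. cross-partisan match
copartisan_share = relative_likelihood / (relative_likelihood + 1)
print(f"{copartisan_share:.1%} of matches co-partisan, vs. 50% at random")
# -> 61.5% of matches co-partisan, vs. 50% at random
```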



Friday, October 24, 2025

Perspective.

https://www.bespacific.com/courts-adapt-to-the-challenges-of-generative-ai/

Courts Adapt to the Challenges of Generative AI

Via LLRX – AI in law and legal tech expert Nicole L. Black frames how AI is changing the way legal work gets done, and the effects aren’t limited to law offices. Other legal organizations, including the courts, are equally affected. As judicial offices around the country grapple with the how and why of secure AI adoption, new rules, policies, and processes are being implemented to address the ethical and practical issues presented.





Perspective.

https://pogowasright.org/state-attorneys-general-privacy-enforcement-trends-2020-2024/

State Attorneys General & Privacy: Enforcement Trends, 2020-2024

A new report by EPIC.org:

In October 2025, EPIC published State Attorneys General & Privacy: Enforcement Trends, 2020-2024. As part of EPIC’s ongoing work to support State AG privacy enforcement, EPIC created this report to serve as a reference tool for regulators seeking to protect our digital privacy and autonomy from corporate interests. It outlines, for example, where State AGs (including five U.S. territories and D.C.) have focused their enforcement efforts, how they collaborate, and what sources of legal authority they have invoked in their privacy-related cases and settlements.
The report examines State AG enforcement actions across six areas of privacy harms: Unwanted Calls & Texts, Data Breach, Data Privacy, Antitrust, Platform Accountability & Governance, and Algorithms & Automated Systems.
EPIC’s report catalogs over 220 cases and settlements, 35 letters, and 20 public investigations from January 2020 through December 2024, providing a detailed look at the breadth and impact of state-level privacy enforcement. The report’s five-year period provides a baseline understanding of how State AGs have used the flexible consumer protection authority and federal authority available to them. Equipped with more specific authority in the coming years, State AGs will build on their impressive body of work to combat ongoing consumer privacy harm in the digital age.
EPIC thanks the 56 State and Territorial Attorneys General for their ongoing efforts to protect Americans’ privacy rights. EPIC will continue tracking privacy-related enforcement actions from AGs and is happy to speak with any AGs about this report or privacy enforcement in general. Please feel free to reach out at stateagreport@epic.org.




Thursday, October 23, 2025

Imagine all the uses…

https://www.bespacific.com/the-surveillance-empire-that-tracked-world-leaders-a-vatican-enemy-and-maybe-you/

The Surveillance Empire That Tracked World Leaders, a Vatican Enemy, and Maybe You

Mother Jones: “…Operating from their base in Jakarta, where permissive export laws have allowed their surveillance business to flourish, First Wap’s European founders and executives have quietly built a phone-tracking empire, with a footprint extending from the Vatican to the Middle East to Silicon Valley. It calls its proprietary system Altamides, which it describes in promotional materials as “a unified platform to covertly locate the whereabouts of single or multiple suspects in real-time, to detect movement patterns, and to detect whether suspects are in close vicinity with each other.” Altamides leaves no trace on the phones it targets, unlike spyware such as Pegasus. Nor does it require a target to click on a malicious link or show any of the telltale signs (such as overheating or a short battery life) of remote monitoring…

Last year the investigative newsroom Lighthouse Reports obtained a secret archive, containing more than a million instances where Altamides was used to trace cell phones all over the world. This data trove, the majority of which spans 2007 to 2014, is one of the largest disclosures to date of the inner workings of the vast surveillance industry. It does not just list the phone numbers of people who were monitored; it offers, in many cases, precise maps of their movements, showing where they went and when. Over months of research, Lighthouse, Germany’s Paper Trail Media, Mother Jones, Reveal, and an international consortium of partners dug into these logs to understand who was being spied on and why. We identified surveillance targets in 100 countries and spoke to dozens of them. We obtained confidential documents and communications outlining how Altamides—an acronym for “Advanced Location Tracking and Mobile Information and Deception System”—was marketed and deployed. We also interviewed industry insiders and former employees of the company about its operations and clientele…”



Wednesday, October 22, 2025

And that’s just one company?

https://www.theguardian.com/business/2025/oct/22/jaguar-land-rover-hack-has-cost-uk-economy-19bn-most-costly-cyber-attack-britain

Jaguar Land Rover hack has cost UK economy £1.9bn, experts say

The hack of Jaguar Land Rover has cost the UK economy an estimated £1.9bn, potentially making it the most costly cyber-attack in British history, a cybersecurity body has said.

A report by the Cyber Monitoring Centre (CMC) said losses could be higher if there are unexpected delays in the carmaker’s return to the full production levels it had before the hack at the end of August.

JLR was forced to shut down systems across all of its factories and offices after realising the extent of the penetration. The carmaker, Britain’s biggest automotive employer, only managed a limited restart in early October and is not expected to return to full production until January.

As well as crippling JLR, the hack has affected as many as 5,000 organisations across Britain, given the wide extent of the carmaker’s complex supply chain. While JLR has been able to rely on its large financial buffers, smaller suppliers were immediately forced to lay off thousands of workers and contend with a painful pause in cashflow.





Is it wrong by chance or by design?

https://www.bespacific.com/largest-study-of-its-kind-shows-ai-assistants-misrepresent-news-content-45-of-the-time/

Largest Study of Its Kind Shows AI Assistants Misrepresent News Content 45% of the Time

BBC: “New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools.

Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:

  • 45% of all AI answers had at least one significant issue.

  • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.

  • 20% contained major accuracy issues, including hallucinated details and outdated information.

  • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.

  • A comparison between the BBC’s results from earlier this year and this study shows some improvement, but error levels remain high.”
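
The headline percentages are simple aggregates over the journalists’ per-response ratings. Here is a minimal sketch of how such a tally might be computed; the schema and field names are my own guesses, not the EBU’s actual evaluation instrument.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """One journalist's rating of a single AI answer (illustrative schema)."""
    assistant: str           # e.g. "Gemini" or "ChatGPT"
    significant_issue: bool  # any major problem of any kind
    sourcing_issue: bool     # missing, misleading, or wrong attribution
    accuracy_issue: bool     # hallucinated details or outdated information

def headline_rates(evals: list[Evaluation]) -> dict[str, float]:
    """Aggregate per-response flags into study-style percentages."""
    n = len(evals) or 1
    return {
        "significant": 100 * sum(e.significant_issue for e in evals) / n,
        "sourcing": 100 * sum(e.sourcing_issue for e in evals) / n,
        "accuracy": 100 * sum(e.accuracy_issue for e in evals) / n,
    }

def per_assistant(evals: list[Evaluation]) -> dict[str, float]:
    """Significant-issue rate by assistant (the Gemini-style comparison)."""
    out = {}
    for name in {e.assistant for e in evals}:
        subset = [e for e in evals if e.assistant == name]
        out[name] = 100 * sum(e.significant_issue for e in subset) / len(subset)
    return out
```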





The art of the deal? As part of protecting users from China, we hand their data to ICE?

https://www.forbes.com/sites/emilybaker-white/2025/10/21/tiktok-wont-say-if-its-giving-ice-your-data/

TikTok Won’t Say If It’s Giving ICE Your Data

Earlier this year, TikTok quietly changed its policies about when and how it would share data with governments.

As the company negotiated terms with the Trump Administration that would allow its app to continue operating in the U.S., it added language to its policies that covered data sharing not just with law enforcement, but also with “regulatory authorities, where relevant,” and weakened promises to inform users about government requests for their private data.



Tuesday, October 21, 2025

It will only get worse.

https://www.bespacific.com/the-great-scrape-the-clash-between-scraping-and-privacy-2/

The Great Scrape: The Clash Between Scraping and Privacy

Solove, Daniel J. and Hartzog, Woodrow, The Great Scrape: The Clash Between Scraping and Privacy (July 03, 2024). 113 California Law Review 1521 (2025), Available at SSRN: https://ssrn.com/abstract=4884485 or http://dx.doi.org/10.2139/ssrn.4884485

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping”—the automated extraction of large amounts of data from the internet. A great deal of scraped data contains people’s personal information. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archiving of records, and meaningful scientific research, scraping for AI can also be objectionable and even harmful to individuals and society. Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice.

In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles of privacy laws, including fairness, individual rights and control, transparency, consent, purpose specification and secondary use restrictions, data minimization, onward transfer, and data security. Scraping ignores the data protection laws built around these requirements. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation.
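
For readers who have never seen it, “scraping” is mechanically simple, which is part of the authors’ point. A minimal sketch using only the Python standard library; the URL is a placeholder, not a real target.

```python
# Fetch a public page and pull out its text and links: the basic move
# behind bulk scraping. Standard library only; the URL is a placeholder.
import urllib.request
from html.parser import HTMLParser

class LinkAndTextScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links: list[str] = []
        self.text: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # collect every hyperlink target on the page
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():  # collect every visible text fragment
            self.text.append(data.strip())

url = "https://example.com/public-profile"  # placeholder target
with urllib.request.urlopen(url) as resp:
    scraper = LinkAndTextScraper()
    scraper.feed(resp.read().decode("utf-8", errors="replace"))

print(len(scraper.links), "links,", len(scraper.text), "text fragments")
```

Everything collected this way was “publicly available,” which is exactly the tension the article targets: availability is not consent.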





Oh boy, another great time waster…

https://www.makeuseof.com/replaced-cable-plan-with-this-free-live-tv-site/

I replaced my cable plan with this free live-TV site (and it's legit)

TV Garden is an open-source project built to aggregate publicly available live TV streams from around the world. The platform sources its content from legitimate public broadcasters, YouTube live streams, and open IPTV directories. These are channels that are already freely accessible but scattered across the internet, much like the free internet TV channels you can watch online.
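
Open IPTV directories typically publish channel lists as plain-text M3U playlists, which is what makes this kind of aggregation easy. Here is a minimal sketch of parsing one into (channel, stream URL) pairs; the sample playlist is made up, and I am assuming M3U sources rather than describing TV Garden’s actual code.

```python
# Parse an M3U playlist into (channel name, stream URL) pairs.
# The sample is invented; real open IPTV directories serve similar files.
def parse_m3u(playlist: str) -> list[tuple[str, str]]:
    channels, name = [], None
    for line in playlist.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF"):
            # "#EXTINF:-1 attrs...,Channel Name" -- name follows last comma
            name = line.rsplit(",", 1)[-1]
        elif line and not line.startswith("#") and name:
            channels.append((name, line))
            name = None
    return channels

sample = """#EXTM3U
#EXTINF:-1 tvg-country="US",Example News 24
https://example.com/news24/index.m3u8
#EXTINF:-1 tvg-country="FR",Chaine Exemple
https://example.com/exemple/live.m3u8
"""
for channel, url in parse_m3u(sample):
    print(channel, "->", url)
```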



Monday, October 20, 2025

Worth a look.

https://www.bespacific.com/an-opinionated-guide-to-using-ai-right-now/

An Opinionated Guide to Using AI Right Now

Ethan Mollick: “Every few months I write an opinionated guide to how to use AI, but now I write it in a world where about 10% of humanity uses AI weekly. The vast majority of that use involves free AI tools, which is often fine… except when it isn’t. OpenAI recently released a breakdown of what people actually use ChatGPT for (way less casual chat than you’d think, way more information-seeking than you’d expect). This means I can finally give you advice based on real usage patterns instead of hunches. I annotated OpenAI’s chart with some suggestions about when to use free versus advanced models.

If the chart suggests that a free model is good enough for what you use AI for, pick your favorite and use it without worrying about anything else in the guide. You basically have nine or so choices, because there are only a handful of companies that make cutting-edge models. All of them offer some free access. The four most advanced AI systems are Claude from Anthropic, Google’s Gemini, OpenAI’s ChatGPT, and Grok by Elon Musk’s xAI. Then there are the open weights AI families, which are almost (but not quite) as good: Deepseek, Kimi, Z, and Qwen from China, and Mistral from France. Together, variations on these AI models take up the first 35 spots in almost any rating system of AI. Any other AI service you use that offers a cutting-edge AI, from Microsoft Copilot to Perplexity (both of which offer some free use), is powered by one or more of these nine AIs as its base.”





Addressing annoyances...

https://pogowasright.org/california-enacts-new-privacy-laws/

California Enacts New Privacy Laws

Lindsey Tonsager, Libbie Canter, Jayne Ponder, Jenna Zhang, Ariel Dukes, and Bryan Ramirez of Covington & Burling write:

Recently, California Governor Gavin Newsom signed into law several privacy and related proposals, including new laws governing browser opt-out preference signals, social media account deletion, data brokers, reproductive and health services, age signals for app stores, social media “black box warning” labels for minors, and companion chatbots. This blog summarizes the statutes’ key takeaways.
  • Opt-Out Preference Signals: The California Opt Me Out Act (AB 566) will require businesses that develop or maintain browsers to include functionality configurable by a consumer that enables the browser to send an opt-out preference signal. Additionally, a business that develops or maintains a browser must make clear to a consumer in public disclosures how the opt-out preference signal works and the intended effect of the opt-out preference signal. The law states that a business that maintains or develops a browser that includes the opt-out preference signal shall not be liable for a violation of the title by a business that receives the opt-out preference signal. AB 566 will take effect January 1, 2027, and provides the California Privacy Protection Agency (“CPPA”) rulemaking authority.
  • Social Media Account Deletion: AB 656 will require social media platforms that generate more than $100M per year in gross revenues to provide a “clear and conspicuous” button to complete an account deletion request. “Social media platform” is defined by reference to Section 22675 of the California code as a “public or semipublic internet-based service or application that has users in California” and where (1) a “substantial function” of the service or application is to connect users to interact socially with each other and (2) allows users to construct a public or semipublic profile, populate a list of users with whom the individual shares a social connection, and create or post content viewable by other users. If verification is needed for the account deletion request, it must be provided in a cost-effective and easy-to-use manner through a preestablished two-factor authentication, email, text, telephone, or message means.

Read more at Inside Privacy.
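
AB 566 does not prescribe a wire format, but the opt-out preference signal in widest use today is Global Privacy Control, which browsers send as a `Sec-GPC: 1` request header. Here is a minimal sketch of a server honoring it; the handler logic is my illustration, not language from the statute.

```python
# Honor a GPC-style opt-out preference signal ("Sec-GPC: 1") server-side.
# Illustrative only: AB 566 itself does not mandate this exact mechanism.
from http.server import BaseHTTPRequestHandler, HTTPServer

class GPCAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browsers sending the signal set the request header "Sec-GPC: 1".
        opted_out = self.headers.get("Sec-GPC") == "1"
        body = b"opt-out honored" if opted_out else b"no opt-out signal"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen locally; a real deployment would also persist the opt-out.
    HTTPServer(("localhost", 8000), GPCAwareHandler).serve_forever()
```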



Sunday, October 19, 2025

Could any evaluation of an attack happen before the attack?

https://www.tandfonline.com/doi/full/10.1080/15027570.2025.2569130

Robocop Reimagined: Harnessing the Power of AI for LOAC Compliance

This article is intended as a contribution to the growing literature on the potential benefits of military applications of AI for ensuring compliance with the Law of Armed Conflict (LOAC). Drawing on foundational notions from the philosophy of mind and legal philosophy, it proposes a secondary LOAC-compliance software, the “e-JAG,” to police the results offered by primary targeting software while remaining always under human control: a positive redundancy, in the sense of an additional guard rail strengthening the precautions in attack that militaries are legally obligated to implement.
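
The proposed architecture reduces to a pipeline: a primary system proposes, a secondary compliance layer vets, and a human retains final authority. A toy sketch of that “positive redundancy” follows, with every name, field, and check invented for illustration; the article itself is conceptual and specifies no code.

```python
# Toy sketch of the article's "positive redundancy": a secondary
# compliance check (the "e-JAG") polices a primary system's proposal,
# and a human still decides. All names and checks are invented here.
from dataclasses import dataclass

@dataclass
class Proposal:
    target_id: str
    military_advantage: float  # 0..1, estimated by the primary system
    civilian_harm: float       # 0..1, estimated by the primary system

def e_jag_review(p: Proposal) -> tuple[bool, str]:
    """Secondary LOAC check applied to the primary system's output."""
    if p.civilian_harm > p.military_advantage:
        return False, "fails proportionality check"
    return True, "no LOAC objection raised"

def decide(p: Proposal) -> bool:
    cleared, reason = e_jag_review(p)
    if not cleared:
        print(f"e-JAG blocked {p.target_id}: {reason}")
        return False
    # Redundancy, not replacement: a cleared proposal still needs a human.
    answer = input(f"Approve {p.target_id}? ({reason}) [y/N] ")
    return answer.strip().lower() == "y"
```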





Perhaps we are not ready for an automated legal system.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5585290

Law, Justice, and Artificial Intelligence

Judges, physicians, human resources managers, and other decision-makers often face a tension between adhering to rules and taking into account the specific circumstances of the case at hand. Increasingly, such decisions are supported by—and soon may even be made by—artificial intelligence, including large language models (LLMs). How LLMs resolve these tensions is therefore of paramount importance.

Specifically, little is known about how LLMs navigate the tension between applying legal rules and accounting for justice. This study compares the decisions of GPT-4o, Claude Sonnet 4, and Gemini 2.5 Flash with those of laypersons and legal professionals, including judges, across six vignette-based experiments comprising about 50,000 decisions.

We find that, unlike humans, LLMs do not balance law and equity: when instructed to follow the law, they largely ignore justice in both their decisions and reasoning; when instructed to decide based on justice, they disregard legal rules. Moreover, in contrast to humans, requiring reasons or providing precedents has little effect on their responses. Prompting LLMs to consider litigant sympathy, or asking them to predict judicial decisions rather than make them, somewhat reduces their formalism, but they remain far more rigid than humans.

Beyond their formalism, LLMs exhibit far less variability ("noise") than humans. While greater consistency is generally a virtue in decision-making, the article discusses its shortcomings as well. The study introduces a methodology for evaluating current and future LLMs where no single demonstrably correct answer exists.
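
The experimental design is easy to picture: the same vignette is posed under different instruction conditions and the resulting decisions are tallied across repeated samples. A rough sketch follows; `ask_model` is a stand-in for a real LLM API call, and the prompts and conditions are my paraphrase, not the authors’ materials.

```python
import random

# Instruction conditions, paraphrased from the study's description.
CONDITIONS = {
    "law": "Decide strictly according to the applicable legal rule.",
    "justice": "Decide based on what justice requires in this case.",
}

def ask_model(system_prompt: str, vignette: str) -> str:
    """Stand-in for a real LLM call (GPT-4o, Claude, Gemini); random here."""
    return random.choice(["plaintiff", "defendant"])

def run_experiment(vignettes: list[str], n_samples: int = 10) -> dict:
    """Tally decisions per condition; repeated sampling also measures noise."""
    tallies = {c: {"plaintiff": 0, "defendant": 0} for c in CONDITIONS}
    for cond, prompt in CONDITIONS.items():
        for vignette in vignettes:
            for _ in range(n_samples):
                tallies[cond][ask_model(prompt, vignette)] += 1
    return tallies

print(run_experiment(["A tenant breaks a lease to flee harassment."]))
```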





Prosecuting the Terminator?

https://www.pzhfars.ir/article_231761_en.html?lang=fa

Civil Liability for Robots and Artificial Intelligence: Legal Challenges and Solutions in the Age of New Technologies

New technologies, especially robots and artificial intelligence, have brought extensive transformations to social, economic, and industrial life. However, their rapid development has created numerous challenges in the field of civil liability that traditional legal systems cannot easily accommodate. The main question is how to explain and regulate, under current civil law, civil liability arising from damages or injuries attributed to robots and artificial intelligence systems. This research aims to investigate the legal challenges of civil liability for robots and artificial intelligence and to propose innovative legal solutions. The research method was descriptive-analytical and comparative, analyzing the topic through legal sources, international documents, and comparative studies. The findings show that ambiguities in determining the liable party, proving fault, and establishing the direct liability of robots are among the most important legal issues in this field and require specific rules tailored to the characteristics of intelligent technologies. The research's innovation lies in providing a comparative framework and proposing domestic solutions appropriate to technological developments and the country's legal system. Finally, to guarantee individual rights and protect the public interest, civil liability laws must be amended and updated, and legal and judicial institutions must do their utmost in this regard.





Searching for evidence of a specific crime, or anything that looks suspicious?

https://www.cnet.com/home/security/amazons-ring-cameras-push-deeper-into-police-and-government-surveillance/

Amazon's Ring Cameras Push Deeper Into Police and Government Surveillance

Less than two years after removing a feature that made it easier for law enforcement agencies to request footage from owners of Ring doorbells and other security products, Amazon has partnered with two companies that will help facilitate the same kinds of requests.

Two weeks after rolling out a new product line for 2025, Ring, owned by Amazon, announced a partnership with Flock Safety, as part of its expansion of the Community Requests feature in the Ring Neighbors app. Atlanta-based Flock is a police technology company that sells surveillance technology, including drones, license-plate reading systems and other tools. The announcement follows a partnership Ring entered into with Axon, previously Taser International, which also builds tools for police and military applications.