Friday, December 12, 2025

Perspective.

https://www.bespacific.com/artificial-intelligence-and-the-future-of-work/

Artificial Intelligence and the Future of Work

National Academies of Sciences, Engineering, and Medicine. 2025.  Artificial Intelligence and the Future of Work. Washington, DC: The National Academies Press. Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers. Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.  Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments.





Perhaps not so smart after all.

https://www.schneier.com/blog/archives/2025/12/ais-exploiting-smart-contracts.html

AIs Exploiting Smart Contracts

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.
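To make the setup concrete, here is a minimal Python sketch of how a SCONE-bench-style evaluation loop might be organized: run an agent only on contracts deployed after its knowledge cutoff, then weigh extracted value against API spend. All names and figures below are hypothetical stand-ins, not the benchmark's actual interface; the real evaluation verifies exploits in simulation against forked chain state.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Contract:
    address: str              # on-chain address of the target contract
    deployed: str             # ISO deployment date, used for cutoff filtering
    exploit_value_usd: float  # value extractable if an exploit succeeds

def evaluate(agent: Callable[[Contract], Optional[str]],
             contracts: list[Contract],
             knowledge_cutoff: str,
             cost_per_attempt_usd: float) -> dict:
    """Run the agent on every contract deployed after its knowledge cutoff
    and tally extracted value against API spend."""
    extracted = spent = 0.0
    successes = 0
    for c in contracts:
        if c.deployed <= knowledge_cutoff:
            continue  # skip contracts the model may have seen during training
        spent += cost_per_attempt_usd
        exploit = agent(c)        # agent returns an exploit transaction, or None
        if exploit is not None:   # in the real setup, verified in simulation
            successes += 1
            extracted += c.exploit_value_usd
    return {"successes": successes, "extracted_usd": extracted,
            "api_cost_usd": spent, "profitable": extracted > spent}

# Toy usage: an "agent" that only cracks one known-vulnerable address.
contracts = [Contract("0xaaa...", "2025-07-01", 2500.0),
             Contract("0xbbb...", "2025-08-15", 1194.0)]
agent = lambda c: "exploit-tx" if c.address == "0xaaa..." else None
print(evaluate(agent, contracts, "2025-06-01", 1.25))

The cutoff filter is the key design choice: it separates genuine vulnerability discovery from regurgitating exploits the model saw in training, which is what lets the dollar figures stand as a lower bound on real-world capability.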





How to violate Trump’s Executive Order…

https://www.transformernews.ai/p/new-york-governor-hochul-raise-act-sb-53

New York’s governor is trying to turn the RAISE Act into an SB 53 copycat

New York Governor Kathy Hochul is proposing a dramatic rewrite of the RAISE Act, the AI transparency and safety bill that recently passed the state legislature, according to two sources who reviewed the governor’s redlines on the bill.

The governor’s proposal would strike the RAISE Act in its entirety and replace it with verbatim language from California’s recently enacted law, SB 53, with minimal changes. SB 53 is generally viewed as a lighter-touch approach. One source who spoke with Transformer on the condition of anonymity said the proposal would effectively make SB 53, a law that “was always meant to be a floor” for AI regulation, “suddenly become the ceiling.”



Thursday, December 11, 2025

Push-back.

https://pogowasright.org/announcement-eff-launches-age-verification-hub-as-resource-against-misguided-laws/

ANNOUNCEMENT: EFF Launches Age Verification Hub as Resource Against Misguided Laws

EFF Also Will Host a Reddit AMA and a Livestreamed Panel Discussion

SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back.

To mark the hub’s launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws.

“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children’s safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.”

Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids.

It’s not just in the U.S. Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account.

We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many.

Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.

To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law.

EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days.

EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age.

For the age verification resource hub: https://www.eff.org/age

For the Reddit AMA: https://www.reddit.com/r/privacy/

For the Jan. 15 livestream: https://www.eff.org/livestream-age

For this release: https://www.eff.org/press/releases/eff-launches-age-verification-hub-resource-against-misguided-laws
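The release's point about data linkage can be made concrete in code. Below is a hypothetical sketch (invented names, not any vendor's real API) of a naive ID-based age gate: it answers the yes/no age question, but as a side effect writes a durable record tying a government identity to the service, and hashing the identifier does not anonymize it.

import hashlib
from datetime import datetime, timezone

def verify_age_with_id(id_number: str, birth_year: int,
                       service: str, log: list) -> bool:
    """Naive age gate: answers the age question, but as a side effect
    creates a durable record tying a government identity to this service."""
    of_age = datetime.now(timezone.utc).year - birth_year >= 18
    # Hashing is not anonymization: the same ID always hashes the same way,
    # so the log entry remains linkable to the person across leaks and sites.
    log.append({"id_hash": hashlib.sha256(id_number.encode()).hexdigest(),
                "service": service,
                "when": datetime.now(timezone.utc).isoformat()})
    return of_age

access_log: list = []
print(verify_age_with_id("A1234567", 1990, "example-forum", access_log))
print(access_log[0]["id_hash"][:16], "...")  # durable, linkable record

Once a log like this exists anywhere in the pipeline, a breach or subpoena exposes exactly the identity-to-activity mapping the release warns about.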






Tools & Techniques.

https://www.bespacific.com/h2o-casebook-collection/

H2O Casebook Collection

Search inside H2O’s collection of 452 casebooks, 10,765 legal documents, and 161 authors, or view our featured casebooks. “H2O is a free platform for making, sharing, and remixing open-licensed casebooks and other course materials. It is developed and maintained by the Library Innovation Lab at the Harvard Law School Library. With H2O, educators can drastically reduce textbook costs for their students, gain control over their course materials, and collaborate toward new approaches to law school curriculum.”



Wednesday, December 10, 2025

Security? Imagine a mass event at rush hour… (What other “security features” are hidden in our cars?)

https://www.theregister.com/2025/12/09/porsche_bricked_russia/

Porsche panic in Russia as pricey status symbols forget how to car

Hundreds of Porsches in Russia were rendered immobile last week, raising speculation of a hack, but the German carmaker tells The Register that its vehicles are secure.

According to reports, local dealership chain Rolf traced the problem to a loss of satellite connectivity to their Vehicle Tracking Systems (VTS). This meant the systems thought a theft attempt was in progress, triggering the vehicle's engine immobilizer.

Porsche HQ was unable to help or diagnose the nature of the problem. It's understood that systems like VTS are operated by local Porsche subsidiaries or dealer networks.

But following Russia's invasion of Ukraine and the imposition of sanctions, Porsche no longer exports to the country or provides after-sales service.
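If the reports are accurate, the failure mode is a fail-closed design: the tracking system treats lost connectivity as evidence of theft. A minimal Python sketch with hypothetical names (this is not Porsche's actual VTS logic) contrasts that policy with a fail-open one:

from enum import Enum

class Link(Enum):
    OK = "ok"      # tracker has satellite fix and backend connectivity
    LOST = "lost"  # no connectivity: outage, jamming, or actual theft?

def fail_closed(link: Link) -> bool:
    """Reported behavior: any loss of connectivity is treated as theft,
    so a regional outage immobilizes every affected vehicle at once."""
    return link is Link.LOST

def fail_open(link: Link, theft_confirmed: bool) -> bool:
    """Safer policy: immobilize only on a positive theft signal; lost
    connectivity degrades tracking but leaves the car drivable."""
    return theft_confirmed

# A mass connectivity outage under each policy:
print(fail_closed(Link.LOST))        # True  -> car bricked
print(fail_open(Link.LOST, False))   # False -> car still drives

The fail-closed version makes connectivity itself a single point of failure for the whole fleet, which is exactly the mass rush-hour scenario worried about above.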





Perspective.

https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/

Teens, Social Media and AI Chatbots 2025

Roughly 1 in 5 U.S. teens say they are on TikTok and YouTube almost constantly. At the same time, 64% of teens say they use chatbots, including about 3 in 10 who do so daily.



Tuesday, December 09, 2025

Lawyers on the endangered species list?

https://www.bespacific.com/ai-jurisprudence-toward-automated-justice/

AI Jurisprudence: Toward Automated Justice

Datzov, Nikola, AI Jurisprudence: Toward Automated Justice (September 9, 2024 – Revised December 4, 2025). Available at SSRN: https://ssrn.com/abstract=5178780 “The U.S. judiciary’s evolving role and digitization into a “modern” court system over the past several decades has brought it to a fundamentally altering moment: the adoption of automated justice. Recent developments in artificial intelligence (AI) have presented unprecedented capabilities for legal writing, legal analysis, and legal decision-making to be performed by automated technologies. The “AI Revolution,” among other things, has led to the first adoptions of automated adjudicators and judicial “employees” in numerous courts around the world. While the rest of the world has been grappling with the complex questions relating to AI’s arrival at the courthouse steps, surprisingly, courts in the United States have largely ignored their inevitable future until very recently. This unfortunate oversight has left the use of AI technology in judicial decision-making at the hands of individual judges—many of whom lack a sound understanding of the technology—without any formal guidance on its capabilities, limits, and risks.

This article is one of the first to closely examine several important concepts at the heart of U.S. courts’ path toward automated justice and comprehensively connect interdisciplinary scholarship from several diverse areas of law in the context of modern AI and judicial decision-making. Thus, it offers several significant contributions so far absent from existing literature regarding how the fundamental concept of justice will be preserved in U.S. courts in light of the AI Revolution. The article first tells the overlooked story of the courts’ unwitting transition toward an AI-ready judiciary and provides a descriptive account of the evolving capabilities of AI to perform judicial work. To help frame and structure future discussions of AI’s role in performing such judicial functions, the article introduces a novel taxonomy that categorizes the work of judges into different AI tiers and levels, namely AI-Assisted, AI-Led, and AI-Automated Judges. Such a taxonomy is critical because a more granular approach is necessary to determine the metes and bounds of where AI can replace certain judicial responsibilities for certain types of cases. It also examines whether any barriers exist to incorporating AI into the judiciary, including the constitutionality of delegating judicial work to modern AI, which to date has not been meaningfully considered in existing literature. Additionally, the article tackles the complex normative questions surrounding automated judicial decision-making (at different levels) through the lens of historical legal theory and further divides appropriate AI justiciability into the proposed taxonomy. Finally, it explores the complicated question of who can tell judges how to use AI in their judicial work and advocates for a desperately needed judge-led framework of “guided discretion” to transition U.S. courts into the age of AI based on seven fundamental principles.”





Would this 25% be a tariff, adding to the cost of the chips, or a tax?

https://www.cnbc.com/2025/12/08/trump-nvidia-h200-sales-china.html

Trump greenlights Nvidia H200 AI chip sales to China if U.S. gets 25% cut, says Xi responded positively

President Donald Trump on Monday said Nvidia will be allowed to ship its H200 artificial intelligence chips to “approved customers” in China and elsewhere, on the condition that the U.S. gets a 25% cut.





Surveillance is only justified against second-class citizens?

https://www.npr.org/2025/12/08/nx-s1-5631826/iceblock-app-lawsuit-trump-bondi

ICEBlock app sues Trump administration for censorship and 'unlawful threats'

The developer of ICEBlock, an iPhone app that anonymously tracks the presence of Immigration and Customs Enforcement agents, has sued the Trump administration for free speech violations after Apple removed the service from its app store under demands from the White House.

The suit, filed on Monday in federal court in Washington, asks a judge to declare that the administration violated the First Amendment when it threatened to criminally prosecute the app's developer and pressured Apple to make the app unavailable for download, which the tech company did in October.



Tools & Techniques. Stacking for fun and profit?

https://pogowasright.org/privacy-concerns-raised-as-grok-ai-found-to-be-a-stalkers-best-friend/

Privacy concerns raised as Grok AI found to be a stalker’s best friend

Graham Cluley writes:

Grok, the AI chatbot developed by Elon Musk’s xAI, has been found to exhibit more alarming behaviour – this time revealing the home addresses of ordinary people upon request.
[…]
Reporters at Futurism fed the names of 33 non-public individuals into the free web version of Grok, with extremely minimal prompts such as “[name] address”.
According to their investigation, ten of Grok’s responses returned accurate, current home addresses.
A further seven of Grok’s responses produced out-of-date but previously correct addresses, and four returned workplace addresses.
In addition, Grok would frequently volunteer unrequested information such as phone numbers, email addresses, employment details, and even the names and addresses of family members, including children.
Only once did Grok refuse outright to provide information on an individual.

Read more at Bitdefender.
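The excerpt's figures are easy to tabulate. A quick back-of-the-envelope tally in Python (counts taken directly from the quoted investigation) shows Grok returned some address, current or not, for roughly two thirds of the 33 probes:

# Tally of the Futurism probe results quoted above (33 names, free web Grok).
# Counts come from the excerpt; the percentages are derived arithmetic.
results = {"current home address": 10,
           "outdated home address": 7,
           "workplace address": 4,
           "outright refusal": 1}
total = 33
for outcome, n in results.items():
    print(f"{outcome}: {n}/{total} = {n/total:.0%}")
addresses = sum(n for k, n in results.items() if "address" in k)
print(f"some address returned: {addresses}/{total} = {addresses/total:.0%}")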


Monday, December 08, 2025

The new shout of “fire in a crowded theater?”

https://thedailyeconomy.org/article/hashtag-handcuffs-the-global-rise-of-online-speech-policing/

Hashtag Handcuffs: The Global Rise of Online Speech Policing

In November, South Korean President Lee Jae-myung launched a crackdown on so-called hate speech online, claiming that such speech “crosses the boundary of freedom of expression.” Punishments can include fines and up to seven years in prison. 

Unfortunately, South Korea’s not alone in its push to police what ordinary people can say on social media. Other countries that have recently passed laws to curtail citizens’ speech include Belarus, China, Turkey, Russia, Poland, Thailand, Brazil, Syria, and India. Like South Korea, these countries punish such speech harshly: in Turkey, citizens can face imprisonment of up to three years for a retweet, and in Poland prison sentences can run up to five years for an online insult.

Even countries that have historically respected freedom of speech and individual rights are backsliding: Germany recently cracked down on hate speech online, France has fined citizens for insulting its leaders, and the United Kingdom—once a bastion of Enlightenment ideals—now arrests 30 citizens per day for making offensive posts or comments online. In 2024, British subject Jordan Plain was sentenced to eight months in prison for filming himself making racist gestures and comments.

Even the United States is starting to backslide. When Larry Bushart posted a meme about a Charlie Kirk vigil in Perry County, Tennessee, local law enforcement arrested him. He spent 37 days in jail. 

As the Foundation for Individual Rights and Expression (FIRE)’s Matthew Harwood argues, we are entering a “global free speech recession.”



Sunday, December 07, 2025

All the news, and then some…

https://journals.4science.ge/index.php/GS/article/view/4331

Invisible Editors: Impact of AI on Media Content Quality and Trust

AI technologies such as machine learning, natural language processing, and automated journalism are rapidly transforming how media content is produced, distributed, and consumed. These tools promise greater efficiency, for example, automated news writing and personalized content recommendations, and enable real-time delivery of information. At the same time, their integration into media workflows has intensified concerns about the circulation of biased, low-quality, irrelevant information and disinformation. Because AI systems learn from existing data, they can reproduce and even amplify the biases embedded in that data, while accelerating the spread of disinformation and undermining trust in media institutions.

This study examines how AI becomes an invisible gatekeeper, contributes to biased and irrelevant media content, and explores the consequences for public trust and democratic discourse. Using a mixed-methods design, including content analysis of AI-generated and AI-curated media, an online survey, and secondary data, the research shows that media practitioners and audiences recognize both the transformative potential of AI and its ethical risks. A strong majority of survey participants perceive AI’s impact on media as significant or very significant and associate AI algorithms with the spread of biased or low-quality information across news and social platforms. Respondents also express substantial concern about the opacity and limited accountability of AI systems in shaping what information people see.

The findings point to an urgent need for strategies that reduce AI-induced bias and improve information quality, such as enhancing algorithmic transparency, diversifying training data, and developing clear regulatory and ethical frameworks. Drawing on media and communication theories, this article offers a critical analysis of AI’s role in contemporary media and outlines pathways for more responsible and accountable use of AI in the information ecosystem.



Saturday, December 06, 2025

Unreasonable?

https://pogowasright.org/privacy-s-d-cal-employee-did-not-waive-privacy-right-in-personal-email-data-on-company-provided-laptop-dec-5-2025/

PRIVACY—S.D. Cal.: Employee did not waive privacy right in personal email data on company-provided laptop (Dec 5, 2025)

Kathleen Kapusta, J.D. writes:

The company’s request to review data spanning more than 15 years, nearly 10 times the length of her employment with the company, was “patently unreasonable and overbroad.”
A former analyst for a security operations provider who added her personal email account to her employer-provided laptop did not waive her constitutional right to privacy in her personal email data, a federal district court in California ruled, granting in part her motion for a protective order. While the company has a legitimate interest in obtaining relevant discovery in the employee’s lawsuit alleging race discrimination, retaliation, and wrongful discharge, its request for an unrestricted review of the data is “profoundly serious in its scope and potential impact,” the court stated, noting that there are less intrusive methods for obtaining the relevant information (Lim v. Expel, Inc., No. 3:24-cv-02284-W-AHG (S.D. Cal. Dec. 1, 2025)).
Personal email account. The employee worked as a Senior Governance Risk Compliance and Privacy Analyst for Expel, Inc., for approximately 18 months. During that time, she added her personal Gmail account to the mail account on her work-issued laptop. Company policy allowed employees to use company property “for incidental personal reasons” with certain restrictions.
The employee believed her personal email account was “password-protected and separate from any work platform.” Her personal email data spanned more than 15 years and included sensitive, private information such as attorney-client communications, medical and financial records, and home security images including images of her minor children.
When the company terminated the employee, it revoked her access to her laptop. It also denied her request to retrieve personal files, telling her that her laptop would be “wiped.” According to the company’s general counsel, it was standard practice to wipe a laptop for reuse but because the employee had made allegations of retaliation, Expel retained her laptop.

Read more at VitalLaw.