Friday, August 01, 2025

Perspective.

https://www.psychologytoday.com/us/blog/the-digital-self/202507/the-vapid-brilliance-of-artificial-intelligence

The Vapid Brilliance of Artificial Intelligence

The algorithm doesn’t lie; it just doesn’t care.

A new study from Princeton and Berkeley gives this timely dynamic a name that may be as provocative as the research itself: machine bullsh*t. Drawing on Harry Frankfurt’s classic definition, the researchers analyzed 2,400 real-world prompts across 100 artificial intelligence (AI) assistants, spanning political, medical, legal, and customer-facing contexts. What they found wasn’t malicious fabrication or factual error. Rather, the large language models (LLMs) produced persuasive language without regard for truth. They weren’t lying, not even hallucinating; they produced a kind of engineered emptiness.

For me, this isn’t an anomaly; it’s confirmation of a deeper cognitive inversion, what I’ve called anti-intelligence: LLMs mimic the structure of thought via statistical coherence, yet are, in essence, antithetical to human thought.



Thursday, July 31, 2025

Perspective.

https://www.bespacific.com/artificial-intelligence-and-the-law-a-discussion-paper/

Artificial Intelligence and the Law: a discussion paper

UK. Law Commission: “The paper aims to raise awareness of legal issues regarding AI, prompting wider discussion of the topic, and to act as a step towards identifying those areas most in need of law reform.” July 31, 2025. “… With the rapid development and improved performance of AI has come increased investment and wider and more frequent applications of it. AI is expected to deliver social and economic benefits, leading to increased productivity, boosting economic growth and output, and may lead to innovations that can save and improve lives, such as the development of new cancer drugs or new medical treatments. Taking advantage of those opportunities is a focus for Government, as set out in its AI Opportunities Action Plan, published in January 2025. In 2025, Government also reached agreements with leading AI developers Anthropic, Google, and OpenAI to take advantage of opportunities offered by AI and explore increased investment in and use of AI. However, as with other technological developments, AI’s potential to deliver benefits comes with risks that it will cause harm. AI has been used to perpetrate fraud, cause harassment, assist in cyber hacks, and spread disinformation that harms democratic processes, and it can create “deepfake” images of people as a form of abuse or to enable identity theft, among other examples. There are also concerns that increased use of AI could cause harm by way of social upheaval: that AI will replace existing workforces, at scale, in a wide range of industries, from manual to highly skilled. Further concerns exist about the environmental impact of a technology that is using an increasingly large quantity of energy and water…”





Latest target? AI!

https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls

IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications, 97% Of Which Reported Lacking Proper AI Access Controls

IBM today released its Cost of a Data Breach Report, which revealed AI adoption is greatly outpacing AI security and governance. While the organizations experiencing an AI-related breach represent a small share of the researched population, this is the first time security, governance and access controls for AI have been studied in this report, which suggests AI is already an easy, high-value target.

  • 13% of organizations reported breaches of AI models or applications, while 8% of organizations reported not knowing if they had been compromised in this way.

  • Of those compromised, 97% report not having AI access controls in place.

  • As a result, 60% of the AI-related security incidents led to compromised data and 31% led to operational disruption.



Wednesday, July 30, 2025

Where is this headed? Ratings for each video?

https://www.reuters.com/legal/litigation/australia-widens-teen-social-media-ban-youtube-scraps-exemption-2025-07-29/

Australia widens teen social media ban to YouTube, scraps exemption

Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.

The decision came after the internet regulator urged the government last month to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site, the worst showing for a social media platform.



(Related)

https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/

YouTube rolls out age-estimation tech to identify US teens and apply additional protections

YouTube on Tuesday announced it’s beginning to roll out age-estimation technology in the U.S. to identify teen users in order to provide a more age-appropriate experience. The company says it will use a variety of signals to determine the users’ possible age, regardless of what the user entered as their birthday when they signed up for an account.

When YouTube identifies a user as a teen, it introduces new protections and experiences, which include disabling personalized advertising, safeguards that limit repetitive viewing of certain types of content, and enabling digital well-being tools such as screen time and bedtime reminders, among others.





Tools & Techniques. Probably not the answer we need…

https://openai.com/index/chatgpt-study-mode/

Introducing study mode

Today we’re introducing study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged-in users on the Free, Plus, Pro, and Team plans, with availability in ChatGPT Edu coming in the next few weeks.

ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?

We’ve built study mode to help answer this question. When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something.



Tuesday, July 29, 2025

Don’t like that law? There’s an App for that!

https://thenextweb.com/news/proton-vpn-uk-top-app-age-verification

Proton VPN rises to top UK app charts as porn age checks kick in

Proton VPN has become the UK’s most downloaded free app, as Britons rush to bypass a new law requiring users to verify their age before accessing websites hosting adult content.

Proton VPN reported a staggering 1,400% surge in UK sign-ups almost immediately after the Online Safety Act came into effect. It is now Britain’s most downloaded free app, overtaking ChatGPT, according to Apple’s App Store rankings.





Welcome to the anti-lawyer…

https://futurism.com/chatgpt-legal-questions-court

If You've Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court

Imagine this scenario: you're worried you may have committed a crime, so you turn to a trusted advisor — OpenAI's blockbuster ChatGPT, say — to describe what you did and get its advice.

This isn't remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. Because people are amazingly stupid, it's almost certain that people have already asked the bot for advice about enormously consequential questions about, say, murder or drug charges.

According to OpenAI CEO Sam Altman, anyone who's done so has made a massive error, because unlike a human lawyer with whom you enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.



Monday, July 28, 2025

At least SciFi has considered these issues.

https://www.proquest.com/openview/0495c1e86b831c212e738b45dd5f6023/1?pq-origsite=gscholar&cbl=2036059

AI ACT AND THE ECHO OF ASIMOV'S LAWS OF ROBOTICS. WHEN THE LACK OF LEGAL SOURCES PUSHED THE EU TOWARDS SCIENCE FICTION

We are living in a time of rapid technological advancement, particularly in the field of AI, and more specifically, Generative AI (GAI). As GAI models increasingly permeate everyday life, the urgent need for effective regulation has become apparent. This paper explores how the EU, in its effort to fill the legislative vacuum surrounding AI, drew inspiration from unconventional sources, including science fiction literature. Specifically, it examines the extent to which Isaac Asimov’s Three Laws of Robotics, though fictional, have influenced the structure and ethical principles of the EU’s AI Act. The primary objective of this study is to analyze the resonance between Asimov’s fictional ethical framework and the normative architecture of the AI Act. To achieve this, we employ a qualitative legal research methodology, using comparative textual analysis of the AI Act alongside Asimov’s literary works and relevant policy documents. The paper is grounded in the theoretical perspectives of legal pragmatism and science and technology studies, focusing on how imagined futures can shape real-world regulatory choices. Our findings suggest that the AI Act reflects key elements of Asimov’s principles, especially the emphasis on human safety, ethical use, and transparency. This highlights an instance where speculative fiction has provided a conceptual foundation for actual legislation. The paper concludes by advocating for adaptable, ethics-based regulatory approaches that can evolve alongside AI technologies, reinforcing the idea that flexible legal structures are essential in responding to the dynamic nature of AI.



Friday, July 25, 2025

Perspective.

https://news.bloomberglaw.com/litigation/kagan-says-she-was-impressed-by-ai-bot-claudes-legal-analysis

Kagan Says She Was Impressed by AI Bot Claude’s Legal Analysis

US Supreme Court Justice Elena Kagan found AI chatbot Claude to have conducted an excellent analysis of a complicated Constitutional dispute.

Kagan, speaking at the Ninth Circuit’s judicial conference in Monterey, Calif., said she has been following a blog by Supreme Court litigator Adam Unikowsky of Jenner & Block LLP, who has undertaken a number of experiments with AI and legal writing. In one blog last year, he asked the chatbot to analyze the high court’s divided opinions involving the Confrontation Clause, where Kagan had authored both majority and dissenting opinions.

“Claude, I thought, did an exceptional job of figuring out an extremely difficult Confrontation Clause issue, one which the court has divided on twice,” Kagan said.

Unikowsky this month published a post where he fed Anthropic PBC’s flagship Claude all of the briefs for a case he had argued last fall and asked the model to act as an attorney presenting oral argument to the high court. He concluded that the bot provided a better argument than he had.



Thursday, July 24, 2025

No doubt everyone in law enforcement will want one of these, attached to their own databases.

https://www.bespacific.com/new-ice-mobile-app-pushes-biometric-policing-onto-american-streets/

New ICE mobile app pushes biometric policing onto American streets

BiometricUpdate.com: “U.S. Immigration and Customs Enforcement (ICE) has quietly deployed a new surveillance tool in its Enforcement and Removal Operations (ERO) arsenal – a smartphone app known as Mobile Fortify. Designed for ICE field agents, the app enables real-time biometric identity verification using facial recognition or contactless fingerprints. Based on leaked emails reported by 404 Media, the introduction of Mobile Fortify marks a profound shift in ICE’s operational methodology, from traditional fingerprint-based stationary checks to mobile, on-the-go biometric profiling that echoes the type of border surveillance previously confined to airports and ports of entry. Mobile Fortify was built to integrate seamlessly with multiple Department of Homeland Security (DHS) biometric systems. Agents using ICE-issued mobile devices can now photograph a subject’s face or fingerprint, triggering a near-instant biometric match against data sources that include CBP’s Traveler Verification Service and DHS’s broader Automated Biometric Identification System (IDENT) database, which contains biometric records on over 270 million individuals. This level of portability and automation suggests a capability that is poised to extend biometric surveillance far beyond designated checkpoints and into neighborhoods, local transport hubs, and any environment in which ICE officers operate. Facial recognition, though notably less reliable than fingerprints, is nevertheless embedded in the app’s core functionality. A February 2025 DHS Inspector General audit had warned that reliance on facial recognition risked misidentification. ICE agents have been observed pointing phones at individuals in cars during protests and other domestic operations, although it remains unclear whether Mobile Fortify was active in those encounters.
The presence of a “training mode” within the app’s software, though, suggests that ICE envisions a spectrum of deployments, from casual identity checks to more deliberate urban biometric sweeps. Although ICE officials stress that biometric matching happens in real time, the underlying model appears to be automated. A mobile photo or print is captured, transmitted to a DHS server linked to identity repositories, and compared through algorithmic matching – most likely involving AI-enhanced pattern recognition.”
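The capture-transmit-compare loop described in the excerpt is, at its core, a nearest-neighbor search over biometric embeddings. The toy sketch below illustrates that generic pattern only; it is not ICE's or DHS's actual system, and the embeddings, threshold, and record labels are all invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_probe(probe, gallery, threshold=0.9):
    """Compare a probe embedding against (identity, embedding) pairs and
    return the best match scoring above the threshold, or None."""
    best_id, best_score = None, threshold
    for identity, emb in gallery:
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy gallery of pre-enrolled embeddings (a real repository holds millions).
gallery = [
    ("record-A", [0.9, 0.1, 0.2]),
    ("record-B", [0.1, 0.9, 0.3]),
]
probe = [0.88, 0.12, 0.21]  # embedding extracted from a captured photo
print(match_probe(probe, gallery))  # → record-A
```

The threshold is where the misidentification risk the DHS audit flagged lives: set it too low and unrelated faces "match"; set it too high and genuine matches are missed.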



(Related)

https://www.bespacific.com/deportation-data-project/

Deportation Data Project

Immigration and Customs Enforcement. ICE collects data on every person it encounters, arrests, detains, transports via flight, and deports. We post below data that ICE produced in response to several FOIA requests by multiple organizations. Crucially, in some data releases, there are linked identifiers across data types such as arrests and detainers, allowing merges that enable tracing immigrants’ pathways (anonymously) through the immigration enforcement pipeline. The identifiers are, unfortunately, different across releases, only enabling merging within a data release. See below for a description of each release. Our ICE codebook describes each data table and the fields within them.
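The within-release linkage the project describes amounts to an inner join on a shared anonymized identifier. A minimal sketch, assuming hypothetical field names (`anon_id`, `arrest_date`, `detainer_date`) and invented rows; the project's actual column names are in its ICE codebook:

```python
# Toy arrest and detainer tables sharing an anonymized identifier.
arrests = [
    {"anon_id": "x1", "arrest_date": "2024-03-01"},
    {"anon_id": "x2", "arrest_date": "2024-04-15"},
]
detainers = [
    {"anon_id": "x1", "detainer_date": "2024-03-03"},
]

def merge_on_id(left, right, key="anon_id"):
    """Inner join two lists of records on a shared identifier column."""
    index = {row[key]: row for row in right}
    return [
        {**l, **index[l[key]]}  # combine fields from both records
        for l in left
        if l[key] in index
    ]

print(merge_on_id(arrests, detainers))
# Only the x1 arrest has a matching detainer, so one merged record results.
```

Because the identifiers differ across releases, this kind of join only works within a single release, exactly as the project notes.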





Sounds like someone who does not understand technology. Of course it is ‘do-able’; it’s just expensive (and not even very expensive).

https://deadline.com/2025/07/trump-ai-action-plan-copyright-1236466617/

Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”

Donald Trump said that AI companies can’t be expected to pay for the use of copyrighted content in their systems, amid a fierce debate over the use of intellectual property in training models.





I don’t use social media. I could never get a visa…

https://www.eff.org/deeplinks/2025/07/you-shouldnt-have-make-your-social-media-public-get-visa

You Shouldn’t Have to Make Your Social Media Public to Get a Visa

The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.





Perspective.

https://www.zdnet.com/article/will-ai-think-like-humans-were-not-even-close-and-were-asking-the-wrong-question/

Will AI think like humans? We're not even close - and we're asking the wrong question

Artificial intelligence may have impressive inferencing powers, but don't count on it to have anything close to human reasoning powers anytime soon. The march to so-called artificial general intelligence (AGI), or AI capable of applying reasoning across changing tasks or environments in the same manner as humans, is still a long way off. Large reasoning models (LRMs), while not perfect, do offer a tentative step in that direction.

In other words, don't count on your meal-prep service robot to react appropriately to a kitchen fire or a pet jumping on the table and slurping up food.