Friday, August 01, 2025

Perspective.

https://www.psychologytoday.com/us/blog/the-digital-self/202507/the-vapid-brilliance-of-artificial-intelligence

The Vapid Brilliance of Artificial Intelligence

The algorithm doesn’t lie; it just doesn’t care.

A new study from Princeton and Berkeley gives this timely dynamic a name that might be as provocative as the research concept itself: machine bullsh*t. Drawing on Harry Frankfurt’s classic definition, the researchers analyzed 2,400 real-world prompts across 100 artificial intelligence (AI) assistants, spanning political, medical, legal, and customer-facing contexts. What they found wasn’t malicious fabrication or factual error. Instead, the large language models (LLMs) produced persuasive language without regard for truth. The models aren’t lying, and they aren’t even hallucinating; they simply produce a kind of engineered emptiness.

For me, this isn’t an anomaly; it’s confirmation of a deeper cognitive inversion, what I’ve called anti-intelligence: LLMs mimic the structure of thought through statistical coherence, yet the result is, in essence, antithetical to human thought.



Thursday, July 31, 2025

Perspective.

https://www.bespacific.com/artificial-intelligence-and-the-law-a-discussion-paper/

Artificial Intelligence and the Law: a discussion paper

UK. Law Commission: “The paper aims to raise awareness of legal issues regarding AI, prompting wider discussion of the topic, and to act as a step towards identifying those areas most in need of law reform.” July 31, 2025. “…With the rapid development and improved performance of AI has come increased investment and wider and more frequent applications of it. AI is expected to deliver social and economic benefits, leading to increased productivity, boosting economic growth and output, and may lead to innovations that can save and improve lives, such as the development of new cancer drugs or new medical treatments. Taking advantage of those opportunities is a focus for Government, as set out in its AI Opportunities Action Plan, published in January 2025. In 2025, Government also reached agreements with leading AI developers Anthropic, Google, and OpenAI to take advantage of opportunities offered by AI and explore increased investment in and use of AI. However, as with other technological developments, AI’s potential to deliver benefits comes with risks that it will cause harm. AI has been used to perpetrate fraud, cause harassment, assist in cyber hacks, spread disinformation that harms democratic processes, and can create “deepfake” images of people as a form of abuse or to enable identity theft, among other examples. There are also concerns that increased use of AI could cause harm by way of social upheaval, that AI will replace existing workforces, at scale, in a wide range of industries, from manual to highly skilled. Further concerns exist about the environmental impact of technology that is using an increasingly large quantity of energy and water…”





Latest target? AI!

https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls

IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications, 97% Of Which Reported Lacking Proper AI Access Controls

IBM today released its Cost of a Data Breach Report, which revealed AI adoption is greatly outpacing AI security and governance. While organizations experiencing an AI-related breach remain a small share of the researched population, this is the first time security, governance, and access controls for AI have been studied in this report, which suggests AI is already an easy, high-value target.

  • 13% of organizations reported breaches of AI models or applications, while 8% of organizations reported not knowing if they had been compromised in this way.

  • Of those compromised, 97% report not having AI access controls in place.

  • As a result, 60% of the AI-related security incidents led to compromised data and 31% led to operational disruption.



Wednesday, July 30, 2025

Where is this headed? Ratings for each video?

https://www.reuters.com/legal/litigation/australia-widens-teen-social-media-ban-youtube-scraps-exemption-2025-07-29/

Australia widens teen social media ban to YouTube, scraps exemption

Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.

The decision came after the internet regulator urged the government last month to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site, the worst showing for a social media platform.



(Related)

https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/

YouTube rolls out age-estimation tech to identify US teens and apply additional protections

YouTube on Tuesday announced it’s beginning to roll out age-estimation technology in the U.S. to identify teen users in order to provide a more age-appropriate experience. The company says it will use a variety of signals to determine a user’s likely age, regardless of the birthday the user entered when signing up for an account.

When YouTube identifies a user as a teen, it introduces new protections and experiences, including disabling personalized advertising, limiting repetitive viewing of certain types of content, and enabling digital well-being tools such as screen time and bedtime reminders, among others.





Tools & Techniques. Probably not the answer we need…

https://openai.com/index/chatgpt-study-mode/

Introducing study mode

Today we’re introducing study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged-in users on the Free, Plus, Pro, and Team plans, with availability in ChatGPT Edu coming in the next few weeks.

ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?

We’ve built study mode to help answer this question. When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something.



Tuesday, July 29, 2025

Don’t like that law? There’s an App for that!

https://thenextweb.com/news/proton-vpn-uk-top-app-age-verification

Proton VPN rises to top UK app charts as porn age checks kick in

Proton VPN has become the UK’s most downloaded free app, as Britons rush to bypass a new law requiring users to verify their age before accessing websites hosting adult content.

Proton VPN reported a staggering 1,400% surge in UK sign-ups almost immediately after the Online Safety Act came into effect. It is now Britain’s most downloaded free app, overtaking ChatGPT, according to Apple’s App Store rankings.





Welcome to the anti-lawyer…

https://futurism.com/chatgpt-legal-questions-court

If You've Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court

Imagine this scenario: you're worried you may have committed a crime, so you turn to a trusted advisor — OpenAI's blockbuster ChatGPT, say — to describe what you did and get its advice.

This isn't remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. Because people are amazingly stupid, it's almost certain that people have already asked the bot for advice about enormously consequential questions about, say, murder or drug charges.

According to OpenAI CEO Sam Altman, anyone who’s done so has made a massive error — because unlike a human lawyer with whom you enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.



Monday, July 28, 2025

At least SciFi has considered these issues.

https://www.proquest.com/openview/0495c1e86b831c212e738b45dd5f6023/1?pq-origsite=gscholar&cbl=2036059

AI ACT AND THE ECHO OF ASIMOV'S LAWS OF ROBOTICS. WHEN THE LACK OF LEGAL SOURCES PUSHED THE EU TOWARDS SCIENCE FICTION

We are living in a time of rapid technological advancement, particularly in the field of AI, and more specifically, Generative AI (GAI). As GAI models increasingly permeate everyday life, the urgent need for effective regulation has become apparent. This paper explores how the EU, in its effort to fill the legislative vacuum surrounding AI, drew inspiration from unconventional sources, including science fiction literature. Specifically, it examines the extent to which Isaac Asimov’s Three Laws of Robotics, though fictional, have influenced the structure and ethical principles of the EU’s AI Act. The primary objective of this study is to analyze the resonance between Asimov’s fictional ethical framework and the normative architecture of the AI Act. To achieve this, we employ a qualitative legal research methodology, using comparative textual analysis of the AI Act alongside Asimov’s literary works and relevant policy documents. The paper is grounded in the theoretical perspectives of legal pragmatism and science and technology studies, focusing on how imagined futures can shape real-world regulatory choices. Our findings suggest that the AI Act reflects key elements of Asimov’s principles, especially the emphasis on human safety, ethical use, and transparency. This highlights an instance where speculative fiction has provided a conceptual foundation for actual legislation. The paper concludes by advocating for adaptable, ethics-based regulatory approaches that can evolve alongside AI technologies, reinforcing the idea that flexible legal structures are essential in responding to the dynamic nature of AI.