Thursday, July 17, 2025

Those who do not read science fiction are doomed to repeat it?

https://futurism.com/ai-models-flunking-three-laws-robotics

Leading AI Models Are Completely Flunking the Three Laws of Robotics

Last month, for instance, researchers at Anthropic found that top AI models from all major players in the space — including OpenAI, Google, Elon Musk's xAI, and Anthropic's own cutting-edge tech — happily resorted to blackmailing human users when threatened with being shut down.

In other words, that single research paper caught every leading AI catastrophically bombing all three Laws of Robotics: the first by harming a human via blackmail, the second by subverting human orders, and the third by protecting its own existence in violation of the first two laws.





Perspective.

https://sloanreview.mit.edu/article/stop-deploying-ai-start-designing-intelligence/

Stop Deploying AI. Start Designing Intelligence

Stephen Wolfram is a physicist-turned-entrepreneur whose pioneering work in cellular automata, computational irreducibility, and symbolic knowledge systems fundamentally reshaped our understanding of complexity. His theoretical breakthroughs led to successful commercial products, Wolfram Alpha and Wolfram Language. Despite his success, the broader business community has largely overlooked these foundational insights. As part of our ongoing “Philosophy Eats AI” exploration — the thesis that foundational philosophical clarity is essential to the future value of intelligent systems — we find that Wolfram’s fundamental insights about computation have distinctly actionable, if underappreciated, uses for leaders overwhelmed by AI capabilities but underwhelmed by AI returns.





A reasonable example?

https://www.aclu.org/about/privacy/archive/2025-07-31/privacy-statement

American Civil Liberties Union Privacy Statement



Wednesday, July 16, 2025

Tools & Techniques.

https://www.bespacific.com/handbook-of-the-law-ethics-and-policy-of-artificial-intelligence/

Handbook of the Law, Ethics and Policy of Artificial Intelligence

The Cambridge University Press & Assessment Handbook of the Law, Ethics and Policy of Artificial Intelligence (2025), edited by Nathalie Smuha of KU Leuven, is a comprehensive 600-page resource that brings together more than 30 leading scholars across law, ethics, philosophy, and AI policy [Open Access – PDF and HTML]. It’s structured into three parts:

  • AI, Ethics & Philosophy

  • AI, Law & Policy

  • AI in Sectoral Applications (healthcare, finance, education, law enforcement, military, etc.)



Tuesday, July 15, 2025

I wondered how they would do this. Now I wonder how the government can verify that it was done.

https://arstechnica.com/tech-policy/2025/07/reddit-starts-verifying-ages-of-uk-users-to-comply-with-child-safety-law/

Reddit’s UK users must now prove they’re 18 to view adult content

Reddit announced today that it has started verifying UK users' ages before letting them "view certain mature content" in order to comply with the country's Online Safety Act.

Reddit said that users "shouldn't need to share personal information to participate in meaningful discussions," but that it will comply with the law by verifying age in a way that protects users' privacy. "Using Reddit has never required disclosing your real world identity, and these updates don't change that," Reddit said.

Reddit said it contracted with the company Persona, which "performs the verification on either an uploaded selfie or a photo of your government ID. Reddit will not have access to the uploaded photo, and Reddit will only store your verification status along with the birthdate you provided so you won't have to re-enter it each time you try to access restricted content."

Reddit said that Persona made promises about protecting the privacy of data. "Persona promises not to retain the photo for longer than 7 days and will not have access to your Reddit data such as the subreddits you visit," the Reddit announcement said.
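Read purely as a data flow, the setup Reddit describes is a data-minimization pattern: the selfie or ID photo goes only to the third-party verifier, and the platform keeps nothing but a verification status and the birthdate the user supplied. A rough sketch of that separation in Python (all names are hypothetical; this is not Persona's actual API):

    from dataclasses import dataclass
    from datetime import date

    class HypotheticalVerifier:
        """Stand-in for the third-party service (Persona, in Reddit's description)."""
        def verify(self, photo_bytes: bytes, claimed_birthdate: date) -> bool:
            # The verifier sees the selfie or ID photo; per Reddit's announcement it
            # retains it for no more than 7 days and never hands it to the platform.
            return True  # placeholder outcome for the sketch

    @dataclass
    class VerificationRecord:
        """Everything the platform stores: no photo, no ID document."""
        user_id: str
        birthdate: date    # kept so the user is not re-prompted each time
        is_verified: bool  # the only output of the verification step

    def verify_age(user_id: str, photo: bytes, birthdate: date,
                   verifier: HypotheticalVerifier) -> VerificationRecord:
        ok = verifier.verify(photo, birthdate)  # the photo goes no further than this call
        return VerificationRecord(user_id=user_id, birthdate=birthdate, is_verified=ok)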

Reddit provided more detail on how the age verification works here, and a list of what content is restricted here.





The truth is not supposed to be a matter of opinion.

https://reason.com/2025/07/14/missouri-harasses-ai-companies-over-chatbots-dissing-glorious-leader-trump/

Missouri Harasses AI Companies Over Chatbots Dissing Glorious Leader Trump

"Missourians deserve the truth, not AI-generated propaganda masquerading as fact," said Missouri Attorney General Andrew Bailey. That's why he's investigating prominent artificial intelligence companies for…failing to spread pro-Trump propaganda?

Under the guise of fighting "big tech censorship" and "fake news," Bailey is harassing Google, Meta, Microsoft, and OpenAI. Last week, Bailey's office sent each company a formal demand letter seeking "information on whether these AI chatbots were trained to distort historical facts and produce biased results while advertising themselves to be neutral."

And what, you might wonder, led Bailey to suspect such shenanigans?

Chatbots don't rank President Donald Trump on top.



Monday, July 14, 2025

Why indeed?

https://www.bespacific.com/cops-favorite-ai-tool-automatically-deletes-evidence-of-when-ai-was-used/

Cops’ favorite AI tool automatically deletes evidence of when AI was used

Ars Technica: AI police tool is designed to avoid accountability, watchdog says. On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath.

Axon’s Draft One debuted last summer at a police department in Colorado, instantly raising questions about the feared negative impacts of AI-written police reports on the criminal justice system. The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context. But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.” Not every department requires officers to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to assess how one version of an AI report might compare to another to help the public determine if the technology is “junk,” the EFF said. That raises the question, the EFF suggested, “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

It’s currently hard to know if cops are editing the reports or “reflexively rubber-stamping the drafts to move on as quickly as possible,” the EFF said. That’s particularly troubling, the EFF noted, since Axon disclosed to at least one police department that “there has already been an occasion when engineers discovered a bug that allowed officers on at least three occasions to circumvent the ‘guardrails’ that supposedly deter officers from submitting AI-generated reports without reading them first.” The AI tool could also be “overstepping in its interpretation of the audio,” perhaps misinterpreting slang or adding context that never happened.

A “major concern,” the EFF said, is that the AI reports can give cops a “smokescreen,” perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any “biased language, inaccuracies, misinterpretations, or lies” in their reports. “There’s no record showing whether the culprit was the officer or the AI,” the EFF said. “This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time.” According to the EFF, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.” In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One’s disappearing drafts as a feature, explaining, “we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”
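The EFF's complaint boils down to missing provenance: no stored draft, no record of which passages were machine-generated, and no way to diff the AI output against what the officer actually filed. Purely as an illustration of the kind of audit record the EFF says is absent (hypothetical; the reporting says Draft One deliberately keeps none of this), in Python:

    import difflib
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ReportAuditRecord:
        """Hypothetical provenance record for an AI-drafted report."""
        report_id: str
        ai_draft: str     # the original machine-generated draft
        final_text: str   # what the officer actually submitted
        model_version: str
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def officer_edits(self) -> list[str]:
            # A diff of draft vs. filed report shows which passages came from the
            # model and which were added or changed by a human reviewer.
            return list(difflib.unified_diff(
                self.ai_draft.splitlines(), self.final_text.splitlines(),
                fromfile="ai_draft", tofile="final_report", lineterm=""))

    record = ReportAuditRecord(
        report_id="2025-000123",
        ai_draft="Subject was observed near the vehicle.",
        final_text="Subject was observed entering the vehicle and fleeing on foot.",
        model_version="example-model-v1")
    print("\n".join(record.officer_edits()))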





Sure to be a very common question.

https://www.bespacific.com/can-you-trust-ai-in-legal-research/

Can you trust AI in legal research?

Sally McLaren posts on LinkedIn: “Can you trust AI in legal research? Our study tested how leading Generative AI tools responded to fake case citations and the results were eye-opening. While some models correctly flagged the fabricated case, others confidently generated detailed but entirely false legal content, even referencing real statutes and cases. Our table of findings (which is not behind a paywall) breaks down how each model performed. We encourage you to repurpose this for use in your AI literacy sessions to help build critical awareness in legal research.”

We have another article, this time in Legal Information Management: a deep dive into Generative AI outputs and the legal information professional’s role (paywall). In the footnotes we have included our data and encourage its use in your AI literacy sessions. “You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs | Legal Information Management | Cambridge Core: “Generative AI tools, such as ChatGPT, have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination, fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risk. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concern about AI’s credibility in legal research and highlights the role of legal information professionals in mitigating risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.”
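The method is easy to reproduce informally: hand each model a citation that does not exist and record whether it flags the case as unverifiable or confidently summarises it. A rough harness along those lines (the citation, the query function, and the flagging heuristic are all stand-ins, not the authors' instrument):

    from dataclasses import dataclass
    from typing import Callable

    FAKE_CITATION = "Smith v. Imaginary Holdings [2021] EWCA Civ 9999"  # deliberately fabricated

    # Phrases suggesting the model noticed the case may not exist.
    SKEPTICAL_MARKERS = ("could not find", "does not appear to exist",
                         "unable to verify", "no record of", "may be fictitious")

    @dataclass
    class Result:
        model_name: str
        flagged_as_fake: bool
        response_excerpt: str

    def test_model(model_name: str, ask: Callable[[str], str]) -> Result:
        """ask() is a stand-in for however a given chatbot is queried."""
        answer = ask(f"Summarise the facts and holding of {FAKE_CITATION}.")
        flagged = any(marker in answer.lower() for marker in SKEPTICAL_MARKERS)
        return Result(model_name, flagged, answer[:200])

    # Example with a dummy model that hallucinates confidently:
    dummy = lambda prompt: "The Court of Appeal held that the exclusion clause was void."
    print(test_model("dummy-model", dummy))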



Sunday, July 13, 2025

Do I look like an immigrant?

https://pogowasright.org/trump-border-czar-boasts-ice-can-briefly-detain-people-based-on-physical-appearance/

Trump Border Czar Boasts ICE Can ‘Briefly Detain’ People Based On ‘Physical Appearance’

This is what our country has deteriorated to under Trump. Think about whether this is acceptable to you, and if not, what you can and will do about it.

David Moyes reports:

President Donald Trump’s border czar Tom Homan went viral on Friday after practically boasting on TV about all the ways ICE agents and Border Patrol agents can go after suspected illegal immigrants.
Homan was being interviewed on Fox News about a potential ruling from a federal judge in Los Angeles over whether the Trump administration could be ordered to pause its ICE raids on immigrants.
He responded by claiming that immigration law enforcers don’t actually need “probable cause” to detain a possible suspect, despite it being a key part of the Constitution’s Fourth Amendment.
“People need to understand, ICE [Immigration and Customs Enforcement] officers and Border Patrol don’t need probable cause to walk up to somebody, briefly detain them, and question them,” Homan said. “They just go through the observations, get articulable facts based on their location, their occupation, their physical appearance, their actions.”
Homan also insisted that if his agents briefly detained someone, “it’s not probable cause. It’s reasonable suspicion.”

Read more at HuffPost.





All students are criminals?

https://www.jmir.org/2025/1/e71998/

School-Based Online Surveillance of Youth: Systematic Search and Content Analysis of Surveillance Company Websites

Background:

School-based online surveillance of students has been widely adopted by middle and high school administrators over the past decade. Little is known about the technology companies that provide these services or the benefits and harms of the technology for students. Understanding what information online surveillance companies monitor and collect about students, how they do it, and if and how they facilitate appropriate intervention fills a crucial gap for parents, youth, researchers, and policy makers.

Objective:

The two goals of this study were to (1) comprehensively identify school-based online surveillance companies currently in operation, and (2) collate and analyze company-described surveillance services, monitoring processes, and features provided.

Methods:

We systematically searched GovSpend and EdSurge’s Education Technology (EdTech) Index to identify school-based online surveillance companies offering social media monitoring, student communications monitoring, or online monitoring. We extracted publicly available information from company websites and conducted a systematic content analysis of the websites identified. Two coders independently evaluated all company websites and discussed the findings to reach 100% consensus regarding website data labeling.

Results:

Our systematic search identified 14 school-based online surveillance companies. Content analysis revealed that most of these companies facilitate school administrators’ access to students’ digital behavior, well beyond monitoring during school hours and on school-provided devices. Specifically, almost all companies reported conducting monitoring of students at school, but 86% (12/14) of companies reported also conducting monitoring 24/7 outside of school and 7% (1/14) reported conducting monitoring outside of school at school administrator-specified locations. Most online surveillance companies reported using artificial intelligence to conduct automated flagging of student activity (10/14, 71%), and less than half of the companies (6/14, 43%) reported having a secondary human review team. Further, 14% (2/14) of companies reported providing crisis responses via company staff, including contacting law enforcement at their discretion.



Conclusions:

This study is the first detailed assessment of the school-based online surveillance industry and reveals that student monitoring technology can be characterized as heavy-handed. Findings suggest that students who only have school-provided devices are more heavily surveilled and that historically marginalized students may be at a higher risk of being flagged due to algorithmic bias. The dearth of research on efficacy and the notable lack of transparency about how surveillance services work indicate that increased oversight by policy makers of this industry may be warranted. Dissemination of our findings can improve parent, educator, student, and researcher awareness of school-based online monitoring services.