Wednesday, July 23, 2025

Should anyone use devices like this?

https://www.theverge.com/news/711621/amazon-bee-ai-wearable-acquisition

Amazon buys Bee AI wearable that listens to everything you say

Bee makes a $49.99 Fitbit-like device that listens in on your conversations while using AI to transcribe everything that you and the people around you say, allowing it to generate personalized summaries of your days, reminders, and suggestions from within the Bee app. You can also give the device permission to access your emails, contacts, location, reminders, photos, and calendar events to help inform its AI-generated insights, as well as create a searchable history of your activities.

My colleague Victoria Song got to try out the device for herself and found that it didn’t always get things quite right. It tended to confuse real-life conversations with the TV shows, TikTok videos, music, and movies that it heard.





Oh yeah, that makes perfect sense.

https://www.zdnet.com/article/people-dont-trust-ai-but-theyre-increasingly-using-it-anyway/

People don't trust AI but they're increasingly using it anyway

According to data first reported by Axios, ChatGPT now responds to around 2.5 billion user queries daily, with 330 million of those (roughly 13 percent) originating in the US. That works out to around 912.5 billion queries per year.
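A quick sanity check of the quoted figures, using only the numbers reported above:

```python
# Verify the arithmetic in the Axios-reported figures.
daily_queries = 2.5e9   # ChatGPT queries per day (reported)
us_queries = 330e6      # US-originated queries per day (reported)

us_share = us_queries / daily_queries
yearly = daily_queries * 365

print(f"US share: {us_share:.1%}")              # prints "US share: 13.2%"
print(f"Per year: {yearly / 1e9:.1f} billion")  # prints "Per year: 912.5 billion"
```

Both claims check out: 330 million is about 13 percent of 2.5 billion, and 2.5 billion a day over 365 days is 912.5 billion.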

ChatGPT was also the most downloaded app in the world in April; in June, it clocked more App Store downloads than TikTok, Facebook, Instagram, and X combined.





Tools & Techniques.

https://news.mit.edu/2025/mit-learn-offers-whole-new-front-door-institute-0721

MIT Learn offers “a whole new front door to the Institute”

In 2001, MIT became the first higher education institution to provide educational resources for free to anyone in the world. Fast forward 24 years: The Institute has now launched a dynamic AI-enabled website for its non-degree learning opportunities, making it easier for learners around the world to discover the courses and resources available on MIT’s various learning platforms.

MIT Learn enables learners to access more than 12,700 educational resources — including introductory and advanced courses, courseware, videos, podcasts, and more — from departments across the Institute. MIT Learn is designed to seamlessly connect the Institute’s existing learning platforms in one place.



Tuesday, July 22, 2025

To err is human. To hallucinate is AI?

https://www.bespacific.com/generative-artificial-intelligence-and-copyright-law-4/

Generative Artificial Intelligence and Copyright Law

Generative Artificial Intelligence and Copyright Law CRS Legal Sidebar – LSB10922, 7/18/25 – “Innovations in artificial intelligence (AI) have raised several new questions in the field of copyright law. Generative AI programs—such as OpenAI’s DALL-E and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual or other prompts. Generative AI programs are trained to create such outputs partly by exposing them to large quantities of existing writings, photos, paintings, or other works. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have confronted regarding whether generative AI outputs may be copyrighted as well as whether training and using generative AI programs may infringe copyrights in other works. Other CRS Legal Sidebars explore questions AI raises in the intellectual property fields of patents and the right of publicity…”





No encryption no privacy.

https://scholarship.law.marquette.edu/mulr/vol108/iss2/5/

Encryption Backdoors and the Fourth Amendment

The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement that this be reasonable. The first is that a challenge to the encryption backdoor might fail for want of a search or seizure. The Article rejects this both because the Amendment reaches some vulnerabilities apart from the searches and seizures they enable and because the creation of this vulnerability was itself a search or seizure. The second is that the role of the technology companies might have brought this backdoor within the private-search doctrine. The Article criticizes the doctrine— particularly its origins in Burdeau v. McDowell—and argues that if it ever should apply, it should not here. The last is that the customers might have waived their Fourth Amendment rights under the third-party doctrine. The Article rejects this both because the customers were not on notice of the backdoor and because historical understandings of the Amendment would not have tolerated it. The Article concludes that none of these theories removed the Amendment’s reasonableness requirement.



Monday, July 21, 2025

Yeah, we knew that.

https://arstechnica.com/tech-policy/2025/07/its-frighteningly-likely-many-us-courts-will-overlook-ai-errors-expert-says/

It’s “frighteningly likely” many US courts will overlook AI errors, expert says

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband's lawyer, Diana Lynch. That's a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on "two fictitious cases" to deny the wife's petition—which Watkins suggested were "possibly 'hallucinations' made up by generative-artificial intelligence"—as well as two cases that had "nothing to do" with the wife's petition.





Tools & Techniques. Take a picture of your document and output text.

https://www.yourvalley.net/stories/pdfgear-scan-finally-a-completely-free-ai-scanner-app-for-all,600756

PDFgear Scan, Finally, a Completely Free AI Scanner App for All



Sunday, July 20, 2025

Your relationships are changing.

https://pogowasright.org/as-companies-race-to-add-ai-terms-of-service-changes-are-going-to-freak-a-lot-of-people-out-think-twice-before-granting-consent/

As companies race to add AI, terms of service changes are going to freak a lot of people out. Think twice before granting consent!

Jude Karabus reports:

WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users. The topic? Granting licensing permissions for an as-yet-unreleased LLM product.
Agentic AI, GenAI, AI service bots, AI assistants to legal clerks, and more are washing over the tech space like a giant wave as the industry paddles for its life hoping to surf on a neural networks breaker. WeTransfer is not the only tech giant refreshing its legal fine print – any new product that needs permissions-based data access – not just for AI – is going to require a change to its terms of service.
In the case of WeTransfer, the passage that aroused ire was:
You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy. (Emphasis ours.)

Read more at The Register.

Meanwhile, over on TechCrunch, Zack Whittaker writes: think twice before granting AI access to your personal data:

There is a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but which require an AI assistant to access your real-time private conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into the photos stored in a user’s camera roll that haven’t been uploaded yet.
Signal president Meredith Whittaker recently likened the use of AI agents and assistants to “putting your brain in a jar.” Whittaker explained how some AI products can promise to do all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket for a concert. But to do that, AI will say it needs your permission to open your browser to load the website (which can allow the AI access to your stored passwords, bookmarks, and your browsing history), a credit card to make the reservation, your calendar to mark the date, and it may also ask to open your contacts so you can share the booking with a friend.





No doubt incorporating Asimov’s three laws...

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5351275

Should AI Write Your Constitution?

Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?

We make five major contributions to jumpstart the study of AI and constitutionalism. First, we unveil the results of the first Global Survey of Constitutional Experts on AI. How do constitutional experts view the risks and rewards of AI, would they use AI to write their own constitution, and what red lines would they impose around AI? Second, we introduce a novel spectrum of human control to classify and distinguish three types of tasks in constitution-making: high sensitivity tasks that should remain fully within the domain of human judgment and control, lower sensitivity tasks that are candidates for significant AI assistance or automation, and moderate sensitivity tasks that are ripe for human-AI collaboration. Third, we take readers through the key steps in the constitution-making process, from start to finish, to thoroughly explain how AI can assist with discrete tasks in constitution-making. Our objective here is to show scholars and practitioners how and when AI may be integrated into foundational democratic processes. Fourth, we construct a Democracy Shield—a set of specific practices, principles, and protocols—to protect constitutionalism and constitutional values from the real, perceived, and unanticipated risks that AI raises when merged into acts of national self-definition and popular reconstitution. Fifth, we make specific recommendations on how constitutional designers should use AI to make and reform constitutions, recognizing that openness to using AI in governance is likely to grow as human use and familiarity with AI increases over time, as we anticipate it will. This cutting-edge Article is therefore simultaneously descriptive, prescriptive, and normative.



Thursday, July 17, 2025

Those who do not read science fiction are doomed to repeat it?

https://futurism.com/ai-models-flunking-three-laws-robotics

Leading AI Models Are Completely Flunking the Three Laws of Robotics

Last month, for instance, researchers at Anthropic found that top AI models from all major players in the space — including OpenAI, Google, Elon Musk's xAI, and Anthropic's own cutting-edge tech — happily resorted to blackmailing human users when threatened with being shut down.

In other words, that single research paper caught every leading AI catastrophically bombing all three Laws of Robotics: the first by harming a human via blackmail, the second by subverting human orders, and the third by protecting its own existence in violation of the first two laws.





Perspective.

https://sloanreview.mit.edu/article/stop-deploying-ai-start-designing-intelligence/

Stop Deploying AI. Start Designing Intelligence

Stephen Wolfram is a physicist-turned-entrepreneur whose pioneering work in cellular automata, computational irreducibility, and symbolic knowledge systems fundamentally reshaped our understanding of complexity. His theoretical breakthroughs led to successful commercial products, Wolfram Alpha and Wolfram Language. Despite his success, the broader business community has largely overlooked these foundational insights. As part of our ongoing “Philosophy Eats AI” exploration — the thesis that foundational philosophical clarity is essential to the future value of intelligent systems — we find that Wolfram’s fundamental insights about computation have distinctly actionable, if underappreciated, uses for leaders overwhelmed by AI capabilities but underwhelmed by AI returns.





A reasonable example?

https://www.aclu.org/about/privacy/archive/2025-07-31/privacy-statement

American Civil Liberties Union Privacy Statement



Wednesday, July 16, 2025

Tools & Techniques.

https://www.bespacific.com/handbook-of-the-law-ethics-and-policy-of-artificial-intelligence/

Handbook of the Law, Ethics and Policy of Artificial Intelligence

The Cambridge University Press & Assessment Handbook of the Law, Ethics and Policy of Artificial Intelligence (2025), edited by Nathalie Smuha, KU Leuven – is a comprehensive 600-page resource that brings together 30+ leading scholars across law, ethics, philosophy, and AI policy [Open Access – PDF and HTML]. It’s structured into three parts:

  • AI, Ethics & Philosophy

  • AI, Law & Policy

  • AI in Sectoral Applications (healthcare, finance, education, law enforcement, military, etc.)



Tuesday, July 15, 2025

I wondered how they would do this. Now I wonder how the government can verify that it was done.

https://arstechnica.com/tech-policy/2025/07/reddit-starts-verifying-ages-of-uk-users-to-comply-with-child-safety-law/

Reddit’s UK users must now prove they’re 18 to view adult content

Reddit announced today that it has started verifying UK users' ages before letting them "view certain mature content" in order to comply with the country's Online Safety Act.

Reddit said that users "shouldn't need to share personal information to participate in meaningful discussions," but that it will comply with the law by verifying age in a way that protects users' privacy. "Using Reddit has never required disclosing your real world identity, and these updates don't change that," Reddit said.

Reddit said it contracted with the company Persona, which "performs the verification on either an uploaded selfie or a photo of your government ID. Reddit will not have access to the uploaded photo, and Reddit will only store your verification status along with the birthdate you provided so you won't have to re-enter it each time you try to access restricted content."

Reddit said that Persona made promises about protecting the privacy of data. "Persona promises not to retain the photo for longer than 7 days and will not have access to your Reddit data such as the subreddits you visit," the Reddit announcement said.

Reddit provided more detail on how the age verification works, along with a list of what content is restricted.
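The privacy model described above — a third-party verifier sees the photo, while the platform stores only the outcome and a birthdate — can be sketched as a minimal data model. This is a hypothetical illustration of the pattern, not Reddit’s or Persona’s actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationResult:
    """What the platform keeps after third-party verification.

    Note what is absent: no photo, no ID scan, no real-world
    identity. Per the article, the verifier handles those and
    discards the photo within 7 days.
    """
    user_id: str
    is_verified: bool  # outcome only, not the evidence
    birthdate: date    # stored so the user needn't re-verify

def can_view_mature_content(result: VerificationResult) -> bool:
    # Gate restricted content on the stored status alone.
    today = date.today()
    age = today.year - result.birthdate.year - (
        (today.month, today.day) < (result.birthdate.month, result.birthdate.day)
    )
    return result.is_verified and age >= 18
```

The design choice worth noting is the separation of concerns: the sensitive evidence lives (briefly) with the verifier, and the platform’s database holds only a boolean and a date, limiting what a breach of the platform could expose.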





The truth is not supposed to be a matter of opinion.

https://reason.com/2025/07/14/missouri-harasses-ai-companies-over-chatbots-dissing-glorious-leader-trump/

Missouri Harasses AI Companies Over Chatbots Dissing Glorious Leader Trump

"Missourians deserve the truth, not AI-generated propaganda masquerading as fact," said Missouri Attorney General Andrew Bailey. That's why he's investigating prominent artificial intelligence companies for…failing to spread pro-Trump propaganda?

Under the guise of fighting "big tech censorship" and "fake news," Bailey is harassing Google, Meta, Microsoft, and OpenAI. Last week, Bailey's office sent each company a formal demand letter seeking "information on whether these AI chatbots were trained to distort historical facts and produce biased results while advertising themselves to be neutral."

And what, you might wonder, led Bailey to suspect such shenanigans?

Chatbots don't rank President Donald Trump on top.