Wednesday, July 23, 2025

Should anyone use devices like this?

https://www.theverge.com/news/711621/amazon-bee-ai-wearable-acquisition

Amazon buys Bee AI wearable that listens to everything you say

Bee makes a $49.99 Fitbit-like device that listens in on your conversations while using AI to transcribe everything that you and the people around you say, allowing it to generate personalized summaries of your days, reminders, and suggestions from within the Bee app. You can also give the device permission to access your emails, contacts, location, reminders, photos, and calendar events to help inform its AI-generated insights, as well as create a searchable history of your activities.

My colleague Victoria Song got to try out the device for herself and found that it didn’t always get things quite right. It tended to confuse real-life conversations with the TV shows, TikTok videos, music, and movies that it heard.





Oh yeah, that makes perfect sense.

https://www.zdnet.com/article/people-dont-trust-ai-but-theyre-increasingly-using-it-anyway/

People don't trust AI but they're increasingly using it anyway

According to data first reported by Axios, ChatGPT now responds to around 2.5 billion user queries daily, with 330 million of those (roughly 13%) originating in the US. That's around 912.5 billion queries per year.
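
The annualized figure is just the daily count scaled by 365, and the US share is the simple ratio; a quick back-of-the-envelope check (ours, not the article's) confirms both numbers:

    daily_queries = 2.5e9   # ~2.5 billion ChatGPT queries per day (per Axios)
    us_queries = 330e6      # ~330 million of those originate in the US

    print(f"US share: {us_queries / daily_queries:.1%}")                  # 13.2% -> "roughly 13%"
    print(f"Per year: {daily_queries * 365 / 1e9:,.1f} billion queries")  # 912.5 billion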

ChatGPT was also the most downloaded app in the world in April; in June, it clocked more App Store downloads than TikTok, Facebook, Instagram, and X combined.





Tools & Techniques.

https://news.mit.edu/2025/mit-learn-offers-whole-new-front-door-institute-0721

MIT Learn offers “a whole new front door to the Institute”

In 2001, MIT became the first higher education institution to provide educational resources for free to anyone in the world. Fast forward 24 years: The Institute has now launched a dynamic AI-enabled website for its non-degree learning opportunities, making it easier for learners around the world to discover the courses and resources available on MIT’s various learning platforms.

MIT Learn enables learners to access more than 12,700 educational resources — including introductory and advanced courses, courseware, videos, podcasts, and more — from departments across the Institute. MIT Learn is designed to seamlessly connect the Institute’s existing learning platforms in one place.



Tuesday, July 22, 2025

To err is human. To hallucinate is AI?

https://www.bespacific.com/generative-artificial-intelligence-and-copyright-law-4/

Generative Artificial Intelligence and Copyright Law

Generative Artificial Intelligence and Copyright Law CRS Legal Sidebar – LSB10922, 7/18/25 – “Innovations in artificial intelligence (AI) have raised several new questions in the field of copyright law. Generative AI programs—such as OpenAI’s DALL-E and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual or other prompts. Generative AI programs are trained to create such outputs partly by exposing them to large quantities of existing writings, photos, paintings, or other works. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have confronted regarding whether generative AI outputs may be copyrighted as well as whether training and using generative AI programs may infringe copyrights in other works. Other CRS Legal Sidebars explore questions AI raises in the intellectual property fields of patents and the right of publicity…”





No encryption no privacy.

https://scholarship.law.marquette.edu/mulr/vol108/iss2/5/

Encryption Backdoors and the Fourth Amendment

The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement that this be reasonable. The first is that a challenge to the encryption backdoor might fail for want of a search or seizure. The Article rejects this both because the Amendment reaches some vulnerabilities apart from the searches and seizures they enable and because the creation of this vulnerability was itself a search or seizure. The second is that the role of the technology companies might have brought this backdoor within the private-search doctrine. The Article criticizes the doctrine—particularly its origins in Burdeau v. McDowell—and argues that if it ever should apply, it should not here. The last is that the customers might have waived their Fourth Amendment rights under the third-party doctrine. The Article rejects this both because the customers were not on notice of the backdoor and because historical understandings of the Amendment would not have tolerated it. The Article concludes that none of these theories removed the Amendment’s reasonableness requirement.



Monday, July 21, 2025

Yeah, we knew that.

https://arstechnica.com/tech-policy/2025/07/its-frighteningly-likely-many-us-courts-will-overlook-ai-errors-expert-says/

It’s “frighteningly likely” many US courts will overlook AI errors, expert says

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband's lawyer, Diana Lynch. That's a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on "two fictitious cases" to deny the wife's petition—which Watkins suggested were "possibly 'hallucinations' made up by generative-artificial intelligence"—as well as two cases that had "nothing to do" with the wife's petition.





Tools & Techniques. Take a picture of your document and output text.

https://www.yourvalley.net/stories/pdfgear-scan-finally-a-completely-free-ai-scanner-app-for-all,600756

PDFgear Scan, Finally, a Completely Free AI Scanner App for All



Sunday, July 20, 2025

Your relationships are changing.

https://pogowasright.org/as-companies-race-to-add-ai-terms-of-service-changes-are-going-to-freak-a-lot-of-people-out-think-twice-before-granting-consent/

As companies race to add AI, terms of service changes are going to freak a lot of people out. Think twice before granting consent!

Jude Karabus reports:

WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users. The topic? Granting licensing permissions for an as-yet-unreleased LLM product.
Agentic AI, GenAI, AI service bots, AI assistants to legal clerks, and more are washing over the tech space like a giant wave as the industry paddles for its life hoping to surf on a neural networks breaker. WeTransfer is not the only tech giant refreshing its legal fine print – any new product that needs permissions-based data access – not just for AI – is going to require a change to its terms of service.
In the case of WeTransfer, the passage that aroused ire was:
You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy. (Emphasis ours.)

Read more at The Register.

Meanwhile, over on TechCrunch, Zack Whittaker writes: think twice before granting AI access to your personal data:

There is a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but which require an AI assistant to access your real-time private conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into the photos stored in a user’s camera roll that haven’t been uploaded yet.
Signal president Meredith Whittaker recently likened the use of AI agents and assistants to “putting your brain in a jar.” Whittaker explained how some AI products can promise to do all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket for a concert. But to do that, AI will say it needs your permission to open your browser to load the website (which can allow the AI access to your stored passwords, bookmarks, and your browsing history), a credit card to make the reservation, your calendar to mark the date, and it may also ask to open your contacts so you can share the booking with a friend.





No doubt incorporating Asimov’s three laws...

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5351275

Should AI Write Your Constitution?

Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?

We make five major contributions to jumpstart the study of AI and constitutionalism. First, we unveil the results of the first Global Survey of Constitutional Experts on AI. How do constitutional experts view the risks and rewards of AI, would they use AI to write their own constitution, and what red lines would they impose around AI? Second, we introduce a novel spectrum of human control to classify and distinguish three types of tasks in constitution-making: high sensitivity tasks that should remain fully within the domain of human judgment and control, lower sensitivity tasks that are candidates for significant AI assistance or automation, and moderate sensitivity tasks that are ripe for human-AI collaboration. Third, we take readers through the key steps in the constitution-making process, from start to finish, to thoroughly explain how AI can assist with discrete tasks in constitution-making. Our objective here is to show scholars and practitioners how and when AI may be integrated into foundational democratic processes. Fourth, we construct a Democracy Shield—a set of specific practices, principles, and protocols—to protect constitutionalism and constitutional values from the real, perceived, and unanticipated risks that AI raises when merged into acts of national self-definition and popular reconstitution. Fifth, we make specific recommendations on how constitutional designers should use AI to make and reform constitutions, recognizing that openness to using AI in governance is likely to grow as human use and familiarity with AI increases over time, as we anticipate it will. This cutting-edge Article is therefore simultaneously descriptive, prescriptive, and normative.



Thursday, July 17, 2025

Those who do not read science fiction are doomed to repeat it?

https://futurism.com/ai-models-flunking-three-laws-robotics

Leading AI Models Are Completely Flunking the Three Laws of Robotics

Last month, for instance, researchers at Anthropic found that top AI models from all major players in the space — including OpenAI, Google, Elon Musk's xAI, and Anthropic's own cutting-edge tech — happily resorted to blackmailing human users when threatened with being shut down.

In other words, that single research paper caught every leading AI catastrophically bombing all three Laws of Robotics: the first by harming a human via blackmail, the second by subverting human orders, and the third by protecting its own existence in violation of the first two laws.





Perspective.

https://sloanreview.mit.edu/article/stop-deploying-ai-start-designing-intelligence/

Stop Deploying AI. Start Designing Intelligence

Stephen Wolfram is a physicist-turned-entrepreneur whose pioneering work in cellular automata, computational irreducibility, and symbolic knowledge systems fundamentally reshaped our understanding of complexity. His theoretical breakthroughs led to successful commercial products, Wolfram Alpha and Wolfram Language. Despite his success, the broader business community has largely overlooked these foundational insights. As part of our ongoing “Philosophy Eats AI”  exploration — the thesis that foundational philosophical clarity is essential to the future value of intelligent systems — we find that Wolfram’s fundamental insights about computation have distinctly actionable, if underappreciated, uses for leaders overwhelmed by AI capabilities but underwhelmed by AI returns.





A reasonable example?

https://www.aclu.org/about/privacy/archive/2025-07-31/privacy-statement

American Civil Liberties Union Privacy Statement



Wednesday, July 16, 2025

Tools & Techniques.

https://www.bespacific.com/handbook-of-the-law-ethics-and-policy-of-artificial-intelligence/

Handbook of the Law, Ethics and Policy of Artificial Intelligence

The Cambridge University Press & Assessment Handbook of the Law, Ethics and Policy of Artificial Intelligence (2025), edited by Nathalie Smuha, KU Leuven – is a comprehensive 600-page resource that brings together 30+ leading scholars across law, ethics, philosophy, and AI policy [Open Access – PDF and HTML]. It’s structured into three parts:

  • AI, Ethics & Philosophy

  • AI, Law & Policy

  • AI in Sectoral Applications (healthcare, finance, education, law enforcement, military, etc.)



Tuesday, July 15, 2025

I wondered how they would do this. Now I wonder how the government can verify that it was done.

https://arstechnica.com/tech-policy/2025/07/reddit-starts-verifying-ages-of-uk-users-to-comply-with-child-safety-law/

Reddit’s UK users must now prove they’re 18 to view adult content

Reddit announced today that it has started verifying UK users' ages before letting them "view certain mature content" in order to comply with the country's Online Safety Act.

Reddit said that users "shouldn't need to share personal information to participate in meaningful discussions," but that it will comply with the law by verifying age in a way that protects users' privacy. "Using Reddit has never required disclosing your real world identity, and these updates don't change that," Reddit said.

Reddit said it contracted with the company Persona, which "performs the verification on either an uploaded selfie or a photo of your government ID. Reddit will not have access to the uploaded photo, and Reddit will only store your verification status along with the birthdate you provided so you won't have to re-enter it each time you try to access restricted content."

Reddit said that Persona made promises about protecting the privacy of data. "Persona promises not to retain the photo for longer than 7 days and will not have access to your Reddit data such as the subreddits you visit," the Reddit announcement said.

Reddit provided more detail on how the age verification works, along with a list of what content is restricted.
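
As described, this is a data-minimization handoff: the third-party verifier handles the photo, while the platform keeps only a verification status and a birthdate. A minimal sketch of that separation in Python (all names are hypothetical, not Reddit's or Persona's actual APIs):

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class VerificationResult:
        verified: bool   # all the platform stores, per Reddit's announcement
        birthdate: date  # kept so users needn't re-verify on each visit

    def persona_verify(photo: bytes) -> VerificationResult:
        # Hypothetical stand-in for the third-party check: the selfie or ID
        # photo stays on the verifier's side (retained no more than 7 days,
        # per Persona's stated policy); only status + birthdate come back.
        return VerificationResult(verified=True, birthdate=date(2000, 1, 1))  # dummy outcome

    def can_view_mature_content(result: VerificationResult) -> bool:
        # The platform gates content from stored status + birthdate alone.
        age = date.today() - result.birthdate
        return result.verified and age >= timedelta(days=365.25 * 18)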





The truth is not supposed to be a matter of opinion.

https://reason.com/2025/07/14/missouri-harasses-ai-companies-over-chatbots-dissing-glorious-leader-trump/

Missouri Harasses AI Companies Over Chatbots Dissing Glorious Leader Trump

"Missourians deserve the truth, not AI-generated propaganda masquerading as fact," said Missouri Attorney General Andrew Bailey. That's why he's investigating prominent artificial intelligence companies for…failing to spread pro-Trump propaganda?

Under the guise of fighting "big tech censorship" and "fake news," Bailey is harassing Google, Meta, Microsoft, and OpenAI. Last week, Bailey's office sent each company a formal demand letter seeking "information on whether these AI chatbots were trained to distort historical facts and produce biased results while advertising themselves to be neutral."

And what, you might wonder, led Bailey to suspect such shenanigans?

Chatbots don't rank President Donald Trump on top.



Monday, July 14, 2025

Why indeed?

https://www.bespacific.com/cops-favorite-ai-tool-automatically-deletes-evidence-of-when-ai-was-used/

Cops’ favorite AI tool automatically deletes evidence of when AI was used

Ars Technica: AI police tool is designed to avoid accountability, watchdog says. On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath. Axon’s Draft One debuted last summer at a police department in Colorado, instantly raising questions about the feared negative impacts of AI-written police reports on the criminal justice system.

The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context. But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.” Cops don’t have to disclose when AI is used in every department, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to assess how one version of an AI report might compare to another to help the public determine if the technology is “junk,” the EFF said. That raises the question, the EFF suggested, “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

It’s currently hard to know if cops are editing the reports or “reflexively rubber-stamping the drafts to move on as quickly as possible,” the EFF said. That’s particularly troubling, the EFF noted, since Axon disclosed to at least one police department that “there has already been an occasion when engineers discovered a bug that allowed officers on at least three occasions to circumvent the ‘guardrails’ that supposedly deter officers from submitting AI-generated reports without reading them first.” The AI tool could also possibly be “overstepping in its interpretation of the audio,” possibly misinterpreting slang or adding context that never happened.

A “major concern,” the EFF said, is that the AI reports can give cops a “smokescreen,” perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any “biased language, inaccuracies, misinterpretations, or lies” in their reports. “There’s no record showing whether the culprit was the officer or the AI,” the EFF said. “This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time.” According to the EFF, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.” In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One’s disappearing drafts as a feature, explaining, “we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”
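
The record the EFF says is missing would be simple to keep. A minimal sketch of such a provenance log (hypothetical fields and function names, not Axon's actual design):

    import difflib
    import json
    from datetime import datetime, timezone

    def log_report_revision(ai_draft: str, final_report: str, officer_id: str) -> str:
        """Retain what the model drafted versus what the officer filed, so an
        auditor can later tell whose words ended up in the report."""
        edits = list(difflib.unified_diff(
            ai_draft.splitlines(), final_report.splitlines(), lineterm=""))
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "officer_id": officer_id,
            "ai_draft": ai_draft,  # the part Draft One reportedly discards
            "edits": edits,        # an empty diff suggests a rubber-stamped draft
        })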





Sure to be a very common question.

https://www.bespacific.com/can-you-trust-ai-in-legal-research/

Can you trust AI in legal research?

Sally McLaren posts on LinkedIn: “Can you trust AI in legal research? Our study tested how leading Generative AI tools responded to fake case citations and the results were eye-opening. While some models correctly flagged the fabricated case, others confidently generated detailed but entirely false legal content, even referencing real statutes and cases. Our table of findings (which is not behind a paywall) breaks down how each model performed. We encourage you to repurpose this for use in your AI literacy sessions to help build critical awareness in legal research.”

We have another article, this time in Legal Information Management. A deep dive into Generative AI outputs and the legal information professional’s role (paywall). In the footnotes we have included our data and encourage use of it in your AI literacy sessions. “You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs | Legal Information Management | Cambridge Core: “Generative AI tools, such as ChatGPT, have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination, fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risk. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concern about AI’s credibility in legal research and highlights the role of legal information professionals in mitigating risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.”



Sunday, July 13, 2025

Do I look like an immigrant?

https://pogowasright.org/trump-border-czar-boasts-ice-can-briefly-detain-people-based-on-physical-appearance/

Trump Border Czar Boasts ICE Can ‘Briefly Detain’ People Based On ‘Physical Appearance’

This is what our country has deteriorated to under Trump. Think about whether this is acceptable to you, and if not, what you can and will do about it.

David Moyes reports:

President Donald Trump’s border czar Tom Homan went viral on Friday after practically boasting on TV about all the ways ICE agents and Border Patrol agents can go after suspected illegal immigrants.
Homan was being interviewed on Fox News about a potential ruling from a federal judge in Los Angeles over whether the Trump administration could be ordered to pause its ICE raids on immigrants.
He responded by claiming that immigration law enforcers don’t actually need “probable cause” to detain a possible suspect, despite it being a key part of the Constitution’s Fourth Amendment.
“People need to understand, ICE [Immigration and Customs Enforcement] officers and Border Patrol don’t need probable cause to walk up to somebody, briefly detain them, and question them,” Homan said. “They just go through the observations, get articulable facts based on their location, their occupation, their physical appearance, their actions.”
Homan also insisted that if his agents briefly detained someone, “it’s not probable cause. It’s reasonable suspicion.”

Read more at HuffPost.





All students are criminals?

https://www.jmir.org/2025/1/e71998/

School-Based Online Surveillance of Youth: Systematic Search and Content Analysis of Surveillance Company Websites

Background:

School-based online surveillance of students has been widely adopted by middle and high school administrators over the past decade. Little is known about the technology companies that provide these services or the benefits and harms of the technology for students. Understanding what information online surveillance companies monitor and collect about students, how they do it, and if and how they facilitate appropriate intervention fills a crucial gap for parents, youth, researchers, and policy makers.

Objective:

The two goals of this study were to (1) comprehensively identify school-based online surveillance companies currently in operation, and (2) collate and analyze company-described surveillance services, monitoring processes, and features provided.

Methods:

We systematically searched GovSpend and EdSurge’s Education Technology (EdTech) Index to identify school-based online surveillance companies offering social media monitoring, student communications monitoring, or online monitoring. We extracted publicly available information from company websites and conducted a systematic content analysis of the websites identified. Two coders independently evaluated all company websites and discussed the findings to reach 100% consensus regarding website data labeling.

Results:

Our systematic search identified 14 school-based online surveillance companies. Content analysis revealed that most of these companies facilitate school administrators’ access to students’ digital behavior, well beyond monitoring during school hours and on school-provided devices. Specifically, almost all companies reported conducting monitoring of students at school, but 86% (12/14) of companies reported also conducting monitoring 24/7 outside of school and 7% (1/14) reported conducting monitoring outside of school at school administrator-specified locations. Most online surveillance companies reported using artificial intelligence to conduct automated flagging of student activity (10/14, 71%), and less than half of the companies (6/14, 43%) reported having a secondary human review team. Further, 14% (2/14) of companies reported providing crisis responses via company staff, including contacting law enforcement at their discretion.
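
The reported percentages are plain fractions of the 14 companies identified; a throwaway Python check (ours, not the study's code) reproduces them:

    findings = {  # feature: number of companies reporting it, out of 14
        "24/7 monitoring outside of school": 12,
        "monitoring at admin-specified locations": 1,
        "AI-automated flagging": 10,
        "secondary human review team": 6,
        "crisis response by company staff": 2,
    }
    for feature, k in findings.items():
        print(f"{feature}: {k}/14 = {k / 14:.0%}")  # 86%, 7%, 71%, 43%, 14%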



Conclusions:

This study is the first detailed assessment of the school-based online surveillance industry and reveals that student monitoring technology can be characterized as heavy-handed. Findings suggest that students who only have school-provided devices are more heavily surveilled and that historically marginalized students may be at a higher risk of being flagged due to algorithmic bias. The dearth of research on efficacy and the notable lack of transparency about how surveillance services work indicate that increased oversight by policy makers of this industry may be warranted. Dissemination of our findings can improve parent, educator, student, and researcher awareness of school-based online monitoring services.



Saturday, July 12, 2025

I’m interested in where this might go…

https://www.politico.eu/article/france-opens-criminal-probe-into-x-for-algorithm-manipulation/

France launches criminal investigation into Musk’s X over algorithm manipulation

French prosecutors have opened a criminal investigation into X over allegations that the company owned by billionaire Elon Musk manipulated its algorithms for the purposes of “foreign interference.”

Magistrate Laure Beccuau said in a statement Friday that prosecutors had launched the probe on Wednesday and were looking into whether the social media giant broke French law by altering its algorithms and fraudulently extracting data from users.

The criminal investigation comes on the heels of an inquiry launched in January, and is based on complaints from a lawmaker and an unnamed senior civil servant, Beccuau said.

A complaint that sparked the initial January inquiry accused X of spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France.”





Perspective.

https://blogs.lse.ac.uk/politicsandpolicy/what-if-ai-becomes-conscious/

What if AI becomes conscious?

The question of whether Artificial Intelligence can become conscious is not just a philosophical question, but a political one. Given that an increasing number of people are forming social relationships with AI systems, the calls for treating them as persons with legal protections might not be far off. In this interview based on his book The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI Jonathan Birch argues that we shouldn’t be too quick to dismiss the possibility that AI could become conscious, but warns that we are not ready, conceptually or societally, for such an eventuality.



Friday, July 11, 2025

Training data for your Legal AI?

https://www.bespacific.com/gpo-makes-available-supreme-court-cases-dating-back-to-the-18th-century/

GPO Makes Available U.S. Supreme Court Cases Dating Back to the 18th Century

The U.S. Government Publishing Office (GPO) has made available hundreds of historic volumes of U.S. Supreme Court cases dating from 1790–1991. These cases are published officially in the United States Reports and are now available on GPO’s GovInfo, the one-stop site for authentic, published information for all three branches of the Federal Government. United States Reports: https://www.govinfo.gov/app/collection/usreports





Perspective.

https://thehackernews.com/2025/07/securing-data-in-ai-era.html

Securing Data in the AI Era

The 2025 Data Risk Report: Enterprises face potentially serious data loss risks from AI-fueled tools. Adopting a unified, AI-driven approach to data security can help.

As businesses increasingly rely on cloud-driven platforms and AI-powered tools to accelerate digital transformation, the stakes for safeguarding sensitive enterprise data have reached unprecedented levels. The Zscaler ThreatLabz 2025 Data Risk Report reveals how evolving technology landscapes are amplifying vulnerabilities, highlighting the critical need for a proactive and unified approach to data protection.



Thursday, July 10, 2025

You can hurry too fast…

https://www.bespacific.com/66-of-inhouse-lawyers-using-raw-chatbots/

66% of Inhouse Lawyers Using ‘Raw’ Chatbots

Artificial Lawyer: “A major survey by Axiom of 600+ senior inhouse lawyers across eight countries on AI adoption has found that 66% of them are using ‘raw’ LLM chatbots such as ChatGPT, and only between 7% and 17% are using bona fide legal AI tools made for this sector. There is something terrible about this, but also there is a silver lining. The terrible bit first: if you’re primarily using a ‘raw’ chatbot approach for legal work then that suggests that what you can do with genAI is limited. You can’t really organise things in terms of proper workflows, and more likely this is an ad hoc, ‘prompt here and a prompt there’, approach. It’s also a major data risk. It just shows a level of AI use that is what we can call ‘surface level’. There is no deep planning or strategy going on here at all it seems for many lawyers. The positive bit… a huge number of inhouse lawyers are now comfortable with using genAI. Now we just have to get them to understand why they need to use legal tech tools that have the correct structure, refinement, privacy safeguards, ability to be formed into workflows, and leverage agents in a controlled and repeatable way… and more. OK, what else?

  • 87% of legal departments are handling AI procurement themselves without IT involvement – with only 4% doing full IT partnerships.

  • Only 21% have achieved what Axiom is calling ‘AI maturity’ despite 76% increasing budgets by 26% on average for AI spending.

And that’s not great either, as it suggests a real ‘free-for-all’.  It’s a kind of legal AI anarchy…. Plus, they found that ‘according to in-house leaders, 79% of law firms are using AI, but 58% aren’t reducing rates for AI-assisted work. 34% actually charging more for it’….”

Source: Axiom Law Report – The AI Legal Divide: How Global In-House Teams Are Racing to Avoid Being Left Behind. “Corporate legal departments face unprecedented pressure to harness AI’s potential, with three-quarters increasing AI budgets by 26% to 33% and two-thirds accelerating adoption timelines—yet only one in five has achieved “AI maturity,” reflecting a chasm between teams racing to reap AI’s benefits and those trapped in analysis paralysis. These insights and more are covered in this report on AI maturity, budgets, adoption trends, and strategies among global enterprise in-house legal teams…”



Tuesday, July 08, 2025

I still think that opposing counsel should be paid (some multiple?) for the time they spent finding the errors. The authors “saved time” by not checking.

https://coloradosun.com/2025/07/07/mike-lindell-attorneys-fined-artificial-intelligence/

MyPillow CEO’s lawyers fined for AI-generated court filing in Denver defamation case

A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to pay $3,000 each after they used artificial intelligence to prepare a court filing that was riddled with errors, including citations to nonexistent cases and misquotations of case law. 

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the motion, which contained nearly 30 defective citations, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.

“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her ruling, adding that the sanction against Kachouroff and DeMaster was “the least severe sanction adequate to deter and punish defense counsel in this instance.”



(Related?) Anyone looking for internal errors?

https://www.bespacific.com/ai-reduces-client-use-of-law-firms-by-13-study/

AI Reduces Client Use Of Law Firms ‘By 13%’ – Study

Artificial Lawyer: “A new study by LexisNexis, conducted for them by Forrester, and using a model inhouse legal team of a hypothetical $10 billion company, found that if they were using AI tools at scale internally it could reduce work sent to law firms by 13%, based on the volume of matters handled. Other key findings included:

  • ‘A 25% reduction in annual time spent advising the business on legal inquiries’ (i.e. advising the business the inhouse team is within).

  • And, ‘Annual time savings of 50% for paralegals on administrative tasks’ (i.e. paralegals employed by the inhouse team).

To get to these results the consulting group Forrester interviewed four senior inhouse people ‘with experience using and deploying Lexis+ AI’ in their companies. They then combined the four companies into a ‘single composite organization based in North America with $10 billion in annual revenue and a corporate legal staff of 70 attorneys and 10 paralegals. Its legal budget is 0.33% of the organization’s annual revenue’. This scenario was then considered over three years, taking into account broad use of AI. Now, although there is a clear effort to be empirical here, the dataset is very small – four companies – and the extrapolations on cost and time savings are from a composite entity over three years. So, let’s not get carried away here. It really is a model, not a set of facts. That said, if all of the Fortune 500, for example, used AI tools across their inhouse teams at scale – and every day, not just occasionally – and actually were able to reduce the amount of work sent out to law firms by 13% in terms of the volume of matters, then that would total many $ millions in reductions of external legal spend across the US Big Law market…”
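
The composite's arithmetic is easy to restate. In the sketch below, the 0.33%, 13%, and 50% figures come from the study; the baseline volumes are invented placeholders:

    revenue = 10_000_000_000             # the composite: a $10B company
    legal_budget = revenue * 0.0033      # legal budget at 0.33% of revenue
    print(f"Legal budget: ${legal_budget / 1e6:.0f}M")  # $33M

    matters_out = 500                    # hypothetical matters sent to firms per year
    print(f"With AI at scale: {matters_out * (1 - 0.13):.0f} matters")  # 13% fewer

    paralegal_admin_hours = 2_000        # hypothetical annual admin hours
    print(f"Paralegal admin time: {paralegal_admin_hours * 0.5:.0f} hours")  # halved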





A hint of things to come?

https://futurism.com/companies-fixing-ai-replacement-mistakes

Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes

Companies that rushed to replace human labor with AI are now shelling out to get human workers to fix the technology's screwups.

As the BBC reports, there's now something of a cottage industry for writers and coders who specialize in fixing AI's mistakes — and those who are good at it are using the opportunity to rake in cash.