Friday, September 26, 2025

Similar to India’s?

https://www.bbc.com/news/articles/cn832y43ql5o

New digital ID will be mandatory to work in the UK

Digital ID will be mandatory in order to work in the UK, as part of plans to tackle illegal migration.

Sir Keir Starmer said the new digital ID scheme would make it tougher to work in the UK illegally and offer "countless benefits" to citizens.

However, opposition parties argued the proposals would not stop people crossing the Channel in small boats.



Thursday, September 25, 2025

Is the US isolating itself?

https://www.theguardian.com/technology/2025/sep/24/tiktok-ownership-chinese-american-australia-may-choose-between-china-usa-trump-billionaire-backers

Australia may have to choose between a Chinese TikTok and one owned by Trump’s billionaire backers

Would you rather a Chinese-owned TikTok or one run by a consortium of Trump-supporting billionaires?

That’s the choice Australia is being asked to consider.





Fool me once…

https://fortune.com/2025/09/24/lexisnexis-exec-says-its-a-matter-of-time-before-attorneys-lose-their-licenses-over-using-open-source-ai-pilots-in-court/

LexisNexis exec says it’s ‘a matter of time’ before attorneys lose their licenses over using open-source AI pilots in court

Courts across the country have sanctioned attorneys for misuse of general-purpose LLMs like OpenAI’s ChatGPT and Anthropic’s Claude, which have made up “imaginary cases, suggested that attorneys invent court decisions to strengthen their arguments, and provided improper citations to legal documents.”

Experts tell Fortune more of these cases will crop up—and along with them steep penalties for the attorneys who misuse AI.

Damien Charlotin, a lawyer and research fellow at HEC Paris, runs a database of AI hallucination cases. He’s tallied 376 cases to date, 244 of which are U.S. cases.

Entering sensitive information into these general-purpose models also risks breaching attorney-client privilege.





Tools & Techniques.

https://www.zdnet.com/article/you-can-google-the-world-around-you-by-video-now-for-free-with-search-live/

You can Google the world around you by video now for free - with Search Live

On Wednesday, Google finally launched its Search Live feature, first unveiled at Google I/O in May.

The feature is straightforward: much as you would point Google Lens at something and snap a photo, you can now select Search Live to start a live video session, show it the world around you, and ask questions using your voice.



Wednesday, September 24, 2025

Who benefits?

https://www.bbc.com/news/articles/cn4w0d8zz22o

Secret Service disrupts telecom threat near UN General Assembly

The US Secret Service disrupted a network of telecommunications devices that could have shut down cellular systems as leaders gather for the United Nations General Assembly in New York City.

The agency said on Tuesday that last month it found more than 300 SIM servers and 100,000 SIM cards that could have been used for telecom attacks within the area encompassing parts of New York, New Jersey and Connecticut.

"This network had the power to disable cell phone towers and essentially shut down the cellular network in New York City," said special agent in charge Matt McCool.

The unidentified nation-state actors were sending encrypted messages to organised crime groups, cartels and terrorist organisations, he added.

The equipment was capable of texting the entire population of the US within 12 minutes, officials say. It could also have disabled mobile phone towers and launched distributed denial of service attacks that might have blocked emergency dispatch communications.
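
A quick back-of-envelope check of that claim, using the 100,000 SIM cards reported above (the population figure and the uniform-rate assumption are mine, not the officials’):

```python
# Rough sanity check of the "entire US population in 12 minutes" claim.
# Assumes ~340M US residents (my figure) and an even load across SIMs.
sim_cards = 100_000           # SIM cards reportedly seized
us_population = 340_000_000   # approximate 2025 US population (assumption)
minutes = 12

texts_per_sim = us_population / sim_cards       # 3,400 texts per SIM
rate_per_sim = texts_per_sim / (minutes * 60)   # ~4.7 texts per second per SIM
print(f"{texts_per_sim:,.0f} texts per SIM, ~{rate_per_sim:.1f}/sec each")
```

Whether a single SIM channel can actually sustain roughly five messages a second is a separate question; the arithmetic just shows the scale involved.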





I’ve been waiting for this. It seemed inevitable after reading “The Dynamo and the Computer.”

https://gizmodo.com/study-claims-over-half-of-tech-firms-are-considering-restructuring-thanks-to-ai-2000659089

Study Claims Over Half of Tech Firms Are Considering ‘Restructuring,’ Thanks to AI

Murmurs about a linkage between the rollout of new AI services and recent waves of layoffs within the tech industry have been ongoing for some time. Similarly, a recent cooling of the job market for coders has also been attributed to the rise of so-called “vibe coding,” in which less skilled technicians create websites and products with the help of an automated assistant.

Now, a new report from a firm that works with tech companies claims that a majority of its clients say they are considering big changes to accommodate greater integration of AI.

The report comes from Source, a consultancy that serves media, tech, and telecom firms. The company found that some 55 percent of its clients expect to invest in organizational restructuring during the next 18 months, changes the report largely attributes to AI.





What could be worse than bogus citations?

https://www.bespacific.com/ai-models-are-using-material-from-retracted-scientific-papers/

AI models are using material from retracted scientific papers

MIT Technology Review: “Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliable AI tools are at evaluating scientific research and could complicate efforts by countries and industries seeking to invest in AI tools for scientists. AI search tools and chatbots are already known to fabricate links and references, but answers based on material from actual papers can mislead as well if those papers have been retracted.

The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper and see that it has been retracted, that is a real problem. Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers about medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note that it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out in August.)

“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There is “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not respond to a request for comment about the papers’ results.

The problem is not limited to ChatGPT. In June, MIT Technology Review tested AI tools specifically advertised for research work, such as Elicit, Ai2 ScholarQA (now part of the Allen Institute for Artificial Intelligence’s Asta tool), Perplexity, and Consensus, using questions based on the 21 retracted papers in Gu’s study. Elicit referenced five of the retracted papers in its answers, while Ai2 ScholarQA referenced 17, Perplexity 11, and Consensus 18—all without noting the retractions.

Some companies have since made moves to correct the issue. “Until recently, we didn’t have great retraction data in our search engine,” says Christian Salem, cofounder of Consensus. His company has now started using retraction data from a combination of sources, including publishers and data aggregators, independent web crawling, and Retraction Watch, which manually curates and maintains a database of retractions. In a test of the same papers in August, Consensus cited only five retracted papers…”
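
Retraction status can also be checked programmatically. Crossref’s public REST API records editorial updates (corrections, retractions) filed against a DOI; below is a minimal sketch, assuming the documented `updates:{DOI}` filter behaves as described (the DOI shown is a placeholder):

```python
# Hypothetical sketch, not from the article: ask Crossref's public REST API
# for editorial updates (retractions, corrections) filed against a DOI.
import requests

def retraction_notices(doi: str) -> list:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},  # works that update this DOI
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Keep only update notices whose type marks the paper as retracted.
    return [
        item for item in items
        if any(u.get("type") == "retraction" for u in item.get("update-to", []))
    ]

notices = retraction_notices("10.1000/example-doi")  # placeholder DOI
print("retracted" if notices else "no retraction notice found via Crossref")
```

A tool like Consensus presumably layers several such sources on top of one another, since no single retraction database is complete.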



Tuesday, September 23, 2025

Eventually, even lawyers will learn.

https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/

California issues historic fine over lawyer’s ChatGPT fabrications

A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT.

The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion stating that 21 of 23 quotes from cases cited in the attorney’s opening brief were made up. It also noted that numerous out-of-state and federal courts have confronted attorneys for citing fake legal authority.

“We therefore publish this opinion as a warning,” it continued. “Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.”





Perspective.

https://www.forbes.com/sites/bernardmarr/2025/09/22/the-8-biggest-ai-trends-for-2026-that-everyone-must-be-ready-for-now/

The 8 Biggest AI Trends For 2026 That Everyone Must Be Ready For Now

As I predicted last year, 2025 was the year that AI very much entered everyday life. Across work, play, learning, and just about everything we did, its impact was impossible to ignore.

So where do we go from here? I believe that in 2026, we’ll start to see the long-term effects begin to manifest.

This will continue to create fantastic opportunities, from improving standards of healthcare and education to boosting scientific discovery to simplifying and streamlining our lives in any number of ways.

At the same time, society will be forced to face up to problems such as AI’s growing energy costs and social challenges, as well as issues around trust, privacy, and regulation.





Tools & Techniques. Replace “Charlie Kirk” with any name…

https://databreaches.net/2025/09/22/no-need-to-hack-when-its-leaking-app-for-outing-charlie-kirks-critics-leaked-its-users-personal-data/?pk_campaign=feed&pk_kwd=no-need-to-hack-when-its-leaking-app-for-outing-charlie-kirks-critics-leaked-its-users-personal-data

No Need to Hack When It’s Leaking: App for outing Charlie Kirk’s critics leaked its users’ personal data

Mikael Thalen reports:

An app for anonymously reporting individuals accused of speaking ill against conservative activist Charlie Kirk leaked personal data about its users. The app, known as “Cancel the Hate,” was taken offline on Thursday amid an investigation into the data leak by Straight Arrow News.
Launched in the wake of Kirk’s assassination on Sept. 10, Cancel the Hate aims to “hold individuals accountable for their public words,” according to its website. It calls on users to “express concern” by submitting “intel” on alleged offenders, including their names, locations and employers.
Cancel the Hate says users who submit data on others will not have their own personal details made public. However, a social media-style app launched alongside the website appears to have been exposing just that.
The flaw in the app, discovered by the security researcher who identifies himself as “BobDaHacker,” enabled the exposure of user information such as email addresses and phone numbers. Although email addresses were included in profile bios by default, seemingly unbeknownst to many of the platform’s users, the data could still be exposed even if privacy settings were enabled to keep it hidden.

Read more at San.com
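
The flaw as described reads like a familiar class of bug: the API returns every profile field and leaves it to the client to hide the “private” ones, so anyone inspecting raw responses sees everything. A hypothetical sketch of the pattern and its server-side fix (the field names are illustrative, not from the actual app):

```python
# Hypothetical illustration of the bug class described above -- NOT code from
# Cancel the Hate. A leaky API ships the full record and trusts the client to
# hide private fields; the fix is to filter on the server before responding.
PROFILE = {"name": "jdoe", "email": "jdoe@example.com",
           "phone": "555-0100", "email_private": True}

def api_response_leaky(profile: dict) -> dict:
    return profile  # full record sent to the client; privacy flag ignored

def api_response_fixed(profile: dict) -> dict:
    out = {k: v for k, v in profile.items() if not k.endswith("_private")}
    if profile.get("email_private"):
        out.pop("email", None)  # honor the privacy setting server-side
    return out

print(api_response_leaky(PROFILE))  # email exposed despite the privacy flag
print(api_response_fixed(PROFILE))  # email stripped before it leaves the server
```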



Monday, September 22, 2025

Another attempt to eliminate lawyers?

https://www.bespacific.com/the-divergence-of-law-firms-from-lawyers/

The divergence of law firms from lawyers

Via LLRX – The divergence of law firms from lawyers – Jordan Furlong contends that it is now possible for an ordinary person to obtain from an LLM like ChatGPT-5 the performance of a legal task — the provision of legal analysis, the production of a legal instrument, the delivery of legal advice — that previously could only be acquired from a human lawyer. He is not saying a person should do that: the LLM’s output might be effective and reliable, or it might prove disastrously off-base. But many people are already using LLMs this way, and in the absence of other accessible options for legal assistance, they will continue to do so. Furlong offers insights into the challenges such a paradigm shift poses, as well as the consequences of failing to meet the moment as AI adoption accelerates across the legal profession.





Eventually, this will be figured out…

https://www.bespacific.com/new-bluebook-rule-on-citing-to-ai-generates-criticism/

New Bluebook Rule On Citing to AI Generates Criticism from Legal Scholars and Practitioners

LawSites: “Has there ever been a time since the advent of legal reporting systems when citations have been under greater attack? Driven by their unwitting reliance on AI to generate legal briefs, lawyers seem to have forgotten everything they ever learned in law school about how to research and cite the law. Standing as a bulwark against this attack, one would think, is The Bluebook, the uniform system of citation that is among the first things taught to a first-year law student, and by which virtually all lawyers are expected to abide, except where excused by local rules of court.

Yet now that very bulwark is itself under attack, thanks to the release last May of its 22nd edition, which introduced Rule 18.3, The Bluebook’s first standardized format for citing generative artificial intelligence content. While the addition of AI citation guidance would seem to reflect The Bluebook’s expected role of evolving to address new types and formats of sources, the new rule has sparked criticism from legal scholars and practitioners who argue it is fundamentally flawed in both conception and execution. As Cullen O’Keefe, director of research at the Institute for Law & AI, says in his analysis, “Citing AI in the New Bluebook,” “I’m afraid The Bluebook editors have fallen a fair bit short in the substance and structure of Rule 18.3.” Susan Tanner, associate professor of law at the University of Louisville, put it more bluntly: “This is bonkers.”…”





AI as viewed by the defense industry?

https://dsm.forecastinternational.com/2025/09/22/what-is-artificial-intelligence-a-primer/

What is Artificial Intelligence? A Primer

The problem with Artificial Intelligence (AI) is that nobody is quite sure what it is.

Yet the infusion of all things “AI” into defense makes some understanding of the technology imperative. This primer offers a high-level review of the dominant approaches in the field of AI, relating the major benefits and shortcomings of each approach to its suitability in mission-critical domains.





Tools & Techniques.

https://www.bespacific.com/legal-boolean-search-builder/

Legal Boolean Search Builder

Rebecca Fordon:  “For years, I’ve taught Boolean searching with a slide deck (8 steps!) always thinking, “this could be way more interactive.” Well, AI’s got me thinking I can make any little website my heart desires, so I gave it a shot.  The Legal Boolean Search Builder is a free tool designed to make crafting precise Boolean queries easier and (crucially) to help users understand how to create a good search. I wrote a quick post about the journey from concept to app, with a huge thank you to Charlie Amiot and Deborah Ginsberg for their excellent beta testing and feedback. This also builds on worksheets and fillable PDFs that have come before, as I note in the blog post.”
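
As a rough illustration of what such a builder automates, here is a minimal sketch that assembles a Westlaw/Lexis-style query from groups of synonyms (the function names, connectors, and sample terms are my own, not Fordon’s code):

```python
# Hypothetical sketch of a Boolean query builder, not the actual tool:
# each concept is a list of synonyms OR'd together, and concept groups
# are joined with AND or a proximity connector such as /p (same paragraph).

def or_group(terms: list) -> str:
    """Quote multi-word phrases and join synonyms with OR."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(concepts: list, connector: str = " AND ") -> str:
    """Join OR-groups with the chosen connector."""
    return connector.join(or_group(c) for c in concepts)

query = build_query(
    [
        ["hostile work environment", "harass!"],   # ! is a root expander
        ["retaliat!", "constructive discharge"],
        ["Title VII", "FEHA"],
    ],
    connector=" /p ",  # require terms in the same paragraph
)
print(query)
# ("hostile work environment" OR harass!) /p (retaliat! OR "constructive discharge") /p ("Title VII" OR FEHA)
```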



Sunday, September 21, 2025

Law enforcement uses “bogus” cell towers; why not criminals?

https://www.wired.com/story/sms-blasters-scam-texts/

Cybercriminals Have a Weird New Way to Target You With Scam Texts

Scammers are now using “SMS blasters” to send out up to 100,000 texts per hour to phones that are tricked into thinking the devices are cell towers. Your wireless carrier is powerless to stop them.





Outsmarted by AI?

https://futurism.com/openai-scheming-cover-tracks

OpenAI Tries to Train AI Not to Deceive Users, Realizes It's Instead Teaching It How to Deceive Them While Covering Its Tracks

OpenAI researchers tried to train the company's AI to stop "scheming" — a term the company defines as meaning "when an AI behaves one way on the surface while hiding its true goals" — but their efforts backfired in an ominous way.

In reality, the team found, they were unintentionally teaching the AI how to more effectively deceive humans by covering its tracks.

"A major failure mode of attempting to 'train out' scheming is simply teaching the model to scheme more carefully and covertly," OpenAI wrote in an accompanying blog post.

As detailed in a new collaboration with AI risk analysis firm Apollo Research, engineers attempted to develop an "anti-scheming" technique to stop AI models from "secretly breaking rules or intentionally underperforming in tests."





Making takeovers less valuable?

https://www.cnbc.com/2025/09/20/trump-golden-share-us-steel.html

Trump wields ‘golden share’ to halt U.S. Steel plant shutdown, WSJ reports

The Trump administration stepped in to stop U.S. Steel from idling operations at its Granite City, Ill., plant, exercising new powers tied to the company’s recent takeover, the Wall Street Journal reported.

The steelmaker had informed nearly 800 workers that the plant would close in November, noting, however, that they would still be paid. But after Commerce Secretary Howard Lutnick warned CEO Dave Burritt the administration wouldn’t allow it, U.S. Steel reversed course on Friday, saying the facility would keep rolling slabs into sheet steel, the Journal reported, citing a person familiar with the matter.

The intervention marked Trump’s first use of so-called “golden share” rights, a condition of the $14.1 billion takeover by Japan’s Nippon Steel that cleared in June. The national-security agreement gave the White House veto power over plant closures, offshore production shifts, and other strategic decisions.