Saturday, April 19, 2025

I think we need to look at Denver crosswalks… (Watch the video)

https://www.theregister.com/2025/04/19/us_crosswalk_button_hacking/

Hacking US crosswalks to talk like Zuck is as easy as 1234

Crosswalk buttons in various US cities were hijacked over the past week or so to – rather than robotically tell people it's safe to walk or wait – instead emit the AI-spoofed voices of Jeff Bezos, Elon Musk, and Mark Zuckerberg.

And it's likely all thanks to a freely available service app and poorly secured equipment.

In Seattle this week, some crosswalks started playing AI-generated messages spoofing tech tycoon Jeff Bezos. In one video clip, a synthetic Bezos voice can be heard introducing himself from the push-button box, and claiming the crossing is sponsored by Amazon Prime.

Then it veered into parody-turned-social commentary: "You know, please don’t tax the rich, otherwise all the other billionaires will move to Florida too. Wouldn’t it be terrible if all the rich people left Seattle or got Luigi-ed and then the normal people could afford to live here again?"





Something new!

https://pogowasright.org/judge-rules-blanket-search-of-cell-tower-data-unconstitutional/

Judge Rules Blanket Search of Cell Tower Data Unconstitutional

Matthew Gault reports:

This article was produced in collaboration with 404 Media, a new independent technology investigations site.
A judge in Nevada has ruled that “tower dumps”—the law enforcement practice of grabbing vast troves of private personal data from cell towers—is unconstitutional. The judge also ruled that the cops could, this one time, still use the evidence they obtained through this unconstitutional search.
[…]
A Nevada man, Cory Spurlock, is facing charges related to dealing marijuana and a murder-for-hire scheme. Cops used a tower dump to connect his cellphone with the location of some of the crimes he is accused of. Spurlock’s lawyers argued that the tower dump was an unconstitutional search and that the evidence obtained during it should not be admissible. The cops got a warrant to conduct the tower dump but argued it wasn’t technically a “search” and therefore wasn’t subject to the Fourth Amendment.
U.S. District Judge Miranda M. Du rejected this argument, but wouldn’t suppress the evidence. “The Court finds that a tower dump is a search and the warrant law enforcement used to get it is a general warrant forbidden under the Fourth Amendment,” she said in a ruling filed on April 11. “That said, because the Court appears to be the first court within the Ninth Circuit to reach this conclusion and the good faith exception otherwise applies, the Court will not order any evidence suppressed.”

Read more at Court Watch.





Should be interesting.

https://pogowasright.org/university-of-iowa-student-files-privacy-lawsuit-boycotts-class-until-school-improves-online-security/

University of Iowa student files privacy lawsuit, boycotts class until school improves online security

Brooklyn Draisey reports:

A University of Iowa student who feels his online security, and that of all UI students who use Zoom to access classes, has been compromised has brought a lawsuit against the university, alleging negligence and seeking relief and an order for the university to secure its online instruction.
Marc Muklewicz, a 46-year-old criminology student close to finishing his degree, said an unauthorized video taken of him in an online class and posted on social media led him to learn that anyone with the link to a UI Zoom course could access it without needing to log in with university credentials or with any other form of verification.
The UI has refused to remedy the situation and ensure his and other students’ data is private, Muklewicz said, and he is currently refusing to attend classes until he knows that when he logs into a course, he is safe in doing so.

Read more at Iowa City Press-Citizen.





Not sure where this goes…

https://pogowasright.org/state-privacy-regulators-announce-formation-of-privacy-supergroup/

State Privacy Regulators Announce Formation of Privacy ‘Supergroup’

Lauren N. Watson of Ogletree, Deakins, Nash, Smoak & Stewart, P.C. writes:

The concept of the “supergroup” may have originated with rock and roll, but on April 16, 2025, privacy practitioners in the United States learned that a whole new type of supergroup has been formed. Far from being a reboot of Cream or the Traveling Wilburys, however, this latest supergroup is composed of eight state privacy regulators from seven states (each of which has enacted a comprehensive state privacy law), who announced they have formed a bipartisan coalition to “safeguard the privacy rights of consumers” by coordinating their enforcement efforts relating to state consumer privacy laws.
Quick Hits
  • State attorneys general from California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon, as well as the California Privacy Protection Agency, announced the formation of the “Consortium of Privacy Regulators.”
  • While the creation of the Consortium does not reflect a closer alignment in the contents of the actual consumer privacy laws themselves, it will likely heighten regulators’ abilities to enforce those elements of consumer privacy law that are common across states.
  • Businesses may wish to take this announcement as a sign to revisit their consumer privacy policies and practices, lest they find themselves subject to additional scrutiny by this new regulatory “supergroup.”

Read more at The National Law Review.





Tools & Techniques.

https://www.makeuseof.com/ai-checker-verify-human-written-essays/

This AI App Will Help You Prove You Didn’t Use AI to Write Your Paper

The company is also testing a new feature — Grammarly Authorship. This function lets the app track what the user writes on a particular Google Doc or Word file, as if leaving a paper trail to show where the written text came from.

It can distinguish which parts of the document were typed out and which ones were copied and pasted from a web-based or other unknown source. It can also determine if a portion of the text has been reworded using Grammarly’s generative AI capabilities. This tracking will help prove that you (or a person) at least typed the paper you’re submitting to your professor.



Friday, April 18, 2025

Snapping up children is easier?

https://theconversation.com/ice-can-now-enter-k-12-schools-heres-what-educators-should-know-about-student-rights-and-privacy-253519

ICE can now enter K-12 schools − here’s what educators should know about student rights and privacy

United States federal agents tried to enter two Los Angeles elementary schools on April 7, 2025, and were denied entry, according to the Los Angeles Times. The agents were apparently seeking contact with five students who had allegedly entered the country without authorization.

The Trump administration has been targeting foreign-born college students and professors for deportation since February 2025. This was the first known attempt to target younger students since the U.S. Department of Homeland Security in January rescinded a 2011 policy that had limited immigration enforcement actions in locations deemed sensitive by the government such as hospitals, churches and schools.

“Criminals will no longer be able to hide in America’s schools and churches to avoid arrest,” the department said on Jan. 21, 2025.





What else might be ‘verified’ by a facial scan?

https://www.bbc.com/news/articles/cjr75wypg0vo

Discord's face scanning age checks 'start of a bigger shift'

Discord is testing face scanning to verify some users' ages in the UK and Australia.

The social platform, which says it has over 200 million monthly users around the world, was initially used by gamers but now has communities on a wide range of topics including pornography.

The UK's online safety laws mean platforms with adult content will need to have "robust" age verification in place by July.

And social media expert Matt Navarra told the BBC "this isn't a one-off - it's the start of a bigger shift".

"Regulators want real proof, and facial recognition might be the fastest route there," he said.





"So let it be written, so let it be done" (It’s like an AI commandment.)

https://gizmodo.com/a-scanning-error-created-a-fake-science-term-now-ai-wont-let-it-die-2000590659

A Scanning Error Created a Fake Science Term—Now AI Won’t Let It Die

AI trawling the internet’s vast repository of journal articles has reproduced an error that’s made its way into dozens of research papers—and now a team of researchers has found the source of the issue.

It’s the question on the tip of everyone’s tongues: What the hell is “vegetative electron microscopy”? As it turns out, the term is nonsensical.

It sounds technical—maybe even credible—but it’s complete nonsense. And yet, it’s turning up in scientific papers, AI responses, and even peer-reviewed journals. So… how did this phantom phrase become part of our collective knowledge?

As painstakingly reported by Retraction Watch in February, the term may have been pulled from parallel columns of text in a 1959 paper on bacterial cell walls. The AI seemed to have jumped the columns, reading two unrelated lines of text as one contiguous sentence, according to one investigator.
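The column-jumping failure described above is easy to reproduce in miniature. The sketch below is illustrative only (the column text is invented for demonstration, not taken from the 1959 paper), but it shows how reading each printed line straight across two parallel columns fuses unrelated phrases into one spurious "sentence":

```python
# Minimal illustration of the column-jumping extraction error.
# The column text below is invented, not quoted from the 1959 paper.

left_column = [
    "cultures in the",
    "vegetative",
    "state were fixed",
]
right_column = [
    "and examined by",
    "electron microscopy",
    "after staining",
]

def read_columns_correctly(left, right):
    # Proper reading order: finish the left column, then the right one.
    return " ".join(left + right)

def read_across_columns(left, right):
    # The erroneous extraction: each printed line is read straight across
    # both columns, fusing unrelated text from the two columns.
    return " ".join(f"{l} {r}" for l, r in zip(left, right))

print(read_columns_correctly(left_column, right_column))
print(read_across_columns(left_column, right_column))
# Only the erroneous row-major reading contains the phantom phrase
# "vegetative electron microscopy".
```

Once a fused phrase like this lands in even one digitized paper, models trained on that text can repeat it as if it were an established term, which is how the phrase propagated.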



(Related)

https://www.bespacific.com/russia-seeds-chatbots-with-lies-any-bad-actor-could-game-ai-the-same-way/

Russia seeds chatbots with lies. Any bad actor could game AI the same way.

Washington Post [no paywall]: “Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform. Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation. Earlier this year, when researchers asked 10 leading chatbots about topics targeted by false Russian messaging, such as the claim that the United States was making bioweapons in Ukraine, a third of the responses repeated those lies. Moscow’s propaganda inroads highlight a fundamental weakness of the AI industry: Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content.  But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor…”





Perspective.

https://blogs.lse.ac.uk/businessreview/2025/04/15/an-epistemic-solution-to-do-away-with-our-illusion-of-ai-objectivity/

An epistemic solution to do away with our illusion of AI objectivity

AI-generated output has the potential to be much more than answers if it includes a summary of relevant sources, indicators of disagreement between them and a confidence score based on source credibility. Jakub Drábik calls this epistemic responsibility. He writes that AI tools don’t always have to be right, but they must be more transparent, explaining why they think they are right and what might make them wrong.

As a historian working with textual sources, I am trained to ask not only what a statement claims, but where it comes from, why it exists and how it can be evaluated. In recent months, I have found myself turning to LLMs to assist in my work—only to encounter answers that sound right but cannot be traced, supported or interrogated. When I ask for references, I am given hallucinations. When I ask for balance, I am given rhetoric. When I ask for method, I get metaphor. No matter how careful the prompt, the system has no true way to check its own statements.

This is not a matter of prompting technique. It is structural. Today’s AI systems are trained on vast amounts of language, but not on knowledge in the epistemic sense: grounded, verifiable, source-aware and accountable. What results is a surface of plausibility without a spine of justification.



Thursday, April 17, 2025

No longer running rampant?

https://www.bespacific.com/the-center-begins-to-hold/

The center begins to hold.

Follow up to Law Firms Made Deals With Trump. Now He Wants More From Them – Robert H. Hubbell: Five law professors come to the aid of law firms targeted by Trump. Five preeminent law professors from Boston University, Cornell, and Georgetown, all of whom are experts in ethics, have filed a brief in support of Wilmer Hale (one of the Trump targets that refused to capitulate). All I can say is, “Shut the front door! Do not provoke ethics experts.” Joyce Vance (of Civil Discourse on Substack) ably summarizes the arguments made by the five law professors. In short, the capitulating law firms face ethical conflicts whenever they represent a client in a case involving the government. Worse, as summarized by Joyce Vance, “Those firms may have violated federal anti-bribery laws.” “[T]he law firms may fall within what counts as bribery under federal law: offering or promising something of value to a federal official in hopes of influencing an official act—here, withdrawing the executive orders against the firms.” It appears the Capitulating Firms did not carefully consider the ramifications (or legality) of giving Trump a political victory in exchange for forestalling official government action…

The “agreements” between the Capitulating Firms and Trump have been a mystery. Who were the parties to the agreements? What were the terms of the agreements? Are they enforceable? Josh Marshall of Talking Points Memo appears to have discovered the answers to the above questions—and they aren’t pretty. It appears that the “agreements” are little more than the press releases by the firms and Trump’s posts on Truth Social (descriptions that do not always match). Worse, it appears that the agreements were negotiated by Boris Epshteyn—who does not represent the US government. Rather, Epshteyn is Trump’s personal attorney. So, if Epshteyn is Trump’s personal attorney, then Trump (rather than the government) is the counterparty to the agreement.
It appears that Trump’s personal lawyer was convincing the firms to give up hundreds of millions of dollars in pro bono services to benefit Trump’s political standing in exchange for the cancellation of an official government action, i.e., the executive orders. If true, such a corrupt exchange is why the five ethics experts suggest that the deals may violate federal anti-bribery statutes. See Josh Marshall, Talking Points Memo, For Big Law: Is That Your Final Answer? At some point in the future, representatives of the law firms will be called to explain the nature, terms, and intent of the agreements. Those firms should already be preparing those answers with the assistance of high-powered criminal defense lawyers. I am not suggesting that there was a criminal violation, but as cautious attorneys, the Capitulating Firms know that you don’t want to get close to that line. But that is where the amateurish, reckless deals placed once-proud and respected firms—too close to a line that could end careers and personal liberties…





AI in education.

https://voiceofsandiego.org/2025/04/14/as-bot-students-continue-to-flood-in-community-colleges-struggle-to-respond/

As ‘Bot’ Students Continue to Flood In, Community Colleges Struggle to Respond

Community colleges have been dealing with an unprecedented phenomenon: fake students bent on stealing financial aid funds. While it has caused chaos at many colleges, some Southwestern faculty feel their leaders haven’t done enough to curb the crisis.

When the spring semester began, Southwestern College professor Elizabeth Smith felt good. Two of her online classes were completely full, boasting 32 students each. Even the classes’ waitlists, which fit 20 students, were maxed out. That had never happened before.

“Teachers get excited when there’s a lot of interest in their class. I felt like, ‘Great, I’m going to have a whole bunch of students who are invested and learning,’” Smith said. “But it quickly became clear that was not the case.”

By the end of the first two weeks of the semester, Smith had whittled down the 104 students enrolled in her classes, including those on the waitlist, to just 15. The rest, she’d concluded, were fake students, often referred to as bots.

The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.





Interesting.

https://news.bloomberglaw.com/ip-law/ohio-judge-strikes-down-social-media-law-restricting-teen-access

Ohio Judge Strikes Social Media Law Restricting Teen Access

A federal judge struck down Ohio’s law limiting teen social media use, marking another court win for the tech industry group NetChoice fighting similar restrictions nationwide.

Judge Algenon L. Marbley granted a permanent injunction against the Social Media Parental Notification Act in a Wednesday decision for the US District Court for the Southern District of Ohio. The law required platforms to verify whether their users are at least 16 and demanded parental consent for younger users. The decision enjoins Ohio Attorney General Dave Yost (R) from enforcing the law.

NetChoice—whose members include Meta Platforms Inc., X Corp., and Alphabet Inc.'s Google—alleged in its Jan. 2024 complaint that the law violated First and 14th Amendment protections by requiring Ohioans to hand over personal data in order to access content. Marbley granted a preliminary injunction of the law Feb. 12, 2024.





I can see them directing traffic but doubt their ability to perform all law enforcement functions.

https://interestingengineering.com/innovation/ai-thai-robocop-patrols-streets?group=test_a

Cyborg 1.0: Thai Robocop patrols streets with 360° eyes, live face-tracking power

The robot, named “AI Police Cyborg 1.0,” made its debut during the Songkran festival in Nakhon Pathom province. Developed collaboratively by Provincial Police Region 7, Nakhon Pathom Provincial Police, and Nakhon Pathom Municipality, this Robocop-style unit is equipped with advanced surveillance and threat detection technologies.

AI Police Cyborg 1.0 uses onboard AI to immediately process and analyze data by integrating real-time data from aerial drone footage and local CCTV networks. Rapid reaction coordination is made possible by the robot’s in-built 360-degree smart cameras, which are immediately connected to the province’s Command and Control Center and backed by video analytics software, according to The Nation.



Wednesday, April 16, 2025

The Privacy Foundation at the University of Denver Sturm College of Law presents:

Children’s Data Privacy Law

Friday, April 18, 2025 12:00 Noon – 2:30 p.m. Please register – lunch will be served

To register, email Kristen.Dermyer@du.edu or link here: https://udenver.qualtrics.com/jfe/form/SV_e3V74CHX8RM1y8C





and it’s about time.

https://www.eff.org/deeplinks/2025/04/privacy-map-how-states-are-fighting-location-surveillance

Privacy on the Map: How States Are Fighting Location Surveillance

Your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. It can expose where you work, where you pray, who you spend time with, and, sometimes dangerously, where you seek healthcare. In today’s world, your most private movements are harvested, aggregated, and sold to anyone with a credit card. For those seeking reproductive or gender-affirming care, or visiting a protest or an immigration law clinic, this data is a ticking time bomb.

The good news? Lawmakers in California, Massachusetts, Illinois, and elsewhere are stepping up, leading the way to protect privacy and ensure that healthcare access and other exercises of our rights remain safe from invasive surveillance.





Those who do not study history are doomed to repeat it.

https://www.bespacific.com/an-ars-technica-history-of-the-internet-part-1/

An Ars Technica history of the Internet, part 1

Ars Technica is doing a three-part series on the history of the internet; here’s part one, which covers ARPANET, IMPs, TCP/IP, RFCs, DNS, CompuServe, etc. “It was the first time that autocomplete had ruined someone’s day.” “In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office. The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.”



Tuesday, April 15, 2025

Everything has a limit…

https://www.bespacific.com/large-language-models-for-legal-interpretation-dont-take-their-word-for-it/

Large Language Models for Legal Interpretation? Don’t Take Their Word for It

Waldon, Brandon and Schneider, Nathan and Wilcox, Ethan and Zeldes, Amir and Tobia, Kevin, Large Language Models for Legal Interpretation? Don’t Take Their Word for It (February 03, 2025). Georgetown Law Journal, Vol. 114 (forthcoming), Available at SSRN:  https://ssrn.com/abstract=5123124  or http://dx.doi.org/10.2139/ssrn.5123124

Recent breakthroughs in statistical language modeling have impacted countless domains, including the law. Chatbot applications such as ChatGPT, Claude, and DeepSeek – which incorporate ‘large’ neural network–based language models (LLMs) trained on vast swathes of internet text – process and generate natural language with remarkable fluency. Recently, scholars have proposed adding AI chatbot applications to the legal interpretive toolkit. These suggestions are no longer theoretical: in 2024, a U.S. judge queried LLM chatbots to interpret a disputed insurance contract and the U.S. Sentencing Guidelines. We assess this emerging practice from a technical, linguistic, and legal perspective. This Article explains the design features and product development cycles of LLM-based chatbot applications, with a focus on properties that may promote their unintended misuse – or intentional abuse – by legal interpreters. Next, we argue that legal practitioners run the risk of inappropriately relying on LLMs to resolve legal interpretative questions. We conclude with guidance on how such systems – and the language models which underpin them – can be responsibly employed alongside other tools to investigate legal meaning.”





Do we have any friends left?

https://www.bespacific.com/eu-issues-us-bound-staff-with-burner-phones-over-spying-fears/

EU issues US-bound staff with burner phones over spying fears

The European Commission is giving some of its US-bound staff burner phones and basic laptops to avoid cybersecurity risks, the Financial Times reported. Brussels typically reserves such measures for trips to Ukraine and China over fears of Russian or Chinese government espionage. Worries about American spying are the latest sign of worsening transatlantic ties in President Donald Trump’s second term: The White House’s dismissal of traditional alliances has prompted some commentators to argue Washington has effectively become Europe’s adversary. The continent must “rediscover its economic and military strength in order to survive in this new world – one defined by the naked pursuit of power,” a Der Spiegel editorial argued last month.”

FT.com – European Commission officials heading to IMF and World Bank spring meetings advised to travel with basic devices [no paywall] – “The European Commission is issuing burner phones and basic laptops to some US-bound staff to avoid the risk of espionage, a measure traditionally reserved for trips to China. Commissioners and senior officials travelling to the IMF and World Bank spring meetings next week have been given the new guidance, according to four people familiar with the situation. They said the measures replicate those used on trips to Ukraine and China, where standard IT kit cannot be brought into the countries for fear of Russian or Chinese surveillance. “They are worried about the US getting into the commission systems,” said one official. The treatment of the US as a potential security risk highlights how relations have deteriorated since the return of Donald Trump as US president in January. Trump has accused the EU of having been set up to “screw the US” and announced 20 per cent so-called reciprocal tariffs on the bloc’s exports, which he later halved for a 90-day period. At the same time, he has made overtures to Russia, pressured Ukraine to hand over control over its assets by temporarily suspending military aid and has threatened to withdraw security guarantees from Europe, spurring a continent-wide rearmament effort. “The transatlantic alliance is over,” said a fifth EU official. The White House and the US National Security Council did not immediately reply to requests for comment. Brussels and Washington are locked in sensitive talks in a number of areas where it would suit either side to gather information about the other. Maroš Šefčovič, EU trade commissioner, is holding talks with commerce secretary Howard Lutnick in Washington on Monday in an effort to resolve an escalating trade war. The EU has delayed its retaliatory measures against €21bn of US exports that it approved because of US tariffs on steel and aluminium. 
The US has also attacked the EU’s regulation of its technology companies and claimed that Brussels is gagging free speech and rigging elections, such as the controversial exclusion of a presidential candidate in Romania for benefiting from a surge in support from TikTok accounts. Three commissioners are travelling to Washington for the IMF and World Bank meetings from April 21-26: Valdis Dombrovskis, economy commissioner; Maria Luís Albuquerque, the financial services chief; and Jozef Síkela, who handles development assistance. The Commission confirmed that it had recently updated its security advice for the US, but said that no specific instructions about the use of burner phones were given in writing. It said the bloc’s diplomatic service had been involved, as it routinely is in such updates. Officials said the guidance for all staff travelling to the US included a recommendation that they should turn off phones at the border and place them in special sleeves to protect them from spying if left unattended. The advice was unsurprising, according to Luuk van Middelaar, director of the Brussels Institute for Geopolitics, a think-tank. “Washington is not Beijing or Moscow, but it is an adversary that is prone to use extra-legal methods to further its interests and power.” Van Middelaar recalled that the administration of President Barack Obama faced allegations of spying on the phone of then German chancellor Angela Merkel in 2013. “Democrat administrations use the same tactics”, he said. “It is an acceptance of reality by the Commission.” There is an additional risk when travelling to the US, where border staff have the right to seize visitors’ phones and computers and check their content. Tourists and visiting academics from Europe have been refused entry to the country after having social media comments or documents critical of the Trump administration’s policies on their phones or laptops. 
In March, the French government said a French researcher had been denied entry and sent back to France because he had expressed a “personal opinion” on US research policy. Commission officials have been told to ensure their visas are in their diplomatic “laissez-passer” documents rather than their national passports…”





Another source of AI data?

https://www.morningstar.com/news/business-wire/20250415432215/artificial-intelligence-fuels-rise-of-hard-to-detect-bots-that-now-make-up-more-than-half-of-global-internet-traffic-according-to-the-2025-imperva-bad-bot-report

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

Thales, the leading global technology and security provider, today announced the release of the 2025 Imperva Bad Bot Report, a global analysis of automated bot traffic across the internet. This year’s report, the 12th annual research study, reveals that generative artificial intelligence (AI) is revolutionizing the development of bots, allowing less sophisticated actors to launch a higher volume of bot attacks with increased frequency. Today’s attackers are also leveraging AI to scrutinize their unsuccessful attempts and refine techniques to evade security measures with heightened efficiency, amidst a growing Bots-As-A-Service (BaaS) ecosystem of commercialized bot services.

This press release features multimedia. View the full release here:  https://www.businesswire.com/news/home/20250415432215/en/





Tools & Techniques.

https://www.bespacific.com/google-files-new-patent-on-personal-history-based-search/

Google Files New Patent On Personal History-Based Search

Search Engine Journal: “Google recently filed a new patent for a way to provide search results based on a user’s browsing and email history. The patent outlines a new way to search within the context of a search engine, within an email interface, and through a voice-based assistant (referred to in the patent as a voice-based dialog system). A problem that many people have is that they can remember what they saw but they can’t remember where they saw it or how they found it. The new patent, titled Generating Query Answers From A User’s History, solves that problem by helping people find information they’ve previously seen within a webpage or an email by enabling them to ask for what they’re looking for using everyday language such as “What was that article I read last week about chess?” The problem the invention solves is that traditional search engines don’t enable users to easily search their own browsing or email history using natural language. The invention works by taking a user’s spoken or typed question, recognizing that the question is asking for previously viewed content, and then retrieving search results from the user’s personal history (such as their browser history or emails). In order to accomplish this, it uses filters like date, topic, or device used. What’s novel about the invention is the system’s ability to understand vague or fuzzy natural language queries and match them to a user’s specific past interactions, including showing the version of a page as it looked when the user originally saw it (a cached version of the web page)…” [What about privacy issues?]
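The retrieval idea the patent describes (recognize a history-seeking question, then filter the user's own browsing and email history by topic, date, or source) can be sketched roughly as follows. All names and data here are invented for illustration and are not drawn from Google's actual implementation:

```python
# Rough sketch of history-scoped retrieval with simple filters.
# The history records and field names are hypothetical examples.
from datetime import date

history = [
    {"title": "Chess openings explained", "kind": "web",
     "viewed": date(2025, 4, 10), "topics": {"chess", "games"}},
    {"title": "Quarterly budget", "kind": "email",
     "viewed": date(2025, 3, 2), "topics": {"finance"}},
]

def search_history(history, topic, since=None, kind=None):
    """Return titles of previously viewed items matching a topic,
    optionally restricted by date and source (web page vs. email)."""
    results = []
    for item in history:
        if topic not in item["topics"]:
            continue
        if since is not None and item["viewed"] < since:
            continue
        if kind is not None and item["kind"] != kind:
            continue
        results.append(item["title"])
    return results

# "What was that article I read last week about chess?" would be parsed
# into something like: topic="chess", since=<a week ago>.
print(search_history(history, "chess", since=date(2025, 4, 5)))
# ['Chess openings explained']
```

The hard part the patent claims, of course, is the natural-language front end that turns a fuzzy question into these structured filters; the lookup itself is the easy half.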



Monday, April 14, 2025

Politics is expensive.

https://sloanreview.mit.edu/article/how-to-make-friends-and-influence-potus/

How to Make Friends and Influence POTUS

Lobbying in U.S. politics has always had a transactional side. But executives must understand the new realities of the current environment — and AI-powered influence peddling.





Something else to fear.

https://thehill.com/opinion/technology/5244901-ai-systems-legal-bounds/

It’s game over for people if AI gains legal personhood

The modern conversation about artificial intelligence often gets stuck on the wrong questions. We fret about how to contain artificial intelligence, to control it, to ensure it doesn’t break free from human oversight and endanger us. Yet, as the technology accelerates, we risk missing the deeper, more urgent issue: the legal environment in which AI systems will operate.

The real threat isn’t that AI will escape our control, but that AI systems will quietly accumulate legal rights — like owning property, entering contracts, or holding financial assets — until they become an economic force that humans cannot easily challenge. If we fail to set proper boundaries now, we risk creating systems that distort fundamental human institutions, including ownership and accountability, in ways that could ultimately undermine human prosperity and freedom.



Sunday, April 13, 2025

Interesting.

https://pogowasright.org/our-privacy-act-lawsuit-against-doge-and-opm-why-a-judge-let-it-move-forward/

Our Privacy Act Lawsuit Against DOGE and OPM: Why a Judge Let It Move Forward

On April 9, Adam Schwartz of EFF wrote:

Last week, a federal judge rejected the government’s motion to dismiss our Privacy Act lawsuit against the U.S. Office of Personnel Management (OPM) and Elon Musk’s “Department of Government Efficiency” (DOGE). OPM is disclosing to DOGE agents the highly sensitive personal information of tens of millions of federal employees, retirees, and job applicants. This disclosure violates the federal Privacy Act, a watershed law that tightly limits how the federal government can use our personal information.

We represent two unions of federal employees: the AFGE and the AALJ. Our co-counsel are Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm LLC.

We’ve already explained why the new ruling is a big deal, but let’s take a deeper dive into the Court’s reasoning.





Perspective.

https://www.technologyreview.com/2025/04/11/1114914/generative-ai-is-learning-to-spy-for-the-us-military/

Generative AI is learning to spy for the US military

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders.





Should ethics change because of new technology?

https://journals.ezenwaohaetorc.org/index.php/TIJAH/article/view/3128

ETHICS IN THE AGE OF ARTIFICIAL INTELLIGENCE: RECONCEPTUALISING THE TRADITIONAL ETHICAL THEORIES

The rapid evolution of artificial intelligence (AI) has fundamentally disrupted traditional ethical theories, necessitating a re-conceptualization of moral frameworks in the age of AI. As AI systems grow increasingly sophisticated, they challenge long-standing notions of moral agency, responsibility, and ethical decision-making. This research examines how the three waves of AI—Predictive AI, Generative AI, and Agentic AI—reshape ethical paradigms. Predictive AI, with its data-driven algorithms, exposes inherent biases and raises critical questions about justice, fairness, and accountability in automated decision-making. Generative AI, capable of creating synthetic content, disrupts traditional concepts of authorship, authenticity, and intellectual property, forcing a re-evaluation of ethical norms in creativity and ownership. Agentic AI, with its capacity for autonomous action, pushes the boundaries of moral responsibility, challenging humans to reconsider the ethical implications of delegating decision-making to machines. These developments demand a rethinking of traditional ethical theories such as utilitarianism, deontology, and virtue ethics, which were designed for human moral agents but fall short in addressing the unique moral dilemmas posed by AI. The research highlights the limitations of these classical theories in dealing with AI's opacity, autonomy, and responsibility. The study examines key ethical challenges, including the issue of moral agency and algorithmic bias, and proposes the need for a new ethical framework that accounts for the collaborative nature of human-AI interactions, emphasizing distributed moral responsibility and the importance of human oversight in ensuring ethical outcomes.





Perfecting AI?

https://cris.unibo.it/handle/11585/1013658

Argumentation in AI and law

This chapter introduces AI & Law approaches to legal argumentation, showing how such approaches provide formal models that succeed in capturing key aspects of legal reasoning. The chapter is organised as follows. Sections 2, 3, and 4 introduce the motivations and developments of research on argumentation within AI & Law. Section 2 looks into the notion of formal inference, and shows how deduction-based approaches fail to account for important aspects of legal reasoning. Section 3 introduces the idea of defeasibility, and argues that an adequate model of legal reasoning should take it into account. Section 4 presents some AI & Law models of argumentation. The remaining sections are dedicated to introducing a formal account of argumentation based on AI & Law research. Section 5 defines and exemplifies the notion of an argument. Section 6 discusses conflicts between arguments and their representation in argument graphs. Section 7 defines methods for assessing the status of arguments and evaluating their conclusions, and Section 8 summarises the steps from premises to dialectically supported conclusions.
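The argument-status machinery the abstract mentions (argument graphs, evaluating which conclusions are dialectically supported) is commonly formalised via Dung-style abstract argumentation. As a rough illustration — a sketch of one standard semantics, not the chapter's own formalism — here is the grounded extension: an argument is accepted when every attack on it is defeated by an already-accepted argument.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of F(S) = {a | every attacker of a is
    attacked by some member of S} — Dung's grounded semantics."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((d, b) in attacks for d in s)
                      for b in attackers[a])}
        if nxt == s:        # fixed point reached
            return s
        s = nxt

# Toy argument graph: c attacks b, and b attacks a.
args = {"a", "b", "c"}
atk = {("b", "a"), ("c", "b")}
# c is unattacked, so it is accepted; c defeats b, which reinstates a.
print(sorted(grounded_extension(args, atk)))  # ['a', 'c']
```

The reinstatement pattern in the toy graph (a defeated argument's conclusion revived when its attacker is itself defeated) is the defeasibility that Section 3 argues deduction-based approaches cannot capture.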