Saturday, June 21, 2025

A new form of “reach out and touch someone.”

https://www.nytimes.com/2025/06/20/technology/us-tech-europe-microsoft-trump-icc.html?unlocked_article_code=1.QU8.L7r1.jIQ5Sl08LtOG&smid=nytcore-android-share

Europe’s Growing Fear: How Trump Might Use U.S. Tech Dominance Against It

To comply with a Trump executive order, Microsoft recently helped suspend the email account of an International Criminal Court prosecutor in the Netherlands who was investigating Israel for war crimes.

When President Trump issued an executive order in February against the chief prosecutor of the International Criminal Court for investigating Israel for war crimes, Microsoft was suddenly thrust into the middle of a geopolitical fight.

For years, Microsoft had supplied the court — which is based in The Hague in the Netherlands and investigates and prosecutes human rights breaches, genocides and other crimes of international concern — with digital services such as email. Mr. Trump’s order abruptly threw that relationship into disarray by barring U.S. companies from providing services to the prosecutor, Karim Khan.

Soon after, Microsoft, which is based in Redmond, Wash., helped turn off Mr. Khan’s I.C.C. email account, freezing him out of communications with colleagues just a few months after the court had issued an arrest warrant for Prime Minister Benjamin Netanyahu of Israel for his country’s actions in Gaza.

Microsoft’s swift compliance with Mr. Trump’s order, reported earlier by The Associated Press, shocked policymakers across Europe. It was a wake-up call for a problem far bigger than just one email account, stoking fears that the Trump administration would leverage America’s tech dominance to penalize opponents, even in allied countries like the Netherlands.





I wonder if the lawyers are AI?

https://thehackernews.com/2025/06/qilin-ransomware-adds-call-lawyer.html

Qilin Ransomware Adds "Call Lawyer" Feature to Pressure Victims for Larger Ransoms

The threat actors behind the Qilin ransomware-as-a-service (RaaS) scheme are now offering legal counsel for affiliates to put more pressure on victims to pay up, as the cybercrime group intensifies its activity and tries to fill the void left by its rivals.

The new feature takes the form of a "Call Lawyer" feature on the affiliate panel, per Israeli cybersecurity company Cybereason.





Perhaps this is the best way to employ AI?

https://www.zdnet.com/article/ai-agents-win-over-professionals-but-only-to-do-their-grunt-work-stanford-study-finds/

AI agents win over professionals - but only to do their grunt work, Stanford study finds

AI agents are one of the buzziest trends in Silicon Valley, with tech companies promising big productivity gains for businesses. But do individual workers actually want to use them?

A new study from Stanford University shows the answer may be yes -- as long as they automate mundane tasks and don't encroach too far on human agency.

Titled "Future of Work with AI Agents," the study set out to move beyond hype around AI agents to understand how, exactly, these tools can be practically integrated into the day-to-day routines of professionals. While previous studies have investigated the impact of AI agents on specific job categories, like software engineering and IT, the Stanford researchers analyzed individual categories of tasks, allowing them "to better capture the nuanced, open-ended, and contextual nature of real-world work," they noted in their report.



Thursday, June 19, 2025

Perspective.

https://www.gartner.com/en/newsroom/press-releases/2025-06-17-gartner-announces-top-data-and-analytics-predictions

Gartner Announces the Top Data & Analytics Predictions

Gartner, Inc. has announced the top data and analytics (D&A) predictions for 2025 and beyond. Among the top predictions, half of business decisions will be augmented or automated by AI agents; executive AI literacy will drive higher financial performance; and critical failures in managing synthetic data will risk AI governance, model accuracy and compliance.





Explains a lot, if not everything.

https://www.upworthy.com/why-does-it-seem-like-dumb-people-are-in-power

Philosophy expert answers the question: Why does it seem like dumb people are always in power?

As the old song by The Who goes, “Meet the new boss, same as the old boss.” It’s a sentiment many of us feel every time a new mayor, governor, or president takes office, and we can’t help but feel that we deserve someone better. In a country with so many brilliant scientists, business people, educators, and public policy experts, why do the least impressive of us seem to rise to power?

Philosophy expert Julian de Medeiros, a popular TikToker and Substack blogger, recently wrestled with this question, and it must have been on a lot of people’s minds because the video received over 4.2 million views. “Why does it seem like so many people in power are so dumb? It's like, why can't we get a better class of leaders?” he asked.

Ultimately, de Medeiros believes that power and intellect are often at odds. “I've thought about it a bit more, and I think this is my thesis: that power is inherently anti-intellectual. Because what does intellect do? Intellect questions power. It speaks truth to power. It critiques power. And power doesn't like that,” he says. “And so power has to speak to the lowest common denominator. It dumbs everything down."





We don’t need our brains when AI does all the work.

https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

Brain activity much lower when using AI chatbots, MIT boffins find

EEG and recall tests suggest people who use ChatGPT to write essays aren't learning much

Using AI chatbots actually reduces activity in the brain versus accomplishing the same tasks unaided, and may lead to poorer fact retention, according to a new preprint study out of MIT.

Seeking to understand how the use of LLM chatbots affects the brain, a team led by MIT Media Lab research scientist Dr. Nataliya Kosmyna hooked up a group of Boston-area college students to electroencephalogram (EEG) headsets and gave them 20 minutes to write a short essay. One group was directed to write without any outside assistance, a second group was allowed to use a search engine, and a third was instructed to write with the assistance of OpenAI's GPT-4o model.  The process was repeated four times over several months.

While not yet peer reviewed, the pre-publication research results suggest a striking difference between the brain activity of the three groups and the corresponding creation of neural connectivity patterns. 



Wednesday, June 18, 2025

“Give us everything and we’ll pick out what we want” isn’t good enough? Imagine that.

https://pogowasright.org/doj-seeks-more-time-on-tower-dumps/

DOJ Seeks More Time on Tower Dumps

Seamus Hughes reports:

After previously receiving a ninety day reprieve, today the Justice Department is asking for another month to decide if they will appeal a Mississippi federal judge’s sweeping ruling that determined so-called “tower dumps” are unconstitutional.
Tower Dumps are a frequently-used law enforcement technique of pulling large swaths of data from cellular towers, which would include location information about innocent individuals within the area of the tower, to find alleged criminal activity. 
FBI agents in Mississippi had initially submitted four sealed search warrants for a tower dump’s data as part of an investigation into a string of shootings and car thefts involving an unnamed violent gang. U.S. Magistrate Judge Andrew Harris repeatedly declined to authorize the search warrants, even after the DOJ submitted a follow-up memorandum clarifying their position, and a conference call was held with Judge Harris to address his concerns.
The February order marked the first instance in which a judge ruled against law enforcement’s use of tower dumps, extending the scope of an August ruling in a federal appeals court that found the use of a geofence warrant — in which law enforcement sends a request to Google for the location data of phones at a specific location  — was unconstitutional.

Read more at CourtWatch.





Bias is protected by the First Amendment?

https://www.aljazeera.com/news/2025/6/17/elon-musks-x-sues-new-york-to-block-social-media-hate-speech-law

Elon Musk’s X sues New York to block social media hate speech law

In its complaint, X said the law forces firms to disclose ‘highly sensitive and controversial speech’ that is protected under the US Constitution’s First Amendment.

Deciding what content is acceptable on social media platforms “engenders considerable debate among reasonable people about where to draw the correct proverbial line”, X said, adding “this is not a role that the government may play”.

The complaint also quoted a letter from two legislators who sponsored the law, which said X and Musk in particular had a “disturbing record” on content moderation “that threatens the foundations of our democracy”.

New York’s law requires social media companies to disclose steps they take to eliminate hate on their platforms, and to report their progress. Civil fines could reach $15,000 per violation per day.





Modern war includes the digital front.

https://cyberscoop.com/iran-bank-sepah-cyberattack/

Iran’s Bank Sepah disrupted by cyberattack claimed by pro-Israel hacktivist group

Bank Sepah’s website is offline following a hacktivist group’s claimed attack on the Iran state-owned bank. The group, known as Predatory Sparrow — or Gonjeshke Darande in Persian — said in a social media post early Tuesday that it “destroyed the data of the Islamic Revolutionary Guard Corps’ Bank Sepah.”

Iran-focused media outlets report Bank Sepah branches are closed, customers are unable to access accounts and payment processing is down. London-based Iran International said Iran’s Fars News Agency confirmed Bank Sepah’s infrastructure was impacted by a cyberattack, resulting in service disruptions.

The attack on one of Iran’s largest financial institutions highlights the growing role of cyber warfare in the escalating conflict between Israel and Iran, and has had immediate consequences for the country’s critical infrastructure.



(Related)

https://thehackernews.com/2025/06/iran-restricts-internet-access-to.html

Iran Slows Internet to Prevent Cyber Attacks Amid Escalating Regional Conflict

Iran has throttled internet access in the country in a purported attempt to hamper Israel's ability to conduct covert cyber operations, days after the latter launched an unprecedented attack on the country, escalating geopolitical tensions in the region.



Tuesday, June 17, 2025

True.

https://www.schneier.com/blog/archives/2025/06/where-ai-provides-value.html

Where AI Provides Value

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.



Monday, June 16, 2025

How to use deepfake?

https://www.wbur.org/news/2025/06/16/biden-ai-robocall-new-hampshire-steven-kramer-not-guilty

N.H. jury acquits consultant behind AI robocalls mimicking Biden on all charges

Kramer, who owns a firm specializing in get-out-the-vote projects, argued that the primary was a meaningless straw poll unsanctioned by the DNC, and therefore the state’s voter suppression law didn’t apply. The defense also said he didn’t impersonate a candidate because the message didn’t include Biden’s name, and Biden wasn’t a declared candidate in the primary.

Jurors apparently agreed, acquitting him of 11 felony voter suppression charges, each punishable by up to seven years in prison. The 11 candidate impersonation charges each carried a maximum sentence of a year in jail.



Sunday, June 15, 2025

How would you blur recall without destroying AI usefulness? (Did Llama get 58 percent wrong?)

https://www.understandingai.org/p/metas-llama-31-can-recall-42-percent

Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book

In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.

For example, in its December 2023 lawsuit against OpenAI, the New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”

But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.

The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright.
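The core idea of such memorization studies can be sketched in a few lines: prompt a model with a prefix from a book and check whether its continuation reproduces the true next span verbatim. The toy sketch below is hypothetical and not the paper's actual protocol (the study used a more careful probabilistic extraction criterion); `generate` stands in for any LLM completion call, and the names are illustrative only.

```python
# Hypothetical sketch: estimate how often a "model" reproduces a source
# text verbatim, by sampling prefixes and comparing the continuation to
# the true next span of words.

def recall_rate(text: str, generate, prefix_len: int = 50,
                span_len: int = 50, stride: int = 100) -> float:
    """Fraction of sampled positions where the model's continuation
    exactly matches the source text, word for word."""
    words = text.split()
    hits = trials = 0
    for start in range(0, len(words) - prefix_len - span_len, stride):
        prefix = " ".join(words[start:start + prefix_len])
        truth = words[start + prefix_len:start + prefix_len + span_len]
        completion = generate(prefix).split()[:span_len]
        trials += 1
        if completion == truth:
            hits += 1
    return hits / trials if trials else 0.0

# Toy "model" that has fully memorized the text: it simply returns
# whatever follows the prompt in the source.
book = " ".join(f"token{i}" for i in range(1000))

def memorized(prompt: str) -> str:
    idx = book.index(prompt) + len(prompt)
    return book[idx:].strip()

print(recall_rate(book, memorized))  # 1.0 for a perfectly memorizing model
```

A model with no memorization would score near zero on the same test, which is why headline figures like "42 percent" depend heavily on how strict the matching criterion is.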





Probably not going to happen.

https://scholarlycommons.law.case.edu/jolti/vol16/iss2/3/

Policing in Pixels

Artificial Intelligence (AI) is transforming border security and law enforcement, with facial recognition technology (FRT) at the forefront of this shift. Widely adopted by U.S. federal agencies such as the FBI, ICE, and CBP, FRT is increasingly used to monitor both citizens and migrants, often without their knowledge. While this technology promises enhanced security, its early-stage deployment raises significant concerns about reliability, bias, and ethical data sourcing. This paper examines how FRT is being used at the U.S.-Mexico border and beyond, highlighting its potential to disproportionately target vulnerable groups and infringe on constitutional rights.

The paper provides an overview of AI’s evolution into tools like FRT that analyze facial features to identify individuals. It discusses how these systems are prone to errors—such as false positives—and disproportionately affect racial minorities. The analysis then delves into constitutional implications under the Fourth Amendment’s protection against unreasonable searches and seizures and the Fourteenth Amendment’s guarantee of equal protection. This framework is particularly relevant when considering cases like those involving Clearview AI and Rite Aid, which resulted in severe consequences for both companies and exemplify how improper FRT deployment can lead to significant privacy violations and reinforce societal disparities.

This paper advocates for a multi-layered approach to address these challenges. It argues for halting FRT deployment until comprehensive safeguards are established, including bias mitigation measures, uniform procedures, and increased transparency. By reevaluating the relationship between law enforcement and citizens in light of emerging technologies, this paper underscores the urgent need for policies that balance national security with individual rights.





Because, science fiction!

https://alsun.journals.ekb.eg/article_432675.html?lang=en

The Ethical Dilemmas of the “Three Laws of Robotics” in Isaac Asimov’s Runaround (1942) and Little Lost Robot (1947)

This paper examines the ethical dilemmas presented by Isaac Asimov’s Three Laws of Robotics in his stories Runaround (1942) and Little Lost Robot (1947). The Laws are analyzed and reevaluated within the framework of the ethical theories of Immanuel Kant’s deontology and Jeremy Bentham’s utilitarianism. The analysis demonstrates the ethical conflicts between deontology’s rigid adherence to universal moral absolutes and utilitarianism’s emphasis on maximizing societal welfare. It does so by illustrating Asimov’s critical insights into contemporary debates on artificial intelligence ethics and regulation, prompting a re-evaluation of human responsibility, human-robot trust, and the boundaries of robotic autonomy. The stories reveal the limitations of Asimov’s Laws in addressing real-world complexities, exposing their inability to guarantee consistent ethical behavior in artificial intelligence systems. Furthermore, this study introduces a novel perspective on the interplay between ethical theory and speculative fiction, underscoring the practical value of Asimov’s narratives in shaping forward-thinking approaches to robotic legislation and ethical programming.

Runaround https://archive.org/details/Astounding_v29n01_1942-03_dtsg0318/page/n93/mode/2up