Tuesday, March 24, 2026

A dialog with politicians? Scary! (Imagine an AI Trump!)

https://www.schneier.com/blog/archives/2026/03/team-mirai-and-democracy.html

Team Mirai and Democracy

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.



Sunday, March 22, 2026

Should there be a similar line for legal misconduct?

https://www.tandfonline.com/doi/full/10.1080/08989621.2026.2645390#abstract

Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers

In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations might qualify as a provable instance of research misconduct under U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabrication of the data (i.e., citations) because they did not check the GenAI’s output for veracity and accuracy. Other types of problematic citations, such as bibliometrically incorrect citations or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior, but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations could be regarded as research misconduct in certain cases will hopefully encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders, and professional societies, the scholarly community should work on establishing, promoting, and enforcing standards for responsible use of AI in research, including standards pertaining to citation practices.
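The misconduct standard above turns on whether the researcher checked the GenAI output at all. As an illustration only (this is not the paper's method), a first-pass screen of a reference list could validate each citation's DOI syntactically before resolving it against a registry such as doi.org:

```python
import re

# Modern DOIs begin with the "10." directory indicator, a numeric
# registrant code, then a suffix; this matches Crossref's recommended
# screening pattern for catching obviously malformed identifiers.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def plausible_doi(doi: str) -> bool:
    """Syntactic screen only; a passing DOI must still be resolved
    (e.g. against https://doi.org/) to confirm the work exists."""
    return bool(DOI_PATTERN.match(doi.strip()))

def screen_citations(dois):
    """Partition a reference list's DOIs into plausible and malformed."""
    plausible, malformed = [], []
    for d in dois:
        (plausible if plausible_doi(d) else malformed).append(d)
    return plausible, malformed
```

A syntactically valid DOI can still be hallucinated, so this screen only catches the crudest fabrications; the resolution step (and checking that the resolved record matches the claimed title and authors) is what actually verifies the citation.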





Who should be looking out for you? Your doctor, a nurse, or the guy from IT?

https://www.atlantis-press.com/proceedings/tfol-25/126022211

Surveillance Medicine and the Law

Artificial intelligence is quickly becoming embedded in healthcare systems around the world. As this happens, the promise of efficiency, predictability, and personalisation of care is frequently presented as a moral imperative. However, a growing body of evidence shows that AI-driven healthcare technologies can systematically undermine core principles of medical and legal ethics and potentially breach fundamental human rights. This study explores the deployment of AI in healthcare - specifically predictive algorithms, triage bots, and data-driven diagnostics - and how these technologies risk infringing upon the right to health and the right to non-discrimination.

Through the lens of critical legal studies, this study interrogates how these systems and technologies replicate and automate existing forms of inequality while hiding behind a veil of neutral language and innovation. Drawing upon case studies including UnitedHealth, Babylon Health, and DeepMind, it demonstrates how algorithmic health tools can exacerbate systemic issues such as racism, gender bias, and digital exclusion. It also explores how existing legal systems fail to challenge these harmful effects and instead reinforce power dynamics and data commodification under the veil of progress.

By critically re-examining the legal governance of AI in healthcare, this study calls for a reassertion of ethical and rights-based principles in emerging health technology regulation, focused not on market efficiency, but on ethical principles like equality, autonomy and human dignity.





AIs don’t think. (Yet)

https://journal.ijtrp.com/index.php/ijtrp/article/view/21

The Legal and Ethical Implications of AI in Judicial Decision-Making: Challenges to Fair Trial and Due Process

A paradigm shift in the discussion of law, justice, and governance has resulted from the incorporation of artificial intelligence (AI) into judicial systems. Although AI has been successful in increasing productivity, simplifying case management, and helping judges with research, using it to make decisions in court presents serious ethical and legal issues. At the heart of this discussion are the constitutional protections of due process and fair trial, which protect individual rights from caprice and guarantee openness, impartiality, and accountability in decision-making.

This paper examines the ethical and legal ramifications of using AI in court decision-making. It looks at how algorithmic tools that promise objectivity may nonetheless replicate or even worsen systemic biases present in training data, threatening the idea of equality before the law. The "black box problem," in which algorithms generate results without comprehensible reasoning, challenges the constitutional requirement of reasoned judgments and undermines public confidence in the legal system. Furthermore, there are serious concerns about who is responsible for incorrect or unfair results when accountability is distributed between algorithmic systems and human judges.

Using a comparative methodology, the study examines developments in China, India, the United States, and the European Union, highlighting both the advantages and disadvantages of AI-driven adjudication: the US controversy surrounding COMPAS risk-assessment tools, China's smart-court experiment, and India's cautious use of AI through SUPACE. It contends that although AI can increase judicial efficiency, human conscience, empathy, and interpretive reasoning, all essential components of justice, cannot be separated from adjudication. The paper ends by suggesting safeguards such as regulatory frameworks, transparency standards, and a "human-in-the-loop" principle, so that technological innovation strengthens rather than undermines the accessibility, fairness, and credibility of judicial systems.





Simple and effective?

https://www.tmmm.tsk.tr/publication/researches/24-Emerging_Disruptive_TechnologiesandTerrorism.pdf#page=105

TERROR-AI-SM: THE FUTURE OF ARTIFICIAL INTELLIGENCE IN THE HANDS OF TERRORISTS

Terrorism remains one of the major challenges to international security. The past decade has witnessed a rapid convergence of two forces with profound implications for global stability: the accelerating capabilities of artificial intelligence and the persistent, adaptive threat of terrorism. What was once the realm of science fiction — autonomous machines making battlefield decisions, synthetic media manipulating public opinion — is now technically feasible and increasingly accessible to non-state actors. This convergence is already reshaping the threat landscape, compelling governments and international institutions to reconsider and adapt their counterterrorism frameworks in order to address the realities of an era where terrorism and cutting-edge technology are inextricably linked.



Friday, March 20, 2026

Due care (do we care?)

https://pogowasright.org/are-warrants-enough/

Are Warrants Enough?

Privacy law scholar Professor Daniel Solove writes:

Are Warrants Enough?

Why Fourth Amendment Warrants Can’t Meet the Moment

This year, in Chatrie v. United States, the U.S. Supreme Court will decide whether geofence warrants are valid under the Fourth Amendment. The geofence warrant at issue in the case allowed the government to obtain account data from Google drawn from hundreds of millions of users. It’s the equivalent of a digital dragnet, which I’ve long argued contravenes the core purpose of the Fourth Amendment. The Framers of the Constitution hated dragnet searches . . . actually, to be more precise, HATED them.
If the Supreme Court doesn’t find geofence warrants to be invalid, then it’s hard to imagine much left of the already-desiccated Fourth Amendment. But Chatrie is just the tip of the iceberg. Regular warrants under the Fourth Amendment—those that are properly circumscribed based on particularized suspicion—are also not strong enough for our times.

Read more at DanielSolove.substack.com






Governments frequently want to “do something” but this ain’t the way.

https://www.theregister.com/2026/03/20/jlr_bailout_cmc/

Jaguar Land Rover's cyber bailout sets worrying precedent, watchdog warns

Lack of clear criteria risks encouraging firms to lean on state support instead of worrying about insurance



Thursday, March 19, 2026

AI will always choose vanilla?

https://www.bespacific.com/homogenizing-effect-of-large-language-models-on-human-expression-and-thought/

The homogenizing effect of large language models on human expression and thought

Sourati Z, Ziabari AS, Dehghani M. The homogenizing effect of large language models on human expression and thought. Trends in Cognitive Sciences, 2026; online March 11, 2026. No paywall.

Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet, as large language models (LLMs) become deeply embedded in people’s lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts. Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.





Learn how to use tools before trying them out.

https://nypost.com/2026/03/18/tech/dancing-robot-bounced-from-restaurant-after-scaring-patrons/

Dancing robot seen dragged away by panicked restaurant staff after going haywire in bizarre video: ‘Actually scary’

This machine rages against you.

The rise of the machines could be closer than you think. A humanoid bot had to be “bounced” from a California restaurant after smashing tableware during a dance routine gone awry, as seen in viral X footage.

The smashing machine had reportedly been tasked with performing for patrons at the Haidilao hotpot restaurant in San Jose.





Just for fun…

https://www.adamsmith.org/blog/even-more-useful-maxims

Even More Useful Maxims



Wednesday, March 18, 2026

Apparently drone warfare is here to stay.

https://www.bloomberg.com/news/articles/2026-03-17/ai-drone-software-stock-jumps-700-in-best-ipo-since-newsmax?embedded-checkout=true

AI Drone Software Stock Jumps 520% in Best IPO Since Newsmax

Swarmer Inc. shares skyrocketed as much as 700% on Tuesday, making the artificial intelligence drone software company’s debut the best trading by a US stock since Newsmax Inc.’s blockbuster entry nearly a year ago.

Swarmer is a software company, not a drone manufacturer. The company’s artificial intelligence technology enables drones to deploy and coordinate as swarms, like a bird flock, at scale. Its platform has been deployed in Ukraine, with more than 100,000 real-world missions in active combat environments since April 2024, according to its regulatory filing.

Swarmer’s opening-day rally comes as investors weigh their bets on defense spending, and as the industry sees the emergence of software-driven, autonomous, unmanned systems, reflecting a broader move in modern warfare toward low-cost weapons.





Tools & Techniques.

https://techcrunch.com/2026/03/02/nearby-glasses-new-app-alerts-you-wearing-smart-glasses-surveillance-meta-snap-bluetooth/

A new app alerts you if someone nearby is wearing smart glasses

One of the chief problems with “luxury surveillance” devices, like smart glasses with baked-in video recording cameras, is that they often look indistinguishable from regular eyewear, meaning you might be recorded without knowing it.

But now there is an app that can detect and alert you when someone nearby is wearing smart glasses, or potentially other always-recording tech.

The Android app, aptly named Nearby Glasses, constantly scans for signals emitted by nearby Bluetooth-enabled tech, such as wearable devices made by Meta (and Oakley) and Snap.
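The article describes the detection approach only at a high level: Bluetooth advertisement packets carry a manufacturer identifier, which can be matched against a watchlist of wearable vendors. A minimal sketch of that matching step, with made-up company IDs (the real identifiers used by Meta, Oakley, or Snap hardware are not given in the article):

```python
# Hypothetical Bluetooth SIG company identifiers, for illustration only.
# A real detector would use the actual IDs found in the manufacturer-
# specific data field of each vendor's advertisement packets.
WATCHLIST = {
    0x1234: "Example smart glasses (vendor A)",
    0xABCD: "Example smart glasses (vendor B)",
}

def flag_wearables(advertisements, watchlist=WATCHLIST):
    """Given (device_address, manufacturer_id) pairs parsed from BLE
    advertisement packets, return devices whose manufacturer ID is on
    the watchlist, with a human-readable label for the alert."""
    return [
        (addr, watchlist[mfg_id])
        for addr, mfg_id in advertisements
        if mfg_id in watchlist
    ]
```

The hard part in practice is not this lookup but the scanning itself: devices rotate their MAC addresses, and a manufacturer ID alone cannot distinguish a vendor's glasses from its other Bluetooth products, so an app like this can only flag "possible" recording devices.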





Tools & Techniques.

https://www.zdnet.com/article/optmeowt-free-privacy-tool-stop-sites-selling-data/

This free privacy tool makes it super easy to see which sites are selling your data

There's a service called Global Privacy Control that offers extensions for, and links to, browsers and apps that support the cause. The service began in 2020 and was inspired by the California Consumer Privacy Act, which gives California residents the right to opt out of the sale of their personal data. Currently, GPC is available for:
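Under the hood, the GPC signal is simple: participating browsers attach a `Sec-GPC: 1` request header (and expose `navigator.globalPrivacyControl` to scripts), and it is up to each site to honor it. A minimal server-side check might look like this sketch (not GPC's reference code):

```python
def gpc_opt_out(headers: dict) -> bool:
    """True if the request carries a Global Privacy Control opt-out.
    Per the GPC proposal, participating browsers send `Sec-GPC: 1`
    with every request; header names are matched case-insensitively."""
    for name, value in headers.items():
        if name.lower() == "sec-gpc":
            return value.strip() == "1"
    return False
```

A site that respects the signal would then suppress any sale or sharing of that visitor's data for the request; under the CCPA, honoring GPC as a valid opt-out is what gives the header legal teeth in California.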



Tuesday, March 17, 2026

Just because…

https://www.adamsmith.org/blog/useful-maxims

Useful maxims

https://www.adamsmith.org/blog/more-useful-maxims

More Useful Maxims





Modern war? (Or do hackers just see an opportunity?)

https://www.theregister.com/2026/03/16/cybercrime_iran_war_245_percent_rise/

Cybercrime has skyrocketed 245% since the start of the Iran war

However, not all of the malicious traffic originated from Iran. The embattled theocracy accounted for only 14 percent of the source IPs, compared to Russia (35 percent) and China (28 percent). This doesn't necessarily mean that the threat groups carrying out the cyber activities are based in these two countries. Both China and Russia have historically turned a blind eye toward digital-crime networks and services operating out of their countries – just as long as the attacks don't target Chinese and Russian government agencies or organizations.



Monday, March 16, 2026

Action, faster.

https://www.theverge.com/ai-artificial-intelligence/895030/palantirs-maven-smart-system-is-an-ai-powered-kanban-board-for-killing-people

Palantir’s Maven Smart System is an AI-powered Kanban board for killing people.

The company recently hosted a series of speakers at AIPCon, including Cameron Stanley, the Department of War’s Chief Digital and Artificial Intelligence Officer, who gave a chilling demo of Palantir’s Maven Smart System, where anyone or anything can be targeted for a military strike with a “Left click, right click, left click.”