Sunday, March 22, 2026

Should there be a similar line for legal misconduct?

https://www.tandfonline.com/doi/full/10.1080/08989621.2026.2645390#abstract

Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers

In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations might qualify as a provable instance of research misconduct under the U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabrication of the data (i.e. citations) because they did not check the GenAI’s output for veracity and accuracy. Other types of problematic citations such as bibliometrically incorrect citations, or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior, but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations could be regarded as research misconduct in certain cases will hopefully encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders and professional societies, the scholarly community should work on establishing, promoting, and enforcing standards for responsible use of AI in research, including standards pertaining to citation practices.





Who should be looking out for you? Your doctor, a nurse, or the guy from IT?

https://www.atlantis-press.com/proceedings/tfol-25/126022211

Surveillance Medicine and the Law

Artificial intelligence is quickly becoming embedded in healthcare systems around the world. As this happens, the promise of efficiency, predictability, and personalisation of care is frequently presented as a moral imperative. However, a growing body of evidence shows that AI-driven healthcare technologies can systematically undermine core principles of medical and legal ethics and, potentially, breach fundamental human rights. This study explores the deployment of AI in healthcare - specifically predictive algorithms, triage bots, and data-driven diagnostics - and how these technologies risk infringing upon the right to health and the right to non-discrimination.

This study aims, through the lens of critical legal studies, to interrogate how these systems and technologies replicate and automate existing forms of inequality while hiding behind a veil of neutral language and innovation. Drawing upon case studies including UnitedHealth, Babylon Health, and DeepMind, the study demonstrates how algorithmic health tools can exacerbate systemic issues such as racism, gender biases and digital exclusion. It also aims to explore how existing legal systems fail to challenge these harmful effects, perpetually reinforcing power dynamics and data commodification under the guise of progress.

By critically re-examining the legal governance of AI in healthcare, this study calls for a reassertion of ethical and rights-based principles in emerging health technology regulation, focused not on market efficiency, but on ethical principles like equality, autonomy and human dignity.





AIs don’t think. (Yet)

https://journal.ijtrp.com/index.php/ijtrp/article/view/21

The Legal and Ethical Implications of AI in Judicial Decision-Making: Challenges to Fair Trial and Due Process

A paradigm shift in the discussion of law, justice, and governance has resulted from the incorporation of artificial intelligence (AI) into judicial systems. Even though AI has been successful in increasing productivity, simplifying case management, and helping judges with research, using it to make decisions in court presents serious ethical and legal issues. The constitutional protections of due process and fair trial, which protect individual rights from caprice and guarantee openness, impartiality, and accountability in decision-making, are at the heart of this discussion. The ethical and legal ramifications of using AI in court decision-making are examined in this paper. It looks at how the idea of equality before the law may be threatened by algorithmic tools that, despite their promise of objectivity, may replicate or even worsen systemic biases present in training data. The constitutional requirement of reasoned judgments is challenged by the "black box problem," in which algorithms generate results without comprehensible reasoning, undermining public confidence in the legal system. Furthermore, there are serious concerns about who is responsible for incorrect or unfair results when accountability is distributed between algorithmic systems and human judges. The study examines developments in China, India, the United States, and the European Union using a comparative methodology. Both the advantages and disadvantages of AI-driven adjudication are highlighted in the study, ranging from the US controversy surrounding COMPAS risk-assessment tools to China's smart court experiment and India's cautious use of AI through SUPACE. It contends that although artificial intelligence (AI) can increase judicial efficiency, human conscience, empathy, and interpretive reasoning—all of which are essential components of justice—cannot be separated from adjudication. 
In order to ensure that technological innovation does not undermine constitutional values but rather strengthens the accessibility, fairness, and credibility of judicial systems, the paper ends by suggesting safeguards such as regulatory frameworks, transparency standards, and a "human-in-the-loop" principle.





Simple and effective?

https://www.tmmm.tsk.tr/publication/researches/24-Emerging_Disruptive_TechnologiesandTerrorism.pdf#page=105

TERROR-AI-SM: THE FUTURE OF ARTIFICIAL INTELLIGENCE IN THE HANDS OF TERRORISTS

Terrorism remains one of the major challenges to international security. The past decade has witnessed a rapid convergence of two forces with profound implications for global stability: the accelerating capabilities of artificial intelligence and the persistent, adaptive threat of terrorism. What was once the realm of science fiction — autonomous machines making battlefield decisions, synthetic media manipulating public opinion — is now technically feasible and increasingly accessible to non-state actors. This convergence is already reshaping the threat landscape, compelling governments and international institutions to reconsider and adapt their counterterrorism frameworks in order to address the realities of an era where terrorism and cutting-edge technology are inextricably linked.



Friday, March 20, 2026

Due care (do we care?)

https://pogowasright.org/are-warrants-enough/

Are Warrants Enough?

Privacy law scholar Professor Daniel Solove writes:

Are Warrants Enough?

Why Fourth Amendment Warrants Can’t Meet the Moment

This year, in Chatrie v. United States, the U.S. Supreme Court will decide whether geofence warrants are valid under the Fourth Amendment. The geofence warrant at issue in the case was one that allowed the government to obtain from Google the account data of hundreds of millions of users. It's the equivalent of a digital dragnet, which I've long argued contravenes the core purpose of the Fourth Amendment. The Framers of the Constitution hated dragnet searches . . . actually, to be more precise, HATED them.
If the Supreme Court doesn’t find geofence warrants to be invalid, then it’s hard to imagine much left of the already-desiccated Fourth Amendment. But Chatrie is just the tip of the iceberg. Regular warrants under the Fourth Amendment—those that are properly circumscribed based on particularized suspicion—are also not strong enough for our times.

Read more at DanielSolove.substack.com






Governments frequently want to “do something” but this ain’t the way.

https://www.theregister.com/2026/03/20/jlr_bailout_cmc/

Jaguar Land Rover's cyber bailout sets worrying precedent, watchdog warns

Lack of clear criteria risks encouraging firms to lean on state support instead of worrying about insurance



Thursday, March 19, 2026

AI will always choose vanilla?

https://www.bespacific.com/homogenizing-effect-of-large-language-models-on-human-expression-and-thought/

The homogenizing effect of large language models on human expression and thought

Sourati Z, S. Ziabari A, Dehghani M. The homogenizing effect of large language models on human expression and thought. Trends in Cognitive Sciences, 2026; Online March 11, 2026. No paywall.

Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet, as large language models (LLMs) become deeply embedded in people’s lives, they risk standardizing language and reasoning. We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts. Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.





Learn how to use tools before trying them out.

https://nypost.com/2026/03/18/tech/dancing-robot-bounced-from-restaurant-after-scaring-patrons/

Dancing robot seen dragged away by panicked restaurant staff after going haywire in bizarre video: ‘Actually scary’

This machine rages against you.

The rise of the machines could be closer than you think. A humanoid bot had to be “bounced” from a California restaurant after smashing tableware during a dance routine gone awry, as seen in viral X footage.

The smashing machine had reportedly been tasked with performing for patrons at the Haidilao hotpot restaurant in San Jose.





Just for fun…

https://www.adamsmith.org/blog/even-more-useful-maxims

Even More Useful Maxims



Wednesday, March 18, 2026

Apparently drone warfare is here to stay.

https://www.bloomberg.com/news/articles/2026-03-17/ai-drone-software-stock-jumps-700-in-best-ipo-since-newsmax?embedded-checkout=true

AI Drone Software Stock Jumps 700% in Best IPO Since Newsmax

Swarmer Inc. shares skyrocketed as much as 700% on Tuesday, making the artificial intelligence drone software company’s debut the best trading by a US stock since Newsmax Inc.’s blockbuster entry nearly a year ago.

Swarmer is a software company, not a drone manufacturer. The company's artificial intelligence technology enables drones to deploy and coordinate as swarms, like a bird flock, at scale. Its platform has been deployed in Ukraine in more than 100,000 real-world missions in active combat environments since April 2024, according to its regulatory filing.

Swarmer’s opening-day rally comes as investors are weighing their bets on defense spending, as the industry is seeing an emergence of software-driven, autonomous, unmanned systems, reflecting a broader move in modern warfare toward low-cost weapons.





Tools & Techniques.

https://techcrunch.com/2026/03/02/nearby-glasses-new-app-alerts-you-wearing-smart-glasses-surveillance-meta-snap-bluetooth/

A new app alerts you if someone nearby is wearing smart glasses

One of the chief problems with “luxury surveillance” devices, like smart glasses with baked-in video recording cameras, is that they often look indistinguishable from regular eyewear, meaning you might be recorded without knowing it.

But now there is an app that can detect and alert you when someone nearby is wearing smart glasses, or potentially other always-recording tech.

The Android app, aptly named Nearby Glasses, constantly scans for nearby signals emitted by Bluetooth-enabled tech, such as wearable devices made by Meta (and Oakley) and Snap.
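
The detection idea described (scan Bluetooth advertisements, match the manufacturer-specific data against a watchlist of camera-equipped wearable vendors) can be sketched as follows. The company IDs and vendor labels below are illustrative placeholders, not the app's actual watchlist, and the function name is my own:

```python
# Sketch of a BLE-advertisement watchlist check, the core idea behind an
# app like Nearby Glasses. The vendor IDs here are ILLUSTRATIVE ONLY --
# they are not the app's real watchlist or real Bluetooth SIG assignments.

# Hypothetical mapping of Bluetooth "company identifier" values (the keys
# of an advertisement's manufacturer-specific data) to vendor labels.
SMART_GLASSES_VENDORS = {
    0x01AB: "ExampleVendor A (smart glasses)",    # placeholder ID
    0x02CD: "ExampleVendor B (camera wearable)",  # placeholder ID
}

def classify_advertisement(manufacturer_data: dict) -> "str | None":
    """Return a vendor label if any company ID in the advertisement's
    manufacturer-specific data matches the watchlist, else None."""
    for company_id in manufacturer_data:
        if company_id in SMART_GLASSES_VENDORS:
            return SMART_GLASSES_VENDORS[company_id]
    return None
```

A real implementation would feed this from a platform scanning API (on Android, the manufacturer-specific data comes from the scan record of each BLE advertisement) and would likely also use signal strength to estimate whether the device is close enough to matter.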





Tools & Techniques.

https://www.zdnet.com/article/optmeowt-free-privacy-tool-stop-sites-selling-data/

This free privacy tool makes it super easy to see which sites are selling your data

There's a service called Global Privacy Control that offers extensions and links for browsers and apps that support the cause. The service began in 2020, inspired by the California Consumer Privacy Act, which gives California residents the right to opt out of the sale of their data by any business. Currently, GPC is available for:
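
Mechanically, GPC is simple: a participating browser or extension sends the `Sec-GPC: 1` request header (and exposes the same signal to page scripts via `navigator.globalPrivacyControl`), and a site honoring it treats the request as a do-not-sell/do-not-share opt-out. A minimal server-side sketch; the function name and header plumbing are illustrative, not tied to any particular framework:

```python
# Minimal sketch of honoring a Global Privacy Control signal server-side.
# Participating browsers/extensions send the request header "Sec-GPC: 1";
# a compliant site treats that as a do-not-sell opt-out.

def gpc_opt_out(headers: dict) -> bool:
    """True if the request carries a GPC opt-out signal.
    Header names are matched case-insensitively; only the value "1"
    (ignoring surrounding whitespace) counts as an opt-out."""
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"
```

In practice this check would sit in request middleware and gate any data-sale or data-sharing pipeline for that visitor.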



Tuesday, March 17, 2026

Just because…

https://www.adamsmith.org/blog/useful-maxims

Useful maxims

https://www.adamsmith.org/blog/more-useful-maxims

More Useful Maxims





Modern war? (Or do hackers just see an opportunity?)

https://www.theregister.com/2026/03/16/cybercrime_iran_war_245_percent_rise/

Cybercrime has skyrocketed 245% since the start of the Iran war

However, not all of the malicious traffic originated from Iran. The embattled theocracy accounted for only 14 percent of the source IPs, compared to Russia (35 percent) and China (28 percent). This doesn't necessarily mean that the threat groups carrying out the cyber activities are based in these two countries. Both China and Russia have historically turned a blind eye toward digital-crime networks and services operating out of their countries – just as long as the attacks don't target Chinese and Russian government agencies or organizations.



Monday, March 16, 2026

Action, faster.

https://www.theverge.com/ai-artificial-intelligence/895030/palantirs-maven-smart-system-is-an-ai-powered-kanban-board-for-killing-people

Palantir’s Maven Smart System is an AI-powered Kanban board for killing people.

The company recently hosted a series of speakers at AIPCon, including Cameron Stanley, the Department of War’s Chief Digital and Artificial Intelligence Officer, who gave a chilling demo of Palantir’s Maven Smart System, where anyone or anything can be targeted for a military strike with a “Left click, right click, left click.”



Sunday, March 15, 2026

Ethics for all actually.

https://scholarworks.uark.edu/arlnlaw/27/

Ethics Of Artificial Intelligence For Lawyers: Resistance Is Futile: Candor, Supervision, And Fees

In Star Trek: The Next Generation, the Borg deliver their iconic warning to every species they encounter: “Resistance is futile.” The line resonates because it conveys the inevitability that once the Borg arrive, escape is no longer an option.

For lawyers, the duties of candor, supervision, and fairness in fees are just as inescapable. ABA Formal Opinion 512 (“ABA Opinion”) makes clear that, regardless of how powerful artificial intelligence becomes, it cannot relieve attorneys of their obligations. Attorneys must verify what they file, oversee how their colleagues use the technology, and ensure that clients are charged fairly. This installment examines those three pillars, showing how courts and the ABA are making plain that ethical rules still govern.



(Related)

https://scholarworks.uark.edu/arlnlaw/26/

Ethics Of Artificial Intelligence For Lawyers: You Will Be Assimilated: Best Practices For Lawyers Using Artificial Intelligence

This installment explores the best practices for responsible adoption: protecting client confidentiality, addressing AI openly in engagement letters, learning the skill of prompt engineering, and preparing for the workforce changes AI will accelerate. Assimilation may be inevitable, but the terms of assimilation (ethical, careful, client-centered) are still within the control of the profession.





A scary thought that I ain’t thunk yet.

https://scholarship.law.ufl.edu/jtlp/vol30/iss1/2/

Python Hunting: How Laws that Protect the Everglades from the Invasive Burmese Python, Including Eradication Programs, Can Inform the Regulation of Objects Controlled by Artificial Intelligence

This Article explores the surprisingly apt analogy between the Burmese python problem in the Florida Everglades and abandoned objects that are controlled by artificial intelligence (AI). With few natural predators, the invasive Burmese python, which was likely introduced to the Everglades through abandonment by pet owners, has threatened native species with extinction. Objects controlled by AI, which we will likely increasingly share our environment with, such as autonomous taxis and food delivery robots, as well as a variety of objects that are used by the military, may be abandoned by their owners and continue to operate. Over time, these objects may be given increasing levels of agency and learn from their environments, making them potentially more dangerous. These objects are likely to create material losses if allowed to run amok. The Burmese python similarly has agency and has run amok.

Beyond the superficial analogy between these two paradigms, this Article provides an interesting thought journey aimed at finding a precedent to cling to when we predict and analyze a problem that hasn’t fully emerged but is likely on the horizon. Borrowing frameworks from other areas of law when writing atop a blank slate is a time-honored tradition in American law. What is old can be new again, and we have seen—and wrestled with—the essence of this problem before. Unfortunately, we seem to be fighting a losing battle against the pythons in the Everglades. Hopefully, creative solutions, technology and the dedication of resources will cause the tide to turn. Sounding the alarm now about autonomous AI objects can help us predict problems in advance and create mechanisms for the mitigation of losses and ultimate redress when harm occurs, unlike the situation in the Everglades.





For want of a nail…

https://finance.yahoo.com/news/iran-war-could-wreak-havoc-on-farmers-create-a-potential-bottleneck-for-the-entire-ai-story-171240723.html

Iran war could wreak havoc on farmers, create a potential 'bottleneck for the entire AI story'

Earlier this month, Qatar shut down one of the world's largest energy hubs due to drone attacks. That halted production of liquefied natural gas and helium, a byproduct of natural gas extraction. The disruption accounts for about one-third of the global helium supply, according to Bloomberg estimates.

Helium has essential uses, including in magnetic resonance imaging (MRI) and welding, as well as electronics and semiconductor manufacturing, which consumes a large portion of the world's supply. It's crucial for rapidly cooling chips during fabrication to prevent overheating and defects.





It’s like…

https://academic.oup.com/jiplp/advance-article/doi/10.1093/jiplp/jpag018/8509416?guestAccessKey=

Metaphors we judge (AI) by: a rhetorical analysis of artificial copyright disputes

  • This article is a ‘metaphorical’ guide to today’s most pressing artificial intelligence (AI) copyright questions, focusing in particular on the EU and the USA. Is unauthorized training on copyright-protected works permitted? Can AI models copy? And is AI-generated output itself protected? As this article demonstrates, debates on these questions can all be traced back to a handful of crucial metaphors.

  • After all, generative AI is hardly comprehensible without the extensive use of metaphors and analogies. Most notably, AI is systematically conceptualized in human terms such as ‘neural networks’ that ‘learn’, ‘know’ or ‘memorize’. This article aims to demonstrate how such metaphors (unconsciously) influence legal evaluations and even judicial decisions in copyright law.

  • The resulting analysis is particularly relevant to lawyers, judges and artists interested in copyright and its intersection with AI. Yet, it may also appeal to those interested in AI, legal reasoning and language more generally, as metaphors and their (rhetorical) effects are by no means unique to copyright and may be equally relevant in fields such as privacy law and (legal) philosophy.





The whole book.

https://www.researchgate.net/profile/Sayed-Mahbub-Hasan-Amiri-2/publication/401660183_The_AI_Classroom_How_Artificial_Intelligence_Will_Reshape_Teaching_and_Learning/links/69ac6250bff9750ad9c95e3e/The-AI-Classroom-How-Artificial-Intelligence-Will-Reshape-Teaching-and-Learning.pdf

The AI Classroom: How Artificial Intelligence Will Reshape Teaching and Learning