I’m not sure I understand.
(Place your bets now!)
https://blogs.lse.ac.uk/businessreview/2026/04/16/prediction-markets-have-made-uncertainty-itself-a-tradable-asset/
Prediction markets have made uncertainty itself a tradable asset
The history of prediction markets can be traced back to Francis Galton's ox and Kenneth Arrow's promise. But their recent stratospheric rise relies on our polycrisis era. Bets can be made on elections, interest rates and war. More uncertainty leads to more disagreement, more trading and larger markets. Chirantan Chatterjee explains what this reveals about the world.
Citizenship requires us to keep an eye on government…
https://www.engadget.com/apps/judge-sides-with-creators-of-banned-ice-trackers-who-allege-dhs-and-doj-violated-their-first-amendment-rights-191701801.html
Judge sides with creators of banned ICE trackers who allege DHS and DOJ violated their First Amendment rights
A judge has granted the makers of the "ICE Sightings - Chicagoland" Facebook group and the Eyes Up app a preliminary injunction to stop the Trump administration from coercing platforms to take these projects down. Judge Jorge L. Alonso of the United States District Court for the Northern District of Illinois found that the plaintiffs, Kassandra Rosado and Kreisau Group, are likely to succeed in their case, which alleges that the government suppressed protected speech under the First Amendment by strong-arming Facebook and Apple into removing ICE monitoring efforts.
Both Eyes Up and ICE Sightings - Chicagoland use publicly available information to keep tabs on ICE activity. But after pressure from Trump officials, they were removed from Apple's App Store and Facebook, respectively.
Figure out your responsibility.
https://www.ecgi.global/publications/blog/algorithmic-incompetence-the-fiduciary-duty-your-board-is-already-breaching
Algorithmic Incompetence: The Fiduciary Duty Your Board Is Already Breaching
Whoever exercises a function affecting third parties cannot delegate judgment to a system they neither understand nor supervise.
A pillow in the wrong hands suffocates; in the right hands, it supports. Roberto Cingolani's metaphor captures what corporate law has always known: responsibility lies not with the instrument but with whoever adopts it without understanding its implications.
In boardrooms across Europe and North America, a quiet abdication is underway. Boards are adopting algorithmic systems they do not understand, delegating comprehension to opaque technologies, and assuming that regulatory grace periods exempt them from thinking. They are wrong. The duty to understand what you govern is not a novelty of the AI Act; it is an ancient obligation that artificial intelligence now renders inescapable.
Modern war.
https://www.researchgate.net/profile/Muhammad-Faisal-Sddiqui/publication/403643037_Artificial_Intelligence_in_Future_Warfare_Ethical_Frameworks_and_the_Regulation_of_Lethal_Autonomous_Weapons_IEEE_Transactions_on_Technology_and_Society/links/69d73ef05518257d60e8ede8/Artificial-Intelligence-in-Future-Warfare-Ethical-Frameworks-and-the-Regulation-of-Lethal-Autonomous-Weapons-IEEE-Transactions-on-Technology-and-Society.pdf
Artificial Intelligence in Future Warfare: Ethical Frameworks and the Regulation of Lethal Autonomous Weapons
The integration of artificial intelligence into weapons systems has compressed the decision cycle of lethal engagement from hours to milliseconds, outpacing the international legal and ethical frameworks designed to constrain state violence. This paper surveys the landscape of deployed and tested lethal autonomous weapons systems (LAWS), analyzes the adequacy of existing international law relative to current AI capabilities, and proposes a regulatory structure calibrated to the actual risk profile of autonomous lethality. We examine nine real-world systems -- from the Kargu-2's documented autonomous engagement in Libya (2020) to Israel's "Lavender" AI targeting in Gaza (2023-2024) and the ongoing 2026 Iran-US-Israel conflict "Operation Epic Fury," the largest AI-assisted warfare campaign in recorded history -- and classify each using a three-tier autonomy model: human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-out-of-the-loop (HOOTL). Our gap analysis of the Geneva Conventions, the Convention on Certain Conventional Weapons (CCW), and International Humanitarian Law (IHL) identifies four critical regulatory failures: the absence of a binding definition of "meaningful human control," an accountability vacuum when LAWS cause civilian casualties, a speed asymmetry between AI warfare timescales and legal review processes, and the dual-use nature of civilian AI technologies. To address these gaps, we propose a five-tier governance framework scaling regulatory stringency with the product of autonomy level and lethality threshold. The framework carries direct implications for stalled UN CCW Group of Governmental Experts negotiations, offering a technically grounded basis for legally binding distinctions that current diplomatic language lacks.
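The abstract's scaling rule (regulatory stringency rising with the product of autonomy level and lethality threshold) can be sketched roughly as follows. The paper's actual tier boundaries are not given in the abstract, so the numeric lethality scale, the score cut-offs, and the function name below are illustrative assumptions only, not the authors' framework.

```python
# Illustrative sketch only: a stringency score that scales with
# autonomy x lethality, as the abstract describes. The tier
# cut-offs below are hypothetical; the paper's real thresholds
# are not stated in the abstract.

AUTONOMY = {"HITL": 1, "HOTL": 2, "HOOTL": 3}  # three-tier autonomy model


def governance_tier(autonomy: str, lethality: int) -> int:
    """Map an autonomy level and a lethality threshold (assumed 1-5 scale)
    to one of five governance tiers, 1 (lightest) to 5 (strictest)."""
    score = AUTONOMY[autonomy] * lethality  # product, per the abstract
    bounds = [3, 6, 9, 12]                  # assumed bucketing of the 1..15 range
    return 1 + sum(score > b for b in bounds)
```

Under these assumed cut-offs, a supervised (HITL) low-lethality system lands in tier 1 while a fully autonomous (HOOTL) high-lethality system lands in tier 5, which matches the abstract's intent that stringency grows with both factors jointly rather than with either alone.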
The only good terrorist is…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6549339
Human Rights related to AI in Counterterrorism
Counterterrorism outside armed conflict increasingly relies on Artificial Intelligence (AI). States use AI notably for detecting, predicting, and responding to terrorism. Despite proclamations by States and regional organizations that AI must be used in compliance with international human rights law, there is still insufficient clarity on how human rights law guides and governs the lawful use of AI in counterterrorism. Accordingly, this chapter analyses the key human rights that are relevant to - and which help to determine the lawful use of - AI in counterterrorism. These concern, notably, the right to privacy; the rights to liberty and security; the principle of non-discrimination; the right to freedom of expression; the right to freedom of peaceful assembly; and the rights to life and to freedom from ill-treatment. The chapter assesses how these rights bear on the use of AI in counterterrorism by relating them to the functions of AI applications. This is achieved through analysis of international and national rules and jurisprudence that are directly or indirectly pertinent.
I thought Trump still hated Musk? Does he hate the French more?
https://www.cnbc.com/2026/04/18/justice-department-france-probe-exlon-musk-x.html
Justice Department refuses to assist French probe into Musk's X, WSJ reports
The U.S. Justice Department has told French law enforcement it will not assist with efforts to investigate tech billionaire Elon Musk's social media platform X, The Wall Street Journal reported on Saturday, citing a letter from the DOJ's Office of International Affairs, dated Friday.