With great power comes great responsibility…
https://www.bespacific.com/artificial-intelligence-and-aggregate-litigation/
Artificial Intelligence and Aggregate Litigation
Wilf-Townsend, Daniel, Artificial Intelligence and Aggregate Litigation (March 01, 2025). 103 Wash. U. L. Rev. __ (forthcoming 2026). Available at SSRN: https://ssrn.com/abstract=5163640 or http://dx.doi.org/10.2139/ssrn.5163640
The era of AI litigation has begun, and it is already clear that the class action will have a distinctive role to play. AI-powered tools are often valuable because they can be deployed at scale. And the harms they cause often exist at scale as well, pointing to the class action as a key device for resolving the correspondingly numerous potential legal claims. This article presents the first general account of the complex interplay between aggregation and artificial intelligence. First, the article identifies a pair of effects that the use of AI tools is likely to have on the availability of class actions to pursue legal claims. While the use of increased automation by defendants will tend to militate in favor of class certification, the increased individualization enabled by AI tools will cut against it. These effects, in turn, will be strongly influenced by the substantive laws governing AI tools—especially by whether liability attaches “upstream” or “downstream” in a given course of conduct, and by the kinds of causal showings that must be made to establish liability. After identifying these influences, the article flips the usual script and describes how, rather than merely being a vehicle for enforcing substantive law, aggregation could actually enable new types of liability regimes. AI tools can create harms that are only demonstrable at the level of an affected group, which is likely to frustrate traditional individual claims. Aggregation creates opportunities to prove harm and assign remedies at the group level, providing a path to address this difficult problem. Policymakers hoping for fair and effective regulations should therefore attend to procedure, and aggregation in particular, as they write the substantive laws governing AI use.
What if the AI hates me?
https://www.politico.com/newsletters/digital-future-daily/2025/04/08/the-worries-about-ai-in-trumps-social-media-surveillance-00279255
The worries about AI in Trump’s social media surveillance
As the Trump administration goes after immigrants for allegedly posing national security threats, social media posts have taken a prominent role in the story — coming up in the Department of Homeland Security’s allegations against Palestinian activist Mahmoud Khalil, the Georgetown University researcher Badar Khan Suri and alleged gang member Jerce Reyes Barrios.
It’s not clear what tools the government is using to collect and analyze social media posts, and DHS didn’t respond to a direct request about how it is surveilling online platforms.
… Earlier social media monitoring tools functioned more like a search engine, surfacing and ranking results based on relevance, but AI tools take on a more deterministic role, Rachel Levinson-Waldman, the managing director of the Brennan Center’s Liberty and National Security Program, told DFD.
“AI is starting to be used, not just to streamline the process, which already brings its own significant concerns, but to augment or replace the judgment,” said Levinson-Waldman, who studies social media monitoring tools.
… “There are real concerns that AI is being used to automate target selection, and potentially initiating surveillance without adequate human review,” Kia Hamadanchy, a senior policy counsel for the American Civil Liberties Union, told the committee.
The use of AI in social media surveillance also creates greater potential for what experts call automation bias. The term describes a tendency to trust technology to deliver accurate information — an issue that has surfaced in healthcare, aviation and law enforcement.
(Related)
https://www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill
UK creating ‘murder prediction’ tool to identify people most likely to kill
The UK government is developing a “murder prediction” programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.
Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.
The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety, but campaigners have called it “chilling and dystopian”.