Will this spread to other countries?
https://www.ft.com/content/4a5235c5-acd0-4e81-9d44-2362a25c8eb3
Brazil supreme court rules digital platforms are liable for users’ posts
Brazil’s supreme court has ruled that social media platforms can be held legally responsible for users’ posts, in a decision that tightens regulation on technology giants in the country.
Companies such as Facebook, TikTok and X will have to act immediately to remove material such as hate speech, incitement to violence or “anti-democratic acts”, even without a prior judicial takedown order, as a result of the decision in Latin America’s largest nation late on Thursday.
Could I sue my twin brother?
https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence
Denmark to tackle deepfakes by giving people copyright to their own features
The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.
New tool…
https://www.404media.co/ice-is-using-a-new-facial-recognition-app-to-identify-people-leaked-emails-show/
ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show
Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field.
The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas. The document also shows how biometric systems built for one purpose can be repurposed for another, a long-standing concern raised by civil liberties advocates about facial recognition tools.
Can a non-person speak?
https://www.thefire.org/news/fire-court-ai-speech-still-speech-and-first-amendment-still-applies
FIRE to court: AI speech is still speech — and the First Amendment still applies
This week, FIRE filed a “friend-of-the-court” brief in Garcia v. Character Technologies urging immediate review of a federal court’s refusal to recognize the First Amendment implications of AI-generated speech.
The plaintiff in the lawsuit is the mother of a teenage boy who committed suicide after interacting with an AI chatbot modeled on the character Daenerys Targaryen from the popular fantasy series Game of Thrones. The suit alleges the interactions with the chatbot, one of hundreds of chatbots hosted on defendant Character Technologies’ platform, caused the teenager’s death.
Character Technologies moved to dismiss the lawsuit, arguing among other things that the First Amendment protects chatbot outputs and bars the lawsuit’s claims. A federal district court in Orlando denied the motion, and in doing so stated it was “not prepared to hold that the Character A.I. LLM's output is speech.”
FIRE’s brief argues the court failed to appreciate the free speech implications of its decision, which breaks with a well-established tradition of applying the First Amendment to new technologies with the same strength and scope that apply to established communication methods like the printing press or even the humble town square. The significant ramifications of this error for the future of free speech make it important for higher courts to provide immediate input.
Contrary to the court’s uncertainty about whether “words strung together by an LLM” are speech, assembling words to convey messages and information is the essence of speech. And, save for a limited number of carefully defined exceptions, the First Amendment protects speech — regardless of the tool used to create, produce, or transmit it.
(Related)
https://cdt.org/insights/cdt-and-eff-urge-court-to-carefully-consider-users-first-amendment-rights-in-garcia-v-character-technologies-inc/
CDT and EFF Urge Court to Carefully Consider Users’ First Amendment Rights in Garcia v. Character Technologies, Inc.
On Monday, CDT and EFF sought leave to submit an amicus brief urging the U.S. District Court for the Middle District of Florida to grant an interlocutory appeal to the Eleventh Circuit to ensure adequate review of users’ First Amendment rights in Garcia v. Character Technologies, Inc. The case involves the tragic suicide of a child following his use of a chatbot, and the complex First Amendment questions surrounding whether and how plaintiffs can appropriately recover damages alleged to stem from chatbot outputs.
CDT and EFF’s brief discusses how First Amendment-protected expression may be implicated throughout the design, delivery, and use of chatbot LLMs and urges the court to prioritize users’ interests in accessing chatbot outputs in its First Amendment analysis. The brief documents the Supreme Court’s long-standing precedent holding that the First Amendment’s protections for speech extend not just to speakers but also to people who seek out information. A failure to appropriately consider users’ First Amendment rights in relation to seeking information from chatbots, the brief argues, would open the door for unprecedented governmental interference in the ways that people can create, seek, and share information.
Read the full brief.