New definitions. Could they apply to all media?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5561522
The New Art Forgers
The “substantial similarity” between a copyrighted work and an
unauthorized derivative has formed the bedrock of copyright
infringement jurisprudence since the mid-nineteenth century. Recent
technological developments, however, are destabilizing these
conceptual foundations. In May, the Copyright Office suggested that
the use of copyrighted works to train AI models may constitute
infringement even when model outputs are not “substantially similar”
to model inputs, so long as the outputs nevertheless “dilute the
market” for similar works. One month later, Judge Chhabria of the Northern
District of California argued that AI outputs do not have to be
“substantially similar” to copyrighted training data in order to
be infringing. The plaintiff’s incentives are sufficiently harmed,
Judge Chhabria argued, when the market is flooded with “similar
enough” AI-generated works.
These developments should be read as early warning signs of a
disturbing doctrinal shift from “substantial similarity” to a new
and dubious threshold for actionable infringement: “substitutive
similarity”, where the substitutability of the
defendant’s work, rather than the similarity of protected
expression, provides the cause of action. This novel theory of harm,
if widely adopted, would impose dangerous restrictions on downstream
creativity. Any new work that was “similar enough” to existing
works would be treated as potentially infringing, despite the absence
of substantially similar expression. This would corrupt what is
essentially a question of fact – whether the defendant copied
“enough” of the plaintiff’s work to constitute unlawful
appropriation – with deontic considerations of the wrongfulness of
free-riding.
At the same time, artists are understandably rattled by the speed and
scale of AI generation. AI models can produce “new” works in the
style of established artists in a matter of seconds, dramatically
undercutting the market for their work. AI style mimicry makes it
difficult for artists to control their personal brands and for
consumers to locate authentic works by their favorite artists.
Copyright is responsible for protecting artists’ creative
incentives, but its legal tests were not designed to handle the scale
of imitation enabled by AI.
This Article offers a way out of this jurisprudential morass. Instead of
lowering the burden of proof for infringement, Congress should
strengthen the attribution rights of existing creators.
Low-protectionists have long advocated for attribution rights as a
way of protecting authors’ interests without expanding the scope of
their economic entitlements. Proper attribution allows creators to
capture the full reputational benefits of their labor without
stifling downstream creativity. For example, Congress could enact an
AI-specific attribution right that requires the disclosure of
copyrighted training data in output metadata. This would mitigate
the labor-displacing effects of generative AI by directing consumers
to the original creators of a popular style or aesthetic.
Generative AI places copyright jurisprudence at a critical crossroads. Indulging
Judge Chhabria’s novel theory of harm would effectively inaugurate
a new standard for infringement – “substitutive similarity” –
that would stifle not just AI innovation but human creativity more
broadly. The stakes for protecting free expression through careful
guardianship of longstanding doctrine could not be higher. This
Article guides readers through this critical inflection point with
new terminology for the jurisprudential lexicon as well as practical
proposals for reform.
Interesting idea.
https://www.proquest.com/openview/f49bcfbaea46db396599409c08492adf/1?pq-origsite=gscholar&cbl=18750&diss=y
The Upcoming Moral Crisis in Primitive Artificial Intelligence
As we continue to develop artificially intelligent systems, there is an
increasingly high chance that we will develop a system that is both
conscious and capable of suffering. Furthermore, it
is likely that the development of this conscious machine will be
entirely unintentional. Although this machine will have moral
status, identifying it will be extremely difficult, and it will
likely be treated the same as its inert predecessors. For these
reasons, I believe that a crisis in ethics is looming. This paper
aims to argue that it is possible for a machine to have moral status,
that the first such machine will likely be produced unintentionally,
and that identifying this machine will involve significant
difficulties.
At least they are thinking about it…
https://www.reuters.com/legal/government/new-york-court-system-sets-rules-ai-use-by-judges-staff-2025-10-10/
New York court system sets rules for AI use by judges, staff
The New York state court system on Friday set out a new policy on the use of
artificial intelligence by judges and other court staff, joining at
least four other U.S. states that have adopted similar rules in the
past year.
The interim policy, which applies to all judges, justices and nonjudicial
employees in the New York Unified Court System, limits the use of
generative AI to approved products and mandates AI training.