Is there value in arguing both sides?
https://www.researchgate.net/profile/Robert-Mcgee-5/publication/378069290_Was_Russia's_Annexation_of_Crimea_Legitimate_A_Study_in_Artificial_Intelligence/links/65c5020c1bed776ae337a276/Was-Russias-Annexation-of-Crimea-Legitimate-A-Study-in-Artificial-Intelligence.pdf
Was Russia’s Annexation of Crimea Legitimate? A Study in Artificial Intelligence
This study used Copilot and Gab AI, two artificial intelligence (AI) tools, to examine whether Russia’s annexation of Crimea was legitimate. Both chatbots were asked to write a brief essay summarizing the history of Crimea, with emphasis on its annexation by Russia. They were then asked to write a two-part essay presenting arguments for both sides of the legitimacy question. This methodology can be applied to any number of research projects in economics, law, history, sociology, philosophy, political science, and ethics, to name a few. Professors can use it to stimulate class discussion, graduate students can use it to generate initial outlines for their theses and dissertations, and it can serve as a starting point for further discussion.
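The two-step prompting workflow the study describes (first a background summary, then a two-sided essay) can be reproduced with any general-purpose chatbot. The sketch below is a minimal, hypothetical illustration using the OpenAI Python SDK rather than Copilot or Gab AI, which the authors queried through their chat interfaces; the model name and the prompt wording are assumptions for illustration, not the study's exact prompts.

# Minimal sketch of the study's two-prompt methodology, adapted to the
# OpenAI Python SDK; the original study used Copilot and Gab AI through
# their chat interfaces, so the model and prompts here are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a brief history of Crimea, with emphasis on the annexation.
history = ask("Write a brief essay summarizing the history of Crimea, "
              "with emphasis on its annexation by Russia.")

# Step 2: a two-part essay arguing both sides of the legitimacy question.
both_sides = ask("Write a two-part essay giving the strongest arguments "
                 "that Russia's annexation of Crimea was legitimate, then "
                 "the strongest arguments that it was not.")

print(history, both_sides, sep="\n\n")

Swapping the topic in the two prompts is all it takes to repurpose the same pattern for the other disciplines the authors mention.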
Who wins?
https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=2616&context=faculty-articles
Risky Speech Systems: Tort Liability for AI-Generated Illegal Speech
How should we think about liability when AI systems generate illegal speech? The Journal of Free Speech Law, a peer-edited journal, ran a topical 2023 symposium on Artificial Intelligence and Speech that is a must-read. This JOT addresses two symposium pieces that take particularly interesting and interlocking approaches to the question of liability for AI-generated content: Jane Bambauer’s Negligent AI Speech: Some Thoughts about Duty, and Nina Brown’s Bots Behaving Badly: A Products Liability Approach to Chatbot-Generated Defamation. These articles show how the law constructs technology, displaying the diverse tools in the legal sensemaking toolkit that are worth pulling out every time somebody shouts “disruption!”
Each author offers a cogent discussion of possible legal frameworks for liability, moving beyond debates about First Amendment coverage of AI speech to imagine how substantive tort law will work. While these are not, strictly speaking, First Amendment pieces, exploring how liability rules would apply to AI is important, even crucial, for understanding how courts might shape First Amendment law, since First Amendment doctrine often hinges on the laws to which it is applied. By focusing on substantive tort law, Bambauer and Brown take the as-yet largely abstract First Amendment conversation to a welcome, pragmatic yet creative place.
What makes
these two articles stand out is that they each address AI-generated
speech that is illegal—that is, speech that is or should be
unprotected by the First Amendment, even if First Amendment coverage
extends to AI-generated content. Bambauer talks about speech that
physically hurts people, a category around which courts have been
conducting free-speech line-drawing for decades; Brown talks about
defamation, which is a historically unprotected category of speech.
While a number of scholars have discussed whether the First Amendment
covers AI-generated speech, until this symposium there was little
discussion of how the doctrine might adapt to handle liability for
content that’s clearly unprotected.
Judge AI is coming.
https://yjolt.org/sites/default/files/avery_abril_delriego_26yalejltech64.pdf
ChatGPT, Esq.: Recasting Unauthorized Practice of Law in the Era of Generative AI
In March of 2023, OpenAI released GPT-4, an
autoregressive language model that uses deep learning to produce
text. GPT-4 has unprecedented ability to practice law: drafting
briefs and memos, plotting litigation strategy, and providing general
legal advice. However, scholars and practitioners have yet to unpack
the implications of large language models, such as GPT-4, for
long-standing bar association rules on the unauthorized practice of
law (“UPL”). The intersection of large language models with UPL
raises manifold issues, including those pertaining to important and
developing jurisprudence on free speech, antitrust, occupational
licensing, and the inherent-powers doctrine. How the intersection is
navigated, moreover, is of vital importance in the durative struggle
for access to justice, and low-income individuals will be
disproportionately impacted.
In this Article, we offer a recommendation that is attuned to technological advances and avoids the extremes that have characterized the past decades of the UPL debate. Rather than abandon UPL rules, and rather than leave them undisturbed, we propose that they be recast primarily as regulation of entity-type claims.
Through this recasting, bar associations can retain their role as the
ultimate determiners of “lawyer” and “attorney”
classifications while allowing nonlawyers, including the AI-powered
entities that have emerged in recent years, to provide legal
services—save for a narrow and clearly defined subset. Although
this recommendation is novel, it is easy to implement, comes with few
downsides, and would further the twin UPL aims of competency and
ethicality better than traditional UPL enforcement. Legal technology
companies would be freed from operating in a legal gray area; states
would no longer have to create elaborate UPL-avoiding mechanisms, such
as Utah’s “legal sandbox”; consumers—both individuals and
companies—would benefit from better and cheaper legal services; and
the dismantling of access-to-justice barriers would finally be
possible. Moreover, the clouds of free speech and antitrust
challenges that are massing above current UPL rules would dissipate,
and bar associations would be able to focus on fulfilling their
already established UPL-related aims.
Oops! What should we try next?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4721955
Police Technology Experiments
Police departments often adopt new surveillance
technologies that make mistakes, produce unintended effects, or
harbor unforeseen problems. Sometimes the police try a new
surveillance technology and later abandon it, whether from a lack of success, community resistance, or both. Critics have identified many
problems with these tools: racial bias, privacy violations, opacity,
secrecy, and undue corporate influence, to name a few. A different
framework is needed. This essay considers the growing use of these
algorithmic surveillance technologies and argues that they function
as technology experiments on human subjects. Such technology
experiments result in police reliance on automated systems to engage
in investigative stops and consensual encounters, or to increase
police presence and surveillance in a community. Not only do these tools act as experiments; in practice, they often function as poorly designed and executed experiments on human subjects. Moreover,
ethical considerations that are common in the conventional human
subjects research context are entirely absent, even though the new
technologies involve uncontrolled experiments on people. And because
these algorithmic surveillance technologies are often adopted in low-income communities of color, they function as poorly designed
experiments that raise particularly sensitive concerns about ethics
and experimentation borne out by historical experience. By
understanding the adoption of new algorithmic surveillance tools as
experiments on human subjects, we can develop prospective controls
and methods of evaluation for the use of these tools by police, ones
that balance innovation with ethical responsibility as artificial
intelligence becomes a normal part of police investigations.