It’s hard to keep track, so articles like this are useful.
https://jsp-ls.berkeley.edu/sites/default/files/california_legal_studies_journal_2023.pdf#page=15
Science
vs. The Law: How Developments in Artificial Intelligence Are
Challenging Various Civil Tort Laws
For centuries, the law has been
playing catch-up as science pushes the boundaries of how we define
both society and our own realities. Today, artificial intelligence
(AI) perhaps poses the biggest challenges the legal system has ever
faced. This paper explores the many ways in which developments in artificial intelligence are actively pushing the boundaries of contemporary civil law, challenging lawyers and judges alike to rethink the law as they know it. It first offers a general overview of artificial intelligence as well as the relevant legal fields: negligence, product liability, false information, invasion of privacy, and copyright. It then dives into the specifics of how
artificial intelligence is challenging these fields. Each section introduces an area in which artificial intelligence is rapidly changing the game, explains the benefits and pitfalls of that use of AI, outlines the relevant legal field and its policies, and then explores the challenges AI poses to the law and how, if at all, that legal field is adapting to them.
It seems we need an example greater than Russia’s attacks on Ukraine?
https://digitalcommons.liberty.edu/hsgconference/2023/foreign_policy/13/
The
Future of the Cyber Theater of War
When the air was the new theater of war, few could imagine how it would develop. The literature shows that a lack of imagination and state-level
institutionalized power structures, particularly in the U.S.,
hampered the progress of air as a new theater of war both in thought
and application. Today, a similar lack of imagination regarding the cyber theater of war is a great source of insecurity in the world system;
it sets the stage for strategic shocks like the ones to the U.S. on
December 7, 1941, and 9/11. To
avoid this, states should imagine how a convergence of cyber
technologies into new weapons could be used in war and by whom.
Popular movies today form the basis for considering what has yet to
be realized in the cyber theater of war. Its nascent history and designation as a theater of war foreshadow the expectation that traditional war will eventually occur in the cyber realm. When
nanocomputers, artificial intelligence, quantum computing, speed, and
advanced robotics fully converge, new weapons are possible and
likely. The Just War Theory, understood through the Christian lens
rather than only as a matter of secular international law, is applied
to the evolving cyber theater of war to fill current doctrinal gaps
in the just cause and conduct of future war within the cyber realm.
AI is too human to be considered a
person?
https://link.springer.com/article/10.1007/s43545-023-00667-x
Hybrid
theory of corporate legal personhood and its application to
artificial intelligence
Artificial intelligence (AI) is often
compared to corporations in legal studies when discussing AI legal
personhood. This article also uses this analogy between AI and
companies to study AI legal personhood but contributes to the
discussion by utilizing the hybrid model of corporate legal
personhood. The hybrid model simultaneously applies the real entity,
aggregate entity, and artificial entity models. This
article adopts a legalistic position, under which anything can be a
legal person. However, there might be strong pragmatic
reasons not to confer legal personhood on non-human entities. The
article recognizes that artificial intelligence is autonomous by
definition and has greater de facto autonomy than corporations and,
consequently, greater potential for de jure autonomy. Therefore, AI has strong attributes of a real entity. Nevertheless,
the article argues that AI has key characteristics from the aggregate
entity and artificial entity models. Therefore, the hybrid entity
model is more applicable to AI legal personhood than any single model
alone. The discussion
recognizes that AI might be too autonomous for legal personhood.
Still, it concludes that the hybrid model is a useful analytical
framework as it incorporates legal persons with different levels of
de jure and de facto autonomy.
Government by ChatBot? What evidence
will I have to keep when the ChatBot says I no longer have to pay
taxes?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4444869
Automated
Agencies
When individuals have questions about federal
benefits, services, and legal rules, they increasingly seek help from
government chatbots, virtual assistants, and other automated tools.
Most scholars who have studied artificial intelligence and federal
government agencies have not focused on the government’s use of
technology to offer guidance to the public. The absence of scholarly
attention to automation as a means of communicating government
guidance is an important gap in the literature. Through the use of
automated legal guidance, the federal government is responding to
millions of public inquiries each year about the law, a number that
may multiply many times over in years to come. This new form of
guidance is thereby shaping public views of and behavior with respect
to the law, without serious examination.
This Article describes the results of a
qualitative study of automated legal guidance across the federal
government. This study was conducted under the auspices of the
Administrative Conference of the United States (ACUS), an independent
federal agency of the U.S. government charged with recommending
improvements to administrative process and procedure. Our goal was
to understand federal agency use of automated legal guidance and to offer recommendations to ACUS based on our findings. During our
study, we canvassed the automated legal guidance activities of all
federal agencies. We found extensive use of automation by federal agencies to offer guidance to the public, with varying levels of
sophistication and legal content. We identified two principal models
of automated legal guidance, and we conducted in-depth legal research
regarding the most sophisticated examples of such models. We also
interviewed agency officials with direct, supervisory, or support
responsibility over well-developed automated legal guidance tools.
We find that automated legal guidance offers
agencies an inexpensive way to help the public navigate complex legal
regimes. However, we also find that automated legal guidance may
mislead members of the public about how the law will apply in their
individual circumstances. In particular, automated legal guidance
exacerbates the tendency of federal agencies to present complex law
as though it is simple without actually engaging in simplification of
the underlying law. While this approach offers advantages in terms
of administrative efficiency and ease of use by the public, it also
causes the government to present the law as simpler than it is,
leading to less precise advice and potentially inaccurate legal
positions. In some cases, agencies heighten this problem by, among
other things, making guidance seem more personalized than it is,
ignoring how users may rely on the guidance, and failing to
adequately disclose that the guidance cannot be relied upon as a
legal matter. At worst, automated legal guidance enables the
government to dissuade members of the public from accessing benefits
to which they are entitled, a cost that may be borne
disproportionately by members of the public least capable of
obtaining other forms of legal advice.
In reaching these conclusions, we do not suggest
that automated legal guidance is uniquely problematic relative to
alternative forms of communicating the law. The question of how to
respond to complex legal problems, in light of a public that has
limited ability or inclination to understand complex legal systems,
is a difficult one. There are different potential solutions to this problem, each of which presents its own cost-benefit tradeoffs. However, failure to appreciate, or even examine, the
tradeoffs inherent in automated legal guidance, relative to the
alternatives, undermines our ability to make informed decisions about
when to use which solution, or how to minimize the costs of this form
of guidance.
In this Article, after exploring these challenges,
we chart a path forward. We offer policy recommendations, organized
into five categories: transparency; reliance; disclaimers; process;
and accessibility, inclusion, and equity. We believe that our
descriptive as well as theoretical work regarding automated legal
guidance, and the detailed policy recommendations that flow from it,
will be critical for evaluating existing, as well as future,
government uses of automated legal guidance.