Worth thinking about.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5131058
Large Language Models and International Law
Large Language Models (LLMs) have the potential to transform public international lawyering. ChatGPT and similar LLMs can do so in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.
The article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs' ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce orthogonal or inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.
Based on our analysis of the five potential functions and the two more detailed case studies, the article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain or defend particular conclusions. Further, LLMs also hold surprising potential to help create new law by offering inventive proposals for treaty language or negotiations.
Most importantly, we highlight the potential for LLMs to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, LLM results may soon be perceived to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs' potential to assist, reshape, or redefine international legal practice and scholarship.
Not sure I agree.
https://thejoas.com/index.php/thejoas/article/view/263
The Intersection of Ethics and Artificial Intelligence: A Philosophical Study
The rapid development of artificial intelligence (AI) has significantly affected many aspects of human life, from the economy and education to healthcare. These advances, however, also raise complex ethical challenges, including privacy concerns, algorithmic bias, moral responsibility, and the potential misuse of the technology. This research explores the intersection of ethics and artificial intelligence through a philosophical approach. The study uses a qualitative method based on library research, examining classical and contemporary ethical theories and their application in the context of AI development. The results show that AI presents new moral dilemmas that traditional ethical frameworks cannot fully resolve. For example, the concept of responsibility in AI becomes blurred when decisions are made by autonomous systems without human intervention. Additionally, bias in AI training data indicates the need for strict ethical oversight in the design and implementation of this technology. The study also highlights the need for a multidisciplinary approach to drafting ethical guidelines capable of accommodating future AI developments. In this way, the research aims to enrich the discourse on AI ethics and offer a deeper philosophical perspective on the moral challenges at stake.
You only get out what you design in… (Garbage in, garbage out.)
https://www.livescience.com/technology/artificial-intelligence/older-ai-models-show-signs-of-cognitive-decline-study-shows
Older AI models show signs of cognitive decline, study shows
People increasingly rely on artificial intelligence (AI) for medical diagnoses because of how quickly and efficiently these tools can spot anomalies and warning signs in medical histories, X-rays and other datasets before they become obvious to the naked eye. But a new study published Dec. 20, 2024 in the BMJ raises concerns that large language models (LLMs) and chatbots, much like people, show signs of deteriorating cognitive abilities with age.
"These findings challenge the assumption that artificial intelligence will soon replace human doctors," the study's authors wrote in the paper, "as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients' confidence."