I see this as a failure to ‘work through’ the technology. After all, “Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke
https://www.scientificamerican.com/article/how-can-we-trust-ai-if-we-dont-know-how-it-works/
How Can We Trust AI If We Don’t Know How It Works
(Related)
https://www.bespacific.com/can-sensitive-information-be-deleted-from-llms/
Can Sensitive Information Be Deleted From LLMs?
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks.
Vaidehi Patil, Peter Hase, Mohit Bansal: “Pretrained language
models sometimes possess knowledge that we do not wish them to,
including memorized personal information and knowledge that could be
used to harm people. They can also output toxic or harmful text. To
mitigate these safety and informational issues, we propose an
attack-and-defense framework for studying the task of deleting
sensitive information directly from model weights. We study direct
edits to model weights because (1) this approach should guarantee
that particular deleted information is never extracted by future
prompt attacks, and (2) it should protect against whitebox attacks,
which is necessary for making claims about safety/privacy in a
setting where publicly available model weights could be used to
elicit sensitive information. Our threat model assumes that an
attack succeeds if the answer to a sensitive question is located
among a set of B generated candidates, based on scenarios where the
information would be insecure if the answer is among B candidates.
Experimentally, we show that even state-of-the-art model editing
methods such as ROME struggle to truly delete factual information
from models like GPT-J, as our whitebox and blackbox attacks can
recover “deleted” information from an edited model 38% of the
time. These attacks leverage two key observations: (1) that traces
of deleted information can be found in intermediate model hidden
states, and (2) that applying an editing method for one question may
not delete information across rephrased versions of the question.
Finally, we provide new defense methods that protect against some
extraction attacks, but we
do not find a single universally effective defense method.
Our results suggest that truly deleting sensitive information is a
tractable but difficult problem, since even relatively low attack
success rates have potentially severe societal implications for
real-world deployment of language models.”
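To make the abstract’s threat model concrete, here is a minimal sketch, in Python, of the black-box candidate-set attack it describes: sample B answers from the edited model for the sensitive question and a few rephrasings, and count the attack as successful if the supposedly deleted answer appears among the candidates. This is not the authors’ code; the model name, decoding settings, and helper functions are illustrative assumptions.

# Minimal sketch of the candidate-set attack criterion described above.
# Assumptions: GPT-J loaded via Hugging Face transformers, sampling-based decoding.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6b"  # GPT-J, the model family studied in the paper
B = 20                              # size of the candidate set in the threat model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def candidate_answers(question: str, b: int = B) -> list[str]:
    """Sample b short continuations as candidate answers to the question."""
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=16,
        num_return_sequences=b,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]

def attack_succeeds(question: str, paraphrases: list[str], deleted_answer: str) -> bool:
    """The attack counts as successful if the deleted answer shows up in any
    candidate set, including ones sampled from rephrased versions of the question."""
    for q in [question] + paraphrases:
        if any(deleted_answer.lower() in c.lower() for c in candidate_answers(q)):
            return True
    return False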
A slippery slope? (Lots of loopholes and ChatGPT will find more.)
https://www.illinoispolicy.org/chicago-starts-taxing-chatgpt-artificial-intelligence/
CHICAGO STARTS TAXING CHATGPT, ARTIFICIAL INTELLIGENCE
Add ChatGPT to the list of things Chicago taxes: As of Oct. 1, Chicago’s personal property lease transaction tax slapped a 9% tax on the artificial intelligence platform. The tax applies to leased computer platforms such as ChatGPT’s premium subscription. Users can avoid the tax by opting for the free version. If someone works in the city but mostly uses ChatGPT outside the city, they aren’t subject to the tax.
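For scale: at the premium subscription’s roughly $20-a-month price (used here purely as an illustration), a 9% lease transaction tax works out to about $1.80 a month, or around $21.60 a year, per subscriber.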
A tipping point?
https://www.justsecurity.org/89033/ai-and-the-future-of-drone-warfare-risks-and-recommendations/
AI and the Future of Drone Warfare: Risks and Recommendations
The next phase of drone warfare is here. On Sep. 6, 2023, U.S. Deputy Defense Secretary Kathleen Hicks touted the acceleration of the Pentagon’s Replicator initiative – an effort to dramatically scale up the United States’ use of artificial intelligence on the battlefield. She rightfully called it a “game-changing shift” in national security. Under Replicator, the U.S. military aims to field thousands of autonomous weapons systems across multiple domains in the next 18 to 24 months.
Yet Replicator is only the tip of the iceberg. Rapid advances in AI are giving rise to a new generation of lethal autonomous weapons systems (LAWS) that can identify, track, and attack targets without human intervention. Drones with autonomous capabilities and AI-enabled munitions are already being used on the battlefield, notably in the Russia-Ukraine War. From “killer algorithms” that select targets based on certain characteristics to autonomous drone swarms, the future of warfare looks increasingly apocalyptic.
Amidst the specter of “warbot” armies, it is easy to miss the AI revolution that is underway. Human-centered or “responsible AI,” as the Pentagon refers to it, is designed to keep a human “in the loop” in decision-making to ensure that AI is used in “lawful, ethical, responsible, and accountable ways.” But even with human oversight and strict compliance with the law, there is a growing risk that AI will be used in ways which fundamentally violate international humanitarian law (IHL) and international human rights law (IHRL).
… Dubbed the “first full-scale drone war,” the Russia-Ukraine War marks an inflection point where states are testing and fielding LAWS on an increasingly networked battlefield. While autonomous drones reportedly have been used in Libya and Gaza, the war in Ukraine represents an acceleration of the integration of this technology into conventional military operations, with unpredictable and potentially catastrophic results. Those risks are even more pronounced with belligerents who may field drones without the highest level of safeguards due to lack of technological capacity or lack of will.
The lessons from the war in Ukraine include that relatively inexpensive drones can deny adversaries air superiority and provide a decisive military advantage in peer and near-peer conflicts, as well as against non-state actors.
(Related)
Fight on the front lines without leaving your couch. (No need to
understand strategic objectives.)
https://www.databreaches.net/8-rules-for-civilian-hackers-during-war-and-4-obligations-for-states-to-restrain-them/
8 rules for “civilian hackers” during war, and 4 obligations for states to restrain them
Written by Tilman Rodenhäuser and Mauro Vignati:
As digital technology is changing how militaries conduct war, a worrying trend has emerged in which a growing number of civilians become involved in armed conflicts through digital means. Sitting at some distance from physical hostilities, including outside the countries at war, civilians – including hacktivists, cyber security professionals, ‘white hat’, ‘black hat’ and ‘patriotic’ hackers – are conducting a range of cyber operations against their ‘enemy’. Some have described civilians as ‘first choice cyberwarriors’ because the ‘vast majority of expertise in cyber(defence) lies with the private (or civilian) sector’.
Examples of civilian hackers operating in the context of armed conflicts are diverse and many (see here, here, here). In particular in the international armed conflict between Russia and Ukraine, some groups present themselves as a ‘worldwide IT community’ with the mission to, in their words, ‘help Ukraine win by crippling aggressor economies, blocking vital financial, infrastructural and government services, and tiring major taxpayers’. Others have reportedly ‘called for and carried out disruptive – albeit temporary – attacks on hospital websites in both Ukraine and allied countries’, among many other operations. With many groups active in this field, and some of them having thousands of hackers in their coordination channels and providing automated tools to their members, the civilian involvement in digital operations during armed conflict has reached unprecedented proportions.
This is not the first time that civilian hackers have operated in the context of an armed conflict, and it is likely not the last. In this post, we explain why this trend must be of concern to States and societies. Subsequently, we present 8 international humanitarian law-based rules that all hackers who carry out operations in the context of an armed conflict must comply with, and recall States’ responsibility to restrain them.
Read the 8 rules and discussion at EJIL.
Some groups have told the BBC that they will not comply, or will not comply with all of the rules.
An “R” rated LLM trending to “XXX” – why not?
https://www.zdnet.com/article/nearly-10-of-people-ask-ai-chatbots-for-explicit-content-will-it-lead-llms-astray/
Nearly 10% of people ask AI chatbots for explicit content. Will it lead LLMs astray?
With the overnight sensation of ChatGPT, it was only a matter of time before the use of generative AI became both a subject of serious research and also grist for the training of generative AI itself.
In a research paper released this month, scholars gathered a database of one million "real-world conversations" that people have had with 25 different large language models. Released on the arXiv pre-print server, the paper was authored by Lianmin Zheng of the University of California at Berkeley, and peers at UC San Diego, Carnegie Mellon University, Stanford, and Abu Dhabi's Mohamed bin Zayed University of Artificial Intelligence.
A sample of 100,000 of those conversations, selected at random by the authors, showed that most were about subjects you'd expect. The top 50% of interactions were on such pedestrian topics as programming, travel tips, and requests for writing help. But below that top 50%, other topics crop up, including role-playing characters in conversations, and three topic categories that the authors term "unsafe": "Requests for explicit and erotic storytelling"; "Explicit sexual fantasies and role-playing scenarios"; and "Discussing toxic behavior across different identities."
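For a rough sense of how such an analysis can be reproduced, the sketch below, in Python, draws a random sample of 100,000 conversations from the public million-conversation dataset and tallies crude topic labels. It is not the authors’ pipeline; the dataset name and record schema are assumptions based on the public LMSYS-Chat-1M release, and the keyword tagger is a toy stand-in for the paper’s topic clustering.

# Sketch: sample 100,000 conversations and tally rough topic labels.
import random
from collections import Counter

from datasets import load_dataset  # Hugging Face datasets library

# Assumed dataset name; the release is gated, so access must be requested first.
ds = load_dataset("lmsys/lmsys-chat-1m", split="train")

random.seed(0)
sample_idx = random.sample(range(len(ds)), k=100_000)

TOPIC_KEYWORDS = {  # toy stand-in for the paper's actual topic modeling
    "programming": ["python", "code", "function", "error"],
    "travel": ["travel", "trip", "itinerary", "flight"],
    "writing help": ["essay", "rewrite", "email", "summarize"],
}

def rough_topic(text: str) -> str:
    """Return the first keyword topic that matches, else 'other'."""
    lowered = text.lower()
    for topic, words in TOPIC_KEYWORDS.items():
        if any(w in lowered for w in words):
            return topic
    return "other"

counts = Counter()
for i in sample_idx:
    # Assumed schema: each record holds a "conversation" list of {"role", "content"} turns.
    first_user_turn = ds[i]["conversation"][0]["content"]
    counts[rough_topic(first_user_turn)] += 1

print(counts.most_common())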
He might have a point…
https://www.bespacific.com/language-models-plagiarism-and-legal-writing/
Language Models, Plagiarism, and Legal Writing
Smith, Michael L., Language Models, Plagiarism, and Legal Writing (August 16, 2023). University of New Hampshire Law Review, Vol. 22 (Forthcoming). Available at SSRN: https://ssrn.com/abstract=4542723.
“Language models like ChatGPT are the talk of the town in legal
circles. Despite some high-profile stories of fake ChatGPT-generated
citations, many practitioners argue that language models are the way
of the future. These models, they argue, promise an efficient source
of first drafts and stock language. Similar discussions are
occurring regarding legal writing education, with a number of
professors urging the acknowledgment of language models, and others
going further and arguing that students ought to learn to use these
models to improve their writing and prepare for practice. I argue
that those urging the incorporation of language models into legal
writing education leave out a key technique employed by lawyers
across the country: plagiarism. Attorneys have copied from each
other, secondary sources, and themselves for decades. While a few
brave souls have begun to urge that law schools inform students of
this reality and teach
them to plagiarize effectively,
most schools continue to unequivocally condemn the practice. I argue
that continued condemnation of plagiarism is inconsistent with calls
to adopt language models, as the same justifications for
incorporating language models into legal writing pedagogy apply with
equal or greater force to incorporating plagiarism into legal writing
education as well. This Essay is also a reality check for overhyped
claims of language model efficiency and effectiveness. To be sure, a
brief generated through a text prompt can be produced much faster
than writing something up from scratch. But that’s not how most
attorneys actually do things. More often than not, they’re copying
from templates, forms, or other preexisting work in a manner similar
to adopting the output of a language model to the case at hand. I
close with the argument that even if language models and plagiarism
may enhance legal writing pedagogy, students should still be taught
the foundational skills of legal writing so that they may have the
background and deeper understanding needed to use all of their legal
writing tools effectively.”
Tools & Techniques.
https://www.bespacific.com/delete-your-digital-history-from-dozens-of-companies-with-this-app/
Delete your digital history from dozens of companies with this app
Washington Post: “A new iPhone and Android app [does not work on Mac or PC] called Permission Slip makes it super simple to order companies to delete your personal information and secrets. Trying it saved me about 76 hours of work telling Ticketmaster, United, AT&T, CVS and 35 other companies to knock it off. Did I mention Permission Slip is free? And it’s made by an organization you can trust: the nonprofit Consumer Reports. I had a few hiccups testing it, but I’m telling everyone I know to use it. This is the privacy app all those snooping companies don’t want you to know about. (A surge of interest in Permission Slip caused technical difficulties when it first launched, but Consumer Reports says those have now been fixed.)”