I find questions like this quite amusing.
https://www.pogowasright.org/can-an-ai-chatbot-be-convicted-of-an-illegal-wiretap-a-case-against-gaps-old-navy-may-answer-that/
Can an AI chatbot be convicted of an illegal wiretap? A case against Gap’s Old Navy may answer that
NBC reports:

Can an AI be convicted of illegal wiretapping?
That’s a question currently playing out in court for Gap’s Old Navy brand, which is facing a lawsuit alleging that its chatbot participates in illegal wiretapping by logging, recording and storing conversations. The suit, filed in the Central District of California, alleges that the chatbot “convincingly impersonates an actual human that encourages consumers to share their personal information.”
In the filing, the plaintiff says he communicated with what he believed to be a human Old Navy customer service representative and was unaware that the chatbot was recording and storing the “entire conversation,” including keystrokes, mouse clicks and other data about how users navigate the site. The suit also alleges that Old Navy unlawfully shares consumer data with third parties without informing consumers or seeking consent.
Old Navy, through its parent company Gap, declined to comment.
A model for retire-but-still-work-full-time?
https://apnews.com/article/kiss-digital-avatars-end-of-road-finale-37a8ae9905099343c7b41654b2344d0c
Kiss say farewell to live touring, become first US band to go virtual and become digital avatars
On Saturday night, Kiss closed out the final performance of their “The End of the Road” farewell tour at New York City’s famed Madison Square Garden.
But as dedicated fans surely know — they were never going to call it quits. Not really.
During their encore, the band’s current lineup — founders Paul Stanley and Gene Simmons as well as guitarist Tommy Thayer and drummer Eric Singer — left the stage to reveal digital avatars of themselves. After the transformation, the virtual Kiss launched into a performance of “God Gave Rock and Roll to You.”
They ain’t human but they is intellectual?
https://engagedscholarship.csuohio.edu/clevstlrev/vol72/iss1/12/
That Thing Ain't Human: The Artificiality of "Human Authorship" and the Intelligence in Expanding Copyright Authorship to Fully-Autonomous AI
The U.S. Copyright Review Board (the "Board") decided that works entirely created by fully-autonomous artificial intelligence ("AI") are not entitled to copyright protections. The Board based its decision on a copyrightability requirement referred to as “human authorship.” However, the Copyright Act of 1976 (the "Act") never mentions a “human” requirement for copyright authorship, nor do most of the Board’s cited authorities. Denying authorship to intellectually impressive and economically valuable works under a poorly established legal subelement is antithetical to copyright law’s history and to Congress’s constitutional mandate to “promote . . . [the] useful [a]rts . . . .” It leaves creators who use AI to create works with no protections for their creations. This Note argues that, when the various copyright-law authorities that allegedly establish a “human authorship” requirement are properly interpreted, copyright law requires not “human authorship” but “intellectual labor.” Under this standard, AI-produced works are entitled to copyright protections.
Perspective.
https://www.frontiersin.org/articles/10.3389/frbhe.2023.1338608/full
The Ethics and Behavioral Economics of Human-AI Interactions: Navigating the New Normal
Although some patterns already documented for interactions with previous generations of technologies are likely to extend to the current wave of AI, some of its features warrant specific examination. In particular, the ability of AI systems to continuously learn from new data and experiences means that they can evolve over time, even in real time, offering contextually relevant interactions and providing information that is tailored to the individual user's needs. On the one hand, this changes the performance expectations of the user; on the other hand, it makes the outcomes less predictable, and the process more opaque, than in interactions with older generations of automated agents. In essence, the special quality of AI lies in its mimicry of human learning processes and its adaptability to the user. This feature opens a space for strategic interactions on both sides: human users may adjust their behavior to generate desirable outcomes, for example, to affect individualized pricing; AI agents might adjust their behavior to increase engagement, for instance, by offering the information that the user is more likely to like, thus potentially fostering and amplifying biases, creating echo chambers, and spreading disinformation.

These peculiarities raise questions and concerns not for a distant future; they are immediate and pressing as AI technologies become more capable and widespread. How, for example, is cooperation achieved when humans interact with "artificial agents"? What is different or similar as compared to human-human interactions? Do people display similar or different behavioral tendencies and biases (other-regarding preferences, time preferences, risk attitudes, (over)confidence, etc.) when interacting with artificial agents as compared to humans? What are people's attitudes toward the use of intelligent machines for certain tasks or functions? What moral concerns does this raise? What are the reasons for any potential opposition to the reliance on AI-operated machines for certain tasks?

Behavioral economics offers a lens to understand the nuanced ways in which interacting with AI affects human behavior. The papers in this special issue highlight the breadth of questions to be addressed: from the role of human personality traits in hybrid interactions, to reliance on technology, intergroup dynamics, and immoral behavior. The findings from these studies, as well as from many ongoing research efforts, remind us that this interaction is not a simple case of mechanical replacement but a fundamental transformation of the decision landscape. AI's influence on human behavior is intricate and often counterintuitive. The presence of AI alters the context in which decisions are made, the information that is available, and the strategies that are employed. Various foundational methods in behavioral economics, such as laboratory and field experiments, have been employed to provide causal evidence on the topic. These methods effectively abstract from and control for potential confounding factors that might be challenging or unfeasible to isolate using observational data. In addition, new tools, such as field-in-the-lab experiments with a learning factory, allow investigating real-world interactions in a controlled environment. Taking stock of existing evidence and theoretical contributions, moreover, conceptual analyses can offer unique insights from a number of the regularities documented in previous studies.

The interaction with AI is dynamic and evolving due to the rapid pace of technological change. Although the exact sizes of the estimated effects might be context-specific and may change from one generation of a technology to another, we can and should study underlying behavioral regularities that are persistent and shape the general framework of the interaction with technology. The overarching narrative is clear: the rise of AI is not just a technological or economic phenomenon, but a behavioral one. The research presented here is united by a common goal: to navigate the ethical and economic implications of our deepening relationship with AI. The insights gleaned from these and many other studies to come can help pave the way for a future where AI and human behavior co-evolve in a manner that is beneficial and, above all, human-centric.