A way to monetize ‘fake news’? There is a security fix: print a one-time review ‘password’ on the check. (So much for privacy…)
https://ny.eater.com/2022/7/8/23200007/one-star-review-scam-extortion-nyc-restaurants
Scammers Are Threatening NYC Restaurants With 1-Star Reviews in Exchange for Online Gift Cards — Um, What?
An extortion scam affecting restaurant owners across the country has touched down in New York City. Restaurants including Avant Garden in the East Village, Dame in Greenwich Village, and Huertas in the East Village are among the first to be hit by the online scammers who are threatening to leave one-star reviews on restaurants’ business pages until their owners hand over gift cards to Google’s app store.
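What might that one-time ‘password’ fix look like in practice? A minimal sketch, assuming a hypothetical ReviewGate service; every name below is illustrative and not from the article:

# Illustrative sketch only: issue a single-use code on each check, and accept
# a review only if it presents a code that was issued and not yet redeemed.
import secrets

class ReviewGate:
    def __init__(self):
        self._unused_codes = set()

    def issue_code(self) -> str:
        """Generate a one-time code to print on a customer's check."""
        code = secrets.token_urlsafe(8)
        self._unused_codes.add(code)
        return code

    def redeem(self, code: str) -> bool:
        """Accept a review only if its code was issued and is still unused."""
        if code in self._unused_codes:
            self._unused_codes.remove(code)  # single use: burn the code
            return True
        return False

gate = ReviewGate()
check_code = gate.issue_code()      # printed on the diner's receipt
print(gate.redeem(check_code))      # True: first (and only) valid use
print(gate.redeem(check_code))      # False: a replayed or extorted code is rejected
print(gate.redeem("made-up-code"))  # False: never issued

Of course, a redeemed code ties each review back to a specific check, which is exactly the privacy tradeoff noted above.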
(Related)
https://sfist.com/2022/07/09/upscale-sf-restaurants-targeted-by-cookie-cutter-bad-reviews-from-online-trolls/
Upscale SF Restaurants Targeted by Cookie-Cutter Bad Reviews From Online Trolls
Review aggregation websites like Yelp! and OpenTable have a massive impact on small businesses, like bars and eateries. It's been estimated that just an extra half-star rating increase can cause restaurants to sell out 19% more frequently; conversely, a full-star rating drop can be financially devastating to an eatery, especially when those reviews are based on baseless accounts.
A simple question? Disclosure may trigger bias?
https://www.researchgate.net/profile/Kai-Spriestersbach/publication/361547935_AI_Ethics_When_does_an_AI_voice_agent_need_to_disclose_itself_as_an_AI_agent/links/62b8541ddc817901fc7e7c99/AI-Ethics-When-does-an-AI-voice-agent-need-to-disclose-itself-as-an-AI-agent.pdf
AI Ethics: When does an AI voice agent need to disclose itself as an AI agent?
There is an ongoing debate in the field of artificial intelligence (AI) about when, or even if, AI agents should reveal themselves as such to humans. The research investigates business policy and principles, as well as academic research, on when an AI agent needs to disclose itself to an end-user who might not be aware they are interacting with an AI agent. The research identifies key situations and conditions under which an AI agent needs to disclose itself to the end-user. Moreover, the investigation outlines the gap between the business and academic worlds on AI disclosure to humans.
(Related)
https://link.springer.com/chapter/10.1007/978-3-030-95346-1_128
Humanizing the Terminator: Artificial Intelligence Trends in the Customer Journey: An Abstract
The current use of artificial intelligence (AI) in marketing is to assist and empower consumers or a human workforce. While AI is not yet replacing humans (Chen et al. 2019; Davenport et al. 2020), it is transforming many industries (Huang and Rust 2018; Rust 2020; Wirth 2018). Whether consumers recognize it or not, AI is already embedded in many aspects of today’s customer journey. In this process, tradeoffs between data privacy, AI-driven technology, and the resulting benefits have blurred and, at times, been accepted by consumers via social complacency. There is evidence that this tradeoff can create a feeling of cognitive dissonance in some users of AI.
The theory of cognitive dissonance proposes that when a person holds two inconsistent thoughts, beliefs, attitudes, or actions, dissonance (mental distress) will occur (Festinger 1957). Dissonance is uncomfortable, and thus people will seek to resolve that discomfort through various strategies, such as creating an explanation that allows the inconsistency to exist or rejecting new information that conflicts with existing beliefs (Festinger 1964). Research by Levin et al. (2010) supports that cognitive dissonance is increased in human-robot interactions as compared to human-human interactions for similar purposes.
Much of the existing research has examined the perceptions and behaviors of those aware of an AI-based interaction, not those who may be interacting with AI unknowingly. The purpose of this research is to explore the differences in attitudes and behaviors of consumers when they are and are not aware of the existence of AI, and how cognitive dissonance may play a role in their AI interactions. This study will employ a mixed-methods approach consisting of a consumer survey and interviews to better understand this phenomenon.
Criminals are people. AI is a criminal. Therefore, AI is a person?
https://elib.sfu-kras.ru/handle/2311/147462
Criminal Liability for Actions of Artificial Intelligence: Approach of Russia and China
In the era of artificial intelligence (AI) it is necessary not only to define precisely in national legislation the extent of protection of personal information and the limits of its rational use by other people, to improve data algorithms, and to create ethics committees to control risks, but also to establish precise liability (including criminal liability) for violations related to AI agents. Under the existing criminal law of Russia and the criminal law of the People’s Republic of China, AI crimes can be divided into three types: crimes that can be regulated with existing criminal laws; crimes that are regulated inadequately by existing criminal laws; and crimes that cannot be regulated with existing criminal laws.
The solution to the problem of criminal liability for AI crimes should depend on the capacity of the AI agent to influence a human's ability to understand the public danger of an action and to guide his activity or omission. If a machine integrates with an individual but does not influence his ability to recognize or to make decisions, the individual is liable to prosecution. If a machine partially influences a human's ability to recognize or to make decisions, the engineers, designers, and units of combination should be prosecuted according to a principle of relatively strict liability. When an AI machine integrates with an individual and controls his ability to recognize or to make decisions, the individual should be released from criminal prosecution.
Relentless surveillance…
https://www.pogowasright.org/why-privacy-matters-a-conversation-with-neil-richards/
Why Privacy Matters: A Conversation with Neil Richards
Julia Angwin writes:
Hello, friends,
In the wake of the Supreme Court’s jaw-dropping ruling overturning constitutional protections for abortion in the United States, there’s been a lot of discussion about how to keep data about pregnant people private.
Google announced, for instance, that it would remove sensitive locations, such as abortion clinics, from the location data it stores about users of its Android phones. Many people—including me in this newsletter—worried about whether they or their loved ones should delete their period-tracking apps.
But as Vox reporter Sara Morrison wisely observed, “[D]eleting a period tracker app is like taking a teaspoon of water out of the ocean.” So much data is collected about people these days that removing a small amount of data from an app or a phone is not going to erase all traces of a newly criminalized activity.
The Electronic Frontier Foundation notes that pregnant people are far more likely to be turned over to law enforcement by hospital staff, a partner, or a family member than by data in an app—and that the types of digital evidence used to indict people are often text messages, emails, and web search queries.
So how do you protect yourself in a world of relentless surveillance? This seems like a good time to go back to the basics and understand what privacy is and why we seek it. Because it’s not just people fearing arrest who need it, but all of us.
And so this week, I turned to an expert on this topic, Neil Richards, Koch Distinguished Professor in Law at Washington University in St. Louis. Richards is the author of two seminal privacy books: “Why Privacy Matters” (Oxford Press, 2022) and “Intellectual Privacy” (Oxford Press, 2015). He also serves on the board of the Future of Privacy Forum and the Electronic Privacy Information Center and is a member of the American Law Institute. He served as a law clerk to William H. Rehnquist, former chief justice of the Supreme Court.
Read Julia’s conversation with Neil Richards at The Markup.
A completely acceptable use for facial recognition?
https://futurism.com/the-byte/smart-pet-door-facial-recognition
SMART PET DOOR USES FACIAL RECOGNITION TO KEEP STRANGE ANIMALS OUT
(Related)
https://www.proquest.com/openview/f9e891252c379bbcd6e91885ed30d095/1?pq-origsite=gscholar&cbl=18750&diss=y
Principles for Facial Recognition Technology: A Content Analysis of Ethical Guidance
Ethical issues are a significant challenge for the facial recognition technology industry and the users of this technology. In response to these issues, private sector, government, and civil society groups created ethical guidance documents. The primary purpose of this qualitative content analysis study was to identify common ethical recommendations in these facial recognition technology ethical guidance documents. The research questions explored within this study included: What are the common recommendations in facial recognition ethical guidance; are there certain ethical recommendations that are more prevalent; and are there differences between recommendations from governments, the private sector, or other organizational groups? The scope of the study was limited to ethical guidance documents published within the United States or published by international groups that included representation from the United States. Using a qualitative content analysis research methodology with an inductive research design for theme development, eight themes were identified representing the common recommendations in facial recognition technology ethical guidance documents. The eight thematic categories of common recommendations were privacy, responsibility, accuracy and performance, accountability, transparency, lawful use, fairness, and purpose limitation. The research findings can inform ethical debates and might further the development of ethical norms within the industry. The findings also have significant implications for practice, providing organizations with a deeper understanding of the most common recommendations across all organizational groups, knowledge of the differences between organizational groups, and thus a sense of where there might be an opportunity for organizations to demonstrate ethical leadership.
I read a lot of science fiction, therefore I know what might be possible with the right AI. Anyone got a few million to invest in my start-up?
https://www.technologyreview.com/2022/07/07/1055526/why-business-is-booming-for-military-ai-startups/
Why business is booming for military AI startups
Exactly two weeks after Russia invaded Ukraine in February, Alexander Karp, the CEO of data analytics company Palantir, made his pitch to European leaders. With war on their doorstep, Europeans ought to modernize their arsenals with Silicon Valley’s help, he argued in an open letter. For Europe to “remain strong enough to defeat the threat of foreign occupation,” Karp wrote, countries need to embrace “the relationship between technology and the state, between disruptive companies that seek to dislodge the grip of entrenched contractors and the federal government ministries with funding.”
Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.
Since the war started, the UK has launched a new AI strategy specifically for defense, and the Germans have earmarked just under half a billion dollars for research and artificial intelligence within a $100 billion cash injection to the military.