A pretty good bad example. I wonder what their contract with the POS vendor says about liability?
POS Malware Found at 102 Checkers Restaurant Locations
The popular Checkers and Rally’s drive-through restaurant chain was attacked by Point of Sale (POS) malware impacting 15 percent of its stores across the U.S.
… “We recently became aware of a data security issue involving malware at certain Checkers and Rally’s locations,” said Checkers in a website advisory on Wednesday.
… The incident impacted 102 Checkers stores across 20 states – all exposed for varying periods, from as early as December 2015 to as recently as April 2019 (a full list of impacted stores is on Checkers’ data breach security advisory page).
I don’t need to spend much time gathering examples for my Computer Security class.
NY Investigates Exposure of 885 Million Mortgage Documents
New York regulators are investigating a weakness that exposed 885 million mortgage records at First American Financial Corp. as the first test of the state’s strict new cybersecurity regulation. That measure, which went into effect in March 2019 and is considered among the toughest in the nation, requires financial companies to regularly audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful.
On May 24, KrebsOnSecurity broke the news that First American had just fixed a weakness in its Web site that exposed approximately 885 million documents — many of them with Social Security and bank account numbers — going back at least 16 years. No authentication was needed to access the digitized records.
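Records reachable without any authentication check are the classic “insecure direct object reference” weakness class. A minimal, purely illustrative Python sketch of why such an exposure is so easy to harvest (the record store, endpoint, and IDs here are invented for illustration, not First American’s actual system):

```python
# Hypothetical sketch of an insecure-direct-object-reference exposure:
# records keyed by predictable sequential IDs, served with no auth check.

def fetch_document(doc_id, store):
    """Simulated endpoint. A safe endpoint would verify the caller's
    credentials and authorization for this record; this one does not,
    mirroring the weakness class described in the article."""
    return store.get(doc_id)

def enumerate_documents(start_id, count, store):
    """Walk sequential IDs and collect every record returned without
    supplying any credentials at all."""
    leaked = []
    for doc_id in range(start_id, start_id + count):
        record = fetch_document(doc_id, store)  # no credentials supplied
        if record is not None:
            leaked.append((doc_id, record))
    return leaked

if __name__ == "__main__":
    # Toy record store standing in for a document database.
    store = {1000: "closing docs A", 1001: "closing docs B",
             1003: "closing docs C"}
    print(enumerate_documents(1000, 5, store))
```

The point of the sketch: once IDs are guessable and no authorization check runs per record, a simple loop retrieves everything, which is why the scale of such exposures tends to be “all of it” rather than a handful of records.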
I doubt they are in a hurry. Let’s see what Brazil and California do…
… When Europe first implemented the gold-standard GDPR privacy law, Apple was one of the first companies to pledge to offer similar protections to its customers globally, not just to EU citizens …
However, the company went on to argue that it’s not enough to rely on companies to voluntarily do the right thing and that the US needs its own version of GDPR.
Others have since joined the call, including Microsoft, Google, and even Facebook.
This is less surprising than it might seem even for companies where users are the product: it’s better for a company to know ahead of time what it can and can’t do than to make business decisions based on practices which may later be outlawed.
… There seem to be three main sticking points. First, ensuring that the law doesn’t place too great a burden on small businesses, which are not as well placed as large companies to absorb compliance costs. Second, disagreement between Republicans and Democrats on the role of the FTC. Third, concern among Democrats in particular that the federal government would be overriding privacy laws already being created at the state level.
A mini-GDPR?
Zack Whittaker reports:
Good news!
Maine lawmakers have passed a bill that will prevent internet providers from selling consumers’ private internet data to advertisers.
The state’s senate unanimously passed the bill 35-0 on Thursday following an earlier vote by state representatives 96-45 in favor of the bill.
Read more on TechCrunch.
Cost? Who cares about cost? My students, for starters.
Understanding the GDPR Cost of Continuous Compliance
Before the new European General Data Protection Regulation (GDPR) went into effect in May 2018, both small- and mid-sized companies and larger enterprises found themselves scrambling to comply with a regulation they found vague and complex, with no clear path to achieving compliance. Now, one year later, we have a much better view of not just the GDPR cost to prepare for the new regulatory environment, but also how much organizations are spending on continuous compliance. A new report from DataGrail, “The Cost of Continuous Compliance,” provides valuable benchmarking data on just how much organizations are spending – both in terms of financial resources and time – in order to keep up with the demands of continuous compliance.
Interesting because it’s not how we normally look at AI. More like Milton Friedman’s pencil.
Alexa, please explain the dark side of artificial intelligence
Last year Kate Crawford, a New York University professor who runs an artificial intelligence research centre, set out to study the “black box” of processes that exist around the hugely popular Amazon Echo device.
Crawford did not do what you might expect when approaching AI – namely, study algorithms, computing systems and suchlike. Instead, she teamed up with Vladan Joler, a Serbian academic, to map the supply chains, raw materials, data and labour that underpin Alexa, the AI agent that Echo’s users talk to. It was a daunting process – so much so that Joler and Crawford admit that their map, Anatomy of an AI System, is just a first step. The results are both chilling and challenging. For what the map shows is that contemporary western society is blind to the real price of its thirst for technology.
[Anatomy of an AI System: https://anatomyof.ai/ ]
[Friedman’s pencil: https://fee.org/articles/milton-friedman-reveals-the-humbling-truth-of-i-pencil-in-just-two-minutes/ ]
Seems to indicate we have a long way to go.
HOW DO YOU TEACH A MACHINE RIGHT FROM WRONG? ADDRESSING THE MORALITY WITHIN ARTIFICIAL INTELLIGENCE
In his new novel, Machines Like Me, the novelist Ian McEwan tells the story, set in an alternate history in England in 1982, of a man who buys a humanoid robot.
… One of the first things Adam says when he is switched on is “I don’t feel right,” and, typically for cautionary tales about robots, it only gets worse from there.
… Based on an archive of ethnographic research on various societies, known as the Human Relations Area Files, the research has revealed seven “plausible candidates for universal moral rules” that are constant among 60 societies randomly chosen around the world, from bands of hunter-gatherers to industrialized nation states. These behaviours were regarded as “uniformly positive,” without exception, in every society studied, from Ojibwa, Tlingit and Copper Inuit in North America, to Somali, Korea, Highland Scots, Serbs, and Lau Fijians internationally.
The rules are: to allocate resources to kin; be loyal to groups; be reciprocal in altruism; be brave, strong, heroic and dominant like a hawk; be humble, subservient, respectful and obedient like a dove; be fair in dividing resources; and recognize property rights.
… McEwan’s novel opens with a quotation from a Rudyard Kipling poem about the terrifying promise of the industrial age: “But remember, please, the Law by which we live, / We are not built to comprehend a lie…”
The line that follows in Kipling’s poem seems equally grim today, in the age of AI, now that robots threaten to live up to all the good and evil of human behaviour: “We can neither love nor pity nor forgive. / If you make a slip in handling us you die!”
[The Kipling poem:
(Related)
Video. 1:31
Standards and Oversight of Artificial Intelligence
The National Institute of Standards and Technology (NIST) and The Information Technology and Innovation Foundation (ITIF) Center for Data Innovation hosted a discussion on setting standards and oversight for artificial intelligence. Among the panelists were representatives from federal agencies working on scientific standards as well as researchers and technology developers working for firms in the artificial intelligence space. They talked about the benefits of setting technological standards early for both private companies and government agencies, and ways the two could work together to expedite standards.
It’s a start. Ethics will be a large part of my Security Compliance class this summer.
SF State launches new certificate in ethical artificial intelligence
… Artificial intelligence (AI) has the potential to transform our life and work, but it also raises some thorny ethical questions. That’s why a team of professors from three different colleges at San Francisco State University has created a new graduate certificate program in ethical AI for students who want to gain a broader perspective on autonomous decision-making.
The program is one of just a handful focusing on AI ethics nationally and is unique in its collaborative approach involving the College of Business, Department of Philosophy and Department of Computer Science.
… Courses for the certificate will begin this fall with a philosophy class focusing on the idea of responsibility, which will also give some historical context for modern AI and discuss its impacts on labor.
… In another course, students will learn about how businesses can act ethically and will consider their responsibility to ensure that technology — for instance, facial recognition — doesn’t interfere with the rights of others.