Computer Security is about making sure this cannot happen.
Julie Anderson reports:
An IT error resulted in the deletion of patient records Tuesday from the Creighton University Campus Pharmacy at 2412 Cuming St.
The lost data includes prescription and refill history and insurance information for all customers. A count of customers wasn’t immediately available, but the pharmacy filled 50,000 prescriptions in 2017.
The incident did not involve a breach, Creighton officials said. No patient records were stolen; the data was deleted.
However, the loss means that the pharmacy’s database must be rebuilt. All patient data must be re-entered and new prescriptions obtained from physicians.
So… was this/is this pharmacy a HIPAA-covered entity? It would seem that it almost certainly is. So where was its risk assessment? And did they really have no backup?
This may not be a reportable breach under HIPAA and HITECH, but HHS OCR should be auditing them and looking into this if they are covered by HIPAA.
(Related) Or this.
Frédéric Tomesco reports:
More than 2.9 million Desjardins Group members have had their personal information compromised in a data breach targeting Canada’s biggest credit union.
The incident stems from “unauthorized and illegal use of internal data” by an employee who has since been fired, Desjardins said Thursday in a statement. Computer systems were not breached, the cooperative said. [But the data was… Bob]
Names, dates of birth, social insurance numbers, addresses and phone numbers of about 2.7 million individual members were released to people outside the organization, Desjardins said. Passwords, security questions and personal identification numbers weren’t compromised, Desjardins stressed. About 173,000 business customers were also affected.
The statement from Desjardins Group does not offer any explanation of the former employee’s “ill-intentioned” conduct. Was the employee selling the data to criminals? Were they selling it to spammers? Were they giving it to a competitor? It would be easier to evaluate the risk to individuals if we knew more about the crime itself, I think.
Anyone can (and eventually will) screw up.
Thomas Brewster reports:
Investigators at the FBI and the DHS have failed to conceal minor victims’ identities in court documents where they disclosed a combination of teenagers’ initials and their Facebook identifying numbers—a unique code linked to Facebook accounts. Forbes discovered it was possible to quickly find their real names and other personal information by simply entering the ID number after “facebook.com/”, which led to the minors’ accounts.
In two cases unsealed this month, multiple identities were easily retrievable by simply copying and pasting the Facebook IDs from the court filings into the Web address.
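The exposure here is nothing more than URL construction: Facebook resolves a numeric account ID appended after its domain straight to the profile page. A minimal Python sketch of the lookup Forbes describes; the ID shown is a made-up placeholder, not one from the filings.

def profile_url(facebook_id: str) -> str:
    # Facebook serves the profile page for a numeric account ID
    # placed directly after "facebook.com/".
    return f"https://www.facebook.com/{facebook_id}"

# Hypothetical ID for illustration only.
print(profile_url("100000000000001"))

Which is exactly why redacting a name while leaving the ID in a public filing accomplishes nothing.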
Auditors are a skeptical bunch.
Cyber Crime Widely Underreported Says ISACA 2019 Annual Report on Cyber Security Trends
… The headliner of the most recent part of the cyber security trends report is the underreporting of cyber crime around the globe, which appears to have become normalized. About half of the respondents indicated that they feel that most enterprises do not report all of the cyber crime that they experience, including incidents that they are legally obligated to disclose.
This is taking place in a cyber security landscape in which just under half of the respondents said that cyber attacks had increased in the previous year, and nearly 80% expect to have to contend with a cyber attack on their organization next year. And only a third of the cyber security leaders reported “high” confidence in the ability of their teams to detect and respond to such an attack.
Phishing for people who should know better.
Phishing Campaign Impersonates DHS Alerts
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert on a phishing campaign using attachments that impersonate the Department of Homeland Security (DHS).
In an effort to make their attack successful, the phishers spoofed the sender email address to appear as a National Cyber Awareness System (NCAS) alert.
Using social engineering, the attackers then attempt to trick users into clicking the attachments, which were designed to appear as legitimate DHS notifications.
The attachments, however, are malicious, and the purpose of the attack was to lure the targeted recipients into downloading malware onto their systems.
… “CISA will never send NCAS notifications that contain email attachments. Immediately report any suspicious emails to your information technology helpdesk, security office, or email provider,” the alert concludes.
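Because the lure rests on a spoofed From: address, one coarse, illustrative check is to compare the claimed sender domain against the SPF verdict the receiving server recorded in the Authentication-Results header. A minimal sketch using only Python’s standard library; the raw message, domains, and header values are hypothetical, and a real deployment would rely on full SPF/DKIM/DMARC evaluation rather than this string check.

import email
from email.utils import parseaddr

# Hypothetical raw message mimicking a spoofed NCAS alert.
RAW = """\
From: US-CERT Alerts <alerts@us-cert.example>
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=mailer.badhost.example
Subject: NCAS Alert AA19-000A

Please review the attached DHS notification.
"""

msg = email.message_from_string(RAW)

# Domain the message *claims* to come from.
_, from_addr = parseaddr(msg["From"])
claimed_domain = from_addr.rpartition("@")[2]

# What the receiving server's SPF check actually concluded.
auth_results = msg.get("Authentication-Results", "")

if "spf=pass" not in auth_results:
    print(f"Suspicious: mail claiming to be from {claimed_domain!r} "
          f"did not pass SPF: {auth_results.strip()}")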
Maybe GDPR hasn’t solved all the problems yet.
Behavioural advertising is out of control, warns UK watchdog
The online behavioural advertising industry is illegally profiling internet users.
That’s the damning assessment of the U.K.’s data protection regulator in an update report published today, in which it sets out major concerns about the programmatic advertising process known as real-time bidding (RTB), which makes up a large chunk of online advertising.
In what sounds like a knock-out blow for highly invasive data-driven ads, the Information Commissioner’s Office (ICO) concludes that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of U.K. and pan-EU privacy laws.
“The adtech industry appears immature in its understanding of data protection requirements,” it writes.
I’m shocked, shocked I tell you!
Americans lack trust in social networks’ judgment to remove offensive posts, study finds
… In a survey published Wednesday by Pew Research Center, 66 percent of Americans say social networks have a responsibility to delete offensive posts and videos. But determining the threshold for removal has been a tremendous challenge for such companies as Facebook, YouTube and Twitter — exposing them to criticism that they’ve been too slow and reactive to the relentless stream of abusive and objectionable content that populates their platforms.
… When it comes to removing offensive material, 45 percent of those surveyed said they did not have much confidence in the companies to decide what content to take down.
More art than science and with strong regional bias?
Medicine contends with how to use artificial intelligence
Artificial intelligence (AI) is poised to upend the practice of medicine, boosting the efficiency and accuracy of diagnosis in specialties that rely on images, like radiology and pathology. But as the technology gallops ahead, experts are grappling with its potential downsides. One major concern: Most AI software is designed and tested in one hospital, and it risks faltering when transferred to another. Last month, in the Journal of the American College of Radiology, U.S. government scientists, regulators, and doctors published a road map describing how to convert research-based AI into software for medical imaging on patients. Among other things, the authors urged more collaboration across disciplines in building and testing AI algorithms and intensive validation of them before they reach patients. Right now, most AI in medicine is used in research, but regulators have already approved some algorithms for radiologists. Many studies are testing algorithms to read x-rays, detect brain bleeds, pinpoint tumors, and more.
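The single-site concern is, at bottom, a distribution-shift problem: a model fit to one hospital’s scanners, protocols, and patient mix can degrade badly elsewhere. A minimal sketch of the external validation the road map urges, using synthetic data and scikit-learn; the feature model and “hospital” labels are illustrative assumptions, not anything from the article.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_site(n, shift):
    # Synthetic "hospital": `shift` stands in for site-specific
    # differences in scanners, demographics, and labeling practice.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

# Train at hospital A, then validate internally and at hospital B.
X_a, y_a = make_site(2000, shift=0.0)
X_b, y_b = make_site(2000, shift=1.0)

model = LogisticRegression().fit(X_a, y_a)
print("internal validation:", accuracy_score(y_a, model.predict(X_a)))
print("external validation:", accuracy_score(y_b, model.predict(X_b)))

# The external score drops sharply, which is why site-to-site
# validation before clinical deployment matters.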
Another collection of what ifs…
Death by algorithm: the age of killer robots is closer than you think
… Right now, US machine learning and AI is the best in the world, [Debatable at best. Bob] which means that the US military is loath to promise that it will not exploit that advantage on the battlefield. “The US military thinks it’s going to maintain a technical advantage over its opponents,” Walsh told me.
In order to avoid that, AI development needs to be open, collaborative, and careful. Researchers should not be conducting critical AI research in secret, where no one can point out their errors. If AI research is collaborative and shared, we are more likely to notice and correct serious problems with advanced AI designs.
Probably, maybe.
The evolution of cognitive architecture will deliver human-like AI
But you can't just slap features together and hope to get an AGI
There's no one right way to build a robot, just as there's no singular means of imparting it with intelligence. Last month, Engadget spoke with Carnegie Mellon University associate research professor and director of the Resilient Intelligent Systems Lab, Nathan Michael, whose work involves stacking and combining a robot's various piecemeal capabilities as it learns them into an amalgamated artificial general intelligence (AGI). Think: a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes — pretty soon, you've got Rosie from The Jetsons.
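As a toy illustration only (not Michael's actual architecture), the "stack skills as they are learned" idea can be pictured as an agent that registers each new capability alongside the ones it already has; every name below is invented.

class Robot:
    """Toy agent that accumulates capabilities one at a time."""

    def __init__(self):
        self.skills = {}

    def learn(self, name, behavior):
        # Stack a newly learned capability on top of the existing ones.
        self.skills[name] = behavior

    def perform(self, name):
        return self.skills[name]()

rosie = Robot()
rosie.learn("vacuum", lambda: "vacuuming the floor")
rosie.learn("mop", lambda: "mopping the floor")
rosie.learn("dust", lambda: "dusting the shelves")

# The repertoire grows without retraining what came before.
for skill in rosie.skills:
    print(rosie.perform(skill))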
Lawyers are not always concerned about getting things done? I’m shocked!