This week we are discussing HIPAA. Is GDPR worse?
https://www.dutchnews.nl/news/2019/07/hospital-fined-e460000-for-privacy-breaches-after-barbie-case/
Hospital fined €460,000 for privacy breaches after Barbie case
The Haga hospital in The Hague has been fined €460,000 for poor patient file security, after it emerged a tv reality soap star’s medical records had been accessed by dozens of unauthorised members of staff.
The Dutch privacy watchdog Autoriteit Persoonsgegevens said its research showed patient records at the hospital are still not properly secure.
… The hospital gave 85 members of staff an official warning for looking at the medical files of Samantha de Jong, better known as Barbie, when she was hospitalised after a suicide attempt last year.
The members of staff were not involved in treating the tv reality star and were therefore not entitled to check her files, the hospital said.
Concerns about privacy have been one of the major brakes on developing a nationwide digital medical record system in the Netherlands.
“Everything in war is very simple. But the simplest thing is difficult.” Carl von Clausewitz. Same with Computer Security.
How Small Mistakes Lead to Major Data Breaches
Four out of five of the top causes of data breaches are down to human or process error. In other words, human mistakes that could’ve been remedied with cybersecurity training or more careful consideration of security practices.
So far, no significant AI attack has been identified.
How can attackers abuse artificial intelligence?
- Adversaries will continue to learn how to compromise AI systems as the technology spreads
- The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against
- Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an “AI arms race”
- Securing AI systems against attacks may cause ethical issues (for example, increased monitoring of activity may infringe on user privacy)
- AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries
Won’t you take the AI’s word for it?
Good luck deleting someone's private info from a trained neural network – it's likely to bork the whole thing
AI systems have weird memories. The machines desperately cling onto the data they’ve been trained on, making it difficult to delete bits of it. In fact, they often have to be completely retrained from scratch with the newer, smaller dataset.
That’s no good in an age where individuals can request their personal data be removed from company databases under privacy measures like Europe’s GDPR rules. How do you remove a person’s sensitive information from a machine learning model that has already been trained? A 2017 research paper by law and policy academics hinted that it may even be impossible.
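In practice the only dependable remedy today is usually the blunt one the article describes: drop the affected records and retrain from scratch. Below is a minimal sketch of that idea, assuming a scikit-learn style workflow; the record fields (user_id, features, label) and the helper names are hypothetical, not from the article or the paper.

# Sketch of "erasure by retraining", assuming each record is a dict like
# {"user_id": ..., "features": [...], "label": ...}. Names are illustrative.
from sklearn.linear_model import LogisticRegression

def drop_user(dataset, user_id):
    """Remove every record belonging to the person requesting erasure."""
    return [row for row in dataset if row["user_id"] != user_id]

def retrain_without(dataset, user_id):
    remaining = drop_user(dataset, user_id)
    X = [row["features"] for row in remaining]
    y = [row["label"] for row in remaining]
    # The old model is discarded entirely: its weights may still encode the
    # erased person's data, so a fresh model is fit on what is left.
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

Researchers have proposed cheaper “machine unlearning” schemes that avoid retraining the whole model, but full retraining on the reduced dataset remains the baseline the article alludes to.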
So what’s the answer?
How The Software Industry Must Marry Ethics With Artificial Intelligence
Intelligent, learning, autonomous machines are about to change the way we do business forever. But in a world where corporations or even executives may be liable in a civil or even criminal court for their decisions, who is responsible for decisions made by artificial intelligence (AI)?
In the United States, courts are already having to wrestle with this science fiction scenario after an Arizona woman was killed by an experimental autonomous Uber vehicle. The European Commission recently shared ethical guidelines, requiring AI to be transparent, have human oversight and be subject to privacy and data protection rules.
… How can we, as Dr. Joanna Bryson points out, avoid being “manipulated into situations where corporations can limit their legal and tax liability just by fully automating their business processes?”
I keep looking for something I understand.
How to explain deep learning in plain English
… “For decades, in order to get computers to respond to our requests for information, we had to learn to speak to them in a way they would understand,” says Tom Wilde, CEO at Indico Data Solutions. “This meant having to learn things like boolean query language, or how to write complex rules that carefully instructed the computer what actions to take.
… “Deep learning’s arrival flips that [historical context] on its head,” Wilde says. “Now the computer says to us, you don’t need to worry about carefully constructing your request ahead of time – also known as programming – but rather provide a definition of the desired outcome and an example set of inputs, and the deep learning algorithm will backward solve the answer to your question. Now non-technical people can create complex requests without knowing any programming.”
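A toy illustration of the shift Wilde describes, sketched in Python: the example texts, labels and keyword list are invented, and a simple scikit-learn text classifier stands in for a deep network to keep it short. Instead of hand-writing Boolean rules that spell out what a “complaint” looks like, you supply labelled examples of the desired outcome and let the model infer the mapping.

# Old style: hand-written rules that tell the computer exactly what to look for.
def rule_based(text):
    keywords = ("refund", "broken", "terrible", "not working")
    return "complaint" if any(k in text.lower() for k in keywords) else "other"

# Example-driven style: give inputs plus the desired outcome and let the model
# work out the rules. (A scikit-learn pipeline stands in for a deep network;
# the training examples below are made up.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("My order arrived broken and I want a refund", "complaint"),
    ("The service was terrible and nothing works", "complaint"),
    ("Thanks, the package arrived right on time", "other"),
    ("Please add me to next month's newsletter", "other"),
]
texts, labels = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)          # "programming" by example
print(model.predict(["this thing is broken, I need my money back"]))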
Interesting law. Does this apply to any terminated employee?
Lyft broke the law when it failed to tell Chicago about a driver it kicked off its app. A month later he was accused of killing a taxi driver while working for Uber
Lyft could face penalties of up to $10,000 for failing to report an incident to Chicago authorities last year.
After deactivating driver Fungqi Lu in July 2018 following a fight with a local attorney, Lyft was required by law to alert the city's Department of Business Affairs and Consumer Protection. However, the Chicago Sun-Times reported on Monday that never happened.
Meanwhile, Lu continued to drive for Uber despite being kicked off the Lyft platform. (Many drivers work for multiple companies.) It was four weeks after the first incident when he was accused of fatally kicking a 64-year-old taxi driver, Anis Tungekar, in a heated traffic argument caught on video.
… Earlier this year, the family of the late Tungekar filed a lawsuit against Uber, alleging that the company was negligent in its hiring of Lu and seeking $10 million in damages. Uber declined to comment on its policies for instances like this but passed along the following statement:
"This is a horrible tragedy and our thoughts are with Mr. Tungekar's family and loved ones," a spokesperson said. "As soon as we were made aware of this, we immediately removed this individual's access from the platform. [What are they talking about? Bob]