Not all computer systems are managed by IT. Shouldn't they be?

Most Medical Imaging Devices Run Outdated Operating Systems
You'd think that mammography machines, radiology systems, and ultrasounds would maintain the strictest possible security hygiene. But new research shows that a whopping 83 percent of medical imaging devices run on operating systems that are so old they no longer receive any software updates at all.
That issue is endemic to internet of things devices generally, many of which aren't designed to receive software improvements or offer only a complicated path to doing so. But medical devices are an especially troubling category for the issue to show up in, particularly when the number of devices with outdated operating systems is up 56 percent since 2018. You can attribute most of that increase to Microsoft ending support for Windows 7 in January. As new vulnerabilities are found in the operating system, any device still running it won't get patches for them.
Not all IT decisions are based on experience. Some just sound good.

New Data Rules Could Empower Patients but Undermine Their Privacy
In a move intended to give Americans greater control over their medical information, the Trump administration announced broad new rules on Monday that will allow people for the first time to use apps of their choice to retrieve data like their blood test results directly from their health providers.
The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.
… Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.
Try to make sense of this… The report should be available later today.

Giant Report Lays Anvil on US Cyber Policy
Released today, the bipartisan Cyberspace Solarium Commission's report makes more than 75 recommendations that range from common-sense to befuddling.

Today, the US Cyberspace Solarium Commission published its final report. The 182-page document is the culmination of a year-long, bipartisan process to develop a new cyber strategy for the United States.
How does DNA differ from fingerprints?

Alexia Rodriguez of the ACLU writes:
Every two minutes, we shed enough skin cells to cover nearly an entire football field. With a single sneeze, we can spew 3,000 cell-containing droplets into the world. And, on average, we leave behind between 40 and 100 hairs per day. As long as we live in the world and leave our homes each day, we can’t avoid leaving a trail of our DNA in our wake.
Every strand of DNA holds a treasure trove of deeply personal information, from our propensity for medical conditions to our ancestry to our biological family relationships. And increasingly, police are accessing and testing the DNA contained in our unavoidably shed genetic material without judicial oversight. That’s why we’re asking a court to require police to get a warrant before collecting the DNA we unavoidably leave behind.
For the Privacy team. Start learning AI.

AI Predicted to Take Over Privacy Tech
More than 40% of privacy tech solutions aimed at ensuring legal compliance are predicted to rely on Artificial Intelligence (AI) over the course of the next three years, analysts from the business research and advisory firm Gartner Inc. have found.
The company—which is set to present these findings among others at the Gartner IT Symposium/Xpo™ 2020 in Toronto, Canada in May—has found that reliance on privacy tech to ensure compliance with various privacy laws is expected to increase by at least 700% between 2020 and 2023. This marks an increase from the 5% of privacy tech solutions that are AI driven today to the more than 40% that are predicted to become available within the next 36 months.
The first of many, I'm sure. Does each potential use of public information require a specific consent?

Vermont sues secretive facial recognition company Clearview AI
The state of Vermont has sued the company behind a facial recognition tool that has built a vast database from photos of private individuals it has gathered across the internet and social media platforms without consent.
Attorney General TJ Donovan announced Tuesday his office has filed a lawsuit in Vermont Superior Court in Chittenden County against Clearview AI, alleging the secretive business has violated the state's consumer protection law by illegally collecting images of Vermont residents, including children, and selling this information to private businesses, individuals and law enforcement.
… In January, around the time when the New York Times published its report on Clearview, the company registered as a data broker in Vermont — an entity that collects information about individuals from public and private sources for profit.
Data brokers that sell Vermonters' data must register annually with the state's data broker registry and provide certain information about business practices. In the registry, Clearview reported that it knowingly "possesses the brokered personal information of minors," according to the attorney general's lawsuit.
… The state's lawsuit also makes the case that when an individual uploads a photograph to Facebook for "public" viewing, they consent to others looking at the photograph but are not consenting to the "mass collection of those photographs by an automated process that will then put those photographs into a facial recognition database."
For all my students.

Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI
Many organizations have published principles intended to guide the ethical development and deployment of AI systems; however, their abstract nature makes them difficult to operationalize. Some organizations have therefore produced AI ethics checklists, as well as checklists for more specific concepts, such as fairness, as applied to AI systems. But unless checklists are grounded in practitioners' needs, they may be misused. To understand the role of checklists in AI ethics, we conducted an iterative co-design process with 48 practitioners, focusing on fairness. We co-designed an AI fairness checklist and identified desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates. We discuss aspects of organizational culture that may impact the efficacy of such checklists, and highlight future research directions.
Recreational privacy.