A topic of great interest to my students.
TALK AT THE DEF CON CRYPTO & PRIVACY VILLAGE
I recently had the pleasure of speaking at the Crypto & Privacy Village, which is part of the massive DEF CON computer security conference (and which I help organize). My talk was about a topic that basically everyone seems to be interested in: Can you invoke your Fifth Amendment right against self-incrimination when the police demand that you unlock your smartphone?
The answer, unsurprisingly, is: It depends. I'm still waiting for video of the talk to go up on DEF CON's YouTube channel, and it'll be posted here on the CIS website once it does. But in the meantime, some folks have asked for my slides, so I'm attaching them here. (N.B. I should clarify something about the slide that mentions full-disk encryption: lately Android is moving from FDE to file-based encryption. This doesn't change the legal analysis.)
Attachments:
Giving Cops the Finger slide deck FINAL copy.pdf
This will happen sooner rather than later.
DEEPFAKES ARTICLE IN THE WASHINGTON STATE BAR ASSOCIATION MAGAZINE
I'm pleased to have written the cover story for the latest issue of NWLawyer, the magazine of the Washington State Bar Association. The article, available here, discusses the impact that so-called "deepfake" videos may have in the courtroom. Are existing authentication standards for the admission of evidence sufficient, or should the rules be changed? What ethical challenges will deepfakes pose for attorneys? How will deepfakes affect juries? These and other questions may come into play for courts and litigators in the near future, so they would be well advised to get ready while there is still time.
Resources recommended by experts.
How to operationalize AI ethics
… Sobhani’s urging echoes calls for a broader definition of AI teams from groups like OpenAI, whose researchers earlier this year penned a paper titled “AI Safety Needs Social Scientists.”
… One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.
… To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative, as well as FairTest, a tool for “discovering unwarranted associations within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
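The federated learning mentioned above is worth a quick illustration for technical readers: clients train on their own data locally and share only model updates with a central server, so the raw data never leaves the device. The sketch below shows federated averaging on a toy one-parameter regression; all function names and data are illustrative inventions, not any particular library's API.

```python
# Minimal federated-averaging sketch (illustrative, not a real framework).
# Each client computes a gradient step on its private data; the server
# only ever sees the averaged parameter, never the (x, y) pairs.

def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step for a 1-D least-squares model (y ~ w*x),
    computed entirely on the client's private data."""
    grad = sum(2 * (weights * x - y) * x for x, y in client_data) / len(client_data)
    return weights - lr * grad

def federated_average(global_weights, clients):
    """Server round: collect each client's locally computed update
    and aggregate by simple averaging."""
    updates = [local_update(global_weights, data) for data in clients]
    return sum(updates) / len(updates)

# Three clients, each holding private points drawn from y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(0.5, 1.0), (4.0, 8.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 2))  # converges toward the true slope of 2.0
```

Real deployments add secure aggregation and differential privacy on top of this basic loop, since even model updates can leak information about the underlying data.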
In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines.
… The notion of a checklist like the kind Microsoft introduced this spring has drawn criticism from some in the AI ethics community who feel that a step-by-step document can lead to a lack of thoughtful analysis for specific use cases.
… Thomas says checklists can be one part of a larger, ongoing process, and points to a data ethics checklist released earlier this year by former White House chief data scientist DJ Patil and Cloudera general manager of machine learning Hilary Mason.
A slideshow for my geeks.
The best open source software of 2019