Ready to start a serious discussion of privacy?
https://iapp.org/news/a/privacy-as-code-a-new-taxonomy-for-privacy/
Privacy as code: A new taxonomy for privacy
“Privacy
by design” implies putting privacy into practice in system
architectures and software development from the very beginning and
throughout the system lifecycle. It is required by the EU General
Data Protection Regulation in Article 25. In the U.S., the Federal
Trade Commission included an entire section on privacy by design in
its 2012
report on
recommendations for businesses and policymakers. Privacy by design
is also covered by India’s
PDP Bill and
by Australia’s
Privacy Management Framework,
to name just a few. Privacy by design has come a long way since its original presentation in 2009 by Ann Cavoukian, then Ontario's Information and Privacy Commissioner.
While privacy by design is conceptually simple, its reduction to practice
is not. System developers and privacy engineers responsible for it
face simple but hard-to-answer questions: Where is the actual data in
the organization? What types of information fall under personal
data? How does one set up a data deletion process for structured as
well as unstructured data?
Three
years ago, Cillian Kieran and his team at Ethyca embarked on a quest
to develop a unified solution to those questions. Their vision?
Nothing less than privacy-as-code – privacy built into the code
itself. This revolutionary approach classifies data in such a way
that its privacy attributes are obvious within the code structure.
… Last
week, Ethyca celebrated an additional $7.5 million in funding and
announced the first release of Fides.
Fides is named after the Roman goddess of trust.
Fides
is an open-source, human-readable description
language based
on the data-serialization language YAML. Fides allows one to write
code with privacy designed in. It is based on common definitions of
types, categories and purposes of personal data. Developers who use this language can easily see where privacy-related information lives at any point in development. For any given system, engineers should be able to tell at a glance whose data is in the system and what it is being used for.
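
To make that concrete, a Fides-style dataset manifest might look roughly like the sketch below. The keys and category labels are illustrative assumptions chosen for readability, not the project's actual schema or data taxonomy; the Fides documentation defines the real annotations.

    # Illustrative sketch only: hypothetical keys and category labels,
    # not the official Fides schema or taxonomy.
    dataset:
      - fides_key: customer_db
        name: Customer Database
        description: Application database holding user account records
        collections:
          - name: users
            fields:
              - name: email
                data_categories:
                  - user.contact.email      # personal contact data (assumed label)
              - name: last_login
                data_categories:
                  - system.operations       # operational metadata (assumed label)

Because declarations like this live in the repository next to the code they describe, a reviewer can see at a glance which fields carry personal data and flag changes that expand its use.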
Perhaps it is not too late.
https://sloanreview.mit.edu/article/catching-up-fast-by-driving-value-from-ai/
Catching Up Fast by Driving Value From AI
Some
organizations may feel that acquiring AI capabilities is a race, and
if a company starts late, it can never catch up.
That notion is
belied by Scotiabank (officially the Bank of Nova Scotia), which has
pursued a results-oriented approach to artificial intelligence over
the past two years. While some of its resources are devoted to
exploring how new technologies — including blockchain and quantum
computing — might drive fresh business models and products, the
great majority of its data and AI work is focused on improving
operations today rather than incubating for the future.
As a result,
Scotiabank — one of the Big Five banks based in Canada — has
caught up to competitors in some crucial areas. It has done so by
more closely integrating its data and analytics work; taking a
pragmatic approach to AI; and focusing on reusable data sets, which
help with both speed and return on investment.
Questions, yes; answers, not so much.
https://www.fedscoop.com/questions-around-federal-ai-oversight/
2021 in review: Oversight questions loom over federal AI efforts
The Biden administration established several
artificial intelligence bodies in 2021 likely to impact how agencies
use the technology moving forward, but oversight mechanisms are
lacking, experts say.
Bills mandating greater accountability around AI haven't gained traction because the U.S. lacks comprehensive privacy legislation, like the European Union's General Data Protection Regulation, which would serve as a foundation for regulating algorithmic systems, according to an Open Technology Institute brief published in November.
… “Right now most advocates and experts in
the space are really looking to the EU as the place that’s laying
the groundwork for these kinds of issues,” Spandana Singh, policy
analyst at OTI, told FedScoop. “And the U.S. is kind of lagging
behind because it hasn’t been able to identify a more consolidated
approach.”
Instead, lawmakers propose myriad bills year after year addressing aspects of privacy, transparency, impact assessments, intermediary liability, or some combination of these, in a fragmented approach. The EU, by contrast, has only the Digital Services Act, which requires transparency around algorithmic content curation, and the AI Act, which provides a risk-based framework for determining whether a particular AI system is “high risk.”