Corporate auditors should already be addressing CMMC.
The Pentagon’s first class of cybersecurity auditors is almost here

The Pentagon hopes to have the first class of auditors to evaluate contractors’ cybersecurity ready by April, a top Department of Defense official said March 5.

The auditors will be responsible for certifying companies under the new Cybersecurity Maturity Model Certification (CMMC), which is a tiered cybersecurity framework that grades companies on a scale of one to five. A score of one designates basic hygiene and a five represents advanced hygiene.
Cybersecurity Law, Policy, and Institutions (version 3.0)

This is the full text of my interdisciplinary “eCasebook” designed from the ground up to reflect the intertwined nature of the legal and policy questions associated with cybersecurity. My aim is to help the reader understand the nature and functions of the various government and private-sector actors associated with cybersecurity in the United States, the policy goals they pursue, the issues and challenges they face, and the legal environment in which all of this takes place. It is designed to be accessible for beginners from any disciplinary background, yet useful to experienced audiences too.
The first part of the book focuses on the “defensive” perspective (meaning that we will assume an overarching policy goal of minimizing unauthorized access to or disruption of computer systems). The second part focuses on the “offensive” perspective (meaning that there are contexts in which unauthorized access or disruption might actually be desirable as a matter of policy).
Another perspective.

https://www.justsecurity.org/69054/an-ambitious-reading-of-facebooks-content-regulation-white-paper/
An Ambitious Reading of Facebook’s Content Regulation White Paper

Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, which offers some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by the paper’s own terms it simply poses questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech, and because it takes seriously some options for “content governance” that, if pursued fully, would represent a moonshot for platform accountability premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.
For my Architecture students.

The Emergence Of ML Ops
In the latter part of the 2000s, DevOps emerged as a set of practices and tools that combine development-oriented activities (Dev) with IT operations (Ops) in order to accelerate the development cycle while maintaining efficiency in delivery and predictable, high levels of quality. The core principles of DevOps include an Agile approach to software development, with iterative, continuous, and collaborative cycles, combined with automation and self-service concepts. Best-in-class DevOps tools provide self-service configuration, automated provisioning, continuous build and integration of solutions, automated release management, and incremental testing.
… DevOps approaches to machine learning (ML) and AI are limited by the fact that machine learning models differ from traditional application development in many ways. For one, ML models are highly dependent on data: training data, test data, validation data, and, of course, the real-world data used in inferencing. Simply building a model and pushing it to operation is not sufficient to guarantee performance. DevOps approaches for ML also treat models as “code,” which makes them somewhat blind to issues that are strictly data-based, in particular the management of training data, the need for retraining of models, and concerns of model transparency and explainability.
As organizations move their AI projects out of the lab and into production across multiple business units and functions, the processes by which models are created, operationalized, managed, governed, and versioned need to be made as reliable and predictable as the processes by which traditional application development is managed.
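The article’s core point, that a model is inseparable from the data it was trained on, can be made concrete with a small sketch. The registry structure, function names, and drift check below are my own illustrative assumptions, not the API of any particular MLOps tool:

```python
import hashlib
import json

def fingerprint_dataset(rows):
    # Hash the training data so a model version can be tied to the exact data it saw.
    payload = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def register_model(registry, name, version, dataset_rows, metrics):
    # Record a model version together with its data fingerprint and evaluation metrics,
    # rather than versioning the model artifact alone as if it were ordinary code.
    registry[(name, version)] = {
        "data_fingerprint": fingerprint_dataset(dataset_rows),
        "metrics": metrics,
    }

def needs_retraining(registry, name, version, current_rows):
    # A model is stale when the data in production no longer matches
    # the fingerprint of the data it was trained on.
    entry = registry[(name, version)]
    return entry["data_fingerprint"] != fingerprint_dataset(current_rows)

registry = {}
train_rows = [{"x": 1, "y": 0}, {"x": 2, "y": 1}]
register_model(registry, "churn", "1.0", train_rows, {"auc": 0.82})

print(needs_retraining(registry, "churn", "1.0", train_rows))  # False: same data
print(needs_retraining(registry, "churn", "1.0", train_rows + [{"x": 3, "y": 1}]))  # True: data changed
```

A hypothetical retraining trigger like this is exactly the kind of data-side concern that a code-only DevOps pipeline would miss.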
Yesterday I was the most popular blog in Turkmenistan. I have no idea why.