IAWG colleagues --
Mark --
Thanks for the comments.
I am actually very interested in how to express digital policies:
not so much the technology, but the risk strategy and other
challenges. But for this NCCOE project I am just hoping they don't
keep policy-making in scope.
I think when NCCOE says "classification" they are not at all
limiting themselves to traditional government national-security
labels, but mean any parameter that might appear in an access
policy (or an archiving policy, or another data-management policy).
I absolutely agree that consistent interpretation is very
important, both in making policies and in the semantics of the
parameters (the "classification" metadata). Interpretation is very
weak in the non-automated world in which we currently operate, so
formalizing the process is potentially a major win for both
effectiveness and interoperability, not to mention efficiency.
I think (though I'm not 100% sure) that I understand your "risk
budget" comment. Perhaps this is what you have in mind: when
formulating an access-control strategy for personal information,
it may be OK to rely on effective (and well-publicized) ex-post
enforcement via analysis of access logs, since (per my own
aphorism) "there are no suicide privacy violators." And again:
for hard cases like "probable cause" (to justify law-enforcement
access to private digital records), it may be sufficient to
require the requestor to select from a list of things that are
typically accepted as indicating probability of a criminal act
(which would become a requestor-asserted access attribute), and
then have an effective ex-post review of those claims.
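That requestor-asserted-attribute pattern, with logging for ex-post review, could be sketched roughly as below. All names here (the accepted-grounds list, the function names) are hypothetical, purely to illustrate the flow, not any actual NCCOE design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical list of assertions typically accepted as indicating
# probable cause; in practice this list would come from policy, not code.
ACCEPTED_GROUNDS = {"witness-statement", "prior-conviction-pattern", "court-order"}

@dataclass
class AccessRequest:
    requestor: str
    record_id: str
    asserted_ground: str  # the justification the requestor selects
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[AccessRequest] = []

def grant_access(req: AccessRequest) -> bool:
    """Grant immediately if the asserted ground is on the accepted list.
    Every request is logged either way, so claims can be reviewed later."""
    audit_log.append(req)
    return req.asserted_ground in ACCEPTED_GROUNDS

def ex_post_review() -> list[AccessRequest]:
    """Return the granted requests for a human reviewer to validate."""
    return [r for r in audit_log if r.asserted_ground in ACCEPTED_GROUNDS]
```

The point of the sketch is that the access decision itself stays cheap and automatic; the deterrent comes from the well-publicized review of the log afterwards.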
But I am hoping NCCOE will look at these
how-to-make-digital-policy issues separately, since there are
plenty of other challenges in managing data "classification"
metadata.
Martin
Martin,
A few points...
Automation requires consistent interpretation of policy. One area relevant to US export control and collaboration on classified matters is the inconsistent interpretation of NOFORN in relation to dual nationals.
Classification has historically been based purely on confidentiality, but integrity is an important part of security, and the policies for integrity are the dual of those for confidentiality: not where data can go, but where it comes from. Flow from high to low is fine for integrity, but not for confidentiality.
The name ‘classified’ has always been problematic: does it include ‘unclassified’, and/or things which would justify having a label but do not have one attached or associated? It’s often assumed to be obvious what is being talked about, but, again, there is no consistency. Can we find something close to the Latin ‘custodienda’: things worthy of being protected?
There were some ideas of a risk budget: any nurse can see one patient record in a specific hospital in any given minute; a fire truck can see one picture from a high-resolution satellite database, with a later check on whether it was an appropriate picture. Will this be included?
Happy to discuss further.
Mark