Security Metric of the Week #4: Proportion of critical controls consistent with controls policy
This week's security metric example measures the proportion (or percentage) of critical controls that are consistent, or comply, with the associated policies.
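As a simple illustration of how the metric might be calculated, here is a minimal sketch that walks a (purely hypothetical) controls register, filters to the controls flagged critical, and reports the percentage that comply with their policies. The control names and the `critical`/`complies` fields are assumptions for the sake of the example, not part of any particular controls framework.

```python
# Hypothetical controls register: names and fields are illustrative only.
controls = [
    {"name": "MFA on admin accounts",   "critical": True,  "complies": True},
    {"name": "Quarterly access review", "critical": True,  "complies": False},
    {"name": "Screensaver lockout",     "critical": False, "complies": True},
    {"name": "Backup encryption",       "critical": True,  "complies": True},
]

# Restrict to critical controls, then count those consistent with policy.
critical = [c for c in controls if c["critical"]]
compliant = [c for c in critical if c["complies"]]
metric = 100 * len(compliant) / len(critical)

print(f"{metric:.0f}% of critical controls comply with policy")  # prints "67% ..."
```

In practice the compliance judgment for each control would come from the detailed assessment work discussed below, not from a simple boolean flag.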
The metric assumes that policies are defined, at least for controls deemed "critical". Arguably, all controls that have documented policies are critical, but that is not necessarily so, and some critical controls may lack policies.
Examples of control policies are “access is permitted unless expressly forbidden” and “trust is transitive”. They might also be termed, or form part of, 'control specifications'. If such policies/specifications are not formally defined and mandated, implementation is more likely to be inconsistent, and perhaps inappropriate or inadequate.
This is a relatively Costly metric, due to the need to assess the consistency or compliance of critical controls with the associated policies. Such detailed compliance checks can be slow, hence the metric's PRAGMATIC score is also depressed by the Timeliness criterion. Nevertheless, its overall score works out at a respectable 72%.
If measured and reported consistently, the metric should in theory drive improvements in the consistency/compliance, which in turn should help ensure that critical controls are correctly configured. However, the metric could prove counterproductive if controls are artificially downgraded from "critical" simply in order to improve the numbers, suggesting perhaps the need to track and report the number of "critical" controls in parallel, or at least to find a way to prevent inappropriate downgrades (e.g. insisting that they are formalized through the change control process).
What do you think? Would this kind of metric work for you? Do you measure this already, or do you have a different way to achieve similar ends?
P    R    A    G    M    A    T    I    C    Score
83   92   80   83   89   82   32   70   35   72%
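The overall score above appears to be the simple (unweighted) mean of the nine criterion ratings, rounded to the nearest whole percent. That averaging rule is an assumption on my part, not something stated in the post; the sketch below merely checks the arithmetic under that assumption, keying the ratings by the PRAGMATIC initials.

```python
# Assumption: the overall PRAGMATIC score is the unweighted mean of the
# nine criterion ratings, keyed here by the letters of the acronym.
ratings = {"P": 83, "R": 92, "A1": 80, "G": 83, "M": 89,
           "A2": 82, "T": 32, "I": 70, "C": 35}

# Mean of the nine ratings, rounded to the nearest whole percent.
overall = round(sum(ratings.values()) / len(ratings))

print(f"Overall PRAGMATIC score: {overall}%")  # prints "Overall PRAGMATIC score: 72%"
```

Note how the two weak ratings (32 and 35) drag the mean down from the high eighties to 72%, which is consistent with the comments above about the Timeliness and Cost criteria.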