30 April 2012

SMotW #4: Control policy compliance

Security Metric of the Week #4: Proportion of critical controls consistent with controls policy

This week's example security metric measures the proportion or percentage of critical controls that are consistent with, or comply with, the associated policies.

The metric assumes that policies are defined, at least for controls deemed "critical".  Arguably, all controls that have documented policies are critical, though not necessarily so, and some critical controls may lack documented policies. 

Examples of control policies are “access is permitted unless expressly forbidden” and “trust is transitive”.  They might also be termed, or be part of, 'control specifications'.  If such policies/specifications are not formally defined and mandated, implementation is more likely to be inconsistent and perhaps inappropriate or inadequate. 
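As a minimal sketch (the control inventory, field names and compliance flags below are entirely hypothetical, not drawn from any particular tool or standard), the calculation itself is just a percentage over the critical controls:

```python
# Hypothetical sketch: proportion of critical controls consistent with policy.
controls = [
    {"name": "Firewall ruleset",  "critical": True,  "complies_with_policy": True},
    {"name": "Password standard", "critical": True,  "complies_with_policy": False},
    {"name": "Visitor sign-in",   "critical": False, "complies_with_policy": True},
]

critical = [c for c in controls if c["critical"]]
compliant = [c for c in critical if c["complies_with_policy"]]

metric = 100.0 * len(compliant) / len(critical) if critical else 0.0
print(f"Critical controls consistent with policy: {metric:.0f}%")  # -> 50%
```

The hard part, of course, is not the arithmetic but the compliance assessment that produces those flags in the first place.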

This is a relatively Costly metric due to the need to assess the consistency or compliance of critical controls with the associated policies.  Such detailed compliance checks can be slow, hence its PRAGMATIC score is also depressed by the Timeliness criterion.  However, the metric's overall score works out at a respectable 72%.


P    R    A    G    M    A    T    I    C    Score
83   92   80   83   89   82   32   70   35   72%

If measured and reported consistently, the metric should in theory drive improvements in consistency/compliance, which in turn should help ensure that critical controls are correctly configured.  However, the metric could prove counterproductive if controls are artificially downgraded from "critical" simply to improve the numbers.  That suggests tracking and reporting the number of "critical" controls in parallel, or at least finding a way to prevent inappropriate downgrades (e.g. insisting that they are formalized through the change control process).
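Incidentally, the overall percentage in the scoring table above appears to be simply the unweighted mean of the nine criterion scores.  Here's that arithmetic as a trivial sketch (a weighted mean would be an obvious variation if some criteria matter more to your organization):

```python
# Sketch: overall PRAGMATIC score as the unweighted mean of the nine
# criterion scores, using the figures quoted in the table above.
scores = [83, 92, 80, 83, 89, 82, 32, 70, 35]   # P R A G M A T I C

overall = sum(scores) / len(scores)
print(f"Overall PRAGMATIC score: {overall:.0f}%")  # -> 72%
```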

What do you think?  Would this kind of metric work for you?  Do you measure this already, or do you have a different way to achieve similar ends?

27 April 2012

Traffic light metrics fail in NZ

"Traffic light reporting" generally implies metrics that should make sense to anyone having normal color vision, and an appreciation of the conventional sequence of colors meaning stop, ready and go.  Short of binary yes/no type metrics (such as certified compliance), metrics doesn't get much simpler than traffic-light reporting ... or does it?  According to April's eBulletin from New Zealand's Ministry of Civil Defence & Emergency Management:
"The different meanings of the colours used for tsunami threat level maps in national warnings and those used for tsunami evacuation zone mapping (specifically red, orange and yellow) were identified as confusing in exercises and real events. For instance, in the threat level maps red indicated a ‘severe threat’ of 8m+ amplitude while yellow meant a ‘moderate threat’ of 3-5m. In the Tsunami Evacuation Zones Guideline the opposite applies, with the ‘red zone’ being evacuated for the lower level threat and the ‘yellow zone’ in the extreme (worst) case."
The piece went on to describe a new set of 6 color-coded tsunami threat levels and definitions, developed by the Tsunami Working Group.  While there is a green level, red and yellow/orange/amber are now conspicuously absent.  Perhaps they were too obvious for the bureaucrats.

While it's easy for me to poke fun at the governmental approach to disaster planning, the obvious fact remains that NZ sits in a geologically hyperactive region.  Metrics relating to past quakes, tsunamis and other incidents, plus scientific data from the geologists and other experts in plate tectonics, stream into the government's risk assessment, planning and budgeting, while video footage from the Christchurch quakes and the Indonesian and Japanese tsunamis represents the most graphic metric of all.  Without the government's efforts, it is unlikely that we citizens would be able to coordinate our efforts to prepare for the next Big One.  Let's just hope they have their act together.

Rgds,
Gary

23 April 2012

SMotW #3: Unpatched vulnerabilities

Security Metric of the Week #3: Number of unpatched technical vulnerabilities

This week's security metric is, at face value, straightforward: "simply count the number of technical/software vulnerabilities that remain unpatched."

In practice, as stated, the metric is quite ambiguous.  Are we meant to be counting the number of different vulnerabilities for which patches are not yet applied, or the number of systems that remain to be patched, or both?  If the organization is using a distributed vulnerability scanning utility that identifies missing patches, perhaps the management console gives us the metric directly as a number, or the average number of missing patches per machine.
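To illustrate the ambiguity, here is a sketch using entirely made-up scan output; the same raw data yields at least three different figures depending on what we decide to count:

```python
# Hypothetical scan results: which machines are missing which patches.
missing = {
    "server01": {"MS12-020", "MS12-027"},
    "server02": {"MS12-027"},
    "laptop17": set(),                     # fully patched
}

distinct_vulns  = set().union(*missing.values())         # unique unpatched vulnerabilities
unpatched_hosts = [h for h, v in missing.items() if v]   # machines still to be patched
avg_per_machine = sum(len(v) for v in missing.values()) / len(missing)

print(len(distinct_vulns), len(unpatched_hosts), round(avg_per_machine, 1))  # 2 2 1.0
```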

The Predictability score for this metric would be higher if it also somehow addressed unknown and hence currently unpatchable vulnerabilities, plus nontechnical vulnerabilities (e.g. physical vulnerabilities and vulnerabilities in business processes).  Software vulnerabilities are important, but the organization needs to address risks, meaning other kinds of vulnerabilities as well, plus threats and impacts. 

The metric also disregards vulnerabilities for which no patch is currently available (so-called "oh-days" or zero-days).  If we have the capability to identify them (meaning some form of penetration testing or fuzzing), the metric could be expanded to cover oh-days as well as patchable vulnerabilities.  The problem here is that practically all software has bugs, some of which are security-relevant oh-days.  The number of oh-days we find is a compound function of the quality of the software design, coding and pre-release testing, and the quality of the vulnerability assessment/post-release testing.  The latter, in turn, depends on the amount of effort/resources applied to the testing and the expertise and tools the testers use.

If very carefully specified and rigorously measured so as to standardize or normalize for these variables, the metric might conceivably have value to track or compare different systems and test teams, but that's a lot of effort and hence Cost.  In short, there are so many variables that the more complicated metric would have little value for decision support: it would probably end up being a facile "Oh that's nice" or "That looks bad" metric.  Time lag may be an issue since it takes time to identify and characterize a vulnerability, put the corresponding signature into the scanning tools, scan the systems, examine and assess the output, and finally react appropriately to the findings - meaning a low Timeliness score.  The overall PRAGMATIC score for the relatively simple metric as stated works out at 68%.  Here's the detail:


P    R    A    G    M    A    T    I    C    Score
80   64   80   70   80   75   25   85   52   68%

16 April 2012

SMotW #2: Coupling index

Security Metric of the Week #2: Coupling index

Our second 'metric of the week', Coupling index, is a measure of the degree of interdependency between things such as:
  • IT systems;
  • Business processes;
  • Business units, departments and teams;
  • Organizations.
Coupling has an impact on the risk of cascade failures (known colloquially as 'the domino effect').  In tightly-coupled situations, upstream issues will quickly and dramatically affect downstream elements.  In loosely-coupled situations, by contrast, there is more leeway, more 'slack' so downstream effects tend to be less evident, show up less quickly if at all, and generally have less impact on the organization.

Contrast traditional mainframe-based batch-processing financial systems with real-time ERP systems, for instance: if something causes a single batch to fail on the financial system, it can generally be corrected and rerun without too much trouble, provided it still completes within the batch window.  However, a similar failure on an ERP system can sequentially topple a whole series of highly interdependent operations, bringing the entire ERP system and the associated business activities to a crashing halt.
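There is no single agreed formula for a coupling index.  As one purely hypothetical approach (the elements, dependencies and formula below are illustrative assumptions), you might count the interdependencies that actually exist between elements and normalize against the maximum possible:

```python
# Hypothetical coupling index: the proportion of possible directed dependencies
# between elements (systems, processes, teams ...) that actually exist.
dependencies = {
    "ERP":              {"Finance DB", "Warehouse system"},
    "Finance DB":       {"Backup service"},
    "Warehouse system": {"ERP"},           # mutual dependency -> tighter coupling
    "Backup service":   set(),
}

n = len(dependencies)
actual   = sum(len(deps) for deps in dependencies.values())
possible = n * (n - 1)                     # every element could depend on every other

print(f"Coupling index: {actual / possible:.2f}")   # 4 of 12 possible links -> 0.33
```

Quantifying 'depends on' consistently across IT systems, business processes and whole organizations is, of course, exactly the practical difficulty that drags down this metric's score below.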



P    R    A    G    M    A    T    I    C    Score
68   85   50   60   72   47   35   61   42   58%

This candidate metric scores a mediocre 58% on the PRAGMATIC scale.  It is quite strong on Relevance but other aspects such as Timeliness and Cost detract from the final score. These criteria are issues because of the practical difficulties of quantifying the 'degree of coupling' in meaningful and comparable terms, especially across such a diverse set of factors as noted in the bullets above. 

COBIT 5 metrics

COBIT version 5, just released by ISACA, suggests numerous sample metrics in the Enabling Processes document - typically four or five metrics per goal.  With 17 enterprise goals plus another 17 IT-related goals, that gives approximately 150 metrics in total.

For example, supporting the third financial enterprise goal "Managed business risk (safeguarding of assets)", the following three metrics are suggested:
  1. Percent of critical business objectives and services covered by risk assessment.
  2. Ratio of significant incidents that were not identified in risk assessments vs. total incidents.
  3. Frequency of update of risk profile.
The text introduces the metrics thus: "These metrics are samples, and every enterprise should carefully review the list, decide on relevant and achievable metrics for its own environment, and design its own scorecard system."  Fair enough, ISACA, but unfortunately COBIT 5 does not appear to offer any advice on how one might actually do that in practice.  How should we determine which metrics are 'relevant and achievable'?  What is involved in 'designing a scorecard system'? 

Some readers may assume that they should perhaps be using most if not all of the 150 sample metrics.  Others may feel less tied to the examples, but may still assume that 150 is a reasonable number of metrics.  We beg to differ.

The PRAGMATIC approach is well suited to this kind of situation.  It is quite straightforward to assess and score ISACA's 150 metrics, comparing them alongside suggestions from various other sources in order to identify those that deserve further investigation: we call them 'candidate metrics'.  In conjunction with the people who will be receiving and using the metrics to support business decisions, the shortlisted candidate metrics can be further considered using the PRAGMATIC criteria and refined before finally selecting and implementing the very few security metrics that are actually going to make a positive difference.
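By way of a sketch (the three candidates below echo ISACA's examples, but the criterion scores are invented purely for illustration), ranking candidates by their PRAGMATIC scores makes the shortlisting step almost mechanical:

```python
# Sketch: rank candidate metrics by overall PRAGMATIC score (taken here as the
# unweighted mean of the nine criterion scores) and shortlist the strongest.
candidates = {
    "Percent of services covered by risk assessment": [75, 85, 70, 80, 75, 80, 60, 70, 65],
    "Ratio of incidents missed by risk assessments":  [85, 90, 60, 75, 80, 70, 40, 65, 55],
    "Frequency of update of risk profile":            [70, 60, 55, 80, 65, 70, 75, 60, 85],
}

def overall(scores):
    return sum(scores) / len(scores)

for name, scores in sorted(candidates.items(), key=lambda kv: overall(kv[1]), reverse=True):
    print(f"{overall(scores):5.1f}%  {name}")

shortlist = [name for name, scores in candidates.items() if overall(scores) >= 70]
```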



09 April 2012

SMotW #1: Unowned information asset days

Security Metric of the Week #1:  Unowned information asset days

We will be introducing, discussing and scoring a "Security Metric of the Week" (SMotW) through this blog for the foreseeable future.  We know of literally hundreds, if not thousands, of candidate metrics, including those proposed in various standards, books and published lists of metrics, along with a few of our own creation.  On top of that, we frequently come across novel metrics during our consulting work or simply in the stream of information that flows past every well-connected professional every day.  If you would like us to consider, score and discuss your favorite security metric, by all means email us, raise a comment on this blog, or join the Security Metametrics discussion forum.

Unowned information asset days is a candidate metric concerning asset ownership, an important governance concept that underpins accountability for the adequate protection of information assets.  

It is a measure of the number of days that information assets remain without nominal owners for various reasons such as:
  • It is a new asset for which an owner has not yet been designated by management;
  • The designated owner has left the organization, or has been given a new set of responsibilities and is no longer accountable for the asset's protection;
  • The information asset ownership practices are not in place or are not working well. 
Unowned or orphaned assets are unlikely to get much attention and care: if nobody feels responsible for their protection, perhaps nobody will bother to risk-assess, secure and generally look after them.  Just look at what happens to 'pool cars' and typical office printers to see what we mean.
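As a rough sketch (the asset register, field names and dates below are invented), the metric simply accumulates the days each asset has spent without a designated owner, as at the reporting date:

```python
# Sketch: total unowned information asset days, as at the reporting date.
from datetime import date

REPORTING_DATE = date(2012, 4, 9)

assets = [
    {"name": "Customer database", "owner": "CFO", "unowned_since": None},
    {"name": "HR file share",     "owner": None,  "unowned_since": date(2012, 3, 1)},
    {"name": "New CRM system",    "owner": None,  "unowned_since": date(2012, 4, 2)},
]

unowned_days = sum((REPORTING_DATE - a["unowned_since"]).days
                   for a in assets if a["owner"] is None)

print(f"Unowned information asset days: {unowned_days}")  # 39 + 7 = 46
```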



P    R    A    G    M    A    T    I    C    Score
40   51   84   77   74   86   92   94   82   76%

Using the PRAGMATIC method, this metric scores a respectable 76%.  It is held back a little by the Predictive and Relevant criteria since ownership and accountability are not sufficient, in themselves, to ensure that information assets are adequately secured.  Driving up asset ownership and accountability should lead to better security in time, but only as part of a coherent and comprehensive approach to information management, governance and security. 
 

07 April 2012

Welcome!

Hello world!

We have started this blog to enhance the SecurityMetametrics website and support both our book (due out in August) and the global community of professionals using, trying, or intending to use metrics to measure and improve information security.

This blog, like the website and the discussion forum, is brand new and will take a while to get up to speed, but we hope it will soon start to deliver real value.  We have some great material to share with you, and we definitely welcome your feedback, comments and improvement suggestions.  The field of security metrics is wide open for creative, innovative approaches, academic input and practical experience.  As the field slowly matures, join us as we do our level best to nudge things along in the right direction.

We'll kick off by releasing our first "metric of the week" on Monday ...

Kind regards,