27 June 2012

A PRAGMATIC security/privacy compliance metric

In the course of considering how to measure an organization's compliance with security- and privacy-related obligations, the PRAGMATIC method has proven itself a valuable way to structure the analysis.  Today I want to discuss how taking the PRAGMATIC approach led me to design a better compliance metric by addressing the weaknesses in one of the candidate metrics.

I started by brainstorming possible ways to measure security/privacy compliance activities, focusing on the key factors or parameters that are most likely to be of interest to management for decision making purposes.  With a bit of Googling and creative thinking in odd spare moments over the course of a few days, I came up with a little collection of about 8 candidate compliance metrics:
  • The rate of occurrence of security/privacy-related compliance incidents, possibly just a simple timeline or trend, but ideally with some analysis of  the nature and significance of the incidents;
  • A 'compliance status' metric derived through reviews, audits or assessments across the organization;
  • Compliance process maturity using a maturity scale; 
  • 'Compliance burden'.  Management would presumably be quite keen to know how much compliance is really costing the organization, and could use this information to focus on areas where the costs are excessive;
  • Plus 4 other metrics I won't bother outlining right now, plus a further, as-yet-undetermined number of minor variants. 

In exploring the 'compliance burden' metric idea, it occurred to me that although it is technically possible for management to attempt to measure the time, effort and money spent on all security/privacy-related compliance activities - compliance reviews/audits, disciplinary action, legal and other enforcement actions and so on - it would be difficult and costly to measure all of those aspects accurately.  There is also the issue of 'double-counting', in other words categorizing the same costs under multiple accounting headings and so artificially inflating the total.

However, simply recording, tracking and periodically reporting security/privacy-related enforcement actions (i.e. penalties imposed, disciplinary actions taken, successful prosecutions etc.) would significantly reduce the Cost (and complexity) of the metric, and at the same time make it more Accurate, Meaningful and Relevant.  Focusing on enforcement improves the metric's Independence too: enforcement actions are almost invariably formally recorded somewhere, making it much harder for someone to falsify or ignore them - which a manager might well be tempted to do if, say, the metric reflected badly on his/her department.

The icing on the cake is that the metric remains highly Actionable: it is patently obvious that a department with a bad record of enforcement (e.g. a string of costly noncompliance penalties) needs to up its game, significantly improving its compliance efforts to reduce the threat of  further enforcement actions.  Since most enforcement actions either have direct costs (fines and legal bills), or the costs can be quite easily calculated or at least estimated, the metric could be expressed in dollars, resulting in the usual galvanizing effect on management.  
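
To make this more concrete, here is a minimal sketch in Python of how such a register of enforcement actions might be tallied into a periodic, per-department dollar figure.  The record layout, departments and figures are entirely invented for illustration.

```python
from collections import defaultdict
from datetime import date

# Hypothetical register of security/privacy enforcement actions; the record
# layout and figures are invented purely for illustration.
enforcement_actions = [
    ("Sales",   date(2012, 2, 3),  "privacy penalty",     25_000),
    ("IT",      date(2012, 5, 17), "disciplinary action",  1_200),
    ("Finance", date(2012, 6, 8),  "legal settlement",    80_000),
]

def enforcement_cost_by_department(actions, year, quarter):
    """Sum the actual/estimated dollar cost of enforcement actions per
    department for one reporting quarter - the proposed indicator."""
    totals = defaultdict(int)
    for dept, when, _kind, cost in actions:
        if when.year == year and (when.month - 1) // 3 + 1 == quarter:
            totals[dept] += cost
    return dict(totals)

print(enforcement_cost_by_department(enforcement_actions, 2012, 2))
# -> {'IT': 1200, 'Finance': 80000}  (the Sales penalty fell in Q1)
```

Reporting the same totals each quarter, department by department, would give management the dollar-denominated trend described above.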

Creative managers might even be prompted to initiate enforcement actions against third parties who fail to comply with the organization's security/privacy requirements imposed through contractual clauses, nondisclosure agreements etc., since successful actions might offset enforcement actions against the organization and so improve the metric in their areas of responsibility.

This, then, is an example of an indicator: measuring enforcement actions, specifically, does not account for the full costs of compliance but looks to be a reasonable analog.  Over time, I anticipate management improving compliance activities to bring negative enforcement (actions against the organization) down and positive enforcement (actions by the organization) up to acceptable levels, at which point the metric should gradually level off and act as a natural restraint against excessive, overly-aggressive and counterproductive compliance activity.

That's it for now.  I won't elaborate further on using the PRAGMATIC scores to rank the candidate metrics, or to guide the design and selection of the best variants of the 8 metrics I started with, but if you have specific questions, please comment on this blog or raise them on the SecurityMetametrics forum.

Regards,
Gary

25 June 2012

SMotW #12: Firewall rule changes

Security Metric of the Week #12: count of firewall rule changes

This is one of the lowest-ranked example metrics in our collection of 150, with a pathetic PRAGMATIC score of just 9%.  What makes this one so bad?

For starters, as described, it is expressed as a simple number, a count.  What are recipients of the metric expected to make of a value such as, say, 243?  Is 243 a good number or does it indicate a security issue?  What about 0 - is that good or bad?  Without additional context, the count is close to meaningless.  

Additional context would involve knowing things such as:
  • The counts from previous periods, giving trends (assuming a fixed reporting period)
  • Expected values or ranges for the count, often expressed in practice by traffic-light color coding (see the sketch after this list)
  • A verbal explanation for values that fall outside the expected range
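
To illustrate what that contextual wrapping might look like, here is a minimal sketch; the counts, expected range and thresholds are all invented.

```python
# Sketch: wrap the bare count with context - a trend against previous periods
# plus a traffic-light status for an expected range (all figures invented).
def traffic_light(count, expected_range=(50, 200)):
    low, high = expected_range
    if count < low or count > high * 1.5:
        return "red"
    return "amber" if count > high else "green"

history = [118, 131, 97, 243]          # rule-change counts for the last four periods
latest, previous = history[-1], history[-2]

print(f"Firewall rule changes: {latest} ({latest - previous:+d} vs previous period), "
      f"status {traffic_light(latest).upper()}")
# -> Firewall rule changes: 243 (+146 vs previous period), status AMBER
```

Even so, as the next paragraph explains, the context only papers over a deeper problem with the metric.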

Even assuming we have such contextual information, sufficient to recognize that the latest value of the metric is high enough to take it into the red zone, what are we expected to do about it?  Presumably the number of firewall rule changes in the next period should be reduced to bring the metric back into the green.  So tightening up the change management process to reject a greater proportion of proposed rule changes would be a good thing, right?  Errr, no, not necessarily.  The very purpose of most firewall rule changes is to improve network security, in which case rejecting them purely on the basis that there are too many of them would harm security ... and this line of reasoning raises serious questions about the fundamental basis of the metric.  We're going in circles at this point.  If we move on to ask whether rule changes on different firewalls are summed or averaged in some way, and what happens if some firewalls are much more dynamic than others, we are fast losing the plot.



It must be obvious at this point that we have grave doubts about the metric's Relevance, Meaning and Actionability, which in turn means it is not at all Predictive of security.  The Integrity (or Independence) rating is terrible and the Accuracy rating poor, since the person most likely to measure and report the metric is the same person responsible for making the firewall changes, and they are hardly going to recognize, let alone admit to others, that they might be harming security.  Unless the metric is much more carefully specified, they also have plenty of leeway to determine whether a dozen new rules associated with, for instance, the introduction of IPv6 'count' as 12 or as 1.


The PRAGMATIC scoring table sums it up: this is a rotten metric, a lemon, almost certainly beyond redemption unless we are so totally lacking in imagination and experience that we can't think up a better way of measuring network security! 
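
For readers who like to see the arithmetic, here is a minimal sketch of how we assume the nine criterion ratings roll up into the overall percentage - a simple unweighted mean, which matches the figures in the table below.

```python
# Minimal sketch: roll the nine PRAGMATIC criterion ratings up into the overall
# score, assuming a simple unweighted mean (which matches the table below).
def pragmatic_score(ratings):
    """Return the overall PRAGMATIC score (a rounded percentage) from nine 0-100 ratings."""
    assert len(ratings) == 9, "one rating per criterion: P,R,A,G,M,A,T,I,C"
    return round(sum(ratings) / len(ratings))

# Ratings for the firewall-rule-changes metric, taken from the table below.
print(pragmatic_score([2, 1, 1, 10, 2, 33, 14, 4, 17]))  # -> 9 (i.e. 9%)
```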

P | R | A | G | M | A | T | I | C | Score
2 | 1 | 1 | 10 | 2 | 33 | 14 | 4 | 17 | 9%

That's almost it for this week's metric, except to leave you with a parting thought: if someone such as the CEO seriously proposed this metric, or something equally lame, how would you have dealt with the proposal before the PRAGMATIC approach became available?  Being PRAGMATIC about it gives you a rational, objective basis for the analysis - but does that actually help?  We are convinced your discussion with the CEO will be much more robust and objective if you have taken the time to think through the issues and scores.  What's more, the ability to suggest other network security metrics with substantially better PRAGMATIC scores means you are far less likely to be landed with a lemon by default.

Regards,
Gary & Krag


18 June 2012

SMotW #11: Security budget

Security Metric of the Week #11: Security budget as a proportion of IT budget or turnover

Given how often this metric is mentioned, it was quite a surprise to find that it scores a measly 16% on the PRAGMATIC scale. Why is that?  What's so dreadful about this particular metric?

Our prime concern is the validity of comparing the 'security budget' with either the 'IT budget' or 'turnover' (the quotes are justified because those are somewhat ambiguous terms that would probably have to be clarified if we were actually going to use this metric).  First of all, comparing anything to the IT budget implies that we are talking about IT or technical security, whereas professional practice has expanded into the broader church of information security.  Information security is important for anyone using and relying on information, and it could be argued that it is even more important outside the IT department, in the rest of the business, than within it.  Likewise, comparing the [information] security budget against the organization's turnover may be essentially meaningless, since there are lots of factors determining each figure independently of the other. 

<Cut to the chase>  Answer us this: what proportion should we be aiming for?  In other words, what's our target or ideal proportion?  If you can explain, rationally, how to determine that value, you are doing better than us!

The metric may have some value in enabling us to compare security budgets over successive years, across a number of different organizations, or between several different operating units within one group structure, provided we compare them on an equal footing.  If, for example, a whole bunch of engineering companies belonging to a large conglomerate reported about 10% for this metric (making that the norm i.e. an implied target), apart from one company that stuck out with, say, 20% or 5%, management might be prompted to dig deeper to understand what makes that one so markedly different from the rest.  It's a fair bet that pressure would be brought to bear on the outlier to bring itself into line with the rest - such is the nature of metrics.  But would that necessarily be appropriate?  Who is to say that the majority are budgeting appropriately for security whereas the odd-man-out has got it wrong?  It is certainly conceivable that the outlier is in fact taking the lead on security, or that there are perfectly valid and appropriate reasons that make it unique.  Perhaps the way it calculates its budgets is different, or maybe it is at a different state of security maturity.  It could be recovering from a major security incident or noncompliance, or its management may have a substantially different risk appetite than the others in the group.
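
As a sketch of that kind of group-wide comparison (the company names, figures and tolerance are all invented), note that flagging the outlier is mechanically trivial - deciding whether the outlier is actually wrong is the hard part.

```python
from statistics import median

# Invented figures: security budget as a percentage of IT budget for the
# operating companies of a hypothetical conglomerate.
budget_proportion = {
    "Acme Engineering": 10.2,
    "Acme Fabrication": 9.8,
    "Acme Machining":   10.5,
    "Acme Castings":    20.0,
}

norm = median(budget_proportion.values())
outliers = {co: pct for co, pct in budget_proportion.items()
            if abs(pct - norm) > 0.5 * norm}       # arbitrary 50% tolerance

print(f"Group norm ~{norm:.1f}%; outliers: {outliers}")
# -> Group norm ~10.3%; outliers: {'Acme Castings': 20.0}
```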

The point is that the metric could be distinctly misleading if considered in isolation.  Management might even be accused of being negligent if they were to act on it without a lot more information about the security and business situations that underpin it, in which case would we be any worse off if we didn't bother with it at all?

P | R | A | G | M | A | T | I | C | Score
13 | 3 | 16 | 2 | 2 | 0 | 4 | 18 | 88 | 16%

Single-digit scores for five of the nine PRAGMATIC criteria banish this candidate metric to the realm of soothsayers and astrologers, at least in the context of Acme Enterprises Inc.  Perhaps in your specific organizational context this metric makes more sense, provides true value and justifies its slot on the security management dashboard - if so, we'd love to hear from you.  Feel free to comment below.  What are we missing here?  How do you make this one work?

11 June 2012

SMotW #10: Unsecured access points

Security Metric of the Week #10: Number of unsecured access points

As worded, this candidate metric potentially involves simply counting how many access points are unsecured.  In practice, we would have to define both "access points" and "unsecured" to avoid significant variations (errors) in the numbers depending on who was doing the counting.

Depending on how broadly or narrowly it is interpreted, "access points" might mean any of the following, if not something completely different:
  • WiFi Access Points, specifically; 
  • Legitimate/authorized points of access into/out of the corporate network e.g. routers, modems, gateways, WiFi Access Points, Bluetooth connections etc.;
  • Both legitimate/authorized and illegitimate/unauthorized points of access into/out of the corporate network - assuming we can find and identify them as such;
  • Designated security/access control points between network segments or networks e.g. firewalls and authentication/access control gateways;
  • Physical points of access to/from the organization's buildings or sites - again both legitimate/authorized and illegitimate/unauthorized (e.g. unlocked or vulnerable windows, service ducts, sewers), assuming we can identify these too;
  • Points of contact and communications between the organization's systems, processes and people and the outside world e.g. telephones, social media, email, face-to-face meetings, post ... 

Similarly, absolutely any access point might be deemed "unsecured" (more likely, "insecure") by a professionally-paranoid, risk-averse security person who can envisage particular scenarios or modes of attack/compromise that would defeat whatever controls are in place, or who knows through experience that controls sometimes fail in service.  Conversely, a non-security-professional might claim that every single access point is "secured" since he/she personally can't easily bypass/defeat it.  This kind of discrepancy could be resolved by some sort of rational decision process based on an assessment of the risks and the strength of the controls.  However, if the metric is used by management specifically to drive through security improvements at the access points, the people making the improvements tend to be the very same people assessing the risks and controls, hence the metric would lose its objectivity and teeth.  Defining security standards for access points might help address the issue, and in fact that might be a useful spin-off benefit of using such a metric.


P | R | A | G | M | A | T | I | C | Score
95 | 80 | 90 | 70 | 85 | 77 | 45 | 75 | 55 | 75%

The PRAGMATIC score for this metric worked out at a very respectable 75% in the imaginary context of Acme Enterprises Inc.  It scored very well for Predictiveness (access control is a core part of security, so weaknesses in access control undermine most other controls) and Actionability (it is pretty obvious what needs to be done to improve the measurements: secure those vulnerable access points!).  Among the lower-scoring elements was Cost at 55%, since defining security standards, locating potential access points and assessing them against the standards would undoubtedly be a labor-intensive process.

In the course of discussing the scoring, we considered possible variants of the metric itself and variations in the measurement process.  For instance, there might be advantages in reporting the proportion of access points that are unsecured: without more information about the total number of access points, recipients can't tell whether, say, 87 is a good number for the simple count version of this metric, whereas 87% is more Meaningful.  That straightforward change to the metric has a minor impact on the Cost since someone would have to count and/or estimate the total number of access points, and periodically revisit the calculation as things change.  We suspect Acme's management would like it too.

Furthermore, for some purposes it would be worthwhile knowing just how insecure the unsecured access points are, implying a rating scheme, perhaps something as crude as a red/amber/green rating for the security of each access point identified, maybe with a clear (uncolored) rating for those that have yet to be assessed.  Assessments that involve penetration testing, IT audits or professional security reviews might well generate the additional information anyway in order to prioritize the follow-up activities needed to secure the unsecured.  In short, although the metric's Cost would increase, so would its value, hence it might still rate 55% (the PRAGMATIC parameter we call Cost for short is in reality Cost-effectiveness).
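
A minimal sketch of those two refinements together - a red/amber/green (plus clear) rating per access point, reported as proportions rather than a bare count; the access points and ratings are invented.

```python
from collections import Counter

# Invented example: each identified access point carries a red/amber/green
# rating, or "clear" if it has not yet been assessed.
access_points = {
    "vpn-gw-1": "green", "wifi-ap-7": "red", "branch-router-3": "amber",
    "modem-pool": "red", "loading-bay-door": "clear",
}

tally = Counter(access_points.values())
assessed = len(access_points) - tally["clear"]
unsecured = tally["red"] + tally["amber"]

print(f"Assessed: {assessed}/{len(access_points)}; "
      f"unsecured: {unsecured} ({100 * unsecured / assessed:.0f}% of those assessed)")
# -> Assessed: 4/5; unsecured: 3 (75% of those assessed)
```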

The previous two paragraphs demonstrate how the PRAGMATIC approach is more than simply a static rating or ranking scheme for metrics: it facilitates and encourages creative discussion and improvement of metrics that are under consideration, focusing most attention on whichever aspects hold back the overall PRAGMATIC score.  Given the specific situation of this candidate metric, it would be feasible, for instance, to trade off Accuracy and precision to improve both the Cost and Timeliness scores by settling for rough but ideally informed and reasonable estimates of the proportions of secured versus unsecured access points instead of actual counts.  That might be a perfectly acceptable compromise for Acme's management.  The PRAGMATIC method provides a framework for exactly this kind of sensible discussion.

07 June 2012

Making statistics more visual

This "infographic" uses simple visual techniques to emphasize a potted selection of statistics from a security survey.  You may be interested in the information presented but I'm fascinated by the manner in which it is presented, the graphic visualization.

The author of the graphic has used artistic license in selecting particular image types, sizes, colors and fonts to depict and highlight what he/she feels are the most important elements.  The resulting colorful graphic image is visually appealing, if somewhat facile and potentially even misleading in places.  For example, in representing 'Large organization hacking vectors by percentages of breaches within hacking' as a series of 4 differently colored circles, are the circles purely representational/figurative, or are their sizes determined by the proportions - and if so, is it their areas, their diameters or some other parameter that matters?  I am conscious that the eye is easily deceived by exactly this sort of comparison.
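
As a quick illustration of that deception (invented numbers): if one value is twice another and the designer scales the circles' diameters accordingly, the areas - which are what the eye actually compares - differ by a factor of four.

```python
import math

# If circle diameters are drawn in proportion to the values, the visual (area)
# difference grows with the square of the ratio - exaggerating the comparison.
def area(diameter):
    return math.pi * (diameter / 2) ** 2

small, big = 10, 20                  # diameters drawn for values in a 1:2 ratio
print(area(big) / area(small))       # -> 4.0: a 1:4 visual impression of 1:2 data
```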

Speaking as a trained scientist, I'm personally not a big fan of the glitzy approach, although I can appreciate that some viewers will absolutely love it - they may not have the mathematical/scientific background, interest and/or time to glean information from the regular graphs, charts, tabular reports and wordy analysis that we scientific types tend to prefer.  I can see that, if it were backed by the data, the graphic might entice viewers to take more of an interest in the numbers.  It would be fantastic if they could click on relevant parts to drill down and explore the underlying numbers and analysis in more depth, but unfortunately all we have here is a golly-gosh image summarizing a more traditional report.  Sure, the graphic has immediate impact, but that soon wears off, leaving viewers with a key message or two perhaps, but, I suspect, little lasting impression.

Although there's a chapter in our book about using metrics (i.e. data analysis and presentation), it's a rather brief introduction to an enormously important aspect of metrics.  We chose not to go into great depth on this subject as it is covered very well by the existing literature - pick up literally any statistics book for starters.

I think there is a valid lesson here about making our metrics presentations more engaging, interesting and memorable, while at the same time being careful not to take things too far.  Did you even spot my earlier comment about the author highlighting whatever aspects he/she feels are important?  Anyone summarizing and interpreting factual information that will be used in decision making has an implicit responsibility to the reader to remain true to the data.  Being selective about what we present, and how we present it, is itself a form of bias.


PS  Contrast this visually striking infographic with the more conventional tabular representation of similar statistics.  Which version will you remember next week, next month, next year?  

04 June 2012

SMotW #9: Vulnerability index

Security Metric of the Week #9: Vulnerability index

Well-known vulnerabilities in commercial software are commonly identified by patch-checking tools such as PSI from Secunia and Microsoft Update.  PSI generates a convenient system security score - a simple percentage related (in some way, determined by Secunia) to the patch status of the system.  Microsoft Update generates a count of the number of missing patches, categorized by the severity of the security vulnerabilities the patches (supposedly) fix, as determined by Microsoft.  Whether you are managing a single PC or a network of thousands, tools and metrics such as these are a helpful way to focus attention on the systems that need patching.

However, there is a lot more to security vulnerabilities than simply patching commercial software e.g.:

  • Finding and patching security vulnerabilities in private/non-commercial/obscure software, including programs such as spreadsheets, macros and batch files written by amateurs, and all manner of Java utilities and apps that are not addressed by PSI etc.;
  • Finding and patching currently unknown software security vulnerabilities through various forms of testing (software security testing, clear box/black box testing, penetration testing, fuzzing, static/dynamic source code analysis, hacking, software audits);
  • Finding and fixing fundamental security design flaws in software, hardware and processes (e.g. missing policies and inadequate security awareness activities fail to address commonplace vulnerabilities to social engineering).

Furthermore, the importance or significance of different vulnerabilities varies markedly e.g.:
  • The risk represented by a missing security patch on a system exposed on the Internet is probably quite different to that on an isolated internal system tucked away behind multiple layers of defense (some people refer to this factor as 'exposure');  
  • Some vulnerabilities are trivially simple to exploit, whereas others can only be exploited under very specific circumstances, often with a lot of effort and sometimes a lot of luck;
  • Some vulnerabilities are of little concern in terms of their consequences, whereas others are extremely problematic since they allow vital controls to be totally disabled, undermined or negated.  
In short, the number of vulnerabilities does not necessarily reflect the amount of risk, even if we somehow take account of their severity in the metric.  Risk also depends on the threats and impacts of incidents.
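
For what it's worth, here is one hypothetical way to fold severity and exposure into a 'vulnerability index' rather than reporting a raw count - a sketch with entirely invented weights and findings, and even then it still ignores threats and impacts.

```python
# Entirely invented severity and exposure weights, for illustration only.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 15}
EXPOSURE_WEIGHT = {"isolated": 0.5, "internal": 1.0, "internet-facing": 3.0}

findings = [                     # (severity, exposure) of each known vulnerability
    ("critical", "internet-facing"),
    ("high", "internal"),
    ("low", "isolated"),
    ("low", "isolated"),
]

raw_count = len(findings)
index = sum(SEVERITY_WEIGHT[sev] * EXPOSURE_WEIGHT[exp] for sev, exp in findings)

print(f"Raw count: {raw_count}, weighted index: {index}")
# -> Raw count: 4, weighted index: 53.0 (one exposed critical dwarfs the rest)
```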

We scored the metric thus:
P | R | A | G | M | A | T | I | C | Score
74 | 85 | 71 | 74 | 60 | 32 | 46 | 33 | 19 | 55%

The Accuracy, Integrity, Timeliness and Cost aspects all suffer if we intend to use the metric to manage information security as a whole, as opposed to simply helping us manage software security patching.  That said, a patching-type vulnerability metric may still be valuable at the operational level for those managing the IT infrastructure.