27 August 2012

SMotW #21: unclassified assets

Security Metric of the Week #21: proportion of information assets not marked with the correct classification

There are three key assumptions underlying this week's Security Metric of the Week:
  1. The meaning of "information asset" is clear to all involved;  
  2. There are suitable policies and procedures in place concerning how to risk-assess and classify information assets correctly;
  3. The metricator (person gathering/analyzing the data for the metric) is able to tell whether or not a given information asset is (a) correctly classified and (b) correctly marked.
Part of the concern about the meaning of "information asset" is the determination of what should be assessed and marked: should we classify the filing cabinet, the drawers, the files, the documents or the individual pages?  In some cases, it may be appropriate to classify them all, but there are practical limits in both the micro and macro directions.  The wording of the policies, procedures, examples etc. can make a big difference.

Whereas classification policies are fairly common, the related procedures, plus the associated awareness/training and compliance/enforcement activities, are not universal.  This metric could be used to determine the need for additional procedures etc., and with a bit more detail it could help direct resources at the business units, departments, teams or people who evidently need more support.


However, the metric's poor PRAGMATIC score raises concerns:

P     R     A     G     M     A     T     I     C     Score
52    53    63    44    62    13    17    87    44     48%

Low ratings for Accuracy and Genuineness arise from the way the metric would have to be measured.  The third assumption above is the main fly in the ointment, since it is necessary for someone to review a sample of information assets to determine what their classifications should be, and confirm whether they are indeed correctly marked.  This is a tedious process that can result in disagreements regarding the correct classifications and the nature of marking required.
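To make that concrete, here is a rough Python sketch of how a metricator might tally the results of such a sample review and turn it into the metric.  The assets, classification labels and figures are entirely made up for illustration, and the crude confidence interval only means anything with a far larger, randomly selected sample:

```python
import math

# Each tuple: (asset, classification it should carry, classification actually marked)
# - invented examples purely for illustration.
sample_review = [
    ("HR policy v3",       "Internal",     "Internal"),
    ("Payroll extract Q2", "Confidential", "Internal"),      # under-classified
    ("Board minutes",      "Secret",       "Secret"),
    ("Network diagram",    "Confidential", None),            # not marked at all
    ("Canteen menu",       "Public",       "Confidential"),  # over-classified
]

incorrect = sum(1 for _, should_be, marked in sample_review if marked != should_be)
n = len(sample_review)
p = incorrect / n

# Crude 95% confidence interval (normal approximation) - only meaningful with a
# much larger random sample than this toy one.
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"Sampled {n} assets: {p:.0%} incorrectly classified/marked "
      f"(+/- {margin:.0%} at roughly 95% confidence)")
```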

We marked it down on Timeliness since the measurement process would inevitably take days or weeks, during which time incorrectly classified and/or marked information assets would probably remain vulnerable to being mishandled.  Once the final numbers are available, management can take the decisions about additional procedures, awareness and compliance activities, but these will also take time to put into effect.  All in all, there are likely to be significant lags between taking, acting on and adjusting the measurements.

The relatively high Cost of assigning one or more suitable metricators to the job could be offset by reducing the frequency of measurement, perhaps measuring and reporting this metric just once or twice a year ... but of course that makes the metric less useful as a management tool - it's a trade-off.

The bottom line is that although there are circumstances in which this metric might be worth using, its low score suggests that there are many more PRAGMATIC metrics that should probably take priority.

26 August 2012

The ultimate question

In the field of security metrics, much has been written about measuring all manner of detailed parameters that are deemed relevant and important, but the kinds of big-picture strategic questions that matter most to senior management are seldom addressed.  

Take for example the disarming, devilishly simplistic question "Are we more or less secure than we were a year ago?"  

Imagine being asked precisely that question by, say, the CEO or the minister.  How would you actually respond?  What if it turned out the question was not merely an off-the-cuff comment but a deadly serious request for information posed on behalf of a management team struggling to make sense of next year's infosec budget requests in relation to all the other proposals on the top table?  

Go ahead, picture yourself squirming in the hot seat.  What's going through your mind?  Where do you even start to address such a naive question?

For some of us, our knee-jerk reaction is to spew forth a confusing muddle of half-baked assertions and mumbo-jumbo.  We trot out a stream of primarily technical measures, some of which are so narrowly defined as to be of dubious value even to those professionals responsible for managing information security and other risks.  In many areas, we fall back on highly subjective measures that smack of "The answer is 42: what was the question again?"  Faced with a tsunami of dubious numbers, the CEO is left with the overriding impression of having been told precisely how fast we are going, while still not knowing whether we are headed in the right direction.  That's no way to direct the organization.


Alternatively, we may resort to a purely defensive approach, claiming that the question is unreasonable because infosec is 'too difficult' or 'too complex' to measure, and the situation is 'highly dynamic' to boot.  What may initially appear a perfectly reasonable and honest response from our rational perspective may be counterproductive when viewed in strategic business terms.  With mounting outrage, the CEO may well respond along the lines of "Are you seriously telling me that we don't know whether we are more or less secure than last year because information security is 'special'?  So how come we can measure production, finances, quality and human resources, but we can't measure infosec?"  Oh oh, now we're really in trouble!

Sorry to disappoint you if you are expecting me to answer the CEO's question for you: I won't, but perhaps I can help you figure out how to address it yourself.  In fact, that's what I've just been doing.  A useful way to develop worthwhile security metrics is to pose yourself a bunch of rhetorical management questions in order to tease out the underlying concerns.  What are the key security issues facing your organization?  What are the big business drivers this year?  Which aspects matter most to your management: compliance, cost-effectiveness/efficiency, risk, accountability, assurance, adequacy or something else entirely?  

Unless you are a senior manager, or somehow become aware of those earnest discussions in the boardroom, you can only guess at what might be playing on the CEO's mind in respect of information security.  You might therefore get yourself ready to address not one but a whole bunch of potential big-picture questions.  The point is to identify the common themes, and to spot the information (and hence the underlying data) that would be needed to formulate meaningful answers.

In the book, we discuss using the GQM approach in its implied sequence: first determine the Goals, then the associated Questions, and finally the Metrics.  What I'm suggesting here could be termed the Q(GM) approach: first pose those rhetorical Questions, then figure out the Goals, objectives or issues that might have led to them, as well as the Metrics, information and data that would be necessary for the answers.  

Alternatively, do nothing, just sit back and wait for that fateful day when, finally, someone important demands to know "Are we secure enough?" or "How secure are we?" or "Do we really need to spend so much on security?" ... or "What has Information Security ever done for us?"



20 August 2012

SMotW #20: uptime

Security Metric of the Week #20: ICT service availability ("uptime")

Uptime is a classic ICT metric that is also an information security metric, although it is seldom considered as such.

Uptime is commonly measured and reported in the context of Service Level Agreements or contracts for ICT services, but in our experience this is usually something of a farce.  The IT Department or company generally defines uptime narrowly, in ways that suit its own purposes rather than truly reflecting the ICT services actually provided to the business users/customers, while business people don't honestly believe the numbers anyway since (a) they do not reflect their experience as consumers of ICT services, and (b) they are self-assessed and self-reported by the IT people, who clearly have an interest in reporting only good news.  Tying internal ICT cost recovery to uptime makes things even worse from the security metrics perspective (i.e. providing genuine, fact-based data on which to make business decisions concerning information security), since it places IT and the business in diametrically opposing positions - a recipe for much more heat and smoke than light.

Being rather cynical graybeards, we note that uptime is often defined (by IT) only in relation to ICT service provision during "core service hours" (which IT determines unilaterally).  Exclusions are common, particularly 'scheduled downtime' (as if the fact that IT has decided when to take ICT services down somehow magically allows the business to carry on using them normally), 'change and patch implementation' (because the business wanted the changes or patches, so IT can hardly be blamed for doing what they are asked to do) and backups (again, IT rationalizes this exclusion along the lines of "Backups are required by the business, so they should be bloody grateful!").
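To see just how much the definition matters, here is a little Python sketch comparing the uptime figure the business experiences with the figure reported against a narrowly defined SLA.  All of the hours and outage numbers are invented assumptions, purely to illustrate the gap:

```python
# Illustrative only: how exclusions and "core hours" inflate a reported uptime figure.
HOURS_IN_MONTH = 30 * 24       # total elapsed hours in a 30-day month
CORE_HOURS = 30 * 12           # IT's self-declared "core service hours"

unplanned_outage_hrs = 4       # outages users actually noticed (assumed to fall in core hours)
scheduled_downtime_hrs = 20    # patching, backups, changes - excluded from the SLA figure

# The business user's view: any hour the service was unavailable counts.
user_view = 1 - (unplanned_outage_hrs + scheduled_downtime_hrs) / HOURS_IN_MONTH

# The SLA view: only unplanned outages during core hours count.
sla_view = 1 - unplanned_outage_hrs / CORE_HOURS

print(f"Uptime as experienced by the business: {user_view:.2%}")   # ~96.7%
print(f"Uptime as reported against the SLA:    {sla_view:.2%}")    # ~98.9%
```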

Despite the drawbacks that we have just described, uptime turns out to have a pretty good PRAGMATIC score as an information security metric:

P     R     A     G     M     A     T     I     C     Score
84    97    66    78    94    61    79    47    89     77%

The very high 97% rating of uptime for Relevance may come as a surprise to those who are unfamiliar with the modern interpretation of information security as 'ensuring the confidentiality, integrity and availability of information'.  We rated the metric a few percent less than 100 for Relevance to account for the relatively small amount of business information which lies completely outside of IT: we completely accept that paperwork and knowledge need to be available as well as the ICT systems, networks and data, but without ICT support, a lot of the non-IT information is practically worthless to the business as a whole since it cannot be communicated or processed much beyond its immediate location.

The high rating for Meaningfulness reflects the fact that, leaving aside arcane issues relating to the precise definition, uptime is a simple and familiar measure.  

The high Cost-effectiveness score also reflects the metric's simplicity and familiarity: in most organizations, uptime is already being measured, analyzed and reported for purposes other than information security, so the marginal cost to include it in information security reports is negligible.  However, Costs can increase markedly if management decides to measure uptime independently of IT, for example using network and system availability monitoring outwith the department, or independently auditing the figures.
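For what it's worth, independent measurement needn't be elaborate.  The following Python sketch shows the kind of simple availability probe that could be run outside the IT department's own reporting; the URL, polling interval and log file name are hypothetical placeholders, not a recommendation of any particular tooling:

```python
# Minimal sketch of an independent availability probe - placeholder URL and settings.
import time
import datetime
import urllib.request

SERVICE_URL = "https://intranet.example.org/health"   # hypothetical endpoint
INTERVAL_SECONDS = 60

while True:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=10) as resp:
            status = "UP" if resp.status == 200 else f"DEGRADED ({resp.status})"
    except Exception as exc:
        status = f"DOWN ({exc.__class__.__name__})"
    with open("availability.log", "a") as log:          # raw data for later analysis
        log.write(f"{stamp} {status}\n")
    time.sleep(INTERVAL_SECONDS)
```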

[By the way, in discussing this metric in the book, we refer to the interesting metrics challenges presented by cloud computing when significant parts of the ICT service delivery process depend on third parties and resources well outside the organization's physical and logical boundary.  The PRAGMATIC approach is just as well suited to developing, selecting and/or improving valuable, worthwhile security metrics for cloud computing as for more traditional approaches ... but I'm afraid you'll have to wait for the book to find out exactly what we mean!]

13 August 2012

SMotW #19: employee turnover/absenteeism

Security Metric of the Week #19: rate of change in employee turnover and/or absenteeism

In most organizations, employee turnover rumbles along at a 'normal' rate most of the time, due to the routine churn of people joining and leaving the organization.  Likewise, there is a 'normal' rate of absenteeism, due to sickness, holidays/leave and unexplained absences.  Big changes (especially sudden increases) in either set of numbers suggest that the information security risks associated with disaffected or malicious employees might have substantially increased; in other words, increased turnover and absenteeism may be indicators of a discontented workforce voting with their feet, or indeed of management sacking loads of employees.

Of course there are many reasons why people leave the organization or are temporarily absent, aside from discontent and redundancy, hence the metric is unlikely to be particularly useful in isolation.  We refer to it as an indicator, since an adverse change signals or indicates a situation that merits further investigation to determine the likely reasons.

We calculated the following PRAGMATIC score for this metric:

P     R     A     G     M     A     T     I     C     Score
60    66    20    85    60    80    75    80    91     69%

Being an indicator means it is fairly Predictive and Relevant to information security, but not very Actionable (if only there was some simple and straightforward thing that management could do to improve morale!).  

Assuming that the raw numbers are available from HR (and possibly Procurement if you account for the comings and goings of consultants and contractors, as well as employees), they are likely to be both Genuine and Independent.  

The Meaningfulness score suffers because of the need to investigate and explain changes in the metric, while Timeliness suffers because of the inevitable delays in gathering, analyzing, presenting and using the numbers.

In comparison to most other measures of the morale and contentedness of the workforce, this metric has the merit of being low Cost to gather, although, as we said, the analysis does involve a bit of digging to determine the likely reasons for sudden changes, so it is not exactly free.

Organizations that employ seasonal workers and have greater 'normal' variations in employee numbers could still use this kind of metric by normalizing the statistics over successive years, assuming sufficient historical data are available.  You can probably picture a scattergram-type graph showing employee numbers through the year, with a smoothed curve following the mean level and a range of values at any point based on historical data.  Highlighting this year's curve, and particularly the current/latest value, against the mean and usual range should show whether things are ticking along nicely or something unusual is going on.
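As a rough illustration of that 'usual range' idea, here is a Python sketch comparing this year's figure for a given month against the historical mean and spread for the same month.  All of the turnover percentages are invented, and in practice you would want several more years of data than this:

```python
# Sketch: flag a monthly turnover figure that falls outside its historical 'normal' band.
import statistics

history = {                      # month -> turnover % in previous years (invented data)
    "Jan": [2.1, 1.8, 2.4],
    "Feb": [1.9, 2.0, 2.2],
    "Dec": [4.5, 5.0, 4.8],      # seasonal spike, e.g. temporary staff leaving
}

def normal_band(month, k=2):
    """Mean +/- k standard deviations of that month's historical values."""
    values = history[month]
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean - k * sd, mean, mean + k * sd

current = {"month": "Dec", "turnover": 7.2}     # this year's latest figure (invented)
low, mean, high = normal_band(current["month"])
if not (low <= current["turnover"] <= high):
    print(f"{current['month']}: {current['turnover']}% is outside the usual "
          f"{low:.1f}-{high:.1f}% band (mean {mean:.1f}%) - worth investigating")
else:
    print(f"{current['month']}: within the usual range")
```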

Although the overall PRAGMATIC score of 69% is hardly outstanding, this metric does feature in the top three HR-related information security metrics in our example set.  The HR security maturity metric that we discussed recently on this blog scored over 80%, so - given the choice between those two - we would definitely expect that metric to be a better option than this one.

What HR security metrics would you prefer to use?  We welcome your suggestions of totally different metrics and variants of those we have discussed here, particularly those that you feel score substantially better on the PRAGMATIC criteria.  So, over to you ...

06 August 2012

SMotW #18: security spend

Security Metric of the Week #18: information security expenditure

At first glance, this metric looks like it would be ideal for those managers who are obsessed with costs.  "Just how much are we spending on security?" they ask, followed shortly no doubt by "Do we really need to spend that much?"

OK, let's go with the flow and try to get them the figures they crave.

Our first challenge is to define what counts as security spend, or more precisely information security expenditure.  It's pretty obvious that the salaries for full-time dedicated information security professionals go in that particular bucket, but what about the security guards, or the network analysts and systems managers, or the architects and programmers spending some variable proportion of their time developing security functions?  Oh and don't forget managers and staff 'wasting their valuable time' constantly logging back in or changing their passwords or whatever: does that count as security spend?  If so, how much, exactly?  

Then there's the security hardware and software - the antivirus and firewall systems, and backups ... and what about the additional incremental costs of finding and purchasing secure as opposed to insecure systems?

Next, security incidents: these are 'clearly' security costs, aren't they?  Well, no, it could be argued that incidents result from the lack of security, the very opposite.

Issues of this nature fall into the realm of cost accounting, allocating the organization's costs rationally across the appropriate accounting categories.  Given sufficient interest and effort, costs can be allocated, although the figures will inevitably be highly subjective depending on exactly what proportion of various costs is labeled 'information security'.  Because of those arbitrary decisions, this is likely to be a significant source of error when trying to compare the figures across successive periods, even if some of the cost allocation decisions are captured in a cost accounting system.  Consequently, the Accuracy rating for the metric is quite low, and the Timeliness and Cost ratings also suffer from the time and effort involved in measuring it:



P     R     A     G     M     A     T     I     C     Score
82    94    60    60    89    29    33    49    59     62%

The high scores for Predictiveness, Relevance and Meaning might be worth building upon: in other words, would it be possible to alter the metric's definition to improve its lackluster PRAGMATIC score?  Knowing the total expenditure on information security would be fascinating, but unfortunately that's still only half of the value equation.  What about the benefits of information security?  This is where things get really tricky.  The primary benefit of security is a reduction in risks, in other words a secure organization suffers fewer and/or smaller incidents.  Measuring the value of the risk reduction is difficult, involving various assumptions and estimations based on the predicted occurrence and severity of incidents if there was no security in place.  Further benefits are associated with the assurance element of security - the confidence for the organization to be able to do business that would otherwise be too risky.  Again, hard but not impossible to value.
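To illustrate the shape of that value equation - and nothing more, since every figure below is an invented assumption and the estimates are the genuinely hard part - here is a back-of-the-envelope Python sketch in the style of a return-on-security-investment calculation, comparing the estimated risk reduction against the security spend:

```python
# Back-of-the-envelope sketch of spend vs estimated benefit. All numbers are
# invented assumptions for illustration only.

annual_security_spend = 1_200_000            # total cost allocated to infosec

# Expected annual loss WITHOUT the security programme, summed over incident types:
# estimated frequency per year x estimated impact per incident.
ale_without = (0.5 * 2_000_000) + (12 * 40_000) + (2 * 150_000)   # = 1,780,000

# Expected annual loss WITH the programme in place (lower frequencies/impacts).
ale_with = (0.1 * 2_000_000) + (4 * 40_000) + (0.5 * 150_000)     # =   435,000

risk_reduction = ale_without - ale_with      # estimated benefit = 1,345,000
net_value = risk_reduction - annual_security_spend

print(f"Estimated risk reduction:    {risk_reduction:,.0f}")
print(f"Net value of security spend: {net_value:,.0f}")
```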