09 November 2012

Risky metrics from security surveys

Published information security surveys can be a useful source of metrics concerning threat levels and trends, although there are many of them, each with its own methods, size, scope, purpose and design.  Big, scientifically designed and professionally executed surveys are inevitably expensive, which raises questions such as why anyone would fund them and publish the results, often for free.  What are their motives?  What's in it for them?

This is an important but seldom-considered issue because of the distinct possibility of bias.  Bias is kryptonite to metrics.  It works like a catalyst: a small amount can have a disproportionately large effect on the outcome.
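To make the catalyst point concrete, here is a minimal Python sketch using entirely hypothetical numbers.  It assumes a modest self-selection bias - victims of incidents being three times as likely to respond to a survey - and shows how even that inflates the apparent threat level:

```python
import random

random.seed(42)

# Entirely hypothetical population: 20% of 1,000 organizations
# suffered a serious incident last year (1 = victim, 0 = not).
population = [1] * 200 + [0] * 800
true_rate = sum(population) / len(population)

# A properly random sample of 100 respondents.
unbiased = random.sample(population, 100)

# A self-selected sample: assume victims are three times as likely
# to respond (a modest, plausible response bias).
weights = [3 if x == 1 else 1 for x in population]
biased = random.choices(population, weights=weights, k=100)

print(f"True incident rate:      {true_rate:.0%}")
print(f"Random sample estimate:  {sum(unbiased) / len(unbiased):.0%}")
print(f"Self-selected estimate:  {sum(biased) / len(biased):.0%}")
```

Notice that not one of the individual responses is false: the distortion (roughly doubling the apparent incident rate under these assumptions) comes entirely from who chose to answer.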

Some published survey reports are quite obviously biased, being little more than marketing vehicles that selectively collect, analyze and portray information largely to promote their sponsors' particular "solutions" as must-haves.  They tend to be based on rather small, non-random samples (a fact that is never disclosed, except perhaps to someone reading between the lines of the small print tucked out of sight), and the survey questions (which are not usually included in the reports, again for obvious reasons) are designed to encourage respondents to support certain presumptions or premises.  Even the way the metrics are presented tends to be skewed towards a particular perspective - common examples being compressed-scale graphs (often with unlabeled axes!) and cut pie charts (over-emphasizing a certain segment by showing it cut and pulled out of the main pie).  These are patently not good metrics: there would be significant risks in basing our security risk/threat analyses and plans on such dubious data.  Nevertheless, interesting anecdotal information about specific incidents is often included, and arguably the reports have some value in general security awareness terms.  Despite the inherent bias, marketing copy and blatant advertising have a useful role in making us think about the issues and consider the merits of various approaches to dealing with them - provided we are awake enough to realize that there is a strong agenda to promote and sell particular products.
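As an illustration of the compressed-scale trick, the little calculation below (using made-up figures) shows how starting the axis just below the data makes a trivial difference look enormous:

```python
# Made-up figures: product A "blocks 97% of attacks", product B 94%.
a, b = 97.0, 94.0

# Honest chart: bars drawn from zero.
honest_ratio = a / b                                 # ~1.03 - barely visible

# Compressed chart: axis starts at 93, just below the data.
axis_min = 93.0
compressed_ratio = (a - axis_min) / (b - axis_min)   # 4.0

print(f"Real difference: {a - b:.0f} points "
      f"(bars from zero differ by {honest_ratio:.2f}x)")
print(f"On the compressed axis, A's bar appears "
      f"{compressed_ratio:.1f}x taller than B's")
```

The underlying numbers are unchanged; only the presentation exaggerates the gap - which is precisely the effect an unlabeled, compressed axis relies on.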

Other security surveys are more scientific, although it is often quite difficult to determine how they were actually designed and conducted.  True, they usually give a broad descriptive outline of the sample, stating fairly obvious parameters such as the total sample size, but the sample-selection methods, survey questions and analysis are generally much harder to pin down.  Some surveys do provide additional information in this area, but most fall well short of rigorous academic/scientific standards, at least in their main published reports.  Nevertheless, the surveys that at least make some effort to describe their 'materials and methods' are more useful sources of security metrics, particularly those conducted by trustworthy industry bodies such as the Information Security Forum, or those involving truly independent survey/statistics experts (no, I don't mean market research companies!).

Some surveys are produced and published as a genuine public service.  These generally come from public bodies, which are presumably concerned that citizens are aware of, appreciate and fulfill their information security responsibilities, and which also gather and use the information for their own governmental strategic/security planning purposes.  Others come from public-spirited commercial companies - often consultancies that have little to gain from promoting specific products, aside of course from the general marketing/brand value of highlighting their own consultancy and auditing services to a global audience.

Think about these issues the next time you are asked to participate in a survey.  Consider the manner in which you were selected to participate and, if you go ahead with it, notice how the questions are phrased and presented.  Even something as innocuous as the sequence of multiple-choice answers can have a measurable statistical effect, while the choice of words in both the question preamble/stem and the answers can introduce a subtle (or not so subtle!) bias.  Watch out for key words, emotive phrases, and hints that the surveyor anticipates a certain response.  Recall your concerns when the results are finally published.
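If you ever get access to raw survey responses, one conventional way to check for an answer-order effect is a split-run comparison.  The sketch below applies scipy's chi-square test to made-up counts from a hypothetical experiment in which the same question was asked with the options in two different orders:

```python
from scipy.stats import chi2_contingency

# Hypothetical split-run: the same question asked of two groups of
# 500 respondents, with answer option "X" listed first for one
# group and last for the other.
#            chose X   chose other
x_first = [310, 190]
x_last  = [255, 245]

chi2, p, dof, _ = chi2_contingency([x_first, x_last])
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
# A small p-value suggests the option ordering itself, not the
# respondents' opinions, is driving part of the headline result.
```

With these illustrative counts the difference is statistically significant - the sort of artifact that a well-run survey would control for by randomizing answer order, and that a marketing-driven one quietly exploits.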

Finally, for today, when you are reading and thinking about using the metrics from published survey reports, think carefully about the credentials and motivations of the organizations behind them.  For example, check back through previous reports from the same outfit to see whether, in the light of experience, their trends analysis and predictions were broadly accurate and their recommendations were helpful.  Do they have a good record as surveyors?  How much can you trust them?

By the way, one clue to the scientific validity and rigor of a periodic survey is how changes to the survey across successive iterations are handled and reported.  Aside from checking whether prior data remain a valid basis for comparison and trends analysis when the wording of a question has materially changed, ponder the precise nature of the changes and how they are justified and explained.  If the survey's authors don't even take the trouble to discuss changes to the questions, ask yourself what they might be concealing.
