Safety in numbers


“What gets measured gets done.”

This phrase, or some version of it, has been attributed to a number of different sources going all the way back to the Renaissance mathematician and astronomer Rheticus. Whatever its true age and provenance, over the last few decades it has been memetically embedded into pretty much every organisation in one way or another. We have myriad ways of measuring and presenting data across countless parameters to allow us to ‘manage’ performance.

This is also true of safety. Key performance indicators are the way forward. We must measure. Only then can we understand how well we’re doing. How many accidents have we had? What is our total recordable injury frequency rate? How do our numbers compare against others in our industry, or in other industries? And finally, how do we use this information to properly target our improvement actions?

This is not an article about what we’re measuring, which is a well-worn path (although I would just like to paraphrase Peter Drucker when thinking about injury rate metrics and say that there is nothing quite so useless as measuring that which need not be measured at all). This is more about how we understand and interpret the data we are spending so much time and effort on collecting.

The data are collated, tabulated, charted and summarised, with the level of detail inversely proportional to the level of management, until, by the time it reaches the executive and the board, it has been all but reduced to a sad face or a smiley face. While it is the reality of the situation that senior executives need summary data due to the sheer scale of what they are reviewing (at least in large organisations), if they cannot properly interpret it, it is worthless.

So how well do we navigate this sea of information? How well do we understand the numbers? How many safety professionals have any kind of higher level qualification that is strongly numerically based? Or, for that matter, how many of our management team genuinely understand numbers?

Unfortunately, a significant proportion of people are statistically illiterate. I highly recommend The Tiger That Isn’t by Michael Blastland and Andrew Dilnot to see just how extensive this is.

Numbers can be hugely persuasive. They are presented and received as fact, often with little underlying analysis to support the veracity of the data. Conclusions are drawn based on small data sets and wildly extrapolated to apply in all sorts of irrelevant areas. This is then used by the company as the information with which to make its broadest policy decisions around safety.

One of the most popular (and useful) ways of presenting data is in the form of trends. It is no use knowing that we have injured seven people this month if there is no context around that – is seven better than last month, or worse, or the same? If different, is it significantly different, or only marginal? Is that difference part of an ongoing trend, or a single outlier? Providing trending data allows some of these questions to be more thoughtfully considered. This is frequently done using a ‘12-month rolling average’, primarily because a trend needs a timeframe, and if we choose calendar or financial years the year is halfway through before we start to see anything. A 12-month rolling average takes the average of the immediately preceding 12 months and plots it on a graph, so that there is always sufficient data to see a trend. This is a perfectly legitimate process, but there is a trap in the detail.
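To make the mechanics concrete, here is a minimal sketch in Python (not from the original piece, and using entirely made-up monthly counts) of how a 12-month rolling average is calculated:

```python
# Minimal sketch: a 12-month rolling average takes the mean of the 12 most
# recent monthly counts, so a trend point is available every month rather
# than only at the end of a calendar or financial year.
# All figures below are hypothetical.

def rolling_average(monthly_counts, window=12):
    """Return the mean of each full 12-month window in the series."""
    return [sum(monthly_counts[i - window:i]) / window
            for i in range(window, len(monthly_counts) + 1)]

# Two years of made-up monthly injury counts
monthly_counts = [9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 6, 6,
                  8, 8, 8, 8, 7, 7, 6, 6, 5, 5, 4, 4]

print(rolling_average(monthly_counts))  # one trend point per month from month 12 onwards
```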

Take the following data set for injuries.

Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
 8    8    8    8    7    7    6    6    5    5    4    4

This shows a clear downward trend. What we are doing appears to be working (subject to the problems of using lagging indicators). So we should continue to do more of the same and investigate other improvements (because we never stop looking for improvements). Shown on a graph, this should demonstrate our month-on-month improvement for all to see. Shouldn’t it?

However, now imagine that the month before this data set, the previous December, we had a stellar month where we had no injuries. In last month’s information, this zero was part of our rolling average, lowering it. This month, however, it has dropped off our data set, so we no longer have the benefit of it in the average. The rolling average graph looks like this.

[Figure: 12-month rolling average over the same period, showing a sudden upward spike in the final month]

Something is wrong here. We’ve had a sudden spike. Let’s change everything.

The way the data is displayed presents the exact opposite of what is happening. Without the careful analysis and questioning that needs to go with it, we end up drawing completely the wrong conclusion. This is the single biggest failing with statistics – drawing conclusions from insufficient information and basing our actions and strategies on them. As the old mathematics joke goes, “There are two types of people in the world: those who can draw conclusions from incomplete data sets . . .”
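To put the trap into numbers rather than a picture, here is a small illustrative calculation (a sketch only, using the hypothetical figures above and the assumed zero-injury December before them):

```python
# Sketch of the 'spike': the previous December's zero sits in the window that
# ends in November but drops out of the window that ends in December, so the
# rolling average jumps even though the monthly figures themselves have not risen.
# All figures are hypothetical, taken from the example above.

previous_december = 0                                   # the assumed zero-injury month
this_year = [8, 8, 8, 8, 7, 7, 6, 6, 5, 5, 4, 4]        # Jan-Dec from the table above

window_to_nov = [previous_december] + this_year[:11]    # prev Dec .. Nov (12 months)
window_to_dec = this_year                               # Jan .. Dec (12 months)

print(f"{sum(window_to_nov) / 12:.2f}")  # 6.00 -- the zero is still in the window
print(f"{sum(window_to_dec) / 12:.2f}")  # 6.33 -- the zero has dropped off: an apparent spike
```

The monthly count did not rise at all between November and December, yet the plotted rolling average jumps upward purely because the exceptional zero fell out of the window.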

Even if the graph above represented reality, it is a single data point that is inconsistent with the general trend. We should be very careful about knee-jerk reactions to it, because next month we could revert to type. This is of particular concern at executive and board level, where the information is packaged into short summary statements. If the safety manager cannot provide the interpretation, poor decisions will be made.

This lack of underlying information is replicated across a wide range of issues and topics – of ten reported injuries, how many of them had common causes? How many reached their full potential and how many could have resulted in something much worse but for luck? Is there a seasonal change that would only be revealed by a meta-analysis across a number of years’ data? And so on.

How can we improve this situation?

Firstly, safety managers need to be far more capable in the statistical field. It is their responsibility to provide their executive team with appropriate conclusions and recommendations for sanction, not simply to hand over data and let them draw their own conclusions. Secondly, they need to be very careful about what they are measuring, how and why – recognise the limitations of the data and the quality of its collection, and bring these into the discussion. Thirdly, executives need to treat headline metrics as a starting point for inquiry and discussion, not as an end point for decision making.

3 thoughts on “Safety in numbers”

  1. Hi Craig, good point in your article. If we put 4 or below for each month of the previous year, we would see an even more ironic ‘up’ trend in the rolling 12-month average. That just suggests the rolling average is not a good indicator on its own.


  2. Enlightening. Very good article. You would enjoy http://www.safetyrisk.net/the-seduction-of-measurement-in-risk-and-safety/ by Dr Rob Long. He takes a very interesting view on metrics.

    Myself, I have visited too many sites with obvious safety failures that haven’t turned into accidents yet; however, the metrics have all been good.

    What I have done is re-examine the concept of QSHE. A construction company with a great safety record turns over a project with 2500 quality failures or ‘snags’. If the job is done correctly, this should reduce the snags. For the job to be done correctly, it follows that training, compliance and auditing must be carried out more rigorously.

    If the snags reduce, then the job should have been conducted in a safer manner.
    More to the point, safety failures can be hidden or ignored so as not to dispel the myth that the company is safe, but a quality failure cannot be ignored. If quality is given more prominence in risk assessment, safety will follow.

    It integrates the risks of safety and quality, if there is only one way to do the job. Add in environmental impact and we now have an integrated risk assessment that, in theory, is superior.

    Just thinking out loud: the picture you describe, explaining the timeline, is fantastic; but could we also put up the quality and environmental performance as well? Would it act as a truer indication of the trend on site?

    Karl

