I was halfway through writing a blog post on the use of lost time injuries (LTIs) as a metric in safety when David Broadbent posted one that was far better, so I abandoned it. Read David’s here.
I’m not going to repeat the arguments, but at the end of the article, David poses the question of why we still accept the use of LTIs as a metric.
LTIs are intended (I think) to be a measure of severity. The rationale is that if an injury requires time off work, it must be significant. It is, therefore, a nod towards monitoring high-risk activities (albeit in a lagging way). Focusing on higher risk is something I am all for and, indeed, it is one of the key foundations of the approach to improving safety that I advocate with clients. However, despite the good intentions, LTI categorisation does not achieve this.
There are too many factors affecting the extent of an injury that are outside the realistic control of the work at hand. For a given event, the injury received can depend on the time of day, the temperature, the physiology of the individual, other people, external influences and sheer bad (or good) luck, amongst other things. Playing rugby in my younger days, I ran into an opposition player as we were both moving towards the ball. We weren’t going at high speed; we weren’t doing anything unusual or against the rules, and when we collided there was less impact than in thousands of similar incidents throughout my playing career. But due to some unusual combination of timing, positioning, point of impact and whatever else, I came out of it with a kneecap in two pieces. The severity of the outcome bore no relation to what we were doing compared with our normal approach.
Similarly, driving through a red light can result in nothing, or in a fatal impact with a cyclist. The point is not what the consequence was – the point is that the activity itself was unsafe. By focusing on severity, we can miss many unsafe activities because the outcome was fortunately benign, or we can target accidents where the outcome was actually independent of any unsafe act. So it is not an effective measure of severity in terms of recognising the level of risk we are taking in day-to-day operations.
But this is true of all lagging indicators. A bigger problem with LTIs is that every injury inevitably turns into a categorisation process, rather than a treatment and rehabilitation process. Trying to get people back to work (usually via the dreaded ‘light duties’) to ‘avoid’ an LTI sends a terrible message to the workforce that massively undermines anything we do in safety. It makes no difference how many times you promote safety if your action following an injury is based on massaging statistics rather than the welfare of the injured party. The negativity and cynicism it drives is enormous. I can’t think of any other function where we develop and deploy processes that clearly and systematically derail our efforts in quite the same predictable way.
Compare the (usually unspoken, but not always) impression of, “You might be in pain, but if we get you back to work we avoid an LTI” with, “Take what time you need to recover and we’ll help however we can to speed up your recovery.” The accident has happened, our controls have failed. Let’s at least treat the person with some respect so that they don’t completely disengage from safety in the future.
So, to return to David’s question – if it is not only ineffective, but actually damaging, why do we accept it? Especially as, almost without exception, every safety professional I discuss it with thinks it’s a bad metric.
I was recently working with a company to help them develop their indicators for reporting to the Executive and the Board, and I challenged the use of the LTI graph. They recognised that it was not very helpful for what they were trying to achieve, but when they researched benchmarking data (a whole separate blog topic), they found that LTIs were the only metric that everyone used.
So it appears to boil down to this – we do it because everyone else does it.
We seem to be trapped in some sort of vicious circle of benchmarking peer pressure that reinforces bad data.
The reasons why and how we got here probably don’t matter too much. The next question to ask is: how do we change it?
As in other examples of negative peer pressure, someone needs to take a leadership stance and just say no. Safety managers out there need to start flexing their professional muscles and begin reporting what management needs to know to improve safety, not what they think it needs in order to compare with someone else’s output. If we put as much effort into improving performance as we do into category management, we might make a little more progress.