Andrew Wiseman explores the real meaning behind headline metrics
MILTON KEYNES CENTRAL, 23/08/18, 06:35. So here I am, coffee in hand, Wolfmother programmed on Spotify for the journey to Honeycomb HQ. Today is slightly different, though. The normally bullet-proof 06.46 Virgin Trains service to Manchester Piccadilly is delayed. Then comes that dreaded announcement: “We’re sorry to announce that the 06.46 service to Manchester Piccadilly is delayed. Please wait for further announcements.”
Now, those who know me well know that I have a (perhaps unhealthy) knowledge of train times, especially on lines I travel frequently. But today, during this unexpected delay, I began wondering what ‘late’ really means.
I was drawn to the Virgin Trains board showing the latest data on punctuality. In the latest period, the moving annual average PPM (Public Performance Measure, the industry’s official punctuality metric) was 82.3%. Not great, but certainly not bad. At least 84,000 journeys had arrived ‘on time’ in the last year. Certainly a newsworthy headline.
However, on closer inspection, it became clear that this was not quite true. The definition of ‘on time’ does not actually mean on time. For long-distance journeys, it means arriving on time or up to 10 minutes late. The stricter ‘right time’ measure (arriving early or exactly on time) shows that only 41% of trains were actually on time over the year. Our newsworthy headline just halved, from 82% to 41%.
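To make the gap concrete, here’s a minimal sketch in Python showing how one set of arrivals produces two very different headlines depending on the threshold chosen. The delay values are invented for illustration, not real Virgin Trains data:

```python
# Minimal sketch: one set of arrival delays, two punctuality headlines.
# Delay values (in minutes) are illustrative; negative means early.
delays_minutes = [0, -1, 3, 12, 0, 7, 9, 0, 15, 2]

# PPM-style 'on time': within the threshold (10 minutes for long distance).
ppm = sum(d <= 10 for d in delays_minutes) / len(delays_minutes)

# 'Right time': arriving early or exactly on schedule.
right_time = sum(d <= 0 for d in delays_minutes) / len(delays_minutes)

print(f"PPM-style punctuality:  {ppm:.0%}")         # 80%
print(f"Right-time punctuality: {right_time:.0%}")  # 40%
```

Same trains, same data; only the definition of ‘on time’ changed, and the headline halved.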
So, we get that some trains are delayed. But not all delays are equal, and this is where understanding the real human impact on that delayed minority matters. Being late for work can often be brushed off with “these things just happen sometimes”. Compare that with missing the birth of a child, or missing the flight home on Christmas Eve, and the consequences can be much greater. You only have to look at John McClane to understand that.
That’s the problem with the reporting of such headline metrics. Over the years I’ve been fortunate enough to work with some of the world’s biggest mobile operators, and for a while there was an obsession with reporting percentage coverage figures. Now, 98% coverage sounds great, doesn’t it? But what does this headline metric mean? Does it mean that 98% of the landmass of the UK is covered, or 98% of the population? Do such figures deal with the nuances of 2G/3G/4G coverage? Nowadays, 3G/4G data coverage matters far more to most people than simply being able to make a call.
When you add that human context, it turns out that, according to Ofcom’s latest research, less than half of the UK population is very satisfied with reception strength and signal. And when you dig into the latest coverage research for Wales, actual geographic coverage for rural Wales is only 58% (and only 19% for 4G).
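A toy calculation (all numbers hypothetical, not Ofcom figures) shows how a glowing population-coverage headline and a far lower geographic figure can describe exactly the same network:

```python
# Toy illustration (numbers hypothetical): population coverage and
# geographic coverage are different weightings of the same cell data.

# Each region: (area in km^2, population, covered?)
regions = [
    (500,   5_000_000, True),   # dense city, covered
    (2_000, 1_000_000, True),   # suburbs, covered
    (8_000,   200_000, False),  # rural area, not covered
]

total_area = sum(a for a, _, _ in regions)
total_pop = sum(p for _, p, _ in regions)

pop_cov = sum(p for _, p, c in regions if c) / total_pop
geo_cov = sum(a for a, _, c in regions if c) / total_area

print(f"Population coverage: {pop_cov:.0%}")  # ~97%
print(f"Geographic coverage: {geo_cov:.0%}")  # ~24%
```

Cover the dense areas and the population number soars while most of the map stays dark, which is precisely the rural Wales story.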
If you’re in the measured coverage area, then happy days. But if you’re not, that number means nothing when you’re leaning out of a window trying to get one bar of signal to make an important call.
Ultimately, there’s nothing wrong with these headline metrics, provided you’re able to understand both the context behind the metric and any potential consequences. It was Mark Twain who popularised the phrase ‘lies, damned lies, and statistics’ – and whilst these headline data are not entirely untrue, the real value of these measures can only be understood by questioning the definition of the metric, and the why that exists behind the what.