The problem of measuring
Successful planning depends on the context in which we are working. Planning in a stable, known and repeatable environment is relatively straightforward. We take the contributing factors into account, we do the analysis and we can produce a robust plan.
However, when planning in an uncertain environment this process becomes a lot more difficult – and fear-instilling. We can’t analyse an unknowable future and we can’t number-crunch our way out of this complexity.
The same premise applies to the problem of measuring results. As with planning, it is relatively easy to figure out how to measure when working in a stable, repeatable system. In manufacturing for example, outputs are the same as outcomes – if we produce 100 tractors (output), we have 100 tractors (outcome).
When measuring in more dynamic environments, outputs and outcomes often don’t marry up. The outcome might be unrelated to the output – we could spend a lot of energy attempting to deliver a service or develop an opportunity, only to fall short of what we hoped to achieve. This is especially true for government departments attempting to determine the actual impact of their activity on complex social problems.
In recent decades there has been a heavy focus on measuring complex issues using metrics in a ‘stable system’ way. Approaches such as the Balanced Scorecard have a number of shortcomings:
They operate from a ‘closed system’ assumption. The models are predicated on the absence of extraneous inputs, focusing heavily on selected information in goals and measurement metrics (which they also assume to be the right, or best, information). While categorisation is important, using this type of rigid model in measurement can create vulnerability to shocks and can blind organisations to the unexpected.
They fix a definite future outcome. Goals and targets help guide action and can motivate. However, in many circumstances they can also be either completely arbitrary (‘X% growth year-on-year for five years’), or too vague (‘Continue to be the best we can be’).
They 'lock in' the focus. The more definitive a measure, the more it focuses the minds of those required to achieve it. Intuitively this sounds like a positive thing – but in dynamic, fast-changing environments it can have negative side effects. It can pervert behaviour (people focus on activity to achieve a particular metric while ignoring other opportunities). It can create disingenuous performance (work that achieves the metrics while failing to deliver any benefit). Even worse, it can cause low morale (people tuning out because of the disconnect between measured outputs and real outcomes).
They are seldom contextual. The primary issue with having a ‘best practice’ measurement system is that all organisations are different. Different functions require different approaches to developing and measuring outcomes. A model appropriate for your organisation today may not be relevant in two years’ time. Even if a measurement model is tailored to a sector or entity, it may still make massive assumptions about context. This further amplifies the danger of ‘locking in’ – once an organisation decides on its approach, it struggles greatly to move away from it, even when it is demonstrably irrelevant. Getting the board to throw something out is particularly challenging.
Organisations should instead work with a set of simplified KPIs in their strategic plans that essentially operate as principles across the organisation. This means that the organisation can report against goals, without being subject to organisation-wide measures that try to over-specify the activity or 'balance' a set of complex factors.
In addition, each area of activity should also have targets and measures in the annual business plan – but these should be appropriate to the individual activity in question, rather than part of a descending series of subsets as in many performance measurement methodologies. These will all connect with one another, without having to strictly align in a heavily engineered way.
The general idea here is one of independent connection. An individual area manager needs to be part of the business (supported by it and aiming for the same goals), but they also need to have freedom to act and therefore should not be strongly measured or incentivised around alignment to a set series of central measures. They should be able to find measures appropriate to the context of their work.
What this essentially comes down to is a degree of trust in the person or business unit undertaking an activity. Former US President Ronald Reagan’s phrase “trust, but verify” springs to mind.
Organisations need a broad set of KPIs or outcomes at a strategic level so that they have the flexibility to move with complex environments. But they also need some hard metrics against deliverables so that they can be held accountable. These hard metrics need to be clear outcomes of what different parts of the organisation will achieve – but they do not need to be over-engineered to ensure accountability.
This way the organisation can be both agile and accountable.
Steve McCrone and Paul Sullivan operate Cornwall Strategic