
Performance Management Association Conference 2004

Edinburgh International Conference Centre, 28th - 30th July

This is a distillation of the content of the conference rather than a direct report. As with many conferences, many of the sessions ran in parallel tracks; this account therefore covers only those sessions I was able to attend.

There were 300 delegates from approximately 40 countries and interestingly - and usefully - there was a good mix between academics and practitioners, and between the public and private sectors. The academics were from a spread of disciplines since performance measurement and management are interpreted widely and 'claimed' by several areas… accounting & finance, industrial engineering, corporate governance, operations management, economics, etc.

The consistent themes that came across related to frameworks, scorecards and dashboards - performance measurement is becoming something of an industry. The Balanced Scorecard has given rise to (inspired?) a whole range of similar scorecards, each attempting to create a balanced set of performance measures for a particular industry, sector, situation or… (Perhaps this is why Kaplan and Norton evolved the concept of the Scorecard into the Strategy Map - to get away from all the imitators.) This move towards 'balance' (or 'total measurement' as it used to be called) is to be welcomed, as it helps to avoid much of the sub-optimisation caused by incomplete measures.

Similarly with dashboards - graphical representations of performance measures. There were seemingly endless variations of displays, many showing real-time data to 'executives'.

Cutting through many of these interesting displays (some of which seemed only to be devices to sell more software) were the real lessons of performance measurement - and management. After all, measurement is not an end in itself; it merely offers evidence which dispels ignorance, reduces uncertainty and on which can be based improved decisions and actions.

Amongst these 'real lessons' were:

- Performance measurement and management regimes should stem from - and not drive - organisational strategy. What needs to be measured is derived from what needs to be accomplished.

- Measurement systems should underpin performance incentives and rewards but should minimise opportunities for 'gaming' or playing the system. This is often largely dependent on the way in which the results are used - ideally as part of a mature discussion and debate.

- It may be necessary to recognise and address differences before applying 'standard' methodologies or measures - such differences may be in the nature of the organisation, in geography, in culture, etc.

- Though a balanced portfolio of measures is helpful, too many measures may confuse and distract; there is a tension between 'balance' and 'focus'.

- Many performance management programmes are really organisational change programmes. The measurement is used to drive the change, which then drives the performance improvement.

- Measurement regimes get 'tired' (or rather, management teams seem to tire of them). This suggests that many of them are not actually measuring the right things; if a system gives a manager useful information, why would he or she tire of it?

- Many scorecards provide 'enterprise level' information. This needs to be translated into separate information feeds as appropriate at different managerial and operational levels if behaviours are to change throughout the organisation.

- Measurement systems need to be sufficiently flexible to evolve as the environment and the organisation change.

- The process of thinking about, designing and implementing a set of performance measures may be more important than the measures themselves; at the level of the enterprise, this thinking clarifies strategy.

- A measurement regime needs to be aligned with strategy, structure and the reward system to have maximum effect.

- Trends are usually more informative than point measures, though point measures can be set in context by benchmarking.

One very interesting presentation referred to research work being carried out at Cranfield (on yet another dashboard) based on the work of Shewhart. In the 1920s and 30s, Shewhart developed the concept of the quality control charts still in use in many industries today. Such charts were used as second level information behind the primary dashboard display so that a manager who was alerted to a particular item of data on the dashboard (as perhaps it changed from amber to red) could call up a control chart to look at a history of the measure and its variation. This pattern of past behaviour allows the manager to recognise inherent variations and make a judgement on whether the 'alert' constitutes a real problem. (The presenter suggested that in one recent research study 50% of the time managers were found to be 'tampering' with a stable situation.)
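The control-chart logic described above can be sketched in code. This is a minimal illustrative sketch in Python of a Shewhart individuals chart, not the Cranfield tool itself; the function names, the 3-sigma limits and the moving-range estimate of variation are my assumptions, chosen because they are the standard textbook construction.

```python
# Illustrative sketch (not the Cranfield dashboard): given the history
# behind one dashboard indicator, estimate the natural variation of a
# stable process and ask whether a new 'alert' value is a real signal.

def control_limits(history):
    """Return (mean, lower, upper) 3-sigma limits for an
    individuals/moving-range chart built from past measurements."""
    n = len(history)
    mean = sum(history) / n
    # Average moving range between consecutive observations.
    avg_mr = sum(abs(history[i] - history[i - 1])
                 for i in range(1, n)) / (n - 1)
    sigma = avg_mr / 1.128  # d2 constant for subgroups of size 2
    return mean, mean - 3 * sigma, mean + 3 * sigma

def is_real_signal(history, new_value):
    """True only if the new value falls outside the natural variation
    of the past data; reacting to a value inside the limits would be
    'tampering' with a stable process, in Shewhart's sense."""
    _, lower, upper = control_limits(history)
    return not (lower <= new_value <= upper)

# Example: a stable indicator hovering around 10.
history = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]
print(is_real_signal(history, 12))  # within natural variation
print(is_real_signal(history, 20))  # well outside the limits
```

The point of the second-level chart is exactly this distinction: a dashboard light turning red says only that a threshold was crossed, while the history says whether that crossing is unusual for this process.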

Another presentation that I found useful offered guidance on aligning the perspectives contained in a particular Balanced Scorecard project with the organisational strategy. To illustrate his point, the speaker cited one organisation that claimed to be environmentally aware and committed - yet there were no measures relating to the environment in the Balanced Scorecard it had just devised. This suggests that either the Balanced Scorecard was incomplete, or the commitment to the environment was merely window-dressing.

Overall the conference demonstrated that though corporate performance measurement is a growing industry, and many organisations are adopting a variety of measurement frameworks, scorecards and dashboards, the theoretical underpinnings are not yet at a level where the benefits claimed from pragmatic usage can be properly explained. However, many of the researchers suggested that a majority of performance measurement initiatives fail - perhaps this is because we do not fully understand all the various parameters and factors that underpin successful deployment.

John Heap
