The key performance indicators could change the way clients select firms, but are they up to the job? Building asked Gardiner & Theobald to test them on real-life projects.
In May, Sir John Egan’s Rethinking Construction report came of age when the Movement for Innovation launched its 10 key performance indicators. The indicators give construction firms a way of measuring their performance and comparing it with the rest of the industry.

No longer is it enough to trot out the line that you support Egan. Now you must measure your performance and then improve on it. Clients are set on using key performance indicators to help select contractors and consultants. And who can blame them? The indicators allow clients to judge firms on a quantitative rather than qualitative basis.

So, how do the indicators work? Building asked leading quantity surveyor Gardiner & Theobald to put them through their paces. G&T partners Paul Ridout and Andrew Pollard were asked to work out key performance indicators for three architects bidding to design a hypothetical 10-storey, new-build office in the City of London. They were also asked to work out key performance indicators for three contractors bidding to build the office.

Using G&T’s cost database, Ridout and Pollard selected three architects and three contractors it had recently worked with. They calculated key performance indicators for all six firms and presented them on the “spider web” graphs – as recommended by the Movement for Innovation. The figures are based on data from comparable jobs. The firms have not been named because G&T was unwilling to criticise companies it works with.

Architects

G&T assessed key performance indicators for three architects that it has worked with recently. Using three comparable office projects for each practice, G&T calculated construction cost, productivity, predictability of design time and of construction time, predictability of construction cost, defects, and client satisfaction with the product and service. Predictability of design cost, predictability of combined design and construction time, profitability and safety were not measured.

The benchmark value on the “spider web” graph is 50. Above 50 is better than the 1998 construction industry benchmark, as calculated by the DETR; below 50 is worse. The wider the web, the better the performance.
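The article does not spell out the scoring arithmetic behind the spider-web graphs, but the idea can be sketched as a percentile rank: a firm's raw result is placed against a distribution of 1998 industry results, so that matching the benchmark median scores 50 and beating most of the industry scores towards 100. The function and sample figures below are illustrative assumptions, not the DETR's published method:

```python
from bisect import bisect_left, bisect_right

def kpi_score(firm_value: float, industry_values: list[float],
              higher_is_better: bool = True) -> float:
    """Percentile-rank a firm's result against industry data.

    A score of 50 means the firm matches the industry median
    (the 1998 benchmark in the article); above 50 is better.
    """
    ordered = sorted(industry_values)
    n = len(ordered)
    # mid-rank percentile: average of the strict-below and
    # below-or-equal counts, so ties are handled symmetrically
    lo = bisect_left(ordered, firm_value)
    hi = bisect_right(ordered, firm_value)
    pct = 100.0 * (lo + hi) / (2 * n)
    # for cost- or defect-type indicators, lower raw values are better
    return pct if higher_is_better else 100.0 - pct

# e.g. hypothetical client-satisfaction marks (out of 10) from 1998 projects
industry = [5, 6, 6, 7, 7, 7, 8, 8, 9]
print(kpi_score(7, industry))   # the median mark scores 50.0
print(kpi_score(9, industry))   # the best mark scores close to 100
```

Plotting one such score per indicator around the spokes of a radar chart gives the "spider web" the Movement for Innovation recommends: the wider the web, the better the performance.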

G&T’s verdict on all three architects

Ridout and Pollard would have recommended architect 1. “Although architect 1 performed least well on client product satisfaction, architect 2 probably lost out because of poor performance on the predictability of construction cost and of design time,” says Ridout. Architect 3’s poor record on defects ruled it out.

Contractors

G&T assessed key performance indicators for three contractors it has recently worked with. Using three comparable office projects for each contractor, G&T calculated construction cost, productivity, safety, profitability, predictability of construction time, predictability of construction cost, defects and client satisfaction with the product and service. Predictability of design cost and predictability of combined design and construction time were not measured.

As with the architects’ graphs, the benchmark value is 50. Above 50 is better than the DETR’s 1998 construction industry benchmark; below 50 is worse. The wider the web, the better the performance.

G&T’s verdict on the three contractors

Ridout and Pollard chose contractor 1. “Even though contractor 1 scored most poorly on construction cost and profitability, it is our choice. A strong case could be made for contractor 2. But it may be that client or project priorities such as inflexible opening dates would eventually sway the decision one way or another.”

Do the indicators measure up?

Despite reservations about the validity of some data and about how the indicators are calculated, G&T’s Ridout and Pollard are fans of key performance indicators. “It’s a service we are interested in supplying for clients,” says Ridout. “It provides another tool in our armoury for selecting contractors.” But the duo have five criticisms:

  • Consultants and contractors are likely to be cautious about providing information that does not show them in a favourable light. Should references be asked to verify data?

  • Credible information on profitability is likely to be hard to obtain, particularly for partnerships.

  • Cost and time predictability take no account of the reasons for cost/time changes, such as client variations and planning delays.

  • The indicators for predictability of cost and time have two elements: design and construction. Should a consultant be assessed on both elements and, if so, are the results averaged? Perhaps they could be assessed separately, thereby introducing a further key performance indicator?

  • The process of normalisation (where the costs and construction times of finished projects are adjusted to take into account size, quality and location) is subjective and could lead to doubts over the credibility of the results, particularly in a competitive tender.
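The normalisation step that Ridout and Pollard question can be illustrated with a deliberately simplified sketch. The adjustment indices below are invented for illustration; in practice they would be derived from a cost database such as G&T's, and choosing them is precisely the subjective judgment the last criticism points at:

```python
def normalise_cost(cost_per_m2: float, location_index: float,
                   size_factor: float, quality_factor: float) -> float:
    """Strip out location, size and quality effects so that finished
    projects can be compared like-for-like.

    An index of 1.0 means 'no adjustment'. The values chosen for these
    indices are subjective, which is why normalised results can be
    contested in a competitive tender.
    """
    return cost_per_m2 / (location_index * size_factor * quality_factor)

# Two projects that look very different in raw cost-per-m2 terms...
city_office = normalise_cost(2400.0, location_index=1.25,
                             size_factor=1.0, quality_factor=1.2)
regional_office = normalise_cost(1650.0, location_index=0.95,
                                 size_factor=1.05, quality_factor=1.0)
# ...may turn out to be close once adjusted
print(round(city_office), round(regional_office))
```

Small changes to any one index shift the normalised figure, and with it the firm's KPI score, which is the credibility problem the criticism raises.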