No longer is it enough to trot out the line that you support Egan. Now you must measure your performance and then improve on it. Clients are set on using key performance indicators to help select contractors and consultants. And who can blame them? The indicators allow clients to judge firms on a quantitative rather than qualitative basis.
So, how do the indicators work? Building asked leading quantity surveyor Gardiner & Theobald to put them through their paces. G&T partners Paul Ridout and Andrew Pollard were asked to work out key performance indicators for three architects bidding to design a hypothetical 10-storey, new-build office in the City of London. They were also asked to work out key performance indicators for three contractors bidding to build the office.
Using G&T’s cost database, Ridout and Pollard selected three architects and three contractors it had recently worked with. They calculated key performance indicators for all six firms and presented them on “spider web” graphs, as recommended by the Movement for Innovation. The figures are based on data from comparable jobs. The firms have not been named because G&T was unwilling to criticise companies it works with.
G&T assessed key performance indicators for three architects that it has worked with recently. Using three comparable office projects for each practice, G&T calculated construction cost, productivity, predictability of design and construction time, predictability of construction cost, defects and client satisfaction with the product and service. Predictability of design cost, predictability of design and construction time, profitability and safety were not measured.
The benchmark value on the “spider web” graph is 50. Above 50 is better than the 1998 construction industry benchmark, as calculated by the DETR; below 50 is worse. The wider the web, the better the performance.
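The scoring convention can be illustrated with a short sketch. This is a hypothetical percentile-based scheme, not the DETR’s published method: a firm’s raw figure for an indicator is compared against a sample of industry results, and the score is the percentage of the sample the firm beats, so 50 corresponds to mid-pack (benchmark) performance. The sample data and the `kpi_score` function are illustrative assumptions.

```python
# Hypothetical sketch of benchmark scoring: 50 = industry benchmark,
# above 50 = better than benchmark, below 50 = worse.
# The percentile approach and the sample data are assumptions for
# illustration, not the DETR's actual 1998 methodology.

def kpi_score(value, industry_values, lower_is_better=True):
    """Return a 0-100 score: the percentage of industry results beaten.

    For indicators such as defects or construction cost, lower raw
    values are better; for client satisfaction, higher is better.
    """
    if lower_is_better:
        beaten = sum(1 for v in industry_values if value < v)
    else:
        beaten = sum(1 for v in industry_values if value > v)
    return 100.0 * beaten / len(industry_values)

# Example: defects per project across a made-up 1998 industry sample.
industry_defects = [12, 8, 15, 10, 9, 20, 7, 11, 14, 6]
print(kpi_score(9, industry_defects))  # beats 6 of 10 firms -> 60.0
```

A firm scoring 60 on defects would plot outside the 50 ring on that axis of the spider web; scoring across all indicators this way produces the web shape, where a wider web means broadly better performance.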
G&T’s verdict on all three architects
Ridout and Pollard would have recommended architect 1. “Although architect 1 performed least well on client product satisfaction, architect 2 probably lost out because of its poor performance on predictability of construction cost and design time,” says Ridout. Architect 3’s poor record on defects ruled it out.
G&T assessed key performance indicators for three contractors it has recently worked with. Using three comparable office projects for each firm, G&T calculated construction cost, productivity, safety, profitability, predictability of construction time, predictability of construction cost, defects and client satisfaction with the product and service. Predictability of design cost and predictability of design and construction time were not measured.
As with the architects’ graphs, the benchmark value is 50. Above 50 is better than the DETR’s 1998 construction industry benchmark; below 50 is worse. The wider the web, the better the performance.
G&T’s verdict on the three contractors
Ridout and Pollard chose contractor 1. “Even though contractor 1 scored most poorly on construction cost and profitability, it is our choice. A strong case could be made for contractor 2. But it may be that client or project priorities such as inflexible opening dates would eventually sway the decision one way or another.”
Do the indicators measure up?
Despite reservations about the validity of some data and about how the indicators are calculated, G&T’s Ridout and Pollard are fans of key performance indicators. “It’s a service we are interested in supplying for clients,” says Ridout. “It provides another tool in our armoury for selecting contractors.” But the duo have five criticisms: