Too often, project schedules just aren’t fit for purpose – so this initiative on standard programming benchmarks is welcome
MPs are seeking inspiration from the London 2012 Olympics for the multibillion-pound scheme to renew the crumbling Houses of Parliament. Their interest is piqued chiefly by the Olympic Delivery Authority, whose commanding role in procuring the venue infrastructure cut through many potential inefficiencies.
Megaprojects are a breed of their own, of course, and involve years of exhaustive planning and immense knowledge transfer. Nonetheless, the same fundamental issues can determine why some construction projects succeed and others fail. One such factor is how carefully the scheduling has been carried out.
A programme displays the order and duration of tasks required to progress from start to finish and includes key dates and constraints. The longest chain of dependent activities, known as the "critical path", determines the shortest period in which completion can be achieved. It is a sequence of "critical" activities that must be undertaken one after another – for instance, foundations should be poured before walls are built. A delay to any of them will commensurately hold up the rest. The term "float" describes the extent to which non-critical activities can be deferred without affecting other tasks.
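The mechanics behind these terms can be illustrated with a small worked example. The sketch below runs the standard forward and backward passes of the critical path method over a hypothetical four-task schedule (the task names and durations are invented for illustration): zero total float marks an activity as critical.

```python
from collections import defaultdict

# Illustrative mini-schedule (hypothetical data): task -> (duration in days, predecessors).
# Tasks are listed so that every predecessor appears before its successors.
tasks = {
    "foundations": (10, []),
    "walls":       (15, ["foundations"]),
    "roof":        (7,  ["walls"]),
    "landscaping": (5,  ["foundations"]),  # a non-critical branch
}

# Forward pass: earliest start and finish for each task.
early = {}
for name, (dur, preds) in tasks.items():
    es = max((early[p][1] for p in preds), default=0)
    early[name] = (es, es + dur)

# The shortest achievable completion date is set by the longest chain.
completion = max(ef for _, ef in early.values())

# Backward pass: latest start and finish without delaying completion.
succs = defaultdict(list)
for name, (_, preds) in tasks.items():
    for p in preds:
        succs[p].append(name)

late = {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    lf = min((late[s][0] for s in succs[name]), default=completion)
    late[name] = (lf - dur, lf)

# Total float = latest start minus earliest start; zero float marks the critical path.
floats = {name: late[name][0] - early[name][0] for name in tasks}
critical_path = [name for name in tasks if floats[name] == 0]
```

On this data, foundations, walls and roof form the critical path (32 days in total), while landscaping carries 17 days of float and can slip by that much without delaying completion. The same calculation, rerun after a variation or delay event, shows the revised completion date, which is essentially how an accurate programme supports extension-of-time analysis.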
An accurate programme becomes a tool that can be relied on to properly plan work and resourcing. If a variation to the works or intervening event arises, the programme can be used to compute the impact on outstanding activities and completion date. It’s invaluable in dealing with extensions of time or comparing delivery against planned-for progress.
Sadly, a conspicuous number of schedules aren’t fit for purpose. Contractors may have put them together with little or no direction from the employer or its advisers. Alternatively, they may be contractually non-compliant or error-strewn. Since the employer may have no contractual entitlement to comment on the document (or may simply neglect to scrutinise prudently), material failings may not be initially detected. Where delays occur, the employer may need a new, retrospective programme in order to paint a truer picture.
It’s a scenario best avoided. Parties should consider demarcating clearly in the contract what the programme must include, how it should be built up and when it ought to be updated to ensure continued veracity. A great example can be found in how the Time and Cost Management Contract from the Chartered Institute of Building (CIOB) deals with its “Working Schedule”. Though less prescriptive, NEC4 and the FIDIC Red Book follow similar principles.
Next, there should be a process through which parties can become comfortable with the fitness of the programme. To date, little guidance is publicly accessible. The Society of Construction Law’s Delay and Disruption Protocol, for example, steers away from providing “stress tests”. The Defense Contract Management Agency (DCMA) 14-Point Assessment does provide them, but its reliance on US project terminology discourages broader UK recognition.
To address this, a specialist group is drafting a standard form programming protocol for the CIOB. Part of its purpose is to offer best practice instruction on what parties should look for in a baseline schedule. In addition, it enables them to put a programme through tests to determine its integrity. While the protocol’s performance indicators overlap to a degree with the DCMA’s, assessment is instead made on a pass/fail basis, depending on whether a defined threshold is met. This makes the outcome straightforward for all to understand, and transparency in decision-making is upheld by setting distinct criteria against which each indicator is judged.
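The CIOB protocol’s actual indicators and thresholds have not been published, but the pass/fail idea can be sketched in the abstract. In the hypothetical example below, each indicator is a simple ratio compared against a threshold; the indicator names and threshold values are illustrative assumptions loosely modelled on DCMA-style checks, not the protocol’s real criteria.

```python
# Hypothetical schedule-integrity checks in the spirit of DCMA-style indicators.
# Indicator names and thresholds are illustrative assumptions, not the CIOB's.
def run_checks(activities, missing_logic_limit=0.05,
               high_float_limit=0.05, high_float_days=44):
    """Return a dict mapping indicator name -> True (pass) or False (fail)."""
    total = len(activities)
    # Activities with no logic links at all cannot drive or be driven by anything.
    missing_logic = sum(1 for a in activities
                        if not a["predecessors"] and not a["successors"])
    # Very large float often signals missing or wrong logic rather than real slack.
    high_float = sum(1 for a in activities if a["total_float"] > high_float_days)
    # Negative float means the plan already cannot meet its constraints.
    negative_float = sum(1 for a in activities if a["total_float"] < 0)
    return {
        "missing_logic": missing_logic / total <= missing_logic_limit,
        "high_float": high_float / total <= high_float_limit,
        "negative_float": negative_float == 0,
    }

# Example: a three-activity schedule where one activity has no logic links
# and excessive float, so two of the three indicators fail.
activities = [
    {"predecessors": [],    "successors": ["B"], "total_float": 0},
    {"predecessors": ["A"], "successors": [],    "total_float": 0},
    {"predecessors": [],    "successors": [],    "total_float": 60},
]
results = run_checks(activities)
```

The appeal of this structure is exactly what the article describes: each indicator either passes or fails against a stated criterion, so a contractor, employer and their advisers can all read the result the same way without arguing over a sliding scale.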
The CIOB rightly recognises that good programme production should be a collaborative, rather than an adversarial, exercise. Passing the tests should therefore not present a challenge if the new protocol’s guidance has been followed in fixing the programme. Furthermore, the document should assist in keeping the programme regularly updated, ensuring that a relatively recent record of actual progress and adjustments to planned activities is at hand.
The protocol advocates the authoring of a programme narrative, which details any assumptions made in developing the schedule. Its tests can scale to suit a range of project sizes. Whether use of the protocol should be a contractual requirement is left to the relevant terms of agreement or parties’ choice, though model clauses are there to facilitate it if desired.
Templates are now being readied for popular programme verification applications such as Elecosoft’s Powerproject and Deltek’s Acumen Fuse. The CIOB Programming Protocol will be released later this year in a consultation edition for comment and feedback from contractors, clients, programme managers and others.