In this month’s engineering services cost model, Davis Langdon Mott Green Wall reviews the principles of data centre design and looks at the implications for services costs

Demand for data centres slowed after the dotcom crash of 2000, but facilities in London are now reported to be bursting at the seams, with rack space at a premium.

It has been estimated that demand for data centres is growing at about 30% a year. A recent survey found that most operators in the UK are already close to capacity, and the sector is set for significant long-term growth. This is driven by the greater use of corporate websites, the insatiable demand for internet services, the development of high-bandwidth services, and organisations looking to outsource their data processing to reduce costs.

The cost model concentrates on co-location centres, which provide serviced space for rent with guaranteed uninterrupted operation of tenants’ equipment and access to multiple primary optical fibre networks.

Co-location centre requirements are very specific. A resilient, secure data centre is built on interdependencies. Like a jigsaw, where every piece matters, each system and process in the construction of a data centre must be analysed so that no single point of weakness can compromise its operation.

A typical data centre has at least 20 major mechanical, electrical, fire protection, security and other systems, each of which has subsystems and components, and all of which must be concurrently maintainable and/or fault-tolerant for the site as a whole to qualify as such. This means providing redundant plant to maintain system operation, and care must be taken to ensure this investment is not applied to the wrong parts of the system, resulting in ineffective expenditure.

One of the most common sources of confusion in the design of data centres is the definition of reliability and availability (that is, the proportion of time a system is forecast to be operational). With the explosive growth of the internet comes increased demand for computer hardware reliability. Information technology customers expect availability of 99.999% (“five nines”), but it is doubtful that even the most thoroughly designed infrastructure system can support such a figure.

A tiered classification for site infrastructure functionality has been developed to provide a common standard. This is summarised in Table 1 (overleaf). Under this system, measured availability ranges from 99.671% for tier 1 systems to 99.995% for tier 4 systems.
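These percentages are easier to judge when converted into forecast downtime per year. The following is a minimal sketch of the arithmetic, using the availability figures quoted above:

```python
# Convert an availability percentage into forecast downtime per year.
# Tier availabilities are those quoted in Table 1; the "five nines"
# figure is included for comparison.

HOURS_PER_YEAR = 8760  # 365 days

def annual_downtime_hours(availability_pct: float) -> float:
    """Forecast downtime in hours per year for a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for label, availability in [
    ("Tier 1", 99.671),
    ("Tier 4", 99.995),
    ("Five nines", 99.999),
]:
    hours = annual_downtime_hours(availability)
    print(f"{label}: {availability}% -> {hours:.1f} h/year ({hours * 60:.0f} min)")
```

On this arithmetic, tier 1 availability allows nearly 29 hours of downtime a year, tier 4 about 26 minutes, and “five nines” barely five minutes, which is why the latter is so difficult to support at infrastructure level.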

Co-location centres are either tier 3 or tier 4, and it is important that the degree of risk (and hence the levels of protection and redundancy) is balanced against cost, and that the design satisfies the operational requirements of the building and the business objectives of the operator.

Using Figure 1 as a backdrop, there are several key design issues, the first of which is the site. From a services point of view, there are three core business determinants: location, power and communications.

The location should be as free as possible from natural and man-made risks (eg flooding, adverse weather conditions, proximity to potentially polluting sites), and close to major clients.

There should be a reliable, high capacity electrical power supply or supplies. The nature and number required will depend on the levels of reliability and resilience to be provided. Not every part of the national grid is capable of delivering these requirements, and constraints on the available power supply may have significant effects on cost and programme.

Proximity to good, diverse high-bandwidth fibre links from multiple tier 1 suppliers is key to providing the diversity that underpins the resilience essential to operators and tenants.

Conversion of warehouse, industrial and office space allows facilities to be completed quickly, but there is a finite supply of suitable buildings in the right locations, so purpose-built projects are becoming the norm.

Phasing of the development, allowing expansion without affecting the operation of the completed space, is important to the viability of a scheme. There are two main approaches to providing flexibility in the services installation.


  • Central distribution: where the complete primary distribution network is installed as part of the initial phase, but plant is only brought on line in response to tenant demand. Although the services disruption is minimised, the initial investment is high.
  • Modular services: by providing local distribution networks encompassing all critical services including primary and secondary distribution, services are only installed to areas on demand. This minimises up-front investment and improves the degree of resilience through the creation of stand-alone systems. Disadvantages include the need to link the modules and the greater complexity of installation work.

There are key prerequisites for construction of the data centre itself that have an impact on the design of the services.

  • Ideally a freestanding building with a boundary perimeter and vehicle access control, allowing good perimeter security.
  • Equipment rooms distanced both horizontally and vertically from internal water services and areas such as toilets and kitchens.
  • Secure reception and site perimeter access infrastructure with visitor parking away from the building and ideally no car parking beneath.
  • Plant/storage space for standby generation, dual utility entry points with diverse physical routing, UPSs, fire suppressant cylinders, equipment configuration before installation, admin areas etc.
  • Appropriate width and load capacity access route from exterior to equipment rooms.
  • Use of smaller, segmented equipment rooms with fire separation between them, and communications/power distribution, air-conditioning, fire suppression etc that are resilient to the loss of any one equipment room.

In terms of communications within the building, good practice requires:


  • Separate communications termination rooms to allow different providers access without the need for entry into other areas.
  • Servers with multiple network interface cards for dual (high speed) local area network (LAN) connection resilience.
  • Dual LAN switches, patching and cabling. Switches and routers that support hot swap of redundant power and network interface cards are desirable.
  • Comprehensive network monitoring and alert facilities, which may include external monitoring.

Electrical services

Traditionally, the primary cost unit for a co-location centre was floor space. Now it is electrical power.

The availability, size and security of the power supply are the major selling points of a co-location centre. Power loads of 800-1,000W/m² are available, but many operators are looking to increase this to 1,200-1,500W/m². This standard represents the maximum theoretical load if the technical areas were filled to capacity. In practice, below-optimum use of rack space and diversity of server operation mean this limit may not be approached.

Development of blade servers, allowing more muscle to be packed into existing racks, has contributed to the increasing power demands. Whereas a typical rack has a load of about 2kW, a rack full of new blade servers requires about 15kW of power, and so packing these into the space previously occupied by conventional racks significantly increases the power requirements per unit area.
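A rough sketch of the arithmetic shows why. Assuming each rack accounts for about 2.5m² of technical floor area once aisles and access space are included (an assumed figure for illustration only):

```python
# Rough power density comparison: conventional racks vs blade servers.
# The 2.5 m² gross floor area per rack (rack footprint plus aisle
# share) is an assumed figure for illustration; actual layouts vary.

FLOOR_AREA_PER_RACK_M2 = 2.5  # assumption

for label, rack_load_kw in [("Conventional rack", 2.0), ("Blade server rack", 15.0)]:
    density = rack_load_kw * 1000 / FLOOR_AREA_PER_RACK_M2  # W/m²
    print(f"{label}: {rack_load_kw} kW -> {density:.0f} W/m²")
```

On these assumptions a conventional rack sits comfortably within the 800-1,000W/m² standard, while a fully bladed rack implies around 6,000W/m², far above even the 1,200-1,500W/m² now being targeted. This is why below-optimum rack utilisation and diversity allowances are relied on in practice.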

The main design issues associated with the electrical design are:

  • Resilience strategies. These are based on eliminating potential single points of failure through duplication where appropriate. A common approach involves providing dual supplies into high voltage switchgear. However, it is normally more cost-effective to provide resilience and redundancy at the level of low voltage switchgear, UPS and standby generation, avoiding the need to invest in additional high voltage panels and cabling. Tenants are provided with a secure supply through connections to two sets of switchgear, UPS and standby power modules. This principle of duplication can be extended upward to dual main supplies, and downward to the final supply connections to each rack, which can be doubled up and fed from different power distribution units (PDUs). The availability benefit of duplication is illustrated in the sketch after this list.
  • Provision of UPS and standby generation. The UPS provision in co-location centres is much higher than in conventional buildings. Almost the entire electrical load is critical, which means no break in supply or load shedding is permitted. Static battery UPS systems are the preferred option, although more expensive rotary systems, which combine standby generation with UPS functionality, are becoming more common. Rotary UPS systems are 5-10% more expensive than an equivalent battery UPS and generator combination, but need less space.
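The effect of duplication on availability can be estimated with the standard parallel-redundancy formula: if two independent supply paths each have availability A, both are down simultaneously with probability (1 − A)². A minimal sketch, assuming independent path failures (common-mode risks reduce the benefit in practice) and an illustrative single-path availability:

```python
# Availability gain from duplicating a supply path, assuming the two
# paths fail independently (a simplification: common-mode failures
# reduce the benefit in practice).

def duplicated_availability(single_path: float) -> float:
    """Availability of two independent parallel paths."""
    return 1 - (1 - single_path) ** 2

single = 0.999  # assumed availability of one complete supply path
print(f"Single path: {single:.4%}")
print(f"Dual path:   {duplicated_availability(single):.6%}")
# A 99.9% path, duplicated, reaches 99.9999% before common-mode effects.
```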

Mechanical services

Key issues for mechanical services design are:

  • cooling loads
  • diversity and security of supply
  • environmental control

The IT equipment housed in data centres produces huge quantities of heat and is intolerant of temperature or humidity fluctuations, so these parameters need to be closely controlled. Cooling loads of 800-1,000W/m² are the norm, rising to 1,200-1,500W/m² for the latest generation of data centres.

While improved efficiency means fewer processors are needed for a given output, packing more of them into racks to raise power density increases the overall cooling load. Cooling system failure would rapidly result in IT equipment failure, so requirements for standby capacity are exacting for tier 3 and 4 installations (see Table 1). Redundancy is provided throughout the installation: at chiller plant, chilled water distribution and close control units.
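One common way of expressing this redundancy is N+1 (one standby unit for every N duty units) or 2N (full duplication). A minimal sketch of the sizing arithmetic for the chiller plant, using assumed load and unit-capacity figures for illustration only:

```python
import math

# N+1 chiller sizing sketch. The cooling load and unit capacity are
# assumed figures for illustration only.

cooling_load_kw = 4000       # assumption: total technical-area load
chiller_capacity_kw = 1000   # assumption: capacity of one chiller

n_duty = math.ceil(cooling_load_kw / chiller_capacity_kw)  # N duty units
print(f"Duty chillers (N): {n_duty}")
print(f"Installed (N+1):   {n_duty + 1}")   # one standby unit
print(f"Installed (2N):    {2 * n_duty}")   # full duplication
```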

Other important points are:

  • Careful attention should be paid to airflow to eliminate hot-spots. Airflow patterns and hot-spots will also change as the room is populated, and this needs to be taken into account.
  • Positive pressurisation should be provided to technical areas to minimise dust ingress.
  • With racks densely packed with equipment such as blade servers, a 19-inch rack could generate up to 20kW of heat. Conventional cooling, where air is drawn through the racks by fans, struggles to deal with these kinds of loads (see the sketch after this list), so direct cooling of the racks using chilled water or liquid carbon dioxide may be required.
  • Leak detection needs to be provided where any liquids pass through the technical areas.
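The sketch below shows why air alone struggles at blade-server densities. The volume flow needed to remove heat is Q = P/(ρ·cp·ΔT); using standard air properties and an assumed 12°C supply-to-return temperature rise:

```python
# Airflow required to remove rack heat by air alone.
# Q = P / (rho * cp * dT), using standard properties of air.
# The 12 K temperature rise is an assumed, typical design figure.

rack_heat_kw = 20.0   # blade-filled 19-inch rack (from the text)
delta_t_k = 12.0      # assumed supply-to-return temperature rise
rho_air = 1.2         # kg/m³, air density at room conditions
cp_air = 1.005        # kJ/(kg·K), specific heat of air

volume_flow = rack_heat_kw / (rho_air * cp_air * delta_t_k)  # m³/s
print(f"Required airflow: {volume_flow:.2f} m³/s "
      f"({volume_flow * 3600:.0f} m³/h) per rack")
```

Moving roughly 5,000m³/h through a single rack is impractical with fan-assisted air cooling alone, hence the move towards chilled water or liquid carbon dioxide delivered directly to the rack.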

Fire protection

Key issues for the fire protection system are:


  • Using VESDA (very early smoke detection apparatus) systems that can detect the very early signs of a fire risk.
  • Gas suppression systems are commonly specified but can be costly. The space must be sealed to prevent gas escape, it must maintain its integrity during discharge and there must be a means of evacuating the gas afterwards.
  • Standard sprinkler systems are not used in the technical areas, but water mist systems are common. Provided the power is automatically isolated and the water is pure, the equipment is often unaffected once thoroughly dried. Water mist offers the advantages of localised activation and gentle discharge. Water-based systems should not hold water in the pipework until activated, and activation should be on a per-head basis.
  • It is prudent to divide a data centre into separate technical areas with fire breaks between and running off separate infrastructure to allow containment of disaster effects.
  • Consideration should be given to a fire suppression system that can be triggered more than once without the need for repriming.

Security

Security installations dealing with intrusion and access control are necessarily extensive. As well as physical barriers at the site and building perimeters, there should be access control within the building. Voids in floor and ceiling zones should be secured to prevent unauthorised access.

Intensive CCTV installations linked to intruder detection will include monitoring of tenants’ areas. CCTV coverage of every aisle in technical areas is becoming increasingly common.

Access control systems in co-location centres use high-technology solutions. Physical control is provided by a series of ‘man-traps’ linked to scanners reading ID cards, palms etc. Door controls limit access to technical spaces, and racks and cabinets are lockable, with keys tracked by database systems.

Energy efficiency

There is a view that the increasing divergence between server computing performance, which grows by a factor of three every two years, and the far slower improvement in energy efficiency may soon call into question the economic productivity of the co-location data centre.

While power consumption per unit of computing work falls, consumption per server actually increases. The result is rapidly escalating electricity and site infrastructure costs for every server deployed, to the point where these may exceed the revenue potential from letting the space.

This continued growth in power requirements may see buildings running out of power and/or cooling capacity, forcing unplanned capital investment to expand an existing centre or build a new one. The long-term solution is to improve energy efficiency at a rate at least equal to the rate of computational performance increase, through more energy-efficient IT hardware components (increased research and development) and the elimination of energy wastage (good practice).
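To illustrate the divergence, suppose performance triples every two years (as above) while energy efficiency, in work per watt, merely doubles over the same period. The doubling rate is an assumed figure for illustration only:

```python
# Illustrative divergence of performance and efficiency growth.
# Performance tripling every two years is from the text; the
# efficiency doubling rate is an assumed figure for illustration.

perf_growth_per_2yr = 3.0  # computing performance multiplier (from text)
eff_growth_per_2yr = 2.0   # assumed energy-efficiency multiplier

power = 1.0  # relative power draw per server, starting value
for years in range(2, 9, 2):
    power *= perf_growth_per_2yr / eff_growth_per_2yr
    print(f"Year {years}: relative power per server = {power:.2f}x")
# Power per server grows 1.5x every two years unless efficiency
# improvements match the rate of performance gains.
```

On these assumptions, power per server grows by half every two years, roughly quintupling in eight years; only when the two rates are equal does power per server hold steady.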

This will ultimately result in development of a brief for a green data centre, identifying and optimising energy performance factors from all sectors of the industry including user organisations, manufacturers and services designers.

Cost breakdown

The cost model is based on an 8,500m² two-storey co-location development, with all technical space fully fitted out up to and including the PDUs, ready for the tenants’ racks.

Costs exclude the incoming dual feed high voltage 6MVA supply. This depends on availability, but could run into millions of pounds.

Co-location buildings are relatively simple, but the services installation needs to be carefully considered to deliver a cost-effective solution that is straightforward to maintain and adapt. Detailed design of the services is essential for co-ordination purposes, and to contribute to the identification and elimination of single points of failure at an early stage. Hence, projects are typically let on a lump sum basis to specialist contractors on a fully detailed design.

Early orders are required on long lead-in items such as chillers and generators, especially as these are likely to be large capacity. Full testing and commissioning is critical to the handover of the project and the programme allowance for this should not be compromised.