As IT power has increased, energy use has grown enormously. Simon Rawlinson and Nick Bending of Davis Langdon examine the design and cost implications of low-energy data centres

01 / Introduction

Networked IT has been a major engine of global productivity growth – driving financial services, global trade and e-commerce as well as national networked solutions for security, health service provision and so on. Global spending on IT is in the region of £8.9tr each year, and a high proportion of this is spent on servers, which are housed in data centres.

The world of IT continues to evolve at a mind-boggling speed. Processing power doubles every 18 months, so a pound spent on processing power in 2006 delivered 27 times more performance than one spent in 2000. These improvements have been achieved through the design of faster chipsets and increased chip density in individual servers. The design of data centres has not evolved as quickly. This is an increasing problem, because a consequence of the growth in IT performance and capability is a corresponding growth in energy and cooling loads. The result is a fundamental shift in the balance of the overall costs of IT, from the cost of the equipment to the cost of building and running the data centres that house it. For the first time in the evolution of IT systems, the cost and availability of energy has become a drag on the ability to increase computing power, which is likely to lead to some radical data centre design solutions in due course, potentially involving wider use of more efficient water or CO2-based cooling in technical spaces. IT manufacturers are developing low-energy kit, and developers and operators are beginning to respond with low-energy data centre developments.

The way in which IT is delivered is also changing. While corporate applications are typically run on centralised systems, much computer processing still takes place on desktop machines. “Cloud-based” computing, where processing is carried out centrally, is a new approach driven by a “pay per use” revenue model. Its adoption will lead to further growth in server farms, and investment in cloud computing infrastructure is now worth £120bn a year.

On the basis of the continuing evolution of server-based solutions, a focus on the utilisation and energy efficiency of data centres will be a key issue not only in connection with the sustainability agenda but also with regard to cost-effective IT delivery.

02 / Current demand for data centres

Demand for data centres is driven by three main groups – owner-occupiers, tenants such as internet operations that take space in colocation centres, and managed service companies that need large-scale IT capability to deliver large projects.

Demand for space has been affected by the credit crunch, but there are still requirements in the UK and Europe. Greater regulatory oversight of financial services is, paradoxically, likely to result in more processing and storage capacity rather than less, and the development of Web 2.0 media – YouTube, BBC iPlayer, social networking and so on – is also fuelling demand. Google alone is reported to operate between 200,000 and 500,000 servers. As use of IT expands, the real constraint on growth will be the availability of power and the ability of a data centre operator to use it effectively.

03 / What do data centres do?

Data centres provide a secure location for IT equipment. Data centre security is concerned primarily with ensuring high levels of system uptime, although mitigation of the risks of exposure to physical threats is also important. The key to guaranteeing uptime is a diverse high voltage (HV) power and cooling infrastructure that is large enough to meet peak loads, together with all the connectivity required to link IT devices within and outside the data centre. Data centres are supremely functional buildings, and many are housed in warehouses and industrial premises, which give developers high levels of flexibility but do not draw attention to the sensitive nature of the building’s contents. With the UK’s power infrastructure under strain, the availability of HV infrastructure is a major determinant of location. Disused industrial facilities with a substantial power infrastructure could be very attractive to data centre operators.

04 / Design principles

Successful delivery of the key functionality of ultra-reliable uptime and connectivity depends on the application of the following key principles of design:

• Reliability. Once a data centre is switched on, it cannot be turned off. High levels of fault tolerance, together with provision for concurrent maintenance and the replacement or change of components, are fundamental to data centre design. Use of high-quality components, fail-safe design and extended commissioning are critical to successful on-programme delivery

• Simplicity. Human error during operation is a major source of downtime risk, so the use of simple solutions, albeit as part of very large systems, is the preferred lowest-risk design strategy. This is particularly the case with the configuration of back-up capacity, where the drive to reduce the investment in central plant can result in highly complex controls and distribution

• Flexibility. Data centres operate on short time frames. Servers are replaced on a three-year cycle, so a building will accommodate more than 10 technology refreshes during its operational life. Flexibility to accommodate change without major work is valuable

• Scalability. Many centres will not initially run at full capacity. Scalability is concerned with facilitating a centre’s ability to accommodate sustained growth, making best use of plant capacity, without any interruption to services. This requirement has implications for the design of all aspects of the building, building services and information architecture

• Modularity. A modular approach is focused mostly on main plant – providing the overall capacity required in a number of smaller units. Modularity is the means by which very complex systems are organised to allow scalability and also to reduce the cost implications of redundancy standards

• Proportionality. An understanding of the total cost of solutions is very valuable, as IT system managers often impose arduous availability requirements without understanding the whole-life costs, which can vary by up to three times.

05 / Data centre cost drivers

Data centres have a unique set of cost drivers, which are not necessarily related to floor area. Conventional measures of efficiency used in commercial buildings do not apply when occupancy levels are 100 times lower than a conventional office and cooling loads 10 times higher. The key cost drivers are:

• Extent of technical area

• Power and heat load density, which is determined by planned server density. Average loads in the most recent data centres range from 1,200 to 1,500W/m2, with peak loads of up to double that amount. This is a major increase on the 800 to 1,000W/m2 standard common earlier this decade, and has implications for power, cooling and standby capacity (a worked sizing example follows this list)

• Fault tolerance and maintainability. Fault tolerance is concerned with the impact of the breakdown of single components. The overall aim is to avoid single points of failure. Maintainability also requires that all systems have a back-up, so that secure operation is maintained at all times. The extent of back-up systems is determined by requirements for guaranteed service availability.

These requirements are described in the following section on resilience and availability. Fault tolerance and maintainability requirements drive capital costs, space requirements and utility costs.

• Space balance. Plant space, storage and IT assembly areas, administration accommodation and so on are relatively low-impact cost drivers, as building costs account for no more than 20% of overall costs. However, adequate and well-planned space – particularly storage and other technical space – is important to the effective, long-term operation of a data centre.

• Scalability. The chosen expansion strategy will determine the initial capital spend on base infrastructure such as HV supply, main cooling plant, distribution and so on. The strategy will also have an impact on running costs if plant sized for ultimate loads is run at below optimum efficiency in the early years of operation. After the dotcom boom, some data centres were running plant at only 20% of design loads – which was very inefficient. Modular approaches to chillers, room cooling, standby generation and uninterruptible power supply (UPS) provide a high level of scalability and potential for lower-cost diversity of supply. Some systems, however, such as HV switchgear, water cooling towers, primary cooling distribution and so on, are more economic to provide as part of the first build phase.
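As a rough illustration of how density rather than floor area drives plant sizing, the Python sketch below combines the 1,500W/m2 design density quoted above with the 2,000m2 technical area of the cost model described in section 10. It is an indicative calculation only, not project data.

```python
# Indicative sizing arithmetic only - figures are taken from this article,
# not from any specific project.

technical_area_m2 = 2_000         # technical (server) space, per the cost model
average_density_w_per_m2 = 1_500  # upper end of current average load densities
peak_factor = 2.0                 # peak loads quoted as up to double the average

average_it_load_kw = technical_area_m2 * average_density_w_per_m2 / 1_000
peak_it_load_kw = average_it_load_kw * peak_factor

print(f"Average IT load: {average_it_load_kw:,.0f} kW")  # 3,000 kW
print(f"Peak IT load:    {peak_it_load_kw:,.0f} kW")     # 6,000 kW
```

Cooling, standby generation and UPS capacity all scale from this load, not from the 5,000m2 gross floor area.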

In summary, total floor area is not a cost driver for data centres; server density and the client’s requirements for guaranteed availability are better determinants of final cost. In delivering best value solutions, the following issues should be considered:

• Adoption of a holistic approach which considers whole-life issues as well as immediate technical requirements

• The right level of modularity and the avoidance of over-provision

• Avoidance of the duplication of unintended design tolerances as final loads are calculated from initial assumptions – for example, basing calculations on “name plate” values

• Optimisation of the resilience strategy to avoid over-provision.

06 / Resilience and availability

As all aspects of life become more dependent on continual, real-time IT processing, user requirements for guaranteed availability have become more onerous, and the risks involved in challenging these requirements have increased. At one end of the scale, financial transactions, access to health records and military security applications require absolute security of round-the-clock operation, and depend on extreme levels of 100% back-up. These requirements come with significant capital and running cost implications. However, most processing has less critical availability requirements and can be accommodated in centres that provide a lower level of resilience. The accepted definition of data centre availability is the four-tier system developed by the Uptime Institute. Depending on the level of availability guaranteed, data centre developers and operators have to provide an increasing degree of system back-up. For example, a tier 4 centre will be 50% larger and three times more expensive per unit of processing space than an equivalent tier 2 centre. Systems that are affected by redundancy/resilience requirements include:

• Physical security. The minimisation of risks associated with the location of the building and the provision of physical security to withstand disaster events or to prevent unauthorised access

• HV supplies. Very secure facilities will require two separate HV supplies rated to the maximum load. However, as HV capacity is restricted in many locations, a fully diversified supply is costly and difficult to obtain, and has opportunity cost implications for overall grid capacity. For lower-tier centres, a single HV feed will supply two separate HV transformers and low voltage (LV) networks within the centre

• Central plant. Plant including chillers, pumps, standby generation and UPS will be provided with a varying degree of back-up to provide for component failure or cover for maintenance and replacement. On most data centres designed to tier 3 or below, an n+1 redundancy strategy is followed, whereby one unit of plant – be it HV switchgear, chiller plant, UPS and so on – is configured to provide overall standby capacity for the system. The cost of n+1 resilience is influenced by the degree of modularity in the design. For example, 1,500 kW of chiller load provided by 3 x 500 kW chillers will require one extra 500 kW chiller to provide n+1 back-up. Depending on the degree of modularity, n+1 increases plant capacity by 25–35% rather than the 100% required under a 2n scenario (a worked comparison follows this list)

• LV switchgear and standby power. Power supplies to all technical areas are provided on a diverse basis. This avoids single points of failure and provides capacity for maintenance and replacement. The point at which diversity is provided has a key impact on project cost. Most solutions run duplicate systems from dual HV transformers, with multiple standby generators and UPS units feeding into both circuits

• Final distribution. Power supply and cooling provision to technical areas are the final link in the chain. Individual equipment racks will have two power supplies. Similarly, modular Computer Room Air Conditioning (CRAC) units will be run on multiple cooling circuits and the number of units will be calculated to provide sufficient standby capacity even at peak cooling loads.
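To put numbers on the chiller example in the central plant item above, the sketch below compares the installed capacity needed under n+1 and 2n strategies for different module sizes. It is an indicative comparison following the figures in the text, not a project design.

```python
import math

# Indicative comparison of n+1 and 2n redundancy overheads, following the
# chiller example in the text. Not project data.

def installed_capacity(load_kw: float, module_kw: float, strategy: str) -> float:
    """Installed plant capacity for a given duty load, module size and strategy."""
    n = math.ceil(load_kw / module_kw)   # modules needed to meet the duty load
    if strategy == "n+1":
        return (n + 1) * module_kw       # one spare module backs up the whole system
    if strategy == "2n":
        return 2 * n * module_kw         # every duty module is fully duplicated
    raise ValueError(f"unknown strategy: {strategy}")

duty_kw = 1_500  # chiller duty load from the example in the text
for module_kw in (1_500, 750, 500):
    n_plus_1 = installed_capacity(duty_kw, module_kw, "n+1")
    two_n = installed_capacity(duty_kw, module_kw, "2n")
    overhead = (n_plus_1 - duty_kw) / duty_kw
    print(f"{module_kw:>5} kW modules: n+1 = {n_plus_1:,.0f} kW (+{overhead:.0%}), "
          f"2n = {two_n:,.0f} kW (+100%)")

# With a single 1,500 kW module, n+1 costs as much as 2n; with 500 kW modules
# the n+1 overhead falls to +33%, in line with the 25-35% range quoted above.
```

The same logic applies to UPS modules, standby generators and CRAC units: the smaller the unit of duplication, the cheaper resilience becomes.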

In summary, the keys to the cost-effective provision of resilience are modularity, so that the unit of duplication is reduced, and consistent design, so that an equal level of tolerance is provided from main plant to termination, meaning there is no weak link in the chain.

07 / Sustainability and data centre design and operation

Data centres are the hidden gas guzzlers of the post-industrial age. Electricity consumption by data centres doubled in the US between 2000 and 2005, and is forecast to double again by 2011. This represents a huge carbon footprint and a major draw on scarce generation capacity. “Brown-outs”, such as those that have occurred in California, demonstrate the impact of data centre growth on creaking public infrastructure. Sustainability strategy is not only concerned with minimising the environmental impact of data centres, but also with avoiding the drag on continuing growth that limited spare capacity and rising energy costs will bring. Unfortunately, server performance is increasing faster than improvements in energy efficiency, so steps need to be taken in design to reduce energy consumption.

Data centres are not obvious candidates for sustainability strategies. However, owing to high loads and round-the-clock operation, the savings that can be made are huge, paybacks are fast, and money spent can yield more benefit than on virtually any other building type.

As the servicing requirements of data centres have increased, the problems associated with ensuring the availability of power and effectiveness of cooling have grown too. Although securing HV supply is primarily a development issue, ensuring the effectiveness of cooling is more concerned with operation. Heat rejection plant should be positioned to benefit from air at ambient temperatures, and air movements in technical spaces need to be designed to avoid both hot-spots and bypass flows that waste cool air. Despite the obvious benefits of a green strategy, IT clients have tended to ignore total ownership costs and carbon issues. As power alone now accounts for 30% of total costs, the agenda is slowly changing. There are a number of strands to data centre energy reduction and sustainability. These are:

• Optimisation of server performance. This is concerned with getting maximum processing power per unit of investment in servers and running costs. “Virtualisation” of server architecture, in which multiple workloads share physical servers, is now providing a one-off opportunity to maximise server utilisation. Turning off servers that aren’t being used and using server standby functionality also help

• Appropriate definitions of fault tolerance and availability. Users are being encouraged to locate operations which can tolerate lower levels of security in lower-tier centres which run more efficiently

• Increasing data centre efficiency. Primary power load is determined by the number and type of servers installed and cannot be affected by data centre designers and operators. Their contribution is to reduce the total data centre power requirement relative to server consumption.

This is measured by the power usage effectiveness (PUE) ratio.
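As a simple illustration of the metric, the sketch below computes PUE as total facility power divided by the power delivered to the IT equipment, using the 2.25 mid-range and 1.5 high-performance values quoted in the next paragraph, and shows what the gap means over a year of round-the-clock operation. The 1,000 kW IT load is purely illustrative.

```python
# Indicative PUE arithmetic only - the PUE values are the typical figures
# quoted in this article, and the 1,000 kW IT load is purely illustrative.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1_000  # power delivered to the servers (illustrative)

typical = pue(total_facility_kw=2_250, it_equipment_kw=it_load_kw)           # 2.25
high_performance = pue(total_facility_kw=1_500, it_equipment_kw=it_load_kw)  # 1.50

# Annual overhead energy (cooling, UPS losses, lighting and so on) saved by
# moving from a typical to a high-performance centre, for a 24/7 load.
hours_per_year = 8_760
saving_kwh = (typical - high_performance) * it_load_kw * hours_per_year
print(f"PUE {typical:.2f} vs {high_performance:.2f}: "
      f"{saving_kwh:,.0f} kWh a year saved per 1,000 kW of IT load")
```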

A typical PUE is 2 to 2.5, meaning the facility as a whole draws around 2.25 times the power delivered to the servers. A high-performance centre can have a PUE of no more than 1.5, but these centres have only emerged over the past two years. Approaches to achieving energy savings in data centres include:

• Management of airflow in the computer room. Cooling by air is the most common strategy because of concerns about the risk of water damage. Water or gas cooling is more efficient but is only used for very densely packed blade server farms, which generate enormous amounts of heat. With air cooling, the critical energy driver is fan power, so it is vital that cold air is passed directly over heat sources and that warm air is exhausted immediately so it does not mix with the cold supply. Hot and cold aisle layouts with highly directional supply and extract airflows through server racks are the most effective way of guaranteeing these controlled air movements

• Use of high efficiency equipment. Significant savings can also be made using high-efficiency plant such as water-cooled chillers

• Use of modular plant solutions, so that systems run at full loads and optimum efficiency rather than at inefficient part-loads

• Use of variable speed motors in chillers, CRACs and chilled water distribution. This technology is well established and its use is encouraged through enhanced capital allowances. However, it is not widely used in data centres, where designers prefer to use constant flow pumps and bypass circuits to manage temperature levels, wasting a great deal of energy (a simplified illustration follows this list). Use of variable speed motors means that air and coolant are only circulated in the quantities required to meet the current load

• Use of free cooling and challenging temperature standards. Computer room temperatures are required to be maintained at about 22°C, so there is plenty of opportunity in climates such as the UK’s to use free-air cooling. The room temperature setting also has an impact on energy use, and operators are required to run data centres at these temperatures to meet manufacturers’ warranty requirements. The international technical society ASHRAE has claimed that servers can be safely run at 27°C, so there is considerable scope for increased efficiency

• Keeping heat sources out of technical areas. Locating transformers and other heat sources related to the data centre infrastructure in general plant space reduces the aggregate heat load in the data hall

• Use of common low-energy strategies such as lighting control and low-energy lighting.
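The energy penalty of constant-flow operation noted in the variable speed motors item above follows from the fan and pump affinity laws, under which absorbed power varies roughly with the cube of speed. The sketch below is a simplified illustration of that relationship only; it ignores motor, drive and system-curve effects, and the part-load points are hypothetical operating conditions rather than figures from the cost model.

```python
# Simplified illustration of the fan/pump affinity laws (power ~ speed cubed).
# Ignores motor, drive and system-curve effects; the part-load points are
# hypothetical, not measured data.

def relative_power(relative_flow: float) -> float:
    """Approximate fraction of design power needed for a fraction of design flow."""
    return relative_flow ** 3

for flow in (1.0, 0.8, 0.7, 0.5):
    print(f"{flow:.0%} of design flow -> ~{relative_power(flow):.0%} of design power")

# At 70% of design flow a variable speed drive needs only about a third of
# design fan or pump power - the saving lost when constant-flow plant is
# throttled with bypass circuits instead.
```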

In summary, as data centres use so much energy, they present large opportunities to reduce carbon footprint, do not currently suffer from diminishing returns on investment and provide rapid payback. The main barrier to change is the conservatism of IT clients, who take a safety-first approach to system specification and are often not exposed to the energy cost of their security strategies. Closer working between facilities management and IT operations teams during design will help to support the development of an optimum low-energy strategy.

08 / Capital Allowances

Capital allowances represent huge opportunities for owners or investors to recover some of the costs of investment. Between 60% and 80% of the capital value of a data centre may qualify for some allowance, and under the Enhanced Capital Allowances (ECA) scheme there are opportunities to secure 100% first-year recovery on energy and water-saving plant.

With changes to the operation of the UK’s capital allowances system introduced in April 2008, overall allowances have increased, but a longer cash flow recovery means savings are generally not realised as quickly. A well-advised client can still secure an advantageous settlement. The key issue is determining what qualifies as “integral features”, on which allowances are calculated on a 10% reducing-balance basis, and what qualifies as “general plant and machinery”, on which allowances are calculated on a 20% reducing-balance basis. General water installations, heating, ventilation, air-conditioning, lighting, small power and lifts are classified as integral features, whereas other mechanical and electrical installations may qualify as general plant.
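To illustrate why the classification matters for cash flow, the sketch below compares the cumulative writing-down allowances claimed over five years at the 10% and 20% reducing-balance rates. It is a simplified sketch only: the £1m pool value is hypothetical, and the annual investment allowance, enhanced capital allowances and detailed pooling rules are ignored.

```python
# Simplified illustration of reducing-balance writing-down allowances.
# The 1,000,000 pool value is hypothetical; AIA, ECAs and detailed pooling
# rules are ignored.

def writing_down_allowances(pool_value: float, rate: float, years: int) -> list[float]:
    """Allowance claimed in each year on a reducing-balance basis."""
    claims = []
    for _ in range(years):
        claim = pool_value * rate
        claims.append(claim)
        pool_value -= claim
    return claims

for label, rate in (("integral features at 10%", 0.10), ("general plant at 20%", 0.20)):
    claims = writing_down_allowances(pool_value=1_000_000, rate=rate, years=5)
    print(f"{label}: {sum(claims) / 1_000_000:.0%} of the pool written down after 5 years")

# Roughly 41% of an integral features pool is written down after five years,
# against roughly 67% of a general plant pool - hence the value of a
# well-planned allowances claim.
```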

Recovery of enhanced capital allowances also benefits from a planned approach. The right equipment, which appears on the Energy Technology Product List or the Water Technology List on the ECA website, needs to be specified, and appropriate certification must be obtained through the supply chain. The process isn’t easy, but if set up correctly it can yield significant benefits.

09 / Procurement

The essential issues associated with the procurement of data centres involve the detailed design and co-ordination of the services installation, pre-ordering of major items of plant and thorough testing and commissioning. Investing time upfront really does reduce the overall duration of projects.

Data centre projects are engineering-led, and the lead contractor is often a services specialist. The buildings themselves are relatively simple, but the services installation has the potential to be very complex, involving significant buildability and maintainability issues. Projects are typically let as lump-sum contracts to specialist contractors on the basis of an engineer’s fully detailed and sized services design. Detailed design of the services is essential for co-ordination, which is particularly difficult because of the size of the pipework and cable feeds involved and the extent of back-up circuits. The detailed design process will also contribute to the identification and elimination of single points of failure at an early stage.

HV capacity is invariably on the critical path and if either network reinforcement or additional feeds are required, lead-in times in excess of 12 months are common. Early orders ahead of the appointment of the principal contractor are also necessary to accelerate the programme and to secure main plant items such as HV transformers, chillers, CRACs and standby generators. Full testing and commissioning is critical to the handover of the project, and clients will not permit slippage on testing periods set out in the programme.

10 / Cost model

The cost model is based on a 5,000m2, two-storey development designed to manage power loads of up to 1,500W/m2. The scheme comprises 2,000m2 of technical space, 2,500m2 of associated plant areas and 500m2 of support facilities. The technical space is fully fitted out, including on-floor power feeds, uninterruptible power supply, standby generation and extensive security installations, and is ready to receive racks. The support space is completed to a Cat A finish.

The m2 rate in the cost breakdown is based on the 2,000m2 technical area, not the gross floor area, so it cannot be compared directly with rates expressed per m2 of gross floor area, which would be 2.5 times lower.

The cost breakdown does not include the costs of site preparation and external works and services. The breakdown also excludes professional and statutory fees and VAT.

Costs are given at fourth quarter 2008 price levels, based on a location in south-east England. The pricing assumes competitive procurement based on a lump-sum tender. Adjustments should be made to account for differences in specification, programme and procurement route. Because of the high proportion of specialist services installations in data centres, regional adjustment factors should not be applied.

11 / Cost Breakdown

For detailed cost breakdown tables see the file attached below.

12 / Acknowledgements

Davis Langdon would like to thank Anthony Purcell of Red Engineering, Nick Bending and Trevor Wickins of Davis Langdon Mott Green Wall and Rachel Sanders and Andy White of Davis Langdon’s Banking Tax and Finance Team for their assistance in preparing this cost model.
