The massive heat loads created by the latest generation of computer servers are becoming a crucial factor in data centre design. We look at the cooling challenges facing building services engineers.

Between 2000 and 2003 the number of online banking customers in Europe rose from 23 million to 60 million, according to research by Datamonitor, which predicts the figure will reach 87 million by 2007, with the UK and Germany topping the list of the biggest markets. It’s a similar story in the retail sector, where the number of online shoppers has soared as more companies persuade customers to embrace internet services.

E-commerce and e-retail are big business, the upshot of which is that fast and reliable communications, together with the ability to retrieve, analyse and store information, are now the lifeblood of an ever-growing number of organisations. Reliability and availability are crucial, with ‘mission critical’ systems required to run 24 hours a day, seven days a week, 52 weeks of the year, whatever the cost. Consequently, back-up power supplies and duplicate, diverse cooling systems make data centres and comms rooms amongst the most intensively serviced spaces around.

However, while the relentless development in technology, particularly the performance of computer servers, has helped drive the e-revolution, it is also beginning to challenge the traditional servicing strategies for many data centres and comms rooms. The processing power of semiconductors has risen inexorably over the last few years, bringing with it higher power consumption and an inevitable increase in the heat they generate. High heat loads from chips result in high heat loads in servers, and the installation of large numbers of servers at very high densities in turn leads to significant cooling challenges.

Simon Law, principal engineer with Faber Maunsell, says the problem of rising heat loads in comms rooms is one they first started to experience around three years ago. At that time the typical computer load was around 1 kW/m² of floor area; now the minimum is around 2 kW/m². “We are getting a lot of enquiries from clients who are looking to upgrade their comms rooms and install more powerful servers,” says Law. “This has resulted in an increase in loads within the space that need to be dealt with. We’re now looking at minimum loads from around 6-8 kW up to 20 kW for a typical 0·8 m x 0·2 m x 2 m high cabinet.”

The traditional approach in data centres has been to arrange computer cabinets in parallel rows, with racks positioned alternately face-to-face and back-to-back to create a cold aisle/hot aisle set-up. Conditioned air from room air conditioning units is typically forced through a 600 mm deep floor void and up through grilles into the cold aisle; it is then drawn through the racks and exhausted into the hot aisle.

In the past this has worked well, and the aim has been to keep the cold aisle as narrow as possible to save on floor space. “Generally we aim to have a cool aisle two floor tiles wide (ie 1·2 m), which gives one tile per cabinet,” says Law. This gives a maximum airflow of around 350 litres/s through each tile which, working on a temperature difference of 20°C, is fine for loads up to about 6 kW per cabinet.
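As a rough check, the cooling available from one tile follows from the standard air-side heat balance, taking air density as about 1·2 kg/m³ and its specific heat capacity as about 1·0 kJ/kg·K (standard values, not figures quoted in the article):

\[
Q = \rho \, \dot{V} \, c_p \, \Delta T \approx 1.2 \times 0.35 \times 1.0 \times 20 \approx 8.4\ \text{kW}
\]

In practice not all of that air reaches the rack inlets, which helps explain why the working limit is nearer the 6 kW quoted.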

The new generation of servers from the likes of Hewlett Packard, Sun and IBM is more demanding. IBM’s Blade Centers are proving very popular, partly due to their ability to cram increased power into a smaller footprint. Just one Blade Center (a chassis that mounts in a standard rack) measures around 710 mm deep, 445 mm wide and 300 mm high and houses 14 blades – effectively self-contained servers, each with twin Intel Xeon processors. Increased power density means fewer racks, but importantly there is also less duplication of components, with each blade simply sliding into a bay in the chassis and sharing power, fans, floppy drives, switches and ports. One Blade Center typically produces around 3·6 kW of heat, and in a cabinet with four such Centers the heat output could climb to around 15 kW.

Paul Feeney of IBM says the trend of increasing heat loads is not about to go away, but that air-cooled solutions remain viable for meeting these challenges. One of the biggest issues is designing the data centre or comms room so that the basic cooling infrastructure is right. “A lot of data centres can lose as much as 40% of the airflow that is being channelled through the raised floor,” he says. This can happen for a host of reasons, such as gaps between floor tiles, badly placed diffusers or junctions where partition walls are placed. Feeney regularly uses cfd analysis to verify solutions and says it is often a case of getting a number of interrelated factors to work together. These can be quite subtle, such as obstructions or badly routed cabling in the floor void that create areas of low pressure or low air velocity. IBM has also introduced a specialist rack, capable of holding equipment from a variety of manufacturers, which integrates the comms and power cabling and is designed to keep airflow paths clear.
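To put that figure in context, if only 60% of the nominal 350 litres/s per tile actually arrives at the rack inlets, the same heat balance used earlier gives roughly

\[
Q \approx 1.2 \times (0.6 \times 0.35) \times 1.0 \times 20 \approx 5\ \text{kW}
\]

per cabinet, well below the loads now being specified.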

New approaches
Law, however, believes the conventional method of supplying cooling becomes impractical when the heat density exceeds 4 kW/m². “Getting the necessary cooling from a traditional hot/cold aisle layout means effectively doubling the width of the cool aisle in order to introduce sufficient floor grilles.” Not only does this reduce the number of cabinets that can be installed in a given area, but the additional room air conditioning units could typically end up accounting for 15%-20% of the usable floor area.
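That threshold can be rationalised with a rough, illustrative estimate: if each cabinet, together with its share of the cold and hot aisles, occupies something like 1·5 m² of floor (an assumed figure, not one quoted in the article), then the practical limit of about 6 kW from its single floor tile corresponds to

\[
\frac{6\ \text{kW}}{1.5\ \text{m}^2} \approx 4\ \text{kW/m}^2
\]

Beyond that, the only way to deliver more air is to add grilles, and hence widen the aisle.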

“Water cooled cabinets are potentially the way forward,” he says, “and are particularly suited to retrofit applications where space is at a premium.” However, he warns, they are not without their issues.

Water cooled cabinets take the form of a sealed unit with a cooling coil built into the base and circulating fans in the rear door to draw air through the racks. Typically they are designed for heat loads of 15-20 kW and, compared with the chilled water temperatures typically used for room air conditioning units, circulate water at a higher 14°C flow and 20°C return to prevent condensation, which also allows higher coefficients of performance from the chillers.
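The load and the 6 K temperature rise fix the water flow rate through each cabinet via the usual water-side heat balance, taking the specific heat capacity of water as roughly 4·2 kJ/kg·K:

\[
\dot{m} = \frac{Q}{c_p \, \Delta T} \approx \frac{20}{4.2 \times 6} \approx 0.8\ \text{kg/s}
\]

or a little under a litre per second per cabinet at the top of the 15-20 kW range.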

Housing racks, fans and power/data cabling means that space inside the cabinets is at a premium. While there is room for redundant fans, there is only space for a single cooling coil in the base, so any failure of the chilled water supply would mean a rapid rise in temperature inside the cabinet. Law says that in tests they have also found problems with air short-circuiting within the cabinets.

Weight is another consideration, particularly given that a single Blade Center can weigh 120 kg: five Centers alone account for 600 kg, taking the total weight of a fully loaded water cooled cabinet towards 800 kg.

Faber Maunsell has been testing another alternative to water cooled cabinets. This follows the hot aisle/cold aisle set-up but, rather than delivering air through the floor, locates unit coolers above the aisles – those over the hot aisle draw air upwards, while those over the cold aisle push air downwards. “This is less suitable for retrofit, mainly because of the high floor-to-ceiling heights that are required,” says Law. “In total you are looking at a height of around 5-6 m.” The issue of redundancy is overcome by having two cooling coils and two fans serving each cabinet, which can be fed from separate circuits and power supplies.

Introducing water into such a space is not seen as a barrier. “Initially the IT people were concerned, but with the quality of today’s pipe connections and fittings, water leaks are not a huge issue,” Law adds.

David Butler, principal consultant at BRE, has been involved in testing and verifying the performance of a number of new cooling approaches. He emphasises the need to consider the risks involved with new designs, given the high stakes. “Reliability and availability are crucial and the implications of failure could be immense,” he says. The justification for carrying out physical mock-ups and cfd modelling is therefore strong. “The outlay on a mock-up is a small element in the overall cost and getting it right is crucial. Getting it wrong could mean equipment failure, or, if equipment runs at elevated temperatures, a reduced lifetime and unreliability.”

Butler also raises the issue of energy consumption: “Because of the rising power levels and the rising number of these installations, it is becoming a major consumer of energy and it is certainly a sector where power consumption is increasing.”

There are a number of issues building services engineers can address. “If you can arrange it so that you only need chilled water at, say, 15°C rather than 6°C, then you are going to see an improvement in energy efficiency,” says Butler. “And if you only need water at 15°C, then in winter you can get it without running refrigeration plant, just by using dry air coolers.”
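The scale of the potential gain can be sketched with an idealised Carnot comparison, taking the chilled water temperatures as a proxy for the evaporating temperature and assuming, purely for illustration, heat rejection at around 40°C:

\[
\mathrm{COP}_{ideal} = \frac{T_{evap}}{T_{cond} - T_{evap}}: \qquad \frac{279}{313 - 279} \approx 8.2\ \text{at 6°C}, \qquad \frac{288}{313 - 288} \approx 11.5\ \text{at 15°C}
\]

Real chillers fall well short of these values, but the relative gain of roughly 40% shows why raising the chilled water temperature pays off, quite apart from the extra hours when dry air coolers can do the job on their own.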

When the chips are down
Hewlett Packard has been working on a number of new technologies to address the growing problem of heat generation and energy use in increasingly powerful microprocessors. “HP takes a holistic view when developing cooling solutions,” says Chandrakant Patel, principal scientist with HP Labs. “We are working on a variety of approaches.” One of these is a robot that moves through the data centre looking for hot spots, signalling the building management system to adjust cooling or the computer network to move workloads from one system to another. “When we first conceived the robot, the motivation was to use it as a simpler, lower cost means to chart the temperature of the data centre and capture inlet and outlet temperatures at rack level,” says Patel. “However, now that we have developed metrology at rack level (to adjust airflow and temperature) that is simpler and lower cost, the robot plays the secondary role of mapping aisle temperatures across the data centre.”
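Purely as an illustrative sketch of the kind of loop being described (every name below is invented for illustration and does not represent HP’s software), the control idea reduces to: read rack inlet temperatures, and where a hot spot appears, ask the building management system for more cooling or ask the IT layer to shift work elsewhere.

    # Hypothetical hot-spot response loop; all interfaces are assumed, not HP's.
    INLET_LIMIT_C = 32.0  # assumed rack inlet temperature limit

    def monitor_step(read_rack_temperatures, bms, scheduler):
        """One sweep of the racks: log temperatures, then react to hot spots."""
        aisle_map = {}
        for rack_id, inlet_c, outlet_c in read_rack_temperatures():
            aisle_map[rack_id] = (inlet_c, outlet_c)   # temperature mapping role
            if inlet_c > INLET_LIMIT_C:
                bms.increase_cooling(rack_id)          # e.g. more local airflow
                scheduler.migrate_workload(rack_id)    # move jobs to cooler racks
        return aisle_map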

At the moment the robot is close to being a standalone product for sensing and mapping without a control system. “We are currently gauging interest in a commercial offering for standard data centres,” adds Patel. “We are also testing our preliminary control system. The robot and the static sensors, coupled to our control system, will likely take another year. However, this will enable a truly ‘smart’ data centre.”

Work is also being carried out at chip level. HP has taken existing inkjet printer cartridge technology and re-engineered it into a cooling device for semiconductors. The spray cooling mechanism shoots a measured amount of dielectric liquid coolant onto specific areas of a chip, according to their heat levels. The liquid vaporises on impact, cooling the chip, before being passed through a heat exchanger and pumped back into a reservoir.

HP’s spray cooling technology avoids the pooling that can occur with other phase-change liquid cooling techniques, where residual liquid left on the chip can form an insulating vapour bubble, causing the chip to overheat and possibly malfunction.

This work is still at the development stage, but as high performance chips push heat densities ever higher, a breakthrough in cooling technology will be needed. Patel explains: “This is an inevitability brought about by the constant scaling down of semiconductor technology and the ability to combine on one chip functions that were once on segregated chips. So ‘normal’ servers in three to five years will contain very high power density chips that will need precision heat removal to enable high power density cores. It can be made quite robust using several ideas we are pursuing.”

This technology might also extend to entire circuit boards, enabling smaller, more powerful systems than are possible with many of today’s alternative solutions.