The Data Center of the Future

By Melanie Rodier, Wall Street & Technology.
02 August 2010
URL: http://www.wallstreetandtech.com/showArticle.jhtml?articleID=226300061

Lately, NYSE Euronext has been thinking big. “Big,” as in the length of three football fields — the size of its new data center, located in Mahwah, N.J. The 400,000-square-foot facility is larger than a World War II aircraft carrier and includes room to expand for years after it becomes fully operational in the fourth quarter of 2010. As NYSE puts the finishing touches on the Mahwah facility, the exchange is simultaneously preparing to unveil another brand-new data center in Basildon, U.K., on the outskirts of London.

“It’s a huge place,” NYSE Euronext CIO Steve Rubinow says of the Mahwah facility. “We joke about using Segways to get from one end to another.”

In a departure from the past, the Mahwah facility was purpose-built from the ground up for the industry’s needs, with the aim of ensuring that it would last for two or three decades, says Andy Bach, SVP, network services, NYSE. “When selecting locations for the data centers, we did look at existing space with buildings you can retrofit as well as new sites,” he relates. “The important thing is that it has a long lifeline. I will probably be retired before the useful life of the Mahwah data center is expired.”

The fact that NYSE’s new data center is not a shared facility enabled the exchange to bring in its own fiber optic network, Bach explains. “We don’t have other carriers coming into the building — it allows us to control latency, to provide bandwidth and to scale it on our schedule as we want,” he says. “We weren’t constrained by the cabling left behind by the previous tenant,” Bach adds, pointing out that cables in the data center are run overhead, freeing the under-floor space where cables traditionally reside for cooling.

“Once you make the decision you want to control your own destiny by building from the ground up, you sketch your canvas,” Bach says, adding that NYSE sought out leading-edge technology in areas that can provide a competitive advantage. For example, NYSE heavily leveraged low-latency technology, including the high-performance switching and routing technologies of Juniper Networks’ 10-Gigabit Ethernet network as well as high-speed silicon ASIC chips to drive latency down further.

“Any time you build a new data center, you have a great excuse to do a technology refresh,” notes Rubinow. “The mindless thing to do would be to move equipment from an old data center and rearrange it. We’ve been going to all the vendors of hardware, software, networking — we said, ‘We’re building a new data center and have the opportunity to change something.’ We took the best of what they have. We’ll have faster, smaller, more efficient equipment.”

Power Struggle

The decision to build from the ground up was driven largely by NYSE’s power needs, which are difficult to meet in the Northeast, where a growing number of companies are competing for data center space. “We didn’t just want to rely on one source of power,” comments Rubinow, who notes that the Mahwah facility has 28 megawatts of power, enough to supply 4,500 homes, as well as numerous diesel-powered backup generators. “You can’t go to any area of New Jersey or Pennsylvania and find an untapped source of power.”

In fact, electrical capacity and cooling are the two biggest issues for clients, and they are the key areas data center owners are focusing on to increase efficiency and reduce costs, relates Tony Hernandez, managing director at LECG, an Emeryville, Calif.-based consultancy. Generally, “Clients either run out of floor space or cooling capacity,” according to Hernandez.

The amount of energy consumed by data centers has dramatically increased in recent years, and with it, the cost of running them. Citigroup’s data centers account for just 1 percent of the company’s physical infrastructure space but consume about 25 percent of the power, reports Jack Glass, director of data center planning at Citi. “And the number has been increasing,” he says. “[But] people now realize a lot can be done to reduce energy being used by a data center.”

According to Glass, Citi is focusing on increasing efficiency in two areas — the technology and the data center facility itself. Like many firms, Citi is increasingly exploring solutions such as energy-efficient servers and virtualization to minimize server power consumption. In 2007, 10 percent of Citi’s U.S. IT environment was virtualized, Glass reports. By the end of 2009, that figure shot up to 30 percent.

“We look for physical servers with low utilization and collapse them onto one physical box,” explains Jason Kutticherry, VP, critical systems, Citi. “Over time, as applications are redeveloped at a company like Citi, which has six strategic data centers in North America and 20,000 servers, older servers may just sit there. Over the last couple of years, we’ve been decommissioning legacy servers.”
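The consolidation exercise Kutticherry describes comes down to simple accounting: find physical boxes that are barely working, estimate how many virtualization hosts could absorb them, and tally the power that frees up. The following is a minimal sketch of that arithmetic in Python, using hypothetical utilization thresholds, host capacities and wattages rather than Citi’s actual figures.

    from dataclasses import dataclass
    from math import ceil

    @dataclass
    class Server:
        name: str
        avg_cpu_util: float   # fraction of capacity actually used, 0.0-1.0
        power_watts: float    # measured draw of the physical box

    def consolidation_estimate(servers, util_threshold=0.15,
                               vms_per_host=10, host_power_watts=750.0):
        """Flag low-utilization servers and roughly size the virtual estate.

        All thresholds and wattages here are illustrative assumptions.
        """
        candidates = [s for s in servers if s.avg_cpu_util < util_threshold]
        hosts_needed = ceil(len(candidates) / vms_per_host) if candidates else 0
        power_before = sum(s.power_watts for s in candidates)
        power_after = hosts_needed * host_power_watts
        return candidates, hosts_needed, power_before - power_after

    if __name__ == "__main__":
        # Hypothetical fleet: 40 legacy servers idling at 5 percent utilization.
        fleet = [Server(f"legacy-{i:03d}", 0.05, 400.0) for i in range(40)]
        candidates, hosts, saved_watts = consolidation_estimate(fleet)
        print(f"{len(candidates)} candidates -> {hosts} hosts, "
              f"~{saved_watts / 1000:.1f} kW freed")

In this made-up example, 40 idle boxes collapse onto four hosts and free roughly 13 kilowatts — small per server, but the kind of savings that adds up across a fleet of 20,000.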

Decommissioning servers and deploying them in a virtual environment may sound easy — “Like low-hanging fruit,” says Glass. “But having lived through it, it’s not easy; it’s not obvious. It takes a lot of hard work to identify, trace down and find the right documents to show that a server can be powered down. You can’t be cavalier about it, but the benefits can be tremendous.” Glass notes that decommissioning servers from one site alone last year yielded Citi a 10 percent savings in energy consumption.

Still, the computers themselves account for only part of that energy consumption. Much of the energy consumed in a data center goes to the lights, air conditioning, transformers and other equipment that supports the computers, Glass says. “Only half the energy used in a data center is used by computers,” he contends. “The electrical and mechanical load equals that of the computers themselves. And more than half the power is being used to cool computers.”
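Glass’ rule of thumb maps onto the industry’s standard Power Usage Effectiveness (PUE) metric, defined as total facility power divided by IT power. If the electrical and mechanical overhead roughly equals the IT load, PUE is about 2.0 and the computers receive only half the incoming energy. A short worked example, using assumed round numbers rather than Citi’s metered values:

    # Assumed round numbers for illustration; not Citi's metered values.
    it_load_kw = 1000.0      # servers, storage and network gear
    overhead_kw = 1000.0     # cooling, lighting, transformers, UPS losses

    total_kw = it_load_kw + overhead_kw
    pue = total_kw / it_load_kw          # 2.0 when overhead equals IT load
    it_share = it_load_kw / total_kw     # 0.5 -> "only half the energy"

    print(f"PUE = {pue:.2f}, IT share of total power = {it_share:.0%}")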

Chill Out

Don Atwood, senior data center engineer and strategist with Intel, says cooling alone can account for as much as 70 percent of the total energy consumption of a data center. As such, Citi’s push to reduce power consumption in its data centers encompasses cooling efforts.

“We closely manage air flow for cooling the hardware, we use higher-efficiency equipment and we use equipment that can vary capacity as needed, such as variable-speed fans and pumps,” Citi’s Glass relates. He adds that the financial institution is looking to improve efficiency further with natural (read: free) cooling through the use of outdoor air. Winters are cold enough in certain North American regions that some of the newest data centers, including a Citi data center in the Midwest, now draw on natural cooling for at least part of the year, according to Glass.

“Free cooling during cold weather can be done two ways,” he says. “If it’s cold out, you bring cold air in to cool the data center, which people call ‘air side.’ Plus, there’s ‘water side,’ where we have chillers that cool water and then circulate it out to air conditioners.”
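A control system has to decide, interval by interval, which of those modes the weather allows. The sketch below illustrates that decision in Python; the temperature thresholds are illustrative placeholders rather than Citi’s actual setpoints, and a real economizer controller would also weigh humidity and wet-bulb conditions.

    def select_cooling_mode(outdoor_temp_f,
                            airside_limit_f=65.0,
                            waterside_limit_f=45.0):
        """Pick a cooling mode for one control interval.

        Thresholds are illustrative placeholders, not real setpoints.
        """
        if outdoor_temp_f <= waterside_limit_f:
            # Cold enough to chill the water loop without running compressors.
            return "water-side economizer"
        if outdoor_temp_f <= airside_limit_f:
            # Outdoor air is below the supply-air target: bring it straight in.
            return "air-side economizer"
        return "mechanical chillers"

    for temp_f in (30, 55, 85):
        print(f"{temp_f} F outside -> {select_cooling_mode(temp_f)}")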

Meanwhile, vendors also have been exploring ways to reduce the need for cooling in the first place. According to Intel’s Atwood, most data center operators actually are overzealous in cooling their facilities. Until recently, he relates, there were no standards for the ‘right’ temperature inside a data center, and engineers, fearful that their servers would overheat, erred on the side of overcooling. But two years ago, Atwood asserts, Intel began exploring the use of ambient air — even when outdoor temperatures rise above 80 degrees Fahrenheit — to cool data centers.

“We said, ‘Do we need … the ultra-cold data centers that we typically run? Or can we challenge the status quo?’ ” Atwood recalls. “When it’s 60, 70 or 80 degrees outside, can we just use the air as is?”

According to Atwood, Intel carried out extensive tests, pushing the hot air at the back of servers outside and drawing in ambient air. “We did it with 700 production servers for 10 months,” he relates. “That was two years ago.” As a result of Intel’s tests, Atwood claims, ASHRAE, the international technical society organized to advance the arts and sciences of heating, ventilation, air-conditioning and refrigeration, changed its standards for data centers.

“We were overcooling,” concedes Citi’s Glass. When ASHRAE released its standards, he adds, “We were able to raise temperatures in our data centers without worrying that it would affect the systems.”
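For reference, the 2008 ASHRAE guidance widened the recommended server inlet-air range to roughly 64 to 81 degrees Fahrenheit (18 to 27 degrees Celsius). A simple check against that band — treated here as an approximation — shows why a warmer cold-aisle setpoint can still sit comfortably within spec:

    # Approximate 2008 ASHRAE recommended inlet-air band, in Fahrenheit.
    RECOMMENDED_INLET_F = (64.4, 80.6)

    def within_recommended(inlet_temp_f, band=RECOMMENDED_INLET_F):
        low, high = band
        return low <= inlet_temp_f <= high

    # Raising a cold-aisle setpoint from 68 F to 78 F stays inside the band;
    # 85 F would exceed the recommended (though not necessarily allowable) range.
    for setpoint_f in (68.0, 78.0, 85.0):
        print(f"{setpoint_f} F inlet: recommended = {within_recommended(setpoint_f)}")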

Journey to the Center of the Earth

Beyond ambient air, Intel also is exploring geothermal cooling options, testing to see if the earth’s cooling capabilities can be used to cool data centers. “If you dig a big hole, that’s a consistent cold air temperature. … We’re looking at the coldness of the earth as opposed to cooling the hot air we have,” Atwood explains.

“If we allow ourselves to run data centers at a higher temperature than we currently do — say we have a tolerance to 80 degrees Fahrenheit — it allows us to consider using aquifers without doing anything to the water — just pumping it out, putting it through coils,” Atwood adds. “This is water we would use anyway for toilets, sinks, factories, etc.

“Right now, we just pump that water into a big holding tank. But we said, ‘What if we create a loop before it gets to the tank, and run it through our data centers?’ We would avoid all components of cooling. It’s both cost avoidance and energy saving.”
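The heat such a loop could carry off is governed by a basic relationship: heat removed equals the water’s mass flow times its specific heat times the temperature rise across the coils. A back-of-the-envelope sketch in Python, with the flow rate and temperature rise assumed purely for illustration:

    WATER_DENSITY_KG_PER_L = 1.0
    WATER_SPECIFIC_HEAT_J_PER_KG_C = 4186.0

    def heat_removed_kw(flow_liters_per_s, temp_rise_c):
        """Q = m_dot * c_p * delta_T, converted to kilowatts."""
        mass_flow_kg_per_s = flow_liters_per_s * WATER_DENSITY_KG_PER_L
        watts = mass_flow_kg_per_s * WATER_SPECIFIC_HEAT_J_PER_KG_C * temp_rise_c
        return watts / 1000.0

    # Assumed example: 20 L/s of groundwater warmed by 8 C across the coils
    # carries away roughly 670 kW of server heat with no compressor energy.
    print(f"{heat_removed_kw(20.0, 8.0):.0f} kW removed")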

According to Atwood, Intel currently is testing whether a data center can run at 104 degrees Fahrenheit. “With newer technology, we anticipate we can run at 90 degrees Fahrenheit without a problem,” he says, pointing out that the issue of getting too hot is only a human factor — as long as the technology can withstand high temperatures, the computers won’t complain about a hot room.

Intel also is conducting tests on renewable energy, including solar and wind technology, Atwood notes. “But the problem with running a large data center is that we can’t have it go down. With a solar panel, what happens at nighttime?” he poses. “You would have to have complex switching technologies. We’re trying to find effective ways to use renewable energy solutions but avoid risk.”