By Penny Crosman, Wall Street & Technology
26 April 2007
When we heard that Bank of America is installing — brace yourselves, gentlemen — waterless urinals in the “green” skyscraper it’s building near Bryant Park for its investment banking operation, we realized that Wall Street is starting to take energy and water conservation seriously. Not that we’ve come across many tree-huggers on the Street, but for very practical business reasons, technology executives are looking to reduce the power consumed by their data centers — the primary electricity guzzlers in any company — even as their computing needs grow.
“So watt,” you say? Following are five reasons why you should care about the megawatts of energy flowing through your data centers:
1. The amount of juice soaked up by giant server farms is growing fast — it doubled between 2000 and 2005, according to a report out of Lawrence Berkeley National Laboratory. And power consumption by the world’s data centers is predicted to increase by an additional 40 percent by 2010, the research found. The growth is driven more by the rising number of installed low-end volume servers — typically systems costing less than $25,000, including racks of blades — than by the energy usage of any individual server.
“Data centers have voracious appetites for the consumption of electricity,” confirms Mark S. Nicholls, corporate workplace executive at Bank of America. One culprit is overcooling: In a 2006 study of 19 U.S. data centers, the Uptime Institute found that, on average, they were cooled nearly twice as much as necessary.
2. New York City’s power grid is struggling to keep up with such increased power consumption by data centers alongside the city’s housing boom and the proliferation of new electronic devices in offices and homes. (See related sidebar, “Don’t Panic, but the Grid’s Going Down,” opposite page.) “In places like Manhattan, there are times when you can’t get additional power for a data center at any price,” relates Forrester analyst Chris Mines. “The Wall Street firm says we want to expand our data center; the utility says we can’t bring you any more juice.”
3. Wall Street firms are under pressure from shareholders, environmental groups and local municipalities to reduce carbon emissions for their own properties and for those in which they invest. For example, the Wall Street Journal reported that in February members of the Rainforest Action Network demonstrated outside several Merrill Lynch offices to protest the firm’s raising money for some planned coal-fired power plants in Texas. However, the same article reported the opposite problem for Goldman Sachs — shareholders are complaining that its environmental good deeds, such as donating land for a nature preserve in Chile, are a waste of money. In some cases, there’s pressure from within to become more green — for instance, Credit Suisse has begun evaluating IT managers according to how well they’ve helped reduce energy use.
4. As firms suffering the effects of the falling housing market and subprime loans seek to cut overall costs, one place to look is the escalating energy bill.
5. The Environmental Protection Agency will soon begin a six-month study of power consumption in data centers and plans to issue guidelines for both the public and private sectors.
For all of these and perhaps other reasons, Wall Street firms are trying to conserve energy in their data centers. Some, such as the New York Stock Exchange, are working to consolidate systems and achieve efficiencies in software as well as hardware, resulting in fewer CPUs overall and lower energy use. Many, including Credit Suisse, are turning to virtualization to make better use of existing servers and storage, and eliminate excess. A few — such as Bank of America, Goldman Sachs and Merrill Lynch — are constructing green buildings.
At least one — Morgan Stanley — is volunteering for energy curtailment programs. HSBC is building a data center near a hydropower plant. Dozens of firms are collocating parts of their data centers in exchanges, collocation facilities or with on-demand computing providers such as IBM, HP and Sun. And many firms are investing in energy-efficient lighting, air-conditioning and heating to decrease energy consumption and become more environmentally friendly.
A Greener Big Board
Between its mergers with Euronext and Archipelago and the advent of electronic trading, power demands at the New York Stock Exchange, the world’s largest stock exchange, have never been greater. On any given day, its exchanges process hundreds of millions of orders, each with three to five associated messages. In its options business, the NYSE is fed, at peak times, more than 1 million quotes per second, all of which have to be processed and then saved. And customers, wanting to get closer to the exchanges and approach zero latency, are collocating their servers on the NYSE’s floors.
“The growth in our industry has been dramatic. Even over the last few weeks we’ve gotten spikes that make huge processing demands and we’ve got to be able to handle them,” says CTO Steve Rubinow, noting that the NYSE tries to maintain a minimum compute capacity of twice the largest peak it is likely to see. “Those peaks just grow and grow, so we’re constantly adding capabilities. We could keep throwing servers at the problem and keep on expanding until we’re out of space or money. A couple of things are working in our favor.”
One, according to Rubinow, is that the hardware manufacturers are coming out with better equipment with faster processors and more and faster memory. “So we’re able to grow to some extent within the current shell because better equipment comes down the road,” he says. “The second thing is that all these hardware manufacturers are doing a good job of making their equipment more energy efficient, making it easier to cool and reducing their power requirements.” (See related sidebar, “These Servers Are Really Cool,” opposite page.) The NYSE relies heavily on HP and Sun Microsystems servers, Rubinow notes.
However, the NYSE doesn’t care to be at the mercy of hardware manufacturers’ schedules of new releases. “We like to be in control of our own future,” Rubinow adds. “We constantly impress on our internal technology staff that we’ve got to get as [energy] efficient as we possibly can. We’re constantly forcing ourselves to get better at software efficiency so that we don’t need as much hardware today, proportionally speaking, as we would have a year or two years ago.”
Software efficiency means more parallel processing, finding better ways to run algorithms, “and some more clever things that I can’t talk about yet because they’re too new,” Rubinow says.
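The “twice the largest peak” sizing rule Rubinow describes translates into simple capacity arithmetic. A minimal sketch, assuming hypothetical message rates and per-server throughput (these numbers are illustrations, not NYSE figures):

```python
# Sketch of a "2x the largest observed peak" capacity rule.
# The message rates and per-server throughput are assumed values.
import math

def servers_needed(peak_msgs_per_sec: float,
                   per_server_msgs_per_sec: float,
                   headroom: float = 2.0) -> int:
    """Smallest server count whose aggregate throughput is at least
    `headroom` times the largest observed peak."""
    required = peak_msgs_per_sec * headroom
    return math.ceil(required / per_server_msgs_per_sec)

# Example: a 1,000,000 msg/sec peak, servers that each sustain
# 50,000 msg/sec, and 2x headroom call for 40 machines.
print(servers_needed(1_000_000, 50_000))  # 40
```

As peaks grow, the required server count grows in lockstep — which is why the NYSE looks to software efficiency to slow that curve rather than simply adding hardware.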
Rubinow relates, however, that he is devising an energy-conscious strategy for consolidating and relocating the 10 data centers that up till now have been operated independently by the NYSE, Archipelago and Euronext. “We’re going to squeeze that down to two data centers in the United States and two in Europe that will satisfy local needs and failover needs,” he says. “In the course of doing that, we’re figuring out how much compute capacity we need, where are the best places to put these centers, do we use existing facilities, would it make sense to build anything new? Especially where we’re thinking about building a new data center, energy considerations are important because we have to decide how much we need — we don’t want to overbuild but certainly we don’t want to underbuild.”
The available energy supply is a key issue for data center location. Rubinow says he is weighing the pros and cons of going to distant locations where energy is cheaper, versus the lower data latency afforded by staying close to customers. “We’d like to have our major data center as close to the largest concentration of customers possible and, of course, in our business that’s the New York metro area,” Rubinow says, noting that many of the NYSE’s customers used to have data centers in Manhattan but have moved them to New Jersey because, “9/11 notwithstanding, an island is not a great place to have a data center for logistics reasons.”
But assuming one major data center belongs in the metropolitan area, where do you put the backup/disaster recovery site? “From a geographical diversity standpoint, you don’t want to be too close to your primary — you like to have it somewhere else, ideally a place that isn’t subject to hurricanes, tornadoes, earthquake and flood,” Rubinow offers. “On the other hand, if you needed to failover to your second data center, you would still like that second data center to be close to your customers because everyone is latency-sensitive in this business.”
Archipelago currently has a data center in Chicago and another in New Jersey — a distance that translates to 12 milliseconds of latency, according to Rubinow. “Most businesses wouldn’t care about that, but in the trading business, 12 milliseconds is huge,” he says. The NYSE’s most latency-sensitive customers collocate in its data centers to be as physically close to the exchange as possible; Rubinow says dozens of customers do this.
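That 12-millisecond figure is roughly what the physics of fiber predicts. A back-of-the-envelope check, assuming a ~1,200 km Chicago-to-New Jersey fiber route (the route length and refractive index are assumptions, not figures from the article):

```python
# Rough propagation-delay check on the quoted Chicago-NJ latency.
# Route distance and fiber refractive index are assumed values.
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in vacuum
FIBER_INDEX = 1.47              # typical refractive index of optical fiber

def round_trip_ms(route_km: float) -> float:
    """Propagation-only round-trip time over a fiber route, in ms."""
    speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_INDEX  # ~204,000 km/s
    one_way_s = route_km / speed_in_fiber
    return 2 * one_way_s * 1000

rtt = round_trip_ms(1_200)   # assumed fiber path length
print(f"{rtt:.1f} ms")       # ~11.8 ms, in line with the quoted 12 ms
```

Because this delay is set by the speed of light in glass, no amount of faster hardware removes it — hence the appeal of collocating servers next to the exchange.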
Going forward, “We’re going to be very thoughtful about the energy-efficient products the industry can afford us and the things we can do ourselves in order to make our compute footprint grow more slowly than the huge volumes in the market, so we don’t have a big problem five to 10 years from now,” Rubinow says.
Virtualization and Green Scorecards
Virtualization is probably the most common approach to achieving energy efficiency in data centers. Through virtualization software and techniques, each physical server can host multiple operating systems and applications so that, for example, servers that have been running at 10 percent of their total capacity can take on more load and run at 60 percent or more of capacity. VMware, Xen and Virtual Iron are the primary providers in the space, and newer open-source alternatives are rapidly gaining traction.
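The utilization arithmetic above is straightforward to sketch. The host count and per-server wattage below are hypothetical illustrations; only the 10 percent and 60 percent utilization figures come from the text:

```python
# Consolidation arithmetic: hosts at low utilization are packed
# until each remaining host runs at a higher target utilization.
# Host count and wattage are assumed values.
import math

def consolidated_hosts(n_hosts: int,
                       current_util: float,
                       target_util: float) -> int:
    """Hosts needed to carry the same aggregate load at a higher
    per-host utilization."""
    total_load = n_hosts * current_util
    return math.ceil(total_load / target_util)

before = 300                                       # servers at 10% utilization
after = consolidated_hosts(before, 0.10, 0.60)     # 50 servers at 60%
watts_per_server = 400                             # assumed average draw
saved_kw = (before - after) * watts_per_server / 1000
print(after, f"{saved_kw:.0f} kW saved")           # 50 servers, 100 kW saved
```

Going from 10 percent to 60 percent utilization alone yields a 6-to-1 reduction; higher ratios, like the 15-to-1 Credit Suisse reports below, come from stacking further techniques on top.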
Credit Suisse has made energy conservation in its data centers a priority in the past six to nine months, aggressively moving to server virtualization and green scorecards for the IT department. “We view electricity as a resource that needs to be managed, like money,” says Steve Hilton, managing director and head of enterprise servers and storage. “There’s a limited amount of money; there’s a limited amount of compute power; there’s also a limited amount of kilowatts, and we need to be extremely conscious of managing that to meet service goals and environmental goals.”
Credit Suisse uses VMware virtualization software and the zones built into Sun’s Solaris operating system, a form of OS-level virtualization. The firm also is testing Sun’s newer, more-flexible hypervisor containers, and its labs are evaluating Xen and other open-source virtualization software. Hilton says the firm is seeing an average 15-to-1 compression ratio from virtualization — in other words, one server can do the work that used to be done by 15. As a result, Credit Suisse has retired hundreds of servers. Of course, it can’t just throw servers in the trash — the firm returns the old servers to the manufacturers, which recycle or resell them. “Virtualization is a big enabler of power-efficient data centers,” Hilton says.
Like Rubinow at the NYSE, Hilton appreciates that the server manufacturers have gotten better at energy efficiency. “A lot of it’s driven by the chip manufacturers — AMD, Intel and Sun — which all have in the past six to 12 months released architectures that are a lot cooler,” Hilton says. “Instead of the chip manufacturers being obsessed with clock speed and getting transactions from A to B quicker and quicker, which absorbs a lot of power, they took a step back a few years ago and redesigned many of their chip architectures to run slower and cooler, but with higher throughput.”
Hilton also credits server vendors such as IBM, HP and Sun with developing better cooling mechanisms within their server chassis. “We throw virtualization on top of those servers and we’re creating very green, economic and energy-efficient compute farms that we’re then internally selling to our internal application development clients to buy capacity, not boxes, from us,” he explains. Credit Suisse also is talking to Sun about its Blackbox, a shipping container packed with water-cooled servers — the firm may use the box for disaster recovery or for extremely high computing needs, such as Monte Carlo calculations, according to Hilton.
The green scorecards Credit Suisse recently introduced provide data center and business metrics on green computing. IT managers are evaluated on their ability to reduce power consumption, and each data center has a specific energy reduction target. “It’s proven to be a powerful tool to help people understand the value they add by changing their compute requirements,” Hilton says. “People in charge of data centers can understand the positive impact they can have on the environment.”
Like any scorecard, the green scorecard creates peer pressure, Hilton notes. “The other way to look at it is data centers are limited resources,” he says. “We want people to understand that energy is a constrained resource and needs to be managed accordingly. Where 10 to 15 years ago you wouldn’t think or worry about the power draw of the server you’re plugging in, now it’s really important.”
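The core of such a scorecard is a simple metric: each data center’s actual consumption measured against its reduction target. A minimal sketch — all figures here are invented for illustration, not Credit Suisse data:

```python
# A minimal "green scorecard" metric: achieved energy reduction
# versus a per-data-center target. All figures are invented.
def scorecard(baseline_kwh: float, actual_kwh: float, target_pct: float):
    """Return (achieved reduction %, whether the target was met)."""
    achieved = (baseline_kwh - actual_kwh) / baseline_kwh * 100
    return round(achieved, 1), achieved >= target_pct

# A center that cut 10,000,000 kWh to 8,800,000 kWh against a
# 10% target achieved 12% and passes.
print(scorecard(10_000_000, 8_800_000, 10.0))  # (12.0, True)
```

Publishing a number like this per data center is what creates the peer pressure Hilton describes.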
Moving to Greener Pastures
One solution to metropolitan energy issues is to move data centers to areas of the country where energy is cheaper and more plentiful — namely the Midwest, the West and even the Carolinas. While the cost of power hovers around 12 cents per kilowatt-hour in New York City, it’s as low as 2 cents per kilowatt-hour in Washington State and other less-populated parts of the country. Even in New Jersey, energy is about 25 percent less expensive than in New York City.
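Over a year of round-the-clock operation, that price spread is dramatic. Taking the quoted prices as cents per kilowatt-hour and assuming a hypothetical 1-megawatt facility load (the facility size is an assumption, not from the article):

```python
# Annual electricity cost of the same load at the two quoted prices.
# The 1 MW facility load is an assumed size for illustration.
HOURS_PER_YEAR = 8_760

def annual_cost(load_kw: float, price_per_kwh: float) -> float:
    """Yearly electricity cost for a constant load, in dollars."""
    return load_kw * HOURS_PER_YEAR * price_per_kwh

nyc = annual_cost(1_000, 0.12)   # about $1,051,200
wa  = annual_cost(1_000, 0.02)   # about $175,200
print(f"${nyc - wa:,.0f} saved per year")
```

At this scale the location decision alone is worth the better part of a million dollars a year, before any efficiency gains inside the facility.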
Google and Microsoft have been high-profile pioneers of this movement, but Wall Street may not be far behind. “We’re seeing Wall Street firms move some of their data center processing capability into New Jersey where energy is much cheaper,” says Donna Rubin, senior director of worldwide financial services partner and industry marketing at Sun Microsystems. Nobody is moving their trade processing away from Wall Street because of latency concerns, she notes, but mortgage processing and other functions are shifting out of town.
HSBC, for instance, is building a data center near a hydropower plant in upstate New York that runs off of Niagara Falls. And Credit Suisse has placed some of its eight enterprise scale data centers in areas where electricity is cheaper (though it won’t specify where). “As you move out of the large metro areas into other states, you see a significant power arbitrage,” Credit Suisse’s Hilton says.
“We’re seeing a lot more high-density applications get shipped out to places like Oregon and Washington State, where you can have cheaper power because they have either wind or hydropower,” says Tesh Durvasula, COO and EVP of NYC-Connect, a collocation facility in New York.
Another way to deal with these issues is collocation — in other words, move your servers into someone else’s facility and make the energy issues their problem (although most likely you’ll still pay your share of the energy bill). Not only are Wall Street firms moving servers onto the stock exchange floors, some are taking advantage of collocation facilities such as NYC-Connect’s 50,000-square-foot space. To save on its energy bill, NYC-Connect recently brought in Degree Controls, a Milford, N.H.-based consulting firm that, through data center configuration and airflow changes, cut NYC-Connect’s power bill by 12 percent to 14 percent, according to Durvasula.