Check Out Facebook’s New Energy-Efficient Data Centre

Cloud technologies power some of the Internet’s best-known sites—Picasa, Gmail, Facebook, and Zynga, to name a few—and cloud companies are striving to make the computer processing behind these sites as energy efficient as possible.

With that in mind, Facebook, Dell, HP, Rackspace, Skype, Zynga, and others have teamed up to form the Open Compute Project, an effort to share best practices for building more energy-efficient and economical data centres.

To kick-start the project, Facebook unveiled its innovative new data centre and contributed the specifications and designs to Open Compute. “Cloud companies are working hard to become more and more energy efficient…[and] this is a big step forward today in having computing be more and more green,” explains Graham Weston, Chairman of Rackspace.

A small team of Facebook engineers spent two years on the project, custom-designing the software, servers, and data centre from the ground up.

One of the most significant features of the facility is that Facebook eliminated the centralized UPS system found in most data centres. “In a typical data centre, you’re taking utility voltage, you’re transforming it, you’re bringing it into the data centre and you’re distributing it to your servers,” explains Tom Furlong, Director of Site Operations at Facebook.

Facebook data centre

“There are some intermediary steps there with a UPS system and with energy transformations that occur that cost you money and energy—between about 11% and 17%. In our case, you do the same thing from the utility, but you distribute it straight to the rack, and you do not have that energy transformation at a UPS or at a PDU level. You get very efficient energy to the actual server. The server itself is then taking that energy and making useful work out of it,” he said.
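Furlong’s figures are easy to sanity-check with a little arithmetic. The sketch below compares how much utility power actually reaches the servers under the two designs; the 11–17% loss range for the conventional UPS/PDU chain comes from the article, while the ~2% figure for the direct-to-rack design is an illustrative assumption, not a number Facebook has published.

```python
# Rough comparison of power delivered to servers under a conventional
# UPS/PDU distribution chain versus a direct-to-rack design.
# The 11-17% aggregate loss is from the article; the ~2% direct-to-rack
# loss is an illustrative assumption.

def delivered_power(utility_kw, loss_fraction):
    """Power (kW) reaching the servers after distribution losses."""
    return utility_kw * (1.0 - loss_fraction)

utility_kw = 1000.0  # hypothetical 1 MW utility feed

conventional_best  = delivered_power(utility_kw, 0.11)  # 11% loss
conventional_worst = delivered_power(utility_kw, 0.17)  # 17% loss
direct_to_rack     = delivered_power(utility_kw, 0.02)  # assumed ~2% loss

print(f"Conventional chain: {conventional_worst:.0f}-{conventional_best:.0f} kW reach the servers")
print(f"Direct to rack:     {direct_to_rack:.0f} kW reach the servers")
```

At data-centre scale the difference is substantial: on a hypothetical 1 MW feed, skipping the UPS and PDU conversion steps keeps on the order of 90–150 kW that would otherwise be lost as heat before any useful work is done.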

To regulate temperature in the facility, Facebook utilizes an evaporative cooling system. Outside air comes into the facility through a set of dampers and proceeds into a succession of stages where the air is mixed, filtered and cooled before being sent down into the data centre itself.

“The system is always looking at [the conditions] coming in,” says Furlong, “and then it’s trying to decide, ‘What is it that I want to present to the servers? Do I need to add moisture to [the air]? How much of the warm air do I add back into it?'” The upper temperature threshold for the centre is set at 80.6 degrees Fahrenheit, but it will likely be raised to 85 degrees, as the servers have proven capable of tolerating higher temperatures than originally thought.
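Furlong’s description amounts to a simple decision loop: sample the incoming air, then choose whether to add moisture (evaporative cooling), mix warm return air back in, or pass the air through unchanged. A minimal sketch of that loop is below; the 80.6 °F upper threshold is from the article, but the lower temperature bound, humidity floor, and mixing rule are illustrative assumptions, not Facebook’s actual control parameters.

```python
# Sketch of the air-handling decision loop Furlong describes.
# UPPER_TEMP_F is the server-inlet ceiling cited in the article;
# the other setpoints are illustrative assumptions.

UPPER_TEMP_F = 80.6   # inlet ceiling from the article
LOWER_TEMP_F = 65.0   # assumed lower bound for intake air
MIN_HUMIDITY = 0.25   # assumed relative-humidity floor (25%)

def plan_air_handling(outside_temp_f, outside_rh):
    """Decide actions for one pass through the intake stages."""
    actions = []
    if outside_temp_f > UPPER_TEMP_F:
        # Evaporative cooling: misting hot dry air lowers its temperature.
        actions.append("add moisture to cool")
    elif outside_temp_f < LOWER_TEMP_F:
        # Too cold: blend warm server exhaust back into the intake.
        actions.append("mix in warm return air")
    if outside_rh < MIN_HUMIDITY:
        # Air that is too dry risks static discharge near the servers.
        actions.append("add moisture to humidify")
    return actions or ["pass through as-is"]

print(plan_air_handling(90.0, 0.30))  # hot day
print(plan_air_handling(50.0, 0.20))  # cold, dry day
```

A real building-management system would modulate dampers and misting continuously rather than making discrete choices, but the structure—measure outside conditions, then condition the air to a target envelope before it reaches the servers—is the same.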

Facebook data centre

The servers used in the data centre are unique as well. They are “vanity free”—no extra plastic and significantly fewer parts than traditional servers. And through thoughtful placement of the memory, CPU, and other components, they are engineered to be easier to cool.

Now that these plans and specifications have been released as part of the Open Compute Project, the goal is for other companies to benefit from and contribute to them. “The idea of open source, crowdsourcing, Wikipedia—these are all part of a new era of thinking enabled by the same force,” explains Weston, “which is that when things are open, there’s more innovation around them.”

More info:

Facebook announcement:
Open Compute Project web site:

This post originally appeared on Building43.