When big companies like Google and Microsoft build out their data centres, they generally keep the details secret. Not Facebook. Today, the company announced that it’s building its own data centre and releasing the hardware specs under a program called the Open Compute Project.
Facebook has been leasing its data centres for the last 7 years, giving it no flexibility — Facebook technical VP Jon Heiliger likened it to living in his first apartment, where the landlord said that he could paint the walls any colour as long as it was Aztec White.
For the last year or so, Facebook has been working on its own data centre in Prineville, Oregon. The company says it has improved server efficiency by 38% and lowered costs by 24% — and to prove it, it’s releasing the design to the public.
Heiliger said “it’s time to stop treating data centres like Fight Club.”
Why? The company claims it wants to pass its efficiency gains on to the rest of the world. There’s also a hard business rationale: if Facebook can convince the industry it has a better way of designing data centres, hardware makers and other suppliers might adopt these designs. Then, as Facebook and its partners build more data centres, they won’t have to reinvent the wheel — and that lowers costs.
The hardware was designed by a team of just three engineers, working out of a small office at Facebook headquarters.
The design is actually pretty revolutionary. Highlights include:
- A new way of supplying power that leads to 99.9999% (six nines) availability.
- Cooling — one of the biggest costs in a typical data centre — is done entirely with outside air, which is driven through a “misting” system to control humidity. No ducts, no air conditioning.
- The actual servers are taller so they can use bigger heat sinks to keep them cool, and can be swapped out simply by pulling them out of the rack. They also use blue LEDs, which cost $0.07 apiece versus $0.03 for green but look really cool.
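For context on what that availability figure means in practice, here’s a quick back-of-the-envelope sketch (a simple Python illustration, not anything from Facebook’s released specs) converting an availability percentage into a yearly downtime budget — six nines works out to roughly 31.5 seconds of downtime per year:

```python
# Downtime budget implied by an availability target, using a 365-day year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def downtime_seconds(availability: float) -> float:
    """Maximum downtime per year allowed at the given availability fraction."""
    return (1.0 - availability) * SECONDS_PER_YEAR

for label, availability in [("three nines", 0.999), ("six nines", 0.999999)]:
    print(f"{label}: about {downtime_seconds(availability):,.1f} s/year")
```

Three nines, a common baseline, allows nearly nine hours of downtime a year; six nines allows barely half a minute.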