
Wednesday, April 20, 2011

Facebook's Prineville Data Center

Data Center Knowledge posted two video tours of Facebook's new data center in Prineville, OR. This data center uses Facebook's Open Compute Project designs. The videos describe many features not found in the Open Compute Project documents.

The videos are also available directly on blip.tv.
Observations and comments on those features:


  • No lights in the warm aisle, since all work is done from the cold aisle.
  • Lights on motion sensors everywhere else. Others have reported that all lighting uses LEDs.
  • Some of the server-warmed air is used to warm the office space. What about cool air for the office space in the summer?
  • 65° F to 80° F supply air for the cold aisle. Not as aggressive as some other designs, which use cold aisle temperatures of 90° F or higher.
  • The penthouse fan wall pressurizes the cold aisle to 0.03 inches water column. This pressure takes most of the load off the server fans, so the server fans run very slowly. The large VFD fans used in the fan wall should be much more efficient than the small server fans (see the fan-law sketch after this list). Some server designs eliminate server fans entirely. I wonder why Facebook chose to keep server fans?
  • The penthouse fan wall was moving about 340,000 CFM during the tour, with a target of 25-28° F delta-T through the servers. That translates to roughly 2-3 MW of IT load cooled (see the back-of-the-envelope sketch after this list).
  • The original design used only expensive air filters, but Facebook temporarily added cheap air filters to mitigate construction dust. They plan to keep the cheap air filters, which should extend the life of the expensive ones.
  • It looks like the mixture of outside air and server-warmed air is run through the filters. This seems redundant, since the server-warmed air was already filtered. It would have been more efficient to place the filters only in the outside air stream, but maybe Facebook has other reasons for re-filtering the server-warmed air.
  • One brief shot showed a cold aisle with no server racks (“triplets” in Facebook nomenclature), but with battery boxes and giant blanking panels in place of the missing triplets.
  • The use of large plenums and walls of fans, filters, misters, etc. makes maintenance easy while the data center is operating.
  • The battery boxes can supply 90 seconds of backup power to the servers. Facebook has programmed them to run for 45 seconds. Yet the generators need only 15 seconds to start up. Why the overkill on the battery box design?
  • The engineer claims the mister pumps run all the time, so they use the same amount of power whether the misters are spraying or not. Pumps should use more power if you are actually spraying water, but maybe they are using a recirculating pump design.
  • Misting water is drawn from on-site wells. The on-site storage tank holds 48 hours of water, with both city water and trucked-in water as backup.
  • The misting water is run through reverse osmosis to remove minerals and sterilized to kill bacteria. RO and sterilization can use a lot of power, but with a claimed PUE of 1.07, they can't be using much (see the PUE sketch after this list).
  • Because of the innovative dual AC and DC server power design, the battery boxes consume only trickle-charge power until AC power is lost. Power sensors switch to battery power at any sign of trouble, similar to standby UPS designs. There is none of the double-conversion overhead typically found in the online UPS systems of large data centers.
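
A side note on the fan-wall pressurization: fan power scales roughly with the cube of fan speed, which is why shifting the static-pressure work onto the large VFD fan wall and letting the server fans loaf saves so much energy. Here is a minimal sketch of the fan affinity law; the 10 W rating and 40% speed are made-up numbers for illustration, not figures from the tour.

    # Fan affinity law: power scales roughly with the cube of speed.
    # The rated power and reduced speed below are assumptions for
    # illustration, not values quoted by Facebook.
    rated_power_w = 10.0    # hypothetical server fan power at full speed
    speed_fraction = 0.4    # hypothetical reduced speed once the fan wall carries the load

    reduced_power_w = rated_power_w * speed_fraction ** 3

    print(f"Power at {speed_fraction:.0%} speed: {reduced_power_w:.2f} W "
          f"({reduced_power_w / rated_power_w:.1%} of rated power)")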
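
And a back-of-the-envelope check on the fan-wall airflow figure, using the common sensible-heat rule of thumb q [BTU/hr] ≈ 1.08 × CFM × delta-T [° F]. The 1.08 constant assumes sea-level air density; Prineville sits at roughly 2,900 feet, which would shave about 10% off the result.

    # Rough sensible-heat estimate from the tour's fan-wall numbers:
    # 340,000 CFM with a 25-28° F delta-T across the servers.
    BTU_PER_HR_TO_WATTS = 0.2931

    cfm = 340_000
    for delta_t_f in (25, 28):
        q_btu_hr = 1.08 * cfm * delta_t_f          # sea-level rule of thumb
        q_mw = q_btu_hr * BTU_PER_HR_TO_WATTS / 1e6
        print(f"delta-T {delta_t_f}° F -> about {q_mw:.1f} MW of heat removed")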
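
Finally, on the PUE claim: PUE is total facility power divided by IT power, so a PUE of 1.07 means that everything other than the IT load (cooling fans, pumps, water treatment, lighting, distribution losses) adds up to only 7% of the IT load. A quick illustration, using a hypothetical 2 MW IT load rather than any number Facebook has published:

    # PUE = total facility power / IT power.
    pue = 1.07
    it_load_mw = 2.0    # hypothetical IT load for illustration only

    total_facility_mw = pue * it_load_mw
    overhead_mw = total_facility_mw - it_load_mw    # everything that is not IT load

    print(f"Total facility power: {total_facility_mw:.2f} MW")
    print(f"Non-IT overhead: {overhead_mw:.2f} MW ({pue - 1:.0%} of the IT load)")
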
Facebook has not released many key specifications of their data center design, including:
  • Reverse osmosis and water sterilization system
  • Custom reactor power panel
  • Custom rack mounted network switches
  • Overall network design
I hope Facebook plans to release more specifications soon.
