Robert Scoble tours the datacenter, which was built using Open Compute Project standards and best practices for energy efficiency.
One of the most significant features of the facility was that Facebook eliminated the centralized UPS system found in most data centers. “In a typical data center, you’re taking utility voltage, you’re transforming it, you’re bringing it into the data center and you’re distributing it to your servers,” explains Tom Furlong, Director of Site Operations at Facebook. “There are some intermediary steps there with a UPS system and with energy transformations that occur that cost you money and energy—between about 11% and 17%. In our case, you do the same thing from the utility, but you distribute it straight to the rack, and you do not have that energy transformation at a UPS or at a PDU level. You get very efficient energy to the actual server. The server itself is then taking that energy and making useful work out of it.”
To regulate temperature in the facility, Facebook utilizes an evaporative cooling system. Outside air comes into the facility through a set of dampers and proceeds into a succession of stages where the air is mixed, filtered and cooled before being sent down into the data center itself.
Saving 11–17% on power by redesigning how the UPS works is huge, since data centers use enormous amounts of power. This is not only green, it saves money too.
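To get a feel for the scale, here is a rough back-of-the-envelope calculation. The facility size and electricity price below are illustrative assumptions (not Facebook's actual figures); only the 11–17% loss range comes from Furlong's quote.

```python
# Hypothetical illustration of why cutting 11-17% distribution loss matters.
# The 25 MW load and $0.06/kWh price are assumptions for the sake of example.

def annual_savings_usd(it_load_mw, loss_fraction, price_per_kwh):
    """Annual cost of energy that would be lost in UPS/PDU conversion stages."""
    hours_per_year = 8760
    lost_kwh = it_load_mw * 1000 * loss_fraction * hours_per_year
    return lost_kwh * price_per_kwh

low = annual_savings_usd(25, 0.11, 0.06)
high = annual_savings_usd(25, 0.17, 0.06)
print(f"Roughly ${low:,.0f} to ${high:,.0f} per year")
```

Even under these conservative assumptions, the avoided conversion losses add up to well over a million dollars a year for a single facility.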
“The system is always looking at [the conditions] coming in,” says Furlong, “and then it’s trying to decide, ‘What is it that I want to present to the servers? Do I need to add moisture to [the air]? How much of the warm air do I add back into it?’” The upper temperature threshold for the center is set at 80.6 degrees Fahrenheit, but it will likely be raised to 85 degrees, as the servers have proven capable of tolerating higher temperatures than originally thought.
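The decision logic Furlong describes can be sketched roughly as follows. The thresholds other than the 80.6°F ceiling, and the control structure itself, are illustrative assumptions, not Facebook's actual control system.

```python
# Hypothetical sketch of the supply-air decisions Furlong describes.
# Only SUPPLY_MAX_F comes from the article; the rest are assumed values.

SUPPLY_MAX_F = 80.6   # current upper threshold cited in the article
SUPPLY_MIN_F = 65.0   # assumed lower bound for supply air
MIN_HUMIDITY = 0.30   # assumed minimum relative humidity

def plan_supply_air(outside_temp_f, outside_rh):
    """Decide how to condition incoming outside air before it reaches servers."""
    actions = []
    if outside_temp_f > SUPPLY_MAX_F:
        actions.append("evaporative cooling")       # misting lowers air temperature
    elif outside_temp_f < SUPPLY_MIN_F:
        actions.append("mix in warm return air")    # reuse server exhaust heat
    if outside_rh < MIN_HUMIDITY:
        actions.append("add moisture")
    return actions or ["pass through"]
```

On a hot dry day this would call for evaporative cooling plus humidification; on a mild day the outside air passes straight through, which is where the efficiency gain comes from.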
They also use large external fans for cooling rather than relying solely on fans inside the servers, because external fans are more efficient.
The servers used in the data center are unique as well. They are “vanity free”—no extra plastic and significantly fewer parts than traditional servers. And by thoughtfully placing the memory, CPU and other parts, Facebook engineered them to be easier to cool.
This also makes the servers far easier to service, as parts are more out in the open.
My back yard. I interviewed, but they don’t hire old guys—regardless of their Master of Science and twenty years of experience. But then again, how much do you need to know to swap out a blade?