Evaluating the Opportunity for DC Power in the Data Center
by Mark Murrill and B.J. Sonnenberg, Emerson Network Power
A White Paper from the Experts in Business-Critical Continuity™
Summary
With data center managers struggling to increase efficiency while maintaining or
improving availability, every system in the data center is being evaluated in terms of its
impact on these two critical requirements. The power system has proven to be one of
the more difficult systems to optimize because efficiency and availability are often in
conflict; the most efficient approach to critical power is rarely the most reliable.
One solution to power system optimization that deserves serious consideration is DC
power. Since utility AC power must ultimately be converted to DC for use by IT system
components, and because stored energy systems (batteries, flywheels, etc.) provide DC
power for backup, a DC power architecture requires fewer total conversions from grid to
chip, creating the opportunity to reduce costs and increase efficiency.
A data center-optimized, row-based DC power protection system is now available to help
data center operators take advantage of that opportunity. This system, combined with
the availability of 48V DC-powered IT equipment from major manufacturers, makes DC
power an ideal solution for small and midsize data centers seeking to optimize efficiency,
reliability and scalability. Other applications include high-density equipment rows with a
consistent footprint and pod-based data centers.
As the leading provider of AC and DC power systems, Emerson Network Power is uniquely
positioned to help organizations evaluate the suitability of DC power and determine
whether a row-based DC infrastructure is appropriate for a given application.
Introduction
The first decade of the twenty-first
century was one of incredible growth
and change for data centers. The demand
for computing and storage capacity
exploded, and many IT organizations
struggled to deploy servers fast enough
to meet the needs of their businesses. At
the same time, the trend to consolidate
data centers and centralize computing
resources resulted in fewer opportunities
for planned downtime while also increasing the cost of unplanned outages.
Data center operators were able to meet
the demand for increased compute
capacity by deploying more powerful
servers—often in the same physical space
as the servers being displaced—creating a
dramatic rise in data center power consumption and density. Between 2004 and
2009, power and heat density became top
concerns among data center managers as
they struggled to adapt to a 400 to 1,000
percent increase in rack density.
The dramatic increase in data center
energy consumption created both financial
and environmental challenges. Energy
costs, which once had been relatively
inconsequential to overall IT management,
became more significant as the rise in
consumption was exacerbated by a
steady—and in some years significant—
increase in the cost of electricity. In
addition, increased awareness of the role
that power generation plays in atmospheric
carbon dioxide levels prompted the U.S.
EPA to investigate large energy consumers
such as data centers. In 2007 the EPA
presented a report to the U.S. Congress that
included recommendations for reducing
data center energy consumption.
The industry responded with a new
focus on energy efficiency and began
implementing server virtualization, higher-efficiency server power supplies, and new
approaches to cooling. Yet, while significant progress has been made in some
areas, the critical power system has yet to
be fully optimized. While individual components have been improved, the overall
system complexity is high, which can
create inefficiency and add operational
risk. Faced with a trade-off between
higher efficiency and added risk, many
operators continue to choose proven
approaches that deliver high availability
but not the highest efficiency.
However, a close examination of the
available options reveals that, in many
cases, efficiency can be improved without
sacrificing overall availability.
Established Data Center Power Distribution Options
Traditional AC power distribution systems in North America bring 480V AC power
into a UPS, where it is converted to DC to charge batteries, and then inverted back
to AC. The power is then stepped down to 208V by a transformer in the power distribution unit (PDU) for
delivery to the IT equipment. The power supplies in the IT equipment convert the power
back to DC and step it down to lower voltages that are consumed by processors, memory
and storage [Figure 1].
Figure 1. Typical 480V AC to 208V AC data center power system configuration: 480V AC utility input, double-conversion UPS (rectifier, battery, inverter, with bypass), PDU transformer stepping down to 208V AC, and server PSU converting to 12V DC for the server's internal DC loads.
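The efficiency cost of this architecture comes from multiplying the losses of each conversion stage. The sketch below is a minimal illustration of that arithmetic, comparing the cascaded AC chain of Figure 1 against a hypothetical path with fewer conversions; the per-stage efficiency values are assumed round numbers chosen for illustration, not measured figures for any particular product.

```python
# Illustrative grid-to-chip efficiency comparison for cascaded power
# conversion stages. All per-stage efficiency values are assumptions
# chosen for illustration, not measured figures for any product.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for _, stage_eff in stages:
        total *= stage_eff
    return total

# Conventional 480V AC path, following Figure 1.
ac_chain = [
    ("UPS rectifier (480V AC to DC)", 0.96),
    ("UPS inverter (DC to 480V AC)", 0.96),
    ("PDU transformer (480V to 208V AC)", 0.98),
    ("Server PSU (208V AC to 12V DC)", 0.90),
]

# Hypothetical 48V DC path with fewer conversion stages.
dc_chain = [
    ("Rectifier (480V AC to 48V DC)", 0.96),
    ("Server PSU (48V DC to 12V DC)", 0.92),
]

for label, chain in (("AC chain", ac_chain), ("48V DC chain", dc_chain)):
    print(f"{label}: {chain_efficiency(chain):.1%} end-to-end")
```

The point of the arithmetic is simply that losses compound multiplicatively, so removing a conversion stage improves the grid-to-chip figure more than incremental tuning of any single stage; actual savings depend on the specific equipment deployed.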