ISS Technology Update, Volume 9, Number 6

HP developing modular data center technologies
Sea of Sensors reduces power costs
HP Smart Array optimizes Video on Demand performance
Meet the Expert—Chris Wanner
Recently published Industry-Standard Server technology papers
Contact us
HP developing modular data center technologies
The U.S. Department of Energy (DOE) has awarded us a $7.4 million grant as part of the American Recovery and
Reinvestment Act of 2009. The HP Rack and Power Group and the Eaton Corporation will co-develop power and cooling
technologies for a fully enclosed IT rack system. Wide acceptance of these technologies can significantly increase energy
efficiency and reduce carbon dioxide emissions.
About the grant
The grant funds the research and development of a self-contained IT rack enclosure with chilled water and high-voltage AC
inputs. The enclosure must also accept power from renewable energy sources such as wind and solar. The grant covers 80%
of the anticipated project cost, with HP and Eaton responsible for the remainder. We must complete the project within two
years of the February 2010 award date and make public the technologies developed under the grant. The products that
result from these new technologies will create U.S. jobs, provide a more efficient means for the IT industry to use wind and
solar energy, and extend the life of data centers around the globe.
This is the first time that the DOE has awarded a grant to an HP product development team. A key reason is that we and
Eaton have previously investigated some of the required systems. This makes it possible to complete the project within the
two-year window.
The target markets
Ken Baker, HP Data Center Infrastructure Technologist, says, “Existing self-contained rack enclosures, including the HP
Modular Cooling System, work well for high-density racks with heat loads above 10kW. But the majority of the market
consists of small and medium data centers with partially full racks and heat loads ranging from 3kW to 7kW per rack.”
These data centers typically have power and cooling constraints rather than space constraints. Ken says, “They want their
facility deficiencies to stop limiting their IT deployment strategies, so they need to use all of their existing rack space and
available floor space without increasing their facility’s heat load.”
The modular data center
We are working with Eaton on a fully enclosed IT rack system, or modular data center (MDC). The MDC will provide 100kW
of power and cooling capacity using high-voltage AC (400V to 480V) and chilled water as the primary inputs. The MDC can
also use intermittent power from renewable energy sources.
The MDC powers the IT equipment using distributed high-voltage DC power for maximum efficiency. The MDC’s
management system aggressively controls energy use to optimize internal resources based on electrical grid demand, peak-demand usage times, and the availability and strength of wind and solar power. The MDC includes management controls
that actively optimize the internal environment for dynamic IT loads, and interface with facility management systems to
provide an end-to-end view of the power and cooling chain.
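To illustrate the kind of policy involved, here is a minimal Python sketch of a power-source selection routine. It is purely illustrative: the function name, thresholds, and load-deferral rule are our own assumptions, not the control logic being developed under the grant.

# Hypothetical sketch of the MDC power-source policy described above:
# prefer renewable input when available, fall back to the grid, and
# defer non-critical load at peak-demand times. All names and
# thresholds are invented for illustration.

def choose_source(renewable_kw, demand_kw, peak_period):
    """Return (renewable_kw_used, grid_kw_used, defer_load) for one interval."""
    from_renewable = min(renewable_kw, demand_kw)
    from_grid = demand_kw - from_renewable
    # At peak-demand times, signal the IT load manager to defer
    # non-critical work rather than draw heavily from the grid
    defer = peak_period and from_grid > 0.5 * demand_kw
    return from_renewable, from_grid, defer

print(choose_source(renewable_kw=60.0, demand_kw=100.0, peak_period=True))
# (60.0, 40.0, False): renewables cover most of the load
print(choose_source(renewable_kw=10.0, demand_kw=100.0, peak_period=True))
# (10.0, 90.0, True): heavy grid draw at peak, so defer what can wait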
The MDC supports four, six, or eight 42U IT racks. That equates to a cooling capacity of 25kW/rack for four racks or
12.5kW/rack for eight racks. This requires an innovative cooling system that can fit in a single bay at the enclosure’s center.
Figure 1-1 shows an MDC with four bays, two on each side of the cooling system bay. One of our main goals is to design
the MDC so that two people can install it in two to three days with the aid of an electrician and plumber to connect the
power and cooling water inputs. The enclosure may be up to 7300mm (24 ft.) wide including eight IT racks and the cooling
system bay. The MDC will be approximately 1800mm (6 ft.) deep, with up to 600mm for cold air circulation.
Figure 1-1. Diagram of the proposed modular data center
Energy savings
We believe the MDC will cut energy use by 38% for a 100kW IT load, based on increased efficiency in cooling and power
conversion. This is equivalent to reducing carbon dioxide emissions by approximately 400 tons annually. The MDC will reduce
energy use by:
• Reducing inefficiencies caused by raised-floor cooling infrastructures, such as warm air mixing with cold air above and
around server aisles
• Reducing the energy required to provide chilled water by using a single chilled-water coil
The grant requires that the MDC achieve a maximum Power Usage Effectiveness (PUE) ratio of 1.25, where
PUE = total MDC load ÷ IT equipment load
A typical data center has a PUE over 2.0, which means that for every watt the IT equipment consumes, the power and
cooling resources consume an additional watt.
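As a quick worked example, the following Python sketch shows what the PUE target means in overhead power. The wattage figures are illustrative.

def pue(total_load_kw, it_load_kw):
    """PUE = total facility load / IT equipment load."""
    return total_load_kw / it_load_kw

it_load = 100.0                # the grant's 100kW IT load target
mdc_total = it_load * 1.25     # at the required maximum PUE of 1.25
typical_total = it_load * 2.0  # a typical data center with PUE of 2.0

print(f"MDC target PUE: {pue(mdc_total, it_load):.2f}")      # 1.25
print(f"Typical PUE:    {pue(typical_total, it_load):.2f}")  # 2.00
# Overhead beyond the IT load: 25kW for the MDC vs. 100kW typically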
Sea of Sensors reduces power costs
HP engineers designed ProLiant G6 and G7 servers with multiple thermal sensors, collectively called the Sea of Sensors, placed throughout the
chipset, memory, power, and I/O subsystems. The iLO management processor polls the sensors to gather system-wide
temperature data and uses sophisticated algorithms to adjust the speed of individual system fans and maintain sufficient
system cooling. The number of sensors varies by the server’s make, model, and installed options.
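The sketch below illustrates the general idea of zone-based fan control in Python. The sensor names, fan-to-sensor mapping, idle point, and proportional response are all hypothetical; the actual iLO algorithms are more sophisticated and are not published.

# Hypothetical sketch of per-zone fan control driven by distributed
# temperature sensors. Sensor names, zones, and thresholds are invented
# for illustration only.

TARGET_C = 70.0        # illustrative component temperature target
MIN_PWM, MAX_PWM = 20, 100

# Map each fan to the sensors in the zone it cools (assumed topology)
zones = {
    "fan1": ["cpu1", "dimm_bank_a"],
    "fan2": ["cpu2", "dimm_bank_b"],
    "fan3": ["chipset", "io_slot"],
}

def fan_speed(readings, sensors):
    """Scale fan PWM with the warmest sensor in the fan's cooling zone."""
    hottest = max(readings[s] for s in sensors)
    # Simple proportional response between an idle point (40 C) and target
    pwm = MIN_PWM + (hottest - 40.0) / (TARGET_C - 40.0) * (MAX_PWM - MIN_PWM)
    return int(min(MAX_PWM, max(MIN_PWM, pwm)))

readings = {"cpu1": 68, "cpu2": 52, "dimm_bank_a": 45,
            "dimm_bank_b": 41, "chipset": 60, "io_slot": 38}

for fan, sensors in zones.items():
    print(fan, fan_speed(readings, sensors))
# fan1 ramps up near the hot CPU; fan2 stays slow near its cool zone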
The Sea of Sensors provides the minimum amount of cooling necessary by ramping up individual fans near hot spots and slowing
down fans near cool spots. The temperature plot in Figure 2-1 shows how air warms as it flows from the front (left) to the
back (right) of a server. Cool spots occur at the inlet (left) and hot spots occur around CPUs and some chipset components
(center). HP engineers observe how the plot changes with different loads to gain a better understanding of a system’s
thermal behavior. This approach reduces power use and fan noise, and can help facilities increase server density due to
greater overall efficiency. As a result, the Sea of Sensors helps ProLiant servers achieve benchmark-level
operations per second per watt (OPS/watt).
Figure 2-1. HP visualization tool representation of system “hot spots” and “cool spots”
The iLO user interface
System-specific Sensor Data Records (SDRs) determine which thermal sensors the iLO user interface (UI) displays. The
Intelligent Platform Management Interface (IPMI) specification defines the Sensor Data Repository, which houses SDRs. The system
BIOS transfers the Sensor Data Repository to iLO during POST.
iLO consolidates data from a group of similar sensors (DIMMs, for example). In aggregate readings, SDRs send the
temperature of the warmest component in a group to the iLO UI. Other sensors may send a “remaining temperature margin”
reading to the UI. iLO uses this information to tailor cooling for the system configuration.
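Here is a minimal Python sketch of the two reporting styles. The group names and temperature limit are hypothetical; real SDRs follow the IPMI specification.

DIMM_LIMIT_C = 85.0  # assumed upper temperature limit, for illustration

dimm_temps = {"dimm1": 48.0, "dimm2": 55.0, "dimm3": 51.0}

# Aggregate reading: report only the warmest component in the group
aggregate = max(dimm_temps.values())
print(f"DIMM group reading: {aggregate:.1f} C")         # 55.0 C

# Margin-style reading: headroom remaining before the assumed limit
margin = DIMM_LIMIT_C - aggregate
print(f"Remaining temperature margin: {margin:.1f} C")  # 30.0 C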
Findings during development
We made some interesting observations during the development of the Sea of Sensors technology:
• Running fans too slowly can increase power draw because of the inherent behavior of motors at slow speeds.
• Running some fans too fast can decrease direct-attached storage performance if fan vibration couples with disk drive
rotation.
• Although fan failures are rare, turning off fans to save power prevents iLO from monitoring them for failures and other
conditions. An unpowered fan can also turn and recirculate hot air.
• Poorly balanced fan flow rates can produce an eddy (recirculated hot air), degrading cooling performance.
Additional resources
“HP ProLiant Intel-based 300-series G6 and G7 servers” technology brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00502616/c00502616.pdf
HP Smart Array optimizes Video on Demand performance
Video on Demand (VOD) is a challenging storage application area because content providers must deliver hundreds of
video streams simultaneously and without interruption. The VOD servers that deliver these services require large amounts of
relatively high-performance disk storage. The ProLiant Smart Array team has been making important advances in the use of
HP technology to optimize storage for VOD applications.
Understanding how VOD servers interact with storage
Optimizing for large block random I/O
VOD servers stream files in a rotational manner, reading and delivering a block from one file and then another. As a result,
VOD servers access the storage as random I/O—although it is random I/O with a pattern.
High-performance VOD application servers use large block requests, typically 512 KB to 4 MB. This allows the servers to
grab enough data from the disk to fill the video stream until the next access window opens in the rotation. The best VOD
performance is gained by tuning the drive array, the file system, and the VOD application server so that each block request
is stripe-sized (a full stripe width) and stripe-aligned (aligned to stripe boundaries). Many VOD implementations use Linux
because it can support file systems that you can configure for stripe-aligned block requests.
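A minimal sketch of the alignment arithmetic follows, assuming an illustrative strip size and drive count; actual values come from the array configuration.

STRIP_KB = 256                       # assumed strip size per data drive
DATA_DRIVES = 4                      # assumed data drives in the array
STRIPE_KB = STRIP_KB * DATA_DRIVES   # full stripe width: 1024 KB

def aligned_request(offset_kb):
    """Round an offset down to a stripe boundary and request one full
    stripe, touching every data drive exactly once."""
    start = (offset_kb // STRIPE_KB) * STRIPE_KB
    return start, STRIPE_KB

# An unaligned 1024 KB read at offset 2560 KB would straddle two stripes;
# the stripe-aligned request reads exactly one
print(aligned_request(2560))   # (2048, 1024)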
VOD latency requirements
In storage terms, latency is the time it takes for a Smart Array controller to retrieve a block of requested data from a logical
drive and deliver it to the application. With VOD, consistent latencies and a low maximum latency matter more than a low
average latency. Engineers configuring VOD servers gladly trade a slightly higher average latency to keep the maximum latency
below a given threshold. For example, VOD server performance may be acceptable if the latency is 70, 80, or 90 milliseconds as
long as the server can deliver the next chunk of the video stream in the required window of time. However, if the latency
exceeds 1000 milliseconds, you will probably see a glitch or a pause in an on-demand movie.
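The short Python sketch below shows why the maximum matters more than the average; the latency samples and deadline are illustrative only.

import statistics

# Per-request read latencies in milliseconds (hypothetical samples)
latencies_ms = [78, 82, 75, 88, 80, 79, 1150, 77, 84, 81]

DEADLINE_MS = 1000  # assumed delivery window for the next stream chunk

print(f"average: {statistics.mean(latencies_ms):.0f} ms")  # 187 ms
print(f"max:     {max(latencies_ms)} ms")                  # 1150 ms
missed = [t for t in latencies_ms if t > DEADLINE_MS]
print(f"requests past the deadline: {len(missed)}")  # 1: a visible glitch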
Smart Array controller optimizations for VOD
Smart Array engineers analyze VOD requirements and use this information to optimize the Smart Array controller using the
Array Configuration Utility (ACU). Some configuration parameters require a Smart Array Advanced Pack (SAAP) license to
access. Table 3-1 summarizes the optimal ACU configuration settings for VOD and their availability.
Table 3-1. Smart Array ACU configuration settings for VOD optimization

Configuration Parameter                            | Setting      | Availability
Physical Drive Request Elevator Sort               | OFF          | SAAP
Parity RAID Degraded Mode Performance Optimization | ON           | SAAP
Cache Ratio                                        | 100% Write   | Standard Smart Array
MNP Delay                                          | 0 (Disabled) | SAAP
MNP Data Collection                                | 0 (Disabled) | SAAP
Rebuild Priority                                   | Low          | Standard Smart Array
Background Surface Scan Interval                   | 30 seconds   | Standard Smart Array
Physical Drive Request Elevator Sort (Table 3-1) can increase the performance of a Smart Array. It lets the controller re-order
block requests to the physical drives to minimize the drives’ seek time. Unfortunately, it also increases the maximum latency
since the controller may execute an earlier block request after it executes subsequent requests. You can turn this function OFF
to decrease the variability of latency.
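The following Python sketch illustrates the trade-off with a toy seek model (track positions and costs are invented): sorting by disk position cuts total time, but the earliest request can finish later than it would under first-in, first-out ordering.

# Toy model of the elevator-sort trade-off: sorting queued requests by
# disk position minimizes total seek distance, but an early request to a
# far track can be served after later-arriving requests, raising its
# latency. Track positions and costs are illustrative.

XFER = 100  # assumed fixed transfer cost per request (arbitrary units)

def completion_times(queue, head=0):
    """Serve requests in the given order; cost = seek distance + transfer."""
    t, times = 0, {}
    for arrival, track in queue:
        t += abs(track - head) + XFER
        head = track
        times[arrival] = t
    return times

# Request 0 arrives first but targets a far track
queue = [(0, 900), (1, 100), (2, 200), (3, 300)]

fifo = completion_times(queue)
elevator = completion_times(sorted(queue, key=lambda r: r[1]))

print("request 0 latency, FIFO:    ", fifo[0])        # 1000
print("request 0 latency, elevator:", elevator[0])    # 1300 (served last)
print("total time, FIFO:    ", max(fifo.values()))       # 2300
print("total time, elevator:", max(elevator.values()))   # 1300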
When an array drive fails, the array operates in Degraded Mode until you replace the failed drive and the Smart Array
controller rebuilds the array. During this period, the controller performs multiple disk operations to reconstruct degraded data while it
fulfills read requests. This increases latency. Our engineers found that turning RAID Degraded Mode Performance
Optimization to ON (Table 3-1) lets the Smart Array controller make greater use of the controller memory for regeneration
operations, minimizing but not eliminating the impact on latency during array rebuilds.
Setting the Cache Ratio to 100% Write (Table 3-1) may seem odd for VOD. However, engineers discovered that using the
Smart Array controller memory for read cache does not increase read performance for most VOD installations. Read caching
is ineffective due to the amount of large-block data the VOD server reads combined with the number of streams it serves.
Setting the Cache Ratio to 100% Write allows the Smart Array controller to post infrequent writes to the cache and return to
fulfilling read requests as quickly as possible.
Engineers minimize the intrusion of Smart Array controller background tasks by disabling Monitor and Performance (MNP);
setting Rebuild Priority to Low; and setting the Background Surface Scan Interval to 30 seconds (Table 3-1). These tasks can
cause transient dips in read performance and affect the ability of the VOD server to deliver all of the video streams reliably.
Conclusion
Maximizing VOD performance involves many moving parts, such as tuning the Smart Array controller driver, the VOD
application server, the OS, and the file system itself. We continue to examine VOD performance on HP ProLiant systems to
identify future improvements.
Meet the Expert—Chris Wanner
Chris Wanner is a Signal Integrity (SI) Program lead for the HP BladeSystem c-Class
servers. Chris joined Compaq in June 1986, when the company had fewer than 3,000
employees. His current responsibilities include HP BladeSystem architecture and
Signal Integrity as well as CPU roadmap performance analysis. According to his
manager, Gene Freeman, “Chris has a broad understanding of the hardware
architecture from sheet metal to processors to ASICs [Application-Specific Integrated
Circuits]. One of his strengths is his ability to communicate that knowledge to our
field engineers and customers.”
He found inspiration and destiny in the same place
Chris grew up in a small Wisconsin town of 4,000 in the 1960s and 70s. His
fascination with science and mathematics began in grade school, but his interest in
electrical engineering came from his best friend’s dad who worked on the Apollo
space program and the supersonic Concorde aircraft. Chris says, “His
accomplishments inspired me to become an engineer. But he didn’t know that I
would also be inspired to marry his daughter.” Today, Chris and Elizabeth J.
Wanner, MD, have two sons: Nicholas (12) and Matthew (15).
Name: Chris Wanner
Title: Senior Architect, Industry Standard Servers
Years at HP: 24
University/Degree: Michigan Technological University, Houghton, MI, BSEE 1982; graduate studies at Southern Methodist University, Dallas, TX
U.S. Patents: 10

‘That was fun!’
One of the most technically challenging and industry-changing efforts Chris has
been part of was EISA (Extended Industry Standard Architecture). Compaq gained
respect as an innovator in the PC industry in the 1980s. During this time, IBM
introduced a proprietary new hardware architecture called MicroChannel
Architecture (MCA). MCA didn’t catch on because it wasn’t backward compatible
with existing 8-bit and 16-bit ISA expansion cards. The decision to develop EISA as
an open standard, instead of following the proprietary MCA path, was a watershed development for the industry and
helped thrust Compaq to the forefront as an industry leader. “That was fun!” says Chris.
The importance of SI
Chris points out that HP continues to make major signal integrity investments in terms of both people and equipment. Our SI
engineers ensure that electronic circuits interconnect reliably at the highest speed possible. We consistently push the
performance envelope, driving data rates higher and sooner than others in the industry.
The signaling rates common in HP servers today were impossible just 10 years ago. We constantly look years ahead to
prepare for the next barriers.
His ultimate goal
Chris works frequently with customers and HP field representatives to solve customers’ problems and meet their business
needs faster and better than the competition.
Recently published Industry-Standard Server technology papers
Cooling strategies for IT equipment
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02507744/c02507744.pdf

HP intelligent power infrastructure solutions
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02505050/c02505050.pdf

HP ProLiant server power management on SUSE Linux Enterprise Server 11 SP1, 2nd edition, Integration note
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02011017/c02011017.pdf

Implementing Microsoft Windows Server Code Name Aurora Preview Build on HP ProLiant servers, Integration note
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02510044/c02510044.pdf

Integrating HP Insight Management WBEM Providers with HP System Insight Manager, 3rd edition
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01509052/c01509052.pdf

Rack and power planning with HP Insight Control Power Management
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02510164/c02510164.pdf

Solid state storage technology for ProLiant servers, 2nd edition
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01580706/c01580706.pdf

Upgrading VMware vCenter Server and VMware ESX in an HP Insight Dynamics 4.1 or 6.0 environment, HOWTO
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02550452/c02550452.pdf

Using Integrated Lights-Out in a VMware ESX environment, 2nd edition
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01732803/c01732803.pdf
HP Industry Standard Server technology papers are at www.hp.com/servers/technology
Contact us
Send comments about the ISS Technology Update to TechCom@HP.com.
To subscribe to the ISS Technology Update, click mailto:techcom@hp.com?subject=TechUpdate_subscription.
Past articles are at http://h18004.www1.hp.com/products/servers/technology/whitepapers/general.html#archive.
Follow us on Twitter: http://twitter.com/ISSGeekatHP
Legal Notices
© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is
subject to change without notice. The only warranties for HP products and services are set forth in the
express warranty statements accompanying such products and services. Nothing herein should be
construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors
or omissions contained herein.
AMD and AMD Opteron are trademarks of Advanced Micro Devices, Inc.
Intel, Intel Xeon, and Intel Itanium are trademarks of Intel Corporation in the United States and other
countries.
TC101101NL, November 2010