Technical Whitepaper

HP ProLiant G7 c-Class server blades with
Intel® Xeon® processors
Technology brief
Contents

Introduction
ProLiant c-Class server blade architecture
Processor technologies
    Intel Xeon 5600 series processors
    How HP incorporates QuickPath technology
    HP processor installation tool
    HP processor heat sink
Memory technologies
I/O technologies
    PCI Express technology
    HP Smart Array controllers
    SAS and SATA small form factor hard drives
    Solid state drives
    Optional mezzanine cards
    Networking technologies
Configuration and management technologies
    BladeSystem Onboard Administrator
    Integrated Lights-Out 3 for ProLiant server blades
    HP Insight Control
Power and thermal technologies
    Thermal sensors
    Power meter
    HP Power Regulator for ProLiant
    HP Dynamic Power Capping
    HP Enclosure Dynamic Power Capping
Data security technology with the Trusted Platform Module
For more information
Call to action

Introduction
This technology brief describes the architecture and the implementation of major technologies in HP
ProLiant G7 c-Class server blades based on Intel® Xeon® processors. If you need more information
about the infrastructure components and complete specifications of each server blade, see the HP
website: www.hp.com/go/bladesystem/.
ProLiant c-Class server blade architecture
HP ProLiant c-Class server blades slide into HP BladeSystem c-Class enclosures to meet the needs of large or small IT environments (Figure 1):
• The HP BladeSystem c7000 rack enclosure is 10U high and holds up to 16 ProLiant c-Class server blades, enabling up to 32 server nodes per enclosure.
• The HP BladeSystem c3000 rack enclosure is 6U high and holds up to 8 ProLiant c-Class server blades.
Figure 1. HP BladeSystem c-Class enclosures: the BladeSystem c7000 enclosure and the BladeSystem c3000 rack enclosure
The rack enclosures fit in HP 10000 series racks and can operate with as few as one server blade
installed. One of the advantages of blade architecture, however, is the ease of adding more server
blades. ProLiant c-Class server blades are built in standard form-factors, referred to as half-height and
full-height. Both half-height and full-height server blades fit into any BladeSystem c-Class enclosure.[1]
Half-height ProLiant G7 c-Class server blades include enterprise-class technologies:
• Two Xeon 5600 series processors
• Advanced memory technologies
• Multiple slots for I/O cards
• Hot-plug internal disk drives
• Embedded FlexFabric Converged Network Adapter (CNA)
Half-height G7 server blades support 6 to 18 DIMM slots, up to 2 drives (hot-plug or non-hot-plug), an integrated FlexFabric CNA, and up to 2 mezzanine slots. Optional mezzanine cards provide multiple types of I/O fabric connectivity to the interconnect bays.
[1] More information about BladeSystem c-Class enclosure configuration options can be found at http://www.hp.com/go/bladesystem/.
Processor technologies
HP uses the enhanced technologies in Intel Xeon 5600 series processors to construct a range of
performance and power options for ProLiant G7 server blades. Xeon 5600 series processors are built
on second-generation, high-k metal gate transistors for higher performance and lower electrical
leakage.
HP engineers developed a special tool to simplify installation of Xeon 5600 series processors. They also designed smaller, more efficient heat sinks and precise ducting for ProLiant c-Class server blades.
Intel Xeon 5600 series processors
ProLiant G7 server blades support up to two Xeon 5600 series processors (Figure 2). Xeon 5600 series processors are based on the 32-nanometer (nm) Westmere die shrink of the Nehalem microarchitecture. These processors support distributed shared memory, Intel QuickPath Technology, Hyper-Threading technology, Turbo Boost Technology, Advanced Encryption Standard (AES) acceleration, and Intel Trusted Execution Technology. Compared to four-core Xeon 5500 series processors, Xeon 5600 series processors have:
• Up to six cores and a CPU frequency up to 3.33 GHz
• Improved power efficiency (up to 60% higher performance in the same power envelope)
• A shared 12MB L3 cache (Xeon 5500 series processors have an 8MB or 4MB L3 cache)
• Support for low-voltage DIMMs (LVDIMMs, 1.35V)
• Improved efficiency in Turbo Boost mode
For more information about Intel Xeon processors in HP ProLiant servers, read the technology brief
“AMD Opteron™ and Intel® Xeon® x86 processors in industry-standard servers” at
http://h20000.www2.hp.com/bc/docs/support/SupportManual/C02731435/C02731435.pdf
Figure 2. Block diagram of Xeon 5600 series processor architecture with bi-directional QuickPath Interconnect (QPI) links
How HP incorporates QuickPath technology
Bi-directional QuickPath Interconnect (QPI) links directly connect the processors to each other and connect each processor to the distributed, shared memory and the I/O chipset. Each link transfers up to 6.4 gigatransfers per second (GT/s), providing 12.8 gigabytes per second (GB/s) in each direction, for a total bandwidth of 25.6 GB/s per link.
To reduce power requirements, the ROM-Based Setup Utility (RBSU)[2] allows QPI links to enter a low
power state when the links are not active. This feature is enabled by default. The Intel processor
determines when to put the QPI links into a low power state with minimal performance impact.
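The arithmetic behind these figures is straightforward: QPI moves 2 bytes per transfer in each direction of a link. A minimal sketch in Python, using only the numbers quoted above:

    # QPI bandwidth arithmetic for the figures quoted above (illustrative).
    GT_PER_SECOND = 6.4       # gigatransfers per second per link
    BYTES_PER_TRANSFER = 2    # QPI carries 16 data bits per transfer, per direction

    per_direction_gb_s = GT_PER_SECOND * BYTES_PER_TRANSFER   # 12.8 GB/s
    per_link_gb_s = 2 * per_direction_gb_s                    # 25.6 GB/s, both directions

    print(per_direction_gb_s, per_link_gb_s)                  # 12.8 25.6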
HP processor installation tool
Intel Xeon 5600 series processors use a processor socket technology called Land Grid Array (LGA). The processor package no longer has pins. Instead, the package has pads of gold-plated copper that touch processor socket pins on the motherboard.
Because pin damage could require replacing the motherboard, HP engineers developed a tool to
simplify processor installation and reduce the possibility of damaging the socket pins (Figure 3).
Figure 3. Diagram showing how the processor installation tool simplifies installation and prevents pin damage
[2] Additional information is provided in the "HP ROM-Based Setup Utility User Guide": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00191707/c00191707.pdf.
HP processor heat sink
The processor heat sinks in ProLiant server blades are smaller than those of rack-mount servers to
allow more space on the server blades for DIMM slots and hard drives. The server blade heat sinks
have vapor chamber bases, thinner fins, and tighter fin pitch than previous designs to achieve the
largest possible heat transfer surface in the smallest possible package (Figure 4).
Figure 4. Processor heat sink using fully ducted design (left) and traditional heat sink used in a 1U rack-mount server (right), shown in top and side views
Memory technologies
DDR3, the third generation of DDR memory, delivers higher speeds, lower power consumption, and
more effective heat dissipation than DDR1 and DDR2 memory. It is an ideal memory solution for
bandwidth-hungry systems equipped with multi-core processors. Depending on the memory
configuration and processor model, the memory may run at 1333, 1066, or 800 megatransfers per second (MT/s).[3] DDR3 operates at 1.5 V (standard) or 1.35 V (low voltage), which reduces both power consumption and heat generation. DDR3 DIMMs have 240 pins, the same number and size as DDR2 DIMMs, but the two generations are electrically incompatible and have different key notch locations.
DDR3 memory is available in two forms: Unbuffered (UDIMMs) and Registered (RDIMMs). Both UDIMMs and RDIMMs support error correcting code (ECC). Administrators can use either RDIMMs or UDIMMs, but RDIMM and UDIMM memory cannot be mixed within a single server. When choosing DDR3 memory configurations, the following guidelines are useful (a capacity sketch follows the list):
• UDIMM configurations are limited to a maximum of two UDIMMs per memory channel because the memory controller must drive the address and command signals to each DRAM chip on a channel. This results in a 48 GB maximum configuration when using 4 GB UDIMMs. Because they require fewer components, UDIMMs are typically less expensive than RDIMMs.
• RDIMM configurations can provide larger memory capacity because the memory controller only drives the address and command signals to a single register chip, thereby reducing the electrical load on the memory controller. Users requiring large memory footprints can install up to two 16 GB quad-rank RDIMMs per channel, for a total of 192 GB.
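Both capacity limits reduce to one multiplication: sockets × channels per socket × DIMMs per channel × DIMM size. A minimal sketch, assuming the two-socket, three-channel-per-processor topology these blades use:

    # Maximum memory for a fully populated two-socket Xeon 5600 system.
    def max_capacity_gb(dimms_per_channel, dimm_size_gb,
                        sockets=2, channels_per_socket=3):
        return sockets * channels_per_socket * dimms_per_channel * dimm_size_gb

    print(max_capacity_gb(2, 4))    # UDIMMs, 2 x 4 GB per channel  -> 48
    print(max_capacity_gb(2, 16))   # RDIMMs, 2 x 16 GB per channel -> 192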
[3] For additional information about HP server memory technology, go to the HP server memory site: http://h18004.www1.hp.com/products/servers/options/memory-description.html.
Table 1 summarizes the choices available for DDR3 memory.
Table 1. DDR3 memory options

Type                Capacity (GB)   Rank     Max. data rate (MT/s),   Max. data rate (MT/s),
                                             1 or 2* DIMMs/channel    3 DIMMs/channel
PC3-10600R RDIMM    2               dual     1333                     800
PC3-10600R RDIMM    4               dual     1333                     800
PC3-10600R RDIMM    8               dual     1333                     800
PC3-10600E UDIMM    1               single   1333**                   N/A
PC3-10600E UDIMM    2               dual     1333**                   N/A
PC3L-10600R RDIMM   4               single   1333                     800
PC3L-10600R RDIMM   8               dual     1333                     800
PC3-8500R RDIMM     16              quad     1066                     N/A
PC3-10600E UDIMM    4               dual     1333**                   N/A

* HP branded DIMMs only
** 1333 MT/s in platforms with 12 DIMM slots; 1066 MT/s in platforms with 18 DIMM slots
For detailed memory configuration guidelines, use the Online DDR3 Memory Configuration Tool
available on the HP website: www.hp.com/go/DDR-3memory-configurator.
I/O technologies
HP has been a leader in the development and implementation of industry-standard I/O technology
and continues to be an active member of the PCI Special Interest Group. HP ProLiant c-Class server
blades support PCI Express (PCIe), serial-attached SCSI (SAS), and serial ATA (SATA) I/O technologies; FlexFabric CNAs and Multifunction 1Gb or 10Gb Ethernet adapters; 4Gb and 8Gb Fibre Channel; and 4X QDR (40Gb) InfiniBand.
PCI Express technology
PCIe allows administrators to add internal expansion cards that support various system capabilities
and connection to external storage blades. The PCIe serial interface provides point-to-point
connections between the chipset I/O controller hub and I/O devices. Each PCIe serial link consists of
one or more dual-simplex lanes. Each lane contains a send pair and a receive pair to transmit data at
the signaling rate in both directions simultaneously (Figure 5). PCIe 1.0 has a signaling rate of
2.5Gb/s per direction per lane. PCIe 2.0 doubles this per-lane signaling rate to 5Gb/s. This
flexibility allows slower devices to transmit on a single lane with a relatively small number of pins
while faster devices can transmit on more lanes as required.[4]
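These signaling rates translate into usable throughput once the 8b/10b encoding that PCIe 1.0 and 2.0 use is accounted for: every 10 bits on the wire carry 8 bits of data. A rough sketch of the arithmetic (illustrative, not a vendor formula):

    # Effective one-direction data rate of a PCIe xN link, in MB/s.
    def pcie_link_mb_s(signaling_gb_s, lanes):
        data_gb_s = signaling_gb_s * 8 / 10    # strip 8b/10b encoding overhead
        return data_gb_s * 1000 / 8 * lanes    # Gb/s -> MB/s, scaled by lane count

    print(pcie_link_mb_s(2.5, 1))   # PCIe 1.0 x1:  250 MB/s per direction
    print(pcie_link_mb_s(5.0, 8))   # PCIe 2.0 x8: 4000 MB/s per direction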
[4] For additional information about PCI Express technology, see the technology brief titled "HP local I/O technology for ProLiant and BladeSystem servers": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00231623/c00231623.pdf.
Figure 5. PCIe bandwidth
PCIe 2.0 is backward compatible with PCIe 1.0: A PCIe 2.0 device can be used in a PCIe 1.0 slot
and a PCIe 1.0 device can be used in a PCIe 2.0 slot. For best performance, however, each card
should be used in a slot that supports its logical link size.
A ProLiant G7 option allows all expansion slots to run at PCIe 1.0 rather than PCIe 2.0 speed.
Enabling this option saves power and provides backward compatibility with cards that may not
correctly operate in PCIe 2.0 slots. Administrators can control expansion slot speed through the RBSU
under Advanced Power Management Options (Figure 6).
Figure 6. Example of RBSU PCI Express Generation 2.0 support menu
The options available from the PCI Express Generation 2.0 Support menu include the following (summarized in the sketch after this list):
• Auto – Configures the card for PCI-E Generation 1.0 or 2.0 operation based on the capability the card reports. This mode also works around an issue with certain PCI-E Generation 1.0 cards that do not operate properly in PCI-E Generation 2.0 slots.
• Force PCI-E Generation 2 – Like Auto mode, this option configures the card for PCI-E Generation 1.0 or 2.0 operation based on the card's capability. In addition, it works around an issue with certain PCI-E Generation 2.0 cards that get configured for Generation 1.0 instead of Generation 2.0 because they do not report their capabilities properly when the system is configured for Auto mode. This mode does not, however, work around the issue with certain PCI-E Generation 1.0 cards that do not operate properly in Generation 2.0 slots.
• Force PCI-E Generation 1 – Configures all PCI-E devices in the system to operate at PCI-E Generation 1.0 speeds. This mode can allow cards to function properly if they have issues operating in PCI-E Generation 2.0 slots and do not operate properly when the system is configured for the other modes. It can also be used to save power when maximum PCI-E bandwidth is not required.
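The difference between the three modes can be read as a small decision rule. The sketch below is our paraphrase of the menu descriptions above, not HP firmware logic; reported_gen is the capability a card advertises, and actual_gen is what it can really do:

    # How each RBSU mode resolves a card's PCIe generation (conceptual sketch).
    def slot_generation(mode, reported_gen, actual_gen):
        if mode == "force_gen1":
            return 1                 # all devices train at Generation 1.0 speeds
        if mode == "force_gen2":
            return actual_gen        # ignore a faulty capability report
        return reported_gen          # "auto": honor the reported capability

    # A Generation 2.0 card that mis-reports itself as Generation 1.0:
    print(slot_generation("auto", 1, 2))        # 1 (left at Gen 1 speeds)
    print(slot_generation("force_gen2", 1, 2))  # 2 (worked around)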
HP Smart Array controllers
ProLiant c-Class server blades with Xeon 5600 series processors support internal hard drives through
integrated or optional HP Smart Array controllers. For example, the ProLiant BL460c G7 server blade
uses the Smart Array P410i controller and supports SAS and SATA SFF drives.
Smart Array controllers can use an optional battery-backed write cache (BBWC) or flash-backed write
cache (FBWC). The battery or flash device prevents information in the buffer from being lost in case of
an unexpected system shutdown. In the case of a complete system failure, you can move the controller
and disks to a different server, where the controller will flush out the cache to the disks after power is
restored. In the case of a controller failure, you can move the cache module and disks to a working
controller, where the cache will be flushed out to the disks. For BBWC, the battery will last up to 48
hours without receiving any power from the computer. FBWC eliminates this 48-hour battery life
limitation because the data is written to flash media and will be posted to the disk drive on the next
power up.
SAS and SATA small form factor hard drives
SAS[5] is a serial communication protocol for direct-attached storage devices such as high-performance SAS and high-capacity SATA[6] small form factor (SFF) disk drives. SAS is a point-to-point architecture in which each device connects directly to a SAS port rather than sharing a common bus, as parallel SCSI devices do. Point-to-point links increase data throughput and improve the ability to locate and fix disk failures.

[5] For more information about SAS technology, refer to the HP technology brief titled "Serial Attached SCSI storage technology": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01613420/c01613420.pdf.
[6] For more information about SATA technology, refer to the technology brief "Serial ATA technology": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00301688/c00301688.pdf.
SFF drives provide higher performance than large form factor drives. The smaller SFF platters reduce
seek times because the heads have a shorter distance to travel. At this writing, the peak data transfer rate for SAS drives supported with the Smart Array P410i firmware is 6Gb/s in full-duplex mode. RAID performance improves as the number of spindles increases.
The SAS architecture increases storage options for BladeSystem servers, providing simple in-rack
shared or zoned direct attach SAS storage. The architecture combines the simplicity and cost
efficiency of direct-attached storage with the flexibility and resource utilization of a SAN, delivering a
simple storage solution that’s ideal for growing capacity requirements.
Solid state drives
In late 2008, HP introduced solid state drives (SSDs) for use in specific BladeSystem environments.
SSDs connect to the host system using the same protocols as disk drives, but they store and retrieve
file data in flash memory arrays rather than on spinning media. SSDs eliminate the latency of
traditional hard drives by eliminating seek times and by powering up quickly. They also achieve high random-read performance. HP SSDs provide a level of reliability equivalent to or slightly greater than that of current HP Midline disk drives for servers.
Solid state memory (NAND) technology, used on SSDs, provides higher capacity, reliability, and
performance for local, low-power boot drives than USB keys provide. HP server SSD interfaces are
compatible with traditional disk drives connected to a SATA or SAS controller. This allows
benchmarking and direct comparison of their external performance with that of disk drives to
determine their suitability in various application environments.[7]

[7] For more information about solid state drive technology, refer to the HP technology brief titled "Solid state drive technology for ProLiant servers": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01580706/c01580706.pdf.
Optional mezzanine cards
HP ProLiant c-Class server blades use two types of mezzanine cards to connect to the various
interconnect fabrics such as Fibre Channel, Ethernet, serial-attached SCSI, or InfiniBand. Type I and
Type II mezzanine cards differ only in the amount of power allocated to them by the server and in the
physical space they occupy on the server blade. Type I mezzanine cards have slightly less power
available to them and are slightly smaller. Type I mezzanine cards are compatible with all mezzanine
connectors in ProLiant c-Class server blades. Type II mezzanine cards are compatible with Mezzanine
2 or 3 connectors in full-height c-Class server blades. Type II mezzanine cards are also compatible
with Mezzanine 2 connectors in half-height c-Class server blades.
Both types of mezzanine cards use a 450-pin connector, enabling up to eight lanes of differential
transmit and receive signals. Because the connections between the device bays and the interconnect
bays are hard-wired through the signal midplane, the mezzanine cards must be matched to the
appropriate type of interconnect module. For example, a Fibre Channel mezzanine card must be
placed in the mezzanine connector that connects to an interconnect bay holding a Fibre Channel
switch. For the most up-to-date information about the c-Class mezzanine card options, go to the HP
website: http://h18004.www1.hp.com/products/blades/components/c-class-interconnects.html and
http://h18004.www1.hp.com/products/blades/components/vc-interconnects.html.
Networking technologies
Multifunction 1Gb or 10Gb Ethernet network adapters integrated on all c-Class server blades provide several advantages[8]:
• TCP/IP Offload Engine (TOE) for Microsoft® Windows® operating systems improves CPU efficiency.
• Receive-side Scaling (RSS) for Windows dynamically load balances incoming traffic across all processors in a server.
• iSCSI Acceleration (available on some integrated network adapters) offloads some of the work of creating iSCSI packets from the processor onto the network controller, freeing up the processor for other work.
• iSCSI boot for Linux® operating systems makes it possible to boot the server from a storage area network (SAN) and eliminates the need for disk drives in a server.
• HP Virtual Connect (VC) and Flex-10 provide up to 16 FlexNICs across 4 ports to simplify server connection setup and administration.

[8] For complete specifications of HP network adapter products, go to www.hp.com/go/ProLiantNICs.
TCP/IP Offload Engine
The increased bandwidth of Gigabit Ethernet networks increases demand for CPU cycles to manage
the network protocol stack. This means that performance of even a fast CPU will degrade while
simultaneously processing application instructions and transferring data to or from the network. Computers most susceptible to this problem are application servers, web servers, and file servers that have many concurrent connections.
The ProLiant TCP/IP Offload Engine (TOE) for Windows speeds up network-intensive applications by
offloading TCP/IP-related tasks from the processors onto the network adapter. TOE network adapters
have on-board logic to process common and repetitive tasks of TCP/IP network traffic. This effectively
eliminates the need for the CPU to segment and reassemble network data packets. Eliminating this
work significantly increases the application performance of servers attached to gigabit Ethernet
networks.
TOE is included on integrated Multifunction Gigabit Ethernet adapters and optional multifunction
mezzanine cards. It is supported on Microsoft Windows Server 2003 when the Scalable Networking
Pack is installed. With the delivery of Windows Server 2008, the TCP/IP Offload Chimney that
shipped in the Scalable Networking Pack is included as part of the latest Windows operating system.
Receive-side Scaling (RSS)
RSS balances incoming short-lived traffic across multiple processors while preserving ordered packet
delivery. Additionally, RSS dynamically adjusts incoming traffic as the system load varies. As a result,
any application with heavy network traffic running on a multi-processor server will benefit. RSS is
independent of the number of connections, so it scales well. This makes RSS particularly valuable to
web servers and file servers handling heavy loads of short-lived traffic.
For RSS support on servers running Windows Server 2003, Scalable Networking Pack must be
installed. Windows Server 2008 supports RSS as part of the operating system.
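Conceptually, RSS hashes each connection's addressing fields into an indirection table that selects a CPU, so every packet of one connection lands on the same processor (preserving order) while different connections spread across processors. A toy sketch of that dispatch; real adapters use a keyed Toeplitz hash, for which CRC32 stands in here:

    import zlib

    NUM_CPUS = 4
    indirection_table = [i % NUM_CPUS for i in range(128)]

    def rss_cpu(src_ip, src_port, dst_ip, dst_port):
        # Same 4-tuple -> same hash -> same CPU, so packet order is preserved.
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return indirection_table[zlib.crc32(key) % len(indirection_table)]

    print(rss_cpu("10.0.0.5", 40000, "10.0.0.9", 80))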
iSCSI Acceleration
Accelerated iSCSI offloads the iSCSI function to the NIC rather than taxing the server CPU.
Accelerated iSCSI is enabled by the HP ProLiant Essentials Accelerated iSCSI Pack that is used with
certain embedded Multifunction NICs in Windows and Linux environments.
iSCSI boot for Linux
iSCSI boot for Linux is available on BladeSystem NC370i, NC373i, and NC373m Gigabit Ethernet
adapters. iSCSI boot allows the host server to boot from a remote OS image located on a SAN within
a Red Hat or SUSE Linux environment. The host server uses an iSCSI firmware image (iSCSI boot
option ROM), making the remote disk drive appear to be a local, bootable C: drive. Administrators
can configure the server to connect to and boot from the iSCSI target disk on the network; the server then downloads the OS image from that disk. The HP iSCSI boot solution also includes scripts that significantly simplify the installation process. Adding an iSCSI HBA card is not required.
Virtual Connect and Flex-10
Virtual Connect technology is a set of interconnect modules and embedded software for c-Class
enclosures that simplifies the setup and administration of server connections. For a listing of HP Virtual
Connect components, go to http://h18004.www1.hp.com/products/blades/virtualconnect/index.html.
Virtual Connect implements server-edge virtualization so that server administrators can upgrade,
replace, or move server blades within their enclosures without changes being visible to the external
LAN and SAN environments. HP recommends using Virtual Connect or managed switches to reduce
cabling and management overhead.
Like Ethernet and Fibre Channel switches, Virtual Connect modules slide into the interconnect bays of
c-Class enclosures. To support the Virtual Connect Fibre Channel module, the enclosure must have at
least one Virtual Connect Ethernet module, because the Virtual Connect Manager software runs on a
processor that resides on the Ethernet module (or the Virtual Connect FlexFabric module).
10
When the LAN and SAN connect to the pool of servers, the server administrator uses Virtual Connect
Manager to define a server connection profile for each server. The Virtual Connect Manager creates
bay-specific profiles, assigns unique MAC addresses and WWNs to these profiles, and administers
them locally. Network and storage administrators can establish all LAN and SAN connections once
during deployment. If servers are later deployed, added, or changed, no connection changes are
needed because Virtual Connect keeps the profile for that LAN and SAN connection constant. When
you have more than four enclosures, you can use Virtual Connect Enterprise Manager to manage up to 250 Virtual Connect domains (1,000 enclosures or 16,000 servers).
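The essential idea is that the network identities live in the bay's profile rather than on the blade hardware. A hypothetical sketch of that shape (field names and values are ours for illustration, not the Virtual Connect data model):

    from dataclasses import dataclass, field

    @dataclass
    class ConnectionProfile:
        bay: int
        macs: list = field(default_factory=list)   # VC-assigned MAC addresses
        wwns: list = field(default_factory=list)   # VC-assigned WWNs

    # Replacing the blade in bay 3 leaves these identities, and therefore the
    # external LAN and SAN configuration, unchanged. Example values invented.
    profile = ConnectionProfile(bay=3, macs=["00-17-A4-77-00-00"],
                                wwns=["50:06:0b:00:00:c2:62:00"])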
The advantages of using Flex-10 technology are significant. The 10GbE infrastructure becomes more
cost efficient and easier to manage because Flex-10 allows customers to fully utilize the 10-Gb
bandwidth. Customers can also match bandwidths to the network functions, such as management
console or production data. Flex-10 provides four times more network ports per server blade, cutting
network connection costs by up to 75% per virtual server. The fact that Flex-10 is hardware based
means that multiple FlexNICs are added without the additional processor overhead or latency
associated with server virtualization. Flex-10 also reduces infrastructure costs by eliminating the need
for additional server NIC mezzanine cards and interconnect modules. With the Virtual Connect
FlexFabric module, one Flex-10 connection per port can function as a FlexHBA.
Flex-10 technology uses two hardware components:
• The HP Virtual Connect Flex-10 10Gb Ethernet module or FlexFabric 10Gb/24-Port Module
• Either the embedded HP NC532i Flex-10 adapter, an embedded HP NC553i FlexFabric adapter, or an HP NC551m FlexFabric adapter
The Virtual Connect Flex-10 10Gb Ethernet or FlexFabric 10Gb/24-Port Module is required to
manage the 10GbE server connections to the data center network. The 10Gb Flex-10 LOM,
integrated FlexFabric adapter, and mezzanine cards are NICs, each with two 10Gb ports. Each 10Gb port can be configured as one to four individual Flex-10 functions. The server ROM and the operating system or hypervisor recognize each FlexNIC or FlexHBA as an individual NIC or storage connection.
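A sketch of the partitioning rule just described: one 10Gb physical port presents up to four FlexNICs whose combined allocation cannot exceed the port. The function and its validation are our illustration, not the Virtual Connect interface:

    def partition_port(allocations_gb):
        # Split one 10 Gb port into one to four FlexNIC bandwidth allocations.
        if not 1 <= len(allocations_gb) <= 4:
            raise ValueError("a port presents one to four FlexNICs")
        if sum(allocations_gb) > 10:
            raise ValueError("allocations exceed the 10 Gb port")
        return {f"FlexNIC-{i + 1}": gb for i, gb in enumerate(allocations_gb)}

    # For example: management console, migration traffic, and production data.
    print(partition_port([0.5, 2.0, 7.5]))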
Full details about Flex-10 technology are available on the HP technology website in the technology
brief titled “HP Virtual Connect and Flex-10 technology”:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608922/c01608922.pdf.
Full details about Virtual Connect technology are available on the HP technology website in the technology brief titled "HP Virtual Connect technology implementation for the HP BladeSystem c-Class": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf.
Configuration and management technologies
The HP BladeSystem c-Class provides an intelligent infrastructure through extensive integrated
management capabilities. These capabilities are based on three unique HP components:
• BladeSystem Onboard Administrator
• Integrated Lights-Out 3 (iLO 3) management processor on each server blade
• Interconnect module management processors, such as the HP Virtual Connect Manager
Integrating all these management capabilities provides powerful hardware management for remote
administration, local diagnostics, and troubleshooting.
For detailed information about c-Class management technologies and capabilities, refer to the HP
white paper titled “Managing HP BladeSystem c-Class systems”:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814176/c00814176.pdf.
BladeSystem Onboard Administrator
The heart of c-Class enclosure management is the HP BladeSystem Onboard Administrator module
located in the enclosure. It performs four management functions for the entire enclosure:
• Managing power and cooling
• Detecting component insertion and removal
• Identifying components and required connectivity
• Controlling components
Through the Onboard Administrator controller, you can access real-time power and temperature data
to understand your existing power and cooling environments. Onboard Administrator allocates power
to the device bays based on the specific configuration of each blade in the enclosure. As you insert
blades into an enclosure, the Onboard Administrator discovers each blade and allocates power,
based on actual measured power requirements.
The BladeSystem Onboard Administrator works with the iLO management processor on each server
blade to form the core of the management architecture for HP BladeSystem c-Class.
To identify a component, the BladeSystem Onboard Administrator reads a Field-Replaceable Unit
(FRU) Electrically Erasable Programmable Read-Only Memory (EEPROM) that contains specific factory
information about the component, such as product name, part number, and serial number. The
BladeSystem Onboard Administrator accesses server blade FRU EEPROMs through their iLO
management processors.
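The factory data involved is small; a hypothetical record with the fields the text names might look like the following (purely illustrative; the real FRU EEPROM is a binary format read through iLO):

    from dataclasses import dataclass

    @dataclass
    class FruRecord:
        product_name: str      # e.g., a server blade or mezzanine card model
        part_number: str
        serial_number: str

    # Example values are invented placeholders.
    fru = FruRecord("ProLiant BL460c G7", "000000-XXX", "XX00000000")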
The server blades contain several FRU EEPROMs: one on the server board that contains server
information and embedded NIC information, and one on each of the installed mezzanine option
cards. Server blade control options include auto login to the iLO web interface, remote server consoles, virtual power control, and boot order control. The Onboard Administrator also exposes extensive server hardware information, including BIOS and iLO firmware versions, server name, network adapter and option card port IDs, and port mapping. The BladeSystem Onboard
Administrator provides easy-to-understand port mapping information for each of the server blades and
interconnect modules in the enclosure. To simplify the installation of the various mezzanine cards and
interconnect modules, the BladeSystem Onboard Administrator uses an electronic keying process to
detect mismatches.
Integrated Lights-Out 3 for ProLiant server blades
Each G7 HP BladeSystem c-Class Server Blade includes an embedded iLO 3 management processor
to configure, update, and operate the server remotely. The c-Class enclosure includes an Ethernet
management network to aggregate all iLO 3 management communications across the entire
enclosure. This management network directly connects iLO 3 processors to the BladeSystem Onboard
Administrator through the BladeSystem Onboard Administrator tray. The BladeSystem Onboard
Administrator uses this network to manage pooled enclosure power and cooling.
HP Insight Control
HP Insight Control provides essential management for HP BladeSystem lifecycles, including proactive
health management, lights-out remote control, optimized power use, rapid server deployment and
migration, performance analysis, and virtual machine management.
The software is delivered on DVD media. It includes an integrated installer for rapidly and consistently
deploying and configuring HP Insight software. The integrated installer includes a wizard-based
interface that presents a series of configuration questions. When the user has answered the
configuration questions, each of the selected components will be deployed in a single process. The
HP Insight Software Advisor checks to ensure that the host central management server meets all
installation prerequisites before initiating the installation process. When the installation is complete,
the Insight Software Update Utility will check for available software updates and allow administrators
to download them.
HP Insight Control installs and licenses the following components for use:
• HP Systems Insight Manager
• HP iLO Advanced for BladeSystem
• HP Insight Control power management
• HP Insight Control server deployment
• HP Insight Control server migration
• HP Insight Control performance management
• HP Insight Control virtual machine management
HP Insight Control integrates with leading enterprise management platforms such as Microsoft System
Center and VMware vCenter Server. It includes one year of 24 x 7 HP Software Technical Support
and Update Service, which can be upgraded to 3, 4, or 5 years.
Power and thermal technologies
HP engineers have developed the following power and thermal technologies and components to
manage power for ProLiant G7 server blades:
• Thermal sensors
• Power meter
• HP Power Regulator for ProLiant
• HP Dynamic Power Capping
• HP Enclosure Dynamic Power Capping
Thermal sensors
ProLiant G7 server blades include many more embedded thermal sensors than previous ProLiant
server blades. Up to 64 thermal sensors, referred to as a “Sea of Sensors,” are located on DIMMs,
hard drives, and elsewhere throughout each server blade.
Power meter
An integrated power meter in HP ProLiant c-Class server blades analyzes actual server power use. The
BladeSystem Onboard Administrator can access the power meter through iLO 3 or through external
power management software such as HP Insight Control power management (ICpm). ICpm also
consolidates power data for multiple servers to a central location. IT departments can use this
information to charge business units or third parties for the actual energy costs associated with
workload processing. The BladeSystem Onboard Administrator provides instant and time-averaged
views of the power consumption of individual servers or of all servers within a c-Class BladeSystem
enclosure.
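As a sketch of the chargeback arithmetic this enables, a time-averaged power reading converts directly to an energy cost (the readings and rate below are invented for illustration):

    def energy_cost(avg_power_watts, hours, rate_per_kwh):
        # Convert average power over a period into a billable energy cost.
        kwh = avg_power_watts / 1000 * hours
        return kwh * rate_per_kwh

    # A blade averaging 320 W across a 720-hour month at $0.10/kWh:
    print(round(energy_cost(320, 720, 0.10), 2))   # 23.04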
HP Power Regulator for ProLiant
HP Power Regulator for ProLiant servers provides firmware-based speed stepping for Intel x86
processors. Power Regulator improves server energy efficiency by giving processors full power when
they need it and reducing power when they do not. This power management feature allows ProLiant
servers with policy-based power management to control processor power states. You can configure
Power Regulator for HP Static High Performance mode, HP Static Low Power mode, or HP Dynamic
Power Savings mode. The latter automatically adjusts available power to match processor demand.
Additional information on the HP Power Regulator is provided in the paper titled “Power Regulator for
ProLiant servers”:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00300430/c00300430.pdf.
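Read in terms of processor performance states (P-states), the three policies behave roughly as follows. This is a simplified sketch of the policy descriptions above, not HP's firmware algorithm:

    # The three Power Regulator policies as P-state choices (conceptual).
    P_STATES = ["P0", "P1", "P2", "P3"]    # P0 = fastest, P3 = lowest power

    def choose_pstate(mode, utilization):
        if mode == "static_high":
            return "P0"                    # always full clock speed
        if mode == "static_low":
            return P_STATES[-1]            # always minimum power
        # "dynamic": follow demand (simplified two-threshold rule)
        if utilization > 0.75:
            return "P0"
        return P_STATES[-1] if utilization < 0.25 else "P1"

    print(choose_pstate("dynamic", 0.9))   # P0 under heavy load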
HP Dynamic Power Capping
HP Dynamic Power Capping maintains server power consumption at or below the cap value set by an administrator. Dynamic Power Capping can bring a server experiencing a
sudden increase in workload back under its power cap in less than one-half second. This fast
response prevents any surge in power demand that could cause a typical data center circuit breaker
to trip. Dynamic Power Capping prevents tripping circuit breakers that have a specified trip time of
three seconds or longer at 50° C and 150 percent overload.
The ability to keep server power consumption below the power cap in real time means that Dynamic
Power Capping can be an effective tool in planning and managing both electrical provisioning and
cooling requirements in the data center. An administrator can electrically provision a PDU or a rack to
something less than the full faceplate power rating of all the servers supported because Dynamic
Power Capping guards against any unexpected change in power consumption that might cause a
circuit breaker to trip.
Support for Dynamic Power Capping requires iLO 2 version 1.70 or later, system BIOS 2008.11.01
or later, and BladeSystem Onboard Administrator firmware version 2.32 or later for HP BladeSystem
enclosures.
For detailed information about HP Dynamic Power Capping, refer to the HP technology brief titled
“HP Power Capping and HP Dynamic Power Capping for ProLiant servers”:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01549455/c01549455.pdf.
HP Enclosure Dynamic Power Capping
HP has designed Enclosure Dynamic Power Capping technology specifically for BladeSystem
enclosures. It allows an administrator to set a power cap on an enclosure by using either HP Insight Control power management (ICpm) version 2.0 or later, or Onboard Administrator (OA) firmware version 2.30 or later.[9] The OA monitors and maintains the enclosure's power cap by adjusting cap
levels on individual server blades. Special hardware on each server blade lowers the processor
performance states (P-states), throttles the processor clock, or both, to keep the server’s energy use
below the cap.
The total power for an enclosure includes the power used by the managed server blades as well as
the power used by unmanaged devices such as I/O peripherals (switches) and cooling fans. The OA
cannot control the power use of unmanaged devices, so it calculates a blade server power budget
that represents the maximum amount of power the servers can consume. With the blade power
budget as its limit, the OA software uses a sophisticated algorithm to increase the power caps of
busier server blades and decrease the caps of less busy server blades. This power reallocation
process is repeated every 20 seconds. Normally, the OA software can quickly raise a low power cap for an idle server blade when it receives new work, with little overall impact on enclosure performance. If the power demand of all the server blades exceeds the available power, the OA shares the available power among all busy server blades.
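The reallocation described above reduces to sharing a fixed blade power budget in proportion to demand. A toy version of that loop body (the OA's actual algorithm is more sophisticated and is not public):

    def reallocate_caps(blade_budget_watts, demands_watts):
        # Share the blade power budget across blades according to demand.
        total = sum(demands_watts)
        if total <= blade_budget_watts:
            return list(demands_watts)         # every blade gets its demand
        scale = blade_budget_watts / total     # otherwise scale proportionally
        return [d * scale for d in demands_watts]

    # Enclosure cap minus fans and switches leaves, say, 2400 W for 4 blades:
    print(reallocate_caps(2400, [900, 700, 500, 600]))   # caps sum to 2400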
[9] See the Blade Servers Support Matrix at http://h18004.www1.hp.com/products/servers/management/dynamic-power-capping/support.html.
Data security technology with the Trusted Platform Module
The Trusted Platform Module (TPM) is a hardware-based system security feature that can securely store
information such as passwords and encryption keys to authenticate the platform. Administrators can
also use TPM to store platform measurements that help ensure that the platform remains trustworthy.
ProLiant G7 server blades support an optional TPM v1.2. A rivet supplied with the module attaches and secures it to the system board. To prevent possible damage to the TPM
module or to the system board, the TPM cannot be removed from the board once it has been
installed. For additional information about the TPM, see the HP technology brief titled “Data security in HP
ProLiant servers using the Trusted Platform Module and Microsoft® Windows® BitLocker™ Drive Encryption”:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01681891/c01681891.pdf.
With the TPM and a future firmware upgrade, ProLiant G7 servers will support Intel Trusted Execution
Technology (TXT). Intel TXT complements anti-virus software and increases protection against malicious
software attacks aimed at the hypervisor, BIOS, and firmware.
For more information
For additional information, refer to the resources listed below.
AMD Opteron™ and Intel® Xeon® x86 processors in industry-standard servers:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/C02731435/C02731435.pdf

"Server virtualization technologies for x86-based HP BladeSystem and HP ProLiant servers" technology brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01067846/c01067846.pdf

General HP BladeSystem information:
http://www.hp.com/go/bladesystem/

HP BladeSystem c-Class documentation:
http://h71028.www7.hp.com/enterprise/cache/316735-0-0-0121.html

HP BladeSystem c-Class Enclosure Setup and Installation Guide:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698286/c00698286.pdf

HP ROM-Based Setup Utility User Guide:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00191707/c00191707.pdf

HP BladeSystem c-Class interconnects:
www.hp.com/go/bladesystem/interconnects

Technology briefs about HP BladeSystem:
http://h18004.www1.hp.com/products/servers/technology/whitepapers/proliant-servers.html

HP BladeSystem Power Sizer:
www.hp.com/go/hppoweradvisor

"Serial Attached SCSI storage technology" technology brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01613420/c01613420.pdf

"Serial ATA technology" technology brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00301688/c00301688.pdf

"Memory technology evolution: an overview of system memory technologies" technology brief:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00256987/c00256987.pdf

Drive technology overview, 3rd edition:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01071496/c01071496.pdf
Call to action
Send comments about this paper to TechCom@HP.com.
© 2007, 2008, 2009, 2010 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such products and services.
Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for
technical or editorial errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Linux is a registered trademark of Linus Torvalds.
Intel and Xeon are registered trademarks of Intel Corporation in the U.S. and other countries and
are used under license.
TC100807TB, August 2010