Planning the Fabric

Hyper Messaging Protocol (HMP)


Hyper Messaging Protocol (HMP) is an HP-patented, high-performance cluster interconnect protocol. HMP provides a reliable, high-speed, low-latency, low-CPU-overhead datagram service to applications running on the HP-UX operating system.

HMP was jointly developed with Oracle Corp. The resulting feature set was tuned to enhance the scalability of the Oracle Cache Fusion clustering technology. It is implemented using Remote DMA (RDMA) paradigms.

HMP is integral to the HP-UX HyperFabric driver. It can be enabled or disabled at HyperFabric initialization using the clic_init command or SAM. The HMP functionality is used by the applications listed in the following "Application Availability" section.

HMP significantly enhances the performance of parallel and technical computing applications.

HMP firmware on HyperFabric adapter cards provides a shortcut that bypasses several layers of the protocol stack, boosting link performance and lowering latency. By avoiding interrupts and buffer copies in the protocol stack, it reduces the CPU overhead of communication processing.
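The manual does not show HMP's programming interface, so the following self-contained C sketch is only a conceptual illustration of the copy-avoiding, stack-bypassing send path described above, contrasted with a conventional copy-through-a-staging-buffer path. Every name in it (hmp_region_t, hmp_register_buffer, hmp_post_send) is a hypothetical stub invented for this sketch, not the HyperFabric driver or firmware API.

    /* Illustrative sketch only: hmp_register_buffer and hmp_post_send are
     * hypothetical stand-ins, not the real HyperFabric API. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A registered (pinned) buffer that the adapter could address directly. */
    typedef struct {
        void   *addr;
        size_t  len;
    } hmp_region_t;

    /* Hypothetical one-time registration: in an RDMA-style design the buffer
     * is pinned once, so later sends need no per-message copy. */
    static hmp_region_t hmp_register_buffer(void *addr, size_t len) {
        hmp_region_t r = { addr, len };
        return r;
    }

    /* Hypothetical zero-copy send: the "adapter" reads straight from the
     * registered region; nothing is copied in the host protocol stack. */
    static void hmp_post_send(const hmp_region_t *r, size_t len) {
        printf("posted %zu bytes directly from %p (no intermediate copy)\n",
               len, r->addr);
    }

    /* Conventional path for contrast: each send copies into a staging buffer,
     * which is the per-message CPU overhead the RDMA paradigm avoids. */
    static void socket_style_send(const void *data, size_t len) {
        void *staging = malloc(len);
        memcpy(staging, data, len);         /* extra copy on every message */
        printf("copied %zu bytes into a staging buffer before sending\n", len);
        free(staging);
    }

    int main(void) {
        char msg[256] = "cache fusion block";
        hmp_region_t region = hmp_register_buffer(msg, sizeof msg);

        socket_style_send(msg, sizeof msg); /* copy on every send        */
        hmp_post_send(&region, sizeof msg); /* register once, send many  */
        return 0;
    }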

Application Availability

The following are the two families of applications that can use HMP over the HyperFabric interface:

• Oracle 9i Database, Release 1 (9.0.1) and Release 2 (9.2.0.1.0).

HMP has been certified on Oracle 9i Database Release 1 with HP-UX 11.0, 11i v1, and 11i v2. HMP has been certified on Oracle 9i Database Release 2 with HP-UX 11.0, 11i v1, and 11i v2.

• Technical Computing Applications that use the HP Message Passing Interface (HP-MPI).


HP-MPI is a native implementation of version 1.2 of the Message-Passing Interface (MPI) Standard. MPI has become the industry standard for distributed technical applications and is supported on most technical computing platforms.
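As a concrete illustration of the kind of program that runs over HP-MPI, the following minimal C example exchanges one message between two ranks using only standard MPI-1.2 calls (MPI_Init, MPI_Send, MPI_Recv, MPI_Finalize). The build and run commands in the comment are the common mpicc/mpirun conventions and are assumptions; the exact invocation on a given installation may differ.

    /* Minimal MPI-1 example: rank 0 sends a message, rank 1 receives it.
     * Typical build/run (may vary by installation):
     *   mpicc -o hello_hmp hello_hmp.c
     *   mpirun -np 2 ./hello_hmp                                          */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        char buf[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            strcpy(buf, "hello over the fabric");
            MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received: %s\n", buf);
        }

        MPI_Finalize();
        return 0;
    }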

Features

The following describes HyperFabric feature support when HMP is used:

• OnLine Addition and Replacement (OLAR): Not Supported

OLAR, which allows the replacement or addition of HyperFabric adapter cards while the system (node) is running, is not supported when applications use HMP to communicate (see the Dynamic Resource Utilization item below).

• Event Monitoring Service (EMS): Supported

In HyperFabric version B.11.23.01, the HyperFabric EMS monitor enables the system administrator to separately monitor each HyperFabric adapter on every node in the fabric, in addition to monitoring the entire HyperFabric subsystem. The monitor can inform the user whether the resource being monitored is UP or DOWN. The administrator defines the condition that triggers a notification (usually a change in interface status). Notification can be accomplished with an SNMP trap, by logging to a user-specified log file with a choice of severity, or by email to a user-defined email address.

For more information on EMS, including instructions for implementing this feature, see "Configuring the HyperFabric EMS Monitor" on page 85 in this manual, and the EMS Hardware Monitors User's Guide (Part Number B6191-90028, September 2001 Edition).

• ServiceGuard: Supported

Within a cluster, ServiceGuard groups application services (individual HP-UX processes) into packages. In the event of a single failure (service, node, or network), EMS provides notification and ServiceGuard transfers control of the package to another node in the cluster, allowing services to remain available with minimal interruption. ServiceGuard, using EMS, directly monitors cluster nodes, LAN interfaces, and services (the individual processes within an application). ServiceGuard uses a heartbeat LAN to monitor the nodes in a cluster. ServiceGuard cannot use the HyperFabric interconnect as a heartbeat link; a separate LAN must be used for the heartbeat.

For more information on configuring ServiceGuard, see "Configuring HyperFabric with ServiceGuard" on page 87, as well as Managing MC/ServiceGuard (Part Number B3936-90065, March 2002 Edition).

• High Availability (HA): Supported

When applications use HMP to communicate between HP 9000 nodes in a HyperFabric cluster, you can configure ServiceGuard and the EMS monitor to identify node failure and automatically fail over to a functioning HP 9000 node.

For more information on HA when running HMP applications, contact your HP representative.

• Transparent Local Failover: Supported

HMP supports Transparent Local Failover in HyperFabric version B.11.23.01.

When a HyperFabric resource (adapter, cable, switch, or switch port) fails in a cluster, HMP transparently fails over traffic to other available resources. This is accomplished using card pairs; each card pair is a logical entity that comprises two HF2 adapters on an HP 9000 node. Only Oracle applications can make use of the Local Failover feature. HMP traffic can fail over only between adapters that belong to the same card pair. Traffic does not fail over if both adapters in a card pair fail. However, administrators do not need to configure HF2 adapters as card pairs if TCP/UDP/IP is run over HF2 or if MPI uses HMP.

When HMP is configured in local failover mode, all the resources in the cluster are utilized. If a resource fails in the cluster and is later restored, HMP does not utilize that resource until another resource fails (see the illustrative sketch at the end of this "Features" section).

For more information on Transparent Local Failover while running HMP applications, see "Configuring HMP for Transparent Local Failover Support" on page 96.

• Dynamic Resource Utilization (DRU): Partially Supported

If you add a new HyperFabric resource (node, cable, or switch) to a cluster running an HMP application, the HyperFabric subsystem will dynamically identify the added resource and start using it. The same process takes place when a resource is removed from a cluster.

However, DRU is not supported if you add or remove an adapter from a node that is running an HMP application. This is consistent with the fact that OLAR is not supported when an HMP application is running on HyperFabric.

• Load Balancing: Supported

When an HP 9000 HyperFabric cluster is running HMP applications, the HyperFabric driver balances the load across all available resources in the cluster, including nodes, adapter cards, links, and multiple links between switches.

• Switch Management: Not Supported

Switch Management will not operate properly if it is enabled on a HyperFabric cluster.

• Diagnostics: Supported

You can run diagnostics to obtain information on many of the HyperFabric components using the clic_diag, clic_probe, and clic_stat commands, as well as the Support Tools Manager (STM).

For more information on HyperFabric diagnostics, see "Running Diagnostics" on page 149.
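The following self-contained C sketch is an illustrative model, not HyperFabric driver code, of the Transparent Local Failover behavior described above: traffic stays on the active adapter of a card pair, fails over to the partner when the active adapter fails, and a repaired adapter is not used again until the adapter currently carrying traffic fails. The card_pair type and select_adapter function are invented for this sketch.

    /* Illustrative model only (not driver code): one HF2 card pair, with
     * failover and the "restored adapter stays idle until the other fails"
     * behavior described above. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool up[2];      /* link state of adapter 0 and adapter 1        */
        int  active;     /* adapter currently carrying HMP traffic       */
    } card_pair;

    /* Pick the adapter to use for the next message.  Traffic stays on the
     * current active adapter as long as it is up; otherwise it fails over
     * to the partner if that one is up.  Returns -1 if both have failed. */
    static int select_adapter(card_pair *cp) {
        if (cp->up[cp->active])
            return cp->active;             /* keep using the active card */
        int partner = 1 - cp->active;
        if (cp->up[partner]) {
            cp->active = partner;          /* transparent local failover */
            return cp->active;
        }
        return -1;                         /* both adapters in the pair failed */
    }

    int main(void) {
        card_pair cp = { { true, true }, 0 };

        printf("send via adapter %d\n", select_adapter(&cp)); /* adapter 0          */

        cp.up[0] = false;                                     /* adapter 0 fails    */
        printf("send via adapter %d\n", select_adapter(&cp)); /* fails over to 1    */

        cp.up[0] = true;                                      /* adapter 0 restored */
        printf("send via adapter %d\n", select_adapter(&cp)); /* stays on adapter 1 */

        cp.up[1] = false;                                     /* adapter 1 fails    */
        printf("send via adapter %d\n", select_adapter(&cp)); /* back to adapter 0  */
        return 0;
    }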

Configuration Parameters

This section discusses the maximum limits for HMP HyperFabric configurations. There are numerous variables that can impact the performance of any particular HyperFabric configuration. For more information on specific HyperFabric configurations for HMP applications, see "HMP Supported Configurations" on page 33.

• HyperFabric is supported on the HP 9000 series servers and workstations only.

• HMP is supported only on the HF2 adapter (A6386A).

• The performance advantages that HMP offers are not completely realized unless HMP is used with A6386A HF2 (fiber) adapters and related fiber hardware. See Table 2-2 on page 20 for details. The local failover configuration of HMP is supported only on the A6386A HF2 adapters.

• Maximum Supported Nodes and Adapter Cards


HyperFabric clusters running HMP applications are limited to a maximum of 64 adapter cards. However, in local failover configurations, a maximum of only 52 adapters is supported.

In point-to-point configurations running HMP applications, the complexity and performance limitations of having a large number of nodes in a cluster make it necessary to include switches in the fabric.

Typically, point-to-point configurations consist of only 2 or 3 nodes.

In switched configurations running HMP applications, HyperFabric supports a maximum of 64 interconnected adapter cards.

A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system. The actual number of adapter cards a particular node is able to accommodate also depends on slot availability and system resources. See node specific documentation for details.

A maximum of 8 configured IP addresses is supported by the HyperFabric subsystem per instance of the HP-UX operating system. (An illustrative check against these adapter and IP address limits appears after Table 2-4, later in this section.)

• Maximum Number of Switches

You can interconnect (mesh) up to 4 switches (16-port fiber switches, or mixed switches with 8 fiber ports) in a single HyperFabric cluster.

• Trunking Between Switches (multiple connections)

Trunking between switches can be used to increase bandwidth and cluster throughput. Trunking is also a way to eliminate a possible single point of failure. The number of trunked cables between nodes is only limited by port availability. To assess the effects of trunking on the performance of any particular HyperFabric configuration, contact your HP representative.

• Maximum Cable Lengths

HF2 (fiber): The maximum distance is 200m (4 standard cable lengths are sold and supported: 2m, 16m, 50m and 200m).

HMP supports up to 4 HF2 switches connected in series with a maximum cable length of 200m between the switches and 200m between switches and nodes.

HMP supports up to 4 hybrid HF2 switches connected in series with a maximum cable length of 200m between fiber ports.


• HMP is supported on A400, A500, rp2400, rp2450, rp54xx (L-class), rp74xx (N-class), rp8400, and Superdome servers running 64-bit HP-UX.

• HMP is supported on HyperFabric starting with HyperFabric versions B.11.00.11, B.11.11.01, and B.11.23.00.

• HMP is not supported on the A180 or A180C server.

• HMP is not supported on 32-bit versions of HP-UX.

• Throughput and Latency

Table 2-3        HF2 Throughput and Latency with HMP Applications

  Server Class    Maximum Throughput                 Latency
  rp7400          2 + 2 Gbps full duplex per link    < 22 microseconds

Table 2-4        Supported HyperFabric Adapter Configurations

  HF Adapter   Bus Type   Supported HP Systems   HP-UX Version   OLAR Support?   Maximum Adapters per System
  A6386A       PCI (4X)   rx2600 servers         11i v2          No              1
  A6386A       PCI (4X)   rx56XX servers         11i v2          No              4
  A6386A       PCI (4X)   zx6000 workstations    11i v2          No              1
  A6386A       PCI (4X)   SD64A servers          11i v2          Yes             8 (maximum 4 per PCI card cage)
  A6386A       PCI (4X)   rx7620 servers         11i v2          No              8 (maximum 4 per PCI card cage)
  A6386A       PCI (4X)   rx8620 servers         11i v2          Yes             8 (maximum 4 per PCI card cage)
  A6386A       PCI (4X)   rx4640 servers         11i v2          Yes             6


NOTE: The local failover configuration of HMP is supported only on A6386A HF2 adapters.
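As a planning convenience, the following C sketch simply encodes the limits stated under "Configuration Parameters" (64 adapters per cluster, 52 in local failover configurations, 8 adapters and 8 configured IP addresses per HP-UX instance, up to 4 meshed switches) and checks a proposed configuration against them. It is an illustrative aid written for this section, not an HP-supplied tool, and it does not account for slot availability or other system-specific constraints.

    /* Illustrative planning check against the HMP configuration limits
     * stated in "Configuration Parameters"; not an HP-supplied tool. */
    #include <stdbool.h>
    #include <stdio.h>

    enum {
        MAX_CLUSTER_ADAPTERS          = 64, /* switched configurations       */
        MAX_CLUSTER_ADAPTERS_FAILOVER = 52, /* local failover configurations */
        MAX_ADAPTERS_PER_OS_INSTANCE  = 8,
        MAX_IP_ADDRS_PER_OS_INSTANCE  = 8,
        MAX_MESHED_SWITCHES           = 4
    };

    typedef struct {
        int  cluster_adapters;
        int  adapters_per_node;     /* worst-case node in the cluster */
        int  ip_addrs_per_node;     /* worst-case node in the cluster */
        int  switches;
        bool local_failover;
    } hmp_plan;

    static bool plan_within_limits(const hmp_plan *p) {
        int adapter_limit = p->local_failover ? MAX_CLUSTER_ADAPTERS_FAILOVER
                                              : MAX_CLUSTER_ADAPTERS;
        return p->cluster_adapters  <= adapter_limit
            && p->adapters_per_node <= MAX_ADAPTERS_PER_OS_INSTANCE
            && p->ip_addrs_per_node <= MAX_IP_ADDRS_PER_OS_INSTANCE
            && p->switches          <= MAX_MESHED_SWITCHES;
    }

    int main(void) {
        hmp_plan plan = { 56, 4, 4, 4, true };   /* example values only */
        printf("planned configuration is %s the documented HMP limits\n",
               plan_within_limits(&plan) ? "within" : "outside");
        return 0;
    }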


HMP Supported Configurations

Multiple HMP HyperFabric configurations are supported to match the performance, cost and scaling requirements of each installation.

The section "Configuration Parameters" on page 29 outlined the maximum limits for HMP-enabled HyperFabric hardware configurations. This section discusses the HMP-enabled HyperFabric configurations that HP supports. These recommended configurations offer an optimal mix of performance and availability for a variety of operating environments.

There are many variables that can impact HyperFabric performance. If you are considering a configuration that is beyond the scope of the following HP supported configurations, contact your HP representative.

Point-to-Point Configuration

You can interconnect large servers like the HP Superdome to run Oracle RAC 9i and enterprise resource planning applications. These applications are typically consolidated on large servers.

Point-to-point connections between servers support the performance benefits of HMP without investing in HyperFabric switches. This is a good solution in small configurations where the benefits of a switched HyperFabric cluster might not be required (see configuration A in Figure 2-5).

If an HMP application is running over HyperFabric and another node or adapter is added to either of the nodes, then it is necessary to also add a HyperFabric switch to the cluster (see configuration B in Figure 2-5).


Figure 2-5 HMP Point-To-Point Configurations


Enterprise (Database) Configuration

The HMP enterprise configuration illustrated in Figure 2-6 is very popular for running Oracle RAC 9i.

Superdomes or other large servers make up the Database Tier. Database Tier nodes communicate with each other using HMP.

Application Tier nodes communicate with each other and with the Database Tier using TCP/UDP/IP.

Figure 2-6        HMP Enterprise (Database) Configuration, Single Connection Between Nodes


Enterprise (Database) - Local Failover Supported Configuration

The HMP enterprise configuration is a scalable solution. For high availability and performance, you can easily scale the HMP enterprise configuration with multiple connections between the HyperFabric resources. Any single point of failure in the database tier of the fabric is eliminated in Figure 2-7.

Figure 2-7        Local Failover Supported Enterprise (Database) Configuration, Multiple Connections Between Nodes


In this configuration, if a HyperFabric resource (adapter, cable, switch, or switch port) fails in a cluster, HMP transparently fails over traffic using another available resource. For more information, see "Configuring HMP for Transparent Local Failover Support" on page 96.


Technical Computing (Workstations) Configuration

This configuration is typically used to run technical computing applications with HP-MPI. A large number of small nodes are interconnected to achieve high throughput (see Figure 2-8). High availability is not usually a requirement in technical computing environments.

HMP provides the high-performance, low-latency path necessary for these technical computing applications. You can interconnect up to 56 nodes using HP 16-port switches. You cannot link more than four 16-port switches in a single cluster (see Figure 2-9).

The HP “J”, “B”, and “C” class workstations provide excellent performance and return on investment in technical computing configurations.


Figure 2-8 Technical Computing Configuration


Figure 2-9        Large Technical Computing Configuration


3        Installing HyperFabric

This chapter contains the following sections that describe the HyperFabric installation:

• "Checking HyperFabric Installation Prerequisites" on page 43.

• "Installing HyperFabric Adapters" on page 44.

• "Installing the Software" on page 51.

• "Installing HyperFabric Switches" on page 57.

