Planning the Fabric

TCP/UDP/IP

TCP/UDP/IP is supported on all HF2 hardware. Although some HyperFabric adapter cards support both HMP and TCP/UDP/IP applications, this section focuses on TCP/UDP/IP HyperFabric applications.

Application Availability

All applications that use the TCP/UDP/IP stack, including Oracle 9i and HP-MPI, are supported.

Features

This section discusses the following HyperFabric features on TCP/UDP/IP:

• OnLine Addition and Replacement (OLAR): Supported

The OLAR feature allows the replacement or addition of HyperFabric adapter cards while the system (node) is running. HyperFabric supports this functionality on the SD64A, rx8620, rx4640, rp54xx (L-class), rp74xx (N-class), rp8400, and Superdome systems running HP-UX 11i v2.

For more information on OLAR, including instructions for implementing this feature, see “Online Addition and Replacement” on page 44 and Configuring HP-UX for Peripherals, Part Number B2355-90698, November 2000 Edition.
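
As an illustrative first step in an online replacement, the HP-UX olrad command can report the OLAR status of each PCI slot before a card is powered down and swapped; the full replacement procedure is described in the documents cited above.

    # olrad -q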

• Event Monitoring Service (EMS): Supported

In HyperFabric version B.11.23.01, the HyperFabric EMS monitor enables the system administrator to monitor each HyperFabric adapter on every node in the fabric individually, in addition to monitoring the entire HyperFabric subsystem. The monitor can report whether the monitored resource is UP or DOWN. The administrator defines the condition that triggers a notification (usually a change in interface status). Notification can be accomplished with one of the following:

— A Simple Network Management Protocol (SNMP) trap

— Logging to a user-specified log file with a choice of severity levels

— Email to a user-defined email address

For more information on EMS, including instructions for implementing this feature, see “Configuring the HyperFabric EMS Monitor” on page 85 and the EMS Hardware Monitors User’s Guide, Part Number B6191-90028, September 2001 Edition.
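
As a sketch, EMS hardware monitoring requests are typically created with the interactive monconfig utility, and configured resources can be listed with resls; the HyperFabric-specific resource names are not shown here, so see “Configuring the HyperFabric EMS Monitor” on page 85 for the actual resource classes.

    # /etc/opt/resmon/bin/monconfig
    # /etc/opt/resmon/bin/resls /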

• ServiceGuard: Supported

Within a cluster, ServiceGuard groups application services (individual HP-UX processes) into packages. In the event of a single service failure (node, network, or other resource), EMS provides notification and ServiceGuard transfers control of the package to another node in the cluster, allowing services to remain available with minimal interruption.

ServiceGuard, via EMS, directly monitors cluster nodes, LAN interfaces, and services (the individual processes within an application). ServiceGuard uses a heartbeat LAN to monitor the nodes in a cluster. ServiceGuard cannot use the HyperFabric interconnect as a heartbeat LAN; instead, use a separate LAN for the heartbeat.

For more information on configuring ServiceGuard, see “Configuring HyperFabric with ServiceGuard” on page 87 and Managing MC/ServiceGuard, Part Number B3936-90065, March 2002 Edition.
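
As an illustrative sketch of this separation, the cluster ASCII configuration file (generated with cmquerycl and applied with cmapplyconf) can declare the heartbeat on a dedicated LAN interface while the HyperFabric interface carries only data traffic. All node, interface, and address values below are hypothetical.

    NODE_NAME node1
      NETWORK_INTERFACE lan0
        HEARTBEAT_IP 192.10.25.18      # heartbeat on a dedicated Ethernet LAN
      NETWORK_INTERFACE clic0
        STATIONARY_IP 192.10.26.18     # HyperFabric interface: data traffic only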

• High Availability (HA): Supported

To create a highly available HyperFabric cluster, there should not be any single point of failure. Once the HP 9000 nodes and the HyperFabric hardware have been configured with no single point of failure, ServiceGuard and EMS can be configured to monitor and fail over nodes and services using ServiceGuard packages.

If any HyperFabric resource in a cluster fails (adapter card, cable, or switch port), the HyperFabric driver transparently routes traffic over other available HyperFabric resources with no disruption of service.

The ability of the HyperFabric driver to transparently fail over traffic reduces the complexity of configuring highly available clusters with ServiceGuard, because ServiceGuard has to handle only node and service failover.

ServiceGuard uses a “heartbeat” to monitor the cluster. The HyperFabric links cannot be used for the heartbeat. Instead, an alternate LAN connection, such as 100BaseT, Ethernet, Token Ring, or FDDI, must be made between the nodes for use as a heartbeat link.

End-To-End HA: HyperFabric provides end-to-end HA on the entire cluster fabric at the link level. If any of the available routes in the fabric fails, HyperFabric transparently redirects all the traffic to a functional route and, if configured, notifies ServiceGuard or other enterprise management tools.

Active-Active HA: In configurations where there are multiple routes between nodes, the HyperFabric software uses a hashing function to determine an adapter or a route through which it sends messages. This is done on a message-by-message basis. All of the available HyperFabric resources in the fabric are used for communication.

In contrast to Active-Passive HA, where one set of resources is not utilized until another set fails, Active-Active HA provides the best return on investment because all of the resources are utilized simultaneously. ServiceGuard is not required for Active-Active HA operation.
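
As an illustrative check that an Active-Active configuration is using all routes, the per-interface packet counts reported by the standard netstat command can be compared; the clic interface names are assumptions, and roughly balanced input and output counts across them indicate that traffic is being spread.

    # netstat -in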

For more information on setting up HA HyperFabric clusters, see Figure 2-3, “TCP/UDP/IP High Availability Switched Configuration,” on page 24.

• Dynamic Resource Utilization (DRU): Supported

If you add a new resource (node, adapter, cable, or switch) to a cluster, the HyperFabric subsystem dynamically identifies the added resource and starts using it. The same process takes place when a resource is removed from a cluster. The difference between DRU and OLAR is that OLAR applies only to the addition or replacement of adapter cards in nodes.
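
For illustration, the standard ioscan command can confirm that a newly added adapter has been identified by the system; the class-name filter shown is an assumption, so consult your system documentation for the exact hardware class.

    # ioscan -fn | grep -i clic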

• Load Balancing: Supported

When an HP 9000 HyperFabric cluster is running TCP/UDP/IP applications, the HyperFabric driver balances the load across all available resources in the cluster, including nodes, adapter cards, links, and multiple links between switches.

• Switch Management: Not Supported

Switch management will not operate properly if you enable it on a HyperFabric cluster.

• Diagnostics: Supported

Diagnostics can be run to obtain information on many of the HyperFabric components using the clic_diag, clic_probe, and clic_stat commands, as well as the Support Tools Manager (STM).

For more information on HyperFabric diagnostics, see “Running Diagnostics” on page 133.
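
For example, a first pass over the fabric might probe the topology and then display adapter statistics; the commands are shown here without options, and the supported syntax is described in “Running Diagnostics” on page 133.

    # clic_probe
    # clic_stat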

Configuration Parameters

This section describes the maximum limits for TCP/UDP/IP HyperFabric configurations. Numerous variables can affect the performance of any HyperFabric configuration. For guidance on specific HyperFabric configurations for TCP/UDP/IP applications, see “TCP/UDP/IP Supported Configurations” on page 21.

• HyperFabric is supported only on the HP 9000 series servers and workstations.

• TCP/UDP/IP is supported for all HyperFabric hardware and software.

• Maximum Supported Nodes and Adapter Cards

In point-to-point configurations, the complexity and performance limitations of interconnecting a large number of nodes make it necessary to include switching in the fabric. Typically, point-to-point configurations consist of only 2 or 3 nodes.

In switched configurations, HyperFabric supports a maximum of 64 interconnected adapter cards.

A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system. The actual number of adapter cards a particular node is able to accommodate also depends on slot availability and system resources. See node specific documentation for details.

The HyperFabric subsystem supports a maximum of 8 configured IP addresses per instance of the HP-UX operating system.
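
For illustration, an IP address is assigned to each HyperFabric interface with the standard ifconfig command; the interface name and address below are hypothetical.

    # ifconfig clic0 inet 192.10.26.18 netmask 255.255.255.0 up
    # netstat -in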

• Maximum Number of Switches

You can interconnect (mesh) up to 4 switches (16-port fiber, or mixed with 8 fiber ports) in a single HyperFabric cluster.

• Trunking Between Switches (multiple connections)

You can use trunking between switches to increase bandwidth and cluster throughput. Trunking is also a way to eliminate a possible single point of failure. The number of trunked cables between switches is limited only by port availability. To assess the effects of trunking on the performance of a particular HyperFabric configuration, contact your HP representative.

• Maximum Cable Lengths

HF2 (fiber): The maximum distance is 200m (four standard cable lengths are sold and supported: 2m, 16m, 50m, and 200m).

TCP/UDP/IP supports up to four HF2 switches connected in series, with a maximum cable length of 200m between switches and 200m between switches and nodes.

TCP/UDP/IP supports up to four hybrid HF2 switches connected in series with a maximum cable length of 200m between fiber ports.

• Throughput and Latency

Table 2-1 HF2 Throughput and Latency with TCP/UDP/IP Applications

    Server Class   Maximum Throughput                Latency
    rp7400         2 + 2 Gbps full duplex per link   < 42 microsec

Table 2-2 Supported HyperFabric Adapter Configurations

    HF        Bus        Supported             HP-UX     OLAR       Maximum Adapters
    Adapter   Type       HP Systems            Version   Support?   per System
    A6386A    PCI (4X)   rx56XX servers        11i v2    No         4
    A6386A    PCI (4X)   rx2600 servers        11i v2    No         1
    A6386A    PCI (4X)   zx6000 workstations   11i v2    No         1
    A6386A    PCI (4X)   SD64A servers         11i v2    Yes        8 (maximum 4 per PCI card cage)
    A6386A    PCI (4X)   rx7620 servers        11i v2    No         8 (maximum 4 per PCI card cage)
    A6386A    PCI (4X)   rx8620 servers        11i v2    Yes        8 (maximum 4 per PCI card cage)
    A6386A    PCI (4X)   rx4640 servers        11i v2    Yes        6

TCP/UDP/IP Supported Configurations

Multiple TCP/UDP/IP HyperFabric configurations are supported to match the cost, scaling, and performance requirements of each installation.

The previous section, “Configuration Parameters” on page 17, outlined the maximum limits for TCP/UDP/IP-enabled HyperFabric hardware configurations. This section explains the TCP/UDP/IP-enabled HyperFabric configurations that HP supports. These recommended configurations offer an optimal mix of performance and availability for a variety of operating environments.

There are many variables that can impact HyperFabric performance. If you are considering a configuration that is beyond the scope of the following HP supported configurations, contact your HP representative.

Point-to-Point Configurations

You can interconnect large servers like the HP Superdome to run Oracle RAC 9i and enterprise resource planning applications. These applications are typically consolidated on large servers.

Point-to-point connections between servers support the performance benefits of HMP without investing in HyperFabric switches. This is a good solution in small configurations where the benefits of a switched HyperFabric cluster might not be required (see configuration A and configuration C in Figure 2-1).

If there are multiple point-to-point connections between two nodes, traffic load is balanced over those links. If one link fails, the load fails over to the remaining links (see configuration B in Figure 2-1).

Running applications using TCP/UDP/IP on a HyperFabric cluster provides major performance benefits compared to other technologies such as Ethernet. If a HyperFabric cluster is originally set up to run enterprise applications using TCP/UDP/IP and the computing environment stabilizes with a requirement for higher performance, migration to HMP is always an option.

Figure 2-1 TCP/UDP/IP Point-To-Point Configurations

Switched Configuration

This configuration offers the same benefits as the point-to-point configurations illustrated in Figure 2-1, but it has the added advantage of greater connectivity (see Figure 2-2).

Figure 2-2 TCP/UDP/IP Basic Switched Configuration

High Availability Switched Configuration

This configuration has no single point of failure. The HyperFabric driver provides end-to-end HA. If any HyperFabric resource in the cluster fails, traffic is transparently rerouted through other available resources. This configuration provides high performance and high availability (see Figure 2-3).

Figure 2-3 TCP/UDP/IP High Availability Switched Configuration

Hybrid Configuration

You can interconnect servers and workstations in a single heterogeneous HyperFabric cluster.

In this configuration, the servers are highly available. In addition, the workstations and the servers can run the same application or different applications (see Figure 2-4).

Figure 2-4 TCP/UDP/IP Hybrid Configuration
