Managing HP Serviceguard Extension for SAP for Linux

Managing HP Serviceguard Extension for
SAP for Linux Version B.06.00.70
Abstract
This guide describes how to plan, configure, and administer highly available SAP Systems on Red Hat Enterprise Linux and
SUSE Linux Enterprise Server systems using HP Serviceguard.
HP Part Number: 790237-003
Published: December 2015
© Copyright 2015 Hewlett-Packard Development Company, L.P.
Serviceguard, Serviceguard Extension for SAP, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L. P., and all
are protected by copyright. Valid license from HP is required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products
and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Links to third-party websites take you outside the HP website. HP has no control over and is not responsible for information outside HP.com.
SAP, SAP NetWeaver®, Hana®, MaxDB™, ABAP™ and other SAP products and services mentioned herein as well as their respective logos are
trademarks or registered trademarks of SAP AG in Germany and other countries.
Microsoft® and Windows® are trademarks of the Microsoft group of companies.
Intel® and Itanium® are registered trademarks of Intel Corporation or its subsidiaries in the United States or other countries.
Oracle® and Java™ are registered trademarks of Oracle Corporation.
IBM® and DB2® are registered trademarks of IBM in the United States and other countries.
Sybase® is a registered trademark of Sybase, Inc., an SAP company.
NFS® is a registered trademark of Sun Microsystems, Inc.
NIS™ is a trademark of Sun Microsystems, Inc.
UNIX® is a registered trademark of The Open Group.
Linux® is a U.S. registered trademark of Linus Torvalds.
Red Hat® is a registered trademark of Red Hat Software, Inc.
SUSE® is a registered trademark of SUSE AG, a Novell Business.
Contents
1 Overview..................................................................................................7
1.1 About this manual................................................................................................................7
1.2 Related documentation.........................................................................................................8
2 SAP cluster concepts...................................................................................9
2.1 SAP-specific cluster modules..................................................................................................9
2.2 Configuration restrictions....................................................................................................11
2.3 Example 1: Robust failover using the one package concept.....................................................12
2.4 Example 2: A mutual failover scenario.................................................................................13
2.5 Example 3: Follow-and-push with automated enqueue replication............................................14
2.6 Example 4: Realization of the SAP-recommended SPOF isolation.............................................17
2.7 Example 5: Dedicated failover host.....................................................................................17
2.8 Example 6: HANA System Replication and Dual-purpose.......................................................18
2.9 Dedicated NFS packages..................................................................................................20
2.10 Virtualized dialog instances for adaptive enterprises.............................................................21
2.11 Handling of redundant dialog instances..............................................................................21
3 SAP cluster administration..........................................................................23
3.1 Performing cluster administration on the System Management Homepage (SMH).......................23
3.2 Performing SAP administration with SAP's startup framework .................................................25
3.3 Change management........................................................................................................29
3.3.1 System level changes..................................................................................................29
3.3.2 SAP software changes...............................................................................................30
3.4 Ongoing verification of package failover capabilities............................................................31
3.5 Upgrading SAP software...................................................................................................32
3.6 Package conversion..........................................................................................................32
3.7 SAP HANA cluster administration.......................................................................................33
3.7.1 Starting and restarting the HANA cluster.......................................................................33
3.7.2 Starting and stopping the primary HANA system...........................................................33
3.7.3 Starting and stopping of the secondary HANA system....................................................34
3.7.4 Manual operations on scale-up production systems........................................................35
3.7.5 Planned extended downtime for HANA instance............................................................35
3.7.6 Patching Serviceguard software on a node...................................................................36
4 SAP Netweaver cluster storage layout planning............................................37
4.1 SAP Instance storage considerations....................................................................................37
4.1.1 Option 1: SGeSAP NFS cluster.....................................................................................37
4.1.1.1 Common directories that are kept local...................................................................38
4.1.1.2 Directories that Reside on Shared Disks..................................................................38
4.1.2 Option 2: SGeSAP NFS idle standby cluster..................................................................41
4.1.2.1 Common directories that are kept local..................................................................41
4.1.2.2 Directories that reside on shared disks...................................................................42
4.2 Database instance storage considerations............................................................................42
4.3 Oracle single instance RDBMS...........................................................................................43
4.3.1 Oracle databases in SGeSAP NFS and NFS Idle standby clusters.....................................43
4.4 MaxDB/liveCache storage considerations............................................................................44
4.5 Sybase ASE storage considerations.....................................................................................47
4.6 IBM DB2 storage considerations.........................................................................................47
4.7 Special liveCache storage considerations.............................................................................47
4.7.1 Option 1: Simple cluster with separated packages..........................................................48
4.7.2 Option 2: Non-MaxDB environments...........................................................................48
5 Clustering SAP Netweaver using SGeSAP packages......................................49
5.1 Overview.........................................................................................................................49
5.1.1 Three phase approach................................................................................................49
5.1.2 SGeSAP modules and services.....................................................................................50
5.1.2.1 SGeSAP modules................................................................................................50
5.1.2.2 SGeSAP services................................................................................................50
5.1.3 Installation options.....................................................................................................52
5.1.3.1 Serviceguard Manager GUI and Serviceguard CLI..................................................53
5.1.3.2 SGeSAP easy deployment....................................................................................53
5.2 Infrastructure setup, pre-installation preparation (Phase 1).......................................................54
5.2.1 Prerequisites..............................................................................................................54
5.2.2 Node preparation and synchronization........................................................................55
5.2.3 Intermediate synchronization and verification of virtual hosts...........................................55
5.2.4 Intermediate synchronization and verification of mount points..........................................55
5.2.5 Infrastructure setup for NFS toolkit (Phase 1a)................................................................55
5.2.5.1 Creating NFS Toolkit package using Serviceguard Manager....................................56
5.2.5.2 Creating NFS toolkit package using Serviceguard CLI............................................58
5.2.5.3 Automount setup................................................................................................59
5.2.5.4 Solutionmanager diagnostic agent file system preparations related to NFS toolkit.......59
5.2.5.5 Intermediate node sync and verification................................................................60
5.2.6 Infrastructure Setup - SAP base package setup (Phase 1b)...............................................60
5.2.6.1 Intermediate synchronization and verification of mount points...................................61
5.2.6.2 SAP base package with Serviceguard and SGeSAP modules...................................61
5.2.6.2.1 Creating the package with the Serviceguard Manager....................................61
5.2.6.2.2 Creating the package configuration file with the CLI.......................................63
5.2.6.3 SAP base package with Serviceguard modules only...............................................63
5.2.6.3.1 Creating the package with Serviceguard Manger...........................................63
5.2.6.3.2 Creating the package configuration file with the CLI.......................................64
5.2.6.4 Verification steps...............................................................................................65
5.3 SAP installation (Phase 2)..................................................................................................65
5.3.1 Prerequisite...............................................................................................................65
5.3.2 Installation of SAP instances........................................................................................65
5.4 Post SAP installation tasks and final node synchronization (Phase 3a).......................................66
5.4.1 SAP post installation modifications and checks..............................................................66
5.4.2 User synchronization..................................................................................................69
5.4.3 Network services synchronization................................................................................71
5.4.4 NFS and automount synchronization............................................................................71
5.4.5 SAP hostagent installation on secondary cluster nodes...................................................72
5.4.6 Other local file systems and synchronization.................................................................72
5.5 Completing SGeSAP package creation (Phase 3b)................................................................73
5.5.1 Creating SGeSAP package with guided configuration using Serviceguard Manager...........73
5.5.2 Creating SGeSAP package with CLI.............................................................................73
5.5.3 Module sgesap/sap_global – SAP common instance settings..........................................74
5.5.4 Module sgesap/sapinstance – SAP instances................................................................75
5.5.5 Module sgesap/dbinstance – SAP databases...............................................................76
5.5.6 Module sgesap/mdminstance – SAP MDM repositories..................................................77
5.5.7 Module sg/services – SGeSAP monitors.......................................................................78
5.5.8 Module sg/generic_resource – SGeSAP enqor resource.................................................78
5.5.9 Module sg/dependency – SGeSAP enqor MNP dependency..........................................79
5.5.10 Module sgesap/enqor – SGeSAP enqor MNP template................................................79
5.5.11 Configuring sgesap/sapextinstance, sgesap/sapinfra and sgesap/livecache ..................80
5.5.11.1 Remote access between cluster nodes and to external application servers..................80
5.5.11.2 Configuring external instances (sgesap/sapextinstance)..........................................80
5.5.11.3 Configuring SAP infrastructure components (sgesap/sapinfra)..................................83
5.5.11.4 Module sgesap/livecache – SAP liveCache instance..............................................84
5.6 Cluster conversion for existing instances...............................................................................86
5.6.1 Converting an existing SAP instance.............................................................................87
5.6.2 Converting an existing database.................................................................................87
6 Clustering SAP HANA System Replication using SGeSAP packages.................89
6.1 General considerations for building HANA clusters................................................................89
6.2 Configuring HANA replication...........................................................................................90
6.3 Deploying HANA scale-up cluster packages.........................................................................91
6.4 Deploying HANA scale-out cluster packages........................................................................95
6.5 Self-tuning timeouts...........................................................................................................99
6.6 Enforced parameter settings...............................................................................................99
6.7 Consistency ensuring synchronization tolerances.................................................................100
6.8 Dual-purpose SAP HANA configuration.............................................................................101
6.9 Graceful shutdown of HANA instances..............................................................................102
1 Overview
1.1 About this manual
This document describes how to plan, configure, and administer highly available SAP Netweaver
systems on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) systems using
HP Serviceguard high availability cluster technology in combination with HP Serviceguard Extension
for SAP (SGeSAP). To use SGeSAP, you must be familiar with Serviceguard
concepts and commands, Linux operating system administration, and SAP basics.
This manual has six chapters:
• Chapter 1— Overview
• Chapter 2— SAP cluster concepts
  This chapter gives an introduction to the high-level design of a High Availability SAP server
  environment.
• Chapter 3— SAP cluster administration
  This chapter covers both SGeSAP cluster administration for IT basis administrators and clustered
  SAP administration for SAP basis administrators.
• Chapter 4— SAP cluster storage layout planning
  This chapter describes the recommended file system and shared storage layout for clustered
  SAP landscape and database systems.
• Chapter 5— Clustering SAP Netweaver using SGeSAP packages
  This chapter includes guidelines and configuration details for SGeSAP clusters.
• Chapter 6— Clustering SAP HANA System Replication using SGeSAP packages
  This chapter describes how to cluster SAP HANA System Replication using SGeSAP packages.
Table 1 Abbreviations

Abbreviation                               Meaning
<SID>, <sid>                               System ID of the SAP system, RDBMS, or other components in uppercase/lowercase
<INSTNAME>                                 SAP instance name, for example DVEBMGS, D, J, ASCS, SCS, ERS, HDB
[A]SCS                                     Refers to either an SCS or an ASCS instance
<INSTNR>, <INR>                            Instance number of the SAP system
<primary>, <secondary>, <local>            Names mapped to local IP addresses of the server LAN
<primary_s>, <secondary_s>, <local_s>      Names mapped to local IP addresses of the server LAN
<relocdb_s>, <relocci_s>, <relocdbci_s>    Names mapped to relocatable IP addresses of Serviceguard packages in the server LAN
<...>                                      Other abbreviations are self-explanatory and can be derived from the surrounding context
1.2 Related documentation
The following documents contain related information:
• Managing HP Serviceguard A.12.00.40 for Linux User Guide
• HP Serviceguard for Linux Advanced edition 12.00.40 Release Notes
• HP Serviceguard for Linux Enterprise edition 12.00.40 Release Notes
The documents are available at http://www.hp.com/go/linux-serviceguard-docs.
2 SAP cluster concepts
This chapter introduces the basic concepts used by SGeSAP for Linux. It also includes
recommendations and typical cluster layouts that can be implemented for SAP environments.
2.1 SAP-specific cluster modules
HP SGeSAP extends Serviceguard failover cluster capabilities to SAP application environments. It
is intended to be used in conjunction with the HP Serviceguard Linux product and the HP
Serviceguard toolkit for NFS on Linux.
SGeSAP provides SAP-specific modules, service monitors, cluster resources, cluster deployment,
and cluster verification tools as well as a shared library that makes SAP startup framework
cluster-aware.
SGeSAP extends Serviceguard with these new major modules:
• sgesap/sapinstance
• sgesap/hdbinstance and sgesap/hdbdistinstance
• sgesap/mdminstance
• sgesap/livecache
• sgesap/dbinstance
These modules allow quick configuration of instance failover and clustering of all mandatory SAP
Netweaver software services. These mandatory services are categorized as software Single Points
of Failure (SPOFs).
Most SAP applications rely on two central software services that define the major software Single
Point of Failure for SAP environments: the SAP Enqueue Service and the SAP Message Service.
These services are traditionally combined and run as part of a unique SAP instance that is referred
to as JAVA System Central Service instance (SCS) for SAP JAVA applications or Advanced Business
Application Programming (ABAP) System Central Service instance (ASCS) for SAP ABAP
applications. You can also configure both JAVA (SCS) and ABAP (ASCS) components in one SAP
application. In this case, both instances are SPOFs that require clustering.
In ABAP environments, the term Central Instance (CI) refers to a software entity that combines
additional SAP application services with the SPOF instance. Like any other SAP instance, a CI has
an Instance Name. Traditionally it is called DVEBMGS, where each letter represents a service that
is delivered by the instance; the "E" and the "M" stand for the Enqueue and Message Service. By
default, five other software services are also part of the CI.
An undesirable result is that a CI is complex software with a high resource demand. Shutdown
and startup of a CI are slower and more error-prone due to the presence of redundant non-critical services.
Starting with SAP Application Server 6.40, a CI installation creates two instances instead of one:
an ASCS instance and a DVEBMGS instance. The SPOFs of the Central Instance are isolated into
the first instance. The second instance is still named DVEBMGS for compatibility reasons, but unlike
the name suggests, it includes no Enqueue Service and no Message Service and is no longer a
Central Instance.
A package that uses the sgesap/sapinstance module can be set up to cluster the SCS and/or
ASCS (or Central Instance) of a single SAP application.
All instance types and use cases for SAP Netweaver web application server software are covered
by module sgesap/sapinstance. This module allows adding of a set of SAP instances that
belong to the same Netweaver system into a module-based Serviceguard package. The package
can encapsulate the failover entity for a combination of ABAP-stack, JAVA-stack, or double-stack
instances.
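As an illustration, a minimal sapinstance package configuration might look like the following excerpt. The SID C11, instance name ASCS01, and virtual hostname relocci are invented for this sketch, and the attribute names shown are assumptions to be verified against the template generated by cmmakepkg on your system:

```
# Generate a template with:  cmmakepkg -m sgesap/sapinstance ascsC11.conf
# Illustrative excerpt (SID, instance name, and hostname are examples):
package_name                              ascsC11
module_name                               sgesap/sapinstance
sgesap/sap_global/sap_system              C11
sgesap/sapinstance/sap_instance           ASCS01
sgesap/sapinstance/sap_virtual_hostname   relocci
```

After editing the template, the package is checked and distributed with cmcheckconf and cmapplyconf as for any modular Serviceguard package.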
NOTE: Split-stack installations require separate packages for each stack. In this case, a package
same_node dependency can be defined which ensures that split-stack packages can be handled
as a single entity.
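For example, a same_node dependency between hypothetical split-stack packages scsC11 and ascsC11 could be sketched with standard Serviceguard dependency attributes (package and dependency names are illustrative):

```
# In the configuration of package scsC11 (JAVA stack):
dependency_name          scs_follows_ascs
dependency_condition     ascsC11 = up
dependency_location      same_node
```

With this dependency in place, Serviceguard only starts scsC11 on a node where ascsC11 is already up, so the two stacks move together.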
Instance-type specific handling is provided by the module for SAP ABAP Central Service Instance,
SAP JAVA Central Service Instance, SAP ABAP Application Server Instances, SAP JAVA Application
Server Instances, SAP Central Instances, SAP Enqueue Replication Server Instances, SAP Gateway
Instances and SAP Webdispatcher Instances.
The module sgesap/mdminstance extends the coverage to the SAP Master Data Management
Instance types. The module to cluster SAP liveCache instances is called sgesap/livecache.
SGeSAP also covers single-instance database instance failover with built-in routines. This ensures
seamless integration and uniform look-and-feel of the solution across different database vendor
technologies. You can use the sgesap/dbinstance module for the SAP Sybase ASE, SAP MaxDB
IBM DB2 or Oracle-based SAP database services.
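A hedged sketch of the database attributes such a package might carry follows; the attribute names (db_vendor, db_system) are assumptions and should be checked against the cmmakepkg template for sgesap/dbinstance:

```
# Illustrative sgesap/dbinstance excerpt for an Oracle-based SAP database
module_name                   sgesap/dbinstance
sgesap/db_global/db_vendor    oracle
sgesap/db_global/db_system    C11
```

The same module with a different db_vendor value would cover the other supported databases, which is what gives the solution its uniform look-and-feel.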
Single-host (scale-up) SAP HANA in-memory database services are clustered by the
sgesap/hdbinstance module. For distributed (scale-out) instances, the module is wrapped into
the sgesap/hdbdistinstance module. The HANA cluster solution is combined with SAP HANA
System Replication, which is a built-in feature of the HANA database.
In non-HANA environments, it is possible to combine clustered components of a single SAP software
stack into one failover package for simplicity and convenience. Also, it is possible to split these
components into several packages to avoid unwanted dependencies and to lower potential failover
times. Multiple SAP applications of different types and release versions can be consolidated in
different packages of a single cluster. SGeSAP enables SAP instance virtualization.
It is possible to use SGeSAP to manually move SAP Application Server Instances between hosts
and quickly adapt to changing resource demands or maintenance needs.
All SGeSAP components support Serviceguard Live Application Detach, Serviceguard Package
Maintenance Mode, and the SAP Online Kernel Switch technologies to minimize planned downtime.
Optional software monitors that check the responsiveness of the major interaction interfaces of the
clustered software components are available.
On top of the major SGeSAP modules, there are six additional modules to refine functionality on
demand:
• sgesap/sapinfra
• sgesap/sapextinstance
• sgesap/hdbdualpurpose
• sgesap/hdbprimary
• sgesap/hdbsico
• sgesap/enqor
These six additional modules enable easy clustering of smaller SAP infrastructure software tools
(sgesap/sapinfra), allow manipulating the behavior of non-clustered SAP Netweaver and
HANA instances (sgesap/sapextinstance and sgesap/hdbdualpurpose), coordinate
HANA System Replication (sgesap/hdbprimary), and handle SAP enqueue failover policies
(sgesap/enqor). The infrastructure tools covered by sgesap/sapinfra include the SAP
sapccmsr, saposcol, rfcadapter, Webdispatcher, and saprouter binaries. Depending
on the installation type, SAP Web Dispatcher can be configured with either sgesap/sapinstance
or sgesap/sapinfra. The initial site selection in HANA scale-out environments is realized through
module sgesap/hdbsico, which is based on the site-aware disaster tolerance architecture of
Serviceguard.
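As a sketch, clustering a saprouter with sgesap/sapinfra might use attributes like the following; the attribute names and the startnstop value are illustrative assumptions to be confirmed against the module template:

```
# Illustrative sgesap/sapinfra excerpt: start and stop a saprouter
# together with the package
sgesap/sapinfra/sap_infra_sw_type     saprouter
sgesap/sapinfra/sap_infra_sw_treat    startnstop
```

Several such infrastructure tools can typically be listed in one package, each with its own type and treatment setting.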
SGeSAP does not support installing additional SAP Netweaver or application components on
HANA cluster nodes. Along with cluster creation, SGeSAP also provides HANA instance startup,
shutdown, monitoring, restart, takeover, and role-reversal operations. SGeSAP introduces the
sgesap/hdbinstance, sgesap/hdbprimary, and sgesap/hdbdualpurpose cluster
modules for configuring instance failover and clustering the SAP HANA System Replication feature.
The module sgesap/enqor is a special case in that it is a multi-node package (MNP) that must
run on all the nodes of the cluster, if ERS instances are installed. It is particularly useful for clusters
with more than two nodes. It is an optional package that complements the [A]SCS/ERS package
pairs and emulates a Follow-and-Push failover policy between them. Exactly one enqor MNP can
exist per cluster. It automatically covers all [A]SCS/ERS package pairs without further configuration.
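Because exactly one enqor MNP covers all [A]SCS/ERS pairs without per-pair configuration, its creation can be sketched in two commands (the package file name is arbitrary):

```
# Sketch: create and apply the cluster-wide enqor multi-node package
cmmakepkg -m sgesap/enqor enqor.conf
cmapplyconf -P enqor.conf
```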
The module sgesap/all provides the option to combine all of these functions into one package,
with the exception of the enqor MNP and the HANA modules.
Module sgesap/all is available for convenience reasons to simplify configuration steps for
standard use cases. Only a subset of the SGeSAP modules can be configured in a package that
was created based on sgesap/all.
In many cases, shared access to selected parts of the file systems is required. This is usually provided
via Network File System (NFS) services. The tkit/nfs/nfs module is separately available and
can be included in any SGeSAP package. It can also run separately as part of an SGeSAP cluster.
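For instance, a combined database-plus-NFS package could be templated by listing both modules on the cmmakepkg command line (a sketch; the package file name is an example):

```
# Sketch: one package serving both the database and the NFS shares
cmmakepkg -m tkit/nfs/nfs -m sgesap/dbinstance dbnfsC11.conf
```

This is the typical small and medium size setup in which the database package also exports the file systems needed by the SAP instances.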
SGeSAP provides additional submodules that are transparently included when creating packages
with the above-mentioned modules. Explicitly using submodules during package creation is not
allowed.
NOTE:
HP recommends that any SGeSAP configuration use only modular-based packages.
For more information on modular package creation, see the following sections in Chapter 5
"Clustering SAP Netweaver using SGeSAP packages" (page 49):
• Creating SGeSAP package with easy deployment
• Creating SGeSAP package with guided configuration using Serviceguard Manager
2.2 Configuration restrictions
The following are the configuration restrictions:
• A HANA package has the sole purpose of clustering a single HANA database instance. You
  cannot add other SGeSAP modules to HANA packages.
• Configure HANA packages on nodes that are certified as HANA scale-up appliance servers.
  No other SGeSAP packages are allowed to run on these servers.
• Only one clustered HANA system is allowed per cluster. The sgesap/sapinstance module
  must not be used for Diagnostic Agents.
• It is not allowed to specify a single SGeSAP package with two database instances in it.
• It is not allowed to specify a single SGeSAP package with a Central Service Instance [A]SCS
  and its corresponding Replication Instance ERS.
• The Diagnostic Agent instances are not mandatory for SAP line-of-business processes, but they
  are installed on the relocatable IP address of the corresponding instance. The instances must
  fail over with the relocatable IP address. You must specify the sgesap/sapextinstance
  module. Diagnostic Agent start and stop issues during failover are not critical for the overall
  success of the package failover.
• It is not a requirement, but it can help to reduce the complexity of a cluster setup if SCS and
  ASCS are combined in a single package. If this setup is selected, the failure of one of the two
  instances also causes the failover of the other, healthy instance. This might be tolerable in cases
  in which SAP replication instances are configured.
• The sgesap/sapinstance packages can identify the state of their corresponding
  sgesap/dbinstance package in the same cluster without explicitly configuring Serviceguard
  package dependencies. This information is used, for example, to delay SAP instance package
  startup while the database is starting in a separate package but is not yet ready to accept
  connections.
Restrictions applicable for the dual-purpose SAP HANA System Replication setup are:
• Additional storage hardware is required to create independent I/O channels and subsystems
  that do not interfere with the replication infrastructure.
• Do not pre-load column tables of the replication system into SAP HANA system memory.
• All additional instances on the secondary node must be stopped before triggering a takeover.
2.3 Example 1: Robust failover using the one package concept
In a one-package configuration, the database, NFS, and SAP SPOFs run on the same node at all
times and are configured in one SGeSAP package. Other nodes in the Serviceguard cluster function
as failover nodes for the primary node on which the system runs during normal operation.
Figure 1 One-package failover cluster concept
[Figure: the dbciC11 package runs on Node 1 of an SGeSAP cluster; in the event of a failure it moves to Node 2, where QAS systems and Dialog Instances are shut down to free the required resources. Both nodes attach to shared disks and serve Application Servers over the LAN.]
Maintaining an expensive idle standby is not required, because SGeSAP allows utilizing the
secondary node(s) with different instances during normal operation. A common setup installs one
or more non-mission-critical SAP systems on the failover nodes, typically SAP Consolidation, Quality
Assurance, or Development systems. They can be gracefully shut down by SGeSAP during failover
to free up the computing resources for the critical production system. For modular packages, the
sgesap/sapextinstance module can be added to the package to allow specifying this kind
of behavior.
Development environments tend to be less stable than production systems. This must be taken into
consideration before mixing these use-cases in a single cluster. A feasible alternative would be to
install Dialog Instances of the production system on the failover node.
Figure 2 Visualization of a one-package cluster concept in Serviceguard Manager
If the primary node fails, the database and the Central Instance fail over and continue functioning
on an adoptive node. After failover, the system runs without any manual intervention. All redundant
Application Servers and Dialog Instances, even those that are not part of the cluster, can stay up
or can be restarted (triggered by a failover). A sample configuration in Figure 2 (page 13) shows
node1 with a failure, which causes the package containing the database and Central Instance to
fail over to node2. A Quality Assurance System and additional Dialog Instances are shut down
before the database and Central Instance are restarted.
2.4 Example 2: A mutual failover scenario
If you are planning a simple three-tier SAP layout in a two node cluster, use the SGeSAP mutual
failover model. This approach distinguishes two SGeSAP packages—one for the database SPOF
and the other for the SAP SPOFs as defined above. In small and medium size environments, the
database package gets combined with NFS toolkit server functionality to provide all filesystems
that are required by the software in both packages. During normal operation, the two packages run on different nodes of the cluster. The major advantage of this approach is that a failed SAP package never causes a costly failover of the underlying database, because the database is kept in a separate package.
The failover process results in downtime that typically lasts a few minutes, depending on the work
in progress when the failover takes place. A main portion of downtime is needed for the recovery
of a database. The total recovery time of a failed database cannot be predicted reliably.
By tuning the Serviceguard heartbeat on a dedicated heartbeat LAN, it is possible to achieve
failover times in the range of about a minute or two for a ci package that contains a lightweight
[A]SCS instance without database.
A cluster can be configured in a way that two nodes back up each other. The basic layout is
depicted in Figure 3 (page 14).
Figure 3 Two-package failover with mutual backup scenario
[Figure content: SGeSAP cluster with mutual failover. Node 1 runs the dbC11 package and Node 2 runs the ciC11 package on shared disks; the DB and CI packages can fail and recover independently. Both nodes connect via the LAN to the Application Servers.]
It is a best practice to base the package naming on the SAP instance naming conventions whenever
possible. Each package name must also include the SAP System Identifier (SID) of the system to
which the package belongs. If similar packages of the same type get added later, they have a
distinct namespace because they have a different SID.
For example, a simple mutual failover scenario for an ABAP application defines two packages,
called dbSID and ascsSID (or ciSID for old SAP releases).
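A hypothetical command sequence for creating the two packages of such a mutual failover setup for SID C11 might look as follows (module and file names are assumptions; the exact module set depends on the SGeSAP and NFS toolkit releases):

```
# Generate package configuration templates following the naming convention above
cmmakepkg -m sgesap/dbinstance -m tkit/nfs/nfs dbC11.conf
cmmakepkg -m sgesap/sapinstance ascsC11.conf
# After editing the generated files, apply them to the cluster:
cmapplyconf -P dbC11.conf -P ascsC11.conf
```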
2.5 Example 3: Follow-and-push with automated enqueue replication
If an environment has very high demands on guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism, ABAP and/or JAVA System Central Service Instances can fail over without impacting ongoing transactions of the Dialog Instances.
NOTE: It only makes sense to activate Enqueue Replication for systems that have Dialog Instances
that run on nodes different from the primary node of the System Central Service package.
A standard SAP Enqueue Service maintains a table of exclusive locks that are temporarily granted to ongoing transactions. This mechanism avoids inconsistencies that could be caused by parallel transactions accessing the same data in the database simultaneously. If the Enqueue Service fails, the table with all granted locks is lost. After package failover and restart of the Enqueue Service, all Dialog Instances are notified that the lock table content was lost. In reaction, they cancel ongoing transactions that still hold granted locks. These transactions need to be restarted.
Enqueue Replication provides a concept that prevents a failure of the Enqueue Service from impacting the Dialog Instances. Transactions no longer need to be restarted. The Enqueue Server can copy its memory content to a Replicated Enqueue Service that runs as part of an Enqueue Replication Service Instance (ERS) on a remote host. This is a realtime copy mechanism that ensures the replicated memory accurately reflects the status of the Enqueue Server at all times.
There might be two ERS instances for a single SAP system, replicating SCS and ASCS locks separately.
Figure 4 Follow-and-push failover concept for ABAP and JAVA instances
[Figure content: SGeSAP cluster with automated Replicated Enqueue and a SAP enqor multi-node package. Node 1 runs ascsC11, scsC11, and dbC11 on shared disks. The enqueue instances push their lock tables to ers00C11 on Node 2 and ers01C11 on Node 3, and on failover follow their replication instances. Dialog Instances d04C11 (with JD04) and d03C11 run on Nodes 2 and 3; the LAN connects to the Application Servers.]
NOTE: Enqueue Services that are configured as an integral part of an ABAP DVEBMGS Central Instance cannot utilize the replication features. The DVEBMGS Instance needs to be split into a standard Dialog Instance and an ABAP System Central Service Instance (ASCS).
The SGeSAP packaging of the ERS Instance provides startup and shutdown routines, failure detection, split-brain prevention, and quorum services to the mechanism. SGeSAP also delivers a service monitor, sapenqor.mon, that is meant to be configured as the sole part of an enqor multi-node package. This MNP maintains a generic resource of type EnqOR (Enqueue Operation Resource) for each Replicated Enqueue. An EnqOR resource is referred to by the system as sgesap.enqor_<SID>_ERS<INSTNR>.
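As an illustrative sketch, the EnqOR resource for ERS instance 00 of SAP system C11 could appear in the affected package configurations as follows (the evaluation attribute shown is an assumption that depends on the Serviceguard release; see the generic resource documentation of your release):

```
generic_resource_name              sgesap.enqor_C11_ERS00
generic_resource_evaluation_type   during_package_start
```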
Setting up the enqor MNP enables a protected follow-and-push behavior for the two packages that contain the enqueue and its replication. As a result, an automatism makes sure that the Enqueue and its Enqueue Replication Server are never started on the same node initially. The Enqueue does not accidentally invalidate the replication by starting on a non-replication node while replication is active elsewhere. The package with the replication server can be moved to any free node in a multi-node cluster without a need to reconfigure the enqueue package failover policy.
Figure 5 Visualization of a follow-and-push cluster in Serviceguard Manager
During failover of the Enqueue, its replication is located dynamically and the Enqueue restarts on the currently active replication node. The Enqueue synchronizes with the local replication server. As a next step, the package with the replication service shuts down automatically and restarts on a healthy node, if available. In a multi-node environment, this implements a self-healing capability for the replication function. If no replication package is running, the Enqueue fails over to any node from the list of statically configured hosts.
Two replication instances are required if Enqueue Replication Services are to be used for both the JAVA stack and the ABAP stack. Several configuration options derive from this approach. In most cases, it is the best practice to create separate packages for ASCS, SCS, and the two ERS instances. It is also supported to combine the replication instances within one SGeSAP package. SCS and ASCS can be combined in one package only if the two replication instances are combined in another package at the same time. Combining ASCS and SCS in one package while keeping the two ERS instances in two separate packages is not supported, because situations could arise in which a failover of the combined ASCS/SCS package is not possible. Finally, ASCS cannot be combined with its ERS instance (AREP) in the same package. For the same reason, SCS cannot be combined with its ERS instance (REP).
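The combination rules above can be condensed into a small decision sketch (purely illustrative; this is not SGeSAP code, and the instance labels are shorthand):

```shell
# Sketch of the supported combinations of Central Service and replication
# instances within one SGeSAP package, as described above.
# Argument: comma-separated contents of a proposed package.
package_combo_supported() {
  case "$1" in
    ascs,scs)  echo "only if both ERS instances share one package" ;;
    ascs,aers) echo "unsupported" ;;   # ASCS with its own ERS (AREP)
    scs,rep)   echo "unsupported" ;;   # SCS with its own ERS (REP)
    *)         echo "supported" ;;
  esac
}
package_combo_supported ascs,scs
```

The sketch mirrors the rule that a combined ASCS/SCS package is only valid when the two ERS instances are combined as well, and that an enqueue must never share a package with its own replication.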
SAP offers two possibilities to configure Enqueue Replication Servers:
1. SAP self-controlled, using High Availability polling, with Replication Instances on each cluster node (active/passive).
2. A complete High Availability failover solution, controlled with one virtualized Replication Instance per Enqueue.
SGeSAP implements the second concept and avoids costly polling and complex data exchange
between SAP and the High Availability cluster software.
There are several SAP profile parameters that are related to the self-controlled approach. Most of
these parameters have names that start with the string enque/enrep/hafunc_. They will not
have any effect in SGeSAP clusters.
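To verify that none of the self-controlled parameters are set, the instance profiles can be searched; the following command is a sketch (the profile path uses the example SID C11):

```
grep 'enque/enrep/hafunc_' /sapmnt/C11/profile/*
```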
2.6 Example 4: Realization of the SAP-recommended SPOF isolation
SAP offers a formal HA cluster API certification for Netweaver 7.3x and above. SGeSAP implements the certification requirements as specified by SAP. The approach requires isolating the SAP Central Service software SPOFs in packages separate from non-SAP software packages such as the NFS and database packages. The obvious advantage of this approach is that a failing software component never causes a failover of healthy software components, because they are not configured in the same package.
Figure 6 Visualization of the Serviceguard cluster layout for SAP certification
Correctly set up clusters of this type are capable of providing the highest level of availability that is technically possible. SGeSAP provides tools to master the complexity of SAP-recommended cluster configurations.
NOTE: SGeSAP provides an easy deployment functionality (see deploysappkgs(1)) that generates all required package configuration files for a SAP System. The tool has an option that allows single-step creation of configuration files that are compliant with the SAP certification requirements. This minimizes the likelihood of running misconfigured systems.
2.7 Example 5: Dedicated failover host
More complicated clusters that consolidate a couple of SAP applications often have a dedicated
failover server. While each SAP application has its own set of primary nodes, there is no need to
also provide a failover node for each of these servers. Instead, there is one commonly shared
secondary node that is capable of replacing any single failed primary node. Often, some or all
of the primary nodes are partitions of a large server.
Figure 7 Dedicated failover server
[Figure content: SGeSAP cluster with a dedicated failover server and Replicated Enqueue. Primary nodes run the packages jdbscsC11, dbascsC12, ..., dbscsCnn on shared disks; the dedicated backup server hosts the replication packages ers00C11, ers10C12, ..., ersnnCnn together with Dialog Instances. Failover paths lead from all primary partitions to the dedicated backup server.]
Figure 7 (page 18) shows an example configuration. The dedicated failover host can serve many
purposes during normal operation. With the introduction of Replicated Enqueue Servers, it is a
good practice to consolidate a number of Replicated Enqueues on the dedicated failover host.
These replication units can be halted at any time without disrupting ongoing transactions of the systems they belong to. They are sacrificed in emergency conditions, when a failing database and/or Central Service Instance needs the spare resources.
2.8 Example 6: HANA System Replication and Dual-purpose
SAP HANA System Replication is a database replication feature used to deploy highly available
SAP HANA databases. SAP HANA System Replication requires an active primary database system
and a standby secondary database system.
If the secondary database system is located near the primary system, it serves as a failover solution for planned or unplanned downtime (server failures, software failures, storage corruption, and so on). If the secondary system is located remotely, it provides a disaster recovery solution. Using System Replication, the database content of the production system is replicated from the primary to the secondary SAP HANA database system. The primary and secondary system are paired in a synchronous, memory-synchronous, or asynchronous replication relationship. The production system is the primary system from which replication takes place. Netweaver instances that connect to the HANA database must run outside the cluster nodes running the HANA primary and secondary instances.
In an SGeSAP configuration for SAP HANA, the following packages are configured:
• A primary package that makes sure that one of the systems works as the production system.
• A secondary package that makes sure that the other system replicates.
The HANA instances are installed on node-local storage and do not move between cluster nodes during package failover. Instead of failing over, the mode of the secondary is switched to primary and vice versa. The cluster controls client access capabilities to the production system and coordinates takeover and role-reversal operations that promote instances to primary or demote them to secondary.
HANA System Replication and dual-purpose configurations allow the operation of a production HANA instance together with a nonproduction HANA instance, with certain restrictions. A nonproduction HANA instance can be installed on the replication node (node2). However, installing and running additional SAP HANA instances on the primary node (node1) is not supported. Hence, it is not possible to do a complete role-reversal in which node1 takes over the role of node2 for both production and nonproduction instances. If the primary package fails over to node2 while node2 runs the secondary instance and a nonproduction HANA system, SGeSAP ensures that the nonproduction systems on node2 are halted before it triggers the takeover procedure that promotes the secondary HANA instance to the production primary instance.
HANA scale-out systems can be clustered to realize many-to-many failover in a site-aware disaster tolerance architecture. This setup requires a HANA-specific site controller failover package and two pairs of primary and secondary multi-node packages (one pair for each site) for the HANA instances. The solution can be combined with cold standby instances and SAP HANA host auto-failover to add a layer of high availability.
Full heartbeat loss between sites makes it impossible to judge whether the remote site is still operational. The heartbeat loss might be caused by pure interconnectivity issues that leave the remote system up and running, or by a disaster that has stopped the whole remote system; uncertainty about what happened to the other site exists in both directions. This is a potentially dangerous situation: if a HANA primary instance in one site does not shut down properly and a HANA secondary instance in the second site becomes the active primary replacement due to an automatically triggered takeover operation, inconsistencies can occur. Therefore, HANA System Replication clusters must be connected to HP Serviceguard Quorum Services. The quorum server software is delivered as part of HP Serviceguard, but it must be installed on a separate server or in a separate cluster.
A single quorum server installation can serve up to a maximum of 150 clusters or 300 cluster nodes. For high availability clusters without shared SAN infrastructure, the quorum server must be directly connected to the HANA nodes via a dedicated quorum bond network that avoids additional hops and uplinks. The quorum server can reside locally in the same rack as the HANA hosts. For disaster tolerance clusters, however, the quorum server must be located in a third datacenter that is remote from the HANA primary and HANA secondary nodes. The remote quorum server network connection must use a completely separate technology stack, as different and independent from the workload and heartbeat carrying networks as possible. With regard to the quorum policy, it is recommended to enable the Smart Quorum feature of HP Serviceguard Enterprise. If scale-out HANA clustering coexists with SAP host auto-failover via cold standby instances, use of the Smart Quorum is mandatory.
The standard quorum mechanism ensures that exactly one HANA instance survives a full heartbeat loss between the primary and the secondary datacenter, if both instances last ran on an equal number of cluster nodes. It serves as a tiebreaker to prevent split-brain situations in cases where one site does not have more than 50% of the active nodes after the connectivity is lost. After such a balanced split, the surviving site is chosen randomly; it might be either the primary site or the secondary site. A site running more than 50% of the active nodes is a guaranteed winner of the quorum process. A site that has less than 50% of the active nodes after heartbeat loss always shuts down.
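The standard quorum arithmetic described above can be condensed into a small sketch (purely illustrative; Serviceguard's actual quorum protocol is more involved):

```shell
# Sketch of the standard quorum rule: the site with more than 50% of the
# active nodes wins; a balanced split goes to the tiebreaker.
# Arguments: number of active nodes per site after the heartbeat loss.
quorum_winner() {
  a=$1; b=$2
  if [ "$a" -gt "$b" ]; then
    echo "site A survives"                      # majority of active nodes
  elif [ "$b" -gt "$a" ]; then
    echo "site B survives"
  else
    echo "balanced split: tiebreaker decides"   # random without Smart Quorum
  fi
}
quorum_winner 2 1   # prints "site A survives"
```

With Smart Quorum, described next, the random tiebreaker outcome in the balanced case is replaced by a preference for the running primary instance.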
The Smart Quorum mechanism ensures that a running HANA scale-up or scale-out primary instance survives a full heartbeat loss between the primary and the secondary datacenter. The random choice of the standard quorum mechanism is eliminated in this case. The Smart Quorum helps to avoid aborting or delaying ongoing business transactions due to unnecessary failover operations. Heartbeat loss between sites does not necessarily render the HANA primary instance useless, and a failover to the secondary site is not always mandatory.
With Smart Quorum, scale-out primary instances are prioritized over secondary instances, regardless of whether they run on less than 50% of the active nodes. Scale-out instances can end up on less than 50% of the active nodes due to preceding single-node outage incidents. Subsequent cold standby activations via SAP host auto-failover keep the instance running. Such cold standby activations must not impact the automated DR procedures.
With Smart Quorum, a scale-out secondary instance still applies for quorum even if it runs on less than 50% of the active cluster nodes, for example due to preceding host auto-failover incidents. It remains available as a takeover candidate after a site switch.
Figure 8 Serviceguard concept for HANA scale-out
[Figure content: SADTA cluster with a site controller (sico) failover package. Site A (Production) runs the HANA primary MNP of site A on HANA Nodes 1 to 3 (active) with Node 4 as standby; the HANA secondary MNP of site A is halted there. Site B (Replication) runs the HANA secondary MNP of site B on its HANA Nodes 1 to 3 (active) with Node 4 as standby; the HANA primary MNP of site B is halted there. On every node, a sapstartsrv agent is addressed through the SAP HA API. Each site has its own storage (Storage 1 to 3) and an HA NFS system with an NFS failover node.]
2.9 Dedicated NFS packages
Small clusters with only a few SGeSAP packages usually provide NFS by combining the NFS toolkit package functionality with the SGeSAP packages that contain a database component. The NFS toolkit is a separate product with a set of configuration and control files that must be customized for the SGeSAP environment; it needs to be obtained separately. In this setup, NFS is delivered in a distributed fashion, with each database package serving its own file systems. By consolidating this into one package, all NFS serving capabilities can be removed from the database packages. In complex, consolidated environments with several SGeSAP packages, it is of significant help to use one dedicated NFS package instead of blending the NFS functionality into existing packages.
A dedicated SAPNFS package allows access to shared filesystems that are needed by more than one SAP component. Typical filesystems served by SAPNFS are common SAP directories such as /usr/sap/trans or /sapmnt/<SID>, or, for example, the global executable directory of MaxDB 7.7. The MaxDB client libraries are part of the global MaxDB executable directory, and access to these files is needed by APO and liveCache at the same time. Beginning with the isolated installations of MaxDB 7.8, each database installation keeps a separate client.
SGeSAP setups are designed to avoid NFS with heavy traffic on shared filesystems if possible. For
many implementations, this allows the use of one SAPNFS package for all NFS needs in the SAP
consolidation.
2.10 Virtualized dialog instances for adaptive enterprises
Databases and Central Instances are Single Points of Failure, whereas ABAP and JAVA Dialog Instances can be installed in a redundant fashion. In theory, redundant Dialog Instances therefore contain no SPOFs. In practice, this does not mean that Dialog Instances are free of any SPOFs. A simple example of the need for a SAP Application Server package is to protect dedicated batch servers against hardware failures.
Any number of SAP Application Server instances can be added to a package that uses the module
sgesap/sapinstance.
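As an illustrative sketch, a Dialog Instance package configuration could contain entries like the following (attribute names follow the sgesap/sapinstance template; verify them against the template shipped with your SGeSAP release, and the instance and hostname values are examples):

```
module_name                               sgesap/sapinstance
sgesap/sap_global/sap_system              C11
sgesap/sapinstance/sap_instance           D03
sgesap/sapinstance/sap_virtual_hostname   reloc3
```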
Dialog Instance packages allow a simple approach to achieve abstraction from the hardware layer. Dialog Instance packages can be shifted between servers at any given time. This might be desirable if CPU resource consumption becomes poorly balanced due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this. A Dialog Instance can also be moved to a standby host to allow planned hardware maintenance of the node it was running on.
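Moving a Dialog Instance package for planned maintenance could look like the following sketch (package and node names are examples):

```
cmhaltpkg d03C11              # halt the Dialog Instance package
cmrunpkg -n node2 d03C11      # restart it on another cluster node
cmmodpkg -e d03C11            # re-enable automatic failover (switching)
```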
To approximate this flexibility without packages, you can install Dialog Instances on every host and activate them as required. This might be a feasible approach for many purposes, and it saves the need to maintain virtual IP addresses for each Dialog Instance. However, SAP users can unintentionally create additional short-term SPOFs during operation if they reference a specific instance via its hostname, for example during batch scheduling. With Dialog Instance packages, the system becomes invulnerable to this type of user error.
Dialog Instance virtualization packages provide high availability and flexibility at the same time.
The system becomes more robust using Dialog Instance packages. The virtualization allows moving
the instances manually between the cluster hosts on demand.
2.11 Handling of redundant dialog instances
Non-critical SAP Application Servers can be run on HP-UX, SLES or RHEL Linux application server
hosts. These hosts do not need to be part of the Serviceguard cluster. Even if the additional SAP
services are run on nodes in the Serviceguard cluster, they are not necessarily protected by
Serviceguard packages.
All non-packaged ABAP instances are subsequently called Additional Dialog Instances or sometimes
synonymously Additional SAP Application Servers to distinguish them from mission-critical Dialog
Instances. An additional Dialog Instance that runs on a cluster node is called an Internal Dialog
Instance. External Dialog Instances run on HP-UX or Linux hosts that are not part of the cluster. Even
if Dialog Instances are external to the cluster, they may be affected by package startup and shutdown.
For convenience, Additional Dialog Instances can be started, stopped or restarted with any SGeSAP
package that secures critical components. Some SAP applications require the whole set of Dialog
Instances to be restarted during failover of the Central Service package. This can be triggered with
SGeSAP.
It helps to understand the concept better if one keeps in mind that all of these operations on non-clustered instances are inherently non-critical. If they fail, the failure has no impact on the ongoing package operation. A best-effort attempt is made, but there is no guarantee that the operation succeeds. If such operations need to succeed, package dependencies in combination with SGeSAP Dialog Instance packages need to be used.
Dialog Instances can be marked to be of minor importance. They are then shut down if a critical component fails over to the host they run on, to free up resources for the non-redundant packaged components.
The described functionality can be achieved by adding the module sgesap/sapextinstance
to the package.
NOTE: Declaring non-critical Dialog Instances in a package configuration does not add them to the components that are secured by the package. The package does not react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages described in the previous section.
If Additional Dialog Instances are used, follow these rules:
• Use saplogon with Application Server logon groups. When logging on to an application server group with two or more Dialog Instances, you do not need a different login procedure even if one of the Application Servers of the group fails. Also, using logon groups provides workload balancing between Application Servers.
• Avoid specifying a destination host when defining a batch job. This allows the batch scheduler to choose a batch server that is available at the start time of the batch job. If you must specify a destination host, specify the batch server running on the Central Instance or a packaged Application Server Instance.
• Print requests stay in the system until a node is available again and the Spool Server has been restarted. These requests can be moved manually to other spool servers if one spool server is unavailable for a long period of time. An alternative is to print all time-critical documents through the highly available spool server of the central instance.
• Configuring the Update Service as part of the packaged Central Instance is recommended. Consider using local update servers only if performance issues require it. In this case, configure Update Services for application services running on the same node. This ensures that the remaining SAP Instances on different nodes are not affected if an outage occurs on the Update Server. Otherwise, a failure of the Update Service leads to subsequent outages at different Dialog Instance nodes.
3 SAP cluster administration
In SGeSAP environments, SAP application instances are no longer considered to run on dedicated
(physical) servers. They are wrapped up inside one or more Serviceguard packages and packages
can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard
packages provide a server virtualization layer. The virtualization is transparent in most aspects,
but in some areas special considerations apply. This affects the way a system is administered.
This chapter discusses the following topics:
• Performing cluster administration in the System Management Homepage (SMH)
• Performing SAP administration with SAP's startup framework
• Change management activities
3.1 Performing cluster administration on the System Management
Homepage (SMH)
SGeSAP packages can be administered using HP SMH. After logging in to SMH, click the Serviceguard icon to access the Serviceguard Manager pages. Choose a cluster that has SGeSAP installed. In the Map and Large Scale Grid views, from the View window on the right side of a page, move the cursor over a package icon to display the package information pop-up window. Each SGeSAP package is identified as such in the package pop-up information under the Toolkits heading. In the Table view, the toolkit is listed in the Type column of the Package Status table.
Figure 9 Pop-up information for SGeSAP toolkit package
To run, halt, move, or enable maintenance for an SGeSAP toolkit package:
• From the View window on the right side of the Serviceguard Manager Main page, right-click a package icon to bring up the Operations menu, then click Run Package, Halt Package, or Enable Package Maintenance to bring up the screen(s) that allow you to perform each of these operations.
• You can also perform administrative tasks by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen. Select the package on which you want to perform administrative tasks, and then click Administration in the menu bar to display a drop-down menu of administrative tasks. Click the task you want to perform to bring up the screen(s) associated with performing that task.
NOTE: Enabling package maintenance allows you to temporarily disable the cluster functionality for the SAP instances of an SGeSAP package. While maintenance mode is activated, the configured SGeSAP monitoring services recognize whether an instance was stopped manually, and no failover occurs. SAP support personnel might request or perform maintenance mode activation as part of reactive support actions. Similarly, you can use the Serviceguard Live Application Detach (LAD) mechanism to temporarily disable the cluster for the whole node.
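The same operations are also available from the command line; the following session is a sketch (the package name ascsC11 is an example, and the maintenance-mode flag should be verified against cmmodpkg(1) of your Serviceguard release):

```
cmrunpkg  ascsC11        # run the package
cmhaltpkg ascsC11        # halt the package
cmmodpkg -m on  ascsC11  # enable package maintenance mode (assumed flag)
cmmodpkg -m off ascsC11  # disable package maintenance mode
```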
Figure 10 Package administration tasks
To view SGeSAP toolkit configuration settings:
• From the View window on the right-hand side of the Serviceguard Manager Main page, left-click the package name below a package icon to bring up the Package Properties screen for that package. The Package Properties screen contains detailed package configuration information.
• The Package Properties screen can also be accessed by clicking the Packages tab on the Main page to bring up the Packages screen, then clicking a package name in the Package Name column.
• To return to the Main page, click Cluster Properties in the bottom right-hand corner of the Package Properties screen.
Figure 11 sgesap/sapinstance module configuration overview for a replication instance
To monitor an SGeSAP toolkit package:
• Check the badges next to the SGeSAP package icons in the main view. Badges are tiny icons displayed to the right of the package icon. Any Serviceguard failover package can have Status, Alert, and HA Alert badges associated with it. In addition to the standard Serviceguard alerts, SGeSAP packages report SAP application-specific information via this mechanism. The additional data provides a more complete picture of the current and expected future availability level of the SAP application. The Alert and HA Alert badges are clickable icons; they are linked to the corresponding component's alert page.
Figure 12 Package alerts
To update (edit) an SGeSAP toolkit package configuration:
• From the View window on the right-hand side of the Serviceguard Manager Main page, right-click a package icon to bring up the Operations menu, then click Edit a Package to bring up the first in a series of screens where you can edit package properties.
• A package can also be edited by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen, then clicking the package you want to update in the Package Name column to bring up the Package Properties screen. Click Edit This Package in the upper left-hand corner of the Package Properties screen to bring up the first in a series of screens where you can edit package properties.
3.2 Performing SAP administration with SAP's startup framework
This section describes how a clustered SAP system behaves differently from a non-clustered SAP system configured with the SAP startup framework, and how it responds to SAP Netweaver 7.x standard administration commands issued manually from outside of the cluster environment. These commands can be sapcontrol operations triggered by SAP system administrators, that is, sidadm users who are logged in to the Linux operating system, or remote SAP basis administration commands via the SAP Management Console (SAP MC) or SAP's plugin for the Microsoft Management Console (SAP MMC).
The SAP Netweaver 7.x startup framework has a host control agent software process (hostctrl) that
runs on each node of the cluster and a sapstart service agent software process (sapstartsrv) per
SAP instance. SGeSAP does not interfere with the host control agents, but interoperates with the
sapstart service agents during instance start, stop and monitoring operations.
NOTE: startsap and stopsap operations must not be used in clusters that have SAP software
monitors configured. sapcontrol operations must be used instead. For more information on how
to use sapcontrol with old SAP releases, see SAP note 995116.
If SAP 4.x and 6.x startsap and stopsap commands are executed, a configured monitor cannot judge whether an instance is down because of a failure or because of a stopsap operation. SGeSAP therefore triggers an instance restart or an instance failover operation in reaction to the execution of the stopsap script. In contrast, the SAP Netweaver 7.x standard administration commands communicate with the SGeSAP environment.
The startsap/stopsap scripts are not recommended by SAP as of Netweaver 7.x and must
not be used anymore. It is recommended to configure startup framework agents for older SAP
releases, too.
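For instance, instead of startsap and stopsap, administrators would use sapcontrol calls like the following (instance number 00 is an example):

```
sapcontrol -nr 00 -function Start            # start the instance
sapcontrol -nr 00 -function Stop             # stop the instance
sapcontrol -nr 00 -function GetProcessList   # show instance process status
```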
Without a cluster, each sapstart service agent is statically registered in the SAP configuration of
the host on which its SAP instance was installed. In a cluster, such registrations become dynamic.
The cluster package start operations perform registration of the required agents and the cluster
package shutdown operations include deregistration routines. After cluster package start, all
required startup agents are registered and running. After cluster package halt, these startup agents
are halted and deregistered. Consequently, an attempt to start an SAP startup agent after
bringing down the instance package that it belongs to fails, because the agent is no longer registered.
NOTE:
• sapcontrol –nr <instnr> –function StartService <SID> operations are usually
not required in SGeSAP environments. They fail if the package of the instance is down. A
clustered SAP instance might be accompanied by one or more SGeSAP service monitors
that regularly check whether the instance is up and running and whether it is responsive to
service requests. These monitors utilize the sapstart service agents. For the monitors to
continue to operate reliably, it is mandatory that the sapstart services remain running.
• sapcontrol –nr <instnr> -function StopService operations degrade the cluster
monitoring capabilities. SGeSAP has fallback mechanisms in the monitoring routines that
do not require a running startup agent, but the monitoring becomes less reliable without the
agent. To reestablish reliable monitoring capabilities and to enable remote administration
console access, the cluster might choose to restart manually halted startup service agents
immediately. As a result, sapcontrol –nr <instnr> -function StopService operations
for the software single points of failure have the same effect as sapcontrol –nr <instnr>
-function RestartService operations.
The cluster awareness of the sapstart service agent itself is activated by specifying the SGeSAP
cluster library in the profile of the corresponding SAP instance:
For SLES: service/halib=/opt/cmcluster/lib/saphpsghalib.so
For RHEL: service/halib=/usr/local/cmcluster/lib/saphpsghalib.so
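The profile change can be scripted. The following is a minimal sketch, not part of the product; the profile path, file name, and library location shown are assumptions that must be adapted to the actual SID, instance, and distribution:

```shell
# Hedged sketch: idempotently ensure the SGeSAP HA library is configured
# in an SAP instance profile. PROFILE and HALIB are placeholder assumptions;
# RHEL would use /usr/local/cmcluster/lib/saphpsghalib.so instead.
# For demonstration, a scratch file stands in for the real instance profile.
PROFILE="${PROFILE:-$(mktemp)}"
HALIB="${HALIB:-/opt/cmcluster/lib/saphpsghalib.so}"

add_halib() {
    if grep -q '^service/halib' "$PROFILE"; then
        echo "service/halib already set in $PROFILE"
    else
        echo "service/halib = $HALIB" >> "$PROFILE"
        echo "service/halib added to $PROFILE"
    fi
}

add_halib   # first call appends the parameter
add_halib   # second call is a no-op, so reruns are safe
# Remember: restart the sapstart service agent for the parameter
# to become effective.
```

Because the check is idempotent, the snippet can be run repeatedly, for example from a node preparation script, without duplicating the profile entry.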
With this parameter active, the sapstart service agent notifies the cluster software of any
triggered instance halt. Planned instance downtime then does not require any preparation of the cluster.
Only the sapstart service agent needs to be restarted for the parameter to become effective.
During startup of the instance startup framework, an SAP instance with the SGeSAP HA library
configured prints the following messages in the sapstartsrv.log file located in the instance
work directory:
SAP HA Trace: HP SGeSAP<versioninfo> (SG) <versioninfo> cluster-awareness
SAP HA Trace: Cluster <clustername> is up and stable
SAP HA Trace: Node <hostname> is up and running
SAP HA Trace: SAP_HA_Init returns: SAP_HA_OK ...
During startup of the instance startup framework, a SAP instance without the SGeSAP HA library
configured, prints the following message in the sapstartsrv.log file.
No halib defined => HA support disabled
NOTE: Within a single Serviceguard package it is possible to mix instances having the HA library
configured with instances not having HA library configured.
Subsequent startup or shutdown of an instance triggers the startup framework to dynamically
discover a package that has the instance configured. A corresponding sapstartsrv.log entry
is as follows:
SAP HA Trace: Reported package name is <packagename>
CAUTION: It might not be safe to stop an instance that has HA support disabled. Cluster software
monitors will cause a failover of the halted instance and all other software instances configured in
the same cluster package to the secondary node. You can stop the instance if software monitoring
is not used or if package maintenance mode is activated. Ask the cluster administrator for details
on a specific configuration.
While an SAP instance package is running and the HA library is configured, the <sid>adm user
can issue the following command:
sapcontrol -nr <instnr> -function Stop
The SAP instance shuts down as if there were no cluster configuration. The cluster package will
not report any error and continues to run. All instance filesystems remain accessible. The
sapstartsrv.log file reports:
trusted unix domain socket user is stopping SAP System …
SAP HA Trace: Reported package name is ERS41SYA
SAP HA Trace: Reported resource name is SYA_ERS41
SAP HA Trace: SAP_HA_FindSAPInstance returns: SAP_HA_OK …
SAP HA Trace: SAP_HA_StopCluster returns: SAP_HA_STOP_IN_PROGRESS
Depending on the service monitors that are configured for the instance, one or more operations
are logged to the package log file $SGRUN/log/<packagename>.log in the subsequent
monitoring intervals.
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped):
Manual start operation detected for DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped):
Manual stop in effect for DVEBMGS41
Other functions provided by sapcontrol for instance shutdowns work in a similar way.
The HP Serviceguard Manager displays a package alert (see Figure 12 (page 25)) that lists the
manually halted instances of a package. The SGeSAP software service monitoring for a halted
instance is automatically suspended until the instance is restarted.
An SGeSAP package configuration parameter allows blocking of administrator-driven instance
stop attempts for the SAP startup framework. If a stop operation is attempted, the
sapstartsrv.log file contains the following entries:
trusted unix domain socket user is stopping SAP System …
SAP HA Trace: Reported package name is ERS41SYA
SAP HA Trace: Reported resource name is SYA_ERS41
SAP HA Trace: SAP_HA_FindSAPInstance returns: SAP_HA_OK …
SAP HA Trace: sap_stop_blocked=yes is set in package config
SAP HA Trace: The stop request is blocked by the cluster …
NOTE: If the SGeSAP HA library is configured in the SAP instance profile, SAP system
administrators can stop and restart clustered Netweaver instances without interacting with the
cluster software explicitly. Instance status is visualized in the Serviceguard Manager GUI, which
continues to provide cluster administrators with a full picture of which components are up. The
SGeSAP monitoring suspends operation while the instance is manually stopped.
Packages that have several Netweaver instances configured continue to monitor all the instances
that are not manually halted. If any actively monitored instance fails, the whole package fails
over and restarts.
One of the methods to restart a manually halted instance is to issue the following command:
sapcontrol -nr <instnr> -function Start
Any other startup method provided by SAP's sapcontrol command works in a similar way.
Example of messages added to the package log:
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped):
Manual start operation detected for DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped):
Resume monitored operation of DVEBMGS41
If the instance fails to start, the service monitor enters the yellow state. The yellow state is printed
as a warning to the package log and displayed as a package alert in the HP Serviceguard
Manager.
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped):
Resume monitored operation of DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,dispmon_monitors):
WARNING: Dispatcher of DVEBMGS41 - monitor state:YELLOW,2
The service monitor remains in yellow state for up to five monitoring intervals. Then, it changes
to red state and fails the package with the next monitoring interval. If another instance halt
operation is issued while the monitor is in yellow or red state, the monitoring is suspended again
and the package failover is prevented. This occurs regardless of whether the manual halt succeeds
or not. It is an effective way to prevent undesirable failovers.
Issuing a Serviceguard package halt is always possible, whether or not instances are currently
halted. The halt operation causes the cluster to lose any manually halted state of the package.
NOTE: Activating package maintenance mode is a way to pause all SGeSAP service monitors
of a package immediately, but it can only be triggered with Serviceguard commands directly.
While package maintenance mode is active, failover of the package is disabled. Maintenance
mode also works for instances without HA library configured.
3.3 Change management
Serviceguard manages the cluster configuration. Among the vital configuration data are the
relocatable IP addresses and their subnets, the volume groups, the logical volumes and their
mountpoints. If you change this configuration for the SAP system, you have to change and reapply
the cluster configuration accordingly.
3.3.1 System level changes
Do not delete secure shell setup and mutual .rhosts entries of <sid>adm on any node.
Entries in /etc/hosts, /etc/services, /etc/passwd, or /etc/group must be kept consistent
across all nodes.
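One way to verify that these files stay in sync is to compare checksums. The following is a minimal sketch, not an SGeSAP feature; the file list and checksum file location are assumptions, and in a real cluster the checksum file would be collected from a reference node (for example via ssh) and verified on each peer:

```shell
# Hedged sketch: record checksums of cluster-relevant /etc files on a
# reference node, then verify the local copies on every other node.
# FILES and SUMFILE are placeholder assumptions to adapt.
FILES="${FILES:-/etc/hosts /etc/services}"
SUMFILE="${SUMFILE:-/tmp/cluster_etc.md5}"

# On the reference node: record the checksums.
md5sum $FILES > "$SUMFILE"

# On every other node: verify the local copies against the reference.
if md5sum -c --quiet "$SUMFILE"; then
    echo "etc files consistent"
else
    echo "WARNING: etc files differ from reference node"
fi
```

Running such a check periodically, for example before applying cluster configuration changes, helps catch drift between nodes early.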
Directories below /export have an equivalent directory whose fully qualified path comes without
this prefix. These directories are managed by the automounter. NFS file systems get mounted
automatically as needed. Servers outside of the cluster that have External Dialog Instances installed
are set up in a similar way. See /etc/auto.direct for a full list of automounter file systems of
SGeSAP.
It enhances the security of the installation if the directories below /export are exported without
root permissions. The root user cannot modify these directories or their contents. With standard
permissions set, the root user cannot even see the files. If changes need to be done as root, the
equivalent directory below /export on the host the package runs on can be used as access point.
If the system is not configured correctly and you attempt to log in as the <sid>adm user, the system
might hang. The reason is that the /usr/sap/<SID>/SYS/exe directory is in the path of the
<sid>adm user. Without local binaries, this directory contains a link to /sapmnt/<SID>,
which is managed by the automounter file systems. If the package is down, the automounter cannot
contact the host that belongs to the relocatable IP address, and the system hangs. To avoid this,
always retain a local copy of the executables on each node.
NOTE: If the database package with NFS services is halted, you might not be able to log in as
<sid>adm unless you keep a local copy of the SAP binaries using sapcpe.
To allow proper troubleshooting, a more verbose package startup log is located in the
Serviceguard log directory. Check it first in case of problems. The level of information
can be adjusted by changing the log_level package parameter.
If problems with package startup remain, a general debug mode can be activated by creating an
SGeSAP debug flag file:
touch debug_<packagename> in the Serviceguard run directory, which is:
/usr/local/cmcluster/run on RHEL.
/opt/cmcluster/run on SLES.
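Activating and clearing the debug flag can be wrapped in a small script. The following is a minimal sketch; the package name is a placeholder, and for demonstration a scratch directory stands in for the real Serviceguard run directory (/usr/local/cmcluster/run on RHEL, /opt/cmcluster/run on SLES):

```shell
# Hedged sketch: toggle the SGeSAP debug flag for a package.
# SGRUN and PKG are placeholder assumptions to adapt; here a temporary
# directory is used instead of the real Serviceguard run directory.
SGRUN="${SGRUN:-$(mktemp -d)}"
PKG="${PKG:-sapnw_pkg}"

# Activate debug mode: package starts skip SAP/database startup commands
# and service monitors stop reporting failures.
touch "$SGRUN/debug_$PKG"
[ -f "$SGRUN/debug_$PKG" ] && echo "debug mode active for $PKG"

# ... manual troubleshooting of the database and/or SAP software ...

# Deactivate before returning to production: monitors resume immediately.
rm -f "$SGRUN/debug_$PKG"
[ ! -f "$SGRUN/debug_$PKG" ] && echo "debug mode cleared for $PKG"
```

Pairing the create and remove steps in one procedure reduces the risk of a debug flag file being forgotten on a production node.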
The debug mode will allow the package to start until the SAP or database specific start commands
are reached. Any SAP instance or database instance startup will be skipped. Service monitors will
be started, but they do not report failures as long as the debug mode is turned on.
In this mode it is possible to attempt manual startups of the database and/or SAP software. All
rules of manual troubleshooting of SAP instances now apply. For example, it is possible to access
the application work directories of the SAP instance to have a look at the trace files.
CAUTION: Make sure that all debug flag files are removed before a system is switched back
to production use.
NOTE: The debug/partial package startup behavior is different from the Serviceguard package
maintenance mode. The debug file does not put the package into maintenance mode: it does not
disable package failover, but it allows a partial startup of the package. A startup in debug mode
starts all the SGeSAP service monitors, but not the monitored application software.
The monitors suspend execution until the debug file is removed.
It is not required to halt the package before package operations can be tested. If a package halt
operation is issued while the debug file exists, all SAP-specific routines in the package shutdown
logic are executed. Clustered SAP software components that were absent during package startup
in debug mode, but were manually started during subsequent debugging operations, are stopped
with the standard package halt routines.
Make sure to remove the debug file at the end of the debugging operations. If the package is still
running, all monitors begin to work immediately and the package failover mechanism is restored.
3.3.2 SAP software changes
During installation of the SGeSAP Integration for SAP releases with kernel < 7.0, SAP profiles are
changed to contain only relocatable IP addresses for the database as well as the Central Instance.
You can check these using transaction RZ10. In the file DEFAULT.PFL, these entries are altered:
SAPDBHOST = <relocatable_db_name>
rdisp/mshost = <relocatable_ci_name>
rdisp/vbname = <relocatable_ci_name>_<SID>_<INR>
rdisp/enqname = <relocatable_ci_name>_<SID>_<INR>
rdisp/btcname = <relocatable_ci_name>_<SID>_<INR>
rslg/collect_daemon/host = <relocatable_ci_name>
The additional profile parameters SAPLOCALHOST and SAPLOCALHOSTFULL are included
as part of the Instance Profile of the Central Instance. Anywhere SAP uses the local hostname
internally, the name is replaced by the relocatable value <relocatable_ci_name> or
<relocatable_ci_name>.domain.organization of these parameters. Make sure that they
are always defined and set to the correct value. This is vital for the system to function correctly.
Relocatable IP addresses can be used consistently beginning with SAP kernel 6.40. Older releases
use local hostnames in profile names and startup script names. Renamed copies of the files or
symbolic links exist to overcome this issue.
The destination for print formatting, which is done by a Spool Work process, uses the Application
Server name. Use the relocatable name if you plan to use Spool Work processes with your Central
Instance. In this case, no changes are needed after a failover; printing continues to work.
NOTE: Any print job in process at the time of the failure is canceled and must be reissued
manually after the failover. To make a print spooler highly available in the Central Instance, set
the destination of the printer to <relocatable_ci_name>_<SID>_<nr> using the transaction
SPAD. Print all time-critical documents via the highly available spool server of the Central Instance.
Print requests to other spool servers stay in the system after a failure until the host is available again
and the spool server has been restarted. These requests can be moved manually to other spool
servers if the failed server is unavailable for a longer period of time.
Batch jobs can be scheduled to run on a particular instance. Generally speaking, it is better not
to specify a destination host at all. If you follow this rule, the batch scheduler chooses a batch server
that is available at the start time of the batch job. However, if you want to specify a destination
host, specify the batch server running on the highly available Central Instance. The application
server name and the hostname (retrieved from the Message Server) are stored in the batch control
tables TBTCO,TBTCS,.... When a batch job is ready to run, the application server name is
used to start it. Therefore, when using the relocatable name to build the Application Server name
for the instance, you do not need to change batch jobs that are tied to it after a switchover. This
is true even if the hostname, which is also stored in these tables, differs.
Plan to use saplogon to application server groups instead of saptemu/sapgui to individual
application servers. When logging on to an application server group with two or more application
servers, the SAP user does not need a different login procedure if one of the application servers
of the group fails. Using logon groups also provides workload balancing between application
servers.
Within the CCMS you can define operation modes for SAP instances. An operation mode defines
a resource configuration. It can be used to determine which instances are started and stopped and
how the individual services are allocated for each instance in the configuration. An instance
definition for a particular operation mode has the number and types of Work processes as well
as Start and Instance Profiles. When defining an instance for an operation mode, you need to
enter the hostname and the system number of the application server. If you use relocatable names
in the hostname field, the instance remains under control of the CCMS after a failover
without any change.
NOTE: If an instance is running on the standby node in normal operation and is stopped during
the switchover, only configure the update service on a node for Application Services running on
the same node. As a result, the remaining servers, running on different nodes, are not affected by
the outage of the update server. However, if the update server is configured to be responsible for
application servers running on different nodes, any failure of the update server leads to subsequent
outages at these nodes. Configure the update server on a clustered instance. Using local update
servers must only be considered if performance issues require it.
3.4 Ongoing verification of package failover capabilities
The SGeSAP functionality includes SAP specific verifications that test the node-local operating
environment configurations. These checks detect incorrect local settings that might prevent a
successful SAP failover. The routines are executed as part of cmcheckconf(1) and
cmapplyconf(1) commands run on SGeSAP package configurations. The cmcheckconf -P
<pkg_config_file> command can be scheduled at regular intervals to verify the failover
capabilities of already running SGeSAP packages. These tests are executed on the current node
as well as on all reachable, configured failover nodes of the SGeSAP package. The resulting logs
will be merged.
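The regular verification runs mentioned above can be automated with cron. The following is a minimal sketch of a crontab fragment, not part of the product; the cmcheckconf binary path, the package configuration file path, and the log file are assumptions that must be adapted to the installation:

```shell
# Hypothetical crontab entry (edit with crontab -e as root):
# verify failover capability of a running SGeSAP package every night
# at 02:30 and append the merged result logs for later review.
30 2 * * * /usr/local/cmcluster/bin/cmcheckconf -P /etc/cmcluster/conf/sapnw_pkg.config >> /var/log/sgesap_verify.log 2>&1
```

Reviewing the appended log after each run turns the check into an early-warning mechanism for failover problems.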
A cmcheckconf(1) command run performs a complete test only if an SGeSAP package is up
and running. In this case, all file systems are accessible, which allows a complete verification. If the
SGeSAP package is halted, only a subset of the checks can be performed.
NOTE: Successful execution of cmcheckconf(1) command does not guarantee a successful
failover. The currently available functionality cannot replace any regular failover test mechanism.
These checks complement existing tests and are useful to detect issues early.
If required, the execution of SGeSAP cluster verification as part of the cmcheckconf(1) and
cmapplyconf(1) command routines can be deactivated. The existence of a file called ${SGRUN}/
debug_sapverify skips SGeSAP cluster verification for all packages on that cluster node. The
existence of a file ${SGRUN}/debug_sapverify_<packagename> skips verification only for
a single package on that cluster node. Generic check routines and SGeSAP clustering-specific
check routines that are not related to SAP requirements on the local operating environment
configuration are not deactivated; they are still executed as part of both cmcheckconf(1)
and cmapplyconf(1) commands.
The deploysappkgs(1) command is used during initial cluster creation. It is also called after
system level or SAP application configuration change operations to verify if any of the performed
changes must be reflected in the cluster package configuration. deploysappkgs(1) command
is aware of the existing package configurations and compares them to settings of the SAP
configuration and the operating system.
3.5 Upgrading SAP software
SAP rolling kernel switches can be performed in a running SAP cluster exactly as described in the
SAP Netweaver 7.x documentation and support notes.
Upgrading the application version of the clustered SAP application to another supported version
rarely requires changes to the cluster configuration. Usually, SGeSAP automatically detects the
release of the packaged application and treats it appropriately.
A list of supported application versions can be taken from the SGeSAP release note document.
The list of currently installed Serviceguard Solution product versions can be created with the
command:
rpm -qa | grep -i serviceguard
For a safe upgrade of SAP with modular-style packages, switch all impacted SGeSAP packages
into package maintenance mode and perform a partial package start before the first SGeSAP-specific
module is executed. You can then manually handle the SAP startup or shutdown operations
during the upgrade without interference from the Serviceguard cluster software.
deploysappkgs(1) and cmcheckconf(1) issued on the existing packages after upgrade give
hints on whether cluster configuration changes are required. Perform failover tests for all potential
failure scenarios before putting the system back in production.
Table 2 Summary of methods that allow SAP instance stop operations during package uptime

Method: SAP stop block deactivation
Granularity: SAP Instance
How achieved: Ensure that the package parameter sap_stop_blocked is set to no and is applied. Stop the instance as <sid>adm with standard SAP methods, for example by calling sapcontrol … –function Stop
Effect: SAP instance service monitoring of the package is temporarily suspended for stopped instances; stopped instances cause alerts in Serviceguard Manager
Use case example: SAP rolling kernel switch

Method: Package maintenance mode
Granularity: Serviceguard Package
How achieved: cmmodpkg –m on <pkgname>
Effect: All package service monitoring is suspended; the package cannot fail or switch nodes while in maintenance mode
Use case example: SAP software version upgrade

Method: SGeSAP debug flag
Granularity: SGeSAP Modules
How achieved: Create the debug flag file with touch debug_<packagename> in the SG run directory (/usr/local/cmcluster/run on Red Hat, /opt/cmcluster/run on SUSE)
Effect: All SGeSAP service monitoring is temporarily suspended; SGeSAP modules are skipped during package start
Use case example: Nonproduction SGeSAP cluster troubleshooting

Method: Live Application Detach
Granularity: Serviceguard Node
How achieved: cmhaltnode –d
Effect: Package can fail, but cannot yet fail over
Use case example: Serviceguard patch installation
3.6 Package conversion
The deploysappkgs(1) command of SGeSAP/LX A.06.xx can still be used to create
modular-based cluster configuration files for NFS, database, SAP central and SAP replication
services of existing clusters if there are SGeSAP/LX A.03.00 legacy configurations. Thus, for the
majority of existing clusters, no additional migration tool is required to move from legacy to modular.
For other cases, like liveCache, SAP external instance and SAP infrastructure tool clusters, the
conversion of SGeSAP/LX 3.xx legacy configurations to SGeSAP/LX A.06.xx module configurations
requires manual steps. Preparatory effort lies in the range of 1 hour per package.
The cmmigratepkg(1) command can be applied to SGeSAP legacy packages. The output file
will lack SAP-specific package configurations of the sap*.config file, but the resulting configuration
file can be used to simplify the creation of a modular SGeSAP package:
cmmakepkg –i cmmigratepkg_output_file -m sgesap/all modular_sap_pkg.config
The configuration of the SGeSAP specific parameters to modular_sap_pkg.config can be
done manually. For more information on configuring modular package configuration, see “Clustering
SAP Netweaver using SGeSAP packages” (page 49).
Caution is required for clusters that have altered unofficial SGeSAP parameters not described in
the official product documentation and in clusters that use customized code as part of the SGeSAP
customer.functions hooks. In these cases, an individual analysis of the required steps is needed.
3.7 SAP HANA cluster administration
The following administrative operations are possible using SGeSAP cluster packages:
3.7.1 Starting and restarting the HANA cluster
HANA clusters are started using the cmruncl command. Optionally, scale-up HANA clusters and
packages can be configured to start automatically after rebooting the servers.
For scale-out, the site controller package hdbc<SID> must be enabled and started manually after
starting the cluster. Subsequently, the site controller service hdbsico.mon triggers which site
starts the primary MNP and which site starts the secondary MNP, based on the current state of the
clustered HANA instance.
A special case occurs if only the nodes of one site are available during cluster restart. This can
happen as part of an enforced manual cluster start with the cmruncl -f option. In this case, the
site controller cannot determine whether the local instance must be primary or secondary, and none
of the MNPs starts until at least one node of the other site joins the cluster and allows access to
information about its HANA instance. There are situations in which it is required to start a
production HANA after cluster restart without access to any node of a failed site.
To achieve this, issue the cmresetsc command after starting the cluster manually.
The site controller package can then be started as usual and it will bring up the primary package
on the remaining site. This might involve a takeover operation.
3.7.2 Starting and stopping the primary HANA system
If the primary system is already up and running when the package starts, the package start will also
succeed. If the instance is not yet running, the package will start it. The instance and required
resources are subsequently monitored by the cluster. The SAP application servers access the primary
HANA instance via relocatable IP addresses on the data network.
Commands to start the scale-up package and to enable subsequent failover are:
# cmrunpkg –n <node_1> hdbp<SID>
# cmmodpkg –e hdbp<SID>
NOTE: For scale-out HANA, the command sequence is similar to scale-up HANA. The site controller
package hdbc<SID> is usually started manually, after which it automatically starts the primary and
secondary packages.
The HANA package needs to be started manually only if it was brought down manually. In that case, a
specific command sequence must be used if the full instance with all nodes must be started
together. Otherwise, only some HANA nodes need to be started, depending on which nodes are
enabled for the package and on which nodes the package failed versus was manually halted.
The sequence to start a scale-out primary HANA system and subsequently enable failover for its
multi-node package is as follows:
cmmodpkg -d hdbp<SID>_<site>
cmmodpkg -e -n ... -n <node_n_site> hdbp<SID>_<site>
cmrunpkg hdbp<SID>_<site>
cmmodpkg -e hdbp<SID>_<site>
To halt the primary package on the node(s) configured as the primary system, run the following
command. This command stops the instance of the primary system. Once the primary system
is shut down, the instance is no longer monitored.
# cmhaltpkg hdbp<SID>
Stop or start the local HANA instances with standard SAP tools (HANA studio, HDB, or sapcontrol
CLI commands), whether the package is up or down. The instance monitoring of an active package
distinguishes triggered instance stops from unintended software crashes. During planned instance
downtime, an active package indicates the downtime in the package log file.
Command to view SGeSAP alerts:
# sapgetconf alerts
If you start a primary system manually without a primary package, production application clients
cannot access it using the network, the cluster does not monitor it, and failover operations
are not triggered.
3.7.3 Starting and stopping of the secondary HANA system
If the secondary instance is not running, run the secondary package on the node configured
as the secondary system; this starts the instance of the secondary system. The instance and its
resources are then monitored by the cluster.
Command to enable the secondary package:
# cmmodpkg –e hdbs<SID>
The secondary package startup operation will fail if it cannot establish an initial connection with
the primary system to resynchronize the local data. A secondary package will not fail if there is
no primary instance available, but it will attempt to start the secondary instance once the primary
system is available.
If you manually halt the secondary package on the node configured as the secondary system, this
will not stop the instance of the secondary system. However, the secondary instance will no longer
be monitored.
Command to halt secondary package on the node:
# cmhaltpkg hdbs<SID>
Stop and restart the local HANA instances manually with standard SAP tools, in the same way as
for the primary system.
If you attempt to restart during secondary package uptime and primary system downtime, the
restart will not be successful; this is because a secondary instance will refuse to start if its primary
instance is not reachable. In this situation, the secondary package will create an alert that the
secondary instance is down. The secondary package will not fail, but it will attempt to finalize the
start of the secondary instance once the primary system is available.
3.7.4 Manual operations on scale-up production systems
1. To move the production environment to the second node (takeover):
# cmhaltpkg hdbs<SID>
# cmhaltpkg hdbp<SID>
# cmrunpkg –n <node_2> hdbp<SID>
# cmmodpkg –e hdbp<SID>
As part of this operation sequence, the original primary system is stopped; the active secondary
system will take over and become a primary system. The cluster services for the production
system are activated including network accessibility for production clients. Enabling cluster
run commands (cmruncl), node run commands (cmrunnode), and primary package autorun
(cmmodpkg –e hdbp<SID>) will not result in an automatic takeover if an active primary
system is available on one of the two nodes. This behavior is implemented to ensure that
takeovers are not accidentally triggered.
2. To move the secondary system (demotion):
# cmrunpkg –n <node_1> hdbs<SID>
# cmmodpkg –e hdbs<SID>
The previous primary system subsequently becomes the new secondary system by starting
the secondary package on its node. As part of this operation sequence, the original primary
instance is halted, demoted and registered as a secondary instance, and then restarted.
Depending on the availability and accuracy of local data, this operation can cause significant
workload on the replication network and the disk subsystems of both appliances.
In this scenario, it is possible that the system is updated with delta data shipments from the
new primary node, which restricts the exchange to the data altered during takeover and
demotion. If the local data set is corrupt or if it was altered after the takeover operation, a full
re-initialization is triggered. This takes some time, similar to the initialization time of a new
replication, because the complete data files of the primary database are transferred through
the replication network.
You can also implement “Starting and stopping the primary HANA system” (page 33) and “Starting
and stopping of the secondary HANA system” (page 34) in full role-reversal operation mode.
3. To fail back, perform the following:
If 1 and 2 are repeated with the primary and secondary nodes reversed, a manual failback
is possible, and then the nodes will be back to their original state. Such a failback operation
is desirable in an environment that is not symmetric. For example, if the client connectivity of
the second node has significantly higher latency due to larger distance or if the second node
is a dual-purpose node that can run additional nonproduction systems.
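The failback sequence thus mirrors the takeover and demotion commands shown above. The following sketch assumes the package names hdbp<SID>/hdbs<SID> used in this section and that the primary currently runs on <node_2>; node names are placeholders and exact assignments depend on where the packages ran initially:

```
# cmrunpkg -n <node_1> hdbp<SID>    (takeover: primary back on the original node)
# cmmodpkg -e hdbp<SID>
# cmrunpkg -n <node_2> hdbs<SID>    (demotion: interim primary re-registered as secondary)
# cmmodpkg -e hdbs<SID>
```

As with the initial takeover, the demotion step can trigger delta shipping or a full re-initialization, depending on the state of the local data.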
Manual site-switches for scale-out HANA instances are performed by shutting down the site
controller package hdbc<SID> and restarting it in the other datacenter.
3.7.5 Planned extended downtime for HANA instance
SGeSAP HANA scale-up package types (primary and secondary packages) support the Serviceguard
Package Maintenance Mode. Package Maintenance Mode temporarily disables failover and
monitoring for an active package. This allows extended administration operations for the HANA
instance without the risk of causing a failover. The feature allows deactivation of the production
instance cluster monitoring without the need to stop the instance. This feature is not HANA-specific.
For additional information on HP Serviceguard, see Managing HP Serviceguard A.12.00.20 for
Linux.
3.7.6 Patching Serviceguard software on a node
SGeSAP HANA package types (primary and secondary) support Serviceguard Live Application
Detach (LAD), which allows stopping a Serviceguard cluster node without the need to bring
down the applications. This feature allows patching the Serviceguard software on the node
without the need to stop the instance. This feature is not HANA-specific. For additional
information on HP Serviceguard, see Managing HP Serviceguard A.12.00.20 for Linux.
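With LAD, a typical patch cycle on one node can be sketched as follows. This is an illustrative sequence only; the node name is a placeholder, and the exact procedure and restrictions are described in the Serviceguard manual:

```
# cmhaltnode -d <node_1>    (detach: cluster software stops, applications keep running)
  ... update the Serviceguard software on <node_1> ...
# cmrunnode <node_1>        (re-attach the node to the cluster)
```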
4 SAP Netweaver cluster storage layout planning
Volume managers are tools that let you create units of disk storage known as storage groups.
Storage groups contain logical volumes for use on single systems and in high availability clusters.
In Serviceguard clusters, package control scripts activate the storage groups.
This chapter discusses the disk layout for clustered SAP components and database components of
several vendors at a conceptual level. It is divided into two main sections:
• SAP Instance storage considerations
• Database Instance storage considerations
4.1 SAP Instance storage considerations
In general, it is important to stay as close as possible to the original layout intended by SAP.
However, certain cluster-specific considerations might suggest a slightly different approach.
SGeSAP supports various combinations of providing shared access to file systems in the cluster.
Table 3 Option descriptions
Option 1: SGeSAP NFS Cluster
Optimized to provide maximum flexibility. Following the recommendations given below allows
for expansion of existing clusters without limitations caused by the cluster. Another important
design goal of SGeSAP option 1 is that a redesign of the storage layout is not imperative when
adding additional SAP components later on. Effective change management is an important
aspect for production environments. The disk layout needs to be as flexible as possible to allow
growth by just adding storage for newly added components. If the design is planned carefully
at the beginning, making changes to already existing file systems is not required.
Option 2: SGeSAP NFS Idle Standby Cluster
Optimized to provide maximum simplicity. The option is only feasible for very simple clusters.
It needs to be foreseeable that the layout and configuration won't change over time. It comes
with the disadvantage of being locked into restricted configurations with a single SAP System
and idle standby nodes. HP recommends option 1 in case of uncertainty about potential future
layout changes.
Each file system added to a system by SAP installation routines must be classified and a decision
has to be made:
• Whether the file system needs to be kept as a local copy on internal disks of each node of the
cluster (local).
• Whether the file system needs to be shared on a SAN storage device to allow failover and
exclusive activation (shared exclusive).
• Whether the file system needs to allow shared access from more than one node of the cluster
at the same time (shared NFS).
NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel
sometimes also patches SAP tools. Depending on what SAP changed, this might introduce
additional dependencies on shared libraries that were not required before the patch. Depending
on the shared library path settings (LD_LIBRARY_PATH) of the root user, it may not be possible
for SGeSAP to execute the SAP tools after applying the patch because the newly introduced
libraries are not found. Creating local copies of the complete central executable directory
prevents this issue.
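As a sanity check after applying an SAP kernel patch, the library resolution of the SAP tools can be verified with ldd before the cluster tries to execute them. The following sketch is illustrative; TOOL defaults to /bin/sh so the snippet is runnable anywhere, and on a cluster node it would point to an SAP tool binary instead:

```shell
#!/bin/sh
# Report whether a binary resolves all of its shared libraries.
# TOOL is a placeholder; on a cluster node it would be an SAP tool,
# for example a binary under the central executable directory.
TOOL="${TOOL:-/bin/sh}"
if ldd "$TOOL" 2>/dev/null | grep -q "not found"; then
    echo "unresolved libraries for $TOOL"
    exit 1
fi
echo "all libraries resolved for $TOOL"
```

Running such a check for each SAP tool used by the package monitors, as the root user with root's LD_LIBRARY_PATH, reveals missing libraries before they cause a package operation to fail.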
The following sections detail the different storage options.
4.1.1 Option 1: SGeSAP NFS cluster
With this storage setup, SGeSAP makes extensive use of exclusive volume group activation.
Concurrent shared access is provided via NFS services. Automounter and cross-mounting concepts
are used in order to allow each node of the cluster to switch roles between serving and using NFS
shares. It is possible to access the NFS file systems from servers outside of the cluster, which is an
intrinsic part of many SAP configurations.
4.1.1.1 Common directories that are kept local
The following common directories and their files are kept local on each node of the cluster:
Table 4 List of common directories
/home/<sid>adm
Home directory of the SAP system administrator with node-specific startup log files.
/home/<dasid>adm
Home directory of the SAP diagnostic agent administrator.
/usr/sap/<SID>/SYS/exe/run
Directory holding local copies of SAP instance executables, libraries, and tools (optional for
kernel 7.x and higher).
/usr/sap/tmp
Directory where the SAP operating system collector keeps monitoring data of the local
operating system.
/usr/sap/hostctrl
Directory where SAP control services for the local host are kept (kernel 7.x and higher).
/usr/sap/ccms
CCMS agent work directory (6.40 and 7.00 only).
/usr/sap/sapservices
List of startup services started by sapinit (boot).
Depending on database vendor and version, it might be required to keep local database client
software. Details can be found in the database sections below.
All files belonging to the cluster software and runtime environment are kept local.
Part of the content of the local group of directories must be synchronized manually between all the
nodes of the cluster. Serviceguard provides a tool cmcp(1) that allows replication of a file to all
the cluster nodes.
SAP instance (startup) profile names contain either local hostnames or virtual hostnames. SGeSAP
prefers profiles with virtual hostnames and uses those with local hostnames only for fallback and
backwards compatibility.
In clustered SAP environments prior to 7.x releases, executables must be installed locally. Local
executables help to prevent several causes of package startup or package shutdown hangs due
to the unavailability of the centralized executable directory. Availability of executables delivered
with packaged SAP components is mandatory for proper package operation. It is a good practice
to create local copies for all files in the central executable directory. This includes shared libraries
delivered by SAP.
To automatically synchronize local copies of the executables, SAP components deliver the sapcpe
mechanism. With every startup of the instance, sapcpe matches new executables stored centrally
with those stored locally.
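The effect of sapcpe can be illustrated with a plain copy-if-newer operation. The sketch below uses temporary directories and an invented file name so that it is runnable anywhere; it is not the actual sapcpe implementation, whose real inputs would be the central directory (for example, /sapmnt/<SID>/exe) and the node-local copy (/usr/sap/<SID>/SYS/exe/run):

```shell
#!/bin/sh
# Stand-ins for the central executable directory and the node-local copy.
CENTRAL=$(mktemp -d)
LOCAL=$(mktemp -d)
echo "patched" > "$CENTRAL/disp+work"      # newer central executable
echo "old" > "$LOCAL/disp+work"
touch -d '1 hour ago' "$LOCAL/disp+work"   # make the local copy older
cp -u "$CENTRAL"/* "$LOCAL"/               # -u: copy only if source is newer
cat "$LOCAL/disp+work"                     # prints "patched"
```

This mirrors the matching step sapcpe performs at each instance startup: files that changed centrally replace the stale local copies, while unchanged files are left alone.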
4.1.1.2 Directories that reside on shared disks
Volume groups on SAN shared storage are configured as part of the SGeSAP packages. The
volume groups can be one of the following:
• Instance specific
• System specific
• Environment specific
Instance-specific volume groups are required by only one SAP instance or one database instance.
They usually get included with exactly the package that is set up for this instance.
System-specific volume groups get accessed from all instances that belong to a particular SAP
System. Environment-specific volume groups get accessed from all instances that belong to all SAP
Systems installed in the whole SAP environment. System-specific and environment-specific volume
groups are set up using NFS to provide access for all instances. They must not be part of a package
that is dedicated to only a single SAP instance if there are several instances, because if this package
is down, the other instances would also be impacted. As a rule of thumb, it is a good default to put
all these volume groups into a package that holds the database of the system. These filesystems
often provide tools for database handling that don't require the SAP instance at all.
In consolidated environments with more than one SAP application component, the recommendation
is to separate the environment-specific volume groups to a dedicated NFS package. This package
will be referred to as sapnfs package. It must remain running all the time, since it is of central
importance for the whole setup. Because sapnfs is serving networked file systems, there rarely
is a need to stop this package at any time. If environment-specific volume groups become part of
a database package, there is a potential dependency between packages of different SAP Systems.
Stopping one SAP System by halting all related Serviceguard packages will lead to a lack of
necessary NFS resources for unrelated SAP Systems. The sapnfs package avoids this unpleasant
dependency. It is an option to also move the system-specific volume groups to the sapnfs package.
This can be done to keep NFS mechanisms completely separate.
A useful naming convention for most of these shared volume groups is vg<INSTNAME> or
vg<INSTNAME><SID> (for example, vgASCSC11). Table 5 (page 40) provides an overview of
SAP shared storage and maps it to the component and package type for which it occurs.
Usually, instance-specific volume groups can be put into dedicated packages or combined with
packages containing the database. Exceptions are ERS instances, because they need to fail over
separately, and Gateway (G) or WebDispatcher (W) instances, because there is no database
configured with these.
Modular SGeSAP package names do not have to follow a certain naming convention, but it is
recommended to include instance names (at least instance types) and the SAP SID in the name.
A package containing a database must indicate this in its name (“DB”).
Table 5 Instance specific volume groups for exclusive activation with a package
/usr/sap/<SID>/SCS<INR> (for example, /usr/sap/C11/SCS10)
/usr/sap/<SID>/ASCS<INR> (for example, /usr/sap/C11/ASCS11)
/usr/sap/<SID>/DVEBMGS<INR> (for example, /usr/sap/C11/DVEBMGS12)
Access point: Shared disk.
Recommended package setups: SAP instance specific; combined SAP instances; database plus
SAP instances.
/usr/sap/<SID>/D<INR> (for example, /usr/sap/C11/D13)
/usr/sap/<SID>/J<INR>
Access point: Shared disk.
Recommended package setups: SAP instance specific; combined SAP instances; database and
SAP instances.
NOTE: Combining a DB with SAP instances is not a recommended package setup.
/usr/sap/<SID>/ERS<INR> (for example, /usr/sap/C11/ERS20)
Access point: Shared disk.
Recommended package setups: SAP ERS instance specific; combined ERS instances.
/usr/sap/<SID>/G<INR> (for example, /usr/sap/GW1/G50)
Access point: Shared disk.
Recommended package setups: SAP gateway instance specific; combined gateway instances, if
more than one is configured for the SID.
/usr/sap/<SID>/W<INR> (for example, /usr/sap/WDP/W60)
Access point: Shared disk.
Recommended package setups: SAP WebDispatcher instance specific; combined WebDispatcher
instances, if more than one is configured for the SID.
/usr/sap/<DASID>/SMDA<INR> (for example, /usr/sap/DAA/SMDA97)
Access point: Shared disk.
Recommended package setups: Solution Manager Diagnostic Agent instance associated with a
clustered dialog instance.
/usr/sap/<SID>/MDS<INR> (for example, /usr/sap/SP8/MDS30)
/usr/sap/<SID>/MDIS<INR> (for example, /usr/sap/SP8/MDIS31)
/usr/sap/<SID>/MDSS<INR> (for example, /usr/sap/SP8/MDSS32)
Access point: Shared disk.
Recommended package setups: SAP MDM instance specific; combined SAP MDM instances;
database plus SAP MDM instances.
/export/sapmnt/<SID> (for example, /export/sapmnt/C11)
/export/sapmnt/<DASID>
/usr/sap/trans
Access point: Shared disk or NFS toolkit.
Recommended package setups: Package containing DB or dedicated NFS package (sapnfs).
/usr/sap/put
Access point: Shared disk.
Recommended package setups: None.
/usr/sap/<SID> must not be added to a package, because using this as a dynamic mount point
prohibits access to the instance directories of additional SAP application servers that are locally
installed. The /usr/sap/<SID> mount point is also used to store local SAP executables. This
prevents problems with busy mount points during database package shutdown. Because of the
size of the directory content, it must not be part of the local root file system. /usr/sap/tmp might
or might not be part of the root file system. This is the working directory of the operating system
collector process saposcol. The size of this directory rarely exceeds a few megabytes.
If you have more than one system, place /usr/sap/put on separate volume groups created on
shared drives. The directory must not be added to any package. This ensures that it is independent
from any SAP Netweaver system and you can mount it on any host by hand if needed.
All file systems mounted below /export are part of NFS cross-mounts used by the automount
program. The automount program uses virtual IP addresses to access the NFS directories via the
path that comes without the /export prefix. Three components must be configured for NFS toolkit:
• The NFS server, consisting of a virtual hostname, storage volumes, and mount points in the
/export directory.
• The NFS server export table, consisting of a list of NFS exported file systems, export options,
and NFS client access control. Note: The specification of the fsid export option is key, as it
ensures that the device minor number is retained during the failover to an adoptive node.
• The automount configuration on each adoptive node, consisting of a list of NFS client mount
points.
This ensures that the directories are quickly available after a switchover. The cross-mounting allows
coexistence of NFS server and NFS client processes on nodes within the cluster.
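As an illustration, the three components could look like the following fragments. All values (the SID C11, the virtual hostname nfsrelo, and the fsid numbers) are invented examples, not defaults:

```
# NFS server export table (/etc/exports) on the serving node; the fsid
# option keeps the exported filesystem identity stable across failover:
/export/sapmnt/C11    *(rw,no_root_squash,fsid=101)
/export/usr/sap/trans *(rw,no_root_squash,fsid=102)

# Automount map entries (for example, in an auto.direct map) on every
# adoptive node, addressing the package's virtual hostname:
/sapmnt/C11    -fstype=nfs  nfsrelo:/export/sapmnt/C11
/usr/sap/trans -fstype=nfs  nfsrelo:/export/usr/sap/trans
```

Keeping the fsid values identical on all cluster nodes ensures that NFS file handles held by clients remain valid when the export fails over.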
Special attention needs to be given to the diagnostic agent (DA) instance directory if a related
dialog instance (both using the same virtual hostname) is planned to be clustered. Such DA instances
need to move with the related dialog instances. Therefore, their instance directory has to be part
of the package. It is recommended that the DA instance filesystem resides on the volume group of
the dialog instance (not on a volume group common to all DA instances).
The SYS directory of the DA SID requires special attention. The DA installation does not create
links underneath SYS as the standard Netweaver installation does, but just creates this directory
on the local filesystem. To keep a central and consistent copy of this SYS directory within the cluster,
it is recommended to manually create a setup similar to a Netweaver installation (SYS containing
links into /sapmnt and /sapmnt itself mounted to an exported directory). For more details on
the preparation steps, see Chapter 5, “Clustering SAP Netweaver using SGeSAP packages” (page
49). If each cluster node has local dialog instances installed and therefore there is a DA SYS
directory on each node, the /sapmnt approach is not necessary.
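The manually created DA SYS structure can be sketched as follows. The link set and the DA SID DAA are illustrative, and a temporary prefix stands in for the root filesystem so that the sketch is runnable without root permissions:

```shell
#!/bin/sh
PREFIX=$(mktemp -d)      # stands in for "/" on a cluster node
DASID=DAA                # hypothetical diagnostic agent SID
mkdir -p "$PREFIX/sapmnt/$DASID/exe" "$PREFIX/usr/sap/$DASID/SYS/exe"
# SYS entries become links into the centrally mounted /sapmnt tree,
# mirroring what a standard Netweaver installation would create:
ln -s "$PREFIX/sapmnt/$DASID/profile" "$PREFIX/usr/sap/$DASID/SYS/profile"
ln -s "$PREFIX/sapmnt/$DASID/global"  "$PREFIX/usr/sap/$DASID/SYS/global"
ln -s "$PREFIX/sapmnt/$DASID/exe"     "$PREFIX/usr/sap/$DASID/SYS/exe/run"
ls -l "$PREFIX/usr/sap/$DASID/SYS"
```

On a real node the prefix would be empty and /sapmnt/<DASID> would be the NFS-backed export, so every cluster node sees the same central copy through identical links.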
4.1.2 Option 2: SGeSAP NFS idle standby cluster
This option has a simple setup, but is limited in flexibility. It is recommended to follow option 1 in
most cases. A cluster can be configured using option 2 if it fulfills all of the following prerequisites:
• Only one SGeSAP package is configured in the cluster. The underlying database technology
is a single-instance Oracle RDBMS. The package combines failover services for the database
and all required NFS services and SAP central components (ABAP CI, SCS, ASCS). Application
Server Instances are not installed on cluster nodes. Replicated Enqueue is not used.
• Additional SAP software is not installed on the cluster nodes.
An NFS toolkit service can be configured to export file systems to external Application Servers
that mount them manually. A dedicated NFS package is not possible. Dedicated NFS requires
option 1.
4.1.2.1 Common directories that are kept local
For information on common directories that are kept local, see Table 4 (page 38).
Database client software needs to be stored locally on each node. Details can be found in the
database sections below.
Parts of the content of the local group of directories must be synchronized manually between all
the nodes in the cluster.
In clustered SAP environments prior to 7.x releases, executables must be installed locally. Local
executables help to prevent several causes for package startup or package shutdown hangs due
to the unavailability of the centralized executable directory. The availability of executables delivered
with packaged SAP components is mandatory for proper package operation. Experience has
shown that it is a good practice to create local copies for all files in the central executable directory.
This includes shared libraries delivered by SAP.
To automatically synchronize local copies of the executables, SAP components deliver the sapcpe
mechanism. With every startup of the instance, sapcpe matches new executables stored centrally
with those stored locally.
4.1.2.2 Directories that reside on shared disks
Volume groups on a SAN shared storage get configured as part of the SGeSAP package.
Instance-specific volume groups are required by only one SAP instance or one database instance.
They usually get included with exactly the package that is set up for this instance. In this configuration
option, the instance-specific volume groups are included in the package.
System-specific volume groups get accessed from all instances that belong to a particular SAP
System. Environment-specific volume groups get accessed from all instances that belong to any
SAP System installed in the whole SAP scenario. System and environment-specific volume groups
must be set up using NFS toolkit to provide access capabilities to SAP instances on nodes outside
of the cluster. The cross-mounting concept of option 1 is not required.
A useful naming convention for most of these shared volume groups is vg<INSTNAME><SID>
or vg<INSTNAME><INR><SID>. Table 6 (page 42) provides an overview of SAP shared storage
for this special setup and maps it to the component and package type for which it occurs.
Table 6 File systems for the SGeSAP package in NFS idle standby clusters
/sapmnt/<SID>
Access point: Shared disk and NFS toolkit. Remarks: Required.
/usr/sap/<SID>
Access point: Shared disk.
/usr/sap/trans
Access point: Shared disk and NFS toolkit. Remarks: Optional.
If you have more than one system, place /usr/sap/put on separate volume groups created on
shared drives. The directory must not be added to any package. This ensures that it is independent
from any SAP WAS system and you can mount it on any host by hand if needed.
4.2 Database instance storage considerations
SGeSAP supports clustering of database technologies from different vendors. The vendors have
implemented individual database architectures. The storage layout for the SGeSAP cluster
environments needs to be discussed individually for each of them. Because of its similarity to
MaxDB, this section also contains liveCache storage considerations, although liveCache is not
associated with a classical Netweaver installation. All supported platforms are Intel x86_64 only:
• Oracle Single Instance RDBMS storage considerations
• MaxDB/liveCache storage considerations
• SAP Sybase ASE storage considerations
• IBM DB2 storage considerations
• Special liveCache storage considerations
Table 7 Availability of SGeSAP storage layout options for different Database RDBMS
SGeSAP NFS clusters
DB technology: Oracle Single-Instance, SAP MaxDB, SAP Sybase ASE, IBM DB2.
Cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit.
Idle standby
DB technology: Oracle Single-Instance.
Cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit (optional).
4.3 Oracle single instance RDBMS
Single Instance Oracle databases can be used with both SGeSAP storage layout options. The
setups for NFS and NFS Idle Standby Clusters are identical.
4.3.1 Oracle databases in SGeSAP NFS and NFS Idle standby clusters
Oracle server directories reside below /oracle/<DBSID> (for example, /oracle/C11).
These directories get shared via the database package.
ORACLE_HOME usually points to /oracle/<DBSID>/<version>_64 (for example, for Oracle
10.2: /oracle/C11/102_64)
In addition, the SAP Application Servers will need access to the Oracle client libraries, including
the Oracle National Language Support files (NLS) shown in Table 8 (page 43). The default location
where the client NLS files get installed differs with the SAP kernel release used. See the table below:
Table 8 NLS files - default location
Kernel version 6.x: /oracle/client/<rdbms_version>/ocommon/nls/admin/data
Kernel version 7.x: /oracle/client/<rdbms_version>/instantclient
For systems using the Oracle instant client
(/oracle/client/<major-version>x_64/instantclient), no client-side NLS directory
exists.
A second type of NLS directory, called the "server" NLS directory, always exists. This directory is
created during database or SAP Central System installations. The location of the server NLS files
is identical for all supported SAP kernel versions:
$ORACLE_HOME/nls/data
The setting of the ORA_NLS10 variable in the environments of <sid>adm and ora<sid> determines
whether the client or the server path to NLS is used. The variable gets defined in the
dbenv_<hostname>.[c]sh files in the home directories of these users.
However, newer installations do not define that variable anymore, and it is even forbidden to set
it for user <sid>adm (SAP Note 819829).
Sometimes a single host may have an installation of both a Central Instance and an additional
Application Server of the same SAP System. These instances need to share the same environment
settings. SAP recommends using the server path to NLS files for both instances in this case. This
does not work with SGeSAP since switching the database would leave the application server
without NLS file access.
The Oracle database server and SAP server might need different types of NLS files. The server NLS
files are part of the database Serviceguard package. The client NLS files are installed locally on
all hosts. Do not mix the access paths for ORACLE server and client processes.
The discussion of NLS files has no impact on the treatment of other parts of the ORACLE client files.
The following directories need to exist locally on all hosts where an Application Server might run.
The directories cannot be relocated to different paths. The content needs to be identical to the
content of the corresponding directories that are shared as part of the database SGeSAP package.
The setup for these directories follows the "on top" mount approach, that is, the directories might
become hidden beneath identical copies that are part of the package:
$ORACLE_HOME/rdbms/mesg
$ORACLE_HOME/oracore/zoneinfo
$ORACLE_HOME/network/admin
Table 9 File system layout for NFS-based Oracle clusters
$ORACLE_HOME
/oracle/<SID>/saparch
/oracle/<SID>/sapreorg
/oracle/<SID>/sapdata1 ... /oracle/<SID>/sapdatan
/oracle/<SID>/origlogA, /oracle/<SID>/origlogB
/oracle/<SID>/mirrlogA, /oracle/<SID>/mirrlogB
Access point: Shared disk. VG type: DB instance specific.
Potential owning packages: Database only or combined DB plus CI package.
/oracle/client
Access point: Local. VG type: Environment specific.
Potential owning packages: None.
Some local Oracle client files reside in /oracle/<SID> as part of the root filesystem.
Access point: Local.
Potential owning packages: None.
4.4 MaxDB/liveCache storage considerations
This section describes the recommended storage considerations for MaxDB and liveCache.
NOTE: SGeSAP/LX does not support hot standby liveCache (HSS), hence the following description
also applies to a liveCache setup, unless noted otherwise. MaxDB can be substituted by liveCache,
and <DBSID> by <LCSID>.
The main difference between MaxDB and liveCache is how they are used by the clients. Therefore,
depending on the situation, alternative setups are possible for liveCache. For more information on
liveCache-only storage considerations, see the “Special liveCache storage considerations” (page 47)
section. A High Availability (HA) setup for liveCache must ensure that the liveCache client (usually
the SCM installation) has the client libraries.
• SGeSAP supports failover of MaxDB databases as part of the SGeSAP NFS cluster option.
• MaxDB distinguishes an instance-dependent path /sapdb/<DBSID> and two
instance-independent paths, called IndepDataPath and IndepProgramsPath (IndepData and
IndepProgram in the ini-file). By default, all three point to a directory below /sapdb.
• The paths are configured in a file called /etc/opt/sdb. For compatibility with older releases,
there must be a file called /var/spool/sql/ini/SAP_DBTech.ini.
Depending on the version of the MaxDB database, this file may contain different sections and
settings.
• The sections [Installations], [Databases], and [Runtime] are stored in separate files
Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path
/sapdb/data/config.
• MaxDB 7.8 does not create SAP_DBTech.ini anymore. The [Globals] section is defined
in /etc/opt/sdb. With the concept of isolated installations, a DB installation contains its
own set of (version specific) executables (/sapdb/<DBSID>/db/bin), its own data directory
(/sapdb/<DBSID>/data), and a specific client directory (/sapdb/clients/<DBSID>).
At runtime, there will be a database-specific set of x_server related processes.
NOTE: IndepDataPath and IndepProgramsPath are now referred to as GlobalDataPath
and GlobalProgramPath respectively.
The following directories are of special interest:
• /sapdb/programs: This can be seen as a central directory with all MaxDB executables.
The directory is shared between all MaxDB instances that reside on the same host. It is also
possible to share the directory across hosts, but it is not possible to use different executable
directories for two MaxDB instances on the same host. Furthermore, different SAPDB versions
could get installed on the same host. The files in /sapdb/programs must be of the latest
version used by any MaxDB on the cluster node. Files in /sapdb/programs are downwards
compatible.
• /sapdb/data/config: This directory is also shared between instances, though it contains
many files that are instance specific, for example, /sapdb/data/config/<DBSID>.*.
According to SAP, this path setting is static.
• /sapdb/data/wrk: The working directory of the main MaxDB processes is also a
subdirectory of the IndepData path for non-HA setups. If a SAPDB restarts after a crash, it
copies important files from this directory to a backup location. This information is then used
to determine the reason of the crash. In HA scenarios, for SAPDB/MaxDB lower than version
7.6, this directory must move with the package. Therefore, SAP provided a way to redefine
this path for each SAPDB/MaxDB individually. SGeSAP expects this work directory to be
part of the database package. The mount point moves from /sapdb/data/wrk to
/sapdb/data/<DBSID>/wrk for the clustered setup. This directory must not be confused
with the directory /sapdb/data/<DBSID>/db/wrk that might also exist. Core files of the
kernel processes are written into the working directory. These core files can have file sizes of
several gigabytes. Sufficient free space needs to be configured for the shared logical volume
to allow core dumps.
For MaxDB version 7.8 or later, this directory is replaced by /sapdb/<DBSID>/data (private
data path).
NOTE: For MaxDB RDBMS starting with version 7.6, these limitations do not exist. The
working directory is utilized by all the instances (IndepData/wrk) and can be shared
globally.
• /etc/opt/sdb: Only exists when using MaxDB or liveCache 7.5 or later. Needs to be local
on each node, together with entries in /etc/passwd and /etc/group.
• /var/spool/sql: For MaxDB version 7.5 or later, /var/spool/sql is created only for
compatibility with older versions. Depending on the versions installed, it may not exist anymore.
This directory hosts local runtime data of all locally running MaxDB instances. Most of the
data in this directory would become meaningless in the context of a different host after failover.
The only critical portion that still has to be accessible after failover is the initialization data in
/var/spool/sql/ini. This directory is usually very small (< 1 Megabyte). With MaxDB
and liveCache 7.5 or higher, the only local files are contained in /var/spool/sql/ini;
other paths are links to local runtime data in the IndepData path:
dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
The links need to exist on every possible failover node for the MaxDB or liveCache instance
to run.
• /sapdb/clients (MaxDB 7.8): Contains the client files in <DBSID> subdirectories for each
database installation.
• /var/lib/sql: Certain patch levels of MaxDB 7.6 and 7.7 (see SAP Note 1041650) use
this directory for shared memory files. Needs to be local on each node.
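The /var/spool/sql link set listed above can be recreated on a failover node with a short loop. In the following sketch a temporary prefix stands in for / so that it is runnable without root permissions; on a real node the prefix would be empty:

```shell
#!/bin/sh
PREFIX=$(mktemp -d)                 # stands in for "/" on a failover node
mkdir -p "$PREFIX/var/spool/sql/ini" "$PREFIX/sapdb/data"
for d in dbspeed diag fifo ipc pid pipe ppid; do
    # each entry links to the corresponding runtime area in IndepData
    ln -s "$PREFIX/sapdb/data/$d" "$PREFIX/var/spool/sql/$d"
done
ls -l "$PREFIX/var/spool/sql"
```

Only /var/spool/sql/ini holds data that must survive a failover; the seven links merely redirect runtime data into the IndepData path.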
NOTE: In HA scenarios, valid for SAPDB/MaxDB versions up to 7.6, the runtime directory
/sapdb/data/wrk is configured to be located at /sapdb/<DBSID>/wrk to support consolidated
failover environments with several MaxDB instances. The local directory /sapdb/data/wrk is
referred to by the VSERVER processes (vserver, niserver), which means that VSERVER core
dump and log files are located there.
Table 10 File system layout for SAPDB clusters
/sapdb/<DBSID>
/sapdb/<DBSID>/wrk *
/sapdb/<DBSID>/sapdata<nr>
/sapdb/<DBSID>/saplog<nr>
/sapdb/<DBSID>/data **
/sapdb/data<DBSID>/data<n>|log<n> ***
Access point: Shared disk. VG type: DB specific.
Potential owning packages: DB only or combined DB and CI package.
/export/sapdb/programs
/export/sapdb/data
/export/sapdb/clients **
/export/var/spool/sql/ini
Access point: Shared disk and NFS toolkit. VG type: Environment specific.
Potential owning packages: DB only, combined DB+CI, or SAPNFS.
/etc/opt/sdb
/var/lib/sql
Access point: Local.
Potential owning packages: None.
* Only valid for versions lower than 7.6.
** Only valid for versions 7.8 or higher.
*** Only valid for older versions.
NOTE: When using tar or cpio to copy or move directories, it must be ensured that the
transported file ownership and permissions are retained, especially for files having the s-bit set:
/sapdb/<SID>/db/pgm/lserver and /sapdb/<SID>/db/pgm/dbmsrv. These files are
important for the vserver process ownership and they have an impact on starting the SAPDB via
<sid>adm.
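The permission-preserving copy can be sketched as follows. Temporary directories and a dummy file stand in for the real MaxDB tree; tar's -p flag preserves the permission bits on extraction, and ownership is additionally retained when extracting as root:

```shell
#!/bin/sh
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch "$SRC/dbmsrv"
chmod 4755 "$SRC/dbmsrv"     # s-bit set, as on the real dbmsrv binary
# copy the tree; "x" with "-p" preserves the permission bits on extraction
( cd "$SRC" && tar cf - . ) | ( cd "$DST" && tar xpf - )
ls -l "$DST/dbmsrv"          # mode listing shows the preserved s-bit (rws)
```

A plain cp without -p, or a tar extraction without -p, would silently drop the s-bit and break vserver startup via <sid>adm.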
Database and SAP instances depend on the availability of /sapdb/programs. To minimize
dependencies between otherwise unrelated systems, usage of a dedicated SAPNFS package is
strongly recommended, especially when the cluster has additional SAP application servers installed,
more than one SAPDB is installed, or the database is configured in a separate DB package. Keeping
local copies is possible, though not recommended, because there are no administration tools that
keep track of the consistency between the local copies of these files on all the systems.
Using NFS toolkit filesystems underneath /export as shown in Table 10 (page 46) is required
when multiple MaxDB based components (including liveCache) are either planned or already
installed. These directories are shared between the instances and must not be part of an instance
package. Otherwise, the halt of one instance would prevent the other one from being started or
run.
4.5 Sybase ASE storage considerations
SGeSAP supports failover of Sybase ASE databases as part of SGeSAP NFS cluster option 1. It is
possible to consolidate SAP instances in SGeSAP ASE environments.
Table 11 File system layout for Sybase ASE clusters

Mount Point                    Access Point    Owning Package                          VG Type
/sybase/<DBSID>                Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/sapdiag        Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/sybsystem      Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/sybtemp        Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/sapdata<n>     Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/saplog<n>      Shared disk     DB only or combined DB and CI package   DB specific
/sybase/<DBSID>/saptemp        Shared disk     DB only or combined DB and CI package   DB specific
4.6 IBM DB2 storage considerations
SGeSAP supports failover of IBM DB2 databases as part of the SGeSAP NFS cluster option 1. It is possible to consolidate SAP instances in SGeSAP IBM DB2 environments.
Table 12 File system layout for DB2 clusters

Mount Point                       Access Point    Owning Package                          VG Type
/db2/db2<dbsid>                   Shared disk     DB only or combined DB and CI package   DB specific
/db2/db2<dbsid>/software          Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>                      Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>/log_dir              Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>/sapdata<n>           Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>/sapdata/saptemp1     Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>/db2dump              Shared disk     DB only or combined DB and CI package   DB specific
/db2/<DBSID>/db2<dbsid>           Shared disk     DB only or combined DB and CI package   DB specific
4.7 Special liveCache storage considerations
Depending on the setup of the related Netweaver installation (usually an SCM or CRM application)
there are two additional options for setting up liveCache that can be used instead of the approach
described in the “MaxDB/liveCache storage considerations” (page 44).
4.7.1 Option 1: Simple cluster with separated packages
Cluster layout constraints:
•   The liveCache package does not share a failover node with the SCM central instance package.
•   There is no MaxDB or additional liveCache running on cluster nodes.
•   There is no intention to install additional SCM Application Servers within the cluster.
Table 13 File System Layout for liveCache package running separate from SCM (Option 1)

Mount point                    Storage type    Owning packages
/sapdb/data                    Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/programs                Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/clients                 Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/sapdata<n>      Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/saplog<n>       Shared disk     Dedicated liveCache package (lc<LCSID>)
/var/spool/sql                 Shared disk     Dedicated liveCache package (lc<LCSID>)
In the above layout, all relevant files are shared via standard procedures. The setup causes no administrative overhead for synchronizing local files. SAP default paths are used.
4.7.2 Option 2: Non-MaxDB environments
Cluster layout constraints:
•   MaxDB or additional liveCache must not be running on cluster nodes. In particular, the SCM system RDBMS must be based on Oracle, DB2, or Sybase, but not on MaxDB.
Often SCM does not rely on MaxDB as the underlying database technology. Independent of that, all instances of the SCM system still need access to the liveCache client libraries. The best way to deal with this is to make the client libraries available throughout the cluster via AUTOFS cross-mounts from a dedicated NFS package.
Table 14 File system layout for liveCache in a non-MaxDB environment (Option 2)

Mount point                    Storage type    Owning packages
/sapdb/data                    Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/sapdata<n>      Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/saplog<n>       Shared disk     Dedicated liveCache package (lc<LCSID>)
/var/spool/sql                 Shared disk     Dedicated liveCache package (lc<LCSID>)
/sapdb/programs                Autofs shared   sapnfs¹
/sapdb/clients                 Autofs shared   sapnfs¹

¹ This can be any standard, standalone NFS package. The SAP global transport directory must already be configured in a similar package. This explains why this package is often referred to as "the trans package" in related literature. A package serving the SAP trans directory can optionally be extended to also serve the global liveCache file shares.
5 Clustering SAP Netweaver using SGeSAP packages
5.1 Overview
This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). Each task is described with examples.
A prerequisite for clustering SAP using SGeSAP is that the Serviceguard cluster software installation is complete and the cluster is set up and running.
The minimum software requirements are as follows:
•   Serviceguard for providing High Availability (HA)
•   SGeSAP for clustering SAP in an HA environment
•   NFS toolkit (optional for certain types of installations) for providing NFS services
•   Serviceguard Manager for the GUI-based setup and configuration of Serviceguard clusters
5.1.1 Three phase approach
A three phase approach is used for clustering SAP:
1.  Set up the infrastructure for the SAP installation (SAP pre-installation).
    a.  Set up one (or more) sapnfs packages for providing NFS services to all the cluster nodes.
    b.  Set up a base package, also called a tentative package, with some selected Serviceguard and SGeSAP modules. You can use the base package for the initial SAP instance and database installations.
    NOTE: The base package is not a strict requirement, but it allows you to troubleshoot configuration issues at an early stage in the cluster.
2.  Install SAP instances and databases (SAP installation).
3.  Complete the package setup (SAP post-installation).
    a.  Synchronize configuration changes on the primary node with secondary nodes in the cluster.
    b.  Add SGeSAP modules and/or update attributes of the base packages introduced in step 1b of the first phase.
The steps in Phase 2 of this approach are normally performed by a certified SAP installer, whereas the steps in Phases 1 and 3 are performed by a customer service engineer trained in Serviceguard and SGeSAP.
It is important to categorize the following:
•   The volume groups and logical volumes of the SAP or database instances
•   The virtual hostnames designated for these instances
•   How these are mapped into Serviceguard packages
Before starting with Phase 1, it is important to determine the volume groups and logical volumes that belong to the package. It is also important to remember that resources like IP addresses (derived from the virtual hostnames) and volume groups can only belong to one package.
This implies that two SAP instances sharing these resources must be part of the same package.
Finally, before clustering SAP using the Phase 1 approach, it is important to decide on the following:
•   The file systems to be used as local copies.
•   The file systems to be used as shared exclusive file systems.
•   The file systems to be used as shared NFS file systems.
For more information on file system configurations, see chapter 4 “SAP Netweaver cluster storage
layout planning” (page 37).
There can also be a requirement to convert an existing SAP instance or database for use in a Serviceguard cluster environment. For more information on how to convert an existing SAP instance or database, see “Converting an existing SAP instance” (page 87).
5.1.2 SGeSAP modules and services
The following components are important for the configuration of a Serviceguard package with SGeSAP:
•   Modules and the scripts that are used by these modules.
•   Service monitors.
5.1.2.1 SGeSAP modules
Various Serviceguard and SGeSAP modules are available for clustering SAP instances and database instances. These modules contain attribute definitions that describe the SAP or database instances and are required to configure them into a Serviceguard package. The following table gives an overview of the top-level SGeSAP modules:
Table 15 SGeSAP modules
Module                     Description
sgesap/sapinstance         For clustering of one or more SAP instances such as SAP Central Instances, System Central Services, Replication Instances, ABAP Application Servers, JAVA Application Servers, Web Dispatcher, Gateway, and MDM instances of a single SAP system.
sgesap/dbinstance          For Oracle, MaxDB, DB2, and Sybase ASE RDBMS databases.
sgesap/sapextinstance      For handling external instances (SAP software running in a non-clustered environment).
sgesap/sapinfra            For clustering SAP infrastructure software.
sgesap/livecache           For clustering SAP liveCache instances.
sgesap/mdminstance         For handling SAP MDM repositories.
NOTE: These modules can also include other common SGeSAP and Serviceguard modules that
are required to setup a complete package.
5.1.2.2 SGeSAP services
SGeSAP provides monitors that can be used with the sg/service module, which monitor the
health of the SAP instances and their databases. Some monitors also offer local restart functionality.
For example, in the case of an instance failure, the monitors attempt to restart the instance on the
same node before initiating a package failover. The following monitors are provided with SGeSAP:
Table 16 SGeSAP monitors

Monitor           Description
sapms.mon         To monitor a message service that comes as part of a Central Instance or System Central Service Instance for ABAP/JAVA usage.
sapenq.mon        To monitor an enqueue service that comes as part of a System Central Service Instance for ABAP/JAVA usage.
sapenqr.mon       To monitor an enqueue replication service that comes as part of an Enqueue Replication Instance.
sapdisp.mon       To monitor a SAP dispatcher that comes as part of a Central Instance or an ABAP Application Server Instance.
sapwebdisp.mon    To monitor a SAP Web Dispatcher that is installed either as a (W-type) instance into a dedicated SID or by unpacking and bootstrapping into an existing SAP Netweaver SID.
sapgw.mon         To monitor a SAP Gateway (G-type instance).
sapdatab.mon      To monitor MaxDB, Oracle, DB2, and Sybase ASE database instances. Additionally, it monitors the xserver processes for MaxDB and listener processes for Oracle.
saplc.mon         To monitor SAP liveCache instances.
sapmdm.mon        To monitor SAP MDM servers.
sapenqor.mon      To coordinate package startup in follow-and-push SCS/ERS scenarios. Used internally by SGeSAP in the enqor MNP (multi-node package).
These monitors are located in the directory $SGCONF/monitors/sgesap. Each monitor automatically performs regular checks on the availability and responsiveness of a specific software component within all the SAP instances that provide this service in the package.
NOTE: Sourcing the Serviceguard cluster configuration file with . /etc/cmcluster.conf sets the $SGCONF environment variable, as well as other cluster environment variables.
For Oracle databases, issues with the Oracle listener process and the database are detected and
local restarts of the listener are triggered by the monitor, if required.
For MaxDB databases, issues with the xserver processes and the database are detected and
local restarts are triggered by the monitor, if required. The SAP central service monitors detect
issues with the SAP startup agent of their instances and attempt local restarts of the agent software.
The SAP message service monitor sapms.mon can work in environments that use the
Restart_Program_... setting in the SAP instance (start) profiles of the [A]SCS instances to
achieve local restarts of failing message services without triggering unnecessary instance failovers.
It is recommended to use the SAP restart mechanism only for the message server.
The SAP enqueue replication service monitor sapenqr.mon has built-in software restart functionality.
It locally restarts the replication instance in case the software fails. A related failover is only
triggered, if the instance fails to remain up for more than ten minutes three times in a row.
Momentary instability of the software is reported as an alert message in the Serviceguard Manager.
The SAP enqueue service monitor sapenq.mon and the SAP dispatcher monitor do not provide built-in software restart, and the native SAP instance restart must not be configured either. Configuring local restarts may lead to serious malfunctions for these software components.
Service monitors that are enabled to react to shutdown notifications from SAP's startup framework include sapdisp.mon, sapgw.mon, sapms.mon, sapenq.mon, sapenqr.mon, sapwebdisp.mon, and sapmdm.mon. For more information on SAP's startup framework, see the “Performing SAP administration with SAP's startup framework” (page 25) section.
5.1.3 Installation options
Serviceguard and SGeSAP provide three different methods for installing and configuring packages in an SAP environment:
1.  SGeSAP Easy Deployment using the deploysappkgs script: This is applicable for some selected SAP installation types, for example, SAP Central Instance installations. It provides an easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID.
2.  A guided installation using the Serviceguard Manager GUI: A web-based graphical interface, with plug-ins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances.
3.  The classical Command Line Interface (CLI): The commands cmmakepkg, cmcheckconf, and cmapplyconf are used for creating a package configuration, checking the configuration, and registering the package with the cluster.
Table 17 (page 52) provides a quick summary of the pros and cons of these methods and suggestions on when to use them.
Table 17 Installing and configuring packages in SAP environment

Method: SGeSAP Easy Deployment using the deploysappkgs script
Description: This only works for some selected SAP installation types, for example, SAP Central Instance installations. It provides an easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID.
NOTE: This method is useful only for a fully automatic package configuration creation with no manual intervention, and for a SAP Central Instance and database. It assumes the SAP installation is complete and therefore is only available in the phase 3 approach. For more information on deploysappkgs, see the manpage.
Pros: Fully automatic Serviceguard package configuration file generation, no manual intervention required. Can update existing packages with the attributes necessary to protect a SAP instance or DB with the package.
Cons: Limited to certain configurations. The auto-discovery code requires all package-relevant file systems to be mounted. No GUI. Can only be used in the phase 3 approach.

Method: A guided installation using the Serviceguard Manager GUI
Description: A web-based graphical interface, with plug-ins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances.
NOTE: This method is useful for a guided, GUI-based package creation and registration of any SAP configuration. In the phase 1 approach, this method can be used to set up a "base" package configuration before installing SAP, as well as a package configuration after the SAP installation was completed in the phase 3 approach.
Pros: User-guided, GUI-based setup of packages. Easy to reconfigure the packages, if required. A basic validation of entered data is provided.
Cons: The pre-filling plugin requires all package-relevant file systems to be mounted for auto-discovery.

Method: The classical Command Line Interface (CLI)
Description: The commands cmmakepkg, cmcheckconf, and cmapplyconf are used for creating a package configuration, checking the configuration, and registering the package with the cluster.
NOTE: This method is useful for package setup where every detail is required.
Pros: Every package attribute can be edited. The package configuration file contains extensive documentation.
Cons: Manual edits of package attributes can be cumbersome and error-prone.
5.1.3.1 Serviceguard Manager GUI and Serviceguard CLI
For more information about installation option 2, package creation using the Serviceguard Manager GUI, see the respective online help available in the Serviceguard Manager GUI. For more information about installation option 3, package creation using the Serviceguard Command Line Interface (CLI), see the Managing HP Serviceguard A.11.20.20 for Linux manual at http://www.hp.com/go/linux-serviceguard-docs.
5.1.3.2 SGeSAP easy deployment
This section describes the installation and configuration of packages using easy deployment (via the deploysappkgs command, which is part of the SGeSAP product).
This script allows easy deployment of the packages that are necessary to protect the critical SAP components. The components that can be deployed into one or more packages are:
•   System Central Services (SCS or ASCS)
•   Central Instance (DVEBMGS) (if no ASCS is configured)
•   Enqueue Replication Services (both ERS for Java and ABAP)
•   Database Instance
The NFS exports that are necessary to operate this environment are not part of this version of easy
deployment, and must be configured separately (for example, in the Phase 1 approach).
SGeSAP easy deployment is invoked from the command line as deploysappkgs <packaging-option> <SAPSID>, with <packaging-option> being either multi (multiple packages) or combi (combined packages), and <SAPSID> being the SAP SID for which the packages are created.
Multiple packages (multi) is the default and recommended option. It allows the distribution of non-redundant SAP components into multiple, separate packages. It is a very flexible option, as the individual components can fail over independently, and unlike combined packages, failure of one component does not bring down the other components. Thus "failover dependencies" can be avoided.
The combined packages (combi) option combines the non-redundant SAP components of a SAP system in as few packages as possible. This keeps the setup simple and can save resources. With this option, a package initiates a failover even if only one of the SAP components configured in the package fails.
Easy deployment generates (or updates) one or more package configurations. These must be reviewed before they are applied. The screen output reports the filenames of the generated configuration files as well as a detailed log of each step performed while generating the package configuration.
Table 18 SGeSAP use cases for easy package deployment

Use case: Create packages from scratch
Scenario: SAP instances and DB are installed. Instances and DB can be running or can be halted. They are not configured into a SGeSAP package yet. Easy deployment will create new packages.

Use case: Extend "base packages"
Scenario: Before setting up SAP, minimal package(s) with required volume groups and IP addresses have been created and started. No SGeSAP configuration has been added yet. After setting up SAP, this/these package(s) must be extended with SGeSAP-related attributes including installed instances and/or DB.

Use case: Add a new SAP instance to an already configured package
Scenario: For example, a newly installed ASCS must be part of the existing SCS package. Easy deployment will add such an ASCS into the existing package, if it is configured to the same virtual host as the SCS or if option "combined" is selected.

Use case: Update existing package with additionally required resources
Scenario: For example, a new volume group related to a SAP instance must be added; or IPv6 is enabled and the virtual hostname now also has an IPv6 address which must be added. Easy deployment discovers these new attributes and adds them to the appropriate existing SGeSAP packages.
5.2 Infrastructure setup, pre-installation preparation (Phase 1)
This section describes the infrastructure that is provided with the setup of a NFS toolkit package and a base package for the upcoming SAP Netweaver installation. It also describes the prerequisites and some selected verification steps.
There is a one-to-one or one-to-many relationship between a Serviceguard package and SAP instances, and a one-to-one relationship between a Serviceguard package and a SAP database. A package can only serve a maximum of one SAP SID and one DB SID at a time, but the SAP SID and DB SID are not required to be identical.
Common resources on which SAP instances depend must go into a single package and (unless Serviceguard package dependencies are going to be used) instances depending on these resources must be configured into the same package later.
5.2.1 Prerequisites
•   Volume groups, logical volumes, and file systems (with their appropriate sizes) must be set up according to the storage layout described in Chapter 4 “SAP Netweaver cluster storage layout planning” (page 37).
•   Volume groups must be accessible from all cluster nodes.
•   The file system mount points must be created on all the cluster nodes.
•   Virtual hostnames (and their IP addresses) required for the SAP installation must exist, and must resolve on all the cluster nodes. This is mandatory for the NFS toolkit package setup. If the NFS toolkit setup is not used, continue to Infrastructure Setup - SAP base package setup (Phase 1b).
After completing the steps in this section, everything is ready for starting the SAP installation. This infrastructure consists of:
•   A running sapnfs package exporting the relevant file systems (depending upon the setup chosen, NFS may also be part of a SGeSAP package instead of a separate NFS toolkit package).
•   A working automount configuration on all the hosts that will run the SAP instance.
•   One or more SGeSAP “base packages” providing the environment for the subsequent SAP installation.
5.2.2 Node preparation and synchronization
Node preparation needs to be performed on every cluster node only once. If a node is added to
the cluster after the SGeSAP package setup, node preparation must be performed before the
packages are enabled on that node.
NOTE: It is critical for any of the following configuration and installation setup steps of Phase 1 that the prerequisites (setup of volume groups, logical volumes, file systems, mount points, and virtual hostnames) are implemented and synchronized on all the nodes before continuing the configuration or installation.
Synchronization means that the secondary nodes in the cluster must be coordinated with the
configuration changes from the primary node.
For example, configuration file changes in the cluster are copied from one source location (primary)
to one or more target locations (secondary).
Synchronization steps in Phase 1 are intermediate, as the goal here is to identify and isolate configuration issues at an early stage.
Phase 3 contains the final synchronization steps. For more information on final synchronization,
see “Post SAP installation tasks and final node synchronization (Phase 3a)” (page 66) section.
5.2.3 Intermediate synchronization and verification of virtual hosts
To synchronize virtual hosts:
1.  Ensure that all the virtual hosts that are used later in the SAP installation and the NFS toolkit package setup are added to /etc/hosts. If a name resolver is used instead of /etc/hosts, ensure that all the virtual hosts resolve correctly.
2.  Verify the order and entries for the host name lookups in /etc/nsswitch.conf. For example:
    hosts: files dns
Verification step(s):
To verify whether the name is resolved with the same IP address, ping the virtual host on all the nodes:
ping nfsreloc
PING nfsreloc (192.168.100.99) 56(84) bytes of data.
64 bytes from saplx-0-31 (192.168.100.99): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from saplx-0-31 (192.168.100.99): icmp_seq=2 ttl=64 time=0.070 ms
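For reference, cluster-wide /etc/hosts entries for the virtual hosts could look like the following sketch. Only nfsreloc and its address come from this section; the second entry is a made-up example:

```
# Virtual host entries, identical on every cluster node (illustrative):
192.168.100.99   nfsreloc      # virtual host of the sapnfs package
192.168.100.98   sapc11ci      # example virtual host for a CI package
```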
5.2.4 Intermediate synchronization and verification of mount points
The procedure for synchronizing mount points is as follows:
1.  Ensure that all the file system mount points for this package are created on all the cluster nodes as mentioned in the prerequisites.
Verification step(s):
For example, run the cd /sapmnt/C11 command on all the nodes, and test for availability.
5.2.5 Infrastructure setup for NFS toolkit (Phase 1a)
If a dedicated NFS toolkit package sapnfs for SAP is planned for the installation, it must be set up at a very early stage. You can create the package using either the Serviceguard Manager or the CLI interface.
NOTE: If a common sapnfs package already exists, it can be extended by the new volume groups, file systems, and exports. Mount points for the directories that are used by the NFS toolkit package and the automount subsystem must exist as part of the prerequisites. If the mount points do not exist, you must create them as required.
For example:
mkdir -p /export/sapmnt/C11
5.2.5.1 Creating NFS Toolkit package using Serviceguard Manager
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes GUI steps and the CLI steps are described in the “Creating NFS toolkit package
using Serviceguard CLI” (page 58) section.
The Serviceguard Manager GUI can be used to setup, verify and apply SAP sapnfs packages.
To create an NFS toolkit package:
1.  From the Serviceguard Manager Main page, click Configuration in the menu toolbar, then select Create a Modular Package from the drop-down menu.
2.  If toolkits are installed, a Toolkits Selection screen for selecting toolkits appears. Click Yes following the question Do you want to use a toolkit?
3.  Select NFS toolkit and click Next >>. The Package Selection screen appears.
Figure 13 Toolkit selection page
4.  In the Package Name box, enter a package name that is unique for the cluster.
    NOTE: The name can contain a maximum of 39 alphanumeric characters, dots, dashes, or underscores.
    The Failover package type is pre-selected and Multi-Node is disabled. NFS does not support Multi-Node.
5.  Click Next >>. The Modules Selection screen appears.
    The modules in the Required Modules table are selected by default and cannot be changed. In the Select Modules table, you can select additional modules (or clear the default recommended selections) by selecting the check box next to each module that you want to add (or remove) from the package.
    NOTE: Click Reset at the bottom of the screen to return to the default selections.
6.  Click Next >>. The first of several consecutive sg/failover module configuration screens appears with the following message at the top of the screen:
    •   Step 1 of X: Configure Failover module attributes (sg/failover)
    X will vary, depending on how many modules you selected.
There are two tables in this screen, Select Nodes and Specify Parameters. By default, nodes
and node order are pre-selected in the Select Nodes table. You can clear the selection or
change the node order, to accommodate your configuration requirements. Alternatively, you
can select Enable package to run on any node configured in the cluster (node order defined
by Serviceguard) and allow Serviceguard to define the node order.
To help in decision making, you can move the cursor over the configurable parameters, and
view the tool tips that provide more information about the parameter.
7.  Click Next >>. The second of several consecutive sg/failover module configuration screens appears. Fill in the required fields, and accept or edit the default settings.
8.  Click Next >> at the bottom of the screen to open the next screen in the series.
9. See the configuration summary below for an example of the NFS file system mount points, the
directories to export as well as the NFS export options.
10. After you complete all of the configuration screens, the Verify and submit configuration change
screen is displayed. Use the Check Configuration and Apply Configuration buttons at the
bottom of the screen to confirm and apply your changes.
Figure 14 Configuration summary page- sapnfs package
Figure 15 Configuration summary page- sapnfs package (continued)
5.2.5.2 Creating NFS toolkit package using Serviceguard CLI
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes the CLI steps and the GUI steps are described in the “Creating NFS Toolkit
package using Serviceguard Manager” (page 56) section.
1.  Run the cmmakepkg -n sapnfs -m tkit/nfs/nfs sapnfs.config command to create the NFS server package configuration file using the CLI.
2.  Edit the sapnfs.config configuration file.
    The following is an example of a package configuration with volume group vgnfs and filesystem lvnfsC11 to be exported and mounted from virtual host 192.168.100.99.
    Add the relevant attributes for the NFS server: virtual hostname, volume groups, file systems, and the mount point of the /export directory. A package_ip address specifies the virtual address through which the NFS clients must mount the exports.

    vg              vgnfs
    …
    fs_name         /dev/vgnfs/lvnfsC11
    fs_server       ""
    fs_directory    /export/sapmnt/C11
    fs_type         ext4
    fs_mount_opt    ""
    fs_umount_opt   ""
    fs_fsck_opt     ""
    …
    ip_subnet       192.168.100.0
    ip_address      192.168.100.99
    …

    Add the list of exported file systems for the NFS clients. The fsid needs to be unique for each exported file system:

    …
    tkit/nfs/nfs/XFS    "-o rw,no_root_squash,fsid=102 *:/export/sapmnt/C11"
    …

    NOTE: Change the service_name attribute, if it is not unique within the cluster.
3.  Run the cmapplyconf -P sapnfs.config command to apply the package.
4.  Run the cmrunpkg sapnfs command to start the package.
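Since each exported file system needs a unique fsid, a small check like the following sketch can catch duplicates in a configuration file before it is applied. The helper name is illustrative and not an SGeSAP tool:

```shell
# Scan a package configuration file for duplicate fsid=<n> values.
# Prints any duplicates and returns non-zero if one is found.
check_fsids() {
  dups=$(grep -o 'fsid=[0-9]*' "$1" | sort | uniq -d)
  if [ -n "$dups" ]; then
    echo "duplicate fsid values: $dups"
    return 1
  fi
}
# Usage: check_fsids sapnfs.config
```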
5.2.5.3 Automount setup
On each NFS client, add a direct map entry /- /etc/auto.direct to the /etc/auto.master automounter configuration file.
On each NFS client that will mount /sapmnt/C11 from the NFS server, edit and add the following to /etc/auto.direct:
/sapmnt/C11 -fstype=nfs,nfsvers=3,udp,nosymlink 192.168.100.99:/export/sapmnt/C11
This is also valid if both the NFS server and the NFS client are on the same cluster node.
NOTE: You can specify the virtual host name nfsreloc, instead of the IP address.
Reload the autofs changes with:
/etc/init.d/autofs reload
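Taken together, the two automounter files from the steps above would contain entries like this sketch (C11 and the address are the example values used throughout this section):

```
# /etc/auto.master -- reference to the direct map:
/-    /etc/auto.direct

# /etc/auto.direct -- NFS mount of the SAP mount directory:
/sapmnt/C11 -fstype=nfs,nfsvers=3,udp,nosymlink 192.168.100.99:/export/sapmnt/C11
```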
For more information about how to specify options and the NFS export string, see the HP Serviceguard Toolkit for NFS version A.03.03.01 for Linux User Guide at http://www.hp.com/go/linux-serviceguard-docs.
NOTE: If a common sapnfs package already exists it can be extended by the new volume
groups, file systems, and exports instead.
5.2.5.4 Solution Manager diagnostic agent file system preparations related to NFS toolkit
If a dialog instance with a virtual hostname is installed initially and clustering of the instance is done later, then some steps related to the file system layout must be performed before the SAP installation starts.
These steps are optional if:
•   It is planned to keep all the diagnostic agent installations on the “local” file system, or
•   The agent is not configured to move with the related dialog instance.
The SAP installation installs a separate diagnostic agent instance for each host of a dialog instance installation (physical and virtual). Therefore, diagnostic agent and dialog instances are “linked” via the virtual hostname and share the same IP address. As a result of this “link”, an agent instance must move with the related (clustered) dialog instances if the dialog instances fail over. As described in chapter 4 “SAP Netweaver cluster storage layout planning” (page 37), the logical volume of the diagnostic agent also has to fail over.
There is also a SYS directory underneath /usr/sap/DASID. Compared to other SAP installations, this does not contain links to /sapmnt. To have the same diagnostic agent SYS available on all the cluster nodes, these links must be created and subsequently /sapmnt/DASID must be mapped to a NFS-exported directory.
The steps to install the file system layout are as follows:
1.  Create the directory /sapmnt/DASID.
2.  Create a link from /usr/sap/DASID/SYS to /sapmnt/DASID.
3.  Create a logical volume and filesystem for the files on /sapmnt/DASID.
4.  Mount that file system to /export/sapmnt/DASID (create this directory if it doesn’t exist yet) and export it via NFS.
5.  Mount the exported filesystem to /sapmnt/DASID.
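Steps 1 and 2 can be sketched as shell commands. The helper below is illustrative only; the function name and the root parameter are not part of SGeSAP, and the logical volume, NFS export, and mount steps remain site specific:

```shell
# Create /sapmnt/<DASID> and the /usr/sap/<DASID>/SYS link (steps 1-2).
# "root" exists only so the sketch can be tried outside of /.
setup_dasid_dirs() {
  root=$1 dasid=$2
  mkdir -p "$root/sapmnt/$dasid" "$root/usr/sap/$dasid"
  # -sfn replaces any stale SYS link with one pointing at /sapmnt/<DASID>
  ln -sfn "$root/sapmnt/$dasid" "$root/usr/sap/$dasid/SYS"
}
# Example: setup_dasid_dirs "" DAA   # operates on /sapmnt/DAA and /usr/sap/DAA/SYS
```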
To make this exported filesystem highly available, the same mechanism as for other SAP SIDs can be used:
1.  Add the exported file system together with its volume group, logical volume, and file system mountpoints to a NFS toolkit package.
2.  Add /sapmnt/DASID to the automount configuration.
3.  Mountpoint /export/sapmnt/DASID must be available on all the cluster nodes where the NFS toolkit package runs. /sapmnt/DASID must be available on all the nodes where the dialog instances run.
5.2.5.5 Intermediate node sync and verification
For more information about synchronization of the other cluster nodes with the automount
configuration, see “Post SAP installation tasks and final node synchronization (Phase 3a)” (page
66) section.
It is possible to perform intermediate synchronization to test the NFS configuration.
For more information on synchronization with the other cluster nodes, see “NFS and automount
synchronization” (page 71) section.
Verification step(s):
1. Check if the package starts up on each cluster node where it is configured.
2. Run showmount -e <sapnfs package ip_address> and verify that name resolution
works.
3. Run showmount -e <virtual NFS hostname> on an external system (or a cluster node
currently not running the sapnfs package) and check that the exported file systems are shown.
On each NFS client in the cluster, check the following:
• Run the cd /usr/sap/trans command to check read access to the NFS server
directories.
• Run the touch /sapmnt/C11/abc; rm /sapmnt/C11/abc command to check
write access.
NOTE:
• For more information on synchronization with the other cluster nodes, see the “NFS and
automount synchronization” (page 71) section.
• For information on the final synchronization of the other cluster nodes with the automount
configuration, see the “Post SAP installation tasks and final node synchronization (Phase
3a)” (page 66) section.
5.2.6 Infrastructure Setup - SAP base package setup (Phase 1b)
This step finally makes the basic infrastructure available to start the SAP installation afterwards,
which includes the instance and database file systems as well as the IP addresses of the virtual
hostnames used for the installation. While there are other manual ways to provide that basic
infrastructure, setting up a Serviceguard package is the recommended way.
There are two ways to set up the initial SAP base package:
60
Clustering SAP Netweaver using SGeSAP packages
1. Set up the package with both Serviceguard and SGeSAP modules.
2. Set up the package with only Serviceguard modules.
5.2.6.1 Intermediate synchronization and verification of mount points
The procedure for synchronizing mount points is as follows:
• Ensure that all the file system mount points for this package are created on all the cluster nodes
as mentioned in the prerequisites.
For example:
mkdir /usr/sap/C11/SCS40
Verification step(s):
Invoke cd /usr/sap/C11/SCS40 on all the nodes and test for availability.
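A small helper can print, or with RUN=1 execute, the mkdir on every node. This is a sketch: the node names node1/node2 are placeholders for your actual cluster nodes, and the SCS40 mount point is taken from the example above.

```shell
# Print the per-node mkdir commands; set RUN=1 to execute them via ssh.
NODES="node1 node2"            # assumption: your cluster node names
MP=/usr/sap/C11/SCS40          # mount point from the example above
for n in $NODES; do
  if [ "${RUN:-0}" = 1 ]; then
    ssh "root@$n" mkdir -p "$MP"
  else
    echo "ssh root@$n mkdir -p $MP"
  fi
done
```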
5.2.6.2 SAP base package with Serviceguard and SGeSAP modules
At this stage any SGeSAP modules relevant to the SAP instance can be included into the base
package configuration.
5.2.6.2.1 Creating the package with the Serviceguard Manager
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes the GUI steps and the CLI steps are described in the “Creating the package
configuration file with the CLI” (page 63) section.
1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, and
then select Create a Modular Package from the drop down menu.
If Metrocluster is installed, a Create a Modular Package screen for selecting Metrocluster
appears. If you do not want to create a Metrocluster package, click no (default is yes). Click
Next >> and another Create a Modular Package screen appears.
2. If toolkits are installed, a Create a Modular Package screen for selecting toolkits appears.
Click yes following the question Do you want to use a toolkit?
3. Select the SGeSAP toolkit.
4. In the Select the SAP Components in the Package table, select the SAP Instances. This component
is incompatible with SAP NetWeaver Operation Resource (enqor).
5. Optionally for a database package, select the SAP Database Instance. For a combined
package, select both SAP Instances and SAP Database Instance. Other combinations are also
possible.
Figure 16 Toolkit selection screen
6. Click Next >> and in the Select package type window, enter a package name. The Failover
package type is pre-selected and Multi-Node is disabled. The SGeSAP Package with SAP
Instances does not support Multi-Node.
7. Click Next >> at the bottom of the screen and another Create a Modular Package screen
appears with the following messages at the top of the screen:
The recommended modules have been preselected.
Choose additional modules for extra Serviceguard capabilities.
8. The modules in the Required Modules window are set by default and cannot be changed. In
the Select Modules window, you can select additional modules (or clear the default
recommended selections) by selecting the check box next to each module that you want to
add to (or remove from) the package.
Click Reset to return to the default selections.
9. Click Next >> and another Create a Modular Package screen appears with the following
message:
Step 1 of X: Configure Failover module attributes (sg/failover), where X varies depending
on how many modules you selected.
There are two windows in this screen, Select Nodes and Specify Parameters. By default, nodes
and node order are pre-selected in the Select Nodes window. You can deselect nodes, or
change the node order, to accommodate your configuration requirements. Alternatively, you
can select Enable package to run on any node configured in the cluster (node order defined
by Serviceguard) and allow Serviceguard to define the node order.
To help in decision making, you can move the cursor over the configurable parameters, and
view the tool tips that provide information about the parameter.
10. Click Next >> and another Create a Modular Package screen appears with the following
message:
Step 2 of X: Configure SGeSAP parameters global to all clustered SAP software
(sgesap/sap_global).
Fill in the required fields and accept or edit the default settings.
Click Next >>. Another Create a Modular Package screen appears with the following message
at the top of the screen:
Step 3 of X: Configure SGeSAP SAP instance parameters (sgesap/sapinstance).
Fill in the required fields, and accept or edit the default settings.
Click Next >> until the mandatory <SID> can be entered.
Figure 17 Configuration screen: SAP System ID
11. After you are done with all the Create a Modular Package configuration screens, the Verify
and submit configuration change screen appears. Use the Check Configuration and Apply
Configuration buttons to confirm and apply your changes.
5.2.6.2.2 Creating the package configuration file with the CLI
NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This
section describes the CLI steps; the GUI steps are described in the “Creating the package with the
Serviceguard Manager” (page 61) section.
Invoke the cmmakepkg -n <pkg> -m sgesap/sapinstance [-m ...] <pkg>.config
command.
Initially no SGeSAP attributes are enabled except for the mandatory attribute
sgesap/sap_global/sap_system, which must be set to the SAP SID designated for the
installation. All other SGeSAP-related attributes must be left unspecified at this point.
For a database package, specify the module sgesap/dbinstance. The sgesap/dbinstance
module does not have any mandatory attributes.
For a combined package, both the sgesap/sapinstance and sgesap/dbinstance modules
must be specified. Other combinations are also possible.
NOTE: Specifying the SGeSAP modules automatically adds the necessary Serviceguard modules,
such as volume_group, filesystem, or package_ip, required for a base package.
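The CLI flow can be sketched as follows. Since cmmakepkg itself is only available on a Serviceguard node, this sketch simulates the one edit that is mandatory at this point, setting sgesap/sap_global/sap_system, on a minimal stand-in configuration file; the package name sapci and SID C11 are hypothetical.

```shell
# Stand-in for the file that cmmakepkg would generate, e.g.:
#   cmmakepkg -n sapci -m sgesap/sapinstance sapci.config
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
package_name                    sapci
sgesap/sap_global/sap_system
EOF
# Set the only mandatory SGeSAP attribute to the designated SID:
SID=C11
sed -i "s|^sgesap/sap_global/sap_system.*|sgesap/sap_global/sap_system $SID|" "$CFG"
grep '^sgesap' "$CFG"
```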
5.2.6.3 SAP base package with Serviceguard modules only
It is possible to create a package configuration by specifying only the Serviceguard modules. The
SGeSAP modules can be added later. Such a package configuration requires at least the following
Serviceguard modules:
• volume_group
• filesystem
• package_ip
NOTE:
• Include the service module at this stage to use SGeSAP service monitors (for both SAP instance
and DB) at a later stage.
• Add the generic_resource module, if the package is used for a SGeSAP SCS/ERS follow-and-push
configuration.
5.2.6.3.1 Creating the package with Serviceguard Manager
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes GUI steps and the CLI steps are described in the “Creating the package
configuration file with the CLI” (page 64) section.
1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, and
then select Create a Modular Package from the drop down menu.
2. Click no following the question Do you want to use a toolkit?
3. Click Next >> and in the Package Name window, enter a package name. The Failover package
type is pre-selected and Multi-Node is disabled. The SGeSAP Package with SAP Instances
does not support Multi-Node.
4. Click Next >> and another Create a Modular Package screen appears with the following
messages at the top of the screen:
The recommended modules have been preselected.
Choose additional modules for extra Serviceguard capabilities.
5. The modules in the Required Modules window are set by default and cannot be changed. In
the Select Modules window, you can select additional modules (or clear the default
recommended selections) by selecting the check box next to each module that you want to
add to (or remove from) the package.
Figure 18 Module selection page
Click Reset at the bottom of the screen to return to the default selection.
6. After you are done with all the Create a Modular Package configuration screens, the Verify
and submit configuration change screen appears. Use the Check Configuration and Apply
Configuration buttons to confirm and apply your changes.
5.2.6.3.2 Creating the package configuration file with the CLI
NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This
section describes the CLI steps; the GUI steps are described in the “Creating the package with
Serviceguard Manager” (page 63) section.
1. Run any one of the following commands:
cmmakepkg -n <pkg> \
-m sg/volume_group \
-m sg/filesystem \
-m sg/package_ip <pkg>.config
or
cmmakepkg -n <pkg> \
-m sg/volume_group \
-m sg/filesystem \
-m sg/package_ip \
-m sg/service \
-m sg/generic_resource <pkg>.config
or
cmmakepkg -n <pkg> -m sg/all <pkg>.config
Add the required attributes for the SAP instance and database installation to the resulting
package configuration.
The required attributes are vg, fs_name, fs_directory, fs_type, ip_subnet, and
ip_address.
For examples of these attributes, see “Creating NFS toolkit package using Serviceguard
CLI” (page 58) section.
2. Verify the package configuration by using the cmcheckconf -P <pkg>.config command,
and if there are no errors, run the cmapplyconf -P <pkg>.config command to apply
the configuration.
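For orientation, a fragment of such a package configuration might look like the following. The attribute names are the ones listed above; the volume group name, device, mount point, subnet, and address are purely illustrative:

```
vg              vgC11
fs_name         /dev/vgC11/lvSCS40
fs_directory    /usr/sap/C11/SCS40
fs_type         ext4
ip_subnet       10.0.0.0
ip_address      10.0.0.11
```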
5.2.6.4 Verification steps
A simple verification of the newly created base package is to test if the package startup succeeds
on each cluster node where it is configured.
5.3 SAP installation (Phase 2)
This section provides information for installing SAP into a Serviceguard cluster environment. For
more information, see SAP installation guides.
SAP instances are normally installed on one dedicated node (referred to as <primary>). In Phase
3, the changes from the SAP installation on the <primary> node must be distributed to the other
nodes in the cluster (referred to as <secondary> nodes) intended to run the SAP instance. Once
the SAP instance is clustered, there is no concept of <primary> and <secondary> nodes anymore
and all the nodes provide an identical environment.
5.3.1 Prerequisites
• The sapnfs package must be running on one of the nodes in the cluster and exporting the
relevant NFS file systems.
• The automount subsystem must be running and the NFS client file systems must be available.
• A SGeSAP base package(s) for the SAP instance installation must be running on the <primary>
node.
The primary node is the node where the sapinst tool (now known as Software Provisioning
Manager) is executed to start the installation.
Determine the SAP instance number(s) to be used for the installation and verify that they are unique
in the cluster.
5.3.2 Installation of SAP instances
The SAP instances and database are installed with the SAP sapinst tool.
Use either of the following methods to specify the virtual hostname to which the instances are to
be attached:
Run export SAPINST_USE_HOSTNAME=<virtual host> before running sapinst, or add
the virtual hostname to the command line:
sapinst … SAPINST_USE_HOSTNAME=<virtual host>
The sapinst tool offers various installation types. The type of installation determines the way instances
and the database are installed and the flexibility of the setup. The three installation types offered
are:
• Central system
• Distributed system
• High-Availability system
NOTE: While all three installation types can be clustered, the recommended installation type is
the HA system. The HA option is available for all Netweaver 7.x versions. After completing the
installation, the SAP system must be up and running on the local (primary) node.
For more information on the installation types, see the SAP installation guides.
5.4 Post SAP installation tasks and final node synchronization (Phase 3a)
After the SAP installation has completed in Phase 2, some SAP configuration values may have to
be changed for running the instance in the cluster.
Additionally, each cluster node (except the primary where the SAP installation runs) must be updated
to reflect the configuration changes from the primary.
Complete the following tasks for Phase 3 before the SGeSAP package is finalized:
• Configuration settings of the SAP installation:
◦ Program start entries in the SAP profiles
◦ MaxDB xserver autostart
◦ Oracle listener names
◦ Hostname references in DB configuration files
◦ DB2 db2nodes.cfg and fault monitor
• Review SAP parameters, which can conflict with SGeSAP.
• At the Operating System (OS) level, synchronize the SAP installation changes on the primary
node with all the secondary nodes in the cluster:
◦ User and groups related information for the SAP SID and DB administrator.
◦ Update login scripts containing virtual hostnames.
◦ Duplicate file systems identified as “local” to the secondary nodes. For more information
on “local”, see chapter 4 “SAP Netweaver cluster storage layout planning” (page 37).
◦ DB related synchronization for MaxDB and Oracle.
• Adjust system wide configuration parameters to meet SAP requirements.
NOTE: Before starting with any Phase 3 configuration steps, ensure all the “base packages”
are up and running. For example, ensure that all the relevant file systems are mounted. This
is important for the Serviceguard Manager auto-discovery tool to work properly and provide
prefilled fields of the currently installed SAP configuration.
5.4.1 SAP post installation modifications and checks
HP recommends that you modify some of the settings generated by the SAP installation to avoid
conflicting behavior when run together with SGeSAP.
Disable SCS enqueue restarts, if SGeSAP ERS is also configured
Disable SCS enqueue restarts if SCS is installed with the Restart_Program parameter enabled
for the enqueue process.
This configuration automatically restarts the enqueue on the current node, destroying the replicated
enqueue lock table when the ERS reconnects to the restarted SCS instance. The desired behavior
is that the SCS package fails over to the node where the ERS package with the replicated lock
table is running and recovers the replicated enqueue locks from there.
In the [A]SCS profile, the line with Restart (the number might vary)
Restart_Program_01 = local $(_EN)
has to be changed to Start:
Start_Program_01 = local $(_EN)
Avoid database startup as part of Dialog instance startup
A dialog instance profile might contain an entry like Start_Program_xx = immediate
$(_DB). This entry is generated by the SAP installation to ensure that the DB is up before the
dialog instance is started. It is recommended to disable such entries to avoid possible conflicts with
the DB startup managed by the SGeSAP database package.
MaxDB/liveCache: Disable Autostart of instance specific xservers
With an "isolated installation" each MaxDB/liveCache 7.8 database has its own
"installation-specific xservers". The "global xserver" exists for older databases and for forwarding
requests to the xserver of a 7.8 database.
The startup of the "global xserver" also starts the "installation-specific xserver" out of the DB specific
bin directory.
NOTE: Stopping the global xserver does not stop the specific ones.
When running more than one 7.8 database in a clustered environment, this startup behavior can
lead to error messages (because the file system with the specific bin directory of the other DB is
not mounted) or even busy file system error messages belonging to the other database. Therefore,
it is recommended to switch the xserver autostart feature off when running more than one
MaxDB database and/or liveCache in the cluster.
However, SAP’s startDB currently relies on autostart being switched on and therefore does not
explicitly start the DB specific xserver.
Switch off the Autostart by editing the [Params-/sapdb/<DBSID>/db] section in
/sapdb/data/config/Installations.ini: set XserverAutostart to no (default is
yes).
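The resulting Installations.ini section might look like the following sketch, where LC1 is a hypothetical <DBSID> and any other keys in the section are left untouched:

```
[Params-/sapdb/LC1/db]
XserverAutostart = no
```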
A DB or liveCache SGeSAP package with the sgesap/dbinstance module or the
sgesap/livecache module configured controls the starting and stopping of the instance specific
xserver together with the package.
Oracle: Set SID specific listener names
If more than one Oracle database is configured to run on a cluster node, it is recommended to
use <SID> specific listener names to avoid conflicts. Duplicate the contents of the file
listener.ora. In the first section, replace LISTENER with LISTENER<SID1>. In the duplicated
section, replace LISTENER with LISTENER<SID2>.
Update the HOST and PORT lines for each listener definition with the respective new values.
NOTE: If an existing package has the LISTENER name configured, then it must also be updated
with the new name. By adding tlistsrv<SID2> entries to /etc/services, the use of the port
can be documented. The ports must reflect the PORT used in listener.ora.
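The duplicated sections could then look like the following sketch. The DESCRIPTION/ADDRESS structure mirrors a generic Oracle listener definition; the SIDs C11/C12, hosts, and ports are illustrative assumptions only:

```
LISTENERC11 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vhostdb1)(PORT = 1527)))

LISTENERC12 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vhostdb2)(PORT = 1528)))
```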
Check if database configuration files use the DB virtual hostname
The database installation by SAP configures the virtual hostnames, but it is recommended to verify
that the files listed in Table 19 (page 68) are properly configured. Update the files if they are not.
Table 19 DB Configuration Files
DB type | File(s) | Path | Fields/Description
Oracle | tnsnames.ora | $ORACLE_HOME/network/admin, /oracle/client/<vers>/network/admin, /sapmnt/<SID>/profile/oracle | (HOST = hostname)
Oracle | listener.ora | $ORACLE_HOME/network/admin | (HOST = hostname)
MaxDB | .XUSER.62 | /home/<sid>adm | Nodename in xuser list output. If necessary, recreate user keys with xuser … -n vhost
Sybase | interfaces | $SYBASE | Fourth column of the master and query entry for each server
Sybase | dbenv.* | /home/<sid>adm | Variable dbs_syb_server
DB2 | db2nodes.cfg | /db2/db2<dbsid>/sqllib | Second column
NOTE: The db2nodes.cfg contains the physical hostname even though the installation is
executed by using a virtual hostname.
DB2: Disable fault monitor
The DB2 installation configures the fault monitor to start at boot time. The fault monitor runs out
of the instance directory and keeps the filesystem busy when halting the package. The command
to disable the fault monitor is:
/db2/db2<sid>/sqllib/bin/db2fmcu -d
DB2: Configure db2nodes.cfg
The db2nodes.cfg file contains either a physical host or the virtual host of the database. Both
alternatives have implications on the setup.
When using a physical host, ensure that db2nodes.cfg resides on each physical host in the
cluster. HP recommends that you use a physical host. The db2nodes.cfg file must not be located
on the shared filesystem. Instead, it must be located on a local filesystem. The db2nodes.cfg on
the shared filesystem must be a softlink to the local version.
For example:
A db2nodes.cfg link in the database instance directory points to a db2nodes.<dbsid>.cfg
file in the local /db2 directory. Each cluster node has a local copy with different contents:
db2nodes.<dbsid>.cfg on nodeA: 0 nodeA 0
db2nodes.<dbsid>.cfg on nodeB: 0 nodeB 0
You can create the link by using the following command:
ln -s /db2/db2nodes.<dbsid>.cfg /db2/db2<dbsid>/sqllib/db2nodes.cfg
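The local-file-plus-softlink layout can be reproduced in a scratch directory to see how the pieces fit together; db1 and nodeA are hypothetical names for illustration:

```shell
# Simulate the per-node db2nodes.cfg layout under a scratch root.
ROOT=$(mktemp -d)
DBSID=db1
mkdir -p "$ROOT/db2/db2$DBSID/sqllib"
# Local copy: contents differ per node ("0 <hostname> 0").
echo "0 nodeA 0" > "$ROOT/db2/db2nodes.$DBSID.cfg"
# Softlink from the instance directory to the local copy:
ln -s "$ROOT/db2/db2nodes.$DBSID.cfg" "$ROOT/db2/db2$DBSID/sqllib/db2nodes.cfg"
cat "$ROOT/db2/db2$DBSID/sqllib/db2nodes.cfg"
```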
When configuring a virtual host into the shared db2nodes.cfg, the DB2 database considers this
node as remote because this node is not equal to the local hostname. DB2 uses a remote
communication mechanism to start and stop the database. The DB2 DB2RSHCMD configuration
parameter determines the communication mechanism. It can be either rsh or ssh. The default
value is rsh.
When you use rsh for the virtual host configuration, the remote access commands must be installed
and configured. Password-less rsh must be allowed for the db2<dbsid> user from the virtual host.
For example:
# cat /db2/db2<dbsid>/.rhosts
<vhost> db2<dbsid>
When using ssh for the virtual host configuration, the public key of each physical host in the cluster
must be put into the system's known hosts file. Each of these keys must be prefixed by the virtual
hostname and IP. Otherwise, DB2 triggers an error when the virtual hostname fails over to another
cluster node, and reports different key for the virtual hostname. In addition, password-less ssh
login must be enabled for db2<dbsid> user between cluster nodes.
NOTE: For information about how to setup ssh in a DB2 environment, see the SAP note 1138605
(DB6 - Use ssh as remote shell utility in DPF environment).
5.4.2 User synchronization
The synchronization of the user environment has the following three sub tasks after completing the
SAP installation:
• Synchronize the user and group IDs.
• Copy the home directories to all the secondary nodes.
• In the home directories, adapt the filenames containing hostnames.
NOTE: The user environment necessary to run the SAP instances and database instances
must be identical on each cluster node.
To be independent of services external to the cluster, like DNS or LDAP, local authorization (for
example, /etc/passwd, /etc/shadow, and /etc/group) is recommended for user and
group information. The SAP and database administrators of the various SAP and DB instances
require the entries listed in Table 20 (page 69). The database specific users and groups exist
only if SAP is installed with the corresponding database.
Table 20 Password file users
Username | Purpose | Home directory
sapadm | SAP system administrator | /home/sapadm
<sid>adm | SAP SID administrator | /home/<sid>adm
<dasid>adm | SAP Diagnostic Agent administrator | /home/<dasid>adm
ora<dbsid> | Oracle database administrator | /home/ora<dbsid> or /oracle/<DBSID> (shared)
sqd<dbsid> | MaxDB database administrator | /home/sqd<dbsid>
<lcsid>adm | liveCache database administrator (1) | /home/<lcsid>adm
sdb | MaxDB file owner | -
syb<dbsid> | Sybase database administrator | /sybase/<dbsid> (shared)
db2<dbsid> | DB2 database owner | /db2/db2<dbsid> (shared)
sap<dbsid> | ABAP/Java connect user in a DB2 environment | /home/sap<dbsid>
sap<dbsid>db | ABAP/Java connect user in a DB2 environment | /home/sap<dbsid>db
(1) Does not follow the sqd<dbsid> MaxDB convention.
Table 21 Groupfile file groups
Groups | Remark
sapsys | Primary group for all SAP SID users and DB users
sapinst | SAP installer group, secondary for SAP SID and DB users
sdba | MaxDB file owner
oper | Oracle database operators (limited privileges)
dba | Oracle database administrators
db2<dbsid>adm, db2<dbsid>mnt, db2<dbsid>ctl, db2<dbsid>mon | IBM DB2 authorization groups
NOTE: For more information on the terms local, shared exclusive, and shared nfs file
systems used in this section, see chapter 4 “SAP Netweaver cluster storage layout planning” (page
37).
Along with synchronizing user and group information, the HOME directories of the administrators
must be created on the local file system on each secondary node (unless the directory does
not reside on a local disk, as is the case for some DB users).
This duplication of the user’s HOME to secondary nodes is done by running the tar command
on the HOME directory on the primary and unpacking that archive on the secondary.
Use the tar -p flag to preserve permissions, user, and group IDs.
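The pack-and-unpack round trip, with -p preserving the file mode, can be exercised locally. The c11adm home directory and .profile file below are illustrative; the commented lines show the equivalent commands on real primary and secondary nodes.

```shell
# Demonstrate that tar -p keeps permissions across pack/unpack.
SRC=$(mktemp -d)   # stands in for /home on the primary
DST=$(mktemp -d)   # stands in for /home on a secondary
mkdir -p "$SRC/c11adm"
echo 'test' > "$SRC/c11adm/.profile"
chmod 600 "$SRC/c11adm/.profile"
# On the primary:    tar -C /home -cf /tmp/c11adm.tar c11adm
# On the secondary:  tar -C /home -xpf /tmp/c11adm.tar
tar -C "$SRC" -cf "$SRC/c11adm.tar" c11adm
tar -C "$DST" -xpf "$SRC/c11adm.tar"
stat -c %a "$DST/c11adm/.profile"
```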
Some SAP login scripts in the <sid>adm and database admin HOME directories contain versions
for execution on the local node; for example, they contain hostnames in the filename. These login
scripts also have versions for bash (.sh) and csh (.csh). Files that are often found
are .sapenv, .dbenv, .lcenv, .sapsrc, and .dbsrc (not all of them necessarily
exist).
As these files are copied from the primary’s home directories, they have the primary node
name in the filename. The primary node name must be replaced with the secondary node name.
For example:
.dbenv.csh on sgxsap51 was duplicated from sgxsap50.
On the secondary node sgxsap51, execute:
mv .dbenv_sgxsap50.csh .dbenv_sgxsap51.csh
In older installations, startsap and stopsap scripts exist with the primary node name in the
filename. These must be renamed accordingly.
In the case of the Sybase database administrator, the home directory resides on a shared
exclusive disk. No duplication is required as the file system is only mounted on the
node running the package. However, the node specific login scripts exist and therefore must be
created for all the secondary nodes. Run the copy command
cp .dbenv_sgxsap50.csh .dbenv_sgxsap51.csh on the primary (instead of the mv
command).
Verification:
Log in to each secondary cluster node and su to each user listed in Table 20 (page 69). The
command must not produce errors. If the home directory does not reside on a local file system
but resides on a shared file system instead, start the package containing the corresponding volume
group on that node first.
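The su check can be scripted as follows; the user list is an example subset of the administrators from Table 20, and a FAILED line indicates a user or environment that was not synchronized correctly.

```shell
# Report whether an su to the given user works without errors.
check_user() {
  if su - "$1" -c true >/dev/null 2>&1; then
    echo "$1 OK"
  else
    echo "$1 FAILED"
  fi
}
# Example subset of administrators (adjust to your SID/DBSID):
for u in c11adm orac11 sapadm; do
  check_user "$u"
done
```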
5.4.3 Network services synchronization
During the SAP installation, the file /etc/services is updated with the SAP definitions. These
updates must be synchronized with /etc/services on the secondary nodes. The very first
SAP installation on the primary node creates all the entries for the first four types of entries in
Table 22 (page 71).
Table 22 Services on the primary node
Service name | Remarks
sapdp<INR> | Dispatcher ports
sapdp<INR>s | Dispatcher ports (secure)
sapgw<INR> | Gateway ports
sapgw<INR>s | Gateway ports (secure)
sapms<SID> | Port for (ABAP) message server for installation <SID>
saphostctrl | SAP hostctrl
saphostctrls | SAP hostctrl (secure)
tlistsrv | Oracle listener port
sql6 | MaxDB
sapdbni72 | MaxDB
sapdb2<DBSID>, DB2_db2<dbsid>, DB2_db2<dbsid>_1, DB2_db2<dbsid>_2, DB2_db2<dbsid>_END | SAP DB2 communication ports
NOTE:
• <INR> = 00..99.
• There are no services related to the Sybase ASE database in /etc/services. Ports used by
Sybase ASE may be assigned to different services. Those entries should be removed from
/etc/services to avoid confusion.
5.4.4 NFS and automount synchronization
1. Synchronize the automount configuration on all the secondary nodes, if it was not done in
Phase 1.
2. Create the mount points for the directories that are used by the NFS package and the automount
subsystem.
For example:
mkdir -p /sapmnt/C11
Synchronize /etc/auto.master and /etc/auto.direct from the primary node to
all secondary nodes.
Reload the autofs with /etc/init.d/autofs reload.
5.4.5 SAP hostagent installation on secondary cluster nodes
It is recommended to have a SAP hostagent installation on each cluster node, even though it is not
a requirement for SGeSAP. Such an installation might already exist on these hosts through a previous
installation. If not, it must be installed according to the instructions in SAP note 1031096. The
sapinst used for instance installation may offer an option to install the hostagent. This step can
be executed within or directly after Phase 2. Make sure that both the uid of sapadm and the gid
of sapsys are identical on all the cluster nodes.
5.4.6 Other local file systems and synchronization
There are other directories and files created during the SAP installation that reside on local file
systems on the primary node. These must be copied to the secondary node(s).
SAP
Recreate directory structure /usr/sap/SID/SYS on all secondary nodes.
MaxDB files
Copy the local /etc/opt/sdb to all the secondary nodes. This is required only after the first
MaxDB or liveCache installation.
If /var/spool/sql was created by the installation (usually only for older versions), recreate the
directory structure on all the secondary nodes.
SAP’s Oracle instant client files
Depending on the Oracle client version, the SAP Oracle instant client files are installed in either
the /oracle/client/11x_64 or /oracle/client/10x_64 directories. Synchronize these
with the secondary nodes.
For more information about the Oracle Instant Client Installation and Configuration into SAP
environments, see SAP note 819829.
Oracle client installation for MDM
If an MDM configuration is configured as a distributed Oracle configuration, for example, database
and MDM server run in separate packages, then the full Oracle client installation is required.
1. After the installation, update tnsnames.ora with the virtual hostnames as described in
“Check Database Configuration Files”.
2. Synchronize /oracle/client with all the secondary nodes.
3. Set the environment for the mdm<SID> administrator as follows:
export LD_LIBRARY_PATH=/oracle/client/112_64/lib
export ORACLE_HOME=/oracle/client/112_64
4. Synchronize these with the secondary nodes.
Verification:
After synchronization, it is possible to manually start up the SAP instances and database on all
cluster nodes where their base packages are configured.
Follow this procedure to test the manual start and stop of SGeSAP clustering:
1. Stop the SAP instances and/or database intended for this package (by using stopsap or
other SAP commands like sapcontrol).
2. Halt that package on the local node.
3. Start the package on another cluster node.
4. Start the SAP instances on the new cluster node.
This is a preliminary test for SGeSAP clustering. If this test fails, clustering the SAP instance
later on with SGeSAP also fails.
5.5 Completing SGeSAP package creation (Phase 3b)
The three options for creating the final SGeSAP package are as follows:
• Easy deployment with the deploysappkgs command.
• Guided configuration using Serviceguard Manager.
• Package creation with the CLI interface using the cmmakepkg and cmapplyconf commands.
Creating SGeSAP package with easy deployment
To create the packages with the deploysappkgs command run:
deploysappkgs combi C11
or
deploysappkgs multi C11
This command attempts to create either a minimal (combi = combined instances) or a maximum
(multi = multiple instances) number of packages. If suitable base packages were already created
in Phase 1, it extends those packages with the necessary attributes found for the installed C11
instance. If necessary, the configuration file for the enqor multi-node package is also created.
You must review the resulting configuration files before applying them. Depending on the
attributes changed or added, the cmapplyconf command might fail, or stop running packages.
NOTE:
•   To get complete package configurations, it is recommended that the SAP database and
    instances are running on the node where deploysappkgs is invoked. Otherwise, attributes
    (especially those of the filesystem and volume_group modules) might be missing.
•   deploysappkgs can also be invoked at the end of Phase 2 on the primary node. However,
    the created package configurations cannot be applied yet.
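Because deploysappkgs only writes configuration files, a review before applying is essential. The following is a minimal review sketch; the file name, SID, and the attribute list are illustrative assumptions, and the Serviceguard commands are shown only as comments to be run on a real cluster node:

```shell
# Review a package configuration generated by deploysappkgs before
# applying it. The file name and attribute list below are examples;
# adjust both to your environment.
CONF=dbciC11.config
missing=""

# deploysappkgs may leave storage attributes empty if the SAP
# instances were not running during discovery; flag them here.
for attr in fs_name vg; do
  grep -q "^[[:space:]]*${attr}" "$CONF" 2>/dev/null || missing="$missing $attr"
done

if [ -n "$missing" ]; then
  echo "WARNING: attributes missing from ${CONF}:${missing}"
fi

# After the review passes, check syntax and apply on a cluster node:
#   cmcheckconf -P dbciC11.config
#   cmapplyconf -P dbciC11.config
```

Running cmcheckconf first avoids the case described above, where cmapplyconf fails or stops running packages.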
5.5.1 Creating SGeSAP package with guided configuration using Serviceguard
Manager
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes GUI steps and the CLI steps are described in the “Creating SGeSAP package
with CLI” (page 73) section.
1.  Start Serviceguard Manager and add the SGeSAP modules to the existing base packages,
    if they were not added in Phase 1.
2.  Update the SGeSAP attributes with the current values, if the SGeSAP modules were already
    added in Phase 1.
5.5.2 Creating SGeSAP package with CLI
NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI. This
section describes CLI steps and the GUI steps are described in the “Creating SGeSAP package
with guided configuration using Serviceguard Manager” (page 73) section.
The SGeSAP configuration must be added to the Serviceguard “base” packages created earlier,
if it was not already added in Phase 1.
An example for adding or updating the package using the command line is as follows:
1.  To add the sapinstance module to an existing package, save the current configuration by
    running the mv <pkg>.config <pkg>.config.SAVE command.
2.  Run the cmmakepkg -m sgesap/sapinstance -i <pkg>.config.SAVE <pkg>.config
    command.
3.  To add the dbinstance module to an existing package, save the current configuration by
    running the mv <pkg>.config <pkg>.config.SAVE command.
4.  Run the cmmakepkg -m sgesap/dbinstance -i <pkg>.config.SAVE <pkg>.config
    command.
5.  Edit the package configuration file to update the relevant SGeSAP attributes.
    NOTE: Non-SGeSAP attributes such as service or generic_resource must also be
    updated.
6.  Run the cmapplyconf -P <pkg>.config command to apply the configuration.
7.  Run the cmrunpkg <pkg> command to start the package.
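The save-and-regenerate pattern of steps 1 to 4 can be wrapped in a small helper. This is only a sketch: add_module is a hypothetical function name, and the cmmakepkg invocation is echoed rather than executed so it can be reviewed before being run on a cluster node:

```shell
# Hypothetical helper: back up a package configuration file and
# regenerate it with an additional SGeSAP module merged in.
# The cmmakepkg call is echoed for illustration only.
add_module() {
  pkg=$1
  module=$2
  mv "${pkg}.config" "${pkg}.config.SAVE"
  echo cmmakepkg -m "sgesap/${module}" -i "${pkg}.config.SAVE" "${pkg}.config"
}

touch dbciC11.config             # stand-in for an existing package file
add_module dbciC11 sapinstance   # prints the cmmakepkg invocation
```

After regenerating the file, continue with steps 5 to 7 (edit, cmapplyconf, cmrunpkg).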
5.5.3 Module sgesap/sap_global – SAP common instance settings
This module contains the common SAP instance settings that are included by the following SGeSAP
modules:
•   sapinstance
•   mdminstance
•   sapextinstance
•   sapinfra
The following table describes the SGeSAP parameters and their respective values:
sgesap/sap_global/sap_system
    Possible value: C11
    Defines the unique SAP System Identifier (SAP SID).

sgesap/sap_global/rem_comm
    Possible values: ssh, rsh
    Defines the commands for remote executions. Default is ssh.

sgesap/sap_global/parallel_startup
    Possible values: yes, no
    Allows the parallel startup of SAP application server instances. If set to no, the
    instances start sequentially.

sgesap/sap_global/cleanup_policy
    Possible values: normal, lazy, strict
    Before the instance startups, the package attempts to free up unused system resources
    (temporary files, IPC resources, and so on) in order to make the startups more likely
    to succeed. A database package only frees up database-related resources. SAP instance
    packages only remove IPCs belonging to SAP administrators.
    If this parameter is set to normal, only instance shared memory is cleaned up.
    If this parameter is set to lazy, cleanup is deactivated.
    If this parameter is set to strict, all shared memory is cleaned up, regardless of
    whether a process is still attached.
    NOTE: Using the strict setting can crash running instances of different SAP Systems
    on the failover host.

sgesap/sap_global/retry_count
    Possible value: 5
    Specifies the number of retries for several cluster operations that might not succeed
    immediately due to race conditions with other parts of the system. The default is 5.

sgesap/sap_global/sapcontrol_usage
    Possible values: preferred, exclusive, disabled
    Specifies whether the SAP sapcontrol interface and the SAP startup agent framework
    are required for startup, shutdown, and monitoring of SAP software components.
    Setting the value to preferred ensures that all the available SAP-provided legacy
    monitoring tools are used in addition to the agent framework monitors.
    When the value is exclusive, only sapcontrol is used to start, stop, and monitor
    SAP instances.
    When the value is disabled, the sapcontrol method is not used to start, stop, and
    monitor SAP instances.
    The default is preferred.
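In the package configuration file, the sap_global attributes appear as plain key/value lines. A hedged sketch for a system C11, using the default values from the table above:

```
sgesap/sap_global/sap_system        C11
sgesap/sap_global/rem_comm          ssh
sgesap/sap_global/parallel_startup  yes
sgesap/sap_global/cleanup_policy    normal
sgesap/sap_global/retry_count       5
sgesap/sap_global/sapcontrol_usage  preferred
```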
5.5.4 Module sgesap/sapinstance – SAP instances
This module contains the common attributes for any SAP Netweaver Instance. The following table
describes the SAP instance parameters:
sgesap/stack/sap_instance
    Possible example values: SCS40, ERS50
    Defines any SAP Netweaver instance, such as: DVEBMGS, SCS, ERS, D, J, ASCS, MDS,
    MDIS, MDSS, W, G.

sgesap/stack/sap_virtual_hostname
    Possible example values: vhostscs, vhosters
    Corresponds to the virtual hostname, which is mentioned during the SAP installation.

sgesap/stack/sap_replicated_instance
    Possible example values: —, SCS40
    For each SAP ERS instance that is part of the package, the corresponding replicated
    Central Services instance (SCS/ASCS) needs to be specified.

sgesap/stack/sap_stop_blocked
    Possible example value: no
    Blocks manually triggered instance stop commands.
Figure 19 Configuring SAP instance screen
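As a sketch, the attributes from the table above might appear in an SCS package and its ERS counterpart as follows (the instance names and hostnames are the example values from the table, not prescribed names):

```
# SCS package
sgesap/stack/sap_instance             SCS40
sgesap/stack/sap_virtual_hostname     vhostscs
```

```
# ERS package
sgesap/stack/sap_instance             ERS50
sgesap/stack/sap_virtual_hostname     vhosters
sgesap/stack/sap_replicated_instance  SCS40
```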
5.5.5 Module sgesap/dbinstance – SAP databases
This module defines the common attributes of the underlying database.
sgesap/db_global/db_vendor
    Possible example value: oracle
    Defines the underlying RDBMS database: Oracle, MaxDB, DB2, or Sybase.

sgesap/db_global/db_system
    Possible example value: C11
    Determines the name of the database (schema) for SAP.

For db_vendor = oracle:

sgesap/oracledb_spec/listener_name
    Possible example value: LISTENER
    Oracle listener name. Specify if the name was changed to a SID-specific name.

sgesap/oracledb_spec/listener_password
    Specify the Oracle listener password, if set.

For db_vendor = maxdb:

sgesap/maxdb_spec/maxdb_userkey
    Possible example value: c
    User key of the control user.

For db_vendor = sybase:

sgesap/sybasedb_spec/aseuser
    Possible example value: sapsa
    Sybase system administration or monitoring user.

sgesap/sybasedb_spec/asepasswd
    Password for the specified aseuser attribute.
For Sybase, the aseuser and asepasswd attributes are optional. When aseuser is specified and
the user has system administration rights, it is used for native fallback in database shutdown
situations. If the user does not have these rights, it is used for monitoring purposes only.
There are no additional attributes for db_vendor=db2.
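Combined, a dbinstance section for an Oracle-based system might look like this sketch (the SID and listener name are the example values from the table above):

```
sgesap/db_global/db_vendor          oracle
sgesap/db_global/db_system          C11
sgesap/oracledb_spec/listener_name  LISTENER
```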
Figure 20 Configuring SAP database
5.5.6 Module sgesap/mdminstance – SAP MDM repositories
The sgesap/mdminstance module is based on sgesap/sapinstance, with additional attributes for
MDM repositories, MDM access strings, and MDM credentials. Many configurations combine the
MDM instances MDS, MDIS, and MDSS (and possibly a DB instance) into one SGeSAP package.
This is called an "MDM Central" or "MDM Central System" installation. Each instance can also
be configured into a separate package, which is called a "distributed MDM" installation. All
MDM repositories defined in the package configuration are automatically mounted and loaded
from the database after the MDM server processes have started successfully.
The following table contains some selected SGeSAP parameters relevant to an MDM Central
System instance. For more information, see the package configuration file.
sgesap/sap_global/sap_system
    Value: MO7
    Defines the unique SAP System Identifier (SAP SID).

sgesap/stack/sap_instance
    Value: MDS01
    Example for defining an MDM MDS instance with instance number 01.

sgesap/stack/sap_instance
    Value: MDIS02
    Example for defining an MDM MDIS instance with instance number 02 in the same package.

sgesap/stack/sap_instance
    Value: MDSS03
    Example for defining an MDM MDSS instance with instance number 03 in the same package.

sgesap/stack/sap_virtual_hostname
    Value: mdsreloc
    Defines the virtual IP hostname that is enabled with the start of this package.

sgesap/db_global/db_system
    Value: MO7
    Determines the name of the database (schema) for SAP.

sgesap/db_global/db_vendor
    Value: oracle
    Defines the underlying RDBMS database that is to be used with this instance.

sgesap/mdm_spec/mdm_mdshostspec_host
    Value: mdsreloc
    The MDS server is accessible under this virtual IP address/hostname.

sgesap/mdm_spec/mdm_credentialspec_user
    Value: Admin
    User credential for executing MDM CLIX commands.

sgesap/mdm_spec/mdm_credentialspec_password
    Password credential for executing MDM CLIX commands.
The following table contains some selected SGeSAP parameters relevant to the MDM repository
configuration. For more information, see the package configuration file.
sgesap/mdm_spec/mdm_repositoryspec_repname
    Value: PRODUCT_HA_REP
    MDM repository name.

sgesap/mdm_spec/mdm_repositoryspec_dbsid
    Value: MO7
    DBMS instance name.

sgesap/mdm_spec/mdm_repositoryspec_dbtype
    Value: o
    DBMS instance type: "o" stands for Oracle.

sgesap/mdm_spec/mdm_repositoryspec_dbuser
    Value: mo7adm
    DBMS user name.

sgesap/mdm_spec/mdm_repositoryspec_dbpasswd
    Value: abcxyz
    DBMS password.
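Assembled into a package configuration file, a single repository definition might look like this sketch (all values are the illustrative ones from the table above):

```
sgesap/mdm_spec/mdm_repositoryspec_repname   PRODUCT_HA_REP
sgesap/mdm_spec/mdm_repositoryspec_dbsid     MO7
sgesap/mdm_spec/mdm_repositoryspec_dbtype    o
sgesap/mdm_spec/mdm_repositoryspec_dbuser    mo7adm
sgesap/mdm_spec/mdm_repositoryspec_dbpasswd  abcxyz
```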
5.5.7 Module sg/services – SGeSAP monitors
Depending on the instance type configured in the package, SGeSAP monitors can be configured
with this module to check the health of the instance. A monitor for the database in use can
also be configured.
Table 23 Module sg/services – SGeSAP monitors parameter
service_name
    Value: CM2CIdisp
    Unique name. A combination of package name and monitor type is recommended.

service_cmd
    Value: $SGCONF/monitors/sgesap/sapdisp.mon
    Path to the monitor script.

service_restart
    Value: 0
    Usually, no restarts must be configured for an SGeSAP monitor, to have an immediate
    failover if the instance fails.

service_fast_fail
    Value: no
    No fast fail configured.

service_halt_timeout
    Value: 5
    > 0, to give the monitor some time to clean up after it receives the TERM signal.
Serviceguard Manager guided package setup pre-populates the services screen with the monitors
appropriate for the instance if the service module has been selected to be included in the package.
Configure a database monitor with:
service_name <pkg>datab
service_cmd $SGCONF/monitors/sgesap/sapdatab.mon
All other values are set as described in Table 23 (page 78).
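Put together, a database monitor service stanza might look like this sketch (the dbciC11 prefix stands in for the <pkg> package name; attribute names and defaults follow Table 23):

```
service_name          dbciC11datab
service_cmd           $SGCONF/monitors/sgesap/sapdatab.mon
service_restart       0
service_fast_fail     no
service_halt_timeout  5
```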
5.5.8 Module sg/generic_resource – SGeSAP enqor resource
A generic resource has to be set up for SCS and ERS packages if the SGeSAP enqueue
follow-and-push mechanism is used. There is one common resource for each SCS/ERS pair. The
naming schema of the resource follows this convention:
sgesap.enqor_<SID>_<ERS>
For example, SAP System C11 has SCS40 and ERS41 configured, and ERS41 replicates SCS40. Both
the package containing SCS40 and the package containing ERS41 must have the generic resource
sgesap.enqor_C11_ERS41 set up with the generic_resource module.
The resource must be of evaluation_type before_package_start. The up_criteria for the SCS
package is “!=1”, and for the ERS package “!=2”.
For example, for the SCS package:

generic_resource_name             sgesap.enqor_C11_ERS41
generic_resource_evaluation_type  before_package_start
generic_resource_up_criteria      !=1

For the ERS package:

generic_resource_name             sgesap.enqor_C11_ERS41
generic_resource_evaluation_type  before_package_start
generic_resource_up_criteria      !=2

NOTE: In order to have any effect, these enqor resources require the enqor MNP to be up.
Serviceguard Manager guided configuration offers the correct values preselected on the
generic_resource screen only if an SGeSAP enqor MNP is already set up.
The deploysappkgs script supports the generic_resource module for enqor.
5.5.9 Module sg/dependency – SGeSAP enqor MNP dependency
SCS and ERS packages taking part in the SGeSAP follow-and-push mechanism must have a
same-node/up dependency on the enqor MNP. The attributes have to be set as follows:

dependency_name       enqor_dep
dependency_location   same_node
dependency_condition  enqor = UP
Serviceguard Manager guided configuration offers the correct values preselected on the
dependency screen only if an SGeSAP enqor MNP is already set up.
The deploysappkgs script supports the dependency module for enqor.
5.5.10 Module sgesap/enqor – SGeSAP enqor MNP template
This module is used to set up the SGeSAP enqor MNP. It has no attributes to be configured. An
SGeSAP enqor MNP is only mandatory in the SCS/ERS follow-and-push context.
The sgesap/enqor module must not be combined with any other SGeSAP module. A configured
enqor MNP is a prerequisite for the correct function of the sg/dependency and
sg/generic_resource attributes configured into a sapinstance package, as described above.
On the command line, an enqor MNP can be created with:
cmmakepkg -n enqor -m sgesap/enqor enqor.config
The resulting enqor.config can be applied without editing.
The Serviceguard Manager offers SAP Netweaver Operations Resource in the Select the SAP
Components in the Package screen for configuring the enqor MNP.
deploysappkgs creates the enqor.config file when the follow-and-push mechanism is the
recommended way of operation for the created SCS/ERS packages and no enqor MNP is configured
yet. In such a situation, deploysappkgs also extends the existing SCS/ERS packages with the
required generic_resource and dependency modules and their attributes.
Verification of Phase 3:
•   Start and stop the packages on each configured node. When testing the SGeSAP
    follow-and-push mechanism, the enqor MNP package must be up. This restricts the possible
    nodes for SCS and ERS package startup.
•   Make sure client applications (dialog instances) can connect.
5.5.11 Configuring sgesap/sapextinstance, sgesap/sapinfra and sgesap/livecache
This section describes configuring the SGeSAP toolkit with the sgesap/sapextinstance,
sgesap/sapinfra, and sgesap/livecache parameters.
5.5.11.1 Remote access between cluster nodes and to external application servers
For external application servers configured in a package, remote access between the cluster
nodes and to the external hosts needs to be enabled. Root access between cluster hosts must be
enabled, and the users <sid>adm and root from the cluster (in this case a cluster host can also
assume the role of an external application server) must be allowed to run commands as <sid>adm
on the external application servers. It is recommended to use ssh(1). Usage of rsh is
discouraged.
To accomplish this, the following steps are necessary:
•   Create ssh keys for root and <sid>adm.
•   Distribute those keys to allow access.
To generate the keys, run the ssh-keygen -t rsa command on each host, as the root user and
as the <sid>adm user. This creates files for the private key (id_rsa) and the public key
(id_rsa.pub) in the user’s .ssh directory.
The public key then needs to be distributed to the other hosts. This can be accomplished by
running the ssh-copy-id -i id_rsa.pub user@host command, which adds the user’s public
key to authorized_keys (not authorized_keys2) on the target host.
On each cluster node, execute this as the root user, with host being each of the other cluster
nodes in turn. On each cluster node, and for each external application server, invoke the
ssh-copy-id … user@host command twice, replacing the user@host string with
<sid>adm@appserver and root@appserver.
It is also recommended to pre-populate the known hosts file (/etc/ssh/ssh_known_hosts) on each
cluster node by executing
ssh-keyscan list-of-remote-hosts >> /etc/ssh/ssh_known_hosts
This avoids the first login from the remote host hanging in fingerprint confirmation.
After you complete this section, you can log in without using a password as:
•   the root user on all cluster nodes
•   root and <sid>adm on external application servers
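The distribution steps above can be sketched as a loop. This version only collects and prints the commands, so the list can be reviewed before anything is run; the node names, application server, and <sid>adm user are illustrative placeholders:

```shell
# Print the ssh key distribution commands for review.
# All hostnames and the <sid>adm user below are placeholders.
CLUSTER_NODES="node1 node2"
APP_SERVERS="appserver1"
SIDADM="c11adm"

cmds=""
# root access between all cluster nodes
for host in $CLUSTER_NODES; do
  cmds="${cmds}ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
"
done
# <sid>adm and root access to each external application server
for host in $APP_SERVERS; do
  cmds="${cmds}ssh-copy-id -i ~/.ssh/id_rsa.pub ${SIDADM}@${host}
ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
"
done
# pre-populate the known hosts file to avoid fingerprint prompts
cmds="${cmds}ssh-keyscan ${CLUSTER_NODES} ${APP_SERVERS} >> /etc/ssh/ssh_known_hosts
"
printf '%s' "$cmds"
```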
5.5.11.2 Configuring external instances (sgesap/sapextinstance)
External dialog instances ("D" and "J" type) can be configured into an SGeSAP package using
the sgesap/sapextinstance module. These instances might have the same SID as the package
(the values for the sap_ext_system and sap_system parameters are identical), or the SID can
be from a different system (the values for the sap_ext_system and sap_system parameters
differ). The external dialog instances can be started, stopped, and restarted with the
package, and also stopped when the package fails over to the node where the instance is
running. A restriction that applies to instances with a foreign SID is that these can only be
stopped if the package fails over to the local node.
Any instances configured with the sapextinstance module are handled on a best-effort basis.
Failing to start or stop an external instance does not cause the whole package to fail. Such
instances are also not monitored by an SGeSAP service monitor. If the sapcontrol usage
attribute is enabled (the default on SGeSAP/LX), SGeSAP tries to use sapcontrol commands to
start and stop instances.
For instances on remote hosts, sapcontrol uses the -host option to control the remote
instance. Note that this requires the remote instance’s sapstartsrv to be running already,
and the required web services (for starting and stopping the instance) to be open for remote
access from the local host (for more information, see SAP notes 927637 and 1439348). If
remote access via sapcontrol fails and fallback access via remote shell is enabled, the
remote shell is used instead.
The sapextinstance module also uses the attributes configured in the sgesap/sap_global
module. The sgesap/sap_global/sap_system attribute is used as the default SID, and
sgesap/sap_global/rem_comm as the communication mechanism from the cluster node to the
application server.
Serviceguard Manager or the CLI interface can be used to configure the module.
NOTE:
deploysappkgs cannot configure this module.
The attributes that define a single external instance are:

sap_ext_instance (GUI label: External SAP Instance)
    Instance type and number (like D01). Only D, J, and SMDA types are allowed.

sap_ext_system (GUI label: SAP System ID)
    SID of the external instance. If unspecified, sap_system (the SID of the package) is
    assumed.

sap_ext_host (GUI label: Hostname)
    Host where the external instance resides. A virtual hostname is allowed.

sap_ext_treat (GUI: values represented as checkboxes)
    Actions on the external instance (see Table 24 (page 81) for more information). Contains
    a ‘y’ for each action to be executed, an ‘n’ if the action must be skipped. List of five
    y/n values.
Table 24 Overview of reasonable treat values
y.... (position 1): start with package
    The application server is started with the package (own SID).

.y... (position 2): stop with package
    The application server is stopped with the package (own SID).

..y.. (position 3): restart during failover
    The application server is restarted when the package performs a failover (own SID). The
    restart occurs on package start.

...y. (position 4): stop if package local
    The application server is stopped when the package fails over to the local node, that is,
    to the node where the application server is currently running (own and foreign SID).

....y (position 5): reserved for future use
Figure 21 Configuring sapextinstance screen
Supported operating systems for running external instances are Linux, HP-UX, and Microsoft
Windows Server. For Windows, the example functions start_WINDOWS_app and stop_WINDOWS_app
must be adapted to the remote communication mechanism used on the Windows server. In this
case, there must be a customer-specific version of these functions in customer_functions.sh.
The sgesap/sapextinstance module can also be used to configure diagnostic agent instances
that fail over with clustered dialog instances (they start and stop together with the dialog
instance). Although technically they belong to a different SID, they can be started and
stopped with the package. The hostname to be configured is the same as the virtual hostname
of the instance configured in the package (which usually is also part of the diagnostic
instance profile name). If an SMDA instance is configured, it is displayed in the
Serviceguard Manager guided package configuration.
Example 1:
The package is associated with SAP System SG1. The primary node is also running a
non-clustered ABAP dialog instance with instance ID 01. It must be stopped and started with
manual package operations. In case of a failover, a restart attempt must be made on the
primary node (if the primary node is reachable from the secondary). There is a second
instance D02 on a server outside of the cluster that must similarly be started, stopped, and
restarted.

sap_ext_instance  D01
sap_ext_host      node1
sap_ext_treat     yyynn

sap_ext_instance  D02
sap_ext_host      hostname1
sap_ext_treat     yyynn
Example 2:
The failover node is running a central, non-clustered test system QAS and a dialog instance
D03 of the clustered SG1. All of these must be stopped in case of a failover to that node, in
order to free up resources.

sap_ext_instance  DVEBMGS10
sap_ext_system    QAS
sap_ext_host      node2
sap_ext_treat     nnnyn

sap_ext_instance  D03
sap_ext_host      node2
sap_ext_treat     yyyyn
Example 3:
The package contains one or more dialog instances configured for vhost1, for which a
diagnostic agent is also configured. It must be stopped before the instances are stopped and
started after the instances are started.

sap_ext_instance  SMDA97
sap_ext_system    DAA
sap_ext_host      vhost1
sap_ext_treat     yynnn
5.5.11.3 Configuring SAP infrastructure components (sgesap/sapinfra)
The SAP infrastructure software defines software components that support a specific SAP
Netweaver Application Server but are independent of the server start or stop sequence.
NOTE: SAP Netweaver Application Server instances cannot be specified here.
Legal values for sgesap/sap_infra_sw_type are described in Table 25 (page 83).
Table 25 Legal values for sgesap/sap_infra_sw_type
saposcol
    SAP operating system monitor collector
sapccmsr
    SAP additional monitor collector
rfcadapter
    SAP XI/PI/EAI remote function call adapter
sapwebdisp
    SAP webdispatcher (not installed as an SAP instance, but unpacked and bootstrapped to
    /usr/sap/<SID>/sapwebdisp)
saprouter
    SAP software network routing tool
You can specify the saprouter and biamaster values more than once.
The sap_infra_sw_treat attribute specifies whether the component is only started/notified
with the package startup, or whether it is also stopped as part of a package shutdown (the
default). Possible values are startonly and startnstop.
sap_infra_sw_params specifies additional command line parameters to be passed to the
component. You can add the sap_infra_sw_host parameter to specify the hostname on which to
start a BIA master instance. You can ignore this parameter for other infrastructure
components, because it has no effect on them; they are always started or stopped locally.
Examples:

sap_infra_sw_type saposcol

sap_infra_sw_type saprouter
sap_infra_sw_treat startnstop
sap_infra_sw_params "-H virtual_IP -W 20000\
    -R /sapmnt/C11/profile/saprouttab\
    -T /sapmnt/C11/profile/dev_rout1"

sap_infra_sw_type sapccmsr
sap_infra_sw_params /sapmnt/C11/profile/ccmsr_profilename

sap_infra_sw_type sapwebdisp
sap_infra_sw_treat startnstop
sap_infra_sw_params "-shm_attachmode 6"
When using Serviceguard Manager to configure this module, the following can be configured:
Figure 22 Configuring SAP infrastructure software components screen
To add an SAP infrastructure software component to the Configured SAP Infrastructure Software
Components list:
1.  Enter information in the Type, Start/Stop, and Parameters boxes.
2.  Click <Add to move this information to the Configured SAP Infrastructure Software
    Components list.
To remove an SAP infrastructure software component from this list, click the option adjacent
to the component that you want to remove, then click Remove.
To edit a configured SAP infrastructure software component, click the option adjacent to the
component that you want to edit, then click Edit>>. The component information moves to the
Type, Start/Stop, and Parameters boxes, where you can make changes. Click Update, and the
edited information is returned to the Configured SAP Infrastructure Software Components list.
5.5.11.4 Module sgesap/livecache – SAP liveCache instance
The liveCache setup is very similar to an sgesap/dbinstance setup with MaxDB. However, there
are a few minor differences:
•   The liveCache installation does not create an XUSER file (.XUSER.62).
•   The liveCache clients are the work processes of the SCM system, which belong to a
    different SID.
Additional steps for liveCache are:
•   Create the XUSER file with the c key.
•   Make sure SAP transaction LC10 in the SCM system has the virtual hostname for liveCache
    configured.
•   Disable liveCache xserver autostart (optional).
•   Create the liveCache monitoring hook.
Create XUSER file
The SGeSAP liveCache module requires that a user key with the control user has been set up
for the <lcsid>adm user. Normally, key c is used for this, but other keys can also be used.
If the c key does not exist, log in as user <lcsid>adm and execute xuser -U c
-u control,password -d LCSID -n virtual-host set to create the XUSER file and the c key.
Other keys are only necessary if the SCM/liveCache integration uses decentralized
authorization instead of centralized authorization. The latter is preselected in transaction
LC10 and is the recommended way of authorization.
Verify the user key setup by running a connect test using dbmcli -U c db_state. This command
must return "online" if the liveCache is up.
The XUSER file must be distributed (along with the user itself, if not done yet) to all nodes
in the cluster planned to run the liveCache.
Verify transaction LC10
Make sure that the SAP LC10 transaction (Maintain liveCache integration) of the SCM
system uses the virtual hostname for the LCA, LDA, and LEA database connections (field
liveCache Server in the liveCache connection information). This is usually the case if
the liveCache client was selected during the SCM central instance installation and the values
used during the liveCache installation were provided in that step.
Disable xserver Autostart (optional)
If the liveCache version is >= 7.8, the xserver structure is the same as that of the MaxDB of
the corresponding version. If it is planned to run more than one liveCache or MaxDB on the
system, it is advisable to decouple the xserver startup (sdbgloballistener and DB-specific).
For more information, see the MaxDB section describing the decoupling of the startup.
Setup of monitoring hook
Create a symbolic link that acts as a hook informing the SAP software where to find the
liveCache monitoring software, to allow the prescribed interaction with it. Optionally, you
can change the ownership of the link to sdb:sdba. For this step, the shared file system
/sapdb/LCSID must be mounted and the environment variable $SGCONF must be defined.
ln -s $SGCONF/sgesap/monitors/saplc.mon /sapdb/LCSID/db/sap/lccluster
Setting up the liveCache package
The attributes offered by the liveCache module are:

lc_system
    Example value: LC1

lc_virtual_hostname
    Example value: reloc1
    The virtual hostname onto which the liveCache has been installed.

lc_start_mode
    Example value: online
    Defines the state into which the liveCache must be started. Possible values are offline
    (only the vserver is started), admin (start in admin mode), slow (start in cold-slow
    mode), and online (start in online mode).

lc_user_key
    Example value: c
    Key to access the liveCache, as described in the previous step. Default is ‘c’.
The Serviceguard Manager guided package setup offers discovered values, if a liveCache is
installed.
Check SAP liveCache instance in the initial SGeSAP module selection screen (Select a Toolkit ->
SGeSAP -> SAP liveCache instance).
The configuration dialog brings up the screen for configuring the liveCache module.
Figure 23 Configuring liveCache screen
From the command line, run cmmakepkg -m sgesap/livecache lcLC1.config to create the
package configuration file. Then edit and apply the configuration file.
NOTE:
•   An SGeSAP liveCache package should not be configured with other SGeSAP modules, even
    though it is technically possible.
•   The SGeSAP easy deployment (deploysappkgs) script does not support liveCache.
Verification:
•   Start up the package on each configured node.
•   Make sure the liveCache can be accessed on each client node by executing dbmcli -U c
    on these nodes.
•   Make sure the SCM LC10 integration can connect every time.
5.6 Cluster conversion for existing instances
The recommended approach to clustering SAP configurations with Serviceguard is described in
“Installation options” (page 52), “SAP installation (Phase 2)” (page 65), “Post SAP installation
tasks and final node synchronization (Phase 3a)” (page 66), and “Completing SGeSAP package
creation (Phase 3b)” (page 73) sections. The basic goal is to initially set up a
"cluster-aware" environment and then install SAP using virtual hostnames and the appropriate
storage layout. There might be configurations, though, where the cluster conversion of an
existing SAP instance or database is necessary in order to make it highly available with
SGeSAP.
Such a cluster conversion consists primarily of two tasks:
•   Separating and moving the SAP instance files/file systems to file systems categorized as
    local copies, as shared exclusive, or as shared NFS file systems, as described in chapter
    4 “SAP Netweaver cluster storage layout planning” (page 37). This might involve creating
    new volume groups and logical volumes as well as copying the instance files from the
    current file system to the newly generated ones. This step is usually straightforward and
    is not covered here.
•   Adapting each occurrence of the old instance hostname to the new instance virtual
    hostname. This concerns file names as well as configuration values containing hostname
    strings in the files. For Java-based instances, this also requires the use of the SAP
    “configtool”.
The cluster conversion of a central instance (DVEBMGS with Message and Enqueue Server) into
a System Central Services (ASCS) instance and Enqueue Replication Service (ERS) Instance is not
covered here. SAP Note 821904 has details on how to accomplish such a conversion and/or the
reinstallation of instances using the High Availability installation option of the SAP sapinst
command.
5.6.1 Converting an existing SAP instance
•   Adapt the SAP instance profile names by renaming SID_Instance_oldhost to
    SID_Instance_newvirthost.
•   SAP profile names are also referenced in the SAP start profile of the instance. In the
    SAP start profile, look for _PF = or _PFL = entries and change the right-hand side
    accordingly.
•   Adapt all SAP profile entries referencing the old hostname of the converted instance.
    These depend on the type of instance. Possible profile entries are:
    ◦   SCSHOST
    ◦   j2ee/scs/host
    ◦   enque/serverhost
    ◦   rdisp/mshost
•   The SAPGUI also contains connect strings referencing oldhost. These must be converted to
    the newvirthost names.
•   Change the properties of a Java instance referencing the old host using the configtool.
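The profile renaming step can be sketched as follows. Everything here is a placeholder illustration: the SID, instance name, hostnames, and the local ./profile directory stand in for a real /sapmnt/<SID>/profile, and the script only renames files and lists remaining hostname references, which still need manual editing:

```shell
# Sketch: rename instance profiles and find remaining references to
# the old hostname. All names below are placeholders.
PROFILE_DIR=./profile        # normally /sapmnt/<SID>/profile
OLDHOST=oldhost
NEWHOST=newvirthost

mkdir -p "$PROFILE_DIR"
touch "$PROFILE_DIR/C11_SCS40_${OLDHOST}"    # stand-in profile file

# Rename SID_Instance_oldhost to SID_Instance_newvirthost
for f in "$PROFILE_DIR"/*_"$OLDHOST"; do
  [ -e "$f" ] || continue
  newname="${f%_$OLDHOST}_$NEWHOST"
  mv "$f" "$newname"
  echo "renamed: $f -> $newname"
done

# Hostname strings inside the profiles (_PF/_PFL entries, SCSHOST,
# rdisp/mshost, and so on) still need manual editing; list the files
# that still mention the old hostname:
grep -rl "$OLDHOST" "$PROFILE_DIR" || echo "no content references to ${OLDHOST}"
```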
5.6.2 Converting an existing database
•
Adapt entries in the database configuration files listed in the Table 19 (page 68) table
•
Adapt all SAP profile entries referring to the DB (not all entries below need to exist).
◦
SAPDBHOST
◦
j2ee/dbhost
◦
Other DB-specific entries with a dbs/<dbtype> prefix,
for example, dbs/syb/server
•
Java DB connect URL configuration properties (to be modified using the SAP configtool)
Additional information related to this hostname conversion process can be found in SAP
Notes 8307, 757692, 1275273, and 1033993.
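A quick way to find profile entries that still reference the old database host is a pattern scan over the profile directory. A minimal sketch with a mock profile file (the file name and values are hypothetical):

```shell
# Create a mock DEFAULT.PFL with DB-related entries (hypothetical values).
PROF=$(mktemp -d)
printf 'SAPDBHOST = oldhost\nj2ee/dbhost = oldhost\ndbs/syb/server = oldhost\n' > "$PROF/DEFAULT.PFL"

# Scan for the DB host parameters mentioned above; -n prints line numbers.
grep -nE 'SAPDBHOST|j2ee/dbhost|dbs/' "$PROF/DEFAULT.PFL"
```

Each reported line is a candidate for adapting to the new virtual hostname.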
6 Clustering SAP HANA System Replication using SGeSAP
packages
6.1 General considerations for building HANA clusters
Only specific hardware and software configurations are certified for HANA by SAP. Many of the
HP Converged System for HANA appliances for scale-up and scale-out can be ordered with
preconfigured HP Serviceguard HA or DTS/DR clustering setups.
These setups are created with the following approach:
•
Two SAP HANA certified hardware setups are required. They will be combined in a HANA
system replication setup with automated failover capabilities, see Figure 24 (page 90).
Additional hardware can be added for dual-purpose scenarios with additional nonproduction
HANA instances on the replication nodes.
•
Scale-out systems require an identical number of nodes for both instances. Both SAP HANA
instances must be configured with the same storage layout, system ID, and instance number.
•
Two Serviceguard NFS clusters can be used for accessing the system areas of each scale-out
instance. Alternatively, HA NFS can be realized with HP File Persona technology.
•
All nodes need to have access to a dedicated replication LAN. The standard HANA network
structure has to be extended by a dedicated Serviceguard quorum LAN.
•
The SAP Netweaver or Business Objects client applications use the HANA data or client access
network to communicate with HANA. They are installed on non-HANA appliance operating
system images and can be clustered separately, optionally using storage-based replication or
XDC setups.
•
Clients can access HANA by using virtual hostnames, which are specified as relocatable
IP addresses on the client access networks via Serviceguard. Each production HANA node
requires at least one relocatable IP on each of the client access networks.
•
For scale-out HANA clusters, only the three relocatable IPs of the master nameserver nodes
must be specified in the client connection string. However, to prevent these entries from becoming
invalid without notice due to reconfigurations, it is more robust to specify all relocatable IPs,
that is, at least one per HANA node.
•
There should be at least three heartbeat networks. These networks must extend across all
HANA nodes of both instances. Typical candidates are the client access networks of HANA,
the administration network, and the replication network.
•
The HANA instances can be configured to use MDC with multiple tenants.
Figure 24 HANA System Replication cluster scenario

[Figure: a two-node cluster. Node 1 runs the primary package (modules hdbprimary, hdbinstance, hdbdualpurpose; relocatable IP; priorities and dependencies) controlling the active primary HANA instance through sapstartsrv, with the data/log volumes of the primary. Node 2 runs the secondary package (module hdbinstance; priorities and dependencies) controlling the replicating secondary HANA instance through sapstartsrv, with the data/log volumes of the secondary, and can additionally host active QA/DEV HANA instance(s). A quorum service arbitrates between the nodes. On failover, the secondary package is halted and a takeover with role-reversal starts the secondary instance as the new primary.]
6.2 Configuring HANA replication
To configure the replication relationship between the two HANA instances, follow these steps:
NOTE:
Configure the replication relationship on one of the HANA nodes.
1.
Back up your primary HANA system using the hdbstudio utility.
2.
Edit /hana/shared/<SID>/global/hdb/custom/config/global.ini on both nodes.
a. In the communication section, set listeninterface to .auto.
b. In the persistence section, set log_mode to normal.
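The two settings from step 2 correspond to the following global.ini sections. This is a fragment showing only the affected entries; other entries in the file remain unchanged:

```
[communication]
listeninterface = .auto

[persistence]
log_mode = normal
```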
3.
Enable the replication of the HANA system configuration. Run the following command on the
primary node (node1) as <sid>adm:
hdbnsutil -sr_enable --name=<alphanumeric_primary_sitename>
checking local nameserver: checking for active nameserver ...
nameserver is running, proceeding ...
configuring ini files ...
successfully enabled system as primary site ...
done
The site names that are used here will be reused in the site aware scale-out cluster configuration.
For more information, see “Deploying HANA scale-out cluster packages” (page 95).
4.
Register the secondary instance with the primary instance. Run the following command on the
secondary node (node 2):
hdbnsutil -sr_register --name=<alphanumeric_secondary_sitename>
--remoteHost=<node_1> --remoteInstance=<nr>
checking for inactive nameserver ...
nameserver node_2:30001 not responding.
collecting information ...
sending registration request to primary site host node_1:30001
updating local ini files ...
done.
5.
Define the cluster awareness library in the SAP instance profiles. Add the following line to
the SAP HANA instance profile
/usr/sap/<SID>/SYS/profile/<SID>_<INST>_<nodename>:
service/halib = /opt/cmcluster/lib64/saphpsghalib.so (for SUSE Linux)
service/halib = /usr/local/cmcluster/lib/saphpsghalib.so (for Red Hat
Linux)
6.
Restart the HANA startup services on both nodes using:
sapcontrol -nr <INSTNR> -function RestartService
•
If the command displays the following message, then verify whether the process sapstartsrv
pf=<instance_profile> is running:
"FAIL: NIECONN_REFUSED (Connection refused), NiRawConnect failed in
plugin_fopen()”
7.
Reserve the HANA SQL port explicitly in the SAP host controller to prevent Serviceguard
from temporarily occupying the HANA SQL port. On all the cluster nodes, add the following line
to the /usr/sap/hostctrl/exe/host_profile file:
ADDITIONAL_RESERVED_PORTS = 3<INSTNR>15
Now execute the following commands on all nodes as root user:
echo "" > /proc/sys/net/ipv4/ip_local_reserved_ports
/usr/sap/hostctrl/exe/saphostexec -restart
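The reserved port follows the SAP convention 3<INSTNR>15 for the HANA SQL port. A small sketch showing the resulting host_profile line for a hypothetical instance number 01:

```shell
# Instance number 01 yields SQL port 30115 (pattern 3<INSTNR>15).
INSTNR=01
echo "ADDITIONAL_RESERVED_PORTS = 3${INSTNR}15"
```

For instance number 01 this prints ADDITIONAL_RESERVED_PORTS = 30115, which is the line to place into the host_profile file.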
6.3 Deploying HANA scale-up cluster packages
To integrate the SAP HANA instances into the cluster, follow these steps:
1.
Run the following command on the primary node to identify the instance:
# sapgetconf sap_global
The output displays a HANA system ID:
sgesap/hdb_global/hdb_system=<SID>
NOTE: The SGeSAP tool sapgetconf scans for SAP configurations on the current node
and displays selected parameters.
Run the following command on the secondary node to identify the instance:
# sapgetconf sap_global
The output displays all HANA system IDs. The second <QA_SID> is used for dual-purpose setups:
sgesap/hdb_global/hdb_system=<SID>
sgesap/hdb_global/hdb_system=<QA_SID>
2.
Create two cluster package configuration files, one for the primary and one for the
secondary instance. Two methods can be used to achieve this:
a. Deployment:
To create a standard scale-up HANA clustering configuration, run this command:
# deploysappkgs multi <SID>
With Serviceguard Enterprise, using the Smart Quorum feature is recommended.
Packages that maintain the quorum preference with the primary package are created
using:
# deploysappkgs –o smart_quorum multi <SID>
NOTE: For a working Smart Quorum, you must enable the feature in the cluster
configuration after applying the package configurations. The correct procedure is described
further below.
The deploysappkgs command creates Serviceguard configuration files and prefills the
configuration files with discovered values for the primary and secondary HANA node.
The configuration file names follow SGeSAP naming conventions; however, you can name
them as per your requirement. For additional information about the deploysappkgs(1)
command, see the corresponding man page.
Two Serviceguard package configuration files are created:
# ls $SGCONF/sdpkg-<SID>/hdbp<SID>.conf
# ls $SGCONF/sdpkg-<SID>/hdbs<SID>.conf
Verify the ip_address settings. If missing, specify a virtual ip address on the client
access network in the primary package configuration file.
Additional parameters for optional cluster features must also be configured now. You can
find detailed explanations for various optional features in further subsections starting with
“Self-tuning timeouts” (page 99).
b.
Manual creation:
Use the Serviceguard package creation commands to create HANA packages if you are
unable to use the easy deployment feature.
For creating a primary package with all the required modules use the following commands:
cmmakepkg –m sg/failover\
–m sg/package_ip \
–m sg/dependency \
–m sg/priority \
–m sgesap/hdbprimary \
–m sgesap/hdbinstance \
> hdbp<SID>.conf
For creating a secondary package with all the required modules use the following
commands:
cmmakepkg –m sg/failover \
–m sg/dependency \
–m sg/priority \
–m sgesap/hdbinstance \
> hdbs<SID>.conf
Edit the mandatory content of the files according to Table 26. If you plan to use Smart Quorum,
add the three generic_resource parameters exactly as given in Table 26 for both packages.
Other optional features can also be configured here. Detailed explanations of such parameters
can be found in following subsections.
Table 26 Settings of standard package configuration parameters for HANA scale-up

Parameter name | Value | Perform action
package_name | User Defined | User can change the package name.
package_description | User Defined | User can change the package description.
module_name | User Defined | User can add additional Serviceguard modules.
node_name | User Defined | User can change the node name if a host name is changed.
ip_address | User Defined | deploysappkgs(1) prefills this value in the primary package only if an alias hdb<SID> exists for the address; otherwise, the setting must be added manually. SAP database client applications must be configured on these virtual IPs.
ip_subnet | User Defined | Virtual IPs are usually specified on the data/BO client access network for the primary package only.
dependency_name | hdb[s|p]<SID>_re | Allows modification of the dependency name.
dependency_condition | hdb[s|p]<SID> = down | Defines a mutual exclusion dependency. In the configuration of each package, specify that the other package must be down. Primary and secondary must be on different nodes.
dependency_location | same_node | Primary package can replace the secondary package.
sg/priority/priority | example primary priority: 990; example secondary priority: 1000 | The secondary package must maintain a higher value of this parameter than the primary package.
service_name | hdb[p|s]_mon | User can change the service name.
service_cmd | ${SGCONF}/monitors/sgesap/saphdbsys.mon | Removing these entries disables the cluster monitoring of the HANA instance software and its responsiveness.
service_restart | None | —
service_fail_fast_enabled | no | Turn it on to reduce the duration of a failover caused by an SAP software issue.
service_halt_timeout | 10 | —
generic_resource_name | sitecontroller_genres | Specify this parameter in both primary and secondary packages if a Smart Quorum Server prefers the primary package owner node.
generic_resource_evaluation_type | during_package_start | Specify this parameter in both primary and secondary packages if a Smart Quorum Server prefers the primary package owner node.
generic_resource_up_criteria | >1 | Specify this parameter in both primary and secondary packages if a Smart Quorum Server prefers the primary package owner node.
3.
Apply the configuration for both packages at the same time:
cmapplyconf –P $SGCONF/sdpkg-<SID>/hdbs<SID>.conf –P
$SGCONF/sdpkg-<SID>/hdbp<SID>.conf
4.
If Serviceguard Enterprise is used, a recommended option is to activate the smart quorum
server. A requirement for smart quorum is to ensure that the cluster is configured with site
awareness. If a cluster configuration file is not available, you can create one from the currently
active cluster configuration using this command:
cmgetconf –c <cluster_name> >/tmp/cluster.config
In the generated configuration file, specify two site names using the SITE_NAME parameter,
for example:
SITE_NAME <siteA>
SITE_NAME <siteB>
The site names must refer to the alphanumeric_sitenames used while setting up the HANA
replication relationships as described in steps 3 and 4 of “Configuring HANA
replication” (page 90) section.
Each of the two cluster nodes will now be associated with one of the two sites. In the
configuration file, assign a site to all cluster nodes by specifying the SITE parameter under
each NODE_NAME parameter.
For example:
NODE_NAME host1
SITE <siteA>
NETWORK_INTERFACE bond2
HEARTBEAT_IP x.x.x.x
… # additional NETWORK_INTERFACE entries
NODE_NAME host2
SITE <siteB>
NETWORK_INTERFACE bond2
HEARTBEAT_IP x.x.x.x
… # additional NETWORK_INTERFACE entries
Ensure that a complete set of HANA interfaces has been specified with the SUBNET parameter.
Do not turn on the IP_MONITOR parameter for the replication network or any other network
that is not mandatory for HANA usage, for example, dedicated backup networks. However,
IP monitoring is often used for HANA client access networks.
In the quorum section of the file, ensure the following settings:
QS_SMART_QUORUM ON
QS_ARBITRATION_WAIT 3000000
The QS_ARBITRATION_WAIT parameter setting causes a three-second delay during cluster
reformation in which the primary HANA system can override a quorum request of a secondary
HANA system. If the quorum network is not exclusively used for quorum purposes, a higher
value can be chosen.
Use cmapplyconf(1) –C to make any changes to the cluster configuration file effective.
6.4 Deploying HANA scale-out cluster packages
To integrate a scale-out SAP HANA instance into a cluster follow these steps:
1.
Ensure that the cluster is site aware. If a cluster configuration file is not available, you can
create one using this command:
cmgetconf –c <cluster_name> >/tmp/cluster.config
In the generated configuration file, specify two site names using SITE_NAME parameter, for
example:
SITE_NAME <SID>_<siteA>
SITE_NAME <SID>_<siteB>
The site names must refer to the names used while setting up the HANA replication relationships
as described in steps 3 and 4 of “Configuring HANA replication” (page 90) section.
There are no specific naming conventions enforced in the SGeSAP HANA modules. All the
given examples use recommendations, but they do not have to be followed if there are
appropriate and different naming conventions for a given installation.
Each site consists of the hosts that run the HANA nodes of exactly one HANA scale-out instance.
Typically, the two sites are physically apart (present at different data centers). A HANA node
and its HANA replication node must be defined in a way that they belong to different sites.
In the configuration file, assign a site to all cluster nodes by specifying the SITE parameter
under each NODE_NAME parameter.
Also, add heartbeat to network interfaces that are on networks jointly accessible between all
HANA nodes of the two systems. It is recommended to define at least three heartbeat networks.
Typical candidates are HANA client access networks (data and BO client), HANA replication
network, and common administration networks.
NOTE:
Quorum networks must not be used for heartbeat exchange.
Example:
NODE_NAME <host1_siteA>
SITE <siteA>
NETWORK_INTERFACE bond2
HEARTBEAT_IP x.x.x.x
… # additional NETWORK_INTERFACE entries
NODE_NAME <host1_siteB>
SITE <siteB>
NETWORK_INTERFACE bond2
HEARTBEAT_IP x.x.x.x
… # additional NETWORK_INTERFACE entries
… # up to 14 additional NODE_NAME pairs
Finally, ensure that the complete set of HANA interfaces is specified with the SUBNET parameter.
Do not turn on the IP_MONITOR parameter for the replication network or any other network
that is not mandatory for HANA usage, for example, dedicated backup networks. However,
IP monitoring is often used for HANA client access networks. Use cmapplyconf(1) –C to
make any changes to the file effective.
2.
Run the following command on primary node to identify the instance:
# sapgetconf sap_global
The output displays a HANA system ID:
sgesap/hdb_global/hdb_system=<SID>
NOTE: The SGeSAP tool sapgetconf scans for SAP configurations on the current node
and displays selected parameters.
Run the following command on the secondary node to identify the instance:
# sapgetconf sap_global
The output displays all HANA system IDs:
sgesap/hdb_global/hdb_system=<SID>
sgesap/hdb_global/hdb_system=<QA_SID>
3.
Create and apply five cluster package configuration files: two for the primary instance, two
for the secondary instance (one per site), and one package for the cluster site controller
functionality. There are two methods to achieve this: deployment or manual creation.
NOTE: The “s” in a package name represents the HANA secondary. The “<siteB>” in a
package name represents the site. In some example names, one or the other is omitted on
purpose, because the name is either site independent or independent of the HANA instance
replication role.
a.
Deployment of the cluster package configuration files. For more information about the
deploysappkgs(1) command, see the corresponding manpage.
To create a scale-out HANA clustering configuration, run this command:
# deploysappkgs multi <SID>
This command creates five Serviceguard configuration files and prefills the configuration
files with discovered values for the primary and secondary HANA nodes. The configuration
file names follow SGeSAP naming conventions; however, you can name them as per your
requirement. Standard naming is:
$SGCONF/sdpkg-<SID>/hdbp<SID>_<siteA>.conf
$SGCONF/sdpkg-<SID>/hdbp<SID>_<siteB>.conf
$SGCONF/sdpkg-<SID>/hdbs<SID>_<siteA>.conf
$SGCONF/sdpkg-<SID>/hdbs<SID>_<siteB>.conf
$SGCONF/sdpkg-<SID>/hdbc<SID>.conf
deploysappkgs(1) generates as many entries for sgesap/hdbdistinstance/
hdb_ip_address in the primary packages as there are nodes per site. The alias
hdb<SID> acts as the base address. If the alias is not defined, the parameters are not
generated and virtual IP addresses for client access must be specified manually.
The deployment command creates package configuration files with the recommended
minimum settings. Optional HANA clustering features as described in the subsections
starting with “Self-tuning timeouts” (page 99) can be added at this point manually.
As a last step, apply all five package configurations using cmapplyconf(1).
b.
Manual creation of scale-out HANA packages. This step is not required if the deployment
method (a) was used.
The following steps describe the mandatory configurations. Optional parameter settings
can be made at the same time. They are described in “Self-tuning timeouts” (page 99).
1) Create a HANA scale-out secondary system package for site B. However, a distributed
instance multi-node package configuration file needs to be generated first:
cmmakepkg –m sgesap/hdbdistinstance > hdbs<SID>_<siteB>.conf
The following parameters must be set manually in the configuration file generated:
package_name hdbs<SID>_<siteB>
node_name host1_<siteB>
… # add one node_name entry per HANA node in site B
auto_run yes
script_log_file /opt/cmcluster/run/log/hdb<SID>.log # will be used by 2 pkgs
sgesap/hdb_global/hdb_system <SID> # the HANA system ID
sgesap/hdbinstance_global/hdb_instance HDB01 # the HANA instance name
service_name hdbs<SID>_<siteB>_svc
service_cmd $SGCONF/monitors/sgesap/saphdbsys.mon
service_restart none
service_fail_fast_enabled no
service_halt_timeout 0
generic_resource_name sitecontroller_genres
generic_resource_evaluation_type before_package_start
generic_resource_up_criteria ==3
You can create a package based on these settings using the cmapplyconf(1)
–p command.
2)
Create a HANA scale-out secondary system package for site A. The package
configuration created in step 1 will now be modified to create a counterpart
secondary package for the first datacenter (site A) with package configuration file
hdbs<SID>_<siteA>.conf, but before proceeding with the operation you must
first retain a copy of the original file for later use:
cp hdbs<SID>_<siteB>.conf hdbs<SID>_<siteA>.conf
The following parameters must be adapted in hdbs<SID>_<siteA>.conf:
package_name hdbs<SID>_<siteA>
node_name host1_<siteA>
… # similarly modify all node_name entries to state the HANA nodes in site <siteA>
…
service_name hdbs<SID>_<siteA>_svc
…
NOTE:
The other parameters given in step 1 must be retained.
You can create a package based on these settings using the cmapplyconf(1)
–p command.
4.
Create a HANA scale-out primary system package for site A. The secondary package
configuration file generated in step 2 can be used for the primary package creation:
cmmakepkg –m sgesap/hdbprimary –i hdbs<SID>_<siteA>.conf >hdbp<SID>_<siteA>.conf
The package name, service name, and resource_up_criteria must be adapted in
hdbp<SID>_<siteA>.conf. The other parameters can be retained:
package_name hdbp<SID>_<siteA>
…
service_name hdbp<SID>_<siteA>_svc
…
generic_resource_name sitecontroller_genres
generic_resource_evaluation_type before_package_start
generic_resource_up_criteria >3
The relocatable IP addresses for the client access networks (often called the data and BO client
or public network) must be added as follows:
•
For each client network (usually only one network is used), an hdb_ip_subnet entry
must be created.
•
After this entry, specify exactly as many IP addresses as the instance has nodes at the
site.
•
Serviceguard will assign these IP addresses to the instances as one IP address per instance
per subnet. The first three IP addresses of each subnet are assigned to the active and two
standby master nameserver nodes if possible. If the network and node topology does not
get reconfigured by change administration, Serviceguard tries to assign the same IP
addresses to each node always. If this is not possible, alternative approaches are
attempted. The operation might be successful, because with the availability of HANA
cold standby instances, there may be more IP addresses available than are required for
production operation. If the relocatable IP address of a node is already configured on
the node during package start, it will remain configured to avoid interruptions. All
relocatable IP addresses that are defined in the package parameters are cleared on a
node if a package halt operation is executed.
sgesap/hdbdistinstance/hdb_ip_subnet x.x.x.x
sgesap/hdbdistinstance/hdb_ip_address x.x.x.x
… # add a hdb_ip_address entry for each node of the production instance
sgesap/hdbdistinstance/hdb_ip_subnet x.x.x.x
… # additional subnets are possible
•
For the first subnet, the SAP HANA public address relocation configuration settings (as
specified in the SAP HANA global.ini file) will be updated automatically with the
relocatable IP assignments done by Serviceguard.
You can create the package based on these settings using the cmapplyconf(1) –p
command.
5.
Create a HANA scale-out primary system package for site B, similar to step 4. The
secondary package configuration file of step 1 can be used for the primary package creation:
cmmakepkg –m sgesap/hdbprimary –i hdbs<SID>_<siteB>.conf
>hdbp<SID>_<siteB>.conf
The package name, service name, and resource_up_criteria must be adapted in
hdbp<SID>_<siteB>.conf. The other parameters can be retained:
package_name hdbp<SID>_<siteB>
…
service_name hdbp<SID>_<siteB>_svc
…
generic_resource_name sitecontroller_genres
generic_resource_evaluation_type before_package_start
generic_resource_up_criteria >3
You can create the package based on these settings using the cmapplyconf(1)
–p command.
6.
Create a HANA site controller package:
cmmakepkg –m sgesap/hdbsico >hdbc<SID>.conf
The primary packages are critical packages of the site controller. The secondary packages
are remote-managed packages of the site controller. Any hdbsico package automatically
has the site_preferred failover policy and two service entries for saphdbsico.mon and
sc.mon preconfigured. The parameters that need to be manually configured include a
package name, the HANA system and instance name (identical to those given in the other
packages), and two site entries, each followed by the two packages of that site (its primary
and secondary package).
package_name hdbc<SID>
sgesap/hdbsico/sico_hdb_system <SID>
sgesap/hdbsico/sico_hdb_instance HDB01
sc_site <SID>_<siteA>
critical_package hdbp<SID>_<siteA>
remote_managed_package hdbs<SID>_<siteA>
sc_site <SID>_<siteB>
critical_package hdbp<SID>_<siteB>
remote_managed_package hdbs<SID>_<siteB>
If all the previously described package configurations were applied successfully, then you
can also create this site controller package using the cmapplyconf(1) –p command.
7.
In the quorum section of the cluster configuration file, ensure the following settings are added
to the standard quorum configuration parameters:
QS_SMART_QUORUM ON
QS_ARBITRATION_WAIT 3000000
This causes a three-second delay during cluster reformation during which the primary HANA
system can override a quorum request of a secondary HANA system. If the quorum network
is not exclusively used for quorum purposes, a higher value can be selected.
6.5 Self-tuning timeouts
Self-tuning TimeOuts (STO) can be activated by setting the hdb_sto group of parameters in the
package configuration files. The package then stores internal statistics about the duration of
HANA startup, shutdown, and takeover operations on each node. The maximum duration of a
successful execution of each operation on the given system is used to internally adjust cluster
timeouts. These durations can vary significantly between different hardware configurations,
HANA software versions, business application use cases, and system workloads. It is
recommended to specify these parameters if the HANA package logs indicate that HANA
operations time out or fail, while the same operations succeed when triggered manually by an
SAP administrator. It is also recommended to specify these parameters for high-end hardware
configurations with large databases.
6.6 Enforced parameter settings
Enforced Parameter Settings (EPS) can be activated by setting hdb_eps parameters. Depending
on the role of a HANA instance, the EPS parameters can be used to adjust SAP HANA
global.ini or any tenant-service ini-file parameter of the primary or the secondary HANA
instance.
An example use case is to configure the EPS feature to adapt HANA memory allocation behavior
depending on whether an instance is primary or secondary at a given point in time.
Entries in the primary package configuration:
hdb_eps_location *
hdb_eps_param "[system_replication]preload_column_tables=true"
hdb_eps_param "[memorymanager]global_allocation_limit=0"
These settings enforce that a production primary uses maximum system memory resources for
in-memory database operations.
Corresponding entries in the secondary package configuration:
hdb_eps_location *
hdb_eps_param "[system_replication]preload_column_tables=false"
hdb_eps_param "[memorymanager]global_allocation_limit=65536"
These settings minimize the memory usage of the secondary instance to a lower limit. Reducing
the memory usage allows running additional nonproduction HANA instances on the same
secondary server used for replication. When the primary package fails over to the secondary
node, these values are adjusted to those given in the primary package configuration to optimize
the system again for production performance.
A subsequent role-reversal operation turns the instance back into secondary due to secondary
package start on the node. The secondary instance now switches back to limited memory usage
as per the secondary package configuration.
Normally, only one of the two HANA cluster nodes has dual-purpose configured hardware. In this
case you must explicitly restrict the enforcement of the limiting parameter settings to that single
node:
hdb_eps_location node_name
hdb_eps_param "[system_replication]preload_column_tables=false"
hdb_eps_param "[memorymanager]global_allocation_limit=65536"
For examples on how to use these parameters to handle memory for dual-purpose use cases, see
“Dual-purpose SAP HANA configuration” (page 101) .
Table 27 (page 103) provides the HANA specific SGeSAP cluster package parameter settings. The
default value column gives the initial package configuration file settings generated with
cmmakepkg(1). The prefilled values refer to package configuration file settings generated with
deploysappkgs(1) for scale-up instances.
In a HANA multitenant environment, EPS can also be used to set per-tenant database
configuration parameters. The Serviceguard package parameters are hdb_eps_tenant_file
and hdb_eps_tenant_param. The first parameter specifies the tenant ini-file with a path
relative to the global ini-directory; the second specifies the configuration parameter and value
(identical to hdb_eps_param). The tenant file and parameter have to be specified pairwise. A
configuration parameter is removed from the HANA configuration file if the assignment after the
“=” is omitted for a hdb_eps_tenant_param.
If the maximum memory allocation for the indexservers of a two-tenant replication instance must
be restricted individually, an example set of SGeSAP parameters for the secondary package looks
like this:
hdb_eps_location <dualpurpose node_name>
hdb_eps_param "[system_replication]preload_column_tables=false"
hdb_eps_tenant_file "DB_<tenantA>/indexserver.ini"
hdb_eps_tenant_param "[memorymanager]allocationlimit=65536"
hdb_eps_tenant_file "DB_<tenantB>/indexserver.ini"
hdb_eps_tenant_param "[memorymanager]allocationlimit=32768"
The same instance can be configured to use the maximum available physical RAM if it becomes
a primary. The way to do this is to reset the same parameters to their defaults with the following
set of SGeSAP parameters for the primary package:
hdb_eps_location <dualpurpose node_name>
hdb_eps_param "[system_replication]preload_column_tables"
hdb_eps_tenant_file "DB_<tenantA>/indexserver.ini"
hdb_eps_tenant_param "[memorymanager]allocationlimit"
hdb_eps_tenant_file "DB_<tenantB>/indexserver.ini"
hdb_eps_tenant_param "[memorymanager]allocationlimit"
6.7 Consistency ensuring synchronization tolerances
An SGeSAP package configuration for SAP HANA synchronous in-memory system replication
requires setting up a HANA log shipping timeout. If the replication between primary site and
secondary site does not succeed within this timeout interval (for example, in the case of a network
issue), the primary system changes to asynchronous mode. Transactions are now committed
only in the primary HANA database without being replicated to the secondary site. The systems
will try to switch back to synchronous mode as soon as possible.
If the replication is running asynchronously, SGeSAP can prevent a failover of the primary package
to the secondary node. A failover in this state may cause inconsistencies in the application,
because committed transactions have not been replicated to the secondary site.
Similarly, HANA systems that are configured to always operate with asynchronous replication
might drift apart beyond an acceptable objective if the replication mechanism is temporarily
impaired. The timeout parameter specifies how long the replication mechanism can fail to operate
before failover of the primary package, and thus the automated failover mechanism, becomes
temporarily disabled. In this situation, a software failure of the primary package can cause a
node-local restart attempt instead of a failover to the secondary. If this parameter is not set, a
failover operation to the secondary system might cause the loss of committed transactions that
happen after the start of the hdb_sync_time_tolerance interval
halt.
SAP's HANA logshipping timeout must be larger than the sum of the hdb_sync_time_tolerance
and the SGeSAP HANA instance monitoring interval (which defaults to 30 seconds). The
hdb_sync_time_tolerance must itself be larger than the SGeSAP HANA instance monitoring
interval. A recommended value is 40, assuming a HANA logshipping timeout of 75 seconds and
the default monitoring interval of 30 seconds. Setting the value to zero is not recommended,
because it makes the setup sensitive to network flickering and other delays in the replication I/O
stack. If the timeout parameter is not set at all, package failover never becomes disabled due to
a synchronization timeout.
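The sizing rule above can be sketched as a quick consistency check. The following is a hypothetical helper script, not part of SGeSAP; the values are the recommended example from this section:

```shell
# Verify the rule: logshipping timeout > hdb_sync_time_tolerance + monitoring
# interval, and hdb_sync_time_tolerance > monitoring interval.
logshipping_timeout=75        # HANA logshipping timeout in seconds (SAP side)
hdb_sync_time_tolerance=40    # SGeSAP package parameter
monitor_interval=30           # default SGeSAP HANA instance monitoring interval

if [ "$logshipping_timeout" -gt $((hdb_sync_time_tolerance + monitor_interval)) ] &&
   [ "$hdb_sync_time_tolerance" -gt "$monitor_interval" ]; then
  echo "tolerance settings are consistent"
else
  echo "WARNING: increase the logshipping timeout or lower hdb_sync_time_tolerance"
fi
```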
NOTE: For synchronous in-memory replication, usage of hdb_sync_logpos_tolerance is
not recommended. Proper hdb_sync_time_tolerance settings are sufficient to prevent database
inconsistencies after failover in these cases.
For replication connections that are configured to always run asynchronously, both
hdb_sync_time_tolerance and hdb_sync_logpos_tolerance can be useful. In such
cluster setups, it is accepted that a certain amount of committed transactions gets lost during
failover; however, the loss should not extend significantly beyond defined boundaries.
hdb_sync_logpos_tolerance is the maximum number of entries in the HANA redo logs that
are tolerated while the system replication mechanism is inactive before failover becomes disabled.
This parameter allows you to specify the logfile volume that is tolerable to lose due to a failover
operation. If a higher volume of changes to the database happens without being replicated, the
automated takeover mechanism becomes temporarily disabled. In this situation, a software failure
of the primary can cause a node-local restart attempt instead of a secondary failover. If this
parameter is not set to zero, a failover operation of the secondary system might cause the loss of
committed transactions that happen within the last hdb_sync_logpos_tolerance logfile
entries before the primary failure detection and instance halt. If this parameter is not set at all,
package failover never becomes disabled due to a high volume of improperly replicated data.
You can specify both the hdb_sync_time_tolerance and hdb_sync_logpos_tolerance parameters.
The package failover mechanism becomes disabled if at least one of the two thresholds is
exceeded. It becomes re-enabled when the replication is working again and neither threshold is
exceeded any longer.
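The combination of the two thresholds can be sketched as follows. This is an illustrative shell function with hypothetical threshold values, not SGeSAP code: failover is disabled when either threshold is exceeded, and enabled otherwise.

```shell
# Sketch of how the two tolerances combine, per the text above.
failover_disabled() {
  # $1 = seconds the replication has been failing
  # $2 = number of unreplicated redo log entries
  time_tol=40      # hdb_sync_time_tolerance (hypothetical value)
  logpos_tol=1000  # hdb_sync_logpos_tolerance (hypothetical value)
  [ "$1" -gt "$time_tol" ] || [ "$2" -gt "$logpos_tol" ]
}

failover_disabled 10 1500 && echo disabled || echo enabled  # logpos threshold exceeded
failover_disabled 10 500  && echo disabled || echo enabled  # neither threshold exceeded
```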
6.8 Dual-purpose SAP HANA configuration
Assumptions
• The initial primary and secondary packages are already created, see “Deploying HANA
scale-up cluster packages” (page 91).
• The configuration will contain three HANA instances: a primary production instance, a
secondary production instance, and a nonproduction instance configured on the node running
the secondary production instance.
NOTE: The given examples assume a scale-up HANA configuration. Dual-purposing works
in the same fashion for scale-out, but then requires the package configurations in both sites
to be adapted according to the given steps.
To configure dual-purpose SAP HANA, follow these steps:
For a dual-purpose configuration, add the hdbdualpurpose module to the primary package
configuration file:
# mv hdbp<SID>.conf hdbp<SID>.conf_backup
# cmmakepkg -m sgesap/hdbdualpurpose \
-i hdbp<SID>.conf_backup \
> hdbp<SID>.conf
1. Specify the nonproduction instances in the hdbp<SID>.conf configuration file by editing
the sgesap/hdbdualpurpose parameters.
sgesap/hdbdualpurpose/hdb_dualpurpose_system <system>
sgesap/hdbdualpurpose/hdb_dualpurpose_instance <instance>
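For illustration, a hypothetical nonproduction system QAS with instance HDB10 would be specified as follows (the SID and instance name are placeholder examples, not values from this guide):

```
sgesap/hdbdualpurpose/hdb_dualpurpose_system QAS
sgesap/hdbdualpurpose/hdb_dualpurpose_instance HDB10
```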
2. Update the EPS parameters in the primary package configuration.
These parameters will provide adequate resources for running a primary instance.
hdb_eps_location *
hdb_eps_param "[system_replication]preload_column_tables=true"
hdb_eps_param "[memorymanager]global_allocation_limit=0"
3. Update the EPS parameters in the secondary package hdbs<SID>.conf configuration file.
These parameters reduce the resource consumption of the secondary instance so that the
nonproduction instance and the secondary production instance can run in parallel.
It is recommended to set the global allocation limit for the secondary instance to the larger
of these two values:
a. 64 GB
b. (row-store size in GB) + 20 GB
The row-store size can be determined with the following SQL statement:
select host,round(sum(page_size*USED_BLOCK_COUNT)/1024/1024/1024,2)
as "RowStore Size GB" from m_data_volume_page_statistics where
page_sizeclass = '16k-RowStore' group by host;
The statement returns the size in GB. Because global_allocation_limit is specified in
MB, multiply the result by 1024.
Example: 64 GB = 64 * 1024 MB = 65536
hdb_eps_location <secondary_node>
hdb_eps_param "[system_replication]preload_column_tables=false"
hdb_eps_param "[memorymanager]global_allocation_limit=65536"
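The calculation in this step can be sketched in shell. The row-store size of 52 GB is a hypothetical example value standing in for the result of the SQL statement:

```shell
# Compute the global_allocation_limit (in MB) for the secondary instance as the
# larger of 64 GB and (row-store size + 20 GB), per the recommendation above.
rowstore_gb=52                        # hypothetical result of the SQL statement
candidate_gb=$((rowstore_gb + 20))    # row-store size + 20 GB
if [ "$candidate_gb" -gt 64 ]; then limit_gb=$candidate_gb; else limit_gb=64; fi
limit_mb=$((limit_gb * 1024))         # the parameter is specified in MB
echo "global_allocation_limit=$limit_mb"
```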
4. The resource consumption of the nonproduction instance also needs to be limited so that it
can run in parallel with the secondary production instance.
Update the parameter for the non-production instance in file global.ini. This file is located
at /usr/sap/<SID>/SYS/global/hdb/custom/config/global.ini.
Perform the same calculation as in step 3 to set the global_allocation_limit for the
nonproduction instance.
[memorymanager]
global_allocation_limit = 65536
5. Reapply the primary and secondary package configurations using these commands:
# cmcheckconf -P hdbp<SID>.conf
# cmapplyconf -P hdbp<SID>.conf
# cmcheckconf -P hdbs<SID>.conf
# cmapplyconf -P hdbs<SID>.conf
6.9 Graceful shutdown of HANA instances
SAP NetWeaver ABAP instances can be notified that the HANA database is about to shut down.
This may happen as part of planned maintenance tasks, such as HANA instance patching, or as
part of manually triggered failover operations to free up hardware from production load.
If the SAP instance profile parameter quiesce_check_enable is set, the NetWeaver processes
periodically check for a flag file and temporarily disconnect from the database if it exists (for
details, see SAP OSS note 1913302). This can keep the downtime of the database transparent
to the business transactions that are executing. In most cases the processes can reconnect later,
provided the downtime did not exceed the timeout value. The SAP profile parameter has to be
set manually by the NetWeaver administrator.
By setting the hdb_quiesce parameters for one or more ABAP systems, SGeSAP is configured
to notify the instances of these systems of an upcoming HANA database shutdown as part of a
package halt operation. The database stop is then executed only after all SAP instances have
completely disconnected themselves from the database or a timeout threshold is exceeded. A
subsequent restart of the package also reactivates the suspended ABAP instances.
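As an illustration, a primary package could enable graceful shutdown for one ABAP system as sketched below. The SID C11 and the timeout value are hypothetical examples, and the module prefix follows the sgesap/hdbprimary naming used elsewhere in this guide:

```
sgesap/hdbprimary/hdb_quiesce_system C11
sgesap/hdbprimary/hdb_quiesce_timeout 60
```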
Table 27 Overview of HANA specific SGeSAP package parameter settings

sgesap/hdb_global/hdb_system
  Description: Specifies a unique SAP SID (SAP System Identifier) for the HANA database.
  Default value: not set
  Prefilled value: SID provided to deploysappkgs

sgesap/hdbinstance/hdb_instance
  Description: Specifies a SAP HANA database instance for which the package provides
  clustering services.
  Default value: HDB00
  Prefilled value: Instance name and number of the locally installed HANA system configured
  as primary or secondary.

sgesap/hdbdualpurpose/hdb_dualpurpose_system
  Description: Defines the unique SAP System Identifier for a nonproduction HANA database
  system that can run on the same hosts as a secondary system of a mission-critical
  environment.
  NOTE: Specifying a system in the file does not implement any HA/DTS capability for it.
  Default value: not set
  Prefilled value: not set

sgesap/hdbdualpurpose/hdb_dualpurpose_instance
  Description: Defines a unique SAP Instance name for a nonproduction HANA database system
  that can run on the same hosts as a secondary system of a mission-critical environment.
  The parameter is associated with the hdb_dualpurpose_system value.
  NOTE: Specifying an instance in the file does not implement any HA/DTS capability. It is
  possible to specify several dualpurpose systems with one or more instances each.
  Default value: not set
  Prefilled value: not set

sgesap/hdb_global/hdb_retry_count
  Description: Specifies the number of attempts for HANA database operations and pollings.
  This parameter helps to increase the SAP operations timeout. Set higher values for large
  in-memory instances.
  Default value: 25
  Prefilled value: 25

sgesap/hdb_global/hdb_sto_sample_size
  Description: Specifies the number of start, stop, and takeover operations accrued
  previously for HANA on each node. The values obtained are considered for predicting the
  duration of these operations in the future, which is helpful in adjusting operation
  timeout settings.
  Default value: 20
  Prefilled value: not set

sgesap/hdb_global/hdb_sto_safety_factor
  Description: Specifies a factor that gets multiplied to the predicted duration of a HANA
  operation as a safety distance, to take exceptional variances into account.
  Default value: 5
  Prefilled value: not set

sgesap/hdb_global/hdb_sto_minimum_threshold
  Description: Defines a lower limit in seconds below which the duration of a HANA operation
  reported as successful is considered potentially not plausible. Such values are excluded
  from the statistics because they might represent undetected failures and cause the
  prediction results to be too low.
  Default value: 10
  Prefilled value: not set

sgesap/hdb_global/hdb_eps_location
  Description: Defines on which single server or multiple servers the hdb_eps_param
  parameter setting gets adjusted in the SAP global.ini file as part of the instance start.
  For scale-up HANA, allowed values are the node name of the primary, the node name of the
  secondary, or (*) for both primary and secondary node names. In scale-out HANA, always
  specify (*).
  Default value: not set
  Prefilled value: not set

sgesap/hdb_global/hdb_eps_param
  Description: Defines section, parameter, and value of a SAP HANA global.ini setting that
  needs to be adjusted as part of a package startup operation. This parameter needs to be
  preceded by a hdb_eps_location setting. Several hdb_eps_params can be set for a given
  hdb_eps_location.
  Default value: not set
  Prefilled value: not set

sgesap/hdb_global/hdb_eps_tenant_file
  Description: Specifies the configuration file where the parameter must be applied. This is
  a relative file specification starting from <SAP_GLOBAL>/hdb/custom/config (the directory
  where global.ini resides) that consists of the tenant DB subdirectory and the
  configuration file.
  Default value: not set
  Prefilled value: not set

sgesap/hdb_global/hdb_eps_tenant_param
  Description: Identical to hdb_eps_param, but applied to the configuration file specified
  by the corresponding hdb_eps_tenant_file.
  Default value: not set
  Prefilled value: not set

sgesap/hdbprimary/hdb_sync_time_tolerance
  Description: Specifies how long the replication mechanism can fail to operate before the
  primary package, and so the automated takeover mechanism, becomes temporarily disabled.
  If this parameter is not set to 0, a takeover operation of the secondary system might
  cause the loss of committed transactions that happened after the start of the
  hdb_sync_time_tolerance interval and before the primary failure detection and instance
  halt. If this parameter is not set at all, package failover never becomes disabled due to
  a synchronization timeout.
  Default value: not set
  Prefilled value: 40 in case of synchronous in-memory replication; otherwise not set

sgesap/hdbprimary/hdb_sync_logpos_tolerance
  Description: Specifies the logfile volume that is tolerable to lose due to a takeover
  operation. If this parameter is not set to 0, a takeover operation of the secondary system
  might cause the loss of committed transactions that happened within the
  hdb_sync_logpos_tolerance logfile entries before the primary failure detection and
  instance halt. If this parameter is not set at all, package failover never becomes
  disabled due to a high volume of not properly replicated data.
  Default value: not set
  Prefilled value: not set

sgesap/hdb_global/hdb_userkey
  Description: Optional user key for the monitoring user to connect to the DB to check the
  state of database connections during shutdown.
  Default value: not set
  Prefilled value: not set

sgesap/hdbprimary/hdb_quiesce_system
  Description: SID of a NetWeaver installation to quiesce.
  Default value: not set
  Prefilled value: not set

sgesap/hdbprimary/hdb_quiesce_remote_ip_address
  Description: IP address of the NetWeaver instances, if they are not part of the HANA
  cluster. Passwordless ssh access is required for these systems.
  Default value: not set
  Prefilled value: not set

sgesap/hdbprimary/hdb_quiesce_timeout
  Description: Represents the time after which SGeSAP continues to shut down the HANA DB
  even if there are still active connections. If hdb_userkey is not set, SGeSAP will
  continue to shut down after this timeout period.
  Default value: 60
  Prefilled value: not set

sgesap/hdbprimary/hdb_quiesce_dbadmin
  Description: In a multitenant configuration, it is possible to configure each tenant DB
  with an individual database administrator.
  Default value: not set
  Prefilled value: not set

sgesap/hdbprimary/hdb_quiesce_userkey
  Description: In a multitenant configuration, it is possible to configure each tenant DB
  with an individual user key.
  Default value: not set
  Prefilled value: not set