Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for HP-UX
E48295-04
November 2016
Copyright © 2007, 2016, Oracle and/or its affiliates. All rights reserved.
Primary Author: Aparna Kamath
Contributing Authors: Douglas Williams, Mark Bauer, Jonathan Creighton, Barb Lundhild, Sundar Matpadi,
Markus Michalewicz, Soma Prasad, Janet Stern
Contributors: Diego Iglesias, Taisuke Gishi, Aneesh Khandelwal, Peng Miao, Ken Natsume, Ayaka Saeki,
Jarod Wang, Jessica Ye, Min Yu
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it
on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users
are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and
agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and
adaptation of the programs, including any operating system, integrated software, any programs installed on
the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to
the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management
applications. It is not developed or intended for use in any inherently dangerous applications, including
applications that may create a risk of personal injury. If you use this software or hardware in dangerous
applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other
measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages
caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks
are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD,
Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced
Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content,
products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and
expressly disclaim all warranties of any kind with respect to third-party content, products, and services
unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its
affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of
third-party content, products, or services, except as set forth in an applicable agreement between you and
Oracle.
Contents

Preface
    Audience
    Documentation Accessibility
    Related Documents
    Conventions

What's New in Oracle Grid Infrastructure Installation and Configuration?
    New Features for Release 2 (11.2.0.4)
    Desupported Options for Release 2 (11.2.0.4)
    New Features for Release 2 (11.2.0.3)
    New Features for Release 2 (11.2.0.2)
    New Features for Release 2 (11.2)
    New Features for Release 1 (11.1)

1 Typical Installation for Oracle Grid Infrastructure for a Cluster
    1.1 Typical and Advanced Installation
    1.2 Preinstallation Steps Completed Using Typical Installation
    1.3 Preinstallation Steps Requiring Manual Tasks
    1.3.1 Verify System Requirements
    1.3.2 Check Network Requirements
    1.3.2.1 Single Client Access Name (SCAN) for the Cluster
    1.3.2.2 IP Address Requirements
    1.3.2.3 Redundant Interconnect Usage
    1.3.2.4 Intended Use of Network Interfaces
    1.3.3 Check Operating System Packages
    1.3.4 Create Groups and Users
    1.3.5 Check Storage
    1.3.6 Prepare Storage for Oracle Automatic Storage Management
    1.3.7 Install Oracle Grid Infrastructure Software

2 Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks
    2.1 Reviewing Upgrade Best Practices
    2.2 Installation Fixup Scripts
    2.3 Logging In to a Remote System Using X Terminal
    2.4 Creating Groups, Users and Paths for Oracle Grid Infrastructure
    2.4.1 Determining If the Oracle Inventory and Oracle Inventory Group Exists
    2.4.2 Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
    2.4.3 Creating the Oracle Grid Infrastructure User
    2.4.3.1 Understanding Restrictions for Oracle Software Installation Owners
    2.4.3.2 Determining if an Oracle Software Owner User Exists
    2.4.3.3 Creating or Modifying an Oracle Software Owner User for Oracle Grid Infrastructure
    2.4.4 Creating the Oracle Base Directory Path
    2.4.5 Creating Job Role Separation Operating System Privileges Groups and Users
    2.4.5.1 Overview of Creating Job Role Separation Groups and Users
    2.4.5.2 Creating Database Groups and Users with Job Role Separation
    2.4.6 Example of Creating Standard Groups, Users, and Paths
    2.4.7 Example of Creating Role-allocated Groups, Users, and Paths
    2.4.8 Creating the External Jobs User Account for HP-UX
    2.5 Checking the Hardware Requirements
    2.6 Checking the Network Requirements
    2.6.1 Network Hardware Requirements
    2.6.2 IP Address Requirements
    2.6.2.1 IP Address Requirements with Grid Naming Service
    2.6.2.2 IP Address Requirements for Manual Configuration
    2.6.3 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
    2.6.4 Multicast Requirements for Networks Used by Oracle Grid Infrastructure
    2.6.5 DNS Configuration for Domain Delegation to Grid Naming Service
    2.6.6 Grid Naming Service Configuration Example
    2.6.7 Manual IP Address Configuration Example
    2.6.8 Network Interface Configuration Options
    2.6.9 Checking the Run Level and Name Service Cache Daemon
    2.7 Identifying Software Requirements
    2.7.1 Software Requirements List for HP-UX Itanium Platforms
    2.7.2 Software Requirements List for HP-UX PA-RISC (64-bit)
    2.8 Checking the Software Requirements
    2.9 Network Time Protocol Setting
    2.10 Enabling Intelligent Platform Management Interface (IPMI)
    2.10.1 Requirements for Enabling IPMI
    2.10.2 Configuring the IPMI Management Network
    2.10.3 Configuring the BMC
    2.10.3.1 Configuring the iLO Processor on HP-UX
    2.11 Automatic SSH Configuration During Installation
    2.12 Configuring Grid Infrastructure Software Owner User Environments
    2.12.1 Environment Requirements for Oracle Grid Infrastructure Software Owner
    2.12.2 Procedure for Configuring Oracle Software Owner Environments
    2.12.3 Configuring Kernel Parameters on HP-UX Systems
    2.12.4 Setting Display and X11 Forwarding Configuration
    2.12.5 Preventing Installation Errors Caused by Terminal Output Commands
    2.13 Creating Required Symbolic Links
    2.14 Setting the Minor Number for Device Files
    2.15 Requirements for Creating an Oracle Grid Infrastructure Home Directory
    2.16 Cluster Name Requirements

3 Configuring Storage for Oracle Grid Infrastructure and Oracle RAC
    3.1 Reviewing Oracle Grid Infrastructure Storage Options
    3.1.1 Overview of Oracle Clusterware and Oracle RAC Storage Options
    3.1.2 General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
    3.1.2.1 General Storage Considerations for Oracle Clusterware
    3.1.2.2 General Storage Considerations for Oracle RAC
    3.1.3 Guidelines for Using Oracle ASM Disk Groups for Storage
    3.1.4 Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
    3.1.5 Supported Storage Options
    3.1.6 After You Have Selected Disk Storage Options
    3.2 Shared File System Storage Configuration
    3.2.1 Requirements for Using a Shared File System
    3.2.2 Deciding to Use a Cluster File System for Oracle Clusterware Files
    3.2.3 Deciding to Use Direct NFS Client for Data Files
    3.2.3.1 About Direct NFS Client Storage
    3.2.3.2 Using the oranfstab File with Direct NFS Client
    3.2.3.3 Mounting NFS Storage Devices with Direct NFS Client
    3.2.3.4 Specifying Network Paths with the oranfstab File
    3.2.4 Deciding to Use NFS for Data Files
    3.2.5 Configuring Storage NFS Mount and Buffer Size Parameters
    3.2.6 Checking NFS Mount and Buffer Size Parameters for Oracle Clusterware
    3.2.7 Checking NFS Mount and Buffer Size Parameters for Oracle RAC
    3.2.8 Enabling Direct NFS Client Oracle Disk Manager Control of NFS
    3.2.9 Creating Directories for Oracle Clusterware Files on Shared File Systems
    3.2.10 Creating Directories for Oracle Database Files on Shared File Systems
    3.2.11 Disabling Direct NFS Client Oracle Disk Management Control of NFS
    3.3 Oracle Automatic Storage Management Storage Configuration
    3.3.1 Configuring Storage for Oracle Automatic Storage Management
    3.3.1.1 Identifying Storage Requirements for Oracle ASM
    3.3.1.2 Creating Files on a NAS Device for Use with Oracle ASM
    3.3.1.3 Using an Existing Oracle ASM Disk Group
    3.3.1.4 Configuring Disk Devices for Oracle ASM
    3.3.2 Using Disk Groups with Oracle Database Files on Oracle ASM
    3.3.2.1 Identifying and Using Existing Oracle Database Disk Groups on ASM
    3.3.2.2 Creating Disk Groups for Oracle Database Data Files
    3.3.3 Upgrading Existing Oracle ASM Instances
    3.4 Desupport of Block and Raw Devices

4 Installing Oracle Grid Infrastructure for a Cluster
    4.1 Preparing to Install Oracle Grid Infrastructure with OUI
    4.2 Installing Oracle Grid Infrastructure
    4.2.1 Running OUI to Install Grid Infrastructure
    4.2.2 Installing Grid Infrastructure Using a Cluster Configuration File
    4.3 Installing Grid Infrastructure Using a Software-Only Installation
    4.3.1 Installing the Software Binaries
    4.3.2 Configuring the Software Binaries
    4.3.3 Configuring the Software Binaries Using a Response File
    4.4 Confirming Oracle Clusterware Function
    4.5 Confirming Oracle ASM Function for Oracle Clusterware Files

5 Oracle Grid Infrastructure Postinstallation Procedures
    5.1 Required Postinstallation Tasks
    5.1.1 Download and Install Patch Updates
    5.2 Recommended Postinstallation Tasks
    5.2.1 Back Up the root.sh Script
    5.2.2 Configure IPMI-based Failure Isolation Using Crsctl
    5.2.3 Tune Semaphore Parameters
    5.2.4 Create a Fast Recovery Area Disk Group
    5.2.4.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group
    5.2.4.2 Creating the Fast Recovery Area Disk Group
    5.2.5 Running RACcheck Configuration Audit Tool
    5.3 Using Older Oracle Database Versions with Grid Infrastructure
    5.3.1 General Restrictions for Using Older Oracle Database Versions
    5.3.2 Using ASMCA to Administer Disk Groups for Older Database Versions
    5.3.3 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x
    5.3.4 Enabling The Global Services Daemon (GSD) for Oracle Database Release 9.2
    5.3.5 Using the Correct LSNRCTL Commands
    5.4 Modifying Oracle Clusterware Binaries After Installation

6 How to Modify or Deinstall Oracle Grid Infrastructure
    6.1 Deciding When to Deinstall Oracle Clusterware
    6.2 Migrating Standalone Grid Infrastructure Servers to a Cluster
    6.3 Relinking Oracle Grid Infrastructure for a Cluster Binaries
    6.4 Changing the Oracle Grid Infrastructure Home Path
    6.5 Deconfiguring Oracle Clusterware Without Removing Binaries
    6.6 Removing Oracle Clusterware and ASM
    6.6.1 About the Deinstallation Tool
    6.6.2 Deinstalling Previous Release Grid Home
    6.6.3 Downloading The Deinstall Tool for Use with Failed Installations
    6.6.4 Running The Deinstallation Tool
    6.6.5 Deinstall Command Example for Oracle Clusterware and Oracle ASM
    6.6.6 Deinstallation Parameter File Example for Grid Infrastructure for a Cluster

A Troubleshooting the Oracle Grid Infrastructure Installation Process
    A.1 General Installation Issues
    A.1.1 Other Installation Issues and Errors
    A.2 Interpreting CVU "Unknown" Output Messages Using Verbose Mode
    A.3 Interpreting CVU Messages About Oracle Grid Infrastructure Setup
    A.4 About the Oracle Grid Infrastructure Alert Log
    A.5 Performing Cluster Diagnostics During Oracle Grid Infrastructure Installations
    A.6 About Using CVU Cluster Healthchecks After Installation
    A.7 Interconnect Configuration Issues
    A.8 SCAN VIP and SCAN Listener Issues
    A.9 Storage Configuration Issues
    A.9.1 Recovery from Losing a Node File System or Grid Home
    A.10 Completing an Installation Before Completing the Scripts

B Installing and Configuring Oracle Database Using Response Files
    B.1 How Response Files Work
    B.1.1 Reasons for Using Silent Mode or Response File Mode
    B.1.2 General Procedure for Using Response Files
    B.2 Preparing a Response File
    B.2.1 Editing a Response File Template
    B.2.2 Recording a Response File
    B.3 Running the Installer Using a Response File
    B.4 Running Net Configuration Assistant Using a Response File
    B.5 Postinstallation Configuration Using a Response File
    B.5.1 About the Postinstallation Configuration File
    B.5.2 Running Postinstallation Configuration Using a Response File

C Oracle Grid Infrastructure for a Cluster Installation Concepts
    C.1 Understanding Preinstallation Configuration
    C.1.1 Understanding Oracle Groups and Users
    C.1.1.1 Understanding the Oracle Inventory Group
    C.1.1.2 Understanding the Oracle Inventory Directory
    C.1.2 Understanding the Oracle Base Directory Path
    C.1.2.1 Overview of the Oracle Base directory
    C.1.2.2 Understanding Oracle Base and Grid Infrastructure Directories
    C.1.3 Understanding Network Addresses
    C.1.3.1 About the Public IP Address
    C.1.3.2 About the Private IP Address
    C.1.3.3 About the Virtual IP Address
    C.1.3.4 About the Grid Naming Service (GNS) Virtual IP Address
    C.1.3.5 About the SCAN
    C.1.4 Understanding Network Time Requirements
    C.2 Understanding Storage Configuration
    C.2.1 About Migrating Existing Oracle ASM Instances
    C.2.2 About Converting Standalone Oracle ASM Installations to Clustered Installations
    C.3 Understanding Server Pools
    C.3.1 Overview of Server Pools and Policy-based Management
    C.3.2 How Server Pools Work
    C.3.3 The Free Server Pool
    C.3.4 The Generic Server Pool
    C.4 Understanding Out-of-Place Upgrade

D How to Complete Installation Prerequisite Tasks Manually
    D.1 Configuring SSH Manually on All Cluster Nodes
    D.1.1 Checking Existing SSH Configuration on the System
    D.1.2 Configuring SSH on Cluster Nodes
    D.1.2.1 Create SSH Directory, and Create SSH Keys On Each Node
    D.1.2.2 Add All Keys to a Common authorized_keys File
    D.1.3 Enabling SSH User Equivalency on Cluster Nodes
    D.2 Configuring Kernel Parameters Manually
    D.2.1 Minimal Kernel Parameter Values
    D.2.2 Checking Kernel Parameters Manually
    D.2.2.1 Modifying Kernel Settings Using KCTUNE
    D.2.2.2 Modifying Kernel Settings Using SMH
    D.2.2.3 Modifying Kernel Settings Using KCWEB
    D.3 Setting UDP and TCP Kernel Parameters Manually

E How to Upgrade to Oracle Grid Infrastructure 11g Release 2
    E.1 Back Up the Oracle Software Before Upgrades
    E.2 Unset Oracle Environment Variables
    E.3 About Oracle ASM and Oracle Grid Infrastructure Installation and Upgrade
    E.4 Restrictions for Clusterware and Oracle ASM Upgrades
    E.5 Preparing to Upgrade an Existing Oracle Clusterware Installation
    E.5.1 Checks to Complete Before Upgrade an Existing Oracle Clusterware Installation
    E.5.2 Running the Oracle RACcheck Upgrade Readiness Assessment
    E.6 Using CVU to Validate Readiness for Oracle Clusterware Upgrades
    E.6.1 About the CVU Grid Upgrade Validation Command Options
    E.6.2 Example of Verifying System Upgrade Readiness for Grid Infrastructure
    E.6.3 Verifying System Readiness for Oracle Database Upgrades
    E.7 Performing Rolling Upgrades From an Earlier Release
    E.7.1 Performing a Rolling Upgrade of Oracle Clusterware
    E.7.2 Performing a Rolling Upgrade of Oracle Automatic Storage Management
    E.7.2.1 Preparing to Upgrade Oracle ASM
    E.7.2.2 Upgrading Oracle ASM
    E.8 Updating DB Control and Grid Control Target Parameters
    E.9 Unlocking the Existing Oracle Clusterware Installation
    E.10 Downgrading Oracle Clusterware After an Upgrade

Index

List of Tables
    2–1 Swap Space Required as a Multiple of RAM
    2–2 Grid Naming Service Example Network
    2–3 Manual Network Configuration Example
    2–4 HP-UX Itanium Requirements
    2–5 HP-UX PA-RISC (64-bit) Requirements
    3–1 Supported Storage Options for Oracle Clusterware and Oracle RAC
    3–2 Oracle Clusterware Shared File System Volume Size Requirements
    3–3 Oracle RAC Shared File System Volume Size Requirements
    3–4 Total Oracle Clusterware Storage Space Required by Redundancy Type
    3–5 Total Oracle Database Storage Space Required by Redundancy Type
    B–1 Response Files for Oracle Database
    B–2 Response files for Oracle Grid Infrastructure
    D–1 Minimal HP-UX Kernel Parameter Values
Preface
Oracle Grid Infrastructure Installation Guide for HP-UX explains how to configure a
server in preparation for installing and configuring an Oracle Grid Infrastructure
installation (Oracle Clusterware and Oracle Automatic Storage Management). It also
explains how to configure a server and storage in preparation for an Oracle Real
Application Clusters (Oracle RAC) installation.
■ Audience
■ Documentation Accessibility
■ Related Documents
■ Conventions
Audience
Oracle Grid Infrastructure Installation Guide for HP-UX provides configuration
information for network and system administrators, and database installation
information for database administrators (DBAs) who install and configure Oracle
Clusterware and Oracle Automatic Storage Management in an Oracle Grid
Infrastructure for a Cluster installation.
For customers with specialized system roles who intend to install Oracle RAC, this
book is intended to be used by system administrators, network administrators, or
storage administrators to configure a system in preparation for an Oracle Grid
Infrastructure for a cluster installation, and complete all configuration tasks that
require operating system root privileges. When Oracle Grid Infrastructure installation
and configuration is completed successfully, a system administrator should only need
to provide configuration information and to grant access to the database administrator
to run scripts as root during an Oracle RAC installation.
This guide assumes that you are familiar with Oracle Database concepts. For
additional information, refer to books in the Related Documents list.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support
through My Oracle Support. For information, visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing
impaired.
Related Documents
For more information, refer to the following Oracle resources:
Oracle Clusterware and Oracle Real Application Clusters Documentation
This installation guide reviews steps required to complete an Oracle Clusterware and
Oracle Automatic Storage Management installation, and to perform preinstallation
steps for Oracle RAC.
If you intend to install Oracle Database or Oracle RAC, then complete preinstallation
tasks as described in this installation guide, complete Oracle Grid Infrastructure
installation, and review those installation guides for additional information. You can
install either Oracle databases for a standalone server on an Oracle Grid Infrastructure
installation, or install an Oracle RAC database. If you want to install an Oracle Restart
deployment of grid infrastructure, then refer to Oracle Database Installation Guide for
HP-UX.
Most Oracle error message documentation is only available in HTML format. If you
only have access to the Oracle Documentation media, then browse the error messages
by range. When you find a range, use your browser's "find in page" feature to locate a
specific message. When connected to the Internet, you can search for a specific error
message using the error message search feature of the Oracle online documentation.
Installation Guides
■ Oracle Database Installation Guide for HP-UX
■ Oracle Real Application Clusters Installation Guide for Linux and UNIX
Operating System-Specific Administrative Guides
■ Oracle Database Administrator's Reference, 11g Release 2 (11.2) for UNIX Systems
Oracle Clusterware and Oracle Automatic Storage Management Administrative Guides
■ Oracle Clusterware Administration and Deployment Guide
■ Oracle Database Storage Administrator's Guide
Oracle Real Application Clusters Administrative Guides
■ Oracle Real Application Clusters Administration and Deployment Guide
■ Oracle Database 2 Day + Real Application Clusters Guide
Generic Documentation
■ Oracle Database 2 Day DBA
■ Oracle Database Administrator's Guide
■ Oracle Database Concepts
■ Oracle Database New Features Guide
■ Oracle Database Net Services Administrator's Guide
■ Oracle Database Reference
Printed documentation is available for sale in the Oracle Store at the following web
site:
https://shop.oracle.com
To download free release notes, installation documentation, white papers, or other
collateral, visit the Oracle Technology Network (OTN). You must register online before
using OTN; registration is free and can be done at the following web site:
http://www.oracle.com/technetwork/index.html
If you already have a username and password for OTN, then you can go directly to the
documentation section of the OTN web site:
http://www.oracle.com/technetwork/indexes/documentation/index.html
Oracle error message documentation is available only in HTML. You can browse the
error messages by range in the Documentation directory of the installation media.
When you find a range, use your browser's search feature to locate a specific message.
When connected to the Internet, you can search for a specific error message using the
error message search feature of the Oracle online documentation.
Conventions
The following text conventions are used in this document:
Convention    Meaning
boldface      Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic        Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace     Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
What's New in Oracle Grid Infrastructure Installation and Configuration?
This section describes new features as they pertain to the installation and
configuration of Oracle Grid Infrastructure (Oracle Clusterware and Oracle Automatic
Storage Management), and Oracle Real Application Clusters (Oracle RAC). This guide
replaces Oracle Clusterware Installation Guide. The topics in this section are:
■ New Features for Release 2 (11.2.0.4)
■ Desupported Options for Release 2 (11.2.0.4)
■ New Features for Release 2 (11.2.0.3)
■ New Features for Release 2 (11.2.0.2)
■ New Features for Release 2 (11.2)
■ New Features for Release 1 (11.1)
New Features for Release 2 (11.2.0.4)
The following is a list of new features for Release 2 (11.2.0.4):
Cluster and Oracle RAC Diagnosability Tools Enhancements
Starting with Oracle Grid Infrastructure 11g Release 2 (11.2.0.4), the Trace File Analyzer
and Collector is installed automatically with Oracle Grid Infrastructure installation.
The Trace File Analyzer and Collector is a diagnostic collection utility to simplify
diagnostic data collection on Oracle Clusterware, Oracle Grid Infrastructure, and
Oracle RAC systems.
Note: The Trace File Analyzer and Collector is available starting with Oracle Grid Infrastructure 11g Release 2 (11.2.0.4).
See Also: Oracle Clusterware Administration and Deployment Guide for
information about using Trace File Analyzer and Collector
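As a sketch only, and assuming the tfactl utility delivered with the 11.2.0.4 Grid home is available in the Grid installation owner's path, a diagnostic collection can typically be started from the command line:

$ tfactl print status
$ tfactl diagcollect

The first command reports whether the collector is running on the cluster nodes; the second gathers diagnostic data from all nodes into a single collection. Refer to Oracle Clusterware Administration and Deployment Guide for the supported syntax.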
Oracle Database Health Checks and Best Practice Recommendation
Oracle RAC Configuration Audit tool (RACcheck) is available with Oracle Grid
Infrastructure 11g Release 2 (11.2.0.4) for assessment of single instance and Oracle RAC
database installations for known configuration issues, best practices, regular health
checks, and pre- and post-upgrade best practices.
Note: The Oracle RAC Configuration Audit tool is available starting with Oracle Grid Infrastructure 11g Release 2 (11.2.0.4).
See Also: Oracle Real Application Clusters Administration and
Deployment Guide
Desupported Options for Release 2 (11.2.0.4)
The following features are no longer supported with Oracle Grid Infrastructure 11g
Release 2 (11.2.0.4):
Block and Raw Devices Not Supported with OUI
With this release, OUI no longer supports installation of Oracle Clusterware files on
block or raw devices. Install Oracle Clusterware files either on Oracle Automatic
Storage Management disk groups, or in a supported shared file system.
Deprecation of -cleanupOBase
The -cleanupOBase flag of the deinstallation tool is deprecated in this release. There is
no replacement for this flag.
New Features for Release 2 (11.2.0.3)
The following feature is new for Release 2 (11.2.0.3):
Configuration Wizard for the Oracle Grid Infrastructure Software
The Oracle Grid Infrastructure Configuration Wizard enables you to configure the
Oracle Grid Infrastructure software after performing a software-only installation. You
no longer have to manually edit the config_params configuration file as this wizard
takes you through the process, step by step.
See Also: Oracle Clusterware Administration and Deployment Guide for more information about the configuration wizard.
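For example, assuming a hypothetical Grid home of /u01/app/11.2.0/grid, the configuration wizard is typically started from the software-only home as the Grid installation owner:

$ /u01/app/11.2.0/grid/crs/config/config.sh

The wizard then collects the cluster configuration details and runs the configuration assistants for you.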
Oracle Clusterware Upgrade Configuration Force Feature
If nodes become unreachable in the middle of an upgrade, starting with release
11.2.0.3, you can run the rootupgrade.sh script with the -force flag to force an upgrade
to complete.
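As an illustration only, with a hypothetical Grid home of /u01/app/11.2.0/grid, the forced completion would be run as root on the nodes that remain reachable:

# /u01/app/11.2.0/grid/rootupgrade.sh -force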
New Features for Release 2 (11.2.0.2)
The following is a list of new features for Oracle Grid Infrastructure 11g Release 2
(Oracle Clusterware and Oracle Automatic Storage Management) release 11.2.0.2:
Enhanced Patch Set Installation
Starting with the release of the 11.2.0.2 patch set for Oracle Grid Infrastructure 11g
Release 2, Oracle Grid Infrastructure patch sets are full installations of the Oracle Grid
Infrastructure software. Note the following changes with the new patch set packaging:
■ Direct upgrades from previous releases (11.x, 10.x) to the most recent patch set are supported.
■ Out-of-place patch set upgrades only are supported. An out-of-place upgrade is one in which you install the patch set into a new, separate home.
■ New installations consist of installing the most recent patch set, rather than installing a base release and then upgrading to a patch release.
See Also: My Oracle Support note 1189783.1, "Important Changes to
Oracle Database Patch Sets Starting With 11.2.0.2", available from the
following URL:
https://support.oracle.com
Grid Installation Owner and ASMOPER
During installation, in the Privileged Operating System Groups window, it is now
optional to designate a group as the OSOPER for ASM group. If you choose to create
an OSOPER for ASM group, then you can enter a group name configured on all cluster
member nodes for the OSOPER for ASM group. In addition, the Oracle Grid
Infrastructure installation owner no longer is required to be a member.
New Software Updates Option
Use the Software Updates feature to dynamically download and apply software
updates as part of the Oracle Database installation. You can also download the updates
separately using the downloadUpdates option and later apply them during the
installation by providing the location where the updates are present.
Redundant Interconnect Usage
In previous releases, to make use of redundant networks for the interconnect, bonding,
trunking, teaming, or similar technology was required. Oracle Grid Infrastructure and
Oracle RAC can now make use of redundant network interconnects, without the use of
other network technology, to enhance optimal communication in the cluster. This
functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
Redundant Interconnect Usage enables load-balancing and high availability across
multiple (up to four) private networks (also known as interconnects).
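To illustrate, after installation additional private interfaces can be classified as cluster interconnects with the oifcfg utility; the Grid home path, interface name, and subnet below are placeholders, and this is a sketch rather than a complete procedure:

$ /u01/app/11.2.0/grid/bin/oifcfg getif
$ /u01/app/11.2.0/grid/bin/oifcfg setif -global lan2/192.0.2.0:cluster_interconnect

The first command lists the current interface classifications; the second registers an additional private subnet for interconnect use on all nodes.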
New Features for Release 2 (11.2)
The following is a list of new features for installation of Oracle Clusterware and Oracle
ASM 11g release 2 (11.2):
Oracle Automatic Storage Management and Oracle Clusterware Installation
With Oracle Grid Infrastructure 11g release 2 (11.2), Oracle Automatic Storage
Management (Oracle ASM) and Oracle Clusterware are installed into a single home
directory, which is referred to as the Grid Infrastructure home. Configuration
assistants start after the installer interview process that configures Oracle ASM and
Oracle Clusterware.
The installation of the combined products is called Oracle Grid Infrastructure.
However, Oracle Clusterware and Oracle Automatic Storage Management remain
separate products.
See Also: Oracle Database Installation Guide for HP-UX for
information about how to install Oracle Grid Infrastructure (Oracle
ASM and Oracle Clusterware binaries) for a standalone server. This
feature helps to ensure high availability for single-instance servers.
Oracle Automatic Storage Management and Oracle Clusterware Files
With this release, Oracle Cluster Registry (OCR) and voting disks can be placed on
Oracle Automatic Storage Management (Oracle ASM).
This feature enables Oracle ASM to provide a unified storage solution, storing all the
data for the clusterware and the database, without the need for third-party volume
managers or cluster file systems.
For new installations, OCR and voting disk files can be placed either on Oracle ASM,
or on a cluster file system or NFS system. Installing Oracle Clusterware files on raw or
block devices is no longer supported, unless an existing system is being upgraded.
ASM Job Role Separation Option with SYSASM
The SYSASM privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now
fully separated from the SYSDBA privilege. If you choose to use this optional feature,
and designate different operating system groups as the OSASM and the OSDBA
groups, then the SYSASM administrative privilege is available only to members of the
OSASM group. The SYSASM privilege also can be granted using password
authentication on the Oracle ASM instance.
You can designate OPERATOR privileges (a subset of the SYSASM privileges,
including starting and stopping ASM) to members of the OSOPER for ASM group.
Providing system privileges for the storage tier using the SYSASM privilege instead of
the SYSDBA privilege provides a clearer division of responsibility between Oracle
ASM administration and database administration, and helps to prevent different
databases using the same storage from accidentally overwriting each other's files.
See Also:
Oracle Database Storage Administrator's Guide
Cluster Time Synchronization Service
Cluster node times should be synchronized. With this release, Oracle Clusterware
provides Cluster Time Synchronization Service (CTSS), which ensures that there is a
synchronization service in the cluster. If Network Time Protocol (NTP) is not found
during cluster configuration, then CTSS is configured to ensure time synchronization.
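After installation you can check which mode CTSS is running in; for example, as the Grid installation owner with the Grid home bin directory in your path:

$ crsctl check ctss

If an active NTP configuration is detected, CTSS runs in observer mode; otherwise it runs in active mode and synchronizes the cluster node clocks itself.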
Oracle Enterprise Manager Database Control Provisioning
Oracle Enterprise Manager Database Control 11g provides the capability to
automatically provision Oracle Grid Infrastructure and Oracle RAC installations on
new nodes, and then extend the existing Oracle Grid Infrastructure and Oracle RAC
database to these provisioned nodes. This provisioning procedure requires a successful
Oracle RAC installation before you can use this feature.
See Also: Oracle Real Application Clusters Administration and
Deployment Guide for information about this feature
Fixup Scripts and Grid Infrastructure Checks
With Oracle Clusterware 11g release 2 (11.2), Oracle Universal Installer (OUI) detects
when minimum requirements for installation are not completed, and creates shell
script programs, called fixup scripts, to resolve many incomplete system configuration
requirements. If OUI detects an incomplete task that is marked "fixable", then you can
easily fix the issue by generating the fixup script by clicking the Fix & Check Again
button.
The fixup script is generated during installation. You are prompted to run the script as
root in a separate terminal session. When you run the script, it raises kernel values to
required minimums, if necessary, and completes other operating system configuration
tasks.
You also can have Cluster Verification Utility (CVU) generate fixup scripts before
installation.
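For example, you can run the Cluster Verification Utility from the staged installation media with the -fixup option before starting OUI; the node names here are placeholders:

$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose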
Grid Plug and Play
In the past, adding or removing servers in a cluster required extensive manual
preparation. With this release, you can continue to configure server nodes manually, or
use Grid Plug and Play to configure them dynamically as nodes are added or removed
from the cluster.
Grid Plug and Play reduces the costs of installing, configuring, and managing server
nodes by starting a grid naming service within the cluster to allow each node to
perform the following tasks dynamically:
■
Negotiating appropriate network identities for itself
■
Acquiring additional information it needs to operate from a configuration profile
■
Configuring or reconfiguring itself using profile data, making hostnames and
addresses resolvable on the network
Because servers perform these tasks dynamically, the number of steps required to add
or delete nodes is minimized.
Intelligent Platform Management Interface (IPMI) Integration
Intelligent Platform Management Interface (IPMI) is an industry standard
management protocol that is included with many servers today. IPMI operates
independently of the operating system, and can operate even if the system is not
powered on. Servers with IPMI contain a baseboard management controller (BMC)
which is used to communicate to the server.
If IPMI is configured, then Oracle Clusterware uses IPMI when node fencing is required
and the server is not responding.
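As a sketch only, after installation the IPMI administrator credentials and BMC address are registered with Oracle Clusterware using crsctl; the user name and IP address below are placeholders, and the BMC must already be configured on each node:

$ crsctl set css ipmiadmin bmcadmin
$ crsctl set css ipmiaddr 192.0.2.15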
Oracle Clusterware Out-of-place Upgrade
With this release, you can install a new version of Oracle Clusterware into a separate
home from an existing Oracle Clusterware installation. This feature reduces the
downtime required to upgrade a node in the cluster. When performing an out-of-place
upgrade, the old and new version of the software are present on the nodes at the same
time, each in a different home location, but only one version of the software is active.
Oracle Clusterware Administration with Oracle Enterprise Manager
With this release, you can use Oracle Enterprise Manager Cluster Home page to
perform full administrative and monitoring support for both standalone database and
Oracle RAC environments, using High Availability Application and Oracle Cluster
Resource Management.
When Oracle Enterprise Manager is installed with Oracle Clusterware, it can provide a
set of users that have the Oracle Clusterware Administrator role in Enterprise
Manager, and provide full administrative and monitoring support for High
Availability application and Oracle Clusterware resource management. After you have
completed installation and have Oracle Enterprise Manager deployed, you can
provision additional nodes added to the cluster using Oracle Enterprise Manager.
SCAN for Simplified Client Access
With this release, the Single Client Access Name (SCAN) is the host name to provide
for all clients connecting to the cluster. The SCAN is a domain name registered to at
least one and up to three IP addresses, either in the domain name service (DNS) or the
Grid Naming Service (GNS). The SCAN eliminates the need to change clients when
nodes are added to or removed from the cluster. Clients using the SCAN can also
access the cluster using EZCONNECT.
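For example, a client could connect through the SCAN with an Easy Connect string; the SCAN name, port, and service name below are placeholders for your own environment:

$ sqlplus system@sales-scan.example.com:1521/sales.example.com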
SRVCTL Command Enhancements for Patching
With this release, you can use the server control utility SRVCTL to shut down all
Oracle software running within an Oracle home, in preparation for patching. Oracle
Grid Infrastructure patching is automated across all nodes, and patches can be applied
in a multi-node, multi-patch fashion.
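As a sketch (the home path, state file, and node name are placeholders), the software running from a home can be stopped on one node before patching and later restarted from the recorded state file:

$ srvctl stop home -o /u01/app/11.2.0/grid -s /tmp/grid_state.txt -n node1
$ srvctl start home -o /u01/app/11.2.0/grid -s /tmp/grid_state.txt -n node1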
Typical Installation Option
To streamline cluster installations, especially for those customers who are new to
clustering, Oracle introduces the Typical Installation path. Typical installation defaults
as many options as possible to those recommended as best practices.
Voting Disk Backup Procedure Change
In prior releases, backing up the voting disks using a dd command was a required
postinstallation task. With Oracle Clusterware release 11.2 and later, backing up and
restoring a voting disk using the dd command is not supported.
Backing up voting disks manually is no longer required, because voting disks are
backed up automatically in the OCR as part of any configuration change. Voting disk
data is automatically restored to any added voting disks.
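You can confirm the current voting disk configuration at any time; for example, run the following as the Grid installation owner:

$ crsctl query css votedisk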
See Also:
Oracle Clusterware Administration and Deployment Guide
New Features for Release 1 (11.1)
The following is a list of new features for release 1 (11.1):
Changes in Installation Documentation
With Oracle Database 11g release 1, Oracle Clusterware can be installed or configured
as an independent product, and additional documentation is provided on storage
administration. For installation planning, note the following documentation:
Oracle Database 2 Day + Real Application Clusters Guide
This book provides an overview and examples of the procedures to install and
configure a two-node Oracle Clusterware and Oracle RAC environment.
Oracle Clusterware Installation Guide
This book (the guide that you are reading) provides procedures either to install Oracle
Clusterware as a standalone product, or to install Oracle Clusterware with either
Oracle Database, or Oracle RAC. It contains system configuration instructions that
require system administrator privileges.
Oracle Real Application Clusters Installation Guide
This platform-specific book provides procedures to install Oracle RAC after you have
completed successfully an Oracle Clusterware installation. It contains database
configuration instructions for database administrators.
Oracle Database Storage Administrator's Guide
This book provides information for database and storage administrators who
administer and manage storage, or who configure and administer Oracle Automatic
Storage Management (ASM).
Oracle Clusterware Administration and Deployment Guide
This is the administrator's reference for Oracle Clusterware. It contains information
about administrative tasks, including those that involve changes to operating system
configurations and cloning Oracle Clusterware.
Oracle Real Application Clusters Administration and Deployment Guide
This is the administrator's reference for Oracle RAC. It contains information about
administrative tasks. These tasks include database cloning, node addition and
deletion, Oracle Cluster Registry (OCR) administration, use of SRVCTL and other
database administration utilities, and tuning changes to operating system
configurations.
Release 1 (11.1) Enhancements and New Features for Installation
The following is a list of enhancements and new features for Oracle Database 11g
release 1 (11.1):
New SYSASM Privilege and OSASM operating system group for Oracle ASM
Administration
This feature introduces a new SYSASM privilege that is specifically intended for
performing Oracle ASM administration tasks. Using the SYSASM privilege instead of
the SYSDBA privilege provides a clearer division of responsibility between Oracle ASM
administration and database administration.
OSASM is a new operating system group that is used exclusively for ASM. Members
of the OSASM group can connect as SYSASM using operating system authentication and
have full access to ASM.
1 Typical Installation for Oracle Grid Infrastructure for a Cluster
This chapter describes the difference between a Typical and Advanced installation for
Oracle Grid Infrastructure for a cluster, and describes the steps required to complete a
Typical installation.
This chapter contains the following sections:
■ Typical and Advanced Installation
■ Preinstallation Steps Completed Using Typical Installation
■ Preinstallation Steps Requiring Manual Tasks
1.1 Typical and Advanced Installation
There are two installation options for Oracle Grid Infrastructure installations:
■ Typical Installation: The Typical installation option is a simplified installation with a minimal number of manual configuration choices. Oracle recommends that you select this installation type for most cluster implementations.
■ Advanced Installation: The Advanced Installation option is an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, or more granularity in specifying Oracle Automatic Storage Management roles.
1.2 Preinstallation Steps Completed Using Typical Installation
With Oracle Clusterware 11g Release 2 (11.2), during installation Oracle Universal
Installer (OUI) generates Fixup scripts (runfixup.sh) that you can run to complete
required preinstallation steps.
Fixup scripts are generated during installation. You are prompted to run scripts as
root in a separate terminal session. When you run scripts, they complete the following
configuration tasks:
■ If necessary, sets kernel parameters required for installation and run time to at
least the minimum value.
■ Reconfigures primary and secondary group memberships for the installation
owner, if necessary, for the Oracle Inventory directory and the operating system
privileges groups.
■ Sets shell limits if necessary to required values.
1.3 Preinstallation Steps Requiring Manual Tasks
Complete the following manual configuration tasks:
■ Verify System Requirements
■ Check Network Requirements
■ Check Operating System Packages
■ Create Groups and Users
■ Check Storage
■ Prepare Storage for Oracle Automatic Storage Management
■ Install Oracle Grid Infrastructure Software
See Also: Chapter 2, "Advanced Installation Oracle Grid
Infrastructure for a Cluster Preinstallation Tasks" and Chapter 3,
"Configuring Storage for Oracle Grid Infrastructure and Oracle RAC"
if you need any information about how to complete these tasks
1.3.1 Verify System Requirements
Enter the following commands to check available memory:
# /usr/contrib/bin/machinfo | grep -i Memory
# /usr/sbin/swapinfo -a
The minimum required RAM is 2.5 GB for Oracle Grid Infrastructure
for a Cluster installations, including installations where you plan to install Oracle
RAC. The minimum required swap space is 2 GB.
# bdf
This command checks the available space on file systems. If you use normal
redundancy for Oracle Clusterware files, which is three Oracle Cluster Registry (OCR)
locations and three voting disk locations, then you should have at least 2 GB of file
space available on shared storage volumes reserved for Oracle Grid Infrastructure
files.
Note: You cannot install OCR or voting disk files on raw partitions.
You can install only on Oracle ASM, or on supported
network-attached storage or cluster file systems. The only use for raw
devices is as Oracle ASM disks.
If you plan to install on Oracle ASM, then to ensure high availability of Oracle
Clusterware files on Oracle ASM, you must have at least 2 GB of disk space for Oracle
Clusterware files in three separate failure groups, with at least three physical disks.
Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to
create Oracle Clusterware files.
Ensure you have at least 5.5 GB of space for the Oracle Grid Infrastructure for a cluster
home (Grid home). This includes Oracle Clusterware and Oracle Automatic Storage
Management (Oracle ASM) files and log files.
# bdf /tmp
Ensure that you have at least 7 GB of space in /tmp. If this space is not available, then
increase the size, or delete unnecessary files in /tmp. If /tmp and the grid home are on
the same file system, then add together their respective disk space requirements for
the total minimum space required.
For more information, review the following section in Chapter 2:
"Checking the Hardware Requirements"
1.3.2 Check Network Requirements
Ensure that you have the following available:
■ Single Client Access Name (SCAN) for the Cluster
■ IP Address Requirements
■ Redundant Interconnect Usage
■ Intended Use of Network Interfaces
1.3.2.1 Single Client Access Name (SCAN) for the Cluster
During Typical installation, you are prompted to confirm the default Single Client
Access Name (SCAN), which is used to connect to databases within the cluster
irrespective of which nodes they run on. By default, the name used as the SCAN is
also the name of the cluster. The default value for the SCAN is based on the local node
name. If you change the SCAN from the default, then the name that you use must be
globally unique throughout your enterprise.
In a Typical installation, the SCAN is also the name of the cluster. The SCAN and
cluster name must be at least one character long and no more than 15 characters in
length, must be alphanumeric, and may contain hyphens (-).
For example:
NE-Sa89
If you require a SCAN that is longer than 15 characters, then be aware that the cluster
name defaults to the first 15 characters of the SCAN.
1.3.2.2 IP Address Requirements
Before starting the installation, you must have at least two interfaces configured on
each node: One for the private IP address and one for the public IP address.
1.3.2.2.1 IP Address Requirements for Manual Configuration If you do not enable GNS, then
the public and virtual IP addresses for each node must be static IP addresses,
configured before installation for each node, but not currently in use. Public and
virtual IP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on interfaces
you identify as private during the installation interview.
The cluster must have the following addresses configured:
■ A public IP address for each node, with the following characteristics:
– Static IP address
– Configured before installation for each node, and resolvable to that node
before installation
– On the same subnet as all other public IP addresses, VIP addresses, and SCAN
addresses
■ A virtual IP address for each node, with the following characteristics:
– Static IP address
– Configured before installation for each node, but not currently in use
– On the same subnet as all other public IP addresses, VIP addresses, and SCAN
addresses
■ A Single Client Access Name (SCAN) for the cluster, with the following
characteristics:
– Three Static IP addresses configured on the domain name server (DNS) before
installation so that the three IP addresses are associated with the name
provided as the SCAN, and all three addresses are returned in random order
by the DNS to the requestor
– Configured before installation in the DNS to resolve to addresses that are not
currently in use
– Given a name that does not begin with a numeral
– On the same subnet as all other public IP addresses, VIP addresses, and SCAN
addresses
– Conforms with the RFC 952 standard, which allows alphanumeric characters
and hyphens ("-"), but does not allow underscores ("_").
■ A private IP address for each node, with the following characteristics:
– Static IP address
– Configured before installation, but on a separate, private network, with its
own subnet, that is not resolvable except by other cluster member nodes
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
Note: Oracle strongly recommends that you do not configure SCAN
VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If
you use the hosts file to resolve SCANs, then you will only be able to
resolve to one IP address and you will have only one SCAN address.
See Also: Appendix C.1.3, "Understanding Network Addresses" for
more information about network addresses
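To confirm before installation that the SCAN resolves as described above, you can
query your DNS from a cluster node. The following is a minimal check; the SCAN
name shown (mycluster-scan.example.com) is only a placeholder for your own SCAN,
and the command should return the three configured addresses:
# nslookup mycluster-scan.example.com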
1.3.2.3 Redundant Interconnect Usage
In previous releases, to make use of redundant networks for the interconnect, bonding,
trunking, teaming, or similar technology was required. Oracle Grid Infrastructure and
Oracle RAC can now make use of redundant network interconnects, without the use of
other network technology, to enhance optimal communication in the cluster. This
functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
Redundant Interconnect Usage enables load-balancing and high availability across
multiple (up to 4) private networks (also known as interconnects).
1.3.2.4 Intended Use of Network Interfaces
During installation, you are asked to identify the planned use for each network
interface that OUI detects on your cluster node. You must identify each interface as a
public or private interface, or as "do not use." For interfaces that you plan to have used
for other purposes—for example, an interface dedicated to a network file system—you
must identify those instances as "do not use" interfaces, so that Oracle Clusterware
ignores them.
Redundant Interconnect Usage cannot protect interfaces used for public
communication. If you require high availability or load balancing for public interfaces,
then use a third party solution. Typically, bonding, trunking or similar technologies
can be used for this purpose.
You can enable Redundant Interconnect Usage for the private network by selecting
multiple interfaces to use as private interfaces. Redundant Interconnect Usage creates
a redundant interconnect when you identify more than one interface as private. This
functionality is available starting with Oracle Grid Infrastructure 11g Release 2
(11.2.0.2).
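To review which network interfaces are present on an HP-UX node before you decide
how to classify them, you can typically list them with the lanscan command, and then
inspect a specific interface with ifconfig. This is only an illustrative check; interface
names such as lan0 vary by system:
# /usr/sbin/lanscan
# /usr/sbin/ifconfig lan0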
1.3.3 Check Operating System Packages
Refer to the tables listed in Section 2.7, "Identifying Software Requirements" for the list
of required packages for your operating system.
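On HP-UX, you can typically confirm whether a listed bundle or product is installed
by using the swlist command, and then compare the output against the tables in
Section 2.7. For example:
# /usr/sbin/swlist -l bundle | more
# /usr/sbin/swlist -l product | more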
1.3.4 Create Groups and Users
Enter the following commands to create default groups and users:
One system privileges group for all operating system-authenticated administration
privileges, including Oracle RAC (if installed):
# groupadd -g 1000 oinstall
# groupadd -g 1031 dba
# useradd -u 1101 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/
This set of commands creates a single installation owner, with required system
privileges groups to grant the OraInventory system privileges (oinstall), and to grant
the OSASM/SYSASM and OSDBA/SYSDBA system privileges. It also creates the
Oracle base for both Oracle Grid Infrastructure and Oracle RAC, /u01/app/oracle. It
creates the Grid home (the location where Oracle Grid Infrastructure binaries are
stored), /u01/app/11.2.0/grid.
1.3.5 Check Storage
You must have space available on Oracle ASM for OCR and voting disk files, and for
Oracle Database files, if you install standalone or Oracle Real Application Clusters
Databases. Creating Oracle Clusterware files on block or raw devices is no longer
supported for new installations.
Note: When using Oracle Automatic Storage Management (Oracle
ASM) for either the Oracle Clusterware files or Oracle Database files,
Oracle creates one Oracle ASM instance on each node in the cluster,
regardless of the number of databases.
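If you want to confirm which disk devices are visible to a node before you choose
candidate disks for Oracle ASM, you can typically list them with the ioscan command.
This is only an illustrative check, not a required installation step:
# /usr/sbin/ioscan -fnC disk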
1.3.6 Prepare Storage for Oracle Automatic Storage Management
Review the relevant sections in Chapter 3.
See Also: Chapter 3, "Configuring Storage for Oracle Grid
Infrastructure and Oracle RAC" if you require detailed storage
configuration information
1.3.7 Install Oracle Grid Infrastructure Software
1.
Start OUI from the root level of the installation media. For example:
./runInstaller
2.
Select Install and Configure Grid Infrastructure for a Cluster, then select Typical
Installation. In the installation screens that follow, enter the configuration
information as prompted.
If you receive an installation verification error that cannot be fixed using a fixup
script, then review Chapter 2, "Advanced Installation Oracle Grid Infrastructure
for a Cluster Preinstallation Tasks" to find the section for configuring cluster
nodes. After completing the fix, continue with the installation until it is complete.
See Also: Chapter 4, "Installing Oracle Grid Infrastructure for a
Cluster"
2 Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks
This chapter describes the system configuration tasks that you must complete before
you start Oracle Universal Installer (OUI) to install Oracle Grid Infrastructure for a
cluster, and that you may need to complete if you intend to install Oracle Real
Application Clusters (Oracle RAC) on the cluster.
This chapter contains the following topics:
■ Reviewing Upgrade Best Practices
■ Installation Fixup Scripts
■ Logging In to a Remote System Using X Terminal
■ Creating Groups, Users and Paths for Oracle Grid Infrastructure
■ Checking the Hardware Requirements
■ Checking the Network Requirements
■ Identifying Software Requirements
■ Checking the Software Requirements
■ Network Time Protocol Setting
■ Enabling Intelligent Platform Management Interface (IPMI)
■ Automatic SSH Configuration During Installation
■ Configuring Grid Infrastructure Software Owner User Environments
■ Creating Required Symbolic Links
■ Setting the Minor Number for Device Files
■ Requirements for Creating an Oracle Grid Infrastructure Home Directory
■ Cluster Name Requirements
2.1 Reviewing Upgrade Best Practices
Caution: Always create a backup of existing databases before
starting any configuration change.
If you have an existing Oracle installation, then record the version numbers, patches,
and other configuration information, and review upgrade procedures for your existing
installation. Review Oracle upgrade documentation before proceeding with
installation, to decide how you want to proceed.
You can upgrade Oracle Automatic Storage Management (Oracle ASM) 11g release 1
(11.1) without shutting down an Oracle RAC database by performing a rolling
upgrade either of individual nodes, or of a set of nodes in the cluster. However, if you
have a standalone database on a cluster that uses Oracle ASM, then you must shut
down the standalone database before upgrading. If you are upgrading from Oracle
ASM 10g, then you must shut down the entire Oracle ASM cluster to perform the
upgrade.
If you have an existing Oracle ASM installation, then review Oracle upgrade
documentation. The location of the Oracle ASM home changes in this release, and you
may want to consider other configuration changes to simplify or customize storage
administration. If you have an existing Oracle ASM home from a previous release,
then it should be owned by the same user that you plan to use to upgrade Oracle
Clusterware.
During rolling upgrades of the operating system, Oracle supports using different
operating system binaries when both versions of the operating system are certified
with the Oracle Database release you are using.
Note: Using mixed operating system versions is only supported for
the duration of an upgrade, over the period of a few hours. Oracle
Clusterware does not support nodes that have processors with
different instruction set architectures (ISAs) in the same cluster. Each
node must be binary compatible with the other nodes in the cluster.
For example, you cannot have one node using an Intel 64 processor
and another node using an IA-64 (Itanium) processor in the same
cluster. You could have one node using an Intel 64 processor and
another node using an AMD64 processor in the same cluster because
the processors use the same x86-64 ISA and run the same binary
version of Oracle software.
Your cluster can have nodes with CPUs of different speeds or sizes,
but Oracle recommends that you use nodes with the same hardware
configuration.
To find the most recent software updates, and to find best practices recommendations
about preupgrade, postupgrade, compatibility, and interoperability, refer to "Oracle
Upgrade Companion." "Oracle Upgrade Companion" is available through Note
785351.1 on My Oracle Support:
https://support.oracle.com
2.2 Installation Fixup Scripts
With Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when
the minimum requirements for an installation are not met, and creates shell scripts,
called fixup scripts, to finish incomplete system configuration steps. If OUI detects an
incomplete task, then it generates fixup scripts (runfixup.sh). You can run the fixup
script after you click the Fix and Check Again Button.
You also can have CVU generate fixup scripts before installation.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about using the cluvfy command
The Fixup script does the following:
■ If necessary, sets kernel parameters to values required for successful installation,
including:
– Shared memory parameters.
– Open file descriptor and UDP send/receive parameters.
■ Sets permissions on the Oracle Inventory (central inventory) directory.
■ Reconfigures primary and secondary group memberships for the installation
owner, if necessary, for the Oracle Inventory directory and the operating system
privileges groups.
■ Sets shell limits if necessary to required values.
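If you prefer to review kernel settings yourself rather than rely only on the generated
fixup script, HP-UX 11i v3 provides the kctune utility for displaying tunable kernel
parameters. A minimal sketch, querying one tunable (nproc) purely as an example;
compare the reported value against the requirements listed in this chapter:
# /usr/sbin/kctune nproc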
If you have SSH configured between cluster member nodes for the user account that
you will use for installation, then you can check your cluster configuration before
installation and generate a fixup script to make operating system changes before
starting the installation.
To do this, log in as the user account that will perform the installation, navigate to the
staging area where the runcluvfy command is located, and use the following
command syntax, where node is a comma-delimited list of nodes you want to make
cluster members:
$ ./runcluvfy.sh stage -pre crsinst -n node -fixup -verbose
For example, if you intend to configure a two-node cluster with nodes node1 and
node2, enter the following command:
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
2.3 Logging In to a Remote System Using X Terminal
During installation, you are required to perform tasks as root or as other users on
remote terminals. Complete the following procedure for the user accounts to enable
for remote display.
Note: If you log in as another user (for example, oracle), then repeat
this procedure for that user as well.
To enable remote display, complete one of the following procedures:
■ If you are installing the software from an X Window System workstation or X
terminal, then:
1.
Start a local terminal session, for example, an X terminal (xterm).
2.
If you are installing the software on another system and using the system as
an X11 display, then enter a command using the following syntax to enable
remote hosts to display X applications on the local X server:
# xhost + RemoteHost
where RemoteHost is the fully qualified remote host name. For example:
# xhost + somehost.example.com
somehost.example.com being added to the access control list
3.
If you are not installing the software on the local system, then use the ssh
command to connect to the system where you want to install the software:
# ssh -Y RemoteHost
where RemoteHost is the fully qualified remote host name. The -Y flag ("yes")
enables remote X11 clients to have full access to the original X11 display.
For example:
# ssh -Y somehost.example.com
4.
If you are not logged in as the root user, then enter the following command to
switch the user to root:
$ su - root
password:
#
■ If you are installing the software from a PC or other system with X server software
installed, then:
Note: If necessary, refer to your X server documentation for more
information about completing this procedure. Depending on the X
server software that you are using, you may need to complete the
tasks in a different order.
1.
Start the X server software.
2.
Configure the security settings of the X server software to permit remote hosts
to display X applications on the local system.
3.
Connect to the remote system where you want to install the software as the
Oracle Grid Infrastructure for a cluster software owner (grid, oracle) and
start a terminal session on that system, for example, an X terminal (xterm).
4.
Open another terminal on the remote system, and log in as the root user on
the remote system, so you can run scripts as root when prompted.
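Depending on your X server software, you may also need to point the installation
session at your local display before you start OUI. A minimal sketch, run as the
software owner on the remote system; the workstation name is only a placeholder:
$ DISPLAY=workstation.example.com:0.0; export DISPLAY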
2.4 Creating Groups, Users and Paths for Oracle Grid Infrastructure
Log in as root, and use the following instructions to locate or create the Oracle
Inventory group and a software owner for Oracle Grid Infrastructure.
■ Determining If the Oracle Inventory and Oracle Inventory Group Exists
■ Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
■ Creating the Oracle Grid Infrastructure User
■ Creating the Oracle Base Directory Path
■ Creating Job Role Separation Operating System Privileges Groups and Users
■ Example of Creating Standard Groups, Users, and Paths
■ Example of Creating Role-allocated Groups, Users, and Paths
■ Creating the External Jobs User Account for HP-UX
Note: During an Oracle Grid Infrastructure installation, both Oracle
Clusterware and Oracle Automatic Storage Management are installed.
You no longer can have separate Oracle Clusterware installation
owners and Oracle Automatic Storage Management installation
owners.
2.4.1 Determining If the Oracle Inventory and Oracle Inventory Group Exists
When you install Oracle software on the system for the first time, OUI creates the
oraInst.loc file. This file identifies the name of the Oracle Inventory group (by
default, oinstall), and the path of the Oracle Central Inventory directory. An
oraInst.loc file has contents similar to the following:
inventory_loc=central_inventory_location
inst_group=group
In the preceding example, central_inventory_location is the location of the Oracle
central inventory, and group is the name of the group that has permissions to write to
the central inventory (the OINSTALL group privilege).
If you have an existing Oracle central inventory, then ensure that you use the same
Oracle Inventory for all Oracle software installations, and ensure that all Oracle
software users you intend to use for installation have permissions to write to this
directory.
To determine if you have an Oracle Inventory on your system:
1.
Enter the following command:
# more /var/opt/oracle/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the
following:
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
In the previous output example:
■ The inventory_loc parameter shows the location of the Oracle Inventory.
■ The inst_group parameter shows the name of the Oracle Inventory group (in
this example, oinstall).
2.
Ensure that Oracle Inventory group members are granted the HP-UX privileges
RTPRIO, MLOCK, and RTSCHED. For example:
# /usr/bin/getprivgrp oinstall
oinstall: RTPRIO MLOCK RTSCHED
If the group is not granted these privileges, then add these privileges as described
in the next section.
2.4.2 Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
If the oraInst.loc file does not exist, then complete the following tasks:
1.
Create the Oracle Inventory group by entering a command similar to the
following:
# /usr/sbin/groupadd -g 1000 oinstall
The preceding command creates the oraInventory group oinstall, with the group
ID number 1000. Members of the oraInventory group are granted privileges to
write to the Oracle central inventory (oraInventory).
By default, if an oraInventory group does not exist, then the installer lists the
primary group of the installation owner for the Oracle Grid Infrastructure for a
Cluster software as the oraInventory group. Ensure that this group is available as a
primary group for all planned Oracle software installation owners.
Note: Group and user IDs must be identical on all nodes in the
cluster. Check to make sure that the group and user IDs you want to
use are available on each cluster member node, and confirm that the
primary group for each Oracle Grid Infrastructure for a Cluster
installation owner has the same name and group ID.
2.
If it does not already exist, create the /etc/privgroup file. Add a line similar to the
following to grant Oracle installation owners the RTPRIO, MLOCK, and
RTSCHED privileges:
oinstall RTPRIO MLOCK RTSCHED
If /etc/privgroup exists, then add these privileges to the Oracle Inventory group.
For example:
# /usr/sbin/setprivgrp oinstall RTPRIO MLOCK RTSCHED
Confirm the grant of privileges to the group. For example:
# /usr/bin/getprivgrp oinstall
oinstall: RTPRIO MLOCK RTSCHED
3.
Repeat this procedure on all of the other nodes in the cluster.
2.4.3 Creating the Oracle Grid Infrastructure User
You must create a software owner for Oracle Clusterware in the following
circumstances:
■ If an Oracle software owner user does not exist; for example, if this is the first
installation of Oracle software on the system
■ If an Oracle software owner user exists, but you want to use a different operating
system user, with different group membership, to separate Oracle Grid
Infrastructure administrative privileges from Oracle Database administrative
privileges.
In Oracle documentation, a user created to own only Oracle Grid Infrastructure
software installations is called the grid user. A user created to own either all
Oracle installations, or only Oracle database installations, is called the oracle user.
2.4.3.1 Understanding Restrictions for Oracle Software Installation Owners
If you intend to use multiple Oracle software owners for different Oracle Database
homes, then Oracle recommends that you create a separate software owner for Oracle
Grid Infrastructure software (Oracle Clusterware and Oracle ASM), and use that
owner to run the Oracle Grid Infrastructure installation.
If you plan to install Oracle Database or Oracle RAC, then Oracle recommends that
you create separate users for the Oracle Grid Infrastructure and the Oracle Database
installations. If you use one installation owner, then when you want to perform
administration tasks, you must change the value for $ORACLE_HOME to the instance you
want to administer (Oracle ASM, in the Oracle Grid Infrastructure home, or the
database in the Oracle home), using command syntax such as the following example,
where /u01/app/11.2.0/grid is the Oracle Grid Infrastructure home:
$ ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
If you try to administer an instance using sqlplus, lsnrctl, or asmcmd commands
while $ORACLE_HOME is set to a different binary path, then you will encounter
errors. When starting srvctl from a database home, $ORACLE_HOME should be set, or
srvctl fails. But if you are using srvctl in the Oracle Grid Infrastructure home, then
$ORACLE_HOME is ignored, and the Oracle home path does not affect srvctl commands.
You always have to change $ORACLE_HOME to the instance that you want to administer.
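For example, to administer a database instance from the same session, you would
point $ORACLE_HOME back at the database home. The path shown is only an
illustration of a typical Optimal Flexible Architecture database home; substitute your
own Oracle home path:
$ ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME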
To create separate Oracle software owners with separate users and separate
operating system privileges groups for different Oracle software installations, note that
each of these users must have the Oracle central inventory group (oraInventory group)
as their primary group. Members of this group have write privileges to the Oracle
central inventory (oraInventory) directory, and are also granted permissions for
various Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware
home to which DBAs need write access, and other necessary privileges. In Oracle
documentation, this group is represented as oinstall in code examples.
Each Oracle software owner must be a member of the same central inventory group.
Oracle recommends that you do not have more than one central inventory for Oracle
installations. If an Oracle software owner has a different central inventory group, then
you may corrupt the central inventory.
Caution: For Oracle Grid Infrastructure for a Cluster installations,
note the following restrictions for the Oracle Grid Infrastructure
binary home (Grid home):
■ It must not be placed under an Oracle base directory, including
the Oracle base directory of the Oracle Grid Infrastructure
installation owner.
■ It must not be placed in the home directory of an installation
owner.
During installation, ownership of the path to the Grid home is
changed to root. This change causes permission errors for other
installations.
2.4.3.2 Determining if an Oracle Software Owner User Exists
To determine whether an Oracle software owner user named oracle or grid exists,
enter a command similar to the following (in this case, to determine if oracle exists):
# id oracle
If the user exists, then the output from this command is similar to the following:
uid=501(oracle) gid=501(oinstall) groups=502(dba),503(oper)
Determine whether you want to use the existing user, or create another user. The user
and group ID numbers must be the same on each node you intend to make a cluster
member node.
To use the existing user, ensure that the user's primary group is the Oracle Inventory
group (oinstall). If this user account will be used for Oracle Database installations,
then ensure that the Oracle account is also a member of the group you plan to
designate as the OSDBA for ASM group (the group whose members are permitted to
write to Oracle ASM storage).
2.4.3.3 Creating or Modifying an Oracle Software Owner User for Oracle Grid
Infrastructure
If the Oracle software owner (oracle, grid) user does not exist, or if you require a new
Oracle software owner user, then create it. If you want to use an existing user account,
then modify it to ensure that the user ID and group IDs are the same on each cluster
member node. The following procedures use grid as the name of the Oracle software
owner, and dba as the OSASM group. To create separate system privilege groups to
separate administration privileges, complete group creation before you create the user,
as described in Section 2.4.5, "Creating Job Role Separation Operating System
Privileges Groups and Users," on page 2-10.
1.
To create a grid installation owner account where you have an existing system
privileges group (in this example, dba), whose members you want to have granted
the SYSASM privilege to administer the Oracle ASM instance, enter a command
similar to the following:
# /usr/sbin/useradd -u 1100 -g oinstall -G dba grid
In the preceding command:
■ The -u option specifies the user ID. Using this command flag is optional, as
you can allow the system to provide you with an automatically generated user
ID number. However, you must make note of the user ID number of the user
you create for Oracle Grid Infrastructure, as you require it later during
preinstallation, and you must have the same user ID number for this user on
all nodes of the cluster.
■ The -g option specifies the primary group, which must be the Oracle
Inventory group. For example: oinstall.
■ The -G option specifies the secondary group, which in this example is dba.
The secondary groups must include the OSASM group, whose members are
granted the SYSASM privilege to administer the Oracle ASM instance. You can
designate a unique group for the SYSASM system privileges, separate from
database administrator groups, or you can designate one group as the OSASM
and OSDBA group, so that members of that group are granted the SYSASM and
SYSDBA privileges to grant system privileges to administer both the Oracle
ASM instances and Oracle Database instances. In code examples, this group is
asmadmin.
If you are creating this user to own both Oracle Grid Infrastructure and an
Oracle Database installation, then this user must have the OSDBA for ASM
group as a secondary group. In code examples, this group name is asmdba.
Members of the OSDBA for ASM group are granted access to Oracle ASM
storage. You must create an OSDBA for ASM group if you plan to have
multiple databases accessing Oracle ASM storage, or you must use the same
group as the OSDBA for all databases, and for the OSDBA for ASM group.
Use the usermod command to change existing user id numbers and groups.
For example:
# id oracle
uid=501(oracle) gid=501(oracle) groups=501(oracle)
# /usr/sbin/usermod -u 1001 -g 1000 -G 1000,1001 oracle
# id oracle
uid=1001(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(oracle)
2.
Set the password of the user that will own Oracle Grid Infrastructure. For
example:
# passwd grid
3.
Repeat this procedure on all of the other nodes in the cluster.
Note: If necessary, contact your system administrator before using or
modifying an existing user.
Oracle recommends that you do not use the UID and GID defaults on
each node, as group and user IDs likely will be different on each node.
Instead, provide common assigned group and user IDs, and confirm
that they are unused on any node before you create or modify groups
and users.
2.4.4 Creating the Oracle Base Directory Path
The Oracle base directory for the grid installation owner is the location where
diagnostic and administrative logs, and other logs associated with Oracle ASM and
Oracle Clusterware are stored.
If you have created a path for the Oracle Clusterware home that is compliant with
Oracle Optimal Flexible Architecture (OFA) guidelines for Oracle software paths then
you do not need to create an Oracle base directory. When OUI finds an OFA-compliant
path, it creates the Oracle base directory in that path.
For OUI to recognize the path as an Oracle software path, it must be in the form
u[00-99]/app, and it must be writable by any member of the oraInventory (oinstall)
group. The OFA path for the Oracle base is /u01/app/user, where user is the name of
the software installation owner.
Oracle recommends that you create an Oracle Grid Infrastructure Grid home and
Oracle base homes manually, particularly if you have separate Oracle Grid
Infrastructure for a cluster and Oracle Database software owners, so that you can
separate log files.
For example:
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown grid:oinstall /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/grid
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
# chown -R grid:oinstall /u01
Note: Placing Oracle Grid Infrastructure for a Cluster binaries on a
cluster file system is not supported.
2.4.5 Creating Job Role Separation Operating System Privileges Groups and Users
A job role separation privileges configuration of Oracle ASM is a configuration with
groups and users that divide administrative access privileges to the Oracle ASM
installation from other administrative privileges users and groups associated with
other Oracle installations. Administrative privileges access is granted by membership
in separate operating system groups, and installation privileges are granted by using
different installation owners for each Oracle installation.
Note: This configuration is optional, to restrict user access to Oracle
software by responsibility areas for different administrator users.
If you prefer, you can allocate operating system user privileges so that you can use one
administrative user and one group for operating system authentication for all system
privileges on the storage and database tiers.
For example, you can designate the oracle user to be the installation owner for all
Oracle software, and designate oinstall to be the group whose members are granted
all system privileges for Oracle Clusterware, Oracle ASM, and all Oracle Databases on
the servers, and all privileges as installation owners. This group must also be the
Oracle Inventory group.
Oracle recommends that you use at least two groups: A system privileges group
whose members are granted administrative system privileges, and an installation
owner group (the oraInventory group) to provide separate installation privileges (the
OINSTALL privilege). To simplify using the defaults for Oracle tools such as Cluster
Verification Utility, if you do choose to use a single operating system group to grant all
system privileges and the right to write to the oraInventory, then that group name
should be oinstall.
■ Overview of Creating Job Role Separation Groups and Users
■ Creating Database Groups and Users with Job Role Separation
Note: To use a directory service, such as Network Information
Services (NIS), refer to your operating system documentation for
further information.
2.4.5.1 Overview of Creating Job Role Separation Groups and Users
This section provides an overview of how to create users and groups to use job role
separation. Log in as root to create these groups and users.
■ Users for Oracle Installations with Job Role Separation
■ Database Groups for Job Role Separation Installations
■ Oracle ASM Groups for Job Role Separation Installations
2.4.5.1.1 Users for Oracle Installations with Job Role Separation Oracle recommends that
you create the following operating system groups and users for all installations where
you create separate software installation owners:
One software owner to own each Oracle software product (typically, oracle, for the
database software owner user, and grid for Oracle Grid Infrastructure).
You must create at least one software owner the first time you install Oracle software
on the system. This user owns the Oracle binaries of the Oracle Grid Infrastructure
software, and you can also make this user the owner of the Oracle Database or Oracle
RAC binaries.
Oracle software owners must have the Oracle Inventory group as their primary group,
so that each Oracle software installation owner can write to the central inventory
(oraInventory), and so that OCR and Oracle Clusterware resource permissions are set
correctly. The database software owner must also have the OSDBA group and (if you
create it) the OSOPER group as secondary groups. In Oracle documentation, when
Oracle software owner users are referred to, they are called oracle users.
Oracle recommends that you create separate software owner users to own each Oracle
software installation. Oracle particularly recommends that you do this if you intend to
install multiple databases on the system.
In Oracle documentation, a user created to own the Oracle Grid Infrastructure binaries
is called the grid user. This user owns both the Oracle Clusterware and Oracle
Automatic Storage Management binaries.
See Also: Oracle Clusterware Administration and Deployment Guide
and Oracle Database Administrator's Guide for more information about
the OSDBA, OSASM and OSOPER groups and the SYSDBA, SYSASM and
SYSOPER privileges
2.4.5.1.2 Database Groups for Job Role Separation Installations The following operating
system groups and user are required if you are installing Oracle Database:
■ The OSDBA group (typically, dba)
You must create this group the first time you install Oracle Database software on
the system. This group identifies operating system user accounts that have
database administrative privileges (the SYSDBA privilege). If you do not create
separate OSDBA, OSOPER and OSASM groups for the Oracle ASM instance, then
operating system user accounts that have the SYSOPER and SYSASM privileges must
be members of this group. The name used for this group in Oracle code examples
is dba. If you do not designate a separate group as the OSASM group, then the
OSDBA group you define is also by default the OSASM group.
To specify a group name other than the default dba group, you must choose
the Advanced installation type to install the software, or start Oracle Universal
Installer (OUI) as a user that is not a member of this group. In this case, OUI
prompts you to specify the name of this group.
Members of the OSDBA group formerly were granted SYSASM privileges on Oracle
ASM instances, including mounting and dismounting disk groups. This privileges
grant is removed with Oracle Grid Infrastructure 11g Release 2 (11.2), if different
operating system groups are designated as the OSDBA and OSASM groups. If the
same group is used for both OSDBA and OSASM, then the privilege is retained.
■ The OSOPER group for Oracle Database (typically, oper)
This is an optional group. Create this group if you want a separate group of
operating system users to have a limited set of database administrative privileges
(the SYSOPER privilege). By default, members of the OSDBA group also have all
privileges granted by the SYSOPER privilege.
To use the OSOPER group to create a database administrator group with fewer
privileges than the default dba group, you must choose the Advanced
installation type to install the software or start OUI as a user that is not a member
of the dba group. In this case, OUI prompts you to specify the name of this group.
The usual name chosen for this group is oper.
2.4.5.1.3 Oracle ASM Groups for Job Role Separation Installations SYSASM is a new system
privilege that enables the separation of the Oracle ASM storage administration
privilege from SYSDBA. With Oracle Automatic Storage Management 11g Release 2
(11.2), members of the database OSDBA group are not granted SYSASM privileges,
unless the operating system group designated as the OSASM group is the same group
designated as the OSDBA group.
Select separate operating system groups as the operating system authentication groups
for privileges on Oracle ASM. Before you start OUI, create the following groups and
users for Oracle ASM:
■ The Oracle Automatic Storage Management Group (typically asmadmin)
This is a required group. Create this group as a separate group if you want to have
separate administration privilege groups for Oracle ASM and Oracle Database
administrators. In Oracle documentation, the operating system group whose
members are granted privileges is called the OSASM group, and in code examples,
where there is a group specifically created to grant this privilege, it is referred to as
asmadmin.
If you have multiple databases on your system, and use multiple OSDBA groups
so that you can provide separate SYSDBA privileges for each database, then you
should create a separate OSASM group, and use a separate user from the database
users to own the Oracle Grid Infrastructure installation (Oracle Clusterware and
Oracle ASM). Oracle ASM can support multiple databases.
Members of the OSASM group can use SQL to connect to an Oracle ASM instance
as SYSASM using operating system authentication. The SYSASM privileges permit
mounting and dismounting disk groups, and other storage administration tasks.
SYSASM privileges provide no access privileges on an RDBMS instance.
■ The Oracle ASM Database Administrator group (OSDBA for ASM, typically
asmdba)
Members of the Oracle ASM Database Administrator group (OSDBA for ASM) are
granted read and write access to files managed by Oracle ASM. The Oracle Grid
Infrastructure installation owner and all Oracle Database software owners must be
a member of this group, and all users with OSDBA membership on databases that
have access to the files managed by Oracle ASM must be members of the OSDBA
group for ASM.
■ Members of the Oracle ASM Operator Group (OSOPER for ASM, typically
asmoper)
This is an optional group. Create this group if you want a separate group of
operating system users to have a limited set of Oracle ASM instance
administrative privileges (the SYSOPER for ASM privilege), including starting up
and stopping the Oracle ASM instance. By default, members of the OSASM group
also have all privileges granted by the SYSOPER for ASM privilege.
To use the Oracle ASM Operator group to create an Oracle ASM administrator
group with fewer privileges than the default asmadmin group, you must
choose the Advanced installation type to install the software. In this case, OUI
prompts you to specify the name of this group. In code examples, this group is
asmoper.
2.4.5.2 Creating Database Groups and Users with Job Role Separation
The following sections describe how to create the required operating system users and
groups:
■ Creating the OSDBA Group to Prepare for Database Installations
■ Creating an OSOPER Group for Database Installations
■ Creating the OSASM Group
■ Creating the OSOPER for ASM Group
■ Creating the OSDBA for ASM Group for Database Access to Oracle ASM
■ When to Create the Oracle Software Owner User
■ Determining if an Oracle Software Owner User Exists
■ Creating an Oracle Software Owner User
■ Modifying an Existing Oracle Software Owner User
■ Creating Identical Database Users and Groups on Other Cluster Nodes
2.4.5.2.1 Creating the OSDBA Group to Prepare for Database Installations If you intend to
install Oracle Database to use with the Oracle Grid Infrastructure installation, then you
must create an OSDBA group in the following circumstances:
■ An OSDBA group does not exist; for example, if this is the first installation of
Oracle Database software on the system
■ An OSDBA group exists, but you want to give a different group of operating
system users database administrative privileges for a new Oracle Database
installation
If the OSDBA group does not exist, or if you require a new OSDBA group, then create
it as follows. Use the group name dba unless a group with that name already exists:
# /usr/sbin/groupadd -g 1031 dba
2.4.5.2.2 Creating an OSOPER Group for Database Installations Create an OSOPER group
only if you want to identify a group of operating system users with a limited set of
database administrative privileges (SYSOPER operator privileges). For most
installations, it is sufficient to create only the OSDBA group. To use an OSOPER group,
you must create it in the following circumstances:
■ If an OSOPER group does not exist; for example, if this is the first installation of
Oracle Database software on the system
■ If an OSOPER group exists, but you want to give a different group of operating
system users database operator privileges in a new Oracle installation
If you require a new OSOPER group, then create it as follows. Use the group name
oper unless a group with that name already exists.
# /usr/sbin/groupadd -g 1032 oper
2.4.5.2.3 Creating the OSASM Group If the OSASM group does not exist or if you require
a new OSASM group, then create it as follows. Use the group name asmadmin unless a
group with that name already exists:
# /usr/sbin/groupadd -g 1020 asmadmin
2.4.5.2.4 Creating the OSOPER for ASM Group Create an OSOPER for ASM group if you
want to identify a group of operating system users, such as database administrators,
whom you want to grant a limited set of Oracle ASM storage tier administrative
privileges, including the ability to start up and shut down the Oracle ASM storage. For
most installations, it is sufficient to create only the OSASM group, and provide that
group as the OSOPER for ASM group during the installation interview.
If you require a new OSOPER for ASM group, then create it as follows. In the
following, use the group name asmoper unless a group with that name already exists:
# /usr/sbin/groupadd -g 1022 asmoper
2.4.5.2.5 Creating the OSDBA for ASM Group for Database Access to Oracle ASM You must
create an OSDBA for ASM group to provide access to the Oracle ASM instance. This is
necessary if OSASM and OSDBA are different groups.
If the OSDBA for ASM group does not exist or if you require a new OSDBA for ASM
group, then create it as follows. Use the group name asmdba unless a group with that
name already exists:
# /usr/sbin/groupadd -g 1021 asmdba
2.4.5.2.6 When to Create the Oracle Software Owner User You must create an Oracle
software owner user in the following circumstances:
■ If an Oracle software owner user exists, but you want to use a different operating
system user, with different group membership, to give database administrative
privileges to those groups in a new Oracle Database installation
■ If you have created an Oracle software owner for Oracle Grid Infrastructure, such
as grid, and you want to create a separate Oracle software owner for Oracle
Database software, such as oracle.
2.4.5.2.7 Determining if an Oracle Software Owner User Exists To determine whether an
Oracle software owner user named oracle or grid exists, enter a command similar to
the following (in this case, to determine if oracle exists):
# id oracle
If the user exists, then the output from this command is similar to the following:
uid=501(oracle) gid=501(oinstall) groups=502(dba),503(oper)
Determine whether you want to use the existing user, or create another user. To use the
existing user, ensure that the user's primary group is the Oracle Inventory group and
that it is a member of the appropriate OSDBA and OSOPER groups. Refer to the
following sections for more information:
■ To modify an existing user, refer to Section 2.4.5.2.9, "Modifying an Existing Oracle
Software Owner User," on page 2-15.
■ To create a user, refer to the following section.
Note: If necessary, contact your system administrator before using or
modifying an existing user.
Oracle recommends that you do not use the UID and GID defaults on
each node, as group and user IDs likely will be different on each node.
Instead, provide common assigned group and user IDs, and confirm
that they are unused on any node before you create or modify groups
and users.
2.4.5.2.8 Creating an Oracle Software Owner User If the Oracle software owner user does
not exist, or if you require a new Oracle software owner user, then create it as follows.
Use the user name oracle unless a user with that name already exists.
1.
To create an oracle user, enter a command similar to the following:
# /usr/sbin/useradd -u 1101 -g oinstall -G dba,asmdba oracle
In the preceding command:
■ The -u option specifies the user ID. Using this command flag is optional, as
you can allow the system to provide you with an automatically generated user
ID number. However, you must make note of the oracle user ID number, as
you require it later during preinstallation.
■ The -g option specifies the primary group, which must be the Oracle
Inventory group--for example, oinstall
■ The -G option specifies the secondary groups, which must include the OSDBA
group, the OSDBA for ASM group, and, if required, the OSOPER for ASM
group. For example: dba, asmdba, or dba, asmdba, asmoper
2.
Set the password of the oracle user:
# passwd oracle
2.4.5.2.9 Modifying an Existing Oracle Software Owner User If the oracle user exists, but its
primary group is not oinstall, or it is not a member of the appropriate OSDBA or
OSDBA for ASM groups, then enter a command similar to the following to modify it.
Specify the primary group using the -g option and any required secondary group
using the -G option:
# /usr/sbin/usermod -g oinstall -G dba,asmdba oracle
Repeat this procedure on all of the other nodes in the cluster.
2.4.5.2.10 Creating Identical Database Users and Groups on Other Cluster Nodes Oracle
software owner users and the Oracle Inventory, OSDBA, and OSOPER groups must
exist and be identical on all cluster nodes. To create these identical users and groups,
you must identify the user ID and group IDs assigned them on the node where you
created them, and then create the user and groups with the same name and ID on the
other cluster nodes.
Note: You must complete the following procedures only if you are
using local users and groups. If you are using users and groups
defined in a directory service such as NIS, then they are already
identical on each cluster node.
Identifying Existing User and Group IDs
To determine the user ID (uid) of the grid or oracle users, and the group IDs (gid) of
the existing Oracle groups, follow these steps:
1.
Enter a command similar to the following (in this case, to determine a user ID for
the oracle user):
# id oracle
The output from this command is similar to the following:
uid=502(oracle) gid=501(oinstall) groups=502(dba),503(oper),506(asmdba)
2.
From the output, identify the user ID (uid) for the user and the group identities
(gid) for the groups to which it belongs. Ensure that these ID numbers are
identical on each node of the cluster. The user's primary group is listed after gid.
Secondary groups are listed after groups.
Creating Users and Groups on the Other Cluster Nodes
To create users and groups on the other cluster nodes, repeat the following procedure
on each node:
1.
Log in to the next cluster node as root.
2.
Enter commands similar to the following to create the oinstall, asmadmin, and
asmdba groups, and if required, the asmoper, dba, and oper groups. Use the -g
option to specify the correct gid for each group.
# /usr/sbin/groupadd -g 1000 oinstall
# /usr/sbin/groupadd -g 1020 asmadmin
# /usr/sbin/groupadd -g 1021 asmdba
# /usr/sbin/groupadd -g 1022 asmoper
# /usr/sbin/groupadd -g 1031 dba
# /usr/sbin/groupadd -g 1032 oper
Note: If the group already exists, then use the groupmod command to
modify it if necessary. If you cannot use the same group ID for a
particular group on this node, then view the /etc/group file on all
nodes to identify a group ID that is available on every node. You must
then change the group ID on all nodes to the same group ID.
3.
To create the oracle or Oracle Grid Infrastructure (grid) user, enter a command
similar to the following (in this example, to create the oracle user):
# /usr/sbin/useradd -u 1101 -g oinstall -G asmdba,dba oracle
In the preceding command:
– The -u option specifies the user ID, which must be the user ID that you
identified in the previous subsection
– The -g option specifies the primary group, which must be the Oracle
Inventory group, for example oinstall
– The -G option specifies the secondary groups, which can include the OSASM,
OSDBA, OSDBA for ASM, and OSOPER or OSOPER for ASM groups. For
example:
– A grid installation owner: OSASM (asmadmin), whose members are
granted the SYSASM privilege.
– An Oracle Database installation owner without SYSASM privileges access:
OSDBA (dba), OSDBA for ASM (asmdba), OSOPER for ASM (asmoper).
Note: If the user already exists, then use the usermod command to
modify it if necessary. If you cannot use the same user ID for the user
on every node, then view the /etc/passwd file on all nodes to identify
a user ID that is available on every node. You must then specify that
ID for the user on all of the nodes.
4.
Set the password of the user. For example:
# passwd oracle
5.
Complete user environment configuration tasks for each user as described in the
section Configuring Grid Infrastructure Software Owner User Environments on
page 2-39.
2.4.6 Example of Creating Standard Groups, Users, and Paths
The following is an example of how to create the Oracle Inventory group (oinstall),
and a single group (dba) as the OSDBA, OSASM and OSDBA for Oracle ASM groups.
In addition, it shows how to create the Oracle Grid Infrastructure software owner
(grid), and one Oracle Database owner (oracle) with correct group memberships.
This example also shows how to configure an Oracle base path compliant with OFA
structure with correct permissions:
# groupadd -g 1000 oinstall
# groupadd -g 1031 dba
# useradd -u 1100 -g oinstall -G dba grid
# useradd -u 1101 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir /u01/app/oracle
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
After running these commands, you have the following groups and users:
■ An Oracle central inventory group, or oraInventory group (oinstall). Members
  who have the central inventory group as their primary group are granted the
  OINSTALL permission to write to the oraInventory directory.
■ A single system privileges group that is used as the OSASM, OSDBA, OSDBA for
  ASM, and OSOPER for ASM group (dba), whose members are granted the
  SYSASM and SYSDBA privilege to administer Oracle Clusterware, Oracle ASM,
  and Oracle Database, and are granted SYSASM and OSOPER for ASM access to
  the Oracle ASM storage.
■ An Oracle grid installation for a cluster owner (grid), with the oraInventory group
  as its primary group, and with the OSASM group as the secondary group, with its
  Oracle base directory /u01/app/grid.
■ An Oracle Database owner (oracle) with the oraInventory group as its primary
  group, and the OSDBA group as its secondary group, with its Oracle base
  directory /u01/app/oracle.
■ /u01/app owned by grid:oinstall with 775 permissions before installation, and
  by root after the root.sh script is run during installation. This ownership and
  these permissions enable OUI to create the Oracle Inventory directory, in the path
  /u01/app/oraInventory.
■ /u01 owned by grid:oinstall before installation, and by root after the root.sh
  script is run during installation.
■ /u01/app/11.2.0/grid owned by grid:oinstall with 775 permissions. These
  permissions are required for installation, and are changed during the installation
  process.
■ /u01/app/grid owned by grid:oinstall with 775 permissions before installation,
  and 755 permissions after installation.
■ /u01/app/oracle owned by oracle:oinstall with 775 permissions.
2.4.7 Example of Creating Role-allocated Groups, Users, and Paths
The following is an example of how to create role-allocated groups and users that is
compliant with an Optimal Flexible Architecture (OFA) deployment:
# groupadd -g 1000 oinstall
# groupadd -g 1020 asmadmin
# groupadd -g 1021 asmdba
# groupadd -g 1031 dba1
# groupadd -g 1041 dba2
# groupadd -g 1022 asmoper
# useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
# useradd -u 1101 -g oinstall -G dba1,asmdba oracle1
# useradd -u 1102 -g oinstall -G dba2,asmdba oracle2
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle1
# chown oracle1:oinstall /u01/app/oracle1
# mkdir -p /u01/app/oracle2
# chown oracle2:oinstall /u01/app/oracle2
# chmod -R 775 /u01
After running these commands, you have the following groups and users:
■ An Oracle central inventory group, or oraInventory group (oinstall), whose
  members that have this group as their primary group are granted permissions to
  write to the oraInventory directory.
■ A separate OSASM group (asmadmin), whose members are granted the SYSASM
  privilege to administer Oracle Clusterware and Oracle ASM.
■ A separate OSDBA for ASM group (asmdba), whose members include grid,
  oracle1 and oracle2, and who are granted access to Oracle ASM.
■ A separate OSOPER for ASM group (asmoper), whose members are granted
  limited Oracle ASM administrator privileges, including the permissions to start
  and stop the Oracle ASM instance.
■ An Oracle grid installation for a cluster owner (grid), with the oraInventory group
  as its primary group, and with the OSASM (asmadmin) and OSDBA for ASM (asmdba)
  groups as secondary groups.
■ Two separate OSDBA groups for two different databases (dba1 and dba2) to
  establish separate SYSDBA privileges for each database.
■ Two Oracle Database software owners (oracle1 and oracle2), to divide
  ownership of the Oracle database binaries, with the oraInventory group as their
  primary group, and the OSDBA group for their database (dba1 or dba2) and the
  OSDBA for ASM group (asmdba) as their secondary groups.
■ An OFA-compliant mount point /u01 owned by grid:oinstall before
  installation.
■ An Oracle base for the grid installation owner /u01/app/grid owned by
  grid:oinstall with 775 permissions, and changed during the installation process
  to 755 permissions.
■ An Oracle base /u01/app/oracle1 owned by oracle1:oinstall with 775
  permissions.
■ An Oracle base /u01/app/oracle2 owned by oracle2:oinstall with 775
  permissions.
■ A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775
  (drwxrwxr-x) permissions. These permissions are required for installation, and
  are changed during the installation process to root:oinstall with 755
  permissions (drwxr-xr-x).
■ /u01/app/oraInventory. This path remains owned by grid:oinstall, to enable
  other Oracle software owners to write to the central inventory.
2.4.8 Creating the External Jobs User Account for HP-UX
On the HP-UX platform, if you intend to install Oracle Database or Oracle Real
Application Clusters on Oracle Grid Infrastructure, then use the following procedure
to create an external jobs user account to provide a low-privilege user with which
external jobs can be run:
1. Log in as root.
2. Create the unprivileged user extjob. For example:
# useradd extjob
2.5 Checking the Hardware Requirements
■ Select servers with the same instruction set architecture; running 32-bit and 64-bit
  Oracle software versions in the same cluster stack is not supported.
■ Ensure that the server is started with run level 3.
■ Ensure servers run the same operating system binary and the same processor
  architecture. Oracle Grid Infrastructure installations and Oracle Real Application
  Clusters (Oracle RAC) support servers with different hardware in the same cluster.
Each system must meet the following minimum hardware requirements:
■ At least 2.5 GB of RAM for Oracle Grid Infrastructure for a Cluster installations,
  including installations where you plan to install Oracle RAC.
■ At least 1024 x 768 display resolution, so that Oracle Universal Installer (OUI)
  displays correctly.
■ Swap space equivalent to the multiple of the available RAM, as indicated in the
  following table:
Table 2–1    Swap Space Required as a Multiple of RAM

Available RAM              Swap Space Required
Between 2 GB and 8 GB      2 times the size of RAM
Between 8 GB and 32 GB     1.5 times the size of RAM
More than 32 GB            32 GB
■ 7 GB of space in the /tmp directory
■ A minimum of 5.1 GB of space for the Oracle Grid Infrastructure for a cluster
  home (Grid home). This includes Oracle Clusterware and Oracle Automatic
  Storage Management (ASM) files and log files.
Note: If you intend to install Oracle Databases or an Oracle RAC
database on the cluster, be aware that the size of the /dev/shm mount
area on each server must be greater than the system global area (SGA)
and the program global area (PGA) of the databases on the servers.
Review expected SGA and PGA sizes with database administrators, to
ensure that you do not have to increase /dev/shm after databases are
installed on the cluster.
■ A minimum of 8.2 GB of space for Oracle Database binaries, if planned.
■ A minimum of 2.72 GB for Enterprise Edition data files.
■ Up to 10 GB of additional space in the Oracle base directory of the Grid
  Infrastructure owner for diagnostic collections generated by Trace File Analyzer
  and Collector.
If you are installing Oracle Database, then you require additional space, either on a file
system or in an Oracle Automatic Storage Management disk group, for the Fast
Recovery Area if you choose to configure automated database backups.
See Also:
Oracle Database Storage Administrator's Guide
To ensure that each system meets these requirements, follow these steps:
1. To determine the physical RAM size, enter the following command:
# /usr/contrib/bin/machinfo | grep -i Memory
If the size of the physical RAM installed in the system is less than the required
size, then you must install more memory before continuing.
2. To determine the size of the configured swap space, enter the following command:
# /usr/sbin/swapinfo -a
Note:
■ Oracle recommends that you take multiple values for the available
  RAM and swap space before finalizing a value. This is because the
  available RAM and swap space keep changing depending on the
  user interactions with the computer.
■ Contact your operating system vendor for swap space allocation
  guidance for your server. The vendor guidelines supersede the
  swap space requirements listed in this guide.
3. To determine the amount of disk space available in the /tmp directory, enter the
following command:
# bdf /tmp
If there is less than 7 GB of disk space available in the /tmp directory, then
complete one of the following steps:
■ Delete unnecessary files from the /tmp directory to make available the disk
  space required.
■ Extend the file system that contains the /tmp directory. If necessary, contact
  your system administrator for information about extending file systems.
4. To determine the amount of free disk space on the system, enter the following
command:
# bdf
The following table shows the approximate disk space requirements for software
files for each installation type:
Installation Type       Requirement for Software Files (GB)
Enterprise Edition      8.2
Standard Edition        7.1
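The checks in this section can be gathered into a short script that you run on each
node. The following is a minimal sketch using only the commands shown in this
section; /u01 is assumed (from the path examples earlier in this chapter) to be the
file system that will hold the Grid home, and interpreting the output against the
requirements above is left to you, because output formats differ between HP-UX
releases:
#!/bin/sh
# Minimal per-node preinstallation resource check (run as root); sketch only.
echo "== Physical memory (at least 2.5 GB required) =="
/usr/contrib/bin/machinfo | grep -i memory
echo "== Configured swap (compare against Table 2-1) =="
/usr/sbin/swapinfo -a
echo "== Free space in /tmp (at least 7 GB required) =="
bdf /tmp
echo "== Free space for the Grid home (at least 5.1 GB required) =="
bdf /u01    # assumes the Grid home will be placed under /u01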
2.6 Checking the Network Requirements
Review the following sections to check that you have the networking hardware and
internet protocol (IP) addresses required for an Oracle Grid Infrastructure for a cluster
installation:
■ Network Hardware Requirements
■ IP Address Requirements
■ Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
■ Multicast Requirements for Networks Used by Oracle Grid Infrastructure
■ DNS Configuration for Domain Delegation to Grid Naming Service
■ Grid Naming Service Configuration Example
■ Manual IP Address Configuration Example
■ Network Interface Configuration Options
■ Checking the Run Level and Name Service Cache Daemon
Note: For the most up-to-date information about supported network
protocols and hardware for Oracle RAC installations, refer to the
Certify pages on the My Oracle Support web site at the following
URL:
https://support.oracle.com
2.6.1 Network Hardware Requirements
The following is a list of requirements for network configuration:
■
Each node must have at least two network adapters or network interface cards
(NICs): one for the public network interface, and one for the private network
interface (the interconnect).
With Redundant Interconnect Usage, you should identify multiple interfaces to
use for the cluster private network, without the need of using bonding or other
technologies. This functionality is available starting with Oracle Database 11g
Release 2 (11.2.0.2).
When you define multiple interfaces, Oracle Clusterware creates from one to four
highly available IP (HAIP) addresses. Oracle RAC and Oracle ASM instances use
these interface addresses to ensure highly available, load-balanced interface
communication between nodes. The installer enables Redundant Interconnect
Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for
private network communication, providing load-balancing across the set of
interfaces you identify for the private network. If a private interconnect interface
fails or become non-communicative, then Oracle Clusterware transparently moves
the corresponding HAIP address to a remaining functional interface.
Note: If you define more than one interface for the private network
interfaces, be aware that Oracle Clusterware activates only one
interface at a time. However, if the active interface fails, then Oracle
Clusterware transitions the HAIP addresses configured to the failed
interface to one of the reserve interfaces in the defined set of private
interfaces.
When you upgrade a node to Oracle Grid Infrastructure 11g release 2 (11.2.0.2) and
later, the upgraded system uses your existing network classifications.
To configure multiple public interfaces, use a third-party technology for your
platform to aggregate the multiple public interfaces before you start installation,
and then select the single interface name for the combined interfaces as the public
interface. Oracle recommends that you do not identify multiple public interface
names during Oracle Grid Infrastructure installation. Note that if you configure
two network interfaces as public network interfaces in the cluster without using
an aggregation technology, the failure of one public interface on a node does not
result in automatic VIP failover to the other public interface.
Oracle recommends that you use the Redundant Interconnect Usage feature to
make use of multiple interfaces for the private network. However, you can also
use third-party technologies to provide redundancy for the private network.
■
If you install Oracle Clusterware using OUI, then the public interface names
associated with the network adapters for each network must be the same on all
nodes, and the private interface names associated with the network adaptors
should be the same on all nodes. This restriction does not apply if you use cloning,
either to create a new cluster, or to add nodes to an existing cluster.
For example: With a two-node cluster, you cannot configure network adapters on
node1 with lan0 as the public interface, but on node2 have lan1 as the public
interface. Public interface names must be the same, so you must configure lan0 as
public on both nodes. You should configure the private interfaces on the same
network adapters as well. If lan1 is the private interface for node1, then lan1
should be the private interface for node2.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about how to add nodes using cloning
■ For the public network, each network adapter must support TCP/IP.
■ For the private network, the interface must support the user datagram protocol
  (UDP) using high-speed network adapters and switches that support TCP/IP
  (minimum requirement 1 Gigabit Ethernet).
Note: UDP is the default interface protocol for Oracle RAC, and TCP
is the interconnect protocol for Oracle Clusterware. You must use a
switch for the interconnect. Oracle recommends that you use a
dedicated switch.
Oracle does not support token-rings or crossover cables for the
interconnect.
■
Each node’s private interface for interconnects must be on the same subnet, and
those subnets must connect to every node of the cluster. For example, if the private
interfaces have a subnet mask of 255.255.255.0, then your private network is in the
range 192.168.0.0--192.168.0.255, and your private addresses must be in the range
of 192.168.0.[0-255]. If the private interfaces have a subnet mask of 255.255.0.0,
then your private addresses can be in the range of 192.168.[0-255].[0-255].
For clusters using Redundant Interconnect Usage, each private interface should be
on a different subnet. However, each cluster member node must have an interface
on each private interconnect subnet, and these subnets must connect to every node
of the cluster. For example, you can have private networks on subnets 192.168.0
and 10.0.0, but each cluster member node must have an interface connected to the
192.168.0 and 10.0.0 subnets.
■
For the private network, the endpoints of all designated interconnect interfaces
must be completely reachable on the network. There should be no node that is not
connected to every private network interface. You can test if an interconnect
interface is reachable using ping.
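For example, to confirm from one node that another node's private interface
answers, a check similar to the following can be used, where node2-priv is a
placeholder for the private host name or address of the other node and three
probes are sent:
# /usr/sbin/ping node2-priv -n 3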
2.6.2 IP Address Requirements
Before starting the installation, you must have at least two interfaces configured on
each node: One for the private IP address and one for the public IP address.
You can configure IP addresses with one of the following options:
■ Dynamic IP address assignment using Oracle Grid Naming Service (GNS). If you
  select this option, then network administrators assign static IP address for the
  physical host name and dynamically allocated IPs for the Oracle Clusterware
  managed VIP addresses. In this case, IP addresses for the VIPs are assigned by a
  DHCP and resolved using a multicast domain name server configured as part of
  Oracle Clusterware within the cluster. If you plan to use GNS, then you must have
  the following:
  – A DHCP service running on the public network for the cluster
  – Enough addresses on the DHCP to provide 1 IP address for each node's virtual
    IP, and 3 IP addresses for the cluster used by the Single Client Access Name
    (SCAN) for the cluster
■ Static IP address assignment. If you select this option, then network administrators
  assign a fixed IP address for each physical host name in the cluster and for IPs for
  the Oracle Clusterware managed VIPs. In addition, domain name server (DNS)
  based static name resolution is used for each node. Selecting this option requires
  that you request network administration updates when you modify the cluster.
Note: Oracle recommends that you use a static host name for all
server node public hostnames.
Public IP addresses and virtual IP addresses must be in the same
subnet.
Oracle only supports DHCP-assigned networks for the default
network, not for any subsequent networks.
2.6.2.1 IP Address Requirements with Grid Naming Service
If you enable Grid Naming Service (GNS), then name resolution requests to the cluster
are delegated to the GNS, which is listening on the GNS virtual IP address. You define
this address in the DNS domain before installation. The DNS must be configured to
delegate resolution requests for cluster names (any names in the subdomain delegated
to the cluster) to the GNS. When a request comes to the domain, GNS processes the
requests and responds with the appropriate addresses for the name requested.
To use GNS, before installation the DNS administrator must establish DNS Lookup to
direct DNS resolution of a subdomain to the cluster. If you enable GNS, then you must
have a DHCP service on the public network that allows the cluster to dynamically
allocate the virtual IP addresses as required by the cluster.
Note: The following restrictions apply to vendor configurations on
your system:
■ If you have vendor clusterware installed, then you cannot choose
  to use GNS, because the vendor clusterware does not support it.
■ You cannot use GNS with another multicast DNS. If you want to
  use GNS, then disable any third party mDNS daemons on your
  system.
2.6.2.2 IP Address Requirements for Manual Configuration
If you do not enable GNS, then the public and virtual IP addresses for each node must
be static IP addresses, configured before installation for each node, but not currently in
use. Public and virtual IP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on interfaces
you identify as private during the installation interview.
The cluster must have the following addresses configured:
■ A public IP address for each node, with the following characteristics:
  – Static IP address
  – Configured before installation for each node, and resolvable to that node
    before installation
  – On the same subnet as all other public IP addresses, VIP addresses, and SCAN
    addresses
■ A virtual IP address for each node, with the following characteristics:
  – Static IP address
  – Configured before installation for each node, but not currently in use
  – On the same subnet as all other public IP addresses, VIP addresses, and SCAN
    addresses
■ A Single Client Access Name (SCAN) for the cluster, with the following
  characteristics:
  – Three static IP addresses configured on the domain name server (DNS) before
    installation so that the three IP addresses are associated with the name
    provided as the SCAN, and all three addresses are returned in random order
    by the DNS to the requestor
  – Configured before installation in the DNS to resolve to addresses that are not
    currently in use
  – Given a name that does not begin with a numeral
  – On the same subnet as all other public IP addresses, VIP addresses, and SCAN
    addresses
  – Conforms with the RFC 952 standard, which allows alphanumeric characters
    and hyphens ("-"), but does not allow underscores ("_")
■ A private IP address for each node, with the following characteristics:
  – Static IP address
  – Configured before installation, but on a separate, private network, with its
    own subnet, that is not resolvable except by other cluster member nodes
The SCAN is a name used to provide service access for clients to the cluster. Because
the SCAN is associated with the cluster as a whole, rather than to a particular node,
the SCAN makes it possible to add or remove nodes from the cluster without needing
to reconfigure clients. It also adds location independence for the databases, so that
client configuration does not have to depend on which nodes run a particular
database. Clients can continue to access the cluster in the same way as with previous
releases, but Oracle recommends that clients accessing the cluster use the SCAN.
Note: In a Typical installation, the SCAN you provide is also the
name of the cluster. In an advanced installation, the SCAN and
cluster name are entered in separate fields during installation.
Both the SCAN and the cluster name must be at least one character
long and no more than 15 characters in length, must be alphanumeric,
cannot begin with a numeral, and may contain hyphens (-).
You can use the nslookup command to confirm that the DNS is correctly associating
the SCAN with the addresses. For example:
$ nslookup mycluster-scan
Server:     dns.example.com
Address:    192.0.2.001

Name:       mycluster-scan.example.com
Address:    192.0.2.201
Name:       mycluster-scan.example.com
Address:    192.0.2.202
Name:       mycluster-scan.example.com
Address:    192.0.2.203
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
Note: Oracle strongly recommends that you do not configure SCAN
VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If
you use the hosts file to resolve SCANs, then you will only be able to
resolve to one IP address and you will have only one SCAN address.
Configuring SCANs in a DNS or a hosts file is the only supported
configuration. Configuring SCANs in a Network Information Service
(NIS) is not supported.
See Also: Appendix C.1.3, "Understanding Network Addresses" for
more information about network addresses
2.6.3 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Broadcast communications (ARP and UDP) must work properly across all the public
and private interfaces configured for use by Oracle Grid Infrastructure release 2
patchset 1 (11.2.0.2) and later releases.
The broadcast must work across any configured VLANs as used by the public or
private interfaces.
2.6.4 Multicast Requirements for Networks Used by Oracle Grid Infrastructure
With Oracle Grid Infrastructure release 2 (11.2), on each cluster member node, the
Oracle mDNS daemon uses multicasting on all interfaces to communicate with other
nodes in the cluster.
With Oracle Grid Infrastructure release 2 patchset 1 (11.2.0.2) and later releases,
multicasting is required on the private interconnect. For this reason, at a minimum,
you must enable multicasting for the cluster:
■
Across the broadcast domain as defined for the private interconnect
■
On the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24
You do not need to enable multicast communications across routers.
2.6.5 DNS Configuration for Domain Delegation to Grid Naming Service
If you plan to use GNS, then before Oracle Grid Infrastructure installation, you must
configure your domain name server (DNS) to send to GNS name resolution requests
for the subdomain GNS serves, which are the cluster member nodes. The following is
an overview of what needs to be done for domain delegation. Your actual procedure
may be different from this example.
Configure the DNS to send GNS name resolution requests using delegation:
1. In the DNS, create an entry for the GNS virtual IP address, where the address uses
the form gns-server.CLUSTERNAME.DOMAINNAME. For example, where the
cluster name is mycluster, and the domain name is example.com, and the IP
address is 192.0.2.1, create an entry similar to the following:
mycluster-gns.example.com    A    192.0.2.1
The address you provide must be routable.
2. Set up forwarding of the GNS subdomain to the GNS virtual IP address, so that
GNS resolves addresses to the GNS subdomain. To do this, create a BIND
configuration entry similar to the following for the delegated domain, where
cluster01.example.com is the subdomain you want to delegate:
cluster01.example.com    NS    mycluster-gns.example.com
3. When using GNS, you must configure resolv.conf on the nodes in the cluster
(or the file on your system that provides resolution information) to contain name
server entries that are resolvable to corporate DNS servers. The total timeout
period configured—a combination of options attempts (retries) and options
timeout (exponential backoff)—should be less than 30 seconds. For example,
where xxx.xxx.xxx.42 and xxx.xxx.xxx.15 are valid name server addresses in your
network, provide an entry similar to the following in /etc/resolv.conf:
options attempts: 2
options timeout: 1
search cluster01.example.com example.com
nameserver xxx.xxx.xxx.42
nameserver xxx.xxx.xxx.15
/etc/nsswitch.conf controls name service lookup order. In some system
configurations, the Network Information System (NIS) can cause problems with
Oracle SCAN address resolution. Oracle recommends that you place the nis entry
at the end of the search list. For example:
/etc/nsswitch.conf
hosts:    files    dns    nis
Note: Be aware that use of NIS is a frequent source of problems
when doing cable pull tests, as host name and username resolution
can fail.
2.6.6 Grid Naming Service Configuration Example
If you use GNS, then you must specify a static IP address for the GNS VIP address,
and delegate a subdomain to that static GNS IP address.
As nodes are added to the cluster, your organization's DHCP server can provide
addresses for these nodes dynamically. These addresses are then registered
automatically in GNS, and GNS provides resolution within the subdomain to cluster
node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with
GNS, no further configuration is required. Oracle Clusterware provides dynamic
network configuration as nodes are added to or removed from the cluster. The
following example is provided only for information.
With a two node cluster where you have defined the GNS VIP, after installation you
might have a configuration similar to the following for a two-node cluster, where the
cluster name is mycluster, the GNS parent domain is example.com, the subdomain is
grid.example.com, 192.0.2 in the IP addresses represent the cluster public IP address
network, and 192.168.0 represents the private IP address subnet:
Table 2–2    Grid Naming Service Example Network

Identity        Home Node  Host Node           Given Name                             Type     Address      Assigned By    Resolved By
GNS VIP         None       Selected by Oracle  mycluster-gns.example.com              virtual  192.0.2.1    Fixed by net   DNS
                           Clusterware                                                                      administrator
Node 1 Public   Node 1     node1               node1 (1)                              Public   192.0.2.101  Fixed          GNS
Node 1 VIP      Node 1     Selected by Oracle  node1-vip                              Virtual  192.0.2.104  DHCP           GNS
                           Clusterware
Node 1 Private  Node 1     node1               node1-priv                             Private  192.168.0.1  Fixed or DHCP  GNS
Node 2 Public   Node 2     node2               node2 (1)                              Public   192.0.2.102  Fixed          GNS
Node 2 VIP      Node 2     Selected by Oracle  node2-vip                              Virtual  192.0.2.105  DHCP           GNS
                           Clusterware
Node 2 Private  Node 2     node2               node2-priv                             Private  192.168.0.2  Fixed or DHCP  GNS
SCAN VIP 1      none       Selected by Oracle  mycluster-scan.cluster01.example.com   virtual  192.0.2.201  DHCP           GNS
                           Clusterware
SCAN VIP 2      none       Selected by Oracle  mycluster-scan.cluster01.example.com   virtual  192.0.2.202  DHCP           GNS
                           Clusterware
SCAN VIP 3      none       Selected by Oracle  mycluster-scan.cluster01.example.com   virtual  192.0.2.203  DHCP           GNS
                           Clusterware

(1) Node host names may resolve to multiple addresses, including VIP addresses currently running on that host.
2.6.7 Manual IP Address Configuration Example
If you choose not to use GNS, then before installation you must configure public,
virtual, and private IP addresses. Also, check that the default gateway can be accessed
by a ping command. To find the default gateway, use the route command, as
described in your operating system's help utility.
For example, with a two node cluster where each node has one public and one private
interface, and you have defined a SCAN domain address to resolve on your DNS to
one of three IP addresses, you might have the configuration shown in the following
table for your network interfaces:
Table 2–3    Manual Network Configuration Example

Identity        Home Node  Host Node           Given Name      Type     Address      Assigned By  Resolved By
Node 1 Public   Node 1     node1               node1 (1)       Public   192.0.2.101  Fixed        DNS
Node 1 VIP      Node 1     Selected by Oracle  node1-vip       Virtual  192.0.2.104  Fixed        DNS and hosts file
                           Clusterware
Node 1 Private  Node 1     node1               node1-priv      Private  192.168.0.1  Fixed        DNS and hosts file,
                                                                                                  or none
Node 2 Public   Node 2     node2               node2 (1)       Public   192.0.2.102  Fixed        DNS
Node 2 VIP      Node 2     Selected by Oracle  node2-vip       Virtual  192.0.2.105  Fixed        DNS and hosts file
                           Clusterware
Node 2 Private  Node 2     node2               node2-priv      Private  192.168.0.2  Fixed        DNS and hosts file,
                                                                                                  or none
SCAN VIP 1      none       Selected by Oracle  mycluster-scan  virtual  192.0.2.201  Fixed        DNS
                           Clusterware
SCAN VIP 2      none       Selected by Oracle  mycluster-scan  virtual  192.0.2.202  Fixed        DNS
                           Clusterware
SCAN VIP 3      none       Selected by Oracle  mycluster-scan  virtual  192.0.2.203  Fixed        DNS
                           Clusterware

(1) Node host names may resolve to multiple addresses.
You do not need to provide a private name for the interconnect. If you want name
resolution for the interconnect, then you can configure private IP names in the hosts
file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the
interface defined during installation as the private interface (lan1, for example), and to
the subnet used for the private subnet.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so
they are not fixed to a particular node. To enable VIP failover, the configuration shown
in the preceding table defines the SCAN addresses and the public and VIP addresses
of both nodes on the same subnet, 192.0.2.
Note: All host names must conform to the RFC 952 standard, which
permits alphanumeric characters. Host names using underscores ("_")
are not allowed.
2.6.8 Network Interface Configuration Options
The precise configuration you choose for your network depends on the size and use of
the cluster you want to configure, and the level of availability you require.
If certified Network-attached Storage (NAS) is used for Oracle RAC and this storage is
connected through Ethernet-based networks, then you must have a third network
interface for NAS I/O. Failing to provide three separate interfaces in this case can
cause performance and stability problems under load.
2.6.9 Checking the Run Level and Name Service Cache Daemon
To allow Oracle Clusterware to better tolerate network failures with NAS devices or
NFS mounts, enable the Name Service Cache Daemon (nscd). The nscd provides a
caching mechanism for the most common name service requests. It is automatically
started when the system starts in a multiuser state. Oracle software requires that the
server is started with multiuser run level (3), which is the default for HP-UX.
To check to see if the server is set to 3, enter the command who -r. For example:
# who -r
.       run-level 3  Jan  4 14:04    3    0    S
Refer to your operating system documentation if you must change the run level.
To check to see if the password and group caching daemon (pwgrd) is running, enter
the following command:
# ps -aef | grep pwgrd
2.7 Identifying Software Requirements
Depending on the products that you intend to install, verify that the following
operating system software is installed on the system. Note that patch requirements are
minimum required patch versions, and that earlier patch numbers are rolled into later
patch updates.
To check software requirements, refer to Section 2.8, "Checking the
Software Requirements."
Requirements listed here are current as of the initial release date. To obtain the most
current information about kernel requirements, refer to the online version on the
Oracle Technology Network (OTN) at the following URL:
http://www.oracle.com/technetwork/indexes/documentation/index.html
OUI performs checks on your system to verify that it meets the listed operating system
package requirements. To ensure that these checks complete successfully, verify the
requirements before you start OUI.
Note: Oracle does not support running different operating system
versions on cluster members, unless an operating system is being
upgraded. You cannot run different operating system version binaries
on members of the same cluster, even if each operating system is
supported.
The following is the list of supported HP-UX platforms and requirements at the time
of release:
■
Software Requirements List for HP-UX Itanium Platforms
■
Software Requirements List for HP-UX PA-RISC (64-bit)
Note: At the time of this release, the 11.2.0.3 release of Oracle Grid
Infrastructure 11g Release 2 is supported only on HP-UX Itanium.
2.7.1 Software Requirements List for HP-UX Itanium Platforms
Table 2–4    HP-UX Itanium Requirements

Operating system
  ■ HP-UX 11iV3 patch Bundle Sep/ 2008 (B.11.31.0809.326a)

HP-UX 11.31 packages and bundles
  PHCO_43503 - 11.31 diskowner(1M) cumulative patch
  PHCO_41479 - 11.31 diskowner patch
  PHKL_38038 - VM cumulative patch
  PHKL_38938 - 11.31 SCSI cumulative I/O patch
  PHKL_40941 - Scheduler patch : post wait hang
  PHSS_36354 - 11.31 assembler patch
  PHSS_37042 - 11.31 hppac (packed decimal)
  PHSS_37959 - Libcl patch for alternate stack issue fix (QXCR1000818011)
  PHSS_39094 - 11.31 linker + fdp cumulative patch
  PHSS_39100 - 11.31 Math Library Cumulative Patch
  PHSS_39102 - 11.31 Integrity Unwind Library
  PHSS_38141 - 11.31 aC++ Runtime

Oracle Clusterware
  All HP-UX 11.31 installations
  No additional requirements for Oracle Clusterware.
  At the time of this release, Hyper Messaging Protocol (HMP) is not supported.
  HP Serviceguard A.11.20 is supported with Oracle Clusterware 11g Release 2 (11.2).
  Note: HP Serviceguard is optional. It is required only if you want to use shared
  logical volumes for Oracle Clusterware or database files.

HP-UX Logical Volume Manager (LVM)
  PHCO_41479 (or later) 11.31 character device files control patch for HP-UX on
  Itanium

VERITAS File System
  PHKL_39773 11.31 VRTS 5.0 GARP6 VRTSvxfs Kernel Patch
  Note: The VERITAS file system is optional. This patch is required only if you want
  to use a VERITAS File System 5.0.

C/C++ Compiler Patches for Pro*C/C++, Oracle Call Interface, Oracle C++ Call
Interface, and Oracle XML Developer's Kit (XDK) with Oracle Database 11g
release 2 (11.2)
  C / C++ compiler
  ■ HP C/aC++ A.06.20 (Swlist Bundle - C.11.31.04) - September 2008
  C Compiler Patches
  ■ PHSS_39824 11.31 HP C/aC++ Compiler (A.06.23)
  ■ PHSS_39826 11.31 u2comp/be/plug-in (C.06.23)
  Gcc Compiler
  ■ Gcc 4.2.3
  To use ODBC, you must also install gcc 4.2.3 or later.

Oracle JDBC/OCI Drivers
  ■ HPUX JDK 6.0.05
  ■ HPUX JDK 5.0.15
  Note: For JDBC/OCI, install the JDK with the JNDI extension with the Oracle Java
  Database Connectivity and Oracle Call Interface drivers. However, these are not
  mandatory for the database installation. JDK 1.5 is installed with this release.

Oracle Messaging Gateway
  Oracle Messaging Gateway supports the integration of Oracle Streams Advanced
  Queuing (AQ) with the following software:
  IBM MQ Series V. 6.0, client and server
  ■ MQSERIES.MQM-CL-HPUX
  ■ MQSERIES.MQM-SERVER
  TIBCO Rendezvous 7.2

Oracle ODBC
  The Oracle ODBC driver on HP-UX Itanium is certified with ODBC Driver Manager
  2.2.14. You can download and install the Driver Manager from the following URL:
  http://www.unixodbc.org
  You do not require ODBC Driver Manager to install Oracle Databases.
  To use ODBC, you must also install gcc 4.2.3 or later.

Perl
  Perl 5.8.8

Programming Software (optional)
  Pro*Cobol - Micro Focus Server Express 5.1
  Pro*FORTRAN - HP FORTRAN/90 - Sep 2008 release

VERITAS File System
  PHKL_39773: 11.31 VRTS 5.0 GARP6 VRTSvxfs Kernel Patch
  Note: This patch is required only if you want to use a VERITAS File System 5.0.

SSH
  Oracle Clusterware requires SSH. The required SSH software is the default SSH
  shipped with your operating system.
2.7.2 Software Requirements List for HP-UX PA-RISC (64-bit)
At the time of this release, the 11.2.0.4 release of Oracle Grid Infrastructure 11g Release
2 is available only for HP-UX Itanium. The following requirements are for the 11.2.0.3
release of Oracle Grid Infrastructure 11g Release 2 for HP-UX PA-RISC systems.
Table 2–5    HP-UX PA-RISC (64-bit) Requirements

Operating system
  ■ HP-UX 11iV3 (11.31) patch Bundle Sep/ 2008 (B.11.31.0809.326a) or higher

HP-UX 11.31 packages and bundles
  PHKL_39773 (11.31 VRTS 5.0 GARP6 VRTSvxfs Kernel Patch. This patch is needed
  only when VxFS 5.0 is installed. The patch has no other dependencies. It is
  included in the September 2009 update of HP-UX 11.31.)
  PHCO_40381 - 11.31 Disk Owner Patch
  PHKL_38038 - VM patch - hot patching/Core file creation directory
  PHKL_38938 - 11.31 SCSI cumulative I/O patch
  PHKL_39351 - 11.31 scheduler cumulative patch
  PHSS_37959 - Libcl patch for alternate stack issue fix (QXCR1000818011)
  PHSS_38141 - 11.31 aC++ Runtime
  PHSS_39094 - 11.31 linker + fdp cumulative patch
  PHSS_39080 - PA32 program startup code for PBO instrumented builds
  PHSS_47276 - core dump from u_get_previous_frame_x

C/C++ Compiler Patches for Pro*C/C++, Oracle Call Interface, Oracle C++ Call
Interface, and Oracle XML Developer's Kit (XDK) with Oracle Database 11g
release 2 (11.2)
  C++ compiler
  ■ aC++ A.03.85 (Swlist Bundle - C.11.31.04) - September 2008
  C Compiler
  ■ HP ANSI C B.11.31.04 (Swlist Bundle - C.11.31.04) - September 2008
  C Compiler Patches
  ■ PHSS_39080
  ■ PHSS_39824
  ■ PHSS_39826
  Gcc Compiler
  ■ Gcc 4.2.3

Programming Software (optional)
  Micro Focus Server Express 5.1
  HP Fortran/90 - Sep 2008 release

Oracle Clusterware
  All HP-UX 11.31 installations
  No additional requirements for Oracle Clusterware.
  At the time of this release, Hyper Messaging Protocol (HMP) is not supported.
  HP Serviceguard A.11.20 is supported with Oracle Clusterware 11g Release 2 (11.2).
  Note: HP Serviceguard is optional. It is required only if you want to use shared
  logical volumes for Oracle Clusterware or database files.

Oracle Messaging Gateway (optional)
  Oracle Messaging Gateway supports the integration of Oracle Streams Advanced
  Queuing (AQ) with the following software:
  IBM MQ Series V. 6.0, client and server
  ■ MQSERIES.MQM-CL-HPUX
  ■ MQSERIES.MQM-SERVER
  TIBCO Rendezvous 7.2

Oracle ODBC
  At the time of this release, Oracle ODBC is not supported for HP-UX on PA-RISC.

Oracle JDBC/OCI Drivers
  ■ HPUX JDK 6.0.05
  ■ HPUX JDK 5.0.15

VERITAS File System
  PHKL_39773: 11.31 VRTS 5.0 GARP6 VRTSvxfs Kernel Patch
  Note: This patch has no other dependencies. It is included in the September 2009
  update of HP-UX 11.31. It is needed only when VxFS 5.0 is installed.

SSH
  Oracle Clusterware requires SSH. The required SSH software is the default SSH
  shipped with your operating system.
2.8 Checking the Software Requirements
To ensure that the system meets these requirements, follow these steps:
1. To determine which version of HP-UX is installed, enter the following command:
# uname -a
"HP-UX hostname B.11.31 U ia64 4156074294 unlimited-user license"
In this example, the version of HP-UX 11 is 11.31 and the processor is Itanium.
2. Verify that the system meets the minimum patch bundle requirements using the
following command:
# /usr/sbin/swlist -l bundle |grep QPK
The QPK (Quality Pack) bundles have version numbers of the form
B.11.31.0809.326a (for the September 2008 release), B.11.31.0903.334a (for the March
2009 release), and so on. If a required bundle, product, or fileset is not installed,
then you must install it. Refer to your operating system or software
documentation for information about installing products.
Note: There may be more recent versions of the patches listed in the
preceding paragraph that are installed on the system. If a listed patch
is not installed, then determine if a more recent version is installed
before installing the version listed.
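For example, to check whether an individual patch from the requirements tables
(here PHSS_39094) is already present, a check similar to the following can be used;
the swlist level that displays patches can vary with how the patches were applied,
so also review the superseding patch chain using your HP-UX documentation:
# /usr/sbin/swlist -l patch | grep -i PHSS_39094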
3. If a required patch is not installed, then download it from the following web site
and install it:
http://itresourcecenter.hp.com
If the web site shows a more recent version of the patch, then download and install
that version.
4. If you require a CSD for WebSphere MQ, then refer to the following web site for
download and installation information:
http://www-01.ibm.com/software/
Note: There may be more recent versions of the patches listed
installed on the system. If a listed patch is not installed, then
determine whether a more recent version is installed before
installing the version listed.
2.9 Network Time Protocol Setting
Oracle Clusterware requires the same time zone setting on all cluster nodes. During
installation, the installation process picks up the time zone setting of the Grid
installation owner on the node where OUI runs, and uses that on all nodes as the
default TZ setting for all processes managed by Oracle Clusterware. This default is
used for databases, Oracle ASM, and any other managed processes.
You have two options for time synchronization: an operating system configured
network time protocol (NTP), or Oracle Cluster Time Synchronization Service. Oracle
Cluster Time Synchronization Service is designed for organizations whose cluster
servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time
Synchronization daemon (ctssd) starts in observer mode. If you do not have NTP
daemons, then ctssd starts in active mode and synchronizes time among cluster
members without contacting an external time server.
Note: Before starting the installation of the Oracle Grid
Infrastructure, Oracle recommends that you ensure the clocks on all
nodes are set to the same time.
If you have NTP daemons on your server but you cannot configure them to
synchronize time with a time server, and you want to use Cluster Time
Synchronization Service to provide synchronization service in the cluster, then
deactivate and deinstall the Network Time Protocol (NTP).
To deactivate the NTP service, you must stop the existing ntpd service, disable it from
the initialization sequences and remove the ntp.conf file. To complete these steps, run
the following commands as the root user:
# /sbin/init.d/xntpd stop
# rm /etc/ntp.conf
or, mv /etc/ntp.conf to /etc/ntp.conf.org.
When the installer finds that the NTP protocol is not active, the Cluster Time
Synchronization Service is installed in active mode and synchronizes the time across
the nodes. If NTP is found configured, then the Cluster Time Synchronization Service
is started in observer mode, and no active time synchronization is performed by
Oracle Clusterware within the cluster.
To confirm that ctssd is active after installation, enter the following command as the
Grid installation owner:
$ crsctl check ctss
If you are using NTP, and you prefer to continue using it instead of Cluster Time
Synchronization Service, then you must modify the NTP initialization file to enable
slewing, which prevents time from being adjusted backward. Restart the network time
protocol daemon after you complete this task.
To do this on HP-UX, open the file /etc/rc.config.d/netdaemons using a text editor,
and add the line export XNTPD_ARGS="-x" to the file. After you add the XNTPD_ARGS
line, load the setting by shutting down and restarting xntpd using the commands
/sbin/init.d/xntpd stop and /sbin/init.d/xntpd start.
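The following is a minimal sketch of those steps, run as root; it appends the
argument with echo rather than a text editor, which assumes that an XNTPD_ARGS
line is not already present in /etc/rc.config.d/netdaemons:
# echo 'export XNTPD_ARGS="-x"' >> /etc/rc.config.d/netdaemons
# /sbin/init.d/xntpd stop
# /sbin/init.d/xntpd start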
2.10 Enabling Intelligent Platform Management Interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces
to computer hardware and firmware that system administrators can use to monitor
system health and manage the system. With Oracle 11g release 2, Oracle Clusterware
can integrate IPMI to provide failure isolation support and to ensure cluster integrity.
Oracle Clusterware does not currently support the native IPMI driver on HP-UX, so
OUI does not collect the administrator credentials, and CSS is unable to obtain the IP
address. You must configure failure isolation manually by configuring the BMC with a
static IP address before installation, and using crsctl to store the IP address and IPMI
credentials after installation.
This section contains the following topics:
■
Requirements for Enabling IPMI
■
Configuring the IPMI Management Network
■
Configuring the BMC
See Also: Oracle Clusterware Administration and Deployment Guide for
information about how to configure IPMI after installation
2.10.1 Requirements for Enabling IPMI
You must have the following hardware and software configured to enable cluster
nodes to be managed with IPMI:
■ Each cluster member node requires a Baseboard Management Controller (BMC)
  running firmware compatible with IPMI version 1.5 or greater, which supports
  IPMI over LANs, and configured for remote control using LAN.
■ The cluster requires a management network for IPMI. This can be a shared
  network, but Oracle recommends that you configure a dedicated network.
■ Each cluster member node's port used by BMC must be connected to the IPMI
  management network.
■ Each cluster member must be connected to the management network.
■ Some server platforms put their network interfaces into a power saving mode
  when they are powered off. In this case, they may operate only at a lower link
  speed (for example, 100 MB, instead of 1 GB). For these platforms, the network
  switch port to which the BMC is connected must be able to auto-negotiate down to
  the lower speed, or IPMI will not function properly.
2.10.2 Configuring the IPMI Management Network
On HP-UX platforms, the BMC shares configuration information with the
management processor (iLO). For Oracle Clusterware, you must configure the
iLO/BMC for static IP addresses. Configuring the BMC with dynamic addresses
(DHCP) is not supported on HP-UX.
Note: If you configure IPMI, and you use Grid Naming Service
(GNS), you still must configure separate addresses for the IPMI
interfaces. As the IPMI adapter is not seen directly by the host, the
IPMI adapter is not visible to GNS as an address on the host.
2.10.3 Configuring the BMC
On each node, complete the following steps to configure the BMC to support
IPMI-based node fencing:
■
Configure a static IP address for the BMC.
■
Set a password for the null (noname) account for the BMC.
On HP-UX, the BMC and the iLO share network configuration. They have the same IP
address, the same hardware MAC address, and the same default gateway address.
If your iLO processor is already configured for network access using a static address,
then you have already established the required BMC network configuration. If you
have not set up a static address for the iLO, then you must set a static address, and
note the address so that you can enter it into the Oracle Clusterware local registry after
installation.
You must also set up a password for the null (noname) user account, as this is the single
account that the BMC uses. Administration accounts in the iLO are unrelated to the
BMC. For security reasons, Oracle requires that you set a password for the BMC
account.
Refer to your HP-UX documentation for more information about how to configure the
BMC. Ensure that the BMC is configured on each cluster member node.
2.10.3.1 Configuring the iLO Processor on HP-UX
To configure IPMI in the iLO processor on HP platforms, and to set a password for the
null (noname user), complete the following procedure:
1. Log in to the iLO web interface, and configure the required network settings for
the iLO and BMC under Administration, then Network Settings to obtain the IP
address, and to check the netmask and default gateway.
2. Start a terminal session from a device with network connectivity to the BMC of the
node to configure.
3. From the terminal session, set the IPMI password for the anonymous user (noname)
over the network, using an IPMI administration tool, such as ipmitool from a
client or server connected to the node whose user password you want to change.
For example, where ipmiaddr is the IPMI address, and password is the password:
% ipmitool -H ipmiaddr -U "" user set password 1 "password"
Note that in this example, the anonymous username is being provided explicitly
using -U "" for clarity, but it is implied if the username argument is missing. This
command prompts for the current password, which can be the initial null
password. If the password is successfully changed, then the command prints an
error similar to: "Close session command failed." This message is printed because
the command attempts to terminate the IPMI network session using the previous
password.
Note: It is not possible to set the password for the IPMI
administrator account using the Local Accounts web interface page.
4. After you configure the BMC on each cluster member node, and have completed
Oracle Grid Infrastructure installation, you must store the IPMI administrator
credentials and the BMC static IP address in the Oracle Local Registry (OLR) on
each cluster member node. Use crsctl to do this, as described in Section 5.2.2,
"Configure IPMI-based Failure Isolation Using Crsctl." However, when you store
the IPMI credentials in the OLR, you must have the anonymous user specified
explicitly, or a parsing error will be reported:
% crsctl set css ipmiadmin ""
When prompted, provide the password you set with the IPMI administrator tool.
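For example, a sketch of the post-installation commands run as the Grid
installation owner on each node, where 192.0.2.244 is a placeholder for the BMC
static IP address recorded earlier (the crsctl set css ipmiaddr command is
described in Oracle Clusterware Administration and Deployment Guide):
% crsctl set css ipmiadmin ""
% crsctl set css ipmiaddr 192.0.2.244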
2.11 Automatic SSH Configuration During Installation
To install Oracle software, Secure Shell (SSH) connectivity should be set up between all
cluster member nodes. OUI uses the ssh and scp commands during installation to run
remote commands on and copy files to the other cluster nodes. You must configure
SSH so that these commands do not prompt for a password.
Note: SSH is used by Oracle configuration assistants for
configuration operations from local to remote nodes. It is also used by
Enterprise Manager.
You can configure SSH from the OUI interface during installation for the user account
running the installation. The automatic configuration creates passwordless SSH
connectivity between all cluster member nodes. Oracle recommends that you use the
automatic procedure if possible.
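Whether you use the automatic configuration or configure SSH manually, a quick
check similar to the following, run as the installation owner from each node,
should print the remote date without prompting for a password or passphrase
(node2 is a placeholder host name):
$ ssh node2 date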
To enable the script to run, you must remove stty commands from the profiles of any
Oracle software installation owners, and remove other security measures that are
triggered during a login, and that generate messages to the terminal. These messages,
mail checks, and other displays prevent Oracle software installation owners from
using the SSH configuration script that is built into the Oracle Universal Installer. If
they are not disabled, then SSH must be configured manually before an installation
can be run.
In rare cases, Oracle Clusterware installation may fail during the "AttachHome"
operation when the remote node closes the SSH connection. To avoid this problem, set
the following parameter in the SSH daemon configuration file
/opt/ssh/etc/sshd_config on all cluster nodes to set the timeout wait to unlimited:
LoginGraceTime 0
See Also: Section 2.12.5, "Preventing Installation Errors Caused by
Terminal Output Commands" for information about how to remove
stty commands in user profiles
2.12 Configuring Grid Infrastructure Software Owner User Environments
You run the installer software with the Oracle Grid Infrastructure installation owner
user account (oracle or grid). However, before you start the installer, you must
configure the environment of the installation owner user account. Also, create other
required Oracle software owners, if needed.
This section contains the following topics:
■
Environment Requirements for Oracle Grid Infrastructure Software Owner
■
Procedure for Configuring Oracle Software Owner Environments
■
Configuring Kernel Parameters on HP-UX Systems
■
Setting Display and X11 Forwarding Configuration
■
Preventing Installation Errors Caused by Terminal Output Commands
2.12.1 Environment Requirements for Oracle Grid Infrastructure Software Owner
You must make the following changes to configure the Oracle Grid Infrastructure
software owner environment:
■ Set the installation software owner user (grid, oracle) default file mode creation
  mask (umask) to 022 in the shell startup file. Setting the mask to 022 ensures that
  the user performing the software installation creates files with 644 permissions.
■ Set ulimit settings for file descriptors and processes for the installation software
  owner (grid, oracle).
■ Set the software owner's DISPLAY environment variable in preparation for the
  Oracle Grid Infrastructure installation.
Caution: Use shell programs supported by your operating system
vendor. If you use a shell program that is not supported by your
operating system, then you can encounter errors during installation.
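For example, a minimal sketch of the corresponding Bourne or Korn shell startup
entries (.profile) for the grid or oracle user; the file descriptor value shown is
only an illustrative placeholder, not an Oracle-mandated setting, so use the limits
appropriate for your site and shell:
# Sample .profile fragment for an Oracle software owner (illustrative values)
umask 022              # new files are created with 644 permissions during installation
ulimit -n 4096         # example file descriptor limit; adjust to your site standards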
2.12.2 Procedure for Configuring Oracle Software Owner Environments
To set the Oracle software owners' environments, follow these steps, for each software
owner (grid, oracle):
1.
Start a new terminal session; for example, start an X terminal (xterm).
2.
Enter the following command to ensure that X Window applications can display
on this system:
$ xhost + hostname
The hostname is the name of the local host.
3.
If you are not already logged in to the system where you want to install the
software, then log in to that system as the software owner user.
4.
If you are not logged in as the user, then switch to the software owner user you are
configuring. For example, with the grid user:
$ su - grid
5.
To determine the default shell for the user, enter the following command:
$ echo $SHELL
6.
Open the user's shell startup file in any text editor:
■
Bourne shell (sh) or Korn shell (ksh):
$ vi .profile
■
C shell (csh or tcsh):
% vi .login
7.
If the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variable is set in the
file, then remove the appropriate lines from the file.
8.
Save the file, and exit from the text editor.
9.
To run the shell startup script, enter one of the following commands:
■
Bourne or Korn shell:
$ . ./.profile
■
C shell:
% source ./.login
10. If you are not installing the software on the local system, then enter a command
similar to the following to direct X applications to display on the local system:
■
Bourne or Korn shell:
$ DISPLAY=local_host:0.0 ; export DISPLAY
■
C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the system to use to
display OUI (your workstation or PC).
11. If you determined that the /tmp directory has less than 7 GB of free disk space,
then identify a file system with at least 7 GB of free space and set the TEMP and
TMPDIR environment variables to specify a temporary directory on this file system:
Note: You cannot use a shared file system as the location of the
temporary file directory (typically /tmp) for Oracle RAC installation. If
you place /tmp on a shared file system, then the installation fails.
a.
Use the bdf command to identify a suitable file system with sufficient free
space.
b.
If necessary, enter commands similar to the following to create a temporary
directory on the file system that you identified, and set the appropriate
permissions on the directory:
$ su - root
# mkdir /mount_point/tmp
# chmod a+wr /mount_point/tmp
# exit
c.
Enter commands similar to the following to set the TEMP and TMPDIR
environment variables:
*
Bourne or Korn shell:
$ TEMP=/mount_point/tmp
$ TMPDIR=/mount_point/tmp
$ export TEMP TMPDIR
*
C shell:
% setenv TEMP /mount_point/tmp
% setenv TMPDIR /mount_point/tmp
12. Enter the following command to ensure that the ORACLE_HOME and TNS_
ADMIN environment variables are not set:
■
Bourne or Korn shell:
$ unset ORACLE_HOME
$ unset TNS_ADMIN
■
C shell:
% unsetenv ORACLE_HOME
% unsetenv TNS_ADMIN
Note: If the ORACLE_HOME environment variable is set, the
Installer uses the value that it specifies as the default path for the
Oracle home directory. However, if you set the ORACLE_BASE
environment variable, Oracle recommends that you unset the
ORACLE_HOME environment variable and choose the default
path suggested by the Installer.
2.12.3 Configuring Kernel Parameters on HP-UX Systems
During installation, you can generate and run the Fixup script to check and set the
kernel parameter values required for successful installation of the database. This script
updates the required kernel parameters, if necessary, to minimum values.
If you cannot use the Fixup scripts, then verify that the kernel parameters are set to
values greater than or equal to the minimum value shown in Appendix D.2,
"Configuring Kernel Parameters Manually." The procedure following the table
describes how to verify and set the values manually.
Note: The kernel parameter and shell limit values configured by
Fixup scripts and listed in Appendix D.2, "Configuring Kernel
Parameters Manually" are minimum values only. For production
database systems, Oracle recommends that you tune these values to
optimize the performance of the system. Refer to the operating system
documentation for more information about tuning kernel parameters.
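As an illustration only (the parameter name and value are examples, not recommendations from this guide), you can display and adjust an HP-UX kernel tunable with the kctune command, assuming your HP-UX release provides it:
# /usr/sbin/kctune maxuprc
# /usr/sbin/kctune maxuprc=3686
Verify each parameter against the minimum values in Appendix D.2 before changing it.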
2.12.4 Setting Display and X11 Forwarding Configuration
If you are on a remote terminal, and the local node has only one visual (which is
typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells:
$ export DISPLAY=hostname:0
C shell:
% setenv DISPLAY hostname:0
For example, if you are using the Bash shell, and if your host name is node1, then enter
the following command:
$ export DISPLAY=node1:0
To ensure that X11 forwarding will not cause the installation to fail, create a user-level
SSH client configuration file for the Oracle software owner user, as follows:
1.
Using any text editor, edit or create the software installation owner's
~/.ssh/config file.
2.
Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no
2.12.5 Preventing Installation Errors Caused by Terminal Output Commands
During an Oracle Grid Infrastructure installation, OUI uses SSH to run commands and
copy files to the other nodes. During the installation, hidden files on the system (for
example, .bashrc or .cshrc) will cause makefile and other installation errors if they
contain stty commands.
To avoid this problem, you must modify these files in each Oracle installation owner
user home directory to suppress all output on STDOUT or STDERR (for example, stty,
xtitle, and other such commands) as in the following examples:
■
Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
stty intr ^C
fi
■
C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
Note: When SSH is not available, the Installer uses the remsh
commands instead of ssh and scp.
If there are hidden files that contain stty commands that are loaded
by the remote shell, then OUI indicates an error and stops the
installation.
2.13 Creating Required Symbolic Links
Note: This task is required only if the Motif 2.1 Development
Environment package (X11MotifDevKit.MOTIF21-PRG) is not
installed.
To enable you to successfully relink Oracle products after installing this software, enter
the following commands to create required X library symbolic links in the /usr/lib
directory:
# cd /usr/lib
# ln -s libX11.3 libX11.sl
# ln -s libXIE.2 libXIE.sl
# ln -s libXext.3 libXext.sl
# ln -s libXhp11.3 libXhp11.sl
# ln -s libXi.3 libXi.sl
# ln -s libXm.4 libXm.sl
# ln -s libXp.2 libXp.sl
# ln -s libXt.3 libXt.sl
# ln -s libXtst.2 libXtst.sl
2.14 Setting the Minor Number for Device Files
Oracle Universal Installer, as part of its prerequisite checks, verifies that the device file
settings for the minor number across all nodes is 0x4 or 0x104 for the /dev/async
device. By default, the minor number is set to 0x0. To change the minor number prior
to starting the installation, perform the following steps:
1.
Log in as the root user.
2.
Determine whether /dev/async exists. If the device does not exist, then use the
following command to create it:
# /sbin/mknod /dev/async c 101 0x4
Alternatively, you can set the minor number value to 0x104 using the following
command:
# /sbin/mknod /dev/async c 101 0x104
3.
If /dev/async exists, then determine the current value of the minor number, as
shown in the following example:
# ls -l /dev/async
crw-r--r--   1 root   sys   101 0x000000 Sep 28 10:38 /dev/async
4.
If the existing minor number of the file is not 0x4 or 0x104, then change it to an
expected value using one of the following commands:
# /sbin/mknod /dev/async c 101 0x4
or
# /sbin/mknod /dev/async c 101 0x104
2.15 Requirements for Creating an Oracle Grid Infrastructure Home
Directory
During installation, you are prompted to provide a path to a home directory to store
Oracle Grid Infrastructure software. Ensure that the directory path you provide meets
the following requirements:
■ It should be created in a path outside existing Oracle homes, including Oracle
Clusterware homes.
■ It should not be located in a user home directory.
■ It should be created either as a subdirectory in a path where all files can be owned
by root, or in a unique path.
■ If you create the path before installation, then it should be owned by the
installation owner of Oracle Grid Infrastructure (typically oracle for a single
installation owner for all Oracle software, or grid for role-based Oracle installation
owners), and set to 775 permissions.
Oracle recommends that you install Oracle Grid Infrastructure on local homes, rather
than using a shared home on shared storage.
For installations with Oracle Grid Infrastructure only, Oracle recommends that you
create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines,
so that Oracle Universal Installer (OUI) can select that directory during installation.
For OUI to recognize the path as an Oracle software path, it must be in the form
u0[1-9]/app.
When OUI finds an OFA-compliant path, it creates the Oracle Grid Infrastructure and
Oracle Inventory (oraInventory) directories for you.
To create an Oracle Grid Infrastructure path manually, ensure that it is in a separate
path, not under an existing Oracle base path. For example:
# mkdir -p /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid
With this path, if the installation owner is named grid, then by default OUI creates the
following path for the Grid home:
/u01/app/11.2.0/grid
Create an Oracle base path for database installations, owned by the Oracle Database
installation owner account. The OFA path for an Oracle base is /u01/app/user, where
user is the name of the Oracle software installation owner account. For example, use
the following commands to create an Oracle base for the database installation owner
account oracle:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Note: If you choose to create an Oracle Grid Infrastructure home
manually, then do not create the Oracle Grid Infrastructure home for a
cluster under either the grid installation owner Oracle base or the
Oracle Database installation owner Oracle base. Creating an Oracle
Clusterware installation in an Oracle base directory will cause
succeeding Oracle installations to fail.
Oracle Grid Infrastructure homes can be placed in a local home on
servers, even if your existing Oracle Clusterware home from a prior
release is in a shared location.
Homes for Oracle Grid Infrastructure for a standalone server (Oracle
Restart) can be under Oracle base. Refer to Oracle Database Installation
Guide for your platform for more information about Oracle Restart.
2.16 Cluster Name Requirements
The cluster name must be at least one character long and no more than 15 characters in
length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens
(-).
In a Typical installation, the SCAN you provide is also the name of the cluster, so the
SCAN name must meet the requirements for a cluster name. In an Advanced
installation, the SCAN and cluster name are entered in separate fields during
installation, so cluster name requirements do not apply to the SCAN name.
3
Configuring Storage for Oracle Grid
Infrastructure and Oracle RAC
This chapter describes the storage configuration tasks that you must complete before
you start the installer to install Oracle Clusterware and Oracle Automatic Storage
Management (Oracle ASM), and that you must complete before adding an Oracle Real
Application Clusters (Oracle RAC) installation to the cluster.
This chapter contains the following topics:
■
Reviewing Oracle Grid Infrastructure Storage Options
■
Shared File System Storage Configuration
■
Oracle Automatic Storage Management Storage Configuration
■
Desupport of Block and Raw Devices
3.1 Reviewing Oracle Grid Infrastructure Storage Options
This section describes the supported storage options for Oracle Grid Infrastructure for a
cluster. It contains the following sections:
■
Overview of Oracle Clusterware and Oracle RAC Storage Options
■
General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
■
Guidelines for Using Oracle ASM Disk Groups for Storage
■
Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
■
Supported Storage Options
■
After You Have Selected Disk Storage Options
See Also: The Oracle Certification site on My Oracle Support for the
most current information about certified storage options:
https://support.oracle.com
3.1.1 Overview of Oracle Clusterware and Oracle RAC Storage Options
There are two ways of storing Oracle Clusterware files:
■
Oracle Automatic Storage Management (Oracle ASM): You can install Oracle
Clusterware files (Oracle Cluster Registry and voting disks) in Oracle ASM disk
groups.
Oracle ASM is the required database storage option for Typical installations, and
for Standard Edition Oracle RAC installations. It is an integrated,
high-performance database file system and disk manager for Oracle Clusterware
and Oracle Database files. It performs striping and mirroring of database files
automatically.
Only one Oracle ASM instance is permitted for each node regardless of the
number of database instances on the node.
■
A supported shared file system: Supported file systems include the following:
–
A supported cluster file system: Note that if you intend to use a cluster file
system for your data files, then you should create partitions large enough for
the database files when you create partitions for Oracle Grid Infrastructure.
See Also: The Certify page on My Oracle Support for supported
cluster file systems
–
Network File System (NFS): Note that if you intend to use NFS for your data
files, then you should create partitions large enough for the database files
when you create partitions for Oracle Grid Infrastructure. NFS mounts differ
for software binaries, Oracle Clusterware files, and database files.
Note: You can no longer use OUI to install Oracle Clusterware or
Oracle Database files on block or raw devices.
See Also: My Oracle Support for supported file systems and NFS or
NAS filers
3.1.2 General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
For all installations, you must choose the storage option to use for Oracle Grid
Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application
Clusters databases (Oracle RAC). To enable automated backups during the
installation, you must also choose the storage option to use for recovery files (the Fast
Recovery Area). You do not have to use the same storage option for each file type.
3.1.2.1 General Storage Considerations for Oracle Clusterware
Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle
Cluster Registry (OCR) files contain configuration information about the cluster. You
can place voting disks and OCR files either in an Oracle ASM disk group, or on a
cluster file system or shared network file system. Storage must be shared; any node
that does not have access to an absolute majority of voting disks (more than half) will
be restarted.
3.1.2.2 General Storage Considerations for Oracle RAC
Use the following guidelines when choosing the storage options to use for each file
type:
■ You can choose any combination of the supported storage options for each file
type provided that you satisfy all requirements listed for the chosen storage
options.
■ Oracle recommends that you choose Oracle ASM as the storage option for
database and recovery files.
■ For Standard Edition Oracle RAC installations, Oracle ASM is the only supported
storage option for database or recovery files.
■ If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new
Oracle ASM instance, then your system must meet the following conditions:
– All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g Release
2 (11.2) installed as part of an Oracle Grid Infrastructure for a cluster
installation.
– Any existing Oracle ASM instance on any node in the cluster is shut down.
■ Raw or block devices are supported only when upgrading an existing installation
using the partitions already configured. On new installations, using raw or block
device partitions is not supported by Oracle Automatic Storage Management
Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is
supported by the software if you perform manual configuration.
See Also: Oracle Database Upgrade Guide for information about how
to prepare for upgrading an existing database
■
If you do not have a storage option that provides external file redundancy, then
you must configure at least three voting disk areas to provide voting disk
redundancy.
3.1.3 Guidelines for Using Oracle ASM Disk Groups for Storage
During Oracle Grid Infrastructure installation, you can create one disk group. After
the Oracle Grid Infrastructure installation, you can create additional disk groups using
ASMCA, SQL*Plus, or ASMCMD. Note that with Oracle Database 11g release 2 (11.2)
and later releases, Oracle Database Configuration Assistant (DBCA) does not have the
functionality to create disk groups for Oracle ASM.
If you install Oracle Database or Oracle RAC after you install Oracle Grid
Infrastructure, then you can either use the same disk group for database files, OCR,
and voting disk files, or you can use different disk groups. If you create multiple disk
groups before installing Oracle RAC or before creating a database, then you can decide
to do one of the following:
■
Place the data files in the same disk group as the Oracle Clusterware files.
■
Use the same Oracle ASM disk group for data files and recovery files.
■
Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting disk files,
database files, and recovery files are contained in the one disk group. If you create
multiple disk groups for storage, then you can choose to place files in different disk
groups.
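For illustration, the following SQL*Plus sketch shows one way to create an additional disk group after installation; the disk group name and device paths are hypothetical and must be replaced with devices prepared on your own system:
SQL> CREATE DISKGROUP data1 NORMAL REDUNDANCY
       DISK '/dev/rdisk/disk10', '/dev/rdisk/disk11';
You can also create disk groups with ASMCA or the ASMCMD command line tool, as noted above.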
Note: The Oracle ASM instance that manages the existing disk group
should be running in the Grid home.
See Also:
Oracle Database Storage Administrator's Guide for information about
creating disk groups
3.1.4 Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
Oracle Grid Infrastructure and Oracle RAC support only cluster-aware volume
managers, that is, volume managers that are delivered as part of a vendor cluster
solution. To confirm that a volume manager you want to use is supported, check under
the Certifications tab on My Oracle Support whether the associated cluster solution is
certified for Oracle RAC. My Oracle Support is available at the following URL:
https://support.oracle.com
3.1.5 Supported Storage Options
The following table shows the storage options supported for storing Oracle
Clusterware and Oracle RAC files.
Table 3–1 Supported Storage Options for Oracle Clusterware and Oracle RAC

Storage Option: Oracle Automatic Storage Management
  OCR and Voting Disks: Yes
  Oracle Clusterware binaries: No
  Oracle RAC binaries: No
  Oracle Database Files: Yes
  Oracle Recovery Files: Yes

Storage Option: Local file system
  OCR and Voting Disks: No
  Oracle Clusterware binaries: Yes
  Oracle RAC binaries: Yes
  Oracle Database Files: No
  Oracle Recovery Files: No

Storage Option: NFS file system on a certified NAS filer
(Note: Direct NFS Client does not support Oracle Clusterware files.)
  OCR and Voting Disks: Yes
  Oracle Clusterware binaries: Yes
  Oracle RAC binaries: Yes
  Oracle Database Files: Yes
  Oracle Recovery Files: Yes

Storage Option: Shared disk partitions (block devices or raw devices)
  OCR and Voting Disks: Not supported by OUI or ASMCA, but supported by the
  software. They can be added or removed after installation.
  Oracle Clusterware binaries: No
  Oracle RAC binaries: No
  Oracle Database Files: Not supported by OUI or ASMCA, but supported by the
  software. They can be added or removed after installation.
  Oracle Recovery Files: No

Storage Option: Shared Logical Volume Manager (SLVM)
  OCR and Voting Disks: Not supported by OUI or ASMCA, but supported by the
  software. They can be added or removed after installation.
  Oracle Clusterware binaries: No
  Oracle RAC binaries: No
  Oracle Database Files: Not supported by OUI or ASMCA, but supported by the
  software. They can be added or removed after installation.
  Oracle Recovery Files: No
Use the following guidelines when choosing storage options:
■ You can choose any combination of the supported storage options for each file
type provided that you satisfy all requirements listed for the chosen storage
options.
■ You can use Oracle ASM 11g Release 2 (11.2) to store Oracle Clusterware files. You
cannot use Oracle ASM releases prior to 11g release 2 (11.2) to do this.
■ If you do not have a storage option that provides external file redundancy, then
you must configure at least three voting disk locations and at least three Oracle
Cluster Registry locations to provide redundancy.
3.1.6 After You Have Selected Disk Storage Options
When you have determined your disk storage options, configure shared storage:
■ To use a file system, refer to Section 3.2, "Shared File System Storage
Configuration."
■ To use Oracle ASM, refer to Section 3.3, "Oracle Automatic Storage Management
Storage Configuration."
3.2 Shared File System Storage Configuration
The installer does not suggest a default location for the Oracle Cluster Registry (OCR)
or the Oracle Clusterware voting disk. If you choose to create these files on a file
system, then review the following sections to complete storage requirements for
Oracle Clusterware files:
■
Requirements for Using a Shared File System
■
Deciding to Use a Cluster File System for Oracle Clusterware Files
■
Deciding to Use Direct NFS Client for Data Files
■
Deciding to Use NFS for Data Files
■
Configuring Storage NFS Mount and Buffer Size Parameters
■
Checking NFS Mount and Buffer Size Parameters for Oracle Clusterware
■
Checking NFS Mount and Buffer Size Parameters for Oracle RAC
■
Enabling Direct NFS Client Oracle Disk Manager Control of NFS
■
Creating Directories for Oracle Clusterware Files on Shared File Systems
■
Creating Directories for Oracle Database Files on Shared File Systems
Note: The OCR is a file that contains the configuration information
and status of the cluster. Oracle Universal Installer (OUI)
automatically initializes the OCR during the Oracle Clusterware
installation. Database Configuration Assistant uses the OCR for
storing the configurations for the cluster databases that it creates.
3.2.1 Requirements for Using a Shared File System
To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the
file system must comply with the following requirements:
■
To use an NFS file system, it must be on a certified NAS device. Log in to My
Oracle Support at the following URL, and click the Certify tab to find a list of
certified NAS devices.
https://support.oracle.com/
■ If you choose to place your Oracle Cluster Registry (OCR) files on a shared file
system, then Oracle recommends that one of the following is true:
– The disks used for the file system are on a highly available storage device, (for
example, a RAID device).
– At least two file systems are mounted, and use the features of Oracle
Clusterware 11g Release 2 (11.2) to provide redundancy for the OCR.
■ If you choose to place your database files on a shared file system, then one of the
following should be true:
– The disks used for the file system are on a highly available storage device, (for
example, a RAID device).
– The file systems consist of at least two independent file systems, with the
database files on one file system, and the recovery files on a different file
system.
■ The user account with which you perform the installation (oracle or grid) must
have write permissions to create the files in the path that you specify.
Note: Upgrading from Oracle9i release 2 using the raw device or
shared file for the OCR that you used for the SRVM configuration
repository is not supported.
If you are upgrading Oracle Clusterware, and your existing cluster
uses 100 MB OCR and 20 MB voting disk partitions, then you must
extend these partitions to at least 300 MB. Oracle recommends that
you do not use partitions, but instead place OCR and voting disks in
disk groups marked as QUORUM disk groups.
All storage products must be supported by both your server and
storage vendors.
Use Table 3–2 and Table 3–3 to determine the minimum size for shared file systems:
Table 3–2 Oracle Clusterware Shared File System Volume Size Requirements

File Types Stored: Voting disks with external redundancy
  Number of Volumes: 1
  Volume Size: At least 300 MB for each voting disk volume

File Types Stored: Oracle Cluster Registry (OCR) with external redundancy
  Number of Volumes: 1
  Volume Size: At least 300 MB for each OCR volume

File Types Stored: Oracle Clusterware files (OCR and voting disks) with redundancy
provided by Oracle software
  Number of Volumes: 3
  Volume Size: At least 300 MB for each OCR volume, and at least 300 MB for each
  voting disk volume

Table 3–3 Oracle RAC Shared File System Volume Size Requirements

File Types Stored: Oracle Database files
  Number of Volumes: 1
  Volume Size: At least 1.5 GB for each volume

File Types Stored: Recovery files
  Number of Volumes: 1
  Volume Size: At least 2 GB for each volume (recovery files must be on a different
  volume than database files)
In Table 3–2 and Table 3–3, the total required volume size is cumulative. For example,
to store all Oracle Clusterware files on the shared file system with normal redundancy,
you should have at least 2 GB of storage available over a minimum of three volumes
(three separate volume locations for the OCR and two OCR mirrors, and one voting
disk on each volume). You should have a minimum of three physical disks, each at
least 500 MB, to ensure that voting disks and OCR files are on separate physical disks.
If you add Oracle RAC using one volume for database files and one volume for
recovery files, then you should have at least 3.5 GB available storage over two
volumes, and at least 5.5 GB available total for all volumes.
3.2.2 Deciding to Use a Cluster File System for Oracle Clusterware Files
For new installations, Oracle recommends that you use Oracle Automatic Storage
Management (Oracle ASM) to store voting disk and OCR files.
3.2.3 Deciding to Use Direct NFS Client for Data Files
Direct NFS Client is an alternative to using kernel-managed NFS. This section contains
the following information about Direct NFS Client:
■
About Direct NFS Client Storage
■
Using the oranfstab File with Direct NFS Client
■
Mounting NFS Storage Devices with Direct NFS Client
■
Specifying Network Paths with the oranfstab File
3.2.3.1 About Direct NFS Client Storage
With Oracle Database 11g Release 2 (11.2), instead of using the operating system kernel
NFS client, you can configure Oracle Database to access NFS V3 servers directly using
an Oracle internal Direct NFS Client.
To enable Oracle Database to use Direct NFS Client, the NFS file systems must be
mounted and available over regular NFS mounts before you start installation. Direct
NFS Client manages settings after installation. You should still set the kernel mount
options as a backup, but for normal operation, Direct NFS Client will manage NFS
mounts.
Refer to your vendor documentation to complete NFS configuration and mounting.
Some NFS file servers require NFS clients to connect using reserved ports. If your filer
is running with reserved port checking, then you must disable it for Direct NFS Client
to operate. To disable reserved port checking, consult your NFS file server
documentation.
For NFS servers that restrict port range, you can use the insecure option to enable
clients other than root to connect to the NFS server. Alternatively, you can disable
Direct NFS Client as described in Section 3.2.11, "Disabling Direct NFS Client Oracle
Disk Management Control of NFS".
Note: Use NFS servers supported for Oracle RAC. Refer to the
following URL for certification information:
https://support.oracle.com
3.2.3.2 Using the oranfstab File with Direct NFS Client
If you use Direct NFS Client, then you can choose to use a new file specific for Oracle
data file management, oranfstab, to specify additional options specific for Oracle
Database to Direct NFS Client. For example, you can use oranfstab to specify
additional paths for a mount point. You can add the oranfstab file either to /etc or to
$ORACLE_HOME/dbs.
With shared Oracle homes, when the oranfstab file is placed in $ORACLE_HOME/dbs,
the entries in the file are specific to a single database. In this case, all nodes running an
Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file. In non-shared
Oracle RAC installs, oranfstab must be replicated on all nodes.
When the oranfstab file is placed in /etc, then it is globally available to all Oracle
databases, and can contain mount points used by all Oracle databases running on
nodes in the cluster, including standalone databases. However, on Oracle RAC
systems, if the oranfstab file is placed in /etc, then you must replicate the file
/etc/oranfstab file on all nodes, and keep each /etc/oranfstab file synchronized on
all nodes, just as you must with the /etc/fstab file.
See Also: Section 3.2.5, "Configuring Storage NFS Mount and Buffer
Size Parameters" for information about configuring /etc/fstab
In all cases, mount points must be mounted by the kernel NFS system, even when they
are being served using Direct NFS Client.
Caution: Direct NFS Client will not serve an NFS server with write
size values (wtmax) less than 32768.
3.2.3.3 Mounting NFS Storage Devices with Direct NFS Client
Direct NFS Client determines mount point settings to NFS storage devices based on
the configurations in /etc/mnttab, which are changed by configuring the
/etc/fstab file.
Direct NFS Client searches for mount entries in the following order and uses the first
matching entry it finds:
1.
$ORACLE_HOME/dbs/oranfstab
2.
/etc/oranfstab
3.
/etc/mnttab
Oracle Database is not shipped with Direct NFS Client enabled by default. To enable
Direct NFS Client, complete the following steps:
1.
Change the directory to $ORACLE_HOME/rdbms/lib.
2.
Enter the following command:
make -f ins_rdbms.mk dnfs_on
Note: You can have only one active Direct NFS Client
implementation for each instance. Using Direct NFS Client on an
instance will prevent another Direct NFS Client implementation.
If Oracle Database uses Direct NFS Client mount points configured using oranfstab,
then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with
operating system NFS mount points. If a mismatch exists, then Direct NFS Client logs
an informational message, and does not operate.
Section 3.1.5, "Supported Storage Options" lists the file types that are supported by
Direct NFS Client.
If Oracle Database cannot open an NFS server using Direct NFS Client, then Oracle
Database uses the platform operating system kernel NFS client. In this case, the kernel
NFS mount options must be set up as defined in Section 3.2.7, "Checking NFS Mount
and Buffer Size Parameters for Oracle RAC." Additionally, an informational message is
logged into the Oracle alert and trace files indicating that Direct NFS Client could not
connect to an NFS server.
The Oracle files resident on the NFS server that are served by the Direct NFS Client are
also accessible through the operating system kernel NFS client.
See Also: Oracle Database Administrator's Guide for guidelines to
follow regarding managing Oracle database data files created with
Direct NFS Client or kernel NFS
3.2.3.4 Specifying Network Paths with the oranfstab File
Direct NFS Client can use up to four network paths defined in the oranfstab file for
an NFS server. Direct NFS Client performs load balancing across all specified paths. If
a specified path fails, then Direct NFS Client reissues I/O commands over any
remaining paths.
Use the following SQL*Plus views for managing Direct NFS in a cluster environment:
■
gv$dnfs_servers: Shows a table of servers accessed using Direct NFS Client.
■
gv$dnfs_files: Shows a table of files currently open using Direct NFS Client.
■ gv$dnfs_channels: Shows a table of open network paths (or channels) to servers
for which Direct NFS Client is providing files.
■ gv$dnfs_stats: Shows a table of performance statistics for Direct NFS Client.
Note: Use v$ views for single instances, and gv$ views for Oracle
Clusterware and Oracle RAC storage.
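As a brief illustration of using these views, the following SQL*Plus sketch (the column list is an assumption; verify the view definition in your release) lists the NFS servers that each instance in the cluster is currently accessing through Direct NFS Client:
SQL> SELECT inst_id, svrname, dirname FROM gv$dnfs_servers ORDER BY inst_id;
An empty result set typically means that no files are currently open through Direct NFS Client.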
3.2.4 Deciding to Use NFS for Data Files
Network-attached storage (NAS) systems use NFS to access data. You can store data
files on a supported NFS system.
NFS file systems must be mounted and available over NFS mounts before you start
installation. Refer to your vendor documentation to complete NFS configuration and
mounting.
Be aware that the performance of Oracle software and databases stored on NAS
devices depends on the performance of the network connection between the Oracle
server and the NAS device.
For this reason, Oracle recommends that you connect the server to the NAS device
using a private dedicated network connection, which should be Gigabit Ethernet or
better.
3.2.5 Configuring Storage NFS Mount and Buffer Size Parameters
If you are using NFS for the Grid home or Oracle RAC home, then you must set up the
NFS mounts on the storage so that they allow root on the clients mounting to the
storage to be considered root instead of being mapped to an anonymous user, and
allow root on the client server to create files on the NFS file system that are owned by
root.
On NFS, you can obtain root access for clients writing to the storage by enabling
root=access_list on the server side. For example, to set up Oracle Clusterware file
storage in the path /vol/grid, with nodes node1, node 2, and node3 in the domain
mycluster.example.com, add a line similar to the following to the /etc/dfs/dfstab
file:
share -F nfs -o
rw=node1.mycluster.example.com,node2.mycluster.example.com,node3.mycluster.example
.com,root=node1.mycluster.example.com,node2.mycluster.example.com,node3.mycluster.
example.com /vol/grid
Note:
Enter share settings as a single line.
If the domain or DNS is secure so that no unauthorized system can obtain an IP
address on it, then you can grant root access by domain, rather than specifying
particular cluster member nodes:
For example:
/vol/grid/ *.mycluster.example.com
Oracle recommends that you use a secure DNS or domain, and grant root access to
cluster member nodes using the domain, as using this syntax allows you to add or
remove nodes without the need to reconfigure the NFS server.
If you use Grid Naming Service (GNS), then the subdomain allocated for resolution by
GNS within the cluster is a secure domain. Any server without a correctly signed Grid
Plug and Play (GPnP) profile cannot join the cluster, so an unauthorized system cannot
obtain or use names inside the GNS subdomain.
Caution: Granting root access by domain can be used to obtain
unauthorized access to systems. System administrators should refer to
their operating system documentation for the risks associated with
using root=access.
After changing /etc/dfs/dfstab, reload the file system mount using the following
command:
# /usr/sbin/shareall
3.2.6 Checking NFS Mount and Buffer Size Parameters for Oracle Clusterware
On the cluster member nodes, you must set the values for the NFS buffer size
parameters rsize and wsize to 32768.
The NFS client-side mount options are:
rw,bg,vers=3,proto=tcp,noac,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid 0 0
If you have Oracle Grid Infrastructure binaries on an NFS mount, then you must
include the suid option.
The NFS client-side mount options for Oracle Clusterware files (OCR and voting disk
files) are:
rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=
32768,suid 0 0
Update the /etc/fstab file on each node with an entry containing the NFS mount
options for your platform. For example, if you are creating a mount point for Oracle
Clusterware files, then update the /etc/fstab files with an entry similar to the
following:
nfsserver:/vol/grid /u02/oracle/cwfiles nfs \
rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=
32768,suid 0 0
Note that mount point options are different for Oracle software binaries, compared to
Oracle Clusterware files (OCR and voting disks) and data files.
To create a mount point for binaries only, provide an entry similar to the following for
a binaries mount point:
nfs-server:/vol/bin /u02/oracle/grid nfs \
rw,bg,vers=3,proto=tcp,noac,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid 0 0
See Also: My Oracle Support bulletin 359515.1, "Mount Options for
Oracle Files When Used with NAS Devices" for the most current
information about mount options, available from the following URL:
https://support.oracle.com
Note: Refer to your storage vendor documentation for additional
information about mount options.
3.2.7 Checking NFS Mount and Buffer Size Parameters for Oracle RAC
If you use NFS mounts, then you must mount NFS volumes used for storing database
files with special mount options on each node that has an Oracle RAC instance. When
mounting an NFS file system, Oracle recommends that you use the same mount point
options that your NAS vendor used when certifying the device. Refer to your device
documentation or contact your vendor for information about recommended
mount-point options.
Update the /etc/fstab file on each node with an entry similar to the following:
nfs-server:/vol/DATA/oradata /u02/oradata nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3,suid
0 0
The mandatory mount options comprise the minimum set of mount options that you
must use while mounting the NFS volumes. These mount options are essential to
protect the integrity of the data and to prevent any database corruption. Failure to use
these mount options may result in the generation of file access errors. Refer to your
operating system or NAS device documentation for more information about the
specific options supported on your platform.
See Also: My Oracle Support note 359515.1 for updated NAS mount
option information, available at the following URL:
https://support.oracle.com
3.2.8 Enabling Direct NFS Client Oracle Disk Manager Control of NFS
Complete the following procedure to enable Direct NFS Client:
1.
Create an oranfstab file with the following attributes for each NFS server to be
accessed using Direct NFS Client:
■ Server: The NFS server name.
■ Local: Up to four paths on the database host, specified by IP address or by
name, as displayed using the ifconfig command run on the database host.
■ Path: Up to four network paths to the NFS server, specified either by IP
address, or by name, as displayed using the ifconfig command on the NFS
server.
■ Export: The exported path from the NFS server.
■ Mount: The corresponding local mount point for the exported volume.
■ Mnt_timeout: Specifies (in seconds) the time Direct NFS Client should wait
for a successful mount before timing out. This parameter is optional. The
default timeout is 10 minutes (600).
■ Dontroute: Specifies that outgoing messages should not be routed by the
operating system, but instead sent using the IP address to which they are
bound.
The examples that follow show three possible NFS server entries in oranfstab. A
single oranfstab can have multiple NFS server entries.
Example 3–1 Using Local and Path NFS Server Entries
The following example uses both local and path. Since they are in different subnets, we
do not have to specify dontroute.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
export: /vol/oradata1 mount: /mnt/oradata1
Example 3–2 Using Local and Path in the Same Subnet, with dontroute
The following example shows local and path in the same subnet. dontroute is
specified in this case:
server: MyDataServer2
local: 192.0.2.0
path: 192.0.2.128
local: 192.0.2.1
path: 192.0.2.129
dontroute
export: /vol/oradata2 mount: /mnt/oradata2
Example 3–3 Using Names in Place of IP Addresses, with Multiple Exports
server: MyDataServer3
local: LocalPath1
path: NfsPath1
local: LocalPath2
path: NfsPath2
local: LocalPath3
path: NfsPath3
local: LocalPath4
path: NfsPath4
dontroute
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
export: /vol/oradata6 mount: /mnt/oradata6
2.
Oracle Database uses an ODM library, libnfsodm11.so, to enable Direct NFS
Client. To replace the standard ODM library, $ORACLE_HOME/lib/libodm11.so,
with the ODM NFS library, libnfsodm11.so, complete the following steps on all
nodes unless the Oracle home directory is shared:
a.
Change directory to $ORACLE_HOME/lib.
b.
Enter the following commands:
cp libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so
3.2.9 Creating Directories for Oracle Clusterware Files on Shared File Systems
Use the following instructions to create directories for Oracle Clusterware files. You
can also configure shared file systems for the Oracle Database and recovery files.
Note: For NFS storage, you must complete this procedure only if
you want to place the Oracle Clusterware files on a separate file
system from the Oracle base directory.
To create directories for the Oracle Clusterware files on separate file systems from the
Oracle base directory, follow these steps:
1.
If necessary, configure the shared file systems to use and mount them on each
node.
Note: The mount point that you use for the file system must be
identical on each node. Ensure that the file systems are configured
to mount automatically when a node restarts.
2.
Use the bdf command to determine the free disk space on each mounted file
system.
3.
From the display, identify the file systems to use. Choose a file system with a
minimum of 600 MB of free disk space (one OCR and one voting disk, with
external redundancy).
If you are using the same file system for multiple file types, then add the disk
space requirements for each type to determine the total disk space requirement.
4.
Note the names of the mount point directories for the file systems that you
identified.
5.
If the user performing installation (typically, grid or oracle) has permissions to
create directories on the storage location where you plan to install Oracle
Clusterware files, then OUI creates the Oracle Clusterware file directory.
If the user performing installation does not have write access, then you must
create these directories manually using commands similar to the following to
create the recommended subdirectories in each of the mount point directories and
set the appropriate owner, group, and permissions on the directory. For example,
where the user is oracle, and the Oracle Clusterware file storage area is cluster:
# mkdir /mount_point/cluster
# chown oracle:oinstall /mount_point/cluster
# chmod 775 /mount_point/cluster
Note: After installation, directories in the installation path for the
Oracle Cluster Registry (OCR) files should be owned by root, and not
writable by any account other than root.
When you have completed creating a subdirectory in the mount point directory, and
set the appropriate owner, group, and permissions, you have completed NFS
configuration for Oracle Grid Infrastructure.
3.2.10 Creating Directories for Oracle Database Files on Shared File Systems
Use the following instructions to create directories for shared file systems for Oracle
Database and recovery files (for example, for an Oracle RAC database).
1.
If necessary, configure the shared file systems and mount them on each node.
Note: The mount point that you use for the file system must be
identical on each node. Ensure that the file systems are configured to
mount automatically when a node restarts.
2.
Use the bdf command to determine the free disk space on each mounted file
system.
3.
From the display, identify the file systems:
File Type          File System Requirements
Database files     Choose either:
                   ■ A single file system with at least 1.5 GB of free disk space.
                   ■ Two or more file systems with at least 1.5 GB of free disk space
                     in total.
Recovery files     Choose a file system with at least 2 GB of free disk space.
If you are using the same file system for multiple file types, then add the disk
space requirements for each type to determine the total disk space requirement.
4.
Note the names of the mount point directories for the file systems that you
identified.
5.
If the user performing installation (typically, oracle) has permissions to create
directories on the disks where you plan to install Oracle Database, then DBCA
creates the Oracle Database file directory, and the Recovery file directory.
If the user performing installation does not have write access, then you must
create these directories manually using commands similar to the following to
create the recommended subdirectories in each of the mount point directories and
set the appropriate owner, group, and permissions on them:
■
Database file directory:
# mkdir /mount_point/oradata
# chown oracle:oinstall /mount_point/oradata
# chmod 775 /mount_point/oradata
■
Recovery file directory (Fast Recovery Area):
# mkdir /mount_point/fast_recovery_area
# chown oracle:oinstall /mount_point/fast_recovery_area
# chmod 775 /mount_point/fast_recovery_area
Making members of the oinstall group owners of these directories permits them to be
read by multiple Oracle homes, including those with different OSDBA groups.
When you have completed creating subdirectories in each of the mount point
directories, and set the appropriate owner, group, and permissions, you have
completed NFS configuration for Oracle Database shared storage.
3.2.11 Disabling Direct NFS Client Oracle Disk Management Control of NFS
Complete the following steps to disable Direct NFS Client:
1.
Log in as the Oracle Grid Infrastructure installation owner, and disable Direct NFS
Client using the following commands, where Grid_home is the path to the Oracle
Grid Infrastructure home:
$ cd Grid_home/rdbms/lib
$ make -f ins_rdbms.mk dnfs_off
Enter these commands on each node in the cluster, or on the shared Grid home if
you are using a shared home for the Oracle Grid Infrastructure installation.
2.
Remove the oranfstab file.
Note: If you remove an NFS path that Oracle Database is using, then
you must restart the database for the change to be effective.
3.3 Oracle Automatic Storage Management Storage Configuration
Review the following sections to configure storage for Oracle Automatic Storage
Management (Oracle ASM):
■
Configuring Storage for Oracle Automatic Storage Management
■
Using Disk Groups with Oracle Database Files on Oracle ASM
3.3.1 Configuring Storage for Oracle Automatic Storage Management
This section describes how to configure storage for use with Oracle Automatic Storage
Management (Oracle ASM).
■
Identifying Storage Requirements for Oracle ASM
■
Creating Files on a NAS Device for Use with Oracle ASM
■
Using an Existing Oracle ASM Disk Group
■
Configuring Disk Devices for Oracle ASM
3.3.1.1 Identifying Storage Requirements for Oracle ASM
To identify the storage requirements for using Oracle ASM, you must determine how
many devices and the amount of free disk space that you require. To complete this
task, follow these steps:
1.
Determine whether you want to use Oracle ASM for Oracle Clusterware files
(OCR and voting disks), Oracle Database files, recovery files, or all files except for
Oracle Clusterware or Oracle Database binaries. Oracle Database files include data
files, control files, redo log files, the server parameter file, and the password file.
Note: You do not have to use the same storage mechanism for Oracle
Clusterware, Oracle Database files and recovery files. You can use a
shared file system for one file type and Oracle ASM for the other.
If you choose to enable automated backups and you do not have a
shared file system available, then you must choose Oracle ASM for
recovery file storage.
If you enable automated backups during the installation, then you can select
Oracle ASM as the storage mechanism for recovery files by specifying an Oracle
Automatic Storage Management disk group for the Fast Recovery Area. If you
select a noninteractive installation mode, then by default it creates one disk group
and stores the OCR and voting disk files there. If you want to have any other disk
groups for use in a subsequent database install, then you can choose interactive
mode, or run ASMCA (or a command line tool) to create the appropriate disk
groups before starting the database install.
2.
Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
The redundancy level that you choose for the Oracle ASM disk group determines
how Oracle ASM mirrors files in the disk group and determines the number of
disks and amount of free disk space that you require, as follows:
■
External redundancy
An external redundancy disk group requires a minimum of one disk device.
The effective disk space in an external redundancy disk group is the sum of
the disk space in all of its devices.
For Oracle Clusterware files, External redundancy disk groups provide 1
voting disk file, and 1 OCR, with no copies. You must use an external
technology to provide mirroring for high availability.
Because Oracle ASM does not mirror data in an external redundancy disk
group, Oracle recommends that you use external redundancy with storage
devices such as RAID, or other similar devices that provide their own data
protection mechanisms.
■
Normal redundancy
In a normal redundancy disk group, to increase performance and reliability,
Oracle ASM by default uses two-way mirroring. A normal redundancy disk
group requires a minimum of two disk devices (or two failure groups). The
effective disk space in a normal redundancy disk group is half the sum of the
disk space in all of its devices.
For Oracle Clusterware files, Normal redundancy disk groups provide 3
voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror).
With normal redundancy, the cluster can survive the loss of one failure group.
For most installations, Oracle recommends that you select normal redundancy.
■
High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to
increase performance and provide the highest level of reliability. A high
redundancy disk group requires a minimum of three disk devices (or three
failure groups). The effective disk space in a high redundancy disk group is
one-third the sum of the disk space in all of its devices.
For Oracle Clusterware files, High redundancy disk groups provide 5 voting
disk files, 1 OCR and 3 copies (one primary and two secondary mirrors). With
high redundancy, the cluster can survive the loss of two failure groups.
While high redundancy disk groups do provide a high level of data protection,
you should consider the greater cost of additional storage devices before
deciding to select high redundancy disk groups.
3.
Determine the total amount of disk space that you require for Oracle Clusterware
files, and for the database files and recovery files.
Use Table 3–4 and Table 3–5 to determine the minimum number of disks and the
minimum disk space requirements for installing Oracle Clusterware files, and
installing the starter database, where you have voting disks in a separate disk
group:
Table 3–4 Total Oracle Clusterware Storage Space Required by Redundancy Type

Redundancy   Minimum Number   Oracle Cluster Registry   Voting Disk   Both File
Level        of Disks         (OCR) Files               Files         Types
External     1                300 MB                    300 MB        600 MB
Normal       3                600 MB                    900 MB        1.5 GB (1)
High         5                840 MB                    1.4 GB        4 GB

(1) If you create a disk group during installation, then it must be at least 2 GB.
Note: If the voting disk files are in a disk group, be aware that disk
groups with Oracle Clusterware files (OCR and voting disk files) have
a higher minimum number of failure groups than other disk groups.
If you create a disk group as part of the installation in order to install
the OCR and voting disk files, then the installer requires that you
create these files on a disk group with at least 2 GB of available space.
A quorum failure group is a special type of failure group and disks in
these failure groups do not contain user data. A quorum failure group
is not considered when determining redundancy requirements in
respect to storing user data. However, a quorum failure group counts
when mounting a disk group.
Table 3–5 Total Oracle Database Storage Space Required by Redundancy Type

Redundancy   Minimum Number   Database   Recovery   Both File
Level        of Disks         Files      Files      Types
External     1                1.5 GB     3 GB       4.5 GB
Normal       2                3 GB       6 GB       9 GB
High         3                4.5 GB     9 GB       13.5 GB
4.
Determine an allocation unit size. Every Oracle ASM disk is divided into
allocation units (AU). An allocation unit is the fundamental unit of allocation
within a disk group. You can select the AU Size value from 1, 2, 4, 8, 16, 32 or 64
MB, depending on the specific disk group compatibility level. The default value is
set to 1 MB.
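For illustration only (the disk group name, device path, and the 4 MB value are hypothetical choices, not recommendations from this guide), an AU size can be specified as a disk group attribute at creation time, for example from SQL*Plus on the Oracle ASM instance:
SQL> CREATE DISKGROUP fra EXTERNAL REDUNDANCY
       DISK '/dev/rdisk/disk12'
       ATTRIBUTE 'au_size' = '4M';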
5.
For Oracle Clusterware installations, you must also add additional disk space for
the Oracle ASM metadata. You can use the following formula to calculate the
additional disk space requirements (in MB) for OCR and voting disk files, and the
Oracle ASM metadata:
total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) +
(64 * nodes) + 533)]
Where:
– redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
– ausize = Metadata AU size in megabytes.
– nodes = Number of nodes in cluster.
– clients = Number of database instances for each node.
– disks = Number of disks in disk group.
For example, for a four-node Oracle RAC installation, using three disks in a
normal redundancy disk group, you require an additional 1684 MB of space:
[2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
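As a quick check (purely illustrative, using POSIX shell arithmetic; substitute your own cluster values), you can evaluate the formula above from the command line:
$ redundancy=2 ausize=1 nodes=4 clients=4 disks=3
$ echo $(( (2 * ausize * disks) + (redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)) ))
1684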
In a normal redundancy disk group, to ensure high availability of Oracle
Clusterware files on Oracle ASM, you must have at least 2 GB of disk space for
Oracle Clusterware files in three separate failure groups, with at least three
physical disks. Each disk must have at least 2.1 GB of capacity, with total capacity
of at least 6.3 GB for three disks, to ensure that the effective disk space to create
Oracle Clusterware files is 2 GB.
6.
Optionally, identify failure groups for the Oracle ASM disk group devices.
If you intend to use a normal or high redundancy disk group, then you can further
protect your database against hardware failure by associating a set of disk devices
in a custom failure group. By default, each device comprises its own failure group.
However, if two disk devices in a normal redundancy disk group are attached to
the same SCSI controller, then the disk group becomes unavailable if the controller
fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each
with two disks, and define a failure group for the disks attached to each controller.
This configuration would enable the disk group to tolerate the failure of one SCSI
controller.
Note: Define custom failure groups after installation, using the GUI tool
ASMCA, the command line tool asmcmd, or SQL commands.

If you define custom failure groups, then for failure groups containing
database files only, you must specify a minimum of two failure groups for
normal redundancy disk groups and three failure groups for high redundancy
disk groups.

For failure groups containing database files and clusterware files,
including voting disks, you must specify a minimum of three failure groups
for normal redundancy disk groups, and five failure groups for high
redundancy disk groups.

Disk groups containing voting files must have at least 3 failure groups for
normal redundancy or at least 5 failure groups for high redundancy.
Otherwise, the minimum is 2 and 3 respectively. The minimum number of
failure groups applies whether or not they are custom failure groups.
7.
If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
■  All of the devices in an Oracle ASM disk group should be the same size
   and have the same performance characteristics.
■  Do not specify multiple partitions on a single physical disk as a disk
   group device. Oracle ASM expects each disk group device to be on a
   separate physical disk.
■  Although you can specify a logical volume as a device in an Oracle ASM
   disk group, Oracle does not recommend their use because it adds a layer
   of complexity that is unnecessary with Oracle ASM. In addition, Oracle
   RAC requires a cluster logical volume manager if you decide to use a
   logical volume with Oracle ASM and Oracle RAC.
   Oracle recommends that if you choose to use a logical volume manager,
   then use the logical volume manager to represent a single LUN without
   striping or mirroring, so that you can minimize the impact of the
   additional storage layer.
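The additional-space formula in step 5 can be evaluated with a short script.
The following is a minimal POSIX shell sketch using the values from the
four-node example above; the variable names are illustrative, and you should
substitute the figures for your own cluster.

#!/bin/sh
# Minimal sketch: estimate additional Oracle ASM metadata space (in MB).
# Values reproduce the worked example: normal redundancy (2 mirrors),
# 1 MB AU size, 4 nodes, 4 database instances per node, 3 disks.
redundancy=2   # external = 1, normal = 2, high = 3
ausize=1       # metadata AU size in megabytes
nodes=4
clients=4
disks=3

total=$(( 2 * ausize * disks + redundancy * ( ausize * ( nodes * ( clients + 1 ) + 30 ) + 64 * nodes + 533 ) ))
echo "Additional space required for Oracle ASM metadata: ${total} MB"

Running this sketch with the values shown prints 1684 MB, matching the worked
example in step 5.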
3.3.1.2 Creating Files on a NAS Device for Use with Oracle ASM
If you have a certified NAS storage device, then you can create zero-padded files in an
NFS mounted directory and use those files as disk devices in an Oracle ASM disk
group.
To create these files, follow these steps:
1.
If necessary, create an exported directory for the disk group files on the NAS
device.
Refer to the NAS device documentation for more information about completing
this step.
2.
Switch user to root.
3.
Create a mount point directory on the local system.
4.
To ensure that the NFS file system is mounted when the system restarts, add an
entry for the file system in the mount file /etc/fstab.
See Also: My Oracle Support note 359515.1 for updated NAS mount
option information, available at the following URL:
https://support.oracle.com
For more information about editing the mount file for the operating system, refer
to the man pages. For more information about recommended mount options, refer
to Section 3.2.7, "Checking NFS Mount and Buffer Size Parameters for
Oracle RAC."
5.
Mount the NFS file system on the local system.
6.
Choose a name for the disk group to create. For example: sales1.
7.
Create a directory for the files on the NFS file system, using the disk group name
as the directory name.
8.
Use commands similar to the following to create the required number of
zero-padded files in this directory:
# dd if=/dev/zero of=/mnt/nfsdg/disk1 bs=1024k count=1000
This example creates a 1 GB file on the NFS file system. You must create one,
two, or three such files to create an external, normal, or high redundancy
disk group, respectively (a combined sketch of steps 8 and 9 follows this
procedure).
9.
Enter commands similar to the following to change the owner, group, and
permissions on the directory and files that you created, where the installation
owner is grid, and the OSASM group is asmadmin:
# chown -R grid:asmadmin /mnt/nfsdg
# chmod -R 660 /mnt/nfsdg
10. If you plan to install Oracle RAC or a standalone Oracle Database, then during
installation, edit the Oracle ASM disk discovery string to specify a regular
expression that matches the file names you created. For example:
/mnt/nfsdg/sales1/
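The file creation and ownership steps in this procedure can also be combined
into a single script. The following is a minimal sketch only; the mount point
/mnt/nfsdg, the disk group name sales1, the grid installation owner, the
asmadmin OSASM group, and the file count are taken from the examples above
and should be adjusted for your environment. Run it as root after the NFS
file system is mounted.

#!/bin/sh
# Minimal sketch: create zero-padded files for an Oracle ASM disk group on NFS.
MOUNTPOINT=/mnt/nfsdg
DISKGROUP=sales1
COUNT=3            # 1 = external, 2 = normal, 3 = high redundancy

mkdir -p ${MOUNTPOINT}/${DISKGROUP}
i=1
while [ ${i} -le ${COUNT} ]
do
    # Each file is 1 GB (1000 blocks of 1024 KB), as in step 8.
    dd if=/dev/zero of=${MOUNTPOINT}/${DISKGROUP}/disk${i} bs=1024k count=1000
    i=$(( i + 1 ))
done

# Set the owner, group, and permissions as in step 9.
chown -R grid:asmadmin ${MOUNTPOINT}
chmod -R 660 ${MOUNTPOINT}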
3.3.1.3 Using an Existing Oracle ASM Disk Group
Select from the following choices to store either database or recovery files in an
existing Oracle ASM disk group, depending on installation method:
■
If you select an installation method that runs Database Configuration Assistant in
interactive mode, then you can decide whether you want to create a disk group, or
to use an existing one.
The same choice is available to you if you use Database Configuration Assistant
after the installation to create a database.
■
If you select an installation method that runs Database Configuration Assistant in
noninteractive mode, then you must choose an existing disk group for the new
database; you cannot create a disk group. However, you can add disk devices to
an existing disk group if it has insufficient free space for your requirements.
The Oracle ASM instance that manages the existing disk group
can be running in a different Oracle home directory.
Note:
To determine if an existing Oracle ASM disk group exists, or to determine if there is
sufficient disk space in a disk group, you can use the Oracle ASM command line tool
(asmcmd), Oracle Enterprise Manager Grid Control or Database Control. Alternatively,
you can use the following procedure:
1.
View the contents of the oratab file to determine if an Oracle ASM instance is
configured on the system:
$ more /etc/oratab
If an Oracle ASM instance is configured on the system, then the oratab file should
contain a line similar to the following:
+ASM2:oracle_home_path
In this example, +ASM2 is the system identifier (SID) of the Oracle ASM instance,
with the node number appended, and oracle_home_path is the Oracle home
directory where it is installed. By convention, the SID for an Oracle ASM instance
begins with a plus sign.
2.
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the
appropriate values for the Oracle ASM instance.
3.
Connect to the Oracle ASM instance and start the instance if necessary:
$ $ORACLE_HOME/bin/asmcmd
ASMCMD> startup
4.
Enter one of the following commands to view the existing disk groups, their
redundancy level, and the amount of free disk space in each one:
ASMCMD> lsdg
or:
$ORACLE_HOME/bin/asmcmd -p lsdg
5.
From the output, identify a disk group with the appropriate redundancy level and
note the free space that it contains.
6.
If necessary, install or identify the additional disk devices required to meet the
storage requirements listed in the previous section.
If you are adding devices to an existing disk group, then
Oracle recommends that you use devices that have the same size and
performance characteristics as the existing devices in that disk group.
Note:
3.3.1.4 Configuring Disk Devices for Oracle ASM
To configure disks for use with Oracle ASM on HP-UX, follow these steps:
1.
If necessary, install the shared disks that you intend to use for the Oracle ASM disk
group.
2.
To make sure that the disks are available, enter the following command on every
node:
# /usr/sbin/ioscan -fun -C disk
The output from this command is similar to the following:
Class     I  H/W Path     Driver  S/W State  H/W Type  Description
==========================================================================
disk      0  0/0/1/0.6.0  sdisk   CLAIMED    DEVICE    HP      DVD-ROM 6x/32x
                          /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
disk      1  0/0/1/1.2.0  sdisk   CLAIMED    DEVICE    SEAGATE ST39103LC
                          /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0

This command displays information about each disk attached to the system,
including character raw device names (/dev/rdsk/).
Note: On HP-UX 11i v.3, you can also use agile view to review mass
storage devices, including character raw devices
(/dev/rdisk/diskxyz). For example:

#>ioscan -funN -C disk
Class     I    H/W Path          Driver  S/W State  H/W Type  Description
===================================================================
disk      4    64000/0xfa00/0x1  esdisk  CLAIMED    DEVICE    HP 73.4GST373454LC
                                 /dev/disk/disk4    /dev/rdisk/disk4
disk      907  64000/0xfa00/0x2f esdisk  CLAIMED    DEVICE    COMPAQ MSA1000 VOLUME
                                 /dev/disk/disk907  /dev/rdisk/disk907
3.
If the ioscan command does not display device name information for a device,
enter the following command to install the special device files for any new
devices:
# /usr/sbin/insf -e
4.
For each disk to add to a disk group, enter the following command on any node to
verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, the disk is already part of a
volume group. The disks that you choose must not be part of an LVM volume
group.
Note: If you are using different volume management software, for example
VERITAS Volume Manager, refer to the appropriate documentation for
information about verifying that a disk is not in use.
5.
Enter commands similar to the following on every node to change the owner,
group, and permissions on the character raw device file for each disk to add to a
disk group, so that the owner is the Oracle Grid Infrastructure owner (in this
example, grid), and the group that has access is the OSASM group (in this
example, asmadmin):
# chown grid:asmadmin /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
Note: If you are using a multi-pathing disk driver with ASM, make sure
that you set the permissions only on the correct logical device name for
the disk.

If the nodes are configured differently, the device name for a particular
device might be different on some nodes. Make sure that you specify the
correct device names on each node.
3.3.2 Using Disk Groups with Oracle Database Files on Oracle ASM
Review the following sections to configure Oracle ASM storage for Oracle Clusterware
and Oracle Database Files:
■
Identifying and Using Existing Oracle Database Disk Groups on ASM
■
Creating Disk Groups for Oracle Database Data Files
3.3.2.1 Identifying and Using Existing Oracle Database Disk Groups on ASM
The following section describes how to identify existing disk groups and determine
the free disk space that they contain.
■
Optionally, identify failure groups for the Oracle ASM disk group devices.
If you intend to use a normal or high redundancy disk group, then you can further
protect your database against hardware failure by associating a set of disk devices
in a custom failure group. By default, each device comprises its own failure group.
However, if two disk devices in a normal redundancy disk group are attached to
the same SCSI controller, then the disk group becomes unavailable if the controller
fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each
with two disks, and define a failure group for the disks attached to each controller.
This configuration would enable the disk group to tolerate the failure of one SCSI
controller.
Note: If you define custom failure groups, then you must specify a minimum
of two failure groups for normal redundancy and three failure groups for
high redundancy.
3.3.2.2 Creating Disk Groups for Oracle Database Data Files
If you are sure that a suitable disk group does not exist on the system, then install or
identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
■  All of the devices in an Oracle ASM disk group should be the same size
   and have the same performance characteristics.
■  Do not specify multiple partitions on a single physical disk as a disk
   group device. Oracle ASM expects each disk group device to be on a
   separate physical disk.
■  Although you can specify logical volumes as devices in an Oracle ASM
   disk group, Oracle does not recommend their use. Non-shared logical
   volumes are not supported with Oracle RAC. If you want to use logical
   volumes for your Oracle RAC database, then you must use shared logical
   volumes created by a cluster-aware logical volume manager.
3.3.3 Upgrading Existing Oracle ASM Instances
If you have an Oracle ASM installation from a prior release installed on your server, or
in an existing Oracle Clusterware installation, then you can use Oracle Automatic
Storage Management Configuration Assistant (ASMCA, located in the path Grid_
home/bin) to upgrade the existing Oracle ASM instance to 11g Release 2 (11.2), and
subsequently configure failure groups and Oracle ASM volumes.
Note: You must first shut down all database instances and applications on
the node with the existing Oracle ASM instance before upgrading it.
During installation, if you are upgrading from an Oracle ASM release prior to 11.2, and
you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM
version installed in another Oracle ASM home, then after installing the Oracle ASM
11g Release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle
ASM instance.
If you are upgrading from Oracle ASM 11g Release 2 (11.2.0.1) or later, then Oracle
ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling
upgrade, and ASMCA is started by the root scripts during upgrade. ASMCA cannot
perform a separate upgrade of Oracle ASM from release 11.2.0.1 to 11.2.0.2.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of
Oracle ASM instances on all nodes is 11g release 1, then you are provided with the
option to perform a rolling upgrade of Oracle ASM instances. If the prior version of
Oracle ASM instances on an Oracle RAC installation are from a release prior to 11g
release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be
upgraded to 11g Release 2 (11.2).
3.4 Desupport of Block and Raw Devices
With the release of Oracle Database 11g Release 2 (11.2) and Oracle RAC 11g Release 2
(11.2), using Database Configuration Assistant or the installer to store Oracle
Clusterware or Oracle Database files on block or raw devices is not supported.
If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database
with Oracle ASM instances, then you can use an existing raw or block device partition,
and perform a rolling upgrade of your existing installation. Performing a new
installation using block or raw devices is not allowed.
4
Installing Oracle Grid Infrastructure for a
Cluster
This chapter describes the procedures for installing Oracle Grid Infrastructure for a
cluster. Oracle Grid Infrastructure consists of Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM). If you plan afterward to install Oracle
Database with Oracle Real Application Clusters (Oracle RAC), then this is phase one of
a two-phase installation.
This chapter contains the following topics:
■
Preparing to Install Oracle Grid Infrastructure with OUI
■
Installing Oracle Grid Infrastructure
■
Installing Grid Infrastructure Using a Software-Only Installation
■
Confirming Oracle Clusterware Function
■
Confirming Oracle ASM Function for Oracle Clusterware Files
4.1 Preparing to Install Oracle Grid Infrastructure with OUI
Before you install Oracle Grid Infrastructure with the installer, use the following
checklist to ensure that you have all the information you will need during installation,
and to ensure that you have completed all tasks that must be done before starting your
installation. Check off each task in the following list as you complete it, and write
down the information needed, so that you can provide it during installation.
❏
Shut Down Running Oracle Processes
You may need to shut down running Oracle processes:
Installing on a node with a standalone database not using Oracle ASM: You do
not need to shut down the database while you install Oracle Grid Infrastructure
software.
Installing on a node that already has a standalone Oracle Database 11g release 2
(11.2) installation running on Oracle ASM: Stop the existing Oracle ASM
instances. The Oracle ASM instances are restarted during installation.
Installing on an Oracle RAC Database node: This installation requires an
upgrade of Oracle Clusterware, as Oracle Clusterware is required to run Oracle
RAC. As part of the upgrade, you must shut down the database one node at a time
as the rolling upgrade proceeds from node to node.
Note: If you are upgrading an Oracle RAC 9i release 2 (9.2) node, and the
TNSLSNR is listening to the same port on which the SCAN listens (default
1521), then the TNSLSNR should be shut down.
If a Global Services Daemon (GSD) from Oracle9i Release 9.2 or earlier is running,
then stop it before installing Oracle Grid Infrastructure by running the following
command:
$ Oracle_home/bin/gsdctl stop
where Oracle_home is the Oracle Database home that is running the GSD.
Note: If you receive a warning to stop all Oracle services after starting
OUI, then run the command

Oracle_home/bin/localconfig delete

where Oracle_home is the existing Oracle Clusterware home.
❏
Prepare for Oracle Automatic Storage Management and Oracle Clusterware
Upgrade If You Have Existing Installations
During the Oracle Grid Infrastructure installation, existing Oracle Clusterware and
clustered Oracle ASM installations are both upgraded.
When all member nodes of the cluster run Oracle Grid Infrastructure 11g Release 2
(11.2), then the new clusterware becomes the active version.
If you intend to install Oracle RAC, then you must first complete the upgrade to
Oracle Grid Infrastructure 11g Release 2 (11.2) on all cluster nodes before you
install the Oracle Database 11g Release 2 (11.2) version of Oracle RAC.
Note: All Oracle Grid Infrastructure upgrades (upgrades of existing Oracle
Clusterware and Oracle ASM installations) are out-of-place upgrades.
❏
Determine the Oracle Inventory (oraInventory) location
If you have already installed Oracle software on your system, then OUI detects the
existing Oracle Inventory (oraInventory) directory from the
/var/opt/oracle/oraInst.loc file, and uses this location. This directory is the
central inventory of Oracle software installed on your system. Users who have the
Oracle Inventory group as their primary group are granted the OINSTALL
privilege to write to the central inventory.
If you are installing Oracle software for the first time on your system, and your
system does not have an oraInventory directory, then the installer designates the
installation owner's primary group as the Oracle Inventory group. Ensure that this
group is available as a primary group for all planned Oracle software installation
owners.
Note: The oraInventory directory cannot be placed on a shared file system.
See Also: The preinstallation chapters in Chapter 2 for information
about creating the Oracle Inventory, and completing required system
configuration
❏
Obtain root account access
During installation, you are asked to run configuration scripts as the root user.
You must run these scripts as root, or be prepared to have your system
administrator run them for you. You must run the root.sh script on the first node
and wait for it to finish. If your cluster has four or more nodes, then root.sh can
be run concurrently on all nodes but the first and last.
❏
Decide if you want to install other languages
During installation, you are asked if you want translation of user interface text into
languages other than the default, which is English.
Note: If the language set for the operating system is not supported by the
installer, then by default the installer runs in the English language.
See Also: Oracle Database Globalization Support Guide for detailed
information on character sets and language configuration
❏
Determine your cluster name, public node names, the SCAN, virtual node
names, GNS VIP and planned interface use for each node in the cluster
During installation, you are prompted to provide the public and virtual host
names, unless you use third-party cluster software, in which case the public
host name information is filled in for you. You are also prompted to identify
which interfaces are public, private, or in use for another purpose, such as
a network file system.
If you use Grid Naming Service (GNS), then OUI displays the public and virtual
host name addresses labeled as "AUTO" because they are configured
automatically.
Note: If you configure IP addresses manually, then avoid changing host
names after you complete the Oracle Grid Infrastructure installation,
including adding or deleting domain qualifications. A node with a new host
name is considered a new host, and must be added to the cluster. A node
under the old name will appear to be down until it is removed from the
cluster.
If you use third-party clusterware, then use your vendor documentation to
complete setup of your public and private domain addresses.
When you enter the public node name, use the primary host name of each node. In
other words, use the name displayed by the hostname command.
In addition:
–
Provide a cluster name with the following characteristics:
*
It must be globally unique throughout your host domain.
*
It must be at least one character long and less than or equal to 15
characters long.
*
It must consist of the same character set used for host names, in
accordance with RFC 1123: Hyphens (-), and single-byte alphanumeric
characters (a to z, A to Z, and 0 to 9). If you use third-party vendor
clusterware, then Oracle recommends that you use the vendor cluster
name.
–
If you are not using Grid Naming Service (GNS), then determine a virtual host
name for each node. A virtual host name is a public node name that is used to
reroute client requests sent to the node if the node is down. Oracle Database
uses VIPs for client-to-database connections, so the VIP address must be
publicly accessible. Oracle recommends that you provide a name in the format
hostname-vip. For example: myclstr2-vip.
–
Provide SCAN addresses for client access to the cluster. These addresses
should be configured as round robin addresses on the domain name service
(DNS). Oracle recommends that you supply three SCAN addresses.
Note: The following is a list of additional information about node IP
addresses:

■  For the local node only, OUI automatically fills in public and VIP
   fields. If your system uses vendor clusterware, then OUI may fill
   additional fields.
■  Host names and virtual host names are not domain-qualified. If you
   provide a domain in the address field during installation, then OUI
   removes the domain from the address.
■  Interfaces identified as private for private IP addresses should not be
   accessible as public interfaces. Using public interfaces for Cache
   Fusion can cause performance problems.

–  Identify public and private interfaces. OUI configures public interfaces
   for use by public and virtual IP addresses, and configures private IP
   addresses on private interfaces.
   The private subnet that the private interfaces use must connect all the
   nodes you intend to have as cluster members.
❏
Obtain proxy realm authentication information if you have a proxy realm on
your network
During installation, OUI attempts to download updates. You are prompted to
provide a proxy realm, and user authentication information to access the Internet
through the proxy service. If you have a proxy realm configured, then be prepared
to provide this information. If you do not have a proxy realm, then you can leave
the proxy authentication fields blank.
❏
Identify shared storage for Oracle Clusterware files and prepare storage if
necessary
During installation, you are asked to provide paths for the following Oracle
Clusterware files. These files must be shared across all nodes of the cluster, either
on Oracle ASM, or on a supported file system:
–
Voting disks are files that Oracle Clusterware uses to verify cluster node
membership and status.
Voting disk files must be owned by the user performing the installation
(oracle or grid), and must have permissions set to 640.
–
Oracle Cluster Registry files (OCR) contain cluster and database configuration
information for Oracle Clusterware.
Before installation, OCR files must be owned by the user performing the
installation (grid or oracle). That installation user must have oinstall as its
primary group. During installation, OUI changes ownership of the OCR files
to root.
If your file system does not have external storage redundancy, then Oracle
recommends that you provide two additional locations for the OCR disk, and two
additional locations for the voting disks, for a total of six partitions (three for OCR,
and three for voting disks). Creating redundant storage locations protects the OCR
and voting disk in the event of a failure. To completely protect your cluster, the
storage locations given for the copies of the OCR and voting disks should have
completely separate paths, controllers, and disks, so that no single point of failure
is shared by storage locations.
When you select to store the OCR on Oracle ASM, the default configuration is to
create the OCR on one Oracle ASM disk group. If you create the disk group with
normal or high redundancy, then the OCR is protected from physical disk failure.
To protect the OCR from logical disk failure, create another Oracle ASM disk
group after installation and add the OCR to the second disk group using the
ocrconfig command.
See Also: Chapter 2, "Advanced Installation Oracle Grid
Infrastructure for a Cluster Preinstallation Tasks" and Oracle Database
Storage Administrator's Guide for information about adding disks to
disk groups
❏
Ensure cron jobs do not run during installation
If the installer is running when daily cron jobs start, then you may encounter
unexplained installation problems if your cron job is performing cleanup, and
temporary files are deleted before the installation is finished. Oracle recommends
that you complete installation before daily cron jobs are run, or disable daily cron
jobs that perform cleanup until after the installation is completed.
❏
Have IPMI Configuration completed and have IPMI administrator account
information
If you intend to use IPMI, then ensure BMC interfaces are configured, and have an
administration account username and password to provide when prompted
during installation.
For nonstandard installations, if you must change configuration on one or more
nodes after installation (for example, if you have different administrator
usernames and passwords for BMC interfaces on cluster nodes), then decide if you
want to reconfigure the BMC interface, or modify IPMI administrator account
information after installation.
❏
Ensure that the Oracle home path you select for the Oracle Grid Infrastructure
home uses only ASCII characters
This restriction includes installation owner user names, which are used as a
default for some home paths, as well as other directory names you may select for
paths.
❏
Unset Oracle environment variables. If you have set ORA_CRS_HOME as an
environment variable, then unset it before starting an installation or upgrade. You
should never use ORA_CRS_HOME as an environment variable.
If you have had an existing installation on your system, and you are using the
same user account for this installation, then unset the following environment
variables: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, and TNS_ADMIN (an example
appears at the end of this checklist).
❏
Decide if you want to use the Software Updates option. OUI can install critical
patch updates, system requirements updates (hardware, operating system
parameters, and kernel packages) for supported operating systems, and other
significant updates that can help to ensure your installation proceeds smoothly.
Oracle recommends that you enable software updates during installation.
If you choose to enable software updates, then during installation you must
provide a valid My Oracle Support user name and password, so that OUI can
download the latest updates, or you must provide a path to the location of a
software updates package that you have downloaded previously.
If you plan to run the installation in a secured data center, then you can download
updates before starting the installation by starting OUI on a system that has
Internet access in update download mode. To start OUI to download updates,
enter the following command:
$ ./runInstaller -downloadUpdates
Provide the My Oracle Support user name and password, and provide proxy
settings if needed. After you download updates, transfer the update file to a
directory on the server where you plan to run the installation.
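For example, the environment variables listed in the checklist item above can
be cleared in a POSIX shell with a single command before you start the
installer:

$ unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN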
4.2 Installing Oracle Grid Infrastructure
This section provides you with information about how to use the installer to install
Oracle Grid Infrastructure. It contains the following sections:
■
Running OUI to Install Grid Infrastructure
■
Installing Grid Infrastructure Using a Cluster Configuration File
4.2.1 Running OUI to Install Grid Infrastructure
Complete the following steps to install Oracle Grid Infrastructure (Oracle Clusterware
and Oracle Automatic Storage Management) on your cluster. At any time during
installation, if you have a question about what you are being asked to do, click the
Help button on the OUI page.
1.
Change to the /Disk1 directory on the installation media, or where you have
downloaded the installation binaries, and run the runInstaller command. For
example:
$ cd /home/grid/oracle_sw/Disk1
$ ./runInstaller
2.
Select Typical or Advanced installation.
3.
Provide information or run scripts as root when prompted by OUI. If you need
assistance during installation, click Help. Click Details to see the log file. If
root.sh fails on any of the nodes, then you can fix the problem and follow the
steps in Section 6.5, "Deconfiguring Oracle Clusterware Without Removing
Binaries," rerun root.sh on that node, and continue.
Note: You must run the root.sh script on the first node and wait for it to
finish. If your cluster has four or more nodes, then root.sh can be run
concurrently on all nodes but the first and last. As with the first node,
the root.sh script on the last node must be run separately.
4.
After you run root.sh on all the nodes, OUI runs Net Configuration Assistant
(netca) and Cluster Verification Utility. These programs run without user
intervention.
5.
Oracle Automatic Storage Management Configuration Assistant (asmca)
configures Oracle ASM during the installation.
When you have verified that your Oracle Grid Infrastructure installation is completed
successfully, you can either use it to maintain high availability for other applications,
or you can install an Oracle database.
The following is a list of additional information to note about installation:
■
If you intend to install Oracle Database 11g Release 2 (11.2) with Oracle RAC, then
refer to Oracle Real Application Clusters Installation Guide for HP-UX.
See Also: Oracle Real Application Clusters Administration and
Deployment Guide for information about using cloning and node
addition procedures, and Oracle Clusterware Administration and
Deployment Guide for cloning Oracle Grid Infrastructure
■
When you run root.sh during Oracle Grid Infrastructure installation, the Trace
File Analyzer and Collector is also installed in the directory Grid_home/tfa.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about using Trace File Analyzer and Collector
4.2.2 Installing Grid Infrastructure Using a Cluster Configuration File
During installation of Oracle Grid Infrastructure, you are given the option either of
providing cluster configuration information manually, or of using a cluster
configuration file. A cluster configuration file is a text file that you can create before
starting OUI, which provides OUI with cluster node addresses that it requires to
configure the cluster.
Oracle suggests that you consider using a cluster configuration file if you intend to
perform repeated installations on a test cluster, or if you intend to perform an
installation on many nodes.
To create a cluster configuration file manually, start a text editor, and create a file that
provides the name of the public and virtual IP addresses for each cluster member
node, in the following format:
node1 node1-vip
node2 node2-vip
.
.
.
For example:
mynode1 mynode1-vip
mynode2 mynode2-vip
4.3 Installing Grid Infrastructure Using a Software-Only Installation
This section contains the following tasks:
■
Installing the Software Binaries
■
Configuring the Software Binaries
■
Configuring the Software Binaries Using a Response File
Note: Oracle recommends that only advanced users should perform the
software-only installation, as this installation option requires manual
postinstallation steps to enable the Oracle Grid Infrastructure software.
A software-only installation consists of installing Oracle Grid Infrastructure for a
cluster on one node.
If you use the Install Grid Infrastructure Software Only option during installation,
then this installs the software binaries on the local node. To complete the installation
for your cluster, you must perform the additional steps of configuring Oracle
Clusterware and Oracle ASM, creating a clone of the local installation, deploying this
clone on other nodes, and then adding the other nodes to the cluster.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about how to clone an Oracle Grid Infrastructure installation to
other nodes, and then adding them to the cluster
4.3.1 Installing the Software Binaries
To perform a software-only installation:
1.
On the local node, verify that the cluster node meets installation requirements
using the command runcluvfy.sh stage -pre crsinst (an example appears at the
end of this section). Ensure that you have completed all storage and server
preinstallation requirements.
2.
Run the runInstaller command from the relevant directory on the Oracle
Database 11g Release 2 (11.2) installation media or download directory. For
example:
$ cd /home/grid/oracle_sw/Disk1
$ ./runInstaller
3.
Complete a software-only installation of Oracle Grid Infrastructure on the first
node.
4.
When the software has been installed, run the orainstRoot.sh script when
prompted.
5.
For installations with Oracle RAC release 11.2.0.2 and later, proceed to step 6. For
installations with Oracle RAC release 11.2.0.1, to relink Oracle Clusterware with
the Oracle RAC option enabled, run commands similar to the following (in this
example, the Grid home is /u01/app/11.2.0/grid):
$ cd /u01/app/11.2.0/grid/
$ export ORACLE_HOME=`pwd`
$ cd rdbms/lib
$ make -f ins_rdbms.mk rac_on ioracle
6.
The root.sh script output provides information about how to proceed, depending
on the configuration you plan to complete in this installation. Make note of this
information.
However, ignore the instruction to run the roothas.pl script, unless you intend to
install Oracle Grid Infrastructure on a standalone server (Oracle Restart).
7.
On each remaining node, verify that the cluster node meets installation
requirements using the command runcluvfy.sh stage -pre crsinst. Ensure
that you have completed all storage and server preinstallation requirements.
8.
Use Oracle Universal Installer as described in steps 1 through 4 to install the
Oracle Grid Infrastructure software on every remaining node that you want to
include in the cluster, and complete a software-only installation of Oracle Grid
Infrastructure on every node.
Configure the cluster using the full OUI configuration wizard GUI as described
in Section 4.3.2, "Configuring the Software Binaries," or configure the cluster
using a response file as described in Section 4.3.3, "Configuring the Software
Binaries Using a Response File."
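The following is a minimal sketch of the cluster verification command
referenced in steps 1 and 7. The staging path and node names are illustrative
assumptions; replace them with your own installation media location and node
list.

$ cd /home/grid/oracle_sw/Disk1
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose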
4.3.2 Configuring the Software Binaries
Configure the software binaries by starting Oracle Grid Infrastructure configuration
wizard in GUI mode, available in release 11.2.0.2 and later:
1.
Log in to a terminal as the Grid infrastructure installation owner, and change
directory to grid_home/crs/config.
2.
Enter the following command:
$ ./config.sh
The configuration script starts OUI in Configuration Wizard mode. Provide
information as needed for configuration. Each page shows the same user interface
and performs the same validation checks that OUI normally does. However,
instead of running an installation, the configuration wizard mode validates
inputs and configures the installation on all cluster nodes.
3.
When you complete inputs, OUI shows you the Summary page, listing all inputs
you have provided for the cluster. Verify that the summary has the correct
information for your cluster, and click Install to start configuration of the local
node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
4.
When prompted, run root scripts.
5.
When you confirm that all root scripts are run, OUI checks the cluster
configuration status, and starts other configuration tools as needed.
4.3.3 Configuring the Software Binaries Using a Response File
When you install or copy Oracle Grid Infrastructure software on any node, you can
defer configuration for a later time. This section provides the procedure for completing
configuration after the software is installed or copied on nodes, using the
configuration wizard utility (config.sh), available with release 11.2.0.2 and later.
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about the configuration wizard.
To configure the Oracle Grid Infrastructure software binaries using a response file:
1.
As the Oracle Grid Infrastructure installation owner (grid), start OUI in Oracle
Grid Infrastructure configuration wizard mode from the Oracle Grid
Infrastructure software-only home using the following syntax, where Grid_home
is the Oracle Grid Infrastructure home, and filename is the response file name:
Grid_home/crs/config/config.sh [-debug] [-silent -responseFile filename]
For example:
$ cd /u01/app/grid/crs/config/
$ ./config.sh -responseFile /u01/app/grid/response/response_file.rsp
The configuration script starts OUI in Configuration Wizard mode. Each page
shows the same user interface and performs the same validation checks that OUI
normally does. However, instead of running an installation, the configuration
wizard mode validates inputs and configures the installation on all cluster nodes.
2.
When you complete inputs, OUI shows you the Summary page, listing all inputs
you have provided for the cluster. Verify that the summary has the correct
information for your cluster, and click Install to start configuration of the local
node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
3.
When prompted, run root scripts.
4.
When you confirm that all root scripts are run, OUI checks the cluster
configuration status, and starts other configuration tools as needed.
4.4 Confirming Oracle Clusterware Function
After installation, log in as root, and use the following command syntax on each node
to confirm that your Oracle Clusterware installation is installed and running correctly:
crsctl check crs
For example:
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Caution: After installation is complete, do not manually remove, or run cron
jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their
files while Oracle Clusterware is up. If you remove these files, then Oracle
Clusterware could encounter intermittent hangs, and you will encounter error
CRS-0184: Cannot communicate with the CRS daemon.
4.5 Confirming Oracle ASM Function for Oracle Clusterware Files
If you installed the OCR and voting disk files on Oracle ASM, then use the following
command syntax as the Oracle Grid Infrastructure installation owner to confirm that
your Oracle ASM installation is running:
srvctl status asm
For example:
$ srvctl status asm
ASM is running on node1,node2
Oracle ASM is running only if it is needed for Oracle Clusterware files. If you
have not installed OCR and voting disk files on Oracle ASM, then the Oracle ASM
instance should be down.
Note: To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later
installations, use the srvctl binary in the Oracle Grid Infrastructure home
for a cluster (Grid home). If you have Oracle Real Application Clusters or
Oracle Database installed, then you cannot use the srvctl binary in the
database home to manage Oracle ASM or Oracle Net.
5
Oracle Grid Infrastructure Postinstallation
Procedures
This chapter describes how to complete the postinstallation tasks after you have
installed the Oracle Grid Infrastructure software.
This chapter contains the following topics:
■
Required Postinstallation Tasks
■
Recommended Postinstallation Tasks
■
Using Older Oracle Database Versions with Grid Infrastructure
■
Modifying Oracle Clusterware Binaries After Installation
5.1 Required Postinstallation Tasks
You must perform the following tasks after completing your installation:
■
Download and Install Patch Updates
Note: In prior releases, backing up the voting disks using a dd command was a
required postinstallation task. With Oracle Clusterware release 11.2 and later,
backing up and restoring a voting disk using the dd command may result in the
loss of the voting disk, so this procedure is not supported.
5.1.1 Download and Install Patch Updates
Refer to the My Oracle Support web site for required patch updates for your
installation.
Note: Browsers require an Adobe Flash plug-in, version 9.0.115 or higher to
use My Oracle Support. Check your browser for the correct version of Flash
plug-in by going to the Adobe Flash checker page, and installing the latest
version of Adobe Flash.

If you do not have Flash installed, then download the latest version of the
Flash Player from the Adobe web site:

http://www.adobe.com/go/getflashplayer
To download required patch updates:
1.
Use a web browser to view the My Oracle Support web site:
https://support.oracle.com
2.
Log in to My Oracle Support web site.
Note: If you are not a My Oracle Support registered user, then click
Register for My Oracle Support and register.
3.
On the main My Oracle Support page, click Patches & Updates.
4.
On the Patches & Updates page, in the Patch Search frame, type Oracle Database,
select the release and your platform from the list selections, and click Search.
5.
Any available patch updates appear under the Patch Results heading.
6.
Click the patch number to download the patch.
7.
On the Patch Set page, click View README and read the page that appears. The
README page contains information about the patch set and how to apply the
patches to your installation.
8.
Return to the Patch Set page, click Download, and save the file on your system.
9.
Use the unzip utility provided with Oracle Database 11g release 2 (11.2) to
uncompress the Oracle patch updates that you downloaded from My Oracle
Support. The unzip utility is located in the $ORACLE_HOME/bin directory.
10. Refer to Appendix E on page E-1 for information about how to stop database
processes in preparation for installing patches.
5.2 Recommended Postinstallation Tasks
Oracle recommends that you complete the following tasks as needed after installing
Oracle Grid Infrastructure:
■
Back Up the root.sh Script
■
Configure IPMI-based Failure Isolation Using Crsctl
■
Tune Semaphore Parameters
■
Create a Fast Recovery Area Disk Group
■
Running RACcheck Configuration Audit Tool
5.2.1 Back Up the root.sh Script
Oracle recommends that you back up the root.sh script after you complete an
installation. If you install other products in the same Oracle home directory, then the
installer updates the contents of the existing root.sh script during the installation. If
you require information contained in the original root.sh script, then you can recover
it from the root.sh file copy.
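For example, assuming the Grid home is /u01/app/11.2.0/grid, a copy can be
made with a command similar to the following; the backup file name is an
illustrative assumption:

$ cp /u01/app/11.2.0/grid/root.sh /u01/app/11.2.0/grid/root.sh.backup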
5.2.2 Configure IPMI-based Failure Isolation Using Crsctl
On HP-UX platforms, where Oracle does not currently support the native IPMI
driver, DHCP addressing is not supported and manual configuration is required
for IPMI support. OUI will not collect the administrator credentials, so the
BMC must be configured with a static IP address, and the address must be
stored manually in the OLR.
To configure Failure Isolation using IPMI, complete the following steps on each cluster
member node:
1.
If necessary, start Oracle Clusterware using the following command:
$ crsctl start crs
2.
Use the BMC management utility to obtain the BMC’s IP address and then use the
cluster control utility crsctl to store the BMC’s IP address in the Oracle Local
Registry (OLR) by issuing the crsctl set css ipmiaddr address command. For
example:
$ crsctl set css ipmiaddr 192.168.10.45
3.
Enter the following crsctl command to store the user ID and password for the
resident BMC in the OLR, where the noname user is the IPMI administrator user
account, and provide the password when prompted:
$ crsctl set css ipmiadmin ""
IPMI BMC Password:
This command attempts to validate the credentials you enter by sending them to
another cluster node. The command fails if that cluster node is unable to access the
local BMC using the credentials.
When you store the IPMI credentials in the OLR, you must have the anonymous
user specified explicitly, or a parsing error will be reported.
5.2.3 Tune Semaphore Parameters
Refer to the following guidelines only if the default semaphore parameter values are
too low to accommodate all Oracle processes:
Note: Oracle recommends that you refer to the operating system documentation
for more information about setting semaphore parameters.
1.
Calculate the minimum total semaphore requirements using the following
formula:
2 * sum (process parameters of all database instances on the system) + overhead
for background processes + system and other application requirements
2.
Set semmns (total semaphores systemwide) to this total.
3.
Set semmsl (semaphores for each set) to 250.
4.
Set semmni (total semaphores sets) to semmns divided by semmsl, rounded up to the
nearest multiple of 1024.
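On HP-UX 11i v3, kernel tunables of this kind are typically adjusted with the
kctune utility. The following is a minimal sketch only; the parameter names
are taken from the steps above and their availability can vary by operating
system release, and the values are placeholders chosen to be consistent with
the formula (semmsl of 250, semmni of 1024, and semmns equal to semmsl
multiplied by semmni), not recommendations for your system.

# Run as root. Query the current values first, for example: /usr/sbin/kctune semmns
/usr/sbin/kctune semmsl=250
/usr/sbin/kctune semmni=1024
/usr/sbin/kctune semmns=256000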
5.2.4 Create a Fast Recovery Area Disk Group
During installation, by default you can create one disk group. If you plan to add an
Oracle Database for a standalone server or an Oracle RAC database, then you should
create the Fast Recovery Area for database files.
5.2.4.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group
The Fast Recovery Area is a unified storage location for all Oracle Database files
related to recovery. Database administrators can define the DB_RECOVERY_FILE_
DEST parameter to the path for the Fast Recovery Area to enable on-disk backups, and
rapid recovery of data. Enabling rapid backups for recent data can reduce requests to
system administrators to retrieve backup tapes for recovery operations.
When you enable Fast Recovery in the init.ora file, all RMAN backups, archive logs,
control file automatic backups, and database copies are written to the Fast Recovery
Area. RMAN automatically manages files in the Fast Recovery Area by deleting
obsolete backups and archive files no longer required for recovery.
Oracle recommends that you create a Fast Recovery Area disk group. Oracle
Clusterware files and Oracle Database files can be placed on the same disk group, and
you can also place Fast Recovery files in the same disk group. However, Oracle
recommends that you create a separate Fast Recovery disk group to reduce storage
device contention.
The Fast Recovery Area is enabled by setting DB_RECOVERY_FILE_DEST. The size of
the Fast Recovery Area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general
rule, the larger the Fast Recovery Area, the more useful it becomes. For ease of use,
Oracle recommends that you create a Fast Recovery Area disk group on storage
devices that can contain at least three days of recovery information. Ideally, the Fast
Recovery Area should be large enough to hold a copy of all of your data files and
control files, the online redo logs, and the archived redo log files needed to recover
your database using the data file backups kept under your retention policy.
Multiple databases can use the same Fast Recovery Area. For example, assume you
have created one Fast Recovery Area disk group on disks with 150 GB of storage,
shared by three different databases. You can set the size of the Fast Recovery Area for
each database depending on the importance of each database. For example, if
database1 is your least important database, database 2 is of greater importance and
database 3 is of greatest importance, then you can set different DB_RECOVERY_FILE_
DEST_SIZE settings for each database to meet your retention target for each database:
30 GB for database 1, 50 GB for database 2, and 70 GB for database 3.
See Also:
Oracle Database Storage Administrator's Guide
5.2.4.2 Creating the Fast Recovery Area Disk Group
To create a Fast Recovery file disk group:
1.
Navigate to the Grid home bin directory, and start Oracle ASM Configuration
Assistant (asmca). For example:
$ cd /u01/app/11.2.0/grid/bin
$ ./asmca
2.
ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.
3.
The Create Disk Groups window opens.
In the Disk Group Name field, enter a descriptive name for the Fast Recovery Area
group. For example: FRA.
In the Redundancy section, select the level of redundancy you want to use.
In the Select Member Disks field, select eligible disks to be added to the Fast
Recovery Area, and click OK.
4.
The Diskgroup Creation window opens to inform you when disk group creation is
complete. Click OK.
5.
Click Exit.
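After the disk group is created, a database can be directed to use it by
setting the initialization parameters described in the previous section. The
following is a minimal sketch, assuming a disk group named FRA, a 70 GB
recovery area, a database started with an spfile, and SYSDBA access from the
database home; adjust the names and size for your environment.

$ sqlplus / as sysdba <<EOF
ALTER SYSTEM SET db_recovery_file_dest_size = 70G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
EOF

Note that the size parameter must be set before the destination parameter,
which is the order shown above.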
5.2.5 Running RACcheck Configuration Audit Tool
Oracle recommends that you run the RACcheck audit tool to check your Oracle RAC
installation. RACcheck is an Oracle RAC auditing tool that checks various important
configuration settings within Oracle Real Application Clusters, Oracle Clusterware,
Oracle Automatic Storage Management and the Oracle Grid Infrastructure
environment.
For information about configuring and running RACcheck utility, refer to My Oracle
Support note 1268927.1, which is available at the following URL:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1268927.1
5.3 Using Older Oracle Database Versions with Grid Infrastructure
Review the following sections for information about using older Oracle Database
releases with 11g release 2 (11.2) Oracle Grid Infrastructure installations:
■
General Restrictions for Using Older Oracle Database Versions
■
Using ASMCA to Administer Disk Groups for Older Database Versions
■
Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x
■
Enabling The Global Services Daemon (GSD) for Oracle Database Release 9.2
■
Using the Correct LSNRCTL Commands
5.3.1 General Restrictions for Using Older Oracle Database Versions
You can use Oracle Database release 9.2, release 10.x and release 11.1 with Oracle
Clusterware 11g release 2 (11.2).
If you upgrade an existing version of Oracle Clusterware and Oracle ASM to Oracle
Grid Infrastructure release 11.2 (which includes Oracle Clusterware and Oracle ASM),
and you also plan to upgrade your Oracle RAC database to 11.2, then the required
configuration of existing databases is completed automatically when you complete the
Oracle RAC upgrade, and this section does not concern you.
However, if you upgrade to release 11.2 Oracle Grid Infrastructure, and you have
existing Oracle RAC installations you do not plan to upgrade, or if you install older
versions of Oracle RAC (9.2, 10.2 or 11.1) on a release 11.2 Oracle Grid Infrastructure
cluster, then you must complete additional configuration tasks or apply patches, or
both, before the older databases will work correctly with Oracle Grid Infrastructure.
Note: Before you start an Oracle RAC or Oracle Database installation on an
Oracle Clusterware release 11.2 installation, if you are upgrading from
releases 11.1.0.7, 11.1.0.6, and 10.2.0.4, then Oracle recommends that you
check for the latest recommended patches for the release you are upgrading
from, and install those patches as needed on your existing database
installations before upgrading.

For more information on recommended patches, refer to "Oracle Upgrade
Companion," which is available through Note 785351.1 on My Oracle Support:

https://support.oracle.com

You may also refer to Notes 756388.1 and 756671.1 for the current list of
recommended patches for each release.
5.3.2 Using ASMCA to Administer Disk Groups for Older Database Versions
Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups
when you install older Oracle databases and Oracle RAC databases on Oracle Grid
Infrastructure installations. Starting with 11g release 2, Oracle ASM is installed as part
of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no
longer use Database Configuration Assistant (DBCA) to perform administrative tasks
on Oracle ASM.
5.3.3 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x
When Oracle Clusterware 11g release 2 (11.2) is installed on a cluster with no
previous Oracle software version, it configures cluster nodes dynamically,
which is compatible with Oracle Database release 11.2 and later, but Oracle
Database 10g and 11.1 require a persistent configuration. This process of
associating a node name with a node number is called pinning.
Note: During an upgrade, all cluster member nodes are pinned automatically,
and no manual pinning is required for existing databases. This procedure is
required only if you install older database versions after installing Oracle
Grid Infrastructure release 11.2 software.
To pin a node in preparation for installing or using an older Oracle Database version,
use Grid_home/bin/crsctl with the following command syntax, where nodes is a
space-delimited list of one or more nodes in the cluster whose configuration you want
to pin:
crsctl pin css -n nodes
For example, to pin nodes node3 and node4, log in as root and enter the
following command:
# crsctl pin css -n node3 node4
To determine if a node is in a pinned or unpinned state, use Grid_home/bin/olsnodes
with the following command syntax:
To list all pinned nodes:
olsnodes -t -n
For example:
# /u01/app/11.2.0/grid/bin/olsnodes -t -n
node1   1   Pinned
node2   2   Pinned
node3   3   Pinned
node4   4   Pinned
To list the state of a particular node:
olsnodes -t -n node3
For example:
# /u01/app/11.2.0/grid/bin/olsnodes -t -n node3
node3   3   Pinned
See Also: Oracle Clusterware Administration and Deployment Guide for more information about pinning and unpinning nodes
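To reverse the operation later, for example after an older database no longer runs on those nodes, you can unpin the nodes. A brief illustration (node names are placeholders), run as root:
# crsctl unpin css -n node3 node4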
5.3.4 Enabling The Global Services Daemon (GSD) for Oracle Database Release 9.2
By default, the Global Services daemon (GSD) is disabled. If you install Oracle
Database 9i release 2 (9.2) on Oracle Grid Infrastructure for a Cluster 11g release 2
(11.2), then you must enable the GSD. Use the following commands to enable the GSD
before you install Oracle Database release 9.2:
srvctl enable nodeapps -g
srvctl start nodeapps
5.3.5 Using the Correct LSNRCTL Commands
To administer 11g release 2 local and SCAN listeners using the lsnrctl command, set your $ORACLE_HOME environment variable to the path of the Oracle Grid Infrastructure home (Grid home). Do not attempt to use the lsnrctl commands from Oracle home locations for previous releases, because they cannot be used with the new release.
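For example, assuming the Grid home is /u01/app/11.2.0/grid (an illustrative path) and the default SCAN listener name LISTENER_SCAN1, you can check listener status as the grid user as follows:
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ $ORACLE_HOME/bin/lsnrctl status
$ $ORACLE_HOME/bin/lsnrctl status LISTENER_SCAN1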
5.4 Modifying Oracle Clusterware Binaries After Installation
After installation, if you need to modify the Oracle Clusterware configuration, then
you must unlock the Grid home.
For example, if you want to apply a one-off patch, or if you want to modify an Oracle
Exadata configuration to run IPC traffic over RDS on the interconnect instead of using
the default UDP, then you must unlock the Grid home.
Caution: Before relinking executables, you must shut down all
executables that run in the Oracle home directory that you are
relinking. In addition, shut down applications linked with Oracle
shared libraries.
Unlock the home using the following procedure:
1. Change directory to the path Grid_home/crs/install, where Grid_home is the path to the Grid home, and unlock the Grid home using the command rootcrs.pl
-unlock -crshome Grid_home, where Grid_home is the path to your Grid
infrastructure home. For example, with the grid home /u01/app/11.2.0/grid,
enter the following command:
# cd /u01/app/11.2.0/grid/crs/install
# perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid
2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries
using the command syntax make -f Grid_home/rdbms/lib/ins_rdbms.mk target,
where Grid_home is the Grid home, and target is the binaries to relink. For example,
where the grid user is grid, $ORACLE_HOME is set to the Grid home, and where
you are updating the interconnect protocol from UDP to IPC, enter the following
command:
# su grid
$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Note: To relink binaries, you can also change to the grid installation owner and run the command Grid_home/bin/relink.
3. Relock the Grid home and restart the cluster using the following command:
# perl rootcrs.pl -patch
4. Repeat steps 1 through 3 on each cluster member node.
6 How to Modify or Deinstall Oracle Grid Infrastructure
This chapter describes how to remove Oracle Clusterware and Oracle ASM.
Starting with Oracle Database 11g Release 2 (11.2), Oracle recommends that you use
the deinstallation tool to remove the entire Oracle home associated with the Oracle
Database, Oracle Clusterware, Oracle ASM, Oracle RAC, or Oracle Database client
installation. Oracle does not support the removal of individual products or
components.
This chapter contains the following topics:
■ Deciding When to Deinstall Oracle Clusterware
■ Migrating Standalone Grid Infrastructure Servers to a Cluster
■ Relinking Oracle Grid Infrastructure for a Cluster Binaries
■ Changing the Oracle Grid Infrastructure Home Path
■ Deconfiguring Oracle Clusterware Without Removing Binaries
■ Removing Oracle Clusterware and ASM
See Also: Product-specific documentation for requirements and
restrictions to remove an individual product
6.1 Deciding When to Deinstall Oracle Clusterware
Remove installed components in the following situations:
■ You have successfully installed Oracle Clusterware, and you want to remove the Oracle Clusterware installation, either in an educational environment, or a test environment.
■ You have encountered errors during or after installing or upgrading Oracle Clusterware, and you want to reattempt an installation.
■ Your installation or upgrade stopped because of a hardware or operating system failure.
■ You are advised by Oracle Support to reinstall Oracle Clusterware.
6.2 Migrating Standalone Grid Infrastructure Servers to a Cluster
If you have an Oracle Database installation using Oracle Restart (that is, an Oracle
Grid Infrastructure installation for a standalone server), and you want to configure
that server as a cluster member node, then complete the following tasks:
Note: This procedure uses the Oracle Clusterware Configuration Wizard, available with release 11.2.0.2 and later.
1. Inspect the Oracle Restart configuration with srvctl using the following syntax,
where db_unique_name is the unique name for the database, and lsnrname is the
name of the listeners:
srvctl config database -d db_unique_name
srvctl config service -d db_unique_name
srvctl config listener -l lsnrname
Write down the configuration information for the server.
2. Change directory to Grid_home/crs/install. For example:
# cd /u01/app/11.2.0/grid/crs/install
3. Stop all of the databases, services, and listeners that you discovered in step 1.
4. Deconfigure and deinstall the Oracle Grid Infrastructure installation for a
standalone server, using the following command:
# roothas.pl -deconfig -force
5. Prepare the server for Oracle Clusterware configuration, as described in this
document.
6. As the Oracle Grid Infrastructure installation owner, run Oracle Clusterware
Configuration Wizard, and save and stage the response file. For example:
$ Grid_home/crs/config/config.sh -silent -responseFile $HOME/GI.rsp
7. Run root.sh for the Oracle Clusterware Configuration Wizard.
8. Mount the Oracle Restart disk group.
9. Enter the volenable command to enable all Oracle Restart disk group volumes.
10. Add back Oracle Clusterware services to the Oracle Clusterware home, using the
information you wrote down in step 1. For example:
/u01/app/grid/product/11.2.0/grid/bin/srvctl add filesystem -d /dev/asm/db1 -g
ORestartData -v db1 -m /u01/app/grid/product/11.2.0/db1 -u grid
11. Add the Oracle Database for support by Oracle Grid Infrastructure for a cluster,
using the configuration information you recorded in step 1. Use the following
command syntax, where db_unique_name is the unique name of the database on the
node, and nodename is the name of the node:
srvctl add database -d db_unique_name -o $ORACLE_HOME -x nodename
For example, with the database name mydb, enter the following command:
srvctl add database -d mydb -o $ORACLE_HOME -x node1
12. Add each service to the database, using the command srvctl add service. For
example:
srvctl add service -d mydb -s myservice
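After adding the database and service, you can confirm that the new resources are registered with Oracle Clusterware, for example (using the illustrative names from the preceding steps):
$ srvctl status database -d mydb
$ srvctl config service -d mydb -s myservice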
6.3 Relinking Oracle Grid Infrastructure for a Cluster Binaries
After installing Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle
ASM configured for a cluster), if you need to modify the binaries, then use the
following procedure, where Grid_home is the Oracle Grid Infrastructure for a Cluster
home:
Caution: Before relinking executables, you must shut down all
executables that run in the Oracle home directory that you are
relinking. In addition, shut down applications linked with Oracle
shared libraries.
As root:
# cd Grid_home/crs/install
# perl rootcrs.pl -unlock
As the Oracle Grid Infrastructure for a Cluster owner:
$ export ORACLE_HOME=Grid_home
$ Grid_home/bin/relink
As root again:
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# perl rootcrs.pl -patch
You must relink the Oracle Clusterware and Oracle ASM binaries every time you
apply an operating system patch or after an operating system upgrade.
For upgrades from previous releases, if you want to deinstall the prior release Grid
home, then you must first unlock the prior release Grid home. Unlock the previous
release Grid home by running the command rootcrs.pl -unlock from the previous
release home. After the script has completed, you can run the deinstall command.
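For example, if the previous release Grid home is /u01/app/11.2.0.2/grid (an illustrative path), a sketch of the sequence is to unlock that home as root, and then run the deinstall command from it as the Grid installation owner:
# cd /u01/app/11.2.0.2/grid/crs/install
# perl rootcrs.pl -unlock
$ /u01/app/11.2.0.2/grid/deinstall/deinstall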
6.4 Changing the Oracle Grid Infrastructure Home Path
With release 11.2.0.3 and later, after installing Oracle Grid Infrastructure for a cluster
(Oracle Clusterware and Oracle ASM configured for a cluster), if you must change the
Grid home path, then use the following example as a guide to detach the existing Grid
home, and to attach a new Grid home:
Caution: Before changing the Grid home, you must shut down all executables that run in the Grid home directory that you are moving. In addition, shut down applications linked with Oracle shared libraries.
1. Detach the existing Grid home by running the following command as the Oracle
Grid Infrastructure installation owner (grid), where /u01/app/11.2.0/grid is the
existing Grid home location:
$ cd /u01/app/11.2.0/grid/oui/bin
$ ./detachhome.sh -silent -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc
2. As root, move the Grid binaries from the old Grid home location to the new Grid
home location. For example, where the old Grid home is /u01/app/11.2.0/grid
and the new Grid home is /u01/app/grid:
# mv /u01/app/11.2.0/grid /u01/app/grid
3. Clone the Oracle Grid Infrastructure installation, using the instructions provided
in "Creating a Cluster by Cloning Oracle Clusterware Step 3: Run the clone.pl
Script on Each Destination Node," in Oracle Clusterware Administration and
Deployment Guide.
When you navigate to the Grid_home/clone/bin directory and run the clone.pl script, provide values for the input parameters that supply the path information for the new Grid home (a sketch of this invocation follows this procedure).
4. As root again, enter the following commands to start up in the new home location:
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# perl rootcrs.pl -patch -destcrshome /u01/app/grid
5. Repeat steps 1 through 4 on each cluster member node.
You must relink the Oracle Clusterware and Oracle ASM binaries every time you
move the Grid home.
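The following is a hedged sketch of the clone.pl invocation referred to in step 3, assuming the new Grid home is /u01/app/grid, the home name is OraGridHome1, and the cluster nodes are node1 and node2; the parameter names shown here are illustrative, and the authoritative parameter list is in Oracle Clusterware Administration and Deployment Guide:
$ cd /u01/app/grid/clone/bin
$ perl clone.pl ORACLE_HOME=/u01/app/grid ORACLE_HOME_NAME=OraGridHome1 ORACLE_BASE=/u01/app/oracle "CLUSTER_NODES={node1,node2}" "LOCAL_NODE=node1" CRS=TRUE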
6.5 Deconfiguring Oracle Clusterware Without Removing Binaries
Running the rootcrs.pl command with the flags -deconfig -force enables you to
deconfigure Oracle Clusterware on one or more nodes without removing installed
binaries. This feature is useful if you encounter an error on one or more cluster nodes
during installation when running the root.sh command, such as a missing operating
system package on one node. By running rootcrs.pl -deconfig -force on nodes
where you encounter an installation error, you can deconfigure Oracle Clusterware on
those nodes, correct the cause of the error, and then run root.sh again.
Note: Stop any databases, services, and listeners that may be installed and running before deconfiguring Oracle Clusterware.
Caution: Commands used in this section remove the Oracle Grid Infrastructure installation for the entire cluster. If you want to remove
the installation from an individual node, then refer to Oracle
Clusterware Administration and Deployment Guide.
To deconfigure Oracle Clusterware:
1. Log in as the root user on a node where you encountered an error.
2. Change directory to Grid_home/crs/install. For example:
# cd /u01/app/11.2.0/grid/crs/install
3. Run rootcrs.pl with the -deconfig -force flags. For example:
# perl rootcrs.pl -deconfig -force
Repeat on other nodes as required.
4. If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the
last node, enter the following command:
# perl rootcrs.pl -deconfig -force -lastnode
The -lastnode flag completes deconfiguration of the cluster, including the OCR
and voting disks.
6.6 Removing Oracle Clusterware and ASM
The deinstall command removes Oracle Clusterware and Oracle ASM from your
server. The following sections describe the command, and provide information about
additional options to use the command:
■ About the Deinstallation Tool
■ Deinstalling Previous Release Grid Home
■ Downloading The Deinstall Tool for Use with Failed Installations
■ Running The Deinstallation Tool
■ Deinstall Command Example for Oracle Clusterware and Oracle ASM
■ Deinstallation Parameter File Example for Grid Infrastructure for a Cluster
Caution: You must use the deinstallation tool from the same release
to remove Oracle software. Do not run the deinstallation tool from a
later release to remove Oracle software from an earlier release. For
example, do not run the deinstallation tool from the 12.1.0.1
installation media to remove Oracle software from an existing 11.2.0.4
Oracle home.
6.6.1 About the Deinstallation Tool
The Deinstallation Tool (deinstall) stops Oracle software, and removes Oracle
software and configuration files on the operating system. It is available in the
installation media before installation, and is available in Oracle home directories after
installation. It is located in the path $ORACLE_HOME/deinstall. The Deinstallation tool
command is also available for download from Oracle Technology Network (OTN) at
the following URL:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads
You can download the Deinstallation tool with the complete Oracle Database 11g
release 2 software, or as a separate archive file. To download the Deinstallation tool
separately, click the See All link next to the software version.
The Deinstallation tool command uses the information you provide and the
information gathered from the software home to create a parameter file. Alternatively,
you can use a parameter file generated previously by the deinstall command using
the -checkonly flag and -o flag. You can also edit a response file template to create a
parameter file.
The Deinstallation tool command stops Oracle software, and removes Oracle software
and configuration files on the operating system for a specific Oracle home. If you run
the Deinstallation tool to remove an Oracle Grid Infrastructure for a cluster
installation, then the tool prompts you to run the rootcrs.pl script as root.
Caution: When you run the deinstall command, if the central
inventory (oraInventory) contains no other registered homes besides
the home that you are deconfiguring and removing, then the deinstall
command removes the following files and directory contents in the
Oracle base directory of the Oracle RAC installation owner:
■ admin
■ cfgtoollogs
■ checkpoints
■ diag
■ oradata
■ flash_recovery_area
Oracle strongly recommends that you configure your installations
using an Optimal Flexible Architecture (OFA) configuration, and that
you reserve Oracle base and Oracle home paths for exclusive use of
Oracle software. If you have any user data in these locations in the
Oracle base that is owned by the user account that owns the Oracle
software, then the deinstall command deletes this data.
The command uses the following syntax, where variable content is indicated by italics:
deinstall -home complete path of Oracle home [-silent] [-checkonly] [-local]
[-paramfile complete path of input parameter property file]
[-params name1=value name2=value . . .] [-o complete path of directory for saving
files] -h
The default method for running the deinstall tool is from the deinstall directory in the
Grid home. For example:
$ cd /u01/app/11.2.0/grid/deinstall
$ ./deinstall
In addition, you can run the deinstall tool from other locations, or with a parameter
file, or select other options to run the tool.
The options are:
■ -home
Use this flag to indicate the home path of the Oracle home to check or deinstall. To
deinstall Oracle software using the deinstall command in the Oracle home you
plan to deinstall, provide a parameter file in another location, and do not use the
-home flag.
If you run deinstall from the $ORACLE_HOME/deinstall path, then the -home
flag is not required because the tool knows from which home it is being run. If you
use the standalone version of the tool, then -home is mandatory.
■ -silent
Use this flag to run the command in noninteractive mode. This option requires a
properties file that contains the configuration values for the Oracle home that is
being deinstalled or deconfigured.
To create a properties file and provide the required parameters, refer to the
template file deinstall.rsp.tmpl, located in the response folder of the
Deinstallation tool home or the Oracle home.
If you have a working system, then instead of using the template file, you can
generate a properties file by running the deinstall command using the
-checkonly flag. The deinstall command then discovers information from the
Oracle home to deinstall and deconfigure. It generates the properties file, which
you can then use with the -silent option.
■ -checkonly
Use this flag to check the status of the Oracle software home configuration.
Running the command with the checkonly flag does not remove the Oracle
configuration.
■ -local
Use this flag on a multinode non-shared environment to deconfigure Oracle
software in a cluster.
When you run deinstall with this flag, it deconfigures and deinstalls the Oracle
software on the local node (the node where deinstall is run) for non-shared home
directories. On remote nodes, it deconfigures Oracle software, but does not
deinstall the Oracle software.
■ -paramfile complete path of input parameter property file
Use this flag to run deinstall with a parameter file in a location other than the
default. When you use this flag, provide the complete path where the parameter
file is located. If you run the deinstall command from the Oracle home that you
plan to deinstall, then you do not need to use the -paramfile flag.
The default location of the parameter file depends on the location of deinstall:
– From the installation media or stage location: $ORACLE_HOME/inventory/response.
– From an unzipped archive file from Oracle Technology Network (OTN): /ziplocation/response.
– After installation from the installed Oracle home: $ORACLE_HOME/deinstall/response.
■ -params [name1=value name2=value name3=value ...]
Use this flag with a parameter file to override one or more values in a parameter file you have already created.
■ -o complete path of directory for saving response files
Use this flag to provide a path other than the default location where the properties
file (deinstall.rsp.tmpl) is saved.
The default location of the parameter file depends on the location of deinstall:
– From the installation media or stage location before installation: $ORACLE_HOME/.
– From an unzipped archive file from OTN: /ziplocation/response/.
– After installation from the installed Oracle home: $ORACLE_HOME/deinstall/response.
■ -h
Use the help option (-h) to obtain additional information about the command option flags.
6.6.2 Deinstalling Previous Release Grid Home
For upgrades from previous releases, if you want to deinstall the previous release Grid
home, then as the root user, you must manually change the permissions of the
previous release Grid home, and then run the deinstall command.
For example:
# chown -R grid:oinstall /u01/app/grid/11.2.0
# chmod -R 775 /u01/app/grid/11.2.0
In this example, /u01/app/grid/11.2.0 is the previous release Grid home.
6.6.3 Downloading The Deinstall Tool for Use with Failed Installations
You can use the Deinstallation tool (deinstall) to remove failed or incomplete
installations. It is available as a separate download from the Oracle Technology
Network (OTN) web site.
To download the Deinstallation tool:
1. Go to the following URL:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
2. Under Oracle Database 11g Release 2, click See All for the respective platform for
which you want to download the Deinstallation Tool.
The Deinstallation tool is available for download at the end of this page.
6.6.4 Running The Deinstallation Tool
Oracle recommends that you run the deinstall command as the Oracle Grid
Infrastructure installation owner or the grid user that owns Oracle Clusterware.
After downloading the Deinstallation tool, ensure that:
■ The user running the Deinstallation tool is the same on all cluster member nodes
■ User equivalence is set up
To run the Deinstallation tool, follow these steps:
1. Download or copy the Deinstallation tool to the grid user home directory.
2. Verify that the deinstall binary is owned by the grid user.
3. Run the Deinstallation tool.
6.6.5 Deinstall Command Example for Oracle Clusterware and Oracle ASM
As the deinstall command runs, you are prompted to provide the home directory of
the Oracle software to remove from your system. Provide additional information as
prompted.
To run the deinstall command from an Oracle Grid Infrastructure for a cluster home,
enter the following command:
$ cd /u01/app/11.2.0/grid/deinstall/
$ ./deinstall
You can generate a deinstall parameter file by running the deinstall command with the -checkonly flag before you run the command to deinstall the home, or you can use the response file template and manually edit it to create the parameter file to use with the deinstall command.
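For example, to generate a parameter file from a working Grid home and then use it for a later silent deinstallation (the output directory and generated file name below are illustrative; the tool reports the actual name of the parameter file it creates):
$ cd /u01/app/11.2.0/grid/deinstall/
$ ./deinstall -checkonly -o /home/grid/deinstall_params
$ ./deinstall -silent -paramfile /home/grid/deinstall_params/deinstall_OraGrid.rsp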
6.6.6 Deinstallation Parameter File Example for Grid Infrastructure for a Cluster
You can run the deinstall command with the -paramfile option to use the values
you specify in the parameter file. The following is an example of a parameter file for a
cluster on nodes node1 and node2, in which the Oracle Grid Infrastructure for a cluster
software binary owner is grid, the Oracle Grid Infrastructure home (Grid home) is in
the path /u01/app/11.2.0/grid, the Oracle base (the Oracle base for Oracle Grid
Infrastructure, containing Oracle ASM log files, Oracle Clusterware logs, and other
administrative files) is /u01/app/11.2.0/grid/, the central Oracle Inventory home
(oraInventory) is /u01/app/oraInventory, the virtual IP addresses (VIP) are 192.0.2.2 and 192.0.2.4, and the local node (the node from which you run the deinstallation session) is node1:
#Copyright (c) 2005, 2006 Oracle Corporation. All rights reserved.
#Fri Feb 06 00:08:58 PST 2009
LOCAL_NODE=node1
HOME_TYPE=CRS
ASM_REDUNDANCY=\
ORACLE_BASE=/u01/app/11.2.0/grid/
VIP1_MASK=255.255.252.0
VOTING_DISKS=/u02/storage/grid/vdsk
SCAN_PORT=1522
silent=true
ASM_UPGRADE=false
ORA_CRS_HOME=/u01/app/11.2.0/grid
GPNPCONFIGDIR=$ORACLE_HOME
LOGDIR=/home/grid/SH/deinstall/logs/
GPNPGCONFIGDIR=$ORACLE_HOME
ORACLE_OWNER=grid
NODELIST=node1,node2
CRS_STORAGE_OPTION=2
NETWORKS="lan0"/192.0.2.1\:public,"lan1"/10.0.0.1\:cluster_interconnect
VIP1_IP=192.0.2.2
NETCFGJAR_NAME=netcfg.jar
ORA_DBA_GROUP=dba
CLUSTER_NODES=node1,node2
JREDIR=/u01/app/11.2.0/grid/jdk/jre
VIP1_IF=lan0
REMOTE_NODES=node2
VIP2_MASK=255.255.252.0
ORA_ASM_GROUP=asm
LANGUAGE_ID=AMERICAN_AMERICA.WE8ISO8859P1
CSS_LEASEDURATION=400
NODE_NAME_LIST=node1,node2
SCAN_NAME=node1scn
SHAREJAR_NAME=share.jar
HELPJAR_NAME=help4.jar
SILENT=false
local=false
INVENTORY_LOCATION=/u01/app/oraInventory
GNS_CONF=false
JEWTJAR_NAME=jewt4.jar
OCR_LOCATIONS=/u02/storage/grid/ocr
EMBASEJAR_NAME=oemlt.jar
ORACLE_HOME=/u01/app/11.2.0/grid
CRS_HOME=true
VIP2_IP=192.0.2.4
ASM_IN_HOME=n
EWTJAR_NAME=ewt3.jar
HOST_NAME_LIST=node1,node2
JLIBDIR=/u01/app/11.2.0/grid/jlib
VIP2_IF=lan0
VNDR_CLUSTER=false
CRS_NODEVIPS='node1-vip/255.255.252.0/lan0,node2-vip/255.255.252.0/lan0'
CLUSTER_NAME=node1-cluster
Note: Do not use quotation marks with variables except in the following cases:
■ Around addresses in CRS_NODEVIPS:
CRS_NODEVIPS='n1-vip/255.255.252.0/lan0,n2-vip/255.255.252.0/lan0'
■ Around interface names in NETWORKS:
NETWORKS="lan0"/192.0.2.1\:public,"lan1"/10.0.0.1\:cluster_interconnect
VIP1_IP=192.0.2.2
A Troubleshooting the Oracle Grid Infrastructure Installation Process
This appendix provides troubleshooting information for installing Oracle Grid
Infrastructure.
See Also: The Oracle Database 11g Oracle RAC documentation set
included with the installation media in the Documentation directory:
■ Oracle Clusterware Administration and Deployment Guide
■ Oracle Real Application Clusters Administration and Deployment Guide
This appendix contains the following topics:
■ General Installation Issues
■ Interpreting CVU "Unknown" Output Messages Using Verbose Mode
■ Interpreting CVU Messages About Oracle Grid Infrastructure Setup
■ About the Oracle Grid Infrastructure Alert Log
■ Performing Cluster Diagnostics During Oracle Grid Infrastructure Installations
■ About Using CVU Cluster Healthchecks After Installation
■ Interconnect Configuration Issues
■ SCAN VIP and SCAN Listener Issues
■ Storage Configuration Issues
■ Completing an Installation Before Completing the Scripts
A.1 General Installation Issues
The following is a list of examples of types of errors that can occur during installation.
It contains the following issues:
■ An error occurred while trying to get the disks
■ CRS-5823:Could not initialize agent framework.
■ Failed to connect to server, Connection refused by server, or Can't open display
■ Failed to initialize ocrconfig
■ INS-32026 INSTALL_COMMON_HINT_DATABASE_LOCATION_ERROR
■ Nodes unavailable for selection from the OUI Node Selection screen
■ Node nodename is unreachable
■ PROT-8: Failed to import data from specified file to the cluster registry
■ PRVE-0038 : The SSH LoginGraceTime setting, or fatal: Timeout before authentication
■ Time stamp is in the future
■ Timed out waiting for the CRS stack to start
■ Other Installation Issues and Errors
An error occurred while trying to get the disks
Cause: There is an entry in /etc/oratab pointing to a non-existent Oracle home.
The OUI log file should show the following error: "java.io.IOException:
/home/oracle/OraHome//bin/kfod: not found"
Action: Remove the entry in /etc/oratab pointing to a non-existing Oracle home.
CRS-5823:Could not initialize agent framework.
Cause: Installation of Oracle Grid Infrastructure fails when you run root.sh.
Oracle Grid Infrastructure fails to start because the local host entry is missing from
the hosts file.
The Oracle Grid Infrastructure alert.log file shows the following:
[/oracle/app/grid/bin/orarootagent.bin(11392)]CRS-5823:Could not initialize
agent framework. Details at (:CRSAGF00120:) in
/oracle/app/grid/log/node01/agent/crsd/orarootagent_root/orarootagent_root.log
2010-10-04 12:46:25.857
[ohasd(2401)]CRS-2765:Resource 'ora.crsd' has failed on server 'node01'.
You can verify this as the cause by checking crsdOUT.log file, and finding the
following:
Unable to resolve address for localhost:2016
ONS runtime exiting
Fatal error: eONS: eonsapi.c: Aug 6 2009 02:53:02
Action: Add the local host entry in the hosts file.
Failed to connect to server, Connection refused by server, or Can't open display
Cause: These are typical of X Window display errors on Windows or UNIX
systems, where xhost is not properly configured, or where you are running as a
user account that is different from the account you used with the startx command
to start the X server.
Action: In a local terminal window, log in as the user that started the X Window
session, and enter the following command:
$ xhost fully_qualified_remote_host_name
For example:
$ xhost somehost.example.com
Then, enter the following commands, where workstation_name is the host name
or IP address of your workstation.
Bourne, Bash, or Korn shell:
$ DISPLAY=workstation_name:0.0
$ export DISPLAY
To determine whether X Window applications display correctly on the local
system, enter the following command:
$ xclock
The X clock should appear on your monitor.
If the X clock appears, then close the X clock and start Oracle Universal Installer
again.
Failed to initialize ocrconfig
Cause: You have the wrong options configured for NFS in the /etc/fstab file.
You can confirm this by checking ocrconfig.log files located in the path Grid_home/log/nodenumber/client and finding the following:
2007-10-30 11:23:52.101: [ OCROSD][3085960896]utopen:6'': OCR location
/u02/app/crs/clusterregistry, ret -1, errno 75, os err string Value too large for defined data type
Action: For file systems mounted on NFS, provide the correct mount
configuration for NFS mounts in the /etc/fstab file. For example:
rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid
Note: You should not have netdev in the mount instructions, or vers=2. The netdev option is only required for OCFS file systems, and vers=2 forces the kernel to mount NFS using the older version 2 protocol.
After correcting the NFS mount information, remount the NFS mount point, and
run the root.sh script again. For example, with the mount point /u02:
# umount /u02
# mount -a -t nfs
# cd $GRID_HOME
# sh root.sh
INS-32026 INSTALL_COMMON_HINT_DATABASE_LOCATION_ERROR
Cause: The location selected for the Grid home for a cluster installation is located
under an Oracle base directory.
Action: For Oracle Grid Infrastructure for a Cluster installations, the Grid home
must not be placed under one of the Oracle base directories, or under Oracle home
directories of Oracle Database installation owners, or in the home directory of an
installation owner. During installation, ownership of the path to the Grid home is
changed to root. This change causes permission errors for other installations. In
addition, the Oracle Clusterware software stack may not come up under an Oracle
base path.
Nodes unavailable for selection from the OUI Node Selection screen
Cause: Oracle Grid Infrastructure is either not installed, or the Oracle Grid
Infrastructure services are not up and running.
Action: Install Oracle Grid Infrastructure, or review the status of your Oracle Grid
Infrastructure installation. Consider restarting the nodes, as doing so may resolve
the problem.
Node nodename is unreachable
Cause: Unavailable IP host
Action: Attempt the following:
1. Run the shell command netstat -in. Compare the output of this command with the contents of the /etc/hosts file to ensure that the node IP is listed.
2. Run the shell command nslookup to see if the host is reachable.
3. As the oracle user, attempt to connect to the node with ssh or rsh. If you are prompted for a password, then user equivalence is not set up properly. The installer should configure SSH for you. If you are unable to use automatic SSH configuration, then review the section Appendix D.1, "Configuring SSH Manually on All Cluster Nodes."
PROT-8: Failed to import data from specified file to the cluster registry
Cause: Insufficient space in an existing Oracle Cluster Registry device partition,
which causes a migration failure while running rootupgrade. To confirm, look for
the error "utopen:12:Not enough space in the backing store" in the log file Grid_
home/log/hostname/client/ocrconfig_pid.log.
Action: Identify a storage device that has 280 MB or more available space. Locate
the existing raw device name from /var/opt/oracle/srvConfig.loc, and copy
the contents of this raw device to the new device using the command dd.
PRVE-0038 : The SSH LoginGraceTime setting, or fatal: Timeout before
authentication
Cause: PRVE-0038: The SSH LoginGraceTime setting on node "nodename" may
result in users being disconnected before login is completed. This error may occur because the default timeout value for SSH connections is too low, or because the LoginGraceTime parameter is commented out.
Action: Oracle recommends uncommenting the LoginGraceTime parameter in the OpenSSH configuration file /opt/ssh/etc/sshd_config, and setting it to a value of 0 (unlimited).
Time stamp is in the future
Cause: One or more nodes has a different clock time than the local node. If this is
the case, then you may see output similar to the following:
time stamp 2005-04-04 14:49:49 is 106 s in the future
Action: Ensure that all member nodes of the cluster have the same clock time.
Timed out waiting for the CRS stack to start
Cause: If a configuration issue prevents the Oracle Grid Infrastructure software
from installing successfully on all nodes, then you may see error messages such as
"Timed out waiting for the CRS stack to start," or you may notice that Oracle Grid
Infrastructure-managed resources were not created on some nodes after you exit the
installer. You also may notice that resources have a status other than ONLINE.
Action: Deconfigure the Oracle Grid Infrastructure installation without removing
binaries, and review log files to determine the cause of the configuration issue.
After you have fixed the configuration issue, rerun the scripts used during
installation to configure Oracle Grid Infrastructure.
See Also: Section 6.5, "Deconfiguring Oracle Clusterware Without
Removing Binaries"
A.1.1 Other Installation Issues and Errors
For additional help in resolving error messages, refer to My Oracle Support. For
example, the note with Doc ID 1367631.1 contains some of the most common
installation issues for Oracle Grid Infrastructure and Oracle Clusterware.
A.2 Interpreting CVU "Unknown" Output Messages Using Verbose Mode
If you run Cluster Verification Utility using the -verbose argument, and a Cluster
Verification Utility command responds with UNKNOWN for a particular node, then this is
because Cluster Verification Utility cannot determine if a check passed or failed. The
following is a list of possible causes for an "Unknown" response:
■ The node is down
■ Common operating system command binaries required by Cluster Verification Utility are missing in the /bin directory in the Oracle Grid Infrastructure home or Oracle home directory
■ The user account starting Cluster Verification Utility does not have privileges to run common operating system commands on the node
■ The node is missing an operating system patch, or a required package
■ The node has exceeded the maximum number of processes or maximum number of open files, or there is a problem with IPC segments, such as shared memory or semaphores
A.3 Interpreting CVU Messages About Oracle Grid Infrastructure Setup
If the Cluster Verification Utility report indicates that your system fails to meet the
requirements for Oracle Grid Infrastructure installation, then use the topics in this
section to correct the problem or problems indicated in the report, and run Cluster
Verification Utility again.
■ User Equivalence Check Failed
■ Node Reachability Check or Node Connectivity Check Failed
■ User Existence Check or User-Group Relationship Check Failed
User Equivalence Check Failed
Cause: Failure to establish user equivalency across all nodes. This can be due to
not creating the required users, or failing to complete secure shell (SSH)
configuration properly.
Action: Cluster Verification Utility provides a list of nodes on which user
equivalence failed.
For each node listed as a failure node, review the installation owner user
configuration to ensure that the user configuration is properly completed, and that
SSH configuration is properly completed. The user that runs the Oracle Grid
Infrastructure installation must have permissions to create SSH connections.
Oracle recommends that you use the SSH configuration option in OUI to configure
SSH. You can use Cluster Verification Utility before installation if you configure
SSH manually, or after installation, when SSH has been configured for installation.
For example, to check user equivalency for the user account oracle, use the
command su - oracle and check user equivalence manually by running the ssh
command on the local node with the date command argument using the following
syntax:
$ ssh nodename date
The output from this command should be the timestamp of the remote node
identified by the value that you use for nodename. If you are prompted for a
password, then you must configure SSH. If ssh is in the default location, the
/usr/bin directory, then use ssh to configure user equivalence. You can also use
rsh to confirm user equivalence.
If you see a message similar to the following when entering the date command
with SSH, then this is the probable cause of the user equivalence error:
The authenticity of host 'node1 (140.87.152.153)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?
Enter yes, and then run Cluster Verification Utility to determine if the user
equivalency error is resolved.
If ssh is in a location other than the default, /usr/bin, then Cluster Verification
Utility reports a user equivalence check failure. To avoid this error, navigate to the
directory Grid_home/cv/admin, open the file cvu_config with a text editor, and
add or update the key ORACLE_SRVM_REMOTESHELL to indicate the ssh path location
on your system. For example:
# Locations for ssh and scp commands
ORACLE_SRVM_REMOTESHELL=/usr/local/bin/ssh
ORACLE_SRVM_REMOTECOPY=/usr/local/bin/scp
Note the following rules for modifying the cvu_config file:
■ Key entries have the syntax name=value
■ Each key entry and the value assigned to the key defines one property only
■ Lines beginning with the number sign (#) are comment lines, and are ignored
■ Lines that do not follow the syntax name=value are ignored
When you have changed the path configuration, run Cluster Verification Utility
again. If ssh is in another location than the default, you also must start OUI with
additional arguments to specify a different location for the remote shell and
remote copy commands. Enter runInstaller -help to obtain information about
how to use these arguments.
Note: When you or OUI run ssh or rsh commands, including any
login or other shell scripts they start, you may see errors about invalid
arguments or standard input if the scripts generate any output. You
should correct the cause of these errors.
To stop the errors, remove all commands from the oracle user's login
scripts that generate output when you run ssh or rsh commands.
If you see messages about X11 forwarding, then complete the task
Section 2.12.4, "Setting Display and X11 Forwarding Configuration" to
resolve this issue.
If you see errors similar to the following:
stty: standard input: Invalid argument
stty: standard input: Invalid argument
These errors are produced if hidden files on the system (for example,
.bashrc or .cshrc) contain stty commands. If you see these errors,
then refer to Section 2.12.5, "Preventing Installation Errors Caused by
Terminal Output Commands" to correct the cause of these errors.
Node Reachability Check or Node Connectivity Check Failed
Cause: One or more nodes in the cluster cannot be reached using TCP/IP
protocol, through either the public or private interconnects.
Action: Use the command /bin/ping address to check each node address. When
you find an address that cannot be reached, check your list of public and private
addresses to make sure that you have them correctly configured. If you use
third-party vendor clusterware, then refer to the vendor documentation for
assistance. Ensure that the public and private network interfaces have the same
interface names on each node of your cluster.
User Existence Check or User-Group Relationship Check Failed
Cause: The administrative privileges for users and groups required for
installation are missing or incorrect.
Action: Use the id command on each node to confirm that the installation owner
user (for example, grid or oracle) is created with the correct group membership.
Ensure that you have created the required groups, and create or modify the user
account on affected nodes to establish required group membership.
See Also: Section 2.4, "Creating Groups, Users and Paths for Oracle
Grid Infrastructure" for instructions about how to create required
groups, and how to configure the installation owner user
A.4 About the Oracle Grid Infrastructure Alert Log
The Oracle Grid Infrastructure alert log is the first place to look for serious errors. In
the event of an error, it can contain path information to diagnostic logs that can
provide specific information about the cause of errors.
After installation, Oracle Grid Infrastructure posts alert messages when important
events occur. For example, you might see alert messages from the Cluster Ready
Services (CRS) daemon process when it starts, if it aborts, if the failover process fails,
or if automatic restart of a CRS resource failed.
Oracle Enterprise Manager monitors the Clusterware log file and posts an alert on the
Cluster Home page if an error is detected. For example, if a voting disk is not
available, a CRS-1604 error is raised, and a critical alert is posted on the Cluster Home
page. You can customize the error detection and alert settings on the Metric and Policy
Settings page.
The location of the Oracle Grid Infrastructure alert log is Grid_home/log/hostname/alerthostname.log, where Grid_home is the directory in which
Oracle Grid Infrastructure was installed and hostname is the host name of the local
node.
See Also: Oracle Real Application Clusters Administration and
Deployment Guide
A.5 Performing Cluster Diagnostics During Oracle Grid Infrastructure
Installations
If Oracle Universal Installer (OUI) does not display the Node Selection page, then use
the following command syntax to check the integrity of the Cluster Manager:
cluvfy comp clumgr -n node_list -verbose
In the preceding syntax example, the variable node_list is the list of nodes in your
cluster, separated by commas.
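For example, for a two-node cluster whose nodes are node1 and node2 (node names are illustrative):
$ cluvfy comp clumgr -n node1,node2 -verbose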
Note: If you encounter unexplained installation errors during or after a period when cron jobs are run, then your cron job may have deleted temporary files before the installation is finished. Oracle recommends that you complete installation before daily cron jobs are run, or disable daily cron jobs that perform cleanup until after the installation is completed.
A.6 About Using CVU Cluster Healthchecks After Installation
Starting with Oracle Grid Infrastructure 11g release 2 (11.2.0.3) and later, you can use
the CVU healthcheck command option to check your Oracle Grid Infrastructure and
Oracle Database installations for their compliance with mandatory requirements and
best practices guidelines, and to check to ensure that they are functioning properly.
Use the following syntax to run the healthcheck command option:
cluvfy comp healthcheck [-collect {cluster|database}] [-db db_unique_name]
[-bestpractice|-mandatory] [-deviations] [-html] [-save [-savedir directory_path]]
For example:
$ cd /home/grid/cvu_home/bin
$ ./cluvfy comp healthcheck -collect cluster -bestpractice -deviations -html
The options are:
■ -collect [cluster|database]
Use this flag to specify that you want to perform checks for Oracle Grid
Infrastructure (cluster) or Oracle Database (database). If you do not use the collect
flag with the healthcheck option, then cluvfy comp healthcheck performs checks
for both Oracle Grid Infrastructure and Oracle Database.
■ -db db_unique_name
Use this flag to specify checks on the database unique name that you enter after
the db flag.
CVU uses JDBC to connect to the database as the user cvusys to verify various
database parameters. For this reason, if you want checks to be performed for the
database you specify with the -db flag, then you must first create the cvusys user
on that database, and grant that user the CVU-specific role, cvusapp. You must
also grant members of the cvusapp role select permissions on system tables.
A SQL script is included in CVU_home/cv/admin/cvusys.sql to facilitate the
creation of this user. Use this SQL script to create the cvusys user on all the
databases that you want to verify using CVU.
If you use the db flag but do not provide a database unique name, then CVU
discovers all the Oracle Databases on the cluster. If you want to perform best
practices checks on these databases, then you must create the cvusys user on each
database, and grant that user the cvusapp role with the select privileges needed
to perform the best practice checks.
■ [-bestpractice | -mandatory] [-deviations]
Use the bestpractice flag to specify best practice checks, and the mandatory flag
to specify mandatory checks. Add the deviations flag to specify that you want to
see only the deviations from either the best practice recommendations or the
mandatory requirements. You can specify either the -bestpractice or -mandatory
flag, but not both flags. If you specify neither -bestpractice nor -mandatory, then both best practices and mandatory requirements are displayed.
■ -html
Use the html flag to generate a detailed report in HTML format.
If you specify the html flag, and a browser CVU recognizes is available on the
system, then the browser is started and the report is displayed on the browser
when the checks are complete.
If you do not specify the html flag, then the detailed report is generated in a text
file.
■ -save [-savedir dir_path]
Use the -save or -save -savedir flags to save validation reports (cvucheckreport_timestamp.txt and cvucheckreport_timestamp.htm), where timestamp is the time and date of the validation report.
If you use the save flag by itself, then the reports are saved in the path CVU_
home/cv/report, where CVU_home is the location of the CVU binaries.
If you use the flags -save -savedir, and enter a path where you want the CVU
reports saved, then the CVU reports are saved in the path you specify.
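For example, to run only the mandatory checks for the cluster and save the reports in a directory of your choice (the path is illustrative):
$ ./cluvfy comp healthcheck -collect cluster -mandatory -save -savedir /home/grid/cvureports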
A.7 Interconnect Configuration Issues
If you plan to use multiple network interface cards (NICs) for the interconnect, and
you do not configure them during installation or after installation with Redundant
Interconnect Usage, then you should use a third party solution to aggregate the
interfaces at the operating system level. Otherwise, the failure of a single NIC will
affect the availability of the cluster node.
If you use aggregated NIC cards, and use the Oracle Clusterware Redundant
Interconnect Usage feature, then they should be on different subnets. If you use a
third-party vendor method of aggregation, then follow the directions for that vendor’s
product.
If you encounter errors, then carry out the following system checks:
■ Verify with your network providers that they are using correct cables (length, type) and software on their switches. In some cases, to avoid bugs that cause disconnects under loads, or to support additional features such as Jumbo Frames, you may need a firmware upgrade on interconnect switches, or you may need newer NIC driver or firmware at the operating system level. Running without such fixes can cause later instabilities to Oracle RAC databases, even though the initial installation seems to work.
■ Review VLAN configurations, duplex settings, and auto-negotiation in accordance with vendor and Oracle recommendations.
A.8 SCAN VIP and SCAN Listener Issues
If your installation reports errors related to the SCAN VIP addresses or listeners, then
check the following items to make sure your network is configured correctly:
■ Check the file /etc/resolv.conf to verify that its contents are the same on each node.
■ Verify that there is a DNS entry for the SCAN, and that it resolves to three valid IP addresses. Use the command nslookup scan-name; this command should return the DNS server name and the three IP addresses configured for the SCAN.
■ Use the ping command to test the IP addresses assigned to the SCAN; you should receive a response for each IP address.
Note: If you do not have a DNS configured for your cluster environment, then you can create an entry for the SCAN in the /etc/hosts file on each node. However, using the /etc/hosts file to resolve the SCAN results in having only one SCAN available for the entire cluster instead of three. Only the first entry for SCAN in the hosts file is used.
■ Ensure the SCAN VIP uses the same netmask that is used by the public interface.
If you need additional assistance troubleshooting errors related to the SCAN, SCAN
VIP or listeners, then refer to My Oracle Support. For example, the note with Doc ID
1373350.1 contains some of the most common issues for the SCAN VIPs and listeners.
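For example, for a SCAN named mycluster-scan (an illustrative name), you can verify name resolution and reachability from any cluster node; the ping argument form shown (packet size followed by count) is the HP-UX style and may differ on other platforms:
$ nslookup mycluster-scan
$ ping mycluster-scan 64 3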
A.9 Storage Configuration Issues
The following is a list of issues involving storage configuration:
■ Recovery from Losing a Node File System or Grid Home
A.9.1 Recovery from Losing a Node File System or Grid Home
With Oracle Grid Infrastructure release 11.2 and later, if you remove a file system by
mistake, or encounter another storage configuration issue that results in losing the
Oracle Local Registry or otherwise corrupting a node, you can recover the node in one
of two ways:
1. Restore the node from an operating system level backup (preferred)
2. Remove the node, and then add it again. With 11.2 and later clusters, profile information is copied to the node, and the node is restored.
The feature that enables cluster nodes to be removed and added again, so that they can
be restored from the remaining nodes in the cluster, is called Grid Plug and Play
(GPnP). Grid Plug and Play eliminates per-node configuration data and the need for
explicit add and delete nodes steps. This allows a system administrator to take a
template system image and run it on a new node with no further configuration. This
removes many manual operations, reduces the opportunity for errors, and encourages
configurations that can be changed easily. Removal of the per-node configuration
makes the nodes easier to replace, because they do not need to contain
individually-managed state.
Grid Plug and Play reduces the cost of installing, configuring, and managing database
nodes by making their per-node state disposable. It allows nodes to be easily replaced
with regenerated state.
Initiate recovery of a node using addnode syntax similar to the following, where
lostnode is the node that you are adding back to the cluster:
If you are using Grid Naming Service (GNS):
$ ./addNode.sh -silent "CLUSTER_NEW_NODES=lostnode"
If you are not using GNS:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={lostnode}" "CLUSTER_NEW_VIRTUAL_
HOSTNAMES={lostnode-vip}"
Note that you require access to root to be able to run the root.sh script on the node you
restore, to recreate OCR keys and to perform other configuration tasks. When you see
prompts to overwrite your existing information in /usr/local/bin, accept the default
(n):
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
A.10 Completing an Installation Before Completing the Scripts
When the root.sh script completes, you must click OK in OUI to finish the
installation, and to start the configuration assistants. If OUI exits before the root.sh
script has been run or has finished running, then the Oracle Grid Infrastructure
installation is incomplete.
To complete an interrupted installation, as the grid user, on the node where the
installation was started, run the following command:
$Grid_home/cfgtoollogs/configToolAllCommands
Run this command on only the first node. Running this command completes the
Oracle Grid Infrastructure installation. If the configToolAllCommands file does not
exist, then contact My Oracle Support for assistance in creating the file manually.
B Installing and Configuring Oracle Database Using Response Files
This appendix describes how to install and configure Oracle products using response
files. It includes information about the following topics:
■ How Response Files Work
■ Preparing a Response File
■ Running the Installer Using a Response File
■ Running Net Configuration Assistant Using a Response File
■ Postinstallation Configuration Using a Response File
B.1 How Response Files Work
When you start the installer, you can use a response file to automate the installation
and configuration of Oracle software, either fully or partially. The installer uses the
values contained in the response file to provide answers to some or all installation
prompts.
Typically, the installer runs in interactive mode, which means that it prompts you to
provide information in graphical user interface (GUI) screens. When you use response
files to provide this information, you run the installer from a command prompt using
either of the following modes:
■ Silent mode
If you include responses for all of the prompts in the response file and specify the
-silent option when starting the installer, then it runs in silent mode. During a
silent mode installation, the installer does not display any screens. Instead, it
displays progress information in the terminal that you used to start it.
■ Response file mode
If you include responses for some or all of the prompts in the response file and
omit the -silent option, then the installer runs in response file mode. During a
response file mode installation, the installer displays all the screens: screens for which you specified information in the response file, and also screens for which you did not specify the required information in the response file.
You define the settings for a silent or response file installation by entering values for
the variables listed in the response file. For example, to specify the Oracle home name,
supply the appropriate value for the ORACLE_HOME variable:
ORACLE_HOME="OraDBHome1"
Another way of specifying the response file variable settings is to pass them as
command line arguments when you run the installer. For example:
-silent "ORACLE_HOME=OraDBHome1" ...
See Also: Oracle Universal Installer and OPatch User's Guide for
Windows and UNIX for more information about response files
B.1.1 Reasons for Using Silent Mode or Response File Mode
The following table provides use cases for running the installer in silent mode or
response file mode.
Mode           Uses
Silent         Use silent mode to do the following installations:
               ■ Complete an unattended installation, which you schedule using operating system utilities such as at.
               ■ Complete several similar installations on multiple systems without user interaction.
               ■ Install the software on a system that does not have X Window System software installed on it.
               The installer displays progress information on the terminal that you used to start it, but it does not display any of the installer screens.
Response file  Use response file mode to complete similar Oracle software installations on multiple systems, providing default answers to some, but not all of the installer prompts.
               In response file mode, all the installer screens are displayed, but defaults for the fields in these screens are provided by the response file. You have to provide information for the fields in screens where you have not provided values in the response file.
B.1.2 General Procedure for Using Response Files
The following are the general steps to install and configure Oracle products using the
installer in silent or response file mode:
Note: You must complete all required preinstallation tasks on a system before
running the installer in silent or response file mode.
1.
Prepare a response file.
2.
Run the installer in silent or response file mode.
3.
If you completed a software-only installation, then run Net Configuration
Assistant and Database Configuration Assistant in silent or response file mode.
These steps are described in the following sections.
B.2 Preparing a Response File
This section describes the following methods to prepare a response file for use during
silent mode or response file mode installations:
■
Editing a Response File Template
■
Recording a Response File
B.2.1 Editing a Response File Template
Oracle provides response file templates for each product and installation type, and for
each configuration tool. These files are located at database/response directory on the
installation media.
Note: If you copied the software to a hard disk, then the response files are
located in the directory /response.
Table B–1 lists the response files provided with this software:
Table B–1  Response Files for Oracle Database

Response File     Description
db_install.rsp    Silent installation of Oracle Database 11g
dbca.rsp          Silent installation of Database Configuration Assistant
netca.rsp         Silent installation of Oracle Net Configuration Assistant

Table B–2  Response Files for Oracle Grid Infrastructure

Response File     Description
grid_install.rsp  Silent installation of Oracle Grid Infrastructure
Caution: When you modify a response file template and save a file
for use, the response file may contain plain text passwords.
Ownership of the response file should be given to the Oracle software
installation owner only, and permissions on the response file should
be changed to 600. Oracle strongly recommends that database
administrators or other administrators delete or secure response files
when they are not in use.
To copy and modify a response file:
1.
Copy the response file from the response file directory to a directory on your
system:
$ cp /directory_path/response/response_file.rsp local_directory
In this example, directory_path is the path to the database directory on the
installation media. If you have copied the software to a hard drive, then you can
edit the file in the response directory if you prefer.
2.
Open the response file in a text editor:
$ vi /local_dir/response_file.rsp
See Also: Oracle Universal Installer and OPatch User's Guide for
Windows and UNIX for detailed information on creating response files
3.
Follow the instructions in the file to edit it.
Note: The installer or configuration assistant fails if you do not correctly
configure the response file.
4.
Change the permissions on the file to 600:
$ chmod 600 /local_dir/response_file.rsp
Note: A fully specified response file for an Oracle Database installation
contains the passwords for database administrative accounts and for a user who
is a member of the OSDBA group (required for automated backups). Ensure that
only the Oracle software owner user can view or modify response files or
consider deleting them after the installation succeeds.
B.2.2 Recording a Response File
You can use the installer in interactive mode to record a response file, which you can
edit and then use to complete silent mode or response file mode installations. This
method is useful for custom or software-only installations.
Starting with Oracle Database 11g Release 2 (11.2), you can save all the installation
steps into a response file during installation by clicking Save Response File on the
Summary page. You can use the generated response file for a silent installation later.
When you record the response file, you can either complete the installation, or you can
exit from the installer on the Summary page, before it starts to copy the software to the
system.
If you use record mode during a response file mode installation, then the installer
records the variable values that were specified in the original source response file into
the new response file.
Note: Oracle Universal Installer does not record passwords in the response file.
To record a response file:
1.
Complete preinstallation tasks as for a normal installation.
2.
Ensure that the Oracle Grid Infrastructure software owner user (typically grid)
has permissions to create or write to the Oracle home path that you will specify
when you run the installer.
3.
On each installation screen, specify the required information.
4.
When the installer displays the Summary screen, perform the following steps:
a.
Click Save Response File and specify a file name and location to save the
values for the response file, and click Save.
b.
Click Finish to create the response file and continue with the installation.
Click Save Response File and click Cancel if you only want to create the
response file but not continue with the installation. The installation will stop,
but the settings you have entered will be recorded in the response file.
5.
Before you use the saved response file on another system, edit the file and make
any required changes.
Use the instructions in the file as a guide when editing it.
B.3 Running the Installer Using a Response File
Run Oracle Universal Installer at the command line, specifying the response file you
created. The Oracle Universal Installer executable, runInstaller, provides several
options. For help information on the full set of these options, run the runInstaller
command with the -help option. For example:
$ directory_path/runInstaller -help
The help information appears in a window after some time.
To run the installer using a response file:
1.
Complete the preinstallation tasks as for a normal installation.
2.
Log in as the software installation owner user.
3.
If you are completing a response file mode installation, set the DISPLAY
environment variable.
Note: You do not have to set the DISPLAY environment variable if you are
completing a silent mode installation.
4.
To start the installer in silent or response file mode, enter a command similar to the
following:
$ /directory_path/runInstaller [-silent] [-noconfig] \
-responseFile responsefilename
Note: Do not specify a relative path to the response file. If you specify a
relative path, then the installer fails.
In this example:
■ directory_path is the path of the DVD or the path of the directory on the
  hard drive where you have copied the installation binaries.
■ -silent runs the installer in silent mode.
■ -noconfig suppresses running the configuration assistants during
  installation, and a software-only installation is performed instead.
■ responsefilename is the full path and file name of the installation response
  file that you configured.
A complete example invocation is shown after this procedure.
5. When the installation completes, log in as the root user and run the
   root.sh script. For example:
$ su root
password:
# /oracle_home_path/root.sh
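As noted in step 4, a complete silent-mode invocation might look similar to the
following sketch. The mount point /mnt/dvd and the response file
/home/grid/grid_install.rsp are example values only; substitute the paths used
in your environment:
$ /mnt/dvd/runInstaller -silent -noconfig \
-responseFile /home/grid/grid_install.rsp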
B.4 Running Net Configuration Assistant Using a Response File
You can run Net Configuration Assistant in silent mode to configure and start an
Oracle Net listener on the system, configure naming methods, and configure Oracle
Net service names. To run Net Configuration Assistant in silent mode, you must copy
and edit a response file template. Oracle provides a response file template named
netca.rsp in the response directory in the database/response directory on the DVD.
Note: If you copied the software to a hard disk, then the response file
template is located in the database/response directory.
To run Net Configuration Assistant using a response file:
1.
Copy the netca.rsp response file template from the response file directory to a
directory on your system:
$ cp /directory_path/response/netca.rsp local_directory
In this example, directory_path is the path of the database directory on the DVD.
If you have copied the software to a hard drive, you can edit the file in the
response directory if you prefer.
2.
Open the response file in a text editor:
$ vi /local_dir/netca.rsp
3.
Follow the instructions in the file to edit it.
Note: Net Configuration Assistant fails if you do not correctly configure the
response file.
4.
Log in as the Oracle software owner user, and set the ORACLE_HOME environment
variable to specify the correct Oracle home directory.
5.
Enter a command similar to the following to run Net Configuration Assistant in
silent mode:
$ $ORACLE_HOME/bin/netca /silent /responsefile /local_dir/netca.rsp
In this command:
■ The /silent option runs Net Configuration Assistant in silent mode.
■ local_dir is the full path of the directory where you copied the netca.rsp
  response file template.
B.5 Postinstallation Configuration Using a Response File
Use the following sections to create and run a response file configuration after
installing Oracle software.
B.5.1 About the Postinstallation Configuration File
When you run a silent or response file installation, you provide information about
your servers in a response file that you otherwise provide manually during a graphical
user interface installation. However, the response file does not contain passwords for
user accounts that configuration assistants require after software installation is
complete. The configuration assistants are started with a script called
configToolAllCommands. You can run this script in response file mode by creating and
using a password response file. The script uses the passwords to run the configuration
tools in succession to complete configuration.
If you keep the password file to use for clone installations, then Oracle strongly
recommends that you store it in a secure location. In addition, if you have to stop an
installation to fix an error, you can run the configuration assistants using
configToolAllCommands and a password response file.
The configToolAllCommands password response file consists of the following syntax
options:
■
internal_component_name is the name of the component that the configuration
assistant configures
■
variable_name is the name of the configuration file variable
■
value is the desired value to use for configuration.
The command syntax is as follows:
internal_component_name|variable_name=value
For example:
oracle.assistants.asm|S_ASMPASSWORD=welcome
Oracle strongly recommends that you maintain security with a password response file:
■ Permissions on the response file should be set to 600.
■ The owner of the response file should be the installation owner user, with
  the group set to the central inventory (oraInventory) group.
B.5.2 Running Postinstallation Configuration Using a Response File
To run configuration assistants with the configToolAllCommands script:
1.
Create a response file using the syntax filename.properties. For example:
$ touch cfgrsp.properties
2.
Open the file with a text editor, and cut and paste the password template,
modifying as needed.
Example B–1 Password response file for Oracle Grid Infrastructure installation for a
cluster
Oracle Grid Infrastructure requires passwords for Oracle Automatic Storage
Management Configuration Assistant (ASMCA), and for Intelligent Platform
Management Interface Configuration Assistant (IPMICA) if you have a BMC card and
you want to enable this feature. Provide the following response file:
oracle.assistants.asm|S_ASMPASSWORD=password
oracle.assistants.asm|S_ASMMONITORPASSWORD=password
oracle.crs|S_BMCPASSWORD=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the S_
BMCPASSWORD input field blank.
Example B–2 Password response file for Oracle Real Application Clusters
Oracle Database configuration requires passwords for the SYS, SYSTEM,
SYSMAN, and DBSNMP accounts for use with Database Configuration Assistant
(DBCA). In addition, if you use Oracle ASM storage, then configure the ASMSNMP
password. Also, if you selected to configure Oracle Enterprise Manager, then you must
provide the password for the Oracle software installation owner for the S_
HOSTUSERPASSWORD response.
oracle.assistants.server|S_SYSPASSWORD=password
oracle.assistants.server|S_SYSTEMPASSWORD=password
oracle.assistants.server|S_SYSMANPASSWORD=password
oracle.assistants.server|S_DBSNMPPASSWORD=password
oracle.assistants.server|S_HOSTUSERPASSWORD=password
oracle.assistants.server|S_ASMSNMPPASSWORD=password
If you do not want to enable Oracle Enterprise Manager or Oracle ASM, then leave
those password fields blank.
3.
Change permissions to secure the file. For example:
$ ls -al cfgrsp.properties
-rw------- 1 oracle oinstall 0 Apr 30 17:30 cfgrsp.properties
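If the file does not already have these permissions, one way to set them
(shown here for the example file created in step 1) is:
$ chmod 600 cfgrsp.properties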
4.
Change directory to $ORACLE_HOME/cfgtoollogs, and run the configuration script
using the following syntax:
configToolAllCommands RESPONSE_FILE=/path/name.properties
For example:
$ ./configToolAllCommands RESPONSE_FILE=/home/oracle/cfgrsp.properties
C Oracle Grid Infrastructure for a Cluster Installation Concepts
This appendix explains the reasons for preinstallation tasks that you are asked to
perform, and other installation concepts.
This appendix contains the following sections:
■
Understanding Preinstallation Configuration
■
Understanding Storage Configuration
■
Understanding Server Pools
■
Understanding Out-of-Place Upgrade
C.1 Understanding Preinstallation Configuration
This section reviews concepts about Oracle Grid Infrastructure for a cluster
preinstallation tasks. It contains the following sections:
■
Understanding Oracle Groups and Users
■
Understanding the Oracle Base Directory Path
■
Understanding Network Addresses
■
Understanding Network Time Requirements
C.1.1 Understanding Oracle Groups and Users
This section contains the following topics:
■
Understanding the Oracle Inventory Group
■
Understanding the Oracle Inventory Directory
C.1.1.1 Understanding the Oracle Inventory Group
You must have a group whose members are given access to write to the Oracle
Inventory (oraInventory) directory, which is the central inventory record of all Oracle
software installations on a server. Members of this group have write privileges to the
Oracle central inventory (oraInventory) directory, and are also granted permissions
for various Oracle Clusterware resources, OCR keys, directories in the Oracle
Clusterware home to which DBAs need write access, and other necessary privileges.
By default, this group is called oinstall. The Oracle Inventory group must be the
primary group for Oracle software installation owners.
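For example, one way to confirm that oinstall is the primary group of an
installation owner is to run the id command for that user (grid is used here
only as an example owner name):
$ id grid
The gid value reported should correspond to the oinstall group.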
The oraInventory directory contains the following:
■ A registry of the Oracle home directories (Oracle Grid Infrastructure and
  Oracle Database) on the system
■ Installation logs and trace files from installations of Oracle software.
  These files are also copied to the respective Oracle homes for future
  reference.
■ Other metadata inventory information regarding Oracle installations is
  stored in the individual Oracle home inventory directories, and is separate
  from the central inventory.
You can configure one group to be the access control group for the Oracle Inventory,
for database administrators (OSDBA), and for all other access control groups used by
Oracle software for operating system authentication. However, if you use one group to
provide operating system authentication for all system privileges, then this group
must be the primary group for all users to whom you want to grant administrative
system privileges.
Note: If Oracle software is already installed on the system, then the existing
Oracle Inventory group must be the primary group of the operating system user
(oracle or grid) that you use to install Oracle Grid Infrastructure. Refer to
"Determining If the Oracle Inventory and Oracle Inventory Group Exists" to
identify an existing Oracle Inventory group.
C.1.1.2 Understanding the Oracle Inventory Directory
The Oracle Inventory directory (oraInventory) is the central inventory location for all
Oracle software installed on a server.
The first time you install Oracle software on a system, you are prompted to provide an
oraInventory directory path.
When you provide an Oracle base path when prompted during installation, or you
have set the environment variable ORACLE_BASE for the user performing the Oracle
Grid Infrastructure installation, OUI creates the Oracle Inventory directory in the path
ORACLE_BASE/../oraInventory. For example, if ORACLE_BASE is set to /opt/oracle/11,
then the Oracle Inventory directory is created in the path /opt/oracle/oraInventory,
so that the central inventory for all installations is outside of the Oracle base for this
particular Oracle installation user.
If you neither enter a path nor set ORACLE_BASE, then the Oracle Inventory directory is
placed in the home directory of the user that is performing the installation. For
example:
/home/oracle/oraInventory
As this placement can cause permission errors during subsequent installations with
multiple Oracle software owners, Oracle recommends that you do not accept this
option, and instead use an OFA-compliant path.
For new installations, Oracle recommends that you either create an Oracle path in
compliance with OFA structure, such as /u01/app/oraInventory, that is owned by an
Oracle software owner, or you set the Oracle base environment variable to an
OFA-compliant value.
If you set an Oracle base variable to a path such as /u01/app/grid or
/u01/app/oracle, then the Oracle Inventory is defaulted to the path
/u01/app/oraInventory using correct permissions to allow all Oracle installation
owners to write to this central inventory directory.
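For example, before starting the installer you could set an OFA-compliant
Oracle base for the installation owner in a Bourne, Bash, or Korn shell. The
path shown is an example only:
$ ORACLE_BASE=/u01/app/grid ; export ORACLE_BASE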
By default, the Oracle Inventory directory is not installed under the Oracle base
directory for the installation owner. This is because all Oracle software installations
share a common Oracle Inventory, so there is only one Oracle Inventory for all users,
whereas there is a separate Oracle base for each user.
C.1.2 Understanding the Oracle Base Directory Path
This section contains information about preparing an Oracle base directory.
C.1.2.1 Overview of the Oracle Base directory
During installation, you are prompted to specify an Oracle base location, which is
owned by the user performing the installation. You can choose a location with an
existing Oracle home, or choose another directory location that does not have the
structure for an Oracle base directory.
Using the Oracle base directory path helps to facilitate the organization of Oracle
installations, and helps to ensure that installations of multiple databases maintain an
Optimal Flexible Architecture (OFA) configuration.
C.1.2.2 Understanding Oracle Base and Grid Infrastructure Directories
Even if you do not use the same software owner to install Grid Infrastructure (Oracle
Clusterware and Oracle ASM) and Oracle Database, be aware that running the
root.sh script during the Oracle Grid Infrastructure installation changes ownership of
the home directory where clusterware binaries are placed to root, and all ancestor
directories to the root level (/) are also changed to root. For this reason, the Oracle
Grid Infrastructure for a cluster home cannot be in the same location as other Oracle
software.
However, Oracle Grid Infrastructure for a standalone database--Oracle Restart--can be
in the same location as other Oracle software.
See Also: Oracle Database Installation Guide for your platform for
more information about Oracle Restart
C.1.3 Understanding Network Addresses
During installation, you are asked to identify the planned use for each network
interface that OUI detects on your cluster node. Identify each interface as a public or
private interface, or as an interface that you do not want Oracle Clusterware to use.
Public and virtual IP addresses are configured on public interfaces. Private addresses
are configured on private interfaces.
Refer to the following sections for detailed information about each address type:
■
About the Public IP Address
■
About the Private IP Address
■
About the Virtual IP Address
■
About the Grid Naming Service (GNS) Virtual IP Address
■
About the SCAN
C.1.3.1 About the Public IP Address
The public IP address is assigned dynamically using DHCP, or defined statically in a
DNS or in a hosts file. It uses the public interface (the interface with access available to
clients).
C.1.3.2 About the Private IP Address
Oracle Clusterware uses interfaces marked as private for internode communication.
Each cluster node needs to have an interface that you identify during installation as a
private interface. Private interfaces must have addresses configured for the interface
itself, but no additional configuration is required. Oracle Clusterware uses interfaces
you identify as private for the cluster interconnect. If you identify multiple interfaces
during installation for the private network, then Oracle Clusterware configures them
with Redundant Interconnect Usage. Any interface that you identify as private must
be on a subnet that connects to every node of the cluster. Oracle Clusterware uses all
the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between
nodes, Oracle strongly recommends using a physically separate, private network. If
you configure addresses using a DNS, then you should ensure that the private IP
addresses are reachable only by the cluster nodes.
After installation, if you modify interconnects on Oracle RAC with the CLUSTER_
INTERCONNECTS initialization parameter, then you must change it to a private IP
address, on a subnet that is not used with a public IP address. Oracle does not support
changing the interconnect to an interface using a subnet that you have designated as a
public subnet.
You should not use a firewall on the network with the private network IP addresses, as
this can block interconnect traffic.
C.1.3.3 About the Virtual IP Address
The virtual IP (VIP) address is registered in the GNS, or the DNS. Select an address for
your VIP that meets the following requirements:
■ The IP address and host name are currently unused (it can be registered in a
  DNS, but should not be accessible by a ping command)
■ The VIP is on the same subnet as your public interface
C.1.3.4 About the Grid Naming Service (GNS) Virtual IP Address
The GNS virtual IP address is a static IP address configured in the DNS. The DNS
delegates queries to the GNS virtual IP address, and the GNS daemon responds to
incoming name resolution requests at that address.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS),
included with Oracle Clusterware, to enable the cluster to map host names and IP
addresses dynamically as nodes are added and removed from the cluster, without
requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP
addresses for a subdomain assigned to the cluster (for example, grid.example.com),
and delegate DNS requests for that subdomain to the GNS virtual IP address for the
cluster, which GNS will serve. The set of IP addresses is provided to the cluster
through DHCP, which must be available on the public network for the cluster.
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about Grid Naming Service
C.1.3.5 About the SCAN
Oracle Database 11g release 2 clients connect to the database using SCANs. The SCAN
and its associated IP addresses provide a stable name for clients to use for connections,
independent of the nodes that make up the cluster. SCAN addresses, virtual IP
addresses, and public IP addresses must all be on the same subnet.
The SCAN is a virtual IP name, similar to the names used for virtual IP addresses,
such as node1-vip. However, unlike a virtual IP, the SCAN is associated with the
entire cluster, rather than an individual node, and associated with multiple IP
addresses, not just one address.
The SCAN works by being able to resolve to multiple IP addresses in the cluster
handling public client connections. When a client submits a request, the SCAN listener
listening on a SCAN IP address and the SCAN port is made available to a client.
Because all services on the cluster are registered with the SCAN listener, the SCAN
listener replies with the address of the local listener on the least-loaded node where
the service is currently being offered. Finally, the client establishes connection to the
service through the listener on the node where service is offered. All of these actions
take place transparently to the client without any explicit configuration required in the
client.
During installation, listeners are created on nodes for the SCAN IP addresses
provided. Oracle Net Services routes application
requests to the least loaded instance providing the service. Because the SCAN
addresses resolve to the cluster, rather than to a node address in the cluster, nodes can
be added to or removed from the cluster without affecting the SCAN address
configuration.
The SCAN should be configured so that it is resolvable either by using Grid Naming
Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution.
For high availability and scalability, Oracle recommends that you configure the SCAN
name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve
to at least one address.
If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_
domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you
start Oracle Grid Infrastructure installation from the server node1, the cluster name is
mycluster, and the GNS domain is grid.example.com, then the SCAN Name is
mycluster-scan.grid.example.com.
Clients configured to use IP addresses for Oracle Database releases prior to Oracle
Database 11g release 2 can continue to use their existing connection addresses; using
SCANs is not required. When you upgrade to Oracle Clusterware 11g Release 2 (11.2),
the SCAN becomes available, and you should use the SCAN for connections to Oracle
Database 11g release 2 or later databases. When an earlier version of Oracle Database is
upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to
connect to that database. The database registers with the SCAN listener through the
remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter
must be set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address
with the SCAN as HOST=SCAN.
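As an illustration, using the example SCAN name shown earlier in this section
and assuming the default SCAN listener port of 1521 (confirm the port used in
your cluster), the parameter setting takes the following form:
REMOTE_LISTENER=mycluster-scan.grid.example.com:1521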
The SCAN is optional for most deployments. However, clients using Oracle Database
11g release 2 and later policy-managed databases using server pools should access the
database using the SCAN. This is because policy-managed databases can run on
different servers at different times, so connecting to a particular node virtual IP
address for a policy-managed database is not possible.
C.1.4 Understanding Network Time Requirements
Oracle Clusterware 11g Release 2 (11.2) is automatically configured with Cluster Time
Synchronization Service (CTSS). This service provides automatic synchronization of all
cluster nodes using the optimal synchronization strategy for the type of cluster you
deploy. If you have an existing cluster synchronization service, such as NTP, then it
will start in an observer mode. Otherwise, it will start in an active mode to ensure that
time is synchronized between cluster nodes. CTSS will not cause compatibility issues.
The CTSS module is installed as a part of Oracle Grid Infrastructure installation. CTSS
daemons are started up by the OHAS daemon (ohasd), and do not require a
command-line interface.
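As a quick check after installation, you can confirm whether CTSS is running
in active or observer mode. For example, running the following command as the
Oracle Grid Infrastructure owner reports the current CTSS state (see Oracle
Clusterware Administration and Deployment Guide for details):
$ crsctl check ctss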
C.2 Understanding Storage Configuration
This section contains the following information:
■
About Migrating Existing Oracle ASM Instances
■
About Converting Standalone Oracle ASM Installations to Clustered Installations
C.2.1 About Migrating Existing Oracle ASM Instances
If you have an Oracle ASM installation from a prior release installed on your server, or
in an existing Oracle Clusterware installation, then you can use Oracle Automatic
Storage Management Configuration Assistant (ASMCA, located in the path Grid_
home/bin) to upgrade the existing Oracle ASM instance to Oracle ASM 11g Release 2
(11.2), and subsequently configure failure groups, and Oracle ASM volumes.
Note: You must first shut down all database instances and applications on the
node with the existing Oracle ASM instance before upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a
prior Oracle ASM version installed in another Oracle ASM home, then after installing
the Oracle ASM 11g Release 2 (11.2) binaries, you can start ASMCA to upgrade the
existing Oracle ASM instance.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of
Oracle ASM instances on all nodes is Oracle ASM 11g release 1, then you are provided
with the option to perform a rolling upgrade of Oracle ASM instances. If the prior
version of Oracle ASM instances on an Oracle RAC installation are from an Oracle
ASM release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be
performed. Oracle ASM is then upgraded on all nodes to 11g Release 2 (11.2).
C.2.2 About Converting Standalone Oracle ASM Installations to Clustered Installations
If you have existing standalone Oracle ASM installations on one or more nodes that
are member nodes of the cluster, then OUI proceeds to install Oracle Grid
Infrastructure for a cluster.
If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then
ASMCA is started at the end of the clusterware installation, and provides prompts for
you to migrate and upgrade the Oracle ASM instance on the local node, so that you
have an Oracle ASM 11g Release 2 (11.2) installation.
On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are
running, and prompts you to shut down those Oracle ASM instances, and any
database instances that use them. ASMCA then extends clustered Oracle ASM
instances to all nodes in the cluster. However, disk group names on the cluster-enabled
Oracle ASM instances must be different from existing standalone disk group names.
See Also: Oracle Database Storage Administrator's Guide
C.3 Understanding Server Pools
The following section provides a short overview of server pools. It contains the
following topics:
■
Overview of Server Pools and Policy-based Management
■
How Server Pools Work
■
The Free Server Pool
■
The Generic Server Pool
See Also: Oracle Clusterware Administration and Deployment Guide for
information about how to configure and administer server pools
C.3.1 Overview of Server Pools and Policy-based Management
With Oracle Clusterware 11g release 2 (11.2) and later, resources managed by Oracle
Clusterware are contained in logical groups of servers called server pools. Resources
are hosted on a shared infrastructure and are contained within server pools. The
resources are restricted with respect to their hardware resource (such as CPU and
memory) consumption by policies, behaving as if they were deployed in a
single-system environment.
You can choose to manage resources dynamically using server pools to provide
policy-based management of resources in the cluster, or you can choose to manage
resources using the traditional method of physically assigning resources to run on
particular nodes.
Caution: By default, any named user may create a server pool. To
restrict the operating system users that have this privilege, Oracle
strongly recommends that you add specific users to the CRS
Administrators list.
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about adding users to the CRS Administrator's list.
The Oracle Grid Infrastructure installation owner has permissions to create and
configure server pools, using SRVCTL, Oracle Enterprise Manager Database Control,
or Oracle Database Configuration Assistant (DBCA).
Policy-based management:
■ Enables dynamic capacity assignment when needed to provide server capacity
  in accordance with the priorities you set with policies
■ Enables allocation of resources by importance, so that applications obtain
  the required minimum resources, whenever possible, and so that lower
  priority applications do not take resources from more important applications
■ Ensures isolation where necessary, so that you can provide dedicated servers
  in a cluster for applications and databases
Applications and databases running in server pools do not share resources. Because of
this, server pools isolate resources where necessary, but enable dynamic capacity
assignments as required. Together with role-separated management, this capability
addresses the needs of organizations that have standardized cluster environments,
but allow multiple administrator groups to share the common cluster infrastructure.
C.3.2 How Server Pools Work
Server pools divide the cluster into groups of servers hosting the same or similar
resources. They distribute a uniform workload (a set of Oracle Clusterware resources)
over several servers in the cluster. For example, you can restrict Oracle databases to
run only in a particular server pool. When you enable role-separated management,
you can explicitly grant permission to operating system users to change attributes of
certain server pools.
Top-level server pools:
■ Logically divide the cluster
■ Are always exclusive, meaning that one server can only reside in one
  particular server pool at a certain point in time
Server pools each have three attributes that are assigned when they are created:
■ MIN_SIZE: The minimum number of servers the server pool should contain. If
  the number of servers in a server pool is below the value of this attribute,
  then Oracle Clusterware automatically moves servers from elsewhere into the
  server pool until the number of servers reaches the attribute value or until
  there are no free servers available from less important pools.
■ MAX_SIZE: The maximum number of servers the server pool can contain.
■ IMPORTANCE: A number from 0 to 1000 (0 being least important) that ranks a
  server pool among all other server pools in a cluster.
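For illustration, the following sketch creates a server pool with SRVCTL and
sets these three attributes; the pool name mypool and the attribute values are
example values only:
$ srvctl add srvpool -g mypool -l 1 -u 2 -i 100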
When Oracle Clusterware is installed, two server pools are created automatically:
Generic and Free. All servers in a new installation are assigned to the Free server pool,
initially. Servers move from Free to newly defined server pools automatically. When
you upgrade Oracle Clusterware, all nodes are assigned to the Generic server pool, to
ensure compatibility with database releases before Oracle Database 11g release 2 (11.2).
C.3.3 The Free Server Pool
The Free server pool contains servers that are not assigned to any other server pools.
The attributes of the Free server pool are restricted, as follows:
■
SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user
■
IMPORTANCE and ACL can be edited by the user
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about how newly added servers are assigned to a server pool
C.3.4 The Generic Server Pool
The Generic server pool stores pre-11g release 2 (11.2) databases and
administrator-managed databases that have fixed configurations. Additionally, the
Generic server pool contains servers that match either of the following:
■ Servers that you specified in the HOSTING_MEMBERS attribute of all resources
  of the application resource type
■ Servers with names you specified in the SERVER_NAMES attribute of the server
  pools that list the Generic server pool as a parent server pool
The Generic server pool’s attributes are restricted, as follows:
■ No one can modify configuration attributes of the Generic server pool (all
  attributes are read-only)
■ When you specify a server name in the HOSTING_MEMBERS attribute, Oracle
  Clusterware only allows it if the server is:
  – Online and exists in the Generic server pool
  – Online and exists in the Free server pool, in which case Oracle
    Clusterware moves the server into the Generic server pool
  – Online and exists in any other server pool and the client is either a CRS
    Administrator (the user role that controls resource administration for
    server pools) or is allowed to use the server pool's servers, in which
    case, the server is moved into the Generic server pool
  – Offline and the client is a CRS Administrator
■ When you register a child server pool with the Generic server pool, Oracle
  Clusterware only allows it if the server names pass the same requirements as
  previously specified for the resources.
Servers are initially considered for assignment into the Generic server pool
at cluster startup time or when a server is added to the cluster, and only
after that to other server pools.
C.4 Understanding Out-of-Place Upgrade
With an out-of-place upgrade, the installer installs the newer version in a separate
Oracle Clusterware home. Both versions of Oracle Clusterware are on each cluster
member node, but only one version is active.
A rolling upgrade avoids downtime and ensures continuous availability while the
software is upgraded to a new version.
If you have separate Oracle Clusterware homes on each node, then you can perform
an out-of-place upgrade on all nodes, or perform an out-of-place rolling upgrade, so
that some nodes run Oracle Clusterware from the earlier version Oracle Clusterware
home, and other nodes run Oracle Clusterware from the new Oracle Clusterware
home.
An in-place upgrade of Oracle Clusterware 11g release 2 is not supported.
See Also: Appendix E, "How to Upgrade to Oracle Grid
Infrastructure 11g Release 2" for instructions on completing rolling
upgrades
D How to Complete Installation Prerequisite Tasks Manually
This appendix provides instructions for how to complete configuration tasks manually
that Cluster Verification Utility (CVU) and the installer (OUI) normally complete
during installation. Use this appendix as a guide if you cannot use the fixup script.
This appendix contains the following information:
■
Configuring SSH Manually on All Cluster Nodes
■
Configuring Kernel Parameters Manually
■
Setting UDP and TCP Kernel Parameters Manually
D.1 Configuring SSH Manually on All Cluster Nodes
Passwordless SSH configuration is a mandatory installation requirement. SSH is used
during installation to configure cluster member nodes, and SSH is used after
installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other
features.
Automatic Passwordless SSH configuration using OUI creates RSA encryption keys on
all nodes of the cluster. If you have system restrictions that require you to set up SSH
manually, such as using DSA keys, then use this procedure as a guide to set up
passwordless SSH.
In the examples that follow, the Oracle software owner listed is the grid user.
If SSH is not available, then OUI attempts to use rsh and rcp instead. However, these
services often are disabled for security.
This section contains the following:
■
Checking Existing SSH Configuration on the System
■
Configuring SSH on Cluster Nodes
■
Enabling SSH User Equivalency on Cluster Nodes
D.1.1 Checking Existing SSH Configuration on the System
To determine if SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID
numbers. In the home directory of the installation software owner (grid, oracle), use
the command ls -al to ensure that the .ssh directory is owned and writable only by
the user.
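For example, the following command displays the ownership and permissions of
the .ssh directory; the expected mode after completing the configuration steps
in this section is 700:
$ ls -ld ~/.ssh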
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH
1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can
use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2
installation, and you cannot use SSH1, then refer to your SSH distribution
documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
D.1.2 Configuring SSH on Cluster Nodes
To configure SSH, you must first create RSA or DSA keys on each cluster node, and
then copy all the keys generated on all cluster node members into an authorized keys
file that is identical on each node. Note that the SSH files must be readable only by
root and by the software installation user (oracle, grid), as SSH ignores a private key
file if it is accessible by others. In the examples that follow, the DSA key is used.
You must configure SSH separately for each Oracle software installation owner that
you intend to use for installation.
To configure SSH, complete the following:
D.1.2.1 Create SSH Directory, and Create SSH Keys On Each Node
Complete the following steps on each node:
1.
Log in as the software owner (in this example, the grid user).
2.
To ensure that you are logged in as grid, and to verify that the user ID matches the
expected user ID you have assigned to the grid user, enter the commands id and
id grid. Ensure that the Oracle user group and user, and the user terminal
window process you are using, have identical group and user IDs. For example:
$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
$ id grid
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
3.
If necessary, create the .ssh directory in the grid user's home directory, and set
permissions on it to ensure that only the grid user has read and write
permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Note: SSH configuration will fail if the permissions are not set to 700.
4. Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
At the prompts, accept the default location for the key file (press Enter).
Note: SSH with passphrase is not supported for Oracle Clusterware 11g release 2
and later releases.
This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the
private key to the ~/.ssh/id_dsa file.
Never distribute the private key to anyone not authorized to perform Oracle
software installations.
5.
Repeat steps 1 through 4 on each node that you intend to make a member of the
cluster, using the DSA key.
D.1.2.2 Add All Keys to a Common authorized_keys File
Complete the following steps:
1.
On the local node, change directories to the .ssh directory in the Oracle Grid
Infrastructure owner's home directory (typically, either grid or oracle).
Then, add the DSA key to the authorized_keys file using the following
commands:
$ cat id_dsa.pub >> authorized_keys
$ ls
In the .ssh directory, you should see the id_dsa.pub keys that you have created,
and the file authorized_keys.
2.
On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the
authorized_keys file to the oracle user .ssh directory on a remote node. The
following example is with SCP, on a node called node2, with the Oracle Grid
Infrastructure owner grid, where the grid user path is /home/grid:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
You are prompted to accept a DSA key. Enter Yes, and you see that the node you
are copying to is added to the known_hosts file.
When prompted, provide the password for the grid user, which should be the
same on all nodes in the cluster. The authorized_keys file is copied to the remote
node.
Your output should be similar to the following, where xxx represents parts of a
valid IP address:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
The authenticity of host 'node2 (xxx.xxx.173.152)' can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,xxx.xxx.173.152' (dsa) to the list
of known hosts
grid@node2's password:
authorized_keys          100%     828     7.5MB/s     00:00
3.
Using SSH, log in to the node where you copied the authorized_keys file. Then
change to the .ssh directory, and using the cat command, add the DSA keys for
the second node to the authorized_keys file, pressing Enter when you are
prompted for a password, so that passwordless SSH is set up:
[grid@node1 .ssh]$ ssh node2
[grid@node2 grid]$ cd .ssh
[grid@node2 .ssh]$ cat id_dsa.pub >> authorized_keys
Repeat steps 2 and 3 from each node to each other member node in the cluster.
When you have added keys from each cluster node member to the authorized_
keys file on the last node you want to have as a cluster node member, then use scp
to copy the authorized_keys file with the keys from all nodes back to each cluster
node member, overwriting the existing version on the other nodes.
To confirm that you have all nodes in the authorized_keys file, enter the
command more authorized_keys, and determine if there is a DSA key for each
member node. The file lists the type of key (ssh-dsa), followed by the key, and
then followed by the user and server. For example:
ssh-dsa AAAABBBB . . . = grid@node1
Note: The grid user's ~/.ssh/authorized_keys file on every node must contain
the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all
cluster nodes.
D.1.3 Enabling SSH User Equivalency on Cluster Nodes
After you have copied the authorized_keys file that contains all keys to each node in
the cluster, complete the following procedure, in the order listed. In this example, the
Oracle Grid Infrastructure software owner is named grid:
1.
On the system where you want to run OUI, log in as the grid user.
2.
Use the following command syntax, where hostname1, hostname2, and so on, are
the public host names (alias and fully qualified domain name) of nodes in the
cluster to run SSH from the local node to each node, including from the local node
to itself, and from each node to each other node:
[grid@node1]$ ssh hostname1 date
[grid@node1]$ ssh hostname2 date
.
.
.
For example:
[grid@node1 grid]$ ssh node1 date
The authenticity of host 'node1 (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,xxx.xxx.100.101' (DSA) to the list of
known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node1.example.com date
The authenticity of host 'node1.example.com (xxx.xxx.100.101)' can't be
established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1.example.com,xxx.xxx.100.101' (DSA) to the
list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node2 date
Mon Dec 4 11:08:35 PST 2006
.
.
.
At the end of this process, the public host name for each member node should be
registered in the known_hosts file for all other cluster nodes.
If you are using a remote client to connect to the local node, and you see a message
similar to "Warning: No xauth data; using fake authentication data for X11
forwarding," then this means that your authorized keys file is configured correctly,
but your SSH configuration has X11 forwarding enabled. To correct this issue,
proceed to Section 2.12.4, "Setting Display and X11 Forwarding Configuration."
3.
Repeat step 2 on each cluster node member.
If you have configured SSH correctly, then you can now use the ssh or scp commands
without being prompted for a password. For example:
[grid@node1 ~]$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2009
[grid@node2 ~]$ ssh node1 date
Mon Feb 26 23:34:48 UTC 2009
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file
on that node contains the correct public keys, and that you have created an Oracle
software owner with identical group membership and IDs.
D.2 Configuring Kernel Parameters Manually
Note: The kernel parameter values shown in this section are recommended values
only. For production database systems, Oracle recommends that you tune these
values to optimize the performance of the system. See your operating system
documentation for more information about tuning kernel parameters.
On HP-UX 11.31 (version 3), the following parameters are not valid:
■
msgmap
■
msgseg
■
tcp_smallest_anon_port
■
udp_smallest_anon_port
D.2.1 Minimal Kernel Parameter Values
On all cluster nodes, verify that the kernel parameters shown in Table D–1 are set
either to the formula shown, or to values greater than or equal to the recommended
value shown. The procedure following the table describes how to verify and set the
values.
Table D–1  Minimal HP-UX Kernel Parameter Values

Parameter                Minimum Required Value
ksi_alloc_max            32768
executable_stack         0
max_thread_proc          1024
maxdsiz                  1073741824 (1 GB)
maxdsiz_64bit            2147483648 (2 GB)
maxfiles                 1024
maxfiles_lim             32767
maxssiz                  134217728 (128 MB)
maxssiz_64bit            1073741824 (1 GB)
maxuprc                  3686
msgmni                   4096
msgtql                   4096
ncsize                   35840
nflocks                  4096
ninode                   34816
nkthread                 7184
nproc                    4096
semmni                   4096
semmns                   8192
semmnu                   4096
semvmx                   32767
shmmax                   1073741824
shmmni                   4096
shmseg                   512
tcp_largest_anon_port    65500
udp_largest_anon_port    65500
Ensure that you set the TCP and UDP kernel parameters by following the procedure
described in the section Setting UDP and TCP Kernel Parameters Manually.
Note: If the current value for any parameter is higher than the value listed in
this table, then do not change the value of that parameter.
D.2.2 Checking Kernel Parameters Manually
To view the current value or formula specified for these kernel parameters, and to
change them if necessary, use kctune, HP System Management Homepage (HP SMH),
or HP-UX Kernel Configuration (kcweb).
■
Modifying Kernel Settings Using KCTUNE
■
Modifying Kernel Settings Using SMH
■
Modifying Kernel Settings Using KCWEB
D.2.2.1 Modifying Kernel Settings Using KCTUNE
Use /usr/sbin/kctune to view or change kernel parameters. For example, to view
kernel parameters, enter kctune with the -d flag. For example:
# /usr/sbin/kctune -d
To modify a kernel parameter, enter kctune variable=value. For example:
# /usr/sbin/kctune aio_listio_max=512
Complete this procedure on all other cluster nodes.
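For example, the following sketch uses the nproc parameter from Table D–1 to
first display the current setting and then raise it to the minimum required
value. Do not lower a value that is already higher than the required minimum:
# /usr/sbin/kctune nproc
# /usr/sbin/kctune nproc=4096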
D.2.2.2 Modifying Kernel Settings Using SMH
The following procedure describes how to modify kernel settings using HP System
Management Homepage (SMH):
1.
Optionally, set the DISPLAY environment variable to specify the display of the
local system:
–
Bourne, Bash, or Korn shell:
# DISPLAY=local_host:0.0 ; export DISPLAY
–
C shell:
# setenv DISPLAY local_host:0.0
2.
Start SMH:
# /usr/sbin/smh
3.
Choose the Kernel Configuration area, then choose the Configurable Parameters
area.
4.
Check the value or formula specified for each of these parameters and, if
necessary, modify that value or formula.
If necessary, refer to the SMH online Help for more information about completing
this step.
Note: If you modify the value of a parameter that is not dynamic, then you must
restart the system.
5.
If necessary, when the system restarts, log in and switch user to root.
6.
Complete this procedure on all other cluster nodes.
D.2.2.3 Modifying Kernel Settings Using KCWEB
The following procedure describes how to modify kernel settings using the HP-UX Kernel Configuration (kcweb) application:
1.
Enter the following command to start the kcweb application:
# /usr/sbin/kcweb -F
2.
Check the value or formula specified for each of these parameters and, if
necessary, modify that value or formula.
If necessary, refer to the kcweb online Help for more information about completing
this step.
Note: If you modify the value of a parameter that is not dynamic, then you must
restart the system.
3.
If necessary, when the system restarts, log in and switch user to root.
4.
Complete this procedure on all other cluster nodes.
D.3 Setting UDP and TCP Kernel Parameters Manually
Use NDD to ensure that the HP-UX kernel TCP/IP ephemeral port range is broad
enough to provide enough ephemeral ports for the anticipated server workload. Set
the port range high enough to avoid reserved ports for any applications you may
intend to use.
For example:
# /usr/bin/ndd /dev/tcp tcp_largest_anon_port
65535
In the preceding example, tcp_largest_anon_port is set to the default value.
If necessary, edit the file /etc/rc.config.d/nddconf and add entries to update the
UDP and TCP ephemeral port values. For example:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_largest_anon_port
NDD_VALUE[0]=65500
TRANSPORT_NAME[1]=udp
NDD_NAME[1]=udp_largest_anon_port
NDD_VALUE[1]=65500
Ensure that the entries are numbered in proper order. For example, if there are two
entries present for the TCP and UDP ports in nddconf, then ensure that they are
numbered 0 and 1: TRANSPORT_NAME[0]=tcp and TRANSPORT_NAME[1]=udp.
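After the nddconf settings take effect (for example, after the next system
restart), you can confirm the values with queries similar to the earlier
example in this section:
# /usr/bin/ndd /dev/tcp tcp_largest_anon_port
# /usr/bin/ndd /dev/udp udp_largest_anon_port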
Note: The tcp_smallest_anon_port and udp_smallest_anon_port
parameters are obsolete and you do not need to set them.
E How to Upgrade to Oracle Grid Infrastructure 11g Release 2
This appendix describes how to perform Oracle Clusterware and Oracle Automatic
Storage Management upgrades.
Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes are
brought down and upgraded while other nodes remain active. Oracle Automatic
Storage Management 11g Release 2 (11.2) upgrades can be rolling upgrades. If you
upgrade a subset of nodes, then a software-only installation is performed on the
existing cluster nodes that you do not select for upgrade.
This appendix contains the following topics:
■
Back Up the Oracle Software Before Upgrades
■
Unset Oracle Environment Variables
■
About Oracle ASM and Oracle Grid Infrastructure Installation and Upgrade
■
Restrictions for Clusterware and Oracle ASM Upgrades
■
Preparing to Upgrade an Existing Oracle Clusterware Installation
■
Performing Rolling Upgrades From an Earlier Release
■
Updating DB Control and Grid Control Target Parameters
■
Unlocking the Existing Oracle Clusterware Installation
■
Downgrading Oracle Clusterware After an Upgrade
E.1 Back Up the Oracle Software Before Upgrades
Before you make any changes to the Oracle software, Oracle recommends that you
create a backup of the Oracle software and databases.
E.2 Unset Oracle Environment Variables
If you have set ORA_CRS_HOME as an environment variable, following instructions from
Oracle Support, then unset it before starting an installation or upgrade. You should
never use ORA_CRS_HOME as an environment variable except under explicit direction
from Oracle Support.
Check to ensure that installation owner login shell profiles (for example, .profile or
.cshrc) do not have ORA_CRS_HOME set.
If you have had an existing installation on your system, and you are using the same
user account to install this installation, then unset the following environment
variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other
environment variable set for the Oracle installation user that is connected with Oracle
software homes.
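A quick way to confirm that none of these variables remain set in your current session is to check the environment before you start the installer. This is a minimal sketch; adjust the list of variables and the profile file names to match your environment:
$ env | grep -E 'ORA_CRS_HOME|ORACLE_HOME|ORA_NLS10|TNS_ADMIN'
$ unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN
The grep command should return no output. You can run a similar grep against .profile or .cshrc to confirm that the variables are not set again when you log in.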
E.3 About Oracle ASM and Oracle Grid Infrastructure Installation and
Upgrade
In past releases, Oracle Automatic Storage Management (Oracle ASM) was installed as
part of the Oracle Database installation. With Oracle Database 11g release 2 (11.2),
Oracle ASM is installed when you install the Oracle Grid Infrastructure components
and shares an Oracle home with Oracle Clusterware when installed in a cluster such as
with Oracle RAC or with Oracle Restart on a standalone server.
If you have an existing Oracle ASM instance, you can either upgrade it at the time that
you install Oracle Grid Infrastructure, or you can upgrade it after the installation,
using Oracle ASM Configuration Assistant (ASMCA). However, be aware that a
number of Oracle ASM features are disabled until you upgrade Oracle ASM, and
Oracle Clusterware management of Oracle ASM does not function correctly until
Oracle ASM is upgraded, because Oracle Clusterware only manages Oracle ASM
when it is running in the Oracle Grid Infrastructure home. For this reason, Oracle
recommends that if you do not upgrade Oracle ASM at the same time as you upgrade
Oracle Clusterware, then you should upgrade Oracle ASM immediately afterward.
You can perform out-of-place upgrades to an Oracle ASM instance using Oracle ASM
Configuration Assistant (ASMCA). In addition to running ASMCA using the graphic
user interface, you can run ASMCA in non-interactive (silent) mode.
In prior releases, you could use Database Upgrade Assistant (DBUA) to upgrade either
an Oracle Database, or Oracle ASM. That is no longer the case. You can only use
DBUA to upgrade an Oracle Database instance. Use Oracle ASM Configuration
Assistant (ASMCA) to upgrade Oracle ASM.
See Also: Oracle Database Upgrade Guide and Oracle Database Storage
Administrator's Guide for additional information about upgrading
existing Oracle ASM installations
E.4 Restrictions for Clusterware and Oracle ASM Upgrades
Oracle recommends that you use CVU to check if there are any patches required for
upgrading your existing Oracle Grid Infrastructure 11g release 2 or Oracle RAC
database 11g Release 2 installations.
See Also: Section E.6, "Using CVU to Validate Readiness for Oracle
Clusterware Upgrades"
Be aware of the following restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consist of Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM):
■ To upgrade existing Oracle Clusterware installations to Oracle Grid Infrastructure
11g, your release must be greater than or equal to 10.1.0.3, 10.2.0.3, 11.1.0.6, or 11.2.
■ To upgrade existing Oracle Grid Infrastructure from 11.2.0.2 to 11.2.0.3 or later,
you must apply patch 11.2.0.2.1 (11.2.0.2 PSU 1) or later.
■ To upgrade existing 11.1 Oracle Clusterware installations to Oracle Grid
Infrastructure 11.2.0.3 or later, you must patch the Oracle Clusterware release 11.1
home with the patch for bug 7308467.
■ To upgrade existing Oracle ASM installations to Oracle Grid Infrastructure 11g
Release 2 (11.2) in a rolling fashion, your release must be at least 11.1.0.6.
See Also: "Oracle 11gR2 Upgrade Companion" Note 785351.1 on My
Oracle Support:
https://support.oracle.com
■ Do not delete directories in the Grid home. For example, do not delete Grid_
home/OPatch. If you delete the directory, then the Grid infrastructure installation
owner cannot use OPatch to patch the Grid home, and OPatch displays the error
"checkdir error: cannot create Grid_home/OPatch".
■ To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid
Infrastructure 11.2.0.2, you must first verify if you need to apply any mandatory
patches for upgrade to succeed. Refer to Section E.6 for steps to check readiness.
■ To upgrade existing 11.1 Oracle Clusterware installations to Oracle Grid
Infrastructure 11.2.0.3 or later, you must patch the release 11.1 Oracle Clusterware
home with the patch for bug 7308467.
■ Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades.
With 11g Release 2 (11.2), you cannot perform an in-place upgrade of Oracle
Clusterware and Oracle ASM to existing homes.
■ If the existing Oracle Clusterware home is a shared home, note that you can use a
non-shared home for the Oracle Grid Infrastructure for a Cluster home for Oracle
Clusterware and Oracle ASM 11g Release 2 (11.2).
■ With Oracle Clusterware 11g release 1 and later releases, the same user that owned
the Oracle Clusterware 10g software must perform the Oracle Clusterware 11g
upgrade. Before Oracle Database 11g, either all Oracle software installations were
owned by the Oracle user, typically oracle, or Oracle Database software was
owned by oracle, and Oracle Clusterware software was owned by a separate user,
typically crs.
■ Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure
home.
■ During a major version upgrade to 11g Release 2 (11.2), the software in the 11g
Release 2 (11.2) Oracle Grid Infrastructure home is not fully functional until the
upgrade is completed. Running srvctl, crsctl, and other commands from the 11g
Release 2 (11.2) home is not supported until the final rootupgrade.sh script is run
and the upgrade is complete across all nodes.
To manage databases in the existing earlier version (release 10.x or 11.1) database
homes during the Oracle Grid Infrastructure upgrade, use srvctl from the
existing database homes.
■ During Oracle Clusterware installation, if there is a single instance Oracle ASM
version on the local node, then it is converted to a clustered Oracle ASM 11g
Release 2 (11.2) installation, and Oracle ASM runs in the Oracle Grid Infrastructure
home on all nodes.
■ If a single instance (non-clustered) Oracle ASM installation is on a remote node,
which is a node other than the local node (the node on which the Oracle Grid
Infrastructure installation is being performed), then it will remain a single instance
Oracle ASM installation. However, during installation, if you select to place the
Oracle Cluster Registry (OCR) and voting disk files on Oracle ASM, then a
clustered Oracle ASM installation is created on all nodes in the cluster, and the
single instance Oracle ASM installation on the remote node will become
nonfunctional.
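Before planning an upgrade path against the minimum releases listed above, it can help to confirm the release of your existing Oracle Clusterware installation. One way to do this, assuming the existing Clusterware home is /u01/app/crs, is to query the active version from that home:
$ /u01/app/crs/bin/crsctl query crs activeversion
The path shown is an example only; run the command from the bin directory of your existing Oracle Clusterware home.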
See Also:
Oracle Database Upgrade Guide
E.5 Preparing to Upgrade an Existing Oracle Clusterware Installation
If you have an existing Oracle Clusterware installation, then you upgrade your
existing cluster by performing an out-of-place upgrade. You cannot perform an
in-place upgrade.
This section contains the following topics:
■ Checks to Complete Before Upgrading an Existing Oracle Clusterware Installation
■ Running the Oracle RACcheck Upgrade Readiness Assessment
E.5.1 Checks to Complete Before Upgrading an Existing Oracle Clusterware Installation
Complete the following tasks before starting an upgrade:
1. For each node, use Cluster Verification Utility to ensure that you have completed
preinstallation steps. It can generate Fixup scripts to help you to prepare servers.
In addition, the installer will help you to ensure all required prerequisites are met.
Ensure that you have information you will need during installation, including the
following:
■ An Oracle base location for Oracle Clusterware.
■ An Oracle Grid Infrastructure home location that is different from your
existing Oracle Clusterware location
■ A SCAN address
■ Privileged user operating system groups to grant access to Oracle ASM data
files (the OSDBA for ASM group), to grant administrative privileges to the
Oracle ASM instance (OSASM group), and to grant a subset of administrative
privileges to the Oracle ASM instance (OSOPER for ASM group)
■ root user access, to run scripts as root during installation
2. For the installation owner running the installation, if you have environment
variables set for the existing installation, then unset the environment variables
$ORACLE_HOME and $ORACLE_SID, as these environment variables are used during
upgrade. For example:
$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_SID
See Also:
Section E.2, "Unset Oracle Environment Variables"
E.5.2 Running the Oracle RACcheck Upgrade Readiness Assessment
The RACcheck (Oracle RAC Configuration Audit Tool) Upgrade Readiness
Assessment can be used to obtain an automated upgrade-specific health check for
upgrades to Oracle Grid Infrastructure 11.2.0.3, 11.2.0.4, and 12.1.0.1. You can run the
RACcheck Upgrade Readiness Assessment tool to automate many of the manual
pre-upgrade and post-upgrade checks.
Oracle recommends that you download and run the latest version of RACcheck from
My Oracle Support. For information about downloading, configuring, and running
the RACcheck configuration audit tool, refer to My Oracle Support note 1457357.1, which
is available at the following URL:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1457357.1
E.6 Using CVU to Validate Readiness for Oracle Clusterware Upgrades
Review the contents in this section to validate that your cluster is ready for upgrades.
E.6.1 About the CVU Grid Upgrade Validation Command Options
Navigate to the staging area for the upgrade, where the runcluvfy.sh command is
located, and run the command runcluvfy.sh stage -pre crsinst -upgrade to check
the readiness of your Oracle Clusterware installation for upgrades. Running
runcluvfy.sh with the -pre crsinst -upgrade flags performs system checks to
confirm if the cluster is in a correct state for upgrading from an existing clusterware
installation.
The command uses the following syntax, where variable content is indicated by italics:
runcluvfy.sh stage -pre crsinst -upgrade [-n node_list] [-rolling] -src_crshome
src_Gridhome -dest_crshome dest_Gridhome -dest_version dest_version
[-fixup[-fixupdir fixupdirpath]] [-verbose]
The options are:
■ -n nodelist
The -n flag indicates cluster member nodes, and nodelist is the comma-delimited
list of non-domain qualified node names on which you want to run a preupgrade
verification. If you do not add the -n flag to the verification command, then all the
nodes in the cluster are verified. You must add the -n flag if the clusterware is
down on the node where runcluvfy.sh is run.
■ -rolling
Use this flag to verify readiness for rolling upgrades.
■ -src_crshome src_Gridhome
Use this flag to indicate the location of the source Oracle Clusterware or Grid
home that you are upgrading, where src_Gridhome is the path to the home to
upgrade.
■ -dest_crshome dest_Gridhome
Use this flag to indicate the location of the upgrade Grid home, where dest_
Gridhome is the path to the Grid home.
■ -dest_version dest_version
Use the dest_version flag to indicate the release number of the upgrade,
including any patchset. The release number must include the five digits
designating the release to the level of the platform-specific patch. For example:
11.2.0.2.0.
■ -fixup [-fixupdir fixupdirpath]
Use the -fixup flag to indicate that you want to generate instructions for any
required steps you must complete to ensure that your cluster is ready for an
upgrade. The default location is the CVU work directory. If you want to place the
fixup instructions in a different directory, then add the flag -fixupdir, and
provide the path to the directory where you want to put the instructions for
required fixes.
■ -verbose
Use the -verbose flag to produce detailed output of individual checks.
E.6.2 Example of Verifying System Upgrade Readiness for Grid Infrastructure
You can verify that the permissions required for installing Oracle Clusterware have
been configured on the nodes node1 and node2 by running the following command:
$ ./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome
/u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.2 -dest_version
11.2.0.3.0 -fixup -fixupdir /home/grid/fixup -verbose
E.6.3 Verifying System Readiness for Oracle Database Upgrades
Use Cluster Verification Utility to assist you with system checks in preparation for
starting a database upgrade. The installer runs the appropriate CVU checks
automatically, and either prompts you to fix problems, or provides a fixup script to be
run on all nodes in the cluster before proceeding with the upgrade.
See Also:
Oracle Database Upgrade Guide
E.7 Performing Rolling Upgrades From an Earlier Release
Use the following procedures to upgrade Oracle Clusterware or Oracle Automatic
Storage Management:
■ Performing a Rolling Upgrade of Oracle Clusterware
■ Performing a Rolling Upgrade of Oracle Automatic Storage Management
Note: When you upgrade to Oracle Clusterware 11g Release 2 (11.2),
Oracle Automatic Storage Management (Oracle ASM) is installed in
the same home as Oracle Clusterware. In Oracle documentation, this
home is called the Oracle Grid Infrastructure home, or Grid home.
Also note that Oracle does not support attempting to add additional
nodes to a cluster during a rolling upgrade.
E.7.1 Performing a Rolling Upgrade of Oracle Clusterware
Use the following procedure to upgrade Oracle Clusterware from an earlier release to
a later release:
Note: Oracle recommends that you leave Oracle RAC instances
running. When you start the root script on each node, that node's
instances are shut down and then started up again by the
rootupgrade.sh script. If you upgrade from release 11.2.0.1 to 11.2.0.2,
then all nodes are selected by default. You cannot select or de-select
the nodes.
For single instance Oracle Databases on the cluster, only those that use
Oracle ASM must be shut down. Listeners do not need to be shut
down.
1. Start the installer, and select the option to upgrade an existing Oracle Clusterware
and Oracle ASM installation.
2. On the node selection page, select all nodes.
Note: In contrast with releases prior to Oracle Clusterware 11g
release 2, all upgrades are rolling upgrades, even if you select all
nodes for the upgrade. If you are upgrading from 11.2.0.1 to 11.2.0.2,
then all nodes are selected by default. You cannot select or de-select
the nodes.
Oracle recommends that you select all cluster member nodes for the
upgrade, and then shut down database instances on each node before
you run the upgrade root script, starting the database instance up
again on each node after the upgrade is complete. You can also use
this procedure to upgrade a subset of nodes in the cluster.
3. Select installation options as prompted.
4. When prompted, run the rootupgrade.sh script on each node in the cluster that you
want to upgrade.
Run the script on the local node first. The script shuts down the earlier release
installation, replaces it with the new Oracle Clusterware release, and starts the
new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all
nodes except for one, which you select as the last node. When the script is run
successfully on all the nodes except the last node, run the script on the last node.
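For example, assuming the new Grid home is /u01/app/11.2.0/grid, you would run the script as root on each node as follows; the path is an example only, so use the Grid home that you specified during the upgrade installation:
# /u01/app/11.2.0/grid/rootupgrade.sh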
5. After running the rootupgrade.sh script on the last node in the cluster, if you are
upgrading from a release earlier than release 11.2.0.1, and left the check box with
ASMCA marked, as is the default, then Oracle ASM Configuration Assistant runs
automatically, and the Oracle Clusterware upgrade is complete. If you unchecked
the box during the interview stage of the upgrade, then ASMCA is not run
automatically.
If an earlier version of Oracle Automatic Storage Management is installed, then the
installer starts Oracle ASM Configuration Assistant to upgrade Oracle ASM to 11g
Release 2 (11.2). You can choose to upgrade Oracle ASM at this time, or upgrade it
later.
Oracle recommends that you upgrade Oracle ASM at the same time that you
upgrade the Oracle Clusterware binaries. Until Oracle ASM is upgraded, Oracle
databases that use Oracle ASM cannot be created. Until Oracle ASM is upgraded,
the 11g Release 2 (11.2) Oracle ASM management tools in the Grid home (for
example, srvctl) will not work.
6. Because the Oracle Grid Infrastructure home is in a different location than the
former Oracle Clusterware and Oracle ASM homes, update any scripts or
applications that use utilities, libraries, or other files that reside in the Oracle
Clusterware and Oracle ASM homes.
Note: At the end of the upgrade, if you set the OCR backup location
manually to the older release Oracle Clusterware home (CRS home),
then you must change the OCR backup location to the Oracle Grid
Infrastructure home (Grid home). If you did not set the OCR backup
location manually, then this issue does not concern you.
Because upgrades of Oracle Clusterware are out-of-place upgrades,
the previous release Oracle Clusterware home cannot be the location
of the OCR backups. Backups in the old Oracle Clusterware home
could be deleted.
E.7.2 Performing a Rolling Upgrade of Oracle Automatic Storage Management
After you have completed the Oracle Clusterware 11g Release 2 (11.2) upgrade, if you
did not choose to upgrade Oracle ASM when you upgraded Oracle Clusterware, then
you can do it separately using the Oracle Automatic Storage Management
Configuration Assistant (asmca) to perform rolling upgrades.
You can use asmca to complete the upgrade separately, but you should do it soon after
you upgrade Oracle Clusterware, as Oracle ASM management tools such as srvctl
will not work until Oracle ASM is upgraded.
Note: ASMCA performs a rolling upgrade only if the earlier version
of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA
performs a normal upgrade, in which ASMCA brings down all Oracle
ASM instances on all nodes of the cluster, and then brings them all up
in the new Grid home.
E.7.2.1 Preparing to Upgrade Oracle ASM
Note the following if you intend to perform rolling upgrades of Oracle ASM:
■ The active version of Oracle Clusterware must be 11g Release 2 (11.2). To
determine the active version, enter the following command:
$ crsctl query crs activeversion
■ You can upgrade a single instance Oracle ASM installation to a clustered Oracle
ASM installation. However, you can only upgrade an existing single instance
Oracle ASM installation if you run the installation from the node on which the
Oracle ASM installation is installed. You cannot upgrade a single instance Oracle
ASM installation on a remote node.
■ You must ensure that any rebalance operations on your existing Oracle ASM
installation are completed before starting the upgrade process (see the example
after this list).
■ During the upgrade process, you alter the Oracle ASM instances to an upgrade
state. Because this upgrade state limits Oracle ASM operations, you should
complete the upgrade process soon after you begin. The following are the
operations allowed when an Oracle ASM instance is in the upgrade state:
– Disk group mounts and dismounts
– Opening, closing, resizing, or deleting database files
– Recovering instances
– Queries of fixed views and packages: Users are allowed to query fixed views
and run anonymous PL/SQL blocks using fixed packages, such as dbms_diskgroup
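One way to confirm that no rebalance operations are in progress, as mentioned in the list above, is to query the V$ASM_OPERATION view on the existing Oracle ASM instance before you start the upgrade. This is a sketch only; connect to the ASM instance as a user with the appropriate administrative privileges:
SQL> SELECT group_number, operation, state FROM v$asm_operation;
If the query returns no rows, then no rebalance or other long-running Oracle ASM operations are running.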
E.7.2.2 Upgrading Oracle ASM
Complete the following procedure to upgrade Oracle ASM:
1. On the node where you plan to start the upgrade, set the environment variable
ASMCA_ROLLING_UPGRADE to true. For example:
$ export ASMCA_ROLLING_UPGRADE=true
2. From the Oracle Grid Infrastructure 11g Release 2 (11.2) home, start ASMCA. For
example:
$ cd /u01/11.2/grid/bin
$ ./asmca
3. Select Upgrade.
ASM Configuration Assistant upgrades Oracle ASM in succession for all nodes in
the cluster.
4. After you complete the upgrade, run the command to unset the
ASMCA_ROLLING_UPGRADE environment variable.
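For example, in the Bourne or Korn shell (a sketch only; C shell users would use unsetenv instead):
$ unset ASMCA_ROLLING_UPGRADE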
See Also: Oracle Database Upgrade Guide and Oracle Database Storage
Administrator's Guide for additional information about preparing an
upgrade plan for Oracle ASM, and for starting, completing, and
stopping Oracle ASM upgrades
E.8 Updating DB Control and Grid Control Target Parameters
Because Oracle Clusterware 11g release 2 (11.2) is an out-of-place upgrade of the Oracle
Clusterware home in a new location (the Oracle Grid Infrastructure for a cluster home,
or Grid home), the path for the CRS_HOME parameter in some parameter files must
be changed. If you do not change the parameter, then you encounter errors such as
"cluster target broken" on DB Control or Grid Control.
Use the following procedure to resolve this issue:
1. Log in to dbconsole or gridconsole.
2. Navigate to the Cluster tab.
3. Click Monitoring Configuration.
4. Update the value for Oracle Home with the new Grid home path.
E.9 Unlocking the Existing Oracle Clusterware Installation
After upgrade from previous releases, if you want to deinstall the previous release
Oracle Grid Infrastructure Grid home, then you must first change the permission and
ownership of the previous release Grid home. Log in as root, and change the
permission and ownership of the previous release Grid home using the following
command syntax, where oldGH is the previous release Grid home, swowner is the
Oracle Grid Infrastructure installation owner, and oldGHParent is the parent directory
of the previous release Grid home:
# chmod -R 755 oldGH
# chown -R swowner oldGH
# chown swowner oldGHParent
For example:
# chmod -R 755 /u01/app/11.2.0.1/grid
# chown -R grid /u01/app/11.2.0.1/grid
# chown grid /u01/app/11.2.0.1
E.10 Downgrading Oracle Clusterware After an Upgrade
After a successful or a failed upgrade to Oracle Clusterware 11g Release 2 (11.2), you
can restore Oracle Clusterware to the previous version.
The restoration procedure in this section restores the Clusterware configuration to the
state it was in before the Oracle Clusterware 11g Release 2 (11.2) upgrade. Any
configuration changes you performed during or after the 11g Release 2 (11.2) upgrade
are removed and cannot be recovered.
In the following procedure, the local node is the first node on which the rootupgrade
script was run. The remote nodes are all other nodes that were upgraded.
To restore Oracle Clusterware to the previous release:
1. Use the downgrade procedure for the release to which you want to downgrade.
Downgrading to releases prior to 11g release 2 (11.2.0.1):
On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.pl
-downgrade [-force] to stop the 11g Release 2 (11.2) resources and shut down the 11g
Release 2 (11.2) stack.
Note: This command does not reset the OCR, or delete ocr.loc.
Command syntax is as follows:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -oldcrshome pre11.2_crs_home -version pre11.2_crs_version
For example:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -oldcrshome
/u01/app/crs -version 11.1.0.1.0
Note: Ensure that the Oracle Clusterware version is specified in the
correct format. For example, 11.1.0.1.0.
If you want to stop a partial or failed 11g Release 2 (11.2) installation and restore
the previous release Oracle Clusterware, then use the -force flag with this
command.
Downgrading to release 11.2.0.1 or a later release:
Use the command syntax rootcrs.pl -downgrade -oldcrshome oldGridHomePath
-version oldGridversion, where oldGridhomepath is the path to the previous release
Oracle Grid Infrastructure home, and oldGridversion is the release to which you
want to downgrade. For example:
./rootcrs.pl -downgrade -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.1
If you want to stop a partial or failed 11g release 2 (11.2) installation and restore
the previous release Oracle Clusterware, then use the -force flag with this
command.
2. After the rootcrs.pl -downgrade script has completed on all remote nodes, on
the local node use the command syntax Grid_home/crs/install/rootcrs.pl
-downgrade -lastnode -oldcrshome pre11.2_crs_home -version pre11.2_crs_version
[-force], where pre11.2_crs_home is the home of the earlier Oracle Clusterware
installation, and pre11.2_crs_version is the release number of the earlier Oracle
Clusterware installation.
For example:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome
/u01/app/crs -version 11.1.0.6.0
This script downgrades the OCR. If you want to stop a partial or failed 11g Release
2 (11.2) installation and restore the previous release Oracle Clusterware, then use
the -force flag with this command.
3. Log in as the Grid infrastructure installation owner, and run the following
commands, where /u01/app/grid is the location of the new (upgraded) Grid
home (11.2):
Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/grid
4. As the Grid infrastructure installation owner, run the command ./runInstaller
-nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true
ORACLE_HOME=pre11.2_crs_home, where pre11.2_crs_home represents the home
directory of the earlier Oracle Clusterware installation.
For example:
Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
5. For downgrades to 11.2 and later releases
1. If you are downgrading from Oracle Grid Infrastructure 11g release 2 (11.2.0.4) to
an earlier release of 11.2, after execution of rootupgrade.sh is completed on all
cluster nodes, run the following command from the 11.2.0.4 Grid home:
acfsroot uninstall
2. From the earlier release Grid home, run the following command as a
privileged user (root):
acfsroot install
3. On each node, start Oracle Clusterware from the earlier release Oracle
Clusterware home using the command crsctl start crs. For example,
where the earlier release home is crshome11202, use the following command
on each node:
crshome11202/bin/crsctl start crs
For downgrades to 11.1 and earlier releases
You are prompted to run root.sh from the earlier release Oracle Clusterware
installation home in sequence on each member node of the cluster. After you
complete this task, downgrade is completed.
Running root.sh from the earlier release Oracle Clusterware installation home
restarts the Oracle Clusterware stack, starts up all the resources previously
registered with Oracle Clusterware in the older version, and configures the old
initialization scripts to run the earlier release Oracle Clusterware stack.
6. Change the following environment variables to point to the directories of the
release to which you are downgrading:
■ ORACLE_HOME
■ PATH
Ensure that your oratab file and any client scripts that set the value of ORACLE_
HOME point to the downgraded Oracle home.
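For example, assuming that the home of the release to which you downgraded is /u01/app/crs and that you use the Bourne or Korn shell, you might set the variables as follows; the path is an example only:
$ export ORACLE_HOME=/u01/app/crs
$ export PATH=$ORACLE_HOME/bin:$PATH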
Index
Numerics
32-bit and 64-bit
software versions in the same cluster not
supported, 2-19
A
ASM
and multiple databases, 2-12
and rolling upgrade, E-8
changing owner and permissions of disks
on HP-UX, 3-22
character device names
on HP-UX, 3-22
characteristics of failure groups, 3-18, 3-23
checking disk availability on HP-UX, 3-21
creating the OSDBA for ASM group, 2-14
disk groups, 3-16
failure groups, 3-16
examples, 3-18, 3-23
identifying, 3-18, 3-23
identifying available disks on HP-UX, 3-21
identifying disks on HP-UX, 3-21
number of instances on each node, 1-5, 3-2
OSASM or ASM administrator, 2-12
OSDBA for ASM group, 2-12
recommendations for disk groups, 3-16
required for Standard Edition Oracle RAC, 3-1
required for Typical install type, 3-1
rolling upgrade of, 4-2
space required for Oracle Clusterware files, 3-17
space required for preconfigured database, 3-17
storage option for data files, 3-2
storing Oracle Clusterware files on, 3-4
ASM group
creating, 2-14
ASMCA
Used to create disk groups for older Oracle
Database releases on Oracle ASM, 5-6
Automatic Storage Management. See ASM.
B
Bash shell
default user startup file, 2-40
.bash_profile file, 2-40
bdf, 2-21
binaries
relinking, 5-7
block devices
and upgrades, 3-3
creating permissions file for Oracle Clusterware
files, 3-21
creating permissions file for Oracle Database
files, 3-24
desupport of, 3-24
desupported, 3-3
for upgrades only, 3-24
BMC
configuring, 2-37
BMC interface
preinstallation tasks, 2-36
Bourne shell
default user startup file, 2-40
C
C shell
default user startup file, 2-40
central inventory, 2-11
about, C-1
central inventory. See Also OINSTALL group, and
Oracle Inventory group
changing host names, 4-3
character device
device name on HP-UX, 3-22
checkdir error, E-3
checking version, 2-34
chmod command, 3-14, 3-15, 3-22
chown command, 3-14, 3-15, 3-22
clients
connecting to SCANs, C-4
cloning
cloning a Grid home to other nodes, 4-8
cluster configuration file, 4-7
cluster file system
storage option for data files, 3-2
cluster interconnect
Hyper Messaging protocol, 2-32, 2-34
cluster name
requirements for, 4-3
cluster nodes
private node names, 4-4
public node names, 4-3
specifying uids and gids, 2-15
virtual node names, 4-4
Cluster Time Synchronization Service, 2-36
Cluster Verification Utility
fixup scripts, 2-2
user equivalency troubleshooting, A-6
commands
asmca, 4-7, 5-4, E-8
asmcmd, 2-7
bdf, 1-2
chmod, 3-14, 3-15
chown, 3-14, 3-15
crsctl, 4-10, 5-6, E-3, E-8
fdisk, 3-21
groupadd, 2-16
id, 2-16
mkdir, 3-14, 3-15
nscd, 2-30
partprobe, 3-24
passwd, 2-17
ping, 2-23
rootcrs.pl, 5-7
and deconfigure option, 6-4
rootupgrade.sh, E-3
sqlplus, 2-7
srvctl, E-3
umask, 2-39
unset, E-4
useradd, 2-8, 2-15, 2-16
usermod, 2-15
xhost, 2-3
xterm, 2-4
configToolAllCommands script, A-11
configuring kernel parameters, D-5
creating required X library symbolic links, 2-43
cron jobs, 4-5, A-8
crs_install.rsp file, B-3
CSD
download location for WebSphere MQ, 2-35
ctsdd, 2-36
custom database
failure groups for ASM, 3-18, 3-23
requirements when using ASM, 3-17
Custom installation type
reasons for choosing, 2-11
D
data files
creating separate directories for, 3-13, 3-14
setting permissions on data file directories, 3-14,
3-15
storage options, 3-2
data loss
minimizing with ASM, 3-18, 3-23
database files
supported storage options, 3-4
databases
ASM requirements, 3-17
dba group
and ASM disks on HP-UX, 3-22
dba group. See OSDBA group
DBCA
no longer used for Oracle ASM disk group
administration, 5-6
dbca.rsp file, B-3
deconfiguring Oracle Clusterware, 6-4
default file mode creation mask
setting, 2-39
deinstall, 6-1
deinstallation, 6-1
device names
on HP-UX, 3-22
Direct NFS
disabling, 3-15
enabling, 3-11
enabling for Oracle Database, 3-8
for data files, 3-7
minimum write size value for, 3-8
directory
creating separate data file directories, 3-13, 3-14
permission for data file directories, 3-14, 3-15
disk
changing permissions and owner for ASM
on HP-UX, 3-22
disk group
ASM, 3-16
recommendations for Oracle ASM disk
groups, 3-16
disk groups
recommendations for, 3-16
disk space
checking, 2-21
requirements for preconfigured database in
ASM, 3-17
disks
changing permissions and owner for ASM
on HP-UX, 3-22
checking availability for ASM on HP-UX, 3-21
identifying LVM disks on HP-UX, 3-22
disks. See Also ASM disks
DISPLAY environment variable, D-7
setting, 2-40
DNS, A-10
E
emulator
installing from X emulator, 2-4
enterprise.rsp file, B-3
environment
configuring for oracle user, 2-39
environment variables
DISPLAY, 2-40, D-7
ORACLE_BASE, C-2
ORACLE_HOME, 2-7, 2-41, E-4
ORACLE_SID, E-4
removing from shell startup file, 2-40
SHELL, 2-40
TEMP and TMPDIR, 2-21, 2-41
TNS_ADMIN, 2-41
errors
X11 forwarding, 2-42, D-5
errors using Opatch, E-3
Exadata
relinking binaries example for, 5-7
examples
ASM failure groups, 3-18, 3-23
executable_stack parameter
minimal value, D-5
F
failure group
ASM, 3-16
characteristics of ASM failure group, 3-18, 3-23
examples of ASM failure groups, 3-18, 3-23
fencing
and IPMI, 2-36, 4-5
file mode creation mask
setting, 2-39
file system
storage option for data files, 3-2
file systems, 3-4
files
$ORACLE_HOME/lib/libnfsodm11.so, 3-13
$ORACLE_HOME/lib/libodm11.so, 3-13
.bash_profile, 2-40
dbca.rsp, B-3
editing shell startup file, 2-40
enterprise.rsp, B-3
.login, 2-40
oraInst.loc, 2-5
.profile, 2-40
response files, B-2
filesets, 2-31
fixup script, 2-2
about, 1-1
G
GFS, 3-4
gid
identifying existing, 2-16
specifying, 2-16
specifying on other nodes, 2-15
globalization
support for, 4-3
GNS
about, 2-24
GPFS, 3-4
Grid home
and Oracle base restriction, 2-7
grid home
default path for, 2-45
disk space for, 2-20
unlocking, 5-7
grid naming service. See GNS
grid user, 2-11
group IDs
identifying existing, 2-16
specifying, 2-16
specifying on other nodes, 2-15
groups
checking for existing oinstall group, 2-5
creating identical groups on other nodes, 2-15
creating the ASM group, 2-14
creating the OSDBA for ASM group, 2-14
creating the OSDBA group, 2-13
OINSTALL, 2-5, 2-6
OSASM (asmadmin), 2-12
OSDBA (dba), 2-11
OSDBA for ASM (asmdba), 2-12
OSDBA group (dba), 2-11
OSOPER (oper), 2-11
OSOPER for ASM, 2-12
OSOPER group (oper), 2-11
required for installation owner user, 2-11
specifying when creating users, 2-16
using NIS, 2-10, 2-15
H
hardware requirements, 2-19
high availability IP addresses, 2-22
host names
changing, 4-3
legal host names, 4-3
HP-UX
character device names, 3-22
checking disk availability for ASM, 3-21
identifying disks for ASM, 3-21
identifying LVM disks, 3-22
tuning parameters for, D-6
Hyper Messaging Protocol
using as a cluster interconnect, 2-32, 2-34
HyperFabric software
requirement, 2-32, 2-34
I
id command, 2-16
INS-32026 error, 2-7
installation
and cron jobs, 4-5
and globalization, 4-3
cloning a Grid infrastructure installation to other
nodes, 4-8
completing after OUI exits, A-11
response files, B-2
preparing, B-2, B-4
templates, B-2
silent mode, B-5
using cluster configuration file, 4-7
installation types
and ASM, 3-17
interfaces
requirements for private interconnect, C-4
intermittent hangs
and socket files, 4-10
ioscan command, 3-21
IPMI
preinstallation tasks, 2-36
preparing for installation, 4-5
required postinstallation configuration, 5-2
Itanium
operating system requirements, 2-31
J
JDK requirements, 2-31
job role separation users, 2-11
K
kctune, D-6
kcweb, D-6
kernel parameters
configuring, D-5
setting, D-7
Korn shell
default user startup file, 2-40
ksi_alloc_max parameter
minimal value, D-5
L
legal host names, 4-3
libnfsodm11.so, 3-13
libodm11.so, 3-13
log file
how to access during installation, 4-6
.login file, 2-40
LVM
identifying volume group devices on
HP-UX, 3-22
recommendations for ASM, 3-16
M
mask
setting default file mode creation mask, 2-39
max_thread_proc
minimal value, D-5
max_thread_proc parameter
minimal value, D-5
maxdsiz parameter
minimal value, D-5
maxdsiz_64bit parameter
minimal value, D-5
maxfiles parameter
minimal value, D-5
maxfiles_lim
minimal value, D-6
maxssiz parameter
minimal value, D-6
maxssiz_64bit parameter
minimal value, D-6
maxuprc parameter
minimal value, D-6
memory requirements, 2-19
minor number for device files, 2-44
mixed binaries, 2-31
mkdir command, 3-14, 3-15
mode
setting default file mode creation mask, 2-39
msgmap
not valid, D-5
msgmni parameter
minimal value, D-6
msgseg
not valid, D-5
msgtql parameter
minimal value, D-6
multiple databases
and ASM, 2-12
multiple oracle homes, 2-6, 3-15
My Oracle Support, 5-1
N
ncsize parameter
minimal value, D-6
Net Configuration Assistant (NetCA)
response files, B-5
running at command prompt, B-5
netca, 4-7
netca.rsp file, B-3
Network Information Services
See NIS
nflocks parameter
minimal value, D-6
NFS, 3-4, 3-10
and data files, 3-9
and Oracle Clusterware files, 3-5
buffer size parameters for, 3-9, 3-11
Direct NFS, 3-7
for data files, 3-9
rsize, 3-10
ninode parameter
minimal value, D-6
NIS
alternative to local users and groups, 2-10
nkthread parameter
minimal value, D-6
noninteractive mode. See response file mode
nproc parameter
minimal value, D-6
nslookup command, A-10
NTP protocol
and slewing, 2-36
O
OCFS2, 3-4
OCR. See Oracle Cluster Registry
OINSTALL group
about, C-1
and oraInst.loc, 2-5
creating on other nodes, 2-15
oinstall group
checking for existing, 2-5
OINSTALL group. See Also Oracle Inventory group
Opatch, E-3
oper group. See OSOPER group
operating system
checking version, 2-34
different on cluster members, 2-31
operating system requirements, 2-31
Itanium, 2-31
optimal flexible architecture
and oraInventory directory, C-2
Oracle base
grid homes not permitted under, 2-45
Oracle base directory
about, C-3
Grid home must not be in an Oracle Database
Oracle base, 2-7
Oracle Cluster Registry
configuration of, 4-5
mirroring, 3-6
partition sizes, 3-6
permissions file to own block device
partitions, 3-21
supported storage options, 3-4
Oracle Clusterware
and file systems, 3-4
and upgrading Oracle ASM instances, 1-5, 3-2
installing, 4-1
rolling upgrade of, 4-2
supported storage options for, 3-4
upgrading, 3-6
Oracle Clusterware files
ASM disk space requirements, 3-17
Oracle Clusterware Installation Guide
replaced by Oracle Grid Infrastructure Installation
Guide, 4-1
Oracle Database
creating data file directories, 3-13, 3-14
data file storage options, 3-2
privileged groups, 2-11
requirements with ASM, 3-17
Oracle Database Configuration Assistant
response file, B-3
Oracle Disk Manager
and Direct NFS, 3-11
Oracle Grid Infrastructure owner (grid), 2-11
Oracle Grid Infrastructure response file, B-3
oracle home
and asmcmd errors, 2-7
ASCII path restriction for, 4-5
multiple oracle homes, 2-6, 3-15
Oracle Inventory
pointer file, 2-5
Oracle Inventory group
about, C-1
checking for existing, 2-5
creating, 2-5
creating on other nodes, 2-15
Oracle Net Configuration Assistant
response file, B-3
Oracle patch updates, 5-1
Oracle Software Owner user
and ASM disks, 3-22
configuring environment for, 2-39
creating, 2-7, 2-14
creating on other nodes, 2-15
determining default shell, 2-40
required group membership, 2-11
Oracle software owner user
creating, 2-6
description, 2-11
Oracle Universal Installer
response files
list of, B-3
Oracle Upgrade Companion, 2-2
oracle user
and ASM disks, 3-22
configuring environment for, 2-39
creating, 2-6, 2-7, 2-8, 2-14, 2-15
creating on other nodes, 2-15
description, 2-11
determining default shell, 2-40
required group membership, 2-11
ORACLE_BASE environment variable
removing from shell startup file, 2-40
ORACLE_HOME environment variable
removing from shell startup file, 2-40
unsetting, 2-41
ORACLE_SID environment variable
removing from shell startup file, 2-40
oraInst.loc
and central inventory, 2-5
contents of, 2-5
oraInst.loc file
location, 2-5
location of, 2-5
oraInventory, 2-11
about, C-1
creating, 2-5
OSASM group, 2-12
about, 2-12
and multiple databases, 2-12
and SYSASM, 2-12
creating, 2-14
OSDBA for ASM group, 2-12
about, 2-12
OSDBA group
and ASM disks on HP-UX, 3-22
and SYSDBA privilege, 2-11
creating, 2-13
creating on other nodes, 2-15
description, 2-11
OSDBA group for ASM
creating, 2-14
OSOPER for ASM group
about, 2-12
creating, 2-14
OSOPER group
and SYSOPER privilege, 2-11
creating, 2-13
creating on other nodes, 2-15
description, 2-11
P
parameters
tuning manually, D-6
partition
using with ASM, 3-16
passwd command, 2-17
patch download location, 2-35
patch updates
download, 5-1
install, 5-1
My Oracle Support, 5-1
patches
download location, 2-35
PC X server
installing from, 2-4
permissions
for data file directories, 3-14, 3-15
physical RAM requirements, 2-19
ping command, A-10
policy-managed databases
and SCANs, C-5
postinstallation
patch download and install, 5-1
root.sh back up, 5-2
preconfigured database
ASM disk space requirements, 3-17
requirements when using ASM, 3-17
privileged groups
for Oracle Database, 2-11
.profile file, 2-40
PRVF-5436 error, 2-36
pvdisplay command, 3-22
R
RAC
configuring disks for ASM on HP-UX, 3-21
RAID
and mirroring Oracle Cluster Registry and voting
disk, 3-6
recommended ASM redundancy level, 3-16
RAM requirements, 2-19
raw devices
and upgrades, 3-3, 3-24
desupport of, 3-24
upgrading existing partitions, 3-6
recovery files
supported storage options, 3-4
redundancy level
and space requirements for preconfigured
database, 3-17
Redundant Interconnect Usage, 2-22
relinking Oracle Grid Infrastructure home
binaries, 5-7, 6-3
requirements, 3-17
hardware, 2-19
resolv.conf file, A-10
response file installation
preparing, B-2
response files
templates, B-2
silent mode, B-5
response file mode
about, B-1
reasons for using, B-2
See also response files, silent mode, B-1
response files
about, B-1
creating with template, B-3
crs_install.rsp, B-3
dbca.rsp, B-3
enterprise.rsp, B-3
general procedure, B-2
Net Configuration Assistant, B-5
netca.rsp, B-3
passing values at command line, B-1
specifying with Oracle Universal Installer, B-5
response files. See also silent mode
rolling upgrade
ASM, 4-2
of ASM, E-8
Oracle Clusterware, 4-2
root user
logging in as, 2-3
root.sh, 4-7
back up, 5-2
running, 4-3, A-11
rsize parameter, 3-10
run level, 2-19
S
SAM, D-6
starting, D-7
sam command, D-7
SCAN address, A-10
SCAN listener, A-10, C-5
SCANs, 2-25
understanding, C-4
use of SCANs required for clients of
policy-managed databases, C-5
scripts
root.sh, 4-3
secure shell
configured by the installer, 2-38
security
dividing ownership of Oracle software, 2-10
semmni parameter
minimal value, D-6
semmns parameter
minimal value, D-6
semmnu parameter
minimal value, D-6
semvmx parameter
minimal value, D-6
Serviceguard
requirement, 2-32, 2-34
setting kernel parameters, D-7
shell
determining default shell for oracle user, 2-40
SHELL environment variable
checking value of, 2-40
shell startup file
editing, 2-40
removing environment variables, 2-40
shmmax parameter
minimal value, D-6
shmmni parameter
minimal value, D-6
shmseg parameter
minimal value, D-6
silent mode
about, B-1
reasons for using, B-2
See also response files, B-1
silent mode installation, B-5
single client access names. See SCAN addresses
software requirements, 2-31
checking software requirements, 2-34
ssh
and X11 Forwarding, 2-42
automatic configuration from OUI, 2-38
configuring, D-1
when used, 2-38
startup file
for shell, 2-40
stty
suppressing to prevent installation errors, 2-43
supported storage options
Oracle Clusterware, 3-4
suppressed mode
reasons for using, B-2
swap space
requirements, 2-19
symbolic links
X library links required, 2-43
SYSASM, 2-12
and OSASM, 2-12
SYSDBA
using database SYSDBA on ASM
deprecated, 2-12
SYSDBA privilege
associated group, 2-11
SYSOPER privilege
associated group, 2-11
System Administration Manager
See SAM
System Administration Manager (SAM), D-7
T
TEMP environment variable, 2-21
setting, 2-41
temporary directory. See /tmp directory
temporary disk space
requirements, 2-19
terminal output commands
suppressing for Oracle installation owner
accounts, 2-43
TMPDIR environment variable, 2-21
setting, 2-41
TNS_ADMIN environment variable
unsetting, 2-41
Troubleshooting
DBCA does not recognize Oracle ASM disk size
and fails to create disk groups, 5-6
troubleshooting
and deinstalling, 6-1
asmcmd errors and oracle home, 2-7
automatic SSH configuration from OUI, 2-39
deconfiguring Oracle Clusterware to fix causes of
root.sh errors, 6-4
disk space errors, 4-5
DISPLAY errors, 2-42
environment path errors, 4-6
garbage strings in script inputs found in log
files, 2-43
intermittent hangs, 4-10
log file, 4-6
nfs mounts, 2-30
permissions errors and oraInventory, C-1
permissions errors during installation, C-2
public network failures, 2-30
root.sh errors, 6-4
run level error, 2-19
sqlplus errors and oracle home, 2-7
ssh, D-1
ssh configuration failure, D-2
ssh errors, 2-43
SSH timeouts, A-4
stty errors, 2-43
unexplained installation errors, 4-5, A-8
user equivalency, A-6, D-1
user equivalency error due to different user or
group IDs, 2-9, 2-15
user equivalency errors, 2-6
X11 forwarding error, 2-42
U
uid
identifying existing, 2-16
specifying, 2-16
specifying on other nodes, 2-15
umask command, 2-39
uname command, 2-34
uninstall, 6-1
uninstalling, 6-1
UNIX commands
chmod, 3-22
chown, 3-22
ioscan, 3-21
pvdisplay, 3-22
sam, D-7
uname, 2-34
unset, 2-41
unsetenv, 2-41
unset command, 2-41
unsetenv command, 2-41
upgrade
of Oracle Clusterware, 4-2
restrictions for, E-2
unsetting environment variables for, E-4
upgrades, 2-2
and SCANs, C-5
of Oracle ASM, E-8
using raw or block devices with, 3-3
upgrading
and existing Oracle ASM instances, 1-5, 3-2
and OCR partition sizes, 3-6
and voting disk partition sizes, 3-6
shared Oracle Clusterware home to local grid
homes, 2-45
user equivalence
testing, A-6
user equivalency errors
groups and users, 2-9, 2-15
user IDs
identifying existing, 2-16
specifying, 2-16
specifying on other nodes, 2-15
useradd command, 2-8, 2-15, 2-16
users
creating identical users on other nodes, 2-15
creating the grid user, 2-6
creating the oracle user, 2-7, 2-14
oracle software owner user (oracle), 2-11
specifying groups when creating, 2-16
using NIS, 2-10, 2-15
V
VIP
for SCAN, A-10
voting disks
configuration of, 4-4
mirroring, 3-6
partition sizes, 3-6
supported storage options, 3-4
W
WebSphere MQ
CSD download location, 2-35
workstation
installing from, 2-3
wsize parameter, 3-10
wtmax, 3-8
minimum value for Direct NFS, 3-8
X
X emulator
installing from, 2-4
X library symbolic links
required, 2-43
X window system
enabling remote hosts, 2-3, 2-4
X11 forwarding
error, 2-42
X11 forwarding errors, D-5
xhost command, 2-3
xterm command, 2-4
xtitle
suppressing to prevent installation errors, 2-43