Oracle® Grid Infrastructure
Installation and Upgrade Guide
12c Release 2 (12.2) for Microsoft Windows
E55975-15
July 2017
Oracle Grid Infrastructure Installation and Upgrade Guide, 12c Release 2 (12.2) for Microsoft Windows
E55975-15
Copyright © 2012, 2017, Oracle and/or its affiliates. All rights reserved.
Primary Authors: Subhash Chandra, Janet Stern, Aparna Kamath
Contributing Authors: Prakash Jashnani, Jacqueline Sideri, Douglas Williams
Contributors: Prasad Bagal, Eric Belden, Maria Colgan, Ian Cookson, Jonathan Creighton, Rajesh Dasari,
Barbara Glover, Kevin Jernigan, Aneesh Khandelwal, Tonghua Li, Rudregowda Mallegowda, Sivaselvam
Narayanasamy, Srinivas Poovala, Vishal Saxena, Sunil Surabhi, Richard Wessman, James Williams, Yanhua
Xia, Suresh Yambari Venkata Naga, Jiangqi Yang
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agencyspecific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless
otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates
will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party
content, products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface ............................................................................................................................................................... xv
Audience ...................................................................................................................................................... xv
Documentation Accessibility .................................................................................................................... xv
Java Access Bridge and Accessibility...................................................................................................... xvi
Related Documentation ............................................................................................................................ xvi
Conventions............................................................................................................................................... xvii
Changes in this Release for Oracle Clusterware ......................................................................... xix
Changes in Oracle Clusterware 12c Release 2 (12.2) ............................................................................ xix
New Features for Oracle Grid Infrastructure 12c Release 2 (12.2) ............................................. xix
Deprecated Features for Oracle Clusterware 12c Release 2 (12.2) ............................................. xxi
Desupported Features ...................................................................................................................... xxi
Changes in Oracle Clusterware 12c Release 1 (12.1.0.2) ..................................................................... xxii
New Features for Oracle Clusterware 12.1.0.2............................................................................. xxii
Other Changes for Oracle Clusterware 12.1.0.2........................................................................... xxii
Changes in Oracle Clusterware 12c Release 1 (12.1) .......................................................................... xxiii
New Features for Oracle Clusterware 12c Release 1 (12.1) ....................................................... xxiii
Deprecated Features for Oracle Clusterware 12c Release 1 (12.1) ............................................ xxv
Desupported Features for Oracle Clusterware 12c Release 1 (12.1) ......................................... xxv
Other Changes for Oracle Clusterware 12c Release 1 (12.1) ...................................................... xxv
1 Oracle Grid Infrastructure Installation Checklist
1.1 Oracle Grid Infrastructure Installation Server Hardware Checklist ......................................... 1-1
1.2 Operating System Checklist for Oracle Grid Infrastructure and Oracle RAC......................... 1-2
1.3 Server Configuration Checklist for Oracle Grid Infrastructure ................................................. 1-3
1.4 Oracle Grid Infrastructure Network Checklist............................................................................. 1-4
1.5 User Environment Configuration Checklist for Oracle Grid Infrastructure............................ 1-6
1.6 Storage Checklist for Oracle Grid Infrastructure ......................................................................... 1-7
1.7 Installer Planning Checklist for Oracle Grid Infrastructure ....................................................... 1-8
2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC
2.1 Checking Server Hardware and Memory Configuration ........................................................... 2-1
2.1.1 Checking the Available RAM on Windows Systems....................................................... 2-2
2.1.2 Checking the Currently Configured Virtual Memory on Windows Systems .............. 2-2
2.1.3 Checking the System Processor Type................................................................................. 2-2
2.1.4 Checking the Available Disk Space for Oracle Home Directories ................................. 2-3
2.1.5 Checking the Available TEMP Disk Space ........................................................................ 2-4
2.2 General Server Requirements ......................................................................................................... 2-4
2.3 Server Minimum Hardware and Memory Requirements .......................................................... 2-5
2.4 Server Minimum Storage Requirements ....................................................................................... 2-5
2.4.1 Disk Format Requirements .................................................................................................. 2-6
2.5 Configuring Time Synchronization for the Cluster ..................................................................... 2-7
2.5.1 About Cluster Time Synchronization................................................................................. 2-7
2.5.2 Understanding Network Time Requirements .................................................................. 2-8
2.5.3 Configuring the Windows Time Service............................................................................ 2-8
2.5.4 Configuring Network Time Protocol ................................................................................. 2-9
2.5.5 Configuring Cluster Time Synchronization Service ........................................................ 2-9
3 Configuring Operating Systems for Oracle Grid Infrastructure and Oracle RAC
3.1 Reviewing Operating System and Software Upgrade Best Practices ....................................... 3-1
3.1.1 General Upgrade Best Practices .......................................................................................... 3-2
3.1.2 Oracle ASM Upgrade Notifications.................................................................................... 3-2
3.1.3 Rolling Upgrade Procedure Notifications ......................................................................... 3-3
3.2 Reviewing Operating System Security Common Practices........................................................ 3-3
3.3 Verify Privileges for Copying Files in the Cluster ....................................................................... 3-3
3.4 Identifying Software Requirements ............................................................................................... 3-4
3.4.1 Windows Firewall Feature on Windows Servers ............................................................. 3-6
3.5 Checking the Operating System Version ...................................................................................... 3-6
3.6 Checking Hardware and Software Certification on My Oracle Support ................................. 3-6
3.7 Oracle Enterprise Manager Requirements.................................................................................... 3-7
3.8 Installation Requirements for Web Browsers ............................................................................... 3-7
4 Configuring Networks for Oracle Grid Infrastructure and Oracle RAC
4.1 About Oracle Grid Infrastructure Network Configuration Options......................................... 4-2
4.2 Understanding Network Addresses .............................................................................................. 4-2
4.2.1 About the Public IP Address ............................................................................................... 4-3
4.2.2 About the Private IP Address.............................................................................................. 4-3
4.2.3 About the Virtual IP Address .............................................................................................. 4-4
4.2.4 About the Grid Naming Service (GNS) Virtual IP Address ........................................... 4-4
4.2.5 About the SCAN.................................................................................................................... 4-4
4.3 Network Interface Hardware Requirements ................................................................................ 4-6
4.3.1 Network Requirements for Each Node.............................................................................. 4-6
4.3.2 Network Requirements for the Private Network ............................................................. 4-7
4.3.3 Network Requirements for the Public Network............................................................... 4-7
4.3.4 IPv6 Protocol Support for Windows .................................................................................. 4-8
4.3.5 Using Multiple Public Network Adapters......................................................................... 4-9
4.3.6 Network Configuration Tasks for Windows Server Deployments................................ 4-9
4.3.7 Network Interface Configuration Options for Performance......................................... 4-12
4.4 Oracle Grid Infrastructure IP Name and Address Requirements ........................................... 4-12
4.4.1 About Oracle Grid Infrastructure Name Resolution Options ...................................... 4-13
4.4.2 Cluster Name and SCAN Requirements ......................................................................... 4-14
4.4.3 IP Name and Address Requirements For Grid Naming Service (GNS) ..................... 4-15
4.4.4 IP Name and Address Requirements For Multi-Cluster GNS ..................................... 4-15
4.4.5 IP Address Requirements for Manual Configuration.................................................... 4-17
4.4.6 Confirming the DNS Configuration for SCAN............................................................... 4-18
4.4.7 Grid Naming Service for a Traditional Cluster Configuration Example .................... 4-18
4.4.8 Domain Delegation to Grid Naming Service .................................................................. 4-19
4.4.9 Manual IP Address Configuration Example ................................................................... 4-21
4.5 Intended Use of Network Adapters............................................................................................. 4-23
4.6 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure......................... 4-23
4.7 Multicast Requirements for Networks Used by Oracle Grid Infrastructure ......................... 4-23
4.8 Configuring Multiple ASM Interconnects on Microsoft Windows Platforms....................... 4-24
5 Configuring Users, Groups and Environments for Oracle Grid Infrastructure
and Oracle RAC
5.1 Creating Installation Groups and Users for Oracle Grid Infrastructure and Oracle RAC .... 5-1
5.1.1 About the Oracle Installation User ..................................................................................... 5-2
5.1.2 About the Oracle Home User for the Oracle Grid Infrastructure Installation ............. 5-3
5.1.3 About the Oracle Home User for the Oracle RAC Installation ...................................... 5-4
5.1.4 When to Create an Oracle Home User ............................................................................... 5-4
5.1.5 Oracle Home User Configurations for Oracle Installations ............................................ 5-6
5.1.6 Understanding the Oracle Inventory Directory and the Oracle Inventory Group...... 5-7
5.2 Standard Administration and Job Role Separation User Groups.............................................. 5-7
5.2.1 About Job Role Separation Operating System Privileges Groups and Users............... 5-8
5.2.2 Oracle Software Owner for Each Oracle Software Product ............................................ 5-9
5.2.3 Standard Oracle Database Groups for Database Administrators ................................ 5-10
5.2.4 Oracle ASM Groups for Job Role Separation .................................................................. 5-11
5.2.5 Extended Oracle Database Administration Groups for Job Role Separation............. 5-12
5.2.6 Operating System Groups Created During Installation ................................................ 5-13
5.2.7 Example of Using Role-Allocated Groups and Users .................................................... 5-16
5.3 Configuring User Accounts........................................................................................................... 5-17
5.3.1 Configuring Environment Variables for the Oracle Installation User......................... 5-17
5.3.2 Verifying User Privileges to Update Remote Nodes...................................................... 5-17
5.3.3 Managing User Accounts with User Account Control .................................................. 5-19
5.4 Creating Oracle Software Directories .......................................................................................... 5-19
5.4.1 About the Directories Used During Installation of Oracle Grid Infrastructure......... 5-19
5.4.2 Requirements for the Oracle Grid Infrastructure Home Directory ............................. 5-23
5.4.3 About Creating the Oracle Base Directory Path ............................................................. 5-24
5.5 Enabling Intelligent Platform Management Interface (IPMI) .................................................. 5-24
5.5.1 Requirements for Enabling IPMI ...................................................................................... 5-25
5.5.2 Configuring the IPMI Management Network ................................................................ 5-25
5.5.3 Configuring the IPMI Driver ............................................................................................. 5-26
6 Configuring Shared Storage for Oracle Database and Grid Infrastructure
6.1 Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC.......................... 6-2
6.2 Oracle ACFS and Oracle ADVM .................................................................................................... 6-3
6.3 Storage Considerations for Oracle Grid Infrastructure and Oracle RAC................................. 6-4
6.4 Guidelines for Choosing a Shared Storage Option...................................................................... 6-5
6.4.1 Guidelines for Using Oracle ASM Disk Groups for Storage........................................... 6-5
6.4.2 Guidelines for Using Direct Network File System (NFS) with Oracle RAC ................ 6-6
6.5 About Migrating Existing Oracle ASM Instances ........................................................................ 6-7
6.6 Preliminary Shared Disk Preparation............................................................................................ 6-7
6.6.1 Disabling Write Caching ...................................................................................................... 6-7
6.6.2 Enabling Automounting for Windows .............................................................................. 6-8
6.7 Configuring Disk Partitions on Shared Storage ........................................................................... 6-9
6.7.1 Creating Disk Partitions Using the Disk Management Interface ................................... 6-9
6.7.2 Creating Disk Partitions using the DiskPart Utility ....................................................... 6-10
7 Configuring Storage for Oracle Automatic Storage Management
7.1 Identifying Storage Requirements for Using Oracle ASM for Shared Storage........................ 7-2
7.2 Oracle Clusterware Storage Space Requirements........................................................................ 7-6
7.3 About the Grid Infrastructure Management Repository ............................................................ 7-7
7.4 Restrictions for Disk Partitions Used By Oracle ASM................................................................. 7-8
7.5 Preparing Your System to Use Oracle ASM for Shared Storage................................................ 7-8
7.5.1 Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM .......... 7-8
7.5.2 Selecting Disks to use with Oracle ASM Disk Groups .................................................... 7-9
7.5.3 Specifying the Oracle ASM Disk Discovery String .......................................................... 7-9
7.6 Marking Disk Partitions for Oracle ASM Before Installation................................................... 7-10
7.6.1 Using asmtoolg (Graphical User Interface) to Mark Disks ........................................... 7-11
7.6.2 Using asmtoolg to Remove Disk Stamps ......................................................................... 7-11
7.6.3 asmtool Command Line Reference................................................................................... 7-12
7.7 Creating and Using Oracle ASM Credentials File ..................................................................... 7-13
7.8 Configuring Oracle Automatic Storage Management Cluster File System ........................... 7-13
8 Configuring Direct NFS Client for Oracle RAC Data Files
8.1 About Direct NFS Client Storage.................................................................................................... 8-2
8.2 Creating an oranfstab File for Direct NFS Client ......................................................................... 8-3
8.3 Configurable Attributes for the oranfstab File ............................................................................. 8-5
8.4 Mounting NFS Storage Devices with Direct NFS Client ............................................................ 8-7
8.5 Specifying Network Paths for a NFS Server ................................................................................. 8-7
8.6 Enabling Direct NFS Client ............................................................................................................. 8-8
8.7 Performing Basic File Operations Using the ORADNFS Utility................................................ 8-8
8.8 Monitoring Direct NFS Client Usage ............................................................................................. 8-9
8.9 Disabling Oracle Disk Management Control of NFS for Direct NFS Client ............................ 8-9
9 Installing Oracle Grid Infrastructure for a Cluster
9.1 About Image-Based Oracle Grid Infrastructure Installation...................................................... 9-2
9.2 Understanding Cluster Configuration Options ........................................................................... 9-2
9.2.1 About Oracle Standalone Clusters...................................................................................... 9-3
9.2.2 About Oracle Extended Clusters......................................................................................... 9-3
9.3 About Default File Permissions Set by Oracle Universal Installer ............................................ 9-4
9.4 Installing Oracle Grid Infrastructure for a New Cluster............................................................. 9-4
9.5 Installing Oracle Grid Infrastructure Using a Cluster Configuration File ............................... 9-7
9.6 Installing Only the Oracle Grid Infrastructure Software ............................................................ 9-8
9.6.1 Installing Software Binaries for Oracle Grid Infrastructure for a Cluster..................... 9-9
9.6.2 Configuring the Software Binaries for Oracle Grid Infrastructure for a Cluster ....... 9-10
9.6.3 Configuring the Software Binaries Using a Response File............................................ 9-11
9.6.4 Setting Ping Targets for Network Checks........................................................................ 9-11
9.7 Confirming Oracle Clusterware Function................................................................................... 9-12
9.8 Confirming Oracle ASM Function for Oracle Clusterware Files............................................. 9-12
9.9 Understanding Offline Processes in Oracle Grid Infrastructure ............................................. 9-13
10 Oracle Grid Infrastructure Postinstallation Tasks
10.1 Required Postinstallation Tasks ................................................................................................. 10-1
10.1.1 Download and Install Patch Updates............................................................................. 10-2
10.1.2 Configure Exceptions for the Windows Firewall ......................................................... 10-2
10.2 Recommended Postinstallation Tasks ....................................................................................... 10-7
10.2.1 Downloading and Installing the ORAchk Health Check Tool ................................... 10-7
10.2.2 Optimize Memory Usage for Programs......................................................................... 10-8
10.2.3 Create a Fast Recovery Area Disk Group ...................................................................... 10-8
10.2.4 Checking the SCAN Configuration .............................................................................. 10-10
10.3 Using Earlier Oracle Database Releases with Grid Infrastructure ...................................... 10-11
10.3.1 General Restrictions for Using Earlier Oracle Database Releases ............................ 10-11
10.3.2 Configuring Earlier Release Oracle Database on Oracle ACFS................................ 10-12
10.3.3 Using ASMCA to Administer Disk Groups for Earlier Database Releases ............ 10-12
10.3.4 Using the Correct LSNRCTL Commands.................................................................... 10-13
10.3.5 Starting and Stopping Cluster Nodes or Oracle Clusterware Resources................ 10-13
10.4 Modifying Oracle Clusterware Binaries After Installation................................................... 10-13
11 Upgrading Oracle Grid Infrastructure
11.1 Understanding Out-of-Place and Rolling Upgrades ............................................................... 11-2
11.2 About Oracle Grid Infrastructure Upgrade and Downgrade ................................................ 11-3
11.3 Options for Oracle Grid Infrastructure Upgrades ................................................................... 11-4
11.4 Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades................................. 11-4
11.5 Preparing to Upgrade an Existing Oracle Clusterware Installation...................................... 11-6
11.5.1 Upgrade Checklist for Oracle Grid Infrastructure ....................................................... 11-7
11.5.2 Tasks to Complete Before Upgrading Oracle Grid Infrastructure............................. 11-9
11.5.3 Create an Oracle ASM Password File........................................................................... 11-10
11.5.4 Running the Oracle ORAchk Upgrade Readiness Assessment................................ 11-11
11.5.5 Using CVU to Validate Readiness for Oracle Clusterware Upgrades..................... 11-11
11.6 Understanding Rolling Upgrades Using Batches .................................................................. 11-13
11.7 Performing Rolling Upgrades of Oracle Grid Infrastructure............................................... 11-13
11.7.1 Upgrading Oracle Grid Infrastructure from an Earlier Release ............................... 11-14
11.7.2 Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable ........ 11-16
11.7.3 Joining Inaccessible Nodes After Forcing an Upgrade.............................................. 11-16
11.7.4 Changing the First Node for Install and Upgrade...................................................... 11-17
11.8 Applying Patches to Oracle Grid Infrastructure .................................................................... 11-17
11.8.1 About Individual (One-Off) Oracle Grid Infrastructure Patches ............................. 11-17
11.8.2 About Oracle Grid Infrastructure Software Patch Levels ......................................... 11-18
11.8.3 Patching Oracle ASM to a Software Patch Level ........................................................ 11-18
11.9 Updating Oracle Enterprise Manager Cloud Control Target Parameters.......................... 11-19
11.9.1 Updating the Enterprise Manager Cloud Control Target After Upgrades ............ 11-19
11.9.2 Updating the Enterprise Manager Agent Base Directory After Upgrades............. 11-19
11.9.3 Registering Resources with Oracle Enterprise Manager After Upgrades .............. 11-20
11.10 Checking Cluster Health Monitor Repository Size After Upgrading............................... 11-21
11.11 Downgrading Oracle Clusterware After an Upgrade......................................................... 11-21
11.11.1 Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1) .......................... 11-22
11.11.2 Downgrading to Oracle Grid Infrastructure 11g Release 2 (11.2) .......................... 11-24
11.12 Completing Failed or Interrupted Installations and Upgrades ......................................... 11-25
11.12.1 Completing Failed Installations and Upgrades ........................................................ 11-25
11.12.2 Continuing Incomplete Upgrade of First Nodes ...................................................... 11-26
11.12.3 Continuing Incomplete Upgrades on Remote Nodes.............................................. 11-26
11.12.4 Continuing Incomplete Installations on First Nodes ............................................... 11-27
11.12.5 Continuing Incomplete Installation on Remote Nodes ........................................... 11-27
12 Modifying or Deinstalling Oracle Grid Infrastructure
12.1 Deciding When to Deinstall Oracle Clusterware ..................................................................... 12-1
12.2 Migrating Standalone Grid Infrastructure Servers to a Cluster............................................. 12-2
12.3 Changing the Oracle Grid Infrastructure Home Path ............................................................. 12-4
12.4 Unconfiguring Oracle Clusterware Without Removing the Software.................................. 12-5
12.5 Removing Oracle Clusterware and Oracle ASM Software..................................................... 12-6
12.5.1 About the Deinstallation Tool ......................................................................................... 12-7
12.5.2 Files Deleted by the Deinstallation Tool ........................................................................ 12-7
12.5.3 Deinstallation Tool Command Reference...................................................................... 12-8
12.5.4 Using the Deinstallation Tool to Remove Oracle Clusterware and Oracle ASM .. 12-11
A Installing and Configuring Oracle Grid Infrastructure Using Response Files
A.1 How Response Files Work ............................................................................................................. A-1
A.1.1 Deciding to Use Silent Mode or Response File Mode..................................................... A-2
A.1.2 Using Response Files ........................................................................................................... A-3
A.2 Preparing Response Files................................................................................................................ A-3
A.2.1 About Response File Templates......................................................................................... A-4
A.2.2 Editing a Response File Template...................................................................................... A-5
A.2.3 Recording Response Files.................................................................................................... A-5
A.3 Running Oracle Universal Installer Using a Response File....................................................... A-6
A.4 Running Oracle Net Configuration Assistant Using Response Files....................................... A-7
A.5 Postinstallation Configuration Using Response File Created During Installation ................ A-8
A.5.1 Using the Installation Response File for Postinstallation Configuration ..................... A-8
A.5.2 Running Postinstallation Configuration Using Response File .................................... A-10
A.6 Postinstallation Configuration Using the ConfigToolAllCommands Script ........................ A-10
A.6.1 About the Postinstallation Configuration File ............................................................... A-11
A.6.2 Creating a Password Response File................................................................................. A-12
A.6.3 Running Postinstallation Configuration Using a Script and Response Files ............ A-13
B Optimal Flexible Architecture
B.1 About the Optimal Flexible Architecture Standard .................................................................... B-1
B.2 About Multiple Oracle Homes Support........................................................................................ B-2
B.3 Oracle Base Directory Naming Convention ................................................................................. B-3
B.4 Oracle Home Directory Naming Convention .............................................................................. B-3
B.5 Optimal Flexible Architecture File Path Examples ..................................................................... B-4
Index
List of Examples
8-1 oranfstab File Using Local and Path NFS Server Entries ...................................................... 8-4
8-2 oranfstab File Using Network Names in Place of IP Addresses, with Multiple Exports, management and community .............................................................................................. 8-4
8-3 oranfstab File Using Kerberos Authentication with Direct NFS Export............................. 8-4
9-1 Sample Cluster Configuration File............................................................................................ 9-7
9-2 Checking the Status of Oracle Clusterware........................................................................... 9-12
10-1 Using CLUVFY to Confirm DNS is Correctly Associating the SCAN Addresses......... 10-10
12-1 Running deinstall.bat From Within the Oracle Home....................................................... 12-12
12-2 Running the Deinstallation Tool from the Software Installation Media......................... 12-12
A-1 Response File Passwords for Oracle Grid Infrastructure...................................................... A-9
A-2 Response File Passwords for Oracle Grid Infrastructure for a Standalone Server (Oracle Restart)...................................................................................................................... A-9
A-3 Response File Passwords for Oracle Database....................................................................... A-9
A-4 Sample Password response file for Oracle Grid Infrastructure Installation.................... A-12
A-5 Sample Password response file for Oracle Real Application Clusters.............................. A-12
List of Tables
1-1 Server Hardware Checklist for Oracle Grid Infrastructure................................................... 1-2
1-2 Operating System General Checklist for Oracle Grid Infrastructure and Oracle RAC.... 1-2
1-3 Server Configuration Checklist for Oracle Grid Infrastructure............................................ 1-3
1-4 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC................ 1-4
1-5 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC................... 1-6
1-6 Oracle Grid Infrastructure Storage Configuration Checks.................................................... 1-7
1-7 Oracle Universal Installer Checklist for Oracle Grid Infrastructure Installation............... 1-8
2-1 Minimum Hardware and Memory Requirements for Oracle Grid Infrastructure Installation............................................................................................................................... 2-5
3-1 Oracle Grid Software Requirements for Windows Systems................................................. 3-4
4-1 Example of a Grid Naming Service Network Configuration............................................. 4-19
4-2 Manual Network Configuration Example............................................................................. 4-21
5-1 Operating System Groups Created During Installation...................................................... 5-13
6-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories .............................................................................................................................. 6-2
7-1 Oracle ASM Disk Space Requirements for a Multitenant Container Database (CDB) with One Pluggable Database (PDB).................................................................................. 7-6
7-2 Oracle ASM Disk Space Requirements for Oracle Database (non-CDB)............................ 7-6
7-3 Minimum Space Requirements for Oracle Standalone Cluster............................................ 7-7
8-1 Configurable Attributes for the oranfstab File........................................................................ 8-5
9-1 Oracle ASM Disk Group Redundancy Levels for Oracle Extended Clusters..................... 9-4
10-1 Oracle Executables Used to Access Non-Oracle Software.................................................. 10-5
10-2 Other Oracle Software Products Requiring Windows Firewall Exceptions..................... 10-6
12-1 Options for the Deinstallation Tool........................................................................................ 12-9
A-1 Reasons for Using Silent Mode or Response File Mode........................................................ A-3
A-2 Response Files for Oracle Database and Oracle Grid Infrastructure................................... A-4
B-1 Examples of OFA-Compliant Oracle Base Directory Names............................................... B-3
B-2 Optimal Flexible Architecture Hierarchical File Path Examples......................................... B-4
Preface
This guide explains how to configure a server in preparation for installing and
configuring an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle
Automatic Storage Management).
This guide also explains how to configure a server and storage in preparation for an
Oracle Real Application Clusters (Oracle RAC) installation.
Audience (page xv)
Documentation Accessibility (page xv)
Java Access Bridge and Accessibility (page xvi)
Related Documentation (page xvi)
Conventions (page xvii)
Audience
Oracle Grid Infrastructure Installation Guide for Microsoft Windows x64 (64-Bit) provides
configuration information for network and system administrators, and database
installation information for database administrators (DBAs) who install and configure
Oracle Clusterware and Oracle Automatic Storage Management in an Oracle Grid
Infrastructure for a cluster installation.
For customers with specialized system roles who intend to install Oracle Real
Application Clusters (Oracle RAC), this book is intended to be used by system
administrators, network administrators, or storage administrators to configure a
system in preparation for an Oracle Grid Infrastructure for a cluster installation, and
complete all configuration tasks that require Administrator user privileges. When the
Oracle Grid Infrastructure installation and configuration is successfully completed, a
system administrator should only provide configuration information and grant access
to the database administrator to run scripts that require Administrator user privileges
during an Oracle RAC installation.
This guide assumes that you are familiar with Oracle Database concepts. For
additional information, refer to books in the Related Documents list.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?
ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support
through My Oracle Support. For information, visit http://www.oracle.com/pls/
topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?
ctx=acc&id=trs if you are hearing impaired.
Java Access Bridge and Accessibility
Java Access Bridge enables assistive technologies to read Java applications running on
the Windows platform. Assistive technologies can read Java-based interfaces, such as
Oracle Universal Installer and Oracle Enterprise Manager Database Express.
See Also: Oracle Database Installation Guide for Microsoft Windows for more
information about installing Java Access Bridge
Related Documentation
For more information, refer to the following Oracle resources.
Oracle Clusterware and Oracle Real Application Clusters Documentation
This installation guide reviews steps required to complete an Oracle Clusterware and
Oracle Automatic Storage Management installation, and to perform preinstallation
steps for Oracle RAC.
Installing Oracle RAC or Oracle RAC One Node is a multi-step process. Perform the
following steps by reviewing the appropriate documentation:
Installation Task: Preinstallation setup for Oracle Grid Infrastructure and Oracle RAC
Documentation to Use: Oracle Grid Infrastructure Installation Guide for Microsoft Windows x64 (64-Bit) (this guide)

Installation Task: Install Oracle Grid Infrastructure and configure Oracle Clusterware and Oracle ASM
Documentation to Use: Oracle Grid Infrastructure Installation Guide for Microsoft Windows x64 (64-Bit) (this guide)

Installation Task: Install Oracle Database software and configure Oracle RAC
Documentation to Use: Oracle Real Application Clusters Installation Guide for Microsoft Windows x64 (64-Bit)
You can install the Oracle Database software for standalone databases or Oracle RAC
databases on servers that run Oracle Grid Infrastructure. You cannot install Oracle
Restart on servers that run Oracle Grid Infrastructure. To install an Oracle Restart
deployment of Oracle Grid Infrastructure, see Oracle Database Installation Guide.
Installation Guides for Microsoft Windows
• Oracle Database Installation Guide for Microsoft Windows x64 (64-Bit)
• Oracle Real Application Clusters Installation Guide for Microsoft Windows x64 (64-Bit)
• Oracle Database Client Installation Guide for Microsoft Windows
Operating System-Specific Administrative Guides
• Oracle Database Platform Guide for Microsoft Windows
Oracle Clusterware and Oracle Automatic Storage Management Administrative Guides
• Oracle Clusterware Administration and Deployment Guide
• Oracle Automatic Storage Management Administrator's Guide
Oracle Real Application Clusters Administrative Guides
• Oracle Real Application Clusters Administration and Deployment Guide
• Oracle Database 2 Day + Real Application Clusters Guide
Generic Documentation
• Oracle Database 2 Day DBA
• Oracle Database Administrator’s Guide
• Oracle Database Concepts
• Oracle Database New Features Guide
• Oracle Database Reference
• Oracle Database Global Data Services Concepts and Administration Guide
To download free release notes, installation documentation, white papers, or other
collateral, visit Oracle Technology Network (OTN). You must register online before
using OTN; registration is free and can be done at the following website:
http://www.oracle.com/technetwork/community/join/overview/
You can access the online documentation in the Oracle Help Center at:
http://docs.oracle.com
Most Oracle error message documentation is only available in HTML format. If you
only have access to the Oracle Documentation, then browse the error messages by
range. When you find the correct range of error messages, use your browser's Find
feature to locate a specific message. When connected to the Internet, you can search for
a specific error message using the error message search feature of the Oracle online
documentation.
Conventions
These text conventions are used in this document:
Convention     Meaning
boldface       Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic         Italic type indicates book titles, emphasis, or placeholder variables for which you supply values.
monospace      Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Changes in this Release for Oracle
Clusterware
A list of changes in Oracle Grid Infrastructure Installation Guide for Windows.
Changes in Oracle Clusterware 12c Release 2 (12.2) (page xix)
Changes in Oracle Clusterware 12c Release 1 (12.1.0.2) (page xxii)
Changes in Oracle Clusterware 12c Release 1 (12.1) (page xxiii)
Changes in Oracle Clusterware 12c Release 2 (12.2)
The following are changes in Oracle Grid Infrastructure Installation Guide for Oracle
Clusterware 12c Release 2 (12.2):
New Features for Oracle Grid Infrastructure 12c Release 2 (12.2) (page xix)
Deprecated Features for Oracle Clusterware 12c Release 2 (12.2) (page xxi)
Desupported Features (page xxi)
New Features for Oracle Grid Infrastructure 12c Release 2 (12.2)
• Direct NFS Dispatcher Support
Starting with Oracle Grid Infrastructure 12c release 2 (12.2), Oracle Direct NFS
Client supports adding a dispatcher or I/O slave infrastructure. For very large
database deployments running Oracle Direct NFS Client, this feature facilitates
scaling of sockets and TCP connections to multi-path and clustered NFS storage.
See Configuring Direct NFS Client for Oracle RAC Data Files (page 8-1)
• Kerberos Authentication for Direct NFS
Oracle Database now supports Kerberos implementation with Direct NFS
communication. This feature solves the problem of authentication, message
integrity, and optional encryption over unsecured networks for data exchange
between Oracle Database and NFS servers using Direct NFS protocols.
See Configuring Direct NFS Client for Oracle RAC Data Files (page 8-1).
• Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) Improvements
The following new features have been added to Oracle ACFS and Oracle ADVM:
Snapshot Enhancements, Snapshot-Based Replication, Oracle ACFS Compression,
Oracle ACFS Defragger, Support for 4K Sectors, Automatic Resize, Metadata
Acceleration, NAS Maximum Availability eXtensions, Plugins for File Content,
Sparse Files, Scrubbing Functionality, Loopback Functionality, and Diagnostic
Commands.
• Oracle Flex Clusters
In Oracle Clusterware 12c release 2 (12.2), all clusters are configured as Oracle Flex
Clusters, meaning that a cluster is configured with one or more Hub Nodes and
can support a large number of Leaf Nodes. Clusters currently configured under
older versions of Oracle Clusterware are converted in place as part of the upgrade
process, including the activation of Oracle Flex ASM (which is a requirement for
Oracle Flex Clusters).
See Oracle Clusterware Administration and Deployment Guide.
• Parallel NFS Support in Oracle Direct NFS Client
Starting with Oracle Grid Infrastructure 12c release 2 (12.2), Oracle Direct NFS
Client supports parallel NFS. Parallel NFS is an NFS v4.1 option that allows direct
client access to file servers, enabling scalable distributed storage.
See Configuring Direct NFS Client for Oracle RAC Data Files (page 8-1)
• Postinstallation Configuration of Oracle Software using the executeConfigTools option
Starting with Oracle Database 12c release 2 (12.2), you can perform
postinstallation configuration of Oracle products by running the Oracle Database
or Oracle Grid Infrastructure installer with the -executeConfigTools option.
You can use the same response file created during installation to complete
postinstallation configuration.
• SCAN Listener Supports HTTP Protocol
Starting with Oracle Database 12c release 2 (12.2), SCAN listener enables
connections for the recovery server coming over HTTP to be redirected to
different machines based on the load on the recovery server machines.
See Oracle Clusterware Administration and Deployment Guide.
• Separation of Duty for Administering Oracle Real Application Clusters
Starting with Oracle Database 12c release 2 (12.2), Oracle Database provides
support for separation of duty best practices when administering Oracle Real
Application Clusters (Oracle RAC) by introducing the SYSRAC administrative
privilege for the clusterware agent. This feature removes the need to use the
powerful SYSDBA administrative privilege for Oracle RAC.
SYSRAC, like SYSDG, SYSBACKUP and SYSKM, helps enforce separation of
duties and reduce reliance on the use of SYSDBA on production systems. This
administrative privilege is the default mode for connecting to the database by the
clusterware agent on behalf of the Oracle RAC utilities such as srvctl.
• Shared Grid Naming Service (GNS) High Availability
Shared GNS High Availability provides high availability of lookup and other
services to the clients by running multiple instances of GNS with primary and
secondary roles.
• Support for IPv6 Based IP Addresses for the Oracle Cluster Interconnect
Starting with Oracle Grid Infrastructure 12c release 2 (12.2), you can use either
IPv4 or IPv6 based IP addresses to configure cluster nodes on the private network.
You can use more than one private network for the cluster.
• Support for Oracle Extended Clusters
Starting with Oracle Grid Infrastructure 12c release 2 (12.2), Oracle Grid
Infrastructure installer supports the option of configuring cluster nodes in
different locations as an Oracle Extended Cluster. An Oracle Extended Cluster
consists of HUB nodes that are located in multiple locations called sites.
• Support for UDP on Windows Operating System
Starting with Oracle Database 12c release 2 (12.2), the User Datagram Protocol
(UDP) is supported on Windows. The UDP protocol allows for larger clusters.
• Support for Windows Group Managed Service Accounts
Starting with Oracle Database 12c release 2 (12.2), support for Group Managed Service Accounts (gMSA) for installing Oracle Grid Infrastructure and Oracle
RAC provides additional options to create and manage database services without
passwords. The gMSA is a domain level account that can be used by multiple
servers in a domain to run the services using this account.
• Zip Image based Grid Infrastructure Installation
Starting with Oracle Grid Infrastructure 12c release 2 (12.2), the installation media is replaced with a zip file for the Oracle Grid Infrastructure installer. Run the installation wizard after extracting the zip file into the target home path, as shown in the sketch following this list.
See Installing Oracle Grid Infrastructure for a New Cluster (page 9-4)
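For example, here is a minimal sketch of the image-based installation flow on Windows. The image file name (grid_home.zip) and target Grid home path (C:\app\12.2.0\grid) are illustrative assumptions; substitute your own locations:

    C:\> mkdir C:\app\12.2.0\grid
    C:\> cd C:\app\12.2.0\grid
    REM Extract grid_home.zip into C:\app\12.2.0\grid using Windows Explorer or a zip utility,
    REM then start the installation wizard from the Grid home:
    C:\app\12.2.0\grid> gridSetup.bat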
Deprecated Features for Oracle Clusterware 12c Release 2 (12.2)
The following feature is deprecated in this release, and may be desupported in a
future release.
See Oracle Database Upgrade Guide for a complete list of deprecated features in this
release.
• Deprecation of configToolAllCommands script
The configToolAllCommands script runs in the response file mode to configure
Oracle products after installation and uses a separate password response file.
Starting with Oracle Database 12c Release 2 (12.2), the
configToolAllCommands script is deprecated and is subject to desupport in a
future release.
To perform postinstallation configuration of Oracle products, you can now run the Oracle Database or Oracle Grid Infrastructure installer with the -executeConfigTools option, as shown in the example that follows. You can use the same response file created during installation to complete postinstallation configuration.
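For reference, a minimal command sketch of this postinstallation step on Windows, assuming a Grid home of C:\app\12.2.0\grid and a response file saved during installation at C:\app\12.2.0\grid\install\response\gridsetup.rsp (both paths are illustrative):

    C:\app\12.2.0\grid> gridSetup.bat -executeConfigTools -responseFile C:\app\12.2.0\grid\install\response\gridsetup.rsp -silent

The -silent option runs the configuration tools without displaying the installer screens; omit it to step through them interactively.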
Desupported Features
The following feature is desupported in this release. See Oracle Database Upgrade Guide
for a complete list of features desupported in this release.
• Desupport of Direct File System Placement for Oracle Cluster Registry (OCR) and Voting Files
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the placement of
Oracle Clusterware files (the Oracle Cluster Registry (OCR), and the Voting Files)
directly on a shared file system is desupported. The Oracle Clusterware files are
now managed by Oracle Automatic Storage Management (Oracle ASM). You
cannot place Oracle Clusterware files directly on a shared file system. If you need
to use a supported shared file system, either a Network File System, or a shared
cluster file system instead of native disk devices, then you must create Oracle
ASM disks on the shared file systems that you plan to use for hosting Oracle
Clusterware files before installing Oracle Grid Infrastructure. You can then use
the Oracle ASM disks in an Oracle ASM disk group to manage Oracle Clusterware
files.
Changes in Oracle Clusterware 12c Release 1 (12.1.0.2)
The following are changes in Oracle Grid Infrastructure Installation Guide for Oracle
Clusterware 12c Release 1 (12.1.0.2):
New Features for Oracle Clusterware 12.1.0.2 (page xxii)
Other Changes for Oracle Clusterware 12.1.0.2 (page xxii)
New Features for Oracle Clusterware 12.1.0.2
• Automatic Installation of Grid Infrastructure Management Repository
The Grid Infrastructure Management Repository is automatically installed with
Oracle Grid Infrastructure 12c Release 1 (12.1.0.2).
• Cluster and Oracle RAC Diagnosability Tools Enhancements
Oracle Clusterware uses Oracle Database fault diagnosability infrastructure to
manage diagnostic data and its alert log. As a result, most diagnostic data resides
in the Automatic Diagnostic Repository (ADR), a collection of directories and files
located under a base directory that you specify during installation.
Note: The Oracle Trace File Analyzer (TFA) Collector is not supported on
Windows operating systems.
• IPv6 Support for Public Networks
Oracle Clusterware 12c Release 1 (12.1) supports IPv6-based public IP and VIP
addresses.
IPv6-based IP addresses have become the latest standard for the information
technology infrastructure in today's data centers. With this release, Oracle RAC
and Oracle Grid Infrastructure support this standard. You can configure cluster
nodes during installation with either IPv4 or IPv6 addresses on the same network.
Database clients can connect to either IPv4 or IPv6 addresses. The Single Client
Access Name (SCAN) listener automatically redirects client connection requests to
the appropriate database listener for the IP protocol of the client request.
Other Changes for Oracle Clusterware 12.1.0.2
• Windows Server 2012 and Windows Server 2012 R2 are supported with this release
Changes in Oracle Clusterware 12c Release 1 (12.1)
The following are changes in Oracle Grid Infrastructure Installation Guide for Oracle
Clusterware 12c Release 1 (12.1):
New Features for Oracle Clusterware 12c Release 1 (12.1) (page xxiii)
Deprecated Features for Oracle Clusterware 12c Release 1 (12.1) (page xxv)
Desupported Features for Oracle Clusterware 12c Release 1 (12.1) (page xxv)
Other Changes for Oracle Clusterware 12c Release 1 (12.1) (page xxv)
New Features for Oracle Clusterware 12c Release 1 (12.1)
• Oracle ASM File Access Control Enhancements on Windows
You can now use access control to separate roles in Windows environments. With
Oracle Database services running with the privileges of an Oracle home user
rather than Local System, the Oracle ASM access control feature is enabled to
support role separation on Windows. In earlier releases, this feature was disabled
on Windows because all Oracle services run as Local System.
You can change the identity of an Oracle ASM user from one operating system user to another operating system user without having to drop and re-create the user (an operation that requires dropping all the files a user owns). This improves the manageability of Oracle ASM users and the files they own.
You can modify Windows file access controls while files are open using
ASMCMD file access control commands, such as chgrp, chmod, and chown.
See Oracle Automatic Storage Management Administrator's Guide and Oracle Database
Platform Guide for Microsoft Windows.
• Cluster Health Monitor Enhancements
Cluster Health Monitor (CHM) has been enhanced to provide a highly available
server monitor service that provides improved detection of operating system and
cluster resource-related degradation and failures.
See Oracle Clusterware Administration and Deployment Guide.
• Support for Storing the Oracle Cluster Registry Backup in an Oracle ASM Disk Group
The Oracle Cluster Registry (OCR) backup mechanism enables storing the OCR
backup in an Oracle ASM disk group. Storing the OCR backup in an Oracle ASM
disk group simplifies OCR management by permitting access to the OCR backup
from any node in the cluster should an OCR recovery become necessary.
•
Oracle Grid Infrastructure Rolling Migration for One-Off Patches
Oracle Grid Infrastructure one-off patch rolling migration and upgrade for Oracle
Automatic Storage Management (Oracle ASM) and Oracle Clusterware enables
you to independently upgrade or patch clustered Oracle Grid Infrastructure
nodes with one-off patches, without affecting database availability. This feature
provides greater uptime and patching flexibility. This release also introduces a
new cluster state, "Rolling Patch." Operations allowed in a patch quiesce state are
similar to the existing "Rolling Upgrade" cluster state.
See Oracle Automatic Storage Management Administrator's Guide.
•
Oracle Home User Support for Oracle Database and Oracle Grid Infrastructure
Starting with Oracle Database 12c release 1 (12.1), Oracle Database supports the
use of an Oracle Home User, which can be specified at installation time. The
Oracle Home User can be a Built-in Account or a Windows Domain User Account.
If you specify a Windows User Account, then the user should be a standard (non-Administrator) account to ensure that the Oracle Home User has a limited set of
privileges. Using an Oracle Home User ensures that Oracle Database services
have only those privileges required to run Oracle products.
See Oracle Database Platform Guide for Microsoft Windows
•
Oracle RAC Hang Detection and Node Eviction Protection
Slowdowns in the cluster, mainly affecting the Oracle Database instances, are
detected and classified as a hung process or instance and removed accordingly to
prevent unnecessary node evictions as a consequence. While the occasional
eviction of a node in an Oracle RAC cluster is mostly transparent to the
application, this feature minimizes its occurrence. Also, the Global Conflict
Resolution (GCR) process has been enhanced to provide better detection and
avoidance of issues causing node evictions.
•
Policy-Based Cluster Management and Administration
Oracle Grid Infrastructure allows running multiple applications in one cluster.
Using a policy-based approach, the workload introduced by these applications
can be allocated across the cluster using a policy. In addition, a policy set enables
different policies to be applied to the cluster over time as required. Policy sets can
be defined using a web-based interface or a command-line interface.
Hosting various workloads in the same cluster helps to consolidate the workloads
into a shared infrastructure that provides high availability and scalability. Using a
centralized policy-based approach allows for dynamic resource reallocation and
prioritization as the demand changes.
See Oracle Clusterware Administration and Deployment Guide.
•
Shared Grid Naming Service (GNS) Across Multiple Clusters
In earlier releases, the Grid Naming Service (GNS) was dedicated to one Oracle
Grid Infrastructure-based cluster, providing name resolution only for its own
cluster member nodes. With this release, one Oracle GNS can now manage just
the cluster member nodes in its own cluster, or GNS can provide naming
resolution for all nodes across all clusters in the data center that are delegated to
Oracle GNS for resolution.
Using only one Oracle GNS for all nodes that are part of an Oracle Grid
Infrastructure cluster in the data center not only streamlines the naming
convention, but also enables a data center cloud, minimizing day-to-day
administration efforts.
•
Support for Separation of Database Administration Duties
Oracle Database 12c release 1 (12.1) provides support for separation of
administrative duties for Oracle Database by introducing task-specific and least-privileged administrative privileges that do not require the SYSDBA system
privilege. These new system privileges are: SYSBACKUP for backup and
recovery, SYSDG for Oracle Data Guard, and SYSKM for encryption key
management.
See "Managing Administrative Privileges" in Oracle Database Security
Guide and "Extended Oracle Database Administration Groups for Job Role
Separation (page 5-12)"
•
Updates to Oracle ASM File Access Control Commands and Open Files Support
This feature enables the modification of Windows file access controls while files
are open. It supports updates to ASMCMD file access control
commands, such as chgrp, chmod, and chown.
See Oracle Automatic Storage Management Administrator's Guide.
Deprecated Features for Oracle Clusterware 12c Release 1 (12.1)
The following features are deprecated in this release, and may be desupported in a
future release:
•
Standalone Deinstallation Tool
The deinstallation tool is now integrated with the installation media.
•
The -cleanupOBase option of the deinstallation tool
The -cleanupOBase option is deprecated in this release. There is no replacement
for this option.
Desupported Features for Oracle Clusterware 12c Release 1 (12.1)
The following features are no longer supported by Oracle:
•
Direct use of raw devices with Oracle Clusterware and Oracle Database
•
Oracle Cluster File System (OCFS) for Windows
•
Oracle Objects for OLE
•
Oracle Enterprise Manager Database Control
For a complete list of desupported features, see:
Oracle Database Upgrade Guide
Other Changes for Oracle Clusterware 12c Release 1 (12.1)
•
Windows Server 2003 and Windows Server 2003 R2 are not supported with this
release
•
Document Structure Changes
This book is redesigned to provide an installation checklist to assist with
preparing for installation, and chapters that subdivide preinstallation tasks into
category topics.
1 Oracle Grid Infrastructure Installation Checklist
Use checklists to plan and carry out Oracle Grid Infrastructure (Oracle Clusterware
and Oracle Automatic Storage Management) installation.
Oracle recommends that you use checklists as part of your installation planning
process. Using this checklist can help you to confirm that your server hardware and
configuration meet minimum requirements for this release, and can help you to ensure
you carry out a successful installation.
Oracle Grid Infrastructure Installation Server Hardware Checklist (page 1-1)
Review server hardware requirements for Oracle Grid Infrastructure
installation.
Operating System Checklist for Oracle Grid Infrastructure and Oracle RAC
(page 1-2)
Review the following operating system checklist for all installations.
Server Configuration Checklist for Oracle Grid Infrastructure (page 1-3)
Use this checklist to check minimum server configuration requirements
for Oracle Grid Infrastructure installations.
Oracle Grid Infrastructure Network Checklist (page 1-4)
Review all installations to ensure that you have the required hardware,
names, and addresses for the cluster.
User Environment Configuration Checklist for Oracle Grid Infrastructure
(page 1-6)
Review the following environment checklist for all installations.
Storage Checklist for Oracle Grid Infrastructure (page 1-7)
Review the checklist for storage hardware and configuration
requirements for Oracle Grid Infrastructure installation.
Installer Planning Checklist for Oracle Grid Infrastructure (page 1-8)
Review the checklist for planning your Oracle Grid Infrastructure
installation before starting Oracle Universal Installer.
1.1 Oracle Grid Infrastructure Installation Server Hardware Checklist
Review server hardware requirements for Oracle Grid Infrastructure installation.
Table 1-1 Server Hardware Checklist for Oracle Grid Infrastructure

Server make and architecture: Confirm that server makes, models, core architecture, and host bus adaptors (HBA) are supported to run with Oracle Grid Infrastructure and Oracle RAC.

Server Display Cards: At least 1024 x 768 display resolution for Oracle Universal Installer. Confirm display monitor.

Minimum Random Access Memory (RAM): At least 4 GB RAM for Oracle Grid Infrastructure installations, including installations where you plan to install Oracle RAC.

Virtual Memory configuration: Oracle recommends that you set the paging file size to match the amount of RAM available on the server, up to a maximum of 16 GB.

Intelligent Platform Management Interface (IPMI) Configuration:
• IPMI cards installed and configured, with IPMI administrator account information available to the person running the installation.
• Ensure baseboard management controller (BMC) interfaces are configured, and have an administration account user name and password to provide when prompted during installation.
Related Topics:
Configuring Servers for Oracle Grid Infrastructure and Oracle RAC (page 2-1)
You must complete certain operating system tasks on your servers
before you install Oracle Grid Infrastructure for a Cluster and Oracle
Real Application Clusters.
1.2 Operating System Checklist for Oracle Grid Infrastructure and Oracle
RAC
Review the following operating system checklist for all installations.
Table 1-2 Operating System General Checklist for Oracle Grid Infrastructure and Oracle RAC

Review operating system security standards: Secure operating systems are an important basis for general system security.

Verify Privileges for Copying Files in the Cluster: During installation, OUI copies the software from the local node to the remote nodes in the cluster. The installation user must have Administrator privileges on the other nodes in the cluster to copy the files.

Configure Remote Desktop Services: If you want to enable remote display of the installation process, then configure remote access for necessary user accounts.

Disable Windows Firewall: If the Windows Firewall is enabled, then remote copy and configuration assistants such as Virtual IP Configuration Assistant (VIPCA), Network Configuration Assistant (NETCA), and Oracle Database Configuration Assistant (DBCA) will fail during Oracle RAC installation.

Operating system general requirements: Review the system requirements section for a list of minimum software component versions. Use the same operating system version on each cluster member node:
• Windows Server 2012 x64 - Standard, Datacenter, Essentials, and Foundation editions
• Windows Server 2012 R2 x64 - Standard, Datacenter, Essentials, and Foundation editions

Ensure Web browsers are supported by Enterprise Manager: Web browsers must support JavaScript, and the HTML 4.0 and CSS 1.0 standards.
1.3 Server Configuration Checklist for Oracle Grid Infrastructure
Use this checklist to check minimum server configuration requirements for Oracle
Grid Infrastructure installations.
Table 1-3 Server Configuration Checklist for Oracle Grid Infrastructure

Disk space allocated to the temporary file system: At least 1 GB of space in the temporary directory. Oracle recommends 2 GB or more.

Storage hardware for Oracle software: Attached storage hardware can be either Storage Area Network (SAN) storage or Network-Attached Storage (NAS).

Ensure that the Oracle home (the Oracle home path you select for Oracle Database) uses only ASCII characters: The ASCII character restriction includes installation owner user names, which are used as a default for some home paths, as well as other directory names you may select for paths. Additionally, the file path names cannot contain spaces.

Set locale (if needed): Specify the language and the territory, or locale, in which you want to use Oracle components. A locale is a linguistic and cultural environment in which a system or program is running. NLS (National Language Support) parameters determine the locale-specific behavior on both servers and clients. The locale setting of a component determines the language of the user interface of the component, and the globalization behavior, such as date and number formatting.

Set Network Time Protocol for Cluster Time Synchronization: Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. Ensure that you set the time zone synchronization across all cluster nodes using either an operating system configured network time protocol (NTP), Oracle Cluster Time Synchronization Service, or Windows Time Service.
1.4 Oracle Grid Infrastructure Network Checklist
Review all installations to ensure that you have the required hardware, names, and
addresses for the cluster.
During installation, you designate interfaces for use as public, private, or Oracle ASM
interfaces. You can also designate interfaces that are in use for other purposes, and not
available for Oracle Grid Infrastructure use.
Table 1-4 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC

Public Network Hardware:
• Public network switch (redundant switches recommended) connected to a public gateway and to the public interface ports for each cluster member node.
• Ethernet interface card.
• Redundant network cards recommended; use NIC teaming so they appear as one Ethernet port name.
• The switches and network interface cards must be at least 1 GbE.
• The network protocol is TCP/IP.

Private Network Hardware for the Interconnect:
• Private dedicated network switch (redundant switches recommended, with NIC teaming so they appear as one Ethernet port name), connected to the private interface ports for each cluster member node.
• The switches and network interface adapters must be at least 1 GbE, with 10 GbE recommended. Alternatively, use InfiniBand for the interconnect.

Perform Windows-specific network configuration tasks:
• Disable the Media Sensing Feature for TCP/IP. Media Sense allows Windows to uncouple an IP address from a network interface card when the link to the local switch is lost.
• Configure the Network Binding Order and Protocol Priorities. Ensure that your public network adapter is first in the binding order, and the private network adapter is second.
• Deselect Automatic Registration with DNS for the Public Network Interface. To prevent Windows Server from potentially registering the wrong IP addresses for the node in DNS after a server restart, you must deconfigure the "Register this connection's addresses in DNS" option for the public network adapters.
• Manually configure Automatic Metric values. The Automatic Metric feature automatically configures the metric for the local routes that are based on link speed. If you use the default values for this feature, OUI sometimes selects the private network interface as the default public host name for the server when installing Oracle Grid Infrastructure.

Oracle Flex ASM Network Hardware: Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its own dedicated private networks. Each network can be classified PUBLIC or PRIVATE+ASM or PRIVATE or ASM. Oracle ASM networks use the TCP protocol.

Cluster Names and Addresses: Determine and configure the following names and addresses for the cluster:
• Cluster name: Decide a name for the cluster, and be prepared to enter it during installation. The cluster name should have the following characteristics:
  – Globally unique across all hosts, even across different DNS domains.
  – Must be between 1 and 15 characters in length.
  – Consist of the same character set used for host names, in accordance with RFC 1123: hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9).
• Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS name and fixed address on the DNS for the GNS VIP, and configure a subdomain on your DNS delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is mandatory with dynamic public networks (DHCP, autoconfiguration).
• Single Client Access Name (SCAN) and addresses:
  – Using Grid Naming Service Resolution: Do not configure SCAN names and addresses in your DNS. SCANs are managed by GNS.
  – Using Manual Configuration and DNS resolution: Configure a SCAN name to resolve to three addresses on the domain name service (DNS).

Hub Node Public, Private and Virtual IP Names and Addresses: If you are configuring a cluster with Grid Naming Service (GNS), then OUI displays the public and virtual host name addresses labeled as "AUTO" because they are configured automatically. To use this option, you must have configured a subdomain on your DNS that is delegated to GNS for resolution, and you must have a fixed GNS VIP address where the delegated service requests can be routed.
If you are not using GNS, then configure the following for each Hub node:
• Public node name and address, configured on the DNS and in the hosts file (for example, node1.example.com, address 192.0.2.10). The public node name should be the primary host name of each node, which is the name displayed by the hostname command.
• Private node address, configured on the private interface for each node. The installer identifies addresses in the private range as private IP addresses by default. For example: 10.0.0.10. The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members. Oracle recommends that the network you select for the private network uses an address range defined as private by RFC 1918.
• Public node virtual IP name and address (for example, node1-vip.example.com, address 192.0.2.11). If you are not using dynamic networks with GNS and subdomain delegation, then determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
See Also: Network Configuration Tasks for Windows Server Deployments
(page 4-9)
1.5 User Environment Configuration Checklist for Oracle Grid
Infrastructure
Review the following environment checklist for all installations.
Table 1-5 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC

Create operating system groups and users for standard or role-allocated system privileges: Create operating system groups and users depending on your security requirements, as described in this install guide. Oracle Installation users have different requirements from Oracle Home users. User names must use only ASCII characters.

Configure the Oracle Software Owner Environment: Set the TEMP environment variable.

Unset Oracle software environment variables: If you have set ORA_CRS_HOME as an environment variable, then unset it before starting an installation or upgrade. Do not use ORA_CRS_HOME as a user environment variable.

Manage User Account Control: If you have enabled the User Account Control security feature, then Oracle Universal Installer prompts you for either your consent or your credentials when installing Oracle Database software. Provide either the consent or your Windows Administrator credentials as appropriate.
If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the ORACLE_HOME, ORACLE_BASE, ORACLE_SID, and TNS_ADMIN environment variables. Unset any other environment variable set for the Oracle installation user that is connected with Oracle software homes, such as ORA_NLS10.
Related Topics:
Configuring Users, Groups and Environments for Oracle Grid Infrastructure and
Oracle RAC (page 5-1)
You must configure certain users, groups, and environment settings
used during Oracle Grid Infrastructure for a Cluster and Oracle Real
Application Clusters installations.
1.6 Storage Checklist for Oracle Grid Infrastructure
Review the checklist for storage hardware and configuration requirements for Oracle
Grid Infrastructure installation.
Table 1-6 Oracle Grid Infrastructure Storage Configuration Checks

Minimum disk space (local or shared) for Oracle Grid Infrastructure Software:
• At least 12 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches.
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• At least 7.5 GB of space for Oracle Database Enterprise Edition.
• Allocate additional storage space as per your cluster configuration, as described in Oracle Clusterware Storage Space Requirements.

Select Oracle ASM Storage Options: During installation, based on the cluster configuration, you are asked to provide Oracle ASM storage paths for the Oracle Clusterware files. These path locations must be writable by the Oracle Grid Infrastructure installation owner (Grid user). These locations must be shared across all nodes of the cluster on Oracle ASM because the files in the Oracle ASM disk group created during installation must be available to all cluster member nodes.
• For Oracle Standalone Cluster deployment, shared storage, either Oracle ASM or Oracle ASM on NFS, is locally mounted on each of the Hub Nodes.
Voting files are files that Oracle Clusterware uses to verify cluster node membership and status. Oracle Cluster Registry files (OCR) contain cluster and database configuration information for Oracle Clusterware.

Select Grid Infrastructure Management Repository (GIMR) Storage Option: For Oracle Standalone Cluster deployment, you can specify the same or separate Oracle ASM disk group for the GIMR.
Related Topics:
Oracle Clusterware Storage Space Requirements (page 7-6)
Use this information to determine the minimum number of disks and the
minimum disk space requirements based on the redundancy type, for
installing Oracle Clusterware files, and installing the starter database, for
various Oracle Cluster deployments.
1.7 Installer Planning Checklist for Oracle Grid Infrastructure
Review the checklist for planning your Oracle Grid Infrastructure installation before
starting Oracle Universal Installer.
Table 1-7 Oracle Universal Installer Checklist for Oracle Grid Infrastructure Installation

Read the Release Notes: Review release notes for your platform, which are available for your release at the following URL:
http://www.oracle.com/technetwork/indexes/documentation/index.html

Review the Licensing Information: You are permitted to use only those components in the Oracle Database media pack for which you have purchased licenses. Refer to Oracle Database Licensing Information for more information about licenses.

Run OUI with CVU and use fixup scripts: Oracle Universal Installer is fully integrated with Cluster Verification Utility (CVU), automating many CVU prerequisite checks. Oracle Universal Installer runs all prerequisite checks and creates fixup scripts when you run the installer. You can run OUI up to the Summary screen without starting the installation.
You can also run CVU commands manually to check system readiness. For more information, see:
Oracle Clusterware Administration and Deployment Guide

Download and run ORAchk for runtime and upgrade checks, or runtime health checks: The ORAchk utility provides system checks that can help to prevent issues after installation. These checks include kernel requirements, operating system resource allocations, and other system requirements.
Use the ORAchk Upgrade Readiness Assessment to obtain an automated upgrade-specific system health check for upgrades. For example:
./orachk -u -o pre
The ORAchk Upgrade Readiness Assessment automates many of the manual pre- and post-upgrade checks described in Oracle upgrade documentation.
ORAchk is supported on Windows platforms in a Cygwin environment only.
For more information refer to the following URL:
https://support.oracle.com/rs?type=doc&id=1268927.1

Ensure cron jobs do not run during installation: If the installer is running when daily cron jobs start, then you may encounter unexplained installation problems if your cron job is performing cleanup, and temporary files are deleted before the installation is finished. Oracle recommends that you complete installation before daily cron jobs are run, or disable daily cron jobs that perform cleanup until after the installation is completed.

Obtain Your My Oracle Support account information: During installation, you require a My Oracle Support user name and password to configure security updates, download software updates, and other installation tasks. You can register for My Oracle Support at the following URL:
https://support.oracle.com/

Check running Oracle processes, and shut down processes if necessary:
• On a node with a standalone database not using Oracle ASM: You do not need to shut down the database while you install Oracle Grid Infrastructure.
• On a node with a standalone Oracle Database using Oracle ASM: Stop the existing Oracle ASM instances. The Oracle ASM instances are restarted during installation.
• On an Oracle RAC Database node: This installation requires an upgrade of Oracle Clusterware, as Oracle Clusterware is required to run Oracle RAC. As part of the upgrade, you must shut down the database one node at a time as the rolling upgrade proceeds from node to node.
2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC
You must complete certain operating system tasks on your servers before you install
Oracle Grid Infrastructure for a Cluster and Oracle Real Application Clusters.
The values provided in this chapter are for installation minimums only. Oracle
recommends that you configure production systems in accordance with planned
system loads.
Checking Server Hardware and Memory Configuration (page 2-1)
Perform these tasks to gather your current system information.
General Server Requirements (page 2-4)
Verify that servers where you install Oracle Grid Infrastructure meet the
minimum requirements for installation.
Server Minimum Hardware and Memory Requirements (page 2-5)
Each system must meet certain minimum hardware and memory
requirements.
Server Minimum Storage Requirements (page 2-5)
Each system must meet certain minimum storage requirements.
Configuring Time Synchronization for the Cluster (page 2-7)
Oracle Clusterware requires the same time zone setting on all cluster
nodes.
Related Topics:
Oracle Grid Infrastructure Installation Server Hardware Checklist (page 1-1)
Review server hardware requirements for Oracle Grid Infrastructure
installation.
2.1 Checking Server Hardware and Memory Configuration
Perform these tasks to gather your current system information.
Checking the Available RAM on Windows Systems (page 2-2)
Use the control panel to check the available RAM on each server.
Checking the Currently Configured Virtual Memory on Windows Systems
(page 2-2)
Virtual memory (also known as a paging file) stores information that
cannot fit in RAM, the main memory for the computer. All processes
share the paging files, and a lack of space in the paging files can prevent
processes from allocating memory.
Checking the System Processor Type (page 2-2)
To view your processor type (32-bit or 64-bit), perform these steps.
Checking the Available Disk Space for Oracle Home Directories (page 2-3)
Additional disk space on a cluster file system is required for the Oracle
Grid Infrastructure Management Repository, Oracle Cluster Registry
(OCR) and voting files used by Oracle Clusterware.
Checking the Available TEMP Disk Space (page 2-4)
The amount of disk space available in the TEMP directory is equivalent
to the total amount of free disk space, minus what is needed to install the
Oracle software.
2.1.1 Checking the Available RAM on Windows Systems
Use the control panel to check the available RAM on each server.
The minimum required RAM is 4 gigabytes (GB) for Oracle Grid Infrastructure for a
Cluster installations, including installations where you plan to install Oracle RAC.
To determine the physical RAM size for a computer, use either of the
following methods:
•
Open System in the Control Panel and select the General tab.
•
Alternatively, start the Windows Task Manager, then select the Performance tab
to view the available memory for your system.
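Alternatively, if you prefer a command-line check, the following is a minimal sketch using the standard systeminfo utility (the exact label text can vary by Windows version and locale):
C:\> systeminfo | findstr /C:"Total Physical Memory"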
2.1.2 Checking the Currently Configured Virtual Memory on Windows Systems
Virtual memory (also known as a paging file) stores information that cannot fit in
RAM, the main memory for the computer. All processes share the paging files, and a
lack of space in the paging files can prevent processes from allocating memory.
Oracle recommends that you set the paging file size to match the amount of RAM
available on the server, up to a maximum of 16 GB. If possible, split the paging file
into multiple files on multiple physical devices. This configuration encourages parallel
access to virtual memory, and improves the software performance.
1. From the Control panel, select System.
2. In the System Properties window, select the Advanced tab.
3. Under Performance, click Performance Options, or Settings.
4. In the Performance Options window, click the Advanced tab.
The virtual memory configuration is displayed at the bottom of the window.
If necessary, refer to your operating system documentation for information about how
to configure additional virtual memory.
2.1.3 Checking the System Processor Type
To view your processor type (32-bit or 64-bit), perform these steps.
1. From the Start menu, select Run. In the Run window, type in msinfo32.exe.
2. In the System Summary display, locate the System Type entry.
•
If the value for System Type is x64-based PC, then you have a 64-bit system.
•
If the value for System Type is x86-based PC, then you have a 32-bit system.
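As a quick command-line alternative (a sketch; this environment variable is set by Windows and typically reports AMD64 in a native command prompt on x64 systems), you can check the processor architecture as follows:
C:\> echo %PROCESSOR_ARCHITECTURE%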
2.1.4 Checking the Available Disk Space for Oracle Home Directories
Additional disk space on a cluster file system is required for the Oracle Grid
Infrastructure Management Repository, Oracle Cluster Registry (OCR) and voting files
used by Oracle Clusterware.
If you are installing Oracle RAC, then you must configure additional disk space
for:
•
The Oracle RAC software and log files
•
The shared data files and, optionally, the shared Fast Recovery Area in an Oracle
ASM disk group
If you use standard redundancy for Oracle Clusterware files, which is 3 Oracle Cluster
Registry (OCR) files and 3 voting files, then you should have at least 2 GB of disk
space available on three separate physical disks reserved for storing the Oracle
Clusterware files in Oracle ASM.
Note: You cannot install OCR or voting files (Oracle Clusterware files) on raw
partitions. You can install Oracle Clusterware files only on Oracle ASM. Raw
devices can be used as Oracle ASM disks.
To ensure high availability of OCR or voting files on Oracle ASM, you need to have at
least 2 GB of disk space for Oracle Clusterware files in three separate failure groups,
with at least three physical disks. Each disk must have at least 1 GB of capacity to
ensure that there is sufficient space to create Oracle Clusterware files.
If the temp space and the Grid home are on the same file system, then add together
their respective requirements for the total minimum space required for that file
system.
Note:
Oracle recommends that you choose the Oracle Grid Infrastructure
Management Repository option when installing Oracle Grid Infrastructure.
When you choose this option, OUI configures an Oracle Grid Infrastructure
Management Repository database on one of the nodes in the cluster.
Starting with Oracle Grid Infrastructure 12c Release 12.1.0.2, installation of the
Oracle Grid Infrastructure Management Repository is no longer optional and
it is installed automatically.
To determine the amount of available free disk space, there are two methods you can
use:
1. Using the Computer properties window:
a. Open the Start menu, then click Computer.
b. View the free disk space for each drive.
c. Right-click the drive on which you plan to install the Oracle software and select
Properties to view additional information about the disk space for that drive.
2. Using the Disk Management Microsoft Management Console (MMC) plug-in:
a. From the Start menu, select Run...
b. In the Run window, type in Diskmgmt.msc to open the Disk Management
graphical user interface (GUI).
The Disk Management GUI displays the available space on the available file
systems.
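If you prefer the command line, a sketch using the fsutil utility (run from an elevated Command Prompt; replace C: with the drive you plan to use) reports the free space on a volume:
C:\> fsutil volume diskfree C: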
See Also:
•
"Configuring Shared Storage for Oracle Database and Grid Infrastructure
(page 6-1)"
•
Oracle Automatic Storage Management Administrator's Guide
2.1.5 Checking the Available TEMP Disk Space
The amount of disk space available in the TEMP directory is equivalent to the total
amount of free disk space, minus what is needed to install the Oracle software.
Log in as or switch to the user that will be performing the installation. See "About the
Oracle Installation User (page 5-2)" for more information.
Note: The temporary directory must reside in the same directory path on
each node in the cluster.
You must have 1 GB of disk space available in the TEMP directory. If you do not have
sufficient space, then first delete all unnecessary files. If the temporary disk space is
still less than the required amount, then increase the partition size of the disk or set the
TEMP environment variable to point to a different hard drive. Ensure the environment
variables TEMP and TMP both point to the location of the TEMP directory, for example:
TEMP=C:\WINDOWS\TEMP
TMP=C:\WINDOWS\TEMP
1. From the Control Panel, select System.
2. Select Advanced System Settings.
3. In the System Properties window, select the Advanced tab, then click
Environment Variables.
4. Modify the value of the TEMP environment variable in the user variables list.
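Alternatively, assuming you prefer the command line to the System Properties dialog, a command similar to the following is a sketch for setting the variable for the current user (setx affects only new command sessions, and the path shown is an example only):
C:\> setx TEMP C:\WINDOWS\TEMP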
2.2 General Server Requirements
Verify that servers where you install Oracle Grid Infrastructure meet the minimum
requirements for installation.
•
Select servers with the same instruction set architecture as cluster members.
•
Ensure servers run the same operating system binary.
•
Oracle Grid Infrastructure installations and Oracle Real Application Clusters
(Oracle RAC) support servers with different hardware in the same cluster. Your
cluster can have nodes with CPUs of different speeds or sizes, but Oracle
recommends that you use nodes with the same hardware configuration.
If you configure clusters using nodes with different configurations, then Oracle
recommends that you categorize cluster nodes into homogeneous pools as part of
your server categorization management policy.
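To confirm that all cluster members run the same operating system version and binary, a check like the following can be run on each node and the output compared (a sketch using standard Windows utilities):
C:\> systeminfo | findstr /B /C:"OS Name" /C:"OS Version"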
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about server state and configuration attributes, and about using
server pools to manage resources and workloads
2.3 Server Minimum Hardware and Memory Requirements
Each system must meet certain minimum hardware and memory requirements.
Table 2-1 Minimum Hardware and Memory Requirements for Oracle Grid Infrastructure Installation

Memory (RAM): Oracle Grid Infrastructure installations: At least 4 gigabytes (GB) of physical RAM for Oracle Grid Infrastructure for a cluster installations, including installations where you plan to install Oracle RAC.

Virtual memory (swap): If your server has between 4 GB and 16 GB of RAM, then you should configure a paging file of at least the same size as the RAM. If your server has more than 16 GB of RAM, then the paging file should be 16 GB.

Video adapter: 256-color and at least 1024 x 768 display resolution, so that OUI displays correctly while performing a system console-based installation.

Processor: x64: Intel Extended Memory 64 Technology (EM64T) or AMD64.
Note: 32-bit systems are no longer supported for Oracle Grid Infrastructure
and Oracle RAC.
2.4 Server Minimum Storage Requirements
Each system must meet certain minimum storage requirements.
•
1 GB of space in the %TEMP% directory.
If the free space available in the %TEMP% directory is less than what is required,
then complete one of the following steps:
–
Delete unnecessary files from the %TEMP% directory to make available the
space required.
–
Extend the file system that contains the %TEMP% directory. If necessary,
contact your system administrator for information about extending file
systems.
•
At least 12 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid
home). Oracle recommends that you allocate 100 GB to allow additional space for
patches.
•
At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure
Installation user. The Oracle base includes Oracle Clusterware and Oracle ASM
log files.
•
If you intend to install Oracle Database, then allocate 7.5 GB of disk space for the
Oracle home (the location for the Oracle Database software binaries).
If you plan to configure automated database backups for your Oracle Database,
then you require additional space either in a file system or in an Oracle Automatic
Storage Management disk group for the Fast Recovery Area.
Note: The base directory for Oracle Grid Infrastructure 12c and the base
directory for Oracle RAC 12c must be different from the directories used by
any Oracle RAC 11g Release 2 installations.
Disk Format Requirements (page 2-6)
Oracle recommends that you install Oracle software, or binaries, on New
Technology File System (NTFS) formatted drives or partitions.
See Also:
•
Oracle Automatic Storage Management Administrator's Guide
•
Oracle Database Backup and Recovery User’s Guide for more information
about Fast Recovery Area sizing
2.4.1 Disk Format Requirements
Oracle recommends that you install Oracle software, or binaries, on New Technology
File System (NTFS) formatted drives or partitions.
Because it is difficult for OUI to estimate NTFS and file allocation table (FAT) disk
sizes on Windows, the system requirements documented in this section are likely
more accurate than the values reported on the OUI Summary screen.
Note:
Oracle Grid Infrastructure software is not supported on Network File System
(NFS).
You cannot use NTFS formatted disks or partitions for Oracle Clusterware files or data
files because they cannot be shared. Oracle Clusterware shared files and Oracle
Database data files can be placed on unformatted basic disks or disk partitions, called
raw partitions, managed by Oracle ASM.
Oracle ASM is recommended for storing Oracle Clusterware and Oracle Database data
files.
2.5 Configuring Time Synchronization for the Cluster
Oracle Clusterware requires the same time zone setting on all cluster nodes.
About Cluster Time Synchronization (page 2-7)
For the Microsoft Windows operating system, there are three different
methods you can use to synchronize the time between the nodes of your
cluster.
Understanding Network Time Requirements (page 2-8)
Oracle Clusterware 12c Release 1 (12.1) is automatically configured with
Cluster Time Synchronization Service (CTSS).
Configuring the Windows Time Service (page 2-8)
The Windows Time service (W32Time) provides network clock
synchronization on computers running Microsoft Windows.
Configuring Network Time Protocol (page 2-9)
The Network Time Protocol (NTP) is a client/server application.
Configuring Cluster Time Synchronization Service (page 2-9)
Cluster Time Synchronization Service (CTSS) is provided by Oracle to
synchronize the time across the nodes in your cluster.
2.5.1 About Cluster Time Synchronization
For the Microsoft Windows operating system, there are three different methods you
can use to synchronize the time between the nodes of your cluster.
During installation, the installation process picks up the time zone environment
variable setting of the Oracle Installation user for Oracle Grid Infrastructure on the
node where OUI runs. Then the installation process uses that time zone value on all
nodes as the default TZ environment variable setting for all processes managed by
Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any
other managed processes.
You have three options for time synchronization between cluster nodes:
•
Windows Time service
•
An operating system configured network time protocol (NTP)
•
Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations whose
cluster servers are unable to access NTP services. If you use NTP, then the Oracle
Cluster Time Synchronization daemon (ctssd) starts in observer mode. If neither NTP
nor the Windows Time service is found, then ctssd starts in active mode and
synchronizes time among cluster members without contacting an external time server.
Note:
•
Before starting the installation of Oracle Grid Infrastructure, Oracle
recommends that you ensure the clocks on all nodes are set to the same
time.
•
The IP address for an NTP server can be an IPv6 address.
2.5.2 Understanding Network Time Requirements
Oracle Clusterware 12c Release 1 (12.1) is automatically configured with Cluster Time
Synchronization Service (CTSS).
CTSS provides automatic synchronization of the time settings on all cluster nodes.
CTSS uses the optimal synchronization strategy for the type of cluster you deploy.
If you have an existing cluster synchronization service, such as network time protocol
(NTP) or Windows Time Service, then CTSS starts in an observer mode. Otherwise,
CTSS starts in an active mode to ensure that time is synchronized between cluster
nodes. CTSS will not cause compatibility issues.
The CTSS module is installed as a part of Oracle Grid Infrastructure installation. CTSS
daemons are started by the Oracle High Availability Services daemon (ohasd), and do
not require a command-line interface.
2.5.3 Configuring the Windows Time Service
The Windows Time service (W32Time) provides network clock synchronization on
computers running Microsoft Windows.
If you are using Windows Time service, and you prefer to continue using it instead of
Cluster Time Synchronization Service, then you must modify the Windows Time
service settings to prevent large jumps in time and to allow the time to gradually
match with the reference time. Restart the Windows Time service after you complete this task.
1. To configure Windows Time service, use the following command on each node:
C:\> W32tm /register
2. To modify the Windows Time service to work in an Oracle RAC environment,
perform the following steps:
a. Open the Registry Editor (regedit)
b. Locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet
\Services\W32Time\Config key.
c. Set the following Windows Time service parameters to these decimal values:
•
MaxPosPhaseCorrection to 600
•
MaxNegPhaseCorrection to 600
•
MaxAllowedPhaseOffset to 600
These parameter settings specify that small time adjustments are allowed when
the time difference between the reference and cluster nodes is under 10 minutes.
Note: You should configure the Windows Time service to meet the
requirements of your environment, with assistance from Microsoft, if
necessary. The recommended settings provided for the three parameters are
the settings that Oracle recommends to allow time adjustments to happen
through slewing (gradually adjusting the clock using small changes) rather
than in large steps (setting the clock to a new time). Large time adjustments in
a single step are not supported.
3. To put the changes into effect, use the following command:
C:\> W32tm /config /update
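For reference, the same three values can also be set from an elevated Command Prompt instead of the Registry Editor. The following is a sketch only; the key path and decimal values match those listed in step 2, but verify them for your environment before applying:
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxPosPhaseCorrection /t REG_DWORD /d 600 /f
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxNegPhaseCorrection /t REG_DWORD /d 600 /f
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxAllowedPhaseOffset /t REG_DWORD /d 600 /f
C:\> W32tm /config /update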
See Also: For more information about using and configuring the Windows
Time Service, see:
•
Microsoft® Support article ID 816042: "How to configure an authoritative
time server in Windows Server"
•
Microsoft® Support article ID 939322: "Support boundary to configure the
Windows Time service for high accuracy environments"
•
NTP FAQ and HOW TO
2.5.4 Configuring Network Time Protocol
The Network Time Protocol (NTP) is a client/server application.
Each server must have NTP client software installed and configured to synchronize its
clock to the network time server. The Windows Time service is not an exact
implementation of the NTP, but it is based on the NTP specifications.
If you decide to use NTP instead of the Windows Time service, then, after you have
installed the NTP client software on each cluster node, you must start the NTP service
with the -x option to prevent time from being adjusted backward. Restart the network
time protocol service after you complete this task.
1. Use the registry editor to edit the value for the ntpd executable under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTP
2. Add the -x option to the ImagePath key value, behind %INSTALLDIR%\ntpd.exe.
3. Restart the NTP service using the following commands:
net stop NTP
net start NTP
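To verify the change, you can query the service's ImagePath value from a Command Prompt (a sketch; the installation directory is a placeholder that depends on your NTP client software). After the edit, the value should end with the -x option, for example %INSTALLDIR%\ntpd.exe -x:
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\NTP /v ImagePath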
2.5.5 Configuring Cluster Time Synchronization Service
Cluster Time Synchronization Service (CTSS) is provided by Oracle to synchronize the
time across the nodes in your cluster.
When OUI discovers that neither the Windows Time service nor the NTP service is active, the
Cluster Time Synchronization Service is installed in active mode and synchronizes the
time across the nodes. If the Windows Time service or NTP service is found on the
server, then the Cluster Time Synchronization Service is started in observer mode, and
no active time synchronization is performed by the Cluster Time Synchronization
Service within the cluster.
To use Cluster Time Synchronization Service to provide synchronization service in the
cluster, disable the Windows Time service and stop the NTP service. If you have an
NTP service on your server but you cannot use the service to synchronize time with a
time server, then you must deactivate and deinstall NTP to use Cluster Time
Synchronization Service.
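For example, assuming you choose to stop and disable the Windows Time service from an elevated Command Prompt before relying on Cluster Time Synchronization Service (a sketch; adjust if your environment still requires the Windows Time service for other purposes):
C:\> net stop W32Time
C:\> sc config W32Time start= disabled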
•
To confirm that the Cluster Time Synchronization Service is active after
installation, enter the following command as the Oracle Grid Infrastructure
installation owner:
crsctl check ctss
3 Configuring Operating Systems for Oracle Grid Infrastructure and Oracle RAC
Complete operating system configuration requirements and checks before you start
the Oracle Grid Infrastructure installation.
Reviewing Operating System and Software Upgrade Best Practices (page 3-1)
Review the general planning guidelines and platform-specific
information about upgrades and migration.
Reviewing Operating System Security Common Practices (page 3-3)
Secure operating systems are an important basis for general system
security.
Verify Privileges for Copying Files in the Cluster (page 3-3)
During installation, OUI copies the software from the local node to the
remote nodes in the cluster. The installation user must have privileges
on the other nodes in the cluster to copy the files.
Identifying Software Requirements (page 3-4)
Depending on the products that you intend to install, verify that the
required operating system software is installed on each node of your
cluster.
Checking the Operating System Version (page 3-6)
To determine whether your computer is running a 64-bit (x64) Windows
operating system, perform these steps.
Checking Hardware and Software Certification on My Oracle Support
(page 3-6)
The My Oracle Support website also provides compatible client and
database releases, patches, and workaround information for bugs.
Oracle Enterprise Manager Requirements (page 3-7)
Verify that your installed Oracle Enterprise Manager meets the
minimum requirements for use with Oracle Grid Infrastructure.
Installation Requirements for Web Browsers (page 3-7)
Web browsers are required to use Oracle Enterprise Manager Database
Express and Oracle Enterprise Manager Cloud Control.
3.1 Reviewing Operating System and Software Upgrade Best Practices
Review the general planning guidelines and platform-specific information about
upgrades and migration.
General Upgrade Best Practices (page 3-2)
Be aware of these guidelines as a best practice before you perform an
upgrade.
Oracle ASM Upgrade Notifications (page 3-2)
Be aware of these issues regarding Oracle ASM upgrades.
Rolling Upgrade Procedure Notifications (page 3-3)
Be aware of this information regarding rolling upgrades.
3.1.1 General Upgrade Best Practices
Be aware of these guidelines as a best practice before you perform an upgrade.
If you have an existing Oracle installation, then do the following:
•
Record the version numbers, patches, and other configuration information
•
Review upgrade procedures for your existing installation
•
Review Oracle upgrade documentation before proceeding with installation, to
decide how you want to proceed
Caution:
Always create a backup of existing databases before starting any configuration
change.
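For example, a minimal RMAN sketch for taking a full backup of an existing database before you begin, assuming RMAN is your backup tool and a valid backup destination is already configured (adapt this to your own backup strategy):
C:\> rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;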
Refer to Oracle Database Upgrade Guide for more information about required software
updates, pre-upgrade tasks, post-upgrade tasks, compatibility, and interoperability
between different releases.
Related Topics:
Upgrading Oracle Grid Infrastructure (page 11-1)
Oracle Grid Infrastructure upgrade consists of upgrade of Oracle
Clusterware and Oracle Automatic Storage Management (Oracle ASM).
Oracle Database Upgrade Guide
3.1.2 Oracle ASM Upgrade Notifications
Be aware of these issues regarding Oracle ASM upgrades.
•
You can upgrade Oracle Automatic Storage Management (Oracle ASM) 11g
release 1 (11.1) and later without shutting down an Oracle RAC database by
performing a rolling upgrade either of individual nodes, or of a set of nodes in the
cluster. However, if you have a standalone database on a cluster that uses Oracle
ASM, then you must shut down the standalone database before upgrading. If you
are upgrading from Oracle ASM 10g, then you must shut down the entire Oracle
ASM cluster to perform the upgrade.
•
The location of the Oracle ASM home changed in Oracle Grid Infrastructure 11g
release 2 (11.2) so that Oracle ASM is installed with Oracle Clusterware in the
Oracle Grid Infrastructure home (Grid home).
If you have an existing Oracle ASM home from an earlier release, then you may
want to consider other configuration changes to simplify or customize storage
administration.
•
When upgrading from Oracle Grid Infrastructure 11g release 2 (11.2) or Oracle
Grid Infrastructure 12c release 1 (12.1) to a later release, if there is an outage
during the rolling upgrade, then when you restart the upgrade, ensure that you
start the earlier release of Oracle Grid Infrastructure. You must then bring the
Oracle ASM cluster back in the rolling migration mode, because two nodes of
different releases cannot run in the cluster.
•
You must be an Administrator user to upgrade Oracle ASM.
3.1.3 Rolling Upgrade Procedure Notifications
Be aware of this information regarding rolling upgrades.
•
During rolling upgrades of the operating system, Oracle supports using different
operating system binaries when both versions of the operating system are
certified with the Oracle Database release you are using.
•
Using mixed operating system versions is supported during upgrade only.
Be aware that mixed operating systems are supported only for the duration of an
upgrade, over the period of a few hours.
•
Oracle Clusterware does not support nodes that have processors with different
instruction set architectures (ISAs) in the same cluster. Each node must be binary
compatible with the other nodes in the cluster.
For example, you cannot have one node using an Intel 64 processor and another
node using an IA-64 (Itanium) processor in the same cluster. You could have one
node using an Intel 64 processor and another node using an AMD64 processor in
the same cluster because the processors use the same x86-64 ISA and run the same
binary release of Oracle software.
Note: Your cluster can have nodes with processors of different
manufacturers, speeds, or sizes, but this is not recommended.
3.2 Reviewing Operating System Security Common Practices
Secure operating systems are an important basis for general system security.
•
Ensure that your operating system deployment is in compliance with common
security practices as described in your operating system vendor security guide.
3.3 Verify Privileges for Copying Files in the Cluster
During installation, OUI copies the software from the local node to the remote nodes
in the cluster. The installation user must have privileges on the other nodes in the
cluster to copy the files.
1. Verify that you have Administrator privileges on the other nodes in the cluster by
running the following command on each node, where nodename is the name of the
remote node:
net use \\nodename\C$
2. After installation, if your system does not use the net share shown in the above
example, then you can remove the unused net share using the following command:
net use \\nodename\C$ /delete
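For example, assuming three cluster nodes named node1, node2, and node3 (placeholder names), you can check each node's administrative share in turn from a Command Prompt on the local node (in a batch script, double the percent sign as %%n):
C:\> for %n in (node1 node2 node3) do net use \\%n\C$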
3.4 Identifying Software Requirements
Depending on the products that you intend to install, verify that the required
operating system software is installed on each node of your cluster.
Requirements listed in this document are current as of the date listed on the title page.
To obtain the most current information about operating system requirements, see the
online version in the Oracle Help Center at the following URL:
docs.oracle.com
OUI performs checks on your system to verify that it meets the listed operating system
requirements. To ensure that these checks complete successfully, verify the
requirements before you start OUI.
Note: Oracle does not support running different operating system versions
on cluster members, unless an operating system is being upgraded. You
cannot run different operating system version binaries on members of the
same cluster, even if each operating system is supported.
Table 3-1 Oracle Grid Software Requirements for Windows Systems

System Architecture: Processor: AMD64, or Intel Extended Memory 64 Technology (EM64T).
Note: Oracle provides only x64 releases of Oracle Database with Oracle RAC for Windows. The x64 release of Oracle RAC runs on the x64 version of Windows on AMD64 and EM64T hardware.

Operating system: Oracle Grid Infrastructure and Oracle RAC for x64 Windows:
• Windows Server 2012 x64 - Standard, Datacenter, Essentials, and Foundation editions
• Windows Server 2012 R2 x64 - Standard, Datacenter, Essentials, and Foundation editions
The Windows Multilingual User Interface Pack is supported. The Server Core option is not supported.
Note: Oracle Clusterware, Oracle ASM and Oracle RAC 12c release 1 (12.1) and later are not supported on x86 (32-bit) Windows operating systems.

Compilers: The following components are supported with the Microsoft Visual C++ 2013 Update 4, Microsoft Visual C++ 2015 Update 3, and Intel 14.0 C compilers:
• Oracle Call Interface (OCI)
• Pro*C/C++
• External callouts
• Oracle XML Developer's Kit (XDK)
Oracle C++ Call Interface supports:
• Microsoft Visual C++ 2013 Update 4
• Microsoft Visual C++ 2015 Update 3 - OCCI libraries are installed under ORACLE_HOME\oci\lib\msvc\vc14. When developing OCCI applications with MSVC++ 2015, ensure that the OCCI libraries are correctly selected from this directory for linking and executing.
• Microsoft Visual C++ 2013 - OCCI libraries are installed under ORACLE_HOME\oci\lib\msvc\vc12. When developing OCCI applications with MSVC++ 2013, ensure that the OCCI libraries are correctly selected from this directory for linking and executing.
• Intel 14.0 C compilers with Microsoft Visual Studio 2013 STLs
Pro*COBOL supports:
• Micro Focus Visual COBOL 2.2 - Update 2
Note:
The platform-specific hardware and software requirements included in this
guide were current when this guide was published. However, because new
platforms and operating system software versions might be certified after this
guide is published, review the certification matrix on the My Oracle Support
website for the most up-to-date list of certified hardware platforms and
operating system versions:
https://support.oracle.com/
Windows Firewall Feature on Windows Servers (page 3-6)
When installing Oracle Grid Infrastructure software or Oracle RAC
software on Windows servers, it is mandatory to disable the Windows
Firewall feature.
See Also:
• Checking Hardware and Software Certification on My Oracle Support
(page 3-6) for certification information.
• Oracle Database Platform Guide for Microsoft Windows for upgrade
instructions for an operating system version that is not supported by
Oracle Database 12c release 2 (12.2), such as Windows Server 2003 x86.
3.4.1 Windows Firewall Feature on Windows Servers
When installing Oracle Grid Infrastructure software or Oracle RAC software on
Windows servers, it is mandatory to disable the Windows Firewall feature.
If the Windows Firewall is enabled, then remote copy and configuration assistants
such as virtual IP configuration assistant (VIPCA), Network Configuration Assistant
(NETCA), and Oracle Database Configuration Assistant (DBCA) will fail during
Oracle RAC installation. Thus, you must disable the firewall on all the nodes of a
cluster before performing an Oracle RAC installation.
Note: The Windows Firewall should never be enabled on a NIC that is used as
a cluster interconnect (private network interface) or for accessing an Oracle
ASM network.
After the installation is successful, you can enable the Windows Firewall for the public
connections. However, to ensure correct operation of the Oracle software, you must
add certain executables and ports to the Firewall exception list on all the nodes of a
cluster.
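For example, after installation you might allow inbound connections on the default
Oracle listener port with a rule similar to the following; the rule name and port are
examples only, and the full list of required executables and ports is described in
"Configure Exceptions for the Windows Firewall (page 10-2)":

C:\> netsh advfirewall firewall add rule name="Oracle Listener" dir=in action=allow protocol=TCP localport=1521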
Additionally, the Windows Firewall must be disabled on all the nodes in the cluster
before performing any clusterwide configuration changes, such as:
• Adding a node
• Deleting a node
• Upgrading to a patch release
• Applying a patch bundle or an emergency patch
If you do not disable the Windows Firewall before performing these actions, then the
changes might not be propagated correctly to all the nodes of the cluster.
Related Topics:
Configure Exceptions for the Windows Firewall (page 10-2)
3.5 Checking the Operating System Version
To determine whether your computer is running a 64-bit (x64) Windows operating
system, perform these steps.
1. Right-click My Computer and select Properties.
2. On the General tab, under the heading of System, view the displayed text.
You will see text similar to "64-bit Operating System" if you have the x64 version of
the operating system installed.
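As an alternative to the graphical check, the following command reports the system
architecture from a command prompt; the exact wording of the output can vary by
Windows version:

C:\> systeminfo | findstr /C:"System Type"
System Type:               x64-based PC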
3.6 Checking Hardware and Software Certification on My Oracle Support
The My Oracle Support website also provides compatible client and database releases,
patches, and workaround information for bugs.
The hardware and software requirements included in this installation guide were
current at the time this guide was published. However, because new platforms and
operating system software versions might be certified after this guide is published,
review the certification matrix on the My Oracle Support website for the most up-to-date list of certified hardware platforms and operating system versions.
• View certification information on both Oracle Technology Network (OTN) and
My Oracle Support. The OTN certification page can be found on the following
website:
http://www.oracle.com/technetwork/database/options/clustering/overview/index.html
• View hardware certification details for Microsoft Windows platforms on Oracle
Technology Network, at the following URL:
http://www.oracle.com/technetwork/database/clustering/tech-generic-windows-new-166584.html
• View My Oracle Support for guidance about supported hardware options that can
assist you with your purchasing decisions and installation planning.
The My Oracle Support certifications page contains more detailed information
about certified hardware and has information specific to each release and
platform. My Oracle Support is available at the following URL:
https://support.oracle.com/
You must register online before using My Oracle Support. Use the steps described
in the support document "Locate Oracle Database Server Certification Information
for Microsoft Windows Platforms (Doc ID 1062972.1)" to locate the certification
information for your Windows operating system.
Note: Contact your Oracle sales representative if you do not have a My
Oracle Support account.
3.7 Oracle Enterprise Manager Requirements
Verify that your installed Oracle Enterprise Manager meets the minimum
requirements for use with Oracle Grid Infrastructure.
All Oracle Enterprise Manager products that you use on your system must be of the
same release. Oracle Database 12c Release 1 (12.1) and later releases do not support
releases of Enterprise Manager earlier than Oracle Enterprise Manager Cloud Control
12c.
See Also: Oracle Enterprise Manager Cloud Control Basic Installation Guide
available on the Enterprise Manager Cloud Control installation media
3.8 Installation Requirements for Web Browsers
Web browsers are required to use Oracle Enterprise Manager Database Express and
Oracle Enterprise Manager Cloud Control.
Web browsers must support JavaScript, and the HTML 4.0 and CSS 1.0 standards. For
a list of browsers that meet these requirements, see the Oracle Enterprise Manager
certification matrix on My Oracle Support: https://support.oracle.com
4 Configuring Networks for Oracle Grid Infrastructure and Oracle RAC
Check that you have the networking hardware and internet protocol (IP) addresses
required for an Oracle Grid Infrastructure for a cluster installation.
Note: For the most up-to-date information about supported network
protocols and hardware for Oracle RAC installations, refer to the Certify pages
on the My Oracle Support website. See "Checking Hardware and Software
Certification on My Oracle Support (page 3-6)" for instructions.
About Oracle Grid Infrastructure Network Configuration Options (page 4-2)
Ensure that you have the networking hardware and internet protocol
(IP) addresses required for an Oracle Grid Infrastructure for a cluster
installation.
Understanding Network Addresses (page 4-2)
During installation, you are asked to identify the planned use for each
network interface that Oracle Universal Installer (OUI) detects on your
cluster node.
Network Interface Hardware Requirements (page 4-6)
Review these requirements to ensure that you have the minimum
network hardware technology for Oracle Grid Infrastructure clusters.
Oracle Grid Infrastructure IP Name and Address Requirements (page 4-12)
The Oracle Grid Naming Service (GNS) is used with large clusters to
ease network administration cost.
Intended Use of Network Adapters (page 4-23)
During installation, you are asked to identify the planned use for each
network adapter (or network interface) that Oracle Universal Installer
(OUI) detects on your cluster node.
Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
(page 4-23)
Broadcast communications address resolution protocol (ARP) and User
Datagram Protocol (UDP) must work properly across all the public and
private interfaces configured for use by Oracle Grid Infrastructure.
Multicast Requirements for Networks Used by Oracle Grid Infrastructure
(page 4-23)
On each cluster member node the Oracle multicast DNS (mDNS)
daemon uses multicasting on all network interfaces to communicate
with other nodes in the cluster.
Configuring Multiple ASM Interconnects on Microsoft Windows Platforms
(page 4-24)
When using multiple network interface cards for the Oracle Automatic
Storage Management (Oracle ASM) interconnect, you must enable the
weakhostsend network parameter.
4.1 About Oracle Grid Infrastructure Network Configuration Options
Ensure that you have the networking hardware and internet protocol (IP) addresses
required for an Oracle Grid Infrastructure for a cluster installation.
Oracle Clusterware Networks
An Oracle Clusterware configuration requires at least two interfaces:
• A public network interface, on which users and application servers connect to
access data on the database server
• A private network interface for internode communication.
You can configure a network interface to use either the IPv4 protocol, or the IPv6
protocol on a given network. If you use redundant network interfaces (bonded or
teamed interfaces), then be aware that Oracle does not support configuring one
interface to support IPv4 addresses and the other to support IPv6 addresses. You must
configure network interfaces of a redundant interface pair with the same IP protocol.
All the nodes in the cluster must use the same IP protocol configuration. Either all the
nodes use only IPv4, or all the nodes use only IPv6. You cannot have some nodes in
the cluster configured to support only IPv6 addresses, and other nodes in the cluster
configured to support only IPv4 addresses.
The VIP agent supports the generation of IPv6 addresses using the Stateless Address
Autoconfiguration Protocol (RFC 2462), and advertises these addresses with GNS. Run
the srvctl config network command to determine if Dynamic Host Configuration
Protocol (DHCP) or stateless address autoconfiguration is being used.
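For example, run the following command on a cluster member node; the output for
each network indicates whether the addresses are static, dhcp, or autoconfig:

C:\> srvctl config network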
Note: See the Certify pages on My Oracle Support for the most up-to-date
information about supported network protocols and hardware for Oracle
RAC:
https://support.oracle.com
4.2 Understanding Network Addresses
During installation, you are asked to identify the planned use for each network
interface that Oracle Universal Installer (OUI) detects on your cluster node.
Identify each interface as a public or private interface, or as an interface that you do
not want Oracle Grid Infrastructure or Oracle ASM to use. Public and virtual internet
protocol (VIP) addresses are configured on public interfaces. Private addresses are
configured on private interfaces.
Refer to the following sections for detailed information about each address type.
About the Public IP Address (page 4-3)
The public IP address uses the public interface (the interface with access
available to clients).
About the Private IP Address (page 4-3)
Oracle Clusterware uses interfaces marked as private for internode
communication.
About the Virtual IP Address (page 4-4)
The virtual IP (VIP) address is registered in the grid naming service
(GNS), or the DNS.
About the Grid Naming Service (GNS) Virtual IP Address (page 4-4)
The GNS virtual IP address is a static IP address configured in the DNS.
About the SCAN (page 4-4)
Oracle Database clients connect to the database using a single client
access name (SCAN).
4.2.1 About the Public IP Address
The public IP address uses the public interface (the interface with access available to
clients).
The public IP address is assigned dynamically using Dynamic Host Configuration
Protocol (DHCP), or defined statically in a domain name system (DNS) or hosts file.
The public IP address is the primary address for a cluster member node, and should
be the address that resolves to the name returned when you enter the command
hostname.
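For example, on a node whose host name is node1 (an example name only), you might
confirm the mapping as follows:

C:\> hostname
node1

C:\> nslookup node1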
If you configure IP addresses manually, then avoid changing host names after you
complete the Oracle Grid Infrastructure installation, including adding or deleting
domain qualifications. A node with a new host name is considered a new host, and
must be added to the cluster. A node under the old name will appear to be down until
it is removed from the cluster.
4.2.2 About the Private IP Address
Oracle Clusterware uses interfaces marked as private for internode communication.
Each cluster node must have an interface that you identify during installation as a
private interface. Private interfaces must have addresses configured for the interface
itself, but no additional configuration is required. Oracle Clusterware uses the
interfaces you identify as private for the cluster interconnect. Any interface that you
identify as private must be on a subnet that connects to every node of the cluster.
Oracle Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between
nodes, Oracle strongly recommends using a physically separate, private network. If
you configure addresses using a DNS, then you should ensure that the private IP
addresses are reachable only by the cluster nodes.
You can choose multiple interconnects either during installation or postinstallation
using the oifcfg setif command.
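The following is a minimal sketch of listing the current interface assignments and
adding a second interconnect after installation; the connection name and subnet
shown here are examples only:

C:\> oifcfg getif
C:\> oifcfg setif -global "Local Area Connection 3"/192.168.1.0:cluster_interconnect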
After installation, if you modify the interconnect for Oracle Real Application Clusters
(Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you
must change the interconnect to a private IP address, on a subnet that is not used with
a public IP address, nor marked as a public subnet by oifcfg. Oracle does not
support changing the interconnect to an interface using a subnet that you have
designated as a public subnet.
You should not use a firewall on the network with the private network IP addresses,
because this can block interconnect traffic.
4.2.3 About the Virtual IP Address
The virtual IP (VIP) address is registered in the grid naming service (GNS), or the
DNS.
Select an address for your VIP that meets the following requirements:
• The IP address and host name are currently unused (it can be registered in a DNS,
but should not be accessible by a ping command)
• The VIP is on the same subnet as your public interface
If you are not using Grid Naming Service (GNS), then determine a virtual host name
for each node. A virtual host name is a public node name that reroutes client requests
sent to the node if the node is down. Oracle Database uses VIPs for client-to-database
connections, so the VIP address must be publicly accessible. Oracle recommends that
you provide a name in the format hostname-vip. For example: myclstr2-vip.
4.2.4 About the Grid Naming Service (GNS) Virtual IP Address
The GNS virtual IP address is a static IP address configured in the DNS.
The DNS delegates queries to the GNS virtual IP address, and the GNS daemon
responds to incoming name resolution requests at that address. Within the
subdomain, the GNS uses multicast Domain Name Service (mDNS), included with
Oracle Clusterware, to enable the cluster to map host names and IP addresses
dynamically as nodes are added and removed from the cluster, without requiring
additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP
addresses for a subdomain assigned to the cluster (for example,
grid.example.com), and delegate DNS requests for that subdomain to the GNS
virtual IP address for the cluster, which GNS serves. DHCP provides the set of IP
addresses to the cluster; DHCP must be available on the public network for the cluster.
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about GNS
4.2.5 About the SCAN
Oracle Database clients connect to the database using a single client access name
(SCAN).
The SCAN and its associated IP addresses provide a stable name for clients to use for
connections, independent of the nodes that make up the cluster. SCAN addresses,
virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN is a virtual IP name, similar to the names used for virtual IP addresses,
such as node1-vip. However, unlike a virtual IP, the SCAN is associated with the
entire cluster, rather than an individual node, and associated with multiple IP
addresses, not just one address.
The SCAN resolves to multiple IP addresses reflecting multiple listeners in the cluster
that handle public client connections. When a client submits a request, the SCAN
listener listening on a SCAN IP address and the SCAN port is made available to a
client. Because all services on the cluster are registered with the SCAN listener, the
SCAN listener replies with the address of the local listener on the least-loaded node
where the service is currently being offered. Finally, the client establishes connection
to the service through the listener on the node where service is offered. All of these
actions take place transparently to the client without any explicit configuration
required in the client.
During installation, listeners are created. These SCAN listeners listen on the SCAN IP
addresses. The SCAN listeners are started on nodes determined by Oracle
Clusterware. Oracle Net Services routes application requests to the least-loaded
instance providing the service. Because the SCAN addresses resolve to the cluster,
rather than to a node address in the cluster, nodes can be added to or removed from
the cluster without affecting the SCAN address configuration. The SCAN listener also
supports HTTP protocol for communication with Oracle XML Database (XDB).
The SCAN should be configured so that it is resolvable either by using Grid Naming
Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution.
For high availability and scalability, Oracle recommends that you configure the SCAN
name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve
to at least one address.
If you specify a GNS domain, then the SCAN name defaults to
clustername-scan.cluster_name.GNS_domain. Otherwise, it defaults to
clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure
installation from the server node1, the cluster name is mycluster, and the GNS
domain is grid.example.com, then the SCAN name is
mycluster-scan.mycluster.grid.example.com.
Clients configured to use IP addresses for Oracle Database releases prior to Oracle
Database 11g release 2 can continue to use their existing connection addresses; using
SCAN is not required. When you upgrade to Oracle Clusterware 12c release 1 (12.1),
the SCAN becomes available, and you should use the SCAN for connections to Oracle
Database 11g release 2 or later databases. When an earlier release of Oracle Database is
upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to
connect to that database. The database registers with the SCAN listener through the
remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter
must be set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address for
the SCAN, for example, using HOST= SCAN_name.
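As an illustration only, a setting of the required SCAN:PORT form might look like the
following in the server parameter file; the SCAN name and port shown are examples,
not values from your environment:

REMOTE_LISTENER=mycluster-scan.grid.example.com:1521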
The SCAN is optional for most deployments. However, clients using Oracle Database
11g release 2 and later policy-managed databases using server pools must access the
database using the SCAN. This is required because policy-managed databases can run
on different servers at different times, so connecting to a particular node by using the
virtual IP address for a policy-managed database is not possible.
Provide SCAN addresses for client access to the cluster. These addresses must be
configured as round robin addresses on the domain name service (DNS), if DNS is
used. Oracle recommends that you supply three SCAN addresses.
Identify public and private interfaces. Oracle Universal Installer configures public
interfaces for use by public and virtual IP addresses, and configures private IP
addresses on private interfaces. The private subnet that the private interfaces use must
connect all the nodes you intend to have as cluster members. The SCAN must be in the
same subnet as the public interface.
Related Topics:
Oracle Real Application Clusters Administration and Deployment Guide
4.3 Network Interface Hardware Requirements
Review these requirements to ensure that you have the minimum network hardware
technology for Oracle Grid Infrastructure clusters.
Network Requirements for Each Node (page 4-6)
Verify that servers where you install Oracle Grid Infrastructure meet the
minimum network requirements for installation.
Network Requirements for the Private Network (page 4-7)
The following is a list of requirements for the private network
configuration:
Network Requirements for the Public Network (page 4-7)
The following is a list of requirements for the public network
configuration:
IPv6 Protocol Support for Windows (page 4-8)
Oracle Grid Infrastructure and Oracle RAC support the standard IPv6
address notations specified by RFC 2732 and global and site-local IPv6
addresses as defined by RFC 4193.
Using Multiple Public Network Adapters (page 4-9)
You can configure multiple network adapters for the public network
interface.
Network Configuration Tasks for Windows Server Deployments (page 4-9)
Microsoft Windows Server has many unique networking features. Some
of these features require special configuration to enable Oracle software
to run correctly on Windows Server.
Network Interface Configuration Options for Performance (page 4-12)
The precise configuration you choose for your network depends on the
size and use of the cluster you want to configure, and the level of
availability you require.
4.3.1 Network Requirements for Each Node
Verify that servers where you install Oracle Grid Infrastructure meet the minimum
network requirements for installation.
• The host name of each node must use only the characters a-z, A-Z, 0-9, and the
dash or minus sign (-). Host names using underscores (_) are not supported.
• Each node must have at least two network adapters or network interface cards
(NICs): one for the public network interface, and one for the private network
interface, or the interconnect. Each network adapter has a network connection
name.

Note: Do not use the names PUBLIC and PRIVATE (all caps) for the public or
private (interconnect) network connection names.

• Network adapters must be at least 1 GbE, with 10 GbE recommended.
• If you plan to use Oracle ASM running in a different cluster for storage, then you
must either have a third network adapter for accessing the ASM network, or use
the same network adapter that is used for the private network interface.
4.3.2 Network Requirements for the Private Network
The following is a list of requirements for the private network configuration:
• The private network connection names must be different from the network
connection names used for the public network.
• Each node's private interface for interconnects must be on the same subnet.
For example, if the private interfaces have a subnet mask of 255.255.255.0, then
your private network is in the range 192.168.0.0 to 192.168.0.255, and your private
addresses must be in the range of 192.168.0.[0-255]. If the private interfaces have a
subnet mask of 255.255.0.0, then your private addresses can be in the range of
192.168.[0-255].[0-255].
• The private network connection name cannot contain any multibyte language
characters. The private network connection names are case-sensitive.
• Both IPv4 and IPv6 addresses are supported.
• If you use OUI to install Oracle Grid Infrastructure, then the private network
connection names associated with the private network adapters must be the same
on all nodes.
For example, if you have a two-node cluster, and PrivNIC is the private network
connection name for node1, then PrivNIC must be the private network
connection name for node2.
• For the private network, the network adapters must use high-speed network
adapters and switches that support TCP/IP (minimum requirement is 1 Gigabit
Ethernet, 10 GbE recommended). Alternatively, use InfiniBand for the
interconnect.

Note:
TCP is the interconnect protocol for Oracle Clusterware. You must use a
switch for the interconnect. Oracle recommends that you use a dedicated
switch.
Oracle does not support token-rings or crossover cables for the interconnect.

• For the private network adapters, the endpoints of all designated network
connection names must be completely reachable on the network. There should be
no node that is not connected to every other node on the private network. You can
test if an interconnect interface is reachable using ping.
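For example, from node1 you can check that the private address of node2 responds,
where node2-priv is an example private host name:

C:\> ping node2-priv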
4.3.3 Network Requirements for the Public Network
The following is a list of requirements for the public network configuration:
• The public network connection names must be different from the private network
connection names.
• Public network connection names are case-sensitive.
• The public network connection name cannot contain any multibyte language
characters.
• If you use OUI to install Oracle Grid Infrastructure, then the public network
connection names associated with the public network adapters for each network
must be the same on all nodes.
For example, if you have a two-node cluster, you cannot configure network
adapters on node1 with NIC1 as the public network connection name and on
node2 have NIC2 as the public network connection name. Public network
connection names must be the same, so you must configure NIC1 as the public
network connection name on both nodes.
• For the public network, each network adapter must support transmission control
protocol and internet protocol (TCP/IP).
• The network adapters must use high-speed network adapters and switches that
support TCP/IP (minimum requirement is 1 Gigabit Ethernet, 10 GbE
recommended).
4.3.4 IPv6 Protocol Support for Windows
Oracle Grid Infrastructure and Oracle RAC support the standard IPv6 address
notations specified by RFC 2732 and global and site-local IPv6 addresses as defined by
RFC 4193.
Cluster member node interfaces can be configured to use IPv4, IPv6, or both types of
Internet protocol addresses. However, be aware of the following:
• Configuring public VIPs: During installation, you can configure VIPs for a given
public network as IPv4 or IPv6 types of addresses. You can configure an IPv6
cluster by selecting VIP and SCAN names that resolve to addresses in an IPv6
subnet for the cluster, and selecting that subnet as public during installation. After
installation, you can also configure cluster member nodes with a mixture of IPv4
and IPv6 addresses.
If you install using static virtual IP (VIP) addresses in an IPv4 cluster, then the VIP
names you supply during installation should resolve only to IPv4 addresses. If
you install using static IPv6 addresses, then the VIP names you supply during
installation should resolve only to IPv6 addresses.
During installation, you cannot configure the cluster with VIP and SCAN names
that resolve to both IPv4 and IPv6 addresses. For example, you cannot configure
VIPs and SCANs on some cluster member nodes to resolve to IPv4 addresses, and
VIPs and SCANs on other cluster member nodes to resolve to IPv6 addresses.
Oracle does not support this configuration.
• Configuring private IP interfaces (interconnects): You can configure a network
interface to use either the IPv4 protocol, or the IPv6 protocol on a given network.
• Redundant network interfaces: If you configure redundant network interfaces for
a public or VIP node name, then configure both interfaces of a redundant pair to
the same address protocol. Also ensure that private IP interfaces use the same IP
protocol. Oracle does not support configuring one interface to support IPv4
addresses and the other to support IPv6 addresses. You must configure both
network interfaces of a redundant pair with the same IP protocol.
• GNS or Multi-cluster addresses: Oracle Grid Infrastructure supports IPv4 DHCP
addresses, and IPv6 addresses configured with the Stateless Address
Autoconfiguration protocol, as described in RFC 2462. Run the srvctl config
network command to determine if DHCP or stateless address autoconfiguration
is being used.
Note: Link-local and site-local IPv6 addresses as defined in RFC 1884 are not
supported.
See Also:
• RFC 2732 for information about IPv6 notational representation
• RFC 3513 for information about proper IPv6 addressing
• RFC 2462 for information about IPv6 Stateless Address Autoconfiguration
protocol
• Oracle Database Net Services Administrator's Guide for more information
about network communication and IP address protocol options
4.3.5 Using Multiple Public Network Adapters
You can configure multiple network adapters for the public network interface.
Oracle recommends that you do not identify multiple public network connection
names during Oracle Grid Infrastructure installation.
1. Use a third-party technology for your platform to aggregate the multiple public
network adapters before you start installation.
2. During installation, select the single network connection name for the combined
network adapters as the public interface.
If you configure two network adapters as public network adapters in the cluster
without using an aggregation technology, the failure of one public network adapter on
a node does not result in automatic VIP failover to the other public network adapter.
4.3.6 Network Configuration Tasks for Windows Server Deployments
Microsoft Windows Server has many unique networking features. Some of these
features require special configuration to enable Oracle software to run correctly on
Windows Server.
Disabling Windows Media Sensing (page 4-10)
Windows Media Sensing must be disabled for the private network
adapters.
Setting the Bind Order for Network Adapters (page 4-10)
In Windows Networking Properties, the public network connection on
each node must be listed first in the bind order (the order in which
network services access the node). The private network connection
should be listed second.
Deconfigure DNS Registration for Public Network Adapter (page 4-11)
To prevent Windows Server from potentially registering the wrong IP
addresses for the node in DNS after a server restart, you must
deconfigure the "Register this connection's addresses in DNS" option for
the public network adapters.
Manually Configure Automatic Metric Values (page 4-11)
The Automatic Metric feature automatically configures the metric for
local routes based on link speed. To prevent OUI from selecting
the wrong network interface during installation, you must customize the
metric values for the public and private network interfaces.
Setting UDP and TCP Dynamic Port Range for Oracle RAC Installations
(page 4-12)
For certain configurations of Oracle RAC in high load environments it is
possible for the system to exhaust the available number of sockets. To
avoid this problem, expand the dynamic port range for both UDP and
TCP.
4.3.6.1 Disabling Windows Media Sensing
Windows Media Sensing must be disabled for the private network adapters.
To disable Windows Media Sensing for TCP/IP, you must set the value of the
DisableDHCPMediaSense parameter to 1 on each node. Because you must modify
the Windows registry to disable Media Sensing, you should first back up the registry
and confirm that you can restore it, using the methods described in your Windows
documentation.
1. Back up the Windows registry.
2. Use Registry Editor to view the following key in the registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
3. Add a new DWORD value to the Parameters subkey:
Value Name: DisableDHCPMediaSense
Value: 1
4. Exit the Registry Editor and restart the computer.
5. Repeat steps 1 through 4 on each node in your cluster.
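As an alternative to using Registry Editor interactively, you can apply the same value
from an elevated command prompt with the reg command; verify the key path against
your system before running it:

C:\> reg add HKLM\System\CurrentControlSet\Services\Tcpip\Parameters /v DisableDHCPMediaSense /t REG_DWORD /d 1 /f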
4.3.6.2 Setting the Bind Order for Network Adapters
In Windows Networking Properties, the public network connection on each node
must be listed first in the bind order (the order in which network services access the
node). The private network connection should be listed second.
The names used for each class of network adapter (such as public) must be consistent
across all nodes. You can use nondefault names for the network adapter, for example,
PublicLAN, if the same names are used for the same class of network adapters on
each node in the network.
1. Right click My Network Places and choose Properties.
2. In the Advanced menu, click Advanced Settings.
3. If the public network connection name is not the first name listed under the
Adapters and Bindings tab, then select it and click the arrow to move it to the top
of the list.
4. Click OK to save the settings and then exit the network setup dialog.
4.3.6.3 Deconfigure DNS Registration for Public Network Adapter
To prevent Windows Server from potentially registering the wrong IP addresses for
the node in DNS after a server restart, you must deconfigure the "Register this
connection's addresses in DNS" option for the public network adapters.
Due to a change in functionality in the Windows Server 2008 operating system, the
DNS client service registers all of the network connections of a computer in DNS. In
earlier versions of Windows Server, the DNS client service registered only the
primary, or first, network adapter IP address in DNS.
1. Start the Windows Server Manager application.
2. Select View Network Connections.
3. Right-click the network adapter that provides the Public network interface and
select Properties.
4. Select the Networking tab, and then select Internet Protocol Version 4 (TCP/IPv4).
Note: If you configure this setting in IPv4, then Windows automatically
configures the same setting for IPv6.
5. Click Properties.
6. On the General tab, click Advanced.
7. Select the DNS tab.
8. Deselect Register this connection's addresses in DNS.
See Also: "Best Practices Analyzer for Domain Name System: Configuration"
on Microsoft Technet, specifically DNS Best Practices
4.3.6.4 Manually Configure Automatic Metric Values
The Automatic Metric feature automatically configures the metric for local routes
based on link speed. To prevent OUI from selecting the wrong network interface
during installation, you must customize the metric values for the public and private
network interfaces.
The Automatic Metric feature is enabled by default, and it can also be manually
configured to assign a specific metric. The public and private network interface for
IPv4 use the Automatic Metric feature of Windows. When the Automatic Metric
feature is enabled and using the default values, it can sometimes cause OUI to select
the private network interface as the default public host name for the server when
installing Oracle Grid Infrastructure.
1. In Control Panel, double-click Network Connections.
2. Right-click a network interface, and then click Properties.
3. Click Internet Protocol (TCP/IP), and then click Properties.
4. On the General tab, click Advanced.
5. To specify a metric, on the IP Settings tab, click to clear the Automatic metric check
box.
6. In the Interface Metric field, set the public network interface metric to a lower value
than the private network interface.
For example, you might set the public network interface metric to 100 and the
private network interface metric to 300.
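If you prefer to script this change, commands similar to the following might be used
on each node; the connection names Public and Private are placeholders for your
actual network connection names:

C:\> netsh interface ipv4 set interface "Public" metric=100
C:\> netsh interface ipv4 set interface "Private" metric=300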
4.3.6.5 Setting UDP and TCP Dynamic Port Range for Oracle RAC Installations
For certain configurations of Oracle RAC in high load environments it is possible for
the system to exhaust the available number of sockets. To avoid this problem, expand
the dynamic port range for both UDP and TCP.
1. Open a command line window as an Administrator user.
2. Run the following commands to set the dynamic port range:
netsh int ipv4 set dynamicport udp start=9000 num=56000
netsh int ipv4 set dynamicport tcp start=9000 num=56000
3. Run the following commands to verify that the dynamic port range was set:
netsh int ipv4 show dynamicport udp
netsh int ipv4 show dynamicport tcp
For IPv6 networks, replace ipv4 with ipv6 in the preceding commands.
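For example, the equivalent IPv6 commands to set the dynamic port range are:

netsh int ipv6 set dynamicport udp start=9000 num=56000
netsh int ipv6 set dynamicport tcp start=9000 num=56000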
4.3.7 Network Interface Configuration Options for Performance
The precise configuration you choose for your network depends on the size and use of
the cluster you want to configure, and the level of availability you require.
If you access Oracle ASM remotely, or a certified Network-attached Storage (NAS) is
used for Oracle RAC and this storage is connected through Ethernet-based networks,
then you must have a third network interface for data communications. Failing to
provide three separate network interfaces in this case can cause performance and
stability problems under heavy system loads.
4.4 Oracle Grid Infrastructure IP Name and Address Requirements
The Oracle Grid Naming Service (GNS) is used with large clusters to ease network
administration cost.
For small clusters, you can use a static configuration of IP addresses. For large clusters,
manually maintaining the large number of required IP addresses becomes too
cumbersome.
About Oracle Grid Infrastructure Name Resolution Options (page 4-13)
Before starting the installation, you must have at least two interfaces
configured on each node: One for the private IP address and one for the
public IP address.
Cluster Name and SCAN Requirements (page 4-14)
Review this information before selecting the cluster name and SCAN.
IP Name and Address Requirements For Grid Naming Service (GNS)
(page 4-15)
The network administration must configure the domain name server
(DNS) to delegate resolution requests for cluster names (any names in
the subdomain delegated to the cluster) to the GNS.
IP Name and Address Requirements For Multi-Cluster GNS (page 4-15)
Multi-cluster GNS differs from standard GNS in that Multi-cluster GNS
provides a single networking service across a set of clusters, rather than
a networking service for a single cluster.
IP Address Requirements for Manual Configuration (page 4-17)
If you do not enable GNS, then the public and VIP addresses for each
node must be static IP addresses. Public, VIP and SCAN addresses must
be on the same subnet.
Confirming the DNS Configuration for SCAN (page 4-18)
You can use the nslookup command to confirm that the DNS is
correctly associating the SCAN with the addresses.
Grid Naming Service for a Traditional Cluster Configuration Example
(page 4-18)
To use GNS, you must specify a static IP address for the GNS VIP
address, and you must have a subdomain configured on your domain
name servers (DNS) to delegate resolution for that subdomain to the
static GNS IP address.
Domain Delegation to Grid Naming Service (page 4-19)
If you are configuring Grid Naming Service (GNS) for a standard cluster,
then before installing Oracle Grid Infrastructure you must configure
DNS to send to GNS any name resolution requests for the subdomain
served by GNS.
Manual IP Address Configuration Example (page 4-21)
If you choose not to use GNS, then before installation you must
configure public, virtual, and private IP addresses. Also, check that the
default gateway can be accessed by a ping command.
4.4.1 About Oracle Grid Infrastructure Name Resolution Options
Before starting the installation, you must have at least two interfaces configured on
each node: One for the private IP address and one for the public IP address.
You can configure IP addresses for Oracle Grid Infrastructure and Oracle RAC with
one of the following options:
• Dynamic IP address assignment using Multi-cluster or standard Oracle Grid
Naming Service (GNS). If you select this option, then network administrators
delegate a subdomain to be resolved by GNS (standard or multicluster).
Requirements for GNS are different depending on whether you choose to
configure GNS with zone delegation (resolution of a domain delegated to GNS),
or without zone delegation (a GNS virtual IP address without domain delegation).
• For GNS with zone delegation:
– For IPv4, a DHCP service running on the public network the cluster uses
– For IPv6, an autoconfiguration service running on the public network the
cluster uses
– Enough DHCP addresses to provide 1 IP address for each node, and 3 IP
addresses for the cluster used by the Single Client Access Name (SCAN) for
the cluster
• Use an existing GNS configuration. Starting with Oracle Grid Infrastructure 12c
Release 1 (12.1), a single GNS instance can be used by multiple clusters. To use
GNS for multiple clusters, the DNS administrator must have delegated a zone for
use by GNS. Also, there must be an instance of GNS started somewhere on the
network and the GNS instance must be accessible (not blocked by a firewall). All
of the node names registered with the GNS instance must be unique.
• Static IP address assignment using DNS or host file resolution. If you select this
option, then network administrators assign a fixed IP address for each physical
host name in the cluster and for IPs for the Oracle Clusterware managed VIPs. In
addition, domain name system (DNS)-based static name resolution is used for
each node, or host files for both the clusters and clients have to be updated, and
SCAN functionality is limited. Selecting this option requires that you request
network administration updates when you modify the cluster.
Note:
• Oracle recommends that you use a static host name for all non-VIP server
node public host names.
• Public IP addresses and virtual IP addresses must be in the same subnet.
• Oracle only supports DHCP-assigned networks for the default network,
not for any subsequent networks.
For clusters using single interfaces for private networks, each node's private interface
for interconnects must be on the same subnet, and that subnet must connect to every
node of the cluster. For example, if the private interfaces have a subnet mask of
255.255.255.0, then your private network is in the range 192.168.0.0 to 192.168.0.255, and
your private addresses must be in the range of 192.168.0.[0-255]. If the private
interfaces have a subnet mask of 255.255.0.0, then your private addresses can be in the
range of 192.168.[0-255].[0-255].
4.4.2 Cluster Name and SCAN Requirements
Review this information before selecting the cluster name and SCAN.
Cluster Name and SCAN Requirements
Cluster Name must meet the following requirements:
• The cluster name is case-insensitive, must be unique across your enterprise, must
be at least one character long and no more than 15 characters in length, must be
alphanumeric, cannot begin with a numeral, and may contain hyphens (-).
Underscore characters (_) are not allowed.
• The SCAN and cluster name are entered in separate fields during installation, so
cluster name requirements do not apply to the name used for the SCAN, and the
SCAN can be longer than 15 characters. If you enter a domain with the SCAN
name, and you want to use GNS with zone delegation, then the domain must be
the GNS domain.

Note: Select your cluster name carefully. After installation, you can only
change the cluster name by reinstalling Oracle Grid Infrastructure.
4.4.3 IP Name and Address Requirements For Grid Naming Service (GNS)
The network administration must configure the domain name server (DNS) to
delegate resolution requests for cluster names (any names in the subdomain delegated
to the cluster) to the GNS.
If you enable Grid Naming Service (GNS), then name resolution requests to the cluster
are delegated to the GNS, which listens on the GNS VIP address. When a request
comes to the domain, GNS processes the requests and responds with the appropriate
addresses for the name requested. To use GNS, you must specify a static IP address for
the GNS VIP address.
Note: You cannot use GNS with another multicast DNS. To use GNS, disable
any third-party mDNS daemons on your system.
Related Topics:
Configuring DNS for Domain Delegation to Grid Naming Service (page 4-20)
4.4.4 IP Name and Address Requirements For Multi-Cluster GNS
Multi-cluster GNS differs from standard GNS in that Multi-cluster GNS provides a
single networking service across a set of clusters, rather than a networking service for
a single cluster.
About Multi-Cluster GNS Networks (page 4-15)
The general requirements for multi-cluster GNS are similar to those for
standard GNS. Multi-cluster GNS differs from standard GNS in that
multi-cluster GNS provides a single networking service across a set of
clusters, rather than a networking service for a single cluster.
Configuring GNS Server Clusters (page 4-16)
Review these requirements to configure GNS server clusters.
Configuring GNS Client Clusters (page 4-16)
Review these requirements to configure GNS client clusters.
Creating and Using a GNS Client Data File (page 4-16)
Generate a GNS client data file and copy the file to the GNS client cluster
member node on which you are running the Oracle Grid Infrastructure
installation.
4.4.4.1 About Multi-Cluster GNS Networks
The general requirements for multi-cluster GNS are similar to those for standard GNS.
Multi-cluster GNS differs from standard GNS in that multi-cluster GNS provides a
single networking service across a set of clusters, rather than a networking service for
a single cluster.
Requirements for Multi-Cluster GNS Networks
To provide networking service, multi-cluster Grid Naming Service (GNS) is
configured using DHCP addresses, and name advertisement and resolution is carried
out with the following components:
• The GNS server cluster performs address resolution for GNS client clusters. A
GNS server cluster is the cluster where multi-cluster GNS runs, and where name
resolution takes place for the subdomain delegated to the set of clusters.
• GNS client clusters receive address resolution from the GNS server cluster. A
GNS client cluster is a cluster that advertises its cluster member node names using
the GNS server cluster.
• If you choose to use GNS, then the GNS configured at the time of installation is
the primary. A secondary GNS for high availability can be configured at a later
time.
4.4.4.2 Configuring GNS Server Clusters
Review these requirements to configure GNS server clusters.
To use this option, your network administrators must have delegated a subdomain to
GNS for resolution.
1. Before installation, create a static IP address for the GNS VIP address.
2. Provide a subdomain that your DNS servers delegate to that static GNS IP address
for resolution.
4.4.4.3 Configuring GNS Client Clusters
Review these requirements to configure GNS client clusters.
• To configure a GNS Client cluster, check to ensure all of the following
requirements are completed:
– A GNS Server instance must be running on your network, and it must be
accessible (for example, not blocked by a firewall)
– All of the node names in the GNS domain must be unique; address ranges
and cluster names must be unique for both GNS Server and GNS Client
clusters.
– You must have a GNS Client data file that you generated on the GNS Server
cluster, so that the GNS Client cluster has the information needed to delegate
its name resolution to the GNS Server cluster, and you must have copied that
file to the GNS Client cluster member node on which you run the Oracle Grid
Infrastructure installation.
4.4.4.4 Creating and Using a GNS Client Data File
Generate a GNS client data file and copy the file to the GNS client cluster member
node on which you are running the Oracle Grid Infrastructure installation.
1. On a GNS Server cluster member, run the following command, where path_to_file is
the name and path location of the GNS Client data file you create:
srvctl export gns -clientdata path_to_file -role {client | secondary}
For example:
C:\> srvctl export gns -clientdata C:\Users\grid\gns_client_data -role client
2. Copy the GNS Client data file to a secure path on the GNS Client node where you
run the GNS Client cluster installation.
The Oracle Installation user must have permissions to access that file. Oracle
recommends that no other user is granted permissions to access the GNS Client
data file.
3. During installation, you are prompted to provide a path to that file.
4. After you have completed the GNS Client cluster installation, you must run the
following command on one of the GNS Server cluster members to start GNS
service, where path_to_file is the name and path location of the GNS Client data file:
srvctl add gns -clientdata path_to_file
For example:
C:\> srvctl add gns -clientdata C:\Users\grid\gns_client_data
See Also: Oracle Clusterware Administration and Deployment Guide for more
information about GNS Server and GNS Client administration
4.4.5 IP Address Requirements for Manual Configuration
If you do not enable GNS, then the public and VIP addresses for each node must be
static IP addresses. Public, VIP and SCAN addresses must be on the same subnet.
IP addresses on the subnet you identify as private are assigned as private IP addresses
for cluster member nodes. Oracle Clusterware manages private IP addresses in the
private subnet. You do not have to configure these addresses manually in a hosts file.
The cluster must have the following addresses configured:
• A public IP address for each node configured before installation, and resolvable to
that node before installation
• A VIP address for each node configured before installation, but not currently in
use
• Three static IP addresses configured on the domain name server (DNS) before
installation so that the three IP addresses are associated with the name provided
as the SCAN, and all three addresses are returned in random order by the DNS to
the requestor. These addresses must be configured before installation in the DNS
to resolve to addresses that are not currently in use. The SCAN name must meet
the requirements specified in "Cluster Name and SCAN Requirements
(page 4-14)"
• A private IP address for each node configured before installation, but on a
separate, private network, with its own subnet. The IP address should not be
resolvable except by other cluster member nodes.
• A set of one or more networks over which Oracle ASM serves its clients. The ASM
network does not have to be a physical network; it can be a virtual network. The
ASM network must use either a third NIC, or share a private network adapter.
The NIC can be a virtual NIC.
Note:
Oracle strongly recommends that you do not configure SCAN VIP addresses
in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to
resolve SCANs, then you will only be able to resolve to one IP address and
you will have only one SCAN address.
Configuring SCANs in a DNS or a hosts file is the only supported
configuration. Configuring SCANs in a Network Information Service (NIS) is
not supported.
Related Topics:
Understanding Network Addresses (page 4-2)
4.4.6 Confirming the DNS Configuration for SCAN
You can use the nslookup command to confirm that the DNS is correctly associating
the SCAN with the addresses.
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
• At a command prompt, use the nslookup command and specify the name of the
SCAN for your cluster.
For example:
C:\> nslookup mycluster-scan
Server:    dns3.example.com
Address:   192.0.2.001

Name:    mycluster-scan.example.com
Address: 192.0.2.201
Name:    mycluster-scan.example.com
Address: 192.0.2.202
Name:    mycluster-scan.example.com
Address: 192.0.2.203
4.4.7 Grid Naming Service for a Traditional Cluster Configuration Example
To use GNS, you must specify a static IP address for the GNS VIP address, and you
must have a subdomain configured on your domain name servers (DNS) to delegate
resolution for that subdomain to the static GNS IP address.
As nodes are added to the cluster, your organization's DHCP server can provide
addresses for these nodes dynamically. These addresses are then registered
automatically in GNS, and GNS provides resolution within the subdomain to cluster
node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with
GNS, no further configuration is required. Oracle Clusterware provides dynamic
network configuration as nodes are added to or removed from the cluster. The
following example is provided only for information.
With IPv6 networks, the IPv6 auto configuration feature assigns IP addresses and no
DHCP server is required.
Assuming a two node cluster where you have defined the GNS VIP, after installation
you might have a configuration similar to that shown in the following table. In this
configuration, the cluster name is mycluster, the GNS parent domain is
gns.example.com, the subdomain is cluster01.example.com, the 192.0.2
portion of the IP addresses represents the cluster public IP address subdomain, and
192.168.0 represents the private IP address subdomain.
Table 4-1 Example of a Grid Naming Service Network Configuration

GNS VIP
  Home Node: None. Host Node: Selected by Oracle Clusterware.
  Given Name: mycluster-gns-vip.example.com. Type: Virtual.
  Address: 192.0.2.1. Address Assigned By: Fixed by network administrator.
  Resolved By: DNS.

Node 1 Public
  Home Node: Node 1. Host Node: node1. Given Name: node1 (see note below).
  Type: Public. Address: 192.0.2.101. Address Assigned By: Fixed. Resolved By: GNS.

Node 1 VIP
  Home Node: Node 1. Host Node: Selected by Oracle Clusterware.
  Given Name: node1-vip. Type: Virtual. Address: 192.0.2.104.
  Address Assigned By: DHCP. Resolved By: GNS.

Node 1 Private
  Home Node: Node 1. Host Node: node1. Given Name: node1-priv. Type: Private.
  Address: 192.168.0.1. Address Assigned By: Fixed or DHCP. Resolved By: GNS.

Node 2 Public
  Home Node: Node 2. Host Node: node2. Given Name: node2 (see note below).
  Type: Public. Address: 192.0.2.102. Address Assigned By: Fixed. Resolved By: GNS.

Node 2 VIP
  Home Node: Node 2. Host Node: Selected by Oracle Clusterware.
  Given Name: node2-vip. Type: Virtual. Address: 192.0.2.105.
  Address Assigned By: DHCP. Resolved By: GNS.

Node 2 Private
  Home Node: Node 2. Host Node: node2. Given Name: node2-priv. Type: Private.
  Address: 192.168.0.2. Address Assigned By: Fixed or DHCP. Resolved By: GNS.

SCAN VIP 1
  Home Node: none. Host Node: Selected by Oracle Clusterware.
  Given Name: mycluster-scan.cluster01.example.com. Type: Virtual.
  Address: 192.0.2.201. Address Assigned By: DHCP. Resolved By: GNS.

SCAN VIP 2
  Home Node: none. Host Node: Selected by Oracle Clusterware.
  Given Name: mycluster-scan.cluster01.example.com. Type: Virtual.
  Address: 192.0.2.202. Address Assigned By: DHCP. Resolved By: GNS.

SCAN VIP 3
  Home Node: none. Host Node: Selected by Oracle Clusterware.
  Given Name: mycluster-scan.cluster01.example.com. Type: Virtual.
  Address: 192.0.2.203. Address Assigned By: DHCP. Resolved By: GNS.

Note: Node host names may resolve to multiple addresses, including VIP addresses
currently running on that host.
4.4.8 Domain Delegation to Grid Naming Service
If you are configuring Grid Naming Service (GNS) for a standard cluster, then before
installing Oracle Grid Infrastructure you must configure DNS to send to GNS any
name resolution requests for the subdomain served by GNS.
The subdomain that GNS serves represents the cluster member nodes.
Choosing a Subdomain Name for Use with Grid Naming Service (page 4-20)
To implement GNS, your network administrator must configure the
DNS to set up a domain for the cluster, and delegate resolution of that
domain to the GNS VIP. You can use a separate domain, or you can
create a subdomain of an existing domain for the cluster.
Configuring DNS for Domain Delegation to Grid Naming Service (page 4-20)
You must configure the DNS to send GNS name resolution requests
using DNS forwarders.
4.4.8.1 Choosing a Subdomain Name for Use with Grid Naming Service
To implement GNS, your network administrator must configure the DNS to set up a
domain for the cluster, and delegate resolution of that domain to the GNS VIP. You
can use a separate domain, or you can create a subdomain of an existing domain for
the cluster.
The subdomain name can be any supported DNS name, such as salescluster.rac.com.
Oracle recommends that the subdomain name be distinct from your corporate
domain. For example, if your corporate domain is mycorp.example.com, the
subdomain for GNS might be rac-gns.com.
If the subdomain is not distinct, then it should be for the exclusive use of GNS. For
example, if you delegate the subdomain mydomain.example.com to GNS, then there
should be no other domains that share it such as lab1.mydomain.example.com.
See Also:
• Oracle Clusterware Administration and Deployment Guide for more information about GNS

• Cluster Name and SCAN Requirements (page 4-14) for information about choosing network identification names
4.4.8.2 Configuring DNS for Domain Delegation to Grid Naming Service
You must configure the DNS to send GNS name resolution requests using DNS
forwarders.
If the DNS server is running on a Windows server that you administer, then the
following steps must be performed to configure DNS:
1. Click Start, then select Programs. Select Administrative Tools and then click DNS
manager.
The DNS server configuration wizard starts automatically.
2. Use the wizard to create an entry for the GNS virtual IP address, where the address
is a valid DNS name.
For example, if the cluster name is mycluster, and the domain name is
example.com, and the IP address is 192.0.2.1, you could create an entry similar to
the following:
mycluster-gns-vip.example.com: 192.0.2.1
The address you provide must be routable.
Note: The domain name may not contain underscores. Windows may allow
the use of underscore characters, but this practice violates the Internet
Engineering Task Force RFC 952 standard and is not supported by Oracle.
3. To configure DNS forwarders, click Start, select Administrative Tools, and then
select DNS.
4. Right-click ServerName, where ServerName is the name of the server, and then click
the Forwarders tab.
5. Click New, then type the name of the DNS domain for which you want to forward
queries in the DNS domain box, for example, clusterdomain.example.com.
Click OK.
6. In the selected domain's forwarder IP address box, type the GNS VIP address, and
then click Add.
7. Click OK to exit.
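If you prefer to script the delegation instead of using the DNS Manager interface, the Windows dnscmd utility can create a conditional forwarder zone from the command line. The following is only a minimal sketch: the zone name cluster01.example.com and the GNS VIP address 192.0.2.1 are example values taken from this chapter, and you must substitute your own subdomain and GNS VIP address.

C:\> rem Forward all name resolution requests for the GNS subdomain to the GNS VIP
C:\> dnscmd /ZoneAdd cluster01.example.com /Forwarder 192.0.2.1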
Note: Experienced DNS administrators may want to create a reverse lookup
zone to enable resolution of reverse lookups. A reverse lookup resolves an IP
address to a host name with a Pointer Resource (PTR) record. If you have
reverse DNS zones configured, then you can automatically create associated
reverse records when you create your original forward record.
See Also: Oracle Grid Infrastructure Installation Guide for your platform or
your operating system documentation for more information about configuring
DNS for domain delegation to GNS if the DNS server is running on a different
operating system.
4.4.9 Manual IP Address Configuration Example
If you choose not to use GNS, then before installation you must configure public,
virtual, and private IP addresses. Also, check that the default gateway can be accessed
by a ping command.
To find the default gateway, use the ipconfig command, as described in your
operating system's help utility.
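The following commands show one way to display the default gateway and confirm that it responds to a ping from a command prompt. The gateway address 192.0.2.1 is only a placeholder for the address reported on your system.

C:\> rem Display the default gateway configured for each adapter
C:\> ipconfig | findstr /C:"Default Gateway"
C:\> rem Confirm that the gateway responds
C:\> ping 192.0.2.1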
For example, with a two node cluster where the cluster name is mycluster, and each
node has one public and one private interface, and you have defined a SCAN domain
address to resolve on your DNS to one of three IP addresses, you might have the
configuration shown in the following table for your network interfaces.
Table 4-2   Manual Network Configuration Example

Identity       | Home Node | Host Node                      | Given Name     | Type    | Address     | Address Assigned By | Resolved By
Node 1 Public  | Node 1    | node1                          | node1 (1)      | Public  | 192.0.2.101 | Fixed               | DNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | node1-vip      | Virtual | 192.0.2.104 | Fixed               | DNS, hosts file
Node 1 Private | Node 1    | node1                          | node1-priv     | Private | 192.168.0.1 | Fixed               | DNS, hosts file, or none
Node 2 Public  | Node 2    | node2                          | node2 (1)      | Public  | 192.0.2.102 | Fixed               | DNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | node2-vip      | Virtual | 192.0.2.105 | Fixed               | DNS, hosts file
Node 2 Private | Node 2    | node2                          | node2-priv     | Private | 192.168.0.2 | Fixed               | DNS, hosts file, or none
SCAN VIP 1     | None      | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.201 | Fixed               | DNS
SCAN VIP 2     | None      | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.202 | Fixed               | DNS
SCAN VIP 3     | None      | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.203 | Fixed               | DNS

(1) Node host names may resolve to multiple addresses.
You do not have to provide a private name for the interconnect. If you want name resolution for the interconnect, then you can configure private IP names in the system hosts file or DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (Local Area Connection 2, for example), and on the subnet used for the private network.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so
they are not fixed to a particular node. To enable VIP failover, the configuration shown
in the previous table defines the SCAN addresses and the public and VIP addresses of
both nodes on the same subnet, 192.0.2.
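For reference, the following fragment shows what the hosts file entries (%SystemRoot%\system32\drivers\etc\hosts) might look like for the node addresses in Table 4-2, if you choose hosts file resolution for the public, VIP, and private names. This is only a sketch using the example names and addresses from this section; the SCAN name is intentionally omitted because Oracle recommends resolving the SCAN through DNS so that all three addresses can be returned.

# Public host names
192.0.2.101    node1.example.com    node1
192.0.2.102    node2.example.com    node2
# Virtual IP (VIP) names
192.0.2.104    node1-vip.example.com    node1-vip
192.0.2.105    node2-vip.example.com    node2-vip
# Private interconnect names (optional)
192.168.0.1    node1-priv
192.168.0.2    node2-priv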
Note: All host names must conform to the Internet Engineering Task Force
RFC 952 standard, which permits alphanumeric characters. Host names using
underscores ("_") are not allowed.
4.5 Intended Use of Network Adapters
During installation, you are asked to identify the planned use for each network
adapter (or network interface) that Oracle Universal Installer (OUI) detects on your
cluster node.
Each NIC performs only one of the following roles:
• Public

• Private

• Do Not Use
You must use the same private adapters for both Oracle Clusterware and Oracle RAC.
The precise configuration you choose for your network depends on the size and use of
the cluster you want to configure, and the level of availability you require. Network
interfaces must be at least 1 GbE, with 10 GbE recommended.
For network adapters that you plan to use for other purposes, for example, an adapter
dedicated to a non-Oracle network file system, you must identify those network
adapters as "do not use" adapters so that Oracle Clusterware ignores them.
If certified Network Attached Storage (NAS) is used for Oracle RAC and this storage is
connected through Ethernet-based networks, then you must have a third network
interface for NAS I/O. Failing to provide three separate interfaces in this case can
cause performance and stability problems under load.
Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or
use its own dedicated private networks. If you plan to have other clusters (Oracle
ASM client clusters) access the storage in an Oracle Flex ASM cluster, then you should
configure a separate ASM network as a private network that connects the client
clusters to the Oracle Flex ASM cluster.
If you require high availability or load balancing for public adapters, then use a third-party solution. Typically, bonding, trunking, or similar technologies can be used for this purpose.
4.6 Broadcast Requirements for Networks Used by Oracle Grid
Infrastructure
Broadcast communications using Address Resolution Protocol (ARP) and User Datagram Protocol (UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure.
The broadcast must work across any configured VLANs as used by the public or
private interfaces.
When configuring public and private network interfaces for Oracle RAC, you must
enable ARP, which is necessary for VIP failover. Do not configure NOARP.
4.7 Multicast Requirements for Networks Used by Oracle Grid
Infrastructure
On each cluster member node the Oracle multicast DNS (mDNS) daemon uses
multicasting on all network interfaces to communicate with other nodes in the cluster.
Multicasting is required on the private interconnect. For this reason, at a minimum,
you must enable multicasting for the cluster for the following:
• Across the broadcast domain as defined for the private interconnect

• On the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24
You do not need to enable multicast communications across routers.
4.8 Configuring Multiple ASM Interconnects on Microsoft Windows
Platforms
When using multiple network interface cards for the Oracle Automatic Storage
Management (Oracle ASM) interconnect, you must enable the weakhostsend
network parameter.
Microsoft Windows versions prior to Windows Vista use the weak host send and
receive model. The TCP/IP stack in Windows Vista, Windows Server 2008, and later
Windows operating systems, supports the strong host send and receive model for
both IPv4 and IPv6 protocols by default. Oracle RAC nodes that use multiple network
interface cards (NICs) for the ASM interconnect must enable the weak host send by
specifically setting the weakhostsend parameter for all the private subnets.
If you do not enable the weakhostsend parameter, you might experience connection
issues on the ASM interconnects because the interconnect packets are being blocked or
discarded. It is not considered unsafe to enable weakhostsend because the
interconnect is on a private and isolated network. Enabling the weakhostsend
parameter allows all private NICs to send packets to multiple private subnets.
1. Open a command prompt window as Administrator.
2. Use the following commands to configure the send behavior for both IPv4 and
IPv6, on a per-interface basis, where interface1 and interface2 represent the names or
interface indexes assigned to the NIC adapters on the Oracle RAC nodes.
netsh interface [ipv4 | ipv6] set interface interface1 weakhostsend=enabled
netsh interface [ipv4 | ipv6] set interface interface2 weakhostsend=enabled
You can obtain the interface index of a NIC from the output of the command:
netsh interface [ipv4 | ipv6] show interface
3. Repeat the commands in Step 2 for all the private NICs, on each node in the cluster.
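For example, assuming the private NICs on a node are named Local Area Connection 2 and Local Area Connection 3 (hypothetical names; use the names or interface indexes reported by the show interface command on your nodes), the IPv4 form of these commands might look like the following:

netsh interface ipv4 set interface "Local Area Connection 2" weakhostsend=enabled
netsh interface ipv4 set interface "Local Area Connection 3" weakhostsend=enabled
netsh interface ipv4 show interface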
You can choose multiple interconnects either during installation or after installation
using the oifcfg setif command.
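For example, after installation you could register an additional ASM interconnect with a command similar to the following sketch, where Local Area Connection 3 and the 192.168.1.0 subnet are hypothetical values for the extra private network; oifcfg getif lists the interfaces currently registered.

oifcfg setif -global "Local Area Connection 3/192.168.1.0:asm"
oifcfg getif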
5
Configuring Users, Groups and
Environments for Oracle Grid Infrastructure
and Oracle RAC
You must configure certain users, groups, and environment settings used during
Oracle Grid Infrastructure for a Cluster and Oracle Real Application Clusters
installations.
Creating Installation Groups and Users for Oracle Grid Infrastructure and Oracle
RAC (page 5-1)
To install Oracle Grid Infrastructure and Oracle RAC, you must have an
installation user and optionally an Oracle Home User.
Standard Administration and Job Role Separation User Groups (page 5-7)
Oracle Grid Infrastructure uses various operating system groups. These
operating system groups are designated with the logical role of granting
operating system group authentication for administration system
privilege for Oracle Clusterware and Oracle ASM.
Configuring User Accounts (page 5-17)
When installing Oracle Grid Infrastructure for a cluster, you run the
installer software as an Administrator user. During installation, you can
specify an Oracle Home user.
Creating Oracle Software Directories (page 5-19)
During installation, you are prompted to provide a path to a home
directory to store Oracle Grid Infrastructure software.
Enabling Intelligent Platform Management Interface (IPMI) (page 5-24)
Intelligent Platform Management Interface (IPMI) provides a set of
common interfaces to computer hardware and firmware that system
administrators can use to monitor system health and manage the system.
Related Topics:
User Environment Configuration Checklist for Oracle Grid Infrastructure
(page 1-6)
Review the following environment checklist for all installations.
5.1 Creating Installation Groups and Users for Oracle Grid Infrastructure
and Oracle RAC
To install Oracle Grid Infrastructure and Oracle RAC, you must have an installation
user and optionally an Oracle Home User.
Note: During an Oracle Grid Infrastructure installation, both Oracle
Clusterware and Oracle Automatic Storage Management (Oracle ASM) are
installed. You no longer can have separate Oracle Clusterware installation
owners and Oracle ASM installation owners.
About the Oracle Installation User (page 5-2)
The Oracle Installation User can be either a local user or a domain user.
About the Oracle Home User for the Oracle Grid Infrastructure Installation
(page 5-3)
During installation of Oracle Grid Infrastructure, you can specify an
optional Oracle Home user associated with the Oracle Grid home.
About the Oracle Home User for the Oracle RAC Installation (page 5-4)
During installation of Oracle RAC, you can either use a Windows built-in account or specify an optional, non-Administrator user that is a
Windows domain user to be the Oracle Home User associated with the
Oracle RAC home.
When to Create an Oracle Home User (page 5-4)
You must create an Oracle Home User in certain circumstances.
Oracle Home User Configurations for Oracle Installations (page 5-6)
When the Oracle software installation completes, you will have one of
the following configurations:
Understanding the Oracle Inventory Directory and the Oracle Inventory Group
(page 5-7)
You must have a group whose members are given access to write to the
Oracle Inventory directory, which is the central inventory record of all
Oracle software installations on a server.
5.1.1 About the Oracle Installation User
The Oracle Installation User can be either a local user or a domain user.
To install the Oracle Grid Infrastructure or Oracle Database software, you must use
either a local or domain user. In either case, the Oracle Installation User must be an
explicit member of the Administrators group on all nodes of the cluster.
If you use a local user account for installing Oracle Grid Infrastructure or Oracle Real
Application Clusters (Oracle RAC), then:
• The user account must exist on all nodes in the cluster.

• The user name and password must be the same on all nodes.

• OUI displays a warning message.
If you use a domain user account for installing Oracle Grid Infrastructure or Oracle
Real Application Clusters (Oracle RAC), then:
• The domain user must be explicitly declared as a member of the local Administrators group on each node in the cluster. It is not sufficient if the domain user has inherited membership from another group.

• The user performing the installation must be in the same domain on each node. For example, you cannot use the DBADMIN\dba1 user on the first node and the RACDBA\dba1 user on the second node.

• A local user of the same name cannot exist on any of the nodes. For example, if you use RACDBA\dba1 as the installation user, none of the nodes can have a local NODE1\dba1 user account.
If you use different users to install Oracle Grid Infrastructure and Oracle RAC, then
the user that installs Oracle RAC must be a member of the ASMDBA and
ASMADMIN groups to access the Oracle Automatic Storage Management (Oracle
ASM) Disks.
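Because the Oracle Installation User must be an explicit member of the local Administrators group on every node, you can verify and, if necessary, grant that membership from an elevated command prompt on each node. The following is a sketch only; RACDBA\dba1 is the example domain user name used earlier in this section.

C:\> rem List the current members of the local Administrators group
C:\> net localgroup Administrators
C:\> rem Explicitly add the domain installation user on this node
C:\> net localgroup Administrators RACDBA\dba1 /add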
5.1.2 About the Oracle Home User for the Oracle Grid Infrastructure Installation
During installation of Oracle Grid Infrastructure, you can specify an optional Oracle
Home user associated with the Oracle Grid home.
For example, if you use an Administrator user named OraSys to install the software (the Oracle Installation user), then you can specify the ORADOMAIN\OraGrid domain user as the Oracle Home user for this installation. The specified Oracle Home domain user must exist before you install the Oracle Grid Infrastructure software.
The Oracle Home user for the Oracle Grid Infrastructure installation can be either the
Windows built-in account (LocalSystem) or an existing user. If you specify an existing
user as the Oracle Home user, then the Windows User Account you specify must be a
domain user or Group Managed Service Account (gMSA) user. When you use an
Oracle Home User, a secure wallet in Oracle Cluster Registry (created automatically)
stores the Oracle Home User name and password information. If you decide not to
create an Oracle Home user, then the Windows built-in account is used as Oracle
Home User.
Note: You cannot change the Oracle Home User after the installation is
complete. If you must change the Oracle Home User, then you must reinstall
the Oracle Grid Infrastructure software.
For Oracle Grid Infrastructure 12c release 12.1.0.1, if you choose the Oracle Grid
Infrastructure Management Repository option during installation, then use of an
Oracle Home user is mandatory. Similarly, if you perform a software-only installation
of Oracle Grid Infrastructure, then you must choose a Windows Domain User account
to configure the Oracle Grid Infrastructure Management Repository after installation.
During installation, the installer creates the software services and configures the
Access Control Lists (ACLs) based on the information you provided about the Oracle
Home User.
When you specify an Oracle Home user, the installer configures that user as the Oracle
Service user for all software services that run from the Oracle home. The Oracle
Service user is the operating system user that the Oracle software services run as, or
the user from which the services inherit privileges.
See Also:
Oracle Database Platform Guide for Microsoft Windows for more information
about the Oracle Home User, ACLs, and how database services run in Oracle
Home User account
5.1.3 About the Oracle Home User for the Oracle RAC Installation
During installation of Oracle RAC, you can either use a Windows built-in account or
specify an optional, non-Administrator user that is a Windows domain user to be the
Oracle Home User associated with the Oracle RAC home.
The Oracle Home User for Oracle RAC can be different from the Oracle Home User
you specified during the Oracle Grid Infrastructure installation. If a Windows domain
user account is chosen, then it should be an existing domain user account with no
administration privileges.
For Oracle RAC installations, Oracle recommends that you use a Windows domain
user (instead of Windows built-in account) as the Oracle Home User for enhanced
security.
The services created for the Oracle RAC software run using the privileges of the
Oracle Home User for Oracle RAC, or the Local System built-in Windows account if
you did not specify an Oracle Home User during installation. Oracle Universal
Installer (OUI) creates multiple operating system groups, such as the ORA_DBA group,
on all nodes. The user performing the installation is automatically added to those
groups necessary for proper database administration.
For an administrator-managed database, you have the option of storing Oracle Home
User password in a secure wallet (stored in Oracle Cluster Registry). Use the following
CRSCTL command to create this secure wallet for storing the Windows operating
system user name and password:
crsctl add wallet -osuser -passwd
If the wallet (stored in Oracle Cluster Registry) exists, then Oracle administration tools
automatically use the password from the wallet without prompting the administrator
to enter the password of Oracle Home User for performing administrative operations.
A policy-managed database mandates the storage of Oracle Home User password in
the wallet (stored in Oracle Cluster Registry). When a policy-managed database is
created, DBCA automatically creates the wallet, if one does not exist.
Note: If you choose to use an Oracle Home User for your Oracle RAC
installation, then the Windows User Account you specify must be a domain
user.
See Also: Oracle Database Platform Guide for Microsoft Windows for more
information about the Oracle Home User implementation for Oracle Database.
5.1.4 When to Create an Oracle Home User
You must create an Oracle Home User in certain circumstances.
• If an Oracle Home User exists, but you want to use a different operating system user, with different group membership, to give database administrative privileges to those groups in a new Oracle Database installation

• If you have created an Oracle Home User for Oracle Grid Infrastructure, such as grid, and you want to create a separate Oracle Home User for Oracle Database software, such as oracle
Restrictions and Guidelines for Oracle Home Users (page 5-5)
Review the following restrictions and guidelines for Oracle Home Users
for Oracle software installations.
Determining if an Oracle Home User Exists (page 5-5)
You must decide whether to use an existing user or create a new user.
Creating an Oracle Home User (page 5-6)
Use the Manage User Accounts window to create a new user.
Using an Existing Oracle Software Owner User (page 5-6)
If the user you have decided to use as an Oracle Home user exists, then
you can use this user as the Oracle Home user for a different installation.
5.1.4.1 Restrictions and Guidelines for Oracle Home Users
Review the following restrictions and guidelines for Oracle Home Users for Oracle
software installations.
• If you intend to use multiple Oracle Home Users for different Oracle Database homes, then Oracle recommends that you create a separate Oracle Home User for the Oracle Grid Infrastructure software (Oracle Clusterware and Oracle ASM).

• If you plan to install Oracle Database or Oracle RAC, then Oracle recommends that you create separate Oracle Home Users for the Oracle Grid Infrastructure and the Oracle Database installations. If you use one Oracle Home User, then when you want to perform administration tasks, you must select the utilities from the Oracle home for the instance you want to administer, or change the default %ORACLE_HOME% value to the location of the Oracle home from which the instance runs. For Oracle ASM instances, use the Oracle Grid Infrastructure home; for database instances, use the Oracle Database home.

• If you try to administer an Oracle home or Grid home instance using sqlplus, srvctl, lsnrctl, or asmcmd commands while the environment variable %ORACLE_HOME% is set to a different Oracle home or Grid home path, then you encounter errors. For example, when you start SRVCTL from a database home, %ORACLE_HOME% should be set to that database home, or SRVCTL fails. The exception is when you are using SRVCTL in the Oracle Grid Infrastructure home. In that case, SRVCTL ignores %ORACLE_HOME%, and the Oracle home environment variable does not affect SRVCTL commands. In all other cases, you must start the utilities from the Oracle home of the instance that you want to administer (see the example that follows this list).

  If you need to set the user environment to use a specific Oracle home, then use Oracle Universal Installer. On the landing page, click Installed Products. In the Inventory window, click the Environment tab. Select the Oracle Home you want to use, deselect the other Oracle homes, and then click Apply. You can then exit Oracle Universal Installer. When you use Oracle Universal Installer to set the Oracle home, it updates the ORACLE_HOME environment variable and updates the PATH variable.
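For example, the following commands show what setting the environment for a particular database home before running SRVCTL might look like in a command prompt session. The home path and the database name orcl are hypothetical placeholders; substitute the Oracle home path and unique database name used on your system.

C:\> rem Point the session at the database home whose instance you want to administer
C:\> set ORACLE_HOME=C:\app\oracle\product\12.2.0\dbhome_1
C:\> set PATH=%ORACLE_HOME%\bin;%PATH%
C:\> srvctl status database -db orcl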
5.1.4.2 Determining if an Oracle Home User Exists
You must decide whether to use an existing user or create a new user.
1. Open the Control Panel window.
2. Select User Accounts.
3. Select Manage User Accounts.
4. Scroll through the list of names until you find the ones you are looking for.
If the names do not appear in the list, then the user has not yet been created.
See one of the following sections for the next steps:
• Creating an Oracle Home User (page 5-6)

• Using an Existing Oracle Software Owner User (page 5-6)
5.1.4.3 Creating an Oracle Home User
Use the Manage User Accounts window to create a new user.
The user must not be a member of the Administrators group. If you are creating an
Oracle Home User for an Oracle RAC installation, then the user must be a Windows
domain user, and the user must be a member of the same domain on each node in the
cluster.
1. Open the Control Panel window.
2. Select User Accounts.
3. Select Manage User Accounts.
4. Create the user using the interface.
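If you prefer the command line, the net user command can show whether a candidate account already exists and, for a local account, create it. The account name oradba1 below is only an example; for an Oracle RAC installation the Oracle Home User must be a domain account, which is typically created on the domain controller rather than with a local command.

C:\> rem Check whether a domain account with this example name exists
C:\> net user oradba1 /domain
C:\> rem Create a local (non-domain) account, prompting for its password
C:\> net user oradba1 * /add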
See Also: Oracle Database Platform Guide for Microsoft Windows for information
about the Oracle Home User Control utility
5.1.4.4 Using an Existing Oracle Software Owner User
If the user you have decided to use as an Oracle Home user exists, then you can use
this user as the Oracle Home user for a different installation.
Oracle does not support changing the ownership of an existing Oracle Database home
from one Oracle Home user to a different user.
• During the software installation, specify the existing user for the Oracle Home user.
Oracle Universal Installer (OUI) creates the appropriate group memberships.
5.1.5 Oracle Home User Configurations for Oracle Installations
When the Oracle software installation completes, you will have one of the following
configurations:
Installation Type: Oracle Grid Infrastructure with a domain user specified for the Oracle Home User
Oracle Home user configuration: The Oracle Home user owns the Oracle Grid Infrastructure Management Repository service. The other services are run under the built-in Administrator account, except for the listeners, which run as LocalService (a built-in Windows account).

Installation Type: Oracle Grid Infrastructure with the Windows built-in Administrator account as the Oracle Home User
Oracle Home user configuration: The Oracle Grid Infrastructure services are run under the built-in Administrator account, except for the listeners, which run as LocalService.

Installation Type: Oracle RAC with a specified Oracle Home User
Oracle Home user configuration: The Oracle Home User owns all the services run by the Oracle Database software.

Installation Type: Oracle RAC with the built-in Oracle Home user
Oracle Home user configuration: The services run under the built-in LocalSystem account.
Note: You cannot change the Oracle Home User after installation to a
different Oracle Home User. Only out-of-place upgrade or move allows the
Oracle Home User to be changed to or from the built-in Windows account.
5.1.6 Understanding the Oracle Inventory Directory and the Oracle Inventory Group
You must have a group whose members are given access to write to the Oracle
Inventory directory, which is the central inventory record of all Oracle software
installations on a server.
When you install Oracle software on the system for the first time, Oracle Universal
Installer (OUI) creates the directories for the Oracle central inventory. OUI also creates
the Oracle Inventory group, ORA_INSTALL. The ORA_INSTALL group contains all the
Oracle Home users for all Oracle homes on the server. The location of the Oracle
central inventory on Windows is always %SYSTEM_DRIVE%\Program Files
\Oracle\Inventory.
Whether you are performing the first installation of Oracle software on this server, or
are performing an installation of additional Oracle software on the server, you do not
need to create the Oracle central inventory or the ORA_INSTALL group. You cannot
change the name of the Oracle Inventory group - it is always ORA_INSTALL.
Members of the Oracle Inventory group have write privileges to the Oracle central
inventory directory, and are also granted permissions for various Oracle Clusterware
resources, OCR keys, directories in the Oracle Clusterware home to which DBAs need
write access, and other necessary privileges. All Oracle software install users must be
members of the Oracle Inventory group. Members of this group can talk to Cluster
Synchronization Service (CSS).
Note: If Oracle software is already installed on the system, then, when you
install new Oracle software, the existing Oracle Inventory group is used
instead of creating a new Inventory group.
5.2 Standard Administration and Job Role Separation User Groups
Oracle Grid Infrastructure uses various operating system groups. These operating
system groups are designated with the logical role of granting operating system group
authentication for administration system privilege for Oracle Clusterware and Oracle
ASM.
About Job Role Separation Operating System Privileges Groups and Users
(page 5-8)
Job role separation requires that you create different operating system
groups for each set of system privileges that you grant through
operating system authorization.
Oracle Software Owner for Each Oracle Software Product (page 5-9)
Oracle recommends that you use appropriate operating system groups
and users for all installations where you specify separate Oracle Home
Users:
Standard Oracle Database Groups for Database Administrators (page 5-10)
The Oracle Database supports multiple operating system groups to
provide operating system authentication for database administration
system privileges.
Oracle ASM Groups for Job Role Separation (page 5-11)
The SYSASM, SYSOPER for ASM, and SYSDBA for ASM system privileges enable the separation of the Oracle ASM storage administration privileges from SYSDBA.
Extended Oracle Database Administration Groups for Job Role Separation
(page 5-12)
Oracle Database 12c Release 1 (12.1) and later releases provide an
extended set of database groups to grant task-specific system privileges
for database administration.
Operating System Groups Created During Installation (page 5-13)
When you install either Oracle Grid Infrastructure or Oracle RAC, the
user groups listed in the following table are created, if they do not
already exist.
Example of Using Role-Allocated Groups and Users (page 5-16)
You can use role-allocated groups and users that are compliant with an Optimal Flexible Architecture (OFA) deployment.
5.2.1 About Job Role Separation Operating System Privileges Groups and Users
Job role separation requires that you create different operating system groups for each
set of system privileges that you grant through operating system authorization.
With Oracle Grid Infrastructure job role separation, Oracle ASM has separate
operating system groups that provide operating system authentication for Oracle ASM
system privileges for storage tier administration. This operating system authentication
is separated from Oracle Database operating system authentication. In addition, the
Oracle Grid Infrastructure Installation user provides operating system user
authentication for modifications to Oracle Grid Infrastructure binaries.
With Oracle Database job role separation, each Oracle Database installation has
separate operating system groups. The operating system groups provide authorization
for system privileges on that Oracle Database, so multiple databases can be installed
on the cluster without sharing operating system authentication for system privileges.
In addition, each Oracle software installation is associated with an Oracle Installation
user, to provide operating system user authorization for modifications to Oracle
Database binaries.
Note: Any Oracle software owner can start and stop all databases and shared
Oracle Grid Infrastructure resources such as Oracle ASM or Virtual IP (VIP).
Job role separation configuration enables database security, and does not
restrict user roles in starting and stopping various Oracle Clusterware
resources.
During the Oracle Database installation, the installation creates the OSDBA, OSOPER,
OSBACKUPDBA, OSDGDBA, OSKMDBA, and OSRACDBA groups and you can
assign users to these groups. Members of these groups are granted operating system
authentication for the set of database system privileges each group authorizes. Oracle
recommends that you use different operating system groups for each set of system
privileges.
Note: This configuration is optional, to restrict user access to Oracle software
by responsibility areas for different administrator users.
To configure users for installation that are on a network directory service such as
Network Information Services (NIS), refer to your directory service documentation.
If you do not want to use role allocation groups, then Oracle strongly recommends
that you use at least two groups:
• A system privileges group whose members are granted administrative system privileges, including OSDBA, OSASM, and other system privileges groups.

• An installation owner group (the ORA_INSTALL group) whose members are granted Oracle installation owner system privileges.
See Also:
• Oracle Database Administrator’s Guide for more information about planning for system privileges authentication

• Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ASM operating system authentication
5.2.2 Oracle Software Owner for Each Oracle Software Product
Oracle recommends that you use appropriate operating system groups and users for
all installations where you specify separate Oracle Home Users:
Depending on your configuration, you can choose from the following operating system groups and users: separate Oracle Installation users for each Oracle software product (typically, oracle for the Oracle Database software and grid for the Oracle Grid Infrastructure software).
You must create at least one Oracle Installation user the first time you install Oracle
software on the system. This user owns the Oracle binaries of the Oracle Grid
Infrastructure software, and you can also use this same user as the Oracle Installation
user for the Oracle Database or Oracle RAC binaries.
The Oracle Installation user for Oracle Database software has full administrative
privileges for Oracle instances and is added to the ORA_DBA, ORA_ASMDBA,
ORA_HOMENAME_SYSBACKUP, ORA_HOMENAME_SYSDG, ORA_HOMENAME_SYSKM, and
ORA_HOMENAME_SYSRAC groups. Oracle Home users are added to the
ORA_HOMENAME_DBA group for the Oracle home created during the installation. The
ORA_OPER and ORA_HOMENAME_OPER groups are created, but no users are added to
these groups during installation.
See Also: Oracle Database Security Guide for more information about the
available operating system groups and the privileges associated with each
group
5.2.3 Standard Oracle Database Groups for Database Administrators
The Oracle Database supports multiple operating system groups to provide operating
system authentication for database administration system privileges.
OSDBA group (ORA_DBA)
When you install Oracle Database, a special Windows local group called ORA_DBA is
created (if it does not already exist from an earlier Oracle Database installation), and
the Oracle Installation user is automatically added to this group. Members of the
ORA_DBA group automatically receive the SYSDBA privilege. Membership in the
ORA_DBA group allows a user to:
• Connect to Oracle Database instances without a password

• Perform database administration procedures such as starting and shutting down local databases

• Add additional Windows users to ORA_DBA, enabling them to have the SYSDBA privilege
Membership in the ORA_DBA group grants full access to all databases on the server.
OSDBA group for a particular Oracle home (ORA_HOMENAME_DBA)
This group is created the first time you install Oracle Database software into a new
Oracle home. Membership in the ORA_HOMENAME_DBA group grants full access
(SYSDBA privileges) for all databases that run from the specific Oracle home.
Belonging to either the ORA_DBA or ORA_HOMENAME_DBA group does not grant any
special privileges for the user with respect to the Oracle ASM instance. Members of
these groups will not be able to connect to the Oracle ASM instance.
OSOPER group for Oracle Database (ORA_OPER)
This group is created the first time you install Oracle Database software into a new
Oracle home. This optional group identifies operating system user accounts that have
database administrative privileges (the SYSOPER system privilege) for the database
instances that run from any Oracle home. Assign users to this group if you want a
separate group of operating system users to have a limited set of database
administrative privileges for starting up and shutting down any Oracle database.
OSOPER group for a particular Oracle home (ORA_HOMENAME_OPER)
This group is created the first time you install Oracle Database software into a new
Oracle home. This optional group identifies operating system user accounts that have
database administrative privileges (the SYSOPER system privilege) for the database
instances that run from a specific Oracle home. Assign users to this group if you want
a separate group of operating system users to have a limited set of database
administrative privileges for starting up and shutting down any Oracle database
located in a specific Oracle home.
5.2.4 Oracle ASM Groups for Job Role Separation
The SYSASM, SYSOPER for ASM, and SYSDBA for ASM system privileges enable the separation of the Oracle ASM storage administration privileges from SYSDBA.
During installation, the following groups are created for Oracle ASM:
• OSASM Group for Oracle ASM Administration (ORA_ASMADMIN)

  Use this separate group to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. Members of this group are granted the SYSASM system privilege to administer Oracle ASM. In Oracle documentation, the operating system group whose members are granted privileges is called the OSASM group. During installation, the Oracle Installation User for Oracle Grid Infrastructure and the Oracle Database Service IDs are configured as members of this group. Membership in this group also grants database access to the ASM disks.

  Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM system privilege permits mounting and dismounting disk groups, and other storage administration tasks. SYSASM system privileges do not grant access privileges on an Oracle Database instance.

• OSDBA for ASM Database Administrator group (ORA_ASMDBA)

  This group grants access for the database to connect to Oracle ASM. During installation, the Oracle Installation Users are configured as members of this group. After you create an Oracle Database, this group contains the Oracle Home Users of those database homes.

• OSOPER for ASM Group for ASM Operators (ORA_ASMOPER)

  This is an optional group. Use this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM system privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM system privilege.

  To use the Oracle ASM Operator group to create an Oracle ASM administrator with fewer privileges than those granted by the SYSASM system privilege, you must assign the user to this group after installation.
Changes in Oracle ASM System Privileges When Upgrading to Oracle Grid
Infrastructure 12c Release 1 (12.1.0.2) (page 5-12)
When upgrading from Oracle Grid Infrastructure release 12.1.0.1 to
release 12.1.0.2, the upgrade process automatically updates the group
memberships and the disk ACLs for Oracle ASM privileges.
5.2.4.1 Changes in Oracle ASM System Privileges When Upgrading to Oracle Grid
Infrastructure 12c Release 1 (12.1.0.2)
When upgrading from Oracle Grid Infrastructure release 12.1.0.1 to release 12.1.0.2,
the upgrade process automatically updates the group memberships and the disk ACLs
for Oracle ASM privileges.
• The disk ACLs are updated to add ORA_ASMADMIN and remove ORA_ASMDBA.

• The database service SIDs are added to both ORA_ASMADMIN and ORA_ASMDBA.

• The Oracle Service user (typically the Oracle Home user) is added to ORA_ASMDBA.

These updates ensure that databases using either Oracle Database release 12.1.0.1 or release 12.1.0.2 can use Oracle ASM after the upgrade to Oracle Grid Infrastructure release 12.1.0.2.

If Oracle ASM is freshly installed as part of Oracle Grid Infrastructure 12c Release 1 (12.1.0.2), then only the 12.1.0.2 version of the privileges is configured:

• The database service SIDs are added to ORA_ASMADMIN.

• The Oracle Service user (typically the Oracle Home user) is added to ORA_ASMDBA.

• The disk ACLs are updated to include ORA_ASMADMIN.
Before you install Oracle Database 12c release 12.1.0.1 software on a system with a new
installation (not an upgraded installation) of Oracle Grid Infrastructure 12c Release 1
(12.1.0.2), you must apply a patch to ensure the proper privileges are configured when
you create an Oracle Database 12c release 12.1.0.1 database.
5.2.5 Extended Oracle Database Administration Groups for Job Role Separation
Oracle Database 12c Release 1 (12.1) and later releases provide an extended set of
database groups to grant task-specific system privileges for database administration.
The extended set of Oracle Database system privileges groups are task-specific and
less privileged than the ORA_DBA/SYSDBA system privileges. They are designed to
provide privileges to carry out everyday database operations. Users granted these
system privileges are also authorized through operating system group membership.
The installer automatically creates operating system groups whose members are
granted these system privileges. The subset of OSDBA job role separation privileges
and groups consist of the following:
• OSBACKUPDBA group for Oracle Database (ORA_HOMENAME_SYSBACKUP)

  Assign users to this group if you want a separate group of operating system users to have a limited set of database backup- and recovery-related administrative privileges (the SYSBACKUP privilege).

• OSDGDBA group for Oracle Data Guard (ORA_HOMENAME_SYSDG)

  Assign users to this group if you want a separate group of operating system users to have a limited set of privileges to administer and monitor Oracle Data Guard (the SYSDG privilege). To use this privilege, add the Oracle Database installation owners as members of this group.

• OSKMDBA group for encryption key management (ORA_HOMENAME_SYSKM)

  Assign users to this group if you want a separate group of operating system users to have a limited set of privileges for encryption key management such as Oracle Wallet Manager management (the SYSKM privilege). To use this privilege, add the Oracle Database installation owners as members of this group.

• OSRACDBA group for Oracle Real Application Clusters Administration (typically, ORA_HOMENAME_SYSRAC)

  Assign users to this group if you want a separate group of operating system users to have a limited set of Oracle Real Application Clusters (RAC) administrative privileges (the SYSRAC privilege). To use this privilege, add the Oracle Database installation owners as members of this group.
You cannot change the name of these operating system groups. These groups do not
have any members after database creation, but an Administrator user can assign users
to these groups after installation. Each operating system group identifies a group of
operating system users that are granted the associated set of database privileges.
See Also:
• Oracle Database Administrator’s Guide for more information about the OSDBA, OSASM, OSOPER, OSBACKUPDBA, OSDGDBA, OSKMDBA, and OSRACDBA groups, and the SYSDBA, SYSASM, SYSOPER, SYSBACKUP, SYSDG, SYSKM, and SYSRAC privileges

• The "Managing Administrative Privileges" section in Oracle Database Security Guide
5.2.6 Operating System Groups Created During Installation
When you install either Oracle Grid Infrastructure or Oracle RAC, the user groups
listed in the following table are created, if they do not already exist.
Table 5-1   Operating System Groups Created During Installation

ORA_ASMADMIN
System Privileges: SYSASM system privileges for Oracle ASM administration
Description: The OSASM group for the Oracle ASM instance. Using this group and the SYSASM system privileges enables the separation of SYSDBA database administration privileges from Oracle ASM storage administration privileges. Members of the OSASM group are authorized to connect using the SYSASM privilege and have full access to Oracle ASM, including administrative access to all disk groups that the Oracle ASM instance manages.

ORA_ASMDBA
System Privileges: SYSDBA system privileges on the Oracle ASM instance
Description: The OSDBA group for the Oracle ASM instance. This group grants access for the database to connect to Oracle ASM. During installation, the Oracle Installation Users are configured as members of this group. After you create an Oracle Database, this group contains the Oracle Home Users of those database homes.

ORA_ASMOPER
System Privileges: SYSOPER for Oracle ASM system privileges
Description: The OSOPER group for the Oracle ASM instance. Members of this group are granted SYSOPER system privileges on the Oracle ASM instance, which permits a user to perform operations such as startup, shutdown, mount, dismount, and check disk group. This group has a subset of the privileges of the OSASM group. Similar to the ORA_HOMENAME_OPER group, this group does not have any members after installation, but you can manually add users to this group after the installation completes.

ORA_GRIDHM_DBA
System Privileges: SYSDBA system privileges for the Oracle Grid Infrastructure Management Repository database
Description: Members of this group are granted the SYSDBA system privileges for managing the Oracle Grid Infrastructure Management Repository database, where GRIDHM is the name of the Oracle Grid Infrastructure home. The default home name is OraGrid12Home1, so the default group name is ORA_OraGrid12Home1_DBA.

ORA_GRIDHM_OPER
System Privileges: SYSOPER system privileges for the Oracle Grid Infrastructure Management Repository database
Description: Members of this group are granted the SYSOPER system privileges for managing the Oracle Grid Infrastructure Management Repository database, where GRIDHM is the name of the Oracle Grid Infrastructure home. If you use the default Grid home name of OraGrid12Home1, then the default operating system group name is ORA_OraGrid12Home1_OPER.

ORA_DBA
System Privileges: SYSDBA system privileges for all Oracle Database installations on the server
Description: A special OSDBA group for the Windows operating system. Members of this group are granted SYSDBA system privileges for all Oracle Databases installed on the server.

ORA_OPER
System Privileges: SYSOPER system privileges for all Oracle databases installed on the server
Description: A special OSOPER group for the Windows operating system. Members of this group are granted SYSOPER system privileges for all Oracle Databases installed on the server. This group does not have any members after installation, but you can manually add users to this group after the installation completes.

ORA_HOMENAME_DBA
System Privileges: SYSDBA system privileges for all database instances that run from the Oracle home with the name HOMENAME
Description: An OSDBA group for a specific Oracle Home with a name of HOMENAME. Members of this group can use operating system authentication to gain SYSDBA system privileges for any database that runs from the specific Oracle home. If you specified an Oracle Home User during installation, the user is added to this group during installation.

ORA_HOMENAME_OPER
System Privileges: SYSOPER system privileges for all database instances that run from the Oracle home with the name HOMENAME
Description: An OSOPER group for a specific Oracle Home with a name of HOMENAME. Members of this group can use operating system authentication to gain SYSOPER system privileges for any database that runs from the specific Oracle home. This group does not have any members after installation, but you can manually add users to this group after the installation completes.

ORA_HOMENAME_SYSBACKUP
System Privileges: SYSBACKUP system privileges for all database instances that run from the Oracle home with a name of HOMENAME
Description: The OSBACKUPDBA group for a specific Oracle Home with a name of HOMENAME. Members of this group have privileges necessary for performing database backup and recovery tasks on all database instances that run from the specified Oracle Home directory.

ORA_HOMENAME_SYSDG
System Privileges: SYSDG system privileges for all database instances that run from the Oracle home with a name of HOMENAME
Description: The OSDGDBA group for a specific Oracle Home with a name of HOMENAME. Members of this group have privileges necessary for performing Data Guard administrative tasks on all database instances that run from the specified Oracle Home directory.

ORA_HOMENAME_SYSKM
System Privileges: SYSKM system privileges for all database instances that run from the Oracle home with a name of HOMENAME
Description: The OSKMDBA group for a specific Oracle Home with a name of HOMENAME. Members of this group have privileges necessary for performing encryption key management tasks on all database instances that run from the specified Oracle Home directory.
During installation, the gridconfig.bat script creates the services and groups on
each node of the cluster. The installed files and permissions are owned by the Oracle
Installation user, and require the Administrator privilege.
Oracle creates and populates the groups listed in this table during installation to
ensure proper operation of Oracle products. You can manually add other users to
these groups to assign these database privileges to other Windows users.
Members of the ORA_DBA group can use operating system authentication to
administer all Oracle databases installed on the server. Members of the
ORA_HOMENAME_DBA, where HOMENAME is the name of a specific Oracle
installation, can use operating system authentication to manage only the databases
that run from that Oracle home.
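For example, to review the membership of one of these groups, or to grant a Windows user the corresponding database privileges, you might run commands similar to the following from an elevated command prompt. The ORA_DBA group exists after installation; the user name EXAMPLEDOMAIN\dbuser1 is only a placeholder.

C:\> rem List the current members of the ORA_DBA group
C:\> net localgroup ORA_DBA
C:\> rem Add a Windows user to the ORA_DBA group
C:\> net localgroup ORA_DBA EXAMPLEDOMAIN\dbuser1 /add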
Related Topics:
Standard Administration and Job Role Separation User Groups (page 5-7)
5.2.7 Example of Using Role-Allocated Groups and Users
You can use role-allocated groups and users that are compliant with an Optimal Flexible Architecture (OFA) deployment.
Assumptions:
• The user installing the Oracle Grid Infrastructure software is named RACDOMAIN\grid. This user was created before starting the installation.

  The option to use the Windows Built-in Account was selected for the Oracle Home user for Oracle Grid Infrastructure.

• The name of the home directory for the Oracle Grid Infrastructure installation is OraGrid12c.

• The user installing the Oracle RAC software is named oracle. This user was created before starting the installation.

  During installation of Oracle RAC, an Oracle Home user named RACDOMAIN\oradba1 is specified. The oradba1 user is a Windows domain user that was created before the installation was started.

  The name of the Oracle home for the Oracle RAC installation is OraRAC12c_home1.

• You have a second Oracle Database installation (not Oracle RAC) on this server. The installation was performed by the oracle user. The Oracle Home user is oradba2, and this user was not created before starting the installation.

  The Oracle Home name is OraDB12c_home1.

• Both the Oracle databases and Oracle Clusterware are configured to use Oracle ASM for data storage.
After installing the Oracle software, you have the following groups and users:
Operating System Group Name | Type of Group | Members
ORA_DBA | OSDBA group | oracle, RACDOMAIN\grid, and the Local System built-in Windows account
ORA_OraRAC12c_home1_DBA | OSDBA group for the Oracle RAC home directory | RACDOMAIN\oradba1
ORA_OraDB12c_home1_DBA | OSDBA group for the Oracle Database home directory | oradba2
ORA_OPER | OSOPER group | none
ORA_OraRAC12c_home1_OPER | OSOPER group for the Oracle RAC home directory | none
ORA_OraDB12c_home1_OPER | OSOPER group for the Oracle Database home directory | none
ORA_ASMADMIN | OSASM group | RACDOMAIN\grid, the Local System built-in Windows account, and the database service IDs
ORA_ASMOPER | OSOPER for ASM group | none
ORA_ASMDBA | OSDBA for ASM group for Oracle ASM clients | RACDOMAIN\grid, oracle, the Local System built-in Windows account, and Oracle Home Users of database homes
ORA_RAC12c_home1_SYSBACKUP, ORA_RAC12c_home1_SYSDG, and ORA_RAC12c_home1_SYSKM | Specialized role groups that authorize users with the SYSBACKUP, SYSDG, and SYSKM system privileges | none
ORA_DB12c_home1_SYSBACKUP, ORA_DB12c_home1_SYSDG, and ORA_DB12c_home1_SYSKM | Specialized role groups that authorize users with the SYSBACKUP, SYSDG, and SYSKM system privileges | none
If there are no users listed for an operating system group, then that means the group
has no members after installation.
5.3 Configuring User Accounts
When installing Oracle Grid Infrastructure for a cluster, you run the installer software
as an Administrator user. During installation, you can specify an Oracle Home user.
Before starting the installation, there are a few checks you need to perform for the
Oracle Installation users, to ensure the installation will succeed.
Configuring Environment Variables for the Oracle Installation User (page 5-17)
The installer uses environment variables set for the Oracle Installation
User.
Verifying User Privileges to Update Remote Nodes (page 5-17)
You must ensure that operations that are performed on multiple nodes
can be performed during installation of the Oracle Grid Infrastructure
software.
Managing User Accounts with User Account Control (page 5-19)
To ensure that only trusted applications run on your computer,
Windows Server provides User Account Control.
5.3.1 Configuring Environment Variables for the Oracle Installation User
The installer uses environment variables set for the Oracle Installation User.
•
Before starting the Oracle Grid Infrastructure installation, ensure the %TEMP%
environment variable is set correctly.
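For example, to confirm the setting from a command prompt before starting the installer (the path shown in the output is only an illustration; your value will differ):
C:\> echo %TEMP%
C:\Users\grid\AppData\Local\Temp
If the variable points to a directory that does not exist, or that has insufficient free space, correct it before running the installer.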
Related Topics:
Checking the Available TEMP Disk Space (page 2-4)
5.3.2 Verifying User Privileges to Update Remote Nodes
You must ensure that operations that are performed on multiple nodes can be
performed during installation of the Oracle Grid Infrastructure software.
For the installation to be successful, you must use the same user name and password
on each node in a cluster or use a domain user. You must explicitly grant membership
in the local Administrators group to the installation user on all of the nodes in your
cluster.
1. Determine if User Account Control (UAC) remote restrictions have been disabled
for the local installation user. If you are using a domain user for installation, then
skip this step.
Check the value of the LocalAccountTokenFilterPolicy registry entry for the
registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
\CurrentVersion\Policies\System. The value should be set to 1. If this
registry entry does not exist, then do the following:
a. Click Start, click Run, type regedit, and then press Enter.
b. Navigate to the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft
\Windows\CurrentVersion\Policies\System.
c. On the Edit menu, select New, and then click DWORD Value.
d. Type LocalAccountTokenFilterPolicy and then press Enter.
e. Right-click LocalAccountTokenFilterPolicy, then click Modify.
f. In the Value data box, type 1, then click OK.
g. Exit the registry editor.
Setting the value of LocalAccountTokenFilterPolicy to 1 enables a user who is a
member of the local Administrators group on the target remote computer to
establish a remote administrative connection with an elevated token, so that the
user can perform administrative tasks. If you do not disable the UAC remote
restrictions for administrative users, then you might encounter the following error
when installing Oracle Grid Infrastructure on multiple nodes:
INS-40937 The following hostnames are invalid
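As an alternative to editing the registry interactively, you can create the same entry from an elevated (Administrator) command prompt. The following command is a convenience equivalent to the steps above, not an Oracle-specific requirement:
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
You can confirm the change afterward by running reg query against the same key and value name.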
2. Before running OUI, from the node where you intend to run the installer, verify
that the user account you are using for the installation is configured as a member of
the Administrators group on each node in the cluster.
Enter the following command for each node that is a part of the cluster where
nodename is the node name:
net use \\nodename\C$
3. If you will be using other disk drives in addition to the C: drive, then repeat the
net use command for every node in the cluster, substituting the drive letter for
each drive you plan to use.
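For example, assuming a cluster node named node2 (the node name is only illustrative) and that you also plan to use the D: drive, you would run the following commands; each should complete without prompting for credentials:
net use \\node2\C$
net use \\node2\D$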
4. Verify the installation user is configured to update the Windows registry on each
node in the cluster.
a. Run regedit from the Run menu or the command prompt.
b. From the File menu select Connect Network Registry.
c. In the 'Enter the object name…' edit box enter the name of a remote node in the
cluster, then click OK.
d. Wait for the node to appear in the registry tree.
If the remote node does not appear in the registry tree or you are prompted to fill in a
username and password, then you must resolve the permissions issue at the operating
system level before proceeding with the Oracle Grid Infrastructure installation.
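As an optional command-line cross-check of remote registry access, you can query a well-known value on the remote node (the node name and value queried are only examples); the Registry Editor procedure above remains the authoritative check:
reg query \\node2\HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion /v ProgramFilesDir
If the query returns a value without prompting for credentials, the installation user can reach the remote registry.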
5.3.3 Managing User Accounts with User Account Control
To ensure that only trusted applications run on your computer, Windows Server
provides User Account Control.
If you have enabled the User Account Control security feature, then depending on
how you have it configured, OUI prompts you for either your consent or your
credentials when installing Oracle Database. Provide either the consent or your
Windows Administrator credentials as appropriate.
You must have Administrator privileges to run some Oracle tools, such as DBCA,
NETCA, and OPatch, or to run any tool or application that writes to any directory
within the Oracle home. If User Account Control is enabled and you are logged in as
the local Administrator, then you can successfully run each of these commands.
However, if you are logged in as "a member of the Administrators group," then you
must explicitly run these tools with Windows Administrator privileges.
All of the Oracle shortcuts that require Administrator privileges are automatically run
as an "Administrator" user when you click the shortcuts. However, if you run the
previously mentioned tools from a Windows command prompt, then you must run
them from an Administrator command prompt.
OPatch does not have a shortcut and must be run from an Administrator command
prompt.
5.4 Creating Oracle Software Directories
During installation, you are prompted to provide a path to a home directory to store
Oracle Grid Infrastructure software.
You also need to provide a home directory when installing Oracle RAC. Each
directory has certain requirements that must be met for the software to work correctly.
Oracle Universal Installer creates the directories during installation if they do not exist.
About the Directories Used During Installation of Oracle Grid Infrastructure
(page 5-19)
OUI uses several directories during installation of Oracle Grid
Infrastructure.
Requirements for the Oracle Grid Infrastructure Home Directory (page 5-23)
Review directory path requirements for Oracle Grid Infrastructure
Home directory.
About Creating the Oracle Base Directory Path (page 5-24)
The Oracle base directory for the Oracle Installation User for Oracle Grid
Infrastructure is the location where diagnostic and administrative logs,
and other logs associated with Oracle ASM and Oracle Clusterware are
stored.
5.4.1 About the Directories Used During Installation of Oracle Grid Infrastructure
OUI uses several directories during installation of Oracle Grid Infrastructure.
Note: The base directory for Oracle Grid Infrastructure 12c and the base
directory for Oracle RAC 12c must be different from the directories used by
the Oracle RAC 11g Release 2 installation.
Temporary Directories (page 5-20)
To install properly across all nodes, OUI uses the temporary folders
defined within Microsoft Windows.
Oracle Base Directory for the Grid User (page 5-20)
During installation, you are prompted to specify an Oracle base location,
which is owned by the user performing the installation.
Grid Home Directory (page 5-22)
When installing Oracle Grid Infrastructure, you must determine the
location of the Oracle home for Oracle Grid Infrastructure software (Grid
home).
Oracle Inventory Directory (page 5-22)
The Oracle Inventory directory is the central inventory location for all
Oracle software installed on a server.
5.4.1.1 Temporary Directories
To install properly across all nodes, OUI uses the temporary folders defined within
Microsoft Windows.
The TEMP and TMP environment variables should point to the same local directory
on all nodes in the cluster.
By default, these settings are defined as %USERPROFILE%\Local Settings\Temp
and %USERPROFILE%\Local Settings\Tmp in the Environment Settings of My
Computer. It is recommended to explicitly redefine these as %WINDIR%\temp and
%WINDIR%\tmp.
For example, if Windows is installed on the C drive, then the temporary directories
would be defined as C:\Windows\temp or C:\Windows\tmp for all nodes.
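One possible way to redefine these variables persistently for the installation user is the setx command; you can also change them through the Environment Variables dialog. For example:
C:\> setx TEMP C:\Windows\temp
C:\> setx TMP C:\Windows\tmp
Open a new command prompt (or log off and log on again) so that the new values take effect, and make the same change on every node in the cluster.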
5.4.1.2 Oracle Base Directory for the Grid User
During installation, you are prompted to specify an Oracle base location, which is
owned by the user performing the installation.
The Oracle base directory for the Oracle Grid Infrastructure installation is the location
where diagnostic and administrative logs, and other logs associated with Oracle ASM
and Oracle Clusterware are stored. For Oracle installations other than Oracle Grid
Infrastructure for a cluster, it is also the location under which an Oracle home is
placed. However, for an Oracle Grid Infrastructure installation, you must create a
different path for the Grid home, so that the path for Oracle base remains available for
other Oracle installations.
Multiple Oracle Database installations can use the same Oracle base directory. The
Oracle Grid Infrastructure installation uses a different directory path, one outside of
Oracle base. If you use different operating system users to perform the Oracle software
installations, then each user will have a different default Oracle base location.
Caution: After installing Oracle Database 12c release 1 (12.1) (or later) release
with a Windows User Account as Oracle Home User, do not install older
releases of Oracle Database that share the same Oracle Base Directory. During
installation of the software for older releases, the ACLs are reset and Oracle
Database 12c release 1 (12.1) (or later) services may not be able to access the
Oracle Base directory and files.
In a default Windows installation, the Oracle base directory appears as follows, where
X represents a disk drive and username is the name of the currently logged in user,
which is also the name of the software installation owner:
X:\app\username
Because you can have only one Oracle Grid Infrastructure installation on a cluster, and
because all upgrades are out-of-place upgrades, Oracle recommends that you have an
Oracle base for the Oracle Installation user for Oracle Grid Infrastructure (for example,
C:\app\grid). You should also use a Grid home for the Oracle Grid Infrastructure
binaries using the release number of that installation (for example, C:\app
\12.2.0\grid).
During installation, ownership of the path to the Grid home is changed to the
LocalSystem or Oracle Home user, if specified. Using separate paths for the Oracle
base directory for the grid user and the Grid home directory means the path for
Oracle base remains available for other Oracle software installations by the grid user.
If you do not create a unique path to the Grid home, then after the Oracle Grid
Infrastructure installation, you might encounter permission errors for other
installations, including any existing installations under the same path.
Caution:
For Oracle Grid Infrastructure (for a cluster) installations, note the following
restrictions for the Oracle Grid Infrastructure home (the Grid home directory
for Oracle Grid Infrastructure):
•
It must not be placed under one of the Oracle base directories, including
the Oracle base directory of the Oracle Grid Infrastructure installation
owner.
•
It must not be placed in the home directory of an installation owner.
These requirements are specific to Oracle Grid Infrastructure for a cluster
installations. Oracle Grid Infrastructure for a standalone server (Oracle
Restart) can be installed under the Oracle base for the Oracle Database
installation.
Note:
Placing Oracle Grid Infrastructure for a cluster binaries on a cluster file system
is not supported.
Oracle recommends that you install Oracle Grid Infrastructure locally, on each
cluster member node. Using a shared Grid home prevents rolling upgrades,
and creates a single point of failure for the cluster.
5.4.1.3 Grid Home Directory
When installing Oracle Grid Infrastructure, you must determine the location of the
Oracle home for Oracle Grid Infrastructure software (Grid home).
The Oracle Home directory that Oracle Grid Infrastructure is installed in is the Grid
home. Oracle ASM is also installed in this home directory.
If you plan to install Oracle RAC, you must choose a different directory in which to
install the Oracle Database software. The location of the Oracle RAC installation is the
Oracle home.
The Grid home should be located in a path that is different from the Oracle home
directory paths for any other Oracle software. The Optimal Flexible Architecture
guideline for a Grid home is to create a path in the form \path\version\user,
where \path is a string constant (C:\), \version is the version of the software
(12.2.0), and \user is the installation owner of the Oracle Grid Infrastructure
software (grid). During Oracle Grid Infrastructure for a cluster installation,
permissions on the path of the Grid home are changed to the LocalSystem user, so any
other users are unable to read, write, or execute commands in that path.
C:\12.2.0\grid
If you do not create a unique path to the Grid home, then after installing Oracle Grid
Infrastructure, you can encounter permission errors for other installations by the grid
user, including any existing installations under the same path. For example, if the Grid
home is C:\app\12.2.0\grid, then the entire directory path, including C:\ and C:\app,
has restricted access. You would not be able to create an Oracle home with the
path C:\app\oracle\product\12.2.0\db_home1 for the Oracle Database
installation. To avoid permission problems, you can create and select paths such as the
following for the Grid home:
C:\12.2.0\grid
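If you want to create these directories before starting the installation (OUI can also create them for you), you can do so from a command prompt. The paths below simply follow the examples in this section:
C:\> mkdir C:\app\grid
C:\> mkdir C:\12.2.0\grid
The installer changes the permissions on the Grid home path during installation, so you do not need to set special permissions when you create the directories.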
Note:
For Oracle Grid Infrastructure for a cluster installations, note the following
restrictions for the Oracle Grid Infrastructure binary home (Grid home):
•
It must not be placed under one of the Oracle base directories, including
the Oracle base directory of the Oracle Grid Infrastructure installation
owner.
•
It must not be placed in the home directory of an installation owner.
These requirements are specific to Oracle Grid Infrastructure for a cluster
installations.
Oracle Grid Infrastructure for a standalone server (Oracle Restart) can be
installed under the Oracle base for the Oracle Database installation.
5.4.1.4 Oracle Inventory Directory
The Oracle Inventory directory is the central inventory location for all Oracle software
installed on a server.
The first time you install Oracle software on a system, the installer checks to see if an
Oracle Inventory directory exists. The location of the Oracle Inventory directory is
determined by the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE
\Oracle\inst_loc. If an Oracle Inventory directory does not exist, then the installer
creates one in the default location of C:\Program Files\Oracle\Inventory. The
central inventory is created on all cluster nodes.
Note: Changing the value for inst_loc in the Windows registry is not
supported.
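To check where the central inventory is located on a node, you can read the inst_loc value from the registry at a command prompt. This is a read-only check; as noted above, do not change the value:
C:\> reg query "HKLM\SOFTWARE\Oracle" /v inst_loc
If the value is not present, then no central inventory has been created yet, and the installer creates one in the default location.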
The Oracle Inventory directory is not installed under the Oracle base directory for the
Oracle Installation user. This is because all Oracle software installations share a
common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas
there is a separate Oracle Base directory for each user.
The Oracle Inventory directory contains the following:
•
A registry of the Oracle home directories (Oracle Grid Infrastructure and Oracle
Database) on the system
•
Installation logs and trace files from installations of Oracle software. These files
are also copied to the respective Oracle homes for future reference.
•
Other inventory metadata for Oracle installations is stored in the individual
Oracle home inventory directories, separate from the central inventory.
5.4.2 Requirements for the Oracle Grid Infrastructure Home Directory
Review directory path requirements for Oracle Grid Infrastructure Home directory.
•
It is located in a path outside existing Oracle homes, including Oracle Clusterware
homes.
•
It is not located in a user home directory.
•
You create the path before installation, and then the installer for Oracle Grid
Infrastructure creates the directories in the path.
Oracle recommends that you install Oracle Grid Infrastructure on local homes, rather
than using a shared home on shared storage.
For installations with Oracle Grid Infrastructure only, Oracle recommends that you
create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines,
so that Oracle Universal Installer (OUI) can select that directory during installation.
Note:
Oracle Grid Infrastructure homes can be placed in a local directory on servers,
even if your existing Oracle Clusterware home from a prior release is in a
shared location.
If you are installing Oracle Grid Infrastructure for a database (Oracle Restart),
then the home directory for Oracle Restart can be under the Oracle base
directory for the Oracle Installation user for Oracle Database.
See Also:
•
Optimal Flexible Architecture File Path Examples (page B-4)
•
Installing and Configuring Oracle Grid Infrastructure for a Standalone
Server (Oracle Restart) for more information about installing Oracle Grid
Infrastructure on a standalone server (Oracle Restart)
5.4.3 About Creating the Oracle Base Directory Path
The Oracle base directory for the Oracle Installation User for Oracle Grid
Infrastructure is the location where diagnostic and administrative logs, and other logs
associated with Oracle ASM and Oracle Clusterware are stored.
If the directory or path you specify during installation for the Grid home does not
exist, then OUI creates the directory.
Note:
•
Placing Oracle Grid Infrastructure for a cluster binaries on a cluster file
system is not supported.
•
The base directory for Oracle Grid Infrastructure 12c and the base
directory for Oracle RAC 12c must be different from the directories used
by the Oracle RAC 11g release 2 installation.
Related Topics:
Oracle Base Directory Naming Convention (page B-3)
5.5 Enabling Intelligent Platform Management Interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces
to computer hardware and firmware that system administrators can use to monitor
system health and manage the system.
Oracle Clusterware can integrate IPMI to provide failure isolation support and to
ensure cluster integrity. You can configure node-termination with IPMI during
installation by selecting a node-termination protocol, such as IPMI. You can also
configure IPMI after installation with crsctl commands.
Requirements for Enabling IPMI (page 5-25)
Review these hardware and software requirements for enabling cluster
nodes to be managed with Intelligent Platform Management Interface
(IPMI).
Configuring the IPMI Management Network (page 5-25)
You can configure the Baseboard Management Controller (BMC) for
Dynamic Host Configuration Protocol (DHCP), or for static IP addresses.
Configuring the IPMI Driver (page 5-26)
For Oracle Clusterware to communicate with the BMC, the IPMI driver
must be installed permanently on each node, so that it is available on
system restarts.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about how to configure IPMI after installation
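For example, after installation you can store the IPMI administrator credentials and the BMC address for a node by using crsctl, as described in Oracle Clusterware Administration and Deployment Guide. The user name and IP address below are placeholders:
crsctl set css ipmiadmin ipmi_admin_user
crsctl set css ipmiaddr 192.0.2.15
The crsctl utility prompts you for the IPMI administrator password when you set ipmiadmin.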
5.5.1 Requirements for Enabling IPMI
Review these hardware and software requirements for enabling cluster nodes to be
managed with Intelligent Platform Management Interface (IPMI).
•
For this release, Oracle Grid Infrastructure supports IPMI version 1.5.
•
Each cluster member node requires a Baseboard Management Controller (BMC)
running firmware compatible with IPMI version 1.5, which supports IPMI over
local area networks (LANs), and configured for remote control using LAN.
•
Each cluster member node requires an IPMI driver installed on that node.
•
The cluster requires a management network for IPMI. This can be a shared
network, but Oracle recommends that you configure a dedicated network.
•
Each cluster member node's Ethernet port used by BMC must be connected to the
IPMI management network.
•
Each cluster member must be connected to the management network.
•
Some server platforms put their network interfaces into a power saving mode
when they are powered off. In this case, they may operate only at a lower link
speed (for example, 100 Mbps instead of 1 Gbps). For these platforms, the network
switch port to which the BMC is connected must be able to auto-negotiate down
to the lower speed, or IPMI will not function properly.
Note: IPMI operates on the physical hardware platform through the network
interface of the Baseboard Management Controller (BMC). Depending on your
system configuration, an IPMI-initiated restart of a server can affect all virtual
environments hosted on the server. Contact your hardware and OS vendor for
more information.
5.5.2 Configuring the IPMI Management Network
You can configure the Baseboard Management Controller (BMC) for Dynamic Host
Configuration Protocol (DHCP), or for static IP addresses.
Oracle recommends that you configure the BMC for dynamic IP address assignment
using DHCP. To use this option, you must have a DHCP server configured to assign
the BMC IP addresses.
Note: If you configure Intelligent Platform Management Interface (IPMI), and
you use Grid Naming Services (GNS), then you still must configure separate
addresses for the IPMI interfaces. Because the IPMI adapter is not seen
directly by the host, the IPMI adapter is not visible to GNS as an address on
the host.
5.5.3 Configuring the IPMI Driver
For Oracle Clusterware to communicate with the BMC, the IPMI driver must be
installed permanently on each node, so that it is available on system restarts.
On Windows systems, the implementation assumes the Microsoft IPMI driver
(ipmidrv.sys) is installed, which is included with the Windows Server 2012 and
later versions of the Windows operating system. The driver is included as part of the
Hardware Management feature, which includes the driver and the Windows
Management Interface (WMI).
Note: An alternate driver (imbdrv.sys) is available from Intel as part of
Intel Server Control, but this driver has not been tested with Oracle
Clusterware.
Configuring the Hardware Management Component (page 5-26)
Hardware management is installed using the Add/Remove Windows
Components Wizard.
5.5.3.1 Configuring the Hardware Management Component
Hardware management is installed using the Add/Remove Windows Components
Wizard.
1. Press Start, then select Control Panel.
2. Select Add or Remove Programs.
3. Click Add/Remove Windows Components.
4. Select (but do not check) Management and Monitoring Tools and click the Details
button to display the detailed components selection window.
5. Select the Hardware Management option.
If a BMC is detected through the system management BIOS (SMBIOS) Table Type
38h, then a dialog box will be displayed instructing you to remove any third-party
drivers. If no third-party IPMI drivers are installed or they have been removed
from the system, then click OK to continue.
Note: The Microsoft driver is incompatible with other drivers. Any third-party
drivers must be removed.
6. Click OK to select the Hardware Management Component, and then click Next.
Hardware Management (including Windows Remote Management, or WinRM)
will be installed.
After the driver and hardware management have been installed, the BMC should be
visible in the Windows Device Manager under System devices with the label
"Microsoft Generic IPMI Compliant Device". If the BMC is not automatically detected
by the plug and play system, then the device must be created manually.
To create the IPMI device, run the following command:
rundll32 ipmisetp.dll,AddTheDevice
6 Configuring Shared Storage for Oracle Database and Grid Infrastructure
Review supported storage options and perform prerequisite tasks as part of your
installation planning process.
Note: If you are currently using OCFS for Windows as your shared storage,
then you must migrate to using Oracle ASM before the upgrade of Oracle
Database and Oracle Grid Infrastructure.
Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC
(page 6-2)
Both Oracle Clusterware and the Oracle RAC database use files that
must be available to all the nodes in the cluster. The following table
shows the storage options supported for storing Oracle Clusterware and
Oracle RAC files.
Oracle ACFS and Oracle ADVM (page 6-3)
Oracle Automatic Storage Management Cluster File System (Oracle
ACFS) is a multi-platform, scalable file system, and storage management
technology that extends Oracle Automatic Storage Management (Oracle
ASM) functionality to support all file types.
Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
(page 6-4)
For all installations, you must choose the storage option to use for Oracle
Grid Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle
Real Application Clusters (Oracle RAC) databases.
Guidelines for Choosing a Shared Storage Option (page 6-5)
You can choose any combination of the supported shared storage
options for each file type if you satisfy all requirements listed for the
chosen storage option.
About Migrating Existing Oracle ASM Instances (page 6-7)
You can use Oracle Automatic Storage Management Configuration
Assistant (ASMCA) to upgrade the existing Oracle ASM instance to
Oracle ASM 12c release 1 (12.1) or higher.
Preliminary Shared Disk Preparation (page 6-7)
When using shared storage on a Windows platform, there are additional
preinstallation tasks to complete.
Configuring Disk Partitions on Shared Storage (page 6-9)
Use the disk administration tools provided by the operating system or
third-party vendors to create disk partitions.
6.1 Supported Storage Options for Oracle Grid Infrastructure and Oracle
RAC
Both Oracle Clusterware and the Oracle RAC database use files that must be available
to all the nodes in the cluster. The following table shows the storage options supported
for storing Oracle Clusterware and Oracle RAC files.
Table 6-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories
(Columns, in order: OCR and voting files / Oracle Grid Infrastructure home / Oracle RAC home / Oracle RAC database files / Oracle recovery files)

Oracle Automatic Storage Management (Oracle ASM): Yes / No / No / Yes / Yes

Oracle Automatic Storage Management Cluster File System (Oracle ACFS): No / No / Yes / Yes (Oracle Database 12c Release 12.1.0.2 and later) / Yes (Oracle Database 12c Release 12.1.0.2 and later)

Direct NFS Client access to a certified network attached storage (NAS) filer: No / No / No / Yes / Yes

Direct-attached storage (DAS): No / No / Yes / Yes / Yes

Shared disk partitions (raw devices): No / No / No / No / No

Local file system (NTFS formatted disk): No / Yes / Yes / No / No

Note: NFS or Direct NFS Client cannot be used for Oracle Clusterware files.
Guidelines for Storage Options
Use the following guidelines when choosing storage options:
•
You can choose any combination of the supported storage options for each file
type provided that you satisfy all requirements listed for the chosen storage
options.
•
You can only use Oracle ASM to store Oracle Clusterware files.
•
Direct use of raw or block devices is not supported. You can only use raw or block
devices with Oracle ASM.
See Also: Oracle Database Upgrade Guide for information about how to prepare
for upgrading an existing database.
6.2 Oracle ACFS and Oracle ADVM
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a
multi-platform, scalable file system, and storage management technology that extends
Oracle Automatic Storage Management (Oracle ASM) functionality to support all file
types.
Oracle ACFS extends Oracle ASM technology to support all of your application data
in both single instance and cluster configurations. Oracle ADVM provides volume
management services and a standard disk device driver interface to clients. Oracle
Automatic Storage Management Cluster File System communicates with Oracle ASM
through the Oracle Automatic Storage Management Dynamic Volume Manager
interface.
Oracle ACFS can provide optimized storage for all Oracle files, including Oracle
Database binaries. Files supported by Oracle ACFS include application executable
files, data files, and application reports. Other supported files are video, audio, text,
images, engineering drawings, and other general-purpose application file data.
You can place the Oracle home for Oracle Database 12c release 1 (12.1) or later
software on Oracle ACFS, but you cannot place Oracle Clusterware binaries or Oracle
Clusterware files on Oracle ACFS.
Restrictions and Guidelines for Oracle ACFS
Review these guidelines and restrictions as part of your storage plan for using Oracle
ACFS for single instance and cluster configurations.
•
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
provides a general purpose file system.
•
You can only use Oracle ACFS when Oracle ASM is configured.
•
You must use a domain user when installing Oracle Grid Infrastructure if you
plan to use Oracle ACFS.
•
When creating Oracle ACFS file systems on Windows, log on as a Windows
domain user. Also, when creating files in an Oracle ACFS file system on
Windows, you should be logged in as a Windows domain user to ensure that the
files are accessible by all nodes.
When using a file system across cluster nodes, the best practice is to mount the file
system using a domain user, to ensure that the security identifier is the same
across cluster nodes. Windows security identifiers, which are used in defining
access rights to files and directories, use information which identifies the user.
Local users are only known in the context of the local node. Oracle ACFS uses this
information during the first file system mount to set the default access rights to
the file system.
Note the following general guidelines and restrictions for placing Oracle Database and
Oracle Grid Infrastructure files on Oracle ACFS:
•
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1) for a cluster, you can
place Oracle Database binaries, data files, and administrative files (for example,
trace files) on Oracle ACFS.
•
Oracle ACFS does not support replication or encryption with Oracle Database
data files, tablespace files, control files, and redo logs.
•
You can place Oracle Database homes on Oracle ACFS only if the database release
is Oracle Database 11g release 2 or later. You cannot install earlier releases of
Oracle Database on Oracle ACFS.
•
For installations of Oracle Clusterware, you cannot place Oracle Clusterware files
on Oracle ACFS.
•
For policy-managed Oracle Flex Cluster databases, Oracle ACFS can run on Hub
Nodes, but cannot run on Leaf Nodes. For this reason, Oracle RAC binaries cannot
be placed on Oracle ACFS located on Leaf Nodes.
The following restrictions apply if you run Oracle ACFS in an Oracle Restart
configuration:
•
Oracle Restart does not support Oracle ACFS resources on all platforms.
•
Starting with Oracle Database 12c, Oracle Restart configurations do not support
the Oracle ACFS registry.
•
You must manually load Oracle ACFS drivers after a system restart.
•
You must manually mount an Oracle ACFS file system, and unmount it after the
Oracle ASM instance has finished running.
•
Creating Oracle data files on an Oracle ACFS file system is not supported in
Oracle Restart configurations. Creating Oracle data files on an Oracle ACFS file
system is supported on Oracle Grid Infrastructure for a cluster configurations.
See Also:
•
My Oracle Support Note 1369107.1 for more information about platforms
and releases that support Oracle ACFS and Oracle ADVM: https://
support.oracle.com/rs?type=doc&id=1369107.1
•
Oracle Automatic Storage Management Administrator's Guide for more
information about Oracle ACFS and Oracle ADVM
6.3 Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
For all installations, you must choose the storage option to use for Oracle Grid
Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application
Clusters (Oracle RAC) databases.
Storage Considerations for Oracle Clusterware
Oracle Clusterware voting files are used to monitor cluster node status, and Oracle
Cluster Registry (OCR) files contain configuration information about the cluster. You
must store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups.
You can also store a backup of the OCR file in a disk group. Storage must be shared;
any node that does not have access to an absolute majority of voting files (more than
half) is restarted.
If you use Oracle ASM disk groups created on Network File System (NFS) for storage,
then ensure that you follow the recommendations for mounting NFS described in the
topic Guidelines for Configuring Oracle ASM Disk Groups on NFS.
Storage Considerations for Oracle RAC
Oracle ASM is a supported storage option for database and recovery files. For all
installations, Oracle recommends that you create at least two separate Oracle ASM
disk groups: One for Oracle Database data files, and one for recovery files. Oracle
recommends that you place the Oracle Database disk group and the recovery files disk
group in separate failure groups.
•
If you do not use Oracle ASM for database files, then Oracle recommends that you
place the data files and the Fast Recovery Area in shared storage located outside
of the Oracle home, in separate locations, so that a hardware failure does not
affect availability.
•
You can choose any combination of the supported storage options for each file
type provided that you satisfy all requirements listed for the chosen storage
options.
•
If you plan to install an Oracle RAC home on a shared OCFS2 location, then you
must upgrade OCFS2 to at least version 1.4.1, which supports shared writable
mmaps.
•
To use Oracle ASM with Oracle RAC, and if you are configuring a new Oracle
ASM instance, then your system must meet the following conditions:
–
All nodes on the cluster have Oracle Clusterware and Oracle ASM 12c Release
2 (12.2) installed as part of an Oracle Grid Infrastructure for a cluster
installation.
–
Any existing Oracle ASM instance on any node in the cluster is shut down.
–
To provide voting file redundancy, one Oracle ASM disk group is sufficient.
The disk group provides three copies of the voting file (normal redundancy) or
five copies (high redundancy).
You can use NFS, with or without Direct NFS, to store Oracle Database data files. You
cannot use NFS as storage for Oracle Clusterware files.
6.4 Guidelines for Choosing a Shared Storage Option
You can choose any combination of the supported shared storage options for each file
type if you satisfy all requirements listed for the chosen storage option.
Guidelines for Using Oracle ASM Disk Groups for Storage (page 6-5)
Plan how you want to configure Oracle ASM disk groups for
deployment.
Guidelines for Using Direct Network File System (NFS) with Oracle RAC
(page 6-6)
Network-attached storage (NAS) systems use a network file system
(NFS) to access data. You can store Oracle RAC data files and recovery
files on a supported NAS server using Direct NFS Client.
6.4.1 Guidelines for Using Oracle ASM Disk Groups for Storage
Plan how you want to configure Oracle ASM disk groups for deployment.
During Oracle Grid Infrastructure installation, you can create one or two Oracle ASM
disk groups. After the Oracle Grid Infrastructure installation, you can create
additional disk groups using Oracle Automatic Storage Management Configuration
Assistant (ASMCA), SQL*Plus, or Automatic Storage Management Command-Line
Utility (ASMCMD).
Choose to create a second disk group during Oracle Grid Infrastructure
installation. The first disk group stores the Oracle Cluster Registry (OCR), voting files,
and the Oracle ASM password file. The second disk group stores the Grid
Infrastructure Management Repository (GIMR) data files and Oracle Cluster Registry
(OCR) backup files. Oracle strongly recommends that you store the OCR backup files
in a different disk group from the disk group where you store OCR files. In addition,
having a second disk group for GIMR is advisable for performance, availability,
sizing, and manageability of storage.
Note:
•
You must specify the Grid Infrastructure Management Repository (GIMR)
location at the time of installing Oracle Grid Infrastructure. You cannot
migrate the GIMR from one disk group to another later.
If you install Oracle Database or Oracle RAC after you install Oracle Grid
Infrastructure, then you can either use the same disk group for database files, OCR,
and voting files, or you can use different disk groups. If you create multiple disk
groups before installing Oracle RAC or before creating a database, then you can do
one of the following:
•
Place the data files in the same disk group as the Oracle Clusterware files.
•
Use the same Oracle ASM disk group for data files and recovery files.
•
Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting files, database
files, and recovery files are contained in the one disk group. If you create multiple disk
groups for storage, then you can place files in different disk groups.
With Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database
Configuration Assistant (DBCA) does not have the functionality to create disk groups
for Oracle ASM.
See Also:
Oracle Automatic Storage Management Administrator's Guide for information
about creating disk groups
6.4.2 Guidelines for Using Direct Network File System (NFS) with Oracle RAC
Network-attached storage (NAS) systems use a network file system (NFS) to access
data. You can store Oracle RAC data files and recovery files on a supported NAS
server using Direct NFS Client.
NFS file systems must be mounted and available before you start the Oracle RAC
installation. See your vendor documentation for NFS configuration and mounting
information.
Note that the performance of Oracle Database software and of databases that use
NFS storage depends on the performance of the network connection between the
database server and the NAS device. For this reason, Oracle recommends that you
connect the database server (or cluster node) to the NAS device using a private,
dedicated, network connection, which should be Gigabit Ethernet or better.
6.5 About Migrating Existing Oracle ASM Instances
You can use Oracle Automatic Storage Management Configuration Assistant
(ASMCA) to upgrade the existing Oracle ASM instance to Oracle ASM 12c release 1
(12.1) or higher.
ASMCA is located in the path Grid_home\bin. You can also use ASMCA to
configure failure groups, Oracle ASM volumes, and Oracle ACFS.
Note: You must first shut down all database instances and applications on the
node with the existing Oracle ASM instance before upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a
prior Oracle ASM release installed in another Oracle ASM home, then after installing
the Oracle ASM 12c release 2 (12.2) software, you can start ASMCA to upgrade the
existing Oracle ASM instance. You can then configure an Oracle ACFS deployment by
creating Oracle ASM volumes and using the upgraded Oracle ASM to create the
Oracle ACFS.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior release of
Oracle ASM instances on all nodes is Oracle ASM 11g release 1 or higher, then you are
provided with the option to perform a rolling upgrade of Oracle ASM instances. If the
prior release of Oracle ASM instances on an Oracle RAC installation are from an
Oracle ASM release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be
performed. Oracle ASM is then upgraded on all nodes to 12c release 2 (12.2).
6.6 Preliminary Shared Disk Preparation
When using shared storage on a Windows platform, there are additional
preinstallation tasks to complete.
Disabling Write Caching (page 6-7)
You must disable write caching on all disks that will be used to share
data between the nodes in your cluster.
Enabling Automounting for Windows (page 6-8)
Even though the automount feature is enabled by default, you should
verify that automount is enabled.
6.6.1 Disabling Write Caching
You must disable write caching on all disks that will be used to share data between the
nodes in your cluster.
1. Click Start, then select Administrative Tools, then Computer Management, then
Device Manager, and then Disk drives.
2. Expand the Disk drives and double-click the first drive that will be used by Oracle
software.
3. Under the Policies tab for the selected drive, uncheck the option that enables write
caching.
4. Double-click each of the other drives that will be used by Oracle Clusterware and
Oracle RAC and disable write caching as described in Step 3.
Note: Any disks that you use to store files, including database files, that will
be shared between nodes, must have write caching disabled.
6.6.2 Enabling Automounting for Windows
Even though the automount feature is enabled by default, you should verify that
automount is enabled.
You must enable automounting when using:
•
Raw partitions for Oracle ASM
•
Oracle Clusterware
•
Logical drives for Oracle ASM
Note: Raw partitions are supported only when upgrading an existing
installation using the configured partitions. On new installations, using raw
partitions is not supported by ASMCA or OUI, but is supported by the
software if you perform manual configuration.
1. To determine if automatic mounting of new volumes is enabled, use the following
commands:
C:\> diskpart
DISKPART> automount
Automatic mounting of new volumes disabled.
2. To enable automounting:
a. Enter the following commands at a command prompt:
C:\> diskpart
DISKPART> automount enable
Automatic mounting of new volumes enabled.
b. Type exit to end the diskpart session.
c. Repeat steps 1 and 2 for each node in the cluster.
Note: All nodes in the cluster must have automatic mounting enabled to
correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that
you enable automatic mounting before creating any logical partitions for use
by the database or Oracle ASM.
You must restart each node after enabling disk automounting.
After disk automounting is enabled and the node is restarted, automatic mounting
remains active until it is disabled.
6.7 Configuring Disk Partitions on Shared Storage
Use the disk administration tools provided by the operating system or third-party
vendors to create disk partitions.
You can create the disk partitions using either the Disk Management Interface or the
DiskPart utility, both of which are provided by the operating system.
Creating Disk Partitions Using the Disk Management Interface (page 6-9)
Use the graphical user interface Disk Management snap-in to manage
disks.
Creating Disk Partitions using the DiskPart Utility (page 6-10)
You can also use the DiskPart utility to manage disks.
6.7.1 Creating Disk Partitions Using the Disk Management Interface
Use the graphical user interface Disk Management snap-in to manage disks.
1. To access the Disk Management snap-in, do one of the following:
•
Type diskmgmt.msc at the command prompt
•
From the Start menu, select Administrative Tools, then Computer
Management. Then select the Disk Management node in the Storage tree.
2. Create primary partitions and logical drives in extended partitions by selecting the
New Simple Volume option.
Use a basic disk with a Master Boot Record (MBR) partition style as an extended
partition for creating partitions. Do not use spanned volumes or striped volumes.
These options convert the volume to a dynamic disk. Oracle Automatic Storage
Management does not support dynamic disks.
a. In the Assign Drive Letter or Path window, select Do not assign a drive letter
or drive path
b. In the Format Partition window, select Do not format this volume to specify
raw partition.
3. On each node in the cluster, ensure that the partitions are visible.
In the Disk Management snap-in, from the Action menu, choose Rescan Disks.
4. If the disks appear with drive letters on the remote node, then remove the drive
letter.
a. In the Disk Management snap-in, select the disk that has an assigned drive
letter.
b. Right-click the selected disk and choose Change Drive Letter and Paths...
c. Select the drive letter and click Remove.
d. Repeat these steps for each shared disk partition that has an assigned drive
letter.
6.7.2 Creating Disk Partitions using the DiskPart Utility
You can also use the DiskPart utility to manage disks.
1. From an existing node in the cluster, run the DiskPart utility as follows:
C:\> diskpart
DISKPART>
2. List the available disks.
By specifying its disk number (n), select the disk on which you want to create a
partition.
DISKPART> list disk
DISKPART> select disk n
3. Create an extended partition:
DISKPART> create part ext
4. Create a logical drive of the desired size after the extended partition is created
using the following syntax:
DISKPART> create part log [size=n] [offset=n] [noerr]
5. Repeat steps 2 through 4 for the second and any additional partitions. An optimal
configuration is one partition for the Oracle home and one partition for Oracle
Database files.
6. List the available volumes, and remove any drive letters from the logical drives you
plan to use.
DISKPART> list volume
DISKPART> select volume n
DISKPART> remove
7. On each node in the cluster, ensure that the partitions are visible.
In the Windows Disk Management snap-in, from the Action menu, choose Rescan
Disks.
8. If the disks appear with drive letters on the remote node, then remove the drive
letter.
a. In the Windows Disk Management snap-in, select the disk that has an assigned
drive letter.
b. Right-click the selected disk and choose Change Drive Letter and Paths...
c. Select the drive letter and click Remove.
d. Repeat these steps for each shared disk partition that has an assigned drive
letter.
7 Configuring Storage for Oracle Automatic Storage Management
Identify storage requirements and Oracle Automatic Storage Management (Oracle
ASM) disk group options.
When configuring disks for use with Oracle ASM, you can use the asmtool utility to
mark the disks prior to installation.
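For example, asmtool can stamp a prepared partition with a label that the installer and Oracle ASM recognize. The device path and label below are placeholders that you replace with your own values:
C:\> asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0
C:\> asmtool -list
The -list option displays the devices that have already been stamped.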
Identifying Storage Requirements for Using Oracle ASM for Shared Storage
(page 7-2)
Before installing Oracle Grid Infrastructure, you must identify and
determine how many devices are available for use by Oracle ASM, the
amount of free disk space available on each disk, and the redundancy
level to use with Oracle ASM.
Oracle Clusterware Storage Space Requirements (page 7-6)
Use this information to determine the minimum number of disks and the
minimum disk space requirements based on the redundancy type, for
installing Oracle Clusterware files, and installing the starter database, for
various Oracle Cluster deployments.
About the Grid Infrastructure Management Repository (page 7-7)
Every Oracle Standalone Cluster contains a Grid Infrastructure
Management Repository (GIMR), also known as the Management
Database (MGMTDB).
Restrictions for Disk Partitions Used By Oracle ASM (page 7-8)
Be aware of the following restrictions when configuring disk partitions
for use with Oracle ASM:
Preparing Your System to Use Oracle ASM for Shared Storage (page 7-8)
To use Oracle ASM as the shared storage solution for Oracle Clusterware
or Oracle RAC files, you must perform certain tasks before you begin the
software installation.
Marking Disk Partitions for Oracle ASM Before Installation (page 7-10)
The only partitions that OUI displays for Windows systems are logical
drives that are on disks that do not contain a primary partition, and have
been marked (or stamped) with asmtool or by Oracle Automatic
Storage Management (Oracle ASM) Filter Driver.
Creating and Using Oracle ASM Credentials File (page 7-13)
Review this information to create an Oracle ASM credentials file.
Configuring Oracle Automatic Storage Management Cluster File System
(page 7-13)
If you want to install Oracle RAC on Oracle ACFS, you must first create
the Oracle home directory in Oracle ACFS.
7.1 Identifying Storage Requirements for Using Oracle ASM for Shared
Storage
Before installing Oracle Grid Infrastructure, you must identify and determine how
many devices are available for use by Oracle ASM, the amount of free disk space
available on each disk, and the redundancy level to use with Oracle ASM.
When Oracle ASM provides redundancy, you must have sufficient capacity in each
disk group to manage a re-creation of data that is lost after a failure of one or two
failure groups.
Tip: As you progress through the following steps, make a list of the raw
device names you intend to use to create the Oracle ASM disk groups and
have this information available during the Oracle Grid Infrastructure
installation or when creating your Oracle RAC database.
1. Plan your Oracle ASM disk group requirements. If you choose to store the Grid
Infrastructure Management Repository (GIMR) on a separate Oracle ASM disk
group, then you require two separate Oracle ASM disk groups: one for OCR and
voting files, and the other for the GIMR.
2. Determine whether you want to use Oracle ASM for Oracle Database files,
recovery files, or all file types. Oracle Database files include data files, control files,
redo log files, the server parameter file, and the password file.
Note:
•
You do not have to use the same storage mechanism for Oracle Database
files and recovery files. You can use a shared file system for one file type
and Oracle ASM for the other.
•
There are two types of Oracle Clusterware files: OCR files and voting
files. You must use Oracle ASM to store OCR and voting files.
3. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
The redundancy level that you choose for the Oracle ASM disk group determines
how Oracle ASM mirrors files in the disk group, and determines the number of
disks and amount of disk space that you require. Except when using external
redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure
groups within a disk group. A quorum failure group, a special type of failure
group, contains mirror copies of voting files when voting files are stored in normal
or high redundancy disk groups. The disk groups that contain Oracle Clusterware
files (OCR and voting files) have a higher minimum number of failure groups than
other disk groups because the voting files are stored in quorum failure groups in
the Oracle ASM disk group.
A quorum failure group is a special type of failure group that stores the Oracle
Clusterware voting files. The quorum failure group ensures that a quorum of the
specified failure groups are available. When Oracle ASM mounts a disk group that
contains Oracle Clusterware files, the quorum failure group determines if the disk
group can be mounted in the event of the loss of one or more failure groups. Disks
in the quorum failure group do not contain user data, therefore a quorum failure
group is not considered when determining redundancy requirements in respect to
storing user data.
The redundancy levels are as follows:
•
High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to
increase performance and provide the highest level of reliability. A high
redundancy disk group requires a minimum of three disk devices (or three
failure groups). The effective disk space in a high redundancy disk group is
one-third the sum of the disk space in all of its devices.
For Oracle Clusterware files, a high redundancy disk group requires a
minimum of five disk devices and provides five voting files and one OCR (one
primary and two secondary copies). For example, your deployment may
consist of three regular failure groups and two quorum failure groups. Note
that not all failure groups can be quorum failure groups, even though voting
files need all five disks. With high redundancy, the cluster can survive the loss
of two failure groups.
While high redundancy disk groups provide a high level of data protection,
you should consider the greater cost of additional storage devices before
deciding to select high redundancy disk groups.
•
Normal redundancy
In a normal redundancy disk group, to increase performance and reliability,
Oracle ASM by default uses two-way mirroring. A normal redundancy disk
group requires a minimum of two disk devices (or two failure groups). The
effective disk space in a normal redundancy disk group is half the sum of the
disk space in all of its devices.
For Oracle Clusterware files, a normal redundancy disk group requires a
minimum of three disk devices and provides three voting files and one OCR
(one primary and one secondary copy). For example, your deployment may
consist of two regular failure groups and one quorum failure group. With
normal redundancy, the cluster can survive the loss of one failure group.
If you are not using a storage array providing independent protection against
data loss for storage, then Oracle recommends that you select normal
redundancy.
•
External redundancy
An external redundancy disk group requires a minimum of one disk device.
The effective disk space in an external redundancy disk group is the sum of the
disk space in all of its devices.
Because Oracle ASM does not mirror data in an external redundancy disk
group, Oracle recommends that you use external redundancy with storage
devices such as RAID, or other similar devices that provide their own data
protection mechanisms.
•
Flex redundancy
A flex redundancy disk group is a type of redundancy disk group with
features such as flexible file redundancy, mirror splitting, and redundancy
change. A flex disk group can consolidate files with different redundancy
requirements into a single disk group. It also provides the capability for
databases to change the redundancy of its files. A disk group is a collection of
file groups, each associated with one database. A quota group defines the
maximum storage space or quota limit of a group of databases within a disk
group.
In a flex redundancy disk group, Oracle ASM uses three-way mirroring of
Oracle ASM metadata to increase performance and provide reliability. For
database data, you can choose no mirroring (unprotected), two-way mirroring
(mirrored), or three-way mirroring (high). A flex redundancy disk group
requires a minimum of three disk devices (or three failure groups).
Note: You can alter the redundancy level of the disk group after a disk group
is created. For example, you can convert a normal or high redundancy disk
group to a flex redundancy disk group. Within a flex redundancy disk group,
file redundancy can change among three possible values: unprotected,
mirrored, or high.
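For example, after a disk group is created you can convert it to flex redundancy from an Oracle ASM instance with a statement similar to the following, where the disk group name data is a placeholder (see Oracle Automatic Storage Management Administrator's Guide for the exact prerequisites):
ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;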
4. Determine the total amount of disk space that you require for the Oracle
Clusterware files using Oracle ASM for shared storage.
If an Oracle ASM instance is running on the system, then you can use an existing
disk group to meet these storage requirements. If necessary, you can add disks to
an existing disk group during the database installation.
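For example, if an Oracle ASM instance is already running, you can list the existing disk groups and the free space they contain by running the asmcmd utility from the Grid home (a quick check only; the output columns vary by release):
C:\..\bin> asmcmd lsdg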
5. Determine an allocation unit size.
Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is
the fundamental unit of allocation within a disk group. You can select the AU Size
value from 1, 2, 4, 8, 16, 32 or 64 MB, depending on the specific disk group
compatibility level. For flex disk groups, the default value for AU size is set to 4
MB. For external, normal, and high redundancy disk groups, the default AU size is
1 MB.
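For example, the following SQL statement, run from an Oracle ASM instance, creates a disk group with a 4 MB allocation unit size. The disk group name and stamped disk path are placeholders for illustration only:
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKDATA0'
  ATTRIBUTE 'au_size' = '4M';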
6. For Oracle Clusterware installations, you must also add additional disk space for
the Oracle ASM metadata. You can use the following formula to calculate the disk
space requirements (in MB) for OCR and voting files, and the Oracle ASM
metadata:
total = [2 * ausize * disks] + [redundancy * (ausize * (all_client_instances + nodes + disks + 32) + (64 * nodes) + clients + 543)]
where:
redundancy = Number of mirrors: external = 1, normal = 2, high = 3, flex = 3
ausize = Metadata AU size in megabytes
nodes = Number of nodes in cluster
clients = Number of database instances for each node
disks = Number of disks in disk group
For example, for a four-node Oracle RAC installation, using three disks in a normal
redundancy disk group, you require an additional 5293 MB of space:
[2 * 4 * 3] + [2 * (4 * (4 * (4 + 1) + 30) + (64 * 4) + 533)] = 5293 MB
7. Optionally, identify failure groups for the Oracle ASM disk group devices.
Note: You only have to complete this step if you plan to use an installation
method that includes configuring Oracle ASM disk groups before installing
Oracle RAC, or creating an Oracle RAC database.
If you intend to use a normal or high redundancy disk group, then you can further
protect your database against hardware failure by associating a set of disk devices
in a custom failure group. Failure groups define Oracle ASM disks that share a
common potential failure mechanism. By default, each device comprises its own
failure group.
If two disk devices in a normal redundancy disk group are attached to the same
Host Bus Adapter (HBA), then the disk group becomes unavailable if the controller
fails. The HBA in this example is a single point of failure. To protect against failures
of this type, you could use two HBA fabric paths, each with two disks, and define a
failure group for the disks attached to each HBA. This configuration enables the
disk group to tolerate the failure of one HBA fabric path.
Note: You can define custom failure groups during installation of Oracle Grid
Infrastructure. You can also define failure groups after installation of Oracle
Grid Infrastructure using the GUI tool ASMCA, the command-line tool
asmcmd, or SQL commands. If you define custom failure groups, then you
must specify a minimum of two failure groups for normal redundancy disk
groups and three failure groups for high redundancy disk groups.
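For example, the following SQL statement, run from an Oracle ASM instance, shows how two custom failure groups might be defined when creating a normal redundancy disk group. The disk group name, failure group names, and stamped disk names are placeholders for illustration only:
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1'
  FAILGROUP fg2 DISK '\\.\ORCLDISKDATA2', '\\.\ORCLDISKDATA3';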
8. If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
•
The disk devices must be owned by the user performing the grid installation.
•
All the devices in an Oracle ASM disk group must be the same size and have
the same performance characteristics.
•
Do not specify multiple partitions on a single physical disk as a disk group
device. Oracle ASM expects each disk group device to be on a separate
physical disk.
•
Although you can specify a logical volume as a device in an Oracle ASM disk
group, Oracle does not recommend their use because it adds a layer of
complexity that is unnecessary with Oracle ASM. Oracle recommends that if
you choose to use a logical volume manager, then use the logical volume
manager to represent a single logical unit number (LUN) without striping or
mirroring, so that you can minimize the effect on storage performance of the
additional storage layer.
See Also:
•
Oracle Automatic Storage Management Administrator's Guide for more
information about Oracle ASM file, quota, and failure groups
•
Storage Checklist for Oracle Grid Infrastructure (page 1-7)
•
Oracle Clusterware Storage Space Requirements (page 7-6) to
determine the minimum number of disks and the minimum disk space
requirements for installing Oracle Clusterware files, and installing the
starter database, where you have voting files in a separate disk group.
Related Topics:
About Oracle Extended Clusters (page 9-3)
An Oracle Extended Cluster consists of nodes that are located in
multiple locations called sites.
7.2 Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum
disk space requirements based on the redundancy type, for installing Oracle
Clusterware files, and installing the starter database, for various Oracle Cluster
deployments.
Total Storage Space for Database Files Required by Redundancy Type
The following tables list the space requirements for Oracle RAC Database data files for
multitenant and non-CDB deployments.
Table 7-1    Oracle ASM Disk Space Requirements for a Multitenant Container Database (CDB) with One Pluggable Database (PDB)

Redundancy Level | Minimum Number of Disks | Data Files | Recovery Files | Both File Types
External | 1 | 4.5 GB | 12.9 GB | 17.4 GB
Normal | 2 | 8.6 GB | 25.8 GB | 34.4 GB
High | 3 | 12.9 GB | 38.7 GB | 51.6 GB
Flex | 3 | 12.9 GB | 38.7 GB | 51.6 GB
Table 7-2    Oracle ASM Disk Space Requirements for Oracle Database (non-CDB)

Redundancy Level | Minimum Number of Disks | Data Files | Recovery Files | Both File Types
External | 1 | 2.7 GB | 7.8 GB | 10.5 GB
Normal | 2 | 5.2 GB | 15.6 GB | 20.8 GB
High | 3 | 7.8 GB | 23.4 GB | 31.2 GB
Flex | 3 | 7.8 GB | 23.4 GB | 31.2 GB
Total Oracle Clusterware Storage Space Required by Oracle Cluster Deployment
Type
If you create a disk group as part of the installation to install the OCR and voting files,
then the installer requires that you create these files on a disk group with at least 2 GB
of available space.
Based on the cluster configuration you want to install, the Oracle Clusterware space
requirements vary for different redundancy levels. The following tables list the space
requirements for each cluster configuration.
Table 7-3    Minimum Space Requirements for Oracle Standalone Cluster

Cluster Configuration | Redundancy Level | Space Required for DATA Disk Group Containing Oracle Clusterware Files (OCR and Voting Files) | Space Required for MGMT Disk Group Containing the GIMR and Oracle Clusterware Backup Files | Total Storage
Two nodes, 4 MB Allocation Unit (AU), one Oracle ASM disk | External | 1.4 GB | At least 37.6 GB for a cluster with 4 nodes or less; an additional 4.7 GB is required for clusters with 5 or more nodes | 39 GB
Two nodes, 4 MB Allocation Unit (AU), three Oracle ASM disks | Normal | 2.5 GB | 75.5 GB | 78 GB
Two nodes, 4 MB Allocation Unit (AU), five Oracle ASM disks | High | 3.6 GB | 113.4 GB | 117 GB
Two nodes, 4 MB Allocation Unit (AU), three Oracle ASM disks | Flex | 2.5 GB | 75.5 GB | 78 GB
Related Topics:
Storage Checklist for Oracle Grid Infrastructure (page 1-7)
Review the checklist for storage hardware and configuration
requirements for Oracle Grid Infrastructure installation.
7.3 About the Grid Infrastructure Management Repository
Every Oracle Standalone Cluster contains a Grid Infrastructure Management
Repository (GIMR), also known as the Management Database (MGMTDB).
The Grid Infrastructure Management Repository (GIMR) is a multitenant database
with a pluggable database (PDB) for the GIMR of each cluster. The GIMR stores the
following information about the cluster:
•
Real time performance data the Cluster Health Monitor collects
•
Fault, diagnosis, and metric data the Cluster Health Advisor collects
•
Cluster-wide events about all resources that Oracle Clusterware collects
•
CPU architecture data for Quality of Service Management (QoS)
•
Metadata required for Rapid Home Provisioning
The Oracle Standalone Cluster locally hosts the GIMR on an Oracle ASM disk group;
this GIMR is a multitenant database with a single pluggable database (PDB). You can
optionally choose to configure two separate Oracle ASM disk groups, one for storing
Oracle Cluster Registry (OCR) and voting files, and the other for the GIMR.
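After installation, you can check the status and current hosting node of the GIMR with the srvctl utility, for example:
C:\..\bin> srvctl status mgmtdb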
7.4 Restrictions for Disk Partitions Used By Oracle ASM
Be aware of the following restrictions when configuring disk partitions for use with
Oracle ASM:
•
With x64 Windows, you can create up to 128 primary partitions for each disk.
•
Oracle recommends that you limit the number of partitions you create on a single
disk to prevent disk contention. Therefore, you may prefer to use extended
partitions rather than primary partitions.
7.5 Preparing Your System to Use Oracle ASM for Shared Storage
To use Oracle ASM as the shared storage solution for Oracle Clusterware or Oracle
RAC files, you must perform certain tasks before you begin the software installation.
Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM
(page 7-8)
Identify existing disk groups and determine the free disk space that they
contain. Optionally, identify failure groups for the Oracle ASM disk
group devices.
Selecting Disks to use with Oracle ASM Disk Groups (page 7-9)
If you are sure that a suitable disk group does not exist on the system,
then install or identify appropriate disk devices to add to a new disk
group.
Specifying the Oracle ASM Disk Discovery String (page 7-9)
When an Oracle ASM instance is initialized, Oracle ASM discovers and
examines the contents of all of the disks that are in the paths that you
designated with values in the ASM_DISKSTRING initialization
parameter.
7.5.1 Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM
Identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Oracle ASM disk group devices.
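For example, you can query an Oracle ASM instance to list the existing disk groups and their free space (a minimal check; Oracle ASM Configuration Assistant displays the same information graphically):
SELECT name, total_mb, free_mb FROM V$ASM_DISKGROUP;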
If you intend to use a normal or high redundancy disk group, then you can further
protect your database against hardware failure by associating a set of disk devices in a
custom failure group. By default, each device comprises its own failure group.
However, if two disk devices in a normal redundancy disk group are attached to the
same SCSI controller, then the disk group becomes unavailable if the controller fails.
The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with
two disks, and define a failure group for the disks attached to each controller. This
configuration would enable the disk group to tolerate the failure of one SCSI
controller.
Note: If you define custom failure groups, then you must specify a minimum
of two failure groups for normal redundancy and three failure groups for high
redundancy.
See Also: Oracle Automatic Storage Management Administrator's Guide for
information about Oracle ASM disk discovery.
7.5.2 Selecting Disks to use with Oracle ASM Disk Groups
If you are sure that a suitable disk group does not exist on the system, then install or
identify appropriate disk devices to add to a new disk group.
Use the following guidelines when identifying appropriate disk devices:
•
All of the devices in an Oracle ASM disk group should be the same size and have
the same performance characteristics.
•
Do not specify multiple partitions on a single physical disk as a disk group device.
Oracle ASM expects each disk group device to be on a separate physical disk.
•
Nonshared logical partitions are not supported with Oracle RAC. To use logical
partitions for your Oracle RAC database, you must use shared logical volumes
created by a logical volume manager such as diskpart.msc.
•
Although you can specify a logical volume as a device in an Oracle ASM disk
group, Oracle does not recommend their use because it adds a layer of complexity
that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster
logical volume manager in case you decide to use a logical volume with Oracle
ASM and Oracle RAC.
Related Topics:
Preliminary Shared Disk Preparation (page 6-7)
Configuring Disk Partitions on Shared Storage (page 6-9)
7.5.3 Specifying the Oracle ASM Disk Discovery String
When an Oracle ASM instance is initialized, Oracle ASM discovers and examines the
contents of all of the disks that are in the paths that you designated with values in the
ASM_DISKSTRING initialization parameter.
The value for the ASM_DISKSTRING initialization parameter is an operating system–
dependent value that Oracle ASM uses to limit the set of paths that the discovery
process uses to search for disks. The exact syntax of a discovery string depends on the
platform, ASMLib libraries, and whether Oracle Exadata disks are used. The path
names that an operating system accepts are always usable as discovery strings.
The default value of ASM_DISKSTRING might not find all disks in all situations. If
your site is using a third-party vendor ASMLib, then the vendor might have discovery
string conventions that you must use for ASM_DISKSTRING. In addition, if your
installation uses multipathing software, then the software might place pseudo-devices
in a path that is different from the operating system default.
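For example, if your stamped disks are not found by the default search path, you can set the discovery string from the Oracle ASM instance. The string shown is an illustration only; use the path where your multipathing software or vendor library presents the devices:
ALTER SYSTEM SET ASM_DISKSTRING = '\\.\ORCLDISK*' SCOPE=BOTH;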
See Also:
•
Oracle Automatic Storage Management Administrator's Guide for more
information about the initialization parameter ASM_DISKSTRING
•
See "Oracle ASM and Multipathing" in Oracle Automatic Storage
Management Administrator's Guide for information about configuring
Oracle ASM to work with multipathing, and consult your multipathing
vendor documentation for details.
7.6 Marking Disk Partitions for Oracle ASM Before Installation
The only partitions that OUI displays for Windows systems are logical drives that are
on disks that do not contain a primary partition, and have been marked (or stamped)
with asmtool or by Oracle Automatic Storage Management (Oracle ASM) Filter
Driver.
If you chose not to use Oracle ASM Filter Driver (Oracle ASMFD) for configuring and
marking the disks to use with Oracle ASM, then you must create disk partitions and
use the asmtool utility to mark the disk partitions prior to installing Oracle Grid
Infrastructure.
If you are using ASMLIB, then you can configure the disks before installation either by
using asmtoolg (graphical user interface (GUI) version) or using asmtool
(command line version). You also have the option of using the asmtoolg utility
during Oracle Grid Infrastructure for a cluster installation.
All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK
followed by a user-defined prefix (the default is DATA), and by a disk number for
identification purposes. You can use them as raw devices in the Oracle ASM instance
by specifying a name \\.\ORCLDISKprefixn, where prefixn either can be DATA,
or a value you supply, and where n represents the disk number.
The asmtoolg and asmtool utilities only work on partitioned disks; you cannot use
Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the
disks after installation. These utilities are installed automatically as part of Oracle Grid
Infrastructure.
Note: If User Account Control (UAC) is enabled, then running asmtoolg or
asmtool requires administrator-level permissions.
Using asmtoolg (Graphical User Interface) to Mark Disks (page 7-11)
Use asmtoolg (GUI version) to create device names; use asmtoolg to
add, change, delete, and examine the devices available for use in Oracle
ASM.
Using asmtoolg to Remove Disk Stamps (page 7-11)
You can use asmtoolg (GUI version) to delete disk stamps.
asmtool Command Line Reference (page 7-12)
asmtool is a command-line interface for marking (or stamping) disks to
be used with Oracle ASM.
7.6.1 Using asmtoolg (Graphical User Interface) to Mark Disks
Use asmtoolg (GUI version) to create device names; use asmtoolg to add, change,
delete, and examine the devices available for use in Oracle ASM.
1. In the directory where you unzipped the Oracle Grid Infrastructure image files, go to
the bin\asmtool folder and double-click asmtoolg.
If User Account Control (UAC) is enabled, then you must create a desktop shortcut to
a command window. Right-click the shortcut, select Run as Administrator, and
launch asmtoolg.
2. Select the Add or change label option, and then click Next.
asmtoolg shows the devices available on the system. Unrecognized disks have a
status of "Candidate device", stamped disks have a status of "Stamped ASM
device," and disks that have had their stamp deleted have a status of "Unstamped
ASM device." The tool also shows disks that are recognized by Windows as a file
system (such as NTFS). These disks are not available for use as Oracle ASM disks,
and cannot be selected. In addition, Microsoft Dynamic disks are not available for
use as Oracle ASM disks.
3. On the Stamp Disks window, select the disks that you want to use with Oracle ASM.
For ease of use, Oracle ASM can generate unique stamps for all of the devices
selected for a given prefix. The stamps are generated by concatenating a number
with the prefix specified. For example, if the prefix is DATA, then the first Oracle
ASM link name is ORCLDISKDATA0.
You can also specify the stamps of individual devices.
4. Optional: Select a disk to edit the individual stamp (Oracle ASM link name).
5. Click Next.
6. Click Finish.
See Also: Configuring Disk Partitions on Shared Storage (page 6-9) to create
disk partitions for use with the Oracle ASM instance.
7.6.2 Using asmtoolg to Remove Disk Stamps
You can use asmtoolg (GUI version) to delete disk stamps.
1. In the directory where you unzipped the Oracle Grid Infrastructure image files, go to
the Grid_home\bin folder and double-click asmtoolg.
If User Account Control (UAC) is enabled, then you must create a desktop shortcut to
a command window. Open the command window using the Run as Administrator
option on the right-click context menu, and then launch asmtoolg.
2. Select the Delete labels option, then click Next.
The delete option is only available if disks exist with stamps. The delete screen
shows all stamped Oracle ASM disks.
3. On the Delete Stamps screen, select the disks to unstamp.
4. Click Next.
5. Click Finish.
7.6.3 asmtool Command Line Reference
asmtool is a command-line interface for marking (or stamping) disks to be used with
Oracle ASM.
-add
Adds or changes stamps. You must specify the hard disk, partition, and new stamp
name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must
specify the -force option.
Example:
asmtool -add [-force]
\Device\Harddisk1\Partition1 ORCLDISKASM0
\Device\Harddisk2\Partition1 ORCLDISKASM2
...

-addprefix
Adds or changes stamps using a common prefix to generate stamps automatically. The
stamps are generated by concatenating a number with the prefix specified. If the disk
is a raw device or has an existing Oracle ASM stamp, then you must specify the
-force option.
Example:
asmtool -addprefix ORCLDISKASM [-force]
\Device\Harddisk1\Partition1
\Device\Harddisk2\Partition1
...

-create
Creates an Oracle ASM disk device from a file instead of a partition. Note: Usage of
this command is not supported for production environments.
Example:
asmtool -create \\server\share\file 1000
asmtool -create D:\asm\asmfile02.asm 240

-list
List available disks. The stamp, windows device name, and disk size in MB are shown.
Example:
asmtool -list

-delete
Removes existing stamps from disks.
Example:
asmtool -delete ORCLDISKASM0 ORCLDISKASM1...
If User Account Control (UAC) is enabled, then you must create a desktop shortcut to a
command window. Open the command window using the Run as Administrator option
on the right-click context menu, and then launch asmtool.
Note: If you use -add, -addprefix, or -delete, asmtool notifies the
Oracle ASM instance on the local node and on other nodes in the cluster, if
available, to rescan the available disks.
Related Topics:
Configuring Disk Partitions on Shared Storage (page 6-9)
7.7 Creating and Using Oracle ASM Credentials File
Review this information to create an Oracle ASM credentials file.
An Oracle ASM Storage Client does not have an Oracle ASM instance running on the
nodes and uses Oracle ASM storage services in a different client cluster.
The Oracle ASM Client Cluster requires Grid Naming Service (GNS) to be configured
in the Oracle ASM Server Cluster.
1. Connect to any Oracle ASM instance as SYSASM user and execute the query:
ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';
2. From the Grid_home\bin directory on the Storage Server, run the following
command on one of the member nodes, where credential_file is the name and path
location of the Oracle ASM credentials file you create:
cd Grid_home\bin
C:\..\bin> asmcmd mkcc client_cluster_name credential_file
For example:
asmcmd mkcc clientcluster1 C:\grid\clientcluster1_credentials.xml
3. Copy the Oracle ASM credentials file to a secure path on the client cluster node
where you run the client cluster installation.
The Oracle Installation user must have permissions to access that file. Oracle
recommends that no other user is granted permissions to access the Oracle ASM
credentials file. During installation, you are prompted to provide a path to the file.
Note:
•
The Oracle ASM credentials file can be used only once. If an Oracle ASM
Storage Client is configured and deconfigured, you must create a new
Oracle ASM credentials file.
•
If the Oracle ASM credentials file is used to configure the client cluster,
then it cannot be shared or reused to configure another client cluster.
7.8 Configuring Oracle Automatic Storage Management Cluster File
System
If you want to install Oracle RAC on Oracle ACFS, you must first create the Oracle
home directory in Oracle ACFS.
You can also create a General Purpose File System configuration of ACFS using
ASMCA. Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation
(Oracle Clusterware and Oracle Automatic Storage Management).
The compatibility parameters COMPATIBLE.ASM and COMPATIBLE.ADVM must be set
to 11.2 or higher for the disk group to contain an Oracle ADVM volume.
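For example, you can check or raise these attributes on an existing disk group from an Oracle ASM instance. The disk group name data is a placeholder, and note that raising a compatibility attribute cannot be reversed:
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '12.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '12.2';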
1. Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle
ASM).
2. Go to the bin directory in the Grid home, for example:
C:\> cd app\12.2.0\grid\bin
3. Ensure that the Oracle Grid Infrastructure installation owner has read and write
permissions on the storage mount point you want to use.
For example, to check the permissions on the mount point E:\data\acfsmounts:
C:\..\bin> dir /Q E:\data\acfsmounts
4. Start ASMCA as the Oracle Installation user for Oracle Grid Infrastructure, for
example:
C:\..\bin> asmca
The Configure ASM: Disk Groups page is displayed. The Configure ASM: ASM
Disk Groups page shows you the Oracle ASM disk group you created during
installation.
5. Click the ASM Cluster File Systems tab.
6. On the ASM Cluster File Systems page, right-click the Data disk, then select Create
ACFS for Database Use.
7. In the Create ACFS for Database window, enter the following information:
•
Volume Name: Enter the name of the database home. The name must be
unique in your enterprise, for example: racdb_01
•
Mount Point: Enter the directory path or logical drive letter for the mount
point. For example: E:\data\acfsmounts\racdb_01
Make a note of this mount point for future reference.
•
Size (GB): Enter in gigabytes the size you want the database home to be.
•
Owner Name: Enter the name of the Oracle Installation user you plan to use to
install the database. For example: oracle1
Select Automatically run configuration commands to run ASMCA configuration
commands automatically. To use this option, you must provide the Administrator
user credentials on the ASMCA Settings page.
8. Click OK when you have entered the required information.
9. If you did not select to run configuration commands automatically, then run the
script generated by Oracle ASM Configuration Assistant as the Local
Administrator user.
On an Oracle Clusterware environment, the script registers ACFS as a resource
managed by Oracle Clusterware. Registering ACFS as a resource helps Oracle
Clusterware to mount the ACFS automatically in the proper order when ACFS is
used for an Oracle RAC database home.
10. During Oracle RAC installation, ensure that you or the database administrator who
installs Oracle RAC selects for the Oracle home the mount point you provided in
the Mount Point field (in the preceding example, this was E:\data\acfsmounts
\racdb_01).
See Also:
•
Oracle Automatic Storage Management Administrator's Guide for more
information about configuring and managing your storage with Oracle
ACFS
•
Oracle ACFS and Oracle ADVM (page 6-3)
8 Configuring Direct NFS Client for Oracle RAC Data Files
Direct NFS Client is an interface for NFS systems provided by Oracle.
About Direct NFS Client Storage (page 8-2)
Direct NFS Client integrates the NFS client functionality directly in the
Oracle software to optimize the I/O path between Oracle and the NFS
server. This integration can provide significant performance
improvements.
Creating an oranfstab File for Direct NFS Client (page 8-3)
If you use Direct NFS Client, then you must create a configuration file,
oranfstab, to specify the options, attributes, and parameters that
enable Oracle Database to use Direct NFS Client.
Configurable Attributes for the oranfstab File (page 8-5)
You can configure various settings in the oranfstab file.
Mounting NFS Storage Devices with Direct NFS Client (page 8-7)
Direct NFS Client determines mount point settings for NFS storage
devices based on the configuration information in oranfstab. Direct NFS
Client uses the first matching entry as the mount point.
Specifying Network Paths for a NFS Server (page 8-7)
Direct NFS Client can use up to four network paths defined in the
oranfstab file for an NFS server.
Enabling Direct NFS Client (page 8-8)
To enable Direct NFS Client, you must add an oranfstab file to the
Oracle_home\dbs directory and modify the related DLL files used by
the Oracle Database software.
Performing Basic File Operations Using the ORADNFS Utility (page 8-8)
ORADNFS is a utility which enables the database administrators to
perform basic file operations over Direct NFS Client on Microsoft
Windows platforms.
Monitoring Direct NFS Client Usage (page 8-9)
Use dynamic performance views to monitor Direct NFS Client usage.
Disabling Oracle Disk Management Control of NFS for Direct NFS Client
(page 8-9)
If you no longer want to use the Direct NFS client, you can disable it.
8.1 About Direct NFS Client Storage
Direct NFS Client integrates the NFS client functionality directly in the Oracle
software to optimize the I/O path between Oracle and the NFS server. This integration
can provide significant performance improvements.
Direct NFS Client supports NFSv3, NFSv4 and NFSv4.1 protocols to access the NFS
server. Direct NFS Client also simplifies, and in many cases automates, the
performance optimization of the NFS client configuration for database workloads.
Starting with Oracle Database 12c release 2 (12.2), Windows Direct NFS Client
supports all widely accepted NFS path formats including UNIX-style NFS paths and
NFS Version 4 protocol.
Starting with Oracle Database 12c release 2, when you enable Direct NFS, you can also
enable the Direct NFS dispatcher. The Direct NFS dispatcher consolidates the number
of TCP connections that are created from a database instance to the NFS server. In
large database deployments, using Direct NFS dispatcher improves scalability and
network performance. Parallel NFS deployments also require a large number of
connections. Hence, the Direct NFS dispatcher is recommended with Parallel NFS
deployments too.
Direct NFS Client tunes itself to make optimal use of available resources and enables
the storage of data files on supported NFS servers. Direct NFS Client obtains NFS
mount points from the oranfstab file.
Note: Use NFS servers supported for Oracle RAC. Check My Oracle Support,
as described in "Checking Hardware and Software Certification on My Oracle
Support (page 3-6)" for support information.
Direct NFS Client Requirements
•
Direct NFS cannot provide service to NFS servers with write size values (wtmax)
less than 32768.
•
The Oracle files resident on the NFS server that are accessed by Direct NFS Client
can also be accessed through a third-party NFS client. The volume must be
mounted through Common Internet File System (CIFS) or kernel NFS to enable
regular Windows utilities and commands, such as copy, to access the
database files in the remote location.
•
Volumes mounted through CIFS cannot be used for storing Oracle database files
without configuring Direct NFS Client. The atomic write requirements needed for
database writes are not guaranteed by the CIFS protocol; consequently, CIFS
can be used only for operating system-level access, for example, for commands such as copy.
•
To enable Oracle Database to use Direct NFS Client, the NFS file systems must be
mounted and available before you start installation. Direct NFS Client manages
settings after installation.
If Oracle Database cannot open an NFS server using Direct NFS Client, then an
informational message is logged into the Oracle alert log. A trace file is also
created, indicating that Direct NFS Client could not connect to an NFS server.
•
Some NFS file servers require NFS clients to connect using reserved ports. If your
filer is running with reserved port checking, then you must disable it for Direct
NFS to operate. To disable reserved port checking, consult your NFS file server
documentation.
•
For NFS servers that restrict port range, you can use the insecure option to enable
clients other than an Administrator user to connect to the NFS server.
Alternatively, you can disable Direct NFS Client.
•
You can have only one active Direct NFS Client implementation for each instance.
Using Direct NFS Client on an instance will prevent another Direct NFS Client
implementation.
See Also:
•
Oracle Database Reference for information on setting the
enable_dnfs_dispatcher parameter in the initialization parameter
file to enable Direct NFS dispatcher
•
Oracle Database Administrator’s Guide for guidelines for managing Oracle
data files created with Direct NFS client.
•
Oracle Database Performance Tuning Guide for performance benefits of
enabling Parallel NFS and Direct NFS dispatcher
•
Oracle Automatic Storage Management Administrator's Guide for guidelines
regarding managing Oracle Database data files created with Direct NFS
Client or kernel NFS
8.2 Creating an oranfstab File for Direct NFS Client
If you use Direct NFS Client, then you must create a configuration file, oranfstab, to
specify the options, attributes, and parameters that enable Oracle Database to use
Direct NFS Client.
Direct NFS Client looks for the mount point entries in oranfstab. It uses the first
matched entry as the mount point.
1. Create an oranfstab file and specify configurable attributes for each NFS server
that the Direct NFS Client accesses.
The mount point specified in the oranfstab file represents the local path where
the database files would reside normally, as if Direct NFS Client was not used. For
example, if the location for the data files is C:\app\oracle\oradata\orcl and
Direct NFS Client is not used, then specify C:\app\oracle\oradata\orcl as
NFS virtual mount point in the corresponding oranfstab file.
2. Save the file to the Oracle_home\dbs directory.
When the oranfstab file is placed in Oracle_home\dbs, the entries in the file
are specific to a single database.
3. If you have a nonshared Oracle home, copy the oranfstab file to the
Oracle_home\dbs directory on all nodes in the cluster.
All instances that use the shared Oracle home use the same Oracle_home\dbs
\oranfstab file.
For a nonshared Oracle home, you must keep the oranfstab file synchronized on all
the nodes.
Note:
•
Direct NFS Client ignores a uid or gid value of 0. The exported path
from the NFS server must be accessible for read/write/execute by the
user with the uid, gid specified in oranfstab. If neither uid nor gid is
listed, then the exported path must be accessible by the user with uid:
65534 and gid:65534.
•
If you remove an NFS path from oranfstab, you must make the change
in all oranfstab files used by the Oracle RAC database. Then, you must
restart the database for the change to be effective. The mount point that
you use for the file system must be identical on each node.
The following examples show three possible NFS server entries in oranfstab. A
single oranfstab can have multiple NFS server entries.
Example 8-1    oranfstab File Using Local and Path NFS Server Entries
The following example of an oranfstab file shows an NFS server entry, where the
NFS server, MyDataServer1, uses two network paths specified with IP addresses.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
nfs_version: nfsv3
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORADATA1
Example 8-2    oranfstab File Using Network Names in Place of IP Addresses, with Multiple Exports, management, and community
The following example of an oranfstab file shows an NFS server entry, where the
NFS server, MyDataServer2, uses four network paths specified by the network
interface to use, or the network connection name. Multiple export paths are also used
in this example.
server: MyDataServer2
local: LocalInterface1
path: NfsPath1
local: LocalInterface2
path: NfsPath2
local: LocalInterface3
path: NfsPath3
local: LocalInterface4
path: NfsPath4
nfs_version: nfsv4
export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORADATA2
export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORADATA3
management: MgmtPath1
community: private
Example 8-3    oranfstab File Using Kerberos Authentication with Direct NFS Export
In this example, when specified, the security parameter overrides the value of the
security_default parameter.
server: nfsserver
local: 198.51.100.02
path: 10.0.0.0
local: 198.51.100.03
path: 10.0.0.3
export: /private/oracle1/logs mount: C:\APP\ORACLE\ORADATA\logs security: krb5
export: /private/oracle1/data mount: C:\APP\ORACLE\ORADATA\data security: krb5p
export: /private/oracle1/archive mount: C:\APP\ORACLE\ORADATA\archive security: sys
export: /private/oracle1/data1 mount: C:\APP\ORACLE\ORADATA\data1
security_default: krb5i
Related Topics:
Enabling Direct NFS Client (page 8-8)
To enable Direct NFS Client, you must add an oranfstab file to the
Oracle_home\dbs directory and modify the related DLL files used by
the Oracle Database software.
Configurable Attributes for the oranfstab File (page 8-5)
You can configure various settings in the oranfstab file.
8.3 Configurable Attributes for the oranfstab File
You can configure various settings in the oranfstab file.
Table 8-1    Configurable Attributes for the oranfstab File
server
The NFS server name.
path
Up to four network paths to the NFS server, specified either by
internet protocol (IP) address, or by name, as displayed using
the ifconfig command on the NFS server.
local
Up to 4 paths on the database host, specified by IP address or by
name, as displayed using the ipconfig command on the
database host.
export
The exported path from the NFS server. Use a UNIX-style path.
mount
The corresponding local mount point for the exported volume.
The path can use a Windows-style path, or the following path formats:
• nfs://server/export/path/file
• server:/export/path/file
• //server/export/path/file
mnt_timeout
(Optional) The time (in seconds) for which Direct NFS Client
should wait for a successful mount before timing out. The
default timeout is 10 minutes (600).
uid
(Optional) The UNIX user ID to be used by Direct NFS Client to
access all NFS servers listed in oranfstab. The default value is
uid:65534, which corresponds to user:nobody on the NFS
server.
gid
(Optional) The UNIX group ID to be used by Direct NFS Client
to access all NFS servers listed in oranfstab. The default value
is gid:65534, which corresponds to group:nogroup on the
NFS server.
nfs_version
(Optional) The NFS protocol version used by the Direct NFS
Client. Possible values are NFSv3, NFSv4, NFSv4.1, and
pNFS. The default version is NFSv3. To specify NFSv4.x,
you must configure the nfs_version parameter accordingly in
the oranfstab file. Specify nfs_version as pNFS, if you
want to use Direct NFS with Parallel NFS.
security_default
(Optional) The default security mode applicable for all the
exported NFS server paths for a server entry. sys is the default
value. See the description of the security parameter for the
supported security levels for the security_default
parameter.
security
(Optional) The security level, if you want to enable security
using Kerberos authentication protocol with Direct NFS Client.
This parameter can be specified per export-mount pair. The
supported security levels for the security_default and
security parameters are:
•
sys: UNIX level security AUTH_UNIX authentication
based on user identifier (UID) and group identifier (GID)
values. This is the default value for security parameters.
•
krb5: Direct NFS runs with plain Kerberos authentication.
Server is authenticated as the real server which it claims to
be.
•
krb5i: Direct NFS runs with Kerberos authentication and
NFS integrity. Server is authenticated and each of the
message transfers is checked for integrity.
•
krb5p: Direct NFS runs with Kerberos authentication and
NFS privacy. Server is authenticated, and all data is
completely encrypted.
The security parameter, if specified, takes precedence over
the security_default parameter. If neither of these
parameters is specified, then sys is the default authentication.
For NFS server Kerberos security setup, review the relevant NFS
server documentation. For Kerberos client setup, review the
relevant operating system documentation.
management
Enables Direct NFS Client to use the management interface for
SNMP queries. You can use this parameter if SNMP is running
on separate management interfaces on the NFS server. The
default value is server.
community
The community string for use in SNMP queries. Default value is
public.
See Also: "Limiting Asynchronous I/O in NFS Server Environments" in
Oracle Database Performance Tuning Guide
Related Topics:
Creating an oranfstab File for Direct NFS Client (page 8-3)
If you use Direct NFS Client, then you must create a configuration file,
oranfstab, to specify the options, attributes, and parameters that
enable Oracle Database to use Direct NFS Client.
8.4 Mounting NFS Storage Devices with Direct NFS Client
Direct NFS Client determines mount point settings for NFS storage devices based on
the configuration information in oranfstab. Direct NFS Client uses the first matching
entry as the mount point.
If Oracle Database cannot open an NFS server using Direct NFS Client, then an error
message is written into the Oracle alert and trace files indicating that Direct NFS Client
could not be established.
Note: You can have only one active Direct NFS Client implementation for
each instance. Using Direct NFS Client on an instance will prevent another
Direct NFS Client implementation.
Direct NFS Client requires an NFS server supporting NFS read/write buffers of at
least 16384 bytes.
Direct NFS Client issues writes at wtmax granularity to the NFS server. Direct NFS
Client does not serve an NFS server with a wtmax less than 16384. Oracle recommends
that you use the value 32768.
See Also: "Supported Storage Options for Oracle Grid Infrastructure and
Oracle RAC (page 6-2)" for a list of the file types that are supported with
Direct NFS Client.
8.5 Specifying Network Paths for a NFS Server
Direct NFS Client can use up to four network paths defined in the oranfstab file for
an NFS server.
Direct NFS Client performs load balancing across all specified paths. If a specified
path fails, then Direct NFS Client reissues all outstanding requests over any remaining
paths.
Note: You can have only one active Direct NFS Client implementation for
each instance. Using Direct NFS Client on an instance prevents the use of
another Direct NFS Client implementation.
See Also: Creating an oranfstab File for Direct NFS Client (page 8-3) for
examples of configuring network paths for Direct NFS Client attributes in an
oranfstab file.
8.6 Enabling Direct NFS Client
To enable Direct NFS Client, you must add an oranfstab file to the Oracle_home
\dbs directory and modify the related DLL files used by the Oracle Database
software.
1. Create an oranfstab file.
2. Replace the standard ODM library, oraodm12.dll, with the ODM NFS library,
oranfsodm12.dll. Oracle Database uses the oranfsodm12.dll library to enable Direct NFS
Client. To replace the ODM library, complete the following steps:
a. Change directory to Oracle_home\bin.
b. Shut down the Oracle Database instance on a node using the Server Control
Utility (SRVCTL).
c. Enter the following commands:
copy oraodm12.dll oraodm12.dll.orig
copy /Y oranfsodm12.dll oraodm12.dll
d. Restart the Oracle Database instance using SRVCTL.
e. Repeat Step 2.a to Step 2.d for each node in the cluster.
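For example, you might stop and restart the instance on each node with commands similar to the following, where the database name orcl and node name racnode1 are placeholders for your environment:
C:\..\bin> srvctl stop instance -db orcl -node racnode1
C:\..\bin> srvctl start instance -db orcl -node racnode1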
Related Topics:
Creating an oranfstab File for Direct NFS Client (page 8-3)
If you use Direct NFS Client, then you must create a configuration file,
oranfstab, to specify the options, attributes, and parameters that
enable Oracle Database to use Direct NFS Client.
8.7 Performing Basic File Operations Using the ORADNFS Utility
ORADNFS is a utility that enables database administrators to perform basic file
operations over Direct NFS Client on Microsoft Windows platforms.
ORADNFS is a multi-call binary, which is a single binary that acts like many utilities.
You must be a member of the local ORA_DBA group to use ORADNFS. A valid copy of
the oranfstab configuration file must be present in Oracle_home\dbs for
ORADNFS to operate.
•
To execute commands using ORADNFS you issue the command as an argument
on the command line.
The following command prints a list of commands available with ORADNFS:
C:\> oradnfs help
To display the list of files in the NFS directory mounted as C:\ORACLE\ORADATA,
use the following command:
C:\> oradnfs ls C:\ORACLE\ORADATA\ORCL
8.8 Monitoring Direct NFS Client Usage
Use dynamic performance views to monitor Direct NFS Client usage.
•
Use the following global dynamic performance views for managing Direct NFS
Client usage with your Oracle RAC database:
–
GV$DNFS_SERVERS: Lists the servers that are accessed using Direct NFS
Client.
–
GV$DNFS_FILES: Lists the files that are currently open using Direct NFS
Client.
–
GV$DNFS_CHANNELS: Lists the open network paths, or channels, to servers
for which Direct NFS Client is providing files.
–
GV$DNFS_STATS: Lists performance statistics for Direct NFS Client.
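For example, the following query lists the NFS servers that each instance in the cluster is accessing through Direct NFS Client (a minimal sketch; the views contain additional columns):
SELECT inst_id, svrname, dirname FROM GV$DNFS_SERVERS;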
8.9 Disabling Oracle Disk Management Control of NFS for Direct NFS
Client
If you no longer want to use the Direct NFS client, you can disable it.
1. Log in as the Oracle Grid Infrastructure software owner.
2. Restore the original oraodm12.dll file.
a. Change directory to Oracle_home\bin.
b. Shut down the Oracle Database instance on a node using the Server Control
Utility (SRVCTL).
c. Enter the following commands:
copy oraodm12.dll.orig oraodm12.dll
copy /Y oraodm12.dll oranfsodm12.dll
d. Restart the Oracle Database instance using SRVCTL.
e. Repeat Step 2.a to Step 2.d for each node in the cluster.
3. Remove the oranfstab file.
9 Installing Oracle Grid Infrastructure for a Cluster
Review this information for installation and deployment options for Oracle Grid
Infrastructure.
Oracle Grid Infrastructure consists of Oracle Clusterware and Oracle Automatic
Storage Management (Oracle ASM). If you plan afterward to install Oracle Database
with Oracle Real Application Clusters (Oracle RAC), then this is phase one of a two-phase installation.
Oracle Database and Oracle Grid Infrastructure installation software is available in
multiple media, and can be installed using several options. The Oracle Grid
Infrastructure software is available as an image, available for download from the
Oracle Technology Network website, or the Oracle Software Delivery Cloud portal. In
most cases, you use the graphical user interface (GUI) provided by Oracle Universal
Installer to install the software. You can also use Oracle Universal Installer to complete
silent mode installations, without using the GUI.
About Image-Based Oracle Grid Infrastructure Installation (page 9-2)
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation
and configuration of Oracle Grid Infrastructure software is simplified
with image-based installation.
Understanding Cluster Configuration Options (page 9-2)
Oracle Grid Infrastructure 12c release 2 introduced new cluster
configuration options.
About Default File Permissions Set by Oracle Universal Installer (page 9-4)
During Oracle Database installation, by default Oracle Universal
Installer installs software in the ORACLE_HOME directory. Oracle
Universal Installer sets the permissions for this directory, and for all files
and directories under this directory.
Installing Oracle Grid Infrastructure for a New Cluster (page 9-4)
Complete this procedure to install Oracle Grid Infrastructure (Oracle
Clusterware and Oracle ASM) on your cluster.
Installing Oracle Grid Infrastructure Using a Cluster Configuration File
(page 9-7)
During installation of Oracle Grid Infrastructure, you have the option of
either of providing cluster configuration information manually, or of
using a cluster configuration file.
Installing Only the Oracle Grid Infrastructure Software (page 9-8)
This installation option requires manual postinstallation steps to enable
the Oracle Grid Infrastructure software.
Confirming Oracle Clusterware Function (page 9-12)
After installation, use the crsctl utility to verify Oracle Clusterware
installation is installed and running correctly.
Confirming Oracle ASM Function for Oracle Clusterware Files (page 9-12)
Confirm Oracle ASM is running after installing Oracle Grid
Infrastructure.
Understanding Offline Processes in Oracle Grid Infrastructure (page 9-13)
Oracle Grid Infrastructure provides required resources for various
Oracle products and components. Some of those products and
components are optional, so you can install and enable them after
installing Oracle Grid Infrastructure.
9.1 About Image-Based Oracle Grid Infrastructure Installation
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and
configuration of Oracle Grid Infrastructure software is simplified with image-based
installation.
To install Oracle Grid Infrastructure, create the new Grid home with the necessary
user group permissions, and then extract the image file into the newly-created Grid
home, and run the setup wizard to register the Oracle Grid Infrastructure product.
Using image-based installation, you can do the following:
•
Install and upgrade Oracle Grid Infrastructure for cluster configurations.
•
Install Oracle Grid Infrastructure for a standalone server (Oracle Restart).
•
Install only Oracle Grid Infrastructure software, and register the software with
Oracle inventory.
•
Add nodes to your existing cluster, if the Oracle Grid Infrastructure software is
already installed or configured.
This installation feature streamlines the installation process and supports automation
of large-scale custom deployments. You can also use this installation method for
deployment of customized images, after you patch the base-release software with the
necessary Patch Set Updates (PSUs) and patches.
Note: You must extract the image software into the directory where you want
your Grid home to be located, and then run the gridSetup.bat script to
start the Grid Infrastructure setup wizard. Ensure that the Grid home
directory path you create is in compliance with the Oracle Optimal Flexible
Architecture recommendations.
9.2 Understanding Cluster Configuration Options
Oracle Grid Infrastructure 12c release 2 introduced new cluster configuration options.
About Oracle Standalone Clusters (page 9-3)
An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure
services and Oracle ASM locally and requires direct access to shared
storage.
About Oracle Extended Clusters (page 9-3)
An Oracle Extended Cluster consists of nodes that are located in
multiple locations called sites.
9.2.1 About Oracle Standalone Clusters
An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure services and Oracle
ASM locally and requires direct access to shared storage.
Oracle Standalone Clusters contain two types of nodes arranged in a hub and spoke
architecture: Hub Nodes and Leaf Nodes. The number of Hub Nodes in an Oracle
Standalone Cluster can be as many as 64. The number of Leaf Nodes can be many
more. Hub Nodes and Leaf Nodes can host different types of applications. Oracle
Standalone Cluster Hub Nodes are tightly connected, and have direct access to shared
storage. Leaf Nodes do not require direct access to shared storage. Hub Nodes can run
in an Oracle Standalone Cluster configuration without having any Leaf Nodes as
cluster member nodes, but Leaf Nodes must be members of a cluster with a pool of
Hub Nodes. Shared storage is locally mounted on each of the Hub Nodes, with an
Oracle ASM instance available to all Hub Nodes.
Oracle Standalone Clusters host Grid Infrastructure Management Repository (GIMR)
locally. The GIMR is a multitenant database, which stores information about the
cluster. This information includes the real time performance data the Cluster Health
Monitor collects, and includes metadata required for Rapid Home Provisioning.
When you deploy an Oracle Standalone Cluster, you can also choose to configure it as
an Oracle Extended cluster. An Oracle Extended Cluster consists of nodes that are
located in multiple locations or sites.
9.2.2 About Oracle Extended Clusters
An Oracle Extended Cluster consists of nodes that are located in multiple locations
called sites.
When you deploy an Oracle Standalone Cluster, you can also choose to configure the
cluster as an Oracle Extended Cluster. You can extend an Oracle RAC cluster across
two, or more, geographically separate sites, each equipped with its own storage. In the
event that one of the sites fails, the other site acts as an active standby.
Both Oracle ASM and the Oracle Database stack, in general, are designed to use
enterprise-class shared storage in a data center. Fibre Channel technology, however,
enables you to distribute compute and storage resources across two or more data
centers, and connect them through Ethernet cables and Fibre Channel, for compute
and storage needs, respectively.
You can configure an Oracle Extended Cluster when you install Oracle Grid
Infrastructure. You can also do so post installation using the ConvertToExtended
script. You manage your Oracle Extended Cluster using CRSCTL.
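For example, you can check whether an existing cluster is configured as an extended cluster with a CRSCTL command similar to the following (verify the exact syntax for your release):
C:\..\bin> crsctl get cluster extended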
Oracle recommends that you deploy Oracle Extended Clusters with normal
redundancy disk groups. You can assign nodes and failure groups to sites. Sites
contain failure groups, and failure groups contain disks. For normal redundancy disk
groups, a disk group provides one level of failure protection, and can tolerate the
failure of either a site or a failure group.
The following conditions apply when you select redundancy levels for Oracle
Extended Clusters:
Table 9-1    Oracle ASM Disk Group Redundancy Levels for Oracle Extended Clusters

Redundancy Level | Number of OCR and Voting Files Disk Groups | Number of OCR Backup and GIMR Disk Groups
Normal redundancy | 1 failure group per data site, 1 quorum failure group | 1 failure group per data site
Flex redundancy | 1 failure group per data site, 1 quorum failure group | Three failure groups, with 1 failure group per site
High redundancy | Not supported | Three failure groups, with 1 failure group per site
Related Topics:
Identifying Storage Requirements for Using Oracle ASM for Shared Storage
(page 7-2)
Before installing Oracle Grid Infrastructure, you must identify and
determine how many devices are available for use by Oracle ASM, the
amount of free disk space available on each disk, and the redundancy
level to use with Oracle ASM.
See Also: Oracle Clusterware Administration and Deployment Guide
9.3 About Default File Permissions Set by Oracle Universal Installer
During Oracle Database installation, by default Oracle Universal Installer installs
software in the ORACLE_HOME directory. Oracle Universal Installer sets the
permissions for this directory, and for all files and directories under this directory.
For the ORACLE_HOME of Oracle Grid Infrastructure, OUI grants the following
permissions to the groups and users:
•
Full control - Administrators, SYSTEM, ORA_GRID_LISTENERS, Oracle
Installation User, Oracle Home User
•
Read, Execute, and List Contents - Authenticated Users
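For example, you can review the permissions that are set on the Grid home with the Windows icacls command, where the Grid home path shown is an illustration only:
C:\> icacls D:\app\12.2.0\grid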
See Also: Oracle Database Platform Guide for Microsoft Windows
9.4 Installing Oracle Grid Infrastructure for a New Cluster
Complete this procedure to install Oracle Grid Infrastructure (Oracle Clusterware and
Oracle ASM) on your cluster.
At any time during installation, if you have a question about what you are being asked
to do, or what input you are required to provide during installation, then click the
Help button on the installer page.
You should have your network information, storage information, and operating
system users and groups available to you before you start installation.
1. Log in to Windows as the installation user for Oracle Grid Infrastructure, which
must be a member of the Administrators users group.
2. Create the Grid home directory.
For example:
mkdir D:\app\12.2.0\grid
Ensure that the Grid home directory path you create is in compliance with the
Oracle Optimal Flexible Architecture recommendations.
3. Download the Oracle Grid Infrastructure image files and copy the files to the Grid
home.
For example:
cd D:\app\12.2.0\grid
unzip -q download_location\grid_home.zip
Unzip the installation image files only into the Grid home directory that you created.
Note: Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and installed on all
other nodes in the cluster.
4. Start the Oracle Grid Infrastructure wizard by running the following command:
Grid_home\gridSetup.bat
You can run this command from a Virtual Network Computing (VNC) session, or
Terminal Services in console mode.
5. Select one of the following configuration options:
• Configure Oracle Grid Infrastructure for a Cluster
  Select this option to configure a new Oracle Flex Cluster deployment with Hub
  and Leaf Nodes.
• Configure Oracle Grid Infrastructure for a standalone server (Oracle Restart)
  Select this option to install Oracle Grid Infrastructure in an Oracle Restart
  configuration. Use this option for single servers supporting Oracle Database
  and other applications.
• Upgrade Oracle Grid Infrastructure
  Select this option to upgrade Oracle Grid Infrastructure (Oracle Clusterware
  and Oracle ASM).
• Set Up Software Only
  Select this option to configure Oracle Grid Infrastructure software and register
  the Oracle Grid Infrastructure home with the central inventory.
Note: Oracle Clusterware must always be the later release, so you cannot
upgrade Oracle ASM to a release that is more recent than Oracle Clusterware.
6. Choose the type of Oracle Flex Cluster from the following options:
• Configure a Flex Cluster: Select this option to configure an Oracle Real
  Application Clusters deployment consisting of cluster member nodes
  configured as two or more Hub Nodes (HUB) and 0 to 1000 Leaf Nodes (LEAF).
• Configure an Extended Cluster: Select this option to configure a type of Oracle
  Flex Cluster consisting of HUB nodes that are located in multiple
  geographically-separated locations called sites.
• Configure an Application Cluster: Select this option to configure an Oracle
  ASM client cluster that can run non-database applications. This cluster
  configuration enables high availability of any software application.
7. Respond to the installation screens that appear in response to your configuration
selection.
Installation screens vary depending on the installation option you select.
For cluster member node public and VIP network addresses, provide the
information required depending on the kind of cluster you are configuring:
• If you plan to use automatic cluster configuration with DHCP addresses
  configured and resolved through GNS, then you only need to provide the GNS
  VIP names as configured on your DNS.
The following is a list of additional information about node IP addresses:
• For the local node only, the installer automatically fills in public and VIP fields.
  If your system uses vendor clusterware, then the installer may fill additional
  fields.
• Host names and virtual host names are not domain-qualified. If you provide a
  domain in the address field during installation, then the installer removes the
  domain from the address.
• Interfaces identified as private for private IP addresses should not be accessible
  as public interfaces. Using public interfaces for Cache Fusion can cause
  performance problems.
When you enter the public node name, use the primary host name of each node. In
other words, use the name displayed by the hostname command.
8. Provide information and run scripts as prompted by the installer.
After the installation interview, you can click Details to see the log file.
9. After you have specified all the information needed for installation, the installer
installs the software on each node. The installer then runs Oracle Net Configuration
Assistant (NETCA), Oracle Private Interconnect Configuration Assistant, and
Cluster Verification Utility (CVU). These programs run without user intervention.
10. During the installation, Oracle Automatic Storage Management Configuration
Assistant (asmca) configures Oracle ASM for storage.
When you have verified that your Oracle Grid Infrastructure installation has
completed successfully, you can either use Oracle Clusterware and Oracle ASM to
maintain high availability for other applications, or you can install Oracle Database
and Oracle RAC software.
You can manage Oracle Grid Infrastructure and Oracle Automatic Storage
Management (Oracle ASM) using Oracle Enterprise Manager Cloud Control. To
register the Oracle Grid Infrastructure cluster with Oracle Enterprise Manager, ensure
that Oracle Management Agent is installed and running on all nodes of the cluster.
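As a quick check before registering the cluster, you can confirm on each node that the
Management Agent is running. The agent_home path below is a placeholder for the Oracle
Management Agent installation directory in your environment:
C:\> agent_home\bin\emctl status agent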
See Also:
• Oracle Real Application Clusters Installation Guide for Microsoft Windows x64 (64-Bit)
• Oracle Database Installation Guide for Microsoft Windows if you intend to use
  Oracle Grid Infrastructure on a standalone server (an Oracle Restart deployment)
• Oracle Clusterware Administration and Deployment Guide for cloning Oracle
  Grid Infrastructure
9.5 Installing Oracle Grid Infrastructure Using a Cluster Configuration File
During installation of Oracle Grid Infrastructure, you have the option of either
providing cluster configuration information manually or using a cluster
configuration file.
A cluster configuration file is a text file that you can create before starting
gridSetup.bat, which provides the installer with cluster node addresses that it
requires to configure the cluster.
Oracle recommends that you consider using a cluster configuration file if you intend
to perform repeated installations on a test cluster, or if you intend to perform an
installation on many nodes. A sample cluster configuration file is available at
Grid_home\install\response\sample.ccf.
To create a cluster configuration file manually, start a text editor, and create a file that
provides the name of the public and virtual IP addresses for each cluster member
node, in the following format:
node1 node1-vip /node-role
node2 node2-vip /node-role
.
.
.
node-role can have either HUB or LEAF as its value. Specify one node per line,
separating the fields with either spaces or colons (:).
For example:
mynode1 mynode1-vip /HUB
mynode2 mynode2-vip /LEAF
Or, for example:
mynode1:mynode1-vip:/HUB
mynode2:mynode2-vip:/LEAF
Example 9-1    Sample Cluster Configuration File
The following sample cluster configuration file is available at
Grid_home\install\response\sample.ccf:
#
# Cluster nodes configuration specification file
#
# Format:
# node  [vip]  [role-identifier]  [site-name]
#
# node            - Node's public host name
# vip             - Node's virtual host name
# role-identifier - Node's role with "/" prefix - should be "/HUB" or "/LEAF"
# site-name       - Node's assigned site
#
# Specify details of one node per line.
# Lines starting with '#' will be skipped.
#
# (1) vip and role are not required for Oracle Grid Infrastructure software only installs
# (2) vip should be specified as AUTO if Node Virtual host names are Dynamically assigned
# (3) role-identifier can be specified as "/LEAF" only for "Oracle Standalone Cluster"
# (4) site-name should be specified only when configuring Oracle Grid Infrastructure with "Extended Cluster" option
#
# Examples:
# --------
# For installing GI software only on a cluster:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1
# node2
#
# For Standalone Cluster:
# ^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip /HUB
# node2 node2-vip /LEAF
#
# For Standalone Extended Cluster:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip /HUB sitea
# node2 node2-vip /LEAF siteb
#
9.6 Installing Only the Oracle Grid Infrastructure Software
This installation option requires manual postinstallation steps to enable the Oracle
Grid Infrastructure software.
If you use the Set Up Software Only option during installation, then Oracle Universal
Installer (OUI) installs the software binaries on multiple nodes. You can then perform
the additional steps of configuring Oracle Clusterware and Oracle ASM.
Installing Software Binaries for Oracle Grid Infrastructure for a Cluster
(page 9-9)
You can install Oracle Grid Infrastructure for a cluster software on
multiple nodes at a time.
Configuring the Software Binaries for Oracle Grid Infrastructure for a Cluster
(page 9-10)
Configure the software binaries by starting the Oracle Grid Infrastructure
configuration wizard in GUI mode.
Configuring the Software Binaries Using a Response File (page 9-11)
When you install or copy Oracle Grid Infrastructure software on any
node, you can defer configuration to a later time using the Grid Setup
Wizard utility (gridSetup.bat).
Setting Ping Targets for Network Checks (page 9-11)
Receive notification about the status of public networks by setting the
Ping_Targets parameter during the Oracle Grid Infrastructure
installation.
See Also: Oracle Clusterware Administration and Deployment Guide for
information about cloning an Oracle Grid Infrastructure installation to other
nodes that were not included in the initial installation of Oracle Grid
Infrastructure, and then adding them to the cluster
9.6.1 Installing Software Binaries for Oracle Grid Infrastructure for a Cluster
You can install Oracle Grid Infrastructure for a cluster software on multiple nodes at a
time.
1. Log in to Windows as the installation user for Oracle Grid Infrastructure, which
must be a member of the Administrators users group.
2. Create the Grid home directory.
For example:
mkdir D:\app\12.2.0\grid
Ensure that the Grid home directory path you create is in compliance with the
Oracle Optimal Flexible Architecture recommendations.
3. Download the Oracle Grid Infrastructure image files and copy the files to the Grid
home.
For example:
cd D:\app\12.2.0\grid
unzip -q download_location\grid_home.zip
Unzip the installation image files only in the newly created Grid home directory.
Note: Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and installed on all
other nodes in the cluster.
4. Run the gridSetup.bat command from the newly created Grid home directory,
and select the Configuration Option as Set Up Software Only.
5. Complete installation of Oracle Grid Infrastructure software on one or more nodes
by providing information in the installer screens in response to your configuration
selection. You can install Oracle Grid Infrastructure software on multiple nodes at a
time.
6. When the software is configured, run the orainstRoot.bat script on all nodes, if
prompted.
7. Ensure that you have completed all storage and server preinstallation
requirements.
8. Verify that all of the cluster nodes meet the installation requirements.
Use the command:
runcluvfy.bat stage -pre crsinst -n node_list
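For example, to check nodes named node1 and node2 (placeholder names) from the Grid home
into which you unzipped the image files, you might run:
D:\app\12.2.0\grid> runcluvfy.bat stage -pre crsinst -n node1,node2 -verbose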
9. Configure the cluster using the Oracle Universal Installer (OUI) configuration
wizard or configure the cluster using a response file.
See Also:
• Configuring the Software Binaries for Oracle Grid Infrastructure for a Cluster (page 9-10)
• Configuring the Software Binaries Using a Response File (page 9-11)
9.6.2 Configuring the Software Binaries for Oracle Grid Infrastructure for a Cluster
Configure the software binaries by starting the Oracle Grid Infrastructure configuration
wizard in GUI mode.
After you have installed the Oracle Grid Infrastructure software on at least the local
node, you can configure Oracle Grid Infrastructure on all the nodes in your cluster.
1. Log in to one of the cluster nodes as the Oracle Installation user, and change
directory to Grid_home.
2. Start the Oracle Grid Infrastructure configuration wizard:
C:\Grid_home> gridSetup.bat
The configuration script starts Oracle Universal Installer in Configuration Wizard
mode.
3. Provide information as needed for configuration.
The configuration wizard validates the information and configures the
installation on all cluster nodes. When you have finished providing information,
OUI shows you the Summary page, listing the information you have provided for
the cluster.
4. Verify that the summary has the correct information for your cluster, and click
Install to start configuration of the local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
5. If prompted, run the root scripts.
6. When you confirm that all root scripts are run, OUI checks the cluster
configuration status, and starts other configuration tools as needed.
9.6.3 Configuring the Software Binaries Using a Response File
When you install or copy Oracle Grid Infrastructure software on any node, you can
defer configuration to a later time using the Grid Setup Wizard utility
(gridSetup.bat).
Use this procedure to complete configuration after the software is installed or copied
on nodes.
1. As the Oracle Installation user for Oracle Grid Infrastructure (for example, grid),
start Oracle Universal Installer (OUI) in Oracle Grid Infrastructure Grid Setup
Wizard mode from the Oracle Grid Infrastructure software-only home. Use the
following syntax, where Grid_home is the Oracle Grid Infrastructure home, and
filename is the response file name:
Grid_home\gridSetup.bat [-debug] [-silent -responseFile filename]
For example:
C:\> cd Grid_home
C:\> gridSetup.bat -silent -responseFile C:\app\12.2.0\grid\response\grid_setup.rsp
The configuration script starts OUI in Grid Setup Wizard mode. Each page shows
the same user interface and performs the same validation checks that OUI normally
does. However, instead of running an installation, the Grid Setup Wizard mode
validates inputs and configures the installation on all cluster nodes.
When you complete inputs, OUI shows you the Summary page, listing all inputs
you have provided for the cluster.
2. Verify that the summary has the correct information for your cluster, and click
Install to start configuration of the local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
3. OUI checks the cluster configuration status, and starts other configuration tools as
needed.
See Also:
• Oracle Database Installation Guide for Microsoft Windows to configure and
  activate a software-only Oracle Grid Infrastructure installation for a standalone server
• Running Postinstallation Configuration Using Response File (page A-10)
9.6.4 Setting Ping Targets for Network Checks
Receive notification about the status of public networks by setting the Ping_Targets
parameter during the Oracle Grid Infrastructure installation.
In certain environments, for example, in a virtual machine, the network link status is
not correctly returned when the network cable is disconnected. You can receive
notification about public network status in these environments by setting the
Ping_Targets parameter during the Oracle Grid Infrastructure installation. You
should use this parameter for addresses outside the cluster, like a switch or router.
• Run the installer:
  C:\..> gridSetup.bat oracle_install_crs_Ping_Targets=Host1|IP1,Host2|IP2
  For example:
  C:\..> gridSetup.bat oracle_install_crs_Ping_Targets=192.0.2.1,192.0.2.2
The ping utility contacts the comma-separated list of host names or IP addresses
Host1/IP1,Host2/IP2 to determine whether the public network is available. If none of
the hosts respond, then the network is considered to be offline.
9.7 Confirming Oracle Clusterware Function
After installation, use the crsctl utility to verify that Oracle Clusterware is installed
and running correctly.
• Log in as a member of the Administrators group, and run the following command
  from the bin directory in the Grid home:
  crsctl check cluster -all
Example 9-2    Checking the Status of Oracle Clusterware
To check the status of the Oracle Clusterware components on each node of your
cluster, run the following command:
C:\..\bin\> crsctl check cluster -all
The output for this command is similar to the following:
$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
9.8 Confirming Oracle ASM Function for Oracle Clusterware Files
Confirm Oracle ASM is running after installing Oracle Grid Infrastructure.
After Oracle Grid Infrastructure installation, Oracle Clusterware files are stored on
Oracle ASM. Use the following command syntax as the Oracle Grid Infrastructure
installation owner (grid) to confirm that your Oracle ASM installation is running:
srvctl status asm
For example:
srvctl status asm
ASM is running on node1,node2, node3, node4
Note: To manage Oracle ASM or Oracle Net 11g Release 2 (11.2) or later
installations, use the srvctl binary in the Oracle Grid Infrastructure home
for a cluster (Grid home). If you have Oracle Real Application Clusters or
Oracle Database installed, then you cannot use the srvctl binary in the
database home to manage Oracle ASM or Oracle Net.
9.9 Understanding Offline Processes in Oracle Grid Infrastructure
Oracle Grid Infrastructure provides required resources for various Oracle products
and components. Some of those products and components are optional, so you can
install and enable them after installing Oracle Grid Infrastructure.
To simplify postinstallation additions, Oracle Grid Infrastructure preconfigures and
registers all required resources for the products and components that can be added
later, but activates them only when you choose to add them. As a result, some
components may be listed as OFFLINE after the installation of Oracle Grid
Infrastructure.
Resources listed as TARGET:OFFLINE and STATE:OFFLINE do not need to be
monitored. They represent components that are registered, but not enabled, so they do
not use any system resources. If an Oracle product or component is installed on the
system, and it requires a particular resource to be online, then the software will
prompt you to activate the required offline resource.
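To review the registered resources and their current state, including resources that are
intentionally OFFLINE, you can run the following command from the bin directory in the
Grid home:
C:\..\bin> crsctl status resource -t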
10
Oracle Grid Infrastructure Postinstallation Tasks
Complete the postinstallation tasks after you have installed the Oracle Grid
Infrastructure software.
You are required to complete some configuration tasks after Oracle Grid Infrastructure
is installed. In addition, Oracle recommends that you complete additional tasks
immediately after installation. You must also complete product-specific configuration
tasks before you use those products.
Note: This chapter describes basic configuration only. Refer to product-
specific administration and tuning guides for more detailed configuration and
tuning information.
Required Postinstallation Tasks (page 10-1)
Certain postinstallation tasks are critical for your newly installed
software.
Recommended Postinstallation Tasks (page 10-7)
Oracle recommends that you complete these tasks as needed after
installing Oracle Grid Infrastructure.
Using Earlier Oracle Database Releases with Grid Infrastructure (page 10-11)
Review the guidelines and restrictions for using earlier Oracle Database
releases with Oracle Grid Infrastructure 12c release 2 (12.2) installations.
Modifying Oracle Clusterware Binaries After Installation (page 10-13)
After installation, if you must modify the software installed in your Grid
home, then you must first stop the Oracle Clusterware stack.
10.1 Required Postinstallation Tasks
Certain postinstallation tasks are critical for your newly installed software.
Note: Backing up a voting file is no longer required.
Download and Install Patch Updates (page 10-2)
On a regular basis Oracle provides patch sets that include generic and
port specific fixes encountered by customers since the base product was
released.
Configure Exceptions for the Windows Firewall (page 10-2)
If the Windows Firewall feature is enabled on one or more of the nodes
in your cluster, then virtually all transmission control protocol (TCP)
network ports are blocked to incoming connections.
10.1.1 Download and Install Patch Updates
On a regular basis Oracle provides patch sets that include generic and port specific
fixes encountered by customers since the base product was released.
Patch sets increment the fourth digit of the release number, for example, from 12.1.0.1.0
to 12.1.0.2.0. These patch sets are fully regression tested in the same way as the base
release (for example, 12.1.0.1.0). Customers are encouraged to apply these fixes.
If a customer encounters a critical problem that requires a fix before the next patch
set becomes available, the customer can request that a one-off fix be made available on
top of the latest patch set. This delivery mechanism is similar to Microsoft hotfixes and
is known as an Oracle patch set exception (or interim patch). Unlike on UNIX platforms,
these patch set exceptions are delivered in a patch set exception bundle (cumulative
patch bundle), which includes all fixes since the current patch set. Starting with Oracle
Database 11g release 2 (11.2.0.4), patch bundles are described as software_version.n,
where software_version is the patch set version (12.1.0.2) and n is the bundle patch
version (12.1.0.2.10). You should always apply the latest patch bundle available for
your release. Windows patch set exception bundles (bundled patches) are applicable
to both Oracle Grid Infrastructure and Oracle Database homes.
The patch set exception bundles also include the fixes for the CPU (Critical Patch
Update), DST (Daylight Saving Time), PSU (Patch Set Update) and Recommended
Patch Bundles. It is not required to have the previous security patches applied before
applying the patch set exception bundle. However, you must be on the stated patch set
level for a given product home before applying the patch set exception bundle for that
release.
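As a quick way to confirm which patches are already installed in a home before applying a
new bundle, you can run OPatch from that home. The Grid home path below is the example
used elsewhere in this guide; substitute your own home path:
D:\app\12.2.0\grid\OPatch> opatch lsinventory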
• Refer to the My Oracle Support website for required patch updates for your
  installation:
  https://support.oracle.com
Note: If you are not a My Oracle Support registered user, then click Register
for My Oracle Support and register.
See Also: Upgrading Oracle Grid Infrastructure (page 11-1) for information
about how to stop database processes in preparation for installing patches
10.1.2 Configure Exceptions for the Windows Firewall
If the Windows Firewall feature is enabled on one or more of the nodes in your cluster,
then virtually all transmission control protocol (TCP) network ports are blocked to
incoming connections.
Any Oracle product that listens for incoming connections on a TCP port will not
receive any of those connection requests and the clients making those connections will
report errors unless you configure exceptions for the Windows Firewall. You must
configure exceptions for the Windows Firewall if your system meets all of the
following conditions:
• Oracle server-side components are installed on a computer running a supported
  version of Microsoft Windows. The list of components includes the Oracle
  Database, Oracle Grid Infrastructure, Oracle Real Application Clusters (Oracle
  RAC), network listeners, or any web servers or services.
• The Windows computer in question accepts connections from other computers
  over the network. If no other computers connect to the Windows computer to
  access the Oracle software, then no post-installation configuration steps are
  required and the Oracle software functions as expected.
• The Windows computer in question is configured to run the Windows Firewall. If
  the Windows Firewall is not enabled, then no post-installation configuration steps
  are required.
If all of the above conditions are met, then the Windows Firewall must be configured
to allow successful incoming connections to the Oracle software. To enable Oracle
software to accept connection requests, Windows Firewall must be configured by
either opening up specific static TCP ports in the firewall or by creating exceptions for
specific executable files so they can receive connection requests on any ports they
choose.
• Use one of the following methods to configure the firewall (a command-line
  example follows this list):
  – Start the Windows Firewall application, select the Exceptions tab, and then
    click either Add Program or Add Port to create exceptions for the Oracle
    software.
  – From the command prompt, use the netsh firewall add... command.
  – Create an exception when Windows notifies you that a foreground application
    is attempting to listen on a port and gives you the opportunity to create an
    exception for that executable file. If you create the exception in this way, the
    effect is the same as creating an exception for the executable file either through
    Control Panel or from the command line.
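The following sketch shows how exceptions might be created from the command prompt using
the newer netsh advfirewall syntax; the rule names, program path, and port are placeholders
that you would replace with values for your environment:
netsh advfirewall firewall add rule name="Oracle Database" dir=in action=allow program="Oracle_home\bin\oracle.exe" enable=yes
netsh advfirewall firewall add rule name="Oracle Listener" dir=in action=allow protocol=TCP localport=1521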
The following sections list the Oracle Database 11g release 2 (11.2) executable files that
listen on TCP ports on Windows, along with a brief description of the executable file.
It is recommended that these executable files (if in use and accepting connections from
a remote, client computer) be added to the exceptions list for the Windows Firewall to
ensure correct operation. In addition, if multiple Oracle homes are in use, firewall
exceptions may have to be created for the same executable file, for example,
oracle.exe, multiple times, once for each Oracle home from which that executable
file loads.
Firewall Exceptions for Oracle Database (page 10-4)
For basic database operation and connectivity from remote clients, such
as SQL*Plus, Oracle Call Interface (OCI), Open Database Connectivity
(ODBC), and so on, you must add executable files to the Windows
Firewall exception list.
Firewall Exceptions for Oracle Database Examples (or the Companion CD)
(page 10-4)
After installing the Oracle Database Companion CD, you must add
executable files to the Windows Firewall exception list.
Firewall Exceptions for Oracle Gateways (page 10-4)
If your Oracle database interacts with non-Oracle software through a
gateway, then you must add the gateway executable file to the Windows
Firewall exception list. The following table lists the gateway executable
files used to access non-Oracle software.
Firewall Exceptions for Oracle Clusterware and Oracle ASM (page 10-5)
If you installed the Oracle Grid Infrastructure software on the nodes in
your cluster, then you can enable the Windows Firewall only after
adding certain executable files and ports to the Firewall exception list.
Firewall Exceptions for Oracle RAC Database (page 10-6)
After installing the Oracle Real Application Clusters (Oracle RAC), you
must add executable files to the Windows Firewall exception list.
Firewall Exceptions for Other Oracle Products (page 10-6)
In addition to all the previously listed exceptions, if you use any of the
Oracle software listed in Table 10-2, then you must create an exception for
Windows Firewall for the associated executable file.
Troubleshooting Windows Firewall Exceptions (page 10-6)
If you cannot establish certain connections even after granting
exceptions to the executable files, then follow these steps to troubleshoot
the installation.
10.1.2.1 Firewall Exceptions for Oracle Database
For basic database operation and connectivity from remote clients, such as SQL*Plus,
Oracle Call Interface (OCI), Open Database Connectivity (ODBC), and so on, you must
add executable files to the Windows Firewall exception list.
The following executable files must be added to the Windows Firewall exception list:
• Oracle_home\bin\oracle.exe - Oracle Database executable
• Oracle_home\bin\tnslsnr.exe - Oracle Listener
If you use remote monitoring capabilities for your database, the following executable
files must be added to the Windows Firewall exception list:
• Oracle_home\bin\emagent.exe - Oracle Enterprise Manager
• Oracle_home\jdk\bin\java.exe - Java Virtual Machine (JVM) for Oracle Enterprise Manager
10.1.2.2 Firewall Exceptions for Oracle Database Examples (or the Companion CD)
After installing the Oracle Database Companion CD, you must add executable files to
the Windows Firewall exception list.
The following executable files must be added to the Windows Firewall exception list:
• Oracle_home\opmn\bin\opmn.exe - Oracle Process Manager
• Oracle_home\jdk\bin\java.exe - JVM
10.1.2.3 Firewall Exceptions for Oracle Gateways
If your Oracle database interacts with non-Oracle software through a gateway, then
you must add the gateway executable file to the Windows Firewall exception list. The
following table lists the gateway executable files used to access non-Oracle software.
Table 10-1    Oracle Executables Used to Access Non-Oracle Software

Executable Name   Description
omtsreco.exe      Oracle Services for Microsoft Transaction Server
dg4sybs.exe       Oracle Database Gateway for Sybase
dg4tera.exe       Oracle Database Gateway for Teradata
dg4msql.exe       Oracle Database Gateway for SQL Server
dg4db2.exe        Oracle Database Gateway for Distributed Relational Database Architecture (DRDA)
pg4arv.exe        Oracle Database Gateway for Advanced Program to Program Communication (APPC)
pg4t4ic.exe       Oracle Database Gateway for APPC
dg4mqs.exe        Oracle Database Gateway for WebSphere MQ
dg4mqc.exe        Oracle Database Gateway for WebSphere MQ
dg4odbc.exe       Oracle Database Gateway for ODBC
10.1.2.4 Firewall Exceptions for Oracle Clusterware and Oracle ASM
If you installed the Oracle Grid Infrastructure software on the nodes in your cluster,
then you can enable the Windows Firewall only after adding certain executable files
and ports to the Firewall exception list.
The Firewall exception list must be updated on each node.
• Grid_home\bin\gpnpd.exe - Grid Plug and Play daemon
• Grid_home\bin\oracle.exe - Oracle Automatic Storage Management (Oracle ASM)
  executable file (if using Oracle ASM for storage)
• Grid_home\bin\racgvip.exe - Virtual Internet Protocol Configuration Assistant
• Grid_home\bin\evmd.exe - OracleEVMService
• Grid_home\bin\crsd.exe - OracleCRService
• Grid_home\bin\ocssd.exe - OracleCSService
• Grid_home\bin\octssd.exe - Cluster Time Synchronization Service daemon
• Grid_home\bin\mDNSResponder.exe - multicast-domain name system (DNS)
  Responder Daemon
• Grid_home\bin\gipcd.exe - Grid inter-process communication (IPC) daemon
• Grid_home\bin\gnsd.exe - Grid Naming Service (GNS) daemon
• Grid_home\bin\ohasd.exe - OracleOHService
• Grid_home\bin\TNSLSNR.EXE - single client access name (SCAN) listener and
  local listener for Oracle RAC database and Oracle ASM
• Grid_home\opmn\bin\ons.exe - Oracle Notification Service (ONS)
• Grid_home\jdk\jre\bin\java.exe - JVM
10.1.2.5 Firewall Exceptions for Oracle RAC Database
After installing Oracle Real Application Clusters (Oracle RAC), you must add
executable files to the Windows Firewall exception list.
For the Oracle RAC database, the executable files that require exceptions are:
• Oracle_home\bin\oracle.exe - Oracle RAC database instance
• Oracle_home\bin\emagent.exe - Oracle Enterprise Manager agent
• Oracle_home\jdk\bin\java.exe - For the Oracle Enterprise Manager Database Console
In addition, the following ports should be added to the Windows Firewall exception list:
• Microsoft file sharing Server Message Block (SMB)
  – TCP ports from 135 through 139
• Direct-hosted SMB traffic without a network basic I/O system (NetBIOS)
  – port 445 (TCP)
10.1.2.6 Firewall Exceptions for Other Oracle Products
In addition to all the previously listed exceptions, if you use any of the Oracle
software listed in Table 10-2, then you must create an exception for Windows Firewall
for the associated executable file.
Table 10-2    Other Oracle Software Products Requiring Windows Firewall Exceptions

Oracle Software Product                                                          Executable Name
Data Guard Manager                                                               dgmgrl.exe
Oracle Internet Directory lightweight directory access protocol (LDAP) Server   oidldapd.exe
External Procedural Calls                                                        extproc.exe
10.1.2.7 Troubleshooting Windows Firewall Exceptions
If you cannot establish certain connections even after granting exceptions to the
executable files, then follow these steps to troubleshoot the installation.
1. Examine Oracle configuration files (such as *.conf files), the Oracle key in the
Windows registry, and network configuration files in %ORACLE_HOME%\network
\admin.
2. Grant an exception in the Windows Firewall to any executable listed in
%ORACLE_HOME%\network\admin\listener.ora in a PROGRAM= clause.
Each of these executables must be granted an exception in the Windows Firewall
because a connection can be made through the TNS listener to that executable.
3. Examine Oracle trace files, log files, and other sources of diagnostic information for
details on failed connection attempts.
Log and trace files on the database client computer may contain useful error codes
or troubleshooting information for failed connection attempts. The Windows
Firewall log file on the server may contain useful information as well.
4. If the preceding troubleshooting steps do not resolve a specific configuration issue
on Windows, then provide the output from the following command to Oracle
Support for diagnosis and problem resolution:
netsh firewall show state verbose=enable
See Also:
• Configure Exceptions for the Windows Firewall (page 10-2)
• Windows Firewall Configuration
10.2 Recommended Postinstallation Tasks
Oracle recommends that you complete these tasks as needed after installing Oracle
Grid Infrastructure.
Downloading and Installing the ORAchk Health Check Tool (page 10-7)
Download and install the ORAchk utility to perform proactive health
checks for the Oracle software stack.
Optimize Memory Usage for Programs (page 10-8)
The Windows operating system should be optimized for Memory Usage
of 'Programs' instead of 'System Caching'.
Create a Fast Recovery Area Disk Group (page 10-8)
You should create a separate disk group for the fast recovery area.
Checking the SCAN Configuration (page 10-10)
The SCAN is a name that provides service access for clients to the
cluster. You can use the command cluvfy comp scan (located in
Grid_home\bin) to confirm that the DNS is correctly associating the
SCAN with the addresses.
10.2.1 Downloading and Installing the ORAchk Health Check Tool
Download and install the ORAchk utility to perform proactive health checks for the
Oracle software stack.
ORAchk replaces the RACCheck utility. ORAchk extends health check coverage to the
entire Oracle software stack, and identifies and addresses top issues reported by
Oracle users. ORAchk proactively scans for known problems with Oracle products
and deployments, including the following:
• Standalone Oracle Database
• Oracle Grid Infrastructure
• Oracle Real Application Clusters
• Maximum Availability Architecture (MAA) Validation
• Upgrade Readiness Validations
• Oracle GoldenGate
Oracle is continuing to expand checks, based on customer requests.
ORAchk is supported on Windows Server 2012 and Windows Server 2016 in a
Cygwin environment only.
Oracle recommends that you download and run the latest version of ORAchk from
My Oracle Support. For information about downloading, configuring, and running the
ORAchk utility, refer to My Oracle Support note 1268927.2:
https://support.oracle.com/epmos/faces/DocContentDisplay?id=1268927.2&parent=DOCUMENTATION&sourceId=USERGUIDE
Related Topics:
Oracle ORAchk and EXAchk User’s Guide
10.2.2 Optimize Memory Usage for Programs
The Windows operating system should be optimized for Memory Usage of 'Programs'
instead of 'System Caching'.
1. From the Start Menu, select Control Panel, then System.
2. In the System Properties window, click the Advanced tab.
3. In the Performance section, click Settings.
4. In the Performance Options window, click the Advanced tab.
5. In the Memory Usage section, ensure Programs is selected.
10.2.3 Create a Fast Recovery Area Disk Group
You should create a separate disk group for the fast recovery area.
During installation of Oracle Grid Infrastructure, if you select Oracle ASM for storage,
a single disk group is created to store the Oracle Clusterware files. If you plan to create
a single-instance database, an Oracle RAC database, or an Oracle RAC One Node
database, then this disk group can also be used to store the data files for the database.
However, Oracle recommends that you create a separate disk group for the fast
recovery area.
About the Fast Recovery Area and the Fast Recovery Area Disk Group
(page 10-8)
The fast recovery area is a unified storage location for all Oracle
Database files related to recovery.
Creating the Fast Recovery Area Disk Group (page 10-9)
You can use ASMCA to create an Oracle ASM disk group for the fast
recovery area.
10.2.3.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group
The fast recovery area is a unified storage location for all Oracle Database files related
to recovery.
Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the
path of the fast recovery area to enable on-disk backups and rapid recovery of data.
Enabling rapid backups for recent data can reduce requests to system administrators
to retrieve backup tapes for recovery operations.
When you enable the fast recovery area in the database initialization parameter file, all
RMAN backups, archive logs, control file automatic backups, and database copies are
written to the fast recovery area. RMAN automatically manages files in the fast
recovery area by deleting obsolete backups and archive files that are no longer
required for recovery.
To use a fast recovery area in Oracle RAC, you must place it on an Oracle ASM disk
group, a cluster file system, or on a shared directory that is configured through Direct
network file system (NFS) for each Oracle RAC instance. In other words, the fast
recovery area must be shared among all of the instances of an Oracle RAC database.
Oracle Clusterware files and Oracle Database files can be placed on the same disk
group as fast recovery area files. However, Oracle recommends that you create a
separate fast recovery area disk group to reduce storage device contention.
The fast recovery area is enabled by setting the parameter DB_RECOVERY_FILE_DEST
to the same value on all instances. The size of the fast recovery area is set with the
parameter DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast
recovery area, the more useful it becomes. For ease of use, Oracle recommends that
you create a fast recovery area disk group on storage devices that can contain at least
three days of recovery information. Ideally, the fast recovery area should be large
enough to hold a copy of all of your data files and control files, the online redo logs,
and the archived redo log files needed to recover your database using the data file
backups kept under your retention policy.
Multiple databases can use the same fast recovery area. For example, assume you have
created one fast recovery area disk group on disks with 150 gigabyte (GB) of storage,
shared by three different databases. You can set the size of the fast recovery area for
each database depending on the importance of each database. For example, if test1 is
your least important database, you might set DB_RECOVERY_FILE_DEST_SIZE to 30
GB. For the products database, which is of greater importance, you might set
DB_RECOVERY_FILE_DEST_SIZE to 50 GB. For the orders database, which has the
greatest importance, you might set DB_RECOVERY_FILE_DEST_SIZE to 70 GB.
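The following is a minimal sketch of how these parameters might be set from SQL*Plus after a
fast recovery area disk group exists. The disk group name +FRA matches the example used in
the next section, and the 50 GB size is only illustrative:
C:\> sqlplus / as sysdba
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 50G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+FRA' SCOPE=BOTH SID='*';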
See Also: Oracle Automatic Storage Management Administrator's Guide for
information on how to create a disk group for data and a disk group for the
fast recovery area
10.2.3.2 Creating the Fast Recovery Area Disk Group
You can use ASMCA to create an Oracle ASM disk group for the fast recovery area.
1. Navigate to the bin directory in the Grid home and start Oracle ASM Configuration
Assistant (ASMCA).
For example:
C:\> cd app\12.2.0\grid\bin
C:\> asmca
ASMCA opens at the Disk Groups tab.
2. Click Create to create a new disk group.
The Create Disk Groups window opens.
3. In the Create Disk Groups window, enter the following information, then click OK:
a. In the Disk Group Name field, enter a descriptive name for the fast recovery
area disk group, for example, FRA.
b. In the Redundancy section, select the level of redundancy you want to use.
c. In the Select Member Disks field, select eligible disks to be added to the fast
recovery area.
The Diskgroup Creation window opens to inform you when disk group creation is
complete.
4. Click OK to acknowledge the message, then click Exit to quit the application.
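If you prefer to script this step, ASMCA also supports a silent mode. The following is a sketch
only; the disk group name, redundancy level, and disk path are placeholders, and the disk is
assumed to have already been stamped with asmtool:
C:\> D:\app\12.2.0\grid\bin\asmca -silent -createDiskGroup -diskGroupName FRA -redundancy EXTERNAL -disk "\\.\ORCLDISKDATA1"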
10.2.4 Checking the SCAN Configuration
The SCAN is a name that provides service access for clients to the cluster. You can use
the command cluvfy comp scan (located in Grid_home\bin) to confirm that the
DNS is correctly associating the SCAN with the addresses.
Because the SCAN is associated with the cluster as a whole, rather than to a particular
node, the SCAN makes it possible to add or remove nodes from the cluster without
needing to reconfigure clients. It also adds location independence for the databases, so
that client configuration does not have to depend on which nodes run a particular
database instance. Clients can continue to access the cluster in the same way as with
earlier releases, but Oracle recommends that clients accessing the cluster use the
SCAN.
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
•
Confirm that the DNS is correctly associating the SCAN with the specified
addresses.
cluvfy comp scan
Example 10-1    Using CLUVFY to Confirm DNS is Correctly Associating the SCAN Addresses
This example shows the output from the cluvfy comp scan command for a cluster
node named node1.example.com.
C:\> cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "node1.example.com"...
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
10.3 Using Earlier Oracle Database Releases with Grid Infrastructure
Review the guidelines and restrictions for using earlier Oracle Database releases with
Oracle Grid Infrastructure 12c release 2 (12.2) installations.
General Restrictions for Using Earlier Oracle Database Releases (page 10-11)
You can use Oracle Database 12c Release 1 and Release 2 and Oracle
Database 11g Release 2 with Oracle Grid Infrastructure 12c Release 2
(12.2).
Configuring Earlier Release Oracle Database on Oracle ACFS (page 10-12)
Review this information to configure an 11.2 release Oracle Database on
Oracle Automatic Storage Management Cluster File System (Oracle
ACFS).
Using ASMCA to Administer Disk Groups for Earlier Database Releases
(page 10-12)
Starting with Oracle Grid Infrastructure 11g Release 2, Oracle ASM is
installed as part of an Oracle Grid Infrastructure installation, with Oracle
Clusterware.
Using the Correct LSNRCTL Commands (page 10-13)
Do not attempt to use the lsnrctl programs from Oracle home
locations for earlier releases because they cannot be used with the new
release.
Starting and Stopping Cluster Nodes or Oracle Clusterware Resources
(page 10-13)
Before shutting down Oracle Clusterware 12c Release 1 (12.1), if you
have an Oracle Database 11g Release 2 (11.2) database registered with
Oracle Clusterware 12c, then you must perform additional steps to
ensure the resources are stopped.
10.3.1 General Restrictions for Using Earlier Oracle Database Releases
You can use Oracle Database 12c Release 1 and Release 2 and Oracle Database 11g
Release 2 with Oracle Grid Infrastructure 12c Release 2 (12.2).
Do not use the srvctl, lsnrctl, or other tools from the Oracle Grid Infrastructure
home to administer earlier release databases. Administer earlier Oracle Database
releases only with the tools in the earlier Oracle Database homes. To ensure that
you are using the correct tools for those earlier release databases, run the tools
from the Oracle home of the database or object you are managing.
Oracle Database homes can only be stored on Oracle ASM Cluster File System (Oracle
ACFS) if the database release is Oracle Database 11g Release 2 or higher. Earlier
releases of Oracle Database cannot be installed on Oracle ACFS because these releases
were not designed to use Oracle ACFS.
When installing Oracle databases on a Flex ASM cluster, the Oracle ASM cardinality
must be set to All.
Note:
If you are installing Oracle Database 11g Release 2 with Oracle Grid
Infrastructure 12c Release 2 (12.2), then before running the Oracle Universal
Installer (OUI) for Oracle Database, run the following command on the local
node only:
Grid_home\oui\bin\setup.exe -ignoreSysPrereqs -updateNodeList
ORACLE_HOME=Grid_home "CLUSTER_NODES={comma_separated_list_of_hub_nodes}"
CRS=true LOCAL_NODE=local_node [-cfs]
Use the -cfs option only if the Grid_home is on a shared location.
See Also:
• Download and Install Patch Updates (page 10-2)
• Oracle Database 12c Release 2 Upgrade Companion (Doc ID 1670757.1) on
  My Oracle Support: https://support.oracle.com/rs?type=doc&id=1670757.1
10.3.2 Configuring Earlier Release Oracle Database on Oracle ACFS
Review this information to configure an 11.2 release Oracle Database on Oracle
Automatic Storage Management Cluster File System (Oracle ACFS).
1. Install Oracle Grid Infrastructure 12c release 2 (12.2) as described in this guide.
2. Start Oracle ASM Configuration Assistant (ASMCA) as the grid installation owner.
For example:
asmca
Follow the steps in the configuration wizard to create Oracle ACFS storage for the
earlier release Oracle Database home.
3. Install Oracle Database 11g release 2 (11.2) software-only on the Oracle ACFS file
system you configured.
4. From the 11.2 Oracle Database home, run Oracle Database Configuration Assistant
(DBCA) and create the Oracle RAC Database, using Oracle ASM as storage for the
database data files.
dbca
5. Modify the Oracle ACFS path dependency:
srvctl modify database -d my_112_db -j Oracle_ACFS_path
10.3.3 Using ASMCA to Administer Disk Groups for Earlier Database Releases
Starting with Oracle Grid Infrastructure 11g Release 2, Oracle ASM is installed as part
of an Oracle Grid Infrastructure installation, with Oracle Clusterware.
You can no longer use Database Configuration Assistant (DBCA) to perform
administrative tasks on Oracle ASM.
• Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk
  groups when you install earlier Oracle Database and Oracle RAC releases on
  Oracle Grid Infrastructure 11g installations.
See Also: Oracle Automatic Storage Management Administrator's Guide for
details on configuring disk group compatibility for databases using Oracle
Database 11g Release 2 with Oracle Grid Infrastructure 12c Release 1 (12.1)
10.3.4 Using the Correct LSNRCTL Commands
Do not attempt to use the lsnrctl programs from Oracle home locations for earlier
releases because they cannot be used with the new release.
• Use the Listener Control utility, lsnrctl, located in the Oracle Grid
  Infrastructure 12c home to administer local and SCAN listeners for Oracle
  Clusterware and Oracle ASM 11g Release 2.
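For example, to check the status of the first SCAN listener from the Oracle Grid Infrastructure
home (the listener name LISTENER_SCAN1 is the default name; verify the name used in your
environment):
C:\> Grid_home\bin\lsnrctl status LISTENER_SCAN1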
10.3.5 Starting and Stopping Cluster Nodes or Oracle Clusterware Resources
Before shutting down Oracle Clusterware 12c Release 1 (12.1), if you have an Oracle
Database 11g Release 2 (11.2) database registered with Oracle Clusterware 12c, then
you must perform additional steps to ensure the resources are stopped.
1. If you have an Oracle Database 11g Release 2 (11.2) database registered with Oracle
Clusterware 12c, then perform one of the following steps:
a. Stop the Oracle Database 11g Release 2 database instances first, then stop the
Oracle Clusterware stack
b. Use the crsctl stop crs -f command to shut down the Oracle
Clusterware stack and ignore any errors that are raised
2. If you need to shut down a cluster node that currently has Oracle Database and
Oracle Grid Infrastructure running on that node, then you must perform the
following steps to cleanly shut down the cluster node:
a. Use the crsctl stop crs command to shut down the Oracle Clusterware
stack.
b. After Oracle Clusterware has been stopped, you can shut down the Windows
server using shutdown -r.
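The following is a sketch of this sequence; the database name, the 11.2 database home path,
and the Grid home path are placeholders for your environment. Note that the 11.2 database is
stopped with the srvctl from its own home:
C:\> C:\app\oracle\product\11.2.0\dbhome_1\bin\srvctl stop database -d sales
C:\> D:\app\12.2.0\grid\bin\crsctl stop crs
C:\> shutdown -r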
10.4 Modifying Oracle Clusterware Binaries After Installation
After installation, if you must modify the software installed in your Grid home, then
you must first stop the Oracle Clusterware stack.
For example, to apply a one-off patch, or modify any of the dynamic-link libraries
(DLLs) used by Oracle Clusterware or Oracle ASM, you must follow these steps to
stop and restart Oracle Clusterware.
Caution: To put the changes you make to the Oracle Grid Infrastructure home
into effect, you must shut down all executable files that run in the Grid home
directory and then restart them. In addition, shut down any applications that
use Oracle shared libraries or DLL files in the Grid home.
1. Log in as a member of the Administrators group and go to the directory
Grid_home\bin, where Grid_home is the path to the Oracle Grid Infrastructure
home.
2. Shut down Oracle Clusterware using the following command:
C:\..\bin> crsctl stop crs -f
3. After Oracle Clusterware is completely shut down, perform the updates to the
software installed in the Grid home.
4. Use the following command to restart Oracle Clusterware:
C:\..\bin> crsctl start crs
5. Repeat steps 1 through 4 on each cluster member node.
Note: Do not delete directories in the Grid home. For example, do not delete
the directory Grid_home\OPatch. If you delete the directory, then the Grid
Infrastructure installation owner cannot use OPatch to patch the Grid home,
and OPatch displays the error message "checkdir error: cannot
create Grid_home\OPatch".
11
Upgrading Oracle Grid Infrastructure
Oracle Grid Infrastructure upgrade consists of upgrade of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
Oracle Grid Infrastructure upgrades can be rolling upgrades, in which a subset of
nodes is brought down and upgraded while other nodes remain active. Oracle ASM
12c release 2 (12.2) upgrades can also be performed as rolling upgrades.
You can also use Rapid Home Provisioning to upgrade Oracle Grid Infrastructure for
a cluster.
Understanding Out-of-Place and Rolling Upgrades (page 11-2)
You can use rolling upgrade or out-of-place upgrades to upgrade your
Oracle Grid Infrastructure software.
About Oracle Grid Infrastructure Upgrade and Downgrade (page 11-3)
There are different methods you can use to upgrade the Oracle Grid
Infrastructure software.
Options for Oracle Grid Infrastructure Upgrades (page 11-4)
Understand the upgrade options for Oracle Grid Infrastructure in this
release. When you upgrade to Oracle Grid Infrastructure 12c Release 2
(12.2), you upgrade to an Oracle Flex Cluster configuration.
Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades
(page 11-4)
Review the restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consists of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
Preparing to Upgrade an Existing Oracle Clusterware Installation (page 11-6)
If you have an existing Oracle Clusterware installation, then you
upgrade your existing cluster by performing an out-of-place upgrade.
You cannot perform an in-place upgrade.
Understanding Rolling Upgrades Using Batches (page 11-13)
Instead of shutting down all nodes when applying patches, you can shut
down some nodes while other nodes remain running.
Performing Rolling Upgrades of Oracle Grid Infrastructure (page 11-13)
Review this information to perform rolling upgrade of Oracle Grid
Infrastructure.
Applying Patches to Oracle Grid Infrastructure (page 11-17)
After you have upgraded Oracle Grid Infrastructure 12c Release 2 (12.2),
you can install individual software patches by downloading them from
My Oracle Support.
Updating Oracle Enterprise Manager Cloud Control Target Parameters
(page 11-19)
After upgrading Oracle Grid Infrastructure, upgrade the Enterprise
Manager Cloud Control target.
Checking Cluster Health Monitor Repository Size After Upgrading (page 11-21)
If you are upgrading Oracle Grid Infrastructure from a prior release
using IPD/OS to the current release, then review the Cluster Health
Monitor repository size (the CHM repository).
Downgrading Oracle Clusterware After an Upgrade (page 11-21)
After a successful or a failed upgrade, you can restore Oracle
Clusterware to the previous release.
Completing Failed or Interrupted Installations and Upgrades (page 11-25)
If Oracle Universal Installer (OUI) exits on the node from which you
started the installation or upgrade (the first node), or the node reboots
before you confirm that the gridconfig.bat script was run on all
cluster nodes, then the upgrade or installation remains incomplete.
Related Topics:
General Upgrade Best Practices (page 3-2)
Be aware of these guidelines as a best practice before you perform an
upgrade.
11.1 Understanding Out-of-Place and Rolling Upgrades
You can use rolling upgrade or out-of-place upgrades to upgrade your Oracle Grid
Infrastructure software.
If you have an existing Oracle Grid Infrastructure installation, then you upgrade your
existing cluster by performing an out-of-place upgrade. An in-place upgrade of Oracle
Grid Infrastructure is not supported. All upgrades are out-of-place upgrades, meaning
that the software binaries are placed in a different Grid home from the Grid home
used for the prior release. You can also perform the upgrade in a rolling manner,
which means there is always at least one cluster node operational during the upgrade.
Rolling Upgrades
You can upgrade Oracle Grid Infrastructure by upgrading individual nodes without
stopping Oracle Grid Infrastructure on other nodes in the cluster, which is called
performing a rolling upgrade. Rolling upgrades avoid downtime and ensure
continuous availability while the software is upgraded to a new release.
Note: In contrast with releases prior to Oracle Clusterware 11g Release 2,
Oracle Universal Installer (OUI) always performs rolling upgrades, even if
you select all nodes for the upgrade.
Out-of-Place Upgrades
During an out-of-place upgrade, the installer installs the newer release in a separate
Grid home. Both the old and new releases of Oracle Grid Infrastructure exist on each
cluster member node, but only one release is active. By contrast, an in-place upgrade
overwrites the software in the current Oracle Grid Infrastructure home.
To perform an out-of-place upgrade, you must create new Oracle Grid Infrastructure
homes on each node. Then you can perform an out-of-place rolling upgrade, so that
some nodes run Oracle Grid Infrastructure from the original Grid home, and other
nodes run Oracle Grid Infrastructure from the new Grid home.
Related Topics:
Performing Rolling Upgrades of Oracle Grid Infrastructure (page 11-13)
Review this information to perform a rolling upgrade of Oracle Grid
Infrastructure.
11.2 About Oracle Grid Infrastructure Upgrade and Downgrade
There are different methods you can use to upgrade the Oracle Grid Infrastructure
software.
You can upgrade Oracle Grid Infrastructure in any of the following ways:
• Rolling Upgrade: Involves upgrading individual nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster
• Non-Rolling Upgrade: Involves bringing down all the nodes except one. A complete cluster outage occurs while the root script stops the old Oracle Clusterware stack and starts the new Oracle Clusterware stack on the node where you initiate the upgrade. After upgrade is completed, the new Oracle Clusterware is started on all the nodes.
Note that some services are disabled when one or more nodes are in the process of
being upgraded. All upgrades are out-of-place upgrades, meaning that the software
binaries are placed in a different Grid home from the Grid home used for the prior
release.
You can downgrade from Oracle Grid Infrastructure 12c Release 2 (12.2) to Oracle
Grid Infrastructure 12c Release 1 (12.1) and Oracle Grid Infrastructure 11g Release 2
(11.2). Be aware that if you downgrade to a prior release, then your cluster must
conform with the configuration requirements for that prior release, and the features
available for the cluster consist only of the features available for that prior release of
Oracle Clusterware and Oracle ASM.
You can perform out-of-place upgrades to an Oracle ASM instance using ASMCA. In
addition to running ASMCA using the graphical user interface, you can run ASMCA
in non-interactive (silent) mode.
Note: If you are currently using OCFS for Windows as your shared storage,
then you must migrate to using Oracle ASM during the upgrade of Oracle
Database and Oracle Grid Infrastructure.
Note: You must complete an upgrade before attempting to use cluster backup
files. You cannot use backups for a cluster that has not completed the upgrade.
See Also: Oracle Automatic Storage Management Administrator's Guide for
additional information about upgrading existing Oracle ASM installations
11.3 Options for Oracle Grid Infrastructure Upgrades
Understand the upgrade options for Oracle Grid Infrastructure in this release. When
you upgrade to Oracle Grid Infrastructure 12c Release 2 (12.2), you upgrade to an
Oracle Flex Cluster configuration.
Supported upgrade paths for Oracle Grid Infrastructure for this release are:
• Oracle Grid Infrastructure upgrade from releases 11.2.0.3 and 11.2.0.4 to Oracle Grid Infrastructure 12c Release 2 (12.2).
• Oracle Grid Infrastructure upgrade from Oracle Grid Infrastructure 12c Release 1 (12.1) to Oracle Grid Infrastructure 12c Release 2 (12.2).
Upgrade options from Oracle Grid Infrastructure 11g and Oracle Grid Infrastructure 12c Release 1 (12.1) to Oracle Grid Infrastructure 12c Release 2 (12.2) include the following:
• Oracle Grid Infrastructure rolling upgrade, which involves upgrading individual nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster
• Oracle Grid Infrastructure non-rolling upgrade, by bringing the cluster down and upgrading the complete cluster
Note:
• When you upgrade to Oracle Grid Infrastructure 12c Release 2 (12.2), you upgrade to an Oracle Standalone Cluster configuration.
• If storage for OCR and voting files is other than Oracle ASM, you need to migrate OCR and voting files to Oracle ASM before upgrading to Oracle Grid Infrastructure 12c Release 2 (12.2).
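If you are unsure where the OCR and voting files currently reside, you can check from the existing Grid home before starting the upgrade; the Grid home path shown here is illustrative:
C:\> C:\app\11.2.0\grid\bin\ocrcheck
C:\> C:\app\11.2.0\grid\bin\crsctl query css votedisk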
11.4 Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades
Review the restrictions and changes for upgrades to Oracle Grid Infrastructure
installations, which consist of Oracle Clusterware and Oracle Automatic Storage
Management (Oracle ASM).
• Oracle Grid Infrastructure upgrades are always out-of-place upgrades. You cannot perform an in-place upgrade of Oracle Grid Infrastructure to existing homes.
• You must use an Administrator user to perform the Oracle Grid Infrastructure 12c release 2 (12.2) upgrade.
• Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.
• When you upgrade to Oracle Grid Infrastructure 12c release 2 (12.2), you upgrade to an Oracle Flex Cluster configuration.
• Do not delete directories in the Grid home. For example, do not delete Grid_home\OPatch. If you delete the directory, then the Oracle Installation User for Oracle Grid Infrastructure cannot use OPatch to patch the Grid home, and OPatch displays the error "checkdir error: cannot create Grid_home\OPatch".
• To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c release 2 (12.2), you must first verify if you need to apply any mandatory patches for the upgrade to succeed. You can use CVU to perform this check.
• To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c release 2 (12.2), your current release must be greater than or equal to Oracle Grid Infrastructure 11g release 2 (11.2.0.3).
• To upgrade Oracle Grid Infrastructure installations from 11.2.0.2 to a later release, you must install the latest bundle for the cumulative patches for Oracle Grid Infrastructure (Patch 11 bundle or higher).
• During a major release upgrade to Oracle Clusterware 12c release 2 (12.2), the software in the Grid home for Oracle Grid Infrastructure 12c release 2 (12.2) is not fully functional until the upgrade is completed. Running the Server Control Utility (SRVCTL), crsctl, and other commands from the 12c release 2 (12.2) Grid home is not supported until the upgrade is complete across all nodes.
To manage databases using earlier releases of Oracle Database software during the Oracle Grid Infrastructure upgrade, use SRVCTL from the existing database homes (see the example following this list).
• To change a cluster member node role to Leaf, you must have completed the upgrade on all Oracle Grid Infrastructure nodes so that the active version is Oracle Grid Infrastructure 12c release 2 (12.2) or later.
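For example, to check the status of an earlier-release database during the Oracle Grid Infrastructure upgrade, run SRVCTL from that database's own home; the database home path and database name shown are hypothetical:
C:\> C:\app\oracle\product\11.2.0\dbhome_1\bin\srvctl status database -d sales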
See Also: Oracle Database Upgrade Guide
Storage Restrictions Related to Oracle Grid Infrastructure Upgrades
• If the Oracle Cluster Registry (OCR) and voting file locations for your current installation are on raw devices or shared file systems, then you must migrate them to Oracle ASM disk groups before upgrading to Oracle Grid Infrastructure 12c release 2 (12.2).
• If you want to upgrade Oracle Grid Infrastructure releases before Oracle Grid Infrastructure 11g release 2 (11.2), where the OCR and voting files are on raw or block devices or a shared file system, then you must first upgrade to Oracle Grid Infrastructure 11g release 2 (11.2). You must move the Oracle Cluster Registry (OCR) and voting files to Oracle ASM before you upgrade to Oracle Grid Infrastructure 12c release 2 (12.2).
Restrictions Related to Upgrading Shared Grid Homes
• You can perform upgrades on a shared Oracle Clusterware home.
• If the existing Oracle Clusterware home is a shared home, then you can use a non-shared home for the Oracle Grid Infrastructure for a cluster home for Oracle Clusterware and Oracle ASM 12c release 2 (12.2).
Single-Instance Oracle ASM Upgrade Restrictions
• During Oracle Grid Infrastructure installation or upgrade, if there is a single instance Oracle ASM release on the local node, then it is converted to an Oracle Flex ASM 12c release 2 (12.2) installation, and Oracle ASM runs in the Oracle Grid Infrastructure home on all nodes.
• If a single instance (non-clustered) Oracle ASM installation is on a remote node, which is a node other than the local node (the node on which the Oracle Grid Infrastructure installation or upgrade is being performed), then it will remain a single instance Oracle ASM installation. However, during the installation or upgrade, when the OCR and voting files are placed on Oracle ASM, then an Oracle Flex ASM installation is created on all nodes in the cluster. The single instance Oracle ASM installation on the remote node becomes nonfunctional.
Related Topics:
Using CVU to Validate Readiness for Oracle Clusterware Upgrades
(page 11-11)
Oracle recommends that you use Cluster Verification Utility (CVU) to
help to ensure that your upgrade is successful.
Example of Verifying System Upgrade Readiness for Grid Infrastructure
(page 11-13)
You can use the runcluvfy.bat command to check your system
before upgrading.
About the CVU Grid Upgrade Validation Command Options (page 11-11)
You can use the Cluster Verification Utility (CVU) to validate your
system readiness before upgrading.
11.5 Preparing to Upgrade an Existing Oracle Clusterware Installation
If you have an existing Oracle Clusterware installation, then you upgrade your existing cluster by performing an out-of-place upgrade. You cannot perform an in-place upgrade.
The following topics list the steps you can perform before you upgrade Oracle Grid
Infrastructure:
Upgrade Checklist for Oracle Grid Infrastructure (page 11-7)
Review this checklist before upgrading an existing Oracle Grid
Infrastructure. A cluster is being upgraded until all cluster member
nodes are running the new installations, and the new clusterware
becomes the active version.
Tasks to Complete Before Upgrading Oracle Grid Infrastructure (page 11-9)
Review the tasks before upgrading Oracle Clusterware.
Create an Oracle ASM Password File (page 11-10)
In certain situations, you must create an Oracle ASM password file
before you can upgrade to Oracle Grid Infrastructure 12c release 2.
Running the Oracle ORAchk Upgrade Readiness Assessment (page 11-11)
Download and run the ORAchk Upgrade Readiness Assessment before
upgrading Oracle Grid Infrastructure.
Using CVU to Validate Readiness for Oracle Clusterware Upgrades
(page 11-11)
Oracle recommends that you use Cluster Verification Utility (CVU) to
help to ensure that your upgrade is successful.
11.5.1 Upgrade Checklist for Oracle Grid Infrastructure
Review this checklist before upgrading an existing Oracle Grid Infrastructure. A
cluster is being upgraded until all cluster member nodes are running the new
installations, and the new clusterware becomes the active version.
If you intend to install or upgrade Oracle RAC, then you must first complete the
upgrade to Oracle Grid Infrastructure 12c release 2 (12.2) on all cluster nodes before
you install the Oracle Database 12c release 2 (12.2) release of Oracle RAC.
Check: Review the Oracle Database Upgrade Guide for deprecation and desupport information that may affect upgrade planning.
Task: Oracle Database Upgrade Guide

Check: Verify the current operating system is supported for the new release
Task: Confirm that you are using a supported operating system, patch release, and all required operating system packages for the new Oracle Grid Infrastructure installation.

Check: Upgrade to a later Oracle release if required
Task: On the Windows platform, to upgrade Oracle Clusterware from releases 10.2.0.5 and 11.1.0.7 to release 12.1, you must perform an interim upgrade to 11.2.0.4 for Oracle Clusterware.

Check: Patch set (recommended)
Task: Install the latest patch set release for your existing installation. Upgrades from some earlier patch set releases might not be supported.

Check: Move any files stored on OCFS for Windows or raw devices to a supported storage mechanism
Task: Migrate OCR files from raw devices to Oracle ASM. Direct use of raw devices is not supported. OCFS for Windows is no longer supported.

Check: Install user account
Task: Confirm that the installation user you plan to use is the same as the installation user that was used to perform the installation you want to upgrade.

Check: Oracle Home user
Task: If the Oracle home from which you run the database upgrade uses a Windows Domain User as the Oracle Home User, then the Oracle Home User on the target version must use the same Windows Domain User.

Check: Create a Grid home
Task: Create a new Oracle Grid Infrastructure Oracle home (Grid home) where you can extract the image files. All Oracle Grid Infrastructure upgrades (upgrades of existing Oracle Clusterware and Oracle ASM installations) are out-of-place upgrades.

Check: Oracle Cluster Registry (OCR) file integrity
Task: Confirm Oracle Cluster Registry (OCR) file integrity. If this check fails, then repair the OCRs before proceeding.
Check: Instance names for Oracle ASM
Task: Confirm that the Oracle Automatic Storage Management (Oracle ASM) instances use standard Oracle ASM instance names.
The default ASM SID for a single-instance database is +ASM, and the default SID for ASM on Oracle Real Application Clusters nodes is +ASMnode#, where node# is the node number. With Oracle Grid Infrastructure 11.2.0.1 and later, nondefault Oracle ASM instance names are not supported.
If you have nondefault Oracle ASM instance names, then before you upgrade your cluster, use your existing release srvctl to remove individual Oracle ASM instances with nondefault names, and add Oracle ASM instances with default names.
Check: Network Addresses for standard Oracle Grid Infrastructure deployments
Task: For standard Oracle Grid Infrastructure installations, confirm the following network configuration:
• The private and public IP addresses are in unrelated, separate subnets. The private subnet should be in a dedicated private subnet.
• The public and virtual IP addresses, including SCAN addresses, are in the same subnet (the range of addresses permitted by the subnet mask for the subnet network).
• Neither private nor public IP addresses use a link local subnet (169.254.*.*).
Check: CVU Upgrade Validation
Task: Use Cluster Verification Utility (CVU) to assist you with system checks in preparation for starting an upgrade.

Check: Unset Environment variables
Task: As the user that will be performing the upgrade, unset the environment variables %ORACLE_BASE%, %ORACLE_HOME%, and %ORACLE_SID% as follows:
C:\> set ORACLE_BASE=
C:\> set ORACLE_HOME=
C:\> set ORACLE_SID=
Check that the ORA_CRS_HOME environment variable is not set. Do not use ORA_CRS_HOME as an environment variable, except under explicit direction from Oracle Support.
Check: RACcheck Upgrade Readiness Assessment
Task: Download and run the RACcheck Upgrade Readiness Assessment to obtain an automated upgrade-specific health check for upgrades to Oracle Grid Infrastructure. See My Oracle Support note 1457357.1, which is available at the following URL:
https://support.oracle.com/rs?type=doc&id=1457357.1

Check: Back Up the Oracle Software Before Upgrades
Task: Before you make any changes to the Oracle software, Oracle recommends that you create a backup of the Oracle software and databases.
For Oracle software running on Windows operating systems, you must also take a backup of the Windows registry. Without a registry backup, you will not be able to restore the Oracle software to a working state if the upgrade of Oracle Grid Infrastructure or Oracle Database fails and you want to revert to the previous software installation.
11.5.2 Tasks to Complete Before Upgrading Oracle Grid Infrastructure
Review the tasks before upgrading Oracle Clusterware.
Before upgrading, you must unset the following environment variables:
• ORACLE_BASE
• ORACLE_HOME
• ORACLE_SID
• ORA_NLS10
• TNS_ADMIN
• ORA_CRS_HOME
If you have set ORA_CRS_HOME as an environment variable, following instructions
from Oracle Support, then unset it before starting an installation or upgrade. You
should never use ORA_CRS_HOME as an environment variable except under explicit
direction from Oracle Support.
1. For each node, use the Cluster Verification Utility (CVU) to assist you with system
checks in preparation for patching or upgrading.
You can run CVU before starting the upgrade; however, the installer runs the appropriate CVU checks automatically and prompts you to fix problems before proceeding with the upgrade.
2. Ensure that you have information you will need during installation, including the following:
• An Oracle base location for Oracle Clusterware
• An Oracle Grid Infrastructure home location that is different from your existing Grid home location
• SCAN name and addresses, and other network addresses
• Privileged user operating system groups
• Local Administrator user access, or access as the user who performed the previous Oracle Clusterware installation
3. For the installation user running the installation, if you have environment variables
set for the existing installation, then unset the environment variables
%ORACLE_HOME% and %ORACLE_SID%, because these environment variables are
used during upgrade. For example, as the grid user, run the following commands
on the local node:
C:\> set ORACLE_HOME=
C:\> set ORACLE_BASE=
C:\> set ORACLE_SID=
4. If you have set ORA_CRS_HOME as an environment variable, following instructions
from Oracle Support, then unset it before starting an installation or upgrade.
You should never use ORA_CRS_HOME as an environment variable except under
explicit direction from Oracle Support.
5. If you have an existing installation on your system, and you are using the same
user account to upgrade this installation, then unset the following environment
variables:
ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, TNS_ADMIN and any other
environment variable set for the Oracle installation user that is connected with
Oracle software homes.
6. Check to ensure that the user profile for the Oracle Installation User does not set
any of these environment variables.
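As a quick check (a minimal sketch using the standard Windows set command), you can list any of these variables that are still defined for the current session; if nothing matches a prefix, set reports that the variable is not defined:
C:\> set ORACLE
C:\> set ORA_
C:\> set TNS_ADMIN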
Related Topics:
Configuring Users, Groups and Environments for Oracle Grid Infrastructure and
Oracle RAC (page 5-1)
Configuring Networks for Oracle Grid Infrastructure and Oracle RAC (page 4-1)
11.5.3 Create an Oracle ASM Password File
In certain situations, you must create an Oracle ASM password file before you can
upgrade to Oracle Grid Infrastructure 12c release 2.
If you are upgrading Oracle Grid Infrastructure from release 11.2.0.4 to release
12.1.0.2, and then upgrade to Oracle Grid Infrastructure 12c release 2 (12.2), you must
create an Oracle ASM password file before starting the upgrade to Oracle Grid
Infrastructure 12c release 2 (12.2).
On Windows platforms, this issue only exists when upgrading from Oracle Grid
Infrastructure 11.2.0.4 to Oracle Grid Infrastructure 12.1.0.2 and then to Oracle Grid
Infrastructure 12.2. If you upgrade from Oracle Grid Infrastructure 11.2.0.4 directly to
Oracle Grid Infrastructure 12.2, or if you upgrade from Oracle Grid Infrastructure
12.1.0.2 to Oracle Grid Infrastructure 12.2 directly, then this task is not required.
If you do not complete this task before starting the upgrade process, then you will get
a failed check during installation. If you choose to ignore the error and continue the
upgrade, then the rootcrs script fails to complete and generates the following error:
2016-06-28 20:20:33: Command output:
CLSRSC-661: The Oracle ASM password file does not exist at location
C:\app\12.1.0\grid\database\PWD+ASM.ora.
After upgrading Oracle Grid Infrastructure from release 11.2.0.4 to release 12.1.0.2, complete the following steps to resolve this error before continuing with the upgrade to Oracle Grid Infrastructure 12c release 2:
1. Start the ASMCMD utility from the upgraded Oracle Grid Infrastructure home
directory.
Grid_home_12102\bin\asmcmd.bat
2. Set the Oracle ASM disk group compatibility to 12.1.0.0.0 or higher for the disk
group that stores the Oracle Cluster Registry (OCR).
ASMCMD> setattr -G disk_group_name compatible.asm 12.1.0.0.0
3. Create an Oracle ASM password file.
ASMCMD> pwcreate --asm +disk_group_name/orapwASM sys_password
4. Verify the password file was created.
ASMCMD> pwget --asm
+disk_group_name/orapwasm
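For example, assuming the disk group that stores the OCR is named DATA (a hypothetical name) and the 12.1.0.2 Grid home is C:\app\12.1.0.2\grid, the sequence might look like the following; substitute your own disk group name and SYS password:
C:\> C:\app\12.1.0.2\grid\bin\asmcmd.bat
ASMCMD> setattr -G DATA compatible.asm 12.1.0.0.0
ASMCMD> pwcreate --asm +DATA/orapwASM my_sys_password
ASMCMD> pwget --asm
+DATA/orapwasm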
11.5.4 Running the Oracle ORAchk Upgrade Readiness Assessment
Download and run the ORAchk Upgrade Readiness Assessment before upgrading
Oracle Grid Infrastructure.
ORAchk is an Oracle RAC configuration audit tool. ORAchk Upgrade Readiness
Assessment can be used to obtain an automated upgrade-specific health check for
upgrades to Oracle Grid Infrastructure 11.2.0.3, 11.2.0.4, 12.1.0.1, 12.1.0.2, and 12.2. You
can run the ORAchk Upgrade Readiness Assessment tool and automate many of the
manual pre-upgrade and post-upgrade checks.
Oracle recommends that you download and run the latest version of ORAchk from
My Oracle Support. For information about downloading, configuring, and running
ORAchk, refer to My Oracle Support note 1457357.1.
See Also:
• https://support.oracle.com/rs?type=doc&id=1457357.1
• Oracle ORAchk and EXAchk User’s Guide
11.5.5 Using CVU to Validate Readiness for Oracle Clusterware Upgrades
Oracle recommends that you use Cluster Verification Utility (CVU) to help to ensure
that your upgrade is successful.
You can use CVU to assist you with system checks in preparation for starting an
upgrade. CVU runs the appropriate system checks automatically, and either prompts
you to fix problems, or provides a fixup script to be run on all nodes in the cluster
before proceeding with the upgrade.
About the CVU Grid Upgrade Validation Command Options (page 11-11)
You can use the Cluster Verification Utility (CVU) to validate your
system readiness before upgrading.
Example of Verifying System Upgrade Readiness for Grid Infrastructure
(page 11-13)
You can use the runcluvfy.bat command to check your system
before upgrading.
Related Topics:
Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades (page 11-4)
Review the restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consist of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
11.5.5.1 About the CVU Grid Upgrade Validation Command Options
You can use the Cluster Verification Utility (CVU) to validate your system readiness
before upgrading.
You can run upgrade validations in one of two ways:
• Run the installer, and allow the CVU validation built into the installer to perform system checks
• Run the CVU command-line script runcluvfy.bat to perform system checks
To use the installer to perform pre-install checks, run the installation as you normally would. The installer starts CVU, and performs system checks as part of the installation process. Using the installer to perform these checks is particularly appropriate if you think you have completed preinstallation checks, and you want to confirm that your system configuration meets minimum requirements for installation.
To use the runcluvfy.bat command-line script for CVU, navigate to the new Grid home where you extracted the image files for upgrade, which contains the runcluvfy.bat script, and run the following command to check the readiness of your Oracle Clusterware installation for upgrades:
runcluvfy.bat stage -pre crsinst -upgrade
Running runcluvfy.bat with the -pre crsinst -upgrade options performs
system checks to confirm if the cluster is in a correct state for upgrading from an
existing clusterware installation.
The runcluvfy command uses the following syntax, where variable content is
indicated by italics:
runcluvfy.bat stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome
-dest_crshome dest_Gridhome -dest_version dest_release
[-verbose]
The options are:
• -rolling
Use this option to verify readiness for rolling upgrades.
• -src_crshome src_Gridhome
Use this option to indicate the location of the source Oracle Clusterware or Grid home that you are upgrading, where src_Gridhome is the path to the home to upgrade.
• -dest_crshome dest_Gridhome
Use this option to indicate the location of the upgrade Grid home, where dest_Gridhome is the path to the Grid home.
• -dest_version dest_release
Use the dest_version option to indicate the release number of the upgrade, including any patchset. The release number must include the five digits designating the release to the level of the platform-specific patch. For example: 12.1.0.1.0.
• -verbose
Use the -verbose option to produce detailed output of individual checks.
Related Topics:
Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades (page 11-4)
Review the restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consist of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
11.5.5.2 Example of Verifying System Upgrade Readiness for Grid Infrastructure
You can use the runcluvfy.bat command to check your system before upgrading.
Verify that the permissions required for installing Oracle Clusterware have been
configured by running the following command:
C:\> runcluvfy.bat stage -pre crsinst -upgrade -rolling
-src_crshome C:\app\11.2.0.3\grid -dest_crshome C:\app\12.2.0\grid
-dest_version 12.2.0.1.0 -verbose
Related Topics:
Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades (page 11-4)
Review the restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consist of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
11.6 Understanding Rolling Upgrades Using Batches
Instead of shutting down all nodes when applying patches, you can shut down some nodes while the other nodes remain running.
When you upgrade Oracle Grid Infrastructure, you upgrade the entire cluster. You
cannot select or de-select individual nodes for upgrade. Oracle does not support
attempting to add additional nodes to a cluster during a rolling upgrade. Oracle
recommends that you leave Oracle RAC instances running when upgrading Oracle
Clusterware. When you start the upgrade process on each node, the upgrade scripts
shut down the database instances and then start the instances again.
When performing the upgrade, you can divide the nodes into groups, or batches, and
start upgrades of these node batches. Between batches, you can move services from
nodes running the earlier release to the upgraded nodes, so that services are not
affected by the upgrade.
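For example, before shutting down a batch that contains instance sales1, you might relocate a service to an instance on an already-upgraded node by running SRVCTL from the database home that manages that database; the database, service, and instance names are hypothetical, and the exact option syntax depends on your database release:
C:\> C:\app\oracle\product\11.2.0\dbhome_1\bin\srvctl relocate service -d sales -s salesrpt -i sales1 -t sales3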
Restrictions for Selecting Nodes for Batch Upgrades
The following restrictions apply when selecting nodes in batches for upgrade:
• You can pool nodes in batches for upgrade, up to a maximum of three batches.
• The local node, where Oracle Universal Installer (OUI) is running, must be upgraded in batch one.
• Hub and Leaf Nodes cannot be upgraded in the same batch.
• All Hub Nodes must be upgraded before starting the upgrade of Leaf Nodes.
11.7 Performing Rolling Upgrades of Oracle Grid Infrastructure
Review this information to perform a rolling upgrade of Oracle Grid Infrastructure.
Upgrading Oracle Grid Infrastructure from an Earlier Release (page 11-14)
Complete this procedure to upgrade Oracle Grid Infrastructure (Oracle
Clusterware and Oracle Automatic Storage Management) from an earlier
release.
Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable
(page 11-16)
If some nodes become unreachable in the middle of an upgrade, then
you cannot complete the upgrade without user intervention.
Joining Inaccessible Nodes After Forcing an Upgrade (page 11-16)
You can add inaccessible nodes to the cluster after a forced cluster
upgrade.
Changing the First Node for Install and Upgrade (page 11-17)
If the first node becomes inaccessible, you can force another node to be
the first node for installation or upgrade.
Related Topics:
Understanding Out-of-Place and Rolling Upgrades (page 11-2)
You can use rolling upgrade or out-of-place upgrades to upgrade your
Oracle Grid Infrastructure software.
11.7.1 Upgrading Oracle Grid Infrastructure from an Earlier Release
Complete this procedure to upgrade Oracle Grid Infrastructure (Oracle Clusterware
and Oracle Automatic Storage Management) from an earlier release.
If you previously attempted to upgrade to a higher release of Oracle Grid
Infrastructure, and that Grid home still exists, then instead of running the installer, run
the config.bat script in the Grid_home\crs\config directory of the higher
version Grid home. Select the Upgrade option to upgrade the existing Grid home to
the later release.
1. As the Grid installation user, download the Oracle Grid Infrastructure image files
and extract the files to the Grid home.
For example:
mkdir D:\app\12.2.0\grid
cd D:\app\12.2.0\grid
unzip -q download_location\grid_home.zip
where download_location\grid_home.zip is the path of the downloaded
Oracle Grid Infrastructure image file.
Note:
• You must extract the image software into the directory where you want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the local node only. During upgrade, the software is copied and installed on all other nodes in the cluster.
2. If there are non-clustered, or single-instance, Oracle databases that use Oracle ASM
running on any of the nodes in the cluster, they must be shut down before you start
the upgrade.
Listeners associated with those databases do not have to be shut down.
Note: Oracle recommends that you leave Oracle Real Application Clusters
(Oracle RAC) instances running during the Oracle Clusterware upgrade.
During the upgrade process, the database instances on the node being
upgraded are stopped and started automatically during the upgrade process.
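For example, if a single-instance database that uses Oracle ASM is registered with Oracle Clusterware, you can stop it with SRVCTL from its own database home before starting the upgrade; the home path and database name here are hypothetical:
C:\> C:\app\oracle\product\12.1.0\dbhome_1\bin\srvctl stop database -d hrdb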
3. Start the Oracle Grid Infrastructure wizard in the new Grid home by running the
following command:
Grid_home\gridSetup.bat
4. Select the option Upgrade Oracle Grid Infrastructure.
This option upgrades Oracle Grid Infrastructure (Oracle Clusterware and Oracle
ASM).
5. On the node selection page, select all nodes.
6. Select installation options as prompted.
7. Confirm your selections, and then the upgrade scripts are run automatically.
8. Because the Oracle Grid Infrastructure home is in a different location than the
former Oracle Clusterware and Oracle ASM homes, update any scripts or
applications that use utilities or other files that reside in the Oracle Clusterware
and Oracle ASM homes.
Note:
• In Oracle Grid Infrastructure 12c release 2 (12.2), the OCR and OCR backup must be located in an Oracle ASM disk group. During the upgrade, regardless of where the OCR backup location was in the previous release, the OCR backup location is changed to an Oracle ASM disk group.
• The backups in the old Oracle Clusterware home can be deleted after the upgrade because upgrades of Oracle Clusterware are out-of-place upgrades, and after the upgrade to Oracle Grid Infrastructure 12c release 2 (12.2) the OCR and OCR backup must be located in an Oracle ASM disk group.
• If the cluster being upgraded has a single disk group that stores the OCR, OCR backup, Oracle ASM password, Oracle ASM password file backup, and the Grid Infrastructure Management Repository (GIMR), then Oracle recommends that you create a separate disk group or use another existing disk group and store the OCR backup, the GIMR and Oracle ASM password file backup in that disk group.
See Also: Oracle Clusterware Administration and Deployment Guide for the
commands to create a disk group.
11.7.2 Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable
If some nodes become unreachable in the middle of an upgrade, then you cannot
complete the upgrade without user intervention.
Because the upgrade did not complete successfully on the unreachable nodes, the
upgrade is incomplete. Oracle Clusterware remains in the earlier release.
1. Confirm that the upgrade is incomplete by entering the following command:
crsctl query crs activeversion
2. To resolve the incomplete upgrade, run the gridConfig.bat -upgrade command with the -force option on any of the nodes where the gridConfig.bat script has already completed, as follows:
Grid_home\crs\config\gridConfig.bat -upgrade -force
For example, as the Oracle Installation User for Oracle Grid Infrastructure, run the
following command:
C:\> C:\app\12.2.0\grid\crs\config\gridConfig.bat -upgrade -force
The force cluster upgrade has the following limitations:
• All active nodes must be upgraded to the newer release.
• All inactive nodes (accessible or inaccessible) may be either upgraded or not upgraded.
• For inaccessible nodes, after patch set upgrades, you can delete the node from the cluster. If the node becomes accessible later, and the patch version upgrade path is supported, then you can upgrade it to the new patch version.
This command forces the upgrade to complete.
3. Verify that the upgrade has completed by using the command crsctl query
crs activeversion.
The active release should be the upgrade release.
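For example, after a successful upgrade to 12.2.0.1, the query returns output similar to the following (the exact wording can vary by release):
C:\> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]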
11.7.3 Joining Inaccessible Nodes After Forcing an Upgrade
You can add inaccessible nodes to the cluster after a forced cluster upgrade.
Starting with Oracle Grid Infrastructure 12c release 1 (12.1), after you complete a force
cluster upgrade command, you can join inaccessible nodes to the cluster as an
alternative to deleting the nodes, which was required in earlier releases.
To use this option, you must already have Oracle Grid Infrastructure 12c release 2
(12.2) software installed on the nodes.
1. Log in as an Administrator user on the nodes that you want to join to the cluster.
2. Change directory to the Oracle Grid Infrastructure 12c release 2 (12.2) Grid_home
directory.
For example:
C:\> cd app\12.2.0\grid
3. Run the following command, where upgraded_node is the inaccessible or
unreachable node that you want to join to the cluster.
C:\..\grid\> rootcrs.bat -join -existingnode upgraded_node
11.7.4 Changing the First Node for Install and Upgrade
If the first node becomes inaccessible, you can force another node to be the first node
for installation or upgrade.
• Installation: If gridconfig.bat fails to complete on the first node, run the following command on another node using the -force option:
Grid_home\crs\config\gridconfig.bat -force -first
• Upgrade: If gridconfig.bat fails to complete on the first node, run the following command on another node using the -force option:
Grid_home\crs\config\gridconfig.bat -upgrade -force -first
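For example, if the new Grid home is C:\app\12.2.0\grid (the path is illustrative), the upgrade form of the command is:
C:\> C:\app\12.2.0\grid\crs\config\gridconfig.bat -upgrade -force -first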
11.8 Applying Patches to Oracle Grid Infrastructure
After you have upgraded to Oracle Grid Infrastructure 12c Release 2 (12.2), you can
install individual software patches by downloading them from My Oracle Support.
About Individual (One-Off) Oracle Grid Infrastructure Patches (page 11-17)
Download an Oracle ASM one-off patch and apply it to Oracle Grid
Infrastructure using the OPatch Utility.
About Oracle Grid Infrastructure Software Patch Levels (page 11-18)
Review this topic to understand how to apply patches for Oracle ASM
and Oracle Clusterware.
Patching Oracle ASM to a Software Patch Level (page 11-18)
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a new
cluster state called Rolling Patch is available.
11.8.1 About Individual (One-Off) Oracle Grid Infrastructure Patches
Download an Oracle ASM one-off patch and apply it to Oracle Grid Infrastructure using the OPatch utility.
Individual patches are called one-off patches. An Oracle ASM one-off patch is available for a specific release of Oracle ASM. If a patch you want is available, then you can download the patch and apply it to Oracle ASM using the OPatch utility.
The OPatch inventory keeps track of the patches you have installed for your release of
Oracle ASM. If there is a conflict between the patches you have installed and patches
you want to apply, then the OPatch Utility advises you of these conflicts.
Related Topics:
Patching Oracle ASM to a Software Patch Level (page 11-18)
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a new
cluster state called Rolling Patch is available.
11.8.2 About Oracle Grid Infrastructure Software Patch Levels
Review this topic to understand how to apply patches for Oracle ASM and Oracle
Clusterware.
The software patch level for Oracle Grid Infrastructure represents the set of all one-off patches applied to the Oracle Grid Infrastructure software release, including Oracle ASM. The release is the release number, in the format of major, minor, and patch set release number. For example, with the release number 12.1.0.1, the major release is 12, the minor release is 1, and 0.1 is the patch set number. With one-off patches, the major and minor release remain the same, though the patch levels change each time you apply or roll back an interim patch.
As with standard upgrades to Oracle Grid Infrastructure, at any given point in time
for normal operation of the cluster, all the nodes in the cluster must have the same
software release and patch level. Because one-off patches can be applied as rolling
upgrades, all possible patch levels on a particular software release are compatible with
each other.
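To see the patch level that is currently recorded for a node, you can query it with crsctl from the Grid home; these queries are available in recent releases, and the output format varies slightly between them:
C:\> crsctl query crs softwarepatch
C:\> crsctl query crs releasepatch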
11.8.3 Patching Oracle ASM to a Software Patch Level
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a new cluster state called
Rolling Patch is available.
Rolling Patch mode is similar to the existing Rolling Upgrade mode in terms of the
Oracle ASM operations allowed in this quiesce state.
1. Download the patch you want to apply from My Oracle Support:
a. Go to https://support.oracle.com
b. Select the Patches and Updates tab to locate the patch.
To locate patch bundles, you can perform a Product or Family (Advanced)
search for your platform and software release.
Oracle recommends that you select Recommended Patch Advisor, and enter
the product group, release, and platform for your software. My Oracle Support
provides you with a list of the most recent patches and critical patch updates
(CPUs).
c. Place the patch in an accessible directory, such as C:\downloads.
2. Change directory to the OPatch directory in the Grid home.
For example:
C:\> cd app\12.2.0\grid\OPatch
3. Review the patch documentation for the patch you want to apply, and complete all
required steps before starting the patch upgrade.
4. Follow the instructions in the patch documentation to apply the patch.
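As an illustration only, after extracting a one-off patch to a staging directory such as C:\downloads\patch_number (a hypothetical location), a typical OPatch session checks the inventory and then applies the patch; always follow the patch README for the exact commands:
C:\app\12.2.0\grid\OPatch> opatch lsinventory
C:\app\12.2.0\grid\OPatch> opatch apply C:\downloads\patch_number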
Related Topics:
About Individual (One-Off) Oracle Grid Infrastructure Patches (page 11-17)
Download an Oracle ASM one-off patch and apply it to Oracle Grid
Infrastructure using the OPatch Utility.
11.9 Updating Oracle Enterprise Manager Cloud Control Target Parameters
After upgrading Oracle Grid Infrastructure, upgrade the Enterprise Manager Cloud
Control target.
Because Oracle Grid Infrastructure 12c Release 2 (12.2) is an out-of-place upgrade of
the Oracle Clusterware home in a new location (the Oracle Grid Infrastructure for a
cluster home, or Grid home), the path for the CRS_HOME parameter in some parameter
files must be changed. If you do not change the parameter, then you encounter errors
such as "cluster target broken" on Oracle Enterprise Manager Cloud Control.
To resolve the issue, update the Enterprise Manager Cloud Control target, and then
update the Enterprise Manager Agent Base Directory on each cluster member node
running an agent.
Updating the Enterprise Manager Cloud Control Target After Upgrades
(page 11-19)
After upgrading Oracle Grid Infrastructure, update the Enterprise
Manager Target with the new Grid home path.
Updating the Enterprise Manager Agent Base Directory After Upgrades
(page 11-19)
After upgrading Oracle Grid Infrastructure, update the Enterprise
Manager Agent Base Directory on each cluster member node running an
agent.
Registering Resources with Oracle Enterprise Manager After Upgrades
(page 11-20)
After upgrading Oracle Grid Infrastructure, add the new resource
targets to Oracle Enterprise Manager Cloud Control.
11.9.1 Updating the Enterprise Manager Cloud Control Target After Upgrades
After upgrading Oracle Grid Infrastructure, update the Enterprise Manager Target
with the new Grid home path.
1. Log in to Enterprise Manager Cloud Control.
2. Navigate to the Targets menu, and then to the Cluster page.
3. Click a cluster target that was upgraded.
4. Click Cluster, then Target Setup, and then Monitoring Configuration from the menu.
5. Update the value for Oracle Home with the new Grid home path.
6. Save the updates.
11.9.2 Updating the Enterprise Manager Agent Base Directory After Upgrades
After upgrading Oracle Grid Infrastructure, update the Enterprise Manager Agent
Base Directory on each cluster member node running an agent.
The Agent Base Directory is a directory where the Management Agent home is created. The Management Agent home is in the path Agent_Base_directory\core\EMAgent_Version. For example, if the Agent Base Directory is C:\app\emagent, then Oracle creates the Management Agent home as C:\app\emagent\core\13.1.1.0.
1. Navigate to the bin directory in the Management Agent home.
2. In the C:\app\emagent\core\13.1.1.0\bin directory, open the file emctl
with a text editor.
3. Locate the parameter CRS_HOME, and update the parameter to the new Grid home
path.
4. Repeat steps 1-3 on each node of the cluster with an Enterprise Manager agent.
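As an illustration of step 3, if the new Grid home is C:\app\12.2.0\grid (an assumed path), the CRS_HOME entry in the emctl file is updated to point to that directory, for example:
CRS_HOME=C:\app\12.2.0\grid
Depending on the format of the emctl file on your system, the entry may instead appear as set CRS_HOME=C:\app\12.2.0\grid.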
11.9.3 Registering Resources with Oracle Enterprise Manager After Upgrades
After upgrading Oracle Grid Infrastructure, add the new resource targets to Oracle
Enterprise Manager Cloud Control.
Discover and add new resource targets in Oracle Enterprise Manager after Oracle Grid
Infrastructure upgrade. The following procedure provides an example of discovering
an Oracle ASM listener target after upgrading Oracle Grid Infrastructure.
1. Log in to Oracle Enterprise Manager Cloud Control.
2. From the Setup menu, select Add Target, and then select Add Targets Manually.
The Add Targets Manually page is displayed.
3. In the Add Targets page, select the Add Using Guided Process option and Target
Type as Oracle Database, Listener and Automatic Storage
Management.
For any other resource to be added, select the appropriate Target Type in Oracle
Enterprise Manager discovery wizard.
4. Click Add Using Guided Process.
The Target Discover wizard is displayed.
5. For the Specify Host or Cluster field, click on the Search icon and search for Target
Types of Hosts, and select the corresponding Host.
6. Click Next.
7. In the Target Discovery: Results page, select the discovered Oracle ASM Listener
target, and click Configure.
8. In the Configure Listener dialog box, specify the listener properties and click OK.
9. Click Next and complete the discovery process.
The listener target is discovered in Oracle Enterprise Manager with the status as
Down.
10. From the Targets menu, select the type of target.
11. Click the target name to navigate to the target home page.
12. From the host, database, middleware target, or application menu displayed on the
target home page, select Target Setup, then select Monitoring Configuration.
13. In the Monitoring Configuration page for the listener, specify the host name in the
Machine Name field and the password for the ASMSNMP user in the Password
field.
14. Click OK.
The Oracle ASM listener target is displayed with the correct status.
Similarly, you can add other clusterware resources to Oracle Enterprise Manager after
an Oracle Grid Infrastructure upgrade.
11.10 Checking Cluster Health Monitor Repository Size After Upgrading
If you are upgrading Oracle Grid Infrastructure from a prior release using IPD/OS to
the current release, then review the Cluster Health Monitor repository size (the CHM
repository).
1. Review your CHM repository needs, and determine if you need to increase the
repository size to maintain a larger CHM repository.
Note: Your previous IPD/OS repository is deleted when you install Oracle
Grid Infrastructure.
By default, the CHM repository size is a minimum of either 1GB or 3600 seconds (1
hour), regardless of the size of the cluster.
2. To enlarge the CHM repository, use the following command syntax, where
RETENTION_TIME is the size of CHM repository in number of seconds:
oclumon manage -repos changeretentiontime RETENTION_TIME
For example, to set the repository size to four hours:
oclumon manage -repos changeretentiontime 14400
The value for RETENTION_TIME must be more than 3600 (one hour) and less than
259200 (three days). If you enlarge the CHM repository size, then you must ensure
that there is local space available for the repository size you select on each node of
the cluster. If you do not have sufficient space available, then you can move the
repository to shared storage.
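To check the current retention setting and repository location before and after making the change, you can query the repository (a minimal check):
oclumon manage -get repsize
oclumon manage -get reppath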
11.11 Downgrading Oracle Clusterware After an Upgrade
After a successful or a failed upgrade, you can restore Oracle Clusterware to the
previous release.
Downgrading Oracle Clusterware restores the Oracle Clusterware configuration to the
state it was in before the Oracle Grid Infrastructure 12c Release 2 (12.2) upgrade. Any
configuration changes you performed during or after the Oracle Grid Infrastructure
12c Release 2 (12.2) upgrade are removed and cannot be recovered.
To restore Oracle Clusterware to the previous release, use the downgrade procedure
for the release to which you want to downgrade.
Note: Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), you can downgrade the cluster nodes in any sequence. You can downgrade all cluster nodes except one, in parallel. You must downgrade the last node after you downgrade all other nodes.

Note: When downgrading after a failed upgrade, if the rootcrs.sh or rootcrs.bat file does not exist on a node, then instead of executing the script, use the command perl rootcrs.pl. Use the perl interpreter located in the Oracle Home directory.
Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1) (page 11-22)
Use this procedure to downgrade to Oracle Grid Infrastructure 12c
release 1 (12.1).
Downgrading to Oracle Grid Infrastructure 11g Release 2 (11.2) (page 11-24)
Use this procedure to downgrade to Oracle Grid Infrastructure 11g
Release 2 (11.2).
11.11.1 Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1)
Use this procedure to downgrade to Oracle Grid Infrastructure 12c release 1 (12.1).
1. Optional: Delete the Oracle Grid Infrastructure 12c release 2 (12.2) Management
Database:
dbca -silent -deleteDatabase -sourceDB -MGMTDB
2. Use the command syntax rootcrs.bat -downgrade to downgrade Oracle Grid
Infrastructure on all nodes, in any sequence.
For example:
C:\app\12.2.0\grid\crs\install\rootcrs.bat -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes but one.
3. Downgrade the last node after you downgrade all other nodes.
C:\app\12.2.0\grid\crs\install\rootcrs.bat -downgrade
4. Remove the Oracle Grid Infrastructure 12c release 2 (12.2) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootcrs.bat -downgrade
command has run successfully, log in as the Oracle Grid Infrastructure
installation owner.
b. Use the following command to start the installer, where C:\app\12.2.0\grid is the location of the new (upgraded) Grid home:
cd C:\app\12.2.0\grid\oui\bin
setup.exe -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false
ORACLE_HOME=C:\app\12.2.0\grid "CLUSTER_NODES=node1,node2,node3"
LOCAL_NODE=local_node_running_the_command -doNotUpdateNodeList
Add the flag -cfs if the Grid home is a shared home.
5. Set Oracle Grid Infrastructure 12c release 1 (12.1) Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootcrs.bat -downgrade
command has run successfully, log in as the Oracle Grid Infrastructure
installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
cd C:\app\12.1.0\grid\oui\bin
setup.exe -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent
CRS=true
ORACLE_HOME=C:\app\12.1.0\grid "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
6. Start the Oracle Grid Infrastructure 12c release 1 (12.1) clusterware stack from the
old Grid home on each node to complete the downgrade.
crsctl start crs
7. On any node, remove the MGMTDB resource as follows:
121_Grid_home\bin\srvctl remove mgmtdb
8. If you are downgrading to Oracle Grid Infrastructure 12c release 1 (12.1.0.2), run
the following commands to configure the Grid Infrastructure Management
Database:
a. Run DBCA in silent mode from the Oracle Database 12c release 1 (12.1.0.2)
home and create the Management Database container database (CDB) as
follows:
12102_Grid_home\bin\dbca -silent -createDatabase -createAsContainerDatabase
true
-templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb
-storageType ASM -diskGroupName ASM_DG_NAME
-datafileJarLocation 12102_grid_home\assistants\dbca\templates
-characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
b. Run DBCA in silent mode from the Oracle Database 12c release 1 (12.1.0.2)
home and create the Management Database pluggable database (PDB) as
follows:
12102_Grid_home\bin\dbca -silent -createPluggableDatabase -sourceDB -MGMTDB
-pdbName cluster_name -createPDBFrom RMANBACKUP
-PDBBackUpfile 12102_Grid_home\assistants\dbca\templates\mgmtseed_pdb.dfb
-PDBMetadataFile 12102_Grid_home\assistants\dbca\templates\mgmtseed_pdb.xml
-createAsClone true -internalSkipGIHomeCheck
9. If you are downgrading to Oracle Grid Infrastructure 12c release 1 (12.1.0.1), run
DBCA in silent mode from the Oracle Database 12c release 1 (12.1.0.1) home and
create the Management Database as follows:
12101_Grid_home\bin\dbca -silent -createDatabase
-templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb
-storageType ASM -diskGroupName ASM_DG_NAME
-datafileJarLocation 12101_Grid_home\assistants\dbca\templates
-characterset AL32UTF8 -autoGeneratePasswords
10. Configure the Management Database by running the Configuration Assistant from
the location 121_Grid_home\bin\mgmtca.
11.11.2 Downgrading to Oracle Grid Infrastructure 11g Release 2 (11.2)
Use this procedure to downgrade to Oracle Grid Infrastructure 11g Release 2 (11.2).
1. Delete the Oracle Grid Infrastructure 12c release 2 (12.2) Management Database:
dbca -silent -deleteDatabase -sourceDB -MGMTDB
2. As an Administrator user, use the command syntax Grid_home\crs\install\rootcrs.bat -downgrade to stop the Oracle Grid Infrastructure 12c release 2 (12.2) resources, and shut down the Oracle Grid Infrastructure stack.
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user.
You can run the downgrade script in parallel on all but one of the cluster nodes.
You must downgrade the last node after you downgrade all other nodes.
3. Remove Oracle Grid Infrastructure 12c release 2 (12.2) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the upgrade to Oracle Grid
Infrastructure 12c has completed successfully, log in as the Oracle Grid
Infrastructure installation owner.
b. Use the following command to start the installer, where C:\app\12.2.0\grid is the location of the new (upgraded) Grid home:
cd C:\app\12.2.0\grid\oui\bin
setup.exe -nowait -updateNodeList -silent CRS=false
ORACLE_HOME=C:\app\12.2.0\grid
"CLUSTER_NODES=node1,node2,node3" LOCAL_NODE=local_node_running_the_command
-doNotUpdateNodeList
Add the option -cfs if the Grid home is a shared home.
4. Set the Oracle Grid Infrastructure 11g Release 2 (11.2) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the upgrade to Oracle Grid
Infrastructure 12c has completed successfully, log in as the Oracle Grid
Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
cd C:\app\11.2.0\grid\oui\bin
setup.exe -nowait -updateNodeList -silent CRS=true
ORACLE_HOME=C:\app\11.2.0\grid -doNotUpdateNodeList
Add the option -cfs if the Grid home is a shared home.
5. Manually start the Oracle Clusterware stack for Oracle Grid Infrastructure 11g
Release 2 (11.2).
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware
home using the command crsctl start crs. For example, if the earlier release
home is C:\app\11.2.0\grid, use the following command on each node:
C:\app\11.2.0\grid\bin> crsctl start crs
11.12 Completing Failed or Interrupted Installations and Upgrades
If Oracle Universal Installer (OUI) exits on the node from which you started the
installation or upgrade (the first node), or the node reboots before you confirm that the
gridconfig.bat script was run on all cluster nodes, then the upgrade or installation
remains incomplete.
In an incomplete installation or upgrade, configuration assistants still need to run, and
the new Grid home still needs to be marked as active in the central Oracle inventory.
You must complete the installation or upgrade on the affected nodes manually.
Completing Failed Installations and Upgrades (page 11-25)
Understand how to join nodes to the cluster after installation or upgrade
fails on some nodes.
Continuing Incomplete Upgrade of First Nodes (page 11-26)
When the first node cannot be upgraded, use these steps to continue the
upgrade process.
Continuing Incomplete Upgrades on Remote Nodes (page 11-26)
For nodes other than the first node (the node on which you started the
upgrade), use these steps to continue the upgrade process.
Continuing Incomplete Installations on First Nodes (page 11-27)
To continue an incomplete installation, the first node must finish before
the rest of the clustered nodes.
Continuing Incomplete Installation on Remote Nodes (page 11-27)
For nodes other than the first node (the node on which you started the
installation), use these steps to continue the installation process.
11.12.1 Completing Failed Installations and Upgrades
Understand how to join nodes to the cluster after installation or upgrade fails on some
nodes.
If installation or upgrade of Oracle Grid Infrastructure on some nodes fails, and you
click Ignore to continue the installation, then the installation or upgrade completes on
only some nodes in the cluster. Follow this procedure to add the failed nodes to the
cluster.
1. Remove the Oracle Grid Infrastructure software from the failed nodes:
Grid_home\deinstall\deinstall -local
2. As the installation user for Oracle Grid Infrastructure, from a node where Oracle
Clusterware is installed, delete the failed nodes from the cluster using the crsctl
delete node command:
Grid_home\bin\crsctl delete node -n node_name
where node_name is the node to be deleted.
3. Run the Oracle Grid Infrastructure installation wizard.
Grid_home\gridSetup.bat
After the installation wizard starts, choose the option Add more nodes to the
cluster, then follow the steps in the wizard to add the nodes.
Alternatively, you can add the nodes by running the addnode script and
specifying the nodes to add to the cluster (see the sketch after this procedure):
Grid_home\addnode\addnode.bat
The failed nodes are added to the cluster.
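If you use the addnode script, the nodes to add are passed as parameters. The following is only a sketch with hypothetical host names (node3 and its virtual host name node3-vip); substitute the values for your cluster:
C:\> Grid_home\addnode\addnode.bat "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"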
11.12.2 Continuing Incomplete Upgrade of First Nodes
When the first node cannot be upgraded, use these steps to continue the upgrade
process.
1. If the OUI failure indicated a need to reboot by raising error message CLSRSC-400,
then reboot the first node (the node on which you started the upgrade). Otherwise,
manually fix or clear the error condition, as reported in the error output.
2. Log in as an Administrator user on the first node.
3. Change directory to the new Grid home on the first node, and run the
gridconfig.bat -upgrade command on that node again. For example:
C:\> cd app\12.2.0\grid\crs\config\
C:\..\grid> gridconfig.bat -upgrade
4. Complete the upgrade of all other nodes in the cluster.
C:\app\12.2.0\grid\crs\config> gridconfig.bat -upgrade
5. Configure a response file, and provide passwords required for the upgrade.
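For example, a Grid Infrastructure response file includes password variables of the form shown in the response file appendix of this guide; the values here are placeholders only:
oracle.install.asm.SYSASMPassword=password
oracle.install.asm.monitorPassword=password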
6. To complete the upgrade, log in to the first node as the Oracle Installation user for
Oracle Grid Infrastructure and run the script gridSetup.bat, located in the
Grid_home, specifying the response file that you created.
For example, if the response file is named gridinstall.rsp:
C:\> cd Grid_home
C:\Grid_home> gridSetup.bat -executeConfigTools -responseFile Grid_home\install
\response\gridinstall.rsp
Related Topics:
Postinstallation Configuration Using the ConfigToolAllCommands Script
(page A-10)
11.12.3 Continuing Incomplete Upgrades on Remote Nodes
For nodes other than the first node (the node on which you started the upgrade), use
these steps to continue the upgrade process.
1. If the OUI failure indicated a need to reboot, by raising error message
CLSRSC-400, then reboot the node with the error condition. Otherwise, manually
fix or clear the error condition that was reported in the error output.
2. On the first node, within OUI, click Retry.
This instructs OUI to retry the upgrade on the affected node.
3. Continue the upgrade from the OUI instance on the first node.
11.12.4 Continuing Incomplete Installations on First Nodes
To continue an incomplete installation, the first node must finish before the rest of the
clustered nodes.
1. If the OUI failure indicated a need to reboot, by raising error message CLSRSC-400,
then reboot the first node (the node where the installation was started). Otherwise,
manually fix or clear the error condition that was reported in the error output.
2. If necessary, log in as the Oracle Installation user for Oracle Grid Infrastructure.
Change directory to the Grid home on the first node and run the
gridconfig.bat -upgrade command on that node again.
For example:
C:\> cd app\12.2.0\grid\crs\config\
C:\..\config> gridconfig.bat -upgrade
3. Complete the installation on all other nodes.
4. Configure a response file, and provide passwords for the installation.
5. To complete the installation, log in as the Oracle Installation user for Oracle Grid
Infrastructure, and run the script configToolAllCommands, located in the path
Grid_home\cfgtoollogs\configToolAllCommands, specifying the response
file that you created.
For example, if the response file is named gridinstall.rsp:
C:\> cd app\12.2.0\grid\cfgtoollogs
C:\..\cfgtoollogs> configToolAllCommands RESPONSE_FILE=gridinstall.rsp
See Also: Postinstallation Configuration Using the ConfigToolAllCommands
Script (page A-10) for information about how to create the response file
11.12.5 Continuing Incomplete Installation on Remote Nodes
For nodes other than the first node (the node on which you started the installation),
use these steps to continue the installation process.
1. If the OUI failure indicated a need to reboot, by raising error message
CLSRSC-400, then reboot the node with the error condition. Otherwise, manually
fix or clear the error condition that was reported in the error output.
2. On the first node, within OUI, click Retry.
3. Continue the installation from the OUI instance on the first node.
12 Modifying or Deinstalling Oracle Grid Infrastructure
You must follow a specific procedure when modifying or removing Oracle
Clusterware and Oracle Automatic Storage Management (Oracle ASM) software.
Deciding When to Deinstall Oracle Clusterware (page 12-1)
There are certain situations in which you might be required to remove
Oracle software.
Migrating Standalone Grid Infrastructure Servers to a Cluster (page 12-2)
If you have an Oracle Database installation using Oracle Restart (an
Oracle Grid Infrastructure installation for a standalone server), you can
reconfigure that server as a cluster member node, then complete the
following tasks.
Changing the Oracle Grid Infrastructure Home Path (page 12-4)
After installing Oracle Grid Infrastructure for a cluster (Oracle
Clusterware and Oracle ASM configured for a cluster), you might need
to change the location of the Grid home.
Unconfiguring Oracle Clusterware Without Removing the Software
(page 12-5)
By running rootcrs.bat -deconfig -force on nodes where you
encounter an installation error, you can unconfigure Oracle Clusterware
on those nodes, correct the cause of the error, and then run
rootcrs.bat again to reconfigure Oracle Clusterware.
Removing Oracle Clusterware and Oracle ASM Software (page 12-6)
The deinstall command removes Oracle Clusterware and Oracle
ASM from your server.
See Also: Product-specific documentation for requirements and restrictions
to remove an individual product
12.1 Deciding When to Deinstall Oracle Clusterware
There are certain situations in which you might be required to remove Oracle
software.
Remove installed components in the following situations:
• You have successfully installed Oracle Clusterware, and you want to remove the Oracle Clusterware installation, either in an educational environment, or a test environment.
• You have encountered errors during or after installing or upgrading Oracle Clusterware, and you want to reattempt an installation.
• Your installation or upgrade stopped because of a hardware or operating system failure.
• You are advised by Oracle Support to reinstall Oracle Clusterware.
12.2 Migrating Standalone Grid Infrastructure Servers to a Cluster
If you have an Oracle Database installation using Oracle Restart (an Oracle Grid
Infrastructure installation for a standalone server), you can reconfigure that server as a
cluster member node, then complete the following tasks.
1. Inspect the Oracle Restart configuration with the Server Control (SRVCTL) utility
using the following syntax, where db_unique_name is the unique name for the
database, and lsnrname is the name of the listener for the database:
srvctl config database -db db_unique_name
srvctl config service -db db_unique_name
srvctl config listener -listener lsnrname
Write down the configuration information for the server; you will need this
information in a later step.
2. Stop all of the databases, services, and listeners that you discovered in step 1.
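For example, a minimal sketch that assumes a database with the unique name mydb and the default listener; substitute the names you recorded in step 1:
srvctl stop service -db mydb
srvctl stop database -db mydb
srvctl stop listener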
3. If present, unmount all Oracle Automatic Storage Management Cluster File System
(Oracle ACFS) file systems.
4. Log in as an Administrator user and navigate to the directory Grid_home\crs
\install, where Grid_home is the location of your Oracle Grid Infrastructure
home (Grid home) directory, for example:
C:\> cd app\12.2.0\grid\crs\install
5. Unconfigure the Oracle Grid Infrastructure installation for a standalone server
(Oracle Restart) using the following command:
C:\..\install> roothas.bat -deconfig -force
6. Prepare the server for Oracle Clusterware configuration, as described in Chapter 2
through Chapter 7 of this guide. In addition, choose to install Oracle Grid
Infrastructure for a cluster in the same location as Oracle Restart, or in a different
location:
Option: Installing in the Same Location as Oracle Restart
Description: Proceed to step 7.
Option: Installing in a Different Location than Oracle Restart
Description: Set up Oracle Grid Infrastructure software in the new Grid home software location, then proceed to step 7.
7. Set the environment variables as follows:
set oracle_install_asm_UseExistingDG=true or false
set oracle_install_asm_DiskGroupName=disk_group_name
set oracle_install_asm_DiskDiscoveryString=asm_discovery_string
set oracle_install_asm_ConfigureGIMRDataDG=true or false
set oracle_install_asm_GIMRDataDGName=disk_group_name
8. As the Oracle Installation user for Oracle Grid Infrastructure, run the installer.
You can complete the installation interactively, or if you want to perform a silent
installation, then save and stage the response file. After saving the response file,
run the following command. For the -responseFile parameter, specify the full
path name where the response file was saved, for example:
C:\> Grid_home\gridSetup.bat -silent -responseFile C:\Users\dba1\scripts\GI.rsp
9. Mount the Oracle ASM disk group used by Oracle Restart.
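For example, a hedged sketch that assumes a hypothetical disk group named DATA, using the ASMCMD utility from the new Grid home:
C:\> asmcmd mount DATA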
10. If you used Oracle ACFS with Oracle Restart, then:
a. Start Oracle ASM Configuration Assistant (ASMCA). Run the volenable
command to enable all Oracle Restart disk group volumes.
b. Mount all Oracle ACFS file systems manually.
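If you prefer a command-line sketch for enabling volumes (step 10.a), the ASMCMD utility also provides a volenable command; the disk group and volume names here are hypothetical:
C:\> asmcmd volenable -G DATA VOL1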
11. Add back Oracle ACFS resources to the Oracle Clusterware home, using the
information you wrote down in step 1.
Register the Oracle ACFS resources using a command similar to the following:
C:\> cd app\grid\product\12.2.0\grid\bin
C:\..bin> srvctl add filesystem -device \\.\ORCLDATADISK4
-diskgroup ORestartData -volume db1
-mountpointpath C:\app\grid\product\12.2.0\dbhome1 -user grid
12. Add the Oracle Database for support by Oracle Grid Infrastructure for a cluster,
using the configuration information you recorded in step 1. Use the following
command syntax, where db_unique_name is the unique name of the database on the
node, and nodename is the name of the node:
srvctl add database -db db_unique_name -spfile -pwfile -oraclehome %ORACLE_HOME% -node
nodename
a. Verify that the %ORACLE_HOME% environment variable is set to the location of
the database home directory.
b. To add the database name mydb, enter the following command:
srvctl add database -db mydb -spfile -pwfile -oraclehome %ORACLE_HOME% -node
node1
c. Add each service to the database, using the command srvctl add service.
For example:
srvctl add service -db mydb -service myservice
13. Add nodes to your cluster, as required, using the Oracle Grid Infrastructure
installer.
See Also:
• "Recording Response Files (page A-5)" for more information about saving the response file.
• Oracle Clusterware Administration and Deployment Guide for information about adding nodes to your cluster
12.3 Changing the Oracle Grid Infrastructure Home Path
After installing Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle
ASM configured for a cluster), you might need to change the location of the Grid
home.
If you need to change the Grid home path, then use the following example as a guide
to detach the existing Grid home, and to attach a new Grid home.
Caution: Before changing the Grid home, you must shut down all executables
that run in the Grid home directory that you are modifying. In addition, shut
down all applications that use Oracle shared libraries.
1. Log in as an Administrator user or the Oracle Installation user for Oracle Grid
Infrastructure (for example, grid).
2. Change directory to Grid_home\bin and enter the command crsctl stop
crs. For example:
C:\> cd app\12.2.0\grid\bin
C:\..\bin> crsctl stop crs
3. Detach the existing Grid home.
Run a command similar to the following command, where C:\app\12.2.0\grid is
the existing Grid home location:
C:\> cd app\12.2.0\grid\oui\bin
C:\..\bin> gridSetup.bat -silent -detachHome ORACLE_HOME=
'C:\app\12.2.0\grid' -local
4. Move the installed files for Oracle Grid Infrastructure from the old Grid home to
the new Grid home.
For example, if the old Grid home is C:\app\12.2.0\grid and the new Grid
home is D:\app\12c\grid, use the following command:
C:\> xcopy C:\app\12.2.0\grid D:\app\12c\grid /E /I /H /K
5. Clone the Oracle Grid Infrastructure installation.
For example:
C:\>perl clone.pl ORACLE_BASE=C:\app\grid ORACLE_HOME=D:\app\12c\grid
ORACLE_HOME_NAME=OraHome1Grid ORACLE_HOME_USER=Oracle_home_user_name
"LOCAL_NODE=node1" "CLUSTER_NODES={node1,node2}" CRS=TRUE
When you navigate to the Grid_home\clone\bin directory and run the
clone.pl script, provide values for the input parameters that provide the path
information for the new Grid home.
Note: You cannot specify a different Oracle Home user when changing the
Oracle Grid Infrastructure home path.
6. Start Oracle Clusterware in the new home location.
D:\> cd app\12c\grid\crs\install
D:\..install\> rootcrs.bat -move -dstcrshome D:\app\12c\grid
WARNING: While cloning, ensure that you do not change the Oracle home
base, otherwise the move operation will fail.
7. Repeat steps 1 through 6 on each cluster member node.
Related Topics:
Oracle Clusterware Administration and Deployment Guide
12.4 Unconfiguring Oracle Clusterware Without Removing the Software
By running rootcrs.bat -deconfig -force on nodes where you encounter an
installation error, you can unconfigure Oracle Clusterware on those nodes, correct the
cause of the error, and then run rootcrs.bat again to reconfigure Oracle
Clusterware.
Running the rootcrs.bat command with the options -deconfig -force enables
you to unconfigure Oracle Clusterware on one or more nodes without removing the
installed software. This feature is useful if you encounter an error on one or more
cluster nodes during installation, such as incorrectly configured shared storage.
Before unconfiguring Oracle Clusterware, you must:
• Stop any databases, services, and listeners that may be installed and running
• Dismount ACFS file systems
• Disable ADVM volumes (see the sketch after this list)
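For the last item in this list, a hedged sketch that assumes a hypothetical disk group DATA and volume VOL1, using the ASMCMD utility from the Grid home:
C:\> asmcmd voldisable -G DATA VOL1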
Caution: Commands used in this section remove the Oracle Grid
Infrastructure installation for the entire cluster. To remove the installation
from an individual node, refer to Oracle Clusterware Administration and
Deployment Guide.
1. Log in using a member of the Administrators group on a node where you
encountered an error during installation.
2. Stop any databases, services, and listeners currently running from the Grid home.
3. If present, unmount all Oracle Automatic Storage Management Cluster File System
(Oracle ACFS) file systems.
4. Change directory to Grid_home\crs\install.
For example:
C:\> cd C:\app\12.2.0\grid\crs\install
5. Run rootcrs.bat with the -deconfig -force options.
For example:
C:\..\install> rootcrs.bat -deconfig -force
Note: The -force option must be specified when running the rootcrs.bat
script if there are running resources that depend on the resources started
from the Oracle Clusterware home you are deleting, such as databases,
services, or listeners. You must also use the -force option if you are removing
a partial or failed installation.
6. Repeat Step 1 through Step 5 on other nodes as required.
7. If you are unconfiguring Oracle Clusterware on all nodes in the cluster, then on the
last node, enter the following command:
C:\..\install> rootcrs.bat -deconfig -force -lastnode
The -lastnode option completes the unconfiguring of the cluster, including the
Oracle Cluster Registry (OCR) and voting files.
Caution: Run the rootcrs.bat -deconfig -force -lastnode
command on a Hub Node. Deconfigure all Leaf Nodes before you run the
command with the -lastnode flag.
12.5 Removing Oracle Clusterware and Oracle ASM Software
The deinstall command removes Oracle Clusterware and Oracle ASM from your
server.
Caution: You must use the deinstallation tool from the same release to
remove Oracle software. Do not run the deinstallation tool from a later release
to remove Oracle software from an earlier release. For example, do not run the
deinstallation tool from the 12.1.0.1 installation media to remove Oracle
software from an existing 11.2.0.4 Oracle home.
About the Deinstallation Tool (page 12-7)
The deinstallation tool stops Oracle software, and removes Oracle
software and configuration files on the operating system.
Files Deleted by the Deinstallation Tool (page 12-7)
The deinstallation tool removes Oracle software and files from your
system.
Deinstallation Tool Command Reference (page 12-8)
You can use the deinstallation tool to remove Oracle software. You can
run this command in standalone mode, from an Oracle home directory,
or through the installer.
Using the Deinstallation Tool to Remove Oracle Clusterware and Oracle ASM
(page 12-11)
You can run the deinstallation tool in multiple ways.
12.5.1 About the Deinstallation Tool
The deinstallation tool stops Oracle software, and removes Oracle software and
configuration files on the operating system.
Starting with Oracle Database 12c, the deinstallation tool is integrated with the
database installation media. You can run the deinstallation tool using the
setup.exe command with the -deinstall and -home options from the base
directory of the Oracle Database or Oracle Database Client installation media.
The deinstallation tool is also available as a separate command (deinstall.bat) in
Oracle home directories after installation. It is located in the %ORACLE_HOME%
\deinstall directory.
The deinstallation tool stops Oracle software, and removes Oracle software and
configuration files on the operating system for a specific Oracle home. If you run the
deinstallation tool to remove an Oracle Grid Infrastructure for Windows installation,
then the deinstaller automatically runs the appropriate scripts to deconfigure Oracle
Grid Infrastructure or Oracle Grid Infrastructure for standalone server.
The deinstallation tool uses the information you provide, plus information gathered
from the software home to create a response file. You can alternatively supply a
response file generated previously by the deinstall.bat command using the
-checkonly option and -o option. You can also edit a response file template to create
a response file.
Note: You must run the deinstallation tool from the same release to remove
Oracle software. Do not run the deinstallation tool from a later release to
remove Oracle software from an earlier release. For example, do not run the
deinstallation tool from the Oracle Database 12.2 installation media to remove
Oracle software from an existing 11.2.0.4 Oracle home.
If the software in the Oracle home is not running (for example, after an unsuccessful
installation), then the deinstallation tool cannot determine the configuration and you
must provide all the configuration details either interactively or in a response file.
12.5.2 Files Deleted by the Deinstallation Tool
The deinstallation tool removes Oracle software and files from your system.
When you run the deinstallation tool, if the central inventory (Inventory) contains
no other registered homes besides the home that you are deconfiguring and removing,
then the deinstall command removes the following files and directory contents in
the Oracle base directory of the Oracle Database installation owner:
• admin
• cfgtoollogs
• checkpoints
• diag
• oradata
• fast_recovery_area
Oracle strongly recommends that you configure your installations using an Optimal
Flexible Architecture (OFA) configuration, and that you reserve Oracle base and
Oracle home paths for exclusive use of Oracle software. If you have any user data in
these locations in the Oracle base that is owned by the user account that owns the
Oracle software, then the deinstallation tool deletes this data.
Caution: The deinstallation tool deletes Oracle Database configuration files,
user data, and fast recovery area (FRA) files even if they are located outside of
the Oracle base directory path.
12.5.3 Deinstallation Tool Command Reference
You can use the deinstallation tool to remove Oracle software. You can run this
command in standalone mode, from an Oracle home directory, or through the
installer.
Purpose
The deinstallation tool stops Oracle software, and removes Oracle software and
configuration files on the operating system.
File Path
%ORACLE_HOME%\deinstall\deinstall
Prerequisites
Before you run the deinstallation tool for Oracle Grid Infrastructure installations:
• Dismount Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and disable Oracle Automatic Storage Management Dynamic Volume Manager (Oracle ADVM).
• If Grid Naming Service (GNS) is in use, then notify your DNS administrator to delete the subdomain entry from the DNS.
Syntax When Using the deinstall.bat Program
deinstall.bat [-silent] [-checkonly] [-local]
[-paramfile complete path of input response file]
[-params name1=value [name2=value . . .]]
[-o complete path of directory for saving files]
[-tmpdir complete path of temporary directory to use]
[-logdir complete path of log directory to use] [-skipLocalHomeDeletion]
[-skipRemoteHomeDeletion] [-help]
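For example, a hedged invocation that supplies a previously generated response file and a non-default log directory (the paths shown are hypothetical):
C:\> %ORACLE_HOME%\deinstall\deinstall.bat -silent -paramfile C:\Users\oracle\deinstall_grid.rsp -logdir C:\temp\deinstall_logs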
Options
Table 12-1 Options for the Deinstallation Tool
Command Option
Description
-home complete path of Oracle home
Specify this option to indicate the home path
of the Oracle home to check or deinstall. To
deinstall Oracle software using the
deinstall.bat command located within
the Oracle home being removed, provide a
response file in a location outside the Oracle
home, and do not use the -home option.
If you run deinstall.bat from the
%ORACLE_HOME%\deinstall path, then the
-home option is not required because the tool
knows from which home it is being run.
-silent
Specify this option to run the deinstallation
tool in noninteractive mode. This option
requires one of the following:
•
A working system that it can access to
determine the installation and
configuration information; the -silent
option does not work with failed
installations.
•
A response file that contains the
configuration values for the Oracle
home that is being deinstalled or
deconfigured.
-checkonly
Specify this option to check the status of the
Oracle software home configuration.
Running the deinstall command with the
-checkonly option does not remove the
Oracle configuration. This option generates a
response file that you can use with the
deinstall.bat command.
When you use the -checkonly option to
generate a response file, you are prompted to
provide information about your system. You
can accept the default value the tool has
obtained from your Oracle installation,
indicated inside brackets ([]), or you can
provide different values. To accept the
defaults, press Enter at each prompt.
-local
Specify this option on a multinode
environment to deconfigure Oracle software
in a cluster.
When you run deinstall.bat with this
option, it deconfigures and deinstalls the
Oracle software only on the local node (the
node on which you run deinstall.bat) for
non-shared Oracle home directories. The
deinstallation tool does not deinstall or
deconfigure Oracle software on remote
nodes.
-paramfile complete path of input response file
(Optional) You can specify this option to run
deinstall.bat with a response file in a
location other than the default. When you use
this option, provide the complete path where
the response file is located. If you run the
deinstall.bat command from the Oracle
home that you plan to deinstall, then you do
not need to specify the -paramfile option.
The default location of the response file
depends on the location of the deinstallation
tool:
•
From the installation media or stage
location: X:\staging_location
\deinstall\response
•
After installation, from the installed
Oracle home: %ORACLE_HOME%
\deinstall\response.
-params name1=value [name2=value name3=value...]
Use this option with a response file to
override one or more values in a response file
that you created.
-o complete path of directory for saving response files
Use this option to provide a path other than
the default location where the response file
(deinstall.rsp.tmpl) is saved.
The default location of the response file
depends on the invocation method of the
deinstallation tool:
• From the installation media or stage location: stagelocation\response
• After installation, from the installed Oracle home: %ORACLE_HOME%\deinstall\response.
-tmpdir complete path of temporary directory to use
Specifies a non-default location where the
deinstallation tool writes the temporary files
for the deinstallation.
-logdir complete path of log directory to use
Specifies a non-default location where the
deinstallation tool writes the log files for the
deinstallation.
-skipLocalHomeDeletion
Specify this option in Oracle Grid
Infrastructure installations on a multinode
environment to deconfigure a local
Grid home without deleting the Grid home.
-skipRemoteHomeDeletion
Specify this option in Oracle Grid
Infrastructure installations on a multinode
environment to deconfigure a remote
Grid home without deleting the Grid home.
-help
Use the -help option to obtain additional
information about the deinstallation tool
options.
Location of Log Files for the Deinstallation Tool
If you use the deinstall.bat command located in an Oracle home, then the
deinstallation tool writes log files in the C:\Program Files\Oracle\Inventory
\logs directory.
If you are using the deinstall.bat command to remove the last Oracle home
installed on the server, then the log files are written to the current user’s home
directory. For example, if you are logged in as the domain user RACDBA\dba1, then
the log files are stored in the directory C:\Users\dba1.RACDBA\logs.
12.5.4 Using the Deinstallation Tool to Remove Oracle Clusterware and Oracle ASM
You can run the deinstallation tool in multiple ways.
Running the Deinstallation Tool From an Oracle Home (page 12-12)
You can run the deinstallation tool from an Oracle home or from the
software installation media.
Running the Deinstallation Tool Interactively From the Installer (page 12-12)
You can run the deinstallation tool from an Oracle home or from the
software installation media.
Running the Deinstallation Tool with a Response File (page 12-13)
When running the deinstallation tool, you can use a response file instead
of responding to each prompt individually.
Generating a Response File For Use With the Deinstallation Tool (page 12-13)
To use a response file with the deinstallation tool you must first create
the response file.
12.5.4.1 Running the Deinstallation Tool From an Oracle Home
You can run the deinstallation tool from an Oracle home or from the software
installation media.
1. The default method for running the deinstallation tool is from the deinstall
directory in the Oracle home as the Oracle Installation user:
C:\> %ORACLE_HOME%\deinstall\deinstall.bat
2. Provide information about your servers as prompted or accept the defaults.
The deinstallation tool stops Oracle software, and removes Oracle software and
configuration files on the operating system.
Example 12-1
Running deinstall.bat From Within the Oracle Home
The most common method of running the deinstallation tool is to use the version
installed in the Oracle home being removed. The deinstallation tool determines the
software configuration for the local Oracle home, and then provides default values at
each prompt. You can either accept the default value, or override it with a different
value. If the software in the Oracle home is not running (for example, after an
unsuccessful installation), then the deinstallation tool cannot determine the
configuration, and you must provide all the configuration details either interactively
or in a response file. To use the deinstallation tool located in the Oracle home
directory, issue the following command, where C:\app\12.2.0\grid is the location
of Grid home:
Use the following command while logged in as a member of the Administrators group
to remove the Oracle Grid Infrastructure installation from your cluster:
C:\> app\12.2.0\grid\deinstall\deinstall.bat
Provide additional information as prompted.
Note: When using the deinstallation tool from a location other than within
the Oracle home being removed, you must specify the -home option on the
command line.
12.5.4.2 Running the Deinstallation Tool Interactively From the Installer
You can run the deinstallation tool from an Oracle home or from the software
installation media.
1. Use the setup.exe command with the -deinstall option, followed by the
-home option to specify the path of the Oracle home you want to remove.
For example:
setup.exe -deinstall -home C:\app\12.2.0\grid
2. Provide information about your servers as prompted or accept the defaults.
The deinstallation tool stops Oracle software, and removes Oracle software and
configuration files on the operating system.
Example 12-2
Running the Deinstallation Tool from the Software Installation Media
If you run the deinstallation tool from the installer in the installation media, then when
the deinstall.bat command runs, it uses the information you provide to
determine the system configuration and then provides default values at each prompt.
You can either accept the default value, or override it with a different value. If the
software in the specified Oracle home is not running (for example, after an
unsuccessful install attempt), then the deinstallation tool cannot determine the
configuration, and you must provide all the configuration details either interactively
or in a response file.
In this example, the setup.exe command is in the path directory_path, where
directory_path is the path to the database directory on the installation media, and
C:\app\12.2.0\grid is the path to the Grid home that you want to remove.
Use the following command while logged in as a member of the Administrators group
to remove the Oracle Grid Infrastructure installation from your cluster:
C:\> cd directory_path
C:\..database> setup.exe -deinstall -home C:\app\12.2.0\grid
Provide additional information as prompted.
Note: When using the deinstallation tool from a location other than within
the Oracle home being removed, you must specify the -home option on the
command line.
12.5.4.3 Running the Deinstallation Tool with a Response File
When running the deinstallation tool, you can use a response file instead of
responding to each prompt individually.
The deinstallation tool uses the information you provide, plus information gathered
from the software home to create a response file. You can alternatively supply a
response file generated previously by the deinstall.bat command using the
-checkonly option and -o option. You can also edit a response file template located at
Oracle_Home\deinstall\response\deinstall.rsp.tmpl to create a response
file.
•
To run the deinstall.bat command located in an Oracle Grid Infrastructure
home and use a response file located at D:\Users\oracle\paramfile4.tmpl,
enter the following commands while logged in as a member of the Administrators
group:
C:\> cd %ORACLE_HOME%
C:\..grid> deinstall\deinstall.bat -paramfile D:\Users\oracle\paramfile4.tmpl
12.5.4.4 Generating a Response File For Use With the Deinstallation Tool
To use a response file with the deinstallation tool you must first create the response
file.
You can generate a response file by running the deinstall.bat command with
the -checkonly and -o options before you run the command to deinstall the Oracle
home. Alternatively, you can manually edit the response file template located at
%ORACLE_HOME%\deinstall\response\deinstall.rsp.tmpl to create the
response file.
•
To generate the response file deinstall_OraCrs12c_home1.rsp using the
deinstall.bat command located in the Oracle home and the -checkonly
option, enter a command similar to the following, where C:\app\12.2.0\grid
is the location of the Grid home and C:\Users\oracle is the directory in which
the generated response file is created:
C:\> app\12.2.0\grid\deinstall\deinstall.bat -checkonly -o C:\Users\oracle\
A Installing and Configuring Oracle Grid Infrastructure Using Response Files
You can install and configure Oracle Grid Infrastructure software using response files.
How Response Files Work (page A-1)
Response files can assist you with installing an Oracle product multiple
times on multiple computers.
Preparing Response Files (page A-3)
There are two methods you can use to prepare response files for silent
mode or response file mode installations.
Running Oracle Universal Installer Using a Response File (page A-6)
After creating the response file, run Oracle Universal Installer at the
command line, specifying the response file you created, to perform the
installation.
Running Oracle Net Configuration Assistant Using Response Files (page A-7)
You can run Oracle Net Configuration Assistant (NETCA) in silent mode
to configure and start an Oracle Net listener on the system, configure
naming methods, and configure Oracle Net service names.
Postinstallation Configuration Using Response File Created During Installation
(page A-8)
Use response files to configure Oracle software after installation. You
can use the same response file created during installation to also
complete postinstallation configuration.
Postinstallation Configuration Using the ConfigToolAllCommands Script
(page A-10)
You can create and run a response file configuration after installing
Oracle software.
A.1 How Response Files Work
Response files can assist you with installing an Oracle product multiple times on
multiple computers.
When you start the installer, you can use a response file to automate the installation
and configuration of Oracle software, either fully or partially. The installer uses the
values contained in the response file to provide answers to some or all installation
prompts.
Typically, the installer runs in interactive mode, which means that it prompts you to
provide information in graphical user interface (GUI) screens. When you use response
files to provide this information, you run the installer from a command prompt using
either of the following modes:
• Silent mode
If you include responses for all of the prompts in the response file and specify the
-silent option when starting the installer, then it runs in silent mode. During a
silent mode installation, the installer does not display any screens. Instead, it
displays progress information in the terminal that you used to start it.
• Response file mode
If you include responses for some or all of the prompts in the response file and
omit the -silent option, then the installer runs in response file mode. During a
response file mode installation, the installer displays all the screens: screens for
which you specify information in the response file, and also screens for which you
did not specify the required information in the response file.
You define the settings for a silent or response file installation by entering values for
the variables listed in the response file. For example, to specify the Oracle home name,
supply the appropriate value for the ORACLE_HOME variable:
ORACLE_HOME=C:\app\oracle\product\12.2.0\dbhome_1
Another way of specifying the response file variable settings is to pass them as
command line arguments when you run the installer. For example:
C:\> directory_path\setup.exe -silent "ORACLE_HOME=C:\app\oracle\product\12.2.0\dbhome_1"
In this command, directory_path is the path of the database directory on the DVD, or
the path of the directory on the hard drive.
Ensure that you enclose the variable and its setting in double-quotes.
Deciding to Use Silent Mode or Response File Mode (page A-2)
There are several reasons for running the installer in silent mode or
response file mode.
Using Response Files (page A-3)
Use these general steps for installing and configuring Oracle products
using the installer in silent or response file mode.
See Also: Oracle Universal Installer (OUI) User’s Guide for more information
about response files
A.1.1 Deciding to Use Silent Mode or Response File Mode
There are several reasons for running the installer in silent mode or response file
mode.
Table A-1 Reasons for Using Silent Mode or Response File Mode
Mode: Silent
Reasons to Use: Use silent mode for the following installations:
• To complete an unattended installation, which you schedule using operating system utilities
• To complete several similar installations on multiple systems without user interaction
• Install the software on a system that cannot display the Oracle Universal Installer (OUI) graphical user interface
OUI displays progress information on the terminal that you used to start it, but it does not display any of the installer screens.
Mode: Response file
Reasons to Use: Use response file mode to complete similar Oracle software installations on more than one system, providing default answers to some, but not all the installer prompts.
If you do not specify information required for a particular OUI screen in the response file, then the installer displays that screen. OUI suppresses screens for which you have provided all of the required information.
A.1.2 Using Response Files
Use these general steps for installing and configuring Oracle products using the
installer in silent or response file mode.
Note: You must complete all required preinstallation tasks on a system before
running the installer in silent or response file mode.
1. Verify that the Windows Registry key HKEY_LOCAL_MACHINE\Software\Oracle
exists and that the value for inst_loc is the location of the Oracle Inventory
directory on the local node.
Note: Changing the value for inst_loc in the Windows registry is not
supported after the installation of Oracle software.
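One way to check the key from a command prompt is the standard reg utility (you can also use the Registry Editor); this is only a sketch:
C:\> reg query "HKEY_LOCAL_MACHINE\Software\Oracle" /v inst_loc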
2. Prepare a response file.
3. Run the installer in silent or response file mode.
4. If you completed a software-only installation, then perform the steps necessary to
configure the Oracle product.
A.2 Preparing Response Files
There are two methods you can use to prepare response files for silent mode or
response file mode installations.
About Response File Templates (page A-4)
Oracle provides response file templates for each product and installation
type and for each configuration tool.
Editing a Response File Template (page A-5)
You can copy and modify a response file template for each product and
installation type and for each configuration tool.
Recording Response Files (page A-5)
You can use the installer in interactive mode to record response files,
which you can then edit and use to complete silent mode or response file
mode installations.
A.2.1 About Response File Templates
Oracle provides response file templates for each product and installation type and for
each configuration tool.
For Oracle Database, the response file templates are located in the database
\response directory on the installation media and in the Oracle_home
\inventory\response directory. For Oracle Grid Infrastructure, the response file
templates are located in the Grid_home\install\response directory after the
software is installed.
Note: If you copied the installation media to a directory on a local disk
(referred to as the staging_dir directory), then the response files are located in
the directory staging_dir\database\response.
All response file templates contain comment entries, sample formats, examples, and
other useful instructions. Read the response file instructions to understand how to
specify values for the response file variables, so that you can customize your
installation.
The following response files are provided with this software:
Table A-2 Response Files for Oracle Database and Oracle Grid Infrastructure
Response File: db_install.rsp
Used For: Silent installation of Oracle Database
Response File: dbca.rsp
Used For: Silent creation and configuration of an Oracle Database using Oracle Database Configuration Assistant (DBCA)
Response File: netca.rsp
Used For: Silent configuration of Oracle Net using NETCA
Response File: grid_install.rsp
Used For: Silent configuration of Oracle Grid Infrastructure installations
Caution:
When you modify a response file template and save a file for use, the response
file may contain plain text passwords. Ownership of the response file must be
given to the Oracle software installation owner only, and access restricted to
the response file. Oracle strongly recommends that database administrators or
other administrators delete or secure response files when they are not in use.
A.2.2 Editing a Response File Template
You can copy and modify a response file template for each product and installation
type and for each configuration tool.
To copy and modify a response file, perform the following steps:
1. Copy the response file from the response file directory to a directory on your
system.
copy Oracle_home\install\response\product_timestamp.rsp local_directory
2. Open the response file in a text editor.
3. Follow the instructions in the file to edit it.
Note: The installer or configuration assistant fails if you do not correctly
configure the response file. Also, ensure that your response file name has
the .rsp suffix.
4. Secure the response file.
Ensure that only the user that installed the Oracle software can view or modify
response files. Consider deleting the modified response file after the installation
succeeds.
Note: A fully specified response file for an Oracle Grid Infrastructure
installation or an Oracle Database installation can contain the passwords for:
• Oracle Automatic Storage Management (Oracle ASM) administrative accounts
• Database administrative accounts
• A user who is a member of the operating system group ORA_DBA (required for automated backups)
See Also: Oracle Universal Installer (OUI) User’s Guide for detailed information
about creating response files
A.2.3 Recording Response Files
You can use the installer in interactive mode to record response files, which you can
then edit and use to complete silent mode or response file mode installations.
This method is useful for Advanced or software-only installations. You can save all the
installation steps into a response file during installation by clicking Save Response
File on the Summary page. You can use the generated response file for a silent
installation later.
When you record the response file, you can either complete the installation, or you can
exit from the installer on the Summary page, before the installer starts to copy the
software to the local disk.
If you use record mode during a response file mode installation, then the installer
records the variable values that were specified in the original source response file into
the new response file.
Note: You cannot save passwords while recording the response file.
1. Complete preinstallation tasks as for a standard installation.
When you run the installer to record a response file, it checks the system to
verify that it meets the requirements to install the software. For this reason, Oracle
recommends that you complete all of the required preinstallation tasks and record
the response file while completing an installation.
2. Log in as the Oracle Installation User. Ensure that the Oracle Installation User has
permissions to create or write to the Grid home path that you specify during
installation.
3. Start the installer. On each installation screen, specify the required information.
4. When the installer displays the Summary screen, perform the following steps:
a. Click Save Response File. In the pop up window, specify a file name and
location to save the values for the response file, then click Save.
b. Click Finish to continue with the installation.
Click Cancel if you do not want to continue with the installation. The
installation stops, but the recorded response file is retained.
Note: Your response file name must end with the .rsp suffix.
5. Before you use the saved response file on another system, edit the file and make
any required changes. Use the instructions in the file as a guide when editing it.
A.3 Running Oracle Universal Installer Using a Response File
After creating the response file, run Oracle Universal Installer at the command line,
specifying the response file you created, to perform the installation.
Run Oracle Universal Installer at the command line, specifying the response file you
created. The Oracle Universal Installer executables, setup.exe and
gridSetup.bat, provide several options. For help information on the full set of these
options, run the gridSetup.bat or setup.exe command with the -help option.
For example:
• For Oracle Database:
C:\..\bin> setup.exe -help
• For Oracle Grid Infrastructure:
C:\..\bin> gridSetup.bat -help
The help information appears in your session window after a short period of time.
To run the installer using a response file, perform the following steps:
1. Complete the preinstallation tasks for a normal installation.
2. Log in as an Administrator user or the user that installed the software.
3. To start the installer in silent or response file mode, enter a command similar to the
following:
• For Oracle Database:
C:\> directory_path\setup.exe [-silent] [-noconfig] -responseFile response_filename
• For Oracle Grid Infrastructure:
C:\> directory_path\gridSetup.bat [-silent] [-noconfig] -responseFile response_filename
Note: Do not specify a relative path to the response file. If you specify a
relative path, then the installer fails.
In this example:
• directory_path is the path of the DVD or the path of the directory on the hard drive where you have copied the installation software.
• -silent runs the installer in silent mode.
• -noconfig suppresses running the configuration assistants during installation, and a software-only installation is performed instead.
• response_filename is the full path and file name of the installation response file that you configured.
A.4 Running Oracle Net Configuration Assistant Using Response Files
You can run Oracle Net Configuration Assistant (NETCA) in silent mode to configure
and start an Oracle Net listener on the system, configure naming methods, and
configure Oracle Net service names.
Oracle provides a response file template named netca.rsp in the response
subdirectory of:
• The Oracle_home\database\inventory\response directory after a software-only installation
• The database\response directory on the installation media or staging area
To run NETCA in silent mode, you must copy and edit a response file template.
1. Copy the netca.rsp response file template from the response file directory to a
directory on your system.
If the software is staged on a hard drive, or has already been installed, then you can
edit the file in the response directory located on the local disk instead.
2. Open the response file in a text editor.
3. Follow the instructions in the file to edit it.
Note: NETCA fails if you do not correctly configure the response file.
4. Log in as an Administrator user and set the %ORACLE_HOME% environment
variable to specify the correct Oracle home directory.
5. Enter a command similar to the following to run NETCA in silent mode:
C:\> Oracle_home\bin\netca -silent -responsefile X:\local_dir\netca.rsp
In this command:
• The -silent option runs NETCA in silent mode.
• X:\local_dir is the full path of the directory where you copied the netca.rsp response file template, where X represents the drive on which the file is located, and local_dir the path on that drive.
A.5 Postinstallation Configuration Using Response File Created During
Installation
Use response files to configure Oracle software after installation. You can use the same
response file created during installation to also complete postinstallation
configuration.
Using the Installation Response File for Postinstallation Configuration
(page A-8)
Starting with Oracle Database 12c release 2 (12.2), you can use the
response file created during installation to also complete postinstallation
configuration.
Running Postinstallation Configuration Using Response File (page A-10)
Complete this procedure to run the configuration assistants with the
-executeConfigTools command.
A.5.1 Using the Installation Response File for Postinstallation Configuration
Starting with Oracle Database 12c release 2 (12.2), you can use the response file created
during installation to also complete postinstallation configuration.
Run the installer with the -executeConfigTools option to run the configuration
assistants after installing Oracle Grid Infrastructure or Oracle Database. You can use
the response file located at Oracle_home\install\response
\product_timestamp.rsp to obtain the passwords required to run the
configuration tools. You must update the response file with the required passwords
before running the -executeConfigTools command.
Oracle strongly recommends that you maintain the security of the password response
file. The owner of the response file must be the installation owner user.
Example A-1
Response File Passwords for Oracle Grid Infrastructure
oracle.install.crs.config.ipmi.bmcPassword=password
oracle.install.asm.SYSASMPassword=password
oracle.install.asm.monitorPassword=password
oracle.install.config.emAdminPassword=password
oracle.install.OracleHomeUserPassword=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the
ipmi.bmcPassword input field blank.
If you do not want to enable Oracle Enterprise Manager for management, then leave
the emAdminPassword password field blank.
If you did not specify an Oracle Home user for the Oracle Grid Infrastructure
installation, then leave the OracleHomeUserPassword field blank.
Example A-2 Response File Passwords for Oracle Grid Infrastructure for a
Standalone Server (Oracle Restart)
oracle.install.asm.SYSASMPassword=password
oracle.install.asm.monitorPassword=password
oracle.install.config.emAdminPassword=password
oracle.install.OracleHomeUserPassword=password
If you do not want to enable Oracle Enterprise Manager for management, then leave
the emAdminPassword password field blank.
If you did not specify an Oracle Home user for the Oracle Grid Infrastructure for a
Standalone Server (Oracle Restart) installation, then leave the
OracleHomeUserPassword field blank.
Example A-3
Response File Passwords for Oracle Database
This example illustrates the passwords to specify for use with the database
configuration assistants.
oracle.install.db.config.starterdb.password.SYS=password
oracle.install.db.config.starterdb.password.SYSTEM=password
oracle.install.db.config.starterdb.password.DBSNMP=password
oracle.install.db.config.starterdb.password.PDBADMIN=password
oracle.install.db.config.starterdb.emAdminPassword=password
oracle.install.db.config.asm.ASMSNMPPassword=password
oracle.install.OracleHomeUserPassword=password
You can also specify
oracle.install.db.config.starterdb.password.ALL=password to use the
same password for all database users.
Oracle Database configuration assistants require the SYS, SYSTEM, and DBSNMP
passwords for use with Oracle Database Configuration Assistant (DBCA). Specify the
following passwords, depending on your system configuration:
• If the database uses Oracle ASM for storage, then you must specify a password for the ASMSNMPPassword variable. If you are not using Oracle ASM, then leave the value for this password variable blank.
• If you create a multitenant container database (CDB) with one or more pluggable databases (PDBs), then you must specify a password for the PDBADMIN variable. If you are not creating a CDB, then leave the value for this password variable blank.
• If you did not specify an Oracle Home user for the Oracle Database installation, then leave the OracleHomeUserPassword field blank.
A.5.2 Running Postinstallation Configuration Using Response File
Complete this procedure to run the configuration assistants with the -executeConfigTools command.
1. Edit the response file and specify the required passwords for your configuration.
You can use the response file created during installation, located at Oracle_home
\install\response\product_timestamp.rsp. For example, for Oracle Grid
Infrastructure:
oracle.install.asm.SYSASMPassword=password
oracle.install.config.emAdminPassword=password
2. Change directory to the Oracle home containing the installation software. For
example, for Oracle Grid Infrastructure:
cd Grid_home
3. Run the configuration script using the following syntax:
For Oracle Grid Infrastructure:
gridSetup.bat -executeConfigTools -responseFile Grid_home\install\response
\product_timestamp.rsp
For Oracle Database:
setup.exe -executeConfigTools -responseFile Oracle_home\install\response
\product_timestamp.rsp
For Oracle Database, you can also edit and use the response file located in the
directory Oracle_home\inventory\response\:
setup.exe -executeConfigTools -responseFile Oracle_home\inventory\response
\db_install.rsp
The postinstallation configuration tool runs the installer in the graphical user
interface mode, displaying the progress of the postinstallation configuration.
Specify the [-silent] option to run the postinstallation configuration in the
silent mode.
For example, for Oracle Grid Infrastructure:
gridSetup.bat -executeConfigTools -responseFile Grid_home\install\response
\grid_2016-09-09_01-03-36PM.rsp [-silent]
For Oracle Database:
setup.exe -executeConfigTools -responseFile Oracle_home\inventory\response
\db_2016-09-09_01-03-36PM.rsp [-silent]
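To locate the timestamped response file that the installer generated, as mentioned in step 1, you can list the response directory. The Grid home path shown here is only an example; substitute your own Grid home:
C:\> dir C:\app\12.2.0\grid\install\response\*.rsp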
A.6 Postinstallation Configuration Using the ConfigToolAllCommands Script
You can create and run a response file configuration after installing Oracle software.
The configToolAllCommands script requires users to create a second response file,
of a different format than the one used for installing the product. Starting with Oracle
Database 12c Release 2 (12.2), the configToolAllCommands script is deprecated and
may be desupported in a future release.
About the Postinstallation Configuration File (page A-11)
The configuration assistants are started with a script called
configToolAllCommands.
Creating a Password Response File (page A-12)
Use these steps to create a password response file for use with the
configToolAllCommands script.
Running Postinstallation Configuration Using a Script and Response Files
(page A-13)
You can run the postinstallation configuration assistants with the
configToolAllCommands script.
See Also: "Postinstallation Configuration Using Response File Created
During Installation (page A-8)" for an alternate method of postinstallation
configuration of Oracle software using the same response file created at the
time of installation.
A.6.1 About the Postinstallation Configuration File
The configuration assistants are started with a script called
configToolAllCommands.
When you perform an installation using silent mode or response file mode, you
provide information about your servers in a response file that you otherwise provide
manually using a graphical user interface. However, the response file does not contain
passwords for user accounts that configuration assistants require after software
installation is complete. To run the configuration assistants after the installation
completes in silent mode, you must run the configToolAllCommands script and
provide the passwords used by the assistants in a password file.
You can run the configToolAllCommands script in silent mode by using a
password response file. The script uses the passwords in the file to run the
configuration tools in succession to complete the software configuration. If you keep
the password file to use when cloning installations, then Oracle strongly recommends
that you store the password file in a secure location.
You can also use the password file to restart a failed installation. If you stop an
installation to fix an error, then you can rerun the configuration assistants using
configToolAllCommands and a password response file.
The configToolAllCommands password response file has the following options:
• oracle.crs for Oracle Grid Infrastructure components or oracle.server for Oracle Database components that the configuration assistants configure
• variable_name is the name of the configuration file variable.
• value is the desired value to use for configuration.
The command syntax is as follows:
internal_component_name|variable_name=value
For example, to set the password for the SYS user of Oracle ASM:
oracle.crs|S_ASMPASSWORD=myPassWord
A.6.2 Creating a Password Response File
Use these steps to create a password response file for use with the configToolAllCommands script.
1. Create a response file that has a name of the format filename.properties.
2. Open the file with a text editor, and cut and paste the sample password file
contents, as shown in the example below, modifying as needed.
3. If the file is stored on a volume formatted for Windows New Technology File
System (NTFS), then modify the security permissions to secure the file.
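For example, one way to restrict access to the file on an NTFS volume is with the icacls utility, removing inherited permissions and granting access only to the installation user. The file path and account shown here are examples only; adjust them for your environment:
C:\> icacls C:\users\oracle\grid\cfgrsp.properties /inheritance:r /grant:r "%USERDOMAIN%\%USERNAME%:(R,W)"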
Example A-4 Sample Password response file for Oracle Grid Infrastructure Installation
Oracle Grid Infrastructure requires passwords for Oracle Automatic Storage
Management Configuration Assistant (ASMCA), and for Intelligent Platform
Management Interface Configuration Assistant (IPMICA) if you have a baseboard
management controller (BMC) card and you want to enable this feature. Also, if you
specified an Oracle Home user for the Oracle Grid Infrastructure installation, you
must specify the password as the Windows Service user password. Provide the
following response file:
oracle.crs|S_ASMPASSWORD=password
oracle.crs|S_OMSPASSWORD=password
oracle.crs|S_ASMMONITORPASSWORD=password
oracle.crs|S_BMCPASSWORD=password
oracle.crs|S_WINSERVICEUSERPASSWORD=password
If you do not have a BMC card, or you do not want to enable Intelligent Platform
Management Interface (IPMI), then leave the S_BMCPASSWORD input field blank.
Note: If you are upgrading Oracle ASM 11g Release 1 or earlier releases, then you only need to provide the input field for oracle.assistants.asm|S_ASMMONITORPASSWORD.
Example A-5
Sample Password response file for Oracle Real Application Clusters
Oracle Database configuration requires the SYS, SYSTEM, and DBSNMP passwords
for use with Database Configuration Assistant (DBCA). The S_ASMSNMPPASSWORD
response is necessary only if the database is using Oracle ASM for storage. Similarly,
the S_PDBADMINPASSWORD password is necessary only if you create a multitenant
container database (CDB) with one or more pluggable databases (PDBs). Also, if you
selected to configure Oracle Enterprise Manager Cloud Control, then you must
provide the password for the Oracle software installation owner for
S_EMADMINPASSWORD, similar to the following example, where the phrase password
represents the password string:
oracle.server|S_SYSPASSWORD=password
oracle.server|S_SYSTEMPASSWORD=password
oracle.server|S_DBSNMPPASSWORD=password
oracle.server|S_PDBADMINPASSWORD=password
oracle.server|S_EMADMINPASSWORD=password
oracle.server|S_ASMSNMPPASSWORD=password
oracle.server|S_WINSERVICEUSERPASSWORD=password
If you do not want to enable Oracle Enterprise Manager for Oracle ASM, then leave
those password fields blank.
A.6.3 Running Postinstallation Configuration Using a Script and Response Files
You can run the postinstallation configuration assistants with the
configToolAllCommands script.
1. Create a password response file using the format filename.properties for the file
name, as described in Creating a Password Response File (page A-12).
2. Change directory to Oracle_home\cfgtoollogs, and run the configuration
script using the following syntax:
configToolAllCommands RESPONSE_FILE=\path\filename.properties
For example:
C:\..\cfgtoollogs> configToolAllCommands RESPONSE_FILE=C:\users\oracle\grid\cfgrsp.properties
B
Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration guidelines created to ensure well-organized Oracle installations, thus simplifying administration, support, and maintenance.
About the Optimal Flexible Architecture Standard (page B-1)
Oracle Optimal Flexible Architecture (OFA) rules help you to organize
database software and configure databases to allow multiple databases,
of different versions, owned by different users to coexist.
About Multiple Oracle Homes Support (page B-2)
Oracle Database supports multiple Oracle homes. You can install this
release or earlier releases of the software more than once on the same
system, in different Oracle home directories.
Oracle Base Directory Naming Convention (page B-3)
This section describes what the Oracle base is, and how it functions.
Oracle Home Directory Naming Convention (page B-3)
By default, Oracle Universal Installer configures Oracle home directories
using these Oracle Optimal Flexible Architecture conventions.
Optimal Flexible Architecture File Path Examples (page B-4)
This topic shows examples of hierarchical file mappings of an Optimal
Flexible Architecture-compliant installation.
B.1 About the Optimal Flexible Architecture Standard
Oracle Optimal Flexible Architecture (OFA) rules help you to organize database
software and configure databases to allow multiple databases, of different versions,
owned by different users to coexist.
In earlier Oracle Database releases, the OFA rules provided optimal system
performance by isolating fragmentation and minimizing contention. In current
releases, OFA rules provide consistency in database management and support, and simplify expanding or adding databases, or adding hardware.
By default, Oracle Universal Installer places Oracle Database components in directory
locations and with permissions in compliance with OFA rules. Oracle recommends
that you configure all Oracle components on the installation media in accordance with
OFA guidelines.
Oracle recommends that you accept the OFA default. Following OFA rules is
especially of value if the database is large, or if you plan to have multiple databases.
Note:
OFA helps to identify an ORACLE_BASE with its Automatic Diagnostic Repository (ADR) diagnostic data, so that incidents are collected properly.
B.2 About Multiple Oracle Homes Support
Oracle Database supports multiple Oracle homes. You can install this release or earlier
releases of the software more than once on the same system, in different Oracle home
directories.
Careful selection of mount point names can make Oracle software easier to administer.
Configuring multiple Oracle homes in compliance with Optimal Flexible Architecture
(OFA) rules provides the following advantages:
• You can install this release, or earlier releases of the software, more than once on the same system, in different Oracle home directories. However, you cannot install products from one release of Oracle Database into an Oracle home directory of a different release. For example, you cannot install Oracle Database 12c software into an existing Oracle 11g Oracle home directory.
• Multiple databases, of different versions, owned by different users can coexist concurrently.
• You must install a new Oracle Database release in a new Oracle home that is separate from earlier releases of Oracle Database. You cannot install multiple releases in one Oracle home. Oracle recommends that you create a separate Oracle Database Oracle home for each release, in accordance with the Optimal Flexible Architecture (OFA) guidelines.
• In production, the Oracle Database server software release must be the same as the Oracle Database dictionary release through the first four digits (the major, maintenance, and patch release number).
• Later Oracle Database releases can access earlier Oracle Database releases. However, this access is only for upgrades. For example, Oracle Database 12c release 2 can access an Oracle Database 11g release 2 (11.2.0.4) database if the 11.2.0.4 database is started up in upgrade mode.
• Oracle Database Client can be installed in the same Oracle Database home if both products are at the same release level. For example, you can install Oracle Database Client 12.2.0.1 into an existing Oracle Database 12.2.0.1 home but you cannot install Oracle Database Client 12.2.0.1 into an existing Oracle Database 12.1.0.2 home. If you apply a patch set before installing the client, then you must apply the patch set again.
• Structured organization of directories and files, and consistent naming for database files simplify database administration.
• Login home directories are not at risk when database administrators add, move, or delete Oracle home directories.
• You can test software upgrades in an Oracle home in a separate directory from the Oracle home where your production database is located.
B.3 Oracle Base Directory Naming Convention
This section describes what the Oracle base is, and how it functions.
The Oracle Base directory is the database home directory for Oracle Database
installation owners and the log file location for Oracle Grid Infrastructure owners. You
should name Oracle base directories using the syntax \pm\h\u, where pm is a string
mount point name, h is selected from a small set of standard directory names, and u is
the name of the owner of the directory.
You can use the same Oracle base directory for multiple installations. If different
operating system users install Oracle software on the same system, then you must
create a separate Oracle base directory for each installation owner. For ease of administration, Oracle recommends that you create a unique Oracle base directory for each Oracle software installation owner, to keep log files separate.
Because all Oracle installation owners write to the central Oracle inventory file, and
that file mount point is in the same mount point path as the initial Oracle installation,
Oracle recommends that you use the same \pm\h path for all Oracle installation
owners.
Table B-1 Examples of OFA-Compliant Oracle Base Directory Names

Example: C:\app\oracle
Description: Oracle base directory for Oracle Database, where the Oracle Database software installation owner name is oracle. The Oracle Database binary home is located underneath the Oracle base path.

Example: C:\app\grid
Description: Oracle base directory for Oracle Grid Infrastructure, where the Oracle Grid Infrastructure software installation owner name is grid.
Caution: The Oracle Grid Infrastructure Oracle base should not contain the Oracle Grid Infrastructure binaries for an Oracle Grid Infrastructure for a cluster installation. Permissions for the file path to the Oracle Grid Infrastructure binary home are changed to LocalSystem or the Oracle Home User, if specified, during installation.
B.4 Oracle Home Directory Naming Convention
By default, Oracle Universal Installer configures Oracle home directories using these
Oracle Optimal Flexible Architecture conventions.
The directory pattern syntax for Oracle homes is \pm\s\u\product\v\type_[n].
The following table describes the variables used in this syntax:

Variable: pm
Description: A mount point name.

Variable: s
Description: A standard directory name.

Variable: u
Description: The name of the owner of the directory.

Variable: v
Description: The version of the software.

Variable: type
Description: The type of installation. For example: Database (dbhome), Client (client), or Oracle Grid Infrastructure (grid)

Variable: n
Description: An optional counter, which enables you to install the same product more than once in the same Oracle base directory. For example: Database 1 and Database 2 (dbhome_1, dbhome_2)
For example, the following path is typical for the first installation of Oracle Database
on this system:
C:\app\oracle\product\12.2.0\dbhome_1
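Following the same conventions, other homes under the same Oracle base might use paths such as the following. These are illustrative examples of the naming pattern, not paths created by any particular installation:
C:\app\oracle\product\12.2.0\dbhome_2
C:\app\oracle\product\12.2.0\client_1
C:\app\oracle\product\12.2.0\grid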
B.5 Optimal Flexible Architecture File Path Examples
This topic shows examples of hierarchical file mappings of an Optimal Flexible
Architecture-compliant installation.
This example shows an Optimal Flexible Architecture-compliant installation with
three Oracle home directories and three databases, as well as examples of the
deployment path differences between a cluster install and a standalone server install
of Oracle Grid Infrastructure. The database files are distributed across three mount
points: D:\, E:\, and F:\.
Note:
• The Grid homes are examples of Grid homes used for an Oracle Grid Infrastructure for a standalone server deployment (Oracle Restart), or a Grid home used for an Oracle Grid Infrastructure for a cluster deployment (Oracle Clusterware). You can have either an Oracle Restart deployment, or an Oracle Clusterware deployment. You cannot have both options deployed at the same time.
• Oracle Automatic Storage Management (Oracle ASM) is included as part of an Oracle Grid Infrastructure installation. Oracle recommends that you use Oracle ASM to provide greater redundancy and throughput.

Table B-2 Optimal Flexible Architecture Hierarchical File Path Examples

Directory: C:\
Description: System directory

Directory: C:\app\
Description: Subtree for application software

Directory: C:\app\oraInventory
Description: Central OraInventory directory, which maintains information about Oracle installations on a server. Members of the group designated as the OINSTALL group have permissions to write to the central inventory. All Oracle software installation owners must have the OINSTALL group as their primary group, and be able to write to this group.
Directory: C:\app\oracle\
Description: Oracle base directory for user oracle. There can be many Oracle Database installations on a server, and many Oracle Database software installation owners. Oracle software homes that an Oracle installation owner owns should be located in the Oracle base directory for the Oracle software installation owner, unless that Oracle software is Oracle Grid Infrastructure deployed for a cluster.

Directory: C:\app\grid\username
Description: Oracle base directory for the Oracle Grid Infrastructure software, where username is the name of the user that performed the software installation. The Oracle home (Grid home) for Oracle Grid Infrastructure for a cluster installation is located outside of the Grid user. There can be only one Grid home on a server, and only one Grid software installation owner. The Grid home contains log files and other administrative files.

Directory: C:\app\oracle\admin\
Description: Subtree for database administration files

Directory: C:\app\oracle\admin\TAR
Description: Subtree for support log files

Directory: C:\app\oracle\admin\db_sales\
Description: admin subtree for database named “sales”

Directory: C:\app\oracle\admin\db_dwh\
Description: admin subtree for database named “dwh”

Directory: C:\app\oracle\fast_recovery_area\
Description: Subtree for recovery files

Directory: C:\app\oracle\fast_recovery_area\db_sales
Description: Recovery files for database named “sales”

Directory: C:\app\oracle\fast_recovery_area\db_dwh
Description: Recovery files for database named “dwh”

Directory: D:\app\oracle\oradata, E:\app\oracle\oradata, and F:\app\oracle\oradata
Description: Oracle data file directories

Directory: C:\app\oracle\product\
Description: Common path for Oracle software products other than Oracle Grid Infrastructure for a cluster
Directory: C:\app\oracle\product\12.2.0\dbhome_1
Description: Oracle home directory for Oracle Database 1, owned by Oracle Database installation owner account oracle

Directory: C:\app\oracle\product\12.2.0\dbhome_2
Description: Oracle home directory for Oracle Database 2, owned by Oracle Database installation owner account oracle

Directory: C:\app\oradbowner\product\12.2.0\dbhome_2
Description: Oracle home directory for Oracle Database 2, owned by Oracle Database installation owner account oradbowner

Directory: C:\app\oracle\product\12.2.0\grid
Description: Oracle home directory for Oracle Grid Infrastructure for a standalone server, owned by Oracle Database and Oracle Grid Infrastructure installation owner oracle

Directory: C:\app\12.2.0\grid
Description: Oracle home directory for Oracle Grid Infrastructure for a cluster (Grid home), owned by user grid before installation, and owned by root after installation

Directory: C:\app\grid\username\diag\crs\hostname\crs\trace
Description: Oracle Clusterware log files
Index
Symbols
32-bit and 64-bit
software versions in the same cluster not supported, 2-4
A
access control
support for Oracle ASM, xxiii
Access Control Lists, 5-3, 5-4
adding Oracle ASM listener, 11-20
address resolution protocol, 4-23
Administrators group, 5-2, 5-17, 5-19
ARP
See address resolution protocol
ASM
OSASM or ASM administrator, 5-11
OSDBA for ASM group, 5-11
OSOPER for ASM group, 5-11
See also Oracle ASM
ASM network
multiple NICs, 4-24
ASM_DISKSTRING, 7-9
ASMADMIN group, 5-2
ASMCA
and upgrades, 6-7
enabling Oracle Restart disk group volumes, 12-2
starting, 7-13, 9-4, 10-9
ASMCMD, xxiii
ASMDBA group, 5-2
ASMSNMP, 1-4
asmtool utility, 7-11
Automatic Diagnostic Repository (ADR), B-1
Automatic Storage Management Cluster File System
See Oracle ACFS
automount
enable, 6-8
B
batch upgrades, 11-13
bind order
Windows network, 4-10
BMC interface
preinstallation tasks, 5-24
bundled patches, 10-2
C
central inventory, 5-7, 5-9, B-4
See also OINSTALL directory
changing host names, 4-3
checkdir error, 11-4
checklist
upgrades, 11-7
checklists, 1-1
chip architecture, 2-4
CHM, xxiii
CIFS
Common Internet File System, 8-2
client-server configurations, B-2
clients
and upgrades, 4-4
certification, 3-6
connecting through a firewall, 10-2
connecting to SCAN, 4-4
Direct NFS Client, 6-6
trace files, 10-6
using SCAN, 4-4
using the public interface, 4-3
CLSRSC-661, 11-10
cluster configuration
Oracle Extended Clusters, 9-3
Oracle Standalone Clusters, 9-3
cluster file system
storage option for data files, 6-4
cluster name
requirements for, 1-4
cluster nodes
private network node interfaces, 1-4
private node names, 4-4
public network node names and addresses, 1-4
public node names, 9-4
virtual node names, 1-4, 4-4
cluster privileges
verifying for OUI cluster installation, 5-17
Cluster Time Synchronization Service
configuring, 2-9
installed components, 2-8
observer mode, 2-9
Cluster Verification Utility (CVU)
and upgrades, 11-9
CLUSTER_INTERCONNECTS parameter, 4-3
commands
asmca, 7-13, 10-9
configToolAllCommands, A-13
crsctl, 2-9, 9-12, 11-4
executeConfigTools, A-10
gridSetup.bat, 9-9, A-6
lsnrctl, 10-13
msinfo32, 2-2
net start and net stop, 2-9
net use, 3-3, 5-17
ocopy.exe, 10-1
ping, 4-7
regedit, 2-8, 5-17
rootcrs.bat
and deconfigure option, 12-5
roothas.bat, 12-2
runcluvfy.bat, 9-9
srvctl, 11-4, 12-2
W32tm, 2-8
configToolAllCommands, A-12, A-13
configToolAllCommands script, A-11
configuration wizard, 9-10
CPU, 10-2
creating
oranfstab file, 8-3
Critical Patch Updates, 10-2
cron jobs, 1-8
crs_install.rsp file, A-5
CTSS
See Cluster Time Synchronization Service
ctssd, 2-7, 2-9
customized database
failure groups for Oracle ASM, 7-2
requirements when using Oracle ASM, 7-2
D
data files
storage options, 6-4
data loss
minimizing with Oracle ASM, 7-2, 7-8
Database Configuration Assistant (DBCA)
response file, A-4
database files
supported storage options, 6-2
databases
Oracle ASM requirements, 7-2
DB_RECOVERY_FILE_DEST, 10-8
DBCA
no longer used for Oracle ASM disk group
administration, 10-12
dbca.rsp file, A-4
deconfiguring Oracle Clusterware, 12-5
deinstallation
files removed, 12-7
deinstallation tool
location of log files, 12-8
syntax, 12-8
Deinstallation tool
restriction for Flex Clusters and —lastnode option,
12-5
Deinstallation Tool, 12-7
device names
creating with asmtool, 7-12
creating with asmtoolg, 7-11
DHCP
and GNS, 4-13
diagnostic data, B-1
Direct NFS Client
and fast recovery area, 10-8
description, 8-2
dictionary views, 8-9
disabling, 8-9
enabling, 8-8
load balancing, 8-7
managing, 8-9
ORADNFS utility, 8-8
Direct NFS dispatcher, 8-2
DisableDHCPMediaSense parameter, 4-10
disk automount
enable, 6-8
disk group
Oracle ASM with redundancy levels, 7-2
recommendations for Oracle ASM disk groups,
7-2
disk space
checking, 2-3
requirements
for Oracle ASM, 7-2
for preconfigured database in Oracle ASM,
7-2
disks
NTFS formatted, 2-6
remove labels, 7-11
selecting for use with Oracle ASM, 7-9
display monitor
resolution settings, 2-5
DNS
configuring to use with GNS, 4-19
multicast (mDNS), 4-23
domain user
used in installation, 5-2
downgrade, 11-21
downgrading, 11-22
DST patches, 10-2
E
enable disk automount, 6-8
Enterprise Management agent, 11-19
enterprise.rsp file, A-4
environment variables
TEMP, 2-4
errors using Opatch, 11-4
executable files, 2-4, 6-3
executeConfigTools, A-8
external redundancy, 7-2
F
Grid home
changing location of, 12-4
creating, 5-23
definition, 5-22
modifying, 10-13
requirements, 5-20
Grid home disk space, 2-3
grid infrastructure management repository, 9-3
Grid Infrastructure Management Repository
local, 7-7
grid naming service
See GNS
grid user, 5-9
Group Managed Service Account (GMSA), 5-3
groups
Administrators group, 5-2, 5-17, 5-19
OINSTALL, 5-7
ORA_DBA, 5-10, A-5
ORA_HOMENAME_DBA, 5-10
ORA_HOMENAME_OPER, 5-10
ORA_OPER, 5-10
OSASM (ORA_ASMADMIN), 5-11
OSDBA for ASM (ORA_ASMDBA), 5-11
OSOPER for ASM (ORA_ASMOPER), 5-11
required for Oracle Installation user, 5-9
failed install, 11-25
failed upgrade, 11-25
failure group
characteristics of Oracle ASM failure group, 7-8
failure groups
characteristics in Oracle ASM, 7-2
examples in Oracle ASM, 7-2
Oracle ASM, 7-2
fast recovery area
filepath, B-4
Grid home
filepath, B-4
FAT disk sizes, 2-6
file system
storage option for data files, 6-4
files
dbca.rsp, A-4
enterprise.rsp, A-4
netca.rsp, A-4
removed by deinstallation, 12-7
firewall
interconnect
and firewalls, 3-6
Microsoft Windows, 3-6
flex redundancy, 7-2
hardware requirements
display, 1-1
IPMI, 1-1
RAM, 1-1
high redundancy, 7-2
host names
changing, 4-3
legal host names, 1-4
hosts file, 4-3
Hub Nodes
names and addresses for, 1-4
G
I
GIMR, 7-7
globalization, 1-8
GNS
about, xxiii, 4-15
choosing a subdomain name, 4-20
configuring, 4-13
configuring DNS for domain delegation, 4-20
name resolution, 4-15
Universal, xxiii
virtual IP address, 4-15
GNS client clusters
GNS client data file required for installation, 4-15
name resolution for, 4-15
GNS instance, 4-13
GNS virtual IP address, 1-4
image
install, 9-2
INS-40937, 5-17
inst_loc, 5-22
installation
creating user roles, xxiii
Oracle ASM requirements, 7-2
response files
preparing, A-3, A-5
silent mode, A-6
installation planning, 1-1
installer screens
Grid Plug and Play Information, 4-14
instruction sets
processors, 2-4
H
interconnect
switch, 4-7
interfaces
requirements for private interconnect, 4-3
intermittent hangs
and socket files, 9-12
invalid hostnames error, 5-17
IP addresses
private, 4-17
public, 4-17
requirements, 4-12
virtual, 4-17
IPMI
addresses not configurable by GNS, 5-25
configuring driver for, 5-26
preinstallation tasks, 5-24
preparing for installation, 1-1
requirements, 5-25
IPv4 requirements, 4-2
IPv6 requirements, 4-2
IPv6 support, 4-8
J
JDK
requirements, 3-4
job role separation users, 5-9
K
Kerberos Based Authentication for Direct NFS, 8-8
Multiple Oracle Homes Support (continued)
advantages, B-2
multiversioning, B-2
My Oracle Support website
about, 3-6
accessing, 3-6
and patches, 10-2
N
named user support, xxiii
NAS, 6-6
Net Configuration Assistant
See Oracle Net Configuration Assistant (NETCA)
netca.rsp file, A-4
netsh, 4-24
network file system (NFS), 2-6, 6-2, 6-6, 8-2
See also Direct NFS Client
Network Information Services (NIS), 5-8
network interface
names
requirements, 4-10
Network Time Protocol, 2-8, 2-9
Network Time Protocol;Windows Time Service, 2-7
networks
IP protocol requirements for, 4-2
noninteractive mode
See response file mode
normal redundancy, 7-2
NTFS
formatted disks, 2-6
NTP
See Network Time Protocol
L
legal host names, 1-4
licensing, 1-8
log file
how to access during installation, 9-4
lsnrctl, 10-13
M
management repository service, 9-3
Media Sensing
disabling, 4-10
memory requirements
for Grid Infrastructure, 2-5
RAM, 2-2
mirroring Oracle ASM disk groups, 7-2
mixed binaries, 3-4
modifying
Grid home, 10-13
Oracle ASM binaries, 10-13
Oracle Clusterware binaries, 10-13
multicast DNS (mDNS), 4-23
Multicluster GNS, 4-13
Multiple Oracle Homes Support
O
OCFS for Windows
upgrading, 11-3
OCR
See Oracle Cluster Registry
OFA, B-1
See also Optimal Flexible Architecture
oifcfg, 4-3
OINSTALL directory, B-4
Opatch, 11-4
operating system
32–bit, 2-2
Administrators group, 5-2, 5-17, 5-19
different versions on cluster members, 3-4
mixed versions, 3-3
requirements, 3-4
users, xxiii
using different versions, 3-4
version checks, 3-4
x86, 2-2
operating system groups, 5-8
operating system requirements, 1-2
Optimal Flexible Architecture
about, B-1
ORA_DBA group
and SYSDBA privilege, 5-10
description, 5-10
ORA_HOMENAME_DBA group
and SYSDBA privilege, 5-10
description, 5-10
ORA_HOMENAME_OPER group
and SYSOPER privilege, 5-10
description, 5-10
ORA_INSTALL group, 5-7
See also Oracle Inventory group
ORA_OPER group
and SYSOPER privilege, 5-10
description, 5-10
ORAchk
and Upgrade Readiness Assessment, 1-8
Oracle ACFS
and Oracle Clusterware files, 6-3
restrictions and guidelines for usage, 6-3
Oracle ADVM, 6-3
Oracle ASM
access control, xxiii
and Oracle Clusterware files, 2-3
asmtool utility reference, 7-12
asmtoolg utility, 7-11
characteristics of failure groups, 7-8
disk groups
recommendations for, 7-2
redundancy levels, 7-2
failure groups
characteristics, 7-2
examples, 7-2
identifying, 7-2
mirroring, 7-2
partitions
marking, 7-10
restrictions, 7-8
patching, xxiii
redundancy levels, 7-2
role separation, xxiii
space required for Oracle Clusterware files, 7-2
space required for preconfigured database, 7-2
upgrading, 11-3
Oracle ASM redundancy levels
high redundancy, 7-2
Oracle Automatic Storage Management (Oracle ASM)
password file, 11-10
Oracle base, B-1, B-4
Oracle base directory
creating, 5-24
Grid home must not be in an Oracle Database
Oracle base, 5-20
Oracle Cluster Health Monitor, xxiii
Oracle Cluster Registry
backups to ASM disk groups, xxiii
configuration of, 1-7
supported storage options, 6-2
Oracle Clusterware
installing, 9-1
supported storage options for, 6-2
Oracle Clusterware files
and NTFS formatted disks, 2-6
and Oracle ACFS, 6-3
and Oracle ASM, 2-3
Oracle ASM disk space requirements, 7-2
Oracle ASM requirements, 7-2
raw partitions, 2-3
Oracle Database
data file storage options, 6-4
privileged groups, 5-9
requirements with Oracle ASM, 7-2
Oracle Disk Manager (ODM)
library file, 8-8
Oracle Enterprise Manager
firewall exceptions, 10-6
preinstallation requirements, 3-7
supported web browsers, 3-7
Oracle Extended Clusters, 9-3
Oracle Grid Infrastructure
upgrading, 11-3
Oracle Grid Infrastructure Management Repository,
xxiii
Oracle Grid Infrastructure owner (grid), 5-9
Oracle home
ASCII path restriction for, 1-3, 1-6
definition, 5-22
file path, B-4
Grid home
filepath, B-4
naming conventions, B-3
Oracle Home User
permissions, 9-4
See also Oracle Service User
Oracle Installation user
required group membership, 5-9
Oracle Installation User
permissions, 9-4
Oracle Inventory directory
definition, 5-22
Oracle Inventory group
about, 5-7
checking for existing, 5-7
Oracle Net Configuration Assistant (NETCA)
response file, A-4
response files, A-7
running at command prompt, A-7
Oracle Notification Server Configuration Assistant, 9-4
Oracle Optimal Flexible Architecture
See Optimal Flexible Architecture
Oracle patch updates, 10-2
Oracle Private Interconnect Configuration Assistant,
9-4
Oracle Restart
password file, A-8
Oracle Service User, 5-3, 5-4
Oracle Services, xxiii
Oracle software owner user
creating, 5-2
description, 5-9
Oracle Standalone Clusters, 9-3
Oracle Universal Installer
default permissions, 9-4
Oracle Universal Installer (OUI)
response files, A-4
Oracle Upgrade Companion, 3-2
oracle user
creating, 5-2
description, 5-9
required group membership, 5-9
ORADNFS utility, 8-8
oraInventory
about, 5-7
See also Oracle Inventory Directory
oranfstab file
creating, 8-3
OSASM group
about, 5-11
OSDBA for ASM group
about, 5-11
OSOPER for ASM group
about, 5-11
OUI
See Oracle Universal Installer (OUI)
P
paging file, 1-1, 2-2, 2-5
Parallel NFS, 8-2
partitions
using with Oracle ASM, 7-2
password file
Oracle ASM, required for upgrade, 11-10
patch bundles
contents, 10-2
description, 10-2
Patch Set Updates, 10-2
patch sets
description, 10-2
patch updates
download, 10-2
install, 10-2
My Oracle Support, 10-2
patches
CPU, 10-2
DST, 10-2
PSU, 10-2
patching
Oracle ASM, xxiii
permissions
Administrators, 9-4
Authenticated Users group, 9-4
Oracle Home User, 9-4
Oracle Installation User, 9-4
SYSTEM, 9-4
physical RAM
requirements, 2-5
policy-managed databases
and SCAN, 4-4
postinstallation
backing up voting files, 10-1
configuration of Oracle software, A-8, A-11
patch download and install, 10-2
recommended tasks, 10-7
required tasks, 10-1
resource status OFFLINE, 9-13
preconfigured database
Oracle ASM disk space requirements, 7-2
requirements when using Oracle ASM, 7-2
preinstallation
requirements for Oracle Enterprise Manager, 3-7
verifying installation requirements, 9-9
primary host name, 1-4
private IP addresses, 4-17
privileged groups
for Oracle Database, 5-9
privileges
verifying for OUI cluster installation, 5-17
processors
instruction sets, 2-4
minimum requirements, 2-5
proxy realm, 1-8
PSU, 10-2
public IP addresses, 4-17
public node name
and primary host name, 1-4
Q
quorum failure group, 7-2
R
RAID
recommended Oracle ASM redundancy level, 7-2
RAM
requirements, 2-5
raw partitions, 2-6
recommendations
backing up Oracle software, 11-7
client access to the cluster, 10-10
configuring BMC, 5-25
enable disk automounting, 6-8
for creating Oracle Grid Infrastructure home and
Oracle Inventory directories, 5-22
for using NTFS formatted disks, 2-6
for Windows Firewall exceptions, 10-2
limiting the number of partitions on a single disk,
7-8
managing Oracle Clusterware files, 2-6
managing Oracle Database data files, 2-6
minimum RAM size, 2-2
number of IP address for SCAN resolution, 4-4,
4-17
Oracle ASM redundancy level, 7-2
postinstallation tasks, 10-7
private network, 4-3
secure response files after modification, A-5
temporary directory configuration, 5-20
using static host names, 4-13
recovery files
and NFS, 6-6
required disk space, 7-2
supported storage options, 6-2
redundancy level
and space requirements for preconfigured
database, 7-2
for Oracle ASM, 7-2
registering resources, 11-20
Registry keys
Tcpip
parameters, 4-10
W32Time:Config, 2-8
releases
multiple, B-2
remove Oracle software, 12-1
requirements
hardware certification, 3-6
memory, 2-5
Oracle Enterprise Manager, 3-7
processors, 2-5
software certification, 3-6
temporary disk space, 2-4
web browsers, 3-7
resource status OFFLINE, 9-13
response file mode
about, A-1
installation
preparing, A-3
reasons for using, A-2
See also response files
response files
about, A-1
creating with template, A-4, A-5
crs_install.rsp, A-5
dbca.rsp, A-4
enterprise.rsp, A-4
general procedure, A-3
netca.rsp, A-4
Oracle Net Configuration Assistant (NETCA), A-7
passing values at command line, A-1
reasons for using, A-2
specifying with the installer, A-6
reverse lookup, 4-20
role separation
and Oracle ASM, xxiii
rootcrs.bat
restriction for Flex Cluster deinstallation, 12-5
running multiple Oracle releases, B-2
S
SCAN
client access, 10-10
description, 10-10
required for clients of policy-managed databases,
4-4
understanding, 4-4
SCAN address, 1-4
SCAN addresses, 4-4, 4-21
SCAN listeners, 4-4, 10-13
SCANs
configuring, 1-4
separation of duty
user roles, xxiii
setup.exe, A-6
silent mode
See response file mode
single client access names
See SCAN addresses
socket files, 9-12
software
removing, 12-7
uninstalling, 12-7
software requirements, 3-4
space requirements, 7-6
supported storage options
Grid home, 2-6
My Oracle Support website, 3-6
Oracle Clusterware, 6-2
switch
recommendations
using a dedicated switch, 4-7
SYSASM, 5-11
SYSBACKUP, 5-12
SYSDBA privilege
associated group, 5-10
SYSDG, 5-12
SYSKM, 5-12
SYSOPER privilege
associated group, 5-10
SYSRAC, 5-12
system requirements
for Grid Infrastructure, 2-5
latest information, 3-6
memory, 2-2
SYSTEM user
permissions, 9-4
T
TCP, 4-7
TEMP environment variable, 2-4
temporary disk space
requirements, 2-4
to 12c release 1 (12.1), 11-22
troubleshooting
and deinstalling, 12-1
cron jobs and installation, 1-8
DBCA does not recognize Oracle ASM disk size
and fails to create disk groups, 10-12
deconfiguring Oracle Clusterware to fix causes of
installation errors, 12-5
disk space errors, 1-3, 1-6
environment path errors, 1-3, 1-6
installation errors, 12-5
installation owner environment variables and
installation errors, 11-9
intermittent hangs, 9-12
log file, 9-4
permissions errors and oraInventory, 5-7
unset environment variables, 1-3
U
UAC remote restrictions, 5-17
unconfiguring Oracle Clusterware, 12-5
uninstall
Oracle software, 12-1
unset installation owners environment variables, 11-9
upgrade
Oracle Grid Infrastructure 12.1
create Oracle ASM password file, 11-10
upgrade tasks, 11-20
upgrades
and SCAN, 4-4
best practices, 3-2
checklist, 11-7
Oracle ASM, 3-2, 11-3
Oracle Grid Infrastructure, 11-3
restrictions, 11-4
standalone database, 3-2
unsetting environment variables, 11-9
upgrading
and ORAchk Upgrade Readiness Assessment, 1-8
batches, 11-13
groups of nodes, 11-13
user
domain, 5-2
non-LocalSystem user running Oracle Services,
xxiii
Oracle Service User, 5-3, 5-4
roles, xxiii
User Account Control settings, 5-19
users
creating the grid user, 5-2
Oracle software owner user (oracle), 5-9
V
video adapter requirements, 2-5
virtual IP addresses, 4-17
virtual local area network, 4-23
virtual memory
requirements, 2-5
VLAN
virtual local area network, 4-23
voting files
backing up, 10-1
configuration of, 1-7
supported storage options, 6-2
W
W32Time, 2-8
weakhostsend, 4-24
web browsers, 3-7
Windows
supported operating system versions, 3-4
Windows Media Sensing
disabling, 4-10
Windows network bind order, 4-10
Windows registry
backup before upgrades, 11-7
Windows Time Service
configuring, 2-8
X
x64 software versions, 2-5
x86 software versions, 2-5