Oracle® Grid Infrastructure
Installation Guide
11g Release 2 (11.2) for Microsoft Windows
E10817-03
April 2010
Oracle Grid Infrastructure Installation Guide, 11g Release 2 (11.2) for Microsoft Windows
E10817-03
Copyright © 2007, 2010, Oracle and/or its affiliates. All rights reserved.
Primary Authors: Janet Stern, Douglas Williams
Contributing Authors: Mark Bauer, David Brower, Jonathan Creighton, Reema Khosla, Barb Lundhild, Saar
Maoz, Markus Michalewicz, Philip Newlan, Hanlin Qian
Contributors: Karin Brandauer, Mark Fuller, Paul Harter, Yingwei Hu, Scott Jesse, Sameer Joshi, Sana
Karam, Alexander Keh, Roland Knapp, Jai Krishnani, Jifeng Liu, Anil Nair, Mohammed Quadri, Sunil
Ravindrachar, Janelle Simmons, Malaiarasan Stalin, Preethi Subramanyam, Preethi Vallam, Rick Wessman
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this software or related documentation is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical data"
pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As
such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and
license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of
the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software
License (December 2007). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications which may
create a risk of personal injury. If you use this software in dangerous applications, then you shall be
responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use
of this software. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of
this software in dangerous applications.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
This software and documentation may provide access to or information on content, products, and services
from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and
its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of
third-party content, products, or services.
Contents

Preface ................................................................................. ix
    Intended Audience ................................................................. ix
    Documentation Accessibility ...................................................... ix
    Related Documents .................................................................. x
    Conventions ........................................................................ xi

What's New in Oracle Grid Infrastructure Installation and Configuration? .............. xiii
    Desupported Options for Oracle Clusterware and Oracle ASM 11g Release 2 .......... xiii
    New Features for Release 2 (11.2) .............................................. xiii
    New Features for Release 1 (11.1) .............................................. xvii

1 Typical Installation for Oracle Grid Infrastructure for a Cluster
    1.1    Typical and Advanced Installation ........................................ 1-1
    1.2    Preinstallation Steps Requiring Manual Tasks ............................ 1-1
    1.2.1      Verify System Requirements ........................................... 1-2
    1.2.2      Check Network Requirements ........................................... 1-3
    1.2.3      Install Operating System Patches and other required software ........ 1-5
    1.2.4      Configure Operating System Users ..................................... 1-5
    1.2.5      Configure the Directories Used During Installation ................... 1-5
    1.2.6      Check and Prepare Storage ............................................ 1-6
    1.3    Install Oracle Grid Infrastructure Software .............................. 1-7

2 Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks
    2.1    Installation Differences Between Windows and Linux or UNIX .............. 2-1
    2.2    Reviewing Upgrade Best Practices ........................................ 2-2
    2.3    Checking Hardware and Software Certification ............................ 2-3
    2.3.1      View Certification Information at My Oracle Support ................. 2-3
    2.3.2      Web Browser Support .................................................. 2-4
    2.3.3      Windows Telnet and Terminal Services Support ........................ 2-4
    2.4    Checking the Hardware Requirements ...................................... 2-4
    2.5    Checking the Disk Space Requirements .................................... 2-5
    2.5.1      Disk Format Requirements ............................................. 2-6
    2.5.2      Disk Space Requirements for Oracle Home Directories ................. 2-6
    2.5.3      TEMP Disk Space Requirements ........................................ 2-6
    2.6    Checking the Network Requirements ....................................... 2-7
    2.6.1      Network Hardware Requirements ....................................... 2-7
    2.6.2      IP Address Requirements ............................................. 2-9
    2.6.3      DNS Configuration for Domain Delegation to Grid Naming Service ..... 2-10
    2.6.4      Grid Naming Service Configuration Example .......................... 2-11
    2.6.5      Manual IP Address Configuration Example ............................ 2-12
    2.6.6      Network Interface Configuration Options ............................ 2-13
    2.7    Identifying Software Requirements ...................................... 2-13
    2.7.1      Windows Firewall Feature on Windows Servers ........................ 2-15
    2.8    Network Time Protocol Setting .......................................... 2-15
    2.8.1      Configuring the Windows Time Service ............................... 2-15
    2.8.2      Configuring Network Time Protocol .................................. 2-16
    2.8.3      Configuring Cluster Time Synchronization Service ................... 2-16
    2.9    Enabling Intelligent Platform Management Interface ..................... 2-16
    2.9.1      Requirements for Enabling IPMI ..................................... 2-17
    2.9.2      Configuring the IPMI Management Network ............................ 2-17
    2.9.3      Configuring the IPMI Driver ........................................ 2-17
    2.10   Checking Individual Component Requirements ............................. 2-21
    2.10.1     Oracle Advanced Security Requirements .............................. 2-21
    2.10.2     Oracle Enterprise Manager Requirements ............................. 2-21
    2.11   Configuring User Accounts .............................................. 2-22
    2.11.1     Managing User Accounts with User Account Control ................... 2-22
    2.12   Verifying Cluster Privileges ........................................... 2-22

3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle RAC
    3.1    Reviewing Storage Options ............................................... 3-1
    3.1.1      General Storage Considerations for Oracle Grid Infrastructure ....... 3-2
    3.1.2      General Storage Considerations for Oracle RAC ....................... 3-2
    3.1.3      Supported Storage Options for Oracle Clusterware and Oracle RAC ..... 3-4
    3.1.4      After You Have Selected Disk Storage Options ........................ 3-6
    3.2    Preliminary Shared Disk Preparation ..................................... 3-6
    3.2.1      Disabling Write Caching ............................................. 3-6
    3.2.2      Enabling Automounting for Windows ................................... 3-7
    3.3    Storage Requirements for Oracle Clusterware and Oracle RAC .............. 3-8
    3.3.1      Requirements for Using a Cluster File System for Shared Storage ..... 3-8
    3.3.2      Identifying Storage Requirements for Using Oracle ASM for Shared Storage ... 3-9
    3.3.3      Restrictions for Disk Partitions Used By Oracle ASM ................ 3-14
    3.3.4      Requirements for Using a Shared File System ........................ 3-14
    3.3.5      Requirements for Files Managed by Oracle ........................... 3-16
    3.4    Configuring the Shared Storage Used by Oracle ASM ...................... 3-16
    3.4.1      Creating Disk Partitions for Oracle ASM ............................ 3-16
    3.4.2      Marking Disk Partitions for Oracle ASM Before Installation ......... 3-17
    3.4.3      Configuring Oracle Automatic Storage Management Cluster File System ... 3-19
    3.4.4      Migrating Existing Oracle ASM Instances ............................ 3-20
    3.5    Configuring Storage for Oracle Database Files on OCFS for Windows ...... 3-20
    3.5.1      Formatting Drives to Use OCFS for Windows after Installation ....... 3-22
    3.6    Configuring Direct NFS Storage for Oracle RAC Data Files ............... 3-22
    3.6.1      About Direct NFS Storage ........................................... 3-23
    3.6.2      About the Oranfstab File for Direct NFS ............................ 3-23
    3.6.3      Mounting NFS Storage Devices with Direct NFS ....................... 3-24
    3.6.4      Specifying Network Paths for a NFS Server .......................... 3-24
    3.6.5      Enabling the Direct NFS Client ..................................... 3-24
    3.6.6      Performing Basic File Operations Using the ORADNFS Utility ......... 3-25
    3.6.7      Disabling Direct NFS Client ........................................ 3-26
    3.7    Desupport of Raw Devices ............................................... 3-26

4 Installing Oracle Grid Infrastructure for a Cluster
    4.1    Preparing to Install Oracle Grid Infrastructure with Oracle Universal Installer ... 4-1
    4.2    Installing Grid Infrastructure with OUI ................................. 4-5
    4.2.1      Running OUI to Install Grid Infrastructure .......................... 4-5
    4.2.2      Installing Grid Infrastructure Using a Cluster Configuration File ... 4-6
    4.2.3      Silent Installation of Oracle Clusterware ........................... 4-7
    4.3    Installing Grid Infrastructure Using a Software-Only Installation ....... 4-7
    4.3.1      Installing the Software Binaries .................................... 4-8
    4.3.2      Configuring the Software Binaries ................................... 4-8
    4.4    Confirming Oracle Clusterware Function ................................. 4-10
    4.5    Confirming Oracle ASM Function for Oracle Clusterware Files ............ 4-10

5 Oracle Grid Infrastructure Postinstallation Procedures
    5.1    Required Postinstallation Tasks ......................................... 5-1
    5.1.1      Download and Install Patch Updates .................................. 5-1
    5.1.2      Configure Exceptions for the Windows Firewall ....................... 5-2
    5.2    Recommended Postinstallation Tasks ...................................... 5-6
    5.2.1      Install Troubleshooting Tool ........................................ 5-6
    5.2.2      Optimize Memory Usage for Programs .................................. 5-6
    5.2.3      Create a Fast Recovery Area Disk Group .............................. 5-6
    5.3    Using Older Oracle Database Versions with Grid Infrastructure ........... 5-8
    5.3.1      General Restrictions for Using Older Oracle Database Versions ....... 5-8
    5.3.2      Using DBCA or Applying Patches for Oracle Database Release 10.x or 11.x ... 5-9
    5.3.3      Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x ...... 5-9
    5.3.4      Enabling the Global Services Daemon (GSD) for Oracle Database Release 9.2 ... 5-10
    5.3.5      Using the Correct LSNRCTL Commands ................................. 5-10
    5.3.6      Starting and Stopping Oracle Clusterware Resources ................. 5-10
    5.4    Modifying Oracle Clusterware Binaries After Installation ............... 5-11

6 How to Modify or Deinstall Oracle Grid Infrastructure
    6.1    Deciding When to Deinstall Oracle Clusterware ........................... 6-1
    6.2    Adding Standalone Grid Infrastructure Servers to a Cluster .............. 6-1
    6.3    Deconfiguring Oracle Clusterware without Removing Binaries .............. 6-2
    6.4    Removing Oracle Clusterware and Oracle Automatic Storage Management ..... 6-3
    6.4.1      About the Deinstallation Command .................................... 6-3
    6.4.2      Example of Running the Deinstall Command for Oracle Clusterware and ASM ... 6-5
    6.4.3      Example Parameter File for Deinstall of Oracle Grid Infrastructure ... 6-6

A Troubleshooting the Oracle Grid Infrastructure Installation Process
    A.1    General Installation Issues ............................................. A-1
    A.2    About the Oracle Clusterware Alert Log .................................. A-2
    A.3    Oracle Clusterware Install Actions Log Errors and Causes ................ A-3
    A.3.1      The OCFS for Windows format is not recognized on a remote cluster node ... A-3
    A.3.2      You have a Windows 2003 system and Automount of new drives is not enabled ... A-3
    A.3.3      Symbolic links for disks were not removed ........................... A-3
    A.3.4      Discovery string used by Oracle Automatic Storage Management is incorrect ... A-3
    A.3.5      You used a period in a node name during Oracle Clusterware install ... A-4
    A.3.6      Ignoring upgrade failure of ocr(-1073740972) ........................ A-4
    A.4    Performing Cluster Diagnostics During Oracle Grid Infrastructure Installations ... A-4
    A.5    Interconnect Configuration Issues ....................................... A-4

B Installing and Configuring Oracle Grid Infrastructure Using Response Files
    B.1    About Response Files .................................................... B-1
    B.1.1      Reasons for Using Silent Mode or Response File Mode ................. B-2
    B.1.2      General Procedure for Using Response Files .......................... B-2
    B.2    Preparing a Response File ............................................... B-3
    B.2.1      Editing a Response File Template .................................... B-3
    B.2.2      Recording a Response File ........................................... B-4
    B.3    Running the Installer Using a Response File ............................. B-4
    B.4    Running Net Configuration Assistant Using a Response File ............... B-5
    B.5    Postinstallation Configuration Using a Response File .................... B-6
    B.5.1      About the Postinstallation Configuration File ....................... B-6
    B.5.2      Running Postinstallation Configuration Using a Response File ........ B-7

C Oracle Grid Infrastructure for a Cluster Installation Concepts
    Understanding Preinstallation Configuration .................................... C-1
        Understanding Oracle Groups and Users ...................................... C-1
        Understanding the Oracle Base Directory Path ............................... C-2
        Understanding Network Addresses ............................................ C-2
        Understanding Network Time Requirements .................................... C-4
    Understanding Storage Configuration ............................................ C-4
        Understanding Oracle Automatic Storage Management Cluster File System ..... C-4
        About Migrating Existing Oracle ASM Instances .............................. C-5

D How to Upgrade to Oracle Grid Infrastructure 11g Release 2
    D.1    Restrictions for Clusterware and Oracle ASM Upgrades to Grid Infrastructure ... D-1
    D.2    Understanding Out-of-Place and Rolling Upgrades ......................... D-2
    D.3    Preparing to Upgrade an Existing Oracle Clusterware Installation ........ D-3
    D.3.1      Verify System Readiness for Patches and Upgrades .................... D-3
    D.3.2      Gather the Necessary System Information ............................. D-3
    D.3.3      Upgrade to the Minimum Required Oracle Clusterware Version .......... D-3
    D.3.4      Unset Environment Variables ......................................... D-4
    D.4    Back Up the Oracle Software Before Upgrades ............................. D-4
    D.5    Upgrading Oracle Clusterware ............................................ D-4
    D.6    Upgrading Oracle ASM .................................................... D-5
    D.6.1      About Upgrading Oracle ASM .......................................... D-5
    D.6.2      Using ASMCA to Upgrade Oracle ASM ................................... D-6
    D.7    Updating DB Control and Grid Control Target Parameters .................. D-6
    D.8    Downgrading Oracle Clusterware After an Upgrade ......................... D-6

Index

List of Tables
    2–1    Example of a Grid Naming Service Network ............................... 2-11
    2–2    Manual Network Configuration Example ................................... 2-12
    2–3    Oracle Grid Software Requirements for Windows Systems .................. 2-14
    3–1    Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Binaries ... 3-5
    3–2    Oracle Clusterware Shared File System Volume Size Requirements ......... 3-15
    3–3    Oracle RAC Shared File System Volume Size Requirements ................. 3-15
    5–1    Oracle Executables Used to Access Non-Oracle Software ................... 5-4
    5–2    Other Oracle Software Products Requiring Windows Firewall Exceptions .... 5-5
    B–1    Response files for Oracle Grid Infrastructure ........................... B-3
Preface
Oracle Grid Infrastructure Installation Guide for Microsoft Windows explains how to
configure a server in preparation for installing and configuring an Oracle Grid
Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage
Management). It also explains how to configure a server and storage in preparation for
an Oracle Real Application Clusters (Oracle RAC) installation.
Intended Audience
Oracle Grid Infrastructure Installation Guide for Microsoft Windows provides configuration
information for network and system administrators, and database installation
information for database administrators (DBAs) who install and configure Oracle
Clusterware and Oracle Automatic Storage Management in a grid infrastructure for a
cluster installation.
At sites with specialized system roles that intend to install Oracle RAC, this
book is intended for the system administrators, network administrators, or
storage administrators who configure a system in preparation for an Oracle Grid
Infrastructure for a cluster installation, and who complete all configuration
tasks that require Administrative user privileges. When grid infrastructure
installation and configuration are completed successfully, a system administrator
needs only to provide configuration information and to grant access to the
database administrator, so that the database administrator can run scripts that
require Administrative user privileges during an Oracle RAC installation.
This guide assumes that you are familiar with Oracle Database concepts. For
additional information, refer to books in the Related Documents list.
Documentation Accessibility
Our goal is to make Oracle products, services, and supporting documentation
accessible to all users, including users that are disabled. To that end, our
documentation includes features that make information available to users of assistive
technology. This documentation is available in HTML format, and contains markup to
facilitate access by the disabled community. Accessibility standards will continue to
evolve over time, and Oracle is actively engaged with other market-leading
technology vendors to address technical obstacles so that our documentation can be
accessible to all of our customers. For more information, visit the Oracle Accessibility
Program Web site at http://www.oracle.com/accessibility/.
Accessibility of Code Examples in Documentation
Screen readers may not always correctly read the code examples in this document. The
conventions for writing code require that closing braces should appear on an
otherwise empty line; however, some screen readers may not always read a line of text
that consists solely of a bracket or brace.
Accessibility of Links to External Web Sites in Documentation
This documentation may contain links to Web sites of other companies or
organizations that Oracle does not own or control. Oracle neither evaluates nor makes
any representations regarding the accessibility of these Web sites.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For
information, visit http://www.oracle.com/support/contact.html or visit
http://www.oracle.com/accessibility/support.html if you are hearing
impaired.
Related Documents
For more information, refer to the following Oracle resources:
Oracle Clusterware and Oracle Real Application Clusters Documentation
This installation guide reviews steps required to complete an Oracle Clusterware and
Oracle Automatic Storage Management installation, and to perform preinstallation
steps for Oracle RAC.
If you intend to install Oracle Database or Oracle RAC, then complete the
preinstallation tasks as described in this installation guide, complete grid
infrastructure installation, and review those installation guides for additional
information. You can install either Oracle databases for a standalone server on a grid
infrastructure installation, or install an Oracle RAC database. To install an Oracle
Restart deployment of Oracle Clusterware, refer to Oracle Database Installation Guide for
Microsoft Windows.
Most Oracle error message documentation is only available in HTML format. If you
only have access to the Oracle Documentation media, then browse the error messages
by range. When you find the correct range or error messages, use your browser's Find
feature to locate a specific message. When connected to the Internet, you can search for
a specific error message using the error message search feature of the Oracle online
documentation.
Installation Guides
■ Oracle Diagnostics Pack Installation Guide
■ Oracle Database Installation Guide for Microsoft Windows
■ Oracle Real Application Clusters Installation Guide for Linux and UNIX

Operating System-Specific Administrative Guides
■ Oracle Database Administrator's Reference, 11g Release 2 (11.2) for UNIX Systems
■ Oracle Database Platform Guide for Microsoft Windows

Oracle Clusterware and Oracle Automatic Storage Management Administrative Guides
■ Oracle Clusterware Administration and Deployment Guide
■ Oracle Database Storage Administrator's Guide

Oracle Real Application Clusters Administrative Guides
■ Oracle Real Application Clusters Administration and Deployment Guide
■ Oracle Database 2 Day + Real Application Clusters Guide
■ Oracle Database 2 Day DBA
■ Getting Started with the Oracle Diagnostics Pack

Generic Documentation
■ Oracle Database Administrator's Guide
■ Oracle Database Concepts
■ Oracle Database New Features Guide
■ Oracle Database Net Services Administrator's Guide
■ Oracle Database Reference
To download free release notes, installation documentation, white papers, or other
collateral, visit the Oracle Technology Network (OTN). You must register online before
using OTN; registration is free and can be done at the following Web site:
http://www.oracle.com/technology/membership/
If you have a username and password for OTN, then you can go directly to the
documentation section of the OTN Web site at the following Web site:
http://www.oracle.com/technology/documentation/index.html
Conventions
The following text conventions are used in this document:

Convention    Meaning
boldface      Boldface type indicates graphical user interface elements associated
              with an action, or terms defined in text or the glossary.
italic        Italic type indicates book titles, emphasis, or placeholder variables
              for which you supply particular values.
monospace     Monospace type indicates commands within a paragraph, URLs, code in
              examples, text that appears on the screen, or text that you enter.
What's New in Oracle Grid Infrastructure
Installation and Configuration?
This section describes new features as they pertain to the installation and
configuration of Oracle Grid Infrastructure (Oracle Clusterware and Oracle Automatic
Storage Management), and Oracle Real Application Clusters (Oracle RAC). The topics
in this section are:
■ New Features for Release 2 (11.2)
■ New Features for Release 1 (11.1)
Desupported Options for Oracle Clusterware and Oracle ASM 11g
Release 2
The following is a list of options desupported with this release:
Raw Devices Not Supported with OUI
With this release, Oracle Universal Installer (OUI) no longer supports installation of
Oracle Clusterware files on raw devices. Install Oracle Clusterware files either in
Oracle Automatic Storage Management (Oracle ASM) disk groups, or on a supported
shared file system.
Raw devices are still supported for systems upgrading to Oracle Clusterware 11g
release 2.
New Features for Release 2 (11.2)
The following is a list of new features for Oracle Database 11g release 2 (11.2):
■
Oracle Automatic Storage Management and Oracle Clusterware Installation
■
Oracle Automatic Storage Management and Oracle Clusterware Files
■
Oracle Automatic Storage Management Cluster File System
■
Cluster Time Synchronization Service
■
Daylight Savings Time Upgrade of Timestamp with Timezone Data Type
■
Enterprise Manager Database Control Provisioning
■
Oracle Clusterware Out-of-place Upgrade
■
Grid Plug and Play
■
Oracle Clusterware Administration with Oracle Enterprise Manager
■
Redundant Interconnects for Oracle Clusterware
■
SCAN for Simplified Client Access
■
SRVCTL Command Enhancements for Patching
■
Typical Installation Option
■
Deinstallation Tool
■
Voting Disk Backup Procedure Change
Oracle Automatic Storage Management and Oracle Clusterware Installation
With Oracle Grid Infrastructure 11g release 2 (11.2), Oracle ASM and Oracle
Clusterware are installed into a single home directory, which is referred to as the Grid
Infrastructure home. Configuration assistants that configure Oracle ASM and
Oracle Clusterware start after the installer interview process.
The installation of the combined products is called Oracle Grid Infrastructure.
However, Oracle Clusterware and Oracle ASM remain separate products.
Oracle Automatic Storage Management and Oracle Clusterware Files
With this release, Oracle Cluster Registry (OCR) and voting disks can be placed on
Oracle ASM.
This feature enables Oracle ASM to provide a unified storage solution, storing all the
data for the clusterware and the database, without the need for third-party volume
managers or cluster file systems.
For new installations, OCR and voting disk files can be placed either on Oracle ASM,
or on a cluster file system. Installing Oracle Clusterware files on raw devices is no
longer supported, unless an existing Oracle Clusterware 10g release 1 or higher system
is being upgraded.
Oracle Automatic Storage Management Cluster File System
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a new
multi-platform, scalable file system and storage management design that extends
Oracle ASM technology to support all application data. Oracle ACFS provides
dynamic file system resizing, and improved performance using the distribution,
balancing and striping technology across all available disks, and provides storage
reliability through Oracle ASM's mirroring and parity protection.
Note: Oracle ACFS is only supported on Windows Server 2003 64-bit
and Windows Server 2003 R2 64-bit.
The Oracle ASM Dynamic Volume Manager (Oracle ADVM) extends Oracle ASM by
providing a disk driver interface to Oracle ASM storage allocated as Oracle ASM
volume files. You can use Oracle ADVM to create virtual disks that contain file
systems. File systems and other disk-based applications issue I/O requests to Oracle
ADVM volume devices as they would to other storage devices on a vendor operating
system. The file systems contained on Oracle ASM volumes can support files beyond
Oracle database files, such as executable files, report files, trace files, alert logs, and
other application data files.
Cluster Time Synchronization Service
Cluster node times should be synchronized, particularly if the cluster is to be used for
Oracle RAC. With this release, Oracle Clusterware provides Cluster Time
Synchronization Service (CTSS), which ensures that there is a synchronization service
in the cluster. If Network Time Protocol (NTP) is not found during cluster
configuration, then CTSS is configured to ensure time synchronization.
Daylight Savings Time Upgrade of Timestamp with Timezone Data Type
When time zone version files are updated, TIMESTAMP WITH TIMEZONE data
could become stale. With timestamp automation, the system and user data is updated
transparently with minimal downtime. In addition, clients can continue to work with
the server without having to update the client-side files.
Enterprise Manager Database Control Provisioning
Enterprise Manager Database Control 11g provides the capability to automatically
provision Oracle Grid Infrastructure and Oracle RAC installations on new nodes, and
then extend the existing Oracle Grid Infrastructure and Oracle RAC database to these
provisioned nodes. This provisioning procedure requires a successful Oracle RAC
installation before you can use this feature.
See Also: Oracle Real Application Clusters Administration and
Deployment Guide for information about this feature
Oracle Clusterware Out-of-place Upgrade
With this release, you can install a new version of Oracle Clusterware into a separate
home from an existing Oracle Clusterware installation. This feature reduces the
downtime required to upgrade a node in the cluster. When performing an out-of-place
upgrade, the old and new version of the software are present on the nodes at the same
time, each in a different home location, but only one version of the software is active.
Grid Plug and Play
In the past, adding or removing servers in a cluster required extensive manual
preparation. With this release, you can continue to configure server nodes manually, or
use Grid Plug and Play to configure them dynamically as nodes are added or removed
from the cluster.
Grid Plug and Play reduces the costs of installing, configuring, and managing server
nodes by starting a grid naming service within the cluster to allow each node to
perform the following tasks dynamically:
■ Negotiating appropriate network identities for itself
■ Acquiring additional information it requires to operate from a configuration profile
■ Configuring or reconfiguring itself using profile data, making host names and addresses resolvable on the network
Because servers perform these tasks dynamically, adding and removing nodes simply
requires an administrator to connect the server to the cluster and allow the cluster to
configure the node. By using Grid Plug and Play and following best practice
recommendations, adding a node to the database cluster becomes part of the normal
server restart, and removing a node from the cluster occurs automatically when a
server is turned off.
Oracle Clusterware Administration with Oracle Enterprise Manager
With this release, you can use the Enterprise Manager Cluster Home page to perform
full administrative and monitoring tasks for both standalone database and Oracle
RAC environments, using High Availability Application and Oracle Cluster Resource
Management.
When Oracle Enterprise Manager is installed with Oracle Clusterware, it can provide a
set of users that have the Oracle Clusterware Administrator role in Enterprise
Manager, and provide full administrative and monitoring support for High
Availability application and Clusterware resource management.
After you have completed installation and have Enterprise Manager deployed, you
can provision additional nodes added to the cluster using Enterprise Manager.
Redundant Interconnects for Oracle Clusterware
With this release, Oracle Clusterware can use redundant network interfaces for the
Clusterware interconnect, without requiring external or operating system-based
bonding solutions. This provides the following advantages:
■ High availability for the cluster by removing the network interface as a single point of failure
■ Dynamic load-balancing across all available network interface cards (NICs)
■ Ability to add or replace private interfaces during run time; Oracle Clusterware automatically recognizes and aggregates new or replaced interfaces to private interconnect addresses
■ Support for UDP, TCP, and raw interfaces
SCAN for Simplified Client Access
With this release, the single client access name (SCAN) is the address to provide for all
clients connecting to the cluster. The SCAN is a domain name registered to at least one
and up to three IP addresses, either in the domain name service (DNS) or the Grid
Naming Service (GNS). The SCAN eliminates the need to change clients when nodes
are added to or removed from the cluster. Clients using SCAN can also access the
cluster using Easy Connect Naming.
SRVCTL Command Enhancements for Patching
With this release, you can use srvctl to shut down all Oracle software running
within an Oracle home, in preparation for patching. Oracle Grid Infrastructure
patching is automated across all nodes, and patches can be applied in a multi-node,
multi-patch fashion.
Typical Installation Option
To streamline cluster installations, especially for those customers who are new to
clustering, Oracle introduces the Typical Installation path. The Typical installation
defaults as many options as possible to those recommended as best practices.
Deinstallation Tool
OUI no longer removes Oracle software. Use the new Deinstallation Tool
(deinstall.bat) available on the installation media before installation and in the
Oracle home directory after installation. This tool can also be downloaded from
Oracle Technology Network.
See Also: Chapter 6, "How to Modify or Deinstall Oracle Grid
Infrastructure" for more information
Voting Disk Backup Procedure Change
In prior releases, backing up the voting disks was a required postinstallation task.
With Oracle Clusterware release 11.2 and later, backing up voting disks manually is no
longer required, as voting disks are backed up automatically in the OCR as part of any
configuration change and voting disk data is automatically restored to any added
voting disks.
See Also: Oracle Clusterware Administration and Deployment Guide
New Features for Release 1 (11.1)
The following is a list of new features for release 1 (11.1):
Changes in Installation Documentation
With Oracle Database 11g release 1, Oracle Clusterware can be installed or configured
as an independent product, and additional documentation is provided on storage
administration. For installation planning, note the following documentation:
Oracle Database 2 Day + Real Application Clusters Guide
This book provides an overview and examples of the procedures to install and
configure a two-node Oracle Clusterware and Oracle RAC environment.
Oracle Clusterware Installation Guide
This book (the guide that you are reading) provides procedures either to install Oracle
Clusterware as a standalone product, or to install Oracle Clusterware with either
Oracle Database, or Oracle RAC. It contains system configuration instructions that
require system administrator privileges.
Oracle Real Application Clusters Installation Guide
This platform-specific book provides procedures to install Oracle RAC after you have
successfully completed an Oracle Clusterware installation. It contains database
configuration instructions for database administrators.
Oracle Database Storage Administrator’s Guide
This book provides information for database and storage administrators who
administer and manage storage, or who configure and administer Oracle ASM.
Oracle Clusterware Administration and Deployment Guide
This is the administrator’s reference for Oracle Clusterware. It contains information
about administrative tasks, including those that involve changes to operating system
configurations and cloning Oracle Clusterware.
Oracle Real Application Clusters Administration and Deployment Guide
This is the administrator’s reference for Oracle RAC. It contains information about
administrative tasks. These tasks include database cloning, node addition and
deletion, OCR administration, use of SRVCTL and other database administration
utilities, and tuning changes to operating system configurations.
Release 1 (11.1) Enhancements and New Features for Installation
The following is a list of enhancements and new features for Oracle Database 11g
release 1 (11.1):
New SYSASM Privilege for Oracle ASM Administration
This feature introduces a new SYSASM privilege that is specifically intended for
performing Oracle ASM administration tasks. Using the SYSASM privilege when
connecting to Oracle ASM instead of the SYSDBA privilege provides a clearer division
of responsibility between Oracle ASM administration and database administration.
Typical Installation for Oracle Grid
Infrastructure for a Cluster
This chapter describes the differences between a Typical and Advanced installation for
Oracle Grid Infrastructure for a cluster, and describes the steps required to complete a
Typical installation.
This chapter contains the following sections:
■
Typical and Advanced Installation
■
Preinstallation Steps Requiring Manual Tasks
■
Install Oracle Grid Infrastructure Software
1.1 Typical and Advanced Installation
You are given two installation options for Oracle Grid Infrastructure installations:
■ Typical Installation: The Typical installation option is a simplified installation with a minimal number of manual configuration choices. Oracle recommends that you select this installation type for most cluster implementations.
■ Advanced Installation: The Advanced Installation option is an advanced procedure that requires a higher level of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, integration with Intelligent Platform Management Interface (IPMI), or more granularity in specifying Oracle Automatic Storage Management (Oracle ASM) roles.
1.2 Preinstallation Steps Requiring Manual Tasks
Complete the following manual configuration tasks:
■
Verify System Requirements
■
Check Network Requirements
■
Install Operating System Patches and other required software
■
Configure Operating System Users
■
Configure the Directories Used During Installation
■
Check and Prepare Storage
See Also: Chapter 2, "Advanced Installation Oracle Grid
Infrastructure for a Cluster Preinstallation Tasks" and Chapter 3,
"Configuring Storage for Grid Infrastructure for a Cluster and Oracle
RAC" for information about how to complete these tasks
1.2.1 Verify System Requirements
This section provides a summary of the following pre-installation tasks:
■
Memory Requirements
■
Hardware Requirements
■
Disk Requirements
■
TEMP Space Requirements
For more information about these tasks, review "Checking the Hardware
Requirements" on page 2-4.
1.2.1.1 Memory Requirements
In the Windows Task Manager window, select the Performance tab to view the
available memory for your system.
To view the virtual memory settings, open the Control Panel and select System. In the
System Properties window, select the Advanced tab, then, under Performance, click
Performance Options or Settings. In the Performance Options window, select the
Advanced tab; the virtual memory (page file) settings are displayed at the bottom of
the window.
The minimum required RAM is 1.5 gigabytes (GB) for Oracle Grid Infrastructure, and
the minimum required virtual memory space is 2 GB. Oracle recommends that you set
the paging file size to twice the amount of RAM.
If you also plan to install Oracle Real Application Clusters (Oracle RAC), then 2.5 GB
of memory is the minimum required RAM.
1.2.1.2 Hardware Requirements
The minimum processor speed is 1 gigahertz (GHz) for Windows Server 2003,
Windows Server 2003 R2, and Windows Server 2008. The minimum processor speed is
1.4 GHz for Windows Server 2008 R2.
1.2.1.3 Disk Requirements
From the Start menu, select Run... In the Run window, type in Diskmgmt.msc to open
the Disk Management graphical user interface (GUI).
The Disk Management window displays the available space on file systems. If you use
standard redundancy for Oracle Clusterware files, which is 3 Oracle Cluster Registry
(OCR) files and 3 voting disk files, then you should have at least 2 GB of disk space
available on three separate physical disks reserved for Oracle Clusterware files.
Ensure you have at least 3 GB of space for the grid infrastructure for a cluster home
(Grid home). This includes Oracle Clusterware and Oracle ASM files and log files.
1.2.1.4 TEMP Space Requirements
Ensure that you have at least 1 GB of disk space in the Windows TEMP directory. If
this space is not available, then increase the partition size, or delete unnecessary files
in the TEMP directory. Make sure the environment variables TEMP and TMP both point
to the location of the TEMP directory, for example:
TEMP=C:\WINDOWS\TEMP
TMP=C:\WINDOWS\TEMP
1.2.2 Check Network Requirements
Ensure that you have the following available:
■
Single Client Access Name for the Cluster
■
IP Address Requirements
■
Disable the Media Sensing feature for TCP/IP
■
Network Adapter Binding Order and Protocol Priorities
■
Verify Privileges for Copying Files in the Cluster
1.2.2.1 Single Client Access Name for the Cluster
During Typical installation, you are prompted to confirm the default single client
access name (SCAN) for the cluster, which is used to connect to databases within the
cluster irrespective of which nodes the database instances run on. The default value
for the SCAN is based on the local node name. If you change the SCAN from the
default, then the name that you use must be globally unique throughout your
enterprise.
In a Typical installation, the SCAN is also the name of the cluster. The SCAN and
cluster name must be at least one character long and no more than 15 characters in
length, must be alphanumeric, and can contain hyphens (-). For example:
sales-dev
If you require a SCAN that is longer than 15 characters, then select the Advanced
installation option. See "IP Address Requirements" for information about the SCAN
address requirements.
1.2.2.2 IP Address Requirements
Before starting the installation, you must have at least two network interface cards
configured on each node: one for the private internet protocol (IP) addresses and one
for the public IP addresses.
1.2.2.2.1 IP Address Requirements for Manual Configuration
The public and virtual IP addresses must be static addresses, configured before
installation, and the virtual IP addresses for each node must not currently be in use.
Oracle Clusterware manages private IP addresses in the private subnet on network
adapters you identify as private during the installation interview.
The installation process configures and updates the Grid Plug and Play (GPnP)
profile with the following addresses:
■
A public IP address for each node
■
A virtual IP (VIP) address for each node
■
A SCAN configured on the domain name system (DNS) for round-robin resolution
to three addresses (recommended) or at least one address.
The SCAN is a host name used to provide access for clients to the cluster. Because the
SCAN is associated with the cluster as a whole, rather than to a particular node, the
SCAN makes it possible to add or remove nodes from the cluster without needing to
reconfigure clients. It also adds location independence for the databases, so that client
configuration does not have to depend on which nodes are running a particular
database instance. Clients can continue to access the cluster in the same way as with
previous releases, but Oracle recommends that clients accessing the cluster use the
SCAN.
Note: If you manually configure addresses, then Oracle strongly
recommends that you use DNS resolution for SCAN VIPs. If you use
the hosts file to resolve SCANs, then you must provide a hosts file
entry for each SCAN address.
See Also: "Understanding Network Addresses" on page C-2 for
more information about network addresses.
1.2.2.3 Intended Use of Network Adapters
During installation, you are asked to identify the planned use for each network
adapter that Oracle Universal Installer (OUI) detects on your cluster node. You must
identify each adapter as a public or private adapter, and you must use the same
private adapters for both Oracle Clusterware and Oracle RAC. For adapters that you
plan to use for other purposes–for example, an adapter dedicated to a network file
system–you must identify those adapters as "do not use" so that Oracle Clusterware
ignores them.
1.2.2.4 Disable the Media Sensing feature for TCP/IP
Media Sense allows Windows to uncouple an IP address from a network interface card
when the link to the local switch is lost. To disable Windows Media Sensing for
TCP/IP on Windows Server 2003 with SP1 or higher, you must set the value of the
DisableDHCPMediaSense parameter to 1 on each node. Because you must modify
the Windows registry to disable Media Sensing, you should first back up the registry
and confirm that you can restore it, using the methods described in your Windows
documentation.
Disable Media Sensing by completing the following steps on each node of your cluster:
1. Back up the Windows registry.
2. Use Registry Editor (Regedit.exe) to view the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. Add a new DWORD value to the Parameters subkey:
Value Name: DisableDHCPMediaSense
Value: 1
4. Exit the Registry Editor and then restart the computer.
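As a sketch, the same registry change can be made from an elevated command prompt on each node using the standard reg utility (the key path and value name are from the steps above; restart the computer afterward as described):

```bat
:: Hypothetical command-line equivalent of the Registry Editor steps above;
:: sets DisableDHCPMediaSense to 1 (disabled) under the Tcpip parameters key.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableDHCPMediaSense /t REG_DWORD /d 1 /f
```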
1.2.2.5 Network Adapter Binding Order and Protocol Priorities
Check the network adapter binding order on each node. Ensure that your public
network adapter is first in the bind order, and the private network adapter is second.
Follow these steps to configure the network adapter bind order:
1. Right-click My Network Places and choose Properties.
2. In the Advanced menu, click Advanced Settings.
3. If the public adapter name is not the first name listed under the Adapters and Bindings tab, then select it and click the arrow to move it to the top of the list.
4. Increase the priority of IPv4 over IPv6.
5. Click OK to save the settings and then exit the network setup dialog.
The names used for each class of network adapter (such as public) must be consistent
across all nodes. You can use nondefault names for the network adapter, for example,
PublicLAN, if the same names are used for the same class of network adapters on
each node in the network.
1.2.2.6 Verify Privileges for Copying Files in the Cluster
During installation, OUI copies the software from the local node to the remote nodes
in the cluster. Verify that you have administrative privileges on the other nodes in the
cluster by running the following command on each node, where nodename is the
node name:
net use \\nodename\C$
1.2.3 Install Operating System Patches and other required software
Refer to the tables listed in "Identifying Software Requirements" on page 2-13 for
details.
You must configure sufficient space in the Windows paging file. Paging files store
pages of memory used by processes that are not contained in other files.
Paging files are shared by all processes, and the lack of space in paging files can
prevent other processes from allocating memory.
If possible, split the paging file into multiple files on multiple physical devices. This
encourages parallel access to virtual memory, and improves the software performance.
See "Memory Requirements" on page 1-2 for information about configuring the
Windows paging file.
1.2.4 Configure Operating System Users
To install the Oracle software, you must use an account with administrative privileges.
For more information, refer to the section "Configuring User Accounts" on page 2-22.
Note: If you are performing the installation remotely, refer to
"Windows Telnet and Terminal Services Support" on page 2-4.
1.2.5 Configure the Directories Used During Installation
To install properly across all nodes, OUI uses the temporary folders defined within
Microsoft Windows. The TEMP and TMP environment variables should point to the
same local directory on all nodes in the cluster. By default, these settings are defined as
%USERPROFILE%\Local Settings\Temp and %USERPROFILE%\Local
Settings\Tmp in the Environment Settings of My Computer. It is recommended to
explicitly redefine these as WIN_DRIVE:\temp and WIN_DRIVE:\tmp, for example,
C:\temp or C:\tmp for all nodes, if Windows is installed on the C drive.
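One way to apply that recommendation is with the setx utility, sketched below (an assumption for illustration: setx is built in on Windows Server 2008 and later, and available in the Support Tools on Windows Server 2003; create the directory first and repeat on every node):

```bat
:: Hypothetical sketch: point TEMP and TMP at C:\temp system-wide on a node.
:: /M writes the machine (system) environment rather than the user profile.
setx TEMP C:\temp /M
setx TMP C:\temp /M
```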
The directory that Oracle software is installed in is referred to as its home directory.
When installing Oracle Grid Infrastructure, you must determine the location of the
Oracle Grid Infrastructure home. Oracle ASM is also installed in this home directory.
When you install Oracle RAC, you must choose a different directory to install the
software. The location of the Oracle RAC installation is referred to as the Oracle home.
The Oracle Inventory directory is the central inventory location for all Oracle software
installed on a server. By default, the location of the Oracle Inventory directory is
C:\Program Files\Oracle\Inventory.
For installations with Oracle Grid Infrastructure only, Oracle recommends that you let
OUI create the Oracle Grid Infrastructure home and Oracle Inventory directories for
you.
1.2.6 Check and Prepare Storage
You must have space available on Oracle ASM for Oracle Clusterware files (voting
disks and OCRs), and for Oracle Database files, if you install standalone or Oracle
RAC databases. Creating Oracle Clusterware files on raw devices is no longer
supported for new installations.
The following sections outline the procedure for creating OCR and voting disk
partitions for Oracle Cluster File System for Windows (OCFS for Windows), and
preparing disk partitions for use with Oracle ASM.
■
Create Disk Partitions
■
Stamp Disks for Oracle ASM
For additional information, review the following sections:
■ "Configuring Storage for Oracle Database Files on OCFS for Windows" on page 3-20
■ "Configuring Direct NFS Storage for Oracle RAC Data Files" on page 3-22
Note: When using Oracle ASM for either the Oracle Clusterware files
or Oracle Database files, Oracle creates one Oracle ASM instance on
each node in the cluster, regardless of the number of databases.
1.2.6.1 Create Disk Partitions
The following steps outline the procedure for creating disk partitions for use with
either OCFS for Windows or Oracle ASM:
1. Use the Microsoft Computer Management utility or the command line tool diskpart to create an extended partition. Use a basic disk: dynamic disks are not supported.
2. Create at least one logical partition for the Oracle Clusterware files. You do not have to create separate partitions for the OCR and voting disk if you plan to use OCFS for Windows. Oracle Clusterware creates individual files for the OCR and voting disk.
3. If your file system does not use a redundant array of inexpensive disks (RAID), then create an additional extended partition and logical partition for each partition that will be used by Oracle Clusterware files, to provide redundancy.
To create the required partitions, use the Disk Management utilities available with
Microsoft Windows. Use a basic disk with a Master Boot Record (MBR) partition style
as an extended partition for creating partitions.
1. From an existing node in the cluster, run the Windows disk administration tool as follows:
a. For Windows Server 2003 and Windows Server 2003 R2 systems:
Click Start, then select Settings, Control Panel, Administrative Tools, and then Computer Management.
Expand the Storage folder to Disk Management. Use a basic disk with an MBR partition style as an extended partition for creating partitions. Right-click inside an unallocated part of an extended partition and choose Create Logical Drive.
Specify a size for the partition that is at least 520 megabytes (MB) to store both the OCR and voting disk, or 500 MB (the minimum size) to store just the voting disk or OCR.
When specifying options for the logical drive, choose the options "Do not assign a drive letter or path" and "Do not format this partition". Repeat these steps to create enough partitions to store all the required files.
b. For Windows Server 2008 and Windows Server 2008 R2 systems:
See "Configuring Storage for Oracle Database Files on OCFS for Windows" on page 3-20 for instructions on creating disk partitions using the DISKPART utility.
2. Repeat Step 1 to create all the required partitions.
3. Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and to ensure that none of the Oracle partitions have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:
■
Right-click the partition in the Windows disk administration tool
■
Select "Change Drive Letters and Paths..." from the menu
■
Click Remove in the "Change Drive Letter and Paths" window
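The partition layout described above can also be driven by a diskpart script, sketched here under assumptions for illustration (disk number and size are examples only; adjust them for your hardware):

```bat
rem Hypothetical diskpart script; run as: diskpart /s create_ocr_vote.txt
rem Disk 1 and the 520 MB size are illustrative. No drive letter is
rem assigned and the partition is left unformatted, as required above.
select disk 1
create partition extended
create partition logical size=520
```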
1.2.6.2 Stamp Disks for Oracle ASM
If you plan to use Oracle ASM to store the Oracle Clusterware files, then you must
perform one additional step. After you have created the disk partitions, the disks must
be stamped with a header before they can be used by Oracle ASM. You can configure
the disk partitions manually by using either asmtoolg (GUI version) or asmtool
(command line version).
For more information about configuring your disks with asmtoolg, refer to the
section "Using asmtoolg (Graphical User Interface)" on page 3-17. To configure the
disks with asmtool, refer to the section "Using asmtool (Command Line)" on
page 3-18.
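As an illustration of the command-line route, stamping a partition with asmtool generally takes a device name and a label; the device path and label below are examples only, so verify the exact syntax against the referenced section before use:

```bat
rem Hypothetical example: stamp the first partition of Harddisk1 with an
rem Oracle ASM header labeled ORCLDISKDATA0.
asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0
```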
1.3 Install Oracle Grid Infrastructure Software
For information, review "Installing Grid Infrastructure with OUI" on page 4-5.
1. Start OUI from the root level of the installation media. For example:
cd D:
setup.exe
2. Select Install and Configure Grid Infrastructure for a Cluster, then select Typical Installation. In the installation screens that follow, enter the configuration information as prompted.
3. After you have installed Oracle Grid Infrastructure, apply any available patch sets or critical patches for Oracle Clusterware and Oracle ASM 11g release 2 (11.2).
See Also: Chapter 4, "Installing Oracle Grid Infrastructure for a
Cluster"
Advanced Installation Oracle Grid
Infrastructure for a Cluster Preinstallation
Tasks
This chapter describes the system configuration tasks that you must complete before
you start Oracle Universal Installer (OUI) to install Oracle Grid Infrastructure.
This chapter contains the following topics:
■
Installation Differences Between Windows and Linux or UNIX
■
Reviewing Upgrade Best Practices
■
Checking Hardware and Software Certification
■
Checking the Hardware Requirements
■
Checking the Disk Space Requirements
■
Checking the Network Requirements
■
Identifying Software Requirements
■
Network Time Protocol Setting
■
Enabling Intelligent Platform Management Interface
■
Checking Individual Component Requirements
■
Configuring User Accounts
■
Verifying Cluster Privileges
2.1 Installation Differences Between Windows and Linux or UNIX
If you are experienced with installing Oracle components in Linux or UNIX
environments, then note that many manual setup tasks required on Linux or UNIX are
not required on Windows. The key differences between Windows and Linux or UNIX
installations are:
■
Environment variables
On Windows systems, OUI updates the PATH environment variable during
installation, and does not require other environment variables to be set, such as
ORACLE_BASE, ORACLE_HOME, and ORACLE_SID. On Linux and UNIX systems,
you must manually set these environment variables.
■
Operating System Groups
On Windows systems, OUI creates the ORA_DBA group, which is used for
operating system authentication for Oracle instances. On Linux and UNIX
systems, you must create this and other operating system groups manually, and
they are used for granting permission to access various Oracle software resources
and for operating system authentication. Windows does not use an Oracle
Inventory group.
■ Account for running OUI
On Windows systems, you log in as the Administrator user or as a user that is a
member of the local Administrators group. You do not need a separate account.
On Linux and UNIX systems, you must create and use a software owner user
account, and this user must belong to the Oracle Inventory group.
See Also: "Oracle Database Windows/UNIX Differences," in Oracle
Database Platform Guide for Microsoft Windows
2.2 Reviewing Upgrade Best Practices
Caution: Always create a backup of existing databases before starting any configuration change.
If you have an existing Oracle installation, then record your version numbers, patches,
and other configuration information. Before proceeding with installation of Oracle
Grid infrastructure, review the Oracle upgrade documentation to decide the best
method of upgrading your current software installation.
Note: To upgrade Oracle Clusterware Release 10.2 to Oracle Clusterware Release 11g, you must first apply the 10.2.0.3 or later patch set.
You can upgrade a clustered Oracle Automatic Storage Management (Oracle ASM)
installation without shutting down an Oracle Real Application Clusters (Oracle RAC)
database by performing a rolling upgrade either of individual nodes, or of a set of
nodes in the cluster. However, if you have a standalone database on a cluster that uses
Oracle ASM, then you must shut down the standalone database before upgrading.
If you have an existing standalone, or non-clustered, Oracle ASM installation, then
review Oracle upgrade documentation. The location of the Oracle ASM home changes
in this release, and you may want to consider other configuration changes to simplify
or customize storage administration.
During rolling upgrades of the operating system, Oracle supports using different
operating system binaries when both versions of the operating system are certified
with the Oracle Database release you are using.
Note: Using mixed operating system versions is only supported for the duration of an upgrade, over the period of a few hours. Oracle does not support operating a cluster with mixed operating systems for an extended period. Oracle does not support running Oracle grid infrastructure and Oracle RAC on heterogeneous platforms (servers with different chip architectures) in the same cluster.
To find the most recent software updates, and to find best practices recommendations
about preupgrade, postupgrade, compatibility, and interoperability, refer to "Oracle
Upgrade Companion." "Oracle Upgrade Companion" is available through Note
785351.1 on My Oracle Support:
https://support.oracle.com
2.3 Checking Hardware and Software Certification
The following sections provide certification information:
■ View Certification Information at My Oracle Support
■ Web Browser Support
■ Windows Telnet and Terminal Services Support
2.3.1 View Certification Information at My Oracle Support
The hardware and software requirements included in this installation guide were
current at the time this guide was published. However, because new platforms and
operating system software versions might be certified after this guide is published,
review the certification matrix on the My Oracle Support Web site for the most up-to-date list of certified hardware platforms and operating system versions. This Web site
also provides compatible client and database versions, patches, and workaround
information for bugs.
The My Oracle Support Web site is available at the following URL:
http://support.oracle.com/
You must register online before using My Oracle Support. After logging in, click the
More... tab, then select Certifications. In the Find Certification Information field,
choose the following:
■ Product Line: Oracle Database Products
■ Product Family: Oracle Database
■ Product Area: Oracle Database
■ Product: Oracle Server - Enterprise Edition
■ Product Release: 11gR2 RAC
■ Product Version: 11gR2 RAC
■ Platform: Microsoft Windows x64 (64-bit)
After you have made these selections, click Search. Click the Certified link next to the
value of Platform Version that matches your operating system, for example, 2008 R2.
Click the link for Certification notes to check the Certification Matrix for Oracle RAC
to ensure that your hardware configuration is supported for use with Oracle
Clusterware and Oracle RAC. My Oracle Support contains guidance about supported
hardware options that can assist you with your purchasing decisions and installation
planning.
In addition to specific certified hardware configurations, the Certify page provides
support and patch information, and general advice about how to proceed with an
Oracle Clusterware with Oracle RAC 11g release 2 (11.2) installation, including
important information about configuration issues. See the Product Notes and Platform Notes for this additional information.
Note: Contact your Oracle sales representative if you do not have a My Oracle Support account.
2.3.2 Web Browser Support
On 64-bit Windows systems, Microsoft Internet Explorer 6.0 and higher is supported
for Oracle Enterprise Manager Database Control and Oracle Enterprise Manager Grid
Control.
2.3.3 Windows Telnet and Terminal Services Support
This section contains these topics:
■ Windows Telnet Services Support
■ Windows Terminal Services and Remote Desktop Support
2.3.3.1 Windows Telnet Services Support
Windows Server 2003 and Windows Server 2003 R2 can use a Telnet Service to enable
remote users to log on to the operating system and run console programs using the
command line. Oracle supports the use of database command line utilities such as
sqlplus, export, import and sqlldr using this feature, but does not support the
database graphical user interface (GUI) tools such as OUI, Database Configuration
Assistant (DBCA), and Oracle Net Configuration Assistant (NetCA).
Note: Ensure that the Telnet service is installed and started.
2.3.3.2 Windows Terminal Services and Remote Desktop Support
Oracle supports installing, configuring, and running Oracle Database through
Terminal Services (console mode) on Windows 2003 Server. If you do not use Terminal
Services in console mode, then you might encounter problems with configuration
assistants after the installation or with starting Oracle Clusterware components after
installation.
To start Terminal Services in console mode, enter the following command:
mstsc -v:servername /F /console
Note: The information in this section does not apply to systems running Windows Server 2008 or higher.
2.4 Checking the Hardware Requirements
Each system must meet the following minimum hardware requirements:
■ At least 1.5 gigabytes (GB) of physical RAM for grid infrastructure for a cluster
installations without Oracle RAC; at least 2.5 GB of physical RAM if you plan to
install Oracle RAC after installing grid infrastructure for a cluster.
To determine the physical RAM size, for a computer using Windows 2003 R2, open
System in the control panel and select the General tab.
■ Virtual memory: double the amount of RAM
To determine the size of the configured virtual memory (also known as paging file
size), open the Control Panel and select System. Select the Advanced tab and then
click Settings in the Performance section. In the Performance Options window,
click the Advanced tab to see the virtual memory configuration.
If necessary, refer to your operating system documentation for information about
how to configure additional virtual memory.
■ Video adapter: 256 color and at least 1024 x 768 display resolution, so that OUI
displays correctly
■ Processor: Intel Extended Memory 64 Technology (EM64T) or AMD 64 for 64-bit.
The minimum processor speed is 1 gigahertz (GHz) for all supported Windows
Servers except for Windows Server 2008 R2, where the minimum processor speed
is 1.4 GHz.
Note: While Oracle Database for Microsoft Windows can run on supported 32-bit systems, Oracle Real Application Clusters, Oracle Clusterware, and Oracle Automatic Storage Management are only supported on 64-bit Windows systems.
The Oracle Database software for Itanium is supported only on Itanium hardware.
To view your processor speed, perform the following steps:
1. From the Start menu, select Run.... In the Run window, type in msinfo32.exe.
2. In the System Summary display, locate the System Type entry. If the value for
System Type is x64-based PC, then you have a 64-bit system. If the value for
System Type is x86-based PC, then you have a 32-bit system.
3. Locate the Processor entry. If necessary, scroll to the right until you can see the
end of the Processor value. The last part of this string shows the processor
speed, for example, ~2612 megahertz (MHz), which corresponds to 2.61 GHz.
To determine whether your computer is running a 64-bit Windows operating
system, perform the following steps:
1. Right-click My Computer and select Properties.
2. On the General tab, under the heading of System, view the displayed text.
– On Windows Server 2003 and Windows Server 2003 R2, you will see text
similar to "x64 Edition" if you have the 64-bit version of the operating
system installed.
– On Windows Server 2008 and Windows Server 2008 R2, you will see text
similar to "64-bit Operating System" if you have the 64-bit version of the
operating system installed.
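As a scripted alternative to the manual check above, the machine architecture can be reported programmatically. This is a rough sketch using Python's standard library, not an Oracle-supplied tool, and the set of 64-bit machine strings is an assumption:

```python
import platform

# Report whether this machine is 64-bit, analogous to checking the
# System Type entry (x64-based PC vs. x86-based PC) in msinfo32.
machine = platform.machine().lower()
is_64bit = machine in ("amd64", "x86_64", "arm64", "aarch64")
print(machine, "->", "64-bit" if is_64bit else "32-bit")
```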
2.5 Checking the Disk Space Requirements
The requirements for disk space on your server are described in the following sections:
■ Disk Format Requirements
■ Disk Space Requirements for Oracle Home Directories
■ TEMP Disk Space Requirements
2.5.1 Disk Format Requirements
Oracle recommends that you install Oracle software, or binaries, on New Technology
File System (NTFS) formatted drives or partitions. Because it is difficult for OUI to
estimate NTFS and file allocation table (FAT) disk sizes on Windows, the system
requirements documented in this section are likely more accurate than the values
reported on the OUI Summary screen.
Note: Oracle Grid Infrastructure software is not supported on Network File System (NFS).
You cannot use NTFS formatted disks or partitions for Oracle Clusterware files or data
files because they cannot be shared. Oracle Clusterware shared files and Oracle
Database data files can be placed on unformatted (raw) basic disks that are managed
by Oracle ASM or Oracle Cluster File System for Windows (Oracle OCFS for
Windows).
Oracle ASM is recommended for managing Oracle Clusterware and Oracle Database
data files.
2.5.2 Disk Space Requirements for Oracle Home Directories
At least 3 GB of disk space is required for the grid infrastructure home (Grid home). The Grid home includes Oracle Clusterware and Oracle ASM software, configuration, and log files.
Additional disk space on a cluster file system or shared disks is required for the Oracle
cluster registry (OCR) and voting files used by Oracle Clusterware.
To determine the amount of free disk space, open My Computer, right-click the drive
where the Oracle software is to be installed, and choose Properties.
If you are installing Oracle Database, then you must configure additional disk
space for:
■ The Oracle Database software and log files
■ The shared data files and, optionally, the shared Fast Recovery Area on either a file
system or in an Oracle Automatic Storage Management disk group
See Also:
■ Chapter 3, "Configuring Storage for Grid Infrastructure for a Cluster and Oracle RAC"
■ Oracle Database Storage Administrator's Guide
2.5.3 TEMP Disk Space Requirements
The amount of disk space available in the TEMP directory is equivalent to the total
amount of free disk space, minus what will be needed for the Oracle software to be
installed.
You must have 1 GB of disk space available in the TEMP directory. If you do not have
sufficient space, then first delete all unnecessary files. If the temp disk space is still less
than the required amount, then set the TEMP environment variable to point to a
different hard drive.
To modify the TEMP environment variable, open the System control panel, select the Advanced tab, and click Environment Variables.
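The free-space requirement can also be checked with a short script. This is a sketch using Python's standard library, with the 1 GB threshold taken from this section:

```python
import shutil
import tempfile

REQUIRED_BYTES = 1 * 1024**3  # this section requires 1 GB free in TEMP

temp_dir = tempfile.gettempdir()          # honors the TEMP environment variable
free = shutil.disk_usage(temp_dir).free   # free bytes on the volume holding TEMP
print(temp_dir, free >= REQUIRED_BYTES)
```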
Note: The temporary directory must reside in the same directory path on each node in the cluster.
2.6 Checking the Network Requirements
Review the following sections to check that you have the networking hardware and
internet protocol (IP) addresses required for an Oracle grid infrastructure for a cluster
installation:
■ Network Hardware Requirements
■ IP Address Requirements
■ DNS Configuration for Domain Delegation to Grid Naming Service
■ Grid Naming Service Configuration Example
■ Manual IP Address Configuration Example
■ Network Interface Configuration Options
Note: For the most up-to-date information about supported network protocols and hardware for Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site at the following URL:
https://support.oracle.com
2.6.1 Network Hardware Requirements
The following is a list of requirements for network configuration:
■ The host name of each node must use only the characters a-z, A-Z, 0-9, and the
dash or minus sign (-). Host names using underscores (_) are not allowed.
■ Each node must have at least two network adapters or network interface cards
(NICs): one for the public network interface, and one for the private network
interface (the interconnect).
To use multiple NICs for the public network or for the private network, Oracle
recommends that you use NIC teaming. Use separate teaming for the public and
private networks, because during installation each network connection is defined
as a public or private interface.
■ The private and public network connection names must be different from each
other and cannot contain any multibyte language characters. The names are
case-sensitive.
■ The public network connection names associated with the network adapters for
each network must be the same on all nodes, and the private network connection
names associated with the network adapters must be the same on all nodes.
For example: With a two-node cluster, you cannot configure network adapters on
node1 with NIC1 as the public network connection name, but on node2 have
NIC2 as the public network connection name. Public network connection names
must be the same, so you must configure NIC1 as public on both nodes. You must
also configure the network adapters for the private network connection with the
same network connection name. If PrivNIC is the private network connection
name for node1, then PrivNIC must be the private network connection name for
node2.
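The naming rule illustrated above can be expressed as a simple consistency check. This is a sketch; the node and connection names below are the example names from this section:

```python
# Each role (public/private) must use the same network connection name on
# every node; names here are the example names from this section.
node_interfaces = {
    "node1": {"public": "NIC1", "private": "PrivNIC"},
    "node2": {"public": "NIC1", "private": "PrivNIC"},
}

def names_consistent(nodes):
    """True if, for each role, every node uses the same connection name."""
    return all(
        len({ifaces[role] for ifaces in nodes.values()}) == 1
        for role in ("public", "private")
    )

print(names_consistent(node_interfaces))  # True

# The configuration the text warns against: NIC2 as public on node2 only.
bad = {
    "node1": {"public": "NIC1", "private": "PrivNIC"},
    "node2": {"public": "NIC2", "private": "PrivNIC"},
}
print(names_consistent(bad))  # False
```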
■ In Windows Networking Properties, the public network connection on each node
must be listed first in the bind order (the order in which network services access
the node). The private network connection should be listed second.
To ensure that your public adapter is first in the bind order, follow these steps:
1. Right-click My Network Places and choose Properties.
2. In the Advanced menu, click Advanced Settings.
3. If the public adapter name is not the first name listed under the Adapters and
Bindings tab, then select it and click the arrow to move it to the top of the list.
4. Click OK to save the setting and then exit the network setup dialog.
■ For the public and private networks, each network adapter must support
transmission control protocol internet protocol (TCP/IP).
■ For the private network, the interconnect must support the user datagram protocol
(UDP) using high-speed network adapters and switches that support TCP/IP
(minimum requirement 1 Gigabit Ethernet).
Note: UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch.
Oracle does not support token-rings or crossover cables for the interconnect.
■ Windows Media Sensing must be disabled for the private network connection.
To disable Windows Media Sensing for TCP/IP, you must set the value of the
DisableDHCPMediaSense parameter to 1 on each node. Because you must
modify the Windows registry to disable Media Sensing, you should first back up
the registry and confirm that you can restore it, using the methods described in
your Windows documentation. Disable Media Sensing by completing the
following steps on each node of your cluster:
1. Back up the Windows registry.
2. Use Registry Editor (Regedt32.exe) to view the following key in the
registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
3. Add the following registry value of type DWORD:
Value Name: DisableDHCPMediaSense
Data Type: REG_DWORD - Boolean
Value: 1
4. Restart the computer.
■ For the private network, the endpoints of all designated interconnect interfaces
must be completely reachable on the network. Every node must be connected to
every private network interface. You can test whether an interconnect interface is
reachable using ping.
■ During installation, you are asked to identify the planned use for each network
connection name that OUI detects on your cluster node. You must identify each
network connection name as a public or private network connection name, and
you must use the same private network connection names for both Oracle
Clusterware and Oracle RAC.
You can team separate interfaces to a common network connection to provide
redundancy for a NIC failure. Oracle RAC and Oracle Clusterware will share this
connection.
IP addresses on the subnet you identify as private are assigned as private IP
addresses for cluster member nodes. You do not have to configure these addresses
manually in a hosts file.
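For reference, the Media Sensing change described in the steps earlier in this section corresponds to the following .reg fragment (a sketch; back up the registry before merging a file like this):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
"DisableDHCPMediaSense"=dword:00000001
```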
2.6.2 IP Address Requirements
Before starting the installation, you must have at least two network interfaces
configured on each node: One for the private IP address and one for the public IP
address.
You can configure IP addresses with one of the following options:
■ Oracle Grid Naming Service (GNS), using a static public node address and
dynamically allocated IP addresses for the Oracle Clusterware-provided virtual IP
(VIP) addresses. The dynamic addresses are assigned by a Dynamic Host
Configuration Protocol (DHCP) server and resolved using a multicast domain
name system configured within the cluster.
■ Static addresses that network administrators assign on a network domain name
system (DNS) for each node. Selecting this option requires that you request
network administration updates when you modify the cluster.
Note: Oracle recommends that you use a static host name for all server node public host names.
Public IP addresses and virtual IP addresses must be in the same subnet.
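The same-subnet rule can be verified with Python's standard ipaddress module. This is an illustrative sketch: the addresses are the example values used later in this chapter, and the /24 prefix is an assumption:

```python
import ipaddress

# Example public network (a /24 prefix is assumed here for illustration).
public_subnet = ipaddress.ip_network("192.0.2.0/24")

# Public and VIP addresses from this chapter's examples.
addresses = ["192.0.2.101", "192.0.2.104", "192.0.2.102", "192.0.2.105"]

def all_in_subnet(addrs, subnet):
    """True if every public and VIP address falls inside the given subnet."""
    return all(ipaddress.ip_address(a) in subnet for a in addrs)

print(all_in_subnet(addresses, public_subnet))  # True

# A private interconnect address is on a different subnet, as required.
print(ipaddress.ip_address("192.168.0.1") in public_subnet)  # False
```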
2.6.2.1 IP Address Requirements with Grid Naming Service
If you enable GNS, then name resolution requests are delegated to the GNS service
through its VIP address. You define this address in the DNS domain before
installation. The DNS delegates name resolution requests for cluster names to the
GNS. The GNS processes the requests and responds with the list of addresses for the
names.
To use GNS, before installation the DNS administrator must configure DNS delegation to direct DNS resolution of the subdomain to the cluster.
See Also: "DNS Configuration for Domain Delegation to Grid
Naming Service" on page 2-10 for information on how to configure
DNS delegation
2.6.2.2 IP Address Requirements for Manual Configuration
If you do not enable GNS, then the public and VIP addresses for each node must be
static IP addresses, configured before installation for each node, but not currently in
use. Public and VIP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on network
interfaces you identify as private during the installation interview.
The cluster must have the following addresses configured:
■ A public IP address for each node
■ A VIP address for each node
■ A single client access name (SCAN) configured on the DNS for Round Robin
resolution to three addresses (recommended) or at least one address.
The SCAN is a host name used to provide service access for clients to the cluster.
Because the SCAN is associated with the cluster as a whole, rather than to a particular
node, the SCAN makes it possible to add or remove nodes from the cluster without
needing to reconfigure clients. It also adds location independence for the databases, so
that client configuration does not have to depend on which nodes are running a
particular database. Clients can continue to access the cluster in the same way as with
previous releases, but Oracle recommends that clients access the cluster using the
SCAN.
SCAN addresses should be defined in a DNS to resolve to the SCAN. The SCAN
addresses must be on the same subnet as VIP addresses and public IP addresses. The
SCAN must resolve to at least one address. For high availability and scalability, Oracle
recommends that you configure the SCAN to use Round Robin resolution to three
addresses. The name for the SCAN cannot begin with a numeral.
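Round Robin resolution can be pictured with a small simulation. The SCAN name and addresses below are the example values from this chapter; the rotation logic only illustrates the effect of Round Robin DNS, not how a DNS server implements it:

```python
import itertools
import re

# Example SCAN name and the three addresses it resolves to (values from
# this chapter's examples).
scan_name = "mycluster-scan.grid.example.com"
scan_addresses = ["192.0.2.201", "192.0.2.202", "192.0.2.203"]

def scan_name_valid(name):
    """The SCAN cannot begin with a numeral; labels use letters, digits, '-'."""
    first_label = name.split(".")[0]
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", first_label) is not None

# Round Robin DNS rotates the address list on each lookup, spreading client
# connections across the three SCAN listeners.
_offsets = itertools.cycle(range(len(scan_addresses)))

def resolve_scan():
    start = next(_offsets)
    return scan_addresses[start:] + scan_addresses[:start]

print(scan_name_valid(scan_name))  # True
print(resolve_scan()[0])           # 192.0.2.201
print(resolve_scan()[0])           # 192.0.2.202
```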
Note: If you manually configure SCAN VIP addresses, then Oracle strongly recommends that you do not configure SCAN VIP addresses in the system hosts file. Use DNS resolution for SCAN VIPs. If you use the system hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address.
See Also: Appendix C, "Understanding Network Addresses" for more information about network addresses
2.6.3 DNS Configuration for Domain Delegation to Grid Naming Service
If you plan to use GNS, then before grid infrastructure installation, you must configure
your DNS to send to GNS any name resolution requests for the subdomain served by
GNS. This subdomain represents the cluster member nodes.
You must configure the DNS to send GNS name resolution requests using DNS
forwarders. If the DNS server is running on a Windows server that you administer,
then perform the following steps to configure DNS:
1. Click Start, then select Programs. Select Administrative Tools and then click DNS
manager. The DNS server configuration wizard starts automatically. Use the
wizard to create an entry for the GNS VIP address. For example:
gns-server.clustername.com: 192.0.2.1
The address you provide must be static and routable.
2. To configure DNS forwarders, click Start, select Administrative Tools, and then
select DNS.
3. Right-click ServerName, where ServerName is the name of the server, and then click
the Forwarders tab.
4. Click New, then type the name of the DNS domain for which you want to forward
queries in the DNS domain box, for example, clusterdomain.example.com.
Click OK.
5. In the selected domain's forwarder IP address box, type the GNS VIP address, and
then click Add.
6. Click OK to exit.
If the DNS server is running on a different operating system, then refer to the Oracle
Clusterware Installation Guide for that platform, or your operating system
documentation.
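Conceptually, the Windows steps above create a delegation equivalent to the following BIND-style zone-file fragment (a sketch only; the names and address are the examples used elsewhere in this chapter):

```
; Delegate the cluster subdomain to the GNS VIP.
grid.example.com.          IN NS  mycluster-gns.example.com.
mycluster-gns.example.com. IN A   192.0.2.1
```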
Note: Experienced DNS administrators may want to create a reverse lookup zone to enable resolution of reverse lookups. A reverse lookup resolves an IP address to a host name with a Pointer Resource (PTR) record. If you have your reverse DNS zones configured, then you can automatically create associated reverse records when you create your original forward record.
2.6.4 Grid Naming Service Configuration Example
If you use GNS, then you must specify a static IP address for the GNS VIP address, and you must delegate a subdomain to that static GNS VIP address.
As nodes are added to the cluster, your organization’s DHCP server can provide
addresses for these nodes dynamically. These addresses are then registered
automatically in GNS, and GNS provides resolution within the subdomain to cluster
node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with
GNS, no further configuration is required. Oracle Clusterware provides dynamic
network configuration as nodes are added to or removed from the cluster. The
following example is provided only for information.
With a two node cluster where you have defined the GNS VIP, after installation you
might have a configuration similar to the following, where the cluster name is
mycluster, the GNS parent domain is example.com, the subdomain is
grid.example.com, 192.0.2 in the IP addresses represents the cluster public IP
address network, and 192.168.0 represents the private IP address network:
Table 2–1 Example of a Grid Naming Service Network

Identity       | Home Node | Host Node                      | Given Name                      | Type    | Address     | Address Assigned By        | Resolved By
GNS VIP        | n/a       | Selected by Oracle Clusterware | mycluster-gns.example.com       | Virtual | 192.0.2.1   | Fixed by net administrator | DNS
Node 1 Public  | node1     | node1                          | node1 (1)                       | Public  | 192.0.2.101 | Fixed                      | GNS
Node 1 VIP     | node1     | Selected by Oracle Clusterware | node1-vip                       | Virtual | 192.0.2.104 | DHCP                       | GNS
Node 1 Private | node1     | node1                          | node1-priv                      | Private | 192.168.0.1 | Fixed or DHCP              | GNS
Node 2 Public  | node2     | node2                          | node2 (1)                       | Public  | 192.0.2.102 | Fixed                      | GNS
Node 2 VIP     | node2     | Selected by Oracle Clusterware | node2-vip                       | Virtual | 192.0.2.105 | DHCP                       | GNS
Node 2 Private | node2     | node2                          | node2-priv                      | Private | 192.168.0.2 | Fixed or DHCP              | GNS
SCAN VIP 1     | n/a       | Selected by Oracle Clusterware | mycluster-scan.grid.example.com | Virtual | 192.0.2.201 | DHCP                       | GNS
SCAN VIP 2     | n/a       | Selected by Oracle Clusterware | mycluster-scan.grid.example.com | Virtual | 192.0.2.202 | DHCP                       | GNS
SCAN VIP 3     | n/a       | Selected by Oracle Clusterware | mycluster-scan.grid.example.com | Virtual | 192.0.2.203 | DHCP                       | GNS

(1) Node host names may resolve to multiple addresses, including any private IP addresses or VIP addresses currently running on that host.
2.6.5 Manual IP Address Configuration Example
If you choose not to use GNS, then before installation you must configure public,
virtual, and private IP addresses. Also, check that the default gateway can be accessed
by a ping command. To find the default gateway, use the ipconfig command, as
described in your operating system's help utility.
For example, with a two node cluster where each node has one public and one private
interface, and you have defined a SCAN domain address to resolve on your DNS to
one of three IP addresses, you might have the configuration shown in the following
table for your network interfaces:
Table 2–2 Manual Network Configuration Example

Identity       | Home Node | Host Node                      | Given Name     | Type    | Address     | Address Assigned By | Resolved By
Node 1 Public  | node1     | node1                          | node1 (1)      | Public  | 192.0.2.101 | Fixed               | DNS
Node 1 VIP     | node1     | Selected by Oracle Clusterware | node1-vip      | Virtual | 192.0.2.104 | Fixed               | DNS, hosts file
Node 1 Private | node1     | node1                          | node1-priv     | Private | 192.168.0.1 | Fixed               | DNS, hosts file, or none
Node 2 Public  | node2     | node2                          | node2 (1)      | Public  | 192.0.2.102 | Fixed               | DNS
Node 2 VIP     | node2     | Selected by Oracle Clusterware | node2-vip      | Virtual | 192.0.2.105 | Fixed               | DNS, hosts file
Node 2 Private | node2     | node2                          | node2-priv     | Private | 192.168.0.2 | Fixed               | DNS, hosts file, or none
SCAN VIP 1     | n/a       | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.201 | Fixed               | DNS
SCAN VIP 2     | n/a       | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.202 | Fixed               | DNS
SCAN VIP 3     | n/a       | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.203 | Fixed               | DNS

(1) Node host names may resolve to multiple addresses.
You do not have to provide a private name for the interconnect. If you want name
resolution for the interconnect, then you can configure private IP names in the system
hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (Local Area Connection 2, for example), and on the subnet used for the private network.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so
they are not fixed to a particular node. To enable VIP failover, the configuration shown
in the preceding table defines the SCAN addresses and the public and VIP addresses
of both nodes on the same subnet, 192.0.2.
Note: All host names must conform to the Internet Engineering Task Force RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
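A host name can be checked against this rule with a short script; the regular expression below is a sketch of the constraints described in this note:

```python
import re

# RFC 952-style check: a name starts with a letter and contains only
# letters, digits, and hyphens; underscores are rejected.
HOSTNAME_RE = re.compile(r"[A-Za-z](?:[A-Za-z0-9-]*[A-Za-z0-9])?")

def is_valid_hostname(name):
    return len(name) <= 63 and HOSTNAME_RE.fullmatch(name) is not None

print(is_valid_hostname("node1"))   # True
print(is_valid_hostname("node_1"))  # False: underscores are not allowed
print(is_valid_hostname("-node"))   # False: must start with a letter
```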
2.6.6 Network Interface Configuration Options
The precise configuration you choose for your network depends on the size and use of
the cluster you want to configure, and the level of availability you require.
If certified Network-attached Storage (NAS) is used for Oracle RAC and this storage is
connected through Ethernet-based networks, then you must have a third network
interface for NAS I/O. Failing to provide three separate interfaces in this case can
cause performance and stability problems under heavy system loads.
2.7 Identifying Software Requirements
Depending on the products that you intend to install, verify that the following
operating system software is installed on the system.
Note: OUI performs checks on your system to verify that it meets the listed operating system requirements. To ensure that these checks complete successfully, verify the requirements before you start OUI.
Oracle does not support running different operating system versions on cluster members, unless an operating system is being upgraded. You cannot run different operating system version binaries on members of the same cluster, even if each operating system is supported.
Table 2–3 lists the software requirements for Oracle Grid Infrastructure and Oracle
RAC 11g Release 2 (11.2).
Table 2–3 Oracle Grid Software Requirements for Windows Systems

Requirement: System Architecture
Value: Processor: AMD64, or Intel Extended Memory 64 Technology (EM64T)
Note: Oracle provides only 64-bit (x64) versions of Oracle Database with Oracle RAC for Windows. The 64-bit (x64) version of Oracle RAC runs on the 64-bit version of Windows on AMD64 and EM64T hardware. For additional information, visit My Oracle Support at the following URL:
http://support.oracle.com/

Requirement: Operating system for 64-bit Windows
Value: Oracle Grid Infrastructure and Oracle RAC for x64 Windows:
■ Windows Server 2003 x64 with service pack 1 or higher.
■ Windows Server 2003 R2 x64.
■ Windows Server 2008 x64 Standard, Enterprise, Datacenter, Web, and Foundation editions.
■ Windows Server 2008 R2 x64 Standard, Enterprise, Datacenter, Web, and Foundation editions.
The Windows Multilingual User Interface Pack and Terminal Services are supported.
NOTE: Oracle Clusterware, Oracle ASM and Oracle RAC 11g release 2 are not supported on any 32-bit Windows operating systems.

Requirement: Compiler for x64 Windows
Value: Pro*Cobol has been tested and is certified with Micro Focus Net Express 5.0. Object Oriented COBOL (OOCOBOL) specifications are not supported.
The following components are supported with the Microsoft Visual C++ .NET 2005 9.0 and Intel 10.1 C compilers:
■ Oracle Call Interface (OCI)
■ Pro*C/C++
■ External callouts
■ Oracle XML Developer's Kit (XDK)
Oracle C++ Call Interface (OCCI) is supported with:
■ Microsoft Visual C++ .NET 2005 8.0
■ Microsoft Visual C++ .NET 2008 9.0 - OCCI libraries are installed under ORACLE_HOME\oci\lib\msvc\vc9. When developing OCCI applications with MSVC++ 9.0, ensure that the OCCI libraries are correctly selected from this directory for linking and executing.
■ Intel 10.1 C++ compiler with the relevant Microsoft Visual C++ .NET STLs

Requirement: Network Protocol
Value: Oracle Net foundation layer uses Oracle protocol support to communicate with the following industry-standard network protocols:
■ TCP/IP
■ TCP/IP with secure sockets layer (SSL)
■ Named Pipes
If you are currently running an operating system version that is not supported by
Oracle Database 11g release 2 (11.2), such as Windows 2000, then you must first
upgrade your operating system before upgrading to Oracle Database 11g RAC.
If you are currently running a cluster with Oracle9i Clusterware and want to continue
to use it, then you must upgrade to the latest patchset for Oracle9i Clusterware to
ensure compatibility between Cluster Manager Services in Oracle9i Clusterware and
Oracle Clusterware 11g release 2 (11.2).
2.7.1 Windows Firewall Feature on Windows Servers
When installing Oracle Grid Infrastructure software or Oracle RAC software on Windows servers, you must disable the Windows Firewall feature. If the Windows Firewall is enabled, then remote copy and configuration assistants such as the Virtual IP Configuration Assistant (VIPCA), NetCA, and DBCA will fail during Oracle RAC installation. Thus, the firewall must be disabled on all the nodes of a cluster before performing an Oracle RAC installation.
Note: The Windows Firewall should never be enabled on a NIC that
is used as a cluster interconnect (private network interface).
After the installation is successful, you can enable the Windows Firewall for the public
connections. However, to ensure correct operation of the Oracle software, you must
add certain executables and ports to the Firewall exception list on all the nodes of a
cluster. See Section 5.1.2, "Configure Exceptions for the Windows Firewall" for details.
Additionally, the Windows Firewall must be disabled on all the nodes in the cluster
before performing any clusterwide configuration changes, such as:
■ Adding a node
■ Deleting a node
■ Upgrading to a patch release
■ Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the
changes might not be propagated correctly to all the nodes of the cluster.
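You can disable or re-enable the Windows Firewall from an elevated command prompt on each node. The following commands are a sketch only; the exact command depends on the Windows version (netsh firewall applies to Windows Server 2003, netsh advfirewall to Windows Server 2008 and later):
C:\> netsh firewall set opmode disable             (Windows Server 2003)
C:\> netsh advfirewall set allprofiles state off   (Windows Server 2008 and later)
After installation, reverse the change (opmode enable, or state on) and add the required executables and ports to the exception list as described in Section 5.1.2.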
2.8 Network Time Protocol Setting
Each node in the cluster must use the same time reference. Follow the instructions in
one of the following sections to configure time synchronization for your cluster nodes:
■ Configuring the Windows Time Service
■ Configuring Network Time Protocol
■ Configuring Cluster Time Synchronization Service
2.8.1 Configuring the Windows Time Service
The Windows Time service (W32Time) provides network clock synchronization on
computers running Microsoft Windows. If you are using Windows Time service, and
you prefer to continue using it instead of Cluster Time Synchronization Service, then
you must modify the Windows Time service settings to prevent the time from being
adjusted backward. Restart the Windows Time service after you complete this task.
To configure Windows Time service, use the following command on each node:
C:\> W32tm /register
To modify the Windows Time service to prevent it from adjusting the time backward,
perform the following steps:
1. Open the Registry Editor (regedit).
2. Locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config key.
3. Set the value for MaxNegPhaseCorrection to 0.
4. To put the change into effect, use the following command:
C:\> W32tm /config /update
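As an alternative to editing the registry interactively, the same change can be made from a command prompt on each node. This is a sketch of the steps above using the reg utility, followed by the service restart that this task requires:
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0 /f
C:\> W32tm /config /update
C:\> net stop w32time
C:\> net start w32time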
2.8.2 Configuring Network Time Protocol
The Network Time Protocol (NTP) is a client/server application. Each server must
have NTP client software installed and configured to synchronize its clock to the
network time server. The Windows Time service is not an exact implementation of NTP, but it is based on the NTP specifications.
If you decide to use NTP instead of the Windows Time service, then, after you have
installed the NTP client software on each node server, you must start the NTP service
with the -x option to prevent time from being adjusted backward.
To ensure the NTP service is running with the -x option, perform the following steps:
1. Use the Registry Editor to edit the value for the ntpd executable under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTP.
2. Add the -x option to the ImagePath key value, behind %INSTALLDIR%\ntpd.exe.
3. Restart the NTP service using the following commands:
net stop NTP
net start NTP
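For example, after the edit in step 2, the modified ImagePath value would resemble the following (a sketch; the actual installation directory varies):
%INSTALLDIR%\ntpd.exe -x
You can confirm the value before and after the change with:
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\NTP /v ImagePath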
2.8.3 Configuring Cluster Time Synchronization Service
To use Cluster Time Synchronization Service to provide synchronization service in the
cluster, disable the Windows Time service and stop the NTP service.
When the installer finds that neither the Windows Time service nor the NTP service is active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If the Windows Time service or the NTP service is active, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
To confirm that the Cluster Time Synchronization Service is active after installation,
enter the following command as the Grid installation owner:
crsctl check ctss
2.9 Enabling Intelligent Platform Management Interface
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces
to computer hardware and firmware that system administrators can use to monitor
system health and manage the system. With Oracle Database 11g release 2, Oracle
Clusterware can integrate IPMI to provide failure isolation support and to ensure
cluster integrity.
You can configure node termination with IPMI during installation by selecting a node-termination protocol, such as IPMI. You can also configure IPMI after installation with
crsctl commands.
See Also: Oracle Clusterware Administration and Deployment Guide for information about how to configure IPMI after installation
2.9.1 Requirements for Enabling IPMI
You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:
■ Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5 or greater, which supports IPMI over local area networks (LANs), and configured for remote control using LAN.

Note: On servers running Windows 2008, you may have to upgrade the basic I/O system (BIOS), system firmware, and BMC firmware before you can use IPMI. Refer to Microsoft Support Article ID 950257 (http://support.microsoft.com/kb/950257) for details.

■ Each cluster member node requires an IPMI driver installed on each node.
■ The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
■ Each cluster member node's Ethernet port used by the BMC must be connected to the IPMI management network.
■ Each cluster member must be connected to the management network.
■ Some server platforms put their network interfaces into a power saving mode when they are powered off. In this case, they may operate only at a lower link speed (for example, 100 megabits per second instead of 1 gigabit per second). For these platforms, the network switch port to which the BMC is connected must be able to auto-negotiate down to the lower speed, or IPMI will not function properly.
2.9.2 Configuring the IPMI Management Network
You can configure the BMC for DHCP, or for static IP addresses. Oracle recommends
that you configure the BMC for dynamic IP address assignment using DHCP. To use
this option, you must have a DHCP server configured to assign the BMC IP addresses.
Note: If you configure IPMI, and you use GNS, then you still must configure separate addresses for the IPMI interfaces. As the IPMI adapter is not seen directly by the host, the IPMI adapter is not visible to GNS as an address on the host.
2.9.3 Configuring the IPMI Driver
For Oracle Clusterware to communicate with the BMC, the IPMI driver must be
installed permanently on each node, so that it is available on system restarts. On
Windows systems, the implementation assumes the Microsoft IPMI driver
(ipmidrv.sys), which is included on Windows Server 2008 and later versions of the
Windows operating system. The driver is included as part of the Hardware
Management feature, which includes the driver and the Windows Management
Interface (WMI).
Note:
■ The ipmidrv.sys driver is not supported by default on Windows Server 2003. It is available for Windows Server 2003 R2, but is not installed by default.
■ An alternate driver (imbdrv.sys) is available from Intel as part of Intel Server Control, but this driver has not been tested with Oracle Clusterware.
2.9.3.1 Configuring the Hardware Management Component
Hardware Management is not installed and enabled by default on Windows Server 2003 systems. Hardware Management is installed using the Add/Remove Windows Components Wizard:
1. Click Start, then select Control Panel.
2. Select Add or Remove Programs.
3. Click Add/Remove Windows Components.
4. Select (but do not check) Management and Monitoring Tools and click the Details button to display the detailed components selection window.
5. Select the Hardware Management option.
If a BMC is detected through the system management BIOS (SMBIOS) Table Type 38h, then a dialog box is displayed instructing you to remove any third-party drivers. If no third-party IPMI drivers are installed, or they have been removed from the system, then click OK to continue.

Note: The Microsoft driver is incompatible with other drivers. Any third-party drivers must be removed.

6. Click OK to select the Hardware Management Component, and then click Next. Hardware Management (including Windows Remote Management, or WinRM) will be installed.
After the driver and hardware management have been installed, the BMC should be
visible in the Windows Device Manager under System devices with the label
"Microsoft Generic IPMI Compliant Device". If the BMC is not automatically detected
by the plug and play system, then the device must be created manually.
To create the IPMI device, run the following command:
rundll32 ipmisetp.dll,AddTheDevice
2.9.3.2 Configuring the BMC Using ipmiutil on Windows 2003 R2
For IPMI-based fencing to function properly, the BMC hardware must be configured for remote control through the LAN. The BMC can be configured from the boot prompt (BIOS), using a platform-specific management utility, or using one of many publicly available utilities that can be downloaded from the Internet, such as:
IPMIutil, which is available for Linux and Windows:
http://ipmiutil.sourceforge.net
Refer to the documentation for these configuration tools for instructions on how to configure the BMC.
When you configure the BMC on each node, you must complete the following:
■ Enable IPMI over LAN, so that the BMC can be controlled over the management network.
■ Enable dynamic IP addressing using DHCP, or configure a static IP address for the BMC.
■ Establish an administrator user account and password for the BMC.
■ Configure the BMC for virtual LAN (VLAN) tags, if you will use the BMC on a tagged VLAN.
The configuration tool you use does not matter, but these conditions must be met for the BMC to function properly.
Example of BMC Configuration Using ipmiutil on Windows 2003 R2
The following is an example of configuring BMC using ipmiutil (version 2.2.3):
1. Open a command window while logged in as a member of the Administrators group.
2. After the driver is loaded and the device special file has been created, verify that ipmiutil can communicate with the BMC through the driver:
C:\> ipmiutil lan
ipmiutil ver 2.23
<PEFilter parameters displayed> . . .
pefconfig, GetLanEntry for channel 1 . . .
Lan Param(0) Set in progress: 00
. . . <remaining Lan Param info displayed>
The following steps establish the required configuration parameters described in this example.
Note: If you use the -l option, then ipmiutil sets certain LAN parameters only for enabling IPMI over LAN. This can have the undesired effect of resetting some previously established parameters to their default values if they are not supplied on the command line. Thus, the order of the following steps could be critical.
3. Establish remote LAN access with Administrator privileges (-v 4) and the desired user name and password (ipmiutil will find the LAN channel on its own):
C:\> ipmiutil lan -l -v 4 -u user_name -p password
4. Configure dynamic or static IP address settings for the BMC:
■ Using dynamic IP addressing (DHCP)
Dynamic IP addressing is the default assumed by OUI. Oracle recommends that you select this option so that nodes can be added or removed from the cluster more easily, as address settings can be assigned automatically.

Note: Use of DHCP requires a DHCP server on the subnet.

Set the channel. For example, if the channel is 1, then enter the following command to enable DHCP:
C:\> ipmiutil lan set -l -D
■ Using static IP addressing
If the BMC shares a network connection with the operating system, then the IP address must be on the same subnet. You must set not only the IP address, but also the proper values for the default gateway and the netmask for your network configuration. For example:
C:\> ipmiutil lan -l -I 192.168.0.55     (IP address)
C:\> ipmiutil lan -l -G 192.168.0.1      (gateway IP address)
C:\> ipmiutil lan -l -S 255.255.255.0    (netmask)
The specified address (192.168.0.55) will be associated only with the BMC, and will not respond to normal pings.

Note: Enabling IPMI over LAN with the -l option will reset the subnet mask to a value obtained from the operating system. Thus, when setting parameters one at a time using the ipmiutil lan -l command, as shown above, the -S option should be specified last.
5. Verify the setup. After the previous configuration steps have been completed, you can verify your settings on the node being configured as follows (the IP address, address source, subnet mask, default gateway, and user access entries reflect the settings just made):
C:\> ipmiutil lan
ipmiutil ver 2.23
pefconfig ver 2.23
-- BMC version 1.40, IPMI version 1.5
pefconfig, GetPefEntry ...
PEFilter(01): 04 h : event ... <skipping PEF entries>
...
pefconfig, GetLanEntry for channel 1 ...
Lan Param(0) Set in progress: 00
Lan Param(1) Auth type support: 17 : None MD2 MD5 Pswd
Lan Param(2) Auth type enables: 16 16 16 16 00
Lan Param(3) IP address: 192 168 0 55
Lan Param(4) IP address src: 01 : Static
Lan Param(5) MAC addr: 00 11 43 d7 4f bd
Lan Param(6) Subnet mask: 255 255 255 0
Lan Param(7) IPv4 header: 40 40 10
GetLanEntry: completion code=cc
GetLanEntry(10), ret = -1
GetLanEntry: completion code=cc
GetLanEntry(11), ret = -1
Lan Param(12) Def gateway IP: 192 168 0 1
Lan Param(13) Def gateway MAC: 00 00 0c 07 ac dc
...
Get User Access(1): 0a 01 01 0f : No access ()
Get User Access(2): 0a 01 01 14 : IPMI, Admin (user_name)
Get User Access(3): 0a 01 01 0f : No access ()
pefconfig, completed successfully
6. Finally, verify that the BMC is accessible and controllable from a remote node in your cluster:
C:\> ipmiutil health -N 192.168.0.55 -U user_name -P password
ipmiutil ver 2.23
bmchealth ver 2.23
Opening connection to node 192.168.0.55 ...
Connected to node racnode1.example.com 192.168.0.31
BMC version 1.23, IPMI version 1.5
BMC manufacturer = 0002a2 (Dell), product = 0000
Chassis Status  = 01 (on, restore_policy=stay_off)
Power State     = 00 (S0: working)
Selftest status = 0055 (OK)
Channel 1 Auth Types: MD2 MD5
Status = 14, OEM ID 000000 OEM Aux 00
bmchealth, completed successfully
2.10 Checking Individual Component Requirements
This section contains these topics:
■ Oracle Advanced Security Requirements
■ Oracle Enterprise Manager Requirements
2.10.1 Oracle Advanced Security Requirements
You must meet hardware and software requirements to use authentication support with Oracle components. Some Oracle Advanced Security components can use a Lightweight Directory Access Protocol (LDAP) directory, such as Oracle Internet Directory.

See Also: Oracle Database Advanced Security Administrator's Guide
2.10.2 Oracle Enterprise Manager Requirements
All Oracle Enterprise Manager products that you use on your system must be of the
same release. Older versions of Enterprise Manager are not supported with the current
release.
Note: All Oracle Enterprise Manager products, except Oracle Enterprise Manager Database Control, are released on the Enterprise Manager Grid Control installation media. Enterprise Manager Database Control is available on the Oracle Database installation media.

See Also: Oracle Enterprise Manager Grid Control Installation and Basic Configuration, available on the Enterprise Manager Grid Control installation media
2.11 Configuring User Accounts
To install the Oracle software, you must use a user that is a member of the
Administrators group. If you use a local user account for the installation, then the user
account must exist on all nodes in the cluster and the user name and password must
be the same on all nodes.
If you use a domain account for the installation, then the domain user must be
explicitly declared as a member of the local Administrators group on each node in the
cluster. It is not sufficient if the domain user has inherited membership from another
group. The user performing the installation must be in the same domain on each node.
For example, you cannot have a dba1 user on the first node in the DBADMIN domain
and a dba1 user on the second node in the RACDBA domain.
For example, assume that you have one Oracle installation owner, and the user name
for this Oracle installation owner is oracle. The oracle user must be either a local
Administrator user or a domain user, and the same user must exist (same user name,
password, and domain) on each node in the cluster.
If you intend to install Oracle Database, then the oracle user must be part of the ORA_DBA group. During installation, the user performing the installation is automatically added to the ORA_DBA group. If you use a domain user, then you must make sure the domain user on each node is a member of the ORA_DBA group.
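For example, to grant a hypothetical domain user DBADMIN\oracle the required local group memberships, you could run the following commands on each node (the domain and user names here are placeholders; the ORA_DBA group exists only after the Oracle software has been installed):
C:\> net localgroup Administrators DBADMIN\oracle /add
C:\> net localgroup ORA_DBA DBADMIN\oracle /add
Running net localgroup Administrators (or ORA_DBA) without further arguments lists the current members, so you can confirm the change on each node.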
2.11.1 Managing User Accounts with User Account Control
To ensure that only trusted applications run on your computer, Windows Server 2008
and Windows Server 2008 R2 provide User Account Control. If you have enabled this
security feature, then depending on how you have configured it, OUI prompts you for
either your consent or your credentials when installing Oracle Database. Provide
either the consent or your Windows Administrator credentials as appropriate.
You must have Administrator privileges to run some Oracle tools, such as DBCA,
NetCA, and OPatch, or to run any tool or application that writes to any directory
within the Oracle home. If User Account Control is enabled and you are logged in as
the local Administrator, then you can successfully run each of these commands.
However, if you are logged in as "a member of the Administrator group," then you
must explicitly invoke these tasks with Windows Administrator privileges.
All of the Oracle shortcuts that require Administrator privileges are invoked as
"Administrator" automatically when you click the shortcuts. However, if you run the
previously mentioned tools from a Windows command prompt, then you must run
them from an Administrative command prompt. OPatch does not have a shortcut and
has to be run from an Administrative command prompt.
2.12 Verifying Cluster Privileges
Before running OUI, from the node where you intend to run the installer, verify that
the user account you are using for the installation is configured as a member of the
Administrators group on each node in the cluster. To do this, enter the following
command for each node that is a part of the cluster where nodename is the node
name:
net use \\nodename\C$
If you will be using other disk drives in addition to the C: drive, then repeat this
command for every node in the cluster, substituting the drive letter for each drive you
plan to use.
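For example, with a two-node cluster (here given the placeholder names node1 and node2) and installations planned on the C: and D: drives, you would run:
C:\> net use \\node1\C$
C:\> net use \\node1\D$
C:\> net use \\node2\C$
C:\> net use \\node2\D$
Each command should report that it completed successfully; a prompt for credentials or an access error indicates a permissions problem that must be resolved before installation.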
The installation user must also be able to update the Windows registry on each node in
the cluster. To verify the installation user is configured to do this, perform the
following steps:
1. Run regedit from the Run menu or the command prompt.
2. From the File menu, choose Connect Network Registry.
3. In the "Enter the object name…" edit box, enter the name of a remote node in the cluster, then click OK.
4. Wait for the node to appear in the registry tree.
If the remote node does not appear in the registry tree or you are prompted to fill in a
username and password, then you must resolve the permissions issue at the operating
system level before proceeding with the Oracle Grid infrastructure installation.
Note: For the installation to be successful, you must use the same user name and password on each node in a cluster, or use a domain user. If you use a domain user, then you must have explicitly granted membership in the local Administrators group to the domain user on all of the nodes in your cluster.
3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle RAC
This chapter describes the storage configuration tasks that you must complete before
you start the installer to install Oracle Clusterware and Oracle Automatic Storage
Management (Oracle ASM), and that you must complete before adding an Oracle Real
Application Clusters (Oracle RAC) installation to the cluster.
This chapter contains the following topics:
■ Reviewing Storage Options
■ Preliminary Shared Disk Preparation
■ Storage Requirements for Oracle Clusterware and Oracle RAC
■ Configuring the Shared Storage Used by Oracle ASM
■ Configuring Storage for Oracle Database Files on OCFS for Windows
■ Configuring Direct NFS Storage for Oracle RAC Data Files
■ Desupport of Raw Devices
3.1 Reviewing Storage Options
This section describes the supported storage options for Oracle Grid Infrastructure for a cluster. It contains the following sections:
■ General Storage Considerations for Oracle Grid Infrastructure
■ General Storage Considerations for Oracle RAC
■ Supported Storage Options for Oracle Clusterware and Oracle RAC
■ After You Have Selected Disk Storage Options
See Also: The Oracle Certify site for a list of supported vendors for
Network Attached Storage options:
http://www.oracle.com/technology/support/
Refer also to the Certify site on My Oracle Support for the most
current information about certified storage options:
https://support.oracle.com/
3.1.1 General Storage Considerations for Oracle Grid Infrastructure
Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle
Cluster Registry (OCR) files contain configuration information about the cluster. You
can place voting disks and OCR files either in an Oracle ASM diskgroup, or on a
cluster file system or shared network file system. Storage must be shared; any node
that does not have access to an absolute majority of voting disks (more than half) will
be restarted.
For a storage option to meet high availability requirements, the files stored on the disk must be protected by data redundancy, so that if one or more disks fail, then the data stored on the failed disks can be recovered. This redundancy can be provided externally, using Redundant Array of Independent Disks (RAID) devices or logical volumes on multiple physical devices that implement the stripe-and-mirror-everything methodology, also known as SAME. If you do not have RAID devices or logical volumes, then you can create additional copies, or mirrors, of the files on different file systems. If you choose to mirror the files, then you must provide disk space for additional OCR files and at least two additional voting disk files.
Each OCR location should be placed on a different disk. For voting disk file placement,
ensure that each file is configured so that it does not share any hardware device or
disk, or other single point of failure with the other voting disks. Any node that does
not have available to it an absolute majority of voting disks configured (more than
half) will be restarted.
Use the following guidelines when choosing storage options:
■ You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.
■ You can use Oracle ASM 11g release 2 (11.2) to store Oracle Clusterware files. You cannot use prior Oracle ASM releases to do this.
■ If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations and at least three OCR locations to provide redundancy.
3.1.2 General Storage Considerations for Oracle RAC
For all Oracle RAC installations, you must choose the storage options to use for Oracle
Database files. Oracle Database files include data files, control files, redo log files, the
server parameter file, and the password file.
To enable automated backups during the installation, you must also choose the shared
storage option to use for recovery files (the fast recovery area). Use the following
guidelines when choosing the storage options to use for each file type:
■ The shared storage option that you choose for recovery files can be the same as or different from the option that you choose for the database files. However, you cannot use raw storage to store recovery files.
■ You can choose any combination of the supported shared storage options for each file type if you satisfy all requirements listed for the chosen storage options.
■ Oracle recommends that you choose Oracle ASM as the shared storage option for database and recovery files.
■ For Standard Edition Oracle RAC installations, Oracle ASM is the only supported shared storage option for database or recovery files. You must use Oracle ASM for the storage of Oracle RAC data files, online redo logs, archived redo logs, control files, server parameter files (SPFILE), and the fast recovery area.
■ If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:
– All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle Grid Infrastructure for a cluster installation.
– Any existing Oracle ASM instance on any node in the cluster is shut down.
■ Raw devices are supported only when upgrading an existing installation using the configured partitions. On new installations, using raw device partitions is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.
3.1.2.1 Guidelines for Placing Oracle Data Files on a File System
If you decide to place the Oracle data files on Oracle Cluster File System for Windows
(OCFS for Windows), then use the following guidelines when deciding where to place
them:
■ You can choose either a single cluster file system or multiple cluster file systems to store the data files:
– To use a single cluster file system, choose a cluster file system on a physical device that is dedicated to the database.
For best performance and reliability, choose a RAID device or a logical volume on multiple physical devices and implement the stripe-and-mirror-everything methodology, also known as SAME.
– To use multiple cluster file systems, choose cluster file systems on separate physical devices or partitions that are dedicated to the database.
This method enables you to distribute physical I/O and create separate control files on different devices for increased reliability. It also enables you to fully implement Oracle Optimal Flexible Architecture (OFA) guidelines. To implement this method, you must choose the Advanced database creation option.
■ If you intend to create a preconfigured database during the installation, then the cluster file system (or systems) that you choose must have at least 4 gigabytes (GB) of free disk space.
For production databases, you must estimate the disk space requirement depending on how you use the database.
For production databases, you must estimate the disk space requirement
depending on how you use the database.
■ For optimum performance, the cluster file systems that you choose should be on physical devices that are used only by the database.

Note: You must not create a New Technology File System (NTFS) partition on a disk that you are using for OCFS for Windows.
3.1.2.2 Guidelines for Placing Oracle Recovery Files on a File System
You must choose a location for recovery files before installation only if you intend to
enable automated backups during installation.
If you choose to place the Oracle recovery files on a cluster file system, then use the following guidelines when deciding where to place them:
■ To prevent disk failure from making the database files and the recovery files unavailable, place the recovery files on a cluster file system that is on a different physical disk from the database files.

Note: Alternatively, use an Oracle ASM disk group with a normal or high redundancy level for either or both file types, or use external redundancy.

■ The cluster file system that you choose should have at least 3 GB of free disk space. The disk space requirement is the default disk quota configured for the fast recovery area (specified by the DB_RECOVERY_FILE_DEST_SIZE initialization parameter).
If you choose the Advanced database configuration option, then you can specify a
different disk quota value. After you create the database, you can also use Oracle
Enterprise Manager to specify a different value.
See Also: Oracle Database Backup and Recovery Basics for more
information about sizing the fast recovery area.
3.1.3 Supported Storage Options for Oracle Clusterware and Oracle RAC
There are two ways of storing Oracle Clusterware files:
■
Oracle ASM: You can install Oracle Clusterware files (OCR and voting disks) in
Oracle ASM diskgroups.
Oracle ASM is an integrated, high-performance database file system and disk
manager for Oracle Clusterware and Oracle Database files. It performs striping
and mirroring of database files automatically.
Note: You can no longer use OUI to install Oracle Clusterware or Oracle Database files directly on raw devices. Only one Oracle ASM instance is permitted for each node, regardless of the number of database instances on the node.
■
OCFS for Windows: OCFS for Windows is a cluster file system used to store Oracle
Clusterware and Oracle RAC files on the Microsoft Windows platforms. OCFS for
Windows is different from OCFS2, which is available on Linux.
Note: You cannot put Oracle Clusterware files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS). You cannot put Oracle Clusterware binaries on a cluster file system.
See Also: The Certify page on My Oracle Support for supported
cluster file systems
You cannot install the Oracle Grid Infrastructure software on a cluster file system. The Oracle Clusterware home must be on a local, NTFS-formatted disk.
There are several ways of storing Oracle Database (Oracle RAC) files:
■
Oracle ASM: You can create Oracle Database files in Oracle ASM diskgroups.
Oracle ASM is the required database storage option for Typical installations, and
for Standard Edition Oracle RAC installations.
Note: You can no longer use OUI to install Oracle Clusterware or Oracle Database files or binaries directly on raw devices. Only one Oracle ASM instance is permitted for each node, regardless of the number of database instances on the node.
■
A supported shared file system: Supported file systems include the following:
–
OCFS for Windows: OCFS for Windows is a cluster file system used to store
Oracle Database binary and data files. If you intend to use OCFS for Windows
for your database files, then you should create partitions large enough for all
the database and recovery files when you create partitions for use by Oracle
Database.
See Also: The Certify page on My Oracle Support for supported
cluster file systems
–
Oracle ACFS: Oracle ACFS provides a general purpose file system that can
store the Oracle Database binary files.
Note: You cannot put Oracle Database files on Oracle ACFS.
■
Network File System (NFS) with Oracle Direct NFS client: You can configure
Oracle RAC to access NFS V3 servers directly using an Oracle internal Direct NFS
client.
Note: You cannot use Direct NFS to store Oracle Clusterware files.
You can only use Direct NFS to store Oracle Database files. To install
Oracle RAC on Windows using Direct NFS, you must have access to a
shared storage method other than NFS for the Oracle Clusterware
files.
See Also: "About Direct NFS Storage" on page 3-23 for more
information on using Direct NFS
The following table shows the storage options supported for storing Oracle
Clusterware and Oracle RAC files.
Table 3–1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Binaries

Storage Option                    OCR and       Oracle        Oracle RAC  Oracle RAC      Oracle
                                  Voting Disks  Clusterware   Binaries    Database Files  Recovery Files
                                                Binaries
Oracle ASM                        Yes           No            No          Yes             Yes
Oracle ACFS                       No            No            Yes         No              No
OCFS for Windows                  Yes           No            Yes         Yes             Yes
Direct NFS access to a certified  No            No            No          Yes             Yes
network attached storage (NAS)
filer
Shared disk partitions (raw       See note 1    No            No          See note 1      No
devices)
Local storage                     No            Yes           Yes         No              No

Note 1: Not supported by OUI or ASMCA, but supported by the software. They can be added or removed after installation.

Note: Direct NFS does not support Oracle Clusterware files.
Note: For the most up-to-date information about supported storage options for Oracle Clusterware and Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site:
https://support.oracle.com
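For scripted pre-installation checks, the support matrix in Table 3–1 can be encoded as a small lookup table. The following Python sketch is illustrative only; the option and file-type names are hypothetical shorthand, not Oracle identifiers:

```python
# Illustrative encoding of Table 3-1 (names are shorthand, not Oracle terms).
# "software-only" marks options supported by the software but not by OUI
# or ASMCA; those can be added or removed after installation.
FILE_TYPES = ("ocr_voting", "clusterware_binaries", "rac_binaries",
              "database_files", "recovery_files")

SUPPORT = {
    "Oracle ASM":       ("yes", "no", "no", "yes", "yes"),
    "Oracle ACFS":      ("no", "no", "yes", "no", "no"),
    "OCFS for Windows": ("yes", "no", "yes", "yes", "yes"),
    "Direct NFS":       ("no", "no", "no", "yes", "yes"),
    "Raw devices":      ("software-only", "no", "no", "software-only", "no"),
    "Local storage":    ("no", "yes", "yes", "no", "no"),
}

def supports(storage_option, file_type):
    """Look up the support status for a storage option and file type."""
    return SUPPORT[storage_option][FILE_TYPES.index(file_type)]

print(supports("Oracle ASM", "ocr_voting"))  # prints yes
```

For example, `supports("Direct NFS", "ocr_voting")` returns "no", matching the note that Direct NFS does not support Oracle Clusterware files.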
3.1.4 After You Have Selected Disk Storage Options
When you have determined your disk storage options, first perform the steps listed in
the section "Preliminary Shared Disk Preparation", then configure the shared storage:
■
To use a file system, refer to "Configuring the Shared Storage Used by Oracle ASM" on page 3-16.
■
To use Oracle ASM, refer to "Marking Disk Partitions for Oracle ASM Before Installation" on page 3-17.
3.2 Preliminary Shared Disk Preparation
Complete the following steps to prepare shared disks for storage:
■
Disabling Write Caching
■
Enabling Automounting for Windows
3.2.1 Disabling Write Caching
You must disable write caching on all disks that will be used to share data between the
nodes in your cluster. Perform these steps to disable write caching:
1. Click Start, then select Control Panel, then Administrative Tools, then Computer Management, then Device Manager, and then Disk drives.
2. Expand the Disk drives and double-click the first drive listed.
3. Under the Policies tab for the selected drive, uncheck the option that enables write caching.
4. Double-click each of the other drives that will be used by Oracle Clusterware and Oracle RAC and disable write caching as described in the previous step.
Caution: Any disks used to store files that will be shared between nodes, including database files, must have write caching disabled.
3.2.2 Enabling Automounting for Windows
If you are using Windows 2003 R2 Enterprise Edition or Datacenter Edition, then you
must enable disk automounting, as it is disabled by default. For other Windows
releases, even though the automount feature is enabled by default, you should verify
that automount is enabled.
You must enable automounting when using:
■
Raw partitions for Oracle RAC
■
OCFS for Windows
■
Oracle Clusterware
■
Raw partitions for single-node database installations
■
Logical drives for Oracle ASM
Note: Raw partitions are supported only when upgrading an existing installation using the configured partitions. On new installations, using raw partitions is not supported by ASMCA or OUI, but is supported by the software if you perform manual configuration.
If you upgrade the operating system from one version of Windows to another (for
example, Windows Server 2003 to Windows Advanced Server 2003), then you must
repeat this procedure after the upgrade is finished.
To determine if automatic mounting of new volumes is enabled, use the following
commands:
c:\> diskpart
DISKPART> automount
Automatic mounting of new volumes disabled.
To enable automounting:
1. Enter the following commands at a command prompt:
c:\> diskpart
DISKPART> automount enable
Automatic mounting of new volumes enabled.
2. Type exit to end the diskpart session.
3. Repeat steps 1 and 2 for each node in the cluster.
4. When you have prepared all of the cluster nodes in your Windows 2003 R2 system as described in the previous steps, restart all of the nodes.
Note: All nodes in the cluster must have automatic mounting enabled to correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that you enable automatic mounting before creating any logical partitions for use by the database, Oracle ASM, or OCFS for Windows.

You must restart each node after enabling disk automounting. After it is enabled and the node is restarted, automatic mounting remains active until it is disabled.
3.3 Storage Requirements for Oracle Clusterware and Oracle RAC
Each supported file system type has additional requirements that must be met to
support Oracle Clusterware and Oracle RAC. Use the following sections to help you
select your storage option:
■
Requirements for Using a Cluster File System for Shared Storage
■
Identifying Storage Requirements for Using Oracle ASM for Shared Storage
■
Restrictions for Disk Partitions Used By Oracle ASM
■
Requirements for Using a Shared File System
■
Requirements for Files Managed by Oracle
3.3.1 Requirements for Using a Cluster File System for Shared Storage
To use OCFS for Windows for Oracle Clusterware files, you must comply with the
following requirements:
■
If you choose to place your OCR files on a shared file system, then Oracle recommends that one of the following is true:
–
The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy)
–
At least three file systems are mounted, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR and voting disks
■
If you use a RAID device to store the Oracle Clusterware files, then you must have a partition that has at least 560 megabytes (MB) of available space for the OCR and voting disk.
■
If you use the redundancy features of Oracle Clusterware to provide high availability for the OCR and voting disk files, then you need a minimum of three file systems, and each one must have 560 MB of available space for the OCR and voting disk.
Note: The smallest partition size that OCFS for Windows can use is 500 MB.
The total required volume size listed in the previous paragraph is cumulative. For
example, to store all OCR and voting disk files on a shared file system that does not
provide redundancy at the hardware level (external redundancy), you should have at
least 1.7 GB of storage available over a minimum of three volumes (three separate
volume locations for the OCR and voting disk files, one on each volume). If you use a
file system that provides data redundancy, then you need only one physical disk with
560 MB of available space to store the OCR and voting disk files.
Note: If you are upgrading from a previous release of Oracle Clusterware, and the existing OCR and voting disk files are not 280 MB, then you do not have to change the size of the OCR or voting disks before performing the upgrade.
3.3.2 Identifying Storage Requirements for Using Oracle ASM for Shared Storage
To identify the storage requirements for using Oracle ASM, you must determine how
many devices and the amount of free disk space that you require. To complete this
task, follow these steps:
Tip: As you progress through the following steps, make a list of the
raw device names you intend to use and have it available during your
database or Oracle ASM installation.
1. Determine whether you want to use Oracle ASM for Oracle Clusterware files
(OCR and voting disks), Oracle Database files, recovery files, or all files except for
Oracle Clusterware binaries. Oracle Database files include data files, control files,
redo log files, the server parameter file, and the password file.
Note:
■
You do not have to use the same storage mechanism for data files and recovery files. You can store one type of file in a cluster file system while storing the other file type within Oracle ASM. If you plan to use Oracle ASM for both data files and recovery files, then you should create separate Oracle ASM disk groups for the data files and the recovery files.
■
Oracle Clusterware files must use either Oracle ASM or a cluster file system. You cannot have some Oracle Clusterware files in Oracle ASM and other Oracle Clusterware files in a cluster file system.
■
If you choose to store Oracle Clusterware files on Oracle ASM and use redundancy for the disk group, then Oracle ASM automatically maintains the ideal number of voting files based on the redundancy of the disk group. The voting files are created within a single disk group and you cannot add extra voting files to this disk group manually.
If you plan to enable automated backups during the installation, then you can
choose Oracle ASM as the shared storage mechanism for recovery files by
specifying an Oracle ASM disk group for the fast recovery area. Depending on how
you choose to create a database during the installation, you have the following
options:
■
If you select an installation method that runs Database Configuration
Assistant (DBCA) in interactive mode (for example, by choosing the
Advanced database configuration option), then you can decide whether you
want to use the same Oracle ASM disk group for data files and recovery files.
You can also choose to use different disk groups for each file type. Ideally, you
should create separate Oracle ASM disk groups for data files and recovery
files.
The same choice is available to you if you use DBCA after the installation to
create a database.
■
If you select an installation type that runs DBCA in non-interactive mode, then you must use the same Oracle ASM disk group for data files and recovery files.
2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
The redundancy level that you choose for the Oracle ASM disk group determines
how Oracle ASM mirrors files in the disk group, and determines the number of
disks and amount of disk space that you require. The redundancy levels are as
follows:
■
External redundancy
An external redundancy disk group requires a minimum of one disk device.
The effective disk space in an external redundancy disk group is the sum of
the disk space in all of its devices.
Because Oracle ASM does not mirror data in an external redundancy disk
group, Oracle recommends that you select external redundancy only if you
use RAID or similar devices that provide their own data protection
mechanisms for disk devices.
Even if you select external redundancy, you must have at least three voting
disks configured, as each voting disk is an independent entity, and cannot be
mirrored.
■
Normal redundancy
A normal redundancy disk group requires a minimum of two disk devices (or
two failure groups). The effective disk space in a normal redundancy disk
group is half the sum of the disk space in all of its devices.
For most installations, Oracle recommends that you select normal redundancy
disk groups.
■
High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to
increase performance and provide the highest level of reliability. A high
redundancy disk group requires a minimum of three disk devices (or three
failure groups). The effective disk space in a high redundancy disk group is
one-third the sum of the disk space in all of the devices.
While high redundancy disk groups provide a high level of data protection,
you must consider the higher cost of additional storage devices before
deciding to use this redundancy level.
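The capacity rules above (the full sum for external redundancy, half for normal, one-third for high) can be sketched as a short calculation. This is illustrative arithmetic only; it ignores Oracle ASM metadata overhead, which the sizing steps that follow account for separately:

```python
# Illustrative sketch of the Oracle ASM effective-capacity rules above.
MIRRORS = {"external": 1, "normal": 2, "high": 3}    # copies kept of each extent
MIN_DISKS = {"external": 1, "normal": 2, "high": 3}  # minimum devices (or failure groups)

def effective_space_mb(redundancy, disk_sizes_mb):
    """Approximate usable space (MB) for a disk group at a redundancy level."""
    if len(disk_sizes_mb) < MIN_DISKS[redundancy]:
        raise ValueError("too few disks for this redundancy level")
    return sum(disk_sizes_mb) // MIRRORS[redundancy]
```

For example, a normal redundancy disk group built from two 1024 MB devices yields roughly 1024 MB of effective space.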
3. Determine the total amount of disk space that you require for the Oracle
Clusterware files.
Use the following table to determine the minimum number of disks and the
minimum disk space requirements for installing Oracle Clusterware, where you
have voting disks in separate disk groups:
Redundancy Level  Minimum Number of Disks  OCR Files  Voting Disks  Both File Types
External          1                        280 MB     280 MB        560 MB
Normal            3                        560 MB     840 MB        1.4 GB (see note 1)
High              5                        840 MB     1.4 GB        2.3 GB

Note 1: If you create a disk group during installation, then it must be at least 2 GB.
Note: If the voting disk files are in a disk group, then be aware that disk groups with Oracle Clusterware files (OCR and voting disks) have a higher minimum number of failure groups than other disk groups.

If you intend to place database files and Oracle Clusterware files in the same disk group, then refer to the section "Identifying Storage Requirements for Using Oracle ASM for Shared Storage" on page 3-9.
To ensure high availability of Oracle Clusterware files on Oracle ASM, you must
have at least 2 GB of disk space for Oracle Clusterware files in three separate
failure groups, with at least three physical disks. Each disk must have at least 1 GB
of capacity to ensure that there is sufficient space to create Oracle Clusterware
files.
For Oracle Clusterware installations, you must also add additional disk space for
the Oracle ASM metadata. You can use the following formula to calculate the
additional disk space requirements (in MB):
total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 *
nodes) + 533)]
Where:
■
redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
■
ausize = Metadata allocation unit (AU) size in megabytes.
■
nodes = Number of nodes in cluster.
■
clients = Number of database instances for each node.
■
disks = Number of disks in disk group.
For example, for a four-node Oracle Grid Infrastructure installation, using three
disks in a normal redundancy disk group, you require an additional 1684 MB of
space:
[2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
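The worked example above can be verified with a short calculation; the function name is hypothetical shorthand for the formula given in the text:

```python
# Sketch of the ASM metadata sizing formula given above (result in MB).
# redundancy: 1 = external, 2 = normal, 3 = high; the other arguments
# follow the definitions in the text.
def asm_metadata_mb(redundancy, ausize, disks, nodes, clients):
    return (2 * ausize * disks) + redundancy * (
        ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533
    )

# Four-node cluster, three disks, normal redundancy, 1 MB AU, four database
# instances per node:
print(asm_metadata_mb(redundancy=2, ausize=1, disks=3, nodes=4, clients=4))
# prints 1684
```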
4. Determine the total amount of disk space that you require for the Oracle database
files and recovery files.
Use the following table to determine the minimum number of disks and the
minimum disk space requirements for installing the starter database:
Redundancy Level  Minimum Number of Disks  Data Files  Recovery Files  Both File Types
External          1                        1.5 GB      3 GB            4.5 GB
Normal            2                        3 GB        6 GB            9 GB
High              3                        4.5 GB      9 GB            13.5 GB
Note: The file sizes listed in the previous table are estimates of minimum requirements for a new installation (or a database without any user data). The file sizes for your database will be larger.
For Oracle RAC installations, you must also add additional disk space for the
Oracle ASM metadata. You can use the following formula to calculate the
additional disk space requirements (in MB):
total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 *
nodes) + 533)]
Where:
■
redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
■
ausize = Metadata AU size in megabytes.
■
nodes = Number of nodes in cluster.
■
clients = Number of database instances for each node.
■
disks = Number of disks in disk group.
For example, for a four-node Oracle RAC installation, using three disks in a
normal redundancy disk group, you require an additional 1684 MB of space:
[2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
5. Determine if you can use an existing disk group.
If an Oracle ASM instance currently exists on the system, then you can use an
existing disk group to meet these storage requirements. If necessary, you can add
disks to an existing disk group during the installation.
See "Using an Existing Oracle ASM Disk Group" on page 3-13 for more
information about using an existing disk group.
6. Optionally, identify failure groups for the Oracle ASM disk group devices.
Note: You only have to complete this step if you use an installation method that runs DBCA in interactive mode (for example, if you choose the Advanced database configuration option). Other installation types do not enable you to specify failure groups.
If you intend to use a normal or high redundancy disk group, then you can further
protect your database against hardware failure by associating a set of disk devices
in a custom failure group. Failure groups define Oracle ASM disks that share a
common potential failure mechanism. By default, each device comprises its own
failure group.
If two disk devices in a normal redundancy disk group are attached to the same
small computer system interface (SCSI) controller, then the disk group becomes
unavailable if the controller fails. The controller in this example is a single point of
failure. To protect against failures of this type, you could use two SCSI controllers,
each with two disks, and define a failure group for the disks attached to each
controller. This configuration would enable the disk group to tolerate the failure of
one SCSI controller.
Note: If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
For more information about Oracle ASM failure groups, refer to Oracle Database
Storage Administrator's Guide.
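The SCSI-controller example above can be modeled as a mapping from failure groups to the disks that share each failure point. This is a hypothetical sketch of the minimum failure-group rule from the preceding note, not an Oracle API:

```python
# Hypothetical sketch: disks grouped by the controller they share, so each
# controller is one failure group (one shared point of failure).
failure_groups = {
    "controller1": ["disk1", "disk2"],
    "controller2": ["disk3", "disk4"],
}

def meets_minimum(groups, redundancy):
    # Minimum custom failure-group counts from the note above:
    # two for normal redundancy, three for high redundancy.
    required = {"normal": 2, "high": 3}[redundancy]
    return len(groups) >= required

print(meets_minimum(failure_groups, "normal"))  # prints True
print(meets_minimum(failure_groups, "high"))    # prints False
```

Two controllers with two disks each therefore satisfy normal redundancy, but a high redundancy disk group would need a third failure group.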
7. If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
■
All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.
■
Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.
■
Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Oracle ASM from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.
3.3.2.1 Using an Existing Oracle ASM Disk Group
To use Oracle ASM as the storage option for either database or recovery files, you must
use an existing Oracle ASM disk group, or use ASMCA to create the necessary disk
groups before installing Oracle Database 11g release 2.
To determine if an Oracle ASM disk group currently exists, or to determine whether
there is sufficient disk space in an existing disk group, you can use Oracle Enterprise
Manager, either Grid Control or Database Control. Alternatively, you can use the
following procedure:
1. In the Services Control Panel, ensure that the OracleASMService+ASMn service, where n is the node number, has started.
2. Open a Windows command prompt and temporarily set the ORACLE_SID
environment variable to specify the appropriate value for the Oracle ASM instance
to use.
For example, if the Oracle ASM system identifier (SID) is named +ASM1, then
enter a setting similar to the following:
C:\> set ORACLE_SID=+ASM1
3. Use SQL*Plus to connect to the Oracle ASM instance as the SYS user with the
SYSASM privilege and start the instance if necessary with a command similar to
the following:
C:\> sqlplus /nolog
SQL> CONNECT SYS AS SYSASM
Enter password: sys_password
Connected to an idle instance.
SQL> STARTUP
4. Enter the following command to view the existing disk groups, their redundancy
level, and the amount of free disk space in each disk group:
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
5. From the output, identify a disk group with the appropriate redundancy level and
note the free space that it contains.
6. If necessary, install or identify the additional disk devices required to meet the
storage requirements listed in the previous section.
Note: If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
3.3.3 Restrictions for Disk Partitions Used By Oracle ASM
Be aware of the following restrictions when configuring disk partitions for use with
Oracle ASM:
■
You cannot use primary partitions for storing Oracle Clusterware files while
running OUI to install Oracle Clusterware as described in Chapter 4, "Installing
Oracle Grid Infrastructure for a Cluster". You must create logical drives inside
extended partitions for the disks to be used by Oracle Clusterware files and Oracle
ASM.
■
With 64-bit Windows, you can create up to 128 primary partitions for each disk.
■
You can create shared directories only on primary partitions and logical drives.
■
Oracle recommends that you limit the number of partitions you create on a single
disk to prevent disk contention. Therefore, you may prefer to use extended
partitions rather than primary partitions.
For these reasons, you might prefer to use extended partitions for storing Oracle
software files and not primary partitions.
3.3.4 Requirements for Using a Shared File System
To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the
file system must comply with the following requirements:
■
To use a cluster file system, it must be a supported cluster file system, as listed in the section "Supported Storage Options for Oracle Clusterware and Oracle RAC" on page 3-4.
■
To use an NFS file system, it must be on a certified network attached storage (NAS) device. Log in to My Oracle Support at the following URL, and click the Certify tab to find a list of certified NAS devices:
https://support.oracle.com/
■
If you choose to place your OCR files on a shared file system, then Oracle
recommends that one of the following is true:
–
The disks used for the file system are on a highly available storage device, for
example, a RAID device.
–
At least two file systems are mounted, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR.
■
If you choose to place your database files on a shared file system, then one of the following should be true:
–
The disks used for the file system are on a highly available storage device (for example, a RAID device).
–
The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
■
The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.
Note: Upgrading from Oracle9i release 2 using the raw device or shared file for the OCR that you used for the SRVM configuration repository is not supported.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you can continue to use those partition sizes.

All storage products must be supported by both your server and storage vendors.
Use Table 3–2 and Table 3–3 to determine the minimum size for shared file systems:
Table 3–2 Oracle Clusterware Shared File System Volume Size Requirements

File Types Stored                                Number of Volumes  Volume Size
Voting disks with external redundancy            3                  At least 280 MB for each voting disk volume
OCR with external redundancy                     1                  At least 280 MB for each OCR volume
Oracle Clusterware files (OCR and voting disks)  1                  At least 280 MB for each OCR volume and at
with redundancy provided by Oracle software                         least 280 MB for each voting disk volume

Table 3–3 Oracle RAC Shared File System Volume Size Requirements

File Types Stored      Number of Volumes  Volume Size
Oracle Database files  1                  At least 1.5 GB for each volume
Recovery files         1                  At least 2 GB for each volume. Recovery files must
                                          be on a different volume than database files.
In Table 3–2 and Table 3–3, the total required volume size is cumulative. For example,
to store all Oracle Clusterware files on the shared file system with normal redundancy,
you should have at least 2 GB of storage available over a minimum of three volumes
(three separate volume locations for the OCR and two OCR mirrors, and one voting
disk on each volume). You should have a minimum of three physical disks, each at
least 500 MB, to ensure that voting disks and OCR files are on separate physical disks.
If you add Oracle RAC using one volume for database files and one volume for
recovery files, then you should have at least 3.5 GB available storage over two
volumes, and at least 5.5 GB available total for all volumes.
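The cumulative totals described in this paragraph can be tallied as a quick check (illustrative arithmetic based on the minimum sizes in Table 3–2 and Table 3–3):

```python
# Illustrative tally of the cumulative minimums discussed above (sizes in MB).
# Oracle Clusterware files with software redundancy: three volumes, each
# holding one OCR location (280 MB) and one voting disk (280 MB).
clusterware_total_mb = 3 * (280 + 280)     # 1680 MB, roughly 2 GB

# Oracle RAC: one volume for database files (1.5 GB) and one for recovery
# files (2 GB), per Table 3-3.
rac_total_mb = int(1.5 * 1024) + 2 * 1024  # 3584 MB, i.e. 3.5 GB

print(clusterware_total_mb)  # prints 1680
print(rac_total_mb)          # prints 3584
```

Rounding the Oracle Clusterware portion up to 2 GB and adding the 3.5 GB for Oracle RAC gives the approximately 5.5 GB overall figure quoted above.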
3.3.5 Requirements for Files Managed by Oracle
If you use OCFS for Windows or Oracle ASM for your database files, then your
database is created by default with files managed by Oracle Database. When using the Oracle Managed Files feature, you need to specify only the database object name instead of file names when creating or deleting database files.
Configuration procedures are required to enable Oracle Managed Files.
See Also: "Using Oracle-Managed Files" in Oracle Database Administrator's Guide
3.4 Configuring the Shared Storage Used by Oracle ASM
The installer does not suggest a default location for the OCR or the Oracle Clusterware
voting disk. If you choose to create these files on Oracle ASM, then you must first
create and configure disk partitions to be used by Oracle ASM.
The following sections describe how to create and configure disk partitions to be used
by Oracle ASM for storing Oracle Clusterware files or Oracle Database data files, how
to configure Oracle ACFS to store other file types, and what to do if you have
configured storage for a previous release of Oracle ASM:
■
Creating Disk Partitions for Oracle ASM
■
Marking Disk Partitions for Oracle ASM Before Installation
■
Configuring Oracle Automatic Storage Management Cluster File System
■
Migrating Existing Oracle ASM Instances
Note: The OCR is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation.
3.4.1 Creating Disk Partitions for Oracle ASM
To use direct-attached storage (DAS) or storage area network (SAN) disks for Oracle
ASM, each disk must have a partition table. Oracle recommends creating exactly one
partition for each disk that encompasses the entire disk.
Note: You can use any physical disk for Oracle ASM, if it is partitioned. However, you cannot use NAS or Microsoft dynamic disks.
Use the Microsoft Computer Management utility or the command-line tool diskpart to
create the partitions. Ensure that you create the partitions without drive letters. After
you have created the partitions, the disks can be configured.
3-16 Oracle Grid Infrastructure Installation Guide
Configuring the Shared Storage Used by Oracle ASM
See Also: "Stamp Disks for Oracle ASM" on page 1-7 for more
information about using diskpart to create a partition
3.4.2 Marking Disk Partitions for Oracle ASM Before Installation
The only partitions that OUI displays for Windows systems are logical drives that are
on disks that do not contain a primary partition, and have been stamped with
asmtool. Configure the disks before installation either by using asmtoolg (graphical
user interface (GUI) version) or using asmtool (command line version). You also have
the option of using the asmtoolg utility during Oracle Grid Infrastructure for a
cluster installation.
The asmtoolg and asmtool utilities only work on partitioned disks; you cannot use
Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the
disks after installation.
The following section describes the asmtoolg and asmtool functions and
commands.
Note: Refer to Oracle Database Storage Administrator's Guide for more
information about using asmtool.
3.4.2.1 Overview of asmtoolg and asmtool
The asmtoolg and asmtool tools associate meaningful, persistent names with disks
to facilitate using those disks with Oracle ASM. Oracle ASM uses disk strings to
operate more easily on groups of disks at the same time. The names that asmtoolg or
asmtool create make this easier than using Windows drive letters.
All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK followed by a user-defined prefix (the default is DATA), and by a disk number for identification purposes. You can use them as raw devices in the Oracle ASM instance by specifying the name \\.\ORCLDISKprefixn, where prefix is either DATA or a value you supply, and n represents the disk number.
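For example, if you accept the default DATA prefix, the stamped disks are named \\.\ORCLDISKDATA0, \\.\ORCLDISKDATA1, and so on, and a discovery string such as the following (shown here for illustration only) would match all of them in the Oracle ASM instance:

ASM_DISKSTRING = '\\.\ORCLDISKDATA*'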
To configure your disks with asmtoolg, refer to the section "Using asmtoolg
(Graphical User Interface)" on page 3-17. To configure the disks with asmtool, refer to
the section "Using asmtool (Command Line)" on page 3-18.
3.4.2.2 Using asmtoolg (Graphical User Interface)
Use asmtoolg (GUI version) to create device names; use asmtoolg to add, change,
delete, and examine the devices available for use in Oracle ASM.
To add or change disk stamps:
1. In the installation media for Oracle Grid Infrastructure, go to the asmtool folder and double-click asmtoolg.
If Oracle Clusterware is installed, then go to the Grid_home\bin folder and double-click asmtoolg.exe.
On Windows Server 2008 and Windows Server 2008 R2, if user access control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option on the right-click context menu, and then launch asmtoolg.
2. Select the Add or change label option, and then click Next.
asmtoolg shows the devices available on the system. Unrecognized disks have a
status of "Candidate device", stamped disks have a status of "Stamped ASM
device," and disks that have had their stamp deleted have a status of "Unstamped
ASM device." The tool also shows disks that are recognized by Windows as a file
system (such as NTFS). These disks are not available for use as Oracle ASM disks,
and cannot be selected. In addition, Microsoft Dynamic disks are not available for
use as Oracle ASM disks.
If necessary, follow the steps under "Stamp Disks for Oracle ASM" on page 1-7 to
create disk partitions for the Oracle ASM instance.
3. On the Stamp Disks window, select the disks to stamp.
For ease of use, Oracle ASM can generate unique stamps for all of the devices selected for a given prefix. The stamps are generated by concatenating a number with the prefix specified. For example, if the prefix is DATA, then the first Oracle ASM link name is ORCLDISKDATA0.
You can also specify the stamps of individual devices.
4. Optionally, select a disk to edit the individual stamp (Oracle ASM link name).
5. Click Next.
6. Click Finish.
To delete disk stamps:
1. Select the Delete labels option, then click Next.
The delete option is only available if disks exist with stamps. The delete screen shows all stamped Oracle ASM disks.
2. On the Delete Stamps screen, select the disks to unstamp.
3. Click Next.
4. Click Finish.
3.4.2.3 Using asmtool (Command Line)
asmtool is a command-line interface for stamping disks. It has the following options:

-add: Adds or changes stamps. You must specify the hard disk, partition, and new stamp name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option. If necessary, follow the steps under "Stamp Disks for Oracle ASM" on page 1-7 to create disk partitions for the Oracle ASM instance.
Example:
asmtool -add [-force] \Device\Harddisk1\Partition1 ORCLDISKASM0 \Device\Harddisk2\Partition1 ORCLDISKASM2...

-addprefix: Adds or changes stamps using a common prefix to generate stamps automatically. The stamps are generated by concatenating a number with the prefix specified. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option.
Example:
asmtool -addprefix ORCLDISKASM [-force] \Device\Harddisk1\Partition1 \Device\Harddisk2\Partition1...

-create: Creates an Oracle ASM disk device from a file instead of a partition. Note: Usage of this command is not supported for production environments.
Examples:
asmtool -create \\server\share\file 1000
asmtool -create D:\asm\asmfile02.asm 240

-list: Lists available disks. The stamp, Windows device name, and disk size in MB are shown.
Example:
asmtool -list

-delete: Removes existing stamps from disks.
Example:
asmtool -delete ORCLDISKASM0 ORCLDISKASM1...

Note: If you use -add, -addprefix, or -delete, asmtool notifies the Oracle ASM instance on the local node and on other nodes in the cluster, if available, to rescan the available disks.
3.4.3 Configuring Oracle Automatic Storage Management Cluster File System
Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle ASM) for 11g release 2 (11.2).

Note: Oracle ACFS is supported only on Windows Server 2003 64-bit and Windows Server 2003 R2 64-bit. All other Windows releases that are supported for Oracle Grid Infrastructure and Oracle Clusterware 11g release 2 (11.2) are not supported for Oracle ACFS.
To configure Oracle ACFS for an Oracle Database home for an Oracle RAC database,
perform the following steps:
1. Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle ASM).
2. Start ASMCA as the grid installation owner.
3. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.
4. On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.
5. In the Create ACFS Hosted Database Home window, enter the following information:
■ Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: racdb_01
■ Database Home Mountpoint: Enter the directory path for the mountpoint. For example: M:\acfsdisks\racdb_01
Make a note of this mountpoint for future reference.
■ Database Home Size (GB): Enter in gigabytes the size you want the database home to be.
■ Click OK when you have completed your entries.
6. During Oracle RAC installation, ensure that you or the database administrator who installs Oracle RAC selects for the Oracle home the mountpoint you provided in the Database Home Mountpoint field (in the preceding example, M:\acfsdisks\racdb_01).
See Also: Oracle Database Storage Administrator's Guide for more
information about configuring and managing your storage with
Oracle ACFS
3.4.4 Migrating Existing Oracle ASM Instances
If you have an Oracle ASM installation from a prior release installed on your server, or
in an existing Oracle Clusterware installation, then you can use ASMCA (located in the
path Grid_home\bin) to upgrade the existing Oracle ASM instance to Oracle ASM
11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes
and Oracle ACFS.
Note: You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a
prior Oracle ASM version installed in another Oracle ASM home, then after installing
the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the
existing Oracle ASM instance. You can then choose to configure an Oracle ACFS
deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to
create the Oracle ACFS.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of
the software on all nodes is Oracle ASM 11g release 1, then you are provided with the
option to perform a rolling upgrade of Oracle ASM instances. If the prior version of
the software for an Oracle RAC installation is from a release prior to Oracle ASM 11g
release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be
upgraded to Oracle ASM 11g release 2 (11.2).
3.5 Configuring Storage for Oracle Database Files on OCFS for Windows
To use OCFS for Windows for your Oracle home and data files, the following
partitions, at a minimum, must exist before you run OUI to install Oracle Clusterware:
■ 5.5 GB or larger partition for the Oracle home, if you want a shared Oracle home
■ 3 GB or larger partitions for the Oracle Database data files and recovery files
Log in to Windows as a member of the Administrators group and perform the steps described in this section to set up the shared disk raw partitions for OCFS for
Windows. Windows refers to raw partitions as logical drives. If you need more
information about creating partitions, then refer to the Windows online help from
within the Disk Management utility.
1. Run the Windows Disk Management utility from one node to create an extended partition. Use a basic disk; dynamic disks are not supported.
2. Create a partition for the Oracle Database data files and recovery files, and optionally create a second partition for the Oracle home.
The number of partitions used for OCFS for Windows affects performance.
Therefore, you should create the minimum number of partitions needed for the
OCFS for Windows option you choose.
Note:
Oracle supports installing the database into multiple Oracle Homes on
a single system. This allows flexibility in deployment and
maintenance of the database software. For example, it enables
different versions of the database to run simultaneously on the same
system, or it enables you to upgrade specific database or Oracle
Automatic Storage Management instances on a system without
affecting other running databases.
However, when you have installed multiple Oracle Homes on a single system, there is also some added complexity that you may have to consider to allow these Oracle Homes to coexist. For more information on this topic, refer to Oracle Database Platform Guide for Microsoft Windows and Oracle Real Application Clusters Installation Guide.
To create the required partitions, perform the following steps:
1. From an existing node in the cluster, run the DiskPart utility as follows:
C:\> diskpart
DISKPART>
2. List the available disks. By specifying its disk number (n), select the disk on which you want to create a partition.
DISKPART> list disk
DISKPART> select disk n
3. Create an extended partition:
DISKPART> create part ext
4. Create a logical drive of the desired size after the extended partition is created using the following syntax:
DISKPART> create part log [size=n] [offset=n] [noerr]
5. Repeat steps 2 through 4 for the second and any additional partitions. An optimal configuration is one partition for the Oracle home and one partition for Oracle Database files.
6. List the available volumes, and remove any drive letters from the logical drives you plan to use.
DISKPART> list volume
DISKPART> select volume n
DISKPART> remove
7. If you are preparing drives on a Windows 2003 R2 system, then you should restart all nodes in the cluster after you have created the logical drives.
8. Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and to ensure that none of the Oracle partitions have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:
■ Right-click the partition in the Windows Disk Management utility
■ Select "Change Drive Letters and Paths..." from the menu
■ Click Remove in the "Change Drive Letter and Paths" window
3.5.1 Formatting Drives to Use OCFS for Windows after Installation
If you installed Oracle Grid Infrastructure, and you want to use OCFS for Windows for
storage for Oracle RAC, then run the ocfsformat.exe command from the Grid_
home\cfs directory using the following syntax:
Grid_home\cfs\OcfsFormat /m link_name /c ClusterSize_in_KB /v volume_label /f /a
Where:
■ /m link_name is the mountpoint for this file system which you want to format with OCFS for Windows. On Windows, provide a drive letter corresponding to the logical drive.
■ /c ClusterSize_in_KB is the cluster size or allocation size for the OCFS for Windows volume (this option must be used with the /a option or else the default size of 4 kilobytes (KB) is used).

Note: The cluster size is essentially the block size. Recommended values are 1024 (1 MB) if the OCFS for Windows disk partition is to be used for Oracle data files and 4 (4 KB) if the OCFS for Windows disk partition is to be used for the Oracle home.

■ /v volume_label is an optional volume label.
■ The /f option forces the format of the specified volume.
■ The /a option, if specified, forces OcfsFormat to use the cluster size specified with the /c option.

For example, to create an OCFS for Windows formatted shared disk partition named DATA, mounted as U:, using a shared disk with a nondefault cluster size of 1 MB, you would use the following command:
ocfsformat /m U: /c 1024 /v DATA /f /a
3.6 Configuring Direct NFS Storage for Oracle RAC Data Files
This section contains the following information about Direct NFS:
■ About Direct NFS Storage
■ About the Oranfstab File for Direct NFS
■ Mounting NFS Storage Devices with Direct NFS
■ Specifying Network Paths for an NFS Server
■ Enabling the Direct NFS Client
■ Performing Basic File Operations Using the ORADNFS Utility
■ Disabling Direct NFS Client
3.6.1 About Direct NFS Storage
Oracle Disk Manager (ODM) can manage NFS on its own. This is referred to as Direct
NFS. Direct NFS implements NFS version 3 protocol within the Oracle Database
kernel. This change enables monitoring of NFS status using the ODM interface. The
Oracle Database kernel driver tunes itself to obtain optimal use of available resources.
Starting with Oracle Database 11g release 1 (11.1), you can configure Oracle Database
to access NFS version 3 servers directly using Direct NFS. This enables the storage of
data files on a supported NFS system.
If Oracle Database cannot open an NFS server using Direct NFS, then an informational
message is logged into the Oracle alert and trace files indicating that Direct NFS could
not be established.
Note: Direct NFS does not work if the backend NFS server does not support a write size (wtmax) of 32768 or larger.
The Oracle files resident on the NFS server that are served by the Direct NFS Client
can also be accessed through a third party NFS client. Management of Oracle data files
created with Direct NFS should be done according to the guidelines specified in Oracle
Database Administrator's Guide.
Use the following views for Direct NFS management:
■ V$DNFS_SERVERS: Lists the servers that are accessed using Direct NFS.
■ V$DNFS_FILES: Lists the files that are currently open using Direct NFS.
■ V$DNFS_CHANNELS: Shows the open network paths, or channels, to servers for which Direct NFS is providing files.
■ V$DNFS_STATS: Lists performance statistics for Direct NFS.
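For example, after Direct NFS is enabled you might confirm from SQL*Plus which NFS servers are being accessed through these views; the output depends on your configuration:

SQL> SELECT svrname, dirname FROM v$dnfs_servers;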
3.6.2 About the Oranfstab File for Direct NFS
If you use Direct NFS, then you must create a new configuration file, oranfstab, to
specify the options, attributes, and parameters that enable Oracle Database to use
Direct NFS. Direct NFS looks for the mount point entries in Oracle_
home\database\oranfstab. It uses the first matched entry as the mount point. You
must add the oranfstab file to the Oracle_home\database directory.
For Oracle RAC installations, to use Direct NFS you must replicate the oranfstab file
on all of the nodes. You must also keep all of the oranfstab files synchronized on all
nodes.
When the oranfstab file is placed in Oracle_home\database, the entries in the
file are specific to a single database. All nodes running an Oracle RAC database should
use the same Oracle_home\database\oranfstab file.
Note: If you remove an NFS path from oranfstab that Oracle Database is using, then you must restart the database for the change to be effective. In addition, the mount point that you use for the file system must be identical on each node.
See Also: "Enabling the Direct NFS Client" on page 3-24 for more
information about creating the oranfstab file
3.6.3 Mounting NFS Storage Devices with Direct NFS
Direct NFS determines mount point settings to NFS storage devices based on the
configuration information in oranfstab. If Oracle Database cannot open an NFS
server using Direct NFS, then an error message is written into the Oracle alert and
trace files indicating that Direct NFS could not be established.
3.6.4 Specifying Network Paths for an NFS Server
Direct NFS can use up to four network paths defined in the oranfstab file for an
NFS server. The Direct NFS client performs load balancing across all specified paths. If
a specified path fails, then Direct NFS re-issues all outstanding requests over any
remaining paths.
Note: You can have only one active Direct NFS implementation for
each instance. Using Direct NFS on an instance prevents the use of
another Direct NFS implementation.
3.6.5 Enabling the Direct NFS Client
To enable the Direct NFS Client, you must add an oranfstab file to Oracle_
home\database. When oranfstab is placed in this directory, the entries in this file
are specific to one particular database. The Direct NFS Client searches for the mount
point entries as they appear in oranfstab. The Direct NFS Client uses the first
matched entry as the mount point.
Complete the following procedure to enable the Direct NFS Client:
1. Create an oranfstab file with the following attributes for each NFS server accessed by Direct NFS:
■ server: The NFS server name.
■ path: Up to four network paths to the NFS server, specified either by internet protocol (IP) address, or by name, as displayed using the ifconfig command on the NFS server.
■ local: Up to four network interfaces on the database host, specified by IP address, or by name, as displayed using the ipconfig command on the database host.
■ export: The exported path from the NFS server. Use a UNIX-style path.
■ mount: The corresponding local mount point for the exported volume. Use a Windows-style path.
■ mnt_timeout: (Optional) Specifies the time (in seconds) for which the Direct NFS client should wait for a successful mount before timing out. The default timeout is 10 minutes.
■ uid: (Optional) The UNIX user ID to be used by Direct NFS to access all NFS servers listed in oranfstab. The default value is uid:65534, which corresponds to user:nobody on the NFS server.
■ gid: (Optional) The UNIX group ID to be used by Direct NFS to access all NFS servers listed in oranfstab. The default value is gid:65534, which corresponds to group:nogroup on the NFS server.

Note: Direct NFS ignores a uid or gid value of 0.
The following is an example of an oranfstab file with two NFS server entries, where the first NFS server uses two network paths and the second NFS server uses four network paths:
server: MyDataServer1
local: 132.34.35.10
path: 132.34.35.12
local: 132.34.55.10
path: 132.34.55.12
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
server: MyDataServer2
local: LocalInterface1
path: NfsPath1
local: LocalInterface2
path: NfsPath2
local: LocalInterface3
path: NfsPath3
local: LocalInterface4
path: NfsPath4
export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORCL2
export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORCL3
The mount point specified in the oranfstab file represents the local path where
the database files would reside normally, as if Direct NFS was not used. For
example, if a database that does not use Direct NFS would have data files located
in the C:\app\oracle\oradata\orcl directory, then you specify
C:\app\oracle\oradata\orcl for the NFS virtual mount point in the
corresponding oranfstab file.
Note: The exported path from the NFS server must be accessible for read/write/execute by the user with the uid, gid specified in oranfstab. If neither uid nor gid is listed, then the exported path must be accessible by the user with uid:65534 and gid:65534.
2. Oracle Database uses the ODM library, oranfsodm11.dll, to enable Direct NFS. To replace the standard ODM library, oraodm11.dll, with the ODM NFS library, complete the following steps:
a. Change directory to Oracle_home\bin.
b. Shut down the Oracle Database instance on a node using SRVCTL.
c. Enter the following commands:
copy oraodm11.dll oraodm11.dll.orig
copy /Y oranfsodm11.dll oraodm11.dll
d. Restart the Oracle Database instance using SRVCTL.
e. Repeat Step a to Step d for each node in the cluster.
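For example, on one node the sequence in Step a through Step d might look like the following, assuming a database named orcl with local instance orcl1; these names are placeholders for your environment:

C:\> cd /d %ORACLE_HOME%\bin
C:\> srvctl stop instance -d orcl -i orcl1
C:\> copy oraodm11.dll oraodm11.dll.orig
C:\> copy /Y oranfsodm11.dll oraodm11.dll
C:\> srvctl start instance -d orcl -i orcl1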
3.6.6 Performing Basic File Operations Using the ORADNFS Utility
ORADNFS is a utility that enables database administrators to perform basic file operations over the Direct NFS Client on Microsoft Windows platforms.
ORADNFS is a multi-call binary, which is a single binary that acts like many utilities. You must be a member of the local ORA_DBA group to use ORADNFS. To execute commands using ORADNFS, you issue the command as an argument on the command line.
The following command prints a list of commands available with ORADNFS:
C:\> oradnfs help
To display the list of files in the NFS directory mounted as C:\ORACLE\ORADATA,
use the following command:
C:\> oradnfs ls C:\ORACLE\ORADATA\ORCL
Note: A valid copy of the oranfstab configuration file must be present in Oracle_home\database for ORADNFS to operate.
3.6.7 Disabling Direct NFS Client
Use one of the following methods to disable the Direct NFS client:
■ Remove the oranfstab file.
■ Restore the original oraodm11.dll file by reversing the process you completed in "Enabling the Direct NFS Client" on page 3-24.
■ Remove the specific NFS server or export paths in the oranfstab file.
3.7 Desupport of Raw Devices
With the release of Oracle Database 11g and Oracle RAC release 11g, writing datafiles
directly to raw devices using DBCA or OUI is not supported. You can still use raw
devices with Oracle ASM.
4 Installing Oracle Grid Infrastructure for a Cluster
This chapter describes the procedures for installing Oracle Grid Infrastructure for a
cluster. Oracle Grid Infrastructure consists of Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM). If you plan afterward to install Oracle
Database with Oracle Real Application Clusters (Oracle RAC), then this is phase one of
a two-phase installation.
This chapter contains the following topics:
■ Preparing to Install Oracle Grid Infrastructure with Oracle Universal Installer
■ Installing Grid Infrastructure with OUI
■ Installing Grid Infrastructure Using a Software-Only Installation
■ Confirming Oracle Clusterware Function
■ Confirming Oracle ASM Function for Oracle Clusterware Files
Note: The second phase of an Oracle RAC installation, installing Oracle RAC, is described in Oracle Real Application Clusters Installation Guide.
4.1 Preparing to Install Oracle Grid Infrastructure with Oracle Universal
Installer
Before you install Oracle Grid Infrastructure with Oracle Universal Installer (OUI), use
the following checklist to ensure that you have all the information you will need
during installation, and to ensure that you have completed all tasks that must be done
before starting your installation. Mark the box for each task as you complete it, and
record the information needed, so that you can provide it during installation.
❏ Verify Cluster Privileges
Before running OUI, from the node where you intend to run the Installer, verify that you are logged in as a member of the Administrators group, and that this user is an Administrator user on the other nodes in the cluster. To do this, enter the following command for each node that is a part of the cluster:
net use \\nodename\C$
where nodename is the node name. Repeat for each node in the cluster.
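For example, with hypothetical node names node1 and node2, you could verify access to both nodes from a single command prompt:

C:\> for %n in (node1 node2) do net use \\%n\C$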
❏ Shut Down Running Oracle Processes
You may have to shut down running Oracle processes:
Installing on a node with a standalone database not using Oracle ASM: You do
not have to shut down the database while you install Oracle Grid Infrastructure
software.
Installing on an Oracle RAC database node: This installation requires an upgrade
of Oracle Clusterware, because Oracle Clusterware is required to run Oracle RAC.
As part of the upgrade, you must shut down the database one node at a time as
the rolling upgrade proceeds from node to node.
If a Global Services Daemon (GSD) from Oracle9i Release 9.2 or earlier is running,
then stop it before installing Oracle Grid Infrastructure by running the following
command, where Oracle_home is the Oracle Database home that is running the
GSD:
Oracle_home\bin\gsdctl stop
Caution: If you have an existing Oracle9i release 2 (9.2) Oracle Cluster Manager installation, then do not shut down the Oracle Cluster Manager service. Shutting down the Oracle Cluster Manager service prevents the Oracle Grid Infrastructure 11g release 2 (11.2) software from detecting the Oracle9i release 2 node list, and causes failure of the Oracle Grid Infrastructure installation.
Note: If you receive a warning to stop all Oracle services after starting OUI, then run the following command, where Oracle_home is the home that is running the cluster synchronization service (CSS):
Oracle_home\bin\localconfig delete
❏ Prepare for Oracle ASM and Oracle Clusterware Upgrade If You Have Existing Installations
During installation, you can upgrade existing Oracle Clusterware and clustered
Oracle ASM installations to Oracle Grid Infrastructure 11g release 2.
When all member nodes of the cluster are running Oracle Grid Infrastructure 11g
release 2 (11.2), then the new clusterware becomes the active version.
If you intend to install Oracle RAC, then you must first complete the upgrade to
Oracle Grid Infrastructure 11g release 2 (11.2) on all cluster nodes before you
install the Oracle Database 11g release 2 (11.2) version of Oracle RAC.
Note: All Oracle Grid Infrastructure upgrades (upgrades of existing Oracle Clusterware and clustered Oracle ASM installations) are out-of-place upgrades.
❏ Obtain LOCAL_SYSTEM administrator access
Oracle Grid Infrastructure must be installed using an Administrator user, one with
LOCAL_SYSTEM privileges, or a member of the local Administrators group. If
you do not have Administrator access to each node in the cluster, then ask your
system administrator to create and configure the user account on each node.
❏ Decide if you want to install other languages
During an Advanced installation session, you are asked if you want translation of
user interface text into languages other than the default, which is English.
Note: If the language set for the operating system is not supported by the installer, then by default the installer runs in English.
See Also: Oracle Database Globalization Support Guide for detailed
information on character sets and language configuration
❏ Determine your cluster name, public node names, single client access names (SCAN), virtual node names, and planned interface use for each node in the cluster
During installation, you are prompted to provide the public and virtual host name, unless you use third-party cluster software. In that case, the public host name information will be filled in. You are also prompted to identify which interfaces are public, private, or in use for another purpose, such as a network file system.
If you use Grid Naming Service (GNS), then OUI displays the public and virtual
host name addresses labeled as "AUTO" because they are configured automatically
by GNS.
Note: If you configure internet protocol (IP) addresses manually, then avoid changing host names after you complete the Oracle Grid Infrastructure installation, including adding or deleting domain qualifications. A node with a new host name is considered a new host, and must be added to the cluster. A node under the old name will appear to be down until it is removed from the cluster.
When you enter the public node name, use the primary host name of each node. In
other words, use the name displayed by the hostname command. This node
name can be either the permanent or the virtual host name. The node name should
contain only single-byte alphanumeric characters (a to z, A to Z, and 0 to 9). Do
not use underscores (_) or any other characters in the host name.
In addition:
– Provide a cluster name with the following characteristics:
* It must be globally unique throughout your host domain.
* It must be at least one character long and fewer than 15 characters long.
* It must consist of the same character set used for host names, in accordance with Internet Engineering Task Force RFC 1123: Hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9).
Note: Windows operating systems allow underscores to be used with host names, but underscored names are not legal host names for a domain name system (DNS), and they should be avoided.
– If you are not using GNS, then determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses virtual IP (VIP) addresses for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
– Provide SCAN addresses for client access to the cluster. These addresses should be configured as round robin addresses on the domain name service (DNS). Oracle recommends that you supply three SCAN addresses.
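For example, a DNS zone file might define the three recommended SCAN addresses in round-robin fashion as follows; the SCAN name and addresses shown are hypothetical:

mycluster-scan.example.com. IN A 192.0.2.101
mycluster-scan.example.com. IN A 192.0.2.102
mycluster-scan.example.com. IN A 192.0.2.103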
Note: The following is a list of additional information about node IP addresses:
■ For the local node only, OUI automatically fills in public and VIP fields. If your system uses vendor clusterware, then OUI may fill additional fields.
■ Node names are not domain-qualified. If you provide a domain in the host name field during installation, then OUI removes the domain from the name.
■ Interfaces identified as private for private IP addresses should not be accessible as public interfaces. Using public interfaces for Cache Fusion can cause performance problems.

– Identify public and private interfaces. OUI configures public interfaces for use by public and virtual IP addresses, and configures private IP addresses on private interfaces.
The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members.
❏ Identify shared storage for Oracle Clusterware files and prepare storage if necessary
During installation, you are asked to provide paths for the following Oracle
Clusterware files. These files must be shared across all nodes of the cluster, either
on Oracle ASM, or on a supported third party cluster file system:
– Voting disks are files that Oracle Clusterware uses to verify cluster node membership and status.
– Oracle Cluster Registry files (OCR) contain cluster and database configuration information for Oracle Clusterware.
If you intend to use Oracle Cluster File System for Windows (OCFS for Windows),
then you are prompted to indicate which of the available disks you want to format
with OCFS for Windows, what format type you want to use, and to what drive
letter the formatted OCFS for Windows disk is mounted.
If your file system does not have external storage redundancy, then Oracle
recommends that you provide two additional locations for the OCR disk and the
voting disk, for a total of at least three partitions. Creating redundant storage
locations protects the OCR and voting disk if a failure occurs. To completely
protect your cluster, the storage locations given for the copies of the OCR and
voting disks should have completely separate paths, controllers, and disks, so that
no single point of failure is shared by storage locations.
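The separate-paths recommendation can be checked mechanically. A small sketch (the paths are hypothetical) that flags OCR and voting disk locations sharing a drive letter, one coarse indicator of a shared single point of failure:

```python
from pathlib import PureWindowsPath

def share_a_drive(locations):
    """Return True if any two storage locations sit on the same
    Windows drive letter -- a shared single point of failure."""
    drives = [PureWindowsPath(p).drive.upper() for p in locations]
    return len(set(drives)) < len(drives)

# Hypothetical redundant locations, each on its own disk.
locations = [r"E:\grid\stor1\ocr", r"F:\grid\stor2\ocr", r"G:\grid\stor3\ocr"]
print(share_a_drive(locations))  # → False: each copy is on a separate drive
```

Separate drive letters alone do not prove separate controllers and disks; treat this only as a first sanity check.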
See Also: Chapter 3, "Configuring Storage for Grid Infrastructure
for a Cluster and Oracle RAC"
❏ Disconnect all non-persistent drives
Before starting the Oracle Grid Infrastructure installation on Windows, ensure that you disconnect all non-persistent drives that are temporarily mounted on all the nodes. Alternatively, to access the shared drive, make the shared drive persistent using the following command:
net use * \\servername\sharename /persistent:YES
❏ Have intelligent platform management interface (IPMI) configuration completed and have IPMI administrator account information
If you intend to use IPMI, then ensure baseboard management controller (BMC)
interfaces are configured, and have an administration account username and
password to provide when prompted during installation.
For nonstandard installations, if you must change the configuration on one or
more nodes after installation (for example, if you have different administrator
usernames and passwords for BMC interfaces on cluster nodes), then decide if you
want to reconfigure the BMC interface, or modify IPMI administrator account
information after installation as described in Chapter 5, "Oracle Grid
Infrastructure Postinstallation Procedures".
❏ Ensure the Oracle home path you select for the grid infrastructure home uses only American Standard Code for Information Interchange (ASCII) characters
At the time of this release, the use of non-ASCII characters for a grid infrastructure
home or Oracle Database home is not supported.
4.2 Installing Grid Infrastructure with OUI
This section provides information about how to use Oracle Universal Installer (OUI) to
install Oracle Grid Infrastructure. It contains the following sections:
■ Running OUI to Install Grid Infrastructure
■ Installing Grid Infrastructure Using a Cluster Configuration File
■ Silent Installation of Oracle Clusterware
4.2.1 Running OUI to Install Grid Infrastructure
Complete the following steps to install grid infrastructure (Oracle Clusterware and
Oracle ASM) on your cluster. You can run OUI from a Virtual Network Computing
(VNC) session, or Terminal Services in console mode.
At any time during installation, if you have a question about what you are being asked
to do, then click the Help button on the OUI page.
1. Log in to Windows as a member of the Administrators group and run the setup.exe command from the Oracle Database 11g Release 2 (11.2) installation media.
2. Provide information as prompted by OUI. If you need assistance during installation, then click Help. After the installation interview, you can click Details to see the log file.
Note:
■ If you are upgrading your cluster or part of your cluster from Oracle9i release 2 Clusterware to Oracle Clusterware 11g, then to ensure backward compatibility, OUI prevents you from changing the cluster name from the existing name by disabling the cluster name field.
■ To use Oracle9i RAC, you must use Oracle9i Cluster Manager. You can run Oracle9i Cluster Manager on the same server as Oracle Clusterware; however, Oracle Clusterware will manage Oracle Database and Oracle RAC releases 10.1 and higher, and Oracle9i Cluster Manager will manage Oracle9i RAC databases.
■ You cannot use Oracle ASM with Oracle9i Cluster Manager.
3. After you have specified all the information needed for installation, OUI installs the software, then runs the Oracle Net Configuration Assistant (NetCA), Oracle Private Interconnect Configuration Assistant, and Cluster Verification Utility (CVU). These programs run without user intervention.
4. If you selected Oracle ASM as the storage option for the OCR and voting disk files, then the Oracle Automatic Storage Management Configuration Assistant (ASMCA) configures Oracle ASM as part of the installation process. If you did not select Oracle ASM as the storage option for the OCR and voting disk files, then you must start ASMCA manually after installation to configure Oracle ASM.
Start ASMCA using the following command, where Grid_home is the grid infrastructure home:
Grid_home\bin\asmca
When you have verified that your Oracle Grid Infrastructure installation has completed successfully, you can either use it to maintain high availability for other applications, or you can install Oracle Database software.
If you intend to install Oracle Database 11g release 2 (11.2) with Oracle RAC, then refer
to Oracle Real Application Clusters Installation Guide. If you intend to use Oracle Grid
Infrastructure on a standalone server (an Oracle Restart deployment), then refer to
Oracle Database Installation Guide for Microsoft Windows.
See Also: Oracle Real Application Clusters Administration and
Deployment Guide for information about using cloning and node
addition procedures, and Oracle Clusterware Administration and
Deployment Guide for cloning Oracle Grid Infrastructure
4.2.2 Installing Grid Infrastructure Using a Cluster Configuration File
During installation of grid infrastructure, you are given the option either of providing
cluster configuration information manually, or of using a cluster configuration file. A
cluster configuration file is a text file that you can create before starting OUI, which
provides OUI with information about the cluster node names that it requires to
configure the cluster. When creating the text file, save the file with the extension .ccf
because the installer accepts only files of type Oracle Cluster Configuration File (.ccf).
The cluster configuration file should have the following syntax, where node is the name of the public host name for a node in the cluster, and vip is the VIP address for that node:
node vip
node vip
...
For example, if you have three nodes for your cluster, with host names RACnode1, RACnode2, and RACnode3, you could create a text file named cluster_config.ccf, with the following contents:
RACnode1 RACnode1-vip
RACnode2 RACnode2-vip
RACnode3 RACnode3-vip
Oracle suggests that you consider using a cluster configuration file if you intend to
perform repeated installations on a test cluster, or if you intend to perform an
installation on many nodes.
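For repeated installations, generating the file contents can be scripted. A minimal sketch, assuming the hostname-vip naming convention recommended earlier in this chapter:

```python
def make_cluster_config(nodes, vip_suffix="-vip"):
    """Build the contents of a cluster configuration (.ccf) file:
    one 'hostname hostname-vip' line per node."""
    return "".join(f"{node} {node}{vip_suffix}\n" for node in nodes)

content = make_cluster_config(["RACnode1", "RACnode2", "RACnode3"])
print(content, end="")
# → RACnode1 RACnode1-vip
#   RACnode2 RACnode2-vip
#   RACnode3 RACnode3-vip
# Save the result with the .ccf extension the installer expects, for example:
# pathlib.Path("cluster_config.ccf").write_text(content)
```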
See Also: Appendix B, "Installing and Configuring Oracle Grid
Infrastructure Using Response Files" for more information about using
configuration files
4.2.3 Silent Installation of Oracle Clusterware
Complete the following procedure to perform a noninteractive (silent) installation:
1. On the installation media, navigate to the response directory.
2. Using a text editor, open the response file crs_install.rsp. Follow the directions in each section, and supply values appropriate for your environment.
3. Use the following command syntax to run OUI in silent mode:
setup.exe -silent -responseFile path_to_your_response_file
For example:
E:\> setup.exe -silent -responseFile C:\users\oracle\installGrid.rsp
See Also: Appendix B, "Installing and Configuring Oracle Grid
Infrastructure Using Response Files" for more information about
performing silent installations using configuration files
4.3 Installing Grid Infrastructure Using a Software-Only Installation
A software-only installation only copies the Oracle Grid Infrastructure for a cluster
binaries to the specified node. Configuring Oracle Grid Infrastructure for a cluster and
Oracle ASM on all the nodes and then adding the nodes to the cluster must be done
manually after the installation has finished.
When you perform a software-only installation of Oracle Grid Infrastructure software,
you must complete several manual configuration steps to enable Oracle Clusterware
after you install the software on each node you intend to be a member of the cluster.
Note: Oracle recommends that only advanced users perform the software-only installation, because this installation method provides no validation of the installation and this installation option requires manual postinstallation steps to enable the grid infrastructure software.
If you select a software-only installation, then ensure that the Oracle Grid
Infrastructure home path is identical on each cluster member node.
Performing a software-only installation involves the following steps:
1. Installing the Software Binaries
2. Configuring the Software Binaries
4.3.1 Installing the Software Binaries
1. Log in to Windows as a member of the Administrators group and run the setup.exe command from the Oracle Database 11g Release 2 (11.2) installation media.
2. Complete a software-only installation of Oracle Grid Infrastructure for a cluster on the node.
See "Configuring the Software Binaries" on page 8 for information about configuring Oracle Grid Infrastructure after performing a software-only installation.
3. Enable the Oracle RAC option for Oracle Grid Infrastructure by renaming the orarac11.dll.dbl file located in the Grid_home\bin directory to orarac11.dll.
4. Verify that all of the cluster nodes meet the installation requirements using the command runcluvfy.bat stage -pre crsinst -n node_list. Ensure that you have completed all storage and server preinstallation requirements.
5. Copy the Grid home directory to the same location on the other nodes you are configuring as cluster member nodes.
6. On each node that you copied the Grid home to, run the clone.pl script. Do not run the clone.pl script on the node where you performed the software-only installation.
4.3.2 Configuring the Software Binaries
To configure and activate a software-only grid infrastructure installation for a cluster,
complete the following tasks:
1. Using a text editor, modify the template file Grid_home\crs\install\crsconfig_params for the installer to use to configure the cluster. For example:
...
OCR_LOCATIONS=E:\grid\stor1\ocr,F:\grid\stor2\ocr
CLUSTER_NAME=racwin-cluster
HOST_NAME_LIST=node1,node2,node3,node4
NODE_NAME_LIST=node1,node2,node3,node4
PRIVATE_NAME_LIST=
VOTING_DISKS=E:\grid\stor1\vdsk,F:\grid\stor2\vdsk,G:\grid\stor3\vdsk
...
CRS_STORAGE_OPTION=2
CSS_LEASEDURATION=400
CRS_NODEVIPS="node1-vip/255.255.255.0/PublicNIC,node2-vip/255.255.255.0/PublicNIC,node3-vip/255.255.255.0/PublicNIC,node4-vip/255.255.255.0/PublicNIC"
NODELIST=node1,node2,node3,node4
NETWORKS="PublicNIC"/192.0.2.0:public,"PrivateNIC"/10.0.0.0:cluster_interconnect
SCAN_NAME=racwin-cluster
SCAN_PORT=1521
...
#### Required by OUI add node
NEW_HOST_NAME_LIST=
NEW_NODE_NAME_LIST=
NEW_PRIVATE_NAME_LIST=
...
2. On all nodes, place the crsconfig_params file in the path Grid_home\crs\install\crsconfig_params, where Grid_home is the path to the Oracle Grid Infrastructure home for a cluster. For example, on node1 you might issue a command similar to the following:
C:\> xcopy app\orauser\grid\crs\install\crsconfig_params \\NODE2\app\orauser\grid\crs\install /v
3. After configuring the crsconfig_params file, run the rootcrs.pl script from the Grid_home on each node, using the following syntax:
Grid_home\perl\bin\perl -IGrid_home\perl\lib -IGrid_home\crs\install Grid_home\crs\install\rootcrs.pl
For example, if your Grid home is C:\app\11.2.0\grid, then you would run the following script:
C:\app\11.2.0\grid\perl\bin\perl -IC:\app\11.2.0\grid\perl\lib -IC:\app\11.2.0\grid\crs\install C:\app\11.2.0\grid\crs\install\rootcrs.pl
4. Change directory to Grid_home\oui\bin, where Grid_home is the path of the Grid Infrastructure home on each cluster member node.
5. Enter the following command syntax, where Grid_home is the path of the Grid home on each cluster member node, and node_list is a comma-delimited list of nodes on which you want the software enabled:
setup.exe -updateNodeList ORACLE_HOME=Grid_home -defaultHomeName "CLUSTER_NODES={node_list}" CRS=TRUE
For example:
C:\..\bin> setup.exe -updateNodeList ORACLE_HOME=C:\app\orauser\grid -defaultHomeName "CLUSTER_NODES={node1,node2,node3,node4}" CRS=TRUE
To enable the Oracle Clusterware installation on the local node only, enter the following command, where Grid_home is the Grid home on the local node, and node_list is a comma-delimited list of nodes on which you want the software enabled:
setup.exe -updateNodeList -local ORACLE_HOME=Grid_home -defaultHomeName "CLUSTER_NODES={node_list}" CRS=TRUE
For example:
C:\..\bin> setup.exe -updateNodeList -local ORACLE_HOME=C:\app\orauser\grid -defaultHomeName "CLUSTER_NODES={node1,node2,node3,node4}" CRS=TRUE
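Because the node-dependent values in crsconfig_params follow a regular pattern, they can be derived from the node list rather than typed by hand. A sketch; the netmask and the interface name PublicNIC mirror the earlier example and are assumptions for your environment:

```python
def build_crs_params(nodes, netmask="255.255.255.0", nic="PublicNIC"):
    """Derive the node-dependent crsconfig_params entries from a node list,
    following the hostname-vip naming convention used in this chapter."""
    vips = ",".join(f"{node}-vip/{netmask}/{nic}" for node in nodes)
    return {
        "HOST_NAME_LIST": ",".join(nodes),
        "NODE_NAME_LIST": ",".join(nodes),
        "NODELIST": ",".join(nodes),
        "CRS_NODEVIPS": f'"{vips}"',
    }

params = build_crs_params(["node1", "node2", "node3", "node4"])
for key, value in params.items():
    print(f"{key}={value}")
```

Merging these generated lines into the template file is left to the reader; the remaining parameters (storage options, SCAN name, and so on) are site-specific.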
To configure and activate a software-only grid infrastructure installation for a
standalone server, refer to Oracle Database Installation Guide for Microsoft Windows.
4.4 Confirming Oracle Clusterware Function
After installation, log in as a member of the Administrators group, and run the following command from the bin directory in the Grid home to confirm that your Oracle Clusterware installation is running correctly:
crsctl check cluster -all
For example:
C:\..\bin> crsctl check cluster -all
*************************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*************************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*************************************************************************
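The crsctl output shown above is regular enough to check mechanically across many nodes. A sketch of a parser (not an Oracle tool) that reports, per node, whether every listed service is online:

```python
def cluster_is_healthy(crsctl_output):
    """Parse `crsctl check cluster -all` output and return a dict mapping
    each node name to True only if every CRS-* line for it says 'is online'."""
    status = {}
    node = None
    for line in crsctl_output.splitlines():
        line = line.strip()
        if line.endswith(":") and not line.startswith("CRS-"):
            node = line[:-1]          # a "node1:" header starts a new node
            status[node] = True
        elif line.startswith("CRS-") and node is not None:
            status[node] &= "is online" in line
    return status

sample = """node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
node2:
CRS-4533: Event Manager is online"""
print(cluster_is_healthy(sample))  # → {'node1': True, 'node2': True}
```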
4.5 Confirming Oracle ASM Function for Oracle Clusterware Files
If you installed the OCR and voting disk files on Oracle ASM, then run the following
command from the Grid_home\bin directory to confirm that your Oracle ASM
installation is running:
srvctl status asm
For example:
C:\app\11.2.0\grid\BIN> srvctl status asm
ASM is running on node node1
ASM is running on node node2
Oracle ASM is running only if it is needed for Oracle Clusterware files. If you did not
configure Oracle Clusterware storage on Oracle ASM during installation, then the
Oracle ASM instance should be down.
Note: To manage Oracle ASM or Oracle Net Services on Oracle Clusterware 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle Grid Infrastructure home for a cluster (Grid home). If you have Oracle RAC or Oracle Database installed, then you cannot use the srvctl binary in the database home (Oracle home) to manage Oracle ASM or Oracle Net Services.
5 Oracle Grid Infrastructure Postinstallation Procedures
This chapter describes how to complete the postinstallation tasks after you have
installed the Oracle Grid Infrastructure software.
This chapter contains the following topics:
■ Required Postinstallation Tasks
■ Recommended Postinstallation Tasks
■ Using Older Oracle Database Versions with Grid Infrastructure
■ Modifying Oracle Clusterware Binaries After Installation
5.1 Required Postinstallation Tasks
You must perform the following tasks after completing your installation:
■ Download and Install Patch Updates
■ Configure Exceptions for the Windows Firewall

Note: In prior releases, backing up the voting disks using the ocopy.exe command was a required postinstallation task. With Oracle Clusterware release 11.2 and later, backing up a voting disk is no longer required.
5.1.1 Download and Install Patch Updates
Refer to the My Oracle Support Web site for required patch updates for your
installation.
To download required patch updates:
1. Use a Web browser to view the My Oracle Support Web site:
https://support.oracle.com
2. Log in to the My Oracle Support Web site.

Note: If you are not a My Oracle Support registered user, then click Register Here and register.

3. On the main My Oracle Support page, select Patches & Updates.
4. On the Patches & Updates page, click Advanced "Classic" Patch Search.
To search for patch sets, click the Latest Patchsets link under the heading Oracle Server/Tools.
5. On the Advanced Search page, click the search icon next to the Product or Product Family field.
6. In the Search and Select: Product Family field, select Database and Tools in the Search list field, enter Database in the text field, and click Go. Click Oracle Database Family in the list of Product Names to select it.
7. Click the search icon next to the Release field. In the Search and Select: Release window, type 11.2 in the Search field, then click Go. Click Oracle 11.2.0.1.0 in the Release Name column to select it.
8. Select your platform from the Platform or Language drop-down list (for example, Microsoft Windows Server 2003 R2).
9. At the bottom of the Advanced Search section, click Go. Any available patch updates appear under the Results heading.
10. Click the patch number to view the patch description and access the README file
for the patch. You can also download the patch from this page.
11. Click View README and read the page that appears. The README page
contains information about the patch and how to apply the patch to your
installation.
Click Patch Details to return to the previous page.
12. Click Download, and save the patch file on your system.
13. Use the unzip utility provided with your Oracle software to uncompress the
Oracle patch updates that you download from My Oracle Support. The unzip
utility is located in the Grid_home\BIN directory.
14. Refer to Appendix D, "How to Upgrade to Oracle Grid Infrastructure 11g Release
2" for information about how to stop database processes in preparation for
installing patches.
5.1.2 Configure Exceptions for the Windows Firewall
If the Windows Firewall feature is enabled on one or more of the nodes in your cluster,
then virtually all transmission control protocol (TCP) network ports are blocked to
incoming connections. As a result, any Oracle product that listens for incoming
connections on a TCP port will not receive any of those connection requests and the
clients making those connections will report errors.
You must configure exceptions for the Windows Firewall if your system meets all of
the following conditions:
■ Oracle server-side components are installed on a computer running a supported version of Microsoft Windows. The list of components includes the Oracle Database, Oracle grid infrastructure, Oracle Real Application Clusters (Oracle RAC), network listeners, or any Web servers or services.
■ The Windows computer in question accepts connections from other computers over the network. If no other computers connect to the Windows computer to access the Oracle software, then no post-installation configuration steps are required and the Oracle software functions as expected.
■ The Windows computer in question is configured to run the Windows Firewall. If the Windows Firewall is not enabled, then no post-installation configuration steps are required.
If all of the above conditions are met, then the Windows Firewall must be configured
to allow successful incoming connections to the Oracle software. To enable Oracle
software to accept connection requests, Windows Firewall must be configured by
either opening up specific static TCP ports in the firewall or by creating exceptions for
specific executables so they can receive connection requests on any ports they choose.
This firewall configuration can be done by one of the following methods:
■ Start the Windows Firewall application, select the Exceptions tab, and then click either Add Program or Add Port to create exceptions for the Oracle software.
■ From the command prompt, use the netsh firewall add... command.
■ When Windows notifies you that a foreground application is attempting to listen on a port, allow it to create an exception for that executable. If you choose to create the exception in this way, the effect is the same as creating an exception for the executable either through Control Panel or from the command line.
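The netsh approach can be scripted for the executable lists in the sections that follow. A sketch that generates one command per executable, assuming the pre-Vista `netsh firewall add allowedprogram` syntax available on Windows Server 2003; the Oracle home path is hypothetical:

```python
def firewall_exception_cmds(executables):
    """Generate one `netsh firewall add allowedprogram` command per
    (display name, executable path) pair for the exception list."""
    return [
        f'netsh firewall add allowedprogram program="{path}" name="{name}" mode=ENABLE'
        for name, path in executables
    ]

# Hypothetical Oracle home; substitute your own paths.
for cmd in firewall_exception_cmds([
    ("Oracle Database", r"C:\app\oracle\11.2.0\dbhome_1\bin\oracle.exe"),
    ("Oracle Listener", r"C:\app\oracle\11.2.0\dbhome_1\bin\tnslsnr.exe"),
]):
    print(cmd)
```

Run the printed commands from an elevated command prompt on each node; with multiple Oracle homes, generate one command per home, per executable.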
The following sections list the Oracle Database 11g release 2 executables that listen on TCP ports on Windows, along with a brief description of each executable. It is recommended that these executables (if in use and accepting connections from a remote client computer) be added to the exceptions list for the Windows Firewall to ensure correct operation. In addition, if multiple Oracle homes are in use, firewall exceptions may have to be created for the same executable, for example, oracle.exe, multiple times, once for each Oracle home from which that executable loads.
■ Firewall Exceptions for Oracle Database
■ Firewall Exceptions for Oracle Database Examples (or the Companion CD)
■ Firewall Exceptions for Oracle Gateways
■ Firewall Exceptions for Oracle Clusterware and Oracle ASM
■ Firewall Exceptions for Oracle RAC Database
■ Firewall Exceptions for Oracle Cluster File System for Windows
■ Firewall Exceptions for Other Oracle Products
5.1.2.1 Firewall Exceptions for Oracle Database
For basic database operation and connectivity from remote clients, such as SQL*Plus,
Oracle Call Interface (OCI), Open Database Connectivity (ODBC), Object Linking and
Embedding database (OLE DB) applications, and so on, the following executables
must be added to the Windows Firewall exception list:
■ Oracle_home\bin\oracle.exe - Oracle Database executable
■ Oracle_home\bin\tnslsnr.exe - Oracle Listener
If you use remote monitoring capabilities for your database, the following executables
must be added to the Windows Firewall exception list:
■ Oracle_home\bin\emagent.exe - Oracle Database Control
■ Oracle_home\jdk\bin\java.exe - Java Virtual Machine (JVM) for Enterprise Manager Database Control
5.1.2.2 Firewall Exceptions for Oracle Database Examples (or the Companion CD)
After installing the Oracle Database Companion CD, the following executables must
be added to the Windows Firewall exception list:
■ Oracle_home\opmn\bin\opmn.exe - Oracle Process Manager
■ Oracle_home\jdk\bin\java.exe - JVM
5.1.2.3 Firewall Exceptions for Oracle Gateways
If your Oracle database interacts with non-Oracle software through a gateway, then you must add the gateway executable to the Windows Firewall exception list. Table 5–1 lists the gateway executables used to access non-Oracle software.
Table 5–1 Oracle Executables Used to Access Non-Oracle Software

omtsreco.exe - Oracle Services for Microsoft Transaction Server
dg4sybs.exe - Oracle Database Gateway for Sybase
dg4tera.exe - Oracle Database Gateway for Teradata
dg4msql.exe - Oracle Database Gateway for SQL Server
dg4db2.exe - Oracle Database Gateway for Distributed Relational Database Architecture (DRDA)
pg4arv.exe - Oracle Database Gateway for Advanced Program to Program Communication (APPC)
pg4t4ic.exe - Oracle Database Gateway for APPC
dg4mqs.exe - Oracle Database Gateway for WebSphere MQ
dg4mqc.exe - Oracle Database Gateway for WebSphere MQ
dg4odbc.exe - Oracle Database Gateway for ODBC
5.1.2.4 Firewall Exceptions for Oracle Clusterware and Oracle ASM
If you installed the Oracle grid infrastructure software on the nodes in your cluster,
then you can enable the Windows Firewall only after adding the following executables
and ports to the Firewall exception list. The Firewall Exception list must be updated on
each node.
■ Grid_home\bin\gpnpd.exe - Grid Plug and Play daemon
■ Grid_home\bin\oracle.exe - Oracle Automatic Storage Management (Oracle ASM) executable (if using Oracle ASM for storage)
■ Grid_home\bin\racgvip.exe - Virtual Internet Protocol Configuration Assistant
■ Grid_home\bin\evmd.exe - OracleEVMService
■ Grid_home\bin\crsd.exe - OracleCRService
■ Grid_home\bin\ocssd.exe - OracleCSService
■ Grid_home\bin\octssd.exe - Cluster Time Synchronization Service daemon
■ Grid_home\bin\mDNSResponder.exe - multicast domain name system (DNS) Responder Daemon
■ Grid_home\bin\gipcd.exe - Grid inter-process communication (IPC) daemon
■ Grid_home\bin\gnsd.exe - Grid Naming Service (GNS) daemon
■ Grid_home\bin\ohasd.exe - OracleOHService
■ Grid_home\bin\TNSLSNR.EXE - single client access name (SCAN) listener and local listener for Oracle RAC database and Oracle ASM
■ Grid_home\opmn\bin\ons.exe - Oracle Notification Service (ONS)
■ Grid_home\jdk\jre\bin\java.exe - JVM
5.1.2.5 Firewall Exceptions for Oracle RAC Database
For the Oracle RAC database, the executables that require exceptions are:
■ Oracle_home\bin\oracle.exe - Oracle RAC database instance
■ Oracle_home\bin\emagent.exe - Oracle Enterprise Manager agent
■ Oracle_home\jdk\bin\java.exe - For the Oracle Enterprise Manager Database Console
In addition, the following ports should be added to the Windows Firewall exception list:
■ Microsoft file sharing server message block (SMB)
– User Datagram Protocol (UDP) ports from 135 through 139
– TCP ports from 135 through 139
■ Direct-hosted SMB traffic without a network basic I/O system (NetBIOS)
– port 445 (TCP and UDP)
5.1.2.6 Firewall Exceptions for Oracle Cluster File System for Windows
If you use Oracle Cluster File System for Windows (OCFS for Windows) to store the
Oracle Clusterware files, or Oracle RAC database files, then you must add the
following exceptions to the Windows Firewall:
■ Grid_home\cfs\Ocfsfindvol.exe - OCFS for Windows Volume Service
■ %WINDOWS_HOME%\system32\drivers\Ocfs.sys - System file for OCFS (if using OCFS for Windows for Oracle Clusterware storage)
5.1.2.7 Firewall Exceptions for Other Oracle Products
In addition to all the previously listed exceptions, if you use any of the Oracle software listed in Table 5–2, then you must create an exception for Windows Firewall for the associated executable.
Table 5–2 Other Oracle Software Products Requiring Windows Firewall Exceptions

Data Guard Manager - dgmgrl.exe
Oracle Internet Directory lightweight directory access protocol (LDAP) Server - oidldapd.exe
External Procedural Calls - extproc.exe
5.2 Recommended Postinstallation Tasks
Oracle recommends that you complete the following tasks as needed after installing
Oracle Grid Infrastructure:
■ Install Troubleshooting Tool
■ Optimize Memory Usage for Programs
■ Create a Fast Recovery Area Disk Group
5.2.1 Install Troubleshooting Tool
To address troubleshooting issues, Oracle recommends that you install Instantaneous
Problem Detection OS Tool (IPD/OS).
5.2.1.1 Installing Instantaneous Problem Detection OS Tool (IPD/OS)
On Windows systems running Windows Server 2003 with service pack 2 or higher,
install IPD/OS.
The IPD/OS tool is designed to detect and analyze operating system and cluster
resource-related degradation and failures. The tool can provide better explanations for
many issues that occur in clusters where Oracle Clusterware and Oracle RAC are
running, such as node evictions. It tracks the operating system resource consumption
at each node, process, and device level continuously. It collects and analyzes
clusterwide data. In real time mode, when thresholds are reached, an alert is shown to
the operator. For root cause analysis, historical data can be replayed to understand
what was happening at the time of failure.
You can download the tool at the following URL:
http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html
To prevent performance problems, do not run the graphical user interface (GUI) for IPD/OS on an Oracle RAC node. You can install the client on any Linux or Windows computer that is not a cluster member node, and view the data from that client.
5.2.2 Optimize Memory Usage for Programs
The Windows operating system should be optimized for memory usage of "Programs" instead of "System Caching". To modify the memory optimization settings, perform the following steps:
1. From the Start Menu, select Control Panel, then System.
2. In the System Properties window, click the Advanced tab.
3. In the Performance section, click Settings.
4. In the Performance Options window, click the Advanced tab.
5. In the Memory Usage section, make sure Programs is selected.
5.2.3 Create a Fast Recovery Area Disk Group
During installation, if you select Oracle ASM for storage, a single disk group is
created. If you plan to add an Oracle Database for a standalone server or an Oracle
RAC database, then you should create a separate disk group for the fast recovery area.
5.2.3.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group
The fast recovery area is a unified storage location for all Oracle Database files related to recovery. Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the path for the fast recovery area to enable on-disk backups and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.
When you enable the fast recovery area in the database initialization parameter file, all
RMAN backups, archive logs, control file automatic backups, and database copies are
written to the fast recovery area. RMAN automatically manages files in the fast
recovery area by deleting obsolete backups and archive files that are no longer
required for recovery.
To use a fast recovery area in Oracle RAC, you must place it on an Oracle ASM disk group, a cluster file system, or on a shared directory that is configured through Direct network file system (NFS) for each Oracle RAC instance. In other words, the fast recovery area must be shared among all of the instances of an Oracle RAC database. Oracle recommends that you create a fast recovery area disk group. Oracle Clusterware files and Oracle Database files can be placed on the same disk group as fast recovery area files. However, Oracle recommends that you create a separate fast recovery area disk group to reduce storage device contention.
The fast recovery area is enabled by setting the parameter DB_RECOVERY_FILE_DEST to the same value on all instances. The size of the fast recovery area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast recovery area, the
more useful it becomes. For ease of use, Oracle recommends that you create a fast
recovery area disk group on storage devices that can contain at least three days of
recovery information. Ideally, the fast recovery area should be large enough to hold a
copy of all of your data files and control files, the online redo logs, and the archived
redo log files needed to recover your database using the data file backups kept under
your retention policy.
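As a concrete illustration, the two parameters might appear in each instance's server parameter file or text initialization parameter file along these lines; the disk group name FRA and the 40 GB size are hypothetical placeholders, not recommendations:

```
# Hypothetical init.ora fragment; use the same values on every instance.
DB_RECOVERY_FILE_DEST='+FRA'
DB_RECOVERY_FILE_DEST_SIZE=40G
```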
Multiple databases can use the same fast recovery area. For example, assume you have
created one fast recovery area disk group on disks with 150 gigabytes (GB) of storage,
shared by three different databases. You can set the size of the fast recovery area for
each database depending on the importance of each database. For example, if
database1 is your least important database, database2 is of greater importance,
and database3 is of greatest importance, then you can set different
DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention
target for each database: 30 GB for database1, 50 GB for database2, and 70 GB for
database3.
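The allocation arithmetic in this example can be sanity-checked with a short script; the sizes are the hypothetical values from the paragraph above, not fixed requirements:

```shell
# Hypothetical per-database DB_RECOVERY_FILE_DEST_SIZE quotas (GB)
# from the example above; adjust them to your own retention targets.
DG_SIZE=150              # total capacity of the fast recovery area disk group
DB1=30; DB2=50; DB3=70   # database1, database2, database3

TOTAL=$((DB1 + DB2 + DB3))
if [ "$TOTAL" -le "$DG_SIZE" ]; then
  echo "OK: ${TOTAL} GB allocated of ${DG_SIZE} GB"
else
  echo "WARNING: quotas (${TOTAL} GB) exceed the disk group (${DG_SIZE} GB)"
fi
```

If the quotas exceed the disk group size, either increase the disk group or lower the per-database settings.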
See Also: Oracle Database Storage Administrator's Guide
5.2.3.2 Creating the Fast Recovery Area Disk Group
To create a fast recovery area disk group:
1. Navigate to the Grid home bin directory, and start Oracle ASM Configuration
Assistant (ASMCA). For example:
C:\> cd app\11.2.0\grid\bin
C:\> asmca
2. ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.
3. The Create Disk Groups window opens.
In the Disk Group Name field, enter a descriptive name for the fast recovery area
disk group, for example, FRA.
In the Redundancy section, select the level of redundancy you want to use.
In the Select Member Disks field, select eligible disks to be added to the fast
recovery area, and click OK.
4. The Diskgroup Creation window opens to inform you when disk group creation is
complete. Click OK.
5. Click Exit.
5.3 Using Older Oracle Database Versions with Grid Infrastructure
Review the following sections for information about using older Oracle Database
releases with Oracle Clusterware 11g release 2 (11.2) installations:
■ General Restrictions for Using Older Oracle Database Versions
■ Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x
■ Using DBCA or Applying Patches for Oracle Database Release 10.x or 11.x
■ Enabling the Global Services Daemon (GSD) for Oracle Database Release 9.2
■ Using the Correct LSNRCTL Commands
■ Starting and Stopping Oracle Clusterware Resources
5.3.1 General Restrictions for Using Older Oracle Database Versions
You can use Oracle Database release 9.2, release 10.x, and release 11.1 with Oracle
Clusterware and Oracle ASM release 11.2.
If you upgrade an existing version of Oracle Clusterware and Oracle ASM to Oracle
grid infrastructure release 11.2 (which includes Oracle Clusterware and Oracle ASM),
and also upgrade your Oracle RAC database to release 11.2, then the required
configuration of existing databases is completed automatically.
If you plan to use Oracle Clusterware and Oracle ASM release 11.2 with Oracle RAC
release 10.x or 11.1, then you must complete additional configuration tasks as
described in the following sections. You must also apply patches to the Oracle RAC
10.x or 11.1 software installations before those releases will work correctly with Oracle
Clusterware 11.2 and Oracle ASM 11.2. Refer to the Oracle Database Readme for
information about specific patches.
Note: If you are upgrading Oracle RAC or Oracle Database from
releases 11.1.0.7, 11.1.0.6, or 10.2.0.4, and you upgraded Oracle
Clusterware and Oracle ASM to release 11.2, then Oracle recommends
that you check for the latest recommended patches for the release you
are upgrading from, and install those patches as needed before
upgrading to Oracle RAC or Oracle Database release 11.2.
For more information on recommended patches, refer to "Oracle
Upgrade Companion," which is available through Note 785351.1 on
My Oracle Support:
https://metalink.oracle.com
You can also refer to Oracle Support Notes 756388.1 and 756671.1 for
the current list of recommended patches for each release.
See Also: "Download and Install Patch Updates" on page 5-1
5.3.2 Using DBCA or Applying Patches for Oracle Database Release 10.x or 11.x
Before you use DBCA to create an Oracle RAC or Oracle Database 10.x or 11.1
database on an Oracle Clusterware and Oracle ASM release 11.2 installation, you must
install patches to the Oracle RAC or Oracle Database home. Refer to the Oracle
Database Readme for information about specific patches.
See Also: "Download and Install Patch Updates" on page 5-1
Before applying any patches to your Oracle Database 10g release 2 or Oracle Database
11g release 1 software, you must first stop the OracleRemExecService Windows service
on all the nodes in your cluster. If you do not stop this service before applying patches,
then you will receive errors during the patching operation and the Oracle software
will not be patched correctly.
5.3.3 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x
When Oracle Database version 10.x or 11.x is installed on a new Oracle Grid
Infrastructure for a cluster configuration, it is configured for dynamic cluster
configuration, in which some or all internet protocol (IP) addresses are provisionally
assigned, and other cluster identification information is dynamic. This configuration is
incompatible with older database releases, which require fixed addresses and
configurations.
You can change the nodes where you want to run the older database to create a
persistent configuration. Creating a persistent configuration for a node is called
pinning a node.
To pin a node in preparation for installing an older Oracle Database version, use
CRS_home\bin\crsctl with the following command syntax, where nodes is a
space-delimited list of one or more nodes in the cluster whose configuration you want
to pin:
crsctl pin css -n nodes
For example, to pin nodes node3 and node4, log in as an Administrator user and
enter the following command:
C:\> crsctl pin css -n node3 node4
To determine if a node is in a pinned or unpinned state, use
CRS_home\bin\olsnodes with the following syntax:
olsnodes -t -n
For example, to list the state of all nodes, use the following command:
C:\> app\11.2.0\grid\bin\olsnodes -t -n
node1   1   Pinned
node2   2   Pinned
node3   3   Pinned
node4   4   Pinned
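When the cluster has many nodes, output in this format can be filtered with standard text tools. The following sketch assumes the whitespace-separated name, number, and state columns shown above, and works on a saved copy of the output; the node names and states are hypothetical:

```shell
# Create a sample copy of "olsnodes -t -n" output so the sketch is
# self-contained; in practice, redirect the real command's output here.
cat > olsnodes.txt <<'EOF'
node1 1 Pinned
node2 2 Pinned
node3 3 Unpinned
node4 4 Pinned
EOF

# Print the names of nodes that are not pinned (third column check).
awk '$3 == "Unpinned" { print $1 }' olsnodes.txt
```

With the sample data above, only node3 is reported.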
To list the state of a particular node, specify that node's name on the command line, as
shown in the following example:
C:\> app\11.2.0\grid\bin\olsnodes -t -n node3
node3   3   Pinned
Oracle Grid Infrastructure Postinstallation Procedures
5-9
See Also: Oracle Clusterware Administration and Deployment Guide for
more information about pinning and unpinning nodes
5.3.4 Enabling the Global Services Daemon (GSD) for Oracle Database Release 9.2
To install an Oracle Database version 9.2 on a cluster running Oracle Clusterware 11g
(version 11.1 or higher), perform the following additional configuration steps to
prevent permission errors:
1. Create the OracleCRSToken_username Windows service for the Oracle9i software
owner on each node in the cluster. After the service is created, change the
ownership for the Global Services Daemon (GSD) resource to the Oracle9i software
owner.
Run the following commands on each node using the specified syntax, where
Grid_home is the Oracle grid infrastructure home, 92_domain is the domain of
the Oracle9i software owner, 92_username is the user name of the Oracle9i software
owner, and nodename is the name of the node on which the service is being
configured:
Grid_home\bin\crsctl add crs administrator 92_domain/92_username
Grid_home\bin\crsctl setperm resource ora.nodename.gsd -o 92_domain/92_username
For example, if the Oracle Clusterware home is C:\app\11.2.0\grid, the
domain is ORAUSERS, the node name is node1, and the user name is ora92, then
you would enter the following commands:
C:\app\11.2.0\grid\bin> crsctl add crs administrator ORAUSERS/ora92
C:\app\11.2.0\grid\bin> crsctl setperm resource ora.node1.gsd -o ORAUSERS/ora92
2. Enable and start the GSD daemon on all nodes in the cluster.
On any node in the cluster, run commands using the following syntax, where
Grid_home is the Oracle grid infrastructure home:
Grid_home\bin\srvctl enable nodeapps -g
Grid_home\bin\srvctl start nodeapps
For example, if the Oracle Clusterware home is C:\app\11.2.0\grid, then
enter the following commands:
C:\app\11.2.0\grid\bin> srvctl enable nodeapps -g
C:\app\11.2.0\grid\bin> srvctl start nodeapps
5.3.5 Using the Correct LSNRCTL Commands
To administer local and SCAN listeners using the lsnrctl command in Oracle
Clusterware and Oracle ASM 11g release 2, use the Listener Control utility in the grid
infrastructure home (Grid home). Do not attempt to use the lsnrctl commands from
Oracle home locations for previous releases, because they cannot be used with the new
release.
5.3.6 Starting and Stopping Oracle Clusterware Resources
Before shutting down Oracle Clusterware release 11.2, if you have an Oracle Database
10g release 2 or Oracle Database 11g release 1 database registered with Oracle
Clusterware, then you must do one of the following:
■ Stop the Oracle Database 10g release 2 or Oracle Database 11g release 1 database
instances first, then stop the Oracle Clusterware stack
■ Use the crsctl stop crs -f command to shut down the Oracle Clusterware
stack and ignore any errors that are raised
5.4 Modifying Oracle Clusterware Binaries After Installation
After installation, if you must modify the software installed in your Grid home, then
you must first stop the Oracle Clusterware stack. For example, to apply a one-off patch
or modify any of the dynamic-link libraries (DLLs) used by Oracle Clusterware or
Oracle ASM, you must follow these steps to stop and restart the Oracle Clusterware
stack.
Caution: To put the changes you make to the Oracle grid
infrastructure home into effect, you must shut down all executables
that run in the Grid home directory and then restart them. In addition,
shut down any applications that use Oracle shared libraries or DLL
files in the Grid home.
Prepare the Oracle grid infrastructure home for modification using the following
procedure:
1. Log in as a member of the Administrators group and change directory to the
path Grid_home\bin, where Grid_home is the path to the Oracle grid
infrastructure home.
2. Shut down the Oracle Clusterware stack using the following command:
C:\..\bin> crsctl stop crs -f
3. After the Oracle Clusterware stack is completely shut down, perform the updates
to the software installed in the Grid home.
4. Use the following command to restart the Oracle Clusterware stack:
C:\..\bin> crsctl start crs
5. Repeat steps 1 through 4 on each cluster member node.
6
How to Modify or Deinstall Oracle Grid Infrastructure
This chapter describes how to remove or deconfigure Oracle Clusterware software
from your server.
This chapter contains the following topics:
■ Deciding When to Deinstall Oracle Clusterware
■ Adding Standalone Grid Infrastructure Servers to a Cluster
■ Deconfiguring Oracle Clusterware without Removing Binaries
■ Removing Oracle Clusterware and Oracle Automatic Storage Management
See Also: Product-specific documentation for requirements and
restrictions to remove an individual product
6.1 Deciding When to Deinstall Oracle Clusterware
Remove installed components in the following situations:
■ You have successfully installed Oracle Clusterware, and you want to remove the
Clusterware installation, either in an educational environment or a test
environment.
■ You have encountered errors during or after installing or upgrading Oracle
Clusterware, and you want to reattempt an installation.
■ Your installation or upgrade stopped because of a hardware or operating system
failure.
■ You are advised by Oracle Support to reinstall Oracle Clusterware.
6.2 Adding Standalone Grid Infrastructure Servers to a Cluster
If you have an Oracle Database installation using Oracle Restart (Oracle Grid
Infrastructure for a standalone server), and you want to configure that server as a
cluster member node, then complete the following tasks:
1. Inspect the Oracle configuration with SRVCTL using the following syntax, where
db_unique_name is the unique name for the database, and lsnrname is the name of
the listener for the database:
srvctl config database -d db_unique_name
srvctl config service -d db_unique_name
srvctl config listener -l lsnrname
Record the configuration information for the server, because you will need this
information in a later step.
2. Change directory to Grid_home\crs\install, for example:
C:\> cd app\product\grid\crs\install
3. Deconfigure and deinstall the Oracle grid infrastructure installation for a
standalone server (Oracle Restart) using the following command:
C:\..\install> roothas.pl -deconfig
4. Prepare the server for Oracle Clusterware configuration, as described in either
Chapter 1, "Typical Installation for Oracle Grid Infrastructure for a Cluster" or
Chapter 2, "Advanced Installation Oracle Grid Infrastructure for a Cluster
Preinstallation Tasks".
5. Install and configure Oracle grid infrastructure for a cluster on each node in the
cluster.
6. Add Oracle grid infrastructure for a cluster support for your Oracle databases
using the configuration information you recorded in Step 1. Use the following
command syntax, where db_unique_name is the unique name of the database on the
node, Oracle_home is the complete path of the Oracle home for the database, and
nodename is the name of the node:
srvctl add database -d db_unique_name -o Oracle_home -x nodename
For example, if your database name is mydb1, and the node name is node1, enter
the following command:
srvctl add database -d mydb1 -o C:\app\oracle\product\11.2.0\db1 -x node1
7. Add each service listed in Step 1 to the database, using the command srvctl
add service.
6.3 Deconfiguring Oracle Clusterware without Removing Binaries
Running the rootcrs.pl command with the flags -deconfig -force enables you
to deconfigure Oracle Clusterware on one or more nodes without removing the
installed binaries. This feature is useful if you encounter an error on one or more
cluster nodes during installation, such as incorrectly configured shared storage. By
running rootcrs.pl -deconfig -force on nodes where you encounter an
installation error, you can deconfigure Oracle Clusterware on those nodes, correct the
cause of the error, and then run rootcrs.pl again.
To deconfigure Oracle Clusterware:
1. Log in as a member of the Administrators group on a node where you
encountered an error.
2. Change directory to Grid_home\crs\install. For example:
C:\> cd app\11.2.0\grid\crs\install
3. Run rootcrs.pl with the -deconfig -force flags. For example:
C:\..\install> perl rootcrs.pl -deconfig -force
Repeat on other nodes as required.
4. If you are deconfiguring Oracle Clusterware on all nodes in the cluster, then on the
last node, enter the following command:
C:\..\install> perl rootcrs.pl -deconfig -force -lastnode
The -lastnode flag completes deconfiguration of the cluster, including the
Oracle Cluster Registry (OCR) and voting disks.
Note: The -force flag must be specified when running the
rootcrs.pl script if there are running resources that depend on the
resources started from the Oracle Clusterware home you are deleting,
such as databases, services, or listeners. You must also use the -force
flag if you are removing a partial or failed installation.
6.4 Removing Oracle Clusterware and Oracle Automatic Storage
Management
The deinstall command removes Oracle Clusterware and Oracle Automatic
Storage Management (Oracle ASM) from your server. The following sections describe
the deinstall.bat command, and provide information about additional options to
use with the command:
■ About the Deinstallation Command
■ Example of Running the Deinstall Command for Oracle Clusterware and ASM
■ Example Parameter File for Deinstall of Oracle Grid Infrastructure
6.4.1 About the Deinstallation Command
The Deinstallation Tool (deinstall.bat) is available in Oracle home directories after
installation as %ORACLE_HOME%\deinstall\deinstall.bat. The
deinstall.bat command is also available for download from Oracle Technology
Network (OTN) (http://www.oracle.com/technology/software/products/
database/index.html). You can download it with the complete Oracle Database
11g release 2 software, or as a separate archive file.
The deinstall.bat command uses the information you provide, plus information
gathered from the software home, to create a parameter file. You can alternatively
supply a parameter file generated previously by the deinstall.bat command using
the -checkonly and -o flags. You can also edit a response file template to create a
parameter file.
The deinstall.bat command stops Oracle software, and removes Oracle software
and configuration files on the operating system for a specific Oracle home. After the
deinstallation process you are prompted to run the rootcrs.pl script as a user that is
a member of the Administrators group.
The deinstall.bat command uses the following syntax, where variable content is
indicated by italics:
deinstall.bat -home complete path of Oracle home [-silent] [-checkonly] [-local]
[-paramfile complete path of input parameter property file] [-params name1=value
name2=value . . .] [-o complete path of output file] [-help | -h]
The options are:
■ -home
Use this flag to indicate the home path of the Oracle home to check or deinstall. To
deinstall Oracle software using the deinstall.bat command located in the
Oracle home being removed, provide a parameter file in a location outside the
Oracle home, and do not use the -home flag.
If you run deinstall.bat from the Grid_home\deinstall path, then the -home
flag is not required because the tool knows from which home it is being run. If you
use the standalone version of the tool, then -home is mandatory.
■ -silent
Use this flag to run the command in noninteractive mode. This flag requires as
input a properties file that contains the configuration values for the Oracle home
that is being deinstalled or deconfigured. To provide these values, you must also
specify the -paramfile flag when specifying this flag.
To create a properties file and provide the required parameters, refer to the
template file deinstall.rsp.tmpl, located in the response folder. Instead of
using the template file, you can generate a properties file by using the
-checkonly flag with the deinstall.bat command. The generated properties
file can then be used with the -silent flag.
■ -checkonly
Use this flag to check the status of the Oracle software home configuration.
Running the command with the -checkonly flag does not remove the Oracle
configuration. This flag generates a properties file that you can use with the
deinstall.bat command.
When you use the -checkonly flag to generate a properties file, you are
prompted to provide information about your system. You can accept the default
value the tool has obtained from your Oracle installation, indicated inside brackets
([]), or you can provide different values. To accept a default value, press Enter.
■ -local
When you run deinstall.bat with this flag, it deconfigures and deinstalls the
Oracle software only on the local node (the node on which you run
deinstall.bat). On remote nodes, it deconfigures Oracle software, but does not
deinstall the Oracle software.
Note: This flag can only be used in cluster environments.
■ -paramfile complete path of input parameter property file
This is an optional flag. You can use this flag to run deinstall.bat with a
parameter file in a location other than the default. When you use this flag, provide
the complete path where the parameter file is located.
The default location of the parameter file depends on the location of the
Deinstallation tool:
– From the installation media or stage location: <Drive>:\staging_location\
deinstall\response
– From an unzipped archive file downloaded from OTN: <Drive>:\ziplocation\
deinstall\response, where <Drive>:\ziplocation refers to the directory in
which the downloaded archive file was extracted.
– After installation, from the installed Oracle home: %ORACLE_HOME%\
deinstall\response
■ -params [name1=value name2=value name3=value . . .]
Use this flag with a parameter file to override one or more values in a parameter
file that you created.
■ -o complete directory path for output file
Use this flag to provide a path other than the default location where the properties
file is saved.
The default location of the properties file depends on the location of the
Deinstallation tool:
– Extracted from an archive file downloaded from OTN: <Drive>:\ziplocation\
response, where <Drive>:\ziplocation refers to the directory in which the
downloaded archive file was extracted.
– After installation, from the installed Oracle home: %ORACLE_HOME%\
deinstall\response
■ -help | -h
Use the help option (-help or -h) to obtain additional information about the
command option flags.
If you use the deinstall.bat command located in an Oracle home, or the
deinstall.bat command downloaded from OTN (not installed in an Oracle home),
then it writes log files to the C:\Program Files\Oracle\Inventory\logs
directory. If, however, you are using the deinstall.bat command to remove the
last Oracle home installed on the server, then the log files are written to:
■ %TEMP%\OraDeinstall<timestamp>\logs if you use the deinstall.bat
command located in the Oracle home
■ <Drive>:\ziplocation\deinstall\logs if you use the deinstall.bat
command downloaded from OTN
6.4.2 Example of Running the Deinstall Command for Oracle Clusterware and ASM
If you use the separately downloaded version of deinstall.bat, then when the
deinstall.bat command runs, you are prompted to provide the home directory of
the Oracle software to remove from your system. Provide additional information as
prompted.
To run the deinstall.bat command located in an Oracle grid infrastructure home
in the path C:\app\11.2.0\grid, enter the following command while logged in as
a member of the Administrators group:
C:\> app\11.2.0\grid\deinstall\deinstall.bat
To run the deinstall.bat command located in an Oracle grid infrastructure home
and use a parameter file located at C:\users\oracle\myparamfile.tmpl, enter
the following command while logged in as a member of the Administrators group:
Grid_home\deinstall\deinstall.bat -paramfile C:\users\oracle\myparamfile.tmpl
You can generate a parameter file by running the deinstall.bat command with
the -checkonly and -o flags before you run the command to deinstall the Oracle
home, or you can use the response file template and manually edit it to create the
parameter file. For example, to generate the parameter file deinstall_OraCrs11g_
home1.rsp using the -checkonly flag, enter a command similar to the following:
Grid_home\deinstall\deinstall.bat -checkonly -o C:\users\oracle\
6.4.3 Example Parameter File for Deinstall of Oracle Grid Infrastructure
The following is an example of a parameter file for a cluster on nodes node1 and
node2, in which the Oracle grid infrastructure for a cluster is installed by the user
oracle, the Oracle grid infrastructure home (Grid home) is in the path
C:\app\11.2.0\grid, the Oracle base (where other Oracle software is installed) is
C:\app\oracle\, the central Oracle Inventory home is
C:\Program Files\Oracle\Inventory, the virtual IP addresses (VIPs) are
192.0.2.2 and 192.0.2.4, and the local node (the node from which you are running
the deinstallation session) is node1:
#Copyright (c) 2005, 2009 Oracle Corporation. All rights reserved.
VIP1_IP=192.0.2.2
LOCAL_NODE=node1
ORA_VD_DISKGROUPS=+DATA
VIP1_IF=PublicNIC
OCRID=
ObaseCleanupPtrLoc=C:\Temp\OraDeinstall112010-02-11_10-14-30AM\utl\...
HELPJAR_NAME=help4.jar
local=false
ORACLE_HOME=C:\app\11.2.0\grid
ASM_HOME=C:\app\11.2.0\grid
ASM_DISK_GROUPS=
ASM_DISK_GROUP=DATA
ORA_DBA_GROUP=
ASM_DISCOVERY_STRING=
NEW_HOST_NAME_LIST=
PRIVATE_NAME_LIST=
ASM_DISKS=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1,\\.\ORCLDISKDATA2
ASM_DISKSTRING=
CRS_HOME=true
JLIBDIR=C:\app\11.2.0\grid\jlib
OCRLOC=
JEWTJAR_NAME=jewt4.jar
EMBASEJAR_NAME=oemlt.jar
CRS_STORAGE_OPTION=1
ASM_REDUNDANCY=EXTERNAL
GPNPGCONFIGDIR=$ORACLE_HOME
LANGUAGE_ID='AMERICAN_AMERICA.WE8MSWIN1252'
CRS_NODEVIPS='node1-vip/255.255.252.0/PublicNIC,node2-vip/255.255.252.0/PublicNIC'
ORACLE_OWNER=Administrator
OLD_ACTIVE_ORACLE_HOME=
GNS_ALLOW_NET_LIST=
silent=false
LOGDIR=C:\Temp\OraDeinstall112010-02-11_10-14-30AM\logs\
OCFS_CONFIG=
NODE_NAME_LIST=node1,node2
GNS_DENY_ITF_LIST=
ORA_CRS_HOME=C:\app\11.2.0\grid
JREDIR=C:\app\11.2.0\grid\jdk\jre
ASM_LOCAL_SID=+asm1
ORACLE_BASE=C:\app\oracle\
GNS_CONF=false
NETCFGJAR_NAME=netcfg.jar
ORACLE_BINARY_OK=true
OCR_LOCATIONS=NO_VAL
ASM_ORACLE_BASE=C:\app\oracle
OLRLOC=
GPNPCONFIGDIR=$ORACLE_HOME
ORA_ASM_GROUP=
GNS_DENY_NET_LIST=
OLD_CRS_HOME=
EWTJAR_NAME=ewt3.jar
NEW_NODE_NAME_LIST=
GNS_DOMAIN_LIST=
ASM_UPGRADE=false
NETCA_LISTENERS_REGISTERED_WITH_CRS=LISTENER
CLUSTER_NODES=node1,node2
CLUSTER_GUID=
NEW_PRIVATE_NAME_LIST=
ASM_DIAGNOSTIC_DEST=C:\APP\ORACLE
CLSCFG_MISSCOUNT=
SCAN_PORT=1521
ASM_DROP_DISKGROUPS=true
NETWORKS="PublicNIC"/192.0.2.1:public,"PrivateNIC"/10.0.0.1:cluster_interconnect
OCR_VOTINGDISK_IN_ASM=true
NODELIST=node1,node2
ASM_IN_HOME=true
HOME_TYPE=CRS
GNS_ADDR_LIST=
CLUSTER_NAME=myrac-cluster
SHAREJAR_NAME=share.jar
VOTING_DISKS=NO_VAL
SILENT=false
VNDR_CLUSTER=false
GPNP_PA=
CSS_LEASEDURATION=400
REMOTE_NODES=node2
ASM_SPFILE=
NEW_NODEVIPS="node1-vip/255.255.252.0","node2-vip/255.255.252.0"
HOST_NAME_LIST=node1,node2
SCAN_NAME=myrac-scan
VIP1_MASK=255.255.252.0
INVENTORY_LOCATION=C:\Program Files\Oracle\Inventory
A
Troubleshooting the Oracle Grid Infrastructure Installation Process
This appendix provides troubleshooting information for installing Oracle Grid
Infrastructure.
See Also: The Oracle Database 11g Oracle Real Application Clusters
(Oracle RAC) documentation set included with the installation media
in the Documentation directory:
■ Oracle Clusterware Administration and Deployment Guide
■ Oracle Real Application Clusters Administration and Deployment Guide
■ Oracle Real Application Clusters Installation Guide
This appendix contains the following topics:
■ General Installation Issues
■ About the Oracle Clusterware Alert Log
■ Oracle Clusterware Install Actions Log Errors and Causes
■ Performing Cluster Diagnostics During Oracle Grid Infrastructure Installations
■ Interconnect Configuration Issues
A.1 General Installation Issues
The following list describes examples of errors that can occur during installation. It
contains the following issues:
■ Nodes unavailable for selection from the OUI Node Selection screen
■ Node nodename is unreachable
■ Shared disk access fails
■ Installation does not complete successfully on all nodes
Nodes unavailable for selection from the OUI Node Selection screen
Cause: Oracle Grid Infrastructure is either not installed, or the Oracle Grid
Infrastructure services are not up and running.
Action: Install Oracle Grid Infrastructure, or review the status of your installation.
Consider restarting the nodes, because doing so may resolve the problem.
Node nodename is unreachable
Cause: Unavailable IP host.
Action: Attempt the following:
1. Run the command ipconfig /all. Compare the output of this command
with the contents of the C:\WINDOWS\system32\drivers\etc\hosts file
to ensure that the node IP address is listed.
2. Run the command nslookup to see if the host is reachable.
Shared disk access fails
Cause: Windows Server 2003 R2 does not automount raw drives by default. This is
a change from Windows 2000.
Action: Enable automounting. Refer to "Enabling Automounting for Windows" on
page 3-7.
Installation does not complete successfully on all nodes
Cause: If a configuration issue prevents the Oracle grid infrastructure software
from installing successfully on all nodes, you might see an error message such as
"Timed out waiting for the CRS stack to start", or when you exit the installer you
might notice that the Oracle Clusterware managed resources were not created on
some nodes, or have a status other than ONLINE on those nodes.
Action: One solution to this problem is to deconfigure Oracle Clusterware on the
nodes where the installation did not complete successfully, and then fix the
configuration issue that caused the installation on that node to error out. After the
configuration issue has been fixed, you can then rerun the scripts used during
installation to configure Oracle Clusterware. See "Deconfiguring Oracle
Clusterware without Removing Binaries" on page 6-2 for details.
A.2 About the Oracle Clusterware Alert Log
During installation, the Oracle Clusterware alert log is the first place to look for serious
errors. In the event of an error, it can contain path information to diagnostic logs that
can provide specific information about the cause of errors.
After installation, Oracle Clusterware posts alert messages when important events
occur. For example, you might see alert messages from the Cluster Ready Services
(CRS) daemon when it starts, if it aborts, if the failover process fails, or if automatic
restart of a CRS resource failed.
Enterprise Manager monitors the Oracle Clusterware log file and posts an alert on the
Cluster Home page if an error is detected. For example, if a voting disk is not
available, then a CRS-1604 error is raised, and a critical alert is posted on the Cluster
Home page. You can customize the error detection and alert settings on the Metric and
Policy Settings page.
The location of the Oracle Clusterware log file is
Grid_home\log\hostname\alerthostname.log, where Grid_home is the directory
in which Oracle grid infrastructure was installed and hostname is the host name of the
local node.
See Also: Oracle Real Application Clusters Administration and Deployment Guide
A.3 Oracle Clusterware Install Actions Log Errors and Causes
During installation of the Oracle Grid Infrastructure software, a log file named
installActions<Date_Timestamp>.log is written to the
%TEMP%\OraInstall<Date_Timestamp> directory.
The following is a list of potential errors in the installActions.log file:
■ PRIF-10: failed to initialize the cluster registry
Configuration assistant "Oracle Private Interconnect Configuration Assistant"
failed
■ KFOD-0311: Error scanning device device_path_name
■ Step 1: checking status of Oracle Clusterware cluster
Step 2: configuring OCR repository
ignoring upgrade failure of ocr(-1073740972)
failed to configure Oracle Cluster Registry with CLSCFG, ret -1073740972
Each of these error messages can be caused by one of the following issues:
A.3.1 The OCFS for Windows format is not recognized on a remote cluster node
If you are using Oracle Cluster File System for Windows (OCFS for Windows) for your
Oracle Cluster Registry (OCR) and Voting disk partitions, then:
1. Leave the Oracle Universal Installer (OUI) window in place.
2. Restart the second node, and any additional nodes.
3. Retry the assistants.
A.3.2 You have a Windows Server 2003 system and automount of new drives is not enabled
For Oracle RAC on Windows Server 2003, you must enable automatic mounting of
new drives by issuing the following commands on all nodes:
C:\> diskpart
DISKPART> automount enable
If you did not enable automounting of disks before attempting to install Oracle Grid
Infrastructure, and the configuration assistants fail during installation, then you must
clean up your Oracle Clusterware install, enable automounting on all nodes, reboot all
nodes, and then start the Oracle Clusterware install again.
A.3.3 Symbolic links for disks were not removed
When you stamp a disk with ASMTOOL, it creates symbolic links for the disks. If
these links are not removed when the disk is deleted or reconfigured, then errors can
occur when attempting to access the disks.
To correct the problem, you can try stamping the disks again with ASMTOOL.
A.3.4 Discovery string used by Oracle Automatic Storage Management is incorrect
When specifying Oracle Automatic Storage Management (Oracle ASM) for storage,
you have the option of changing the default discovery string used to locate the disks.
If the discovery string is set incorrectly, Oracle ASM will not be able to locate the disks.
A.3.5 You used a period in a node name during Oracle Clusterware install
Periods (.) are not permitted in node names. Instead, use a hyphen (-).
To resolve a failed installation, remove traces of the Oracle installation, and reinstall
with a permitted node name.
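The node-name rule above can be expressed as a simple check. This Python sketch is illustrative only; it flags node names containing periods and produces the hyphenated alternative the guide recommends:

```python
def valid_node_name(name):
    # Periods (.) are not permitted in Oracle Clusterware node names.
    return "." not in name

def suggest_node_name(name):
    # Replace each period with the permitted hyphen (-).
    return name.replace(".", "-")

assert not valid_node_name("node.one")
assert valid_node_name(suggest_node_name("node.one"))  # "node-one" is permitted
```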
A.3.6 Ignoring upgrade failure of ocr(-1073740972)
This error indicates that the user that is performing the installation does not have
Administrator privileges.
A.4 Performing Cluster Diagnostics During Oracle Grid Infrastructure
Installations
If the installer does not display the Node Selection page, then use the following
command syntax to check the integrity of the Cluster Manager:
cluvfy comp clumgr -n node_list -verbose
In the preceding syntax example, the variable node_list is the list of nodes in your
cluster, separated by commas.
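As a sketch of how this command is assembled from a node list (the node names here are hypothetical; this is not an Oracle utility):

```python
def cluvfy_clumgr_cmd(nodes):
    # Build: cluvfy comp clumgr -n node_list -verbose
    # node_list is the comma-separated list of cluster nodes.
    return "cluvfy comp clumgr -n {} -verbose".format(",".join(nodes))

print(cluvfy_clumgr_cmd(["node1", "node2"]))
# cluvfy comp clumgr -n node1,node2 -verbose
```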
Note: If you encounter unexplained installation errors during or
after a period when scheduled tasks are run, then a scheduled task
may have deleted temporary files before the installation finished.
Oracle recommends that you complete the installation before
scheduled tasks are run, or disable scheduled tasks that perform
cleanup until after the installation is completed.
A.5 Interconnect Configuration Issues
If you use multiple network interface cards (NICs) for the interconnect, then the NICs
should be bonded at the operating system level. Otherwise, the failure of a single NIC
will affect the availability of the cluster node.
If you install Oracle Grid Infrastructure and Oracle RAC, then they must use the same
NIC or teamed NIC cards for the interconnect.
If you use teamed NIC cards, then they must be on the same subnet.
If you encounter errors, then perform the following system checks:
■ Verify with your network providers that they are using the correct cables (length,
type) and software on their switches. In some cases, to avoid bugs that cause
disconnects under load, or to support additional features such as Jumbo Frames,
you may need a firmware upgrade on interconnect switches, or you may need a
newer NIC driver or firmware at the operating system level. Running without
such fixes can cause later instabilities in Oracle RAC databases, even though the
initial installation seems to work.
■ Review virtual local area network (VLAN) configurations, duplex settings, and
auto-negotiation in accordance with vendor and Oracle recommendations.
B
Installing and Configuring Oracle Grid
Infrastructure Using Response Files
This appendix describes how to install and configure Oracle grid infrastructure
software using response files. It includes information about the following topics:
■ About Response Files
■ Preparing a Response File
■ Running the Installer Using a Response File
■ Running Net Configuration Assistant Using a Response File
■ Postinstallation Configuration Using a Response File
B.1 About Response Files
When you start the installer, you can use a response file to automate the installation
and configuration of Oracle software, either fully or partially. The installer uses the
values contained in the response file to provide answers to some or all installation
prompts.
Typically, the installer runs in interactive mode, which means that it prompts you to
provide information in graphical user interface (GUI) screens. When you use response
files to provide this information, you run the installer from a command prompt using
either of the following modes:
■ Silent mode
If you include responses for all of the prompts in the response file and specify the
-silent option when starting the installer, then it runs in silent mode. During a
silent mode installation, the installer does not display any screens. Instead, it
displays progress information in the terminal that you used to start it.
■ Response file mode
If you include responses for some or all of the prompts in the response file and
omit the -silent option, then the installer runs in response file mode. During a
response file mode installation, the installer displays all the screens: those for
which you specified information in the response file, and also those for which you
did not specify the required information.
You define the settings for a silent or response file installation by entering values for
the variables listed in the response file. For example, to specify the Oracle home name,
supply the appropriate value for the ORACLE_HOME variable:
ORACLE_HOME="OraCrs11g_home1"
Another way of specifying the response file variable settings is to pass them as
command line arguments when you run the installer. For example:
-silent "ORACLE_HOME=OraCrs11g_home1" ...
This method is particularly useful if you do not want to embed sensitive information,
such as passwords, in the response file. For example:
-silent "s_dlgRBOPassword=binks342" ...
Ensure that you enclose the variable and its setting in quotes.
See Also: Oracle Universal Installer and OPatch User's Guide for
Windows and UNIX for more information about response files
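A sketch of assembling such a command line from variable settings follows (illustrative only; the variable name mirrors the example above, and the point is the required quoting of each VARIABLE=value pair):

```python
def installer_args(variables, silent=True):
    # Each "VARIABLE=value" pair must be enclosed in quotes on the command line.
    args = ["-silent"] if silent else []
    args += ['"{}={}"'.format(k, v) for k, v in variables.items()]
    return " ".join(args)

print(installer_args({"ORACLE_HOME": "OraCrs11g_home1"}))
# -silent "ORACLE_HOME=OraCrs11g_home1"
```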
B.1.1 Reasons for Using Silent Mode or Response File Mode
The following table provides use cases for running the installer in silent mode or
response file mode.
Mode: Silent
Use silent mode to do the following installations:
■ Complete an unattended installation, which you schedule using operating system
utilities such as at.
■ Complete several similar installations on multiple systems without user
interaction.
■ Install the software on a system that cannot display the OUI graphical user
interface.
The installer displays progress information on the terminal that you used to
start it, but it does not display any of the installer screens.

Mode: Response file
Use response file mode to complete similar Oracle software installations on
multiple systems, providing default answers to some, but not all, of the
installer prompts.
In response file mode, all the installer screens are displayed, but defaults for
the fields in these screens are provided by the response file. You must
provide information for the fields in screens where you have not provided
values in the response file.
B.1.2 General Procedure for Using Response Files
The following are the general steps to install and configure Oracle products using the
installer in silent or response file mode:
Note: You must complete all required preinstallation tasks on a
system before running the installer in silent or response file mode.
1. Prepare a response file.
2. Run the installer in silent or response file mode.
3. If you completed a software-only installation, then perform the steps necessary to
configure the Oracle product.
These steps are described in the following sections.
B.2 Preparing a Response File
This section describes the following methods to prepare a response file for use during
silent mode or response file mode installations:
■ Editing a Response File Template
■ Recording a Response File
B.2.1 Editing a Response File Template
Oracle provides response file templates for each product and installation type, and for
each configuration tool. For Oracle Grid Infrastructure, the response file is located in
the staging_dir\clusterware\response directory on the installation media and
in the Grid_home\inventory\response directory after installation.
Table B–1 lists the response files provided with this software:
Table B–1 Response files for Oracle Grid Infrastructure
Response File: crs_install.rsp
Description: Performs a silent installation of Oracle Grid Infrastructure
Caution: When you modify a response file template and save a file
for use, the response file may contain plain text passwords.
Ownership of the response file should be given to the Oracle software
installation owner only. Oracle strongly recommends that database
administrators or other administrators delete or secure response files
when they are not in use.
To copy and modify a response file:
1. Copy the response file from the response file directory to a directory on your
system.
2. Open the response file in a text editor.
Remember that you can specify sensitive information, such as passwords, at the
command line rather than within the response file. The section "About Response
Files" on page B-1 explains this method.
See Also: Oracle Universal Installer and OPatch User's Guide for
Windows and UNIX for detailed information on creating response files
3. Follow the instructions in the file to edit it.
Note: The installer or configuration assistant fails if you do not
correctly configure the response file.
4. Secure the response file.
Note: A fully specified response file for an Oracle grid infrastructure
installation can contain the passwords for Oracle Automatic Storage
Management (Oracle ASM) administrative accounts and for a user
who is a member of the ORA_DBA group and the Administrators
group. Ensure that only the Oracle software owner user can view or
modify response files, or consider deleting the modified response file
after the installation succeeds.
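Editing a copied template can also be scripted. The following Python sketch is an illustration under stated assumptions: it treats the response file as simple name=value lines (ORACLE_HOME appears earlier in this appendix; any other names you pass are hypothetical) and overrides only the variables you supply:

```python
def set_response_values(template_text, overrides):
    # Replace NAME=value lines whose NAME appears in overrides;
    # leave comments and other lines untouched.
    out = []
    for line in template_text.splitlines():
        name, sep, _ = line.partition("=")
        if sep and name.strip() in overrides:
            line = '{}="{}"'.format(name.strip(), overrides[name.strip()])
        out.append(line)
    return "\n".join(out)

template = '# response file template\nORACLE_HOME=""\n'
print(set_response_values(template, {"ORACLE_HOME": "OraCrs11g_home1"}))
```

Remember that passwords placed in the file this way are stored in plain text; secure or delete the result as the Caution above advises.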
B.2.2 Recording a Response File
You can use the installer in interactive mode to record a response file, which you can
edit and then use to complete silent mode or response file mode installations. This
method is useful for customized or software-only installations.
Starting with Oracle Database 11g Release 2 (11.2), you can save all the installation
steps into a response file during installation by clicking Save Response File on the
Summary page. You can use the generated response file for a silent installation later.
When you record the response file, you can either complete the installation, or you can
exit from the installer on the Summary page, before it starts to copy the software to the
server.
Note: Oracle Universal Installer (OUI) does not record passwords in
the response file.
To record a response file:
1. Complete preinstallation tasks as for a normal installation.
When you run the installer to record a response file, it checks the system to verify
that it meets the requirements to install the software. For this reason, Oracle
recommends that you complete all of the required preinstallation tasks and record
the response file while completing an installation.
2. Log in as a user that is a member of the local Administrators group and start the
installer. On each installation screen, specify the required information.
3. When the installer displays the Summary screen, perform the following:
a. Click Save Response File. In the pop-up window, specify a file name and
location for the response file, then click Save to write the settings you have
entered to the response file.
b. Click Finish to continue with the installation.
Click Cancel if you do not want to continue with the installation. The
installation stops, but the recorded response file is retained.
4. Before you use the saved response file on another system, edit the file and make
any required changes. Use the instructions in the file as a guide when editing it.
B.3 Running the Installer Using a Response File
To use a response file during installation, you start OUI from the command line,
specifying the response file you created. The OUI executable, setup.exe, provides
several options. For information about the full set of these options, run the
setup.exe command with the -help option, for example:
C:\..\bin> setup.exe -help
The help appears in your session window after a short period of time.
To run the installer using a response file:
1. Complete the preinstallation tasks as for any installation.
2. Log in as an Administrative user.
3. To start the installer in silent or response file mode, enter a command similar to the
following:
C:\> directory_path\setup.exe [-silent] [-noconfig] -responseFile responsefilename
Note: Do not specify a relative path to the response file. If you
specify a relative path, then the installer fails.
In this example:
■ directory_path is the path of the DVD or the path of the directory on the
hard drive where you have copied the installation binaries.
■ -silent runs the installer in silent mode.
■ -noconfig suppresses running the configuration assistants during
installation, and a software-only installation is performed instead.
■ responsefilename is the full path and file name of the installation response
file that you configured.
If you use record mode during a response file mode installation, then the installer
records the variable values that were specified in the original source response file into
the new response file.
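The absolute-path requirement from the note above can be guarded in a wrapper script. This Python sketch is illustrative only (Windows path semantics via `ntpath`; the paths shown are hypothetical); it refuses a relative response file path before constructing the command:

```python
import ntpath

def silent_install_cmd(setup_path, response_file, noconfig=False):
    # The installer fails if the response file path is relative,
    # so reject relative paths up front.
    if not ntpath.isabs(response_file):
        raise ValueError("response file path must be absolute: " + response_file)
    parts = [setup_path, "-silent"]
    if noconfig:
        parts.append("-noconfig")
    parts += ["-responseFile", response_file]
    return " ".join(parts)

print(silent_install_cmd(r"D:\setup.exe", r"C:\rsp\grid.rsp"))
# D:\setup.exe -silent -responseFile C:\rsp\grid.rsp
```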
B.4 Running Net Configuration Assistant Using a Response File
You can run Net Configuration Assistant (NetCA) in silent mode to configure and start
an Oracle Net listener on the system, configure naming methods, and configure Oracle
Net service names. To run NetCA in silent mode, you must copy and edit a response
file template. Oracle provides a response file template named netca.rsp in the
database\inventory\response directory in the Oracle home directory after
installation or in the database\response directory on the installation media.
To run NetCA using a response file:
1. Copy the netca.rsp response file template from the response file directory to a
directory on your system.
If you have copied the software to a hard drive, then you can edit the file in the
response directory.
2. Open the response file in a text editor.
3. Follow the instructions in the file to edit it.
Note: NetCA fails if you do not correctly configure the response file.
4. Log in as an Administrative user.
5. Enter a command similar to the following to run NetCA in silent mode:
C:\> Oracle_home\bin\netca -silent -responsefile X:\local_dir\netca.rsp
In this command:
■ The -silent option runs NetCA in silent mode.
■ X:\local_dir is the full path of the directory where you copied the
netca.rsp response file template, where X represents the drive on which the
file is located and local_dir the path on that drive.
B.5 Postinstallation Configuration Using a Response File
Use the following sections to create and run a response file configuration after
installing Oracle software.
B.5.1 About the Postinstallation Configuration File
When you run a silent or response file installation, you provide information about
your servers in a response file that you otherwise provide manually using a graphical
user interface. However, the response file does not contain passwords for user
accounts that configuration assistants require after software installation is complete.
The configuration assistants are started with a script called
configToolAllCommands. You can run this script in response file mode by creating
and using a password response file. The script uses the passwords to run the
configuration tools in succession to complete configuration.
If you keep the password file to use for clone installations, then Oracle strongly
recommends that you store it in a secure location. In addition, if you have to stop an
installation to fix an error, then you can run the configuration assistants using
configToolAllCommands and a password response file.
Each line of the configToolAllCommands password response file uses the following
syntax:
internal_component_name|variable_name=value
In this syntax:
■ internal_component_name is the name of the component that the configuration
assistant configures
■ variable_name is the name of the configuration file variable
■ value is the desired value to use for configuration
For example:
oracle.assistants.asm|S_ASMPASSWORD=myPassWord
Oracle strongly recommends that you maintain security with a password response file.
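The component|variable=value format can be parsed mechanically, which is useful when auditing which passwords a response file sets. This Python sketch is illustrative only; the component and variable names come from the examples in this section:

```python
def parse_password_response(text):
    # Each meaningful line: internal_component_name|variable_name=value
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        component, _, rest = line.partition("|")
        variable, _, value = rest.partition("=")
        entries[(component, variable)] = value
    return entries

sample = "oracle.assistants.asm|S_ASMPASSWORD=myPassWord"
print(parse_password_response(sample))
# {('oracle.assistants.asm', 'S_ASMPASSWORD'): 'myPassWord'}
```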
B.5.2 Running Postinstallation Configuration Using a Response File
To run configuration assistants with the configToolAllCommands script:
1. Create a response file using the syntax filename.properties.
2. Open the file with a text editor, and cut and paste the password template,
modifying as needed.
Example B–1 Password response file for Oracle grid infrastructure
Oracle grid infrastructure requires passwords for Oracle Automatic Storage
Management Configuration Assistant (ASMCA), and for Intelligent Platform
Management Interface Configuration Assistant (IPMICA) if you have a baseboard
management controller (BMC) card and you want to enable this feature. Provide the
following response file:
oracle.assistants.asm|S_ASMPASSWORD=password
oracle.assistants.asm|S_ASMMONITORPASSWORD=password
oracle.crs|S_BMCPASSWORD=password
If you do not have a BMC card, or you do not want to enable Intelligent Platform
Management Interface (IPMI), then leave the S_BMCPASSWORD input field blank.
3. Change directory to Oracle_home\cfgtoollogs, and run the configuration
script using the following syntax:
configToolAllCommands RESPONSE_FILE=\path\name.properties
For example:
C:\..\cfgtoollogs> configToolAllCommands RESPONSE_FILE=C:\users\oracle\grid\cfgrsp.properties
C
Oracle Grid Infrastructure for a Cluster
Installation Concepts
This appendix explains the reasons for preinstallation tasks that you are asked to
perform and other installation concepts.
This appendix contains the following sections:
■ Understanding Preinstallation Configuration
■ Understanding Storage Configuration
Understanding Preinstallation Configuration
This section reviews concepts about grid infrastructure for a cluster preinstallation
tasks. It contains the following sections:
■ Understanding Oracle Groups and Users
■ Understanding the Oracle Base Directory Path
■ Understanding Network Addresses
■ Understanding Network Time Requirements
Understanding Oracle Groups and Users
This section contains the following topic:
■ Understanding the Oracle Inventory Directory
Understanding the Oracle Inventory Directory
The Oracle Inventory directory is the central inventory location for all Oracle software
installed on a server. The location of the Oracle Inventory directory is <System_
drive>:\Program Files\Oracle\Inventory.
The first time you install Oracle software on a system, the installer checks to see if an
Oracle Inventory directory exists. The location of the Oracle Inventory directory is
determined by the Windows Registry key HKEY_LOCAL_
MACHINE\SOFTWARE\Oracle\inst_loc. If an Oracle Inventory directory does not
exist, then the installer creates one in the default location of C:\Program
Files\Oracle\Inventory.
By default, the Oracle Inventory directory is not installed under the Oracle Base
directory. This is because all Oracle software installations share a common Oracle
Inventory, so there is only one Oracle Inventory for all users, whereas there is a
separate Oracle Base directory for each user.
Understanding the Oracle Base Directory Path
During installation, you are prompted to specify an Oracle base location, which is
owned by the user performing the installation. You can choose a location with an
existing Oracle home, or choose another directory location that does not have the
structure for an Oracle base directory. The default location for the Oracle base
directory is <SYSTEM_DRIVE>:\app\user_name\.
Using the Oracle base directory path helps to facilitate the organization of Oracle
installations, parameter, diagnostic, and log files, and helps to ensure that installations
of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
Multiple Oracle Database installations can use the same Oracle base directory. The
Oracle grid infrastructure installation uses a different directory path, one outside of
Oracle base. If you use different operating system users to perform the Oracle software
installations, then each user will have a different default Oracle base location.
Understanding Network Addresses
During installation, you are asked to identify the planned use for each network
interface that Oracle Universal Installer (OUI) detects on your cluster node. Identify
each interface as a public or private interface, or as an interface that you do not want
Oracle Clusterware to use. Public and virtual internet protocol (IP) addresses are
configured on public interfaces. Private addresses are configured on private interfaces.
Refer to the following sections for detailed information about each address type:
■ About the Public IP Address
■ About the Private IP Address
■ About the Virtual IP Address
■ About the Grid Naming Service (GNS) Virtual IP Address
■ About the SCAN
About the Public IP Address
The public IP address is assigned dynamically using dynamic host configuration
protocol (DHCP), or defined statically in a domain name system (DNS) or in a hosts
file. It uses the public interface (the interface with access available to clients).
About the Private IP Address
Oracle Clusterware uses interfaces marked as private for internode communication.
Each cluster node must have an interface that you identify during installation as a
private interface. Private interfaces must have addresses configured for the interface
itself, but no additional configuration is required. Oracle Clusterware uses interfaces
marked as private as the cluster interconnects. Any interface that you identify as
private must be on a subnet that connects to every node of the cluster. Oracle
Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between
nodes, Oracle strongly recommends using a physically separate, private network. If
you configure addresses using a DNS, then you should ensure that the private IP
addresses are reachable only by the cluster nodes.
After installation, if you modify interconnects on Oracle Real Application Clusters
(Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you
must change the interconnect to a private IP address, on a subnet that is not used with
a public IP address, nor marked as a public subnet by oifcfg. Oracle does not
support changing the interconnect to an interface using a subnet that you have
designated as a public subnet.
You should not use a firewall on the network with the private network IP addresses,
because this can block interconnect traffic.
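The separate-subnet requirement above can be checked with Python's `ipaddress` module. This sketch is illustrative only (the subnets shown are hypothetical examples, not Oracle-mandated ranges); it verifies that a candidate interconnect address falls inside the private subnet and that the private subnet does not overlap the public one:

```python
import ipaddress

def valid_interconnect(private_cidr, public_cidr, interconnect_ip):
    # The interconnect address must fall inside the private subnet,
    # and that subnet must not overlap the public subnet.
    private = ipaddress.ip_network(private_cidr)
    public = ipaddress.ip_network(public_cidr)
    addr = ipaddress.ip_address(interconnect_ip)
    return addr in private and not private.overlaps(public)

print(valid_interconnect("192.168.10.0/24", "10.0.0.0/24", "192.168.10.5"))  # True
```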
About the Virtual IP Address
The virtual IP (VIP) address is registered in the grid naming service (GNS), or the
DNS. Select an address for your VIP that meets the following requirements:
■ The IP address and host name are currently unused (it can be registered in a DNS,
but should not be accessible by a ping command)
■ The VIP is on the same subnet as your public interface
About the Grid Naming Service (GNS) Virtual IP Address
The GNS VIP address is a static IP address configured in the DNS. The DNS delegates
queries to the GNS VIP address, and the GNS daemon responds to incoming name
resolution requests at that address.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS),
included with Oracle Clusterware, to enable the cluster to map host names and IP
addresses dynamically as nodes are added and removed from the cluster, without
requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP
addresses for a subdomain assigned to the cluster (for example,
grid.example.com), and delegate DNS requests for that subdomain to the GNS VIP
address for the cluster, which GNS will serve. The set of IP addresses is provided to
the cluster through DHCP, which must be available on the public network for the
cluster.
See Also: Oracle Clusterware Administration and Deployment Guide for
more information about GNS
About the SCAN
Oracle Database 11g release 2 clients connect to the database using single client access
names (SCANs). The SCAN and its associated IP addresses provide a stable name for
clients to use for connections, independent of the nodes that make up the cluster.
SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same
subnet.
The SCAN is a VIP name, similar to the names used for VIP addresses, such as
node1-vip. However, unlike a VIP, the SCAN is associated with the entire cluster,
rather than an individual node, and associated with multiple IP addresses, not just one
address.
The SCAN resolves to multiple IP addresses reflecting multiple listeners in the cluster
that handle public client connections. When a client submits a request, the SCAN
listener listening on a SCAN IP address and the SCAN port is contacted on the client's
behalf.
Because all services on the cluster are registered with the SCAN listener, the SCAN
listener replies with the address of the local listener on the least-loaded node where
the service is currently being offered. Finally, the client establishes connection to the
service through the listener on the node where service is offered. All of these actions
take place transparently to the client without any explicit configuration required in the
client.
During installation, listeners are created on nodes for the SCAN IP addresses. Oracle
Net Services routes application requests to the least loaded instance providing the
service. Because the SCAN addresses resolve to the cluster, rather than to a node
address in the cluster, nodes can be added to or removed from the cluster without
affecting the SCAN address configuration.
The SCAN should be configured so that it is resolvable either by using GNS within the
cluster, or by using DNS resolution. For high availability and scalability, Oracle
recommends that you configure the SCAN name so that it resolves to three IP
addresses. At a minimum, the SCAN must resolve to at least one address.
If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_
domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you
start Oracle grid infrastructure installation from the server node1, the cluster name is
mycluster, and the GNS domain is grid.example.com, then the SCAN Name is
mycluster-scan.grid.example.com.
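The defaulting rule for the SCAN name can be sketched as follows (illustrative Python only; the cluster name and domains are the examples from the paragraph above):

```python
def default_scan_name(cluster_name, gns_domain=None, current_domain=None):
    # clustername-scan.GNS_domain if a GNS domain is specified,
    # otherwise clustername-scan.current_domain.
    domain = gns_domain if gns_domain else current_domain
    return "{}-scan.{}".format(cluster_name, domain)

print(default_scan_name("mycluster", gns_domain="grid.example.com"))
# mycluster-scan.grid.example.com
```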
Clients configured to use IP addresses for Oracle Database releases prior to Oracle
Database 11g release 2 can continue to use their existing connection addresses; using
SCANs is not required. When you upgrade to Oracle Clusterware 11g release 2 (11.2),
the SCAN becomes available, and you should use the SCAN for connections to Oracle
Database 11g release 2 or later databases. When an earlier version of Oracle Database is
upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to
connect to that database. The database registers with the SCAN listener through the
remote listener parameter in the init.ora file.
The SCAN is optional for most deployments. However, clients using Oracle Database
11g release 2 and later policy-managed databases using server pools must access the
database using the SCAN. This is required because policy-managed databases can run
on different servers at different times, so connecting to a particular node by using the
virtual IP address for a policy-managed database is not possible.
Understanding Network Time Requirements
Oracle Clusterware 11g release 2 (11.2) is automatically configured with Cluster Time
Synchronization Service (CTSS). This service provides automatic synchronization of
the time settings on all cluster nodes using the optimal synchronization strategy for
the type of cluster you deploy. If you have an existing cluster synchronization service,
such as network time protocol (NTP), then it will start in an observer mode.
Otherwise, it will start in an active mode to ensure that time is synchronized between
cluster nodes. CTSS will not cause compatibility issues.
The CTSS module is installed as a part of Oracle grid infrastructure installation. CTSS
daemons are started by the Oracle High Availability Services daemon (ohasd), and do
not require a command-line interface.
Understanding Storage Configuration
Refer to the following sections for concepts about Oracle Automatic Storage
Management (Oracle ASM) storage:
■ Understanding Oracle Automatic Storage Management Cluster File System
■ About Migrating Existing Oracle ASM Instances
Understanding Oracle Automatic Storage Management Cluster File System
Oracle ASM has been extended to include a general purpose file system, called Oracle
Automatic Storage Management Cluster File System (Oracle ACFS). Oracle ACFS is a
new multi-platform, scalable file system, and storage management technology that
extends Oracle ASM functionality to support customer files maintained outside of the
Oracle Database. Files supported by Oracle ACFS include application binaries and
application reports. Other supported files are video, audio, text, images, engineering
drawings, and other general-purpose application file data.
Note: Oracle ACFS is only supported on Windows Server 2003 64-bit
and Windows Server 2003 R2 64-bit.
About Migrating Existing Oracle ASM Instances
If you have an Oracle ASM installation from a prior release installed on your server, or
in an existing Oracle Clusterware installation, then you can use Oracle Automatic
Storage Management Configuration Assistant (ASMCA, located in the path Grid_
home\bin) to upgrade the existing Oracle ASM instance to Oracle ASM 11g release 2
(11.2), and subsequently configure failure groups, Oracle ASM volumes, and Oracle
ACFS.
Note: You must first shut down all database instances and
applications on the node with the existing Oracle ASM instance before
upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a
prior Oracle ASM version installed in another Oracle ASM home, then after installing
the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the
existing Oracle ASM instance. You can then configure an Oracle ACFS deployment by
creating Oracle ASM volumes and using the upgraded Oracle ASM to create the
Oracle ACFS.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of
Oracle ASM instances on all nodes is Oracle ASM 11g release 1, then you are provided
with the option to perform a rolling upgrade of Oracle ASM instances. If the prior
version of Oracle ASM instances on an Oracle RAC installation are from an Oracle
ASM release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be
performed. Oracle ASM is then upgraded on all nodes to 11g release 2 (11.2).
D
How to Upgrade to Oracle Grid
Infrastructure 11g Release 2
This appendix describes how to perform Oracle Clusterware and Oracle Automatic
Storage Management (Oracle ASM) upgrades.
This appendix contains the following topics:
■ Restrictions for Clusterware and Oracle ASM Upgrades to Grid Infrastructure
■ Understanding Out-of-Place and Rolling Upgrades
■ Preparing to Upgrade an Existing Oracle Clusterware Installation
■ Back Up the Oracle Software Before Upgrades
■ Upgrading Oracle Clusterware
■ Upgrading Oracle ASM
■ Updating DB Control and Grid Control Target Parameters
■ Downgrading Oracle Clusterware After an Upgrade
D.1 Restrictions for Clusterware and Oracle ASM Upgrades to Grid
Infrastructure
Be aware of the following restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consist of Oracle Clusterware and Oracle ASM:
■ To upgrade existing Oracle Clusterware installations to Oracle Grid Infrastructure
11g, your current release must be at least 10.1.0.3, 10.2.0.3, 10.2.0.4, 11.1.0.6, or
11.1.0.7.
■ To upgrade existing Oracle ASM installations to Oracle Grid Infrastructure 11g
release 2 (11.2) using the rolling upgrade method, your current release must be at
least 11.1.0.6 or 11.1.0.7.
See Also: "Oracle Upgrade Companion," Note 785351.1, on My Oracle
Support:
https://support.oracle.com
■ Adding nodes to a cluster during a rolling upgrade is not supported.
■ Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades.
With Oracle Grid Infrastructure 11g release 2 (11.2), you cannot perform an
in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
■ Oracle ASM and Oracle Clusterware both run in the Oracle grid infrastructure
home.
Note: When you upgrade to Oracle Clusterware 11g release 2 (11.2),
Oracle ASM is installed. In Oracle documentation, this home is called
the "grid infrastructure home."
■ Only one Oracle Clusterware installation can be active on a server at any time.
During a major version upgrade to Oracle Clusterware 11g release 2 (11.2), the
software in the Oracle Clusterware 11g release 2 (11.2) home is not fully functional
until the upgrade is completed. Running srvctl, crsctl, and other commands
from the Oracle Clusterware 11g release 2 (11.2) home is not supported until the
final rootupgrade.sh script is run and the upgrade is complete across all nodes.
To manage databases using earlier versions (release 10.x or 11.1) of Oracle
Database during the grid infrastructure upgrade, use the srvctl utility in the
existing database homes.
See Also: Oracle Database Upgrade Guide
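For example, to stop and start an earlier release Oracle RAC database during the grid infrastructure upgrade, run srvctl from that database's own home (the home path and the database name db1 shown here are illustrative):

C:\app\oracle\product\11.1.0\db_1\bin> srvctl stop database -d db1
C:\app\oracle\product\11.1.0\db_1\bin> srvctl start database -d db1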
D.2 Understanding Out-of-Place and Rolling Upgrades
Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes are
brought down and upgraded while other nodes remain active. Oracle ASM 11g release
2 (11.2) upgrades can be rolling upgrades. If you upgrade a subset of nodes, then a
software-only installation is performed on the existing cluster nodes that you do not
select for upgrade. Rolling upgrades avoid downtime and ensure continuous
availability while the software is upgraded to a new version.
Note: In contrast with releases prior to Oracle Clusterware 11g
release 2, Oracle Universal Installer (OUI) always performs rolling
upgrades, even if you select all nodes for the upgrade.
During an out-of-place upgrade, the installer installs the newer version in a separate
Oracle Clusterware home. Both versions of Oracle Clusterware are on each cluster
member node, but only one version is active. By contrast, an in-place upgrade
overwrites the software in the current Oracle Clusterware home.
To perform an out-of-place upgrade, you must create new Oracle Grid Infrastructure
homes on each node. Then you can perform an out-of-place rolling upgrade, so that some
nodes are running Oracle Clusterware from the original Oracle Clusterware home, and
other nodes are running Oracle Clusterware from the new Oracle Grid Infrastructure
home.
If you have an existing Oracle Clusterware installation, then you upgrade your
existing cluster by performing an out-of-place upgrade. An in-place upgrade of Oracle
Clusterware 11g release 2 is not supported.
See Also: "Upgrading Oracle Clusterware" on page D-4 for
instructions on completing rolling upgrades
D.3 Preparing to Upgrade an Existing Oracle Clusterware Installation
Before you upgrade Oracle Clusterware or Oracle ASM, complete the tasks
described in the following sections:
■ Verify System Readiness for Patches and Upgrades
■ Gather the Necessary System Information
■ Upgrade to the Minimum Required Oracle Clusterware Version
■ Unset Environment Variables
D.3.1 Verify System Readiness for Patches and Upgrades
If you are completing a patch update of Oracle Clusterware or Oracle ASM, then after
you download the patch software and before you start to patch or upgrade your
software installation, review the Patch Set Release Notes that accompany the patch to
determine if your system meets the system requirements for the operating system and
the hardware platform.
Use the Cluster Verification Utility to assist you with system checks in preparation for
patching or upgrading.
See Also: Oracle Database Upgrade Guide
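For example, you can run the Cluster Verification Utility pre-installation stage check from the staged installation media before you upgrade (the stage path and the node names node1 and node2 are illustrative):

C:\stage\clusterware> runcluvfy.bat stage -pre crsinst -n node1,node2 -verbose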
D.3.2 Gather the Necessary System Information
Ensure that you have the information you will need during installation, including the
following:
■ The Oracle home location for the current Oracle Clusterware installation.
With Oracle Clusterware 11g release 2 (11.2), you can perform upgrades on a
shared Oracle Clusterware home.
■ An Oracle grid infrastructure home location that is different from your existing
Oracle Clusterware home location
■ A single client access name (SCAN) address
■ Two network interface names (consisting of bonded or separate interfaces), which
you can identify as public and private interfaces for the cluster
D.3.3 Upgrade to the Minimum Required Oracle Clusterware Version
If you plan to upgrade your Oracle Clusterware 10g release 2 installation to Oracle
Clusterware 11g release 2 (11.2) and your current Oracle Clusterware installation has
not been upgraded to at least version 10.2.0.3, then a prerequisite check failure is
reported. You must upgrade your current Oracle Clusterware installation to version
10.2.0.3 or higher before starting the upgrade to Oracle Clusterware 11g release 2.
If you plan to upgrade your Oracle Clusterware 10g release 1 installation to Oracle
Clusterware 11g release 2 (11.2) and your current Oracle Clusterware installation is not
10.1.0.3 or higher, then you must upgrade your current Oracle Clusterware installation
to version 10.1.0.3 or higher before starting the upgrade to Oracle Clusterware 11g
release 2.
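You can confirm the release of the existing installation with crsctl from the earlier Oracle Clusterware home (the home path shown is illustrative):

C:\oracle\crs\bin> crsctl query crs activeversion
C:\oracle\crs\bin> crsctl query crs softwareversion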
D.3.4 Unset Environment Variables
For the user account used to perform the installation, if you have environment
variables set for the existing installation, then remove the environment variables
ORACLE_HOME and ORACLE_SID, as these environment variables are used during
upgrade.
To remove the environment variable settings for all sessions, from the Start menu,
right click My Computer and select Properties. In the System Properties window,
select Advanced, then click the Environment Variables button.
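To remove the variables for the current command prompt session only, you can set each variable to an empty value:

C:\> set ORACLE_HOME=
C:\> set ORACLE_SID=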
D.4 Back Up the Oracle Software Before Upgrades
Before you make any changes to the Oracle software, Oracle recommends that you
create a backup of the Oracle software.
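For example, while the Oracle software is not in use, you might copy the existing Oracle Clusterware home and the Oracle Inventory to a backup location (the source and destination paths shown here are illustrative):

C:\> xcopy C:\oracle\crs C:\backup\crs /E /I /H
C:\> xcopy "C:\Program Files\Oracle\Inventory" C:\backup\Inventory /E /I /H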
D.5 Upgrading Oracle Clusterware
Use the following procedures to upgrade Oracle Clusterware. You can also choose to
upgrade Oracle ASM during the upgrade of Oracle Clusterware.
Note: Oracle recommends that you leave Oracle Real Application
Clusters (Oracle RAC) instances running. During the upgrade process,
the database instances on the node being upgraded are stopped and
started automatically.
1. If there are non-clustered, or standalone, Oracle databases that use Oracle ASM
running on any of the nodes in the cluster, they must be shut down before you
start the upgrade. Listeners associated with those databases do not have to be shut
down.
2. Start the installer, and select the option to upgrade an existing Oracle Clusterware
and Oracle ASM installation.
3. On the node selection page, select all nodes.
Note: In contrast with releases prior to Oracle Clusterware 11g
release 2, all upgrades are rolling upgrades, even if you select all
nodes for the upgrade.
Oracle recommends that you select all cluster member nodes for the
upgrade, and then shut down the database instances on each node
before you run the upgrade root script. Start the database instances on
each node after the upgrade is complete. You can also use this
procedure to upgrade a subset of nodes in the cluster.
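For example, to shut down the database instance on the node you are about to upgrade, run srvctl from the database home (the home path, database name db1, and instance name db11 are illustrative):

C:\app\oracle\product\11.1.0\db_1\bin> srvctl stop instance -d db1 -i db11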
4. Select installation options as prompted.
5. When the Oracle Clusterware upgrade is complete, if an earlier version of Oracle
ASM is installed, then the installer starts Oracle ASM Configuration Assistant
(ASMCA) to upgrade Oracle ASM to 11.2. You can upgrade Oracle ASM at this
time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you
upgrade Oracle Clusterware. Until Oracle ASM is upgraded, Oracle databases that
use Oracle ASM cannot be created. Until Oracle ASM is upgraded, the Oracle
ASM management tools in the Oracle Grid Infrastructure 11g release 2 (11.2) home
(for example, srvctl) will not work.
Note: After the upgrade, if you set the Oracle Cluster Registry (OCR)
backup location manually to the older release Oracle Clusterware
home (CRS home), then you must change the OCR backup location to
the Oracle grid infrastructure home (Grid home). If you did not set the
OCR backup location manually, then this issue does not concern you.
Because upgrades of Oracle Clusterware are out-of-place upgrades,
the previous release Oracle Clusterware home cannot be the location
of the OCR backups. Backups in the old Oracle Clusterware home
could be deleted.
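For example, to review the configured OCR backups and point the backup location at the new Grid home, you can use ocrconfig from the Grid home (the Grid home path and backup directory shown are illustrative):

C:\app\grid\11.2.0.1\bin> ocrconfig -showbackup
C:\app\grid\11.2.0.1\bin> ocrconfig -backuploc C:\app\grid\11.2.0.1\cdata\cluster01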
D.6 Upgrading Oracle ASM
After you have completed the Oracle Clusterware 11g release 2 (11.2) upgrade, if you
did not choose to upgrade Oracle ASM when you upgraded Oracle Clusterware, then
you can do it separately using ASMCA to perform rolling upgrades.
While you can use asmca to complete the upgrade of Oracle ASM separately, you
should perform the upgrade as soon as possible after you upgrade Oracle Clusterware,
because Oracle ASM management tools such as srvctl will not work until Oracle
ASM has been upgraded.
Note: ASMCA performs a rolling upgrade only if the earlier version
of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA
performs a normal upgrade, during which ASMCA brings down all
Oracle ASM instances on all nodes of the cluster, and then brings them
all up in the new Oracle Grid Infrastructure home.
D.6.1 About Upgrading Oracle ASM
Note the following if you intend to perform rolling upgrades of Oracle ASM:
■ The active version of Oracle Clusterware must be 11g release 2 (11.2). To determine
the active version, enter the following command:
C:\..\bin> crsctl query crs activeversion
■ You must ensure that any rebalance operations on your existing Oracle ASM
installation are completed before starting the upgrade.
■ During the upgrade process, you place the Oracle ASM instances in an upgrade
mode. Because this upgrade mode limits Oracle ASM operations, you should
complete the upgrade soon after you begin. The following are the operations
allowed during Oracle ASM upgrade:
– Diskgroup mounts and dismounts
– Opening, closing, resizing, or deleting database files
– Recovering instances
– Queries of fixed views and packages: users are allowed to query fixed views
and run anonymous PL/SQL blocks using fixed packages, such as
DBMS_DISKGROUP
D.6.2 Using ASMCA to Upgrade Oracle ASM
Complete the following procedure to upgrade Oracle ASM:
1. On the node on which you plan to start the upgrade, set the environment variable
ASMCA_ROLLING_UPGRADE to true to put the Oracle ASM instances in upgrade
mode:
C:\> set ASMCA_ROLLING_UPGRADE=true
2. From the Oracle Grid Infrastructure 11g release 2 (11.2) home, start ASMCA. For
example:
C:\> cd app\oracle\grid\11.2.0.1\bin
C:\..\bin> asmca.bat
3. In the ASMCA graphical interface, select Upgrade.
ASMCA upgrades Oracle ASM in succession for all nodes in the cluster.
4. When the upgrade is complete for all the nodes, unset the environment variable
ASMCA_ROLLING_UPGRADE:
C:\> set ASMCA_ROLLING_UPGRADE=
See Also: Oracle Database Upgrade Guide and Oracle Database Storage
Administrator's Guide for additional information about preparing an
upgrade plan for Oracle ASM, and for starting, completing, and
stopping Oracle ASM upgrades
D.7 Updating DB Control and Grid Control Target Parameters
Because Oracle Clusterware 11g release 2 (11.2) is an out-of-place upgrade of the Oracle
Clusterware home in a new location (the grid infrastructure for a cluster home, or Grid
home), the path for the CRS_HOME parameter in some parameter files must be
changed. If you do not change the parameter, then you will encounter errors such as
"cluster target broken" on Enterprise Manager Database Control or Grid Control.
Use the following procedure to resolve this issue:
1. Log in to Enterprise Manager Database Control or Enterprise Manager Grid
Control.
2. Select the Cluster tab.
3. Click Monitoring Configuration.
4. Update the value for Oracle Home with the new path for the Oracle Grid
Infrastructure home.
D.8 Downgrading Oracle Clusterware After an Upgrade
After a successful or a failed upgrade to Oracle Clusterware 11g release 2 (11.2), you
can restore Oracle Clusterware to the previous version.
The restoration procedure in this section restores the Oracle Clusterware configuration
to the state it was in before the Oracle Clusterware 11g release 2 (11.2) upgrade. Any
configuration changes you performed during or after the 11g release 2 (11.2) upgrade
are removed and cannot be recovered.
To restore Oracle Clusterware to the previous release:
1. On all remote nodes, use the command syntax Grid_home\crs\install
\rootcrs.pl -downgrade [-force] to stop the Oracle Clusterware 11g
release 2 (11.2) resources and shut down the Oracle Clusterware stack.
Note: This command does not reset the OCR, or delete the ocr
Windows Registry key.
For example:
C:\app\grid\11.2.0.1\crs\install> rootcrs.pl -downgrade
To stop a partial or failed Oracle Clusterware 11g release 2 (11.2) upgrade and
restore the previous release Oracle Clusterware, use the -force flag with this
command.
2. After the rootcrs.pl -downgrade script has completed on all remote nodes,
on the local node use the command syntax Grid_home\crs\install
\rootcrs.pl -downgrade -lastnode -oldcrshome pre11.2_crs_home
-version pre11.2_crs_version [-force], where pre11.2_crs_home is
the home of the earlier Oracle Clusterware installation, and
pre11.2_crs_version is the release number of the earlier Oracle Clusterware
installation.
For example:
C:\app\grid\11.2.0.1\crs\install> rootcrs.pl -downgrade -lastnode
-oldcrshome C:\app\crs -version 11.1.0.6.0
This script downgrades the OCR, and removes binaries from the Grid home. If the
Oracle Clusterware 11g release 2 (11.2) upgrade did not complete successfully, then
restore the previous release of Oracle Clusterware by using the -force flag with
this command.
Index
A
ACFS. See Oracle ACFS
Administrators group, 2-2
ASM
See Oracle ASM
asmtool utility, 1-7, 3-18
asmtoolg utility, 1-7, 3-17
authentication support
preinstallation requirements, 2-21
Automatic Storage Management Cluster File System.
See Oracle ACFS
Automatic Storage Management. See Oracle ASM.
automount
enable, 3-7
B
bind order
Windows network, 2-8
BMC
configuring, 2-18
BMC interface
preinstallation tasks, 2-17
C
changing host names, 4-3
clients
connecting to SCAN, C-3
cluster configuration file, 4-6
cluster file system
storage option for data files, 3-2
Cluster Manager for Oracle9i databases, 4-6
cluster name, 1-3
requirements for, 4-3
cluster nodes
private node names, 4-4
public node names, 4-3
virtual node names, 4-4
cluster privileges
verifying for OUI cluster installation, 2-22
Cluster Time Synchronization Service, 2-16
CLUSTER_INTERCONNECTS parameter, C-2
commands
asmca, 3-19, 5-7
cluvfy, A-4
configToolAllCommands, B-7
crsctl, 2-16, 4-10, 5-9, 5-10
deinstall.bat, 6-5
diskpart, A-3
gsdctl, 4-2
ipconfig, A-2
ipmitool, 2-19
localconfig delete, 4-2
lsnrctl, 5-10
msinfo32, 2-5
net start and net stop, 2-16
net use, 1-5, 2-22
nslookup, A-2
ocopy.exe, 5-1
olsnodes, 5-9
ping, 2-8
regedit, 1-4, 2-16, 2-23
rootcrs.pl, 6-2
roothas.pl, 6-2
runcluvfy.bat, 4-8
setup.exe, 4-8, B-5
srvctl, 4-10, 5-10, 6-1
W32tm, 2-16
console mode, 2-4
crs_install.rsp file, B-3
ctssd, 2-16
customized database
failure groups for Oracle ASM, 3-12
requirements when using Oracle ASM,
3-11
D
DAS (direct attached storage) disks, 3-16
data files
minimum disk space for, 3-3
recommendations for file system, 3-3
storage options, 3-2
data loss
minimizing with Oracle ASM, 3-12
database files
supported storage options, 3-5
databases
Oracle ASM requirements, 3-11
DB_RECOVERY_FILE_DEST, 5-7
deinstall tool
example of using the, 6-5
generating the paramater file, 6-5
location, 6-3
log files, 6-5
parameter file, 6-6
syntax and options, 6-3
desupported
raw devices, xiii, xiv
device names
creating with asmtool, 3-18
creating with asmtoolg, 3-17
DHCP, 2-20, C-2
directory
database file directory, 3-3
DisableDHCPMediaSense parameter, 1-4, 2-8
disk automount
enable, 3-7
disk group
Oracle ASM with redundancy levels, 3-10
recommendations for Oracle ASM disk
groups, 3-10
disk space
checking, 2-6
required partition sizes, 3-8
requirements for preconfigured database in Oracle ASM, 3-10
disk space requirements
Windows, 2-5
disks
configuring for Oracle ASM, 1-7, 3-14
NTFS formatted, 2-6
supported for Oracle ASM, 3-16
display monitor
resolution settings, 2-5
DNS
configuring for use with GNS, 2-9
configuring to use with GNS, 2-10
domain user
used in installation, 2-22
E
Easy Connect Naming, xvi
enable automount, 3-7
enable disk automount, 3-7
environment variable
ORACLE_BASE, 2-1
ORACLE_HOME, 2-1
ORACLE_SID, 2-1
PATH, 2-1
environment variables
TEMP, 2-6
examples
Oracle ASM failure groups, 3-12
external redundancy
Oracle ASM redundancy level, 3-10
F
failure groups
characteristics in Oracle ASM, 3-12
examples in Oracle ASM, 3-12
Oracle ASM, 3-10
fast recovery area, 5-7
FAT disk sizes, 2-6
features, new, xiii
fencing
and IPMI, 2-18, 4-5
file system
storage option for data files, 3-2
using for data files, 3-3
file systems, 3-6
system requirements, 2-5
Windows system requirements, 2-5
files
raw device mapping file
desupport for, 3-26
response files, B-3
filesets, 2-13
G
globalization
support for, 4-3
GNS
about, 2-9
configuring, 2-9
name resolution, 2-9
virtual IP address, 2-9
Grid home
modifying, 5-11
grid home
disk space requirements, 2-6
grid naming service. See GNS
groups
Administrators group, 2-2
operating system, 2-1
ORA_DBA, 2-1
OraInventory, 2-1
H
hardware requirements, 2-4
high redundancy
Oracle ASM redundancy level, 3-10
host name
legal host names, 4-3
host names
changing, 4-3
hosts file, 2-10, C-2
I
inst_loc registry key, C-1
installation
and globalization, 4-3
Oracle ASM requirements, 3-10, 3-11
response files, B-3
preparing, B-3, B-4
templates, B-3
silent mode, B-5
starting, 1-7
typical, 1-7
using cluster configuration file, 4-6
interconnect
switch, 2-8
interfaces
requirements for private interconnect, C-2
Inventory directory, C-1
IP addresses
configuring, 1-3
private, 1-3
public, 1-3
requirements, 1-3
virtual, 1-3
IPMI
addresses not configurable by GNS, 2-17
configuring driver for, 2-17
fencing, 2-18
preinstallation tasks, 2-17
preparing for installation, 4-5
J
JDK requirements, 2-13
L
legal host names, 4-3
local administrator user, 2-22
local device
using for data files, 3-3
LOCAL_SYSTEM privilege, 4-2
log file
how to access during installation, 4-5
lsnrctl, 5-10
M
Media Sensing, 1-4, 2-8
disabling, 1-4, 2-8
memory requirements, 2-4
Microsoft Internet Explorer, 2-4
minimum requirements, 2-5
mirroring Oracle ASM disk groups, 3-10
mixed binaries, 2-13
modifying Oracle ASM binaries, 5-11
modifying Oracle Clusterware binaries, 5-11
mstsc
console mode, 2-4
My Oracle Support, 5-1
My Oracle Support Web site
about, 2-3
accessing, 2-3
N
Net Configuration Assistant (NetCA)
response files, B-5
running at command prompt, B-5
network interface bind order, 1-4, 2-8
network interface names
requirements, 1-5
network interface teaming, 2-7
new features, xiii
NFS, 3-6
and Oracle Clusterware files, 3-14
noninteractive mode. See response file mode
normal redundancy, Oracle ASM redundancy
level, 3-10
NTFS
system requirements, 2-5
NTFS formatted disks, 2-6
O
OCFS
See Oracle Cluster File System (OCFS) for
Windows
OcfsFormat.exe, 3-22
OCR
upgrading from previous release, 3-9
OCR. See Oracle Cluster Registry
oifcfg, C-2
operating system
32-bit, 2-5
Administrators group, 2-2
different on cluster members, 2-13
groups, 2-1
Itanium, 2-5
limitation for Oracle ACFS, C-4
mixed versions, 2-2
requirements, 2-13
user accounts, 2-2
users, 1-5
using different versions, 2-13
version checks, 2-13
ORA_DBA group, 2-1, 2-22
Oracle ACFS, xiv
about, C-4
Oracle Advanced Security
preinstallation requirements, 2-21
Oracle ASM
and rolling upgrade, D-5
asmtool utility, 3-18
asmtoolg utility, 3-17
configuring disks, 3-14
DAS disks, 3-16
disk group recommendations, 3-10
disk groups
recommendations for, 3-10
redundancy levels, 3-10
disks, supported, 3-16
failure groups, 3-10
characteristics, 3-12
examples, 3-12
identifying, 3-12
mirroring, 3-10
number of instances on each node, 1-6, 3-4, 3-5
partition creation, 3-16
redundancy levels, 3-10
required for Standard Edition Oracle RAC, 3-4, 3-5
required for Typical install type, 3-4, 3-5
rolling upgrade of, 4-2
SAN disks, 3-16
space required for Oracle Clusterware files, 3-10
space required for preconfigured database, 3-11
storing Oracle Clusterware files on, 3-2
Oracle ASM Dynamic Volume Manager (Oracle
ADVM), xiv
Oracle Cluster File System (OCFS) for Windows
formatting drives for, 3-22
installation, 3-22
Oracle Cluster Registry
configuration of, 4-4
mirroring, 3-14, 3-15
partition sizes, 3-15
supported storage options, 3-5
Oracle Clusterware
and file systems, 3-6
and upgrading Oracle ASM instances, 1-6, 3-4,
3-5
installing, 4-1
installing with Oracle Universal Installer, 4-5
rolling upgrade of, 4-2
supported storage options for, 3-5
upgrading, 3-15
Oracle Clusterware files
Oracle ASM disk space requirements, 3-10
Oracle ASM requirements, 3-10
Oracle Clusterware home directory, 3-4
Oracle Database
data file storage options, 3-2
minimum disk space requirements, 3-3
requirements with Oracle ASM, 3-11
Oracle Disk Manager (ODM), 3-23, 3-25
Oracle Enterprise Manager
preinstallation requirements, 2-21
Oracle grid infrastructure response file, B-3
oracle home
ASCII path restriction for, 4-5
Oracle Inventory directory, C-1
Oracle Notification Server Configuration
Assistant, 4-6
Oracle patch updates, 5-1
Oracle Private Interconnect Configuration
Assistant, 4-6
Oracle Universal Installer
and Oracle Clusterware, 4-5
Oracle Upgrade Companion, 2-3
oracle user, 2-2
ORACLE_BASE, 2-1
ORACLE_HOME, 2-1
ORACLE_SID, 2-1
Oracle9i Cluster Manager, 4-6
Oracle9i RAC databases, 4-6
OracleRemExecService Windows service, 5-9
oraInventory group, 2-1
OUI
see Oracle Universal Installer
P
paging file, 1-5, 2-4
partition creation for Oracle ASM disks, 3-16
partitions
using with Oracle ASM, 3-10
passwords
specifying for response files, B-2
See also security
patch updates
download, 5-1
install, 5-1
My Oracle Support, 5-1
patching
older software releases, 5-9
PATH, 2-1
physical RAM requirements, 2-4
policy-managed databases
and SCAN, C-4
postinstallation
patch download and install, 5-1
preconfigured database
Oracle ASM disk space requirements, 3-11
requirements when using Oracle ASM, 3-11
preinstallation
requirements for Oracle Advanced Security, 2-21
requirements for Oracle Enterprise Manager, 2-21
private IP addresses, 1-3
privileges
verifying for OUI cluster installation, 2-22
processors, 2-5
public IP addresses, 1-3
R
RAID
and mirroring Oracle Cluster Registry and voting
disk, 3-15
and mirroring Oracle Cluster Registry and voting
disks, 3-14
recommended Oracle ASM redundancy
level, 3-10
using for Oracle data files, 3-3
RAM requirements, 2-4
raw devices
and upgrades, 3-3
desupport for creating a raw device mapping
file, 3-26
upgrading existing partitions, 3-15
raw devices desupported, xiii, xiv
recommendations
backing up Oracle software, D-4
creating a fast recovery area disk group, 5-7
enable disk automounting, 3-8
for adding devices to an Oracle ASM disk
group, 3-14
for client access to the cluster, 1-4, 2-10
for Cluster size for OCFS for Windows disk
partitions, 3-22
for configuring BMC, 2-17
for creating disk partitions for use with Oracle
ASM, 3-16
for creating Oracle Grid Infrastructure home and
Oracle Inventory directories, 1-6
for managing Oracle Clusterware files, 2-6
for managing Oracle Database data files, 2-6
for Oracle Clusterware shared files, 3-14
for Oracle Database data and recovery files, 3-2
for private network, C-2
for shared storage for Oracle Clusterware
files, 3-8
for using a dedicated switch, 2-8, 2-17
for using DHCP, 2-20
for using NIC teaming, 2-7
for using NTFS formatted disks, 2-6
for using static host names, 2-9
for Windows Firewall exceptions, 5-3
installation option, 1-1
installing Instantaneous Problem Detection OS
Tool (IPD/OS), 5-6
limiting the number of partitions on a single
disk, 3-14
number of IP address for SCAN resolution, 2-10,
C-4
number of OCR and voting disk locations, 4-4
number of SCAN addresses, 4-4
on installation and scheduled tasks, A-4
on performing software-only installations, 4-8
Oracle ASM redundancy level, 3-10
paging file size, 1-2
postinstallation tasks, 5-6
SCAN VIP address configuration, 1-4, 2-10
secure response files after modification, B-3
storing password files, B-6
temporary directory configuration, 1-5
upgrade Oracle ASM when upgrading Oracle
Clusterware, D-4
VIP address names, 4-4
recovery files
supported storage options, 3-5
redundancy level
and space requirements for preconfigured
database, 3-10
for Oracle ASM, 3-10
redundant array of independent disks
See RAID
Registry keys
inst_loc, C-1
ocr, D-7
remove Oracle software, 6-1
requirements
for Oracle Enterprise Manager, 2-21
Grid home disk space, 2-6
hardware, 2-4
hardware certification, 2-3
memory, 2-4
processors, 2-5
software certification, 2-3
temporary disk space, 2-6
Web browser support, 2-4
response file installation
preparing, B-3
response files
templates, B-3
silent mode, B-5
response file mode
about, B-1
reasons for using, B-2
See also response files, silent mode, B-1
response files
about, B-1
creating with template, B-3
crs_install.rsp, B-3
general procedure, B-2
Net Configuration Assistant, B-5
passing values at command line, B-1
passwords, B-2
security, B-2
specifying with Oracle Universal Installer, B-4
response files. See also silent mode
rolling upgrade
of Oracle ASM, D-5
Oracle ASM, 4-2
Oracle Clusterware, 4-2
S
SAN (storage area network) disks, 3-16
SCAN, 1-3, 2-10
addresses, C-3
client access, 1-4
restrictions, 1-3
understanding, C-3
use of SCAN required for clients of
policy-managed databases, C-4
SCAN listener, C-3
scheduled tasks, A-4
security
See also passwords
silent mode
about, B-1
reasons for using, B-2
See also response files., B-1
silent mode installation, B-5
single client access name (SCAN). See SCAN
software requirements, 2-13
storage area network disks, 3-16
supported storage options
Oracle Clusterware, 3-5
switch, 2-8
system requirements
for NTFS file systems, 2-5
for Windows file systems, 2-5
T
TCP, 2-8
teaming
of network interfaces, 2-7
telnet service support, 2-4
TEMP environment variable, 2-6
temporary disk space
requirements, 2-6
Terminal services
console mode, 2-4
troubleshooting
disk space errors, 4-5
error messages, A-1
log file, 4-5
unexplained installation errors, A-4
voting disk backups, xvii
U
UDP, 2-8
uninstall Oracle software, 6-1
upgrade
of Oracle ASM, 2-2
of Oracle Clusterware, 2-2, 4-2
Oracle ASM, 4-2
restrictions for, D-1
standalone database, 2-2
unsetting environment variables for, D-4
upgrades
and SCAN, C-4
using raw devices with, 3-3
upgrading
and existing Oracle ASM instances, 1-6, 3-4, 3-5
and OCR partition sizes, 3-15
and voting disk partition sizes, 3-15
user
account for installing Oracle software, 2-2
domain, 2-22
local Administrator, 2-22
user datagram protocol (UDP), 2-8
V
video adapter, 2-5
requirements, 2-5
virtual IP addresses, 1-3
virtual memory
requirements, 2-4
voting disk
configuration of, 4-4
mirroring, 3-14, 3-15
voting disks, 3-2
backing up deprecated, xvii
partition sizes, 3-15
requirement of absolute majority of, 3-2
supported storage options, 3-5
upgrading from a previous release, 3-9
W
Web browser support, 2-4
Web browsers, 2-4
Windows
comparison to Linux or UNIX, 2-1
Windows Media Sensing
disabling, 1-4, 2-8
Windows network bind order, 2-8
Windows Time service, 2-15