Clustered Data ONTAP 8.3 MetroCluster

Clustered Data ONTAP® 8.3
MetroCluster Installation Express Guide
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089
U.S.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 463-8277
Web: www.netapp.com
Feedback: doccomments@netapp.com
Part number: 215-09653_A0
January 2015
Contents

Deciding whether to use this guide
MetroCluster installation workflow
Preparing for the MetroCluster installation
    Preconfigured component passwords
    Preconfigured SVMs and LIFs
    Worksheet for FC switches and FC-to-SAS bridges
    IP network information worksheet for site A
    IP network information worksheet for site B
Configuring the MetroCluster hardware components
    Installing and cabling MetroCluster components
        Racking the hardware components
        Labeling the cables
        Cabling the FC-VI and HBA adapters to the FC switches
        Cabling the ISLs between MetroCluster sites
        Cabling the cluster interconnect
        Cabling the cluster peering connections
        Cabling the HA interconnect, if necessary
        Cabling the management and data connections
        Recommended port assignments for FC switches
    Cabling the storage stacks
    Configuring the FC switches
        Setting the FC switch name
        Configuring the E-ports on a Brocade FC switch
    Configuring the cluster interconnect switch management port IP addresses
Configuring the MetroCluster software in Data ONTAP
    Software setup checklist
    Configuring the individual nodes
        Configuring the cluster management and node management networks
        Changing the password for the admin account
        Renaming a node
        Configuring and testing AutoSupport
    Configuring the clusters
        Configuring the cluster management networks
        Modifying cluster attributes
        Protecting configuration backup files
        Synchronizing the system time using NTP
        Refreshing the MetroCluster configuration
        Verifying cluster health
    Configuring MetroCluster components for health monitoring
        Configuring the MetroCluster FC switches for health monitoring
        Configuring FC-to-SAS bridges for health monitoring
Verifying the MetroCluster configuration
    Verifying the configuration is successful
        Verifying cluster peering
        Verifying cabling
        Checking the MetroCluster configuration
        Verifying that you can read and write data to the test SVM using NFS
    Verifying local HA operation
    Verifying switchover, healing, and switchback
        Verifying that your system is ready for a switchover
        Performing a negotiated switchover
        Confirming that the DR partners have come online
        Reestablishing SnapMirror or SnapVault SVM peering relationships
        Healing the configuration
        Performing a switchback
        Verifying a successful switchback
        Reestablishing SnapMirror or SnapVault SVM peering relationships
Where to find additional information
Copyright information
Trademark information
How to send comments about documentation and receive update notification
Index
Deciding whether to use this guide
This guide describes how to install a MetroCluster system that has been received from the factory.
The MetroCluster system includes two clusters (each with two nodes) at physically separate sites
connected by redundant FC switch fabrics.
You should use this guide if you want to install and configure a MetroCluster system that has been
received from the factory. Because the MetroCluster nodes, switches, and bridges are preconfigured
at the factory, the installation process is simpler. This guide does not cover every option and
provides minimal background information.
You should use this guide only if the following is true:
• The MetroCluster configuration has been received from the factory.
• The configuration is using Brocade FC storage switches.
  This guide does not document configuration of Cisco FC storage switches.
• The configuration is not using array LUNs (FlexArray Virtualization).
• The configuration is not sharing existing FC fabrics with a 7-Mode fabric MetroCluster during
  transition.
If the MetroCluster configuration does not meet these requirements, use the procedures in the
Clustered Data ONTAP 8.3 MetroCluster Installation and Configuration Guide.
MetroCluster installation workflow
To install a MetroCluster system, you must review and collect information, configure hardware
components such as the storage stacks and FC switches, and then configure Data ONTAP software
on each node and cluster. After you complete these steps on both clusters in the MetroCluster system,
you need to verify the installation.
Preparing for the MetroCluster installation
As you prepare for the MetroCluster installation, you must gather the required networking
information and be aware of the preconfigured addresses and SVMs in the MetroCluster
configuration.
Preconfigured component passwords
Some MetroCluster components are preconfigured with usernames and passwords. You need to be
aware of these settings as you perform your site-specific configuration.
Component                                        Username         Password
Data ONTAP login                                 admin            netapp!123
Service Processor (SP) login                     admin            netapp!123
Intercluster passphrase                          None required    netapp!123
ATTO FC-to-SAS bridge                            None required    None required
Brocade switches                                 Admin            password
NetApp cluster interconnect switches             None required    None required
(CN1601, CN1610)
Preconfigured SVMs and LIFs
The MetroCluster nodes are preconfigured with SVMs and LIFs. You need to be aware of these
settings as you perform your site-specific configuration.
LIFs for Cluster Storage Virtual Machine (SVM)
On the 32xx platforms, the symbol # indicates that the slot location may vary depending on system
configuration:
• For 32xx systems with an IOXM, slot 2 is used.
• For 32xx systems with two controllers in the chassis, or a controller and a blank, slot 1 is used.
LIF                 Network address/mask   Node          Port
                                                         32xx   62xx   FAS8020   FAS8040/8060/8080
clusterA-01_clus1   169.254.x.x/16         clusterA-01   e#a    e0c    e0a       e0a
clusterA-01_clus2   169.254.x.x/16         clusterA-01   e#b    e0e    e0b       e0b
clusterA-01_clus3   169.254.x.x/16         clusterA-01   n/a    n/a    n/a       e0c
clusterA-01_clus4   169.254.x.x/16         clusterA-01   n/a    n/a    n/a       e0d
clusterA-02_clus1   169.254.x.x/16         clusterA-02   e#a    e0c    e0a       e0a
clusterA-02_clus2   169.254.x.x/16         clusterA-02   e#b    e0e    e0b       e0b
clusterA-02_clus3   169.254.x.x/16         clusterA-02   n/a    n/a    n/a       e0c
clusterA-02_clus4   169.254.x.x/16         clusterA-02   n/a    n/a    n/a       e0d
LIFs for clusterA Storage Virtual Machine (SVM)
LIF                 Network address/mask   Node          Port
                                                         32xx   62xx   FAS8020   FAS8040/8060/8080
clusterA-01-ic1     192.168.224.221/24     clusterA-01   e0a    e0a    e0e       e0i
clusterA-01-ic2     192.168.224.223/24     clusterA-01   e0b    e0b    e0f       e0j
clusterA-01_mgmt1   10.10.10.11/24         clusterA-01   e0M    e0M    e0M       e0M
clusterA-02-ic1     192.168.224.222/24     clusterA-02   e0a    e0a    e0e       e0i
clusterA-02-ic2     192.168.224.224/24     clusterA-02   e0b    e0b    e0f       e0j
clusterA-02_mgmt1   10.10.10.12/24         clusterA-02   e0M    e0M    e0M       e0M
cluster_mgmt        10.10.10.9/24          clusterA-01   e0a    e0a    e0e       e0i
LIFs for clusterB Storage Virtual Machine (SVM)
LIF                 Network address/mask   Node          Port
                                                         32xx   62xx   FAS8020   FAS8040/8060/8080
clusterB-01-ic1     192.168.224.225/24     clusterB-01   e0a    e0a    e0e       e0i
clusterB-01-ic2     192.168.224.227/24     clusterB-01   e0b    e0b    e0f       e0j
clusterB-01_mgmt1   10.10.10.13/24         clusterB-01   e0M    e0M    e0M       e0M
clusterB-02-ic1     192.168.224.226/24     clusterB-02   e0a    e0a    e0e       e0i
clusterB-02-ic2     192.168.224.228/24     clusterB-02   e0b    e0b    e0f       e0j
clusterB-02_mgmt1   10.10.10.14/24         clusterB-02   e0M    e0M    e0M       e0M
cluster_mgmt        10.10.10.10/24         clusterB-01   e0a    e0a    e0e       e0i
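The platform-dependent port mappings in the tables above are easy to mis-transcribe into a
site-specific cabling checklist. As a rough illustration only (the dictionary below is a manual
transcription of the tables, not output from any NetApp tool), the intercluster LIF home ports
can be held in a small lookup:

```python
# Home ports of the two preconfigured intercluster LIFs (ic1, ic2) per
# controller platform, transcribed from the LIF tables above.
INTERCLUSTER_PORTS = {
    "32xx":    ("e0a", "e0b"),
    "62xx":    ("e0a", "e0b"),
    "FAS8020": ("e0e", "e0f"),
    "FAS8040": ("e0i", "e0j"),  # FAS8060 and FAS8080 use the same ports
}

def ic_home_ports(platform: str) -> tuple:
    """Return the home ports for the intercluster LIFs ic1 and ic2."""
    # FAS8060 and FAS8080 share the FAS8040 column in the tables.
    if platform in ("FAS8060", "FAS8080"):
        platform = "FAS8040"
    return INTERCLUSTER_PORTS[platform]
```

For example, `ic_home_ports("FAS8020")` returns `("e0e", "e0f")`, matching the FAS8020 column
of the tables.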
Worksheet for FC switches and FC-to-SAS bridges
The worksheet enables you to record the FC switch and FC-to-SAS bridge values that you need to
complete the cluster setup process.
Site A, FC switch one (FC_switch_A_1)

Switch configuration parameter    Customer value
FC_switch_A_1 IP address
FC_switch_A_1 Username
FC_switch_A_1 Password

Site A, FC switch two (FC_switch_A_2)

Switch configuration parameter    Customer value
FC_switch_A_2 IP address
FC_switch_A_2 Username
FC_switch_A_2 Password
Site A, FC-to-SAS bridge 1 (FC_bridge_A_1_port-number)
Each SAS stack requires two FC-to-SAS bridges. One bridge connects to FC_switch_A_1_port-number
and the second connects to FC_switch_A_2_port-number.

Site A                               Customer value
Bridge_A_1_port-number IP address
Bridge_A_1_port-number Username
Bridge_A_1_port-number Password

Site A, FC-to-SAS bridge 2 (FC_bridge_A_2_port-number)
Each SAS stack requires two FC-to-SAS bridges. One bridge connects to FC_switch_A_1_port-number
and the second connects to FC_switch_A_2_port-number.

Site A                               Customer value
Bridge_A_2_port-number IP address
Bridge_A_2_port-number Username
Bridge_A_2_port-number Password
Site B, FC switch one (FC_switch_B_1)

Site B                       Customer value
FC_switch_B_1 IP address
FC_switch_B_1 Username
FC_switch_B_1 Password

Site B, FC switch two (FC_switch_B_2)

Site B                       Customer value
FC_switch_B_2 IP address
FC_switch_B_2 Username
FC_switch_B_2 Password

Site B, FC-to-SAS bridge 1 (FC_bridge_B_1_port-number)
Each SAS stack requires two FC-to-SAS bridges. One bridge connects to FC_switch_B_1_port-number
and the second connects to FC_switch_B_2_port-number.

Site B                               Customer value
Bridge_B_1_port-number IP address
Bridge_B_1_port-number Username
Bridge_B_1_port-number Password

Site B, FC-to-SAS bridge 2 (FC_bridge_B_2_port-number)
Each SAS stack requires two FC-to-SAS bridges. One bridge connects to FC_switch_B_1_port-number
and the second connects to FC_switch_B_2_port-number.

Site B                               Customer value
Bridge_B_2_port-number IP address
Bridge_B_2_port-number Username
Bridge_B_2_port-number Password
IP network information worksheet for site A
You must obtain IP addresses and other network information for the first MetroCluster site (site A)
from your network administrator before you configure the system.
Site A switch information
When you cable the system, you need a host name and management IP address for each cluster
switch:

Cluster switch    Host name    IP address    Network mask    Default gateway
Interconnect 1 (not required if using a two-node switchless cluster)
Interconnect 2 (not required if using a two-node switchless cluster)
Management 1
Management 2
Site A cluster creation information
When you first create the cluster, you need the following information:

Type of information                                    Your values
Cluster name (example used in this guide: site_A)
DNS domain
DNS name servers
Location
Administrator password
Site A node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway:

Node                                                  Port    IP address    Network mask    Default gateway
Node 1 (example used in this guide: controller_A_1)
Node 2 (example used in this guide: controller_A_2)

Site A LIFs and ports for cluster peering
For each node in the cluster, you need the IP address of an intercluster LIF, a network mask, and a
default gateway. The intercluster LIFs are used to peer the clusters:

Node      Port    IP address of intercluster LIF    Network mask    Default gateway
Node 1
Node 2

Site A time server information
You must synchronize the time, which requires one or more NTP time servers:

Node            Host name    IP address    Network mask    Default gateway
NTP server 1
NTP server 2
Site A AutoSupport information
You must configure AutoSupport on each node, which requires the following information:

Type of information                                Your values
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists:
  Full-length messages
  Concise messages
  Partners

Site A service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:

Node      IP address    Network mask    Default gateway
Node 1
Node 2
IP network information worksheet for site B
You must obtain IP addresses and other network information for the second MetroCluster site (site
B) from your network administrator before you configure the system.
Site B cluster switch information (if not using a two-node switchless cluster configuration)
When you cable the system, you need a host name and management IP address for each cluster
switch:

Cluster switch    Host name    IP address    Network mask    Default gateway
Interconnect 1 (not required if using a two-node switchless cluster)
Interconnect 2 (not required if using a two-node switchless cluster)
Management 1
Management 2
Site B cluster creation information
When you first create the cluster, you need the following information:

Type of information                                    Your values
Cluster name (example used in this guide: site_B)
DNS domain
DNS name servers
Location
Administrator password

Site B node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway:

Node                                                  Port    IP address    Network mask    Default gateway
Node 1 (example used in this guide: controller_B_1)
Node 2 (example used in this guide: controller_B_2)
Site B LIFs and ports for cluster peering
For each node in the cluster, you need the IP address of an intercluster LIF, a network mask, and a
default gateway. The intercluster LIFs are used to peer the clusters:

Node      Port    IP address of intercluster LIF    Network mask    Default gateway
Node 1
Node 2

Site B time server information
You must synchronize the time, which requires one or more NTP time servers:

Node            Host name    IP address    Network mask    Default gateway
NTP server 1
NTP server 2

Site B AutoSupport information
You must configure AutoSupport on each node, which requires the following information:

Type of information                                Your values
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists:
  Full-length messages
  Concise messages
  Partners
Site B service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:

Node                       IP address    Network mask    Default gateway
Node 1 (controller_B_1)
Node 2 (controller_B_2)
Configuring the MetroCluster hardware components

The MetroCluster components must be physically installed, cabled, and configured at both
geographic sites.

Steps
1. Installing and cabling MetroCluster components
   The storage controllers must be cabled to the FC switches and the ISLs must be cabled to link
   the MetroCluster sites. The storage controllers must also be cabled to the data and management
   network.
2. Cabling the storage stacks
   You must cable the disk shelves to the FC-to-SAS bridges and then cable the bridges to the
   storage controllers.
3. Configuring the FC switches
   You must configure the switch name and ISL settings for the FC switches based on your site's
   requirements.
4. Configuring the cluster interconnect switch management port IP addresses
   In a switched cluster, you need to configure IP addresses for the management ports of the cluster
   switches so that you can verify cluster cabling later. You also need to configure IP addresses for
   the management ports of the management switches, if they were included in the order.
Installing and cabling MetroCluster components
The storage controllers must be cabled to the FC switches and the ISLs must be cabled to link the
MetroCluster sites. The storage controllers must also be cabled to the data and management network.
Steps
1. Racking the hardware components
2. Labeling the cables
3. Cabling the FC-VI and HBA adapters to the FC switches
4. Cabling the ISLs between MetroCluster sites
5. Cabling the cluster interconnect
6. Cabling the cluster peering connections
7. Cabling the HA interconnect, if necessary
8. Cabling the management and data connections
9. Recommended port assignments for FC switches
Racking the hardware components
If you have not received the equipment already installed in cabinets, you must rack the components.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Plan out the positioning of the MetroCluster components.
The rack space will depend on the platform model of the storage controllers, the switch types, and
the number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
4. Install the FC switches in the rack or cabinet.
5. Install the disk shelves, power them on, and set the shelf IDs.
   • You must power-cycle each disk shelf.
   • Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster
     configuration (including both sites).
6. Install each FC-to-SAS bridge:
a. Secure the “L” brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
b. Connect each bridge to a power source that provides a proper ground.
c. Power on each bridge.
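The shelf-ID uniqueness rule in step 5 spans both sites, so a clash is easy to miss when each site
is racked by a different team. A minimal sketch of a pre-cabling sanity check (the input data
structure is hypothetical; no NetApp tool produces it):

```python
def find_duplicate_shelf_ids(shelf_ids_by_site: dict) -> set:
    """Return shelf IDs that appear more than once across the whole
    MetroCluster configuration (both sites combined)."""
    seen, dupes = set(), set()
    for ids in shelf_ids_by_site.values():
        for shelf_id in ids:
            if shelf_id in seen:
                dupes.add(shelf_id)
            seen.add(shelf_id)
    return dupes

# Example: shelf ID 12 is reused at site B, which violates the rule above.
conflicts = find_duplicate_shelf_ids({"site_A": [10, 11, 12], "site_B": [12, 13]})
```

An empty result means the planned IDs are unique across both sites.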
Related information
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
Labeling the cables
You should label the cables to simplify future expansion and to make troubleshooting of the cluster
easier. You can use the binder of labels that is included in the accessories box.
About this task
You can label the cables at any time during the installation process.
Steps
1. Using the labels in the binder supplied, label each end of each cable as required by your
environment.
You do not need to label the power cables.
2. Save the binder containing any remaining labels for future expansion of the cluster.
Cabling the FC-VI and HBA adapters to the FC switches
The FC-VI adapter and HBAs must be cabled to the site FC switches on each controller module in
the MetroCluster configuration.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Cable the FC-VI ports.
2. Cable the HBA ports.
Cabling the ISLs between MetroCluster sites
You must connect the FC switches at each site through the fiber-optic Inter-Switch Links (ISLs) to
form the switch fabrics that connect the MetroCluster components.
About this task
This task includes steps performed at each MetroCluster site.
Up to four ISLs per FC switch fabric are supported.
Step
1. Connect the FC switches at each site to the ISL or ISLs.
   This must be done for both switch fabrics.

   ISL           Port on FC_switch_A_x            Port on FC_switch_B_x
                 Brocade 6505   Brocade 6510      Brocade 6505   Brocade 6510
   First ISL     8              20                8              20
   Second ISL    9              21                9              21
   Third ISL     10             22                10             22
   Fourth ISL    11             23                11             23
Cabling the cluster interconnect
At each site, you must cable the cluster interconnect between the local controllers. If the local
controllers are not configured as a two-node switchless cluster, two cluster interconnect switches
are required.
About this task
This task must be performed at both MetroCluster sites.
Step
1. Cable the cluster interconnect from one controller to the other, or, if cluster interconnect switches
are used, from each controller to the switches.
   If the storage controller is a FAS80xx:
   • Using cluster interconnect switches: connect ports e0a through e0d on each controller to a
     local cluster interconnect switch.
   • Using a switchless cluster: connect ports e0a through e0d on each controller to the same
     ports on its HA partner.

   If the storage controller is a 62xx:
   • Using cluster interconnect switches: connect ports e0c and e0e on each controller to a local
     cluster interconnect switch.
   • Using a switchless cluster: connect ports e0c and e0e on each controller to the same ports
     on its HA partner.

   If the storage controller is a 32xx:
   • Using cluster interconnect switches: connect ports e1a and e2a on each controller to a local
     cluster interconnect switch.
   • Using a switchless cluster: connect ports e1a and e2a on each controller to the same ports
     on its HA partner.
Related information
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Cabling the cluster peering connections
You must ensure that the controller ports used for cluster peering have connectivity with the cluster
on the partner site.
About this task
This task must be performed at both MetroCluster sites.
Step
1. Identify the ports you want to use for cluster peering and ensure they have network connectivity
with the partner cluster.
Related information
Clustered Data ONTAP 8.3 System Administration Guide for Cluster Administrators
Cabling the HA interconnect, if necessary
If the storage controllers in the HA pair are in separate chassis, you must cable the HA interconnect
between the controllers.
About this task
This task must be performed at both MetroCluster sites.
The HA interconnect must be cabled only if the storage controllers in the HA pair are in separate
chassis. Some storage controller models support two controllers in a single chassis, in which case
they use an internal HA interconnect.
Steps
1. Cable the HA interconnect if the storage controller's HA partner is in a separate chassis.

   If the storage controller is a FAS80xx with I/O Expansion Module:
   a. Connect port ib0a on the first controller in the HA pair to port ib0a on the other controller.
   b. Connect port ib0b on the first controller in the HA pair to port ib0b on the other controller.

   If the storage controller is a 62xx:
   a. Connect port 2a (top port of NVRAM8 card in vertical slot 2) on the first controller in the
      HA pair to port 2a on the other controller.
   b. Connect port 2b (bottom port of NVRAM8 card in vertical slot 2) on the first controller in
      the HA pair to port 2b on the other controller.

   If the storage controller is a 32xx:
   a. Connect port c0a on the first controller in the HA pair to port c0a on the other controller.
   b. Connect port c0b on the first controller in the HA pair to port c0b on the other controller.

2. Repeat this task at the MetroCluster partner site.
Related information
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Cabling the management and data connections
You must cable the management and data ports on each storage controller to the site networks.
About this task
This task must be repeated for each controller at both MetroCluster sites.
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Steps
1. Connect a cable from the management port of the first storage controller in an HA pair to one
switch in your Ethernet network.
The management port is labeled with a wrench symbol and is identified as e0M in commands and
output.
2. Connect a cable from the management port of the other storage controller in the HA pair to a
different switch in your Ethernet network.
3. Cable one or more Ethernet ports from the first storage controller to an Ethernet switch in your
data network.
4. Cable one or more Ethernet ports from the first storage controller to a different Ethernet switch in
your data network.
Related information
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Recommended port assignments for FC switches
You need to verify that you are using the recommended port assignments when you cable the FC
switches.
Note: The cabling is the same for each FC switch in the switch fabric.
Component and port              FC_switch_x_1              FC_switch_x_2
                                Brocade      Brocade       Brocade      Brocade
                                6505         6510          6505         6510
controller_x_1 FC-VI port a     0            0             -            -
controller_x_1 FC-VI port b     -            -             0            0
controller_x_1 HBA port a       1            1             -            -
controller_x_1 HBA port b       -            -             1            1
controller_x_1 HBA port c       2            2             -            -
controller_x_1 HBA port d       -            -             2            2
controller_x_2 FC-VI port a     3            3             -            -
controller_x_2 FC-VI port b     -            -             3            3
controller_x_2 HBA port a       4            4             -            -
controller_x_2 HBA port b       -            -             4            4
controller_x_2 HBA port c       5            5             -            -
controller_x_2 HBA port d       -            -             5            5
bridge_x_1_port-number port 1   6            6             6            6
bridge_x_1_port-number port 1   7            7             7            7
bridge_x_1_port-number port 1   12           8             12           8
bridge_x_1_port-number port 1   13           9             13           9
ISL port 1                      8            20            8            20
ISL port 2                      9            21            9            21
ISL port 3                      10           22            10           22
ISL port 4                      11           23            11           23
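For scripted cable-label generation or cabling audits, the recommended assignments can be
expressed as a lookup table. The sketch below covers only the controller and ISL rows and is a
manual transcription of the table above, not a NetApp tool:

```python
# Recommended switch ports per Brocade model, transcribed from the table above.
# Controller FC-VI/HBA ports a and c go to FC_switch_x_1; ports b and d use the
# same port numbers on FC_switch_x_2.
RECOMMENDED_PORTS = {
    ("controller_x_1", "FC-VI a"): {"Brocade 6505": 0,  "Brocade 6510": 0},
    ("controller_x_1", "HBA a"):   {"Brocade 6505": 1,  "Brocade 6510": 1},
    ("controller_x_1", "HBA c"):   {"Brocade 6505": 2,  "Brocade 6510": 2},
    ("controller_x_2", "FC-VI a"): {"Brocade 6505": 3,  "Brocade 6510": 3},
    ("controller_x_2", "HBA a"):   {"Brocade 6505": 4,  "Brocade 6510": 4},
    ("controller_x_2", "HBA c"):   {"Brocade 6505": 5,  "Brocade 6510": 5},
    ("ISL", "1"): {"Brocade 6505": 8,  "Brocade 6510": 20},
    ("ISL", "2"): {"Brocade 6505": 9,  "Brocade 6510": 21},
    ("ISL", "3"): {"Brocade 6505": 10, "Brocade 6510": 22},
    ("ISL", "4"): {"Brocade 6505": 11, "Brocade 6510": 23},
}

def recommended_port(component: str, port: str, model: str) -> int:
    """Look up the recommended switch port for a component connection."""
    return RECOMMENDED_PORTS[(component, port)][model]
```

For example, `recommended_port("ISL", "1", "Brocade 6510")` returns 20, matching the first ISL
row of the table.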
Cabling the storage stacks
You must cable the disk shelves to the FC-to-SAS bridges and then cable the bridges to the storage
controllers.
About this task
This task must be performed on all storage stacks at both MetroCluster sites.
The ATTO FC-to-SAS bridges are not configured with a password when shipped from the factory.
Steps
1. Cable the disk shelves to the bridges by completing the following substeps:
a. Daisy-chain the disk shelves in each stack.
For information about daisy-chaining disk shelves, see the Installation and Service Guide for
your disk shelf model.
b. For each stack of disk shelves, cable IOM A square port of the first shelf to SAS port A on
FibreBridge A.
c. For each stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
Each bridge has one path to its stack of disk shelves; bridge A connects to the A-side of the stack
through the first shelf, and bridge B connects to the B-side of the stack through the last shelf.
Note: The bridge SAS port B is disabled.
The following illustration shows a set of bridges cabled to a stack of three disk shelves:
[Illustration: FC_bridge_x_1_06 and FC_bridge_x_2_06 cabled to a stack of SAS shelves. SAS port A
of FC_bridge_x_1_06 connects to IOM A of the first shelf, and SAS port A of FC_bridge_x_2_06
connects to IOM B of the last shelf. Each bridge also shows its FC1, FC2, and M1 ports.]
2. Verify that each bridge can detect all disk drives and disk shelves it is connected to.

   If you are using the ATTO ExpressNAV GUI:
   a. In a supported web browser, enter the IP address of a bridge in the browser box.
      You are brought to the ATTO FibreBridge 6500N home page, which has a link.
   b. Click the link, and then enter your user name and the password that you designated when
      you configured the bridge.
      The ATTO FibreBridge 6500N status page appears with a menu to the left.
   c. Click Advanced in the menu.
   d. Enter sastargets, and then click Submit.

   If you are using a serial port connection, enter the following command:
   sastargets
Example
The output shows the devices (disks and disk shelves) that the bridge is connected to. Output
lines are sequentially numbered so that you can quickly count the devices. For example, the
following output shows that 10 disks are connected:

Tgt VendorID ProductID        Type SerialNumber
0   NETAPP   X410_S15K6288A15 DISK 3QP1CLE300009940UHJV
1   NETAPP   X410_S15K6288A15 DISK 3QP1ELF600009940V1BV
2   NETAPP   X410_S15K6288A15 DISK 3QP1G3EW00009940U2M0
3   NETAPP   X410_S15K6288A15 DISK 3QP1EWMP00009940U1X5
4   NETAPP   X410_S15K6288A15 DISK 3QP1FZLE00009940G8YU
5   NETAPP   X410_S15K6288A15 DISK 3QP1FZLF00009940TZKZ
6   NETAPP   X410_S15K6288A15 DISK 3QP1CEB400009939MGXL
7   NETAPP   X410_S15K6288A15 DISK 3QP1G7A900009939FNTT
8   NETAPP   X410_S15K6288A15 DISK 3QP1FY0T00009940G8PA
9   NETAPP   X410_S15K6288A15 DISK 3QP1FXW600009940VERQ
Note: If the text "response truncated" appears at the beginning of the output, you can use Telnet to connect to the bridge and enter the same command to see all of the output.
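Because the output rows are sequentially numbered, the device count can also be checked programmatically against captured output. The following Python sketch is a hypothetical helper for illustration (it is not part of the ATTO tooling); it counts the rows that begin with a numeric Tgt index:

```python
def count_devices(sastargets_output):
    """Count the device rows (disks and disk shelves) in captured
    `sastargets` output; each device row begins with a numeric
    Tgt index, so the rows can simply be counted."""
    count = 0
    for line in sastargets_output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            count += 1
    return count

sample = (
    "Tgt VendorID ProductID        Type SerialNumber\n"
    "0   NETAPP   X410_S15K6288A15 DISK 3QP1CLE300009940UHJV\n"
    "1   NETAPP   X410_S15K6288A15 DISK 3QP1ELF600009940V1BV\n"
)
print(count_devices(sample))  # 2
```

Comparing this count against the expected number of disks and shelves in the stack catches miscabled or undetected devices before the switch cabling begins.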
3. Cable each bridge in the first switch fabric to the FC switches:
Cable this site A bridge...   To this port on FC_switch_A_1...   Cable this site B bridge...   To this port on FC_switch_B_1...
bridge_A_1_06                 6                                  bridge_B_1_06                 6
bridge_A_1_07                 7                                  bridge_B_1_07                 7
bridge_A_1_08                 8                                  bridge_B_1_08                 8
bridge_A_1_09                 9                                  bridge_B_1_09                 9
4. Cable each bridge in the second switch fabric to the FC switches:
Cable this site A bridge...   To this port on FC_switch_A_2...   Cable this site B bridge...   To this port on FC_switch_B_2...
bridge_A_2_06                 6                                  bridge_B_2_06                 6
bridge_A_2_07                 7                                  bridge_B_2_07                 7
bridge_A_2_08                 8                                  bridge_B_2_08                 8
bridge_A_2_09                 9                                  bridge_B_2_09                 9
Configuring the FC switches
You must configure the switch name and ISL settings for the FC switches based on your site's
requirements.
Setting the FC switch name
You can change the pre-set name of the FC switches in the MetroCluster configuration.
About this task
This task must be performed on each of the two FC switch fabrics.
Steps
1. Create a console connection and log in to both switches in the fabric.
The preconfigured user name is admin.
The preconfigured password is password.
2. Issue the following command to set the switch name:
switchname switch_name
The switches should each have a unique name. After setting the name, the prompt changes
accordingly.
Example
The following example shows the command on BrocadeSwitchA:
BrocadeSwitchA:admin> switchname "FC_switch_A_1"
FC_switch_A_1:admin>
The following example shows the command on BrocadeSwitchB:
BrocadeSwitchB:admin> switchname "FC_switch_B_1"
FC_switch_B_1:admin>
3. Reboot the switch:
reboot
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> reboot
4. Repeat these commands on the other FC switch fabric.
Configuring the E-ports on a Brocade FC switch
On each switch fabric, you must configure the switch ports that connect the Inter-Switch Link (ISL).
These ISL ports are otherwise known as the E-ports.
Before you begin
Review the following guidelines before configuring the E-ports:
• All ISLs in an FC switch fabric must be configured with the same speed and distance.
• The supported speeds are 4 Gbps, 8 Gbps, and 16 Gbps. The combination of the switch port and SFP must support the speed.
• The supported distance can be as far as 200 km, depending on the FC switch model. See the NetApp Interoperability Matrix Tool.
• The ISL link must have a dedicated lambda, and the link must be supported by Brocade for the distance, switch type, and FOS version.
• You must not use the L0 setting when issuing the portCfgLongDistance command. Instead, you should use the LE or LS setting to configure the distance on the Brocade switches, with a minimum of LE.
• You must not use the LD setting when issuing the portCfgLongDistance command when working with xWDM/TDM equipment. Instead, you should use the LE or LS setting to configure the distance on the Brocade switches.
About this task
This task must be performed for each FC switch fabric.
The following tables show the ISL ports for the different switches and different numbers of ISLs:
Ports for dual ISL configurations (all switches)
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
10                         10
11                         11

Ports for three or four ISL configurations (Brocade 6505)
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
8                          8
9                          9
10                         10
11                         11

Ports for three or four ISL configurations (Brocade 6510)
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
20                         20
21                         21
22                         22
23                         23
Steps
1. Configure the port speed:
portcfgspeed port speed
You must use the highest common speed supported by the components in the path.
Example
In the following example, there is one ISL for each fabric:
FC_switch_A_1:admin> portcfgspeed 10 16
FC_switch_B_1:admin> portcfgspeed 10 16
In the following example, there are two ISLs for each fabric:
FC_switch_A_1:admin> portcfgspeed 10 16
FC_switch_A_1:admin> portcfgspeed 11 16
FC_switch_B_1:admin> portcfgspeed 10 16
FC_switch_B_1:admin> portcfgspeed 11 16
2. If more than one ISL for each fabric is used, enable trunking by issuing the following command
for each ISL port:
portcfgtrunkport port-number 1
Example
FC_switch_A_1:admin> portcfgtrunkport 10 1
FC_switch_A_1:admin> portcfgtrunkport 11 1
FC_switch_B_1:admin> portcfgtrunkport 10 1
FC_switch_B_1:admin> portcfgtrunkport 11 1
3. Enable QoS traffic by issuing the following command for each of the ISL ports:
portcfgqos --enable port-number
Example
In the following example, there is one ISL per switch fabric:
FC_switch_A_1:admin> portcfgqos --enable 10
FC_switch_B_1:admin> portcfgqos --enable 10
Example
In the following example, there are two ISLs per switch fabric:
FC_switch_A_1:admin> portcfgqos --enable 10
FC_switch_A_1:admin> portcfgqos --enable 11
FC_switch_B_1:admin> portcfgqos --enable 10
FC_switch_B_1:admin> portcfgqos --enable 11
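Steps 1 through 3 apply the same three settings to every ISL port on every switch. The per-port command sequence can be generated from a port list, as in this Python sketch (a hypothetical helper for illustration; only the FOS commands themselves come from this guide):

```python
def isl_port_commands(ports, speed=16):
    """Build the FOS command sequence for the given ISL ports:
    set the port speed, enable trunking (only meaningful when
    there are two or more ISLs), and enable QoS."""
    cmds = [f"portcfgspeed {p} {speed}" for p in ports]
    if len(ports) > 1:
        cmds += [f"portcfgtrunkport {p} 1" for p in ports]
    cmds += [f"portcfgqos --enable {p}" for p in ports]
    return cmds

# Two ISLs on ports 10 and 11 at 16 Gbps:
for cmd in isl_port_commands([10, 11]):
    print(cmd)
```

Running the same generated sequence on both switches of a fabric keeps the two sides identical, which is what the "same speed and distance" guideline requires.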
4. Verify the settings using the portCfgShow command.
Example
The following example shows the output for a configuration that uses two ISLs cabled to port 10
and port 11:
Ports of Slot 0       0   1   2   3    4   5   6   7    8   9  10  11   12  13  14  15
-------------------+---+---+---+---+----+---+---+---+----+---+---+---+----+---+---+---
Speed                AN  AN  AN  AN   AN  AN  8G  AN   AN  AN 16G 16G   AN  AN  AN  AN
Fill Word             0   0   0   0    0   0   3   0    0   0   3   3    3   0   0   0
AL_PA Offset 13      ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Trunk Port           ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ON  ON   ..  ..  ..  ..
Long Distance        ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
VC Link Init         ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Locked L_Port        ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Locked G_Port        ..  ..  ..  ..   ..  ..  ON  ..   ..  ..  ..  ..   ..  ..  ..  ..
Disabled E_Port      ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Locked E_Port        ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
ISL R_RDY Mode       ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
RSCN Suppressed      ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Persistent Disable   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
LOS TOV enable       ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
NPIV capability      ON  ON  ON  ON   ON  ON  ON  ON   ON  ON  ON  ON   ON  ON  ON  ON
NPIV PP Limit       126 126 126 126  126 126 126 126  126 126 126 126  126 126 126 126
QOS E_Port           AE  AE  AE  AE   AE  AE  AE  AE   AE  AE  AE  AE   AE  AE  AE  AE
Mirror Port          ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Rate Limit           ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Credit Recovery      ON  ON  ON  ON   ON  ON  ON  ON   ON  ON  ON  ON   ON  ON  ON  ON
Fport Buffers        ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Port Auto Disable    ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
CSCTL mode           ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..   ..  ..  ..  ..
Fault Delay           0   0   0   0    0   0   0   0    0   0   0   0    0   0   0   0
5. Calculate the ISL distance.
Due to the behavior of FC-VI, the distance must be set to 1.5 times the real distance with a
minimum of 10 (LE).
The distance for the ISL is calculated as follows, rounded up to the next full kilometer:
1.5 x real_distance = distance
Example
If the real distance is 3 km, then 1.5 x 3 km = 4.5 km. This is lower than 10, so the ISL distance must be set to LE.
Example
If the real distance is 20 km, then 1.5 x 20 km = 30 km. The ISL distance must be set to LS 30.
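The calculation in this step can be sketched in a few lines of Python (a hypothetical helper for illustration; only the 1.5x rule, the round-up, and the 10 km LE minimum come from this guide):

```python
import math

def isl_distance_setting(real_distance_km):
    """Apply the FC-VI rule from step 5: 1.5 x the real distance,
    rounded up to the next full kilometer. LE covers anything up
    to the 10 km minimum; beyond that, LS with the computed value."""
    distance = math.ceil(1.5 * real_distance_km)
    if distance <= 10:
        return "LE"
    return f"LS {distance}"

print(isl_distance_setting(3))   # LE   (1.5 x 3 = 4.5, below the 10 km minimum)
print(isl_distance_setting(20))  # LS 30
```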
6. Set the distance on each ISL port:
portcfglongdistance port level vc_link_init distance
A vc_link_init value of 1 uses the ARB fill word (the default). A value of 0 uses IDLE. The required value might depend on the link being used. The command must be repeated for each ISL port.
Example
Because the distance is assumed to be 20 km, the setting is 30 with the default vc_link_init
value of 1:
FC_switch_A_1:admin> portcfglongdistance 10 LS 1 30
FC_switch_B_1:admin> portcfglongdistance 10 LS 1 30
7. Verify the distance setting:
portbuffershow
A distance setting of LE appears as 10km.
Example
The following example shows the output for a configuration that uses ISLs on port 10 and port 11:
FC_switch_A_1:admin> portbuffershow

User  Port  Lx    Max/Resv  Buffer  Needed   Link      Remaining
Port  Type  Mode  Buffers   Usage   Buffers  Distance  Buffers
----  ----  ----  --------  ------  -------  --------  ---------
...
 10    E     -       8        67      67      30km      -
 11    E     -       8        67      67      30km      -
...
 23    -     -       8         0       -      -         466
8. Verify that both switches form one fabric:
switchshow
Example
The following example shows the output for a configuration that uses ISLs on port 10 and port
11:
FC_switch_A_1:admin> switchshow
switchName:     FC_switch_A_1
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Subordinate
switchDomain:   5
switchId:       fffc01
switchWwn:      10:00:00:05:33:86:89:cb
zoning:         OFF
switchBeacon:   OFF

Index Port Address Media Speed State  Proto
===========================================
...
 10   10   010C00  id    16G   Online FC E-Port 10:00:00:05:33:8c:2e:9a "FC_switch_B_1"
 11   11   010B00  id    16G   Online FC E-Port 10:00:00:05:33:8c:2e:9a "FC_switch_B_1" (upstream)
...

FC_switch_B_1:admin> switchshow
switchName:     FC_switch_B_1
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   7
switchId:       fffc03
switchWwn:      10:00:00:05:33:8c:2e:9a
zoning:         OFF
switchBeacon:   OFF

Index Port Address Media Speed State  Proto
===========================================
...
 10   10   030A00  id    16G   Online FC E-Port 10:00:00:05:33:86:89:cb "FC_switch_A_1"
 11   11   030B00  id    16G   Online FC E-Port 10:00:00:05:33:86:89:cb "FC_switch_A_1" (downstream)
...
9. Confirm the configuration of the fabrics:
fabricshow
Example
FC_switch_A_1:admin> fabricshow
Switch ID   Worldwide Name           Enet IP Addr  FC IP Addr  Name
-------------------------------------------------------------------
 1: fffc01  10:00:00:05:33:86:89:cb  10.10.10.55   0.0.0.0     "FC_switch_A_1"
 3: fffc03  10:00:00:05:33:8c:2e:9a  10.10.10.65   0.0.0.0    >"FC_switch_B_1"

FC_switch_B_1:admin> fabricshow
Switch ID   Worldwide Name           Enet IP Addr  FC IP Addr  Name
-------------------------------------------------------------------
 1: fffc01  10:00:00:05:33:86:89:cb  10.10.10.55   0.0.0.0     "FC_switch_A_1"
 3: fffc03  10:00:00:05:33:8c:2e:9a  10.10.10.65   0.0.0.0    >"FC_switch_B_1"

10. Repeat the previous steps for the second FC switch fabric.
Configuring the cluster interconnect switch management
port IP addresses
In a switched cluster, you need to configure IP addresses for the management ports of the cluster
switches so that you can verify cluster cabling later. You also need to configure IP addresses for the
management ports of the management switches, if they were included in the order.
Before you begin
The cabinet PDUs must be powered on so that the switches are powered on.
About this task
This task is not required on clusters in which the nodes are connected back-to-back without cluster
interconnect switches.
This task must be performed on the cluster interconnect switches at both MetroCluster sites.
The Config Advisor tool connects to the management ports of the cluster and management switches
as part of its testing. The switch management ports must be configured with IP addresses to enable
Config Advisor to connect.
For more information about configuring the switches, see the following guides on NetApp Support:
• NetApp 10G Cluster-Mode Switch Installation Guide
• NetApp 1G Cluster-Mode Switch Installation Guide
Note: The commands for the cluster (CN1610) and management (CN1601) switches are slightly
different. Be sure to use the correct commands for the switch you are configuring.
Steps
1. Configure the management ports of the cluster (CN1610) switches:
a. Connect a serial cable from your laptop to cluster switch A console port.
b. Start a terminal emulator program on your laptop.
c. Log in to the switch using user name “admin” and no password.
d. At the (CN1610)> prompt, enter enable to enter the Privileged EXEC command mode.
The command prompt changes to (CN1610)#.
e. Disable DHCP for the management port:
serviceport protocol none
f. Configure the management port IP address:
serviceport ip ip-address netmask [gateway]
Example
serviceport ip 10.10.10.81 255.255.255.0 10.10.10.1
g. Verify that the port was configured correctly:
show running-config
Example
The serviceport protocol and serviceport ip values are shown in the command
output:
(CN1610) #show running-config
!Current Configuration:
!System Description "NetApp CN1610, 1.0.0.4, Linux 2.6.21.7"
!System Software Version "1.0.0.4"
!System Up Time
"0 days 0 hrs 10 mins 11 secs"
!Additional Packages
FASTPATH QOS
!Current SNTP Synchronized Time: Not Synchronized
serviceport protocol none
serviceport ip 10.10.10.81 255.255.255.0 10.10.10.1
...
h. Save the current configuration so that all changes are retained during a switch reset:
write memory
i. Repeat these steps for cluster switch B.
2. Configure the management ports of the management (CN1601) switches, if applicable:
a. Connect a serial cable from your laptop to management switch A console port.
b. Start a terminal emulator program on your laptop.
c. Log in to the switch using user name “admin” and no password.
d. At the (CN1601)> prompt, enter enable to enter the Privileged EXEC command mode.
The command prompt changes to (CN1601)#.
e. Disable DHCP for the management port:
network protocol none
f. Configure the management port IP address:
network parms ip-address netmask [gateway]
Example
network parms 10.10.10.83 255.255.255.0 10.10.10.1
g. Verify that the port was configured correctly:
show running-config
Example
The network protocol and network parms values are shown in the command output:
(CN1601) #show running-config
!Current Configuration:
!System Description "NetApp CN1601, 1.0.0.4, Linux 2.6.21.7"
!System Software Version "1.0.0.4"
!System Up Time
"0 days 0 hrs 10 mins 11 secs"
!Additional Packages
FASTPATH QOS
!Current SNTP Synchronized Time: Not Synchronized
network protocol none
network parms 10.10.10.83 255.255.255.0 10.10.10.1
...
h. Save the current configuration so that all changes are retained during a switch reset:
write memory
i. Repeat these steps for management switch B.
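The two command sequences above differ only in the command family (serviceport for the CN1610, network for the CN1601). When several switches must be configured from worksheet values, the sequence can be generated programmatically; this Python sketch is a hypothetical helper, with the command syntax taken exactly from steps 1 and 2:

```python
def switch_mgmt_commands(model, ip, netmask, gateway):
    """Return the console command sequence for a CN1610 cluster switch
    or a CN1601 management switch: disable DHCP, set the management
    IP address, and save the configuration."""
    if model == "CN1610":
        prefix = ["serviceport protocol none",
                  f"serviceport ip {ip} {netmask} {gateway}"]
    elif model == "CN1601":
        prefix = ["network protocol none",
                  f"network parms {ip} {netmask} {gateway}"]
    else:
        raise ValueError(f"unknown switch model: {model}")
    return prefix + ["write memory"]

for cmd in switch_mgmt_commands("CN1610", "10.10.10.81", "255.255.255.0", "10.10.10.1"):
    print(cmd)
```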
Configuring the MetroCluster software in Data
ONTAP
You must set up each node in the MetroCluster in Data ONTAP, including the node-level
configurations and the configuration of the nodes into two sites. Finally, you implement the
MetroCluster relationship between the two sites.
Software setup checklist
You need to know which software setup steps were completed at the factory and those you need to
complete at each MetroCluster site. This guide includes all the required steps, which you can
complete after reviewing the information in this checklist.
Step                                                     Completed at factory    Completed by you using
                                                                                 procedures in this guide
Install the clustered Data ONTAP software.               Yes                     No
Create the cluster on the first node at the first
MetroCluster site. This includes the following:          Yes                     No
  • Name the cluster.
  • Set the admin password.
  • Set up the private cluster interconnect.
  • Install all purchased license keys.
  • Create the cluster management interface.
  • Create the node management interface.
  • Configure the FC switches.
Join the remaining nodes to the cluster.                 Yes                     No
Enable storage failover on one node of each HA pair
and configure cluster high availability.                 Yes                     No
Enable the switchless-cluster option on a two-node
switchless cluster.                                      Yes                     No
Repeat the steps to configure the second
MetroCluster site.                                       Yes                     No
Configure the clusters for peering.                      Yes                     No
Enable the MetroCluster configuration.                   Yes                     No
Configure user credentials and management IP
addresses on the management and cluster switches.        Yes, if ordered.        No
                                                         User IDs are "admin"
                                                         with no password.
Thoroughly test the MetroCluster configuration.          Yes                     No, although you must
                                                                                 perform verification steps
                                                                                 at your site as described
                                                                                 below.
Complete the cluster setup worksheet.                    No                      Yes
Change the password for the admin account to the
customer's value.                                        No                      Yes
Configure each node with the customer's values.          No                      Yes
Discover the clusters in OnCommand System Manager.       No                      Yes
Configure an NTP server for each cluster.                No                      Yes
Verify the cluster peering.                              No                      Yes
Verify the health of the cluster and that the
cluster is in quorum.                                    No                      Yes
Verify basic operation of the MetroCluster sites.        No                      Yes
Check the MetroCluster configuration.                    No                      Yes
Test storage failover.                                   No                      Yes
Add the MetroCluster switches and bridges for
health monitoring.                                       No                      Yes
Test switchover, healing, and switchback.                No                      Yes
Set the destination for configuration backup files.      No                      Yes
Optional: Change the cluster name if desired, for
example, to better distinguish the clusters.             No                      Yes
Optional: Change the node name, if desired.              No                      Yes
Configure AutoSupport.                                   No                      Yes
Configuring the individual nodes
You must configure each node in the MetroCluster configuration, confirm the environmental variable settings, assign disks to the node, configure HA, license the node, and configure the LIFs.
Steps
1. Configuring the cluster management and node management networks on page 44
You use the node wizard to configure the cluster management and node management networks
with the values appropriate for the customer's environment.
2. Changing the password for the admin account on page 44
The factory configured the cluster with a default user name and password. You should change the
password to the customer's value.
3. Renaming a node on page 45
You can change a node's name as needed.
4. Configuring and testing AutoSupport on page 45
You must configure how and where AutoSupport information is sent from each node, and then
test that the configuration is correct. AutoSupport messages are scanned for potential problems
and are available to technical support when they assist customers in resolving issues.
Configuring the cluster management and node management networks
You use the node wizard to configure the cluster management and node management networks with
the values appropriate for the customer's environment.
Before you begin
• You should be logged in to the first node of the cluster.
Step
1. Use the network interface show command to verify that the cluster management and node
management LIFs are configured correctly.
For each cluster management and node management LIF, verify the following:
•
The LIF is up.
•
The IP address is configured correctly.
For more information about changing the configuration of a LIF, see the Clustered Data ONTAP
Network Management Guide.
Example
cluster1::> network interface show
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
cluster1
            cluster_mgmt up/up      10.10.10.10/24     cluster1-01   e0a     true
cluster1-01
            clus1        up/up      169.254.1.100/24   cluster1-01   e1a     true
            clus2        up/up      169.254.1.101/24   cluster1-01   e2a     true
            mgmt1        up/up      10.10.10.11/24     cluster1-01   e0M     true
cluster1-02
            clus1        up/up      169.254.1.102/24   cluster1-02   e1a     true
            clus2        up/up      169.254.1.103/24   cluster1-02   e2a     true
            mgmt1        up/up      10.10.10.12/24     cluster1-02   e0M     true
nfs_server
            lif1         up/up      10.10.9.10/24      cluster1-01   e0b     true
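The up/up check can also be run against captured command output. The following Python sketch is a hypothetical helper (it assumes data rows with the LIF name first and the Admin/Oper status second, and is not part of Data ONTAP):

```python
def lifs_not_up(show_output):
    """Return the names of LIF rows in captured `network interface show`
    output whose Admin/Oper status is anything other than up/up."""
    problems = []
    for line in show_output.splitlines():
        fields = line.split()
        # A status field looks like "up/up" or "up/down": one slash,
        # not starting with a digit (which would be an address/mask).
        if len(fields) >= 2 and fields[1].count("/") == 1 and not fields[1][0].isdigit():
            if fields[1] != "up/up":
                problems.append(fields[0])
    return problems

sample = (
    "cluster_mgmt up/up   10.10.10.10/24 cluster1-01 e0a true\n"
    "mgmt1        up/down 10.10.10.11/24 cluster1-01 e0M true\n"
)
print(lifs_not_up(sample))  # ['mgmt1']
```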
Changing the password for the admin account
The factory configured the cluster with a default user name and password. You should change the
password to the customer's value.
Before you begin
You should be logged in to the cluster with the user name and password for the admin account. The
user name is admin and the password is netapp!123.
Steps
1. Change the password:
security login password
2. Follow the prompts to enter the default admin password and the customer's new password.
The password must be at least eight characters long.
Renaming a node
You can change a node's name as needed.
Step
1. To rename a node, use the system node rename command.
The -newname parameter specifies the new name for the node. The system node rename man
page describes the rules for specifying the node name.
If you want to rename multiple nodes in the cluster, you must run the command for each node
individually.
Example of renaming a node
The following command renames node “node1” to “node1a”:
cluster1::> system node rename -node node1 -newname node1a
Configuring and testing AutoSupport
You must configure how and where AutoSupport information is sent from each node, and then test
that the configuration is correct. AutoSupport messages are scanned for potential problems and are
available to technical support when they assist customers in resolving issues.
Steps
1. Load the preinstalled System Manager for the node by entering the cluster management LIF
address as the URL in a web browser that has connectivity to the node.
2. From the home page, double-click the appropriate storage system.
3. Expand the Nodes hierarchy in the left navigation pane.
4. In the navigation pane, select the node, and then click Configuration > AutoSupport.
5. Click Edit.
6. In the From Email Address field, type the email address from which AutoSupport messages are
sent.
7. In the Email Recipient tab, type the email addresses to which notifications are sent,
specify the email recipients and the message content for each email recipient, and add the mail
hosts.
You can add up to five email addresses or host names. You should enter at least one recipient
outside your network so that you can verify that messages are being received outside your
network.
8. In the Others tab, select a transport protocol for delivering the email messages from the drop-down list, and specify the HTTP or HTTPS proxy server details.
9. Click OK.
10. Test the configuration by performing the following steps:
a. Click Test, type a subject line that includes the word “Test”, and then click Test.
b. Confirm that your organization received an automated response from NetApp, which indicates
that NetApp received the AutoSupport message.
When NetApp receives an AutoSupport message with “Test” in the subject, it automatically
sends a reply to the email address it has on file as the system owner.
c. Confirm that the message was received by the email addresses that you specified.
11. Repeat this entire procedure for each node in the cluster.
Related information
Clustered Data ONTAP 8.3 System Administration Guide for Cluster Administrators
Configuring the clusters
You must set various site-specific attributes for the clusters, configure NTP to synchronize time
among the nodes, refresh the MetroCluster configuration and then verify cluster health.
Steps
1. Configuring the cluster management networks on page 47
You use the node wizard to configure the cluster management networks with the values
appropriate for the customer's environment.
2. Modifying cluster attributes on page 48
You can modify a cluster's attributes, such as the cluster name, location, and contact information
as needed.
3. Protecting configuration backup files on page 48
You can provide additional protection for the cluster configuration backup files by specifying a
remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in
addition to the default locations in the local cluster.
4. Synchronizing the system time using NTP on page 49
The cluster needs a Network Time Protocol (NTP) server to synchronize the time between the
nodes and their clients. You can use the Edit DateTime dialog box in System Manager to
configure the NTP server.
5. Refreshing the MetroCluster configuration on page 49
After configuring the cluster and node attributes for your site, you must refresh the MetroCluster
configuration to ensure it includes all the new information.
6. Verifying cluster health on page 50
After completing cluster setup, you should verify that each node is healthy and eligible to
participate in the cluster.
Configuring the cluster management networks
You use the node wizard to configure the cluster management networks with the values appropriate
for the customer's environment.
Before you begin
• The Cluster Setup worksheet should be completed.
• You should be logged in to the first node of the cluster.
Step
1. Use the network interface show command to verify that the cluster management and node
management LIFs are configured correctly.
For each cluster management and node management LIF, verify the following:
•
The LIF is up.
•
The IP address is configured correctly.
For more information about changing the configuration of a LIF, see the Clustered Data ONTAP
Network Management Guide.
Example
cluster1::> network interface show
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
cluster1
            cluster_mgmt up/up      10.10.10.10/24     cluster1-01   e0a     true
cluster1-01
            clus1        up/up      169.254.1.100/24   cluster1-01   e1a     true
            clus2        up/up      169.254.1.101/24   cluster1-01   e2a     true
            mgmt1        up/up      10.10.10.11/24     cluster1-01   e0M     true
cluster1-02
            clus1        up/up      169.254.1.102/24   cluster1-02   e1a     true
            clus2        up/up      169.254.1.103/24   cluster1-02   e2a     true
            mgmt1        up/up      10.10.10.12/24     cluster1-02   e0M     true
nfs_server
            lif1         up/up      10.10.9.10/24      cluster1-01   e0b     true
Modifying cluster attributes
You can modify a cluster's attributes, such as the cluster name, location, and contact information as
needed.
About this task
You cannot change a cluster's UUID, which is set when the cluster is created.
Step
1. To modify cluster attributes, use the cluster identity modify command.
The -name parameter specifies the name of the cluster. The cluster identity modify man
page describes the rules for specifying the cluster's name.
The -location parameter specifies the location for the cluster.
The -contact parameter specifies the contact information such as a name or e-mail address.
Example of renaming a cluster
The following command renames the current cluster (“cluster1”) to “cluster2”:
cluster1::> cluster identity modify -name cluster2
Protecting configuration backup files
You can provide additional protection for the cluster configuration backup files by specifying a
remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in addition
to the default locations in the local cluster.
Step
1. Set the URL of the remote destination for the configuration backup files:
system configuration backup settings modify URL-of-destination
Synchronizing the system time using NTP
The cluster needs a Network Time Protocol (NTP) server to synchronize the time between the nodes
and their clients. You can use the Edit DateTime dialog box in System Manager to configure the
NTP server.
Steps
1. From the home page, double-click the appropriate storage system.
2. Expand the Cluster hierarchy in the left navigation pane.
3. In the navigation pane, click Configuration > System Tools > DateTime.
4. Click Edit.
5. Select the time zone.
6. Specify the IP addresses of the time servers, and then click Add.
You must add an NTP server to the list of time servers. The domain controller can be an
authoritative server.
7. Click OK.
8. Verify the changes you made to the date and time settings in the Date and Time window.
Refreshing the MetroCluster configuration
After configuring the cluster and node attributes for your site, you must refresh the MetroCluster
configuration to ensure it includes all the new information.
Steps
1. Enter advanced privilege mode:
set -privilege advanced
You can press y when prompted.
2. Refresh the configuration:
metrocluster configure -refresh
3. Return to the admin privilege mode:
set -privilege admin
Verifying cluster health
After completing cluster setup, you should verify that each node is healthy and eligible to participate
in the cluster.
About this task
This task must be performed on each cluster in the MetroCluster configuration.
Steps
1. Enter cluster show to view the status of each node.
Example
The following example shows that each node is healthy and eligible as indicated by the true
status seen in the Health and Eligibility columns; a false status indicates a problem.
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
controller_A_1        true    true
controller_A_2        true    true
2 entries were displayed.
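This check can be automated against captured output. The Python sketch below is a hypothetical helper (not a Data ONTAP feature); it assumes data rows of the form "node health eligibility":

```python
def nodes_with_problems(cluster_show_output):
    """Return the nodes whose Health or Eligibility column is not
    'true' in captured `cluster show` output."""
    bad = []
    for line in cluster_show_output.splitlines():
        fields = line.split()
        # Data rows have exactly three fields with true/false in the middle.
        if len(fields) == 3 and fields[1] in ("true", "false"):
            if not (fields[1] == "true" and fields[2] == "true"):
                bad.append(fields[0])
    return bad

sample = (
    "controller_A_1 true  true\n"
    "controller_A_2 false true\n"
)
print(nodes_with_problems(sample))  # ['controller_A_2']
```

An empty result corresponds to every node being healthy and eligible; any listed node needs investigation before you continue.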
2. Repeat the previous step on the partner cluster.
Related information
Clustered Data ONTAP 8.3 System Administration Guide for Cluster Administrators
Configuring MetroCluster components for health monitoring
You must perform some special configuration steps before monitoring the components in the
MetroCluster configuration.
Steps
1. Configuring the MetroCluster FC switches for health monitoring on page 51
You must perform some special configuration steps to monitor the FC switches in the
MetroCluster configuration.
2. Configuring FC-to-SAS bridges for health monitoring on page 52
You must perform some special configuration steps to monitor the FC-to-SAS bridges in the
MetroCluster configuration.
Configuring the MetroCluster FC switches for health monitoring
You must perform some special configuration steps to monitor the FC switches in the MetroCluster
configuration.
Steps
1. Issue the following command on each MetroCluster node:
storage switch add -switch-ipaddress ipaddress
This command must be repeated on all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with IP 10.10.10.10:
controller_A_1::> storage switch add -switch-ipaddress 10.10.10.10
2. Verify that all switches are properly configured:
storage switch show
It may take up to 15 minutes to reflect all data due to the 15-minute polling interval.
Example
The following example shows the command given to verify the MetroCluster's FC switches are
configured:
controller_A_1::> storage switch show
Fabric           Switch Name     Vendor   Model        Switch WWN        Status
---------------- --------------- -------- ------------ ----------------- ------
1000000533a9e7a6 brcd6505-fcs40  Brocade  Brocade6505  1000000533a9e7a6  OK
1000000533a9e7a6 brcd6505-fcs42  Brocade  Brocade6505  1000000533d3660a  OK
1000000533ed94d1 brcd6510-fcs44  Brocade  Brocade6510  1000000533eda031  OK
1000000533ed94d1 brcd6510-fcs45  Brocade  Brocade6510  1000000533ed94d1  OK
4 entries were displayed.

controller_A_1::>
If the switch's worldwide name (WWN) is shown, the Data ONTAP health monitor is able to
contact and monitor the FC switch.
Configuring FC-to-SAS bridges for health monitoring
You must perform some special configuration steps to monitor the FC-to-SAS bridges in the
MetroCluster configuration.
Steps
1. Configure the FC-to-SAS bridges for monitoring by issuing the following command for each
bridge on each storage controller:
storage bridge add -address ipaddress
This command must be repeated for all FC-to-SAS bridges in the MetroCluster configuration.
Example
The following example shows the command you must use to add an FC-to-SAS bridge with an IP
address of 10.10.20.10:
controller_A_1::> storage bridge add -address 10.10.20.10
2. Verify that all FC-to-SAS bridges are properly configured:
storage bridge show
It might take as long as 15 minutes to reflect all data due to the polling interval.
Example
The following example shows that the FC-to-SAS bridges are configured:
controller_A_1::> storage bridge show
Bridge  Symbolic Name  Vendor  Model              Bridge WWN  Monitored  Status
------  -------------  ------  -----------------  ----------  ---------  ------
ATTO_1  atto6500n-1    Atto    FibreBridge 6500N  wwn         true       ok
ATTO_2  atto6500n-2    Atto    FibreBridge 6500N  wwn         true       ok
ATTO_3  atto6500n-3    Atto    FibreBridge 6500N  wwn         true       ok
ATTO_4  atto6500n-4    Atto    FibreBridge 6500N  wwn         true       ok

controller_A_1::>
If the FC-to-SAS bridge's worldwide name (WWN) is shown, the Data ONTAP health monitor is
able to contact and monitor the bridge.
Verifying the MetroCluster configuration
After enabling the MetroCluster configuration, you should verify the software configuration and
cabling, and test local HA and MetroCluster switchover and switchback.
Steps
1. Verifying the configuration is successful on page 54
After you make the site-specific changes and refresh the MetroCluster configuration, you need to verify that configuration.
2. Verifying local HA operation on page 58
You should verify the operation of the local HA pairs in the MetroCluster configuration.
3. Verifying switchover, healing, and switchback on page 60
If you want to test the MetroCluster functionality, you can perform a negotiated switchover in which one cluster is cleanly switched over to the partner cluster. You can then heal and switch back the configuration.
Verifying the configuration is successful
After you make the site-specific changes and refresh the MetroCluster configuration, you need to
verify that configuration.
Steps
1. Verifying cluster peering on page 54
2. Verifying cabling on page 55
3. Checking the MetroCluster configuration on page 56
4. Verifying that you can read and write data to the test SVM using NFS on page 57
Verifying cluster peering
You create the cluster peer relationship using a set of intercluster-designated logical interfaces to
make the information about one cluster available to the other for use in cluster peering applications.
Step
1. Verify the health of the peering relationship:
cluster peer health show
Example
cluster_A::> cluster peer health show
Node       cluster-Name                Node-Name
             Ping-Status               RDB-Health Cluster-Health Availability
---------- --------------------------- ---------- -------------- ------------
ctlr_A_1   cluster_A                   controller_A_1
             Data: interface_reachable
             ICMP: interface_reachable true       true           true
           cluster_A                   controller_A_2
             Data: interface_reachable
             ICMP: interface_reachable true       true           true
ctlr_B_2   cluster_B                   controller_B_1
             Data: interface_reachable
             ICMP: interface_reachable true       true           true
           cluster_B                   controller_B_2
             Data: interface_reachable
             ICMP: interface_reachable true       true           true
4 entries were displayed.
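If you capture the cluster peer health show output to a file, a quick scan for "false" values can flag an unhealthy peering relationship. A minimal sketch; the file name and its sample lines are stand-ins for real captured output, not part of this guide:

```shell
# Sketch: scan saved "cluster peer health show" output for any "false" value.
# peer_health.txt stands in for captured command output; the two sample lines
# written below are illustrative only.
printf 'ctlr_A_1 controller_A_1 interface_reachable true true true\n'  > peer_health.txt
printf 'ctlr_B_2 controller_B_1 interface_reachable true true true\n' >> peer_health.txt
if grep -qw false peer_health.txt; then
  echo "peering problem detected"
else
  echo "peering healthy"
fi
```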
Verifying cabling
You use the Config Advisor tool to verify that the system is cabled correctly.
Before you begin
The hardware setup must be complete and all components must be powered on.
About this task
The Config Advisor tool is available at NetApp Downloads: Config Advisor. Information about how
to use the tool is available in the user guide, which is available in the download package. If you need
support for the Config Advisor tool, you must follow the procedure in the tool's “Reporting Issues in
Config Advisor” online help topic. The Config Advisor tool is not supported by the typical NetApp
support process.
This task must be performed on each cluster in the MetroCluster configuration.
Steps
1. Download the latest version of Config Advisor to a laptop running Windows from the NetApp
Support Site.
2. Connect your laptop to the management network for the cluster.
3. If needed, change the IP address of your laptop to an unused address on the subnet for the
management network.
4. Start Config Advisor, and then select the profile Clustered Data ONTAP.
5. Select the cluster switch model.
For a switchless cluster, select No Switches.
6. Enter the requested IP addresses and credentials.
7. Click Collect Data.
The Config Advisor tool displays any problems found. If problems are found, correct them and
run the tool again.
Related information
Config Advisor from NetApp Support Site: support.netapp.com/eservice/toolchest
Checking the MetroCluster configuration
You can check that the components and relationships in the MetroCluster configuration are working
correctly. You should do a check after initial configuration and after making any changes to the
MetroCluster configuration. After you run the metrocluster check run command, you then
display the results of the check with the metrocluster check show command.
About this task
If the metrocluster check run command is issued twice within a short time, on either or both
clusters, a conflict can occur and the command might not collect all data. Subsequent
metrocluster check show commands will not show the expected output.
Steps
1. Check the configuration:
metrocluster check run
Example
The following example shows the output for a healthy MetroCluster configuration:
controller_A_1::> metrocluster check run
Last Checked On: 9/24/2014 17:10:33
Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
4 entries were displayed.

Command completed. Use the "metrocluster check show -instance"
command or sub-commands in "metrocluster check" directory for
detailed results.
To check if the nodes are ready to do a switchover or switchback
operation, run "metrocluster switchover -simulate" or "metrocluster
switchback -simulate", respectively.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy MetroCluster configuration:
controller_A_1::> metrocluster check aggregate show
Last Checked On: 8/5/2014 00:42:58
Node                  Aggregate              Check                 Result
--------------------- ---------------------- --------------------- ------
controller_A_1        aggr0_controller_A_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_A_2        aggr0_controller_A_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_1        aggr0_controller_B_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_2        aggr0_controller_B_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
16 entries were displayed.
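The four detail commands can be generated in one loop rather than typed individually. A sketch that only prints the command lines for pasting into the clustershell after a metrocluster check run:

```shell
# Sketch: print the four detailed "metrocluster check ... show" commands in
# the order this section lists them.
for component in aggregate config-replication lif node; do
  printf 'metrocluster check %s show\n' "$component"
done
```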
Verifying that you can read and write data to the test SVM using NFS
You should verify that NFS is configured correctly to the test Storage Virtual Machine (SVM) by
connecting a test client to the node and writing and reading data to it.
Before you begin
You must have configured NFS and created a test local user.
Steps
1. Log in to the client system that you configured for NFS access.
2. Change the directory to the mount directory:
cd /mnt
3. Create a mount directory that is named vsNFS-v3:
mkdir /mnt/vsNFS-v3
4. Mount the root volume (IP_Address:/) on the mount directory that you created.
You must use the IP address of the data interface that you configured when enabling NFS for the SVM.
5. Change the directory to vol1:
cd vol1
6. Copy a few files from the NFS client to the directory.
7. List the directory's contents to confirm that the files were copied:
ls -l
8. Delete the test files.
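The steps above can be sketched as a dry run that only prints the commands instead of executing them. NFS_DATA_IP and the copied test file are placeholders, not values from this guide:

```shell
# Dry-run sketch of the NFS verification steps: prints the commands rather
# than running them. NFS_DATA_IP is a placeholder for the SVM's data LIF.
NFS_DATA_IP=192.0.2.10
cat <<EOF
mkdir /mnt/vsNFS-v3
mount ${NFS_DATA_IP}:/ /mnt/vsNFS-v3
cd /mnt/vsNFS-v3/vol1
cp /etc/hosts testfile
ls -l
rm testfile
EOF
```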
Verifying local HA operation
You should verify the operation of the local HA pairs in the MetroCluster configuration.
About this task
The examples in this task use standard naming conventions:
• cluster_A
  ◦ controller_A_1
  ◦ controller_A_2
• cluster_B
  ◦ controller_B_1
  ◦ controller_B_2
Steps
1. On cluster_A, perform a failover and giveback in both directions.
a. Confirm that storage failover is enabled:
storage failover show
Example
The output should indicate that takeover is possible for both nodes:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- ---------------------------
controller_A_1 controller_A_2 true     Connected to controller_A_2
controller_A_2 controller_A_1 true     Connected to controller_A_1
2 entries were displayed.
b. Takeover controller_A_2 from controller_A_1:
storage failover takeover controller_A_2
You can use the storage failover show-takeover command to monitor the progress of
the takeover operation.
c. Confirm that the takeover is complete:
storage failover show
Example
The output should indicate that controller_A_1 is in takeover state, meaning that it has taken
over its HA partner:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -----------------
controller_A_1 controller_A_2 false    In takeover
controller_A_2 controller_A_1 -        Unknown
2 entries were displayed.
d. Give back controller_A_2:
storage failover giveback controller_A_2
You can use the storage failover show-giveback command to monitor the progress of
the giveback operation.
e. Confirm that storage failover has returned to a normal state:
storage failover show
Example
The output should indicate that takeover is possible for both nodes:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- ---------------------------
controller_A_1 controller_A_2 true     Connected to controller_A_2
controller_A_2 controller_A_1 true     Connected to controller_A_1
2 entries were displayed.
f. Repeat the previous substeps, this time taking over controller_A_1 from controller_A_2.
2. Repeat the previous step on cluster_B.
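The takeover/giveback sequence in both directions can be sketched as a small command generator. The node names follow the example convention used in this section and are placeholders for your own node names:

```shell
# Sketch: print the takeover and giveback commands for one HA pair, first
# taking over controller_A_2, then controller_A_1, with a status check after
# each action.
pair="controller_A_1 controller_A_2"
set -- $pair
for target in $2 $1; do
  printf 'storage failover takeover %s\n' "$target"
  printf 'storage failover show\n'
  printf 'storage failover giveback %s\n' "$target"
  printf 'storage failover show\n'
done
```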
Verifying switchover, healing, and switchback
If you want to test the MetroCluster functionality, you can perform a negotiated switchover in which
one cluster is cleanly switched over to the partner cluster. You can then heal and switch back the
configuration.
Steps
1. Verifying that your system is ready for a switchover on page 60
2. Performing a negotiated switchover on page 61
3. Confirming that the DR partners have come online on page 62
4. Reestablishing SnapMirror or SnapVault SVM peering relationships on page 63
5. Healing the configuration on page 64
6. Performing a switchback on page 67
7. Verifying a successful switchback on page 68
8. Reestablishing SnapMirror or SnapVault SVM peering relationships on page 70
Verifying that your system is ready for a switchover
You can use the -simulate option to preview the results of a switchover. A verification check gives
you a way to ensure that most of the preconditions for a successful run are met before you start the
operation.
Steps
1. Simulate the switchover operation at the advanced privilege level:
metrocluster switchover -simulate
2. Review the output that is returned.
The output shows whether any vetoes would prevent a switchover.
Example: Verification results
The following example shows errors encountered in a simulation of a switchover operation:
cluster4::*> metrocluster switchover -simulate
[Job 126] Preparing the cluster for the switchover operation...
[Job 126] Job failed: Failed to prepare the cluster for the switchover
operation. Use the "metrocluster operation show" command to view detailed error
information. Resolve the errors, then try the command again.
Performing a negotiated switchover
A negotiated switchover cleanly shuts down processes on the partner site and then switches over operations from the partner site. A negotiated switchover can be used to perform maintenance on a MetroCluster site or to test the switchover functionality.
Before you begin
• Any nodes that were previously down must be booted and in cluster quorum.
• The cluster peering network must be available from both sites.
About this task
While preparing and executing the negotiated switchover, do not make configuration changes to
either cluster.
After the switchover operation finishes, all SnapMirror and SnapVault relationships using a disaster
site node as a destination must be reconfigured.
Clustered Data ONTAP 8.3 Data Protection Guide
Steps
1. Use the metrocluster check run, metrocluster check show and metrocluster
check config-replication show commands to make sure no configuration updates are in
progress or pending.
Checking the MetroCluster configuration on page 56
2. Enter the following command to implement the switchover:
metrocluster switchover
The operation can take several minutes to complete.
3. Monitor the completion of the switchover:
metrocluster operation show
Example
mcc1A::*> metrocluster operation show
  Operation: Switchover
 Start time: 10/4/2012 19:04:13
      State: in-progress
   End time: -
     Errors: -

mcc1A::*> metrocluster operation show
  Operation: Switchover
 Start time: 10/4/2012 19:04:13
      State: successful
   End time: 10/4/2012 19:04:22
     Errors: -
4. Reestablish any SnapMirror or SnapVault configurations.
Clustered Data ONTAP 8.3 Data Protection Guide
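The monitoring step above can be sketched as a polling loop. Here op_state is a stub standing in for parsing metrocluster operation show output; it succeeds on the third poll purely so that the loop structure can be illustrated:

```shell
# Sketch of polling until the switchover operation reports "successful".
# op_state is a stub, not a real command; it stands in for extracting the
# State field from "metrocluster operation show".
attempt=0
op_state() {
  if [ "$attempt" -ge 3 ]; then echo successful; else echo in-progress; fi
}
until [ "$(op_state)" = successful ]; do
  attempt=$((attempt + 1))
  # in practice, sleep between polls of "metrocluster operation show"
done
echo "switchover reported successful after $attempt polls"
```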
Confirming that the DR partners have come online
After the switchover is complete, you should verify that the DR partners have taken ownership of the
disks and the partner SVMs have come online.
Steps
1. Confirm that the aggregate disks have switched over to the disaster site:
storage disk show -fields owner,dr-home
Example
In this example, the output shows that the switched over disks have the dr-home field set:
mcc1A::> storage disk show -fields owner,dr-home
disk                             owner   dr-home
-------------------------------- ------- -------
mcc1-a1:mcc-sw1A-fab1:1-7.126L1  mcc1-a1 -
mcc1-a1:mcc-sw1A-fab1:1-7.126L24 mcc1-a1 mcc1-b2
mcc1-a1:mcc-sw1A-fab1:1-7.126L36 mcc1-a1 mcc1-b2
mcc1-a1:mcc-sw1A-fab1:1-7.126L38 mcc1-a1 -
....
mcc1-a1:mcc-sw1A-fab1:1-7.126L48 mcc1-a1 -
mcc1-a1:mcc-sw1A-fab1:1-7.126L49 mcc1-a1 -
mcc1-a1:mcc-sw1A-fab1:1-8.126L6  mcc1-a1 -
mcc1-a1:mcc-sw1A-fab1:1-8.126L13 mcc1-a1 mcc1-b2
mcc1-a1:mcc-sw1A-fab1:1-8.126L23 mcc1-a1 mcc1-b2
mcc1-a1:mcc-sw1A-fab1:1-8.126L24 mcc1-a1 mcc1-b2
2. Check that the aggregates were switched over by using the storage aggregate show
command.
Example
In this example, the aggregates were switched over. The root aggregate (aggr0_b2) is in a degraded state. The data aggregate (b2_aggr1) is in a mirrored, normal state:
mcc1A::*> storage aggregate show
.
.
.
mcc1-b Switched Over Aggregates:
Aggregate Size     Available Used% State  #Vols Nodes   RAID Status
--------- -------- --------- ----- ------ ----- ------- -----------
aggr0_b2  227.1GB  45.1GB    80%   online 0     mcc1-a1 raid_dp,
                                                        mirror
                                                        degraded
b2_aggr1  227.1GB  200.3GB   20%   online 0     mcc1-a1 raid_dp,
                                                        mirrored
                                                        normal
3. Confirm that the secondary SVMs have come online by using the vserver show command.
Example
In this example, the previously dormant sync-destination SVMs on the secondary site have been
activated and have an Admin State of running:
mcc1A::*> vserver show
                                       Admin   Root               Name    Name
Vserver       Type Subtype             State   Volume   Aggregate Service Mapping
------------- ---- ------------------- ------- -------- --------- ------- -------
...
mcc1B-vs1b-mc data sync-destination    running vs1b_vol aggr_b1   file    file
Reestablishing SnapMirror or SnapVault SVM peering relationships
After a switchover or switchback operation, you must manually reestablish any SVM peering to
clusters outside of the MetroCluster configuration if the destination of the relationship is in the
MetroCluster configuration.
About this task
This procedure is done at the surviving site after a switchover has occurred or at the disaster site after
a switchback has occurred.
Steps
1. Check whether the SVM peering relationships have been reestablished by using the
metrocluster vserver show command.
2. Reestablish any SnapMirror or SnapVault configurations.
Clustered Data ONTAP 8.3 Data Protection Guide
Healing the configuration
Following a switchover, you must perform a healing operation in a specific order to restore
MetroCluster functionality.
Before you begin
• ISLs must be up and operating.
• Switchover must have been performed and the surviving site must be serving data.
• Nodes in the surviving site must not be in HA failover state (all nodes must be up and running for each HA pair).
• Nodes on the disaster site must be halted or remain powered off. They must not be fully booted during the healing process.
• Storage at the disaster site must be accessible (shelves are powered up, functional, and accessible).
About this task
The healing operation must first be performed on the data aggregates, and then on the root
aggregates.
Steps
1. Healing the data aggregates after negotiated switchover on page 64
2. Healing the root aggregates after negotiated switchover on page 66
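The required ordering of the two healing phases can be summarized in a short sketch that only prints the commands in sequence:

```shell
# Sketch: print the two healing commands in the required order (data
# aggregates first, then root aggregates).
for phase in aggregates root-aggregates; do
  printf 'metrocluster heal -phase %s\n' "$phase"
done
```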
Healing the data aggregates after negotiated switchover
You must heal the data aggregates after completing any maintenance or testing. This process
resynchronizes the data aggregates and prepares the disaster site for normal operation. You must heal
the data aggregates prior to healing the root aggregates.
About this task
All configuration updates in the remote cluster successfully replicate to the local cluster. You power
up the storage on the disaster site as part of this procedure, but you do not and must not power up the
controller modules on the disaster site.
Steps
1. Ensure that switchover has been completed by running the metrocluster operation show
command.
Example
controller_A_1::> metrocluster operation show
Operation: switchover
State: successful
Start Time: 7/25/2014 20:01:48
End Time: 7/25/2014 20:02:14
Errors: -
2. Resynchronize the data aggregates by running the metrocluster heal -phase aggregates
command from the surviving cluster.
Example
controller_A_1::> metrocluster heal -phase aggregates
[Job 130] Job succeeded: Heal Aggregates is successful.
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
3. Verify that the operation has been completed by running the metrocluster operation show
command.
Example
controller_A_1::> metrocluster operation show
Operation: heal-aggregates
State: successful
Start Time: 7/25/2014 18:45:55
End Time: 7/25/2014 18:45:56
Errors: -
4. Check the state of the aggregates by running the storage aggregate show command.
Example
controller_A_1::> storage aggregate show
Aggregate Size     Available Used% State  #Vols Nodes   RAID Status
--------- -------- --------- ----- ------ ----- ------- --------------------
...
aggr_b2   227.1GB  227.1GB   0%    online 0     mcc1-a2 raid_dp, mirrored,
                                                        normal
...
5. If storage has been replaced at the disaster site, you might need to remirror the aggregates.
Clustered Data ONTAP 8.3 Data Protection Guide
Healing the root aggregates after negotiated switchover
After the data aggregates have been healed, you must heal the root aggregates in preparation for the
switchback operation.
Before you begin
The data aggregates phase of the MetroCluster healing process must have been completed
successfully.
Steps
1. Switch back the mirrored aggregates by running the metrocluster heal -phase root-aggregates command.
Example
mcc1A::> metrocluster heal -phase root-aggregates
[Job 137] Job succeeded: Heal Root Aggregates is successful
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
2. Confirm the heal operation is complete by running the metrocluster operation show
command on the destination cluster:
Example
mcc1A::> metrocluster operation show
Operation: heal-root-aggregates
State: successful
Start Time: 7/29/2014 20:54:41
End Time: 7/29/2014 20:54:42
Errors: -
3. Power up each controller module on the disaster site.
4. After nodes are booted, verify that the root aggregates are mirrored.
If both plexes are present, resynchronization will start automatically. If one plex has failed, that
plex must be destroyed and the mirror must be recreated using the storage aggregate
mirror -aggregate aggregate-name command to reestablish the mirror relationship.
Clustered Data ONTAP 8.3 Data Protection Guide
Performing a switchback
After you heal the MetroCluster configuration you can perform the MetroCluster switchback
operation. The MetroCluster switchback operation returns the configuration to its normal operating
state, with the sync-source SVMs on the disaster site active and serving data from the local disk
pools.
Before you begin
• The disaster cluster must have successfully switched over to the surviving cluster.
• Healing must have been performed on the data and root aggregates.
• The surviving cluster nodes must not be in HA failover state (all nodes must be up and running for each HA pair).
• The disaster site controller modules must be completely booted and not in HA takeover mode.
• The root aggregate must be mirrored.
• The Inter-Switch Links (ISLs) must be online.
Steps
1. Confirm that all nodes are in the enabled state:
metrocluster node show
Example
cluster_B::> metrocluster node show
DR                                 Configuration  DR
Group Cluster Node                 State          Mirroring Mode
----- ------- ------------------   -------------- --------- ------
1     sti65-vsim-ucs258f8e_siteB
              sti65-vsim-ucs258e   configured     enabled   normal
              sti65-vsim-ucs258f   configured     enabled   normal
      sti65-vsim-ucs258g8h_siteA
              sti65-vsim-ucs258g   configured     enabled   normal
              sti65-vsim-ucs258h   configured     enabled   normal
4 entries were displayed.
2. Confirm that resynchronization is complete on all SVMs:
metrocluster vserver show
3. Verify that any automatic LIF migrations being performed by the healing operations have been successfully completed:
metrocluster check lif show
4. From any node in the MetroCluster configuration, simulate the switchback to ensure that
switchback can succeed by running the metrocluster switchback -simulate command at
the advanced privilege level.
If the command indicates any vetoes that would prevent switchback, resolve those issues before
proceeding.
5. Perform the switchback by running the metrocluster switchback command from any node
in the surviving cluster.
6. Check the progress of the switchback operation:
metrocluster show
Example
cluster_B::> metrocluster show
Cluster                 Configuration State  Mode
----------------------- -------------------- ----------------------
Local: cluster_B        configured           switchover
Remote: cluster_A       configured           waiting-for-switchback

The switchback operation is complete when the output displays normal:

cluster_B::> metrocluster show
Cluster                 Configuration State  Mode
----------------------- -------------------- ------
Local: cluster_B        configured           normal
Remote: cluster_A       configured           normal
If a switchback takes a long time to complete, you can check on the status of in-progress
baselines by using the metrocluster config-replication resync-status show
command.
7. Reestablish any SnapMirror or SnapVault configurations.
Clustered Data ONTAP 8.3 Data Protection Guide
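The completion check in step 6 can be sketched as a scan of captured metrocluster show output. The file name and its two sample lines are stand-ins for real command output, not values from this guide:

```shell
# Sketch: switchback is complete only when both the Local and Remote rows of
# "metrocluster show" report a Mode of "normal". mcc_show.txt stands in for
# captured output; the two lines written below are sample data.
printf 'Local: cluster_B configured normal\n'  > mcc_show.txt
printf 'Remote: cluster_A configured normal\n' >> mcc_show.txt
if [ "$(grep -c ' normal$' mcc_show.txt)" -eq 2 ]; then
  echo "switchback complete"
else
  echo "switchback still in progress"
fi
```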
Verifying a successful switchback
You should confirm that all aggregates and SVMs are switched back and online.
Steps
1. Verify that the switched-over data aggregates (in the following example, aggr_b2 on node B2) are
switched back:
aggr show
Example
controller_B_1::> aggr show
Aggregate Size     Available Used% State  #Vols Nodes          RAID Status
--------- -------- --------- ----- ------ ----- -------------- -----------
...
aggr_b2   227.1GB  227.1GB   0%    online 0     controller_B_2 raid_dp,
                                                               mirrored,
                                                               normal

controller_B_1::> aggr show
Aggregate Size     Available Used% State   #Vols Nodes          RAID Status
--------- -------- --------- ----- ------- ----- -------------- -----------
...
aggr_b2   -        -         -     unknown -     controller_A_1 -
2. Verify that all sync-destination SVMs on the surviving cluster are dormant (showing an Admin
State of stopped) and the sync-source SVMs on the disaster cluster are up and running.
vserver show -subtype sync-source
Example
controller_B_1::> vserver show -subtype sync-source
                                  Admin   Root               Name    Name
Vserver Type Subtype              State   Volume   Aggregate Service Mapping
------- ---- -------------------- ------- -------- --------- ------- -------
...
vs1a-mc data sync-source          running vs1a_vol aggr_b2   file    file

controller_A_1::> vserver show -subtype sync-destination
                                       Admin   Root                  Name    Name
Vserver       Type Subtype             State   Volume   Aggregate    Service Mapping
------------- ---- ------------------- ------- -------- ------------ ------- -------
...
mcc1A-vs1a-mc data sync-destination    stopped vs1a_vol sosb_aggr_b2 file    file
3. Confirm that the switchback operations succeeded by using the metrocluster operation show command.
• If the command output shows that the switchback operation state is successful, the switchback process is complete and you can proceed with operation of the system.
• If the command output shows that the switchback operation or switchback-continuation-agent operation is partially successful, perform the suggested fix provided in the output of the metrocluster operation show command.
After you finish
Repeat the previous sections to perform the switchback in the opposite direction. If site_A did a
switchover of site_B, have site_B do a switchover of site_A.
Reestablishing SnapMirror or SnapVault SVM peering relationships
After a switchover or switchback operation, you must manually reestablish any SVM peering to
clusters outside of the MetroCluster configuration if the destination of the relationship is in the
MetroCluster configuration.
About this task
This procedure is done at the surviving site after a switchover has occurred or at the disaster site after
a switchback has occurred.
Steps
1. Check whether the SVM peering relationships have been reestablished by using the
metrocluster vserver show command.
2. Reestablish any SnapMirror or SnapVault configurations.
Clustered Data ONTAP 8.3 Data Protection Guide
Where to find additional information
When you have completed and verified the MetroCluster configuration, you can then configure
storage protocols and other Data ONTAP features. You can use the preinstalled OnCommand System
Manager on each node or the command line interface. Express guides, comprehensive guides, and
technical reports can help you achieve these goals.
To load System Manager, you must enter the cluster management LIF address as the URL in a web
browser that has connectivity to the node.
You can also use OnCommand Unified Manager and OnCommand Performance Manager to monitor
the MetroCluster configuration.
MetroCluster and Data ONTAP libraries
• NetApp Documentation: MetroCluster in clustered Data ONTAP: all MetroCluster guides
• NetApp Documentation: Clustered Data ONTAP Express Guides: all Data ONTAP express guides
• NetApp Documentation: Data ONTAP 8 (current releases): all Data ONTAP guides

MetroCluster and miscellaneous guides
• Clustered Data ONTAP 8.3 MetroCluster Installation and Configuration Guide: MetroCluster architecture; cabling the configuration; configuring the FC-to-SAS bridges; configuring the FC switches; configuring the MetroCluster in Data ONTAP
• Clustered Data ONTAP 8.3 MetroCluster Management and Disaster Recovery Guide: understanding the MetroCluster configuration; switchover, healing, and switchback; disaster recovery
• MetroCluster Service Guide: guidelines for maintenance in a MetroCluster configuration; hardware replacement and firmware upgrade procedures for FC-to-SAS bridges and FC switches; hot-adding a disk shelf; hot-removing a disk shelf; replacing hardware at a disaster site
• Clustered Data ONTAP 8.3 Data Protection Guide: how mirrored aggregates work; SyncMirror; SnapMirror; SnapVault
• NetApp Documentation: OnCommand Unified Manager Core Package (current releases): monitoring the MetroCluster configuration
• NetApp Documentation: OnCommand Performance Manager for Clustered Data ONTAP: monitoring MetroCluster performance
• 7-Mode Transition Tool 2.0 Data and Configuration Transition Guide: transitioning data from 7-Mode storage systems to clustered storage systems
Copyright information
Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud
ONTAP, clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash
Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale,
FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare,
Simplicity, Simulate ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock,
SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator,
SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or registered
trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp
trademarks is available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or
products are trademarks or registered trademarks of their respective holders and should be treated as
such.
How to send comments about documentation and receive update notification
You can help us to improve the quality of our documentation by sending us your feedback. You can
receive automatic notification when production-level (GA/FCS) documentation is initially released
or important changes are made to existing production-level documents.
If you have suggestions for improving this document, send us your comments by email to
doccomments@netapp.com. To help us direct your comments to the correct division, include in the
subject line the product name, version, and operating system.
If you want to be notified automatically when production-level documentation is released or
important changes are made to existing production-level documents, follow the Twitter account
@NetAppDoc.
You can also contact us in the following ways:
• NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.
• Telephone: +1 (408) 822-6000
• Fax: +1 (408) 822-4501
• Support telephone: +1 (888) 463-8277
Index
A
addresses
configuring NTP server, to synchronize system time 49
gathering required network information 12, 14
admin accounts
changing the factory-configured default password for 44
aggregates
healing data 64
healing root aggregates 66
verifying online after a switchback 68
attributes
modifying cluster 48
AutoSupport
configuring 45
testing 45
B
Brocade FC switch configuration
configuring ISL ports 31
setting the switch name 30
C
cables
labeling 20
cabling
Inter-Switch Links (ISLs) 22
verifying with Config Advisor 55
checking
MetroCluster configuration operation 56
checklists
software setup, for factory-configured MetroCluster 41
cluster health
verifying 50
cluster interconnects
cabling MetroCluster configurations 23
cluster management networks
configuring 44, 47
cluster peering
verifying between the MetroCluster sites 54
cluster peering connections
cabling in MetroCluster configurations 24
Cluster Setup wizard
completing factory-configured cluster setup 47
completing factory-configured cluster setup 44
cluster switches
configuring IP addresses 37
clusters
modifying attributes of 48
modifying contact information of 48
modifying location of 48
renaming 48
commands
metrocluster switchover 61
comments
how to send feedback about documentation 75
components
racking 19
Config Advisor
configuring switches for 37
verifying cabling 55
configuration backup files
setting remote destinations for preservation 48
configurations
testing AutoSupport 45
configuring
NTP server, to synchronize system time 49
contact information
modifying cluster 48
controller ports
checking connectivity with partner site 24
controllers
racking 19
cross-cluster peering
verifying 54
D
data aggregates
healing 64
Data ONTAP health monitoring
for FC-to-SAS bridges 52
data ports
cabling 25
default passwords
changing factory-configured, for the admin account 44
destinations
specifying the URL for configuration backup 48
disaster sites
healing data aggregates after replacing hardware on 64
healing root aggregates after healing data aggregates 66
performing MetroCluster switchbacks for 67
disk shelves
cabling to FC-to-SAS bridges 27
racking 19
documentation
additional information about cluster configuration 71
how to receive automatic notification of changes to 75
how to send feedback about 75
DR partners
confirming that they came online after switchover 62
E
E-ports
configuring 31
express guides
additional documentation 71
requirements for using the MetroCluster installation guide 5
F
factory-configured clusters
changing the default password for the admin account 44
failover and giveback
verifying in a MetroCluster configuration 58
FC switch configurations
recommended port assignments 26
worksheets 10
FC switches
configuring for health monitoring 51
racking 19
FC-to-SAS bridges
cabling to storage controllers 27
configuring for health monitoring 52
FC-VI ports
cabling 21
feedback
how to send comments about documentation 75
flowcharts
MetroCluster installation workflow 6
H
HA interconnects
cabling 24
HA pair operation
verifying in a MetroCluster configuration 58
hardware components
racking 19
healing operations
data aggregate 64
prerequisites for MetroCluster configurations 64
root aggregates 66
health monitoring
configuring FC switches for health monitoring 51
for FC-to-SAS bridges 52
host names
gathering required network information 12, 14
I
information
how to send feedback about improving documentation 75
installation
workflow flowchart 6
Inter-Switch Links (ISLs)
cabling Inter-Switch Links (ISLs) 22
IP addresses
configuring for switches 37
gathering required network information 12, 14
ISL ports
alternate name for 31
cabling 22
configuring 31
L
labeling
cables 20
LIFs
preconfigured in a MetroCluster configuration 7
local HA pairs
verifying operation of in a MetroCluster configuration 58
location
modifying cluster 48
M
maintenance steps
performing negotiated switchovers 60
management ports
cabling 25
management switches
configuring IP addresses 37
MetroCluster components
cabling cluster interconnects 23
cabling the HA interconnect 24
racking disk shelves 19
racking FC switches 19
racking storage controllers 19
MetroCluster configurations
cabling FC-VI adapters 21
cabling Inter-Switch Links (ISLs) 22
cabling MetroCluster components
cabling FC-VI adapters
cabling HBA adapters 21
checking operation 56
healing data aggregates in 64
healing root aggregates in 66
performing switchbacks for 67
prerequisites for healing the configuration 64
refreshing 49
requirements for using the Express Guide to configure 5
MetroCluster switchback operations
about 60
verifying success of 68
metrocluster switchover command
using to perform a negotiated switchover 61
MetroCluster switchover operations
about 60
N
names
modifying cluster 48
names of nodes
changing 45
negotiated switchovers
performing using the metrocluster switchover command 61
network information
gathering required 14
gathering required network information 12
NFS
verifying SVM read and write access using 57
node management networks
configuring 44, 47
node names
changing 45
nodes
renaming 45
NTP servers
configuring to synchronize system time 49
P
partner cluster
cabling cluster peering connections 24
passwords
changing the factory-configured default, for the admin account 44
preconfigured in a MetroCluster configuration 7
peering relationships
reestablishing after switchover 63, 70
reestablishing SVM peering relationships after switchover 63, 70
planning
gathering required network information 12, 14
port usage
in a MetroCluster configuration 7
R
refreshing attributes
MetroCluster configurations 49
renaming
clusters 48
requirements
gathering network information 12, 14
root aggregates
healing 66
S
servers
configuring NTP, to synchronize system time 49
setup
software, checklist for factory-configured MetroCluster 41
SnapMirror
reestablishing SVM peering relationships after switchover 63, 70
SnapVault
reestablishing SVM peering relationships after switchover 63, 70
software
setup checklist for factory-configured clusters 41
storage controllers
cabling in a MetroCluster configuration 27
suggestions
how to send feedback about documentation 75
SVM peering
reestablishing SVM peering after switchover 63, 70
SVMs
preconfigured in a MetroCluster configuration 7
verifying NFS read and write access to 57
verifying online after a switchback 68
switch fabrics
configuring switch ports 31
switch names
setting for Brocade switches 30
switchback operations
about 60
healing root aggregates in preparation for 66
performing for MetroCluster configurations 67
verifying success of 68
switches
configuring IP addresses 37
switchover
confirming operation after DR partners come online 62
reestablishing SVM peering relationships after 63, 70
switchover operations
about 60
negotiated 61
performing negotiated for test or maintenance 60
verifying that your system is ready for 60
synchronizing system time
using NTP 49
system time
synchronizing using NTP 49
systems
verifying that the system is ready for switchover 60
T
technical reports
additional information about cluster configuration 71
testing
cabling with Config Advisor 55
NFS read and write access to SVMs 57
tests
performing functionality 60
time
synchronizing system, using NTP 49
Twitter
how to receive automatic notification of documentation changes 75
U
usernames
preconfigured in a MetroCluster configuration 7
V
verifications
performing switchover 60
verifying
cluster health 50
cluster peering 54
MetroCluster configuration operation 56
NFS read and write access to SVMs 57
W
WireGauge
See Config Advisor
wizards
completing factory-configured cluster setup 47
completing factory-configured cluster setup 44
workflows
MetroCluster installation, flowchart 6
worksheets
for FC switch configurations 10
for site configuration 12
for site configurations 14