Stretch MetroCluster™ Installation and Configuration Guide
Clustered Data ONTAP® 8.3
February 2016 | 215-10881_A0
Updated for 8.3.2
doccomments@netapp.com
Contents
MetroCluster documentation ...................................................................... 6
Preparing for the MetroCluster installation .............................................. 8
Differences between 7-Mode and clustered Data ONTAP MetroCluster
configurations ........................................................................................................ 8
Differences between the clustered Data ONTAP MetroCluster configurations .......... 9
Considerations for configuring cluster peering ......................................................... 10
Prerequisites for cluster peering .................................................................... 10
Considerations when using dedicated ports .................................................. 11
Considerations when sharing data ports ........................................................ 11
Considerations when transitioning from 7-Mode to clustered Data ONTAP ........... 12
Configuration of new MetroCluster systems ............................................................. 12
Preconfigured component passwords ............................................................ 13
Hardware setup checklist .............................................................................. 13
Software setup checklist ................................................................................ 14
Choosing the correct installation procedure for your configuration ..... 17
Cabling a two-node SAS-attached stretch MetroCluster
configuration .......................................................................................... 18
Parts of a two-node direct-attached MetroCluster configuration .............................. 18
Required MetroCluster hardware components and naming guidelines for two-node stretch configurations .................................................................................. 19
Installing and cabling MetroCluster components ...................................................... 20
Racking the hardware components ............................................................... 21
Cabling the controllers to each other and the storage shelves ....................... 22
Cabling the cluster peering connections ........................................................ 23
Cabling the management and data connections ............................................ 23
Cabling a two-node bridge-attached stretch MetroCluster
configuration .......................................................................................... 25
Parts of a two-node MetroCluster configuration with FC-to-SAS bridges ............... 25
Required MetroCluster hardware components and naming guidelines for two-node stretch configurations .................................................................................. 26
Gathering required information and reviewing the workflow ................................... 27
Information gathering worksheet for FC-to-SAS bridges ............................. 28
Installing and cabling MetroCluster components ...................................................... 29
Racking the hardware components ............................................................... 29
Cabling the controllers to each other ............................................................. 30
Cabling the cluster peering connections ........................................................ 30
Cabling the management and data connections ............................................ 31
Installing FC-to-SAS bridges and SAS disk shelves ................................................ 31
Preparing for the installation ......................................................................... 32
Installing the FC-to-SAS bridge and SAS shelves ........................................ 34
Configuring the MetroCluster software in Data ONTAP ....................... 41
Gathering required information and reviewing the workflow ................................... 42
IP network information worksheet for site A ................................................ 42
IP network information worksheet for site B ................................................ 43
Similarities and differences between regular cluster and MetroCluster
configurations ...................................................................................................... 46
Setting a previously used controller module to system defaults in Maintenance
mode .................................................................................................................... 46
Configuring FC-VI ports on a QLE2564 quad-port card .......................................... 47
Verifying disk assignment in Maintenance mode in a two-node configuration ........ 49
Verifying the HA state of components ...................................................................... 50
Setting up the clusters in a two-node MetroCluster configuration ............................ 50
Configuring the clusters into a MetroCluster configuration ..................................... 52
Peering the clusters ........................................................................................ 52
Mirroring the root aggregates ........................................................................ 62
Creating a mirrored data aggregate on each node ......................................... 63
Implementing the MetroCluster configuration .............................................. 64
Configuring the MetroCluster FC switches for health monitoring ............... 66
Checking the MetroCluster configuration ..................................................... 66
Checking for MetroCluster configuration errors with Config Advisor ..................... 68
Verifying switchover, healing, and switchback ......................................................... 69
Installing the MetroCluster Tiebreaker software ....................................................... 69
Protecting configuration backup files ........................................................................ 69
Stretch MetroCluster configurations with array LUNs .......................... 70
Example of a stretch MetroCluster configuration with array LUNs ......................... 70
Examples of two-node stretch MetroCluster configurations with disks and array
LUNs ................................................................................................................... 70
Using the OnCommand management tools for further configuration
and monitoring ....................................................................................... 73
Synchronizing the system time using NTP ............................................................... 73
Requirements and limitations when using Data ONTAP in a
MetroCluster configuration .................................................................. 75
Cluster peering from the MetroCluster sites to a third cluster .................................. 75
Volume creation on a root aggregate ......................................................................... 75
Networking and LIF creation guidelines for MetroCluster configurations ............... 75
Volume or FlexClone command VLDB errors .......................................................... 77
Output for storage disk show and storage shelf show commands in a two-node
MetroCluster configuration ................................................................................. 77
Output for the storage aggregate plex show command after a MetroCluster
switchover is indeterminate ................................................................................. 77
Modifying volumes to set NVFAIL in case of switchover ........................................ 77
Monitoring and protecting database validity by using NVFAIL ............................... 77
How NVFAIL protects database files ............................................................ 78
Commands for monitoring data loss events .................................................. 78
Accessing volumes in NVFAIL state after a switchover ............................... 79
Recovering LUNs in NVFAIL states after switchover .................................. 79
Glossary of MetroCluster terms ............................................................... 81
Copyright information ............................................................................... 83
Trademark information ............................................................................. 84
How to send comments about documentation and receive update
notifications ............................................................................................ 85
Index ............................................................................................................. 86
MetroCluster documentation
There are a number of documents that can help you configure, operate, and monitor a MetroCluster
configuration.
MetroCluster and Data ONTAP libraries
NetApp Documentation: MetroCluster in clustered Data ONTAP
• All MetroCluster guides

NetApp Documentation: Clustered Data ONTAP Express Guides
• All Data ONTAP express guides

NetApp Documentation: Data ONTAP 8 (current releases)
• All Data ONTAP guides
MetroCluster and miscellaneous guides
NetApp Technical Report 4375: MetroCluster for Data ONTAP Version 8.3 Overview and Best Practices
• A technical overview of the MetroCluster configuration and operation
• Best practices for MetroCluster configuration

Stretch MetroCluster Installation and Configuration Guide
• Stretch MetroCluster architecture
• Cabling the configuration
• Configuring the FC-to-SAS bridges
• Configuring the MetroCluster in Data ONTAP

Clustered Data ONTAP 8.3 MetroCluster Management and Disaster Recovery Guide
• Understanding the MetroCluster configuration
• Switchover, healing, and switchback
• Disaster recovery

MetroCluster Service Guide
• Guidelines for maintenance in a MetroCluster configuration
• Hardware replacement and firmware upgrade procedures for FC-to-SAS bridges and FC switches
• Hot-adding a disk shelf
• Hot-removing a disk shelf
• Replacing hardware at a disaster site

MetroCluster Tiebreaker Software Installation and Configuration Guide
• Monitoring the MetroCluster configuration with the MetroCluster Tiebreaker software

Clustered Data ONTAP 8.3 Data Protection Guide
• How mirrored aggregates work
• SyncMirror
• SnapMirror
• SnapVault

NetApp Documentation: OnCommand Unified Manager Core Package (current releases)
• Monitoring the MetroCluster configuration

NetApp Documentation: OnCommand Performance Manager for Clustered Data ONTAP
• Monitoring MetroCluster performance

7-Mode Transition Tool 2.2 Copy-Based Transition Guide
• Transitioning data from 7-Mode storage systems to clustered storage systems
Preparing for the MetroCluster installation
As you prepare for the MetroCluster installation, you should understand the MetroCluster hardware architecture and required components. If you are familiar with MetroCluster configurations in a 7-Mode environment, you should understand the key MetroCluster differences you find in a clustered Data ONTAP environment.
Differences between 7-Mode and clustered Data ONTAP
MetroCluster configurations
There are key differences between clustered Data ONTAP MetroCluster configurations and
configurations with Data ONTAP operating in 7-Mode.
In clustered Data ONTAP, a four-node MetroCluster configuration includes two HA pairs, each in a
separate cluster at physically separated sites.
Number of storage controllers
• Clustered Data ONTAP, four-node: Four. The controllers are configured as two HA pairs, one HA pair at each site.
• Clustered Data ONTAP, two-node: Two, one at each site. Each controller is configured as a single-node cluster.
• Data ONTAP 7-Mode: Two. The two controllers are configured as an HA pair with one controller at each site.

Local failover available?
• Clustered Data ONTAP, four-node: Yes. A failover can occur at either site without triggering an overall switchover of the configuration.
• Clustered Data ONTAP, two-node: No. If a problem occurs at the local site, the system switches over to the partner site.
• Data ONTAP 7-Mode: No. If a problem occurs at the local site, the system fails over to the partner site.

Single command for failover or switchover?
• Clustered Data ONTAP, four-node: Yes. The command for local failover in a four-node fabric MetroCluster configuration is storage failover takeover. The command for switchover is metrocluster switchover or metrocluster switchover -forced-on-disaster true.
• Clustered Data ONTAP, two-node: Yes, for switchover. The command for switchover is metrocluster switchover or metrocluster switchover -forced-on-disaster true.
• Data ONTAP 7-Mode: Yes. The 7-Mode commands are cf takeover or cf forcetakeover -d.

DS14 disk shelves supported?
• Clustered Data ONTAP, four-node: No
• Clustered Data ONTAP, two-node: No
• Data ONTAP 7-Mode: Yes

Two FC switch fabrics?
• Clustered Data ONTAP, four-node: Yes
• Clustered Data ONTAP, two-node: Yes, in a fabric MetroCluster configuration
• Data ONTAP 7-Mode: Yes

Stretch configuration with FC-to-SAS bridges (no switch fabric)?
• Clustered Data ONTAP, four-node: No
• Clustered Data ONTAP, two-node: Yes
• Data ONTAP 7-Mode: Yes

Stretch configuration with SAS cables (no switch fabric)?
• Clustered Data ONTAP, four-node: No
• Clustered Data ONTAP, two-node: Yes
• Data ONTAP 7-Mode: Yes
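As an illustrative sketch only (the cluster name is a placeholder, and the complete procedures are in the MetroCluster Management and Disaster Recovery Guide), a negotiated switchover and switchback in clustered Data ONTAP follows this general command sequence, run from the surviving or recovering site:

   cluster_B::> metrocluster switchover
   cluster_B::> metrocluster heal -phase aggregates
   cluster_B::> metrocluster heal -phase root-aggregates
   cluster_B::> metrocluster switchback

In a disaster, the forced variant shown above (metrocluster switchover -forced-on-disaster true) replaces the first command.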
Related concepts
Parts of a fabric MetroCluster configuration
Differences between the clustered Data ONTAP MetroCluster
configurations
The supported MetroCluster configurations have key differences in the required components.
Number of controllers
• Four-node fabric-attached: Four
• Two-node fabric-attached: Two
• Two-node bridge-attached stretch: Two
• Two-node direct-attached stretch: Two

Uses an FC switch storage fabric?
• Four-node fabric-attached: Yes
• Two-node fabric-attached: Yes
• Two-node bridge-attached stretch: No
• Two-node direct-attached stretch: No

Uses FC-to-SAS bridges?
• Four-node fabric-attached: Yes
• Two-node fabric-attached: Yes
• Two-node bridge-attached stretch: Yes
• Two-node direct-attached stretch: No

Uses direct-attached SAS storage?
• Four-node fabric-attached: No
• Two-node fabric-attached: No
• Two-node bridge-attached stretch: No
• Two-node direct-attached stretch: Yes

Supports local HA?
• Four-node fabric-attached: Yes
• Two-node fabric-attached: No
• Two-node bridge-attached stretch: No
• Two-node direct-attached stretch: No

Supports automatic switchover?
• Four-node fabric-attached: No
• Two-node fabric-attached: Yes
• Two-node bridge-attached stretch: Yes
• Two-node direct-attached stretch: Yes
Cluster configuration
In all configurations, each of the two MetroCluster sites is configured as a Data ONTAP cluster. In a
two-node MetroCluster configuration, each node is configured as a single-node cluster.
Considerations for configuring cluster peering
Each MetroCluster site is configured as a peer to its partner site. You should be familiar with the
prerequisites and guidelines for configuring the peering relationships and when deciding whether to
use shared or dedicated ports for those relationships.
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Prerequisites for cluster peering
Before you set up cluster peering, you should confirm that the IPspace, connectivity, port, IP address,
subnet, firewall, and cluster-naming requirements are met.
Connectivity requirements
The subnet used in each cluster for intercluster communication must meet the following
requirements:
• The subnet must belong to the broadcast domain that contains the ports used for intercluster communication.
• IP addresses used for intercluster LIFs do not need to be in the same subnet, but having them in the same subnet is a simpler configuration.
• You must have considered whether the subnet will be dedicated to intercluster communication or shared with data communication.
The intercluster network must be configured so that cluster peers have pair-wise full-mesh
connectivity within the applicable IPspace, which means that each pair of clusters in a cluster peer
relationship has connectivity among all of their intercluster LIFs.
A cluster's intercluster LIFs must use the same IP address version: all IPv4 addresses or all IPv6
addresses. Similarly, all of the intercluster LIFs of the peered clusters must use the same IP
addressing version.
Port requirements
The ports that will be used for intercluster communication must meet the following requirements:
• The broadcast domain that is used for intercluster communication must include at least two ports per node so that intercluster communication can fail over from one port to another. The ports added to a broadcast domain can be physical network ports, VLANs, or interface groups (ifgrps).
• All of the ports must be cabled.
• All of the ports must be in a healthy state.
• The MTU settings of the ports must be consistent.
• You must have considered whether the ports used for intercluster communication will be shared with data communication.
Firewall requirements
Firewalls and the intercluster firewall policy must allow the following:
• ICMP service
• TCP to the IP addresses of all of the intercluster LIFs over all of the following ports: 10000, 11104, and 11105
• HTTPS
The default intercluster firewall policy allows access through the HTTPS protocol and from all IP addresses (0.0.0.0/0), but the policy can be altered or replaced.
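As a hedged check only (the policy name may differ if the defaults were changed on your system), you can display the intercluster firewall policy that applies to these requirements with a command such as:

   cluster_A::> system services firewall policy show -policy intercluster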
Cluster requirements
Clusters must meet the following requirements:
• Each cluster must have a unique name. You cannot create a cluster peering relationship with any cluster that has the same name or is in a peer relationship with a cluster of the same name.
• The time on the clusters in a cluster peering relationship must be synchronized within 300 seconds (5 minutes). Cluster peers can be in different time zones.
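For illustration only, with placeholder LIF names, ports, and addresses (the complete procedure is in "Peering the clusters" later in this guide), intercluster LIF creation and peering generally look like the following:

   cluster_A::> network interface create -vserver cluster_A -lif intercluster_lif1 -role intercluster -home-node controller_A_1 -home-port e0e -address 192.0.2.11 -netmask 255.255.255.0
   cluster_A::> cluster peer create -peer-addrs 192.0.2.21,192.0.2.22
   cluster_A::> cluster peer show

The cluster peer create command prompts for the intercluster passphrase, which must match on both clusters.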
Considerations when using dedicated ports
When determining whether using a dedicated port for intercluster replication is the correct
intercluster network solution, you should consider configurations and requirements such as LAN
type, available WAN bandwidth, replication interval, change rate, and number of ports.
Consider the following aspects of your network to determine whether using a dedicated port is the
best intercluster network solution:
• If the amount of available WAN bandwidth is similar to that of the LAN ports and the replication interval is such that replication occurs while regular client activity exists, you should dedicate Ethernet ports for intercluster replication to avoid contention between replication and the data protocols.
• If the network utilization generated by the data protocols (CIFS, NFS, and iSCSI) is above 50 percent, you should dedicate ports for replication to allow for nondegraded performance if a node failover occurs.
• When physical 10-GbE ports are used for both data and replication, you can create VLAN ports for replication and dedicate the logical ports to intercluster replication. The bandwidth of the port is shared between all VLANs and the base port.
• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
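A minimal sketch of the VLAN approach described above, assuming placeholder node, port, VLAN, and broadcast-domain names (your port names and MTU will differ):

   cluster_A::> network port vlan create -node controller_A_1 -vlan-name e0e-100
   cluster_A::> network port broadcast-domain create -broadcast-domain intercluster_bd -mtu 1500 -ports controller_A_1:e0e-100

The intercluster LIFs are then created on the VLAN ports so that replication traffic is logically separated from data traffic on the same physical 10-GbE port.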
Considerations when sharing data ports
When determining whether sharing a data port for intercluster replication is the correct intercluster
network solution, you should consider configurations and requirements such as LAN type, available
WAN bandwidth, replication interval, change rate, and number of ports.
Consider the following aspects of your network to determine whether sharing data ports is the best
intercluster connectivity solution:
• For a high-speed network, such as a 10-Gigabit Ethernet (10-GbE) network, a sufficient amount of local LAN bandwidth might be available to perform replication on the same 10-GbE ports that are used for data access. In many cases, the available WAN bandwidth is far less than the 10-GbE LAN bandwidth.
• All nodes in the cluster might have to replicate data and share the available WAN bandwidth, making data port sharing more acceptable.
• Sharing ports for data and replication eliminates the extra port counts required to dedicate ports for replication.
• The maximum transmission unit (MTU) size of the replication network will be the same size as that used on the data network.
• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
• When data ports for intercluster replication are shared, the intercluster LIFs can be migrated to any other intercluster-capable port on the same node to control the specific data port that is used for replication.
Considerations when transitioning from 7-Mode to clustered
Data ONTAP
You must have the new MetroCluster configuration fully configured and operating before you use the transition tools to move data from a 7-Mode MetroCluster configuration to a clustered Data ONTAP configuration. If the 7-Mode configuration uses Brocade 6510 switches, the new configuration can share the existing fabrics to reduce the hardware requirements.
If you have Brocade 6510 switches and plan on sharing the switch fabrics between the 7-Mode fabric
MetroCluster and the MetroCluster running in clustered Data ONTAP, you must use the specific
procedure for configuring the MetroCluster components.
FMC-MCC transition: Configuring the MetroCluster hardware for sharing a 7-Mode Brocade 6510
FC fabric during transition
Configuration of new MetroCluster systems
New MetroCluster components are preconfigured and MetroCluster settings are enabled in the
software. In most cases, you do not need to perform the detailed procedures provided in this guide.
Hardware racking and cabling
Depending on the configuration you ordered, you might need to rack the systems and complete the
cabling.
Configuring the MetroCluster hardware components in systems with native disk shelves
FC switch and FC-to-SAS bridge configuration
For configurations using FC-to-SAS bridges, the bridges received with the new MetroCluster
configuration are preconfigured and do not require additional configuration unless you want to
change the names and IP addresses.
For configurations using FC switches, in most cases, FC switch fabrics received with the new
MetroCluster configuration are preconfigured for two Inter-Switch Links (ISLs). If you are using
three or four ISLs, you must manually configure the switches.
Configuring the FC switches
MetroCluster configuration in Data ONTAP
Nodes and clusters received with the new MetroCluster configuration are preconfigured and the
MetroCluster configuration is enabled.
Configuring the MetroCluster software in Data ONTAP
Preconfigured component passwords
Some MetroCluster components are preconfigured with usernames and passwords. You need to be
aware of these settings as you perform your site-specific configuration.
• Data ONTAP login: username admin, password netapp!123
• Service Processor (SP) login: username admin, password netapp!123
• Intercluster passphrase: no username required, passphrase netapp!123
• ATTO FC-to-SAS bridge: no username or password required
• Brocade switches: username Admin, password "password"
• NetApp cluster interconnect switches (CN1601, CN1610): no username or password required
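Because these defaults are well known, you change them during site-specific configuration. A hedged example of resetting the Data ONTAP admin password (the command prompts for the new value):

   cluster_A::> security login password -username admin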
Hardware setup checklist
You need to know which hardware setup steps were completed at the factory and which steps you
need to complete at each MetroCluster site.
Each item lists the step, followed by whether it was completed at the factory and what you must complete.

• Mount components in one or more cabinets. (Factory: Yes. You: No.)
• Position cabinets in the desired location. (Factory: No. You: Yes; position them in the original order so that the supplied cables are long enough.)
• Connect multiple cabinets to each other, if applicable. (Factory: No. You: Yes; use the cabinet interconnect kit if it is included in the order. The kit box is labeled.)
• Secure the cabinets to the floor, if applicable. (Factory: No. You: Yes; use the universal bolt-down kit if it is included in the order. The kit box is labeled.)
• Cable the components within the cabinet. (Factory: Yes; cables 5 meters and longer are removed for shipping and placed in the accessories box. You: No.)
• Connect the cables between cabinets, if applicable. (Factory: No. You: Yes; the cables are in the accessories box.)
• Connect management cables to the customer's network. (Factory: No. You: Yes; connect them directly or through the CN1601 management switches, if present. Attention: To avoid address conflicts, do not connect management ports to the customer's network until after you change the default IP addresses to the customer's values.)
• Connect console ports to the customer's terminal server, if applicable. (Factory: No. You: Yes.)
• Connect the customer's data cables to the cluster. (Factory: No. You: Yes.)
• Connect the long-distance ISLs between the MetroCluster sites, if applicable. (Factory: No. You: Yes; see Connecting the ISLs between the MetroCluster sites.)
• Connect the cabinets to power and power on the components. (Factory: No. You: Yes; power them on in the following order: 1. PDUs, 2. disk shelves and FC-to-SAS bridges, if applicable, 3. FC switches, if applicable, 4. nodes.)
• Assign IP addresses to the management ports of the cluster switches and to the management ports of the management switches, if present. (Factory: No. You: Yes, for switched clusters only; connect to the serial console port of each switch and log in with user name "admin" with no password. Suggested addresses are 10.10.10.81, 10.10.10.82, 10.10.10.83, and 10.10.10.84.)
• Verify cabling by running the Config Advisor tool. (Factory: No. You: Yes; see Verifying the MetroCluster configuration on page 68.)
Software setup checklist
You need to know which software setup steps were completed at the factory and which steps you
need to complete at each MetroCluster site.
This guide includes all of the required steps, which you can complete after reviewing the information
in this checklist.
Each item lists the step, followed by whether it was completed at the factory and whether you complete it by using procedures in this guide.

• Install the clustered Data ONTAP software. (Factory: Yes. You: No.)
• Create the cluster on the first node at the first MetroCluster site. (Factory: Yes. You: No.)
• Enable the switchless-cluster option on a two-node switchless cluster. (Factory: Yes. You: No.)
• Repeat the steps to configure the second MetroCluster site. (Factory: Yes. You: No.)
• Configure the clusters for peering. (Factory: Yes. You: No.)
• Enable the MetroCluster configuration. (Factory: Yes. You: No.)
• Configure user credentials and management IP addresses on the management and cluster switches. (Factory: Yes, if ordered; user IDs are "admin" with no password. You: No.)
• Thoroughly test the MetroCluster configuration. (Factory: Yes. You: No, although you must perform verification steps at your site as described below.)
• Complete the cluster setup worksheet. (Factory: No. You: Yes.)
• Change the password for the admin account to the customer's value. (Factory: No. You: Yes.)
• Configure each node with the customer's values. (Factory: No. You: Yes; name the cluster, set the admin password, set up the private cluster interconnect, install all purchased license keys, create the cluster management interface, create the node management interface, and configure the FC switches, if applicable.)
• Discover the clusters in OnCommand System Manager. (Factory: No. You: Yes.)
• Configure an NTP server for each cluster. (Factory: No. You: Yes.)
• Verify the cluster peering. (Factory: No. You: Yes.)
• Verify the health of the cluster and that the cluster is in quorum. (Factory: No. You: Yes.)
• Verify basic operation of the MetroCluster sites. (Factory: No. You: Yes.)
• Check the MetroCluster configuration. (Factory: No. You: Yes.)
• Test switchover, healing, and switchback. (Factory: No. You: Yes.)
• Set the destination for configuration backup files. (Factory: No. You: Yes.)
• Optional: Change the cluster name, for example, to better distinguish the clusters. (Factory: No. You: Yes.)
• Optional: Change the node name, if desired. (Factory: No. You: Yes.)
• Configure AutoSupport. (Factory: No. You: Yes.)
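A hedged sketch of a few of the customer-completed items above, using placeholder names and addresses; the detailed procedures appear later in this guide:

   cluster_A::> cluster time-service ntp server create -server ntp.example.com
   cluster_A::> cluster peer show
   cluster_A::> cluster show
   cluster_A::> metrocluster check run
   cluster_A::> metrocluster check show
   cluster_A::> system configuration backup settings modify -destination ftp://192.0.2.50/backups/
   cluster_A::> system node autosupport modify -node * -state enable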
Choosing the correct installation procedure for
your configuration
You need to choose the correct installation procedure based on whether you are using FlexArray
LUNs, the number of nodes in the MetroCluster configuration, and whether you are sharing an
existing FC switch fabric used by a 7-Mode fabric MetroCluster.
[Flowchart: select your MetroCluster configuration. With NetApp (native) disks, proceed to "Cabling a two-node bridge-attached stretch MetroCluster configuration" or "Cabling a two-node SAS-attached stretch MetroCluster configuration," and then to "Configuring the MetroCluster software in Data ONTAP." With array LUNs (FlexArray virtualization), proceed to "Planning and installing a MetroCluster configuration with array LUNs."]
For this installation type, use these procedures:
• Two-node stretch configuration with FC-to-SAS bridges:
  1. Cabling a two-node bridge-attached stretch MetroCluster configuration on page 25
  2. Configuring the MetroCluster software in Data ONTAP (native disk shelves only) on page 41
• Two-node stretch configuration with direct-attached SAS cabling:
  1. Cabling a two-node SAS-attached stretch MetroCluster configuration on page 18
  2. Configuring the MetroCluster software in Data ONTAP (native disk shelves only) on page 41
• Installation with array LUNs: Planning and installing a MetroCluster configuration with array LUNs
Cabling a two-node SAS-attached stretch
MetroCluster configuration
The MetroCluster components must be physically installed, cabled, and configured at both
geographic sites. The steps are slightly different for a system with native disk shelves as opposed to a
system with array LUNs.
About this task
Parts of a two-node direct-attached MetroCluster
configuration
The two-node MetroCluster direct-attached configuration requires a number of parts, including two single-node clusters in which the controller modules are directly connected to the storage using SAS cables.
The MetroCluster configuration includes the following key hardware elements:
• Storage controllers
  The storage controllers connect directly to the storage using SAS cables.
  Each storage controller is configured as a DR partner to a storage controller on the partner site. When the MetroCluster configuration is enabled, the system automatically pairs the two nodes with the lowest system IDs in each of the two clusters as DR partners.
  ◦ Copper SAS cables can be used for shorter distances.
  ◦ Optical SAS cables can be used for longer distances.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You use the Component Explorer to select the components and Data ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.
• Cluster peering network
  The cluster peering network provides connectivity for mirroring of the Storage Virtual Machine (SVM) configuration. The configuration of all SVMs on one cluster is mirrored to the partner cluster.
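After the MetroCluster configuration is enabled later in this guide, the automatic DR partner pairing can be viewed with a command such as the following (shown only as a pointer; output is omitted):

   cluster_A::> metrocluster node show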
Required MetroCluster hardware components and naming
guidelines for two-node stretch configurations
The MetroCluster configuration requires a variety of hardware components. For convenience and
clarity, standard names for components are used throughout the MetroCluster documentation. Also,
one site is referred to as Site A and the other site is referred to as Site B.
Supported software and hardware
The hardware and software must be supported for the MetroCluster configuration.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and Data ONTAP version to refine your
search. You can click Show Results to display the list of supported configurations that match the
criteria.
NetApp Hardware Universe
When using All Flash Optimized systems, all controller modules in the MetroCluster configuration
must be configured as All Flash Optimized systems.
Note: Long-wave SFPs are not supported in the MetroCluster storage switches. For a table of supported SFPs, see the MetroCluster Technical Report.
NetApp Technical Report 4375: MetroCluster for Data ONTAP Version 8.3 Overview and Best
Practices
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Two single-node Data ONTAP clusters
• Naming must be unique within the MetroCluster configuration.
• Example names: cluster_A (Site A) and cluster_B (Site B)

Two storage controllers
• Naming must be unique within the MetroCluster configuration.
• All controller modules must be the same platform model and running the same version of Data ONTAP.
• The FC-VI connections between the controller modules must be 4 Gbps.
• Example names: controller_A_1 (Site A) and controller_B_1 (Site B)

Two or more FC-to-SAS bridges at each site
• These bridges connect the SAS disk shelves to the controllers.
• FibreBridge 7500N bridges support up to four storage stacks; FibreBridge 6500N bridges support only one stack.
• Naming must be unique within the MetroCluster configuration. The suggested names used as examples in this guide identify the controller that the bridge connects to and the port.
• The FC connections between the controller modules and the FC-to-SAS bridges must be 4 Gbps.
• Example names: bridge_A_1_port-number and bridge_A_2_port-number (Site A); bridge_B_1_port-number and bridge_B_2_port-number (Site B)

At least eight SAS disk shelves (recommended)
• Four shelves are recommended at each site to allow disk ownership on a per-shelf basis. A minimum of two shelves at each site is supported.
• Note: FlexArray systems support array LUNs and have different storage requirements. See Requirements for a MetroCluster configuration with array LUNs.
• Example names: shelf_A_1_1, shelf_A_2_1, shelf_A_1_2, and shelf_A_2_2 (Site A); shelf_B_1_1, shelf_B_2_1, shelf_B_1_2, and shelf_B_2_2 (Site B)
Installing and cabling MetroCluster components
SAS optical cables can be used to cable SAS disk shelves in a stretch MetroCluster system to achieve
greater distance connectivity.
Before you begin any procedure in this document
The following overall requirements must be met before completing this task:
• If you are using SAS optical single-mode breakout cables, the following rules apply:
  ◦ You can use these cables for controller-to-shelf connections. Shelf-to-shelf connections use multimode QSFP-to-QSFP cables or multimode MPO cables with MPO QSFP modules.
  ◦ The point-to-point (QSFP-to-QSFP) path of a single single-mode cable cannot exceed 500 meters.
  ◦ The total end-to-end path (the sum of the point-to-point paths from the controller to the last shelf) cannot exceed 510 meters. The total path includes the set of breakout cables, patch panels, and inter-panel cables.
  ◦ Up to one pair of patch panels can be used in a path.
  ◦ You need to supply the patch panels and inter-panel cables. The inter-panel cables must be the same mode as the SAS optical breakout cable: single-mode.
  ◦ You must connect all eight (four pairs) of the SC, LC, or MTRJ breakout connectors to the patch panel.
• The SAS cables can be SAS copper, SAS optical, or a mix, depending on whether or not your system meets the requirements for using the type of cable.
• If you are using a mix of SAS copper cables and SAS optical cables, the following rules apply:
  ◦ Shelf-to-shelf connections in a stack must be all SAS copper cables or all SAS optical cables.
  ◦ If the shelf-to-shelf connections are SAS optical cables, the shelf-to-controller connections to that stack must also be SAS optical cables.
  ◦ If the shelf-to-shelf connections are SAS copper cables, the shelf-to-controller connections to that stack can be SAS optical cables or SAS copper cables.
• All components must be supported.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You use the Component Explorer to select the components and Data ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.
About this task
• The use of SAS optical cables in a stack attached to FibreBridge 6500N and FibreBridge 7500N bridges is not supported in this release.
• Disk shelves connected with SAS optical cables require a version of disk shelf firmware that supports SAS optical cables. Best practice is to update all disk shelves in the storage system with the latest version of disk shelf firmware.
  Note: Do not revert disk shelf firmware to a version that does not support SAS optical cables.
• The cable QSFP connector end connects to a disk shelf or a SAS port on a controller. The QSFP connectors are keyed; when oriented correctly into a SAS port, the QSFP connector clicks into place and the disk shelf SAS port link LED, labeled LNK (Link Activity), illuminates green. Do not force a connector into a port.
• The terms node and controller are used interchangeably.
Choices
• Racking the hardware components on page 21
• Cabling the controllers to each other and the storage shelves on page 22
• Cabling the cluster peering connections on page 23
• Cabling the management and data connections on page 23
Racking the hardware components
If you have not received the equipment already installed in cabinets, you must rack the components.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Plan the positioning of the MetroCluster components.
The amount of rack space needed depends on the platform model of the storage controllers, the
switch types, and the number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
4. Install the disk shelves, power them on, and set the shelf IDs.
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
• You must power-cycle each disk shelf.
• Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration (including both sites).
Cabling the controllers to each other and the storage shelves
The controller FC-VI adapters must be cabled directly to each other. The controller SAS ports must
be cabled to both the remote and local storage stacks.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Cable the FC-VI ports.
Connect each FC-VI port on controller A directly to the matching FC-VI port on controller B:
• Controller A port fc-vi a to Controller B port fc-vi a
• Controller A port fc-vi b to Controller B port fc-vi b
2. Cable the SAS ports.
a. Determine the controller port pairs you will be cabling.
The Universal SAS and ACP Cabling Guide has completed SAS port pair worksheets for
common configurations, and also has a SAS port pairs worksheet template if a completed
worksheet is not provided for your configuration.
SAS Disk Shelves Universal SAS and ACP Cabling Guide
The following illustration shows the connections. Your port usage may be different, depending
on the available SAS ports on the controller module.
[Illustration: SAS cabling for a two-node direct-attached configuration. SAS ports A, B, C, and D in slot 1 of controller 1 and controller 2 connect to the SAS and ACP ports on IOM A and IOM B of the first and last shelves in stack 1 and stack 2.]
Cabling the cluster peering connections
You must ensure that the controller ports used for cluster peering have connectivity with the cluster
on the partner site.
About this task
This task must be performed on each controller in the MetroCluster configuration.
At least two ports on each controller should be used for cluster peering.
The recommended minimum bandwidth for the ports and network connectivity is 1 GbE.
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
Related concepts
Considerations for configuring cluster peering on page 10
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Cabling the management and data connections
You must cable the management and data ports on each storage controller to the site networks.
About this task
This task must be repeated for each controller at both MetroCluster sites.
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Cabling a two-node bridge-attached stretch
MetroCluster configuration
The MetroCluster components must be physically installed, cabled, and configured at both
geographic sites. The steps are slightly different for a system with native disk shelves as opposed to a
system with array LUNs.
About this task
Parts of a two-node MetroCluster configuration with FC-to-SAS bridges
As you plan your MetroCluster configuration you should understand the parts of the configuration
and how they work together.
The MetroCluster configuration includes the following key hardware elements:
• Storage controllers
  The storage controllers are not connected directly to the storage but connect to FC-to-SAS bridges. The storage controllers are connected to each other by FC cables between each controller's FC-VI adapters.
  Each storage controller is configured as a DR partner to a storage controller on the partner site. When the MetroCluster configuration is enabled, the system automatically pairs the two nodes with the lowest system IDs in each of the two clusters as DR partners.
• FC-to-SAS bridges
  The FC-to-SAS bridges connect the SAS storage stacks to the FC ports on the controllers, providing bridging between the two protocols.
• Cluster peering network
  The cluster peering network provides connectivity for mirroring of the Storage Virtual Machine (SVM) configuration. The configuration of all SVMs on one cluster is mirrored to the partner cluster.
The following illustration shows a simplified view of the MetroCluster configuration. For some
connections, a single line represents multiple, redundant connections between the components. Data
and management network connections are not shown.
[Illustration: cluster_A at one site, with controller_A_1 connected through FC_bridge_A_1 and FC_bridge_A_2 to its SAS stack or stacks; cluster_B at the other site, with controller_B_1 connected through FC_bridge_B_1 and FC_bridge_B_2 to its SAS stack or stacks. The controllers are connected to each other by FC-VI links and to each other's clusters by the cluster peering network.]
• The configuration consists of two single-node clusters.
• Each site has one stack of SAS storage. Additional storage stacks are supported, but only one is shown at each site.
Required MetroCluster hardware components and naming
guidelines for two-node stretch configurations
The MetroCluster configuration requires a variety of hardware components. For convenience and
clarity, standard names for components are used throughout the MetroCluster documentation. Also,
one site is referred to as Site A and the other site is referred to as Site B.
Supported software and hardware
The hardware and software must be supported for the MetroCluster configuration.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and Data ONTAP version to refine your
search. You can click Show Results to display the list of supported configurations that match the
criteria.
NetApp Hardware Universe
When using All Flash Optimized systems, all controller modules in the MetroCluster configuration
must be configured as All Flash Optimized systems.
Note: Long-wave SFPs are not supported in the MetroCluster storage switches. For a table of supported SFPs, see the MetroCluster Technical Report.
NetApp Technical Report 4375: MetroCluster for Data ONTAP Version 8.3 Overview and Best
Practices
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Two single-node Data ONTAP clusters
• Naming must be unique within the MetroCluster configuration.
• Example names: cluster_A (Site A) and cluster_B (Site B)

Two storage controllers
• Naming must be unique within the MetroCluster configuration.
• All controller modules must be the same platform model and running the same version of Data ONTAP.
• The FC-VI connections between the controller modules must be 4 Gbps.
• Example names: controller_A_1 (Site A) and controller_B_1 (Site B)

Two or more FC-to-SAS bridges at each site
• These bridges connect the SAS disk shelves to the controllers.
• FibreBridge 7500N bridges support up to four storage stacks; FibreBridge 6500N bridges support only one stack.
• Naming must be unique within the MetroCluster configuration. The suggested names used as examples in this guide identify the controller that the bridge connects to and the port.
• The FC connections between the controller modules and the FC-to-SAS bridges must be 4 Gbps.
• Example names: bridge_A_1_port-number and bridge_A_2_port-number (Site A); bridge_B_1_port-number and bridge_B_2_port-number (Site B)

At least eight SAS disk shelves (recommended)
• Four shelves are recommended at each site to allow disk ownership on a per-shelf basis. A minimum of two shelves at each site is supported.
• Note: FlexArray systems support array LUNs and have different storage requirements. See Requirements for a MetroCluster configuration with array LUNs.
• Example names: shelf_A_1_1, shelf_A_2_1, shelf_A_1_2, and shelf_A_2_2 (Site A); shelf_B_1_1, shelf_B_2_1, shelf_B_1_2, and shelf_B_2_2 (Site B)
Gathering required information and reviewing the workflow
You need to gather the required network information, identify the ports you will be using, and review
the hardware installation workflow before you begin cabling your system.
Related information
NetApp Interoperability Matrix Tool
Information gathering worksheet for FC-to-SAS bridges
Before beginning to configure the MetroCluster sites, you should gather required configuration
information.
Site A, FC-to-SAS bridge 1 (FC_bridge_A_1a)
Each SAS stack requires at least two FC-to-SAS bridges.
• For FibreBridge 7500N bridges, each bridge connects to FC_switch_A_1_port-number and FC_switch_A_2_port-number.
• For FibreBridge 6500N bridges, one bridge connects to FC_switch_A_1_port-number and the second connects to FC_switch_A_2_port-number.
Record your values:
• Bridge_A_1a IP address: _______________
• Bridge_A_1a Username: _______________
• Bridge_A_1a Password: _______________

Site A, FC-to-SAS bridge 2 (FC_bridge_A_1b)
Each SAS stack requires at least two FC-to-SAS bridges.
• For FibreBridge 7500N bridges, each bridge connects to FC_switch_A_1_port-number and FC_switch_A_2_port-number.
• For FibreBridge 6500N bridges, one bridge connects to FC_switch_A_1_port-number and the second connects to FC_switch_A_2_port-number.
Record your values:
• Bridge_A_1b IP address: _______________
• Bridge_A_1b Username: _______________
• Bridge_A_1b Password: _______________

Site B, FC-to-SAS bridge 1 (FC_bridge_B_1a)
Each SAS stack requires at least two FC-to-SAS bridges.
• For FibreBridge 7500N bridges, each bridge connects to FC_switch_B_1_port-number and FC_switch_B_2_port-number.
• For FibreBridge 6500N bridges, one bridge connects to FC_switch_B_1_port-number and the second connects to FC_switch_B_2_port-number.
Record your values:
• Bridge_B_1a IP address: _______________
• Bridge_B_1a Username: _______________
• Bridge_B_1a Password: _______________

Site B, FC-to-SAS bridge 2 (FC_bridge_B_1b)
Each SAS stack requires at least two FC-to-SAS bridges.
• For FibreBridge 7500N bridges, each bridge connects to FC_switch_B_1_port-number and FC_switch_B_2_port-number.
• For FibreBridge 6500N bridges, one bridge connects to FC_switch_B_1_port-number and the second connects to FC_switch_B_2_port-number.
Record your values:
• Bridge_B_1b IP address: _______________
• Bridge_B_1b Username: _______________
• Bridge_B_1b Password: _______________
Installing and cabling MetroCluster components
The storage controllers must be cabled to each other, and the cluster peering connections must be cabled to link the MetroCluster sites. The storage controllers must also be cabled to the data and management network.
Steps
1. Racking the hardware components on page 29
2. Cabling the controllers to each other on page 30
3. Cabling the cluster peering connections on page 30
4. Cabling the management and data connections on page 31
Racking the hardware components
If you have not received the equipment already installed in cabinets, you must rack the components.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Plan out the positioning of the MetroCluster components.
The rack space depends on the platform model of the storage controllers, the switch types, and the
number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
4. Install the disk shelves, power them on, and set the shelf IDs.
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
• You must power-cycle each disk shelf.
• Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration (including both sites).
5. Install each FC-to-SAS bridge:
a. Secure the “L” brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
The openings in the bridge "L" brackets are compliant with rack standard ETA-310-X for 19-inch (482.6 mm) racks.
For more information and an illustration of the installation, see the ATTO FibreBridge
Installation and Operation Manual for your bridge model.
b. Connect each bridge to a power source that provides a proper ground.
c. Power on each bridge.
Note: For maximum resiliency, bridges that are attached to the same stack of disk shelves
must be connected to different power sources.
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Cabling the controllers to each other
Each controller's FC-VI adapters must be cabled directly to its partner.
Step
1. Cable the FC-VI ports.
Connect each FC-VI port on controller A directly to the matching FC-VI port on controller B:
• Controller A port fc-vi a to Controller B port fc-vi a
• Controller A port fc-vi b to Controller B port fc-vi b
Cabling the cluster peering connections
You must ensure that the controller ports used for cluster peering have connectivity with the cluster
on the partner site.
About this task
This task must be performed on each controller in the MetroCluster configuration.
At least two ports on each controller should be used for cluster peering.
The recommended minimum bandwidth for the ports and network connectivity is 1 GbE.
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
Related concepts
Considerations for configuring cluster peering on page 10
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Cabling the management and data connections
You must cable the management and data ports on each storage controller to the site networks.
About this task
This task must be repeated for each controller at both MetroCluster sites.
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Installing FC-to-SAS bridges and SAS disk shelves
You install and cable ATTO FibreBridge bridges and SAS disk shelves as part of a new MetroCluster
installation.
About this task
For systems received from the factory, the FC-to-SAS bridges are preconfigured and do not require
additional configuration.
This procedure is written with the assumption that you are using the recommended bridge
management interfaces: the ATTO ExpressNAV GUI and ATTO QuickNAV utility.
You use the ATTO ExpressNAV GUI to configure and manage a bridge, and to update the bridge
firmware. You use the ATTO QuickNAV utility to configure the bridge Ethernet management 1 port.
You can use other management interfaces instead, if needed, such as a serial port or Telnet to
configure and manage a bridge and to configure the Ethernet management 1 port, and FTP to update
the bridge firmware.
This procedure uses the following workflow:
Steps
1. Preparing for the installation on page 32
2. Installing the FC-to-SAS bridge and SAS shelves on page 34
Related concepts
Example of a four-node MetroCluster configuration with disks and array LUNs
Preparing for the installation
When you are preparing to install the bridges as part of your new MetroCluster system, you must
ensure that your system meets certain requirements, including meeting setup and configuration
requirements for the bridges. Other requirements include downloading the necessary documents, the
ATTO QuickNAV utility, and the bridge firmware.
Before you begin
• Your system must already be installed in a rack if it was not shipped in a system cabinet.
• Your configuration must be using supported hardware models and software versions.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You use the Component Explorer to select the components and Data ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.
• Each FC switch must have one FC port available for one bridge to connect to it.
• The computer you are using to set up the bridges must be running an ATTO-supported web browser to use the ATTO ExpressNAV GUI.
  The ATTO-supported web browsers are Internet Explorer 8 and 9, and Mozilla Firefox 3. The ATTO Product Release Notes have an up-to-date list of supported web browsers. You can access this document from the ATTO web site as described in the following steps.
Steps
1. Download the following documents:
• SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
2. Download content from the ATTO web site and from the NetApp web site:
a. From NetApp Support, navigate to the ATTO FibreBridge Description page by clicking Software, scrolling to Protocol Bridge and choosing ATTO FibreBridge from the drop-down menu, clicking Go!, and then clicking View & Download.
b. Access the ATTO web site using the link provided for your FibreBridge model and download
the following:
•
ATTO FibreBridge 7500N Installation and Operation Manual
•
ATTO FibreBridge 6500N Installation and Operation Manual
•
ATTO QuickNAV utility (to the computer you are using for setup)
   c. Go to the ATTO FibreBridge Firmware Download page for your FibreBridge model and do the following:

      If you are using FibreBridge 7500N, then:
      • Navigate to the ATTO FibreBridge 7500N Firmware Download page by clicking Continue at the end of the ATTO FibreBridge Description page.
      • Download the bridge firmware file using Steps 1 through 3 of that procedure.
        You update the firmware on each bridge later in this procedure.
      • Make a copy of the ATTO FibreBridge 7500N Firmware Download page and release notes for reference when you are instructed to update the firmware on each bridge.

      If you are using FibreBridge 6500N, then:
      • Navigate to the ATTO FibreBridge 6500N Firmware Download page by clicking Continue at the end of the ATTO FibreBridge Description page.
      • Download the bridge firmware file using Steps 1 through 3 of that procedure.
        You update the firmware on each bridge later in this procedure.
      • Make a copy of the ATTO FibreBridge 6500N Firmware Download page and release notes for reference when you are instructed to update the firmware on each bridge.
3. Gather the hardware and information needed to use the recommended bridge management
interfaces, the ATTO ExpressNAV GUI, and the ATTO QuickNAV utility:
a. Acquire a shielded Ethernet cable provided with the bridges (which connects from the bridge
Ethernet management 1 port to your network).
b. Determine a non-default user name and password (for accessing the bridges).
You should change the default user name and password.
c. Obtain an IP address, subnet mask, and gateway information for the Ethernet management 1
port on each bridge.
d. Disable VPN clients on the computer you are using for setup.
Active VPN clients cause the QuickNAV scan for bridges to fail.
Installing the FC-to-SAS bridge and SAS shelves
After ensuring that the system meets all the requirements in the “Preparing for the installation”
section, you can install your new system.
About this task
• You should use an equal number of disk shelves at each site.
• The system connectivity requirements for maximum distances for disk shelves, FC switches, and backup tape devices using 50-micron, multimode fiber-optic cables also apply to FibreBridge bridges.
  The Site Requirements Guide has detailed information about system connectivity requirements.
Note: SAS shelves in MetroCluster configurations do not require ACP cabling.
Steps
1. Configuring the FC-to-SAS bridges on page 34
2. Cabling disk shelves to the bridges on page 36
3. Cabling the FC-to-SAS bridges to the controller module in a two-node direct-attached
configuration on page 39
Configuring the FC-to-SAS bridges
Before cabling your model of the FC-to-SAS bridges, you must configure the settings in the
FibreBridge software.
Steps
1. Connect the Ethernet management 1 port on each bridge to your network using the shielded
Ethernet cable provided with the bridges.
Note: The Ethernet management 1 port enables you to quickly download the bridge firmware
(using ATTO ExpressNAV or FTP management interfaces) and to retrieve core files and extract
logs.
2. Configure the Ethernet management 1 port for each bridge by following the procedure in the
ATTO FibreBridge Installation and Operation Manual for your bridge.
Note: When running QuickNAV to configure an Ethernet management port, only the Ethernet
management port that is connected by the shielded Ethernet cable is configured. For example,
if you also wanted to configure the Ethernet management 2 port, you would need to connect the
shielded Ethernet cable to port 2 and run QuickNAV.
If you are using the ATTO FibreBridge 7500N, see the ATTO FibreBridge 7500N Installation and Operation Manual, section 2.0.
If you are using the ATTO FibreBridge 6500N, see the ATTO FibreBridge 6500N Installation and Operation Manual, section 2.0.
3. Configure the bridges.
Be sure to make note of the user name and password that you designate.
Note: Do not enable SNMP for the bridges.
The ATTO FibreBridge Installation and Operation Manual for your bridge model has the most
current information on available commands and how to use them.
a. Configure the IP settings of the bridge.
To set the IP address without the QuickNAV utility, you need to have a serial connection to the
FibreBridge.
Example
If using the CLI, issue the following commands:
set ipaddress mp1 ip-address
set ipsubnetmask mp1 subnet-mask
set ipgateway mp1 x.x.x.x
set ipdhcp mp1 disabled
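For example, with placeholder values for the Ethernet management 1 port (IP address 10.10.99.31, subnet mask 255.255.255.0, gateway 10.10.99.1; substitute the values from your worksheet), the commands would be:
set ipaddress mp1 10.10.99.31
set ipsubnetmask mp1 255.255.255.0
set ipgateway mp1 10.10.99.1
set ipdhcp mp1 disabled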
b. Configure the bridge name.
The bridges should each have a unique name within the MetroCluster configuration.
Example bridge names for one stack group on each site:
• bridge_A_1a
• bridge_A_1b
• bridge_B_1a
• bridge_B_1b
Example
If using the CLI, issue the following command:
set bridgename bridgename
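For example, to assign the first sample name shown above to the bridge:
set bridgename bridge_A_1a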
4. Configure the bridge FC ports.
a. Configure the data rate/speed of the bridge FC ports.
Note: Set the FCDataRate speed to the maximum speed supported by the FC port of the FC
switch or the controller module to which the bridge port connects.
The most current information on supported distance can be found in the NetApp Interoperability Matrix Tool.
After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You use the Component Explorer to select the components and Data ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.
Example
If using the CLI, issue the following command:
set FCDataRate port-number 16Gb
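For example, if the bridge FC1 port connects to an 8-Gb FC port on the controller (a hypothetical case; use the speed of your own port), you would enter:
set FCDataRate 1 8Gb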
b. Configure the connection mode that the port uses to communicate across the FC network.
• If you have a fabric-attached MetroCluster system, you must set the bridge connection mode to ptp (point-to-point).
• If you have a stretch MetroCluster system, you must set the bridge connection mode depending on the adapter that the bridge connects to:
  ◦ For 16-Gbps capable adapters, set the connection mode to ptp, even if it is operating at a lower speed.
  ◦ For 8-Gbps and 4-Gbps capable adapters, set the connection mode to loop.
Example
If using the CLI, issue the following command:
set FCConnMode port-number ptp-or-loop
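For example, for a bridge FC1 port that connects to an 8-Gbps capable adapter in a stretch configuration (a hypothetical case), you would enter:
set FCConnMode 1 loop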
c. If you are configuring an ATTO 7500N bridge and using the second port, repeat the previous
substeps for the FC2 port.
5. Save the bridge's configuration.
Example
If using the CLI, issue the following command:
SaveConfiguration Restart
You are prompted to restart the bridge.
6. Update the firmware on each bridge to the latest version by following the instructions—starting
with Step 4—on the FibreBridge Download page.
Cabling disk shelves to the bridges
You must use the correct FC-to-SAS bridges for cabling your disk shelves.
Choices
• Cabling a FibreBridge 7500N bridge on page 36
• Cabling a FibreBridge 6500N bridge on page 38
Cabling a FibreBridge 7500N bridge
After configuring the bridge, you can start cabling your new system. The FibreBridge 7500N bridge
uses mini-SAS connectors.
About this task
For disk shelves, you insert a SAS cable connector with the pull tab oriented down (on the underside
of the connector).
Steps
1. Daisy-chain the disk shelves in each stack.
a. For the first stack of disk shelves, cable IOM A square port of the first shelf to SAS port A on
FibreBridge A.
b. For the first stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
For information about daisy-chaining disk shelves, see the Installation and Service Guide for your
disk shelf model.
The following illustration shows a set of bridges cabled to a stack of three disk shelves:
(Illustration: a pair of ATTO 7500N bridges, FC_bridge_A_1a and FC_bridge_A_1b, each with M1, FC1, FC2, and SAS A through D ports. SAS port A of one bridge connects to the IOM A square port of the first shelf in the stack; SAS port A of the other bridge connects to the IOM B circle port of the last shelf.)
2. For additional shelf stacks, repeat the previous steps using the next available SAS port on the
FibreBridge bridges, using port B for a second stack, port C for a third stack, and port D for a
fourth stack.
Note: Enter the following command to enable the SAS port you need to use.
SASPortEnable <port_letter>
Enter the following command to save the configuration.
SaveConfiguration
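For example, to enable SAS port B for a second stack and save the change, you would enter the commands shown in the note above as follows:
SASPortEnable B
SaveConfiguration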
The following illustration shows four stacks connected to a pair of FibreBridge 7500N bridges.
(Illustration: a pair of ATTO 7500N bridges, FC_bridge_A_1a and FC_bridge_A_1b, with SAS ports A through D. Each SAS port is cabled to the IOM A or IOM B ports of one of four stacks of SAS shelves.)
Cabling a FibreBridge 6500N bridge
After configuring the bridge, you can start cabling your new system. The FibreBridge 6500N bridge
uses QSFP connectors.
About this task
Do not force a connector into a port. Wait at least 10 seconds before connecting the port. The SAS
cable connectors are keyed; when oriented correctly into a SAS port, the connector clicks into place
and the disk shelf SAS port LNK LED illuminates green. For disk shelves, you insert a SAS cable
connector with the pull tab oriented down (on the underside of the connector).
Steps
1. Daisy-chain the disk shelves in each stack.
For information about daisy-chaining disk shelves, see the Installation and Service Guide for your
disk shelf model.
2. For each stack of disk shelves, cable IOM A square port of the first shelf to SAS port A on
FibreBridge A.
3. For each stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
Each bridge has one path to its stack of disk shelves; bridge A connects to the A-side of the stack
through the first shelf, and bridge B connects to the B-side of the stack through the last shelf.
Example
Note: The bridge SAS port B is disabled.
The following illustration shows a set of bridges cabled to a stack of three disk shelves:
(Illustration: a pair of FibreBridge 6500N bridges, each with M1, FC1, FC2, and a single SAS A port. SAS port A of one bridge connects to IOM A of the first shelf in the stack; SAS port A of the other bridge connects to IOM B of the last shelf.)
Cabling the FC-to-SAS bridges to the controller module in a two-node direct-attached
configuration
You must cable the bridges to the controller module.
Steps
1. Verify that each bridge can detect all disk drives and disk shelves it is connected to.
If you are using the ATTO ExpressNAV GUI, then:
a. In a supported web browser, enter the IP address of a bridge in the browser box.
   You are brought to the ATTO FibreBridge homepage of the bridge for which you entered the IP address, which has a link.
b. Click the link, and then enter your user name and the password that you designated when you configured the bridge.
   The ATTO FibreBridge status page of the bridge appears with a menu to the left.
c. Click Advanced in the menu.
d. Enter the following command, and then click Submit:
   sastargets

If you are using a serial port connection, then enter the following command:
sastargets
Example
The output shows the devices (disks and disk shelves) that the bridge is connected to. Output lines
are sequentially numbered so that you can quickly count the devices. For example, the following
output shows that 10 disks are connected:
Tgt VendorID ProductID        Type SerialNumber
 0  NETAPP   X410_S15K6288A15 DISK 3QP1CLE300009940UHJV
 1  NETAPP   X410_S15K6288A15 DISK 3QP1ELF600009940V1BV
 2  NETAPP   X410_S15K6288A15 DISK 3QP1G3EW00009940U2M0
 3  NETAPP   X410_S15K6288A15 DISK 3QP1EWMP00009940U1X5
 4  NETAPP   X410_S15K6288A15 DISK 3QP1FZLE00009940G8YU
 5  NETAPP   X410_S15K6288A15 DISK 3QP1FZLF00009940TZKZ
 6  NETAPP   X410_S15K6288A15 DISK 3QP1CEB400009939MGXL
 7  NETAPP   X410_S15K6288A15 DISK 3QP1G7A900009939FNTT
 8  NETAPP   X410_S15K6288A15 DISK 3QP1FY0T00009940G8PA
 9  NETAPP   X410_S15K6288A15 DISK 3QP1FXW600009940VERQ
Note: If the text response truncated appears at the beginning of the output, you can use
Telnet to connect to the bridge and enter the same command to see all of the output.
2. Verify that the command output shows that the bridge is connected to all of the disks and disk
shelves in the stack that it is supposed to be connected to.
If the output is correct, repeat step 1 on page 39 for each remaining bridge.
If the output is not correct:
a. Check for loose SAS cables or correct the SAS cabling by repeating the steps in Cabling disk shelves to the bridges on page 36.
b. Repeat step 1 on page 39.
3. Cable each bridge to the controllers:
a. Cable FC port 1 of the bridge to a 16 Gb FC port or an 8 Gb FC port on the controller in
cluster_A.
b. Cable FC port 2 of the bridge to the same speed FC port on the controller at cluster_A.
Note: Follow the cabling convention only when using a single FC adapter. If you have more
than two FC adapters, using an FC port on the same FC adapter for both bridge FC connections
might lead to a single point of failure.
4. Repeat the previous step on the other bridges until all bridges have been cabled.
Configuring the MetroCluster software in Data
ONTAP
You must set up each node in the MetroCluster configuration in Data ONTAP, including the node-level
configurations and the configuration of the nodes into two sites. Finally, you implement the
MetroCluster relationship between the two sites. The steps for systems with native disk shelves are
slightly different than those for systems with array LUNs.
For new systems configured in the factory, you do not need to configure the Data ONTAP software
except to change the preconfigured IP addresses. If your system is new, you can proceed with
verifying the configuration.
Steps
1. Gathering required information and reviewing the workflow on page 42
2. Similarities and differences between regular cluster and MetroCluster configurations on page 46
3. Setting a previously used controller module to system defaults in Maintenance mode on page 46
4. Configuring FC-VI ports on a QLE2564 quad-port card on page 47
5. Verifying disk assignment in Maintenance mode in a two-node configuration on page 49
6. Verifying the HA state of components on page 50
7. Setting up the clusters in a two-node MetroCluster configuration on page 50
8. Configuring the clusters into a MetroCluster configuration on page 52
9. Checking for MetroCluster configuration errors with Config Advisor on page 68
10. Verifying switchover, healing, and switchback on page 69
11. Installing the MetroCluster Tiebreaker software on page 69
12. Protecting configuration backup files on page 69
Gathering required information and reviewing the workflow
You need to gather the required IP addresses for the controller and review the software installation
workflow before you begin the configuration process.
IP network information worksheet for site A
You must obtain IP addresses and other network information for the first MetroCluster site (site A)
from your network administrator before you configure the system.
Site A cluster creation information
When you first create the cluster, you need the following information:
Type of information (record your values):
• Cluster name (example used in this guide: site_A)
• DNS domain
• DNS name servers
• Location
• Administrator password
Site A node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway:
• Node 1 (example used in this guide: controller_A_1): port, IP address, network mask, default gateway
Site A LIFs and ports for cluster peering
For each node in the cluster, you need the IP addresses of two intercluster LIFs, including a network
mask and a default gateway. The intercluster LIFs are used to peer the clusters.
• Node 1 IC LIF 1: port, IP address of intercluster LIF, network mask, default gateway
• Node 1 IC LIF 2: port, IP address of intercluster LIF, network mask, default gateway
Site A time server information
You must synchronize the time, which requires one or more NTP time servers:
• NTP server 1: host name, IP address, network mask, default gateway
• NTP server 2: host name, IP address, network mask, default gateway
Site A AutoSupport information
You must configure AutoSupport on each node, which requires the following information:
Type of information (record your values):
• From email address
• Mail hosts (IP addresses or names)
• Transport protocol (HTTP, HTTPS, or SMTP)
• Proxy server
• Recipient email addresses or distribution lists:
  ◦ Full-length messages
  ◦ Concise messages
  ◦ Partners
Site A service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:
• Node 1: IP address, network mask, default gateway
IP network information worksheet for site B
You must obtain IP addresses and other network information for the second MetroCluster site (site B)
from your network administrator before you configure the system.
Site B switch information (if not using two-node switchless cluster configuration or
two-node MetroCluster configuration)
When you cable the system, you need a host name and management IP address for each cluster
switch. This information is not required if you are using a two-node switchless cluster or a two-node
MetroCluster configuration (one node at each site).
For each cluster switch, record the host name, IP address, network mask, and default gateway:
• Interconnect 1
• Interconnect 2
• Management 1
• Management 2
Site B cluster creation information
When you first create the cluster, you need the following information:
Type of information (record your values):
• Cluster name (example used in this guide: site_B)
• DNS domain
• DNS name servers
• Location
• Administrator password
Site B node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway:
• Node 1 (example used in this guide: controller_B_1): port, IP address, network mask, default gateway
• Node 2 (example used in this guide: controller_B_2): port, IP address, network mask, default gateway; not required if using a two-node MetroCluster configuration (one node at each site)
Site B LIFs and ports for cluster peering
For each node in the cluster, you need the IP addresses of two intercluster LIFs including a network
mask and a default gateway. The intercluster LIFs are used to peer the clusters.
• Node 1 IC LIF 1: port, IP address of intercluster LIF, network mask, default gateway
• Node 1 IC LIF 2: port, IP address of intercluster LIF, network mask, default gateway
• Node 2 IC LIF 1: port, IP address of intercluster LIF, network mask, default gateway; not required if using a two-node MetroCluster configuration (one node at each site)
• Node 2 IC LIF 2: port, IP address of intercluster LIF, network mask, default gateway; not required if using a two-node MetroCluster configuration (one node at each site)
Site B time server information
You must synchronize the time, which requires one or more NTP time servers:
• NTP server 1: host name, IP address, network mask, default gateway
• NTP server 2: host name, IP address, network mask, default gateway
Site B AutoSupport information
You must configure AutoSupport on each node, which requires the following information:
Type of information (record your values):
• From email address
• Mail hosts (IP addresses or names)
• Transport protocol (HTTP, HTTPS, or SMTP)
• Proxy server
• Recipient email addresses or distribution lists:
  ◦ Full-length messages
  ◦ Concise messages
  ◦ Partners
Site B service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:
• Node 1 (controller_B_1): IP address, network mask, default gateway
• Node 2 (controller_B_2): IP address, network mask, default gateway; not required if using a two-node MetroCluster configuration (one node at each site)
Similarities and differences between regular cluster and
MetroCluster configurations
The configuration of the nodes in each cluster in a MetroCluster configuration is similar to that of
nodes in a standard cluster.
The MetroCluster configuration is built on two standard clusters. Physically, the configuration must
be symmetrical, with each node having the same hardware configuration, and all the MetroCluster
components must be cabled and configured. But the basic software configuration for nodes in a
MetroCluster configuration is the same as that for nodes in a standard cluster.
Configuration step                                          Standard cluster configuration   MetroCluster configuration
Configure management, cluster, and data LIFs on each node   Same in both types of clusters
Configure root aggregate                                    Same in both types of clusters
Configure nodes in the cluster as HA pairs                  Same in both types of clusters
Set up cluster on one node in the cluster                   Same in both types of clusters
Join the other node to the cluster                          Same in both types of clusters
Create a mirrored root aggregate                            Optional                         Required
Create a mirrored data aggregate on each node               Optional                         Required
Peer the clusters                                           Optional                         Required
Enable the MetroCluster configuration                       Does not apply                   Required
Setting a previously used controller module to system
defaults in Maintenance mode
If your controller modules have been used previously, you must reset them to ensure a successful
MetroCluster configuration.
About this task
Important: This task is required only on controller modules that have been previously configured;
you do not need to perform this task if you received the controller modules as part of a new
MetroCluster configuration.
Steps
1. In Maintenance mode, return the environmental variables to their default setting:
set-defaults
2. Configure the settings for any HBA adapters in the system:
If you have this type of
adapter and desired mode...
Use this command for configuration...
CNA FC
ucadmin modify -mode fc -type initator
adapter_name
CNA Ethernet
ucadmin modify -mode cna adapter_name
FC target
fcadmin config -t target adapter_name
FC initiator
fcadmin config -t initiator adapter_name
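For example, to set a CNA adapter to FC initiator mode and an FC adapter to initiator mode (the adapter names 0e and 0a are placeholders; use the adapter names in your own system), you would enter:
ucadmin modify -mode fc -type initiator 0e
fcadmin config -t initiator 0a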
3. Exit Maintenance mode:
halt
After you issue the command, wait until the system stops at the LOADER prompt.
4. Boot the node back into Maintenance mode to enable the configuration changes to take effect.
5. Verify the values of the variables:
If you have this type of
adapter...
Use this command...
CNA
ucadmin show
FC
fcadmin show
6. Clear the system configuration:
wipeconfig
Configuring FC-VI ports on a QLE2564 quad-port card
If you are using the QLE2564 quad-port card, you can enter Maintenance mode to configure the 1a
and 1b ports for FC-VI and initiator usage. This is not required on MetroCluster systems received
from the factory, in which the ports are set appropriately for your configuration.
About this task
This task must be performed in Maintenance mode.
Steps
1. Disable the ports:
storage disable adapter 1a
storage disable adapter 1b
Example
*> storage disable adapter 1a
Jun 03 02:17:57 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1a.
Host adapter 1a disable succeeded
Jun 03 02:17:57 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1a is now offline.
*> storage disable adapter 1b
Jun 03 02:18:43 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1b.
Host adapter 1b disable succeeded
Jun 03 02:18:43 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1b is now offline.
*>
2. Verify that the ports are disabled:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending    Admin
Adapter  Mode     Type       Mode     Type       Status
-------  -------  ---------  -------  ---------  -------
1a       fc       initiator  -        -          offline
1b       fc       initiator  -        -          offline
1c       fc       initiator  -        -          online
1d       fc       initiator  -        -          online
3. Set the a and b ports to FC-VI mode:
ucadmin modify -adapter 1a -type fcvi
The command sets the mode on both ports in the port pair, 1a and 1b (even though only 1a is
specified in the command).
Example
*> ucadmin modify -t fcvi 1a
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has changed to fcvi on adapter 1a. Reboot the controller for the changes to take effect.
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has changed to fcvi on adapter 1b. Reboot the controller for the changes to take effect.
4. Confirm that the change is pending:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending    Admin
Adapter  Mode     Type       Mode     Type       Status
-------  -------  ---------  -------  ---------  -------
1a       fc       initiator  -        fcvi       offline
1b       fc       initiator  -        fcvi       offline
1c       fc       initiator  -        -          online
1d       fc       initiator  -        -          online
5. Shut down the controller, and then reboot into Maintenance mode.
6. Confirm the configuration change:
ucadmin show local
Example
                         Current  Current    Pending  Pending
Node            Adapter  Mode     Type       Mode     Type       Status
--------------  -------  -------  ---------  -------  ---------  -------
controller_B_1  1a       fc       fcvi       -        -          online
controller_B_1  1b       fc       fcvi       -        -          online
controller_B_1  1c       fc       initiator  -        -          online
controller_B_1  1d       fc       initiator  -        -          online
6 entries were displayed.
Verifying disk assignment in Maintenance mode in a two-node configuration
Before fully booting the system to Data ONTAP, you can optionally boot to Maintenance mode and
verify the disk assignment on the nodes. The disks should be assigned to create a fully symmetric
active-active configuration, where each node and each pool have an equal number of disks assigned
to them.
About this task
New MetroCluster systems have disk assignment completed prior to shipment.
The following table shows example pool assignments for a MetroCluster configuration. Disks are
assigned to pools on a per-shelf basis.
Disk shelf (example name)     At site...   Belongs to...   And is assigned to that node's...
Disk shelf 1 (shelf_A_1_1)    Site A       Node A 1        Pool 0
Disk shelf 2 (shelf_A_1_3)    Site A       Node A 1        Pool 0
Disk shelf 3 (shelf_B_1_1)    Site A       Node B 1        Pool 1
Disk shelf 4 (shelf_B_1_3)    Site A       Node B 1        Pool 1
Disk shelf 9 (shelf_B_1_2)    Site B       Node B 1        Pool 0
Disk shelf 10 (shelf_B_1_4)   Site B       Node B 1        Pool 0
Disk shelf 11 (shelf_A_1_2)   Site B       Node A 1        Pool 1
Disk shelf 12 (shelf_A_1_4)   Site B       Node A 1        Pool 1
Steps
1. Confirm the shelf assignments:
disk show -v
2. If necessary, you can explicitly assign disks on the attached disk shelves to the appropriate pool
with the disk assign command (see the example after these steps).
Using wildcards in the command enables you to assign all the disks on a disk shelf with one
command.
3. Show the disk shelf IDs and bays for each disk:
storage show disk -x
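The following is a minimal sketch of the wildcard assignment mentioned in step 2; the disk name pattern and the pool are placeholders and depend on how your shelves are attached, so substitute the names reported by disk show -v:
*> disk assign 0b.10.* -p 0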
Verifying the HA state of components
In a stretch MetroCluster configuration that is not preconfigured at the factory, you must verify that
the HA state of the controller and chassis components is set to mcc-2n so that they boot up properly.
For systems received from the factory, this value is preconfigured and you do not need to verify it.
Before you begin
The system must be in Maintenance mode.
Steps
1. In Maintenance mode, display the HA state of the controller module and chassis:
ha-config show
The controller module and chassis should show the value mcc-2n.
2. If the displayed system state of the controller is not mcc-2n, set the HA state for the controller:
ha-config modify controller mcc-2n
3. If the displayed system state of the chassis is not mcc-2n, set the HA state for the chassis:
ha-config modify chassis mcc-2n
4. Boot the node to Data ONTAP:
boot_ontap
5. Repeat these steps on each node in the MetroCluster configuration.
Setting up the clusters in a two-node MetroCluster
configuration
In a two-node MetroCluster configuration, you must boot up the node, exit the Node Setup wizard,
and use the cluster setup wizard to configure the node into a single-node cluster.
Before you begin
You must not have configured the Service Processor.
About this task
This task is for two-node MetroCluster configurations using native NetApp storage.
New MetroCluster systems are preconfigured; you do not need to perform these steps. However, you
should configure AutoSupport.
This task must be performed on both clusters in the MetroCluster configuration.
For more general information about setting up Data ONTAP, see Clustered Data ONTAP 8.3
Software Setup Guide.
Steps
1. Power on the first node.
The node boots, and then the Node Setup wizard starts on the console, informing you that
AutoSupport will be enabled automatically.
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.
This system will send event messages and weekly reports to NetApp
Technical
Support. To disable this feature, enter "autosupport modify -support
disable"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system.
For
further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}:
2. Because you are using the CLI to set up the cluster, exit the Node Setup wizard:
exit
The Node Setup wizard would be used to configure the node's node management interface for use
with System Setup.
The Node Setup wizard exits, and a login prompt appears, warning that you have not completed
the setup tasks:
Exiting the node setup wizard. Any changes you made have been saved.
Warning: You have exited the node setup wizard before completing all
of the tasks. The node is not configured. You can complete node setup
by typing
"node setup" in the command line interface.
login:
3. Log in to the admin account by using the admin user name.
4. Start the Cluster Setup wizard:
cluster setup
::> cluster setup
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:
5. Create a new cluster:
create
6. Accept the system defaults by pressing Enter, or enter your own values by typing no and then
pressing Enter.
7. Follow the prompts to complete the Cluster Setup wizard, pressing Enter to accept the default
values or typing your own values and then pressing Enter.
The default values are determined automatically based on your platform and network
configuration.
8. After you complete the Cluster Setup wizard and it exits, verify that the cluster is active and the
first node is healthy:
cluster show
Example
The following example shows a cluster in which the first node (cluster1-01) is healthy and
eligible to participate:
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
cluster1-01           true    true
If it becomes necessary to change any of the settings you entered for the admin SVM or node
SVM, you can access the Cluster Setup wizard by using the cluster setup command.
Configuring the clusters into a MetroCluster configuration
You must mirror the root aggregates, create a mirrored data aggregate, and then issue the command to
implement the MetroCluster operations.
Peering the clusters
The clusters in the MetroCluster configuration must be in a peer relationship so that they can
communicate with each other and perform the data mirroring essential to MetroCluster disaster
recovery.
Before you begin
For systems received from the factory, the cluster peering is configured and you do not need to
manually peer the clusters.
Steps
1. Reviewing the preconfigured SVMs and LIFs on page 53
2. Manually peering the clusters on page 55
Related concepts
Considerations when using dedicated ports on page 11
Considerations when sharing data ports on page 11
Related references
Prerequisites for cluster peering on page 10
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Reviewing the preconfigured SVMs and LIFs
The MetroCluster nodes are preconfigured with SVMs and LIFs. You need to be aware of these
settings as you perform your site-specific configuration.
LIFs for Cluster Storage Virtual Machine (SVM)
On the 32xx platforms, the symbol # indicates that the slot location might vary depending on system
configuration:
• For 32xx systems with an IOXM, slot 2 is used.
• For 32xx systems with two controllers in the chassis, or a controller and a blank, slot 1 is used.

LIF                Network address/mask  Node         Port on 32xx  62xx  8020  8040, 8060, or 8080
clusterA-01_clus1  169.254.x.x/16        clusterA-01  e#a           e0c   e0a   e0a
clusterA-01_clus2  169.254.x.x/16        clusterA-01  e#b           e0e   e0b   e0b
clusterA-01_clus3  169.254.x.x/16        clusterA-01  n/a           n/a   n/a   e0c
clusterA-01_clus4  169.254.x.x/16        clusterA-01  n/a           n/a   n/a   e0d
clusterA-02_clus1  169.254.x.x/16        clusterA-02  e#a           e0c   e0a   e0a
clusterA-02_clus2  169.254.x.x/16        clusterA-02  e#b           e0e   e0b   e0b
clusterA-02_clus3  169.254.x.x/16        clusterA-02  n/a           n/a   n/a   e0c
clusterA-02_clus4  169.254.x.x/16        clusterA-02  n/a           n/a   n/a   e0d
LIFs for clusterA Storage Virtual Machine (SVM)
LIF                Network address/mask  Node         Port on 32xx  62xx  8020  8040, 8060, or 8080
clusterA-01-ic1    192.168.224.221/24    clusterA-01  e0a           e0a   e0e   e0i
clusterA-01-ic2    192.168.224.223/24    clusterA-01  e0b           e0b   e0f   e0j
clusterA-01_mgmt1  10.10.10.11/24        clusterA-01  e0M           e0M   e0M   e0M
clusterA-02-ic1    192.168.224.222/24    clusterA-02  e0a           e0a   e0e   e0i
clusterA-02-ic2    192.168.224.224/24    clusterA-02  e0b           e0b   e0f   e0j
clusterA-02_mgmt1  10.10.10.12/24        clusterA-02  e0M           e0M   e0M   e0M
cluster_mgmt       10.10.10.9/24         clusterA-01  e0a           e0a   e0e   e0i
LIFs for clusterB Storage Virtual Machine (SVM)
LIF                Network address/mask  Node         Port on 32xx  62xx  8020  8040, 8060, or 8080
clusterB-01-ic1    192.168.224.225/24    clusterB-01  e0a           e0a   e0e   e0i
clusterB-01-ic2    192.168.224.227/24    clusterB-01  e0b           e0b   e0f   e0j
clusterB-01_mgmt1  10.10.10.13/24        clusterB-01  e0M           e0M   e0M   e0M
clusterB-02-ic1    192.168.224.226/24    clusterB-02  e0a           e0a   e0e   e0i
clusterB-02-ic2    192.168.224.228/24    clusterB-02  e0b           e0b   e0f   e0j
clusterB-02_mgmt1  10.10.10.14/24        clusterB-02  e0M           e0M   e0M   e0M
cluster_mgmt       10.10.10.10/24        clusterB-01  e0a           e0a   e0e   e0i
Manually peering the clusters
To manually create the cluster peering relationship, you must decide whether to use dedicated LIFs,
configure the LIFs and then enable the relationship.
Before you begin
For systems received from the factory, the cluster peering is configured and you do not need to
manually peer the clusters.
Configuring intercluster LIFs
You must create intercluster LIFs on ports used for communication between the MetroCluster partner
clusters. You can use dedicated ports or ports that also have data traffic.
Choices
• Configuring intercluster LIFs to use dedicated intercluster ports on page 55
• Configuring intercluster LIFs to share data ports on page 58
Configuring intercluster LIFs to use dedicated intercluster ports
Configuring intercluster LIFs to use dedicated data ports allows greater bandwidth than using shared
data ports on your intercluster networks for cluster peer relationships.
About this task
Creating intercluster LIFs that use dedicated ports involves creating a failover group for the dedicated
ports and assigning LIFs to those ports. In this procedure, a two-node cluster exists in which each
node has two data ports that you have added, e0e and e0f. These ports are ones you will dedicate for
intercluster replication and currently are in the default IPspace. These ports will be grouped together
as targets for the intercluster LIFs you are configuring. You must configure intercluster LIFs on the
peer cluster before you can create cluster peer relationships. In your own environment, you would
replace the ports, networks, IP addresses, subnet masks, and subnets with those specific to your
environment.
Steps
1. List the ports in the cluster by using the network port show command.
Example
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
2. Determine whether any of the LIFs are using ports that are dedicated for replication by using the
network interface show command.
Example
Ports e0e and e0f do not appear in the following output; therefore, they do not have any LIFs
located on them:
cluster01::> network interface show -fields home-port,curr-port
vserver   lif                  home-port curr-port
--------- -------------------- --------- ---------
Cluster   cluster01-01_clus1   e0a       e0a
Cluster   cluster01-01_clus2   e0b       e0b
Cluster   cluster01-02_clus1   e0a       e0a
Cluster   cluster01-02_clus2   e0b       e0b
cluster01 cluster_mgmt         e0c       e0c
cluster01 cluster01-01_mgmt1   e0c       e0c
cluster01 cluster01-02_mgmt1   e0c       e0c
3. If a LIF is using a port that you want dedicated to intercluster connectivity, migrate the LIF to a
different port.
a. Migrate the LIF to another port by using the network interface migrate command.
Example
The following example assumes that the data LIF named cluster01_data01 uses port e0e and
you want only an intercluster LIF to use that port:
cluster01::> network interface migrate -vserver cluster01
-lif cluster01_data01 -dest-node cluster01-01 -dest-port e0d
b. You might need to modify the migrated LIF home port to reflect the new port where the LIF
should reside by using the network interface modify command:
Example
cluster01::> network interface modify -vserver cluster01
-lif cluster01_data01 -home-node cluster01-01 -home-port e0d
4. Group the ports that you will use for the intercluster LIFs by using the network interface
failover-groups create command.
Example
cluster01::> network interface failover-groups create -vserver cluster01
-failover-group intercluster01 -targets cluster01-01:e0e,cluster01-01:e0f,
cluster01-02:e0e,cluster01-02:e0f
5. Display the failover group that you created by using the network interface failover-groups show command.
Example
cluster01::> network interface failover-groups show
                 Failover
Vserver          Group            Targets
---------------- ---------------- --------------------------------------------
Cluster          Cluster          cluster01-01:e0a, cluster01-01:e0b,
                                  cluster01-02:e0a, cluster01-02:e0b
cluster01        Default          cluster01-01:e0c, cluster01-01:e0d,
                                  cluster01-02:e0c, cluster01-02:e0d,
                                  cluster01-01:e0e, cluster01-01:e0f
                                  cluster01-02:e0e, cluster01-02:e0f
                 intercluster01   cluster01-01:e0e, cluster01-01:e0f
                                  cluster01-02:e0e, cluster01-02:e0f
6. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -role
intercluster -home-node cluster01-01 -home-port e0e
-address 192.168.1.201 -netmask 255.255.255.0 -failover-group intercluster01
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -role
intercluster -home-node cluster01-02 -home-port e0e
-address 192.168.1.202 -netmask 255.255.255.0 -failover-group intercluster01
7. Verify that the intercluster LIFs were created properly by using the network interface show
command.
Example
cluster01::> network interface show
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
Cluster
            cluster01-01_clus_1
                         up/up      192.168.0.xxx/24   cluster01-01  e0a     true
            cluster01-01_clus_2
                         up/up      192.168.0.xxx/24   cluster01-01  e0b     true
            cluster01-02_clus_1
                         up/up      192.168.0.xxx/24   cluster01-01  e0a     true
            cluster01-02_clus_2
                         up/up      192.168.0.xxx/24   cluster01-01  e0b     true
cluster01
            cluster_mgmt up/up      192.168.0.xxx/24   cluster01-01  e0c     true
            cluster01_icl01
                         up/up      192.168.1.201/24   cluster01-01  e0e     true
            cluster01_icl02
                         up/up      192.168.1.202/24   cluster01-02  e0e     true
            cluster01-01_mgmt1
                         up/up      192.168.0.xxx/24   cluster01-01  e0c     true
            cluster01-02_mgmt1
                         up/up      192.168.0.xxx/24   cluster01-02  e0c     true
8. Verify that the intercluster LIFs are configured for redundancy by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0e home port on each node. If the e0e port fails, the
LIF can fail over to the e0f port.
cluster01::> network interface show -role intercluster -failover
         Logical            Home                  Failover        Failover
Vserver  Interface          Node:Port             Policy          Group
-------- ------------------ --------------------- --------------- --------------
cluster01-01
         cluster01-01_icl01 cluster01-01:e0e      local-only      intercluster01
                            Failover Targets: cluster01-01:e0e,
                                              cluster01-01:e0f
         cluster01-01_icl02 cluster01-02:e0e      local-only      intercluster01
                            Failover Targets: cluster01-02:e0e,
                                              cluster01-02:e0f
9. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
10. If communication between intercluster LIFs in different clusters requires routing, create an
intercluster route by using the network route create command.
The gateway of the new route should be on the same subnet as the intercluster LIF.
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
cluster01::> network route create -vserver cluster01
-destination 0.0.0.0/0 -gateway 192.168.1.1 -metric 40
11. Verify that you created the routes correctly by using the network route show command.
Example
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
          0.0.0.0/0       192.168.1.1     40
12. Repeat these steps to configure intercluster networking in the peer cluster.
13. Verify that the ports have access to the proper subnets, VLANs, and so on.
Dedicating ports for replication in one cluster does not require dedicating ports in all clusters; one
cluster might use dedicated ports, while the other cluster shares data ports for intercluster
replication.
Related concepts
Considerations when using dedicated ports on page 11
Configuring intercluster LIFs to share data ports
Configuring intercluster LIFs to share data ports enables you to use existing data ports to create
intercluster networks for cluster peer relationships. Sharing data ports reduces the number of ports
you might need for intercluster networking.
About this task
Creating intercluster LIFs that share data ports involves assigning LIFs to existing data ports. In this
procedure, a two-node cluster exists in which each node has two data ports, e0c and e0d, and these
data ports are in the default IPspace. These are the two data ports that are shared for intercluster
replication. You must configure intercluster LIFs on the peer cluster before you can create cluster
peer relationships. In your own environment, you replace the ports, networks, IP addresses, subnet
masks, and subnets with those specific to your environment.
Steps
1. List the ports in the cluster by using the network port show command:
Example
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
2. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -role
intercluster
-home-node cluster01-01 -home-port e0c -address 192.168.1.201 -netmask 255.255.255.0
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -role
intercluster
-home-node cluster01-02 -home-port e0c -address 192.168.1.202 -netmask 255.255.255.0
3. Verify that the intercluster LIFs were created properly by using the network interface show
command with the -role intercluster parameter:
Example
cluster01::> network interface show -role intercluster
            Logical          Status     Network            Current       Current Is
Vserver     Interface        Admin/Oper Address/Mask       Node          Port    Home
----------- ---------------- ---------- ------------------ ------------- ------- ----
cluster01
            cluster01_icl01  up/up      192.168.1.201/24   cluster01-01  e0c     true
            cluster01_icl02  up/up      192.168.1.202/24   cluster01-02  e0c     true
4. Verify that the intercluster LIFs are configured to be redundant by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0c port on each node. If the e0c port fails, the LIF can
fail over to the e0d port.
cluster01::> network interface show -role intercluster -failover
         Logical          Home                  Failover        Failover
Vserver  Interface        Node:Port             Policy          Group
-------- ---------------- --------------------- --------------- ----------------
cluster01
         cluster01_icl01  cluster01-01:e0c      local-only      192.168.1.201/24
                          Failover Targets: cluster01-01:e0c,
                                            cluster01-01:e0d
         cluster01_icl02  cluster01-02:e0c      local-only      192.168.1.201/24
                          Failover Targets: cluster01-02:e0c,
                                            cluster01-02:e0d
5. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
6. If communication between intercluster LIFs in different clusters requires routing, create an
intercluster route by using the network route create command.
The gateway of the new route should be on the same subnet as the intercluster LIF.
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
cluster01::> network route create -vserver cluster01
-destination 0.0.0.0/0 -gateway 192.168.1.1 -metric 40
7. Verify that you created the routes correctly by using the network route show command.
Example
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
          0.0.0.0/0       192.168.1.1     40
8. Repeat these steps on the cluster to which you want to connect.
Related concepts
Considerations when sharing data ports on page 11
Creating the cluster peer relationship
You create the cluster peer relationship using a set of intercluster logical interfaces to make
information about one cluster available to the other cluster for use in cluster peering applications.
Before you begin
• Intercluster LIFs should be created in the IPspaces of both clusters you want to peer.
• You should ensure that the intercluster LIFs of the clusters can route to each other.
• If there are different administrators for each cluster, the passphrase used to authenticate the cluster peer relationship should be agreed upon.
About this task
If you created intercluster LIFs in a nondefault IPspace, you need to designate the IPspace when you
create the cluster peer.
Steps
1. Create the cluster peer relationship on each cluster by using the cluster peer create
command.
The passphrase that you use is not displayed as you type it.
If you created a nondefault IPspace to designate intercluster connectivity, you use the ipspace
parameter to select that IPspace.
Example
In the following example, cluster01 is peered with a remote cluster named cluster02. Cluster01 is
a two-node cluster that has one intercluster LIF per node. The IP addresses of the intercluster
LIFs created in cluster01 are 192.168.2.201 and 192.168.2.202. Similarly, cluster02 is a two-node
cluster that has one intercluster LIF per node. The IP addresses of the intercluster LIFs created in
cluster02 are 192.168.2.203 and 192.168.2.204. These IP addresses are used to create the cluster
peer relationship.
cluster01::> cluster peer create -peer-addrs
192.168.2.203,192.168.2.204
Please type the passphrase:
Please type the passphrase again:
cluster02::> cluster peer create -peer-addrs
192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
If DNS is configured to resolve host names for the intercluster IP addresses, you can use host
names in the -peer-addrs option. It is not likely that intercluster IP addresses frequently
change; however, using host names allows intercluster IP addresses to change without having to
modify the cluster peer relationship.
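For example, a hypothetical invocation that uses DNS names instead of IP addresses (the host names shown are placeholders for your own intercluster LIF entries) might look like this:
cluster01::> cluster peer create -peer-addrs cluster02-ic1.example.com,cluster02-ic2.example.com
Please type the passphrase:
Please type the passphrase again: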
Example
In the following example, an IPspace called IP01A was created on cluster01 for intercluster
connectivity. The IP addresses used in the previous example are used in this example to create the
cluster peer relationship.
cluster01::> cluster peer create -peer-addrs
192.168.2.203,192.168.2.204
-ipspace IP01A
Please type the passphrase:
Please type the passphrase again:
cluster02::> cluster peer create -peer-addrs
192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
2. Display the cluster peer relationship by using the cluster peer show command with the -instance parameter.
Displaying the cluster peer relationship verifies that the relationship was established successfully.
Example
cluster01::> cluster peer show -instance
Peer Cluster Name: cluster02
Remote Intercluster Addresses: 192.168.2.203,192.168.2.204
Availability: Available
Remote Cluster Name: cluster02
Active IP Addresses: 192.168.2.203,192.168.2.204
Cluster Serial Number: 1-80-000013
3. Preview the health of the nodes in the peer cluster by using the cluster peer health show
command.
Previewing the health checks the connectivity and status of the nodes on the peer cluster.
Example
cluster01::> cluster peer health show
Node       cluster-Name                Node-Name
           Ping-Status                 RDB-Health Cluster-Health  Avail…
---------- --------------------------- ---------- --------------- --------
cluster01-01
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true
                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true
cluster01-02
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true
                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true
Mirroring the root aggregates
You must mirror the root aggregates to ensure data protection.
Steps
1. Mirror the root aggregate:
storage aggregate mirror aggr_ID
Example
The following command mirrors the root aggregate for controller_A_1:
controller_A_1::> storage aggregate mirror aggr0_controller_A_1
This creates an aggregate with a local plex located at the local MetroCluster site and a remote
plex located at the remote MetroCluster site.
2. Repeat the previous steps for each node in the MetroCluster configuration.
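To confirm that a root aggregate is now mirrored, you can display it with the storage aggregate show command; a minimal sketch, using the aggregate name from the example above:
controller_A_1::> storage aggregate show -aggregate aggr0_controller_A_1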
Creating a mirrored data aggregate on each node
You must create a mirrored data aggregate on each node in the DR group.
Before you begin
• You should know what drives or array LUNs will be used in the new aggregate.
• If you have multiple drive types in your system (heterogeneous storage), you should understand how you can ensure that the correct drive type is selected.
About this task
• Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.
• Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.
• The Clustered Data ONTAP Data Protection Guide contains more information about mirroring aggregates.
Steps
1. Display a list of available spares:
storage disk show -spare -owner node_name
2. Create the aggregate by using the storage aggregate create -mirror true command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node,
use the -node parameter or specify drives that are owned by that node.
You can specify the following options:
• Aggregate's home node (that is, the node that owns the aggregate in normal operation)
• List of specific drives or array LUNs that are to be added to the aggregate
• Number of drives to include
• Checksum style to use for the aggregate
• Type of drives to use
• Size of drives to use
• Drive speed to use
• RAID type for RAID groups on the aggregate
• Maximum number of drives or array LUNs that can be included in a RAID group
• Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
Example
The following command creates a mirrored aggregate with 10 disks:
controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1 -mirror true
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
3. Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
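For example, for the aggregate created above, you would enter:
controller_A_1::> storage aggregate show-status -aggregate aggr1_controller_A_1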
Implementing the MetroCluster configuration
You must run the metrocluster configure command to start data protection in the MetroCluster
configuration.
Before you begin
• There must be at least two nonroot mirrored data aggregates on each cluster, and all aggregates must be mirrored.
  You can verify this with the storage aggregate show command.
• The ha-config state of the controllers and chassis must be mcc.
  This state is preconfigured on systems shipped from the factory.
About this task
You issue the metrocluster configure command once, on any of the nodes, to enable the
MetroCluster configuration. You do not need to issue the command on each of the sites or nodes, and it
does not matter which node or site you choose to issue the command on.
The metrocluster configure command automatically pairs the two nodes with lowest system
IDs in each of the two clusters as DR partners. In a four-node MetroCluster, there are two DR partner
pairs. The second DR pair is created from the two nodes with higher system IDs.
Steps
1. Enter the metrocluster configure command in the following format, depending on whether your MetroCluster configuration will have multiple data aggregates or a single data aggregate.

   If your MetroCluster configuration will have multiple data aggregates (this is the best practice), then from any node's prompt, perform the MetroCluster configuration operation:
   metrocluster configure node-name

   If your MetroCluster configuration will have a single data aggregate, then:
   a. From any node's prompt, change to the advanced privilege level:
      set -privilege advanced
      You need to respond with y when prompted to continue into advanced mode and see the advanced mode prompt (*>).
   b. Perform the MetroCluster configuration operation with the -allow-with-one-aggregate true parameter:
      metrocluster configure -allow-with-one-aggregate true node-name
   c. Return to the admin privilege level:
      set -privilege admin
Example
The following command enables MetroCluster configuration on all nodes in the DR group that
contains controller_A_1:
controller_A_1::*> metrocluster configure -node-name controller_A_1
[Job 121] Job succeeded: Configure is successful.
2. Check the networking status on site A:
network port show
Example
cluster_A::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
controller_A_1
       e0a       Cluster      Cluster          up     9000  auto/1000
       e0b       Cluster      Cluster          up     9000  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
       e0g       Default      Default          up     1500  auto/1000
controller_A_2
       e0a       Cluster      Cluster          up     9000  auto/1000
       e0b       Cluster      Cluster          up     9000  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
       e0g       Default      Default          up     1500  auto/1000
14 entries were displayed.
3. Confirm the MetroCluster configuration from both sites:
a. Confirm the configuration from site A:
metrocluster show
Example
cluster_A::> metrocluster show
Cluster                   Configuration State    Mode
------------------------- ---------------------- -----------
Local: cluster_A          configured             normal
Remote: cluster_B         configured             normal
b. Confirm the configuration from site B:
metrocluster show
Example
cluster_B::> metrocluster show
Cluster                   Configuration State    Mode
------------------------- ---------------------- -----------
Local: cluster_B          configured             normal
Remote: cluster_A         configured             normal
Configuring the MetroCluster FC switches for health monitoring
In a fabric-attached MetroCluster configuration, you must perform some special configuration steps
to monitor the FC switches.
Steps
1. Issue the following command on each MetroCluster node:
storage switch add -switch-ipaddress ipaddress
This command must be repeated on all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with IP address of 10.10.10.10:
controller_A_1::> storage switch add -switch-ipaddress 10.10.10.10
2. Verify that all switches are properly configured:
storage switch show
Because of the 15-minute polling interval, it might take up to 15 minutes for all of the data to be displayed.
Example
The following example shows the command used to verify that the MetroCluster FC switches are
configured:
controller_A_1::> storage switch show
Fabric           Switch Name     Vendor  Model       Switch WWN       Status
---------------- --------------- ------- ----------- ---------------- ------
1000000533a9e7a6 brcd6505-fcs40  Brocade Brocade6505 1000000533a9e7a6 OK
1000000533a9e7a6 brcd6505-fcs42  Brocade Brocade6505 1000000533d3660a OK
1000000533ed94d1 brcd6510-fcs44  Brocade Brocade6510 1000000533eda031 OK
1000000533ed94d1 brcd6510-fcs45  Brocade Brocade6510 1000000533ed94d1 OK
4 entries were displayed.

controller_A_1::>
If the switch's worldwide name (WWN) is shown, the Data ONTAP health monitor is able to
contact and monitor the FC switch.
Related information
Clustered Data ONTAP 8.3 System Administration Guide
Checking the MetroCluster configuration
You can check that the components and relationships in the MetroCluster configuration are working
correctly. You should perform a check after the initial configuration, after making any changes to the
MetroCluster configuration, and before a negotiated (planned) switchover or a switchback operation.
After you run the metrocluster check run command, you then display the results of the check
with various metrocluster check show commands.
About this task
If the metrocluster check run command is issued twice within a short time, on either or both
clusters, a conflict can occur and the command might not collect all data. Subsequent
metrocluster check show commands will not show the expected output.
Steps
1. Check the configuration:
metrocluster check run
Example
The command will run as a background job:
controller_A_1::> metrocluster check run
[Job 60] Job succeeded: Check is successful. Run "metrocluster check
show" command to view the results of this operation.
controller_A_1::> metrocluster check show
Last Checked On: 5/22/2015 12:23:28
Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
clusters            ok
5 entries were displayed.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy four-node MetroCluster configuration:
controller_A_1::> metrocluster check aggregate show
Last Checked On: 8/5/2014 00:42:58

Node                  Aggregate              Check                 Result
--------------------- ---------------------- --------------------- ---------
controller_A_1        aggr0_controller_A_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_A_2        aggr0_controller_A_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_1        aggr0_controller_B_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_2        aggr0_controller_B_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
16 entries were displayed.
The following example shows the metrocluster check cluster show output for a healthy
four-node MetroCluster configuration. It indicates that the clusters are ready to perform a
negotiated switchover if necessary.
Last Checked On: 5/22/2015 12:23:28

Cluster               Check                           Result
--------------------- ------------------------------- ---------
cluster_A
                      negotiated-switchover-ready     ok
                      switchback-ready                not-applicable
                      job-schedules                   ok
                      licenses                        ok
cluster_B
                      negotiated-switchover-ready     ok
                      switchback-ready                not-applicable
                      job-schedules                   ok
                      licenses                        ok
8 entries were displayed.
Related information
Clustered Data ONTAP 8.3 Physical Storage Management Guide
Clustered Data ONTAP 8.3 Network Management Guide
Clustered Data ONTAP 8.3 Data Protection Guide
Checking for MetroCluster configuration errors with Config
Advisor
You can go to the NetApp Support Site and download the Config Advisor tool to check for common
configuration errors.
About this task
Config Advisor is a configuration validation and health check tool. You can deploy it at both secure
sites and non-secure sites for data collection and system analysis.
Note: Support for Config Advisor is limited, and available only online.
Steps
1. Go to NetApp Downloads: Config Advisor.
2. After running Config Advisor, review the tool's output and follow the recommendations in the
output to address any issues discovered.
Verifying switchover, healing, and switchback
You should verify the switchover, healing, and switchback operations of the MetroCluster
configuration.
Step
1. Use the procedures for negotiated switchover, healing, and switchback in the Clustered Data
ONTAP 8.3 MetroCluster Management and Disaster Recovery Guide.
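Example
The following command sequence is only a high-level sketch of the flow that you verify, assuming that cluster_A is the site performing the switchover; it is not a substitute for the full procedures in that guide:
cluster_A::> metrocluster switchover
cluster_A::> metrocluster heal -phase aggregates
cluster_A::> metrocluster heal -phase root-aggregates
cluster_A::> metrocluster switchback
cluster_A::> metrocluster show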
Installing the MetroCluster Tiebreaker software
You can download and install Tiebreaker software to monitor the two clusters and the connectivity
status between them from a third site. Doing so enables each partner in a cluster to distinguish an ISL
failure (when inter-site links are down) from a site failure.
Before you begin
You must have a Linux host available that has network connectivity to both clusters in the
MetroCluster configuration.
Steps
1. Go to MetroCluster Tiebreaker Software Download page.
2. Follow the directions to download the Tiebreaker software and documentation.
Protecting configuration backup files
You can provide additional protection for the cluster configuration backup files by specifying a
remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in addition
to the default locations in the local cluster.
Step
1. Set the URL of the remote destination for the configuration backup files:
system configuration backup settings modify URL-of-destination
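Example
The following command sets a hypothetical FTP destination for the configuration backup files; the -destination parameter name and the URL shown are assumptions, so check the man page for the exact syntax on your release:
cluster_A::> system configuration backup settings modify -destination ftp://ftp.example.com/config_backups/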
Related information
Clustered Data ONTAP 8.3 System Administration Guide
Stretch MetroCluster configurations with array
LUNs
In a stretch MetroCluster configuration with array LUNs, you must connect the FC-VI ports across
controllers. In addition, you must use FC switches in the configuration because direct connectivity
between the controllers and the storage arrays is not supported.
You can also set up a stretch MetroCluster configuration with both disks and array LUNs. In such a
configuration, you must use either FC-to-SAS bridges or SAS optical cables to connect the
controllers to disks, and use FC switches to connect the controllers to array LUNs.
Example of a stretch MetroCluster configuration with array
LUNs
In a stretch MetroCluster configuration with array LUNs, you must cable the FC-VI ports for direct
connectivity between the controllers. In addition, you must cable each controller's HBA ports to
switch ports on the corresponding FC switches.
The following illustration shows the FC-VI ports cabled across controllers A and B in a stretch
MetroCluster configuration:
Except for connecting the FC-VI ports, the rest of the procedure for setting up a MetroCluster
configuration with array LUNs remains the same for both stretch and fabric-attached configurations.
Examples of two-node stretch MetroCluster configurations
with disks and array LUNs
For setting up a stretch MetroCluster configuration with native disks and array LUNs, you must use
either FC-to-SAS bridges or SAS optical cables to connect the Data ONTAP systems to the disk
shelves. You must use FC switches for connecting array LUNs to the Data ONTAP systems.
A minimum of eight HBA ports is required for a Data ONTAP system to connect to both native disks
and array LUNs.
In the following examples representing two-node stretch MetroCluster configurations with disks and
array LUNs, HBA ports 0a through 0d are used for connection with array LUNs, while ports 1a
through 1d are used for connections with native disks.
The following illustration shows a two-node stretch MetroCluster configuration in which the native
disks are connected to the Data ONTAP systems through SAS optical cables:
The following illustration shows a two-node stretch MetroCluster configuration in which the native
disks are connected to the Data ONTAP systems through FC-to-SAS bridges:
The following illustration shows a two-node stretch MetroCluster configuration with the array LUN
connections:
Note: If required, you can also use the same FC switches to connect both native disks and array
LUNs to the controllers in the MetroCluster configuration.
Using the OnCommand management tools for
further configuration and monitoring
The OnCommand management tools can be used for GUI management of the clusters and
monitoring of the configuration.
Each node has OnCommand System Manager pre-installed. To load System Manager, enter the
cluster management LIF address as the URL in a web browser that has connectivity to the node.
You can also use OnCommand Unified Manager and OnCommand Performance Manager to monitor
the MetroCluster configuration.
Related information
NetApp Documentation: OnCommand Unified Manager Core Package (current releases)
NetApp Documentation: OnCommand System Manager (current releases)
Synchronizing the system time using NTP
Each cluster needs its own Network Time Protocol (NTP) server to synchronize the time between the
nodes and their clients. You can use the Edit DateTime dialog box in System Manager to configure
the NTP server.
Before you begin
You must have downloaded and installed System Manager. System Manager is available at
mysupport.netapp.com.
About this task
•
You cannot modify the time zone settings for a failed node or the partner node after a takeover
occurs.
•
Each cluster in the MetroCluster configuration should have its own separate NTP server or servers
used by the nodes, FC switches and FC-to-SAS bridges at that MetroCluster site.
If you are using the MetroCluster Tiebreaker software, it should also have its own separate NTP
server.
Steps
1. From the home page, double-click the appropriate storage system.
2. Expand the Cluster hierarchy in the left navigation pane.
3. In the navigation pane, click Configuration > System Tools > DateTime.
4. Click Edit.
5. Select the time zone.
6. Specify the IP addresses of the time servers, and then click Add.
You must add an NTP server to the list of time servers. The domain controller can be an
authoritative server.
7. Click OK.
8. Verify the changes you made to the date and time settings in the Date and Time window.
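If you prefer the CLI, you can typically also add an NTP server with the cluster time-service ntp server create command; the IP address shown here is a placeholder:
cluster_A::> cluster time-service ntp server create -server 192.0.2.20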
Requirements and limitations when using Data
ONTAP in a MetroCluster configuration
When using Data ONTAP in a MetroCluster configuration, you should be aware of certain
requirements and limitations when configuring Data ONTAP features.
•
Both sites should be licensed for the same site-licensed features.
•
All nodes should be licensed for the same node-locked features.
•
Infinite Volumes are not supported in a MetroCluster configuration.
Cluster peering from the MetroCluster sites to a third cluster
Because peering configuration is not replicated, if you peer one of the clusters in the MetroCluster
configuration to a third cluster outside of that configuration, you must also configure the peering on
the partner MetroCluster cluster. This ensures peering can be maintained in the event of a switchover.
The non-MetroCluster cluster must be running Data ONTAP 8.3 or peering will be lost in the event
of a switchover.
Volume creation on a root aggregate
The system does not allow the creation of new volumes on the root aggregate (an aggregate with an
HA policy of CFO) of a node in a MetroCluster configuration.
Because of this restriction, root aggregates cannot be added to an SVM by using the vserver add-aggregates command.
Networking and LIF creation guidelines for MetroCluster
configurations
You should be aware of how LIFs are created and replicated within the MetroCluster configuration.
You must also know about the requirement for consistency so you can make proper decisions when
configuring your network.
IPspace configuration
IPspace names must match between the two sites.
IPspace objects must be manually replicated to the partner cluster. Any SVMs created and assigned
to an IPspace before the IPspace is replicated will not be replicated to the partner cluster.
IPv6 configuration
If IPv6 is configured on one site, it must be configured on the other site.
LIF creation
You can confirm the successful creation of a LIF in a MetroCluster configuration by running the
metrocluster check lif show command. If there are issues, you can use the metrocluster check
lif repair-placement command.
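As a sketch of this check-and-repair flow (the -vserver and -lif parameter names shown for repair-placement are assumptions, as are the SVM and LIF names; check the man page for the exact syntax):
cluster_A::> metrocluster check lif show
cluster_A::> metrocluster check lif repair-placement -vserver vs0 -lif lif1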
Duplicate LIFs
You should not create duplicate LIFs (multiple LIFs with the same IP address) within the same
IPspace.
Intercluster LIFs
Intercluster LIFs are limited to the default IPspace that is owned by the admin SVM.
Replication of LIFs to the partner cluster
When you create a LIF on a cluster in a MetroCluster configuration, that LIF is replicated on the
partner cluster. The system must meet the following conditions to place the replicated LIF on the
partner cluster:
1. DR partner availability
The system attempts to place the replicated LIF on the DR partner of the node on which it was
created.
2. Connectivity
•
For IP or iSCSI LIFs, the system places them on a reachable subnet.
•
For FCP LIFs, the system attempts to place them on a reachable FC fabric.
3. Port attributes
The system attempts to place the LIF on a port with the desired VLAN, adapter type, and speed
attributes.
An EMS message is displayed if the LIF replication fails.
You can also check the failure by using the metrocluster check lif show command. Failures
can be corrected by running the metrocluster check lif repair-placement command for
any LIF that fails to find a correct port. You should resolve any LIF failures as soon as possible to
ensure LIF operation in the event of a MetroCluster switchover operation.
Note: Even if the source Storage Virtual Machine (SVM) is down, LIF placement might proceed
normally if there is a LIF belonging to a different SVM on a port with the same IPspace and
network in the destination.
Placement of replicated LIFs when the DR partner node is down
When an iSCSI or FCP LIF is created on a node whose DR partner has been taken over, the replicated
LIF is placed on the DR auxiliary partner node. After a subsequent giveback, the LIFs are not
automatically moved to the DR partner. This could lead to LIFs being concentrated on a single node
in the partner cluster. In the event of a MetroCluster switchover operation, subsequent attempts to
map LUNs belonging to the SVM will fail.
You should run the metrocluster check lif show command after a takeover or giveback to ensure correct
LIF placement. If errors exist, you can run the metrocluster check lif repair-placement
command.
LIF placement errors
Starting in Data ONTAP 8.3.1, LIF placement errors displayed by the metrocluster check lif
show command will be retained after a switchover. If the network interface modify, network
interface rename, or network interface delete command is issued for a LIF with a
placement error, the error is removed and will not appear in the metrocluster check lif show
output.
Related information
Clustered Data ONTAP 8.3 Network Management Guide
Volume or FlexClone command VLDB errors
If a volume or FlexClone volume command (such as volume create or volume delete) fails and the
error message indicates that the failure is due to a VLDB error, you should manually retry the job.
If the retry fails with an error that indicates a duplicate volume name, there is a stale entry in the
internal volume database. Call customer support for assistance in removing the stale entry.
Removing the entry helps ensure that configuration inconsistencies do not develop between the two
MetroCluster clusters.
Output for storage disk show and storage shelf show
commands in a two-node MetroCluster configuration
In a two-node MetroCluster configuration, the is-local-attach field of the storage disk
show and storage shelf show commands shows all disks and storage shelves as local, regardless
of the node to which they are attached.
Output for the storage aggregate plex show command after
a MetroCluster switchover is indeterminate
When you run the storage aggregate plex show command after a MetroCluster switchover,
the status of plex0 of the switched-over root aggregate is indeterminate and is displayed as failed.
During this time, the switched-over root aggregate is not updated. The actual status of this plex can
only be determined after running the metrocluster heal -phase aggregates command.
Modifying volumes to set NVFAIL in case of switchover
You can modify a volume so that, in the event of a MetroCluster switchover, the NVFAIL flag is set on
the volume. The NVFAIL flag causes the volume to be fenced off from any modification. This is
required for volumes that need to be handled as if committed writes to the volume were lost after the
switchover.
Step
1. Enable MetroCluster to trigger NVFAIL on switchover by setting the -dr-force-nvfail
parameter of the volume modify command to on:
volume modify -vserver vserver-name -volume volume-name -dr-force-nvfail on
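Example
The following command enables the NVFAIL-on-switchover behavior for a hypothetical volume; the SVM and volume names are placeholders:
cluster_A::> volume modify -vserver vs0 -volume dbvol1 -dr-force-nvfail on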
Monitoring and protecting database validity by using
NVFAIL
The -nvfail parameter of the volume modify command enables Data ONTAP to detect
nonvolatile RAM (NVRAM) inconsistencies when the system is booting or after a switchover
operation. It also warns you and protects the system against data access and modification until the
volume can be manually recovered.
If Data ONTAP detects any problems, database or file system instances stop responding or shut
down. Data ONTAP then sends error messages to the console to alert you to check the state of the
database or file system. You can enable NVFAIL to warn database administrators of NVRAM
inconsistencies among clustered nodes that can compromise database validity.
After a system crash or switchover operation, NFS clients cannot access data from any of the nodes
until the NVFAIL state is cleared. CIFS clients are unaffected.
How NVFAIL protects database files
The NVFAIL state is set in two cases, either when Data ONTAP detects NVRAM errors when
booting up or when a MetroCluster switchover operation occurs. If no errors are detected at startup,
the file service is started normally. However, if NVRAM errors are detected or the force-fail
option was set and then there was a switchover, Data ONTAP stops database instances from
responding.
When you enable the NVFAIL option, one of the following processes takes place during bootup:
If Data ONTAP detects no NVRAM errors:
File service starts normally.
If Data ONTAP detects NVRAM errors:
•
Data ONTAP returns a stale file handle (ESTALE) error to NFS clients trying to access the
database, causing the application to stop responding, crash, or shut down. Data ONTAP then
sends an error message to the system console and log file.
•
When the application restarts, files are available to CIFS clients, even if you have not verified
that they are valid. For NFS clients, files remain inaccessible until you reset the
in-nvfailed-state option on the affected volume.
If Data ONTAP detects NVRAM errors on a volume that contains LUNs:
LUNs in that volume are brought offline. The in-nvfailed-state option on the volume must then
be cleared, and the NVFAIL attribute on the LUNs must be cleared by bringing each LUN in the
affected volume online. You can perform the steps to check the integrity of the LUNs and recover
each LUN from a Snapshot copy or backup as necessary. After all the LUNs in the volume are
recovered, the in-nvfailed-state option on the affected volume is cleared.
Commands for monitoring data loss events
If you enable the NVFAIL option, you receive notification when a system crash caused by NVRAM
inconsistencies or a MetroCluster switchover occurs.
By default, the NVFAIL parameter is not enabled.
If you want to create a new volume with NVFAIL enabled, use this command:
volume create -nvfail on
If you want to enable NVFAIL on an existing volume, use this command:
volume modify
Note: You set the -nvfail option to on to enable NVFAIL on the volume.
If you want to display whether NVFAIL is currently enabled for a specified volume, use this command:
volume show
Note: You set the -fields parameter to nvfail to display the NVFAIL attribute for a
specified volume.
See the man page for each command for more information.
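Example
The following commands illustrate these operations for a hypothetical SVM and volume; the names, aggregate, and size shown are placeholders:
cluster_A::> volume create -vserver vs0 -volume dbvol2 -aggregate aggr1_controller_A_1 -size 100g -nvfail on
cluster_A::> volume modify -vserver vs0 -volume dbvol1 -nvfail on
cluster_A::> volume show -vserver vs0 -fields nvfail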
Accessing volumes in NVFAIL state after a switchover
After a switchover, you must clear the NVFAIL state by resetting the -in-nvfailed-state
parameter of the volume modify command to remove the restriction that prevents clients from
accessing data.
Before you begin
The database or file system must not be running or trying to access the affected volume.
About this task
Setting the -in-nvfailed-state parameter requires advanced-level privilege.
Step
1. Recover the volume by using the volume modify command with the -in-nvfailed-state
parameter set to false.
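Example
The following commands show the recovery at the advanced privilege level for a hypothetical volume; the SVM and volume names are placeholders:
cluster_A::> set -privilege advanced
cluster_A::*> volume modify -vserver vs0 -volume dbvol1 -in-nvfailed-state false
cluster_A::*> set -privilege admin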
After you finish
For instructions about examining database file validity, see the documentation for your specific
database software.
If your database uses LUNs, review the steps to make the LUNs accessible to the host after an
NVRAM failure.
Recovering LUNs in NVFAIL states after switchover
After a switchover, the host no longer has access to data on the LUNs that are in NVFAIL states. You
must perform a number of actions before the database has access to the LUNs.
Before you begin
The database must not be running.
Steps
1. Clear the NVFAIL state on the affected volume that hosts the LUNs by resetting the -in-nvfailed-state parameter of the volume modify command.
2. Bring the affected LUNs online.
3. Examine the LUNs for any data inconsistencies and resolve them.
This might involve host-based recovery or recovery done on the storage controller using
SnapRestore.
4. Bring the database application online after recovering the LUNs.
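Example
The following commands sketch steps 1 and 2 for a hypothetical volume and LUN; the names are placeholders, and the lun online syntax is an assumption, so check the man page for the exact command on your release:
cluster_A::*> volume modify -vserver vs0 -volume dbvol1 -in-nvfailed-state false
cluster_A::*> lun online -vserver vs0 -path /vol/dbvol1/lun1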
Glossary of MetroCluster terms
aggregate
A grouping of physical storage resources (disks or array LUNs) that provides storage to
volumes associated with the aggregate. Aggregates provide the ability to control the RAID
configuration for all associated volumes.
data SVM
Formerly known as data Vserver. In clustered Data ONTAP, a Storage Virtual Machine
(SVM) that facilitates data access from the cluster; the hardware and storage resources of
the cluster are dynamically shared by data SVMs within a cluster.
admin SVM
Formerly known as admin Vserver. In clustered Data ONTAP, a Storage Virtual Machine
(SVM) that has overall administrative access to all objects in the cluster, including all
objects owned by other SVMs, but does not provide data access to clients or hosts.
inter-switch link (ISL)
A connection between two switches using the E-port.
destination
The storage to which source data is backed up, mirrored, or migrated.
disaster recovery (DR) group
The four nodes in a MetroCluster configuration that synchronously replicate each other's
configuration and data.
disaster recovery (DR) partner
A node's partner at the remote MetroCluster site. The node mirrors its DR partner's
NVRAM or NVMEM partition.
disaster recovery auxiliary (DR auxiliary) partner
The HA partner of a node's DR partner. The DR auxiliary partner mirrors a node's
NVRAM or NVMEM partition in the event of an HA takeover after a MetroCluster
switchover operation.
HA pair
•
In Data ONTAP 8.x, a pair of nodes whose controllers are configured to serve data for
each other if one of the two nodes stops functioning.
Depending on the system model, both controllers can be in a single chassis, or one
controller can be in one chassis and the other controller can be in a separate chassis.
•
In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an
active/active configuration.
HA partner
A node's partner within the local HA pair. The node mirrors its HA partner's NVRAM or
NVMEM cache.
high availability (HA)
In Data ONTAP 8.x, the recovery capability provided by a pair of nodes (storage systems),
called an HA pair, that are configured to serve data for each other if one of the two nodes
stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is
referred to as an active/active configuration.
healing
The two required MetroCluster operations that prepare the storage located at the DR site
for switchback. The first heal operation resynchronizes the mirrored plexes. The second
heal operation returns ownership of root aggregates to the DR nodes.
LIF (logical interface)
A logical network interface, representing a network access point to a node. LIFs currently
correspond to IP addresses, but could be implemented by any interconnect. A LIF is
generally bound to a physical network port; that is, an Ethernet port. LIFs can fail over to
other physical ports (potentially on other nodes) based on policies interpreted by the LIF
manager.
NVRAM
nonvolatile random-access memory.
NVRAM cache
Nonvolatile RAM in a storage system, used for logging incoming write data and NFS
requests. Improves system performance and prevents loss of data in case of a storage
system or power failure.
NVRAM mirror
A synchronously updated copy of the contents of the storage system NVRAM (nonvolatile
random access memory) kept on the partner storage system.
node
•
In Data ONTAP, one of the systems in a cluster or an HA pair.
To distinguish between the two nodes in an HA pair, one node is sometimes called the
local node and the other node is sometimes called the partner node or remote node.
•
In Protection Manager and Provisioning Manager, the set of storage containers
(storage systems, aggregates, volumes, or qtrees) that are assigned to a dataset and
designated either primary data (primary node), secondary data (secondary node), or
tertiary data (tertiary node).
A dataset node refers to any of the nodes configured for a dataset.
A backup node refers to either a secondary or tertiary node that is the destination of a
backup or mirror operation.
A disaster recovery node refers to the dataset node that is the destination of a failover
operation.
remote storage
The storage that is accessible to the local node, but is at the location of the remote node.
root volume
A special volume on each Data ONTAP system. The root volume contains system files
and configuration information, and can also contain data. It is required for the system to
be able to boot and to function properly. Core dump files, which are important for
troubleshooting, are written to the root volume if there is enough space.
switchback
The MetroCluster operation that restores service back to one of the MetroCluster sites.
switchover
The MetroCluster operation that transfers service from one MetroCluster site to the other.
•
A negotiated switchover is planned in advance and cleanly shuts down components of
the target MetroCluster site.
•
A forced switchover immediately transfers service; the shut down of the target site
might not be clean.
Copyright information
Copyright © 1994–2016 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Go Further, Faster, AltaVault, ASUP, AutoSupport, Campaign Express,
Cloud ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash
Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale,
FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity,
SecureShare, Simplicity, Simulate ONTAP, Snap Creator, SnapCenter, SnapCopy, SnapDrive,
SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL and
other names are trademarks or registered trademarks of NetApp, Inc., in the United States, and/or
other countries. All other brands or products are trademarks or registered trademarks of their
respective holders and should be treated as such. A current list of NetApp trademarks is available on
the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
How to send comments about documentation and
receive update notifications
You can help us to improve the quality of our documentation by sending us your feedback. You can
receive automatic notification when production-level (GA/FCS) documentation is initially released or
important changes are made to existing production-level documents.
If you have suggestions for improving this document, send us your comments by email to
doccomments@netapp.com. To help us direct your comments to the correct division, include in the
subject line the product name, version, and operating system.
If you want to be notified automatically when production-level documentation is released or
important changes are made to existing production-level documents, follow Twitter account
@NetAppDoc.
You can also contact us in the following ways:
•
NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.
•
Telephone: +1 (408) 822-6000
•
Fax: +1 (408) 822-4501
•
Support telephone: +1 (888) 463-8277
Index
7-Mode fabric MetroClusters
sharing FC switch fabrics 17
7-Mode MetroCluster configurations
differences from clustered MetroCluster
configurations 8
A
addresses
configuring NTP server, to synchronize system time
73
gathering for site_A 42
gathering required network information 43
aggregates
mirrored data, creating on each node of a
MetroCluster configuration 63
architecture
of two-node MetroCluster configurations 18, 25
array LUNs
example of stretch MetroCluster configurations with
70
array LUNs and disks
example two-node stretch MetroCluster
configurations 70
ATTO FibreBridge
See FC-to-SAS bridges
automatic switchover
supported MetroCluster configurations 9
autosupport configuration
for site_A 42
B
bridges
installing FC-to-SAS 31
bridges, FC-to-SAS
considerations for installing SAS shelves and 34
bridges, FibreBridge 6500N
cabling 38
C
cables, SAS optical
cabling in stretch MetroCluster systems with SAS
shelves 20
cabling
cabling a FibreBridge 7500N bridge 36
data ports 23, 31
FibreBridge 6500N bridges 38
management ports 23, 31
stretch MetroCluster components 20
cabling the controller, FC-to-SAS bridges
two-node direct-attached configuration 39
cabling the FC-to-SAS bridges to the controller
two-node direct-attached configuration 39
cascade configurations
networking requirements for cluster peering 10
chassis
verifying HA state in a stretch MetroCluster
configuration 50
checking
MetroCluster configuration operations 66
checklists, hardware setup
for factory configured clusters 13
checklists, software setup
for factory-configured MetroCluster configurations
14
cluster configurations
hardware and software 46
regular and MetroCluster 46
cluster peer relationships
requirements for 10
cluster peering
introduction to MetroCluster configuration 52
port usage for site_A 42
to an outside cluster 75
cluster peering connections
cabling in MetroCluster configurations 23, 30
cluster peers
creating relationships between 60
Cluster Setup wizard
using to set up a cluster in a two-node MetroCluster
configuration 50
clustered MetroCluster configurations
differences between the types 9
differences from 7-Mode MetroCluster
configurations 8
clusters
example names in MetroCluster configuration 19, 26
introduction to manually peering 55
naming requirements for cluster peering 10
setting up in a two-node MetroCluster configuration
50
clusters, factory configured
hardware setup checklist 13
CNA adapter settings
configuring when reusing a controller module 46
commands
for starting data protection 64
volume 78
commands, storage aggregate plex show
output after a MetroCluster switchover is
indeterminate 77
comments
how to send feedback about documentation 85
components
preconfigured when new 12
racking 21
racking in a MetroCluster 29
components, stretch MetroCluster
installing and cabling 20
Config Advisor
checking for common configuration errors 68
downloading and running 68
configuration backup files
setting remote destinations for preservation 69
configuration networking
how LIFs are created 75
how LIFs are replicated 75
IPv6 configuration 75
configurations
cascade, for cluster peering 10
fan-out, for cluster peering 10
implementing MetroCluster configurations 64
configurations, MetroCluster
differences between 7-Mode and clustered Data
ONTAP 8
configurations, site
worksheets for 43
worksheets for site_A 42
configurations, two-node MetroCluster
parts of 25
configuring
NTP server, to synchronize system time 73
controller ports
checking connectivity with the partner site 23, 30
controllers
racking 21
racking in a MetroCluster 29
verifying the HA state in a stretch MetroCluster
configuration 50
creation, LIFs
in a MetroCluster configuration 75
D
data aggregates
mirrored, creating on each node of a MetroCluster
configuration 63
data ports
cabling 23, 31
configuring intercluster LIFs to share 58
considerations when sharing intercluster and 11
database files
how NVFAIL protects 78
databases
accessing after a switchover 79
introduction to using NVFAIL to monitor and
protect validity of 77
dedicated ports
considerations when using for intercluster replication
11
defaults
restoring when reusing a controller module 46
destinations
specifying the URL for configuration backup 69
disk assignment
verifying in a MetroCluster configuration 49
disk shelves
racking 21
racking in a MetroCluster 29
disk shelves, SAS
cabling in MetroCluster systems with SAS optical
cables 20
documentation
how to receive automatic notification of changes to
85
how to send feedback about 85
where to find MetroCluster documentation 6
E
events
monitoring data loss 78
example names
MetroCluster component 19, 26
F
factory configured clusters
hardware setup checklist 13
fan-out configurations
networking requirements for cluster peering 10
FC switches
configuring for health monitoring 66
example names in MetroCluster configuration 19, 26
racking 21, 29
FC-to-SAS bridge configurations
worksheets 28
FC-to-SAS bridges
considerations for installing SAS shelves and 34
example names in MetroCluster configuration 19, 26
installing 31, 39
meeting preinstallation requirements 32
FC-VI ports
cabling 22, 30
configuring on a QLE2564 quad-port card 47
feedback
how to send comments about documentation 85
FibreBridge
See FC-to-SAS bridges
FibreBridge 6500N bridges
cabling 38
FibreBridge 7500N
cabling a bridge 36
FibreBridge bridges
installing 39
files, database
how NVFAIL protects 78
firewalls
requirements for cluster peering 10
FlexClone command errors
in MetroCluster configurations 77
full-mesh connectivity
description 10
H
HA states
verifying and setting in a stretch MetroCluster
configuration 50
ha-config setting
verifying in a stretch MetroCluster configuration 50
hardware components
racking 21
racking in a MetroCluster 29
hardware setup
checklist for factory configured clusters 13
HBA settings
configuring when reusing a controller module 46
healing
verifying in a MetroCluster configuration 69
health monitoring
configuring FC switches for 66
host names
gathering for site_A 42
gathering required network information 43
I
Infinite Volumes
in a MetroCluster configuration 75
information
how to send feedback about improving
documentation 85
installation
for systems sharing an FC switch fabric 17
for systems with array LUNs 17
for systems with native disks 17
installing FC-to-SAS bridges 39
installing new FibreBridge bridges 39
preparations for installing FC-to-SAS bridges 32
intercluster LIFs
configuring to share data ports 58
configuring to use dedicated intercluster ports 55
considerations when sharing with data ports 11
intercluster networks
configuring intercluster LIFs for 55, 58
considerations when sharing data and intercluster
ports 11
intercluster ports
configuring intercluster LIFs to use dedicated 55
considerations when using dedicated 11
IP addresses
gathering for site_A 42
gathering required network information 43
requirements for cluster peering 10
IPspaces
requirements for cluster peering 10
L
licensing
in a MetroCluster configuration 75
LIF creation
in a MetroCluster configuration 75
LIF replication
in a MetroCluster configuration 75
LIFs
configuring to use dedicated intercluster ports 55
preconfigured in a MetroCluster configuration 53
LIFs, intercluster
configuring to share data ports 58
local HA
supported MetroCluster configurations 9
LUNs
recovering after NVRAM failures 79
M
management ports
cabling 23, 31
manually peering clusters
introduction to 55
MetroCluster components
racking disk shelves 21, 29
racking FC switches 21, 29
racking storage controllers 21, 29
MetroCluster configuration
bridge-attached stretch, workflow for cabling two
nodes 25
SAS-attached stretch, workflow for cabling two
nodes 18
MetroCluster configurations
architecture of two-node configuration 18
cabling FC-VI adapters 30
cabling HBA adapters 30
cabling MetroCluster components
cabling FC-VI adapters
cabling HBA adapters 22
creating mirrored data aggregates on each node of 63
differences between 7-Mode and clustered Data
ONTAP 8
implementing 64
introduction to gathering required information and
reviewing the workflow 27
verifying correct operation of 66
MetroCluster configurations, stretch
example configurations with array LUNs and disks
70
MetroCluster configurations, stretch with array LUNs
example 70
MetroCluster configurations, two-node
illustration of 25
parts of 25
metrocluster configure command
creating MetroCluster relationships 64
MetroCluster monitoring
using Tiebreaker software 69
MetroCluster switchovers
output for the storage aggregate plex
show command after 77
MetroCluster systems, stretch
installing and cabling components 20
N
network
full-mesh connectivity described 10
requirements for cluster peering 10
network information
gathering for site_A 42
gathering required 43
node configuration after MetroCluster setup
additional configuration with OnCommand System
Manager 73
nodes
creating mirrored data aggregates on each
MetroCluster 63
using the Cluster Setup wizard to set up, in a two-node MetroCluster configuration 50
NTP servers
configuring to synchronize system time 73
NVFAIL
description of 77
how it protects database files 78
modifying volumes to set NVFAIL in case of
switchover 77
NVRAM failures
recovering LUNs after 79
O
OnCommand Performance Manager
monitoring with 73
OnCommand System Manager
node configuration with 73
OnCommand Unified Manager
monitoring with 73
optical cables, SAS
cabling in stretch MetroCluster systems with SAS
shelves 20
output for storage aggregate plex show
command
after a MetroCluster switchover is indeterminate 77
P
partner cluster
cabling cluster peering connections 23, 30
passwords
preconfigured in a MetroCluster configuration 13
peer relationships
creating cluster 60
requirements for clusters 10
peering clusters
introduction to manually 55
MetroCluster configuration, introduction to 52
planning
gathering required network information 43
gathering required network information for site_A
42
port usage
in a MetroCluster configuration 53
ports
configuring intercluster LIFs to use dedicated
intercluster 55
considerations when sharing data and intercluster
roles on 11
considerations when using dedicated intercluster 11
requirements for cluster peering 10
ports, data
configuring intercluster LIFs to share 58
preconfiguration
MetroCluster components 12
Q
QLE2564 quad-port cards
configuring FC-VI ports on 47
R
relationships
creating cluster peer 60
creating MetroCluster relationships 64
replication, LIFs
in a MetroCluster configuration 75
required hardware
MetroCluster 19, 26
requirements
cluster naming when peering 10
firewall for cluster peering 10
gathering network information 43
gathering network information for site_A 42
IP addresses for cluster peering 10
IPspaces for cluster peering 10
network for cluster peering 10
ports for cluster peering 10
subnets for cluster peering 10
root aggregates
mirroring 62
volume creation on 75
S
SAS disk shelves
installing 31
SAS optical cables
cabling in stretch MetroCluster systems with SAS
shelves 20
SAS shelves
cabling in MetroCluster systems with SAS optical
cables 20
considerations for installing FC-to-SAS bridges and
34
servers
configuring NTP, to synchronize system time 73
Service Processor configuration
for site_A 42
setup, hardware
checklist for factory configured clusters 13
setup, software
checklist for factory-configured MetroCluster
configurations 14
shelves, SAS
cabling in MetroCluster systems with SAS optical
cables 20
considerations for installing FC-to-SAS bridges and
34
site configurations
worksheets for 43
worksheets for site_A 42
software
settings already enabled 12
software setup
checklist for factory-configured MetroCluster
configurations 14
software, configuring
workflows for MetroCluster in Data ONTAP 41
storage aggregate plex show command
output
after a MetroCluster switchover is indeterminate 77
storage controllers
example names in MetroCluster configuration 19, 26
storage disk show command
output in a two-node MetroCluster configuration 77
storage shelf show command
output in a two-node MetroCluster configuration 77
stretch MetroCluster configuration
verifying chassis and controller HA state 50
stretch MetroCluster configuration with array LUNs
example 70
stretch MetroCluster configurations
example configurations with array LUNs and disks
70
with array LUNs, overview 70
stretch MetroCluster systems
installing and cabling components 20
subnets
requirements for cluster peering 10
suggestions
how to send feedback about documentation 85
SVMs
preconfigured in a MetroCluster configuration 53
switchback
verifying in a MetroCluster configuration 69
switchover
accessing the database after 79
verifying in a MetroCluster configuration 69
switchovers, MetroCluster
output for the storage aggregate plex
show command after 77
synchronizing system time
using NTP 73
system time
synchronizing using NTP 73
systems, stretch MetroCluster
installing and cabling components 20
U
usernames
preconfigured in a MetroCluster configuration 13
utilities
checking for common configuration errors with
Config Advisor 68
downloading and running Config Advisor 68
V
verification
booting to Maintenance mode 49
performing before booting to Data ONTAP 49
verifying disk assignment 49
verifying
MetroCluster configuration operations 66
verifying chassis and controller HA state in a stretch
MetroCluster configuration 50
VLDB errors
in MetroCluster configurations 77
volume command errors
in MetroCluster configurations 77
volume creation
in a MetroCluster configuration 75
volumes
commands 78
recovering after a switchover 79
T
Tiebreaker software
installing 69
using to identify failures 69
time
synchronizing system, using NTP 73
transition
7-Mode to clustered Data ONTAP 12
sharing FC switch fabric 12
Twitter
how to receive automatic notification of
documentation changes 85
two-node MetroCluster configurations
parts of 25
W
workflows
cabling a two-node bridge-attached stretch
MetroCluster configuration 25
cabling a two-node SAS-attached stretch
MetroCluster configuration 18
MetroCluster software configurations in Data
ONTAP 41
worksheets
for FC-to-SAS bridge configurations 28
for site configurations 43
for site_A 42