ONTAP® 9
Fabric-attached MetroCluster®
Installation and Configuration Guide
May 2017 | 215-11162-G0
doccomments@netapp.com
Updated for ONTAP 9.1
Contents
Deciding whether to use this guide ............................................................. 7
MetroCluster documentation ...................................................................... 8
Preparing for the MetroCluster installation ............................................ 10
Differences between 7-Mode and ONTAP MetroCluster configurations ................ 10
Differences between the ONTAP MetroCluster configurations ....................... 11
Considerations for configuring cluster peering .................................. 12
Prerequisites for cluster peering ..................................... 12
Considerations when using dedicated ports ............................. 13
Considerations when sharing data ports ................................ 13
Considerations for MetroCluster configurations with native disk shelves or
array LUNs ...................................................................... 14
Considerations when transitioning from 7-Mode to clustered Data ONTAP .......... 14
Considerations for using TDM/xWDM equipment with fabric-attached
MetroCluster configurations ..................................................... 15
Preconfigured settings for new MetroCluster systems from the factory ........... 16
Preconfigured component passwords ..................................... 16
Hardware setup checklist .............................................. 16
Software setup checklist .............................................. 18
Choosing the correct installation procedure for your configuration ..... 21
Cabling a fabric-attached MetroCluster configuration .......................... 23
Parts of a fabric MetroCluster configuration ............................................................. 24
Local HA pairs in a MetroCluster configuration ........................................... 28
Redundant FC-to-SAS bridges ...................................................................... 29
Redundant FC switch fabrics ........................................................................ 29
Cluster peering network ................................................................................ 30
Required MetroCluster components and naming conventions .................................. 31
Information gathering worksheet for FC switches and FC-to-SAS bridges ............. 34
Installing and cabling MetroCluster components ...................................................... 36
Racking the hardware components ............................................................... 36
Cabling the FC-VI and HBA connections to the FC switches ...................... 37
Cabling the ISLs between MetroCluster sites ............................................... 38
Port assignments for FC switches ................................................................. 38
Cabling the cluster interconnect in eight- or four-node configurations ........ 48
Cabling the cluster peering connections ........................................................ 49
Cabling the HA interconnect, if necessary .................................................... 49
Cabling the management and data connections ............................................ 50
Configuring the FC switches ..................................................................................... 51
Configuring the FC switches by running a configuration file ....................... 51
Configuring the Cisco or Brocade FC switches manually ............................ 52
Installing FC-to-SAS bridges and SAS disk shelves .............................................. 122
Preparing for the installation ....................................................................... 123
Installing the FC-to-SAS bridge and SAS shelves ...................................... 125
Configuring hardware for sharing a Brocade 6510 FC fabric during
transition .............................................................................................. 139
Reviewing Brocade license requirements ............................................................... 140
Racking the hardware components ......................................................................... 140
Cabling the new MetroCluster controllers to the existing FC fabrics ..................... 141
Configuring switch fabrics sharing between the 7-Mode and clustered
MetroCluster configuration ............................................................................... 142
Disabling one of the switch fabrics ............................................................. 143
Deleting TI zoning and configuring IOD settings ....................................... 144
Ensuring ISLs are in the same port group and configuring zoning ............. 145
Reenabling the switch fabric and verifying operation ...................... 146
Configuring the MetroCluster software in ONTAP .............................. 148
Gathering required information and reviewing the workflow ................................. 149
IP network information worksheet for site A .............................................. 149
IP network information worksheet for site B .............................................. 151
Similarities and differences between standard cluster and MetroCluster
configurations .................................................................................................... 154
Setting a previously used controller module to system defaults in Maintenance
mode .................................................................................................................. 154
Configuring FC-VI ports on a QLE2564 quad-port card on FAS8020 systems ..... 155
Verifying disk assignment in Maintenance mode in an eight-node or a four-node
configuration ...................................................................................................... 157
Assigning disk ownership in non-AFF systems .......................................... 160
Assigning disk ownership in AFF systems ................................................. 161
Verifying disk assignment in Maintenance mode in a two-node configuration ...... 163
Verifying and configuring the HA state of components in Maintenance mode ...... 164
Setting up ONTAP ................................................................................................... 165
Setting up the clusters ................................................................................. 165
Configuring the clusters into a MetroCluster configuration ................................... 167
Peering the clusters ...................................................................................... 167
Mirroring the root aggregates ...................................................................... 175
Creating a mirrored data aggregate on each node ....................................... 176
Creating unmirrored data aggregates .......................................................... 177
Implementing the MetroCluster configuration ............................................ 179
Configuring in-order delivery or out-of-order delivery of frames on
ONTAP software ................................................................................... 180
Configuring MetroCluster components for health monitoring ................... 182
Checking the MetroCluster configuration ................................................... 184
Checking for MetroCluster configuration errors with Config Advisor ................... 186
Verifying local HA operation .................................................................................. 186
Verifying switchover, healing, and switchback ....................................................... 188
Installing the MetroCluster Tiebreaker software ..................................................... 188
Protecting configuration backup files ...................................................................... 188
Considerations when removing MetroCluster configurations ............. 189
Planning and installing a MetroCluster configuration with array
LUNs ..................................................................................................... 190
Planning for a MetroCluster configuration with array LUNs ................................. 190
Supported MetroCluster configuration with array LUNs ........................................ 190
Requirements for a MetroCluster configuration with array LUNs ......................... 191
Installing and cabling the MetroCluster components in a configuration with
array LUNs ........................................................................................................ 192
Racking the hardware components in a MetroCluster configuration with
array LUNs ............................................................................................ 192
Preparing a storage array for use with ONTAP systems ............................. 193
Switch ports required for a MetroCluster configuration with array
LUNs ..................................................................................................... 194
Cabling the FC-VI and HBA ports in a MetroCluster configuration with
array LUNs ............................................................................................ 199
Cabling the ISLs in a MetroCluster configuration with array LUNs .......... 205
Cabling the cluster interconnect in eight- or four-node configurations ...... 206
Cabling the cluster peering connections ...................................................... 207
Cabling the HA interconnect, if necessary .................................................. 207
Cabling the management and data connections .......................................... 208
Cabling storage arrays to FC switches in a MetroCluster configuration .... 208
Switch zoning in a MetroCluster configuration with array LUNs .......................... 213
Requirements for switch zoning in a MetroCluster configuration with
array LUNs ............................................................................................ 213
Example of switch zoning in a two-node MetroCluster configuration
with array LUNs .................................................................................... 214
Example of switch zoning in a four-node MetroCluster configuration
with array LUNs .................................................................................... 215
Example of switch zoning in an eight-node MetroCluster configuration
with array LUNs .................................................................................... 217
Setting up ONTAP in a MetroCluster configuration with array LUNs ................... 218
Verifying and configuring the HA state of components in Maintenance
mode ...................................................................................................... 218
Configuring ONTAP on a system that uses only array LUNs ..................... 219
Setting up the cluster ................................................................................... 222
Installing the license for using array LUNs in a MetroCluster
configuration .......................................................................................... 222
Configuring FC-VI ports on a QLE2564 quad-port card on FAS8020
systems .................................................................................................. 223
Assigning ownership of array LUNs ........................................................... 225
Peering the clusters ...................................................................................... 225
Mirroring the root aggregates ...................................................................... 228
Creating a mirrored data aggregate on each node ....................................... 228
Creating unmirrored data aggregates .......................................................... 229
Implementing the MetroCluster configuration ............................................ 230
Configuring the MetroCluster FC switches for health monitoring ............. 232
Checking the MetroCluster configuration ................................................... 233
Checking for MetroCluster configuration errors with Config Advisor ....... 235
Verifying switchover, healing, and switchback ........................................... 235
Installing the MetroCluster Tiebreaker software ......................................... 235
Protecting configuration backup files .......................................................... 236
Implementing a MetroCluster configuration with both disks and array LUNs ....... 236
Considerations when implementing a MetroCluster configuration with
disks and array LUNs ............................................................................ 236
Example of a two-node fabric-attached MetroCluster configuration with
disks and array LUNs ............................................................................ 237
Example of a four-node MetroCluster configuration with disks and array
LUNs ..................................................................................................... 238
Using the OnCommand management tools for further configuration
and monitoring ..................................................................................... 240
Synchronizing the system time using NTP ............................................................. 240
Requirements and limitations when using ONTAP in a MetroCluster
configuration ........................................................................................ 242
Cluster peering from the MetroCluster site to a third cluster .................................. 242
Volume creation on a root aggregate ....................................................................... 242
Networking and LIF creation guidelines for MetroCluster configurations ............. 242
Volume or FlexClone command VLDB errors ........................................................ 244
Output for storage disk show and storage shelf show commands in a two-node
MetroCluster configuration ............................................................................... 244
Output for the storage aggregate plex show command after a MetroCluster
switchover is indeterminate ............................................................................... 244
Modifying volumes to set NVFAIL in case of switchover ...................................... 244
Monitoring and protecting database validity by using NVFAIL ............................. 244
How NVFAIL protects database files .......................................................... 245
Commands for monitoring data loss events ................................................ 245
Accessing volumes in NVFAIL state after a switchover ............................. 246
Recovering LUNs in NVFAIL states after switchover ................................ 246
Considerations for using TDM/xWDM equipment with fabric-attached MetroCluster configurations ............................................... 247
Glossary of MetroCluster terms ............................................................. 248
Copyright information ............................................................................. 250
Trademark information ........................................................................... 251
How to send comments about documentation and receive update
notifications .......................................................................................... 252
Index ........................................................................................................... 253
Deciding whether to use the Fabric-attached
MetroCluster Installation and Configuration Guide
This guide describes how to install and configure the MetroCluster hardware and software
components in a fabric configuration.
You should use this guide for planning, installing, and configuring a fabric-attached MetroCluster
configuration under the following circumstances:
• You want to understand the architecture of a fabric-attached MetroCluster configuration.
• You want to understand the requirements and best practices for configuring a fabric-attached MetroCluster configuration.
• You want to use the command-line interface (CLI), not an automated scripting tool.
You can find other MetroCluster documentation in the following location:
NetApp Documentation: MetroCluster in ONTAP 9.0
MetroCluster documentation
There are a number of documents that can help you configure, operate, and monitor a MetroCluster
configuration.
MetroCluster and miscellaneous guides

Guide: NetApp Documentation: MetroCluster
Content:
• All MetroCluster guides

Guide: NetApp Technical Report 4375: MetroCluster for Data ONTAP Version 8.3 Overview and Best Practices
Content:
• A technical overview of the MetroCluster configuration and operation
• Best practices for MetroCluster configuration

Guide: Stretch MetroCluster installation and configuration
Content:
• Stretch MetroCluster architecture
• Cabling the configuration
• Configuring the FC-to-SAS bridges
• Configuring the MetroCluster in ONTAP

Guide: MetroCluster management and disaster recovery
Content:
• Understanding the MetroCluster configuration
• Switchover, healing, and switchback
• Disaster recovery

Guide: MetroCluster Service Guide
Content:
• Guidelines for maintenance in a MetroCluster configuration
• Hardware replacement and firmware upgrade procedures for FC-to-SAS bridges and FC switches
• Hot-adding a disk shelf
• Hot-removing a disk shelf
• Replacing hardware at a disaster site
• Expanding a two-node MetroCluster configuration to a four-node MetroCluster configuration
• Expanding a four-node MetroCluster configuration to an eight-node MetroCluster configuration

Guide: MetroCluster Tiebreaker Software Installation and Configuration Guide
Content:
• Monitoring the MetroCluster configuration with the MetroCluster Tiebreaker software

Guide: Data protection using SnapMirror and SnapVault technology
Content:
• SyncMirror
• SnapMirror
• SnapVault

Guide: NetApp Documentation: OnCommand Unified Manager Core Package (current releases); NetApp Documentation: OnCommand Performance Manager for Clustered Data ONTAP
Content:
• Monitoring the MetroCluster configuration
• Monitoring MetroCluster performance

Guide: Copy-based transition
Content:
• Transitioning data from 7-Mode storage systems to clustered storage systems

Guide: ONTAP concepts
Content:
• How mirrored aggregates work
Preparing for the MetroCluster installation
As you prepare for the MetroCluster installation, you should understand the MetroCluster hardware architecture and required components. If you are familiar with MetroCluster configurations in a 7-Mode environment, you should understand the key MetroCluster differences you find in a clustered ONTAP environment.

Differences between 7-Mode and ONTAP MetroCluster configurations

There are key differences between ONTAP MetroCluster configurations and configurations with ONTAP operating in 7-Mode.

In clustered ONTAP, a four-node MetroCluster configuration includes two HA pairs, each in a separate cluster at physically separated sites.
Feature or component: Number of storage controllers
• Clustered ONTAP, eight or four node: Eight or four. The controllers are configured as HA pairs, one or two HA pairs at each site.
• Clustered ONTAP, two node: Two, one at each site. Each controller is configured as a single-node cluster.
• ONTAP 7-Mode: Two. The two controllers are configured as an HA pair with one controller at each site.

Feature or component: Local failover available?
• Clustered ONTAP, eight or four node: Yes. A failover can occur at either site without triggering an overall switchover of the configuration.
• Clustered ONTAP, two node: No. If a problem occurs at the local site, the system switches over to the partner site.
• ONTAP 7-Mode: No. If a problem occurs at the local site, the system fails over to the partner site.

Feature or component: Single command for failover or switchover?
• Clustered ONTAP, eight or four node: Yes. The command for local failover in an eight- or a four-node fabric MetroCluster configuration is storage failover takeover. The command for switchover is metrocluster switchover or metrocluster switchover -forced-on-disaster true.
• Clustered ONTAP, two node: Yes, for switchover. The command for switchover is metrocluster switchover or metrocluster switchover -forced-on-disaster true.
• ONTAP 7-Mode: Yes. The 7-Mode commands are cf takeover or cf forcetakeover -d.

Feature or component: DS14 disk shelves supported?
• Clustered ONTAP, eight or four node: No
• Clustered ONTAP, two node: No
• ONTAP 7-Mode: Yes

Feature or component: Two FC switch fabrics?
• Clustered ONTAP, eight or four node: Yes
• Clustered ONTAP, two node: Yes, in a fabric MetroCluster configuration
• ONTAP 7-Mode: Yes

Feature or component: Stretch configuration with FC-to-SAS bridges (no switch fabric)?
• Clustered ONTAP, eight or four node: No
• Clustered ONTAP, two node: Yes
• ONTAP 7-Mode: Yes

Feature or component: Stretch configuration with SAS cables (no switch fabric)?
• Clustered ONTAP, eight or four node: No
• Clustered ONTAP, two node: Yes
• ONTAP 7-Mode: Yes
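For reference, the clustered ONTAP switchover and recovery sequence looks broadly like the following (an illustrative sketch run from the surviving cluster; exact options vary by ONTAP release, and -forced-on-disaster is used only in a genuine disaster):

cluster_B::> metrocluster switchover
cluster_B::> metrocluster heal -phase aggregates
cluster_B::> metrocluster heal -phase root-aggregates
cluster_B::> metrocluster switchback

The heal phases resynchronize the mirrored plexes before switchback returns control to the recovered site.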
Related concepts
Parts of a fabric MetroCluster configuration on page 24
Differences between the ONTAP MetroCluster
configurations
The fabric-attached and stretch MetroCluster configurations have key differences in the required
components.
In all configurations, each of the two MetroCluster sites is configured as an ONTAP cluster. In a two-node MetroCluster configuration, each node is configured as a single-node cluster.
Feature                            Fabric-attached       Fabric-attached   Stretch two-node   Stretch two-node
                                   eight- or four-node   two-node          bridge-attached    direct-attached
Number of controllers              Eight or four         Two               Two                Two
Uses an FC switch storage fabric   Yes                   Yes               No                 No
Uses FC-to-SAS bridges             Yes                   Yes               Yes                No
Uses direct-attached SAS storage   No                    No                No                 Yes
Supports local HA                  Yes                   No                No                 No
Supports automatic switchover      Yes                   Yes               Yes                Yes
Considerations for configuring cluster peering
Each MetroCluster site is configured as a peer to its partner site. You should be familiar with the
prerequisites and guidelines for configuring the peering relationships and when deciding whether to
use shared or dedicated ports for those relationships.
Related information
Data protection using SnapMirror and SnapVault technology
Cluster peering express configuration
Prerequisites for cluster peering
Before you set up cluster peering, you should confirm that the connectivity, port, IP address, subnet,
firewall, and cluster-naming requirements are met.
Connectivity requirements
The subnet used in each cluster for intercluster communication must meet the following
requirements:
• The subnet must belong to the broadcast domain that contains the ports that are used for intercluster communication.
• The IP addresses that are used for intercluster LIFs do not need to be in the same subnet, but having them in the same subnet is a simpler configuration.
• You must have decided whether the subnet is dedicated to intercluster communication or is shared with data communication.

The intercluster network must be configured so that cluster peers have pair-wise full-mesh connectivity within the applicable IPspace, which means that each pair of clusters in a cluster peer relationship has connectivity among all of their intercluster LIFs.

A cluster's intercluster LIFs have an IPv4 address or an IPv6 address.
Port requirements
The ports that are used for intercluster communication must meet the following requirements:
• All ports that are used to communicate with a given remote cluster must be in the same IPspace. You can use multiple IPspaces to peer with multiple clusters. Pair-wise full-mesh connectivity is required only within an IPspace.
• The broadcast domain that is used for intercluster communication must include at least two ports per node so that intercluster communication can fail over from one port to another port. Ports added to a broadcast domain can be physical network ports, VLANs, or interface groups (ifgrps).
• All ports must be cabled.
• All ports must be in a healthy state.
• The MTU settings of the ports must be consistent.
• You must decide whether the ports that are used for intercluster communication are shared with data communication.
Firewall requirements

Firewalls and the intercluster firewall policy must allow the following protocols:
• ICMP service
• TCP to the IP addresses of all the intercluster LIFs over the ports 10000, 11104, and 11105
• HTTPS

The default intercluster firewall policy allows access through the HTTPS protocol and from all IP addresses (0.0.0.0/0), but the policy can be altered or replaced.
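As a quick check before peering, you can display the intercluster firewall policy and the intercluster LIFs from the ONTAP CLI (shown as a sketch; output columns vary by release):

cluster_A::> system services firewall policy show -policy intercluster
cluster_A::> network interface show -role intercluster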
Cluster requirements
Clusters must meet the following requirements:
• The time on the clusters in a cluster peering relationship must be synchronized within 300 seconds (5 minutes). Cluster peers can be in different time zones.
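For example, pointing each cluster at a common NTP source keeps the peers within the allowed skew (a sketch; time.example.com is a placeholder for your NTP server):

cluster_A::> cluster time-service ntp server create -server time.example.com
cluster_A::> cluster date show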
Related concepts
Cluster peering network on page 30
Considerations when using dedicated ports
When determining whether using a dedicated port for intercluster replication is the correct
intercluster network solution, you should consider configurations and requirements such as LAN
type, available WAN bandwidth, replication interval, change rate, and number of ports.
Consider the following aspects of your network to determine whether using a dedicated port is the best intercluster network solution:
• If the amount of available WAN bandwidth is similar to that of the LAN ports and the replication interval is such that replication occurs while regular client activity exists, then you should dedicate Ethernet ports for intercluster replication to avoid contention between replication and the data protocols.
• If the network utilization generated by the data protocols (CIFS, NFS, and iSCSI) is above 50 percent, then you should dedicate ports for replication to allow for nondegraded performance if a node failover occurs.
• When physical 10-GbE ports are used for data and replication, you can create VLAN ports for replication and dedicate the logical ports for intercluster replication. The bandwidth of the port is shared between all VLANs and the base port.
• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
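A minimal sketch of dedicating a logical port to replication, assuming a hypothetical VLAN 100 on port e0e and ONTAP 9.1-era syntax (the -role intercluster form; addresses are placeholders):

cluster_A::> network port vlan create -node node_A_1 -vlan-name e0e-100
cluster_A::> network interface create -vserver cluster_A -lif intercluster_1
             -role intercluster -home-node node_A_1 -home-port e0e-100
             -address 192.168.100.11 -netmask 255.255.255.0

Each node in the cluster would get a corresponding VLAN port and intercluster LIF.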
Considerations when sharing data ports
When determining whether sharing a data port for intercluster replication is the correct intercluster
network solution, you should consider configurations and requirements such as LAN type, available
WAN bandwidth, replication interval, change rate, and number of ports.
Consider the following aspects of your network to determine whether sharing data ports is the best intercluster connectivity solution:
• For a high-speed network, such as a 10-Gigabit Ethernet (10-GbE) network, a sufficient amount of local LAN bandwidth might be available to perform replication on the same 10-GbE ports that are used for data access. In many cases, the available WAN bandwidth is far less than the 10-GbE LAN bandwidth.
• All nodes in the cluster might have to replicate data and share the available WAN bandwidth, making data port sharing more acceptable.
• Sharing ports for data and replication eliminates the extra port counts required to dedicate ports for replication.
• The maximum transmission unit (MTU) size of the replication network will be the same size as that used on the data network.
• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
• When data ports for intercluster replication are shared, the intercluster LIFs can be migrated to any other intercluster-capable port on the same node to control the specific data port that is used for replication.
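For instance, steering replication to a different shared port on the same node might look like this (a sketch; the LIF and port names are hypothetical):

cluster_A::> network interface migrate -vserver cluster_A -lif intercluster_1
             -destination-node node_A_1 -destination-port e0d
cluster_A::> network interface modify -vserver cluster_A -lif intercluster_1
             -home-port e0d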
Considerations for MetroCluster configurations with native
disk shelves or array LUNs
The MetroCluster configuration supports installations with native (NetApp) disk shelves only, array
LUNs only, or a combination of both.
AFF systems do not support array LUNs.
Related concepts
Planning and installing a MetroCluster configuration with array LUNs on page 190
Related tasks
Cabling a fabric-attached MetroCluster configuration on page 23
Related information
FlexArray virtualization installation requirements and reference
Considerations when transitioning from 7-Mode to clustered
Data ONTAP
You must have the new MetroCluster configuration fully configured and operating before you use the
transition tools to move data from a 7-Mode MetroCluster configuration to a clustered Data ONTAP
configuration. If the 7-Mode configuration uses Brocade 6510 switches, the new configuration can
share the existing fabrics to reduce the hardware requirements.
If you have Brocade 6510 switches and plan on sharing the switch fabrics between the 7-Mode fabric
MetroCluster and the MetroCluster running in clustered ONTAP, you must use the specific procedure
for configuring the MetroCluster components.
FMC-MCC transition: Configuring the MetroCluster hardware for sharing a 7-Mode Brocade 6510
FC fabric during transition on page 139
Considerations for using TDM/xWDM equipment with fabric-attached MetroCluster configurations

The Interoperability Matrix provides some notes about the requirements that TDM/xWDM equipment must meet to work with a fabric-attached MetroCluster configuration. These notes also include information about various configurations, which can help you to determine when to use in-order delivery (IOD) of frames or out-of-order delivery (OOD) of frames.
An example of such requirements is that the TDM/xWDM equipment must support the link
aggregation (trunking) feature with routing policies. The order of delivery (IOD or OOD) of frames
is maintained within a switch, and is determined by the routing policy that is in effect.
The following table provides the routing policies for configurations containing Brocade switches and Cisco switches:

Brocade
• Configuring MetroCluster configurations for IOD:
  - AptPolicy must be set to 1
  - DLS must be set to off
  - IOD must be set to on
• Configuring MetroCluster configurations for OOD:
  - AptPolicy must be set to 3
  - DLS must be set to on
  - IOD must be set to off

Cisco
• Configuring MetroCluster configurations for IOD:
  Policies for the FCVI-designated VSAN:
  - Load balancing policy: srcid and dstid
  - IOD must be set to on
  Policies for the storage-designated VSAN:
  - Load balancing policy: srcid, dstid, and oxid
  - VSAN must not have the in-order-guarantee option set
• Configuring MetroCluster configurations for OOD:
  Not applicable
When to use in-order delivery

It is best to use IOD if it is supported by the links. The following configurations support IOD:
• A single ISL, where the ISL and the link (and the link equipment, such as TDM/xWDM, if used) support IOD.
• A single trunk, where the ISLs and the links (and the link equipment, such as TDM/xWDM, if used) support IOD.

When to use out-of-order delivery

You can use OOD for all configurations that do not support IOD.
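On a Brocade fabric, these settings map to Fabric OS commands along the following lines (a hedged sketch; the commands are run on each switch, typically with the switch disabled, and syntax should be confirmed for your Fabric OS version):

switch:admin> aptpolicy 1    (port-based routing for IOD; aptpolicy 3 for OOD)
switch:admin> dlsreset       (sets DLS off for IOD; dlsset turns it on for OOD)
switch:admin> iodset         (sets IOD on; iodreset sets it off for OOD)

On Cisco switches, the equivalent settings are made per VSAN, for example with the in-order-guarantee and load-balancing VSAN options.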
Preconfigured settings for new MetroCluster systems from the factory

New MetroCluster nodes, FC-to-SAS bridges, and FC switches are preconfigured and MetroCluster settings are enabled in the software. In most cases, you do not need to perform the detailed procedures provided in this guide.

Hardware racking and cabling

Depending on the configuration you ordered, you might need to rack the systems and complete the cabling.

Configuring the MetroCluster hardware components in systems with native disk shelves on page 23

FC switch and FC-to-SAS bridge configurations

For configurations using FC-to-SAS bridges, the bridges received with the new MetroCluster configuration are preconfigured and do not require additional configuration unless you want to change the names and IP addresses.

For configurations using FC switches, in most cases, FC switch fabrics received with the new MetroCluster configuration are preconfigured for two Inter-Switch Links (ISLs). If you are using additional ISLs, you must manually configure the switches.

Configuring the FC switches on page 51
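Before adding ISLs, you can confirm what the factory-configured fabric currently carries (illustrative Brocade Fabric OS commands; Cisco switches have equivalent show commands):

switch:admin> switchshow    (lists ports and their state, including the E_Ports used by ISLs)
switch:admin> islshow       (lists the active ISLs and their negotiated bandwidth)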
MetroCluster configurations in ONTAP
Nodes and clusters received with the new MetroCluster configuration are preconfigured and the
MetroCluster configuration is enabled.
Preconfigured component passwords
Some MetroCluster components are preconfigured with user names and passwords. You need to be
aware of these settings as you perform your site-specific configuration.
Component                                                 Username        Password
ONTAP login                                               admin           netapp!123
Service Processor (SP) login                              admin           netapp!123
Intercluster pass phrase                                  None required   netapp!123
ATTO FC-to-SAS bridge                                     None required   None required
Brocade switches                                          admin           password
NetApp cluster interconnect switches (CN1601, CN1610)     None required   None required
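You should replace these factory defaults during site-specific configuration; for example, resetting the ONTAP admin password (a sketch; the command prompts for the old and new values):

cluster_A::> security login password -username admin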
Hardware setup checklist
You need to know which hardware setup steps were completed at the factory and which steps you
need to complete at each MetroCluster site.
• Mount components in one or more cabinets.
  Completed at factory: Yes. Completed by you: No.
• Position cabinets in the desired location.
  Completed at factory: No. Completed by you: Yes. Position them in the original order so that the supplied cables are long enough.
• Connect multiple cabinets to each other, if applicable.
  Completed at factory: No. Completed by you: Yes. Use the cabinet interconnect kit if it is included in the order. The kit box is labeled.
• Secure the cabinets to the floor, if applicable.
  Completed at factory: No. Completed by you: Yes. Use the universal bolt-down kit if it is included in the order. The kit box is labeled.
• Cable the components within the cabinet.
  Completed at factory: Yes. Cables 5 meters and longer are removed for shipping and placed in the accessories box. Completed by you: No.
• Connect the cables between cabinets, if applicable.
  Completed at factory: No. Completed by you: Yes. Cables are in the accessories box.
• Connect management cables to the customer's network.
  Completed at factory: No. Completed by you: Yes. Connect them directly or through the CN1601 management switches, if present.
  Attention: To avoid address conflicts, do not connect management ports to the customer's network until after you change the default IP addresses to the customer's values.
• Connect console ports to the customer's terminal server, if applicable.
  Completed at factory: No. Completed by you: Yes.
• Connect the customer's data cables to the cluster.
  Completed at factory: No. Completed by you: Yes.
• Connect the long-distance ISLs between the MetroCluster sites, if applicable.
  Completed at factory: No. Completed by you: Yes. See Connecting the ISLs between the MetroCluster sites on page 38.
• Connect the cabinets to power and power on the components.
  Completed at factory: No. Completed by you: Yes. Power them on in the following order:
  1. PDUs
  2. Disk shelves and FC-to-SAS bridges, if applicable
  3. FC switches, if applicable
  4. Nodes
• Assign IP addresses to the management ports of the cluster switches and to the management ports of the management switches, if present.
  Completed at factory: No. Completed by you: Yes, for switched clusters only. Connect to the serial console port of each switch and log in with user name "admin" with no password. Suggested addresses are 10.10.10.81, 10.10.10.82, 10.10.10.83, and 10.10.10.84.
• Verify cabling by running the Config Advisor tool.
  Completed at factory: No. Completed by you: Yes. See Verifying the MetroCluster configuration on page 186.
Software setup checklist
You need to know which software setup steps were completed at the factory and which steps you
need to complete at each MetroCluster site.
This guide includes all of the required steps, which you can complete after reviewing the information
in this checklist.
Steps completed at the factory:
• Install the clustered ONTAP software.
• Create the cluster on the first node at the first MetroCluster site:
  - Name the cluster.
  - Set the admin password.
  - Set up the private cluster interconnect.
  - Install all purchased license keys.
  - Create the cluster management interface.
  - Create the node management interface.
  - Configure the FC switches, if applicable.
• In an eight-node or a four-node MetroCluster configuration, join the remaining nodes to the cluster.
• In an eight-node or a four-node MetroCluster configuration, enable storage failover on one node of each HA pair.
• In a four-node MetroCluster configuration, configure cluster high availability on each cluster.
• Enable the switchless-cluster option on a two-node switchless cluster.
• Repeat the steps to configure the second MetroCluster site.
• Configure the clusters for peering.
• Enable the MetroCluster configuration.
• Configure user credentials and management IP addresses on the management and cluster switches (completed at the factory only if ordered). User IDs are "admin" with no password.
• Thoroughly test the MetroCluster configuration. (You must still perform the verification steps at your site that are described below.)

Steps completed by you, using procedures in this guide:
• Complete the cluster setup worksheet.
• Change the password for the admin account to the customer's value.
• Configure each node with the customer's values.
• Discover the clusters in OnCommand System Manager.
• Configure an NTP server for each cluster.
• Verify the cluster peering.
• Verify the health of the cluster and that the cluster is in quorum.
• Verify basic operation of the MetroCluster sites.
• Check the MetroCluster configuration.
• In an eight- or a four-node MetroCluster configuration, test storage failover.
• If the configuration includes FC switches or FC-to-SAS bridges, add the MetroCluster switches and bridges for health monitoring.
• Test switchover, healing, and switchback.
• Set the destination for configuration backup files.
• Optional: Change the cluster name if desired, for example, to better distinguish the clusters.
• Optional: Change the node name, if desired.
• Configure AutoSupport.
Choosing the correct installation procedure for
your configuration
You must choose the correct installation procedure based on whether you are using FlexArray LUNs,
the number of nodes in the MetroCluster configuration, and whether you are sharing an existing FC
switch fabric used by a 7-Mode fabric MetroCluster.
[Flowchart: choosing the installation procedure. With NetApp (native) disks, both eight- or four-node and two-node fabric-attached configurations proceed to "Cabling a fabric-attached MetroCluster configuration." If you are sharing a fabric during transition, first proceed to "Configuring hardware for sharing a Brocade 6510 FC fabric during transition," and then to "Configuring the MetroCluster software in Data ONTAP"; if not, proceed directly to "Configuring the MetroCluster software in Data ONTAP." With array LUNs (FlexArray virtualization), proceed to "Planning and installing a MetroCluster configuration with array LUNs."]
For this installation type: Fabric-attached configuration with NetApp (native) disks
Use these procedures:
1. Cabling a fabric-attached MetroCluster configuration on page 23
2. Configuring the MetroCluster software in Data ONTAP (native disk shelves only) on page 148

For this installation type: Fabric-attached configuration when sharing with an existing FC switch fabric
This is supported only as a temporary configuration with a 7-Mode fabric MetroCluster configuration using Brocade 6510 switches.
Use these procedures:
1. Cabling a fabric-attached MetroCluster configuration on page 23
2. Configuring the MetroCluster hardware for sharing a 7-Mode Brocade 6510 FC fabric during transition on page 139
3. Configuring the MetroCluster software in Data ONTAP (native disk shelves only) on page 148
Cabling a fabric-attached MetroCluster
configuration
The MetroCluster components must be physically installed, cabled, and configured at both
geographic sites. The steps are slightly different for a system with native disk shelves as opposed to a
system with array LUNs.
About this task

[Workflow diagram: rack the equipment; cable the controllers to the FC switches; cable the ISLs to the FC switches; cable the cluster peering network; cable the HA interconnect (if necessary); cable the cluster interconnect switches (if necessary); cable the management and data connections; cable the SAS disk shelves to the FC-to-SAS bridges; cable the bridges to the FC switches; configure the FC switches (Brocade or Cisco); configure the FC-to-SAS bridges; then proceed to configuration in Data ONTAP.]

Steps

1. Parts of a fabric MetroCluster configuration on page 24
2. Required MetroCluster components and naming conventions on page 31
3. Information gathering worksheet for FC switches and FC-to-SAS bridges on page 34
4. Installing and cabling MetroCluster components on page 36
5. Configuring the FC switches on page 51
6. Installing FC-to-SAS bridges and SAS disk shelves on page 122
Parts of a fabric MetroCluster configuration
As you plan your MetroCluster configuration, you should understand the hardware components and
how they interconnect.
DR groups
A fabric MetroCluster configuration consists of one or two DR groups, depending on the number of
nodes in the MetroCluster configuration. Each DR group consists of four nodes.
• An eight-node MetroCluster consists of two DR groups.
• A four-node MetroCluster consists of one DR group.
The following illustration shows the organization of nodes in an eight-node MetroCluster
configuration:
[Illustration: an eight-node MetroCluster configuration. DR Group One pairs node_A_1 and node_A_2 (an HA pair in cluster_A) with node_B_1 and node_B_2 (an HA pair in cluster_B); node_A_1/node_B_1 and node_A_2/node_B_2 are DR pairs. DR Group Two pairs node_A_3 and node_A_4 with node_B_3 and node_B_4 in the same way.]
The following illustration shows the organization of nodes in a four-node MetroCluster
configuration:
[Illustration: a four-node MetroCluster configuration with a single DR group. node_A_1 and node_A_2 form an HA pair in cluster_A; node_B_1 and node_B_2 form an HA pair in cluster_B; node_A_1/node_B_1 and node_A_2/node_B_2 are DR pairs.]
Key hardware elements

The MetroCluster configuration includes the following key hardware elements:
• Storage controllers
  The storage controllers are not connected directly to the storage but connect to two redundant FC switch fabrics.
• FC-to-SAS bridges
  The FC-to-SAS bridges connect the SAS storage stacks to the FC switches, providing bridging between the two protocols.
• FC switches
  The FC switches provide the long-haul backbone ISL between the two sites. The switches provide the two storage fabrics that allow data mirroring to the remote storage pools.
• Cluster peering network
  The cluster peering network provides connectivity for mirroring of the Storage Virtual Machine (SVM) configuration. The configuration of all SVMs on one cluster is mirrored to the partner cluster.
Eight-node fabric MetroCluster configuration

• The configuration consists of two clusters, one at each geographically separated site.
• cluster_A is located at one MetroCluster site.
• cluster_B is located at the second MetroCluster site.
• Each site has one stack of SAS storage. Additional storage stacks are supported, but only one is shown at each site.
• The HA pairs are configured as switchless clusters, without cluster interconnect switches. A switched configuration is supported but not shown.

The configuration includes the following connections:
• FC connections from each controller's HBAs and FC-VI adapters to each of the FC switches
• An FC connection from each FC-to-SAS bridge to an FC switch
• SAS connections between each SAS shelf and from the top and bottom of each stack to an FC-to-SAS bridge
• An HA interconnect between each controller in the local HA pair
  If the controllers support a single-chassis HA pair, the HA interconnect is internal, occurring through the backplane, meaning an external interconnect is not required.
• Ethernet connections from the controllers to the customer-provided network used for cluster peering
  SVM configuration is replicated over the cluster peering network.
• A cluster interconnect between each controller in the local cluster.
Four-node fabric MetroCluster configuration
The following illustration shows a simplified view of a four-node fabric MetroCluster configuration.
For some connections, a single line represents multiple, redundant connections between the
components. Data and management network connections are not shown.
[Illustration: simplified four-node fabric MetroCluster configuration. At each site, two controllers (an HA pair with HA and cluster interconnects) connect through FC to two local FC switches; the switches at the two sites are joined by long-haul ISLs in two fabrics; FC-to-SAS bridges attach the SAS stacks to the switches; the two clusters are joined by the cluster peering network.]
The following illustration shows a more detailed view of the connectivity in a single MetroCluster
cluster (both clusters have the same configuration):
[Illustration: detailed single-site view of a four-node configuration. controller_A_1 and controller_A_2 each connect over Fibre Channel to FC_switch_A_1 and FC_switch_A_2; each switch carries a long-haul ISL to the partner site; FC_bridge_A_1 and FC_bridge_A_2 connect the SAS-attached shelves to the switches; Ethernet connections from both controllers reach the cluster peering network; the controllers share a 10-GbE cluster interconnect and an HA interconnect.]
Two-node fabric MetroCluster configuration
The following illustration shows a simplified view of a two-node fabric MetroCluster configuration.
For some connections, a single line represents multiple, redundant connections between the
components. Data and management network connections are not shown.
[Illustration: simplified two-node fabric MetroCluster configuration. controller_A_1 in cluster_A and controller_B_1 in cluster_B each connect through FC to two local FC switches; long-haul ISLs join the sites in two fabrics; FC-to-SAS bridges attach the SAS stacks; the clusters are joined by the cluster peering network.]
• The configuration consists of two clusters, one at each geographically separated site.
• cluster_A is located at one MetroCluster site.
• cluster_B is located at the second MetroCluster site.
• Each site has one stack of SAS storage. Additional storage stacks are supported, but only one is shown at each site.

Note: In the two-node configuration, the nodes are not configured as an HA pair.
The following illustration shows a more detailed view of the connectivity in a single MetroCluster
cluster (both clusters have the same configuration):
[Illustration: detailed single-site view of a two-node configuration. controller_A_1 connects over Fibre Channel to FC_switch_A_1 and FC_switch_A_2; each switch carries a long-haul ISL to the partner site; FC_bridge_A_1 and FC_bridge_A_2 connect the SAS-attached shelves to the switches; an Ethernet connection from the controller reaches the cluster peering network.]
The configuration includes the following connections:
• FC connections between the FC-VI adapter on each controller module
• FC connections from each controller module's HBAs to the FC-to-SAS bridge for each SAS shelf stack
• SAS connections between each SAS shelf and from the top and bottom of each stack to an FC-to-SAS bridge
• Ethernet connections from the controllers to the customer-provided network used for cluster peering
  SVM configuration is replicated over the cluster peering network.
Local HA pairs in a MetroCluster configuration
In eight-node or four-node MetroCluster configurations, each site consists of storage controllers configured as one or two HA pairs. This allows local redundancy so that if one storage controller fails, its local HA partner can take over. Such failures can be handled without a MetroCluster switchover operation.

Local HA failover and giveback operations are performed with the storage failover commands, in the same manner as a non-MetroCluster configuration.
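A minimal sketch of exercising local failover during verification (node names follow the examples used in this guide):

cluster_A::> storage failover show
cluster_A::> storage failover takeover -ofnode node_A_1
cluster_A::> storage failover giveback -ofnode node_A_1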
[Illustration: local HA pairs. controller_A_1 and controller_A_2 form the HA pair at Site A, and controller_B_1 and controller_B_2 form the HA pair at Site B. Each pair shares an HA interconnect and a cluster interconnect, connects through the FC switches and long-haul ISLs (one to four per fabric) to the partner site, and reaches its shelf stacks through the FC-to-SAS bridges; the clusters are joined by the cluster peering network.]
Related information
ONTAP concepts
Redundant FC-to-SAS bridges
FC-to-SAS bridges provide protocol bridging between SAS attached disks and the FC switch fabric.
Two FibreBridge 7500N bridges can support up to four disk shelf stacks.
Two FibreBridge 6500N bridges can support only one disk shelf stack.
[Illustration: redundant FC-to-SAS bridges. At each site, a pair of bridges (for example, FC_bridge_A_1 and FC_bridge_A_2 for the Site A shelf stack) connects the stack to both local FC switches, so the stack remains reachable if one bridge, switch, or path fails. Each fabric carries one or two long-haul ISLs.]
Redundant FC switch fabrics
Each switch fabric includes inter-switch links (ISLs) that connect the sites. Data is replicated from
site-to-site over the ISL. Each switch fabric must be on different physical paths for redundancy.
[Illustration: redundant FC switch fabrics. FC fabric 1 consists of FC_switch_A_1 and FC_switch_B_1, and FC fabric 2 consists of FC_switch_A_2 and FC_switch_B_2. Each fabric carries one or two long-haul ISLs between the sites and must follow a physically separate path.]
Cluster peering network

The two clusters in the MetroCluster configuration are peered through a customer-provided cluster peering network. Cluster peering supports the synchronous mirroring of Storage Virtual Machines (SVMs, formerly known as Vservers) between the sites.

Intercluster LIFs must be configured on each node in the MetroCluster configuration, and the clusters must be configured for peering. The ports with the intercluster LIFs are connected to the customer-provided cluster peering network. Replication of SVM configuration is carried out over this network through the Configuration Replication Service.
[Illustration: the cluster peering network joining cluster_A and cluster_B over the controllers' intercluster Ethernet ports, independent of the FC switch fabrics and ISLs.]

Related concepts
Considerations for configuring cluster peering on page 12

Related tasks
Cabling the cluster peering connections on page 49
Peering the clusters on page 167
Related information
Cluster peering express configuration
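Once intercluster LIFs exist on every node, the peer relationship itself is a single command on each cluster; a hedged sketch with placeholder addresses (ONTAP prompts for or generates a passphrase, depending on the release):

cluster_A::> cluster peer create -peer-addrs 192.168.100.21,192.168.100.22
cluster_B::> cluster peer create -peer-addrs 192.168.100.11,192.168.100.12
cluster_A::> cluster peer show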
Required MetroCluster components and naming
conventions
When planning your MetroCluster configuration, you must understand the required and supported
hardware and software components. For convenience and clarity, you should also understand the
naming conventions used for components in examples throughout the documentation. Also, one site
is referred to as Site A and the other site is referred to as Site B.
Supported software and hardware
The hardware and software must be supported for the MetroCluster configuration.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and ONTAP version to refine your search.
You can click Show Results to display the list of supported configurations that match the criteria.
NetApp Hardware Universe
When using All Flash Optimized systems, all controller modules in the MetroCluster configuration
must be configured as All Flash Optimized systems.
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B, and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Component: Two ONTAP clusters, one at each MetroCluster site
Naming must be unique within the MetroCluster configuration.
Example names: cluster_A (Site A); cluster_B (Site B)

Component: Four FC switches (supported Brocade or Cisco models)
The four switches form two switch storage fabrics that provide the ISL between each of the clusters in the MetroCluster configuration. Naming must be unique within the MetroCluster configuration.
Example names: FC_switch_A_1 and FC_switch_A_2 (Site A); FC_switch_B_1 and FC_switch_B_2 (Site B)

Component: Controller modules (two, four, or eight)
Naming must be unique within the MetroCluster configuration. The controller modules at each site form one or two HA pairs. Each controller has a DR partner at the other site. All controller modules must be the same platform model and running the same version of ONTAP. Some controller modules can be ordered with one of two options for FC-VI connectivity: either onboard FC-VI ports or an FC-VI card in slot 1. All controllers in one DR group must use the same FC-VI configuration. For example, if one node uses onboard FC-VI ports, then all other nodes must use onboard FC-VI ports as well. A mix of one controller using onboard ports and another using an add-on FC-VI card is not supported.
Example names: controller_A_1 through controller_A_4 (Site A); controller_B_1 through controller_B_4 (Site B)

Component: Four cluster interconnect switches (if not using two-node switchless clusters)
These switches provide cluster communication among the storage controllers in each cluster. The switches are not required if the storage controllers at each site are configured as a two-node switchless cluster. Naming must be unique within the MetroCluster configuration.
Example names: clus_sw_A_1 and clus_sw_A_2 (Site A); clust_sw_B_1 and clust_sw_B_2 (Site B)

Component: One pair of FC-to-SAS bridges for each group of SAS disk shelves
• FibreBridge 7500N bridges support up to four SAS stacks. Each stack can use different models of IOM, but all disk shelves within a stack must use the same model.
• FibreBridge 6500N bridges support only one SAS stack.
Naming must be unique within the MetroCluster configuration. See Bridge naming on page 33.
Example names: bridge_A_1a and bridge_A_1b (Site A); bridge_B_1a and bridge_B_1b (Site B)

Component: At least eight SAS disk shelves (recommended)
Four shelves are recommended at each site to allow disk ownership on a per-shelf basis. A minimum of two shelves at each site is supported. A mix of IOM12 modules and IOM3/IOM6 modules is not supported within the same storage stack.
Example names: shelf_A_1_1, shelf_A_2_1, shelf_A_1_2, and shelf_A_2_2 (Site A); shelf_B_1_1, shelf_B_2_1, shelf_B_1_2, and shelf_B_2_2 (Site B)

Note: FlexArray systems support array LUNs and have different storage requirements. See Requirements for a MetroCluster configuration with array LUNs on page 191.
Bridge naming

The bridges use the following example naming: bridge_<site>_<stack group><location in pair>

• site: the site on which the bridge pair physically resides. Possible values: A or B.
• stack group: the number of the stack group to which the bridge pair connects. Possible values: 1, 2, and so on.
  - FibreBridge 7500N bridges support up to four stacks in the stack group. The stack group can contain no more than 10 storage shelves.
  - FibreBridge 6500N bridges support only a single stack in the stack group.
• location in pair: the bridge within the bridge pair. A pair of bridges connects to a specific stack group. Possible values: a or b.

Example bridge names for one stack group on each site:
• bridge_A_1a
• bridge_A_1b
• bridge_B_1a
• bridge_B_1b
Information gathering worksheet for FC switches and FC-to-SAS bridges
Before beginning to configure the MetroCluster sites, you should gather required configuration
information.
Site A, FC switch one (FC_switch_A_1)

Switch configuration parameter     Your value
FC_switch_A_1 IP address
FC_switch_A_1 Username
FC_switch_A_1 Password

Site A, FC switch two (FC_switch_A_2)

Switch configuration parameter     Your value
FC_switch_A_2 IP address
FC_switch_A_2 Username
FC_switch_A_2 Password
Site A, FC-to-SAS bridge 1 (FC_bridge_A_1a)
Each SAS stack requires two FC-to-SAS bridges.
• For FibreBridge 7500N bridges using both FC ports (FC1 and FC2), each bridge connects to
  FC_switch_A_1_port-number and FC_switch_A_2_port-number.
• For FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC port (FC1 or FC2)
  only, one bridge connects to FC_switch_A_1_port-number and the second connects to
  FC_switch_A_2_port-number.

Site A                     Your value
Bridge_A_1a IP address
Bridge_A_1a Username
Bridge_A_1a Password
Site A, FC-to-SAS bridge 2 (FC_bridge_A_1b)
Each SAS stack requires two FC-to-SAS bridges.
• For FibreBridge 7500N bridges using both FC ports (FC1 and FC2), each bridge connects to
  FC_switch_A_1_port-number and FC_switch_A_2_port-number.
• For FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC port (FC1 or FC2)
  only, one bridge connects to FC_switch_A_1_port-number and the second connects to
  FC_switch_A_2_port-number.

Site A                     Your value
Bridge_A_1b IP address
Bridge_A_1b Username
Bridge_A_1b Password
Site B, FC switch one (FC_switch_B_1)

Site B                     Your value
FC_switch_B_1 IP address
FC_switch_B_1 Username
FC_switch_B_1 Password

Site B, FC switch two (FC_switch_B_2)

Site B                     Your value
FC_switch_B_2 IP address
FC_switch_B_2 Username
FC_switch_B_2 Password
Site B, FC-to-SAS bridge 1 (FC_bridge_B_1a)
Each SAS stack requires two FC-to-SAS bridges.
• For FibreBridge 7500N bridges using both FC ports (FC1 and FC2), each bridge connects to
  FC_switch_B_1_port-number and FC_switch_B_2_port-number.
• For FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC port (FC1 or FC2)
  only, one bridge connects to FC_switch_B_1_port-number and the second connects to
  FC_switch_B_2_port-number.

Site B                     Your value
Bridge_B_1a IP address
Bridge_B_1a Username
Bridge_B_1a Password
Site B, FC-to-SAS bridge 2 (FC_bridge_B_1b)
Each SAS stack requires two FC-to-SAS bridges.
• For FibreBridge 7500N bridges using both FC ports (FC1 and FC2), each bridge connects to
  FC_switch_B_1_port-number and FC_switch_B_2_port-number.
• For FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC port (FC1 or FC2)
  only, one bridge connects to FC_switch_B_1_port-number and the second connects to
  FC_switch_B_2_port-number.

Site B                     Your value
Bridge_B_1b IP address
Bridge_B_1b Username
Bridge_B_1b Password
Related information
NetApp Interoperability Matrix Tool
Installing and cabling MetroCluster components
The storage controllers must be cabled to the FC switches and the ISLs must be cabled to link the
MetroCluster sites. The storage controllers must also be cabled to the cluster peering, data and
management networks.
Steps
1. Racking the hardware components on page 36
2. Cabling the FC-VI and HBA connections to the FC switches on page 37
3. Cabling the ISLs between MetroCluster sites on page 38
4. Port assignments for FC switches on page 38
5. Cabling the cluster interconnect in eight- or four-node configurations on page 48
6. Cabling the cluster peering connections on page 49
7. Cabling the HA interconnect, if necessary on page 49
8. Cabling the management and data connections on page 50
Racking the hardware components
If you have not received the equipment already installed in cabinets, you must rack the components.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Plan out the positioning of the MetroCluster components.
The rack space depends on the platform model of the storage controllers, the switch types, and the
number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
4. Install the FC switches in the rack or cabinet.
5. Install the disk shelves, power them on, and set the shelf IDs.
NetApp Documentation: Disk Shelves
• You must power-cycle each disk shelf.
• Shelf IDs must be unique for each SAS disk shelf within each MetroCluster DR group
  (including both sites).
6. Install each FC-to-SAS bridge:
a. Secure the “L” brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
The openings in the bridge “L” brackets are compliant with rack standard ETA-310-X for
19-inch (482.6 mm) racks.
For more information and an illustration of the installation, see the ATTO FibreBridge
Installation and Operation Manual for your bridge model.
Note: For adequate port space access and FRU serviceability, you must leave 1U space
below the bridge pair and cover this space with a tool-less blanking panel.
b. Connect each bridge to a power source that provides a proper ground.
c. Power on each bridge.
Note: For maximum resiliency, bridges that are attached to the same stack of disk shelves
must be connected to different power sources.
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Cabling the FC-VI and HBA connections to the FC switches
The FC-VI ports and HBAs (host bus adapters) must be cabled to the site FC switches on each
controller module in the MetroCluster configuration.
About this task
This task must be performed on both MetroCluster sites.
Step
1. Cable the FC-VI ports and HBA ports, using the table for your configuration and switch model.
Port assignments for FC switches on page 38
Cabling the ISLs between MetroCluster sites
You must connect the FC switches at each site through the fiber-optic Inter-Switch Links (ISLs) to
form the switch fabrics that connect the MetroCluster components.
About this task
This must be done for both switch fabrics.
Step
1. Connect the FC switches at each site to all ISLs, using the cabling in the table that corresponds to
your configuration and switch model.
Port assignments for FC switches on page 38
Port assignments for FC switches
You need to verify that you are using the specified port assignments when you cable the FC switches.
Ports that are not used for attaching initiator ports, FC-VI ports, or ISLs can be reconfigured to act as
storage ports. However, if the supported RCFs are being used, the zoning must be changed
accordingly.
If the supported RCF files are used, ISL ports may not connect to the same ports shown here and may
need to be reconfigured manually.
Overall cabling guidelines
You should be aware of the following guidelines when using the cabling tables:
• The Brocade and Cisco switches use different port numbering:
  ◦ On Brocade switches, the first port is numbered 0.
  ◦ On Cisco switches, the first port is numbered 1.
• The cabling is the same for each FC switch in the switch fabric.
• AFF A300 and FAS8200 storage systems can be ordered with one of two options for FC-VI
  connectivity:
  ◦ Onboard ports 0e and 0f configured in FC-VI mode.
  ◦ Ports 1a and 1b on an FC-VI card in slot 1.
• AFF A700 and FAS9000 storage systems support four FC-VI ports. The following tables show
  cabling for the FC switches with four FC-VI ports on each controller.
  For other storage systems, use the cabling shown in the tables but ignore the cabling for
  FC-VI ports c and d. You can leave those ports empty.
Brocade port usage for controllers in a MetroCluster configuration
The following tables show port usage on Brocade switches. The tables show the maximum supported
configuration, with eight controller modules in two DR groups. For smaller configurations, ignore the
rows for the additional controller modules. Note that eight ISLs are supported only on the Brocade
6510 switch.
Configurations using FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC
port (FC1 or FC2) only

DR GROUP 1
                                Brocade 6505     Brocade 6510     Brocade 6520
Component       Port            Sw 1    Sw 2     Sw 1    Sw 2     Sw 1    Sw 2
controller_x_1  FC-VI port a    0       -        0       -        0       -
                FC-VI port b    -       0        -       0        -       0
                FC-VI port c    1       -        1       -        1       -
                FC-VI port d    -       1        -       1        -       1
                HBA port a      2       -        2       -        2       -
                HBA port b      -       2        -       2        -       2
                HBA port c      3       -        3       -        3       -
                HBA port d      -       3        -       3        -       3
controller_x_2  FC-VI port a    4       -        4       -        4       -
                FC-VI port b    -       4        -       4        -       4
                FC-VI port c    5       -        5       -        5       -
                FC-VI port d    -       5        -       5        -       5
                HBA port a      6       -        6       -        6       -
                HBA port b      -       6        -       6        -       6
                HBA port c      7       -        7       -        7       -
                HBA port d      -       7        -       7        -       7
Stack 1         bridge_x_1a     8       -        8       -        8       -
                bridge_x_1b     -       8        -       8        -       8
Stack 2         bridge_x_2a     9       -        9       -        9       -
                bridge_x_2b     -       9        -       9        -       9
Stack 3         bridge_x_3a     10      -        10      -        10      -
                bridge_x_3b     -       10       -       10       -       10
Stack y         bridge_x_ya     11      -        11      -        11      -
                bridge_x_yb     -       11       -       11       -       11
Configurations using FibreBridge 6500N bridges or FibreBridge 7500N bridges using one FC
port (FC1 or FC2) only

DR GROUP 2
                                Brocade 6505     Brocade 6510     Brocade 6520
Component       Port                             Sw 1    Sw 2     Sw 1    Sw 2
controller_x_3  FC-VI port a    Not supported    24      -        48      -
                FC-VI port b                     -       24       -       48
                FC-VI port c                     25      -        49      -
                FC-VI port d                     -       25       -       49
                HBA port a                       26      -        50      -
                HBA port b                       -       26       -       50
                HBA port c                       27      -        51      -
                HBA port d                       -       27       -       51
controller_x_4  FC-VI port a    Not supported    28      -        52      -
                FC-VI port b                     -       28       -       52
                FC-VI port c                     29      -        53      -
                FC-VI port d                     -       29       -       53
                HBA port a                       30      -        54      -
                HBA port b                       -       30       -       54
                HBA port c                       31      -        55      -
                HBA port d                       -       31       -       55
Stack 1         bridge_x_51a    Not supported    32      -        56      -
                bridge_x_51b                     -       32       -       56
Stack 2         bridge_x_52a                     33      -        57      -
                bridge_x_52b                     -       33       -       57
Stack 3         bridge_x_53a                     34      -        58      -
                bridge_x_53b                     -       34       -       58
Stack y         bridge_x_5ya                     35      -        59      -
                bridge_x_5yb                     -       35       -       59

ISLs
                Brocade 6505     Brocade 6510     Brocade 6520
                Sw 1    Sw 2     Sw 1    Sw 2     Sw 1    Sw 2
ISL 1           20      20       40      40       23      23
ISL 2           21      21       41      41       47      47
ISL 3           22      22       42      42       71      71
ISL 4           23      23       43      43       95      95
ISL 5           Not supported    44      44       Not supported
ISL 6                            45      45
ISL 7                            46      46
ISL 8                            47      47
Configurations using FibreBridge 7500N bridges using both FC ports (FC1 and FC2)

DR GROUP 1
                                   Brocade 6505     Brocade 6510     Brocade 6520
Component       Port               Sw 1    Sw 2     Sw 1    Sw 2     Sw 1    Sw 2
controller_x_1  FC-VI port a       0       -        0       -        0       -
                FC-VI port b       -       0        -       0        -       0
                FC-VI port c       1       -        1       -        1       -
                FC-VI port d       -       1        -       1        -       1
                HBA port a         2       -        2       -        2       -
                HBA port b         -       2        -       2        -       2
                HBA port c         3       -        3       -        3       -
                HBA port d         -       3        -       3        -       3
controller_x_2  FC-VI port a       4       -        4       -        4       -
                FC-VI port b       -       4        -       4        -       4
                FC-VI port c       5       -        5       -        5       -
                FC-VI port d       -       5        -       5        -       5
                HBA port a         6       -        6       -        6       -
                HBA port b         -       6        -       6        -       6
                HBA port c         7       -        7       -        7       -
                HBA port d         -       7        -       7        -       7
Stack 1         bridge_x_1a FC1    8       -        8       -        8       -
                bridge_x_1a FC2    -       8        -       8        -       8
                bridge_x_1b FC1    9       -        9       -        9       -
                bridge_x_1b FC2    -       9        -       9        -       9
Stack 2         bridge_x_2a FC1    10      -        10      -        10      -
                bridge_x_2a FC2    -       10       -       10       -       10
                bridge_x_2b FC1    11      -        11      -        11      -
                bridge_x_2b FC2    -       11       -       11       -       11
Stack 3         bridge_x_3a FC1    12      -        12      -        12      -
                bridge_x_3a FC2    -       12       -       12       -       12
                bridge_x_3b FC1    13      -        13      -        13      -
                bridge_x_3b FC2    -       13       -       13       -       13
Stack y         bridge_x_ya FC1    14      -        14      -        14      -
                bridge_x_ya FC2    -       14       -       14       -       14
                bridge_x_yb FC1    15      -        15      -        15      -
                bridge_x_yb FC2    -       15       -       15       -       15
Configurations using FibreBridge 7500N bridges using both FC ports (FC1 and FC2)

DR GROUP 2
                                    Brocade 6505     Brocade 6510     Brocade 6520
Component       Port                                 Sw 1    Sw 2     Sw 1    Sw 2
controller_x_3  FC-VI port a        Not supported    24      -        48      -
                FC-VI port b                         -       24       -       48
                FC-VI port c                         25      -        49      -
                FC-VI port d                         -       25       -       49
                HBA port a                           26      -        50      -
                HBA port b                           -       26       -       50
                HBA port c                           27      -        51      -
                HBA port d                           -       27       -       51
controller_x_4  FC-VI port a        Not supported    28      -        52      -
                FC-VI port b                         -       28       -       52
                FC-VI port c                         29      -        53      -
                FC-VI port d                         -       29       -       53
                HBA port a                           30      -        54      -
                HBA port b                           -       30       -       54
                HBA port c                           31      -        55      -
                HBA port d                           -       31       -       55
Stack 1         bridge_x_51a FC1    Not supported    32      -        56      -
                bridge_x_51a FC2                     -       32       -       56
                bridge_x_51b FC1                     33      -        57      -
                bridge_x_51b FC2                     -       33       -       57
Stack 2         bridge_x_52a FC1                     34      -        58      -
                bridge_x_52a FC2                     -       34       -       58
                bridge_x_52b FC1                     35      -        59      -
                bridge_x_52b FC2                     -       35       -       59
Stack 3         bridge_x_53a FC1                     36      -        60      -
                bridge_x_53a FC2                     -       36       -       60
                bridge_x_53b FC1                     37      -        61      -
                bridge_x_53b FC2                     -       37       -       61
Stack y         bridge_x_5ya FC1                     38      -        62      -
                bridge_x_5ya FC2                     -       38       -       62
                bridge_x_5yb FC1                     39      -        63      -
                bridge_x_5yb FC2                     -       39       -       63

ISLs
                Brocade 6505     Brocade 6510     Brocade 6520
                Sw 1    Sw 2     Sw 1    Sw 2     Sw 1    Sw 2
ISL 1           20      20       40      40       23      23
ISL 2           21      21       41      41       47      47
ISL 3           22      22       42      42       71      71
ISL 4           23      23       43      43       95      95
ISL 5           Not supported    44      44       Not supported
ISL 6                            45      45
ISL 7                            46      46
ISL 8                            47      47
Cisco port usage for controllers in a MetroCluster configuration
The tables show the maximum supported configuration, with eight controller modules in two DR
groups. For smaller configurations, ignore the rows for the additional controller modules.
Cisco 9396S
Component       Port            Switch 1    Switch 2
controller_x_1  FC-VI port a    1           -
                FC-VI port b    -           1
                FC-VI port c    2           -
                FC-VI port d    -           2
                HBA port a      3           -
                HBA port b      -           3
                HBA port c      4           -
                HBA port d      -           4
controller_x_2  FC-VI port a    5           -
                FC-VI port b    -           5
                FC-VI port c    6           -
                FC-VI port d    -           6
                HBA port a      7           -
                HBA port b      -           7
                HBA port c      8           -
                HBA port d      -           8
controller_x_3  FC-VI port a    49          -
                FC-VI port b    -           49
                FC-VI port c    50          -
                FC-VI port d    -           50
                HBA port a      51          -
                HBA port b      -           51
                HBA port c      52          -
                HBA port d      -           52
controller_x_4  FC-VI port a    53          -
                FC-VI port b    -           53
                FC-VI port c    54          -
                FC-VI port d    -           54
                HBA port a      55          -
                HBA port b      -           55
                HBA port c      56          -
                HBA port d      -           56
Cisco 9148 or 9148S
Component       Port            Switch 1    Switch 2
controller_x_1  FC-VI port a    1           -
                FC-VI port b    -           1
                HBA port a      2           -
                HBA port b      -           2
                HBA port c      3           -
                HBA port d      -           3
controller_x_2  FC-VI port a    4           -
                FC-VI port b    -           4
                HBA port a      5           -
                HBA port b      -           5
                HBA port c      6           -
                HBA port d      -           6
controller_x_3  FC-VI port a    7           -
                FC-VI port b    -           7
                HBA port a      8           -
                HBA port b      -           8
                HBA port c      9           -
                HBA port d      -           9
controller_x_4  FC-VI port a    10          -
                FC-VI port b    -           10
                HBA port a      11          -
                HBA port b      -           11
                HBA port c      13          -
                HBA port d      -           13
Cisco port usage for FC-to-SAS bridges

Cisco 9396S
FibreBridge 7500N using two FC ports    Port    Switch 1    Switch 2
bridge_x_1a                             FC1     9           -
                                        FC2     -           9
bridge_x_1b                             FC1     10          -
                                        FC2     -           10
bridge_x_2a                             FC1     11          -
                                        FC2     -           11
bridge_x_2b                             FC1     12          -
                                        FC2     -           12
bridge_x_3a                             FC1     13          -
                                        FC2     -           13
bridge_x_3b                             FC1     14          -
                                        FC2     -           14
bridge_x_4a                             FC1     15          -
                                        FC2     -           15
bridge_x_4b                             FC1     16          -
                                        FC2     -           16
Additional bridges can be attached using ports 17 through 40 and 57 through 88 following the
same pattern.
Cisco 9148 or 9148S
FibreBridge 7500N using two FC ports    Port    Switch 1    Switch 2
bridge_x_1a                             FC1     14          -
                                        FC2     -           14
bridge_x_1b                             FC1     15          -
                                        FC2     -           15
bridge_x_2a                             FC1     17          -
                                        FC2     -           17
bridge_x_2b                             FC1     18          -
                                        FC2     -           18
bridge_x_3a                             FC1     19          -
                                        FC2     -           19
bridge_x_3b                             FC1     21          -
                                        FC2     -           21
bridge_x_4a                             FC1     22          -
                                        FC2     -           22
bridge_x_4b                             FC1     23          -
                                        FC2     -           23
Additional bridges can be attached using ports 25 through 48 following the same pattern.
The following table shows bridge port usage up to port 23 when using FibreBridge 6500N bridges
or FibreBridge 7500N bridges using one FC port (FC1 or FC2) only. For FibreBridge 7500N
bridges using one FC port, either FC1 or FC2 can be cabled to the port indicated as FC1.
Additional bridges can be attached using ports 25 through 48.
Cisco 9396S
FibreBridge 6500N bridge or
FibreBridge 7500N using one FC port    Port    Switch 1    Switch 2
bridge_x_1a                            FC1     9           -
bridge_x_1b                            FC1     -           9
bridge_x_2a                            FC1     10          -
bridge_x_2b                            FC1     -           10
bridge_x_3a                            FC1     11          -
bridge_x_3b                            FC1     -           11
bridge_x_4a                            FC1     12          -
bridge_x_4b                            FC1     -           12
bridge_x_5a                            FC1     13          -
bridge_x_5b                            FC1     -           13
bridge_x_6a                            FC1     14          -
bridge_x_6b                            FC1     -           14
bridge_x_7a                            FC1     15          -
bridge_x_7b                            FC1     -           15
bridge_x_8a                            FC1     16          -
bridge_x_8b                            FC1     -           16
Additional bridges can be attached using ports 17 through 40 and 57 through 88 following
the same pattern.
Cisco 9148 or 9148S
FibreBridge 6500N bridge or
FibreBridge 7500N using one FC port    Port    Switch 1    Switch 2
bridge_x_1a                            FC1     14          -
bridge_x_1b                            FC1     -           14
bridge_x_2a                            FC1     15          -
bridge_x_2b                            FC1     -           15
bridge_x_3a                            FC1     17          -
bridge_x_3b                            FC1     -           17
bridge_x_4a                            FC1     18          -
bridge_x_4b                            FC1     -           18
bridge_x_5a                            FC1     19          -
bridge_x_5b                            FC1     -           19
bridge_x_6a                            FC1     21          -
bridge_x_6b                            FC1     -           21
bridge_x_7a                            FC1     22          -
bridge_x_7b                            FC1     -           22
bridge_x_8a                            FC1     23          -
bridge_x_8b                            FC1     -           23
Additional bridges can be attached using ports 25 through 48 following the same pattern.
Cisco port usage for ISLs
The following table shows ISL port usage. ISL port usage is the same on all switches in the
configuration.
ISL connection    Cisco 9396S switch port    Cisco 9148 or 9148S switch port
ISL 1             44                         12
ISL 2             48                         16
ISL 3             92                         20
ISL 4             96                         24
Cabling the cluster interconnect in eight- or four-node configurations
In eight- or four-node MetroCluster configurations, you must cable the cluster interconnect between
the local controller modules at each site.
About this task
This task is not required on two-node MetroCluster configurations.
This task must be performed at both MetroCluster sites.
Step
1. Cable the cluster interconnect from one controller module to the other, or if cluster interconnect
switches are used, from each controller module to the switches.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Related information
Network and LIF management
Cabling the cluster peering connections
You must cable the controller ports used for cluster peering so that they have connectivity with the
cluster on the partner site.
About this task
This task must be performed on each controller in the MetroCluster configuration.
At least two ports on each controller should be used for cluster peering.
The recommended minimum bandwidth for the ports and network connectivity is 1 GbE.
Step
1. Identify and cable at least two ports for cluster peering and verify they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports provides
higher throughput for the cluster peering traffic.
Cluster peering express configuration
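As a minimal sketch only, after the intercluster LIFs have been configured in ONTAP, commands
such as the following can be used to confirm which ports serve cluster peering and, once the
peer relationship exists, that it is healthy (the cluster_A prompt follows the example names
used in this guide; output is not shown here):

cluster_A::> network interface show -role intercluster
cluster_A::> cluster peer health show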
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Data protection using SnapMirror and SnapVault technology
Cluster peering express configuration
Cabling the HA interconnect, if necessary
If you have an eight- or a four-node MetroCluster configuration and the storage controllers within the
HA pairs are in separate chassis, you must cable the HA interconnect between the controllers.
About this task
• This task does not apply to two-node MetroCluster configurations.
• This task must be performed at both MetroCluster sites.
• The HA interconnect must be cabled only if the storage controllers within the HA pair are in
  separate chassis.
  Some storage controller models support two controllers in a single chassis, in which case they
  use an internal HA interconnect.
Steps
1. Cable the HA interconnect if the storage controller's HA partner is in a separate chassis.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
2. If the MetroCluster site includes two HA pairs, repeat the previous steps on the second HA pair.
3. Repeat this task at the MetroCluster partner site.
Cabling the management and data connections
You must cable the management and data ports on each storage controller to the site networks.
About this task
This task must be repeated for each new controller at both MetroCluster sites.
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Configuring the FC switches
For fabric-attached MetroCluster systems that were not pre-configured in the factory, you must
configure each FC switch in the DR group. You can do this manually or, depending on the
switch, by using a configuration file.
About this task
For new systems, the FC switch fabrics are typically configured for two ISLs and do not require
additional configuration unless you want to change the pre-configured IP addresses.
Choices
• Configuring the FC switches by running a configuration file on page 51
• Configuring the Cisco or Brocade FC switches manually on page 52
Configuring the FC switches by running a configuration file
If you want to simplify the process of configuring switches, you can download and execute switch
configuration files that provide the complete switch settings for certain configurations.
About this task
The reference configuration files (RCFs) do not support configurations using eight ISLs. If you are
using eight ISLs you must configure the switches manually.
The RCFs apply to two-, four-, and eight-node configurations. The RCF download page indicates
the number of nodes supported by the different switch models.
Choices
• Configuring Brocade FC switches with configuration files on page 51
• Configuring the Cisco FC switches with configuration files on page 52
Configuring Brocade FC switches with configuration files
When you configure a Brocade FC switch, you can download and execute switch configuration files
that provide the complete switch settings for certain configurations.
Before you begin
You must have access to an FTP server. The switches must have connectivity with the FTP server.
About this task
Each configuration file is different and must be used with the correct switch. Only one of the
configuration files for each switch fabric contains zoning commands.
Steps
1. Go to the software download page.
NetApp Downloads: Software
2. In the list of products, find the row for Fibre Channel Switch, and in the drop-down list select
Brocade.
3. On the Fibre Channel Switch for Brocade page, click View & Download.
4. On the Fibre Channel Switch - Brocade page, click the MetroCluster link.
5. Follow the directions on the MetroCluster Configuration Files for Brocade Switches
description page to download and run the files.
Configuring the Cisco FC switches with configuration files
To configure a Cisco FC switch, you can download and execute switch configuration files that
provide the complete switch settings for certain configurations.
Steps
1. Go to the software download page.
NetApp Downloads: Software
2. In the list of products, find the row for Fibre Channel Switch, and then select Cisco from the
drop-down list.
3. On the Fibre Channel Switch for Cisco page, click the View & Download button.
4. On the Fibre Channel Switch - Cisco page, click the MetroCluster link.
5. Follow the directions on the MetroCluster Configuration Files for Cisco Switches Description
page to download and run the files.
Configuring the Cisco or Brocade FC switches manually
Switch configuration procedures and commands are different, depending on the switch vendor.
Choices
• Configuring the Brocade FC switches on page 52
• Configuring the Cisco FC switches on page 90
Configuring the Brocade FC switches
You must configure each of the Brocade switch fabrics in the MetroCluster configuration.
Before you begin
• You must have a PC or UNIX workstation with Telnet or SSH access to the FC switches.
• You must be using four supported Brocade switches of the same model with the same Brocade
  Fabric Operating System (FOS) version and licensing.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and ONTAP version to refine your
  search. You can click Show Results to display the list of supported configurations that match
  the criteria.
• You must have four switches; the MetroCluster configuration requires four switches.
  The four switches must be connected to two fabrics of two switches each, with each fabric
  spanning both sites.
• Two initiator ports must be connected from each storage controller to each fabric.
  Each storage controller must have four initiator ports available to connect to the switch
  fabrics.
About this task
• ISL trunking is required.
• All ISLs in one fabric must have the same length and the same speed.
  Different lengths can be used in the different fabrics. The same speed must be used in all
  fabrics.
• Metro-E and TDM (SONET/SDH) are not supported; any non-FC native framing or signaling is
  not supported.
  Metro-E means Ethernet framing/signaling occurs either natively over a Metro distance or
  through some TDM, MPLS, or WDM.
• TDMs, FCR (native FC Routing), or FCIP extensions are not supported for the MetroCluster FC
  switch fabric.
• Third-party encryption devices are not supported on any link in the MetroCluster FC switch
  fabric, including the ISL links across the WAN.
• Certain switches in the MetroCluster FC switch fabric support encryption or compression, and
  sometimes support both.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and ONTAP version to refine your
  search. You can click Show Results to display the list of supported configurations that match
  the criteria.
• The Brocade Virtual Fabric (VF) feature is not supported.
Steps
1. Reviewing Brocade license requirements on page 53
2. Setting the Brocade FC switch values to factory defaults on page 54
3. Configuring basic switch settings on page 56
4. Configuring E-ports on Brocade FC switches on page 59
5. Configuring the non-E-ports on the Brocade switch on page 64
6. Configuring zoning on Brocade FC switches on page 65
7. Setting ISL encryption on Brocade 6510 switches on page 86
Related information
NetApp Interoperability Matrix Tool
Reviewing Brocade license requirements
You need certain licenses for the switches in a MetroCluster configuration. You must install these
licenses on all four switches.
The MetroCluster configuration has the following Brocade license requirements:
• Trunking license for systems using more than one ISL, as recommended.
• Extended Fabric license (for ISL distances over 6 km)
• Enterprise license for sites with more than one ISL and an ISL distance greater than 6 km
  The Enterprise license includes Brocade Network Advisor and all licenses except for additional
  port licenses.
You can verify that the licenses are installed by using the licenseshow command. If you do not
have these licenses, you should contact your sales representative before proceeding.
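For example, a sketch of the verification on one switch follows; the output (not shown here)
lists each installed license, and the check must be repeated on all four switches:

FC_switch_A_1:admin> licenseshow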
Setting the Brocade FC switch values to factory defaults
You must set the switch to its factory defaults to ensure a successful configuration. You must also
assign each switch a unique name.
About this task
In the examples in this procedure, the fabric consists of BrocadeSwitchA and BrocadeSwitchB.
Steps
1. Make a console connection and log in to both switches in one fabric.
2. Disable the switch persistently:
switchcfgpersistentdisable
This ensures the switch will remain disabled after a reboot or fastboot. If this command is not
available, use the switchdisable command.
Example
The following example shows the command on BrocadeSwitchA:
BrocadeSwitchA:admin> switchcfgpersistentdisable
The following example shows the command on BrocadeSwitchB:
BrocadeSwitchB:admin> switchcfgpersistentdisable
3. Enter switchname switch_name to set the switch name.
The switches should each have a unique name. After setting the name, the prompt changes
accordingly.
Example
The following example shows the command on BrocadeSwitchA:
BrocadeSwitchA:admin> switchname "FC_switch_A_1"
FC_switch_A_1:admin>
The following example shows the command on BrocadeSwitchB:
BrocadeSwitchB:admin> switchname "FC_switch_B_1"
FC_switch_B_1:admin>
4. Set all ports to their default values by issuing the following command for each port:
portcfgdefault
This must be done for all ports on the switch.
Example
The following example shows the commands on FC_switch_A_1:
FC_switch_A_1:admin> portcfgdefault 0
FC_switch_A_1:admin> portcfgdefault 1
...
FC_switch_A_1:admin> portcfgdefault 39
The following example shows the commands on FC_switch_B_1:
FC_switch_B_1:admin> portcfgdefault 0
FC_switch_B_1:admin> portcfgdefault 1
...
FC_switch_B_1:admin> portcfgdefault 39
5. Clear the zoning information by issuing the following commands:
cfgdisable
cfgclear
cfgsave
Example
The following example shows the commands on FC_switch_A_1:
FC_switch_A_1:admin> cfgdisable
FC_switch_A_1:admin> cfgclear
FC_switch_A_1:admin> cfgsave
The following example shows the commands on FC_switch_B_1:
FC_switch_B_1:admin> cfgdisable
FC_switch_B_1:admin> cfgclear
FC_switch_B_1:admin> cfgsave
6. Set the general switch settings to default:
configdefault
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> configdefault
The following example shows the command on FC_switch_B_1:
FC_switch_B_1:admin> configdefault
7. Set all ports to non-trunking mode:
switchcfgtrunk 0
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> switchcfgtrunk 0
The following example shows the command on FC_switch_B_1:
FC_switch_B_1:admin> switchcfgtrunk 0
8. On Brocade 6510 switches, disable the Brocade Virtual Fabrics (VF) feature:
fosconfig --disable vf
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> fosconfig --disable vf
The following example shows the command on FC_switch_B_1:
FC_switch_B_1:admin> fosconfig --disable vf
9. Clear the Administrative Domain (AD) configuration:
ad options
Example
The following example shows the commands on FC_switch_A_1:
FC_switch_A_1:admin> ad --select AD0
FC_switch_A_1:> defzone --noaccess
FC_switch_A_1:> cfgsave
FC_switch_A_1:> exit
FC_switch_A_1:admin> ad --clear -f
FC_switch_A_1:admin> ad --apply
FC_switch_A_1:admin> ad --save
FC_switch_A_1:admin> exit
The following example shows the commands on FC_switch_B_1:
FC_switch_B_1:admin> ad --select AD0
FC_switch_B_1:> defzone --noaccess
FC_switch_B_1:> cfgsave
FC_switch_B_1:> exit
FC_switch_B_1:admin> ad --clear -f
FC_switch_B_1:admin> ad --apply
FC_switch_B_1:admin> ad --save
FC_switch_B_1:admin> exit
10. Reboot the switch by issuing the following command:
reboot
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> reboot
The following example shows the command on FC_switch_B_1:
FC_switch_B_1:admin> reboot
Configuring basic switch settings
You must configure basic global settings, including the domain ID, for Brocade switches.
About this task
This task contains steps that must be performed on each switch at both the MetroCluster sites.
In this procedure, you set the unique domain ID for each switch as shown in the following examples:
• FC_switch_A_1 is assigned to domain ID 5
• FC_switch_A_2 is assigned to domain ID 6
• FC_switch_B_1 is assigned to domain ID 7
• FC_switch_B_2 is assigned to domain ID 8
In this example, domain IDs 5 and 7 form fabric_1, and domain IDs 6 and 8 form fabric_2.
Steps
1. Enter configuration mode:
configure
2. Proceed through the prompts:
a. Set the domain ID for the switch.
b. Press Enter in response to the prompts until you get to RDP Polling Cycle, and then set that
value to 0 to disable the polling.
c. Press Enter in response to the prompts until you get to RSCN Transmission Mode, and then
set that value to y.
d. Press Enter until you return to the switch prompt.
Example
FC_switch_A_1:admin> configure
Fabric parameters = y
Domain_id = 5
.
.
RSCN Transmission Mode (yes, y, no, n): [no] y
3. If you are using two or more ISLs per fabric, then you can configure either in-order delivery
(IOD) of frames or out-of-order (OOD) delivery of frames.
Note: The standard IOD settings are recommended. You should configure OOD only if
necessary.
Considerations for using TDM/xWDM equipment with fabric-attached MetroCluster
configurations on page 15
• The following steps must be performed on each switch fabric to configure in-order delivery
  of frames:
a. Enable in-order delivery:
iodset
b. Set the Advanced Performance Tuning (APT) policy to 1:
aptpolicy 1
c. Disable Dynamic Load Sharing (DLS):
dlsreset
d. Verify the IOD settings by using the iodshow, aptpolicy, and dlsshow commands.
For example, issue the following commands on FC_switch_A_1:
FC_switch_A_1:admin> iodshow
IOD is set
FC_switch_A_1:admin> aptpolicy
Current Policy: 1 0(ap)
3 0(ap) : Default Policy
1: Port Based Routing Policy
3: Exchange Based Routing Policy
0: AP Shared Link Policy
1: AP Dedicated Link Policy
command aptpolicy completed
FC_switch_A_1:admin> dlsshow
DLS is not set
e. Repeat these steps on the second switch fabric.
• The following steps must be performed on each switch fabric to configure out-of-order
  delivery of frames:
a. Enable out-of-order delivery:
iodreset
b. Set the Advanced Performance Tuning (APT) policy to 3:
aptpolicy 3
c. Disable Dynamic Load Sharing (DLS):
dlsreset
d. Verify the OOD settings by using the iodshow, aptpolicy and dlsshow commands.
For example, issue the following commands on FC_switch_A_1:
FC_switch_A_1:admin> iodshow
IOD is not set
FC_switch_A_1:admin> aptpolicy
Current Policy: 3 0(ap)
3 0(ap) : Default Policy
1: Port Based Routing Policy
3: Exchange Based Routing Policy
0: AP Shared Link Policy
1: AP Dedicated Link Policy
command aptpolicy completed
FC_switch_A_1:admin> dlsshow
DLS is set by default with current routing policy
e. Repeat these steps on the second switch fabric.
Note: When configuring ONTAP on the controller modules, OOD must be explicitly
configured on each controller module in the MetroCluster configuration.
Configuring in-order delivery or out-of-order delivery of frames on ONTAP software on
page 180
4. Enable dynamic port licensing:
licenseport --method dynamic
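Example
The following example shows the command on FC_switch_A_1; repeat it on each switch:

FC_switch_A_1:admin> licenseport --method dynamic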
5. Enable the trap for T11-FC-ZONE-SERVER-MIB to provide successful health monitoring of the
switches in ONTAP:
a. Enable the T11-FC-ZONE-SERVER-MIB:
snmpconfig --set mibCapability -mib_name T11-FC-ZONE-SERVER-MIB -bitmask 0x3f
b. Enable the T11-FC-ZONE-SERVER-MIB trap:
snmpconfig --enable mibcapability -mib_name SW-MIB -trap_name
swZoneConfigChangeTrap
c. Repeat the previous steps on the second switch fabric.
6. Reboot the switch:
reboot
Example
On FC_switch_A_1:
FC_switch_A_1:admin> reboot
On FC_switch_B_1:
FC_switch_B_1:admin> reboot
7. Persistently enable the switch:
switchcfgpersistentenable
Example
On FC_switch_A_1:
FC_switch_A_1:admin> switchcfgpersistentenable
On FC_switch_B_1:
FC_switch_B_1:admin> switchcfgpersistentenable
Configuring E-ports on Brocade FC switches
On each switch fabric, you must configure the switch ports that connect the Inter-Switch Link (ISL).
These ISL ports are also known as E-ports.
Before you begin
• All of the ISLs in an FC switch fabric must be configured with the same speed and distance.
• The combination of the switch port and small form-factor pluggable (SFP) must support the
  speed.
  The ISL must be using one of the supported speeds: 4 Gbps, 8 Gbps, or 16 Gbps.
• The supported ISL distance depends on the FC switch model.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and ONTAP version to refine your
  search. You can click Show Results to display the list of supported configurations that match
  the criteria.
• The ISL link must have a dedicated lambda, and the link must be supported by Brocade for the
  distance, switch type, and Fabric Operating System (FOS).
About this task
You must not use the L0 setting when issuing the portCfgLongDistance command. Instead, you
should use the LE or LS setting to configure the distance on the Brocade switches with a minimum of
LE distance level.
You must not use the LD setting when issuing the portCfgLongDistance command when working
with xWDM/TDM equipment. Instead, you should use the LE or LS setting to configure the distance
on the Brocade switches.
You must perform this task for each FC switch fabric.
The following tables show the ISL ports for different switches and different number of ISLs. The
examples shown in this section are for a Brocade 6505 switch. You should modify the examples to
use ports that apply to your switch type.
Brocade 6505
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
20                         20
21                         21
22                         22
23                         23

Brocade 6520
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
23                         23
47                         47
71                         71
95                         95

Brocade 6510
FC_switch_A_1 ISL ports    FC_switch_B_1 ISL ports
40                         40
41                         41
42                         42
43                         43
44                         44
45                         45
46                         46
47                         47
Steps
1. Configure the port speed:
portcfgspeed port-number speed
You must use the highest common speed that is supported by the components in the path.
Example
In the following example, there is one ISL for each fabric:
FC_switch_A_1:admin> portcfgspeed 20 16
FC_switch_B_1:admin> portcfgspeed 20 16
In the following example, there are two ISLs for each fabric:
FC_switch_A_1:admin> portcfgspeed 20 16
FC_switch_A_1:admin> portcfgspeed 21 16
FC_switch_B_1:admin> portcfgspeed 20 16
FC_switch_B_1:admin> portcfgspeed 21 16
2. If more than one ISL is used for each fabric, enable trunking for each ISL port:
portcfgtrunkport port-number 1
Example
FC_switch_A_1:admin> portcfgtrunkport 20 1
FC_switch_A_1:admin> portcfgtrunkport 21 1
FC_switch_B_1:admin> portcfgtrunkport 20 1
FC_switch_B_1:admin> portcfgtrunkport 21 1
3. Enable QoS traffic for each of the ISL ports:
portcfgqos --enable port-number
Example
In the following example, there is one ISL per switch fabric:
FC_switch_A_1:admin> portcfgqos --enable 20
FC_switch_B_1:admin> portcfgqos --enable 20
In the following example, there are two ISLs per switch fabric:
FC_switch_A_1:admin> portcfgqos --enable 20
FC_switch_A_1:admin> portcfgqos --enable 21
FC_switch_B_1:admin> portcfgqos --enable 20
FC_switch_B_1:admin> portcfgqos --enable 21
4. Verify the settings:
portcfgshow
Example
The following example shows the output for a configuration that uses two ISLs cabled to port 10
and port 11:

FC_switch_A_1:admin> portcfgshow
[Output abridged: the full portcfgshow listing shows one column per port and one row per
setting (Speed, Fill Word, AL_PA Offset, Trunk Port, Long Distance, VC Link Init, NPIV
capability, QOS E_Port, Credit Recovery, Fault Delay, and so on). The ISL ports show the
configured speed of 16G and Trunk Port set to ON.]
5. Set the authentication policy:
secAuthSecret --set
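As a sketch only, the command is interactive: it prompts for the WWNs and shared secrets of
the switches in the fabric, and the same secrets must be entered on both switches (prompted
responses are not shown here):

FC_switch_A_1:admin> secAuthSecret --set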
6. Calculate the ISL distance.
Because of the behavior of FC-VI, the distance must be set to 1.5 times the real distance with a
minimum distance of 10 km (using the LE distance level).
The distance for the ISL is calculated as follows, rounded up to the next full kilometer:
1.5 × real_distance = distance
Example
If the distance is 3 km, then 1.5 × 3 km = 4.5 km. This is lower than 10 km, so the ISL must be
set to the LE distance level.
If the distance is 20 km, then 1.5 × 20 km = 30 km. The ISL must be set to 30 km and must use
the LS distance level.
7. Set the distance on each ISL port:
portcfglongdistance port distance-level vc_link_init distance
A vc_link_init value of 1 uses the ARB fill word (default). A value of 0 uses IDLE. The
required value might depend on the link being used. The commands must be repeated for each
ISL port.
Example
For an ISL distance of 3 km, as given in the example in the previous step, the setting is 4.5 km
with the default vc_link_init value of 1. Because a setting of 4.5 km is lower than 10 km, the
port needs to be set to the LE distance level:
FC_switch_A_1:admin> portcfglongdistance 20 LE 1
FC_switch_B_1:admin> portcfglongdistance 20 LE 1
For an ISL distance of 20 km, as given in the example in the previous step, the setting is 30 km
with the default vc_link_init value of 1:
FC_switch_A_1:admin> portcfglongdistance 20 LS 1 -distance 30
FC_switch_B_1:admin> portcfglongdistance 20 LS 1 -distance 30
8. Verify the distance setting:
portbuffershow
A distance level of LE appears as 10 km.
Example
The following example shows the output for a configuration that uses ISLs on port 20 and port
21:
FC_switch_A_1:admin> portbuffershow
User   Port   Lx     Max/Resv   Buffer   Needed    Link       Remaining
Port   Type   Mode   Buffers    Usage    Buffers   Distance   Buffers
----   ----   ----   --------   ------   -------   --------   ---------
...
20     E      -      8          67       67        30km
21     E      -      8          67       67        30km
...
23     -      -      8          0        -         -          466
9. Verify that both switches form one fabric:
switchshow
Example
The following example shows the output for a configuration that uses ISLs on port 20 and port
21:
FC_switch_A_1:admin> switchshow
switchName:     FC_switch_A_1
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Subordinate
switchDomain:   5
switchId:       fffc01
switchWwn:      10:00:00:05:33:86:89:cb
zoning:         OFF
switchBeacon:   OFF

Index Port Address Media Speed State  Proto
===========================================
...
20    20   010C00  id    16G   Online FC E-Port 10:00:00:05:33:8c:2e:9a "FC_switch_B_1"
21    21   010B00  id    16G   Online FC E-Port 10:00:00:05:33:8c:2e:9a "FC_switch_B_1" (upstream)
...

FC_switch_B_1:admin> switchshow
switchName:     FC_switch_B_1
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   7
switchId:       fffc03
switchWwn:      10:00:00:05:33:8c:2e:9a
zoning:         OFF
switchBeacon:   OFF

Index Port Address Media Speed State  Proto
===========================================
...
20    20   030A00  id    16G   Online FC E-Port 10:00:00:05:33:86:89:cb "FC_switch_A_1"
21    21   030B00  id    16G   Online FC E-Port 10:00:00:05:33:86:89:cb "FC_switch_A_1" (downstream)
...
10. Confirm the configuration of the fabrics:
fabricshow
Example
FC_switch_A_1:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr   FC IP Addr   Name
-----------------------------------------------------------------------
1: fffc01   10:00:00:05:33:86:89:cb   10.10.10.55    0.0.0.0       "FC_switch_A_1"
3: fffc03   10:00:00:05:33:8c:2e:9a   10.10.10.65    0.0.0.0      >"FC_switch_B_1"

FC_switch_B_1:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr   FC IP Addr   Name
-----------------------------------------------------------------------
1: fffc01   10:00:00:05:33:86:89:cb   10.10.10.55    0.0.0.0       "FC_switch_A_1"
3: fffc03   10:00:00:05:33:8c:2e:9a   10.10.10.65    0.0.0.0      >"FC_switch_B_1"
11. Confirm the trunking of the ISLs:
trunkshow
Example
FC_switch_A_1:admin> trunkshow
 1: 20-> 20 10:00:00:05:33:ac:2b:13   3 deskew 15 MASTER
    21-> 21 10:00:00:05:33:8c:2e:9a   3 deskew 16

FC_switch_B_1:admin> trunkshow
 1: 20-> 20 10:00:00:05:33:af:9b:13   3 deskew 15 MASTER
    21-> 21 10:00:00:05:33:2c:8e:12a  3 deskew 16
12. Repeat steps 1 on page 60 through 11 on page 64 for the second FC switch fabric.
Related concepts
Port assignments for FC switches on page 38
Configuring the non-E-ports on the Brocade switch
You must configure the non-E-ports on the FC switch. In the MetroCluster configuration, these are
the ports that connect the switch to the HBA initiators, FC-VI interconnects, and FC-to-SAS bridges.
These steps must be done for each port.
About this task
In the following example, the ports connect an FC-to-SAS bridge:
• Port 6 on FC_switch_A_1 at Site_A
• Port 6 on FC_switch_B_1 at Site_B
Steps
1. Configure the port speed for each non-E-port:
portcfgspeed port speed
You should use the highest common speed, which is the highest speed supported by all
components in the data path: the SFP, the switch port that the SFP is installed on, and the
connected device (HBA, bridge, and so on).
For example, the components might have the following supported speeds:
• The SFP is capable of 4, 8, or 16 Gbps.
• The switch port is capable of 4, 8, or 16 Gbps.
• The connected HBA maximum speed is 8 Gbps.
The highest common speed in this case is 8 Gbps, so the port should be configured for a speed
of 8 Gbps.
Example
FC_switch_A_1:admin> portcfgspeed 6 8
FC_switch_B_1:admin> portcfgspeed 6 8
2. Verify the settings using the portcfgshow command:
Example
FC_switch_A_1:admin> portcfgshow
FC_switch_B_1:admin> portcfgshow
In the example output, port 6 has the following setting:
• Speed is set to 8G

[Output abridged: the full portcfgshow listing shows one column per port and one row per
setting (Speed, Fill Word, Trunk Port, Long Distance, NPIV capability, QOS E_Port, and so on).
Port 6 shows a Speed of 8G.]
Configuring zoning on Brocade FC switches
You must assign the switch ports to separate zones to separate controller and storage traffic. The
procedure differs depending on whether you are using a FibreBridge 7500N or FibreBridge 6500N
bridge.
Choices
• Zoning for FC-VI ports on page 66
• Zoning for FibreBridge 6500N bridges or FibreBridge 7500N using one FC port on page 69
• Zoning for FibreBridge 7500N bridges using both FC ports on page 75
• Configuring zoning on Brocade FC switches on page 84
Zoning for FC-VI ports
For each DR group in the MetroCluster, you must configure two zones for the FC-VI connections
that allow controller-to-controller traffic. These zones contain the FC switch ports connecting to the
controller module FC-VI ports. These zones are Quality of Service (QoS) zones.
A QoS zone name starts with the prefix QOSH<id>_ (for example, QOSH1_ or QOSH2_), followed by
a user-defined string to differentiate it from a regular zone. These QoS zones are the same
regardless of the model of FibreBridge bridge that is being used.
Each zone contains all the FC-VI ports, one for each FC-VI cable from each controller. These zones
are configured for high priority.
The following tables show the FC-VI zones for two DR groups.
DR group 1 : QOSH1 FC-VI zone for FC-VI port a / c
Color in illustration: Black

FC switch       Site   Switch domain   Switch port           Connects to...
                                       6505/6510    6520
FC_switch_A_1   A      5               0            0        controller_A_1 port FC-VI a
                                       1            1        controller_A_1 port FC-VI c
                                       4            4        controller_A_2 port FC-VI a
                                       5            5        controller_A_2 port FC-VI c
FC_switch_B_1   B      7               0            0        controller_B_1 port FC-VI a
                                       1            1        controller_B_1 port FC-VI c
                                       4            4        controller_B_2 port FC-VI a
                                       5            5        controller_B_2 port FC-VI c

Zone in Fabric_1           Member ports
QOSH1_MC1_FAB_1_FCVI       5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5
DR group 1 : QOSH1 FC-VI zone for FC-VI port b / d
Color in illustration: Black

FC switch       Site   Switch domain   Switch port           Connects to...
                                       6505/6510    6520
FC_switch_A_2   A      6               0            0        controller_A_1 port FC-VI b
                                       1            1        controller_A_1 port FC-VI d
                                       4            4        controller_A_2 port FC-VI b
                                       5            5        controller_A_2 port FC-VI d
FC_switch_B_2   B      8               0            0        controller_B_1 port FC-VI b
                                       1            1        controller_B_1 port FC-VI d
                                       4            4        controller_B_2 port FC-VI b
                                       5            5        controller_B_2 port FC-VI d

Zone in Fabric_2           Member ports
QOSH1_MC1_FAB_2_FCVI       6,0;6,1;6,4;6,5;8,0;8,1;8,4;8,5
DR group 2 : QOSH2 FC-VI zone for FC-VI port a / c
Color in illustration: Black

FC switch       Site   Switch domain   Switch port           Connects to...
                                       6510         6520
FC_switch_A_1   A      5               24           48       controller_A_3 port FC-VI a
                                       25           49       controller_A_3 port FC-VI c
                                       28           52       controller_A_4 port FC-VI a
                                       29           53       controller_A_4 port FC-VI c
FC_switch_B_1   B      7               24           48       controller_B_3 port FC-VI a
                                       25           49       controller_B_3 port FC-VI c
                                       28           52       controller_B_4 port FC-VI a
                                       29           53       controller_B_4 port FC-VI c

Zone in Fabric_1                   Member ports
QOSH2_MC2_FAB_1_FCVI (6510)        5,24;5,25;5,28;5,29;7,24;7,25;7,28;7,29
QOSH2_MC2_FAB_1_FCVI (6520)        5,48;5,49;5,52;5,53;7,48;7,49;7,52;7,53
DR group 2 : QOSH2 FC-VI zone for FC-VI port b / d
Color in illustration: Black

FC switch       Site   Switch domain   Switch port           Connects to...
                                       6510         6520
FC_switch_A_2   A      6               24           48       controller_A_3 port FC-VI b
                                       25           49       controller_A_3 port FC-VI d
                                       28           52       controller_A_4 port FC-VI b
                                       29           53       controller_A_4 port FC-VI d
FC_switch_B_2   B      8               24           48       controller_B_3 port FC-VI b
                                       25           49       controller_B_3 port FC-VI d
                                       28           52       controller_B_4 port FC-VI b
                                       29           53       controller_B_4 port FC-VI d

Zone in Fabric_2                   Member ports
QOSH2_MC2_FAB_2_FCVI (6510)        6,24;6,25;6,28;6,29;8,24;8,25;8,28;8,29
QOSH2_MC2_FAB_2_FCVI (6520)        6,48;6,49;6,52;6,53;8,48;8,49;8,52;8,53
The following table provides a summary of the FC-VI zones:

Fabric                        Zone name                       Member ports
FC_switch_A_1 and             QOSH1_MC1_FAB_1_FCVI            5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5
FC_switch_B_1                 QOSH2_MC1_FAB_1_FCVI (6510)     5,24;5,25;5,28;5,29;7,24;7,25;7,28;7,29
                              QOSH2_MC1_FAB_1_FCVI (6520)     5,48;5,49;5,52;5,53;7,48;7,49;7,52;7,53
FC_switch_A_2 and             QOSH1_MC1_FAB_2_FCVI            6,0;6,1;6,4;6,5;8,0;8,1;8,4;8,5
FC_switch_B_2                 QOSH2_MC1_FAB_2_FCVI (6510)     6,24;6,25;6,28;6,29;8,24;8,25;8,28;8,29
                              QOSH2_MC1_FAB_2_FCVI (6520)     6,48;6,49;6,52;6,53;8,48;8,49;8,52;8,53
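The zoning procedure itself is described later in this section. Purely as a sketch of how a
QoS zone like those in the summary table might be created on a Brocade switch, the following
commands are shown; the zone name and member ports follow the examples above, but the
configuration name CFG_1 is an assumed example, and you should verify the exact commands and
any existing zoning configuration before running them:

FC_switch_A_1:admin> zonecreate "QOSH1_MC1_FAB_1_FCVI", "5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5"
FC_switch_A_1:admin> cfgcreate "CFG_1", "QOSH1_MC1_FAB_1_FCVI"
FC_switch_A_1:admin> cfgenable "CFG_1"
FC_switch_A_1:admin> cfgsave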
Zoning for FibreBridge 6500N bridges or FibreBridge 7500N using one FC port
If you are using FibreBridge 6500N bridges, or FibreBridge 7500N bridges using only one of the two
FC ports, you need to create storage zones for the bridge ports. You should understand the zones and
associated ports before you configure the zones.
The examples show zoning for DR group 1 only. If your configuration includes a second DR group,
configure the zoning for the second DR group in the same manner, using the corresponding ports of
the controllers and bridges.
Required zones
You must configure one zone for each of the FC-to-SAS bridge FC ports that allows traffic between
initiators on each controller module and that FC-to-SAS bridge.
Each storage zone contains nine ports:
• Eight HBA initiator ports (two connections for each controller)
• One port connecting to an FC-to-SAS bridge FC port
The storage zones use standard zoning.
The examples show two pairs of bridges connecting two stack groups at each site. Because each
bridge uses one FC port, there are a total of four storage zones per fabric (eight in total).
Bridge naming
The bridges use the following example naming: bridge_<site>_<stack group><location in pair>

This portion of the name... | Identifies the... | Possible values...
site | Site on which the bridge pair physically resides. | A or B
stack group | Number of the stack group to which the bridge pair connects. FibreBridge 7500N bridges support up to four stacks in the stack group. The stack group can contain no more than 10 storage shelves. FibreBridge 6500N bridges support only a single stack in the stack group. | 1, 2, etc.
location in pair | Bridge within the bridge pair. A pair of bridges connects to a specific stack group. | a or b
Example bridge names for one stack group on each site:
• bridge_A_1a
• bridge_A_1b
• bridge_B_1a
• bridge_B_1b
DR Group 1 - Stack 1 at Site_A

DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
3 | 3 | controller_A_1 port 0c
6 | 6 | controller_A_2 port 0a
7 | 7 | controller_A_2 port 0c
8 | 8 | bridge_A_1a FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
3 | 3 | controller_B_1 port 0c
6 | 6 | controller_B_2 port 0a
7 | 7 | controller_B_2 port 0c

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;5,8
DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC1
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
3 | 3 | controller_A_1 port 0d
6 | 6 | controller_A_2 port 0b
7 | 7 | controller_A_2 port 0d
8 | 8 | bridge_A_1b FC1

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
3 | 3 | controller_B_1 port 0d
6 | 6 | controller_B_2 port 0b
7 | 7 | controller_B_2 port 0d

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;6,8
DR Group 1 - Stack 2 at Site_A

DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
3 | 3 | controller_A_1 port 0c
6 | 6 | controller_A_2 port 0a
7 | 7 | controller_A_2 port 0c
9 | 9 | bridge_A_2a FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
3 | 3 | controller_B_1 port 0c
6 | 6 | controller_B_2 port 0a
7 | 7 | controller_B_2 port 0c

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;5,9
DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC1
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
3 | 3 | controller_A_1 port 0d
6 | 6 | controller_A_2 port 0b
7 | 7 | controller_A_2 port 0d
9 | 9 | bridge_A_2b FC1

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
3 | 3 | controller_B_1 port 0d
6 | 6 | controller_B_2 port 0b
7 | 7 | controller_B_2 port 0d

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;6,9
DR Group 1 - Stack 1 at Site_B

DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
3 | 3 | controller_A_1 port 0c
6 | 6 | controller_A_2 port 0a
7 | 7 | controller_A_2 port 0c

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
3 | 3 | controller_B_1 port 0c
6 | 6 | controller_B_2 port 0a
7 | 7 | controller_B_2 port 0c
8 | 8 | bridge_B_1a FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;7,8
DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC1
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
3 | 3 | controller_A_1 port 0d
6 | 6 | controller_A_2 port 0b
7 | 7 | controller_A_2 port 0d

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
3 | 3 | controller_B_1 port 0d
6 | 6 | controller_B_2 port 0b
7 | 7 | controller_B_2 port 0d
8 | 8 | bridge_B_1b FC1

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;8,8
DR Group 1 - Stack 2 at Site_B

DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
3 | 3 | controller_A_1 port 0c
6 | 6 | controller_A_2 port 0a
7 | 7 | controller_A_2 port 0c

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
3 | 3 | controller_B_1 port 0c
6 | 6 | controller_B_2 port 0a
7 | 7 | controller_B_2 port 0c
9 | 9 | bridge_B_2a FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;7,9
DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC1
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
3 | 3 | controller_A_1 port 0d
6 | 6 | controller_A_2 port 0b
7 | 7 | controller_A_2 port 0d

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
3 | 3 | controller_B_1 port 0d
6 | 6 | controller_B_2 port 0b
7 | 7 | controller_B_2 port 0d
9 | 9 | bridge_B_2b FC1

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;8,9
Summary of storage zones

Fabric | Zone name | Member ports
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;5,8
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;5,9
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;7,8
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1 | 5,2;5,3;5,6;5,7;7,2;7,3;7,6;7,7;7,9
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;6,8
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;6,9
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;8,8
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC1 | 6,2;6,3;6,6;6,7;8,2;8,3;8,6;8,7;8,9
Zoning for FibreBridge 7500N bridges using both FC ports
If you are using FibreBridge 7500N bridges with both FC ports, you need to create storage zones for
the bridge ports. You should understand the zones and associated ports before you configure the
zones.
Required zones
You must configure one zone for each of the FC-to-SAS bridge FC ports that allows traffic between
initiators on each controller module and that FC-to-SAS bridge.
Each storage zone contains five ports:
• Four HBA initiator ports (one connection for each controller)
• One port connecting to an FC-to-SAS bridge FC port
The storage zones use standard zoning.
The examples show two pairs of bridges connecting two stack groups at each site. Because each
bridge uses both of its FC ports, there are a total of eight storage zones per fabric (sixteen in total).
Bridge naming
The bridges use the following example naming: bridge_<site>_<stack group><location in pair>

This portion of the name... | Identifies the... | Possible values...
site | Site on which the bridge pair physically resides. | A or B
stack group | Number of the stack group to which the bridge pair connects. FibreBridge 7500N bridges support up to four stacks in the stack group. The stack group can contain no more than 10 storage shelves. FibreBridge 6500N bridges support only a single stack in the stack group. | 1, 2, etc.
location in pair | Bridge within the bridge pair. A pair of bridges connects to a specific stack group. | a or b

Example bridge names for one stack group on each site:
• bridge_A_1a
• bridge_A_1b
• bridge_B_1a
• bridge_B_1b
DR Group 1 - Stack 1 at Site_A

DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
6 | 6 | controller_A_2 port 0a
8 | 8 | bridge_A_1a FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
6 | 6 | controller_B_2 port 0a

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1 | 5,2;5,6;7,2;7,6;5,8
DrGroup 1 : MC1_INIT_GRP_2_SITE_A_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0c
7 | 7 | controller_A_2 port 0c
9 | 9 | bridge_A_1b FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0c
7 | 7 | controller_B_2 port 0c

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_2_SITE_A_STK_GRP_1_TOP_FC1 | 5,3;5,7;7,3;7,7;5,9
DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
6 | 6 | controller_A_2 port 0b
8 | 8 | bridge_A_1a FC2

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
6 | 6 | controller_B_2 port 0b

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC2 | 6,2;6,6;8,2;8,6;6,8
DrGroup 1 : MC1_INIT_GRP_2_SITE_A_STK_GRP_1_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0d
7 | 7 | controller_A_2 port 0d
9 | 9 | bridge_A_1b FC2

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0d
7 | 7 | controller_B_2 port 0d

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_2_SITE_A_STK_GRP_1_BOT_FC2 | 6,3;6,7;8,3;8,7;6,9
DR Group 1 - Stack 2 at Site_A

DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
6 | 6 | controller_A_2 port 0a
10 | 10 | bridge_A_2a FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
6 | 6 | controller_B_2 port 0a

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1 | 5,2;5,6;7,2;7,6;5,10
DrGroup 1 : MC1_INIT_GRP_2_SITE_A_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0c
7 | 7 | controller_A_2 port 0c
11 | 11 | bridge_A_2b FC1

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0c
7 | 7 | controller_B_2 port 0c

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_2_SITE_A_STK_GRP_2_TOP_FC1 | 5,3;5,7;7,3;7,7;5,11
DrGroup 1 : MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
6 | 6 | controller_A_2 port 0b
10 | 10 | bridge_A_2a FC2

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
6 | 6 | controller_B_2 port 0b

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC2 | 6,2;6,6;8,2;8,6;6,10
DrGroup 1 : MC1_INIT_GRP_2_SITE_A_STK_GRP_2_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0d
7 | 7 | controller_A_2 port 0d
11 | 11 | bridge_A_2b FC2

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0d
7 | 7 | controller_B_2 port 0d

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_2_SITE_A_STK_GRP_2_BOT_FC2 | 6,3;6,7;8,3;8,7;6,11
DR Group 1 - Stack 1 at Site_B

DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
6 | 6 | controller_A_2 port 0a

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
6 | 6 | controller_B_2 port 0a
8 | 8 | bridge_B_1a FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1 | 5,2;5,6;7,2;7,6;7,8
DrGroup 1 : MC1_INIT_GRP_2_SITE_B_STK_GRP_1_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0c
7 | 7 | controller_A_2 port 0c

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0c
7 | 7 | controller_B_2 port 0c
9 | 9 | bridge_B_1b FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_2_SITE_B_STK_GRP_1_TOP_FC1 | 5,3;5,7;7,3;7,7;7,9
DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
6 | 6 | controller_A_2 port 0b

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
6 | 6 | controller_B_2 port 0b
8 | 8 | bridge_B_1a FC2

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC2 | 6,2;6,6;8,2;8,6;8,8
DrGroup 1 : MC1_INIT_GRP_2_SITE_B_STK_GRP_1_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0d
7 | 7 | controller_A_2 port 0d

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0d
7 | 7 | controller_B_2 port 0d
9 | 9 | bridge_B_1b FC2

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_2_SITE_B_STK_GRP_1_BOT_FC2 | 6,3;6,7;8,3;8,7;8,9
DR Group 1 - Stack 2 at Site_B

DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0a
6 | 6 | controller_A_2 port 0a

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0a
6 | 6 | controller_B_2 port 0a
10 | 10 | bridge_B_2a FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1 | 5,2;5,6;7,2;7,6;7,10
DrGroup 1 : MC1_INIT_GRP_2_SITE_B_STK_GRP_2_TOP_FC1
Color in illustration: Black

FC_switch_A_1 (Site A, switch domain 5):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0c
7 | 7 | controller_A_2 port 0c

FC_switch_B_1 (Site B, switch domain 7):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0c
7 | 7 | controller_B_2 port 0c
11 | 11 | bridge_B_2b FC1

Zone in Fabric_1 | Member ports
MC1_INIT_GRP_2_SITE_B_STK_GRP_2_TOP_FC1 | 5,3;5,7;7,3;7,7;7,11
DrGroup 1 : MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_A_1 port 0b
6 | 6 | controller_A_2 port 0b

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
2 | 2 | controller_B_1 port 0b
6 | 6 | controller_B_2 port 0b
10 | 10 | bridge_B_2a FC2

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC2 | 6,2;6,6;8,2;8,6;8,10
DrGroup 1 : MC1_INIT_GRP_2_SITE_B_STK_GRP_2_BOT_FC2
Color in illustration: Black

FC_switch_A_2 (Site A, switch domain 6):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_A_1 port 0d
7 | 7 | controller_A_2 port 0d

FC_switch_B_2 (Site B, switch domain 8):

Switch port (6505 / 6510) | Switch port (6520) | Connects to...
3 | 3 | controller_B_1 port 0d
7 | 7 | controller_B_2 port 0d
11 | 11 | bridge_B_2b FC2

Zone in Fabric_2 | Member ports
MC1_INIT_GRP_2_SITE_B_STK_GRP_2_BOT_FC2 | 6,3;6,7;8,3;8,7;8,11
Summary of storage zones

Fabric | Zone name | Member ports
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1 | 5,2;5,6;7,2;7,6;5,8
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_2_SITE_A_STK_GRP_1_TOP_FC1 | 5,3;5,7;7,3;7,7;5,9
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_A_STK_GRP_2_TOP_FC1 | 5,2;5,6;7,2;7,6;5,10
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_2_SITE_A_STK_GRP_2_TOP_FC1 | 5,3;5,7;7,3;7,7;5,11
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_B_STK_GRP_1_TOP_FC1 | 5,2;5,6;7,2;7,6;7,8
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_2_SITE_B_STK_GRP_1_TOP_FC1 | 5,3;5,7;7,3;7,7;7,9
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_1_SITE_B_STK_GRP_2_TOP_FC1 | 5,2;5,6;7,2;7,6;7,10
FC_switch_A_1 and FC_switch_B_1 | MC1_INIT_GRP_2_SITE_B_STK_GRP_2_TOP_FC1 | 5,3;5,7;7,3;7,7;7,11
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_A_STK_GRP_1_BOT_FC2 | 6,2;6,6;8,2;8,6;6,8
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_2_SITE_A_STK_GRP_1_BOT_FC2 | 6,3;6,7;8,3;8,7;6,9
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_A_STK_GRP_2_BOT_FC2 | 6,2;6,6;8,2;8,6;6,10
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_2_SITE_A_STK_GRP_2_BOT_FC2 | 6,3;6,7;8,3;8,7;6,11
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_B_STK_GRP_1_BOT_FC2 | 6,2;6,6;8,2;8,6;8,8
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_2_SITE_B_STK_GRP_1_BOT_FC2 | 6,3;6,7;8,3;8,7;8,9
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_1_SITE_B_STK_GRP_2_BOT_FC2 | 6,2;6,6;8,2;8,6;8,10
FC_switch_A_2 and FC_switch_B_2 | MC1_INIT_GRP_2_SITE_B_STK_GRP_2_BOT_FC2 | 6,3;6,7;8,3;8,7;8,11
Configuring zoning on Brocade FC switches
You must assign the switch ports to separate zones to separate controller and storage traffic, with
zones for the FC-VI ports and zones for the storage ports.
About this task
The following steps use the standard zoning for the MetroCluster configuration.
Zoning for FC-VI ports on page 66
Zoning for FibreBridge 6500N bridges or FibreBridge 7500N using one FC port on page 69
Zoning for FibreBridge 7500N bridges using both FC ports on page 75
Steps
1. Create the FC-VI zones on each switch:
zonecreate "QOSH1_FCVI_1", member;member ...
Example
In this example a QOS FCVI zone is created containing ports 5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5:
Switch_A_1:admin> zonecreate "QOSH1_FCVI_1",
"5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5"
2. Configure the storage zones on each switch.
You can configure zoning for the fabric from one switch in the fabric. In the example that follows,
zoning is configured on Switch_A_1.
a. Create the storage zone for each switch domain in the switch fabric:
zonecreate "zone_name", "member;member ..."
Example
In this example, a storage zone for a FibreBridge 7500N using both FC ports is created.
The zone contains ports 5,2;5,6;7,2;7,6;5,16:
Switch_A_1:admin> zonecreate
"MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1", "5,2;5,6;7,2;7,6;5,16"
b. Create the configuration in the first switch fabric:
cfgcreate config_name, zone;zone...
Example
In this example, a configuration named CFG_1 is created, containing the two zones
QOSH1_MC1_FAB_1_FCVI and MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1:
Switch_A_1:admin> cfgcreate "CFG_1", "QOSH1_MC1_FAB_1_FCVI;
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1"
c. Add zones to the configuration, if desired:
cfgadd "config_name", "zone;zone ..."
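Example
For example, the following command adds a second storage zone, which is assumed to have been created already, to the CFG_1 configuration:
Switch_A_1:admin> cfgadd "CFG_1", "MC1_INIT_GRP_2_SITE_A_STK_GRP_1_TOP_FC1"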
d. Enable the configuration:
cfgenable config_name
Example
Switch_A_1:admin> cfgenable "CFG_1"
e. Save the configuration:
cfgsave
Example
Switch_A_1:admin> cfgsave
f. Validate the zoning configuration:
zone --validate
Example
Switch_A_1:admin> zone --validate
Defined configuration:
cfg: CFG_1 QOSH1_MC1_FAB_1_FCVI ;
MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1
zone: QOSH1_MC1_FAB_1_FCVI
5,0;5,1;5,4;5,5;7,0;7,1;7,4;7,5
zone: MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1
5,2;5,6;7,2;7,6;5,16
Effective configuration:
cfg: CFG_1
zone: QOSH1_MC1_FAB_1_FCVI
5,0
5,1
5,4
5,5
7,0
7,1
7,4
7,5
zone: MC1_INIT_GRP_1_SITE_A_STK_GRP_1_TOP_FC1
5,2
5,6
7,2
7,6
5,16
------------------------------------
~ - Invalid configuration
* - Member does not exist
# - Invalid usage of broadcast zone
Setting ISL encryption on Brocade 6510 switches
On Brocade 6510 switches, you can optionally use the Brocade encryption feature on the ISL
connections. If you want to use the encryption feature, you must perform additional configuration
steps on each switch in the MetroCluster configuration.
Before you begin
• You must have Brocade 6510 switches.
• You must have selected two switches from the same fabric.
• You must review the Brocade documentation for your switch and Fabric Operating System version to confirm the bandwidth and port limits.
About this task
The steps must be performed on both switches in the same fabric.
Disabling virtual fabric
To set up ISL encryption, you must first disable the virtual fabric on all four switches being
used in the MetroCluster configuration.
Step
1. Disable the virtual fabric by entering the following command at the switch console:
fosconfig --disable vf
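Example
A representative invocation follows; the exact confirmation prompt varies by Fabric OS version:
FC_switch_A_1:admin> fosconfig --disable vf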
After you finish
Reboot the switch.
Setting the payload
After disabling the virtual fabric, you must set the payload, or data field size, on both switches in
the fabric.
About this task
The data field size must not exceed 2048.
Steps
1. Disable the switch:
switchdisable
2. Start the configuration dialog to set the payload:
configure
3. Set the following switch parameters:
a. Set the Fabric parameters prompt as follows:
y
b. Set the other parameters, such as Domain, WWN Based persistent PID, and so on, as needed.
c. Set the data field size as follows:
2048
4. Enable the switch:
switchenable
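Example
The following is a sketch of the command sequence on one switch; the interactive prompts of the configure dialog are not shown because they vary by Fabric OS version. Answer y to the Fabric parameters prompt, enter 2048 for the data field size, and accept or adjust the remaining prompts as needed:
FC_switch_A_1:admin> switchdisable
FC_switch_A_1:admin> configure
FC_switch_A_1:admin> switchenable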
Setting the authentication policy
You must set the authentication policy and associated parameters.
About this task
The commands must be executed at the switch console.
Steps
1. Disable the switch:
switchdisable
2. Set the authentication policy on the switch to on:
authUtil --policy -sw on
The system displays the following output:
Warning: Activating the authentication policy requires either DH-CHAP
secrets or PKI certificates depending on the protocol selected.
Otherwise, ISLs will be segmented during next E-port bring-up.
ARE YOU SURE (yes, y, no, n): [no] yes
Auth Policy is set to ON
3. Set the authentication type to dhchap:
authUtil --set -a dhchap
The system displays the following output:
Authentication is set to dhchap.
4. Set the authentication group to 4:
authUtil --set -g 4
5. Set the authentication secret:
a. Begin the setup process:
secAuthSecret --set
This command initiates a series of prompts that you respond to in the following steps.
b. Provide the worldwide name (WWN) of the other switch in the fabric for the Enter peer
WWN, Domain, or switch name parameter.
c. Provide the peer secret for the Enter peer secret parameter.
d. Provide the local secret for the Enter local secret parameter.
e. Enter Y for the Are you done parameter.
Example
The following is an example of setting the authentication secret:
brcd> secAuthSecret --set
This command is used to set up secret keys for the DH-CHAP
authentication.
The minimum length of a secret key is 8 characters and maximum 40
characters. Setting up secret keys does not initiate DH-CHAP
authentication. If switch is configured to do DH-CHAP, it is performed
whenever a port or a switch is enabled.
Warning: Please use a secure channel for setting secrets. Using
an insecure channel is not safe and may compromise secrets.
Following inputs should be specified for each entry.
1. WWN for which secret is being set up.
2. Peer secret: The secret of the peer that authenticates to peer.
3. Local secret: The local secret that authenticates peer.
Press enter to start setting up secrets > <cr>
Enter peer WWN, Domain, or switch name (Leave blank when done):
10:00:00:05:33:76:2e:99
Enter peer secret: <hidden>
Re-enter peer secret: <hidden>
Enter local secret: <hidden>
Re-enter local secret: <hidden>
Enter peer WWN, Domain, or switch name (Leave blank when done):
Are you done? (yes, y, no, n): [no] yes
Saving data to key store... Done.
6. Enable the switch:
switchenable
Enabling ISL encryption in a Brocade switch
After setting the authentication policy and the authentication secret, you must enable ISL encryption
on the Brocade 6510 switches.
About this task
• These steps should be performed on one switch fabric at a time.
• The commands must be run at the switch console.
Steps
1. Disable the switch:
switchdisable
2. Enable encryption on all the ISL ports:
portCfgEncrypt --enable port_number
Example
In the following example, the encryption is enabled on ports 8 and 12:
portCfgEncrypt --enable 8
portCfgEncrypt --enable 12
3. Enable the switch:
switchenable
4. Verify that the ISL is up and working:
islshow
5. Verify that the encryption is enabled:
portenccompshow
Example
The following example shows the encryption is enabled on ports 8 and 12:
Port    User configured    Encryption Active
----    ---------------    -----------------
8       yes                yes
9       No                 No
10      No                 No
11      No                 No
12      yes                yes
After you finish
Perform all the steps on the switches in the other fabric in a MetroCluster configuration.
Configuring the Cisco FC switches
Each Cisco switch in the MetroCluster configuration must be configured appropriately for the ISL
and storage connections.
About this task
The following requirements apply to the Cisco FC switches:
• You must be using four supported Cisco switches of the same model with the same NX-OS version and licensing.
• The MetroCluster configuration requires four switches. The four switches must be connected into two fabrics of two switches each, with each fabric spanning both sites.
• The switch must support connectivity to the ATTO FibreBridge model.
• You cannot use encryption or compression in the Cisco FC storage fabric; it is not supported in the MetroCluster configuration.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and ONTAP version to refine your search.
You can click Show Results to display the list of supported configurations that match the criteria.
The following requirement applies to the Inter-Switch Link (ISL) connections:
• ISLs of different speeds and lengths are supported between switches in the same fabric.
The following requirement applies to the storage connections:
• Two initiator ports must be connected from each storage controller to each fabric. Each storage controller must have four initiator ports available to connect to the switch fabrics.
Steps
1. Cisco switch license requirements on page 90
2. Setting the Cisco FC switch to factory defaults on page 91
3. Configure the Cisco FC switch basic settings and community string on page 92
4. Acquiring licenses for ports on page 92
5. Enabling ports in a Cisco MDS 9148 or 9148S switch on page 94
6. Configuring the F-ports on a Cisco FC switch on page 95
7. Assigning buffer-to-buffer credits to F-Ports in the same port group as the ISL on page 96
8. Creating and configuring VSANs on Cisco FC switches on page 98
9. Configuring E-ports on page 102
10. Configuring zoning on a Cisco FC switch on page 119
11. Ensuring the FC switch configuration is saved on page 122
Related information
NetApp Interoperability Matrix Tool
Cisco switch license requirements
Certain feature-based licenses might be required for the Cisco switches in a fabric-attached
MetroCluster configuration. These licenses enable you to use features such as QoS or long-distance
mode credits on the switches. You must install these licenses on all four switches in a MetroCluster
configuration.
The following feature-based licenses might be required in a MetroCluster configuration:
• ENTERPRISE_PKG
  This enables you to use the QoS feature in Cisco switches.
• PORT_ACTIVATION_PKG
  You can use this license for Cisco 9148 switches. This license enables you to activate or deactivate ports on the switches as long as only 16 ports are active at any given time. By default, 16 ports are enabled in Cisco MDS 9148 switches.
• FM_SERVER_PKG
  This enables you to manage fabrics simultaneously and to manage switches through a web browser. The FM_SERVER_PKG license also enables performance management features such as performance thresholds, threshold monitoring, and so on. For more information about this license, see the Cisco Fabric Manager Server Package.
You can verify that the licenses are installed by using the show license usage command. If you
do not have these licenses, contact your sales representative before proceeding.
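For example, the following command displays the installed licenses and the features that use them (output not shown):
FC_switch_A_1# show license usage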
Note: The Cisco MDS 9250i switches have two fixed 1/10 GbE IP storage services ports. No
additional licenses are required for these ports. The Cisco SAN Extension over IP application
package is a standard license on these switches that enables features such as FCIP and
compression.
Setting the Cisco FC switch to factory defaults
To ensure a successful configuration, you must set the switch to its factory defaults. This ensures that
the switch is starting from a clean configuration.
About this task
This task must be performed on all switches in the MetroCluster configuration.
Steps
1. Make a console connection and log in to both switches in the same fabric.
2. Issue the following command to set the switch back to its default settings:
write erase
You can respond y when prompted to confirm the command. This erases all licenses and
configuration information on the switch.
3. Issue the following command to reboot the switch:
reload
You can respond y when prompted to confirm the command.
4. Repeat the write erase and reload commands on the other switch.
After issuing the reload command, the switch reboots and then prompts with setup questions. At
that point, proceed to the next section.
The following example shows the process on a fabric consisting of FC_switch_A_1 and
FC_switch_B_1.
FC_Switch_A_1# write erase
Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y
FC_Switch_A_1# reload
This command will reboot the system. (y/n)? [n] y
FC_Switch_B_1# write erase
Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y
FC_Switch_B_1# reload
This command will reboot the system. (y/n)? [n] y
Configure the Cisco FC switch basic settings and community string
You must specify the basic settings with the setup command or after issuing the reload command.
Steps
1. If the switch does not display the setup questions, issue the following command to configure the
basic switch settings:
setup
2. Accept the default responses to the setup questions until you are prompted for the SNMP
community string.
3. Set the community string to public (all lowercase) to allow access from the ONTAP Health
Monitors.
Example
The following example shows the commands on FC_switch_A_1:
FC_switch_A_1# setup
Configure read-only SNMP community string (yes/no) [n]: y
SNMP community string : public
Note: Please set the SNMP community string to "Public" or
another value of your choosing.
Configure default switchport interface state (shut/noshut)
[shut]: noshut
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: yes
The following example shows the commands on FC_switch_B_1:
FC_switch_B_1# setup
Configure read-only SNMP community string (yes/no) [n]: y
SNMP community string : public
Note: Please set the SNMP community string to "Public" or
another value of your choosing.
Configure default switchport interface state (shut/noshut)
[shut]: noshut
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: yes
Acquiring licenses for ports
You do not have to use Cisco switch licenses on a continuous range of ports; instead, you can acquire
licenses for specific ports that are used and remove licenses from unused ports. You should verify the
number of licensed ports in the switch configuration and, if necessary, move licenses from one port to
another as needed.
Steps
1. Issue the following command to show license usage for a switch fabric:
show port-resources module 1
Determine which ports require licenses. If some of those ports are unlicensed, determine if you
have extra licensed ports and consider removing the licenses from them.
2. Issue the following command to enter configuration mode:
config t
3. Remove the license from the selected port:
a. Issue the following command to select the port to be unlicensed:
interface interface-name
b. Remove the license from the port using the following command:
no port-license acquire
c. Exit the port configuration interface:
exit
4. Acquire the license for the selected port:
a. Issue the following command to select the port to be unlicensed:
interface interface-name
b. Make the port eligible to acquire a license using the "port license" command:
port-license
c. Acquire the license on the port using the following command:
port-license acquire
d. Exit the port configuration interface:
exit
5. Repeat for any additional ports.
6. Issue the following command to exit configuration mode:
exit
Removing and acquiring a license on a port
This example shows a license being removed from port fc1/2, port fc1/1 being made eligible to
acquire a license, and the license being acquired on port fc1/1:
Switch_A_1# conf t
Switch_A_1(config)# interface fc1/2
Switch_A_1(config-if)# shut
Switch_A_1(config-if)# no port-license acquire
Switch_A_1(config-if)# exit
Switch_A_1(config)# interface fc1/1
Switch_A_1(config-if)# port-license
Switch_A_1(config-if)# port-license acquire
Switch_A_1(config-if)# no shut
Switch_A_1(config-if)# end
Switch_A_1# copy running-config startup-config
Switch_B_1# conf t
Switch_B_1(config)# interface fc1/2
Switch_B_1(config-if)# shut
Switch_B_1(config-if)# no port-license acquire
Switch_B_1(config-if)# exit
Switch_B_1(config)# interface fc1/1
Switch_B_1(config-if)# port-license
Switch_B_1(config-if)# port-license acquire
Switch_B_1(config-if)# no shut
Switch_B_1(config-if)# end
Switch_B_1# copy running-config startup-config
The following example shows port license usage being verified:
Switch_A_1# show port-resources module 1
Switch_B_1# show port-resources module 1
Enabling ports in a Cisco MDS 9148 or 9148S switch
In Cisco MDS 9148 or 9148S switches, you must manually enable the ports required in a
MetroCluster configuration.
About this task
• You can manually enable 16 ports in a Cisco MDS 9148 or 9148S switch.
• The Cisco switches enable you to apply the POD license on random ports, as opposed to applying them in sequence.
• Cisco switches require that you use one port from each port group, unless you need more than 12 ports.
Steps
1. View the port groups available in a Cisco switch:
show port-resources module blade_number
2. License and acquire the required port in a port group by entering the following commands in
sequence:
config t
interface port_number
shut
port-license acquire
no shut
Example
For example, the following command licenses and acquires Port fc 1/45:
switch# config t
switch(config)# interface fc 1/45
switch(config-if)# shut
switch(config-if)# port-license acquire
switch(config-if)# no shut
switch(config-if)# end
3. Save the configuration:
copy running-config startup-config
Configuring the F-ports on a Cisco FC switch
You must configure the F-ports on the FC switch. In a MetroCluster configuration, the F-ports are the
ports that connect the switch to the HBA initiators, FC-VI interconnects and FC-to-SAS bridges.
Each port must be configured individually.
About this task
The following table lists the ports that must be configured as F-ports (switch-to-node) and shows
what each port connects to:
Configure this port to F-mode... | Port connects to...
1 | controller_x_1 FC-VI port 1
2 | controller_x_1 HBA port 1
3 | controller_x_1 HBA port 2
4 | controller_x_2 FC-VI port 1
5 | controller_x_2 HBA port 1
6 | controller_x_2 HBA port 2
7 | FC-to-SAS bridge
This task must be performed on each switch in the MetroCluster configuration.
Steps
1. Issue the following command to enter configuration mode:
config t
2. Enter interface configuration mode for the port:
interface port-ID
3. Shut down the port:
shutdown
4. Set the ports to F mode by issuing the following command:
switchport mode F
5. Set the ports to fixed speed by issuing the following command:
switchport speed speed
speed is either 8000 or 16000
6. Set the rate mode of the switch port to dedicated by issuing the following command:
switchport rate-mode dedicated
7. Restart the port:
no shutdown
8. Issue the following command to exit configuration mode:
end
The following example shows the commands on the two switches:
FC_switch_A_1# config t
FC_switch_A_1(config)# interface fc 1/1
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport mode F
FC_switch_A_1(config-if)# switchport speed 8000
FC_switch_A_1(config-if)# switchport rate-mode dedicated
FC_switch_A_1(config-if)# no shutdown
FC_switch_A_1(config-if)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# config t
FC_switch_B_1(config)# interface fc 1/1
FC_switch_B_1(config-if)# shutdown
FC_switch_B_1(config-if)# switchport mode F
FC_switch_B_1(config-if)# switchport speed 8000
FC_switch_B_1(config-if)# switchport rate-mode dedicated
FC_switch_B_1(config-if)# no shutdown
FC_switch_B_1(config-if)# end
FC_switch_B_1# copy running-config startup-config
Assigning buffer-to-buffer credits to F-Ports in the same port group as the ISL
You must assign the buffer-to-buffer credits to the F-ports if they are in the same port group as the
ISL. If the ports do not have the required buffer-to-buffer credits, the ISL could be inoperative. This
task is not required if the F-ports are not in the same port group as the ISL port.
About this task
If the F-Ports are in a port group that contains the ISL, this task must be performed on each FC
switch in the MetroCluster configuration.
Steps
1. Issue the following command to enter configuration mode:
config t
2. Enter the following command to set the interface configuration mode for the port:
interface port-ID
3. Disable the port:
shut
4. If the port is not already in F mode, set the port to F mode by entering the following command:
switchport mode F
5. Set the buffer-to-buffer credit of the non-E ports to 1 by using the following command:
switchport fcrxbbcredit 1
6. Re-enable the port:
no shut
7. Exit configuration mode:
exit
8. Copy the updated configuration to the startup configuration:
copy running-config startup-config
9. Verify the buffer-to-buffer credit assigned to a port by entering the following commands:
show port-resources module 1
10. Issue the following command to exit configuration mode:
exit
11. Repeat these steps on the other switch in the fabric.
12. Verify the settings:
show port-resources module 1
In this example, port fc1/40 is the ISL. Ports fc1/37, fc1/38 and fc1/39 are in the same port
group and must be configured.
The following commands show the port range being configured for fc1/37 through fc1/39:
FC_switch_A_1# conf t
FC_switch_A_1(config)# interface fc1/37-39
FC_switch_A_1(config-if)# shut
FC_switch_A_1(config-if)# switchport mode F
FC_switch_A_1(config-if)# switchport fcrxbbcredit 1
FC_switch_A_1(config-if)# no shut
FC_switch_A_1(config-if)# exit
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# interface fc1/37-39
FC_switch_B_1(config-if)# shut
FC_switch_B_1(config-if)# switchport mode F
FC_switch_B_1(config-if)# switchport fcrxbbcredit 1
FC_switch_B_1(config-if)# no shut
FC_switch_B_1(config-if)# exit
FC_switch_B_1# copy running-config startup-config
The following commands and system output show that the settings are properly applied:

FC_switch_A_1# show port-resources module 1
...
Port-Group 11
Available dedicated buffers are 93
--------------------------------------------------------------------
Interfaces in the Port-Group      B2B Credit      Bandwidth    Rate Mode
                                  Buffers         (Gbps)
--------------------------------------------------------------------
fc1/37                            32              8.0          dedicated
fc1/38                            1               8.0          dedicated
fc1/39                            1               8.0          dedicated
...

FC_switch_B_1# show port-resources module 1
...
Port-Group 11
Available dedicated buffers are 93
--------------------------------------------------------------------
Interfaces in the Port-Group      B2B Credit      Bandwidth    Rate Mode
                                  Buffers         (Gbps)
--------------------------------------------------------------------
fc1/37                            32              8.0          dedicated
fc1/38                            1               8.0          dedicated
fc1/39                            1               8.0          dedicated
...
Creating and configuring VSANs on Cisco FC switches
You must create a VSAN for the FC-VI ports and a VSAN for the storage ports on each FC switch in
the MetroCluster configuration. The VSANs should have a unique number and name. You must do
additional configuration if you are using two ISLs with in-order delivery of frames.
About this task
The examples here use the following naming conventions:
Switch fabric | VSAN name | ID number
1 | FCVI_1_10 | 10
1 | STOR_1_20 | 20
2 | FCVI_2_30 | 30
2 | STOR_2_40 | 40

This task must be performed on each FC switch fabric.
Steps
1. Configure the FC-VI VSAN:
a. Enter configuration mode if you have not done so already:
config t
b. Edit the VSAN database:
vsan database
c. Set the VSAN ID:
vsan vsan-ID
d. Set the VSAN name:
vsan vsan-ID name vsan_name
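Example
For example, the following commands create the FC-VI VSAN for fabric 1 on FC_switch_A_1, mirroring the storage VSAN example later in this task; repeat the same pattern on FC_switch_B_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10
FC_switch_A_1(config-vsan-db)# vsan 10 name FCVI_1_10
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config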
2. Add ports to the FC-VI VSAN:
a. Add the interfaces for each port in the VSAN:
vsan vsan-ID interface interface_name
For the FC-VI VSAN, the ports connecting the local FC-VI ports will be added.
b. Exit configuration mode:
end
c. Copy the running-config to the startup-config:
copy running-config startup-config
Example
In the following example, the ports are fc1/1 and fc1/13:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config)# vsan 10 interface fc1/1
FC_switch_A_1(config)# vsan 10 interface fc1/13
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config)# vsan 10 interface fc1/1
FC_switch_B_1(config)# vsan 10 interface fc1/13
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
3. Verify port membership of the VSAN:
show vsan member
Example
FC_switch_A_1# show vsan member
FC_switch_B_1# show vsan member
4. Configure the VSAN to guarantee in-order delivery of frames or out-of-order delivery of frames:
Note: The standard IOD settings are recommended. You should configure OOD only if
necessary.
Considerations for using TDM/xWDM equipment with fabric-attached MetroCluster
configurations on page 15
• The following steps must be performed to configure in-order delivery of frames:
a. Enter configuration mode:
conf t
b. Enable the in-order guarantee of exchanges for the VSAN:
in-order-guarantee vsan vsan-ID
Attention: For FC-VI VSANs (FCVI_1_10 and FCVI_2_30), you must enable in-order
guarantee of frames and exchanges only on VSAN 10.
c. Enable load balancing for the VSAN:
vsan vsan-ID loadbalancing src-dst-id
d. Exit configuration mode:
end
e. Copy the running-config to the startup-config:
copy running-config startup-config
The commands to configure in-order delivery of frames on FC_switch_A_1:
FC_switch_A_1# config t
FC_switch_A_1(config)# in-order-guarantee vsan 10
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
The commands to configure in-order delivery of frames on FC_switch_B_1:
FC_switch_B_1# config t
FC_switch_B_1(config)# in-order-guarantee vsan 10
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
• The following steps must be performed to configure out-of-order delivery of frames:
a. Enter configuration mode:
conf t
b. Disable the in-order guarantee of exchanges for the VSAN:
no in-order-guarantee vsan vsan-ID
c. Enable load balancing for the VSAN:
vsan vsan-ID loadbalancing src-dst-id
d. Exit configuration mode:
end
e. Copy the running-config to the startup-config:
copy running-config startup-config
The commands to configure out-of-order delivery of frames on FC_switch_A_1:
FC_switch_A_1# config t
FC_switch_A_1(config)# no in-order-guarantee vsan 10
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
The commands to configure out-of-order delivery of frames on FC_switch_B_1:
FC_switch_B_1# config t
FC_switch_B_1(config)# no in-order-guarantee vsan 10
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
Note: When configuring ONTAP on the controller modules, OOD must be explicitly
configured on each controller module in the MetroCluster configuration.
Configuring in-order delivery or out-of-order delivery of frames on ONTAP software on
page 180
5. Set QoS policies for the FC-VI VSAN:
a. Enter configuration mode:
conf t
b. Enable QoS and create a class map by entering the following commands in sequence:
qos enable
qos class-map class_name match-any
c. Create a policy map and add the class map created in the previous step to it:
qos policy-map policy_name
class class_name
d. Set the priority:
priority high
e. Add the VSAN to the policy map created previously in this procedure:
qos service policy policy_name vsan vsanid
f. Copy the updated configuration to the startup configuration:
copy running-config startup-config
Example
The commands to set the QoS policies on FC_switch_A_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# qos enable
FC_switch_A_1(config)# qos class-map FCVI_1_10_Class match-any
FC_switch_A_1(config)# qos policy-map FCVI_1_10_Policy
FC_switch_A_1(config-pmap)# class FCVI_1_10_Class
FC_switch_A_1(config-pmap-c)# priority high
FC_switch_A_1(config-pmap-c)# exit
FC_switch_A_1(config)# exit
FC_switch_A_1(config)# qos service policy FCVI_1_10_Policy vsan 10
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
The commands to set the QoS policies on FC_switch_B_1:
FC_switch_B_1# conf t
FC_switch_B_1(config)# qos enable
FC_switch_B_1(config)# qos class-map FCVI_1_10_Class match-any
FC_switch_B_1(config)# qos policy-map FCVI_1_10_Policy
FC_switch_B_1(config-pmap)# class FCVI_1_10_Class
FC_switch_B_1(config-pmap-c)# priority high
FC_switch_B_1(config-pmap-c)# exit
FC_switch_B_1(config)# exit
FC_switch_B_1(config)# qos service policy FCVI_1_10_Policy vsan 10
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
6. Configure the storage VSAN:
a. Edit the VSAN database:
vsan database
b. Set the VSAN ID:
vsan vsan-ID
c. Set the VSAN name:
vsan vsan-ID name vsan_name
Example
The commands to configure the storage VSAN on FC_switch_A_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 20
FC_switch_A_1(config-vsan-db)# vsan 20 name STOR_1_20
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
The commands to configure the storage VSAN on FC_switch_B_1:
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 20
FC_switch_B_1(config-vsan-db)# vsan 20 name STOR_1_20
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
7. Add ports to the storage VSAN.
For the storage VSAN, all ports connecting HBAs or FC-to-SAS bridges must be added. In this
example, fc1/5, fc1/9, fc1/17, fc1/21, fc1/25, fc1/29, fc1/33, and fc1/37 are being added.
Example
The commands to add ports to the storage VSAN on FC_switch_A_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config)# vsan 20 interface fc1/5
FC_switch_A_1(config)# vsan 20 interface fc1/9
FC_switch_A_1(config)# vsan 20 interface fc1/17
FC_switch_A_1(config)# vsan 20 interface fc1/21
FC_switch_A_1(config)# vsan 20 interface fc1/25
FC_switch_A_1(config)# vsan 20 interface fc1/29
FC_switch_A_1(config)# vsan 20 interface fc1/33
FC_switch_A_1(config)# vsan 20 interface fc1/37
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
The commands to add ports to the storage VSAN on FC_switch_B_1:
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config)# vsan 20 interface fc1/5
FC_switch_B_1(config)# vsan 20 interface fc1/9
FC_switch_B_1(config)# vsan 20 interface fc1/17
FC_switch_B_1(config)# vsan 20 interface fc1/21
FC_switch_B_1(config)# vsan 20 interface fc1/25
FC_switch_B_1(config)# vsan 20 interface fc1/29
FC_switch_B_1(config)# vsan 20 interface fc1/33
FC_switch_B_1(config)# vsan 20 interface fc1/37
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
Configuring E-ports
You must configure the switch ports that connect the ISL (these are the E-Ports). The procedure you
use depends on which switch you are using.
Configuring the E-ports on the Cisco FC switch
You must configure the FC switch ports that connect the ISL. These are the E-ports, and
configuration must be done for each port. To do so, the correct number of buffer-to-buffer credits
(BBCs) must be calculated.
About this task
All ISLs in the fabric must be configured with the same speed and distance settings.
This task must be performed on each ISL port.
Steps
1. Use the following table to determine the adjusted required BBCs per kilometer for possible port speeds.
To determine the correct number of BBCs, you multiply the adjusted BBCs required (determined from the table below) by the distance in kilometers between the switches. The adjustment factor of 1.5 is required to account for FC-VI framing behavior.

Speed in Gbps | BBCs required per kilometer | Adjusted BBCs required (BBCs per km x 1.5)
1 | 0.5 | 0.75
2 | 1 | 1.5
4 | 2 | 3
8 | 4 | 6
16 | 8 | 12
Example
For example, to compute the required number of credits for a distance of 30 km on a 4-Gbps link, make the following calculation:
• Speed in Gbps is 4.
• Adjusted BBCs required is 3.
• Distance in kilometers between switches is 30 km.
• 3 x 30 = 90
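For a 30-km, 4-Gbps link you would therefore assign 90 credits to the port in step 7 below, for example:
FC_switch_A_1(config-if)# switchport fcrxbbcredit 90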
2. Issue the following command to enter configuration mode:
config t
3. Specify the port you are configuring by entering the following command:
interface port-name
4. Shut down the port:
shutdown
5. Set the rate mode of the port to dedicated:
switchport rate-mode dedicated
6. Set the speed for the port:
switchport speed speed
7. Set the buffer-to-buffer credits for the port:
switchport fcrxbbcredit number of buffers
8. Set the port to E mode:
switchport mode E
9. Enable the trunk mode for the port:
switchport trunk mode on
10. Add the ISL VSANs to the trunk:
switchport trunk allowed vsan 10
switchport trunk allowed vsan add 20
11. Add the port to port channel 1:
channel-group 1
12. Repeat the previous steps for the matching ISL port on the partner switch in the fabric.
Example
The following example shows port fc1/36 configured for a distance of 30 km and 8 Gbps (6 adjusted BBCs per km x 30 km = 180 credits):

FC_switch_A_1# conf t
FC_switch_A_1(config)# interface fc1/36
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport rate-mode dedicated
FC_switch_A_1(config-if)# switchport speed 8000
FC_switch_A_1(config-if)# switchport fcrxbbcredit 180
FC_switch_A_1(config-if)# switchport mode E
FC_switch_A_1(config-if)# switchport trunk mode on
FC_switch_A_1(config-if)# switchport trunk allowed vsan 10
FC_switch_A_1(config-if)# switchport trunk allowed vsan add 20
FC_switch_A_1(config-if)# channel-group 1
fc1/36 added to port-channel 1 and disabled

FC_switch_B_1# conf t
FC_switch_B_1(config)# interface fc1/36
FC_switch_B_1(config-if)# shutdown
FC_switch_B_1(config-if)# switchport rate-mode dedicated
FC_switch_B_1(config-if)# switchport speed 8000
FC_switch_B_1(config-if)# switchport fcrxbbcredit 180
FC_switch_B_1(config-if)# switchport mode E
FC_switch_B_1(config-if)# switchport trunk mode on
FC_switch_B_1(config-if)# switchport trunk allowed vsan 10
FC_switch_B_1(config-if)# switchport trunk allowed vsan add 20
FC_switch_B_1(config-if)# channel-group 1
fc1/36 added to port-channel 1 and disabled
13. Issue the following command on both switches to restart the ports:
no shutdown
14. Repeat the previous steps for the other ISL ports in the fabric.
15. Add the native VSAN to the port-channel interface on both switches in the same fabric:
interface port-channel number
switchport trunk allowed vsan add native_san_id
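Example
For example, assuming port-channel 1 and the default native VSAN 1:
FC_switch_A_1(config)# interface port-channel 1
FC_switch_A_1(config-if)# switchport trunk allowed vsan add 1
FC_switch_B_1(config)# interface port-channel 1
FC_switch_B_1(config-if)# switchport trunk allowed vsan add 1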
16. Verify configuration of the port-channel:
show interface port-channel number
The port channel should have the following attributes:
• The port-channel is trunking.
• Admin port mode is E, trunk mode is on.
• Speed shows the cumulative value of all the ISL link speeds. For example, two ISL ports operating at 4 Gbps should show a speed of 8 Gbps.
• Trunk vsans (admin allowed and active) shows all the allowed VSANs.
• Trunk vsans (up) shows all the allowed VSANs.
• The member list shows all the ISL ports that were added to the port-channel.
• The port VSAN number should be the same as the VSAN that contains the ISLs (usually native vsan 1).
Example
FC_switch_A_1(config-if)# show int port-channel 1
port-channel 1 is trunking
Hardware is Fibre Channel
Port WWN is 24:01:54:7f:ee:e2:8d:a0
Admin port mode is E, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 8 Gbps
Trunk vsans (admin allowed and active) (1,10,20)
Trunk vsans (up)                       (1,10,20)
Trunk vsans (isolated)                 ()
Trunk vsans (initializing)             ()
5 minutes input rate 1154832 bits/sec, 144354 bytes/sec, 170 frames/sec
5 minutes output rate 1299152 bits/sec, 162394 bytes/sec, 183 frames/sec
535724861 frames input,1069616011292 bytes
0 discards,0 errors
0 invalid CRC/FCS,0 unknown class
0 too long,0 too short
572290295 frames output,1144869385204 bytes
0 discards,0 errors
5 input OLS,11 LRR,2 NOS,0 loop inits
14 output OLS,5 LRR, 0 NOS, 0 loop inits
Member[1] : fc1/36
Member[2] : fc1/40
Interface last changed at Thu Oct 16 11:48:00 2014
17. Exit interface configuration on both switches:
end
18. Copy the updated configuration to the startup configuration on both fabrics:
copy running-config startup-config
Example
FC_switch_A_1(config-if)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1(config-if)# end
FC_switch_B_1# copy running-config startup-config
19. Repeat the previous steps on the second switch fabric.
Related concepts
Port assignments for FC switches on page 38
Configuring FCIP ports for a single ISL on Cisco 9250i switches
You must configure the FCIP switch ports that connect the ISL (E-ports) by creating FCIP profiles
and interfaces and assigning them to the IPStorage1/1 GbE interface.
About this task
This task is only for configurations using a single ISL per switch fabric, using the IPStorage1/1
interface on each switch.
This task must be performed on each FC switch.
Two FCIP profiles are created on each switch:
• Fabric 1
  ◦ FC_switch_A_1 is configured with FCIP profiles 11 and 111.
  ◦ FC_switch_B_1 is configured with FCIP profiles 12 and 121.
• Fabric 2
  ◦ FC_switch_A_2 is configured with FCIP profiles 13 and 131.
  ◦ FC_switch_B_2 is configured with FCIP profiles 14 and 141.
Steps
1. Enter configuration mode:
config t
2. Enable FCIP:
feature fcip
3. Configure the IPStorage1/1 GbE interface:
a. Enter configuration mode:
conf t
b. Specify the IPStorage1/1 interface:
interface IPStorage1/1
c. Specify the IP address and subnet mask:
ip address ip-address subnet-mask
d. Specify the MTU size of 2500:
switchport mtu 2500
e. Enable the port:
no shutdown
f. Exit configuration mode:
exit
Example
The following example shows the configuration of an IPStorage1/1 port:
conf t
interface IPStorage1/1
ip address 192.168.1.201 255.255.255.0
switchport mtu 2500
no shutdown
exit
4. Configure the FCIP profile for FC-VI traffic:
a. Configure an FCIP profile and enter FCIP profile configuration mode:
fcip profile FCIP-profile-name
The profile name depends on which switch is being configured.
b. Assign the IP address of the IPStorage1/1 interface to the FCIP profile:
ip address ip-address
c. Assign the FCIP profile to TCP port 3227:
port 3227
d. Set the TCP settings:
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm
Example
The following example shows the configuration of the FCIP profile:
conf t
fcip profile 11
ip address 192.168.1.333
port 3227
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm
5. Configure the FCIP profile for storage traffic:
a. Configure an FCIP profile with the name 111 and enter FCIP profile configuration mode:
fcip profile 111
b. Assign the IP address of the IPStorage1/1 interface to the FCIP profile:
ip address ip-address
c. Assign the FCIP profile to TCP port 3229:
port 3229
d. Set the TCP settings:
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm
Example
The following example shows the configuration of the FCIP profile:
conf t
fcip profile 111
ip address 192.168.1.201
port 3229
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm
6. Create the first of two FCIP interfaces:
interface fcip 1
This interface is used for FC-VI traffic.
a. Select the profile 11 created previously:
use-profile 11
b. Set the IP address and port of the IPStorage1/1 port on the partner switch:
peer-info ipaddr partner-switch-port-ip port 3227
c. Select TCP connection 2:
tcp-connection 2
d. Set compression to auto:
ip-compression auto
e. Enable the interface:
no shutdown
f. Set the differentiated services code point (DSCP) markings, using value 48 for the control
TCP connection and 26 for the data connection:
qos control 48 data 26
g. Exit the interface configuration mode:
exit
Example
The following example shows the configuration of the FCIP interface:
interface fcip 1
use-profile 11
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr 192.168.32.234 port 3227
tcp-connection 2
ip-compression auto
no shutdown
qos control 48 data 26
exit
7. Create the second of two FCIP interfaces:
interface fcip 2
This interface is used for storage traffic.
a. Select the profile 111 created previously:
use-profile 111
b. Set the IP address and port of the IPStorage1/1 port on the partner switch:
peer-info ipaddr partner-switch-port-ip port 3229
c. Select TCP connection 5:
tcp-connection 5
d. Set compression to auto:
ip-compression auto
e. Enable the interface:
no shutdown
f. Set the DSCP markings, using value 48 for the control TCP connection and 26 for the data
connection:
qos control 48 data 26
g. Exit the interface configuration mode:
exit
Example
The following example shows the configuration of the FCIP interface:
interface fcip 2
use-profile 111
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr 192.168.32.234 port 3229
tcp-connection 5
ip-compression auto
no shutdown
qos control 48 data 26
exit
8. Configure the switchport settings on the fcip 1 interface:
a. Enter configuration mode:
config t
b. Specify the port you are configuring:
interface fcip 1
c. Shut down the port:
shutdown
d. Set the port to E mode:
switchport mode E
e. Enable the trunk mode for the port:
switchport trunk mode on
f. Set the trunk allowed vsan to 10:
switchport trunk allowed vsan 10
g. Set the speed for the port:
switchport speed speed
9. Configure the switchport settings on the fcip 2 interface:
a. Enter configuration mode:
config t
b. Specify the port you are configuring:
interface fcip 2
c. Shut down the port:
shutdown
d. Set the port to E mode:
switchport mode E
e. Enable the trunk mode for the port:
switchport trunk mode on
f. Set the trunk allowed vsan to 20:
switchport trunk allowed vsan 20
g. Set the speed for the port:
switchport speed speed
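Example
The following is a minimal sketch of steps 8 and 9 run as one session on FC_switch_A_1; the speed value is a placeholder that must match your ISL, and the ports remain shut down until they are restarted in step 11:
FC_switch_A_1# config t
FC_switch_A_1(config)# interface fcip 1
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport mode E
FC_switch_A_1(config-if)# switchport trunk mode on
FC_switch_A_1(config-if)# switchport trunk allowed vsan 10
FC_switch_A_1(config-if)# switchport speed speed
FC_switch_A_1(config-if)# interface fcip 2
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport mode E
FC_switch_A_1(config-if)# switchport trunk mode on
FC_switch_A_1(config-if)# switchport trunk allowed vsan 20
FC_switch_A_1(config-if)# switchport speed speed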
10. Repeat the previous steps on the second switch.
The only differences are the appropriate IP addresses and unique FCIP profile names:
• When configuring the first switch fabric, FC_switch_B_1 is configured with FCIP profiles 12 and 121.
• When configuring the second switch fabric, FC_switch_A_2 is configured with FCIP profiles 13 and 131, and FC_switch_B_2 is configured with FCIP profiles 14 and 141.
11. Restart the ports on both switches:
no shutdown
12. Exit the interface configuration on both switches:
end
13. Copy the updated configuration to the startup configuration on both switches:
copy running-config startup-config
Example
FC_switch_A_1(config-if)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1(config-if)# end
FC_switch_B_1# copy running-config startup-config
14. Repeat the previous steps on the second switch fabric.
Configuring FCIP ports for a dual ISL on Cisco 9250i switches
You must configure the FCIP switch ports that connect the ISL (E-ports) by creating FCIP profiles
and interfaces and then assigning them to the IPStorage1/1 and IPStorage1/2 GbE interfaces.
About this task
This task is only for configurations that use a dual ISL per switch fabric, using the IPStorage1/1 and
IPStorage1/2 GbE interfaces on each switch.
This task must be performed on each FC switch.
The task and examples use the following profile configuration table:

Switch    Switch          IPStorage     IP       Port     FCIP       FCIP     Port   Peer IP/      VSAN
fabric                    interface     address  type     interface  profile         port          ID
------------------------------------------------------------------------------------------------------
Fabric 1  FC_switch_A_1   IPStorage1/1  a.a.a.a  FC-VI    fcip 1     15       3220   c.c.c.c/3230  10
                                                 Storage  fcip 2     20       3221   c.c.c.c/3231  20
                          IPStorage1/2  b.b.b.b  FC-VI    fcip 3     25       3222   d.d.d.d/3232  10
                                                 Storage  fcip 4     30       3223   d.d.d.d/3233  20
          FC_switch_B_1   IPStorage1/1  c.c.c.c  FC-VI    fcip 1     15       3230   a.a.a.a/3220  10
                                                 Storage  fcip 2     20       3231   a.a.a.a/3221  20
                          IPStorage1/2  d.d.d.d  FC-VI    fcip 3     25       3232   b.b.b.b/3222  10
                                                 Storage  fcip 4     30       3233   b.b.b.b/3223  20
Fabric 2  FC_switch_A_2   IPStorage1/1  e.e.e.e  FC-VI    fcip 1     15       3220   g.g.g.g/3230  10
                                                 Storage  fcip 2     20       3221   g.g.g.g/3231  20
                          IPStorage1/2  f.f.f.f  FC-VI    fcip 3     25       3222   h.h.h.h/3232  10
                                                 Storage  fcip 4     30       3223   h.h.h.h/3233  20
          FC_switch_B_2   IPStorage1/1  g.g.g.g  FC-VI    fcip 1     15       3230   e.e.e.e/3220  10
                                                 Storage  fcip 2     20       3231   e.e.e.e/3221  20
                          IPStorage1/2  h.h.h.h  FC-VI    fcip 3     25       3232   f.f.f.f/3222  10
                                                 Storage  fcip 4     30       3233   f.f.f.f/3223  20
Steps
1. Enter configuration mode:
config t
2. Enable FCIP:
feature fcip
3. On each switch, configure the two IPStorage interfaces (IPStorage1/1 and IPStorage1/2):
a. Enter configuration mode:
conf t
b. Specify the IPStorage interface to create:
interface ipstorage
The ipstorage parameter value is IPStorage1/1 or IPStorage1/2.
c. Specify the IP address and subnet mask of the IPStorage interface previously specified:
ip address ip-address subnet-mask
Note: On each switch, the IPStorage interfaces IPStorage1/1 and IPStorage1/2 must have
different IP addresses.
d. Specify the MTU size as 2500:
switchport mtu 2500
e. Enable the port:
no shutdown
f. Exit configuration mode:
exit
g. Repeat steps a on page 112 through f on page 113 to configure IPStorage1/2 GbE interface
with a different IP address.
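Example
The following is a minimal sketch of step 3 on FC_switch_A_1, using the a.a.a.a and b.b.b.b placeholder addresses from the profile configuration table:
conf t
interface IPStorage1/1
ip address <a.a.a.a> <y.y.y.y>
switchport mtu 2500
no shutdown
exit
conf t
interface IPStorage1/2
ip address <b.b.b.b> <y.y.y.y>
switchport mtu 2500
no shutdown
exit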
4. Configure the FCIP profiles for FC-VI and storage traffic with the profile names given in the
profile configuration table:
a. Enter configuration mode:
conf t
b. Configure the FCIP profiles with the following profile names:
fcip profile FCIP-profile-name
The following list provides the values for the FCIP-profile-name parameter:
• 15 for FC-VI on IPStorage1/1
• 20 for storage on IPStorage1/1
• 25 for FC-VI on IPStorage1/2
• 30 for storage on IPStorage1/2
c. Assign the FCIP profile ports according to the profile configuration table:
port port number
d. Set the TCP settings:
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm
5. Create FCIP interfaces:
interface fcip FCIP interface
The FCIP interface parameter value is 1, 2, 3, or 4 as given in the profile configuration table.
a. Map interfaces to the previously created profiles:
use-profile profile
b. Set the peer IP address and peer profile port number:
peer-info ipaddr peer-IPStorage-ip-address port peer-profile-port-number
c. Select the TCP connections:
tcp-connection connection #
The connection # parameter value is 2 for FC-VI profiles and 5 for storage profiles.
d. Set compression to auto:
ip-compression auto
e. Enable the interface:
no shutdown
f. Set the DSCP markings, using value 48 for the control TCP connection and 26 for the data
connection:
qos control 48 data 26
g. Exit configuration mode:
exit
6. Configure the switchport settings on each FCIP interface:
a. Enter configuration mode:
config t
b. Specify the port that you are configuring:
interface fcip 1
c. Shut down the port:
shutdown
d. Set the port to E mode:
switchport mode E
e. Enable the trunk mode for the port:
switchport trunk mode on
f. Specify the trunk that is allowed on a specific VSAN:
switchport trunk allowed vsan vsan
The vsan parameter value is VSAN 10 for FC-VI profiles and VSAN 20 for storage profiles.
g. Set the speed for the port:
switchport speed speed
h. Exit configuration mode:
exit
7. Copy the updated configuration to the startup configuration on both switches:
copy running-config startup-config
The following examples show the configuration of FCIP ports for a dual ISL in fabric 1
switches FC_switch_A_1 and FC_switch_B_1.
For FC_switch_A_1:
FC_switch_A_1# config t
FC_switch_A_1(config)# no in-order-guarantee vsan 10
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config

# fcip settings
feature fcip

conf t
interface IPStorage1/1
# IP address: a.a.a.a
# Mask: y.y.y.y
ip address <a.a.a.a> <y.y.y.y>
switchport mtu 2500
no shutdown
exit

conf t
fcip profile 15
ip address <a.a.a.a>
port 3220
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
fcip profile 20
ip address <a.a.a.a>
port 3221
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
interface IPStorage1/2
# IP address: b.b.b.b
# Mask: y.y.y.y
ip address <b.b.b.b> <y.y.y.y>
switchport mtu 2500
no shutdown
exit

conf t
fcip profile 25
ip address <b.b.b.b>
port 3222
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
fcip profile 30
ip address <b.b.b.b>
port 3223
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

interface fcip 1
use-profile 15
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <c.c.c.c> port 3230
tcp-connection 2
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 2
use-profile 20
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <c.c.c.c> port 3231
tcp-connection 5
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 3
use-profile 25
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <d.d.d.d> port 3232
tcp-connection 2
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 4
use-profile 30
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <d.d.d.d> port 3233
tcp-connection 5
ip-compression auto
no shutdown
qos control 48 data 26
exit

conf t
interface fcip 1
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 10
no shutdown
exit

conf t
interface fcip 2
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 20
no shutdown
exit

conf t
interface fcip 3
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 10
no shutdown
exit

conf t
interface fcip 4
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 20
no shutdown
exit
For FC_switch_B_1:
FC_switch_B_1# config t
FC_switch_B_1(config)# in-order-guarantee vsan 10
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config

# fcip settings
feature fcip

conf t
interface IPStorage1/1
# IP address: c.c.c.c
# Mask: y.y.y.y
ip address <c.c.c.c> <y.y.y.y>
switchport mtu 2500
no shutdown
exit

conf t
fcip profile 15
ip address <c.c.c.c>
port 3230
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
fcip profile 20
ip address <c.c.c.c>
port 3231
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
interface IPStorage1/2
# IP address: d.d.d.d
# Mask: y.y.y.y
ip address <d.d.d.d> <y.y.y.y>
switchport mtu 2500
no shutdown
exit

conf t
fcip profile 25
ip address <d.d.d.d>
port 3232
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

conf t
fcip profile 30
ip address <d.d.d.d>
port 3233
tcp keepalive-timeout 1
tcp max-retransmissions 3
max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 3
tcp min-retransmit-time 200
tcp keepalive-timeout 1
tcp pmtu-enable reset-timeout 3600
tcp sack-enable
no tcp cwm

interface fcip 1
use-profile 15
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <a.a.a.a> port 3220
tcp-connection 2
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 2
use-profile 20
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <a.a.a.a> port 3221
tcp-connection 5
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 3
use-profile 25
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <b.b.b.b> port 3222
tcp-connection 2
ip-compression auto
no shutdown
qos control 48 data 26
exit

interface fcip 4
use-profile 30
# the port # listed in this command is the port that the remote switch is listening on
peer-info ipaddr <b.b.b.b> port 3223
tcp-connection 5
ip-compression auto
no shutdown
qos control 48 data 26
exit

conf t
interface fcip 1
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 10
no shutdown
exit

conf t
interface fcip 2
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 20
no shutdown
exit

conf t
interface fcip 3
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 10
no shutdown
exit

conf t
interface fcip 4
shutdown
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan 20
no shutdown
exit
Configuring zoning on a Cisco FC switch
You must assign the switch ports to separate zones to isolate storage (HBA) and controller (FC-VI)
traffic.
About this task
These steps must be performed on both FC switch fabrics.
The following steps use the zoning described in the section Zoning for a FibreBridge 7500N in a
four-node MetroCluster configuration.
Zoning for FC-VI ports on page 66
Steps
1. Clear the existing zones and zone set, if present.
a. Determine which zones and zone sets are active:
show zoneset active
Example
FC_switch_A_1# show zoneset active
FC_switch_B_1# show zoneset active
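Output resembling the following indicates an active zone set; this is a hedged sketch only, and the zone set name, zones, and members vary with your configuration:
FC_switch_A_1# show zoneset active
zoneset name ZoneSet_A vsan 10
  zone name FCVI_Zone_1_10 vsan 10
    interface fc1/1 swwn 20:00:00:05:9b:24:cb:78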
b. Disable the active zone sets identified in the previous step:
no zoneset activate name zoneset_name vsan vsan_id
Example
The following example shows two zone sets being disabled:
• ZoneSet_A on FC_switch_A_1 in VSAN 10
• ZoneSet_B on FC_switch_B_1 in VSAN 20
FC_switch_A_1# no zoneset activate name ZoneSet_A vsan 10
FC_switch_B_1# no zoneset activate name ZoneSet_B vsan 20
c. After all zone sets are deactivated, clear the zone database:
clear zone database vsan vsan-id
Example
FC_switch_A_1# clear zone database vsan 10
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# clear zone database vsan 20
FC_switch_B_1# copy running-config startup-config
2. Obtain the switch worldwide name (WWN):
show wwn switch
3. Configure the basic zone settings:
a. Set the default zoning policy to deny:
no system default zone default-zone permit
b. Enable the full zone distribution:
system default zone distribute full
c. Set the default zoning policy for each VSAN to deny:
no zone default-zone permit vsan vsanid
d. Set the default full zone distribution for each VSAN:
zoneset distribute full vsan vsanid
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# no system default zone default-zone permit
FC_switch_A_1(config)# system default zone distribute full
FC_switch_A_1(config)# no zone default-zone permit vsan 10
FC_switch_A_1(config)# no zone default-zone permit vsan 20
FC_switch_A_1(config)# zoneset distribute full vsan 10
FC_switch_A_1(config)# zoneset distribute full vsan 20
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# no system default zone default-zone permit
FC_switch_B_1(config)# system default zone distribute full
FC_switch_B_1(config)# no zone default-zone permit vsan 10
FC_switch_B_1(config)# no zone default-zone permit vsan 20
FC_switch_B_1(config)# zoneset distribute full vsan 10
FC_switch_B_1(config)# zoneset distribute full vsan 20
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
4. Create storage zones and add the storage ports to them.
These steps only need to be performed on one switch in each fabric.
The zoning depends on the model FC-to-SAS bridge you are using. For details, see the section for
your model bridge. The examples show Brocade switch ports, so adjust your ports accordingly.
• Zoning for FibreBridge 6500N bridges or FibreBridge 7500N using one FC port on page 69
• Zoning for FibreBridge 7500N bridges using both FC ports on page 75
Each storage zone contains the HBA initiator ports from all controllers and a single port
connecting an FC-to-SAS bridge.
a. Create the storage zones:
zone name STOR_zone-name vsan vsanid
b. Add the storage ports to the zone:
member interface fc-port swwn switch-wwn
c. Activate the zone set:
zoneset activate name STOR_zonesetname vsan vsanid
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# zone name STOR_Zone_1_20_25 vsan 20
FC_switch_A_1(config-zone)# member interface fc1/5 swwn
20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/9 swwn
20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/17 swwn
20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/21 swwn
20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/5 swwn
20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/9 swwn
20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/17 swwn
20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/21 swwn
20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/25 swwn
20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# end
FC_switch_A_1# copy running-config startup-config
5. Create an FCVI zone set and add the FCVI zones to it:
These steps only need to be performed on one switch in the fabric.
a. Create the FCVI zone set:
zoneset name FCVI_zonesetname vsan vsanid
b. Add FCVI zones to the zone set:
member FCVI_zonename
c. Activate the zone set:
zoneset activate name FCVI_zonesetname vsan vsanid
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# zoneset name FCVI_Zoneset_1_20 vsan 20
FC_switch_A_1(config-zoneset)# member FCVI_Zone_1_20_25
FC_switch_A_1(config-zoneset)# member FCVI_Zone_1_20_29
...
FC_switch_A_1(config-zoneset)# exit
FC_switch_A_1(config)# zoneset activate name FCVI_Zoneset_1_20 vsan 20
FC_switch_A_1(config)# exit
FC_switch_A_1# copy running-config startup-config
6. Verify the zoning:
show zone
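Example
The output lists each zone and its members; the following is a sketch only, and the zone names and members will match those you created in the previous steps:
FC_switch_A_1# show zone
zone name STOR_Zone_1_20_25 vsan 20
  interface fc1/5 swwn 20:00:00:05:9b:24:cb:78
  interface fc1/9 swwn 20:00:00:05:9b:24:cb:78
  ...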
7. Repeat the previous steps on the second FC switch fabric.
Ensuring the FC switch configuration is saved
You must make sure the FC switch configuration is saved to the startup config on all switches.
Step
1. Issue the following command on both FC switch fabrics:
copy running-config startup-config
Example
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# copy running-config startup-config
Installing FC-to-SAS bridges and SAS disk shelves
You install and cable ATTO FibreBridge bridges and SAS disk shelves when adding new storage to
the configuration.
About this task
For systems received from the factory, the FC-to-SAS bridges are preconfigured and do not require
additional configuration.
This procedure is written with the assumption that you are using the recommended bridge
management interfaces: the ATTO ExpressNAV GUI and ATTO QuickNAV utility.
You use the ATTO ExpressNAV GUI to configure and manage a bridge, and to update the bridge
firmware. You use the ATTO QuickNAV utility to configure the bridge Ethernet management 1 port.
You can use other management interfaces instead, if needed, such as a serial port or Telnet to
configure and manage a bridge and to configure the Ethernet management 1 port, and FTP to update
the bridge firmware.
This procedure uses the following workflow:
Steps
1. Preparing for the installation on page 123
2. Installing the FC-to-SAS bridge and SAS shelves on page 125
Preparing for the installation
When you are preparing to install the bridges as part of your new MetroCluster system, you must
ensure that your system meets certain requirements, including meeting setup and configuration
requirements for the bridges. Other requirements include downloading the necessary documents, the
ATTO QuickNAV utility, and the bridge firmware.
Before you begin
• Your system must already be installed in a rack if it was not shipped in a system cabinet.
• Your configuration must be using supported hardware models and software versions.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You use the Component Explorer to select the components and ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.
• Each FC switch must have one FC port available for one bridge to connect to it.
• You must have familiarized yourself with how to handle SAS cables and the considerations and best practices for installing and cabling disk shelves.
  The Installation and Service Guide for your disk shelf model describes the considerations and best practices.
  NetApp Documentation: Disk Shelves
• The computer you are using to set up the bridges must be running an ATTO-supported web browser to use the ATTO ExpressNAV GUI.
  The ATTO-supported web browsers are Internet Explorer 8 and 9, and Mozilla Firefox 3.
  The ATTO Product Release Notes have an up-to-date list of supported web browsers. You can access this document from the ATTO web site as described in the following steps.
Steps
1. Download the Installation and Service Guide for your disk shelf model:
NetApp Documentation: Disk Shelves
2. Download content from the ATTO web site and from the NetApp web site:
a. From the NetApp Support Site, navigate to the ATTO FibreBridge Description page by
clicking Software, scrolling to Protocol Bridge and selecting ATTO FibreBridge from the
drop-down menu, clicking Go!, and then clicking View & Download.
NetApp Support
b. Access the ATTO web site using the link provided for your FibreBridge model and download
the following:
• ATTO FibreBridge 7500N Installation and Operation Manual
• ATTO FibreBridge 6500N Installation and Operation Manual
• ATTO QuickNAV utility (to the computer you are using for setup)
c. Go to the ATTO FibreBridge Firmware Download page for your FibreBridge model and do
the following:
If you are using FibreBridge 7500N:
• Navigate to the ATTO FibreBridge 7500N Firmware Download page by clicking Continue at the end of the ATTO FibreBridge Description page.
• Download the bridge firmware file using Steps 1 through 3 of that procedure.
  You update the firmware on each bridge later in this procedure.
• Make a copy of the ATTO FibreBridge 7500N Firmware Download page and release notes for reference when you are instructed to update the firmware on each bridge.

If you are using FibreBridge 6500N:
• Navigate to the ATTO FibreBridge 6500N Firmware Download page by clicking Continue at the end of the ATTO FibreBridge Description page.
• Download the bridge firmware file using Steps 1 through 3 of that procedure.
  You update the firmware on each bridge later in this procedure.
• Make a copy of the ATTO FibreBridge 6500N Firmware Download page and release notes for reference when you are instructed to update the firmware on each bridge.
3. Gather the hardware and information needed to use the recommended bridge management
interfaces, the ATTO ExpressNAV GUI, and the ATTO QuickNAV utility:
a. Acquire a shielded Ethernet cable provided with the bridges (which connects from the bridge
Ethernet management 1 port to your network).
b. Determine a non-default user name and password (for accessing the bridges).
You should change the default user name and password.
c. Obtain an IP address, subnet mask, and gateway information for the Ethernet management 1
port on each bridge.
d. Disable VPN clients on the computer you are using for setup.
Active VPN clients cause the QuickNAV scan for bridges to fail.
Installing the FC-to-SAS bridge and SAS shelves
After ensuring that the system meets all the requirements in the “Preparing for the installation”
section, you can install your new system.
About this task
• You should use an equal number of disk shelves at each site.
• The system connectivity requirements for maximum distances for disk shelves, FC switches, and backup tape devices using 50-micron, multimode fiber-optic cables also apply to FibreBridge bridges.
  The Site Requirements Guide has detailed information about system connectivity requirements.
• A mix of IOM12 modules and IOM3/IOM6 modules is not supported within the same storage stack.
Note: SAS shelves in MetroCluster configurations do not support ACP cabling.
Steps
1. Configuring the FC-to-SAS bridges on page 125
2. Cabling disk shelves to the bridges on page 127
3. Verifying bridge connectivity and cabling the bridge FC ports on page 131
4. Disabling unused SAS ports on the bridge on page 137
Configuring the FC-to-SAS bridges
Before cabling your model of the FC-to-SAS bridges you must configure the settings in the
FibreBridge software.
Steps
1. Connect the Ethernet management 1 port on each bridge to your network using the shielded
Ethernet cable provided with the bridges.
Note: The Ethernet management 1 port enables you to quickly download the bridge firmware
(using ATTO ExpressNAV or FTP management interfaces) and to retrieve core files and extract
logs.
2. Configure the Ethernet management 1 port for each bridge by following the procedure in the
ATTO FibreBridge Installation and Operation Manual for your bridge.
Note: When running QuickNAV to configure an Ethernet management port, only the Ethernet
management port that is connected by the shielded Ethernet cable is configured. For example,
if you also wanted to configure the Ethernet management 2 port, you would need to connect the
shielded Ethernet cable to port 2 and run QuickNAV.
If you are using...        Then see the...
ATTO FibreBridge 7500N     ATTO FibreBridge 7500N Installation and Operation Manual, section 2.0.
ATTO FibreBridge 6500N     ATTO FibreBridge 6500N Installation and Operation Manual, section 2.0.
3. Configure the bridges.
Be sure to make note of the user name and password that you designate.
The ATTO FibreBridge Installation and Operation Manual for your bridge model has the most
current information on available commands and how to use them.
Note: Do not configure time synchronization on ATTO FibreBridge 7500N. The time
synchronization for ATTO FibreBridge 7500N is set to the cluster time after the bridge is
discovered by ONTAP. It is also synchronized periodically once a day. The time zone used is
GMT and is not changeable.
a. Configure the IP settings of the bridge.
To set the IP address without the Quicknav utility, you need to have a serial connection to the
FibreBridge.
Example
If using the CLI, issue the following commands:
set ipaddress mp1 ip-address
set ipsubnetmask mp1 subnet-mask
set ipgateway mp1 x.x.x.x
set ipdhcp mp1 disabled
b. Configure the bridge name.
The bridges should each have a unique name within the MetroCluster configuration.
Example bridge names for one stack group on each site:
• bridge_A_1a
• bridge_A_1b
• bridge_B_1a
• bridge_B_1b
Example
If using the CLI, issue the following command:
set bridgename bridgename
c. Enable SNMP on the bridge:
set SNMP enabled
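Example
Taken together, a minimal sketch of this step from the bridge CLI; the IP values shown are illustrations only and must be replaced with the addresses you gathered for the Ethernet management 1 port:
set ipaddress mp1 192.168.1.144
set ipsubnetmask mp1 255.255.255.0
set ipgateway mp1 192.168.1.1
set ipdhcp mp1 disabled
set bridgename bridge_A_1a
set SNMP enabled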
4. Configure the bridge FC ports.
a. Configure the data rate/speed of the bridge FC ports.
The FibreBridge 6500N supports speeds up to 8 Gbps; the FibreBridge 7500N supports up to 16 Gbps.
Note: Set the FCDataRate speed to the maximum speed supported by the FC port of the FC
switch or the controller module to which the bridge port connects. The most current
information on supported distance can be found in the Interoperability Matrix.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without
FlexArray). You use the Component Explorer to select the components and ONTAP version
to refine your search. You can click Show Results to display the list of supported
configurations that match the criteria.
Example
If using the CLI, issue the following command:
set FCDataRate port-number 16Gb
b. Configure the connection mode that the port uses to communicate across the FC network.
• If you have a fabric-attached MetroCluster system, you must set the bridge connection mode to ptp (point-to-point).
• If you have a stretch MetroCluster system, you must set the bridge connection mode depending on the adapter that the bridge connects to:
  ◦ For 16-Gbps capable adapters, set the connection mode to ptp, even if it is operating at a lower speed.
  ◦ For 8-Gbps and 4-Gbps capable adapters, set the connection mode to loop.
Example
If using the CLI, issue the following command:
set FCConnMode port-number value
value can be either ptp or loop.
c. If you are configuring an ATTO 7500N bridge and using the second port, repeat the previous
substeps for the FC2 port.
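Example
The following is a sketch of this step for a FibreBridge 7500N with both FC ports connected to 16-Gbps switch ports in a fabric-attached configuration; adjust the data rate and connection mode to your hardware:
set FCDataRate 1 16Gb
set FCConnMode 1 ptp
set FCDataRate 2 16Gb
set FCConnMode 2 ptp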
5. Save the bridge's configuration.
Example
If using the CLI, issue the following command:
SaveConfiguration Restart
You are prompted to restart the bridge.
6. Update the firmware on each bridge to the latest version by following the instructions—starting
with Step 4—on the FibreBridge Download page.
Cabling disk shelves to the bridges
You must use the correct FC-to-SAS bridges for cabling your disk shelves.
Choices
• Cabling a FibreBridge 7500N bridge with disk shelves using IOM12 modules on page 128
• Cabling a FibreBridge 7500N bridge with disk shelves using IOM6 or IOM3 modules on page 129
• Cabling a FibreBridge 6500N bridge with disk shelves using IOM6 or IOM3 modules on page 130
Cabling a FibreBridge 7500N bridge with disk shelves using IOM12 modules
After configuring the bridge, you can start cabling your new system. The FibreBridge 7500N bridge
uses mini-SAS connectors and supports disk shelves that use IOM12 modules.
About this task
For disk shelves, you insert a SAS cable connector with the pull tab oriented down (on the underside
of the connector).
Steps
1. Daisy-chain the disk shelves in each stack:
a. Beginning with the logical first shelf in the stack, connect IOM A port 3 to the next shelf's
IOM A port 1 until each IOM A in the stack is connected.
b. Repeat substep a for IOM B.
c. Repeat substeps a and b for each stack.
The Installation and Service Guide for your disk shelf model provides detailed information about
daisy-chaining disk shelves.
NetApp Documentation: Disk Shelves
2. Power on the disk shelves, and then set the shelf IDs.
• You must power-cycle each disk shelf.
• Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration (including both sites).
NetApp Documentation: Disk Shelves
3. Cable disk shelves to the FibreBridge bridges.
a. For the first stack of disk shelves, cable IOM A of the first shelf to SAS port A on FibreBridge
A, and cable IOM B of the last shelf to SAS port A on FibreBridge B.
b. For additional shelf stacks, repeat the previous step using the next available SAS port on the
FibreBridge bridges, using port B for the second stack, port C for the third stack, and port D
for the fourth stack.
c. During cabling, you can attach stacks based on IOM12 modules and stacks based on IOM3/IOM6 modules to the same bridge, as long as each stack is connected to a separate SAS port.
Note: Each stack can use different models of IOM, but all disk shelves within a stack must
use the same model.
The following illustration shows disk shelves connected to a pair of FibreBridge 7500N bridges.
Cabling a FibreBridge 7500N bridge with disk shelves using IOM6 or IOM3 modules
After configuring the bridge, you can start cabling your new system. The FibreBridge 7500N bridge
uses mini-SAS connectors and supports disk shelves that use IOM6 or IOM3 modules.
About this task
For disk shelves, you insert a SAS cable connector with the pull tab oriented down (on the underside
of the connector).
Steps
1. Daisy-chain the disk shelves in each stack.
a. For the first stack of disk shelves, cable IOM A square port of the first shelf to SAS port A on
FibreBridge A.
b. For the first stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
The Installation and Service Guide for your disk shelf model provides detailed information about
daisy-chaining disk shelves.
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
The following illustration shows a set of bridges cabled to a stack of disk shelves:
[Illustration: a pair of ATTO 7500N bridges, bridge_A_1 and bridge_A_2 (each with M1, FC1, FC2, and SAS A–D ports), cabled to a shelf stack through IOM A and IOM B.]
2. For additional shelf stacks, repeat the previous steps using the next available SAS port on the
FibreBridge bridges, using port B for a second stack, port C for a third stack, and port D for a
fourth stack.
The following illustration shows four stacks connected to a pair of FibreBridge 7500N bridges.
[Illustration: four shelf stacks cabled to ATTO 7500N bridges bridge_A_1 and bridge_A_2; each stack's IOM A connects to a SAS port (A–D) on bridge_A_1, and each stack's IOM B connects to the matching SAS port on bridge_A_2.]
Cabling a FibreBridge 6500N bridge with disk shelves using IOM6 or IOM3 modules
After configuring the bridge, you can start cabling your new system. The FibreBridge 6500N bridge
uses QSFP connectors.
About this task
Wait at least 10 seconds before connecting the port. The SAS cable connectors are keyed; when
oriented correctly into a SAS port, the connector clicks into place and the disk shelf SAS port LNK
LED illuminates green. For disk shelves, you insert a SAS cable connector with the pull tab oriented
down (on the underside of the connector).
The FibreBridge 6500N bridge does not support disk shelves that use IOM12.
Steps
1. Daisy-chain the disk shelves in each stack.
For information about daisy-chaining disk shelves, see the Installation and Service Guide for your
disk shelf model.
2. For each stack of disk shelves, cable the IOM A square port of the first shelf to the SAS port A on
FibreBridge A.
3. For each stack of disk shelves, cable the IOM B circle port of the last shelf to the SAS port A on
FibreBridge B.
Each bridge has one path to its stack of disk shelves: bridge A connects to the A-side of the stack
through the first shelf, and bridge B connects to the B-side of the stack through the last shelf.
Example
Note: SAS port B on each bridge is disabled.
The following illustration shows a set of bridges cabled to a stack of four disk shelves:
[Illustration: FC_bridge_x_1 connects through its SAS A port to IOM A of the first shelf in the stack; FC_bridge_x_2 connects through its SAS A port to IOM B of the last shelf in the stack.]
Verifying bridge connectivity and cabling the bridge FC ports
You should verify that each bridge can detect all of the disk drives, and then cable each bridge to the
local FC switches.
Steps
1. Verify that each bridge can detect all of the disk drives and disk shelves it is connected to:
If you are using the ATTO ExpressNAV GUI:
a. In a supported web browser, enter the IP address of a bridge in the browser box.
   You are brought to the ATTO FibreBridge homepage of the bridge for which you entered the IP address, which has a link.
b. Click the link, and then enter your user name and the password that you designated when you configured the bridge.
   The ATTO FibreBridge status page of the bridge appears with a menu to the left.
c. Click Advanced.
d. Enter the following command, and then click Submit to view the connected devices:
   sastargets

If you are using a serial port connection, view the connected devices:
sastargets
Example
The output shows the devices (disks and disk shelves) that the bridge is connected to. Output lines
are sequentially numbered so that you can quickly count the devices. For example, the following
output shows that 10 disks are connected.
Tgt  VendorID  ProductID         Type  SerialNumber
0    NETAPP    X410_S15K6288A15  DISK  3QP1CLE300009940UHJV
1    NETAPP    X410_S15K6288A15  DISK  3QP1ELF600009940V1BV
2    NETAPP    X410_S15K6288A15  DISK  3QP1G3EW00009940U2M0
3    NETAPP    X410_S15K6288A15  DISK  3QP1EWMP00009940U1X5
4    NETAPP    X410_S15K6288A15  DISK  3QP1FZLE00009940G8YU
5    NETAPP    X410_S15K6288A15  DISK  3QP1FZLF00009940TZKZ
6    NETAPP    X410_S15K6288A15  DISK  3QP1CEB400009939MGXL
7    NETAPP    X410_S15K6288A15  DISK  3QP1G7A900009939FNTT
8    NETAPP    X410_S15K6288A15  DISK  3QP1FY0T00009940G8PA
9    NETAPP    X410_S15K6288A15  DISK  3QP1FXW600009940VERQ
Note: If the text response truncated appears at the beginning of the output, you can use
Telnet to connect to the bridge and enter the same command to see all of the output.
2. Verify that the command output shows that the bridge is connected to all disks and disk shelves in
the stack that it is supposed to be connected to.
If the output is correct, repeat Step 1 on page 131 for each remaining bridge.
If the output is not correct:
a. Check for loose SAS cables or correct the SAS cabling by repeating the cabling.
   Cabling disk shelves to the bridges on page 127
b. Repeat Step 1 on page 131.
3. Cable each bridge to the local FC switches, using the cabling in the table for your configuration
and switch model and FC-to-SAS bridge model:
Attention: The second FC port connection on the FibreBridge 7500N bridge should not be
cabled until zoning has been completed.
Note: The Brocade and Cisco switches use different port numbering:
• On Brocade switches, the first port is numbered “0”.
• On Cisco switches, the first port is numbered “1”.
This is reflected in the following tables:
Configurations using FibreBridge 7500N using both FC ports (FC1 and FC2)

DR GROUP 1 (the port numbers are the same on Brocade 6505, 6510, and 6520 switches)

         Component    Port  Switch 1  Switch 2
Stack 1  bridge_x_1a  FC1   8         -
                      FC2   -         8
         bridge_x_1b  FC1   9         -
                      FC2   -         9
Stack 2  bridge_x_2a  FC1   10        -
                      FC2   -         10
         bridge_x_2b  FC1   11        -
                      FC2   -         11
Stack 3  bridge_x_3a  FC1   12        -
                      FC2   -         12
         bridge_x_3b  FC1   13        -
                      FC2   -         13
Stack y  bridge_x_ya  FC1   14        -
                      FC2   -         14
         bridge_x_yb  FC1   15        -
                      FC2   -         15
Configurations using FibreBridge 7500N using both FC ports (FC1 and FC2)

DR GROUP 2 (not supported on Brocade 6505)

                            Brocade 6510        Brocade 6520
         Component    Port  Switch 1  Switch 2  Switch 1  Switch 2
Stack 1  bridge_x_51a FC1   32        -         56        -
                      FC2   -         32        -         56
         bridge_x_51b FC1   33        -         57        -
                      FC2   -         33        -         57
Stack 2  bridge_x_52a FC1   34        -         58        -
                      FC2   -         34        -         58
         bridge_x_52b FC1   35        -         59        -
                      FC2   -         35        -         59
Stack 3  bridge_x_53a FC1   36        -         60        -
                      FC2   -         36        -         60
         bridge_x_53b FC1   37        -         61        -
                      FC2   -         37        -         61
Stack y  bridge_x_5ya FC1   38        -         62        -
                      FC2   -         38        -         62
         bridge_x_5yb FC1   39        -         63        -
                      FC2   -         39        -         63
Configurations using FibreBridge 6500N bridges or FibreBridge 7500N using one FC port (FC1 or FC2) only

DR GROUP 1 (the port numbers are the same on Brocade 6505, 6510, and 6520 switches)

         Component    Switch 1  Switch 2
Stack 1  bridge_x_1a  8         -
         bridge_x_1b  -         8
Stack 2  bridge_x_2a  9         -
         bridge_x_2b  -         9
Stack 3  bridge_x_3a  10        -
         bridge_x_3b  -         10
Stack y  bridge_x_ya  11        -
         bridge_x_yb  -         11
Configurations using FibreBridge 6500N bridges or FibreBridge 7500N using one FC port (FC1 or FC2) only

DR GROUP 2 (not supported on Brocade 6505)

                       Brocade 6510        Brocade 6520
         Component     Switch 1  Switch 2  Switch 1  Switch 2
Stack 1  bridge_x_51a  32        -         56        -
         bridge_x_51b  -         32        -         56
Stack 2  bridge_x_52a  33        -         57        -
         bridge_x_52b  -         33        -         57
Stack 3  bridge_x_53a  34        -         58        -
         bridge_x_53b  -         34        -         58
Stack y  bridge_x_ya   35        -         59        -
         bridge_x_yb   -         35        -         59
Cisco 9396S

FibreBridge 7500N using two FC ports

Component    Port  Switch 1  Switch 2
bridge_x_1a  FC1   9         -
             FC2   -         9
bridge_x_1b  FC1   10        -
             FC2   -         10
bridge_x_2a  FC1   11        -
             FC2   -         11
bridge_x_2b  FC1   12        -
             FC2   -         12
bridge_x_3a  FC1   13        -
             FC2   -         13
bridge_x_3b  FC1   14        -
             FC2   -         14
bridge_x_4a  FC1   15        -
             FC2   -         15
bridge_x_4b  FC1   16        -
             FC2   -         16

Additional bridges can be attached using ports 17 through 40 and 57 through 88 following the same pattern.
Cisco 9148 or 9148S

FibreBridge 7500N using two FC ports

Component    Port  Switch 1  Switch 2
bridge_x_1a  FC1   14        -
             FC2   -         14
bridge_x_1b  FC1   15        -
             FC2   -         15
bridge_x_2a  FC1   17        -
             FC2   -         17
bridge_x_2b  FC1   18        -
             FC2   -         18
bridge_x_3a  FC1   19        -
             FC2   -         19
bridge_x_3b  FC1   21        -
             FC2   -         21
bridge_x_4a  FC1   22        -
             FC2   -         22
bridge_x_4b  FC1   23        -
             FC2   -         23

Additional bridges can be attached using ports 25 through 48 following the same pattern.
Cisco 9396S

FibreBridge 6500N bridge or FibreBridge 7500N using one FC port

Component    Port  Switch 1  Switch 2
bridge_x_1a  FC1   9         -
bridge_x_1b  FC1   -         9
bridge_x_2a  FC1   10        -
bridge_x_2b  FC1   -         10
bridge_x_3a  FC1   11        -
bridge_x_3b  FC1   -         11
bridge_x_4a  FC1   12        -
bridge_x_4b  FC1   -         12
bridge_x_5a  FC1   13        -
bridge_x_5b  FC1   -         13
bridge_x_6a  FC1   14        -
bridge_x_6b  FC1   -         14
bridge_x_7a  FC1   15        -
bridge_x_7b  FC1   -         15
bridge_x_8a  FC1   16        -
bridge_x_8b  FC1   -         16

Additional bridges can be attached using ports 17 through 40 and 57 through 88 following the same pattern.
Cisco 9148 or 9148S

FibreBridge 6500N bridge or FibreBridge 7500N using one FC port

Component    Port  Switch 1  Switch 2
bridge_x_1a  FC1   14        -
bridge_x_1b  FC1   -         14
bridge_x_2a  FC1   15        -
bridge_x_2b  FC1   -         15
bridge_x_3a  FC1   17        -
bridge_x_3b  FC1   -         17
bridge_x_4a  FC1   18        -
bridge_x_4b  FC1   -         18
bridge_x_5a  FC1   19        -
bridge_x_5b  FC1   -         19
bridge_x_6a  FC1   21        -
bridge_x_6b  FC1   -         21
bridge_x_7a  FC1   22        -
bridge_x_7b  FC1   -         22
bridge_x_8a  FC1   23        -
bridge_x_8b  FC1   -         23

Additional bridges can be attached using ports 25 through 48 following the same pattern.
4. Repeat the previous step on the bridges at the partner site.
Disabling unused SAS ports on the bridge
After making cabling changes to the bridge, you should disable any unused SAS ports on the bridge
to avoid health monitor alerts related to the unused ports.
Steps
1. Disable unused ports on the top FC-to-SAS bridge:
a. Log in to the bridge CLI.
b. Disable any unused SAS ports:
SASPortDisable port-letter
Example
The following example shows disabling of SAS ports C and D:
Ready. *
SASPortDisable C
SAS Port C has been disabled.
Ready. *
SASPortDisable D
SAS Port D has been disabled.
Ready. *
c. Save the bridge configuration:
SaveConfiguration
Example
The following example shows the configuration being saved. Note that the asterisk no
longer appears, indicating that the configuration has been saved.
Ready. *
SaveConfiguration
Ready.
2. Repeat the previous step on the bottom FC-to-SAS bridge.
Configuring hardware for sharing a Brocade 6510 FC fabric during transition
If your 7-Mode fabric MetroCluster configuration uses Brocade 6510 switches, you can share the
existing switch fabrics with the new clustered MetroCluster configuration. Shared switch fabrics
means the new MetroCluster configuration does not require a new, separate switch fabric. This
temporary configuration is only supported with the Brocade 6510 switch for transition purposes.
Before you begin
• The 7-Mode fabric MetroCluster must be using Brocade 6510 switches.
  If the MetroCluster configuration is currently not using Brocade 6510 switches, the switches must be upgraded to Brocade 6510 prior to using this procedure.
• The 7-Mode fabric MetroCluster configuration must be using SAS storage shelves only.
  If the existing configuration includes FC storage shelves (such as the DS14mk4 FC), FC switch fabric sharing is not supported.
• All 48 ports of the Brocade 6510 switches must be licensed.
• The SFPs on the switch ports used by the new, clustered MetroCluster configuration must support 16-Gbps rates.
  The existing 7-Mode fabric MetroCluster can remain connected to ports using 8-Gbps or 16-Gbps SFPs.
• On each of the four Brocade 6510 switches, ports 24 through 45 must be available to connect the ports of the new MetroCluster components.
• You should verify that the existing Inter-Switch Links (ISLs) are on ports 46 and 47.
• The Brocade 6510 switches must be running a FOS firmware version that is supported on both the 7-Mode fabric MetroCluster and clustered ONTAP MetroCluster configuration.
After you finish
After sharing the fabric and completing the MetroCluster configuration, you can transition data from
the 7-Mode fabric MetroCluster configuration.
After transitioning the data, you can remove the 7-Mode fabric MetroCluster cabling and, if desired,
move the clustered ONTAP MetroCluster cabling to the lower-numbered ports previously used for
the 7-Mode MetroCluster cabling. The ports are shown in the section "Reviewing FC switch port
assignments for a four node MetroCluster." You must adjust the zoning for the rearranged ports.
Port assignments for FC switches on page 38
Steps
1. Reviewing Brocade license requirements on page 140
2. Racking the hardware components on page 140
3. Cabling the new MetroCluster controllers to the existing FC fabrics on page 141
4. Configuring switch fabrics sharing between the 7-Mode and clustered MetroCluster configuration
on page 142
Related information
Copy-based transition
Reviewing Brocade license requirements
You need certain licenses for the switches in a MetroCluster configuration. You must install these
licenses on all four switches.
The MetroCluster configuration has the following Brocade license requirements:
• Trunking license for systems using more than one ISL, as recommended.
• Extended Fabric license (for ISL distances over 6 km)
• Enterprise license for sites with more than one ISL and an ISL distance greater than 6 km
  The Enterprise license includes Brocade Network Advisor and all licenses except for additional port licenses.
You can verify that the licenses are installed by using the licenseshow command. If you do not
have these licenses, you should contact your sales representative before proceeding.
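For example, a hedged sketch of checking the installed licenses on one switch (the key strings in real output are long alphanumeric values; they are shown here as placeholders):

FC_switch_A_1:admin> licenseshow
<license-key>:
    Trunking license
<license-key>:
    Extended Fabric license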
Racking the hardware components
If you have not received the equipment already installed in cabinets, you must rack the components.
About this task
This task must be performed on both MetroCluster sites.
Steps
1. Plan out the positioning of the MetroCluster components.
The rack space depends on the platform model of the storage controllers, the switch types, and the
number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
4. Install the FC switches in the rack or cabinet.
5. Install the disk shelves, power them on, and set the shelf IDs.
NetApp Documentation: Disk Shelves
• You must power-cycle each disk shelf.
• Shelf IDs must be unique for each SAS disk shelf within each MetroCluster DR group (including both sites).
6. Install each FC-to-SAS bridge:
a. Secure the “L” brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
The openings in the bridge “L” brackets are compliant with rack standard ETA-310-X for 19-inch (482.6 mm) racks.
For more information and an illustration of the installation, see the ATTO FibreBridge
Installation and Operation Manual for your bridge model.
Note: For adequate port space access and FRU serviceability, you must leave 1U space
below the bridge pair and cover this space with a tool-less blanking panel.
b. Connect each bridge to a power source that provides a proper ground.
c. Power on each bridge.
Note: For maximum resiliency, bridges that are attached to the same stack of disk shelves
must be connected to different power sources.
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Cabling the new MetroCluster controllers to the existing FC
fabrics
On each controller in the clustered ONTAP MetroCluster configuration, the FC-VI adapter and HBAs
must be cabled to specific ports on the existing FC switches.
Steps
1. Cable the FC-VI and HBA ports according to the following table:
Site A                                              Site B
Connect this Site A          FC_switch_A_1          Connect this Site B          FC_switch_B_1
component and port...        port...                component and port...        port...
controller_A_1 FC-VI port 1  32                     controller_B_1 FC-VI port 1  32
controller_A_1 HBA port 1    33                     controller_B_1 HBA port 1    33
controller_A_1 HBA port 2    34                     controller_B_1 HBA port 2    34
controller_A_2 FC-VI port 1  35                     controller_B_2 FC-VI port 1  35
controller_A_2 HBA 1         36                     controller_B_2 HBA 1         36
controller_A_2 HBA 2         37                     controller_B_2 HBA 2         37
2. Cable each FC-SAS bridge in the first switch fabric to the FC switches.
The number of bridges varies depending on the number of SAS storage stacks.
Site A                                        Site B
Cable this Site A bridge...  FC_switch_A_1    Cable this Site B bridge...  FC_switch_B_1
                             port...                                       port...
bridge_A_1_38                38               bridge_B_1_38                38
bridge_A_1_39                39               bridge_B_1_39                39
bridge_A_1_40                40               bridge_B_1_40                40
bridge_A_1_41                41               bridge_B_1_41                41
bridge_A_1_42                42               bridge_B_1_42                42
bridge_A_1_43                43               bridge_B_1_43                43
bridge_A_1_44                44               bridge_B_1_44                44
bridge_A_1_45                45               bridge_B_1_45                45
3. Cable each bridge in the second switch fabric to the FC switches.
The number of bridges varies depending on the number of SAS storage stacks.
Site A                                        Site B
Cable this Site A bridge...  FC_switch_A_2    Cable this Site B bridge...  FC_switch_B_2
                             port...                                       port...
bridge_A_2_38                38               bridge_B_2_38                38
bridge_A_2_39                39               bridge_B_2_39                39
bridge_A_2_40                40               bridge_B_2_40                40
bridge_A_2_41                41               bridge_B_2_41                41
bridge_A_2_42                42               bridge_B_2_42                42
bridge_A_2_43                43               bridge_B_2_43                43
bridge_A_2_44                44               bridge_B_2_44                44
bridge_A_2_45                45               bridge_B_2_45                45
Configuring switch fabrics sharing between the 7-Mode and
clustered MetroCluster configuration
To share switch fabrics between the existing 7-Mode fabric MetroCluster and the new MetroCluster
configuration, you must set up specific zoning and other settings that are different than an unshared
configuration.
About this task
This task must be performed on both switch fabrics, one at a time.
Disabling one of the switch fabrics
You must disable one of the switch fabrics so you can modify its configuration. After you complete
the configuration and reenable the switch fabric, you will repeat the process on the other fabric.
Before you begin
You must have run the fmc_dc utility on the existing 7-Mode fabric MetroCluster configuration and
resolved any issues prior to beginning the configuration process.
About this task
To ensure continued operation of the MetroCluster configuration, you must not disable the second
fabric while the first fabric is disabled.
Steps
1. Disable each of the switches in the fabric:
switchCfgPersistentDisable
If this command is not available, use the switchDisable command.
Example
The following example shows the command issued on FC_switch_A_1:
FC_switch_A_1:admin> switchCfgPersistentDisable
The following example shows the command issued on FC_switch_B_1:
FC_switch_B_1:admin> switchCfgPersistentDisable
2. Ensure that the 7-Mode MetroCluster configuration is functioning correctly using the redundant
fabric:
a. Confirm that controller failover is healthy:
cf status
Example
node_A> cf status
Controller Failover enabled, node_A is up.
VIA Interconnect is up (link 0 down, link 1 up).
b. Confirm that disks are visible:
storage show disk -p
Example
node_A> storage show disk -p
PRIMARY                      PORT  SECONDARY  PORT  SHELF  BAY
---------------------------  ----  ---------  ----  -----  ---
Brocade-6510-2K0GG:5.126L27  B                      1      0
Brocade-6510-2K0GG:5.126L28  B                      1      1
Brocade-6510-2K0GG:5.126L29  B                      1      2
Brocade-6510-2K0GG:5.126L30  B                      1      3
Brocade-6510-2K0GG:5.126L31  B                      1      4
.
.
.
c. Confirm that the aggregates are healthy:
aggr status
Example
node_A> aggr status
Aggr    State     Status            Options
aggr0   online    raid_dp, aggr     root, nosnap=on
                  mirrored
                  64-bit
Deleting TI zoning and configuring IOD settings
You must delete the existing TI zoning and reconfigure in-order-delivery (IOD) settings on the switch
fabric.
Steps
1. Identify the TI zones that are configured on the fabric:
zone --show
Example
The following example shows the zone FCVI_TI_FAB_2.
Brocade-6510:admin> zone --show
Defined TI zone configuration:
TI Zone Name:
FCVI_TI_FAB_2
Port List:
1,0; 1,3; 2,0; 2,3
Configured Status: Activated / Failover-Disabled
Enabled Status: Activated / Failover-Disabled
2. Delete the TI zones:
zone --delete zone-name
Example
The following example shows the deletion of zone FCVI_TI_FAB_2.
Brocade-6510:admin> zone --delete FCVI_TI_FAB_2
3. Confirm that the zones have been deleted:
zone --show
Example
The output should be similar to the following:
Brocade-6510:admin> zone --show
Defined TI zone configuration:
no TI zone configuration defined
4. Save the configuration:
cfgsave
5. Enable in-order-delivery:
iodset
6. Select Advanced Performance Tuning (APT) policy 1, the Port Based Routing Policy:
aptpolicy 1
7. Disable Dynamic Load Sharing (DLS):
dlsreset
8. Verify the IOD settings using the following commands:
iodshow
aptpolicy
dlsshow
Example
The output should be similar to the following:
Brocade-6510:admin> iodshow
IOD is set
Brocade-6510:admin> aptpolicy
Current Policy: 1
3 : Default Policy
1: Port Based Routing Policy
2: Device Based Routing Policy (FICON support only)
3: Exchange Based Routing Policy
Brocade-6510:admin> dlsshow
DLS is not set
Ensuring ISLs are in the same port group and configuring zoning
You must make sure that the Inter-Switch Links (ISLs) are in the same port group and configure
zoning for the MetroCluster configurations to successfully share the switch fabrics.
Steps
1. If the ISLs are not in the same port group, move one of the ISL ports to the same port group as the
other one.
You can use any available port except 32 through 45, which are used by the new MetroCluster
configuration. The recommended ISL ports are 46 and 47.
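After trunking is enabled in the next step, you can confirm that the two ISLs form a single trunk
group by using the trunkshow command. The following output is an illustrative sketch only; the
WWN (shown here with xx placeholders) and deskew values will differ on your fabric:

FC_switch_A_1:admin> trunkshow
 1: 46-> 46 10:00:00:05:33:xx:xx:xx   deskew 15 MASTER
    47-> 47 10:00:00:05:33:xx:xx:xx   deskew 16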
2. Follow the steps in the Configuring zoning on a Brocade FC switch section to enable trunking and
the QoS zone.
Configuring zoning on Brocade FC switches on page 65
The port numbers used when sharing fabrics differ from those shown in that section. When
sharing, use ports 46 and 47 for the ISL ports. If you moved your ISL ports, you must use the
procedure in the Configuring the E-ports (ISL ports) on a Brocade FC switch section to configure
the ports.
Configuring the E-ports (ISL ports) on a Brocade FC switch on page 59
3. Follow the steps in the Configuring the non-E ports on the Brocade switch section to configure
the non-E ports.
Configuring the non-E ports on the Brocade switch on page 64
4. Do not delete the zones or zone sets that already exist on the back-end switches (for the 7-Mode
fabric MetroCluster), except for the Traffic Isolation (TI) zones that you deleted earlier in this
procedure.
5. Follow the steps in the Configuring the E-ports (ISL ports) on a Brocade FC switch section to add
the zones required by the new MetroCluster to the existing zone sets.
Configuring the E-ports (ISL ports) on a Brocade FC switch on page 59
Example
The following example shows the commands and system output for creating the zones:
Brocade-6510-2K0GG:admin> zonecreate "QOSH2_FCVI_1", "2,32; 2,35; 1,32; 1,35"
Brocade-6510-2K0GG:admin> zonecreate "STOR_A_2_47", "2,33; 2,34; 2,36; 2,37; 1,33; 1,34; 1,36; 1,37; 1,47"
Brocade-6510-2K0GG:admin> zonecreate "STOR_B_2_47", "2,33; 2,34; 2,36; 2,37; 1,33; 1,34; 1,36; 1,37; 2,47"
Brocade-6510-2K0GG:admin> cfgadd config_1_FAB2, "QOSH2_FCVI_1; STOR_A_2_47; STOR_B_2_47"
Brocade-6510-2K0GG:admin> cfgenable "config_1_FAB2"
You are about to enable a new zoning configuration.
This action will replace the old zoning configuration with the
current configuration selected. If the update includes changes
to one or more traffic isolation zones, the update may result in
localized disruption to traffic on ports associated with
the traffic isolation zone changes
Do you want to enable 'config_1_FAB2' configuration (yes, y, no, n):
[no] yes
Brocade-6510-2K0GG:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
Do you want to save the Defined zoning configuration only? (yes, y,
no, n): [no] yes
Nothing changed: nothing to save, returning ...
Brocade-6510-2K0GG:admin>
Reenabling the switch fabric and verifying operation
You must reenable the FC switch fabric and ensure that the switches and devices are operating
correctly.
Steps
1. Enable the switches:
switchCfgPersistentEnable
If this command is not available, the switch should be in the enabled state after the fastBoot
command is issued.
Example
The following example shows the command issued on FC_switch_A_1:
FC_switch_A_1:admin> switchCfgPersistentEnable
The following example shows the command issued on FC_switch_B_1:
FC_switch_B_1:admin> switchCfgPersistentEnable
2. Verify that the switches are online and all devices are properly logged in:
switchShow
Example
The following example shows the command issued on FC_switch_A_1:
FC_switch_A_1:admin> switchShow
The following example shows the command issued on FC_switch_B_1:
FC_switch_B_1:admin> switchShow
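The following abbreviated output is an illustrative sketch only; the switch name, domain, and port
listing will reflect your own fabric. Each FC-VI, HBA, bridge, and ISL port should show a healthy
Online state:

FC_switch_A_1:admin> switchShow
switchName:   FC_switch_A_1
switchState:  Online
switchMode:   Native
switchRole:   Principal
...
Index Port Address  Media Speed State   Proto
   0    0  ...      id    8G    Online  FC  F-Port ...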
3. Run the fmc_dc utility to ensure that the 7-Mode fabric MetroCluster is functioning correctly.
You can ignore errors related to Traffic Isolation (TI) zoning and trunking.
4. Repeat the tasks for the second switch fabric.
Configuring the MetroCluster software in ONTAP
You must set up each node in the MetroCluster configuration in ONTAP, including the node-level
configurations and the configuration of the nodes into two sites. Finally, you implement the
MetroCluster relationship between the two sites. The steps for systems with native disk shelves are
slightly different than those for systems with array LUNs.
For new systems configured in the factory, you do not need to configure the ONTAP software except
to change the preconfigured IP addresses. If your system is new, you can proceed with verifying the
configuration.
Steps
1. Gathering required information and reviewing the workflow on page 149
2. Similarities and differences between standard cluster and MetroCluster configurations on page 154
3. Setting a previously used controller module to system defaults in Maintenance mode on page 154
4. Configuring FC-VI ports on a QLE2564 quad-port card on FAS8020 systems on page 155
5. Verifying disk assignment in Maintenance mode in an eight-node or a four-node configuration on page 157
6. Verifying disk assignment in Maintenance mode in a two-node configuration on page 163
7. Verifying and configuring the HA state of components in Maintenance mode on page 164
8. Setting up ONTAP on page 165
9. Configuring the clusters into a MetroCluster configuration on page 167
10. Checking for MetroCluster configuration errors with Config Advisor on page 186
11. Verifying local HA operation on page 186
12. Verifying switchover, healing, and switchback on page 188
13. Installing the MetroCluster Tiebreaker software on page 188
14. Protecting configuration backup files on page 188
Gathering required information and reviewing the workflow
You need to gather the required IP addresses for the controller and review the software installation
workflow before you begin the configuration process.
IP network information worksheet for site A
You must obtain IP addresses and other network information for the first MetroCluster site (site A)
from your network administrator before you configure the system.
Site A switch information (if using a switched cluster)
When you cable the system, you need a host name and management IP address for each cluster
switch. This information is not needed if you are using a two-node switchless cluster or you are using
two-node MetroCluster configuration (one node at each site).
Cluster switch    Host name    IP address    Network mask    Default gateway
Interconnect 1
Interconnect 2
Management 1
Management 2
Site A cluster creation information
When you first create the cluster, you need the following information:
Type of information                                  Your values
Cluster name
(Example used in this guide: site_A)
DNS domain
DNS name servers
Location
Administrator password
Site A node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway.
Node                                     Port    IP address    Network mask    Default gateway
Node 1
(Example used in this guide:
controller_A_1)
Node 2
(Not required if using a two-node
MetroCluster configuration, one node
at each site. Example used in this
guide: controller_A_2)
Site A LIFs and ports for cluster peering
For each node in the cluster, you need the IP addresses of two intercluster LIFs, including a network
mask and a default gateway. The intercluster LIFs are used to peer the clusters.
Node                                     Port    IP address of       Network mask    Default gateway
                                                 intercluster LIF
Node 1 IC LIF 1
Node 1 IC LIF 2
Node 2 IC LIF 1
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
Node 2 IC LIF 2
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
Site A time server information
You must synchronize the time, which requires one or more NTP time servers.
Node            Host name    IP address    Network mask    Default gateway
NTP server 1
NTP server 2
Site A AutoSupport information
You must configure AutoSupport on each node, which requires the following information:
Type of information                                  Your values
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists      Full-length messages:
                                                     Concise messages:
                                                     Partners:
Site A service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:
Node                                     IP address    Network mask    Default gateway
Node 1
Node 2
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
IP network information worksheet for site B
You must obtain IP addresses and other network information for the second MetroCluster site (site B)
from your network administrator before you configure the system.
Site B switch information (if using a switched cluster)
When you cable the system, you need a host name and management IP address for each cluster
switch. This information is not needed if you are using a two-node switchless cluster or you are using
two-node MetroCluster configuration (one node at each site).
Cluster switch    Host name    IP address    Network mask    Default gateway
Interconnect 1
Interconnect 2
Management 1
Management 2
Site B cluster creation information
When you first create the cluster, you need the following information:
Type of information                                  Your values
Cluster name
(Example used in this guide: site_B)
DNS domain
DNS name servers
Location
Administrator password
Site B node information
For each node in the cluster, you need a management IP address, a network mask, and a default
gateway.
Node                                     Port    IP address    Network mask    Default gateway
Node 1
(Example used in this guide:
controller_B_1)
Node 2
(Not required if using a two-node
MetroCluster configuration, one node
at each site. Example used in this
guide: controller_B_2)
Site B LIFs and ports for cluster peering
For each node in the cluster, you need the IP addresses of two intercluster LIFs, including a network
mask and a default gateway. The intercluster LIFs are used to peer the clusters.
Node                                     Port    IP address of       Network mask    Default gateway
                                                 intercluster LIF
Node 1 IC LIF 1
Node 1 IC LIF 2
Node 2 IC LIF 1
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
Node 2 IC LIF 2
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
Site B time server information
You must synchronize the time, which requires one or more NTP time servers.
Node            Host name    IP address    Network mask    Default gateway
NTP server 1
NTP server 2
Site B AutoSupport information
You must configure AutoSupport on each node, which requires the following information:
Type of information                                  Your values
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists      Full-length messages:
                                                     Concise messages:
                                                     Partners:
Site B service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance,
which requires the following network information for each node:
Node                                     IP address    Network mask    Default gateway
Node 1 (controller_B_1)
Node 2 (controller_B_2)
(Not required if using a two-node
MetroCluster configuration, one node
at each site.)
Similarities and differences between standard cluster and
MetroCluster configurations
The configuration of the nodes in each cluster in a MetroCluster configuration is similar to that of
nodes in a standard cluster.
The MetroCluster configuration is built on two standard clusters. Physically, the configuration must
be symmetrical, with each node having the same hardware configuration, and all of the MetroCluster
components must be cabled and configured. However, the basic software configuration for nodes in a
MetroCluster configuration is the same as that for nodes in a standard cluster.
Configuration step                                Standard cluster     MetroCluster
                                                  configuration        configuration
Configure management, cluster, and data LIFs      Same in both types of clusters
on each node.
Configure the root aggregate.                     Same in both types of clusters
Configure nodes in the cluster as HA pairs.       Same in both types of clusters
Set up the cluster on one node in the cluster.    Same in both types of clusters
Join the other node to the cluster.               Same in both types of clusters
Create a mirrored root aggregate.                 Optional             Required
Peer the clusters.                                Optional             Required
Enable the MetroCluster configuration.            Does not apply       Required
Setting a previously used controller module to system
defaults in Maintenance mode
If your controller modules have been used previously, you must reset them for a successful
MetroCluster configuration.
About this task
Important: This task is required only on controller modules that have been previously configured;
you do not need to perform this task if you received the controller modules from the factory.
Steps
1. In Maintenance mode, return the environment variables to their default settings:
set-defaults
2. Configure the settings for any HBAs in the system:
If you have this type of HBA and desired mode...    Use this command...
CNA FC                                              ucadmin modify -mode fc -type initiator adapter_name
CNA Ethernet                                        ucadmin modify -mode cna adapter_name
FC target                                           fcadmin config -t target adapter_name
FC initiator                                        fcadmin config -t initiator adapter_name
3. Exit Maintenance mode:
halt
After you issue the command, wait until the system stops at the LOADER prompt.
4. Boot the node back into Maintenance mode to enable the configuration changes to take effect.
5. Verify the changes you made:
If you have this type of HBA...    Use this command...
CNA                                ucadmin show
FC                                 fcadmin show
6. Clear the system configuration:
wipeconfig
This does not clear the HBA modes.
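A condensed console sketch of the whole procedure follows. The HBA name (0e) and mode are
placeholders for your own adapters, and boot_ontap maint is one common way to return to
Maintenance mode from the LOADER prompt:

*> set-defaults
*> ucadmin modify -mode fc -type initiator 0e
*> halt
...
LOADER> boot_ontap maint
...
*> ucadmin show
*> wipeconfig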
Configuring FC-VI ports on a QLE2564 quad-port card on
FAS8020 systems
If you are using the QLE2564 quad-port card on a FAS8020 system, you can enter Maintenance
mode to configure the 1a and 1b ports for FC-VI and initiator usage. This is not required on
MetroCluster systems received from the factory, in which the ports are set appropriately for your
configuration.
About this task
This task must be performed in Maintenance mode.
Steps
1. Disable the ports:
storage disable adapter 1a
storage disable adapter 1b
Example
*> storage disable adapter 1a
Jun 03 02:17:57 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1a.
Host adapter 1a disable succeeded
Jun 03 02:17:57 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1a is now offline.
*> storage disable adapter 1b
Jun 03 02:18:43 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1b.
Host adapter 1b disable succeeded
Jun 03 02:18:43 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1b is now offline.
*>
2. Verify that the ports are disabled:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending  Admin
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  -------
...
1a       fc       initiator  -        -        offline
1b       fc       initiator  -        -        offline
1c       fc       initiator  -        -        online
1d       fc       initiator  -        -        online
3. Set the a and b ports to FC-VI mode:
ucadmin modify -adapter 1a -type fcvi
The command sets the mode on both ports in the port pair, 1a and 1b (even though only 1a is
specified in the command).
Example
*> ucadmin modify -t fcvi 1a
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has
changed to fcvi on adapter 1a. Reboot the controller for the changes
to take effect.
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has
changed to fcvi on adapter 1b. Reboot the controller for the changes
to take effect.
4. Confirm that the change is pending:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending  Admin
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  -------
...
1a       fc       initiator  -        fcvi     offline
1b       fc       initiator  -        fcvi     offline
1c       fc       initiator  -        -        online
1d       fc       initiator  -        -        online
5. Shut down the controller, and then reboot into Maintenance mode.
6. Confirm the configuration change:
ucadmin show local
Example
                         Current  Current    Pending  Pending
Node            Adapter  Mode     Type       Mode     Type     Status
--------------- -------  -------  ---------  -------  -------  ------
...
controller_B_1  1a       fc       fcvi       -        -        online
controller_B_1  1b       fc       fcvi       -        -        online
controller_B_1  1c       fc       initiator  -        -        online
controller_B_1  1d       fc       initiator  -        -        online
6 entries were displayed.
Verifying disk assignment in Maintenance mode in an eight-node or a four-node configuration
Before fully booting the system to ONTAP, you can optionally boot to Maintenance mode and verify
the disk assignment on the nodes. The disks should be assigned to create a fully symmetric
active-active configuration, in which each pool has an equal number of disks assigned to it.
About this task
New MetroCluster systems have disk assignment completed prior to shipment.
The following table shows example pool assignments for a MetroCluster configuration. Disks are
assigned to pools on a per-shelf basis.
Disk shelf (sample_shelf_name)...    At site...    Belongs to...    And is assigned to that node's...
Disk shelf 1 (shelf_A_1_1)           Site A        Node A 1         Pool 0
Disk shelf 2 (shelf_A_1_3)           Site A        Node A 1         Pool 0
Disk shelf 3 (shelf_B_1_1)           Site A        Node B 1         Pool 1
Disk shelf 4 (shelf_B_1_3)           Site A        Node B 1         Pool 1
Disk shelf 5 (shelf_A_2_1)           Site A        Node A 2         Pool 0
Disk shelf 6 (shelf_A_2_3)           Site A        Node A 2         Pool 0
Disk shelf 7 (shelf_B_2_1)           Site A        Node B 2         Pool 1
Disk shelf 8 (shelf_B_2_3)           Site A        Node B 2         Pool 1
Disk shelf 1 (shelf_A_3_1)           Site A        Node A 3         Pool 0
Disk shelf 2 (shelf_A_3_3)           Site A        Node A 3         Pool 0
Disk shelf 3 (shelf_B_3_1)           Site A        Node B 3         Pool 1
Disk shelf 4 (shelf_B_3_3)           Site A        Node B 3         Pool 1
Disk shelf 5 (shelf_A_4_1)           Site A        Node A 4         Pool 0
Disk shelf 6 (shelf_A_4_3)           Site A        Node A 4         Pool 0
Disk shelf 7 (shelf_B_4_1)           Site A        Node B 4         Pool 1
Disk shelf 8 (shelf_B_4_3)           Site A        Node B 4         Pool 1
Disk shelf 9 (shelf_B_1_2)           Site B        Node B 1         Pool 0
Disk shelf 10 (shelf_B_1_4)          Site B        Node B 1         Pool 0
Disk shelf 11 (shelf_A_1_2)          Site B        Node A 1         Pool 1
Disk shelf 12 (shelf_A_1_4)          Site B        Node A 1         Pool 1
Disk shelf 13 (shelf_B_2_2)          Site B        Node B 2         Pool 0
Disk shelf 14 (shelf_B_2_4)          Site B        Node B 2         Pool 0
Disk shelf 15 (shelf_A_2_2)          Site B        Node A 2         Pool 1
Disk shelf 16 (shelf_A_2_4)          Site B        Node A 2         Pool 1
Disk shelf 1 (shelf_B_3_2)           Site B        Node B 3         Pool 0
Disk shelf 2 (shelf_B_3_4)           Site B        Node B 3         Pool 0
Disk shelf 3 (shelf_A_3_2)           Site B        Node A 3         Pool 1
Disk shelf 4 (shelf_A_3_4)           Site B        Node A 3         Pool 1
Disk shelf 5 (shelf_B_4_2)           Site B        Node B 4         Pool 0
Disk shelf 6 (shelf_B_4_4)           Site B        Node B 4         Pool 0
Disk shelf 7 (shelf_A_4_2)           Site B        Node A 4         Pool 1
Disk shelf 8 (shelf_A_4_4)           Site B        Node A 4         Pool 1
Steps
1. Confirm the shelf assignments:
disk show -v
2. If necessary, explicitly assign disks on the attached disk shelves to the appropriate pool by using
the disk assign command.
Using wildcards in the command enables you to assign all of the disks on a disk shelf with one
command. You can identify the disk shelf IDs and bays for each disk with the storage show
disk -x command.
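The following is an illustrative sketch of Maintenance mode disk show -v output in a
fabric-attached configuration; disk names, system IDs, and serial numbers will differ on your
system (the sysid and serial values are placeholders):

*> disk show -v
DISK                         OWNER                     POOL   SERIAL NUMBER
---------------------------  ------------------------  -----  -------------
FC_switch_A_1:5.126L27       controller_A_1 (sysid)    Pool0  serial
FC_switch_A_1:5.126L28       controller_A_1 (sysid)    Pool0  serial
FC_switch_B_1:6.126L1        controller_A_1 (sysid)    Pool1  serial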
Assigning disk ownership in non-AFF systems
If the MetroCluster nodes do not have the disks correctly assigned, you must assign disks to each of
the nodes in the MetroCluster configuration on a shelf-by-shelf basis. You will create a configuration
in which each node has the same number of disks in its local and remote disk pools.
Before you begin
The storage controllers must be in Maintenance mode.
About this task
This task is not required if disks were correctly assigned when received from the factory.
Note: Pool 0 always contains the disks that are found at the same site as the storage system that
owns them.
Pool 1 always contains the disks that are remote to the storage system that owns them.
Steps
1. If you have not done so, boot each system into Maintenance mode.
2. Assign the disk shelves to the nodes located at the first site (site A):
Disk shelves at the same site as the node are assigned to pool 0 and disk shelves located at the
partner site are assigned to pool 1.
You should assign an equal number of shelves to each pool.
a. On the first node, systematically assign the local disk shelves to pool 0 and the remote disk
shelves to pool 1:
disk assign -shelf local-switch-name:port.shelf-name -p pool
Example
If storage controller Controller_A_1 has four shelves, you issue the following commands:
*> disk assign -shelf FC_switch_A_1:1-4.shelf1 -p 0
*> disk assign -shelf FC_switch_A_1:1-4.shelf2 -p 0
*> disk assign -shelf FC_switch_B_1:1-4.shelf1 -p 1
*> disk assign -shelf FC_switch_B_1:1-4.shelf2 -p 1
b. Repeat the process for the second node at the local site, systematically assigning the local disk
shelves to pool 0 and the remote disk shelves to pool 1:
disk assign -shelf local-switch-name:shelf-name.port -p pool
Example
If storage controller Controller_A_2 has four shelves, you issue the following commands:
*> disk assign -shelf FC_switch_A_1:1-4.shelf3 -p 0
*> disk assign -shelf FC_switch_A_1:1-4.shelf4 -p 0
*> disk assign -shelf FC_switch_B_1:1-4.shelf3 -p 1
*> disk assign -shelf FC_switch_B_1:1-4.shelf4 -p 1
3. Assign the disk shelves to the nodes located at the second site (site B):
Disk shelves at the same site as the node are assigned to pool 0 and disk shelves located at the
partner site are assigned to pool 1.
You should assign an equal number of shelves to each pool.
a. On the first node at the remote site, systematically assign its local disk shelves to pool 0 and
its remote disk shelves to pool 1:
disk assign -shelf local-switch-name:port.shelf-name -p pool
Example
If storage controller Controller_B_1 has four shelves, you issue the following commands:
*> disk assign -shelf FC_switch_B_1:1-5.shelf1 -p 0
*> disk assign -shelf FC_switch_B_1:1-5.shelf2 -p 0
*> disk assign -shelf FC_switch_A_1:1-5.shelf1 -p 1
*> disk assign -shelf FC_switch_A_1:1-5.shelf2 -p 1
b. Repeat the process for the second node at the remote site, systematically assigning its local
disk shelves to pool 0 and its remote disk shelves to pool 1:
disk assign -shelf local-switch-name:port.shelf-name -p pool
Example
If storage controller Controller_B_2 has four shelves, you issue the following commands:
*> disk assign -shelf FC_switch_B_1:1-5.shelf3 -p 0
*> disk assign -shelf FC_switch_B_1:1-5.shelf4 -p 0
*> disk assign -shelf FC_switch_A_1:1-5.shelf3 -p 1
*> disk assign -shelf FC_switch_A_1:1-5.shelf4 -p 1
4. Confirm the shelf assignments:
storage show shelf
5. Exit Maintenance mode:
halt
6. Display the boot menu:
boot_ontap menu
7. On each node, select option 4 to initialize all disks.
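On most ONTAP 9 systems, the boot menu resembles the following sketch; the exact wording can
vary by release, and option 4 is the entry that cleans the configuration and initializes all disks:

Please make a selection:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 4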
Assigning disk ownership in AFF systems
If you are using AFF systems and the nodes do not have the disks (SSDs) correctly assigned, you
must assign half the disks on each shelf to one local node and the other half of the disks to its HA
partner node. You will create a configuration in which each node has the same number of disks in its
local and remote disk pools.
Before you begin
The storage controllers must be in Maintenance mode.
About this task
This task is not required if disks were correctly assigned when received from the factory.
Note: Pool 0 always contains the disks that are found at the same site as the storage system that
owns them.
Pool 1 always contains the disks that are remote to the storage system that owns them.
Steps
1. If you have not done so, boot each system into Maintenance mode.
2. Assign the disks to the nodes located at the first site (site A):
You should assign an equal number of disks to each pool.
a. On the first node, systematically assign half the disks on each shelf to pool 0 and the other
half to the HA partner's pool 0:
disk assign -disk disk-name -p pool -n number-of-disks
Example
If storage controller Controller_A_1 has four shelves, each with 8 SSDs, you issue the
following commands:
*> disk assign -shelf FC_switch_A_1:1-4.shelf1 -p 0 -n 4
*> disk assign -shelf FC_switch_A_1:1-4.shelf2 -p 0 -n 4
*> disk assign -shelf FC_switch_B_1:1-4.shelf1 -p 1 -n 4
*> disk assign -shelf FC_switch_B_1:1-4.shelf2 -p 1 -n 4
b. Repeat the process for the second node at the local site, systematically assigning half the disks
on each shelf to pool 1 and the other half to the HA partner's pool 1:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_A_2 has four shelves, each with 8 SSDs, you issue the
following commands:
*> disk assign -shelf FC_switch_A_1:1-4.shelf3 -p 0 -n 4
*> disk assign -shelf FC_switch_A_1:1-4.shelf4 -p 0 -n 4
*> disk assign -shelf FC_switch_B_1:1-4.shelf3 -p 1 -n 4
*> disk assign -shelf FC_switch_B_1:1-4.shelf4 -p 1 -n 4
3. Assign the disks to the nodes located at the second site (site B):
You should assign an equal number of disks to each pool.
a. On the first node at the remote site, systematically assign half the disks on each shelf to pool 0
and the other half to the HA partner's pool 0:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_B_1 has four shelves, each with 8 SSDs, you issue the
following commands:
*> disk assign -shelf FC_switch_B_1:1-5.shelf1 -p 0 -n 4
*> disk assign -shelf FC_switch_B_1:1-5.shelf2 -p 0 -n 4
*> disk assign -shelf FC_switch_A_1:1-5.shelf1 -p 1 -n 4
*> disk assign -shelf FC_switch_A_1:1-5.shelf2 -p 1 -n 4
b. Repeat the process for the second node at the remote site, systematically assigning half the
disks on each shelf to pool 1 and the other half to the HA partner's pool 1:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_B_2 has four shelves, each with 8 SSDs, you issue the
following commands:
*> disk assign -shelf FC_switch_B_1:1-5.shelf3 -p 0 -n 4
*> disk assign -shelf FC_switch_B_1:1-5.shelf4 -p 0 -n 4
*> disk assign -shelf FC_switch_A_1:1-5.shelf3 -p 1 -n 4
*> disk assign -shelf FC_switch_A_1:1-5.shelf4 -p 1 -n 4
4. Confirm the disk assignments:
storage show disk
5. Exit Maintenance mode:
halt
6. Display the boot menu:
boot_ontap menu
7. On each node, select option 4 to initialize all disks.
Verifying disk assignment in Maintenance mode in a two-node configuration
Before fully booting the system to ONTAP, you can optionally boot to Maintenance mode and verify
the disk assignment on the nodes. The disks should be assigned to create a fully symmetric
active-active configuration, in which each node and each pool have an equal number of disks assigned to them.
About this task
New MetroCluster systems have disk assignment completed prior to shipment.
The following table shows example pool assignments for a MetroCluster configuration. Disks are
assigned to pools on a per-shelf basis.
Disk shelf (example name)...    At site...    Belongs to...    And is assigned to that node's...
Disk shelf 1 (shelf_A_1_1)      Site A        Node A 1         Pool 0
Disk shelf 2 (shelf_A_1_3)      Site A        Node A 1         Pool 0
Disk shelf 3 (shelf_B_1_1)      Site A        Node B 1         Pool 1
Disk shelf 4 (shelf_B_1_3)      Site A        Node B 1         Pool 1
Disk shelf 9 (shelf_B_1_2)      Site B        Node B 1         Pool 0
Disk shelf 10 (shelf_B_1_4)     Site B        Node B 1         Pool 0
Disk shelf 11 (shelf_A_1_2)     Site B        Node A 1         Pool 1
Disk shelf 12 (shelf_A_1_4)     Site B        Node A 1         Pool 1
Steps
1. Confirm the shelf assignments:
disk show -v
2. If necessary, you can explicitly assign disks on the attached disk shelves to the appropriate pool
with the disk assign command.
Using wildcards in the command enables you to assign all the disks on a disk shelf with one
command.
3. Show the disk shelf IDs and bays for each disk:
storage show disk -x
Verifying and configuring the HA state of components in
Maintenance mode
When configuring a storage system in a MetroCluster configuration, you must make sure that the
high-availability (HA) state of the controller module and chassis components is mcc or mcc-2n so
that these components boot properly.
Before you begin
The system must be in Maintenance mode.
About this task
This task is not required on systems that are received from the factory.
Steps
1. In Maintenance mode, display the HA state of the controller module and chassis:
ha-config show
The correct HA state depends on whether you have an eight-node, four-node, or two-node
MetroCluster configuration.

Number of controllers in the MetroCluster configuration    HA state for all components should be...
Eight or four                                              mcc
Two                                                        mcc-2n
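For example, on a four-node or eight-node configuration the output should resemble the following
sketch (illustrative; the exact wording can vary by ONTAP release):

*> ha-config show
Chassis HA configuration: mcc
Controller HA configuration: mcc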
2. If the displayed system state of the controller is not correct, set the HA state for the controller
module:
Number of controllers in the MetroCluster configuration    Command
Eight or four                                              ha-config modify controller mcc
Two                                                        ha-config modify controller mcc-2n
3. If the displayed system state of the chassis is not correct, set the HA state for the chassis:
Number of controllers in the MetroCluster configuration    Command
Eight or four                                              ha-config modify chassis mcc
Two                                                        ha-config modify chassis mcc-2n
4. Boot the node to ONTAP:
boot_ontap
5. Repeat these steps on each node in the MetroCluster configuration.
Setting up ONTAP
You must set up ONTAP on each controller module.
Choices
• Setting up the clusters on page 165
Setting up the clusters
In a two-node MetroCluster configuration, you must boot up the node, exit the Node Setup wizard,
and use the Cluster Setup wizard to configure the node into a single-node cluster.
Before you begin
You must not have configured the Service Processor.
About this task
This task is for two-node MetroCluster configurations using native NetApp storage.
New MetroCluster systems are preconfigured; you do not need to perform these steps. However, you
should configure AutoSupport.
This task must be performed on both clusters in the MetroCluster configuration.
For more general information about setting up ONTAP, see the Clustered Data ONTAP Software
Setup Guide.
Software setup
Steps
1. Power on the first node.
The node boots, and then the Node Setup wizard starts on the console, informing you that
AutoSupport will be enabled automatically.
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.
This system will send event messages and weekly reports to NetApp
Technical
Support. To disable this feature, enter "autosupport modify -support
disable"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system.
For
further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}:
2. Because you are using the CLI to set up the cluster, exit the Node Setup wizard:
exit
The Node Setup wizard might be used to configure the node's node management interface for use
with the Cluster Setup wizard.
The Node Setup wizard exits, and a login prompt appears, warning that you have not completed
the setup tasks.
Exiting the node setup wizard. Any changes you made have been saved.
Warning: You have exited the node setup wizard before completing all
of the tasks. The node is not configured. You can complete node setup
by typing
"node setup" in the command line interface.
login:
3. Log in to the admin account by using the admin user name.
4. Start the Cluster Setup wizard:
cluster setup
::> cluster setup
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:
5. Create a new cluster:
create
6. Accept the system defaults by pressing Enter, or enter your own values by typing no, and then
pressing Enter.
7. Follow the prompts to complete the Cluster Setup wizard, pressing Enter to accept the default
values or typing your own values and then pressing Enter.
The default values are determined automatically based on your platform and network
configuration.
8. After you complete the Cluster Setup wizard and it exits, verify that the cluster is active and the
first node is healthy:
cluster show
Example
The following example shows a cluster in which the first node (cluster1-01) is healthy and
eligible to participate:
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
If it becomes necessary to change any of the settings you entered for the admin SVM or node
SVM, you can access the Cluster Setup wizard by using the cluster setup command.
Configuring the clusters into a MetroCluster configuration
You must mirror the root aggregates, create a mirrored data aggregate, and then issue the command to
implement the MetroCluster operations.
Peering the clusters
The clusters in the MetroCluster configuration must be in a peer relationship so that they can
communicate with each other and perform the data mirroring essential to MetroCluster disaster
recovery.
Before you begin
For systems received from the factory, the cluster peering is configured and you do not need to
manually peer the clusters.
Steps
1. Manually peering the clusters on page 168
Related concepts
Considerations when using dedicated ports on page 13
Considerations when sharing data ports on page 13
Related references
Prerequisites for cluster peering on page 12
Related information
Data protection using SnapMirror and SnapVault technology
Cluster peering express configuration
Manually peering the clusters
To manually create the cluster peering relationship, you must decide whether to use dedicated LIFs,
configure the LIFs and then enable the relationship.
Before you begin
For systems received from the factory, the cluster peering is configured and you do not need to
manually peer the clusters.
Configuring intercluster LIFs
You must create intercluster LIFs on ports used for communication between the MetroCluster partner
clusters. You can use dedicated ports or ports that also have data traffic.
Choices
• Configuring intercluster LIFs to use dedicated intercluster ports on page 168
• Configuring intercluster LIFs to share data ports on page 172
Configuring intercluster LIFs to use dedicated intercluster ports
Configuring intercluster LIFs to use dedicated intercluster ports provides greater bandwidth than
sharing data ports on your intercluster networks for cluster peer relationships.
About this task
Creating intercluster LIFs that use dedicated ports involves creating a failover group for the dedicated
ports and assigning LIFs to those ports. In this procedure, a two-node cluster exists in which each
node has two data ports that you have added, e0e and e0f. These ports are ones you will dedicate for
intercluster replication and currently are in the default IPspace. These ports will be grouped together
as targets for the intercluster LIFs you are configuring. You must configure intercluster LIFs on the
peer cluster before you can create cluster peer relationships. In your own environment, you might
replace the ports, networks, IP addresses, subnet masks, and subnets with those specific to your
environment.
Steps
1. List the ports in the cluster by using the network port show command.
Example
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
2. Determine whether any of the LIFs are using ports that are dedicated for replication by using the
network interface show command.
Example
Ports e0e and e0f do not appear in the following output; therefore, they do not have any LIFs
located on them:
cluster01::> network interface show -fields home-port,curr-port
vserver   lif                  home-port curr-port
--------- -------------------- --------- ---------
Cluster   cluster01-01_clus1   e0a       e0a
Cluster   cluster01-01_clus2   e0b       e0b
Cluster   cluster01-02_clus1   e0a       e0a
Cluster   cluster01-02_clus2   e0b       e0b
cluster01 cluster_mgmt         e0c       e0c
cluster01 cluster01-01_mgmt1   e0c       e0c
cluster01 cluster01-02_mgmt1   e0c       e0c
3. If a LIF is using a port that you want dedicated to intercluster connectivity, migrate the LIF to a
different port.
a. Migrate the LIF to another port by using the network interface migrate command.
Example
The following example assumes that the data LIF named cluster01_data01 uses port e0e and
you want only an intercluster LIF to use that port:
cluster01::> network interface migrate -vserver cluster01
-lif cluster01_data01 -dest-node cluster01-01 -dest-port e0d
b. You might need to modify the migrated LIF home port to reflect the new port where the LIF
should reside by using the network interface modify command.
Example
cluster01::> network interface modify -vserver cluster01
-lif cluster01_data01 -home-node cluster01-01 -home-port e0d
4. Group the ports that you will use for the intercluster LIFs by using the network interface
failover-groups create command.
Example
cluster01::> network interface failover-groups create -vserver cluster01
-failover-group intercluster01 -targets cluster01-01:e0e,cluster01-01:e0f,
cluster01-02:e0e,cluster01-02:e0f
5. Display the failover group that you created by using the network interface failover-groups show command.
Example
cluster01::> network interface failover-groups show
                 Failover
Vserver          Group            Targets
---------------- ---------------- --------------------------------------------
Cluster          Cluster          cluster01-01:e0a, cluster01-01:e0b,
                                  cluster01-02:e0a, cluster01-02:e0b
cluster01        Default          cluster01-01:e0c, cluster01-01:e0d,
                                  cluster01-02:e0c, cluster01-02:e0d,
                                  cluster01-01:e0e, cluster01-01:e0f,
                                  cluster01-02:e0e, cluster01-02:e0f
                 intercluster01   cluster01-01:e0e, cluster01-01:e0f,
                                  cluster01-02:e0e, cluster01-02:e0f
6. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -role
intercluster -home-node cluster01-01 -home-port e0e
-address 192.168.1.201 -netmask 255.255.255.0 -failover-group intercluster01
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -role
intercluster -home-node cluster01-02 -home-port e0e
-address 192.168.1.202 -netmask 255.255.255.0 -failover-group intercluster01
7. Verify that the intercluster LIFs were created properly by using the network interface show
command.
Example
cluster01::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            cluster01-01_clus_1
                       up/up      192.168.0.xxx/24   cluster01-01  e0a     true
            cluster01-01_clus_2
                       up/up      192.168.0.xxx/24   cluster01-01  e0b     true
            cluster01-02_clus_1
                       up/up      192.168.0.xxx/24   cluster01-02  e0a     true
            cluster01-02_clus_2
                       up/up      192.168.0.xxx/24   cluster01-02  e0b     true
cluster01
            cluster_mgmt
                       up/up      192.168.0.xxx/24   cluster01-01  e0c     true
            cluster01_icl01
                       up/up      192.168.1.201/24   cluster01-01  e0e     true
            cluster01_icl02
                       up/up      192.168.1.202/24   cluster01-02  e0e     true
            cluster01-01_mgmt1
                       up/up      192.168.0.xxx/24   cluster01-01  e0c     true
            cluster01-02_mgmt1
                       up/up      192.168.0.xxx/24   cluster01-02  e0c     true
8. Verify that the intercluster LIFs are configured for redundancy by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0e home port on each node. If the e0e port fails, the
LIF can fail over to the e0f port.
cluster01::> network interface show -role intercluster -failover
         Logical            Home                  Failover        Failover
Vserver  Interface          Node:Port             Policy          Group
-------- ------------------ --------------------- --------------- --------------
cluster01
         cluster01_icl01    cluster01-01:e0e      local-only      intercluster01
                            Failover Targets: cluster01-01:e0e,
                                              cluster01-01:e0f
         cluster01_icl02    cluster01-02:e0e      local-only      intercluster01
                            Failover Targets: cluster01-02:e0e,
                                              cluster01-02:e0f
9. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
10. If communication between intercluster LIFs in different clusters requires routing, create an
intercluster route by using the network route create command.
The gateway of the new route should be on the same subnet as the intercluster LIF.
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
cluster01::> network route create -vserver cluster01
-destination 0.0.0.0/0 -gateway 192.168.1.1 -metric 40
11. Verify that you created the routes correctly by using the network route show command.
Example
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
          0.0.0.0/0       192.168.1.1     40
12. Repeat these steps to configure intercluster networking in the peer cluster.
13. Verify that the ports have access to the proper subnets, VLANs, and so on.
Dedicating ports for replication in one cluster does not require dedicating ports in all clusters; one
cluster might use dedicated ports, while the other cluster shares data ports for intercluster
replication.
Related concepts
Considerations when using dedicated ports on page 13
Configuring intercluster LIFs to share data ports
Configuring intercluster LIFs to share data ports enables you to use existing data ports to create
intercluster networks for cluster peer relationships. Sharing data ports reduces the number of ports
you might need for intercluster networking.
About this task
Creating intercluster LIFs that share data ports involves assigning LIFs to existing data ports. In this
procedure, a two-node cluster exists in which each node has two data ports, e0c and e0d, and these
data ports are in the default IPspace. These are the two data ports that are shared for intercluster
replication. You must configure intercluster LIFs on the peer cluster before you can create cluster
peer relationships. In your own environment, you replace the ports, networks, IP addresses, subnet
masks, and subnets with those specific to your environment.
Steps
1. List the ports in the cluster by using the network port show command:
Example
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
2. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:

cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -role intercluster
-home-node cluster01-01 -home-port e0c -address 192.168.1.201 -netmask 255.255.255.0

cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -role intercluster
-home-node cluster01-02 -home-port e0c -address 192.168.1.202 -netmask 255.255.255.0
3. Verify that the intercluster LIFs were created properly by using the network interface show
command with the -role intercluster parameter:
Example
cluster01::> network interface show -role intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster01
            cluster01_icl01
                       up/up      192.168.1.201/24   cluster01-01  e0c     true
            cluster01_icl02
                       up/up      192.168.1.202/24   cluster01-02  e0c     true
4. Verify that the intercluster LIFs are configured to be redundant by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0c port on each node. If the e0c port fails, the LIF can
fail over to the e0d port.
cluster01::> network interface show -role intercluster -failover
         Logical            Home                  Failover        Failover
Vserver  Interface          Node:Port             Policy          Group
-------- ------------------ --------------------- --------------- --------
cluster01
         cluster01_icl01    cluster01-01:e0c      local-only      Default
                            Failover Targets: cluster01-01:e0c,
                                              cluster01-01:e0d
         cluster01_icl02    cluster01-02:e0c      local-only      Default
                            Failover Targets: cluster01-02:e0c,
                                              cluster01-02:e0d
5. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
cluster01::> network route show
Vserver
Destination
Gateway
--------- --------------- --------------Cluster
0.0.0.0/0
192.168.0.1
cluster01
0.0.0.0/0
192.168.0.1
Metric
-----20
10
6. If communication between intercluster LIFs in different clusters requires routing, create an
intercluster route by using the network route create command.
The gateway of the new route should be on the same subnet as the intercluster LIF.
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
cluster01::> network route create -vserver cluster01
-destination 0.0.0.0/0 -gateway 192.168.1.1 -metric 40
7. Verify that you created the routes correctly by using the network route show command.
Example
cluster01::> network route show
Vserver   Destination     Gateway         Metric
--------- --------------- --------------- ------
Cluster   0.0.0.0/0       192.168.0.1     20
cluster01 0.0.0.0/0       192.168.0.1     10
          0.0.0.0/0       192.168.1.1     40
8. Repeat these steps on the cluster to which you want to connect.
Related concepts
Considerations when sharing data ports on page 13
Creating the cluster peer relationship
You create the cluster peer relationship using a set of intercluster logical interfaces to make
information about one cluster available to the other cluster for use in cluster peering applications.
Before you begin
• Intercluster LIFs should be created in the IPspaces of both clusters that you want to peer.
• You should ensure that the intercluster LIFs of the clusters can route to each other.
• If there are different administrators for each cluster, the passphrase used to authenticate the
  cluster peer relationship should be agreed upon.
About this task
If you created intercluster LIFs in a nondefault IPspace, you need to designate the IPspace when you
create the cluster peer.
Steps
1. Create the cluster peer relationship on each cluster by using the cluster peer create
command.
The passphrase that you use is not displayed as you type it.
If you created a nondefault IPspace to designate intercluster connectivity, you use the -ipspace
parameter to select that IPspace.
Example
In the following example, cluster01 is peered with a remote cluster named cluster02. Cluster01 is
a two-node cluster that has one intercluster LIF per node. The IP addresses of the intercluster
LIFs created in cluster01 are 192.168.2.201 and 192.168.2.202. Similarly, cluster02 is a two-node
cluster that has one intercluster LIF per node. The IP addresses of the intercluster LIFs created in
cluster02 are 192.168.2.203 and 192.168.2.204. These IP addresses are used to create the cluster
peer relationship.
cluster01::> cluster peer create -peer-addrs
192.168.2.203,192.168.2.204
Please type the passphrase:
Please type the passphrase again:
cluster02::> cluster peer create -peer-addrs
192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
If DNS is configured to resolve host names for the intercluster IP addresses, you can use host
names in the -peer-addrs option. Intercluster IP addresses are unlikely to change often; however,
using host names allows intercluster IP addresses to change without requiring you to modify the
cluster peer relationship.
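For example, assuming the hypothetical host names cluster02-icl01 and cluster02-icl02 resolve to
the intercluster LIF addresses of cluster02, the command might look like this:

cluster01::> cluster peer create -peer-addrs cluster02-icl01,cluster02-icl02
Please type the passphrase:
Please type the passphrase again: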
Example
In the following example, an IPspace called IP01A was created on cluster01 for intercluster
connectivity. The IP addresses used in the previous example are used in this example to create the
cluster peer relationship.
cluster01::> cluster peer create -peer-addrs
192.168.2.203,192.168.2.204
-ipspace IP01A
Please type the passphrase:
Please type the passphrase again:
cluster02::> cluster peer create -peer-addrs
192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
2. Display the cluster peer relationship by using the cluster peer show command with the -instance parameter.
Displaying the cluster peer relationship verifies that the relationship was established successfully.
Example
cluster01::> cluster peer show -instance
Peer Cluster Name: cluster02
Remote Intercluster Addresses: 192.168.2.203,192.168.2.204
Availability: Available
Remote Cluster Name: cluster02
Active IP Addresses: 192.168.2.203,192.168.2.204
Cluster Serial Number: 1-80-000013
3. Preview the health of the nodes in the peer cluster by using the cluster peer health show
command.
Previewing the health checks the connectivity and status of the nodes on the peer cluster.
Example
cluster01::> cluster peer health show
Node       cluster-Name                Node-Name
           Ping-Status                 RDB-Health Cluster-Health Avail…
---------- --------------------------- ---------- -------------- -------
cluster01-01
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true           true
                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true           true
cluster01-02
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true           true
                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true           true
Mirroring the root aggregates
You must mirror the root aggregates to provide data protection.
About this task
By default, the root aggregate is created as a RAID-DP aggregate. You can change the root
aggregate from RAID-DP to RAID4. The following command modifies the root aggregate to a
RAID4 aggregate:
storage aggregate modify -aggregate aggr_name -raidtype raid4
Note: The RAID type of the aggregate can be modified from the default RAID-DP to RAID4
before or after the aggregate is mirrored.
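For example, using the root aggregate name from this guide, the command might look like the
following (the aggregate name is illustrative):

controller_A_1::> storage aggregate modify -aggregate aggr0_controller_A_1 -raidtype raid4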
Steps
1. Mirror the root aggregate:
storage aggregate mirror aggr_ID
Example
The following command mirrors the root aggregate for controller_A_1:
controller_A_1::> storage aggregate mirror aggr0_controller_A_1
This creates an aggregate with a local plex located at the local MetroCluster site and a remote
plex located at the remote MetroCluster site.
2. Repeat the previous step for each node in the MetroCluster configuration.
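To confirm that mirroring has started or completed, you can check the aggregate's RAID status;
the output below is an illustrative sketch with size values omitted:

controller_A_1::> storage aggregate show -aggregate aggr0_controller_A_1
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_controller_A_1
             ...       ...     ..% online       1 controller_A_1   raid_dp,
                                                                   mirrored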
Related information
Logical storage management
Creating a mirrored data aggregate on each node
You must create a mirrored data aggregate on each node in the DR group.
Before you begin
• You should know what drives or array LUNs will be used in the new aggregate.
• If you have multiple drive types in your system (heterogeneous storage), you should understand
  how you can ensure that the correct drive type is selected.
About this task
• Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in
  that aggregate must be owned by the same node, which becomes the home node for that
  aggregate.
• Aggregate names should conform to the naming scheme you determined when you planned your
  MetroCluster configuration.
• The Clustered Data ONTAP Data Protection Guide contains more information about mirroring
  aggregates.
Steps
1. Display a list of available spares:
storage disk show -spare -owner node_name
2. Create the aggregate by using the storage aggregate create -mirror true command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node,
use the -node parameter or specify drives that are owned by that node.
You can specify the following options:
• Aggregate's home node (that is, the node that owns the aggregate in normal operation)
• List of specific drives or array LUNs that are to be added to the aggregate
• Number of drives to include
• Checksum style to use for the aggregate
• Type of drives to use
• Size of drives to use
• Drive speed to use
• RAID type for RAID groups on the aggregate
• Maximum number of drives or array LUNs that can be included in a RAID group
• Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
Example
The following command creates a mirrored aggregate with 10 disks:
controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1 -mirror true
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
3. Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
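Example
The following command (reusing the hypothetical aggregate name from the previous example)
displays the RAID groups and drives of the new aggregate, including both plexes of the mirror:

controller_A_1::> storage aggregate show-status -aggregate aggr1_controller_A_1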
Creating unmirrored data aggregates
You can optionally create unmirrored data aggregates for data that does not require the redundant
mirroring provided by MetroCluster configurations. Unmirrored aggregates are not protected in the
event of a site disaster or during site-wide maintenance at one site.
Before you begin
• You should know what drives or array LUNs will be used in the new aggregate.
• If you have multiple drive types in your system (heterogeneous storage), you should understand
  how you can be sure that the correct drive type is selected.
About this task
Attention: Data in unmirrored aggregates in the MetroCluster configuration is not protected in the
event of a MetroCluster switchover operation.
• Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in
  that aggregate must be owned by the same node, which becomes the home node for that
  aggregate.
• Aggregate names should conform to the naming scheme you determined when you planned your
  MetroCluster configuration.
• The Clustered Data ONTAP Data Protection Guide contains more information about mirroring
  aggregates.
Steps
1. Display a list of available spares:
storage disk show -spare -owner node_name
2. Create the aggregate by using the storage aggregate create command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use
the -node parameter or specify drives that are owned by that node.
You can specify the following options:
• Aggregate's home node (that is, the node that owns the aggregate in normal operation)
• List of specific drives or array LUNs that are to be added to the aggregate
• Number of drives to include
• Checksum style to use for the aggregate
• Type of drives to use
• Size of drives to use
• Drive speed to use
• RAID type for RAID groups on the aggregate
• Maximum number of drives or array LUNs that can be included in a RAID group
• Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
Example
The following command creates an unmirrored aggregate with 10 disks:
controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
3. Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
Implementing the MetroCluster configuration
You must run the metrocluster configure command to start data protection in the MetroCluster
configuration.
Before you begin
• There must be at least two non-root mirrored data aggregates on each cluster, and additional data
  aggregates can be either mirrored or unmirrored.
  You can verify this with the storage aggregate show command.
• The ha-config state of the controllers and chassis must be mcc.
  This state is preconfigured on systems shipped from the factory.
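For example, output similar to the following abbreviated sketch (with hypothetical aggregate names)
would confirm the aggregate requirement; mirrored aggregates appear with mirrored in the RAID
Status column of the storage aggregate show output:

cluster_A::> storage aggregate show
Aggregate            Size Available Used% State  #Vols Nodes          RAID Status
-------------------- ---- --------- ----- ------ ----- -------------- -----------------
controller_A_1_aggr1 ...  ...       ...   online     0 controller_A_1 raid_dp, mirrored
controller_A_2_aggr1 ...  ...       ...   online     0 controller_A_2 raid_dp, mirrored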
About this task
You issue the metrocluster configure command once, on any of the nodes, to enable the
MetroCluster configuration. You do not need to issue the command on each of the sites or nodes, and
it does not matter which node or site you choose to issue the command on.
The metrocluster configure command automatically pairs the two nodes with the lowest
system IDs in each of the two clusters as disaster recovery (DR) partners. In a four-node
MetroCluster, there are two DR partner pairs. The second DR pair is created from the two nodes with
higher system IDs.
Steps
1. Enter the metrocluster configure command in the following format:

If your MetroCluster configuration has...    Then take these steps...

Multiple data aggregates                     From any node's prompt, perform the MetroCluster
(this is the best practice)                  configuration operation:
                                             metrocluster configure node-name

A single mirrored data aggregate             a. From any node's prompt, change to the advanced
                                                privilege level:
                                                set -privilege advanced
                                                You need to respond with y when prompted to
                                                continue into advanced mode and see the advanced
                                                mode prompt (*>).
                                             b. Perform the MetroCluster configuration operation
                                                with the -allow-with-one-aggregate true
                                                parameter:
                                                metrocluster configure -allow-with-one-aggregate true node-name
                                             c. Return to the admin privilege level:
                                                set -privilege admin
Example
The following command enables MetroCluster configuration on all nodes in the DR group that
contains controller_A_1:
controller_A_1::*> metrocluster configure -node-name controller_A_1
[Job 121] Job succeeded: Configure is successful.
2. Check the networking status on site A:
network port show
Example
The following example shows the network port usage on a four-node MetroCluster configuration:
cluster_A::> network port show
                                                           Speed (Mbps)
Node   Port      IPspace   Broadcast Domain Link   MTU     Admin/Oper
------ --------- --------- ---------------- ----- ------- ------------
controller_A_1
       e0a       Cluster   Cluster          up     9000    auto/1000
       e0b       Cluster   Cluster          up     9000    auto/1000
       e0c       Default   Default          up     1500    auto/1000
       e0d       Default   Default          up     1500    auto/1000
       e0e       Default   Default          up     1500    auto/1000
       e0f       Default   Default          up     1500    auto/1000
       e0g       Default   Default          up     1500    auto/1000
controller_A_2
       e0a       Cluster   Cluster          up     9000    auto/1000
       e0b       Cluster   Cluster          up     9000    auto/1000
       e0c       Default   Default          up     1500    auto/1000
       e0d       Default   Default          up     1500    auto/1000
       e0e       Default   Default          up     1500    auto/1000
       e0f       Default   Default          up     1500    auto/1000
       e0g       Default   Default          up     1500    auto/1000
14 entries were displayed.
3. Confirm the MetroCluster configuration from both sites in the MetroCluster configuration.
a. Confirm the configuration from site A:
metrocluster show
Example
cluster_A::> metrocluster show
Cluster                   Configuration State    Mode
------------------------- ---------------------- -----------
Local: cluster_A          configured             normal
Remote: cluster_B         configured             normal
b. Confirm the configuration from site B:
metrocluster show
Example
cluster_B::> metrocluster show
Cluster                   Configuration State    Mode
------------------------- ---------------------- -----------
Local: cluster_B          configured             normal
Remote: cluster_A         configured             normal
Configuring in-order delivery or out-of-order delivery of frames on ONTAP
software
You must configure either in-order delivery (IOD) or out-of-order delivery (OOD) of frames to
match the Fibre Channel (FC) switch configuration. If the FC switch is configured for IOD, then
the ONTAP software must be configured for IOD. Similarly, if the FC switch is configured for OOD,
then ONTAP must be configured for OOD.
Step
1. Configure ONTAP to use either IOD or OOD of frames.
• By default, IOD of frames is enabled in ONTAP. To check the configuration details:
a. Enter advanced mode:
set advanced
b. Verify the settings:
metrocluster interconnect adapter show
mcc4-b12_siteB::*> metrocluster interconnect adapter show
                             Adapter Link   Is OOD
Node         Adapter Name    Type    Status Enabled? IP Address   Port Number
------------ --------------- ------- ------ -------- ------------ -----------
mcc4-b1      fcvi_device_0   FC-VI   Up     false    17.0.1.2     6a
mcc4-b1      fcvi_device_1   FC-VI   Up     false    18.0.0.2     6b
mcc4-b1      mlx4_0          IB      Down   false    192.0.5.193  ib2a
mcc4-b1      mlx4_0          IB      Up     false    192.0.5.194  ib2b
mcc4-b2      fcvi_device_0   FC-VI   Up     false    17.0.2.2     6a
mcc4-b2      fcvi_device_1   FC-VI   Up     false    18.0.1.2     6b
mcc4-b2      mlx4_0          IB      Down   false    192.0.2.9    ib2a
mcc4-b2      mlx4_0          IB      Up     false    192.0.2.10   ib2b
8 entries were displayed.
• The following steps must be performed on each node to configure OOD of frames:
a. Enter advanced mode:
set advanced
b. Verify the MetroCluster configuration settings:
metrocluster interconnect adapter show
mcc4-b12_siteB::*> metrocluster interconnect adapter show
                             Adapter Link   Is OOD
Node         Adapter Name    Type    Status Enabled? IP Address   Port Number
------------ --------------- ------- ------ -------- ------------ -----------
mcc4-b1      fcvi_device_0   FC-VI   Up     false    17.0.1.2     6a
mcc4-b1      fcvi_device_1   FC-VI   Up     false    18.0.0.2     6b
mcc4-b1      mlx4_0          IB      Down   false    192.0.5.193  ib2a
mcc4-b1      mlx4_0          IB      Up     false    192.0.5.194  ib2b
mcc4-b2      fcvi_device_0   FC-VI   Up     false    17.0.2.2     6a
mcc4-b2      fcvi_device_1   FC-VI   Up     false    18.0.1.2     6b
mcc4-b2      mlx4_0          IB      Down   false    192.0.2.9    ib2a
mcc4-b2      mlx4_0          IB      Up     false    192.0.2.10   ib2b
8 entries were displayed.
c. Enable OOD on node “mcc4-b1” and node “mcc4-b2”:
metrocluster interconnect adapter modify -node node_name -is-ood-enabled true

mcc4-b12_siteB::*> metrocluster interconnect adapter modify -node mcc4-b1 -is-ood-enabled true
mcc4-b12_siteB::*> metrocluster interconnect adapter modify -node mcc4-b2 -is-ood-enabled true
d. Verify the settings:
metrocluster interconnect adapter show
mcc4-b12_siteB::*> metrocluster interconnect adapter show
                             Adapter Link   Is OOD
Node         Adapter Name    Type    Status Enabled? IP Address   Port Number
------------ --------------- ------- ------ -------- ------------ -----------
mcc4-b1      fcvi_device_0   FC-VI   Up     true     17.0.1.2     6a
mcc4-b1      fcvi_device_1   FC-VI   Up     true     18.0.0.2     6b
mcc4-b1      mlx4_0          IB      Down   false    192.0.5.193  ib2a
mcc4-b1      mlx4_0          IB      Up     false    192.0.5.194  ib2b
mcc4-b2      fcvi_device_0   FC-VI   Up     true     17.0.2.2     6a
mcc4-b2      fcvi_device_1   FC-VI   Up     true     18.0.1.2     6b
mcc4-b2      mlx4_0          IB      Down   false    192.0.2.9    ib2a
mcc4-b2      mlx4_0          IB      Up     false    192.0.2.10   ib2b
8 entries were displayed.
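OOD can later be disabled again (returning to the default IOD behavior) by setting the same
parameter to false on each node. The following sketch reuses the node names from the example
above:

mcc4-b12_siteB::*> metrocluster interconnect adapter modify -node mcc4-b1 -is-ood-enabled false
mcc4-b12_siteB::*> metrocluster interconnect adapter modify -node mcc4-b2 -is-ood-enabled false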
Configuring MetroCluster components for health monitoring
You must perform some special configuration steps before monitoring the components in the
MetroCluster configuration.
About this task
These tasks apply only to systems with FC-to-SAS bridges or FC switches.
Note: If a dedicated network for Health Monitoring is used, then each node must have a node
management LIF in the dedicated Health Monitoring network.
Steps
1. Configuring the MetroCluster FC switches for health monitoring on page 182
2. Configuring FC-to-SAS bridges for health monitoring on page 183
Configuring the MetroCluster FC switches for health monitoring
In a fabric-attached MetroCluster configuration, you must perform some special configuration steps
to monitor the FC switches.
Steps
1. Issue the following command on each MetroCluster node to add a switch with an IP address:
storage switch add -address ipaddress
This command must be repeated for all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with IP address 10.10.10.10:
controller_A_1::> storage switch add -address 10.10.10.10
2. Verify that all switches are properly configured:
storage switch show
It might take up to 15 minutes for all of the data to be reflected because of the 15-minute polling interval.
Example
The following example shows the command given to verify that the MetroCluster FC switches are
configured:
controller_A_1::> storage switch show
Fabric           Switch Name     Vendor  Model       Switch WWN       Status
---------------- --------------- ------- ----------- ---------------- ------
1000000533a9e7a6 brcd6505-fcs40  Brocade Brocade6505 1000000533a9e7a6 OK
1000000533a9e7a6 brcd6505-fcs42  Brocade Brocade6505 1000000533d3660a OK
1000000533ed94d1 brcd6510-fcs44  Brocade Brocade6510 1000000533eda031 OK
1000000533ed94d1 brcd6510-fcs45  Brocade Brocade6510 1000000533ed94d1 OK
4 entries were displayed.

controller_A_1::>
If the worldwide name (WWN) of the switch is shown, the ONTAP health monitor can contact
and monitor the FC switch.
Related information
System administration
Configuring FC-to-SAS bridges for health monitoring
You must perform some special configuration steps to monitor the FC-to-SAS bridges in the
MetroCluster configuration.
About this task
Third-party SNMP monitoring tools are not supported for FibreBridge bridges.
Steps
1. Configure each FC-to-SAS bridge for monitoring on each storage controller:
storage bridge add -address ipaddress
This command must be repeated for all FC-to-SAS bridges in the MetroCluster configuration.
Example
The following example shows the command you must use to add an FC-to-SAS bridge with an IP
address of 10.10.20.10:
controller_A_1::> storage bridge add -address 10.10.20.10
2. Verify that all FC-to-SAS bridges are properly configured:
storage bridge show
It might take as long as 15 minutes for all of the data to be reflected because of the polling
interval. The ONTAP health monitor can contact and monitor the bridge if the value in the Status
column is ok, and other information, such as the worldwide name (WWN), is displayed.
Example
The following example shows that the FC-to-SAS bridges are configured:
controller_A_1::> storage bridge show
       Symbolic
Bridge Name        Vendor Model             Bridge WWN Monitored Status
------ ----------- ------ ----------------- ---------- --------- ------
ATTO_1 atto6500n-1 Atto   FibreBridge 6500N Sample WWN true      ok
ATTO_2 atto7500n-2 Atto   FibreBridge 7500N Sample WWN true      ok
ATTO_3 atto7500n-3 Atto   FibreBridge 7500N Sample WWN true      ok
ATTO_4 atto7500n-4 Atto   FibreBridge 7500N Sample WWN true      ok

controller_A_1::>
Checking the MetroCluster configuration
You can check that the components and relationships in the MetroCluster configuration are working
correctly. You should do a check after initial configuration, after making any changes to the
MetroCluster configuration, and before a negotiated (planned) switchover or switchback operation.
After you run the metrocluster check run command, you then display the results of the check
with various metrocluster check show commands.
About this task
If the metrocluster check run command is issued twice within a short time, on either or both
clusters, a conflict can occur and the command might not collect all data. Subsequent
metrocluster check show commands will not show the expected output.
Steps
1. Check the configuration:
metrocluster check run
Example
The command will run as a background job:
controller_A_1::> metrocluster check run
[Job 60] Job succeeded: Check is successful. Run "metrocluster check
show" command to view the results of this operation.
controller_A_1::> metrocluster check show
Last Checked On: 5/22/2015 12:23:28

Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
clusters            ok
5 entries were displayed.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy four-node MetroCluster configuration:
controller_A_1::> metrocluster check aggregate show
Last Checked On: 8/5/2014 00:42:58

Node                  Aggregate              Check                 Result
--------------------- ---------------------- --------------------- ---------
controller_A_1        aggr0_controller_A_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_A_2        aggr0_controller_A_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_A_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_1        aggr0_controller_B_1_0
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_1_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
controller_B_2        aggr0_controller_B_2
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
                      controller_B_2_aggr1
                                             mirroring-status      ok
                                             disk-pool-allocation  ok
16 entries were displayed.
The following example shows the metrocluster check cluster show output for a healthy
four-node MetroCluster configuration. It indicates that the clusters are ready to perform a
negotiated switchover if necessary.
Last Checked On: 5/22/2015 12:23:28

Cluster               Check                                Result
--------------------- ------------------------------------ --------------
cluster_A
                      negotiated-switchover-ready          ok
                      switchback-ready                     not-applicable
                      job-schedules                        ok
                      licenses                             ok
cluster_B
                      negotiated-switchover-ready          ok
                      switchback-ready                     not-applicable
                      job-schedules                        ok
                      licenses                             ok
8 entries were displayed.
Related information
Disk and aggregate management
Network and LIF management
Data protection using SnapMirror and SnapVault technology
Checking for MetroCluster configuration errors with Config
Advisor
You can go to the NetApp Support Site and download the Config Advisor tool to check for common
configuration errors.
About this task
Config Advisor is a configuration validation and health check tool. You can deploy it at both secure
sites and non-secure sites for data collection and system analysis.
Note: Support for Config Advisor is limited, and available only online.
Steps
1. Go to the Config Advisor download page and download the tool.
NetApp Downloads: Config Advisor
2. Run Config Advisor, review the tool's output and follow the recommendations in the output to
address any issues discovered.
Verifying local HA operation
If you have a four-node MetroCluster configuration, you should verify the operation of the local HA
pairs in the MetroCluster configuration. This is not required for two-node configurations.
About this task
Two-node MetroCluster configurations do not include local HA pairs, so this task does not apply.
The examples in this task use standard naming conventions:
• cluster_A
  ◦ controller_A_1
  ◦ controller_A_2
• cluster_B
  ◦ controller_B_1
  ◦ controller_B_2
Steps
1. On cluster_A, perform a failover and giveback in both directions.
a. Confirm that storage failover is enabled:
storage failover show
Example
The output should indicate that takeover is possible for both nodes:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- ---------------------------
controller_A_1 controller_A_2 true     Connected to controller_A_2
controller_A_2 controller_A_1 true     Connected to controller_A_1
2 entries were displayed.
b. Take over controller_A_2 from controller_A_1:
storage failover takeover controller_A_2
You can use the storage failover show-takeover command to monitor the progress of
the takeover operation.
c. Confirm that the takeover is complete:
storage failover show
Example
The output should indicate that controller_A_1 is in takeover state, meaning that it has taken
over its HA partner:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -----------------
controller_A_1 controller_A_2 false    In takeover
controller_A_2 controller_A_1 -        Unknown
2 entries were displayed.
d. Give back controller_A_2:
storage failover giveback controller_A_2
You can use the storage failover show-giveback command to monitor the progress of
the giveback operation.
e. Confirm that storage failover has returned to a normal state:
storage failover show
Example
The output should indicate that takeover is possible for both nodes:
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- ---------------------------
controller_A_1 controller_A_2 true     Connected to controller_A_2
controller_A_2 controller_A_1 true     Connected to controller_A_1
2 entries were displayed.
f. Repeat the previous substeps, this time taking over controller_A_1 from controller_A_2.
2. Repeat the preceding steps on cluster_B.
Related information
High-availability configuration
Verifying switchover, healing, and switchback
You should verify the switchover, healing, and switchback operations of the MetroCluster
configuration.
Step
1. Use the procedures for negotiated switchover, healing, and switchback that are described in the
MetroCluster Management and Disaster Recovery Guide.
MetroCluster management and disaster recovery
Installing the MetroCluster Tiebreaker software
You can download and install Tiebreaker software to monitor the two clusters and the connectivity
status between them from a third site. Doing so enables each partner cluster to distinguish
between an ISL failure (when inter-site links are down) and a site failure.
Before you begin
You must have a Linux host available that has network connectivity to both clusters in the
MetroCluster configuration.
Steps
1. Go to MetroCluster Tiebreaker Software Download page.
NetApp Downloads: MetroCluster Tiebreaker for Linux
2. Follow the directions to download the Tiebreaker software and documentation.
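The Tiebreaker software is distributed as a package for supported Linux hosts. As a rough sketch
(the package file name here is hypothetical and depends on the version you download), an
RPM-based installation on the Linux host might look like the following:

[root@tiebreaker ~]# rpm -ivh NetApp-MetroCluster-Tiebreaker-Software-1.1-1.x86_64.rpm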
Protecting configuration backup files
You can provide additional protection for the cluster configuration backup files by specifying a
remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in addition
to the default locations in the local cluster.
Step
1. Set the URL of the remote destination for the configuration backup files:
system configuration backup settings modify -destination URL-of-destination
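Example
The following sketch uses a hypothetical FTP destination; the URL you specify depends on your
own backup server:

cluster_A::> system configuration backup settings modify -destination ftp://192.0.2.98/backups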
Related information
System administration
Considerations when removing MetroCluster
configurations
You can remove the MetroCluster configuration from all of the nodes in a MetroCluster
configuration or only from the nodes in a disaster recovery (DR) group. After removing the
MetroCluster configuration, all disk connectivity and interconnects should be adjusted to be in a
supported state. If you need to remove the MetroCluster configuration, contact technical support.
Warning: You cannot easily reverse the MetroCluster unconfiguration. This process should only be
done with the assistance of technical support.
Planning and installing a MetroCluster
configuration with array LUNs
If you are using array LUNs in your MetroCluster configuration, you must plan the installation and
follow the specific procedures for such a configuration. You can set up a MetroCluster configuration
with either a mix of array LUNs and native disk shelves or only array LUNs.
Planning for a MetroCluster configuration with array LUNs
Creating a detailed plan for your MetroCluster configuration helps you understand the unique
requirements for a MetroCluster configuration that uses LUNs on storage arrays. Installing a
MetroCluster configuration involves connecting and configuring a number of devices, which might
be done by different people. Therefore, the plan also helps you communicate with other people
involved in the installation.
Supported MetroCluster configuration with array LUNs
You can set up a MetroCluster configuration with array LUNs. Both stretch and fabric-attached
configurations are supported. AFF systems are not supported with array LUNs.
The features supported on the MetroCluster configurations vary with the configuration types. The
following table lists the features supported on the different types of MetroCluster configurations with
array LUNs:
Feature                        Fabric-attached configurations          Stretch configurations
                               Eight-node   Four-node   Two-node       Two-node
Number of controllers          Eight        Four        Two            Two
Uses an FC switch
storage fabric                 Yes          Yes         Yes            No
Uses FC-to-SAS bridges         Yes          Yes         Yes            Yes
Supports local HA              Yes          Yes         No             No
Supports automatic
switchover                     Yes          Yes         Yes            Yes
Related concepts
Differences between the ONTAP MetroCluster configurations on page 11
Requirements for a MetroCluster configuration with array
LUNs
The ONTAP systems, storage arrays, and FC switches used in MetroCluster configurations must
meet the requirements for these types of configurations. You must also consider the
SyncMirror requirements for MetroCluster configurations with array LUNs.
Requirements for ONTAP systems
• The ONTAP systems must be identified as supported for MetroCluster configurations.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and ONTAP version to refine your
  search. You can click Show Results to display the list of supported configurations that match the
  criteria.
  Note: You must refer to the alert details associated with any configuration that you select in the
  Interoperability Matrix.
• All the ONTAP systems in a MetroCluster configuration must be of the same model.
• FC-VI adapters must be installed in the appropriate slots for each ONTAP system, depending on
  the model.
  NetApp Hardware Universe
Requirements for storage arrays
• The storage arrays must be identified as supported for MetroCluster configurations.
  NetApp Interoperability Matrix Tool
• The storage arrays in the MetroCluster configuration must be symmetric:
  ◦ The two storage arrays must be from the same supported vendor family and have the same
    firmware version installed.
    FlexArray virtualization implementation for NetApp E-Series storage
    FlexArray virtualization implementation for third-party storage
  ◦ Disk types (for example, SATA, SSD, or SAS) used for mirrored storage must be the same on
    both storage arrays.
  ◦ The parameters for configuring storage arrays, such as RAID type and tiering, must be the
    same across both sites.
Requirements for FC switches
• The switches and switch firmware must be identified as supported for MetroCluster
  configurations.
  NetApp Interoperability Matrix Tool
• Each fabric must have two FC switches.
• Each ONTAP system must be connected to storage using redundant components so that there is
  redundancy in case of device and path failures.
• FAS9000 storage systems support up to eight ISLs per fabric. Other storage system models
  support up to four ISLs per fabric.
• The switches must use the MetroCluster basic switch configuration, ISL settings, and FC-VI
  configurations.
  Configuring the Cisco or Brocade FC switches manually on page 52
SyncMirror requirements
• SyncMirror is required for a MetroCluster configuration.
• Two separate storage arrays, one at each site, are required for the mirrored storage.
• Two sets of array LUNs are required.
  One set is required for the aggregate on the local storage array (pool0) and another set is required
  at the remote storage array for the mirror of the aggregate (the other plex of the aggregate, pool1).
  The array LUNs must be of the same size for mirroring the aggregate.
  Data protection using SnapMirror and SnapVault technology
• Unmirrored aggregates are also supported in the MetroCluster configuration.
  They are not protected in the event of a site disaster.
Installing and cabling the MetroCluster components in a
configuration with array LUNs
For setting up a MetroCluster configuration with array LUNs, you must cable the storage controllers
to the FC switches and cable the ISLs to link the sites. In addition, you must cable the storage arrays
to the FC switches.
Steps
1. Racking the hardware components in a MetroCluster configuration with array LUNs on page 192
2. Preparing a storage array for use with ONTAP systems on page 193
3. Switch ports required for a MetroCluster configuration with array LUNs on page 194
4. Cabling the FC-VI and HBA ports in a MetroCluster configuration with array LUNs on page 199
5. Cabling the ISLs in a MetroCluster configuration with array LUNs on page 205
6. Cabling the cluster interconnect in eight- or four-node configurations on page 206
7. Cabling the cluster peering connections on page 207
8. Cabling the HA interconnect, if necessary on page 207
9. Cabling the management and data connections on page 208
10. Cabling storage arrays to FC switches in a MetroCluster configuration on page 208
Racking the hardware components in a MetroCluster configuration with
array LUNs
You must ensure that the hardware components required to set up a MetroCluster configuration with
array LUNs are properly racked.
About this task
You must perform this task at both the MetroCluster sites.
Steps
1. Plan the positioning of the MetroCluster components.
The rack space depends on the platform model of the storage controllers, the switch types, and the
number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the storage controllers in the rack or cabinet.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
4. Install the FC switches in the rack or cabinet.
Preparing a storage array for use with ONTAP systems
Before you can begin setting up ONTAP systems in a MetroCluster configuration with array LUNs,
the storage array administrator must prepare the storage for use with ONTAP.
Before you begin
The storage arrays, firmware, and switches that you plan to use in the configuration must be
supported by the specific ONTAP version.
• NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and ONTAP version to refine your
  search. You can click Show Results to display the list of supported configurations that match the
  criteria.
• NetApp Hardware Universe
About this task
You must coordinate with the storage array administrator to perform this task on the storage array.
Steps
1. Create LUNs on the storage array depending on the number of nodes in the MetroCluster
configuration.
Each node in the MetroCluster configuration requires array LUNs for the root aggregate, data
aggregate, and spares.
2. Configure parameters on the storage array that are required to work with ONTAP.
• FlexArray virtualization implementation for third-party storage
• FlexArray virtualization implementation for NetApp E-Series storage
Switch ports required for a MetroCluster configuration with array LUNs
When you are connecting ONTAP systems to FC switches for setting up a MetroCluster
configuration with array LUNs, you must connect FC-VI and HBA ports from each controller to
specific switch ports.
If you are using both array LUNs and disks in the MetroCluster configuration, you must ensure that
the controller ports are connected to the switch ports recommended for configuration with disks, and
then use the remaining ports for configuration with array LUNs.
The following table lists the specific FC switch ports to which you must connect the different
controller ports in an eight-node MetroCluster configuration with array LUNs.
Overall cabling guidelines with array LUNs
You should be aware of the following guidelines when using the cabling tables:
• The Brocade and Cisco switches use different port numbering:
  ◦ On Brocade switches, the first port is numbered 0.
  ◦ On Cisco switches, the first port is numbered 1.
• The cabling is the same for each FC switch in the switch fabric.
• FAS8200 storage systems can be ordered with one of two options for FC-VI connectivity:
  ◦ Onboard ports 0e and 0f configured in FC-VI mode.
  ◦ Ports 1a and 1b on an FC-VI card in slot 1.
• FAS9000 storage systems support four FC-VI ports. The following tables show cabling for the FC
  switches with four FC-VI ports on each controller.
  For other storage systems, use the cabling shown in the tables but ignore the cabling for FC-VI
  ports c and d. You can leave those ports empty.
Brocade port usage for controllers in a MetroCluster configuration
The following tables show port usage on Brocade switches. The tables show the maximum supported
configuration, with eight controller modules in two DR groups. For smaller configurations, ignore the
rows for the additional controller modules. Note that eight ISLs are supported only on the Brocade
6510 switch.
DR GROUP 1

                             Brocade 6505      Brocade 6510      Brocade 6520
Component      Port          Switch 1 Switch 2 Switch 1 Switch 2 Switch 1 Switch 2
controller_x_1 FC-VI port a  0        -        0        -        0        -
               FC-VI port b  -        0        -        0        -        0
               FC-VI port c  1        -        1        -        1        -
               FC-VI port d  -        1        -        1        -        1
               HBA port a    2        -        2        -        2        -
               HBA port b    -        2        -        2        -        2
               HBA port c    3        -        3        -        3        -
               HBA port d    -        3        -        3        -        3
controller_x_2 FC-VI port a  4        -        4        -        4        -
               FC-VI port b  -        4        -        4        -        4
               FC-VI port c  5        -        5        -        5        -
               FC-VI port d  -        5        -        5        -        5
               HBA port a    6        -        6        -        6        -
               HBA port b    -        6        -        6        -        6
               HBA port c    7        -        7        -        7        -
               HBA port d    -        7        -        7        -        7
DR GROUP 2

                             Brocade 6505      Brocade 6510      Brocade 6520
Component      Port          Switch 1 Switch 2 Switch 1 Switch 2 Switch 1 Switch 2
controller_x_3 FC-VI port a  Not supported     24       -        48       -
               FC-VI port b                    -        24       -        48
               FC-VI port c                    25       -        49       -
               FC-VI port d                    -        25       -        49
               HBA port a                      26       -        50       -
               HBA port b                      -        26       -        50
               HBA port c                      27       -        51       -
               HBA port d                      -        27       -        51
controller_x_4 FC-VI port a  Not supported     28       -        52       -
               FC-VI port b                    -        28       -        52
               FC-VI port c                    29       -        53       -
               FC-VI port d                    -        29       -        53
               HBA port a                      30       -        54       -
               HBA port b                      -        30       -        54
               HBA port c                      31       -        55       -
               HBA port d                      -        31       -        55

ISLs

                             Brocade 6505      Brocade 6510      Brocade 6520
               Port          Switch 1 Switch 2 Switch 1 Switch 2 Switch 1 Switch 2
ISLs           ISL 1         20       20       40       40       23       23
               ISL 2         21       21       41       41       47       47
               ISL 3         22       22       42       42       71       71
               ISL 4         23       23       43       43       95       95
               ISL 5         Not supported     44       44       Not supported
               ISL 6                           45       45
               ISL 7                           46       46
               ISL 8                           47       47
Cisco port usage for controllers in a MetroCluster configuration
The tables show the maximum supported configuration, with eight controller modules in two DR
groups. For smaller configurations, ignore the rows for the additional controller modules.
Cisco 9396S

Component      Port          Switch 1 Switch 2
controller_x_1 FC-VI port a  1        -
               FC-VI port b  -        1
               FC-VI port c  2        -
               FC-VI port d  -        2
               HBA port a    3        -
               HBA port b    -        3
               HBA port c    4        -
               HBA port d    -        4
controller_x_2 FC-VI port a  5        -
               FC-VI port b  -        5
               FC-VI port c  6        -
               FC-VI port d  -        6
               HBA port a    7        -
               HBA port b    -        7
               HBA port c    8        -
               HBA port d    -        8
controller_x_3 FC-VI port a  49       -
               FC-VI port b  -        49
               FC-VI port c  50       -
               FC-VI port d  -        50
               HBA port a    51       -
               HBA port b    -        51
               HBA port c    52       -
               HBA port d    -        52
controller_x_4 FC-VI port a  53       -
               FC-VI port b  -        53
               FC-VI port c  54       -
               FC-VI port d  -        54
               HBA port a    55       -
               HBA port b    -        55
               HBA port c    56       -
               HBA port d    -        56
Cisco 9148 or 9148S

Component      Port          Switch 1 Switch 2
controller_x_1 FC-VI port a  1        -
               FC-VI port b  -        1
               HBA port a    2        -
               HBA port b    -        2
               HBA port c    3        -
               HBA port d    -        3
controller_x_2 FC-VI port a  4        -
               FC-VI port b  -        4
               HBA port a    5        -
               HBA port b    -        5
               HBA port c    6        -
               HBA port d    -        6
controller_x_3 FC-VI port a  7        -
               FC-VI port b  -        7
               HBA port a    8        -
               HBA port b    -        8
               HBA port c    9        -
               HBA port d    -        9
controller_x_4 FC-VI port a  10       -
               FC-VI port b  -        10
               HBA port a    11       -
               HBA port b    -        11
               HBA port c    13       -
               HBA port d    -        13
Shared initiator and shared target support for MetroCluster configuration with array
LUNs
Being able to share a given FC initiator port or target ports is useful for organizations that want to
minimize the number of initiator or target ports used. For example, an organization that expects low
I/O usage over an FC initiator port or target ports might prefer to share FC initiator port or target
ports instead of dedicating each FC initiator port to a single target port.
However, sharing of initiator or target ports can adversely affect performance.
NetApp KB Article 1015978: How to support Shared Initiator and Shared Target configuration with
Array LUNs in a MetroCluster (MCC) environment
Cabling the FC-VI and HBA ports in a MetroCluster configuration with array
LUNs
For a fabric-attached MetroCluster configuration with array LUNs, you must connect the controllers
in a MetroCluster configuration to the storage arrays through FC switches.
Choices
• Cabling the FC-VI and HBA ports in a two-node fabric-attached MetroCluster configuration with
array LUNs on page 199
• Cabling the FC-VI and HBA ports in a four-node fabric-attached MetroCluster configuration with
array LUNs on page 200
• Cabling the FC-VI and HBA ports in an eight-node fabric-attached MetroCluster configuration
with array LUNs on page 202
Cabling the FC-VI and HBA ports in a two-node fabric-attached MetroCluster configuration
with array LUNs
If you are setting up a two-node fabric-attached MetroCluster configuration with array LUNs, you
must cable the FC-VI ports and the HBA ports to the switch ports.
About this task
• You must repeat this task for each controller at both of the MetroCluster sites.
• If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you must
  use the HBA ports and switch ports specified for configuration with disks.
  Port assignments for FC switches on page 38
Steps
1. Cable the FC-VI ports from the controller to alternate switch ports.
Example
The following example shows two FC-VI ports from Controller A cabled to switch ports on
alternate switches FC_switch_A_1 and FC_switch_A_2:
2. Perform the controller-to-switch cabling at both of the MetroCluster sites.
You must ensure redundancy in connections from the controller to the switches. Therefore, for
each controller at a site, you must ensure that both of the HBA ports in the same port pair are
connected to alternate FC switches.
Example
The following example shows the connections between the HBA ports on Controller A and ports
on FC_switch_A_1 and FC_switch_A_2:
The following table lists the connections between the HBA ports and the FC switch ports in the
illustration:

           HBA ports   Switch ports
Port pair  Port a      FC_switch_A_1, Port 2
           Port d      FC_switch_A_2, Port 3
Port pair  Port b      FC_switch_A_2, Port 2
           Port c      FC_switch_A_1, Port 3
After you finish
You should cable the ISLs between the FC switches across the MetroCluster sites.
Cabling the FC-VI and HBA ports in a four-node fabric-attached MetroCluster configuration
with array LUNs
If you are setting up a four-node fabric-attached MetroCluster configuration with array LUNs, you
must cable the FC-VI ports and the HBA ports to the switch ports.
About this task
• You must repeat this task for each controller at both of the MetroCluster sites.
• If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you must
  use the HBA ports and switch ports specified for configuration with disks.
  Port assignments for FC switches on page 38
Steps
1. Cable the FC-VI ports from each controller to the ports on alternate FC switches.
Example
The following example shows the connections between the FC-VI ports and switch ports at Site
A:
2. Perform the controller-to-switch cabling at both of the MetroCluster sites.
You must ensure redundancy in connections from the controller to the switches. Therefore, for
each controller at a site, you must ensure that both of the HBA ports in the same port pair are
connected to alternate FC switches.
Example
The following example shows the connections between the HBA ports and switch ports at Site A:
The following table lists the connections between the HBA ports on controller_A_1 and the FC
switch ports in the illustration:

           HBA ports   Switch ports
Port pair  Port a      FC_switch_A_1, Port 2
           Port d      FC_switch_A_2, Port 3
Port pair  Port b      FC_switch_A_2, Port 2
           Port c      FC_switch_A_1, Port 3

The following table lists the connections between the HBA ports on controller_A_2 and the FC
switch ports in the illustration:

           HBA ports   Switch ports
Port pair  Port a      FC_switch_A_1, Port 5
           Port d      FC_switch_A_2, Port 6
Port pair  Port b      FC_switch_A_2, Port 5
           Port c      FC_switch_A_1, Port 6
After you finish
You should cable the ISLs between the FC switches across the MetroCluster sites.
Related concepts
Switch ports required for a MetroCluster configuration with array LUNs on page 194
Cabling the FC-VI and HBA ports in an eight-node fabric-attached MetroCluster
configuration with array LUNs
If you are setting up an eight-node fabric-attached MetroCluster configuration with array LUNs, you
must cable the FC-VI ports and the HBA ports to the switch ports.
About this task
• You must repeat this task for each controller at both of the MetroCluster sites.
• If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you must
  use the HBA ports and switch ports specified for configuration with disks.
  Port assignments for FC switches on page 38
Step
1. Cable the FC-VI ports and HBA ports from each controller to the ports on alternate FC switches.
Configurations using FibreBridge 7500N using both FC ports (FC1 and FC2)

DR GROUP 1

                               Brocade 6505      Brocade 6510      Brocade 6520
Component        Port          Switch 1 Switch 2 Switch 1 Switch 2 Switch 1 Switch 2
controller_x_1   FC-VI port a  0        -        0        -        0        -
                 FC-VI port b  -        0        -        0        -        0
                 FC-VI port c  1        -        1        -        1        -
                 FC-VI port d  -        1        -        1        -        1
                 HBA port a    2        -        2        -        2        -
                 HBA port b    -        2        -        2        -        2
                 HBA port c    3        -        3        -        3        -
                 HBA port d    -        3        -        3        -        3
controller_x_2   FC-VI port a  4        -        4        -        4        -
                 FC-VI port b  -        4        -        4        -        4
                 FC-VI port c  5        -        5        -        5        -
                 FC-VI port d  -        5        -        5        -        5
                 HBA port a    6        -        6        -        6        -
                 HBA port b    -        6        -        6        -        6
                 HBA port c    7        -        7        -        7        -
                 HBA port d    -        7        -        7        -        7
Stack 1
bridge_x_1a      FC1           8        -        8        -        8        -
                 FC2           -        8        -        8        -        8
bridge_x_1B      FC1           9        -        9        -        9        -
                 FC2           -        9        -        9        -        9
Configurations using FibreBridge 7500N using both FC ports (FC1 and FC2), continued

DR GROUP 1

                               Brocade 6505      Brocade 6510      Brocade 6520
Component        Port          Switch 1 Switch 2 Switch 1 Switch 2 Switch 1 Switch 2
Stack 2
bridge_x_2a      FC1           10       -        10       -        10       -
                 FC2           -        10       -        10       -        10
bridge_x_2B      FC1           11       -        11       -        11       -
                 FC2           -        11       -        11       -        11
Stack 3
bridge_x_3a      FC1           12       -        12       -        12       -
                 FC2           -        12       -        12       -        12
bridge_x_3B      FC1           13       -        13       -        13       -
                 FC2           -        13       -        13       -        13
Stack y
bridge_x_ya      FC1           14       -        14       -        14       -
                 FC2           -        14       -        14       -        14
bridge_x_yb      FC1           15       -        15       -        15       -
                 FC2           -        15       -        15       -        15
Cisco 9148 or 9148S

Component      Port          Switch 1 Switch 2
controller_x_1 FC-VI port a  1        -
               FC-VI port b  -        1
               HBA port a    2        -
               HBA port b    -        2
               HBA port c    3        -
               HBA port d    -        3
controller_x_2 FC-VI port a  4        -
               FC-VI port b  -        4
               HBA port a    5        -
               HBA port b    -        5
               HBA port c    6        -
               HBA port d    -        6
controller_x_3 FC-VI port a  7        -
               FC-VI port b  -        7
               HBA port a    8        -
               HBA port b    -        8
               HBA port c    9        -
               HBA port d    -        9
controller_x_4 FC-VI port a  10       -
               FC-VI port b  -        10
               HBA port a    11       -
               HBA port b    -        11
               HBA port c    13       -
               HBA port d    -        13
After you finish
You should cable the ISLs between the FC switches across the MetroCluster sites.
Cabling the ISLs in a MetroCluster configuration with array LUNs
You must connect the FC switches across the sites through Inter-Switch Links (ISLs) to form switch
fabrics in your MetroCluster configuration with array LUNs.
Step
1. Connect the switches at each site to the ISL or ISLs, using the cabling in the table that
corresponds to your configuration and switch model.
The switch port numbers that you can use for the FC ISLs are as follows:

Switch model       ISL port     Switch port
Brocade 6520       ISL port 1   23
                   ISL port 2   47
                   ISL port 3   71
                   ISL port 4   95
Brocade 6505       ISL port 1   20
                   ISL port 2   21
                   ISL port 3   22
                   ISL port 4   23
Brocade 6510       ISL port 1   40
                   ISL port 2   41
                   ISL port 3   42
                   ISL port 4   43
                   ISL port 5   44
                   ISL port 6   45
                   ISL port 7   46
                   ISL port 8   47

                   Cisco 9396S  Cisco 9148 or 9148S
ISL connection     Switch port  Switch port
ISL 1              44           12
ISL 2              48           16
ISL 3              92           20
ISL 4              96           24

Note: The Cisco 9250i switch uses the FCIP ports for the ISL. There are certain limitations and
procedures for using the FCIP ports.
NetApp KB Article 1015624: How to configure the Cisco 9250i FC storage backend switch in
MetroCluster for clustered Data ONTAP
Ports 40 through 48 are 10 GbE ports and are not used in the MetroCluster configuration.
Cabling the cluster interconnect in eight- or four-node configurations
In eight- or four-node MetroCluster configurations, you must cable the cluster interconnect between
the local controller modules at each site.
About this task
This task is not required on two-node MetroCluster configurations.
This task must be performed at both MetroCluster sites.
Step
1. Cable the cluster interconnect from one controller module to the other, or if cluster interconnect
switches are used, from each controller module to the switches.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Related information
Network and LIF management
Cabling the cluster peering connections
You must cable the controller ports used for cluster peering so that they have connectivity with the
cluster on the partner site.
About this task
This task must be performed on each controller in the MetroCluster configuration.
At least two ports on each controller should be used for cluster peering.
The recommended minimum bandwidth for the ports and network connectivity is 1 GbE.
Step
1. Identify and cable at least two ports for cluster peering and verify they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports provides
higher throughput for the cluster peering traffic.
Cluster peering express configuration
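For example, if dedicated ports are used, an intercluster LIF can be created on each node with a
command similar to the following sketch (the port name, IP address, and netmask are hypothetical
and depend on your network):

cluster_A::> network interface create -vserver cluster_A -lif intercluster01 -role intercluster -home-node controller_A_1 -home-port e0e -address 192.0.2.120 -netmask 255.255.255.0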
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Data protection using SnapMirror and SnapVault technology
Cluster peering express configuration
Cabling the HA interconnect, if necessary
If you have an eight- or a four-node MetroCluster configuration and the storage controllers within the
HA pairs are in separate chassis, you must cable the HA interconnect between the controllers.
About this task
• This task does not apply to two-node MetroCluster configurations.
• This task must be performed at both MetroCluster sites.
• The HA interconnect must be cabled only if the storage controllers within the HA pair are in
  separate chassis.
  Some storage controller models support two controllers in a single chassis, in which case they use
  an internal HA interconnect.
Steps
1. Cable the HA interconnect if the storage controller's HA partner is in a separate chassis.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
2. If the MetroCluster site includes two HA pairs, repeat the previous steps on the second HA pair.
3. Repeat this task at the MetroCluster partner site.
Cabling the management and data connections
You must cable the management and data ports on each storage controller to the site networks.
About this task
This task must be repeated for each new controller at both MetroCluster sites.
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Note: AFF systems are not supported with array LUNs.
Installation and Setup Instructions for AFF A300 Systems
Installation and Setup Instructions for AFF A700 and FAS9000
Installation and Setup Instructions for FAS8200 Systems
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Cabling storage arrays to FC switches in a MetroCluster configuration
You must connect storage arrays to FC switches so that the ONTAP systems in the MetroCluster
configuration can access a specific array LUN through at least two paths.
Before you begin
• The storage arrays must be set up to present array LUNs to ONTAP.
• The ONTAP controllers must be connected to the FC switches.
• The ISLs must be cabled between the FC switches across the MetroCluster sites.
About this task
• You must repeat this task for each storage array at both of the MetroCluster sites.
• You must connect the controllers in a MetroCluster configuration to the storage arrays through FC
  switches.
Step
1. Connect the storage array ports to FC switch ports.
At each site, connect the redundant port pairs in the storage array to FC switches on alternate
fabrics. This provides redundancy in the paths for accessing the array LUNs.
Related concepts
Switch zoning in a MetroCluster configuration with array LUNs on page 213
Related references
Example of cabling storage array ports to FC switches in a two-node MetroCluster configuration on page 209
Example of cabling storage array ports to FC switches in a four-node MetroCluster configuration on page 210
Example of cabling storage array ports to FC switches in an eight-node MetroCluster configuration on page 212
Example of cabling storage array ports to FC switches in a two-node MetroCluster
configuration
In a MetroCluster configuration with array LUNs, you must connect the storage array ports that form
a redundant port pair to alternate FC switches.
The following illustration shows the connections between storage arrays and FC switches in a two-node fabric-attached MetroCluster configuration with array LUNs:
The connections between storage array ports and FC switch ports are similar for both stretch and
fabric-attached variants of two-node MetroCluster configurations with array LUNs.
Note: If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you
must use the switch ports specified for the configuration with disks.
Port assignments for FC switches on page 38
In the illustration, the redundant array port pairs for both the sites are as follows:
• Storage array at Site A:
  ◦ Ports 1A and 2A
  ◦ Ports 1B and 2B
• Storage array at Site B:
  ◦ Ports 1A' and 2A'
  ◦ Ports 1B' and 2B'
FC_switch_A_1 at Site A and FC_switch_B_1 at Site B are connected to form fabric_1. Similarly,
FC_switch_A_2 at Site A and FC_switch_B_2 at Site B are connected to form fabric_2.
The following table lists the connections between the storage array ports and the FC switches for the
example MetroCluster illustration:

Array LUN ports    FC switch ports           Switch fabrics
Site A
  1A               FC_switch_A_1, Port 9     fabric_1
  2A               FC_switch_A_2, Port 10    fabric_2
  1B               FC_switch_A_1, Port 10    fabric_1
  2B               FC_switch_A_2, Port 9     fabric_2
Site B
  1A'              FC_switch_B_1, Port 9     fabric_1
  2A'              FC_switch_B_2, Port 10    fabric_2
  1B'              FC_switch_B_1, Port 10    fabric_1
  2B'              FC_switch_B_2, Port 9     fabric_2
Example of cabling storage array ports to FC switches in a four-node MetroCluster
configuration
In a MetroCluster configuration with array LUNs, you must connect the storage array ports that form
a redundant port pair to alternate FC switches.
The following reference illustration shows the connections between storage arrays and FC switches
in a four-node MetroCluster configuration with array LUNs:
Note: If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you
must use the switch ports specified for the configuration with disks.
Port assignments for FC switches on page 38
In the illustration, the redundant array port pairs for both the sites are as follows:
• Storage array at Site A:
  ◦ Ports 1A and 2A
  ◦ Ports 1B and 2B
  ◦ Ports 1C and 2C
  ◦ Ports 1D and 2D
• Storage array at Site B:
  ◦ Ports 1A' and 2A'
  ◦ Ports 1B' and 2B'
  ◦ Ports 1C' and 2C'
  ◦ Ports 1D' and 2D'
FC_switch_A_1 at Site A and FC_switch_B_1 at Site B are connected to form fabric_1. Similarly,
FC_switch_A_2 at Site A and FC_switch_B_2 at Site B are connected to form fabric_2.
The following table lists the connections between the storage array ports and the FC switches for the
MetroCluster illustration:

Array LUN ports    FC switch ports           Switch fabrics
Site A
  1A               FC_switch_A_1, Port 7     fabric_1
  2A               FC_switch_A_2, Port 11    fabric_2
  1B               FC_switch_A_1, Port 8     fabric_1
  2B               FC_switch_A_2, Port 10    fabric_2
  1C               FC_switch_A_1, Port 9     fabric_1
  2C               FC_switch_A_2, Port 9     fabric_2
  1D               FC_switch_A_1, Port 10    fabric_1
  2D               FC_switch_A_2, Port 8     fabric_2
Site B
  1A'              FC_switch_B_1, Port 7     fabric_1
  2A'              FC_switch_B_2, Port 11    fabric_2
  1B'              FC_switch_B_1, Port 8     fabric_1
  2B'              FC_switch_B_2, Port 10    fabric_2
  1C'              FC_switch_B_1, Port 9     fabric_1
  2C'              FC_switch_B_2, Port 9     fabric_2
  1D'              FC_switch_B_1, Port 10    fabric_1
  2D'              FC_switch_B_2, Port 8     fabric_2
Example of cabling storage array ports to FC switches in an eight-node MetroCluster
configuration
In a MetroCluster configuration with array LUNs, you must connect the storage array ports that form
a redundant port pair to alternate FC switches.
An eight-node MetroCluster configuration consists of two four-node DR groups. The first DR group
consists of the following nodes:
• controller_A_1
• controller_A_2
• controller_B_1
• controller_B_2
The second DR group consists of the following nodes:
• controller_A_3
• controller_A_4
• controller_B_3
• controller_B_4
To cable the array ports for the first DR group, you can use the cabling examples for a four-node
MetroCluster configuration.
Example of cabling storage array ports to FC switches in a four-node MetroCluster configuration on page 210
To cable the array ports for the second DR group, follow the same examples and extrapolate for the
FC-VI ports and FC initiator ports belonging to the controllers in the second DR group.
Switch zoning in a MetroCluster configuration with array
LUNs
Configuring switch zoning enables you to define which array LUNs can be viewed by a specific
ONTAP system in the MetroCluster configuration.
Related concepts
Example of switch zoning in a two-node MetroCluster configuration with array LUNs on page 214
Example of switch zoning in a four-node MetroCluster configuration with array LUNs on page 215
Example of switch zoning in an eight-node MetroCluster configuration with array LUNs on page 217
Requirements for switch zoning in a MetroCluster configuration with array
LUNs
When using switch zoning in a MetroCluster configuration with array LUNs, you must ensure that certain basic requirements are met.
The requirements for switch zoning in a MetroCluster configuration with array LUNs are as follows:
• The MetroCluster configuration must follow the single-initiator to single-target zoning scheme. Single-initiator to single-target zoning limits each zone to a single FC initiator port and a single target port.
• The FC-VI ports must be zoned end-to-end across the fabric.
• Sharing of multiple initiator ports with a single target port can cause performance issues. Similarly, sharing of multiple target ports with a single initiator port can cause performance issues.
• You must have performed a basic configuration of the FC switches used in the MetroCluster configuration.
Configuring the Cisco or Brocade FC switches manually on page 52
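When creating the zones on a Brocade switch, for example, each zone can be created with exactly one initiator WWPN and one target WWPN and then added to the fabric's active zoning configuration. The following is a minimal sketch assuming Brocade FOS syntax; the zone name, WWPNs, and configuration name are hypothetical placeholders, not values from this guide:

FC_switch_A_1:admin> zonecreate "z1", "10:00:00:00:c9:2b:51:2c; 50:06:01:60:10:20:ad:42"
FC_switch_A_1:admin> cfgcreate "fabric_1_cfg", "z1"
FC_switch_A_1:admin> cfgsave
FC_switch_A_1:admin> cfgenable "fabric_1_cfg"

An equivalent single-initiator to single-target configuration can be applied on Cisco switches by using zone and zoneset commands.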
Shared initiator and shared target support for MetroCluster configuration with array
LUNs
Sharing a given FC initiator port or target port is useful for organizations that want to minimize the number of initiator or target ports used. For example, an organization that expects low I/O usage over an FC initiator port or target port might prefer to share ports instead of dedicating each FC initiator port to a single target port. However, sharing of initiator or target ports can adversely affect performance.
NetApp KB Article 1015978: How to support Shared Initiator and Shared Target configuration with
Array LUNs in a MetroCluster (MCC) environment
Example of switch zoning in a two-node MetroCluster configuration with
array LUNs
Switch zoning defines paths between connected nodes. Configuring the zoning enables you to define
which array LUNs can be viewed by specific ONTAP systems.
You can use the following example as a reference when determining zoning for a two-node fabric-attached MetroCluster configuration with array LUNs:
The example shows single-initiator to single-target zoning for the MetroCluster configurations. The
lines in the example represent zones rather than connections; each line is labeled with its zone
number.
In the example, array LUNs are allocated on each storage array. LUNs of equal size are provisioned
on the storage arrays at both sites, which is a SyncMirror requirement. Each ONTAP system has two
paths to array LUNs. The ports on the storage array are redundant.
The redundant array port pairs for both the sites are as follows:
• Storage array at Site A:
  ◦ Ports 1A and 2A
  ◦ Ports 1B and 2B
• Storage array at Site B:
  ◦ Ports 1A' and 2A'
  ◦ Ports 1B' and 2B'
The redundant port pairs on each storage array form alternate paths; therefore, both ports of a port pair can access the LUNs on the respective storage array.
The following table shows the zones for the illustrations:
Switch           Zone   ONTAP controller and initiator port   Storage array port
FC_switch_A_1    z1     Controller A: Port 0a                 Port 1A
                 z3     Controller A: Port 0c                 Port 1A'
FC_switch_A_2    z2     Controller A: Port 0b                 Port 2A'
                 z4     Controller A: Port 0d                 Port 2A
FC_switch_B_1    z5     Controller B: Port 0a                 Port 1B'
                 z7     Controller B: Port 0c                 Port 1B
FC_switch_B_2    z6     Controller B: Port 0b                 Port 2B
                 z8     Controller B: Port 0d                 Port 2B'
The following table shows the zones for the FC-VI connections:
Site     Zone   ONTAP controller and FC-VI port   Switch
Site A   zX     Controller A: Port FC-VI a        FC_switch_A_1
         zY     Controller A: Port FC-VI b        FC_switch_A_2
Site B   zX     Controller B: Port FC-VI a        FC_switch_B_1
         zY     Controller B: Port FC-VI b        FC_switch_B_2
Example of switch zoning in a four-node MetroCluster configuration with
array LUNs
Switch zoning defines paths between connected nodes. Configuring the zoning enables you to define which array LUNs can be viewed by specific ONTAP systems.
You can use the following example as a reference when determining zoning for a four-node
MetroCluster configuration with array LUNs. The example shows single-initiator to single-target
zoning for a MetroCluster configuration. The lines in the following example represent zones rather
than connections; each line is labeled with its zone number:
In the illustration, array LUNs are allocated on each storage array for the MetroCluster configuration.
LUNs of equal size are provisioned on the storage arrays at both sites, which is a SyncMirror
requirement. Each ONTAP system has two paths to array LUNs. The ports on the storage array are
redundant.
In the illustration, the redundant array port pairs for both the sites are as follows:
• Storage array at Site A:
  ◦ Ports 1A and 2A
  ◦ Ports 1B and 2B
  ◦ Ports 1C and 2C
  ◦ Ports 1D and 2D
• Storage array at Site B:
  ◦ Ports 1A' and 2A'
  ◦ Ports 1B' and 2B'
  ◦ Ports 1C' and 2C'
  ◦ Ports 1D' and 2D'
The redundant port pairs on each storage array form alternate paths; therefore, both ports of a port pair can access the LUNs on the respective storage array.
The following table shows the zones for this example:
Switch           Zone   ONTAP controller and initiator port   Storage array port
FC_switch_A_1    z1     Controller_A_1: Port 0a               Port 1A
                 z3     Controller_A_1: Port 0c               Port 1A'
                 z5     Controller_A_2: Port 0a               Port 1B
                 z7     Controller_A_2: Port 0c               Port 1B'
FC_switch_A_2    z2     Controller_A_1: Port 0b               Port 2A'
                 z4     Controller_A_1: Port 0d               Port 2A
                 z6     Controller_A_2: Port 0b               Port 2B'
                 z8     Controller_A_2: Port 0d               Port 2B
FC_switch_B_1    z9     Controller_B_1: Port 0a               Port 1C'
                 z11    Controller_B_1: Port 0c               Port 1C
                 z13    Controller_B_2: Port 0a               Port 1D'
                 z15    Controller_B_2: Port 0c               Port 1D
FC_switch_B_2    z10    Controller_B_1: Port 0b               Port 2C
                 z12    Controller_B_1: Port 0d               Port 2C'
                 z14    Controller_B_2: Port 0b               Port 2D
                 z16    Controller_B_2: Port 0d               Port 2D'
The following table shows the zones for the FC-VI connections at Site A and Site B:
Site     Zone   ONTAP controller and FC initiator port   Switch
Site A   zX     Controller_A_1: Port FC-VI a             FC_switch_A_1
         zY     Controller_A_1: Port FC-VI b             FC_switch_A_2
         zX     Controller_A_2: Port FC-VI a             FC_switch_A_1
         zY     Controller_A_2: Port FC-VI b             FC_switch_A_2
Site B   zX     Controller_B_1: Port FC-VI a             FC_switch_B_1
         zY     Controller_B_1: Port FC-VI b             FC_switch_B_2
         zX     Controller_B_2: Port FC-VI a             FC_switch_B_1
         zY     Controller_B_2: Port FC-VI b             FC_switch_B_2
Example of switch zoning in an eight-node MetroCluster configuration with
array LUNs
Switch zoning defines paths between connected nodes. Configuring the zoning enables you to define
which array LUNs can be viewed by specific ONTAP systems.
An eight-node MetroCluster configuration consists of two four-node DR groups. The first DR group
consists of the following nodes:
• controller_A_1
• controller_A_2
• controller_B_1
• controller_B_2

The second DR group consists of the following nodes:

• controller_A_3
• controller_A_4
• controller_B_3
• controller_B_4
To configure the switch zoning, you can use the zoning examples for a four-node MetroCluster
configuration for the first DR group.
Example of switch zoning in a four-node MetroCluster configuration with array LUNs on page 215
To configure zoning for the second DR group, follow the same examples and requirements for the FC
initiator ports and array LUNs belonging to the controllers in the second DR group.
Setting up ONTAP in a MetroCluster configuration with array
LUNs
After connecting the devices in the MetroCluster configuration, you must set up the ONTAP systems to use the storage on the storage array. You must also configure any required ONTAP features.
Steps
1. Verifying and configuring the HA state of components in Maintenance mode on page 218
2. Configuring ONTAP on a system that uses only array LUNs on page 219
3. Setting up the cluster on page 222
4. Installing the license for using array LUNs in a MetroCluster configuration on page 222
5. Configuring FC-VI ports on a QLE2564 quad-port card on FAS8020 systems on page 223
6. Assigning ownership of array LUNs on page 225
7. Peering the clusters on page 225
8. Mirroring the root aggregates on page 228
9. Creating a mirrored data aggregate on each node on page 228
10. Creating unmirrored data aggregates on page 229
11. Implementing the MetroCluster configuration on page 230
12. Configuring the MetroCluster FC switches for health monitoring on page 232
13. Checking the MetroCluster configuration on page 233
14. Checking for MetroCluster configuration errors with Config Advisor on page 235
15. Verifying switchover, healing, and switchback on page 235
16. Installing the MetroCluster Tiebreaker software on page 235
17. Protecting configuration backup files on page 236
Verifying and configuring the HA state of components in Maintenance
mode
When configuring a storage system in a MetroCluster configuration, you must make sure that the
high-availability (HA) state of the controller module and chassis components is mcc or mcc-2n so
that these components boot properly.
Before you begin
The system must be in Maintenance mode.
About this task
This task is not required on systems that are received from the factory.
Steps
1. In Maintenance mode, display the HA state of the controller module and chassis:
ha-config show
The correct HA state depends on whether you have a four-node or two-node MetroCluster
configuration.
Number of controllers in the MetroCluster configuration   HA state for all components should be...
Eight or four                                             mcc
Two                                                       mcc-2n
2. If the displayed system state of the controller is not correct, set the HA state for the controller
module:
Number of controllers in the MetroCluster configuration   Command
Eight or four                                             ha-config modify controller mcc
Two                                                       ha-config modify controller mcc-2n
3. If the displayed system state of the chassis is not correct, set the HA state for the chassis:
Number of controllers in the MetroCluster configuration   Command
Eight or four                                             ha-config modify chassis mcc
Two                                                       ha-config modify chassis mcc-2n
4. Boot the node to ONTAP:
boot_ontap
5. Repeat these steps on each node in the MetroCluster configuration.
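Example

The following is a sketch of the check and correction on a node of a two-node configuration; the exact output wording can vary by platform and ONTAP release:

*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha
*> ha-config modify controller mcc-2n
*> ha-config modify chassis mcc-2n
*> boot_ontap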
Configuring ONTAP on a system that uses only array LUNs
If you want to configure ONTAP for use with array LUNs, you must configure the root aggregate and
root volume, reserve space for diagnostics and recovery operations, and set up the cluster.
Before you begin
•
The ONTAP system must be connected to the storage array.
•
The storage array administrator must have created LUNs and presented them to ONTAP.
•
The storage array administrator must have configured the LUN security.
About this task
You must configure each node that you want to use with array LUNs. If the node is in an HA pair,
then you must complete the configuration process on one node before proceeding with the
configuration on the partner node.
Steps
1. Power on the primary node and interrupt the boot process by pressing Ctrl-C when you see the
following message on the console: Press CTRL-C for special boot menu.
2. Select option 4 (Clean configuration and initialize all disks) on the boot menu.
The list of array LUNs made available to ONTAP is displayed. In addition, the array LUN size
required for root volume creation is also specified. The size required for root volume creation
differs from one ONTAP system to another.
220 | Fabric-attached MetroCluster Installation and Configuration Guide
Example
• If no array LUNs were previously assigned, ONTAP detects and displays the available array LUNs, as shown in the following example:

  **********************************************************************
  * No disks or array LUNs are owned by this node.                     *
  * You can use the following information to verify connectivity from *
  * HBAs to switch ports. If the connectivity of HBAs to switch ports *
  * does not match your expectations, configure your SAN and rescan.  *
  * You can rescan by entering 'r' at the prompt for selecting        *
  * array LUNs below. To halt the node enter 'h'.                     *
  **********************************************************************

  HBA  HBA WWPN          Switch port  Switch port WWPN
  ---  ----------------  -----------  ----------------
  v0   0a9032f400000000  switch0:0    9408b11e15000010
  v1   0b9032f400000000  switch0:2    9408b11e15000210
  v2   0c9032f400000000  switch1:0    9508b11e15000010
  v3   0d9032f400000000  switch1:2    9508b11e15000210

  No native disks were detected, but array LUNs were detected.
  You will need to select an array LUN to be used to create the root aggregate and root volume.

  The array LUNs visible to the system are listed below. Select one array LUN to be used to
  create the root aggregate and root volume. The root volume requires 45.0 GB of space.

  Warning: The contents of the array LUN you select will be erased by ONTAP prior to their use.

  Index  Array LUN Name    Model      Vendor  Size      Owner  Checksum  Serial Number
  -----  ----------------  ---------  ------  --------  -----  --------  ------------------------
  0      switch0:5.183L1   SYMMETRIX  EMC     266.1 GB         Block     600604803436313734316631
  1      switch0:5.183L3   SYMMETRIX  EMC     266.1 GB         Block     600604803436316333353837
  2      switch0:5.183L31  SYMMETRIX  EMC     266.1 GB         Block     600604803436313237643666
  3      switch0:5.183L33  SYMMETRIX  EMC     658.3 GB         Block     600604803436316263613066
  4      switch0:7.183L0   SYMMETRIX  EMC     173.6 GB         Block     600604803436313261356235
  5      switch0:7.183L2   SYMMETRIX  EMC     173.6 GB         Block     600604803436313438396431
  6      switch0:7.183L4   SYMMETRIX  EMC     658.3 GB         Block     600604803436313161663031
  7      switch0:7.183L30  SYMMETRIX  EMC     173.6 GB         Block     600604803436316538353834
  8      switch0:7.183L32  SYMMETRIX  EMC     266.1 GB         Block     600604803436313237353738
  9      switch0:7.183L34  SYMMETRIX  EMC     658.3 GB         Block     600604803436313737333662

  Select an array LUN, rescan or halt {Index number, r, h}
• If array LUNs were previously assigned, for example, through Maintenance mode, they are either marked local or partner in the list of the available array LUNs, depending on whether the array LUNs were selected from the node on which you are installing ONTAP or from its HA partner.

  In this example, the array LUNs with index numbers 3 and 6 are marked local because they had been previously assigned from this particular node:

  **********************************************************************
  * No disks are owned by this node, but array LUNs are assigned.     *
  * You can use the following information to verify connectivity from *
  * HBAs to switch ports. If the connectivity of HBAs to switch ports *
  * does not match your expectations, configure your SAN and rescan.  *
  * You can rescan by entering 'r' at the prompt for selecting        *
  * array LUNs below.                                                 *
  **********************************************************************

  HBA  HBA WWPN          Switch port        Switch port WWPN
  ---  ----------------  -----------------  ----------------
  0e   500a098001baf8e0  vgbr6510s203:25    20190027f88948dd
  0f   500a098101baf8e0  vgci9710s202:1-17  2011547feeead680
  0g   500a098201baf8e0  vgbr6510s203:27    201b0027f88948dd
  0h   500a098301baf8e0  vgci9710s202:1-18  2012547feeead680

  No native disks were detected, but array LUNs were detected.
  You will need to select an array LUN to be used to create the root aggregate and root volume.

  The array LUNs visible to the system are listed below. Select one array LUN to be used to
  create the root aggregate and root volume. The root volume requires 350.0 GB of space.

  Warning: The contents of the array LUN you select will be erased by ONTAP prior to their use.

  Index  Array LUN Name          Model  Vendor  Size      Owner  Checksum  Serial Number
  -----  ----------------------  -----  ------  --------  -----  --------  ------------------------
  0      vgci9710s202:2-24.0L19  RAID5  DGC     217.3 GB         Block     6006016083402B0048E576D7
  1      vgbr6510s203:30.126L20  RAID5  DGC     217.3 GB         Block     6006016083402B0049E576D7
  2      vgci9710s202:2-24.0L21  RAID5  DGC     217.3 GB         Block     6006016083402B004AE576D7
  3      vgbr6510s203:30.126L22  RAID5  DGC     405.4 GB  local  Block     6006016083402B004BE576D7
  4      vgci9710s202:2-24.0L23  RAID5  DGC     217.3 GB         Block     6006016083402B004CE576D7
  5      vgbr6510s203:30.126L24  RAID5  DGC     217.3 GB         Block     6006016083402B004DE576D7
  6      vgbr6510s203:30.126L25  RAID5  DGC     423.5 GB  local  Block     6006016083402B003CF93694
  7      vgci9710s202:2-24.0L26  RAID5  DGC     423.5 GB         Block     6006016083402B003DF93694
3. Select the index number corresponding to the array LUN you want to assign as the root volume.
The array LUN must be of sufficient size to create the root volume.
The array LUN selected for root volume creation is marked local (root).
Example
In the following example, the array LUN with index number 3 is marked for root volume
creation:
The root volume will be created on switch0:5.183L33.

ONTAP requires that 11.0 GB of space be reserved for use in diagnostic and recovery
operations. Select one array LUN to be used as spare for diagnostic and recovery operations.

Index  Array LUN Name    Model      Vendor  Size      Owner         Checksum  Serial Number
-----  ----------------  ---------  ------  --------  ------------  --------  ------------------------
0      switch0:5.183L1   SYMMETRIX  EMC     266.1 GB                Block     600604803436313734316631
1      switch0:5.183L3   SYMMETRIX  EMC     266.1 GB                Block     600604803436316333353837
2      switch0:5.183L31  SYMMETRIX  EMC     266.1 GB                Block     600604803436313237643666
3      switch0:5.183L33  SYMMETRIX  EMC     658.3 GB  local (root)  Block     600604803436316263613066
4      switch0:7.183L0   SYMMETRIX  EMC     173.6 GB                Block     600604803436313261356235
5      switch0:7.183L2   SYMMETRIX  EMC     173.6 GB                Block     600604803436313438396431
6      switch0:7.183L4   SYMMETRIX  EMC     658.3 GB                Block     600604803436313161663031
7      switch0:7.183L30  SYMMETRIX  EMC     173.6 GB                Block     600604803436316538353834
8      switch0:7.183L32  SYMMETRIX  EMC     266.1 GB                Block     600604803436313237353738
9      switch0:7.183L34  SYMMETRIX  EMC     658.3 GB                Block     600604803436313737333662
4. Select the index number corresponding to the array LUN you want to assign for use in diagnostic
and recovery options.
The array LUN must be of sufficient size for use in diagnostic and recovery options. If required,
you can also select multiple array LUNs with a combined size greater than or equal to the
specified size. To select multiple entries, you must enter the comma-separated values of all of the
index numbers corresponding to the array LUNs you want to select for diagnostic and recovery
options.
Example
The following example shows a list of array LUNs selected for root volume creation and for
diagnostic and recovery options:
Here is a list of the selected array LUNs

Index  Array LUN Name    Model      Vendor  Size      Owner         Checksum  Serial Number
-----  ----------------  ---------  ------  --------  ------------  --------  ------------------------
2      switch0:5.183L31  SYMMETRIX  EMC     266.1 GB  local         Block     600604803436313237643666
3      switch0:5.183L33  SYMMETRIX  EMC     658.3 GB  local (root)  Block     600604803436316263613066
4      switch0:7.183L0   SYMMETRIX  EMC     173.6 GB  local         Block     600604803436313261356235
5      switch0:7.183L2   SYMMETRIX  EMC     173.6 GB  local         Block     600604803436313438396431

Do you want to continue (yes|no)?
Note: Selecting “no” clears the LUN selection.
5. Enter y when prompted by the system to continue with the installation process.
The root aggregate and the root volume are created and the rest of the installation process
continues.
6. Enter the required details to create the node management interface.
Example
The following example shows the node management interface screen with a message confirming
the creation of the node management interface:
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.
Enter the node management interface port [e0M]:
Enter the node management interface IP address: 192.0.2.66
Enter the node management interface netmask: 255.255.255.192
Enter the node management interface default gateway: 192.0.2.7
A node management interface on port e0M with IP address 192.0.2.66 has been created.
This node has its management address assigned and is ready for cluster setup.
After you finish
After configuring ONTAP on all of the nodes that you want to use with array LUNs, you should
complete the cluster setup process.
Software setup
Related information
FlexArray virtualization installation requirements and reference
Setting up the cluster
Setting up the cluster involves setting up each node, creating the cluster on the first node, and joining
any remaining nodes to the cluster.
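If the cluster has not yet been created, you can start the wizard by entering the cluster setup command at the console of the first node. The following is only a sketch of invoking the wizard; the prompts in your release might differ:

::> cluster setup
Welcome to the cluster setup wizard.
...
Do you want to create a new cluster or join an existing cluster? {create, join}: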
Related information
Software setup
Installing the license for using array LUNs in a MetroCluster configuration
You must install the V_StorageAttach license on each MetroCluster node that you want to use with
array LUNs. You cannot use array LUNs in an aggregate until the license is installed.
Before you begin
•
The cluster must be installed.
•
You must have the license key for the V_StorageAttach license.
About this task
You must use a separate license key for each node on which you want to install the V_StorageAttach
license.
Steps
1. Use the system license add command to install the V_StorageAttach license.
Repeat this step for each cluster node on which you want to install the license.
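Example

The following is a sketch of the license installation; the license key shown is a placeholder, not a valid key:

cluster_A::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA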
2. Use the system license show command to verify that the V_StorageAttach license is
installed on all required nodes in a cluster.
Example
The following sample output shows that the V_StorageAttach license is installed on the nodes of
cluster_A:
cluster_A::> system license show

Serial Number: nnnnnnnn
Owner: controller_A_1
Package           Type     Description               Expiration
----------------- -------  ------------------------  -------------------
V_StorageAttach   license  Virtual Attached Storage

Serial Number: llllllll
Owner: controller_A_2
Package           Type     Description               Expiration
----------------- -------  ------------------------  -------------------
V_StorageAttach   license  Virtual Attached Storage
Configuring FC-VI ports on a QLE2564 quad-port card on FAS8020 systems
If you are using the QLE2564 quad-port card on a FAS8020 system, you can enter Maintenance
mode to configure the 1a and 1b ports for FC-VI and initiator usage. This is not required on
MetroCluster systems received from the factory, in which the ports are set appropriately for your
configuration.
About this task
This task must be performed in Maintenance mode.
Steps
1. Disable the ports:
storage disable adapter 1a
storage disable adapter 1b
Example
*> storage disable adapter 1a
Jun 03 02:17:57 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1a.
Host adapter 1a disable succeeded
Jun 03 02:17:57 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1a is now offline.
*> storage disable adapter 1b
Jun 03 02:18:43 [controller_B_1:fci.adapter.offlining:info]:
Offlining Fibre Channel adapter 1b.
Host adapter 1b disable succeeded
Jun 03 02:18:43 [controller_B_1:fci.adapter.offline:info]: Fibre
Channel adapter 1b is now offline.
*>
2. Verify that the ports are disabled:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending  Admin
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  -------
...
1a       fc       initiator  -        -        offline
1b       fc       initiator  -        -        offline
1c       fc       initiator  -        -        online
1d       fc       initiator  -        -        online
3. Set the a and b ports to FC-VI mode:
ucadmin modify -adapter 1a -type fcvi
The command sets the mode on both ports in the port pair, 1a and 1b (even though only 1a is
specified in the command).
Example
*> ucadmin modify -t fcvi 1a
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has
changed to fcvi on adapter 1a. Reboot the controller for the changes
to take effect.
Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has
changed to fcvi on adapter 1b. Reboot the controller for the changes
to take effect.
4. Confirm that the change is pending:
ucadmin show
Example
*> ucadmin show
         Current  Current    Pending  Pending  Admin
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  -------
...
1a       fc       initiator  -        fcvi     offline
1b       fc       initiator  -        fcvi     offline
1c       fc       initiator  -        -        online
1d       fc       initiator  -        -        online
5. Shut down the controller, and then reboot into Maintenance mode.
6. Confirm the configuration change:
ucadmin show local
Example
                         Current  Current    Pending  Pending
Node            Adapter  Mode     Type       Mode     Type     Status
--------------  -------  -------  ---------  -------  -------  ------
...
controller_B_1  1a       fc       fcvi       -        -        online
controller_B_1  1b       fc       fcvi       -        -        online
controller_B_1  1c       fc       initiator  -        -        online
controller_B_1  1d       fc       initiator  -        -        online
6 entries were displayed.
Assigning ownership of array LUNs
Array LUNs must be owned by a node before they can be added to an aggregate to be used as
storage.
Before you begin
• Back-end configuration testing (testing of the connectivity and configuration of devices behind the ONTAP systems) must be completed.
• Array LUNs that you want to assign must be presented to the ONTAP systems.

About this task
You can assign ownership of array LUNs that have the following characteristics:
• They are unowned.
• They have no storage array configuration errors, such as the following:
  ◦ The array LUN is smaller than or larger than the size that ONTAP supports.
  ◦ The LDEV is mapped on only one port.
  ◦ The LDEV has inconsistent LUN IDs assigned to it.
  ◦ The LUN is available on only one path.
ONTAP issues an error message if you try to assign ownership of an array LUN with back-end
configuration errors that would interfere with the ONTAP system and the storage array operating
together. You must fix such errors before you can proceed with array LUN assignment.
ONTAP alerts you if you try to assign an array LUN with a redundancy error: for example, if all paths to the array LUN are connected to the same controller, or if there is only one path to the array LUN. You can fix a redundancy error before or after assigning ownership of the LUN.
Steps
1. Enter the following command to see the array LUNs that have not yet been assigned to a node:
storage disk show -container-type unassigned
2. Enter the following command to assign an array LUN to this node:
storage disk assign -disk arrayLUNname -owner nodename
If you want to fix a redundancy error after disk assignment instead of before, you must use the -force parameter with the storage disk assign command.
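Example

The following sketch reuses an array LUN name from the examples earlier in this section; substitute your own array LUN and node names:

controller_A_1::> storage disk show -container-type unassigned
controller_A_1::> storage disk assign -disk vgbr6510s203:30.126L20 -owner controller_A_1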
Related information
FlexArray virtualization installation requirements and reference
Peering the clusters
The clusters in the MetroCluster configuration must be in a peer relationship so that they can
communicate with each other and perform the data mirroring essential to MetroCluster disaster
recovery.
Before you begin
For systems received from the factory, the cluster peering is configured and you do not need to
manually peer the clusters.
Related concepts
Considerations when using dedicated ports on page 13
Considerations when sharing data ports on page 13
Related references
Prerequisites for cluster peering on page 12
Related information
Data protection using SnapMirror and SnapVault technology
Cluster peering express configuration
Configuring intercluster LIFs
You must create intercluster LIFs on ports used for communication between the MetroCluster partner
clusters. You can use dedicated ports or ports that also have data traffic.
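For example, an intercluster LIF on a dedicated port might be created as follows (a sketch; the LIF name, home node, home port, and addresses are placeholders):

cluster01::> network interface create -vserver cluster01 -lif intercluster01 -role intercluster -home-node cluster01-01 -home-port e0e -address 192.168.2.201 -netmask 255.255.255.0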
Creating the cluster peer relationship
You create the cluster peer relationship using a set of intercluster logical interfaces to make
information about one cluster available to the other cluster for use in cluster peering applications.
Before you begin
• Intercluster LIFs should be created in the IPspaces of both clusters that you want to peer.
• You should ensure that the intercluster LIFs of the clusters can route to each other.
• If there are different administrators for each cluster, the passphrase used to authenticate the cluster peer relationship should be agreed upon.
About this task
If you created intercluster LIFs in a nondefault IPspace, you need to designate the IPspace when you
create the cluster peer.
Steps
1. Create the cluster peer relationship on each cluster by using the cluster peer create
command.
The passphrase that you use is not displayed as you type it.
If you created a nondefault IPspace to designate intercluster connectivity, you use the -ipspace parameter to select that IPspace.
Example
In the following example, cluster01 is peered with a remote cluster named cluster02. Cluster01 is
a two-node cluster that has one intercluster LIF per node. The IP addresses of the intercluster
LIFs created in cluster01 are 192.168.2.201 and 192.168.2.202. Similarly, cluster02 is a two-node
cluster that has one intercluster LIF per node. The IP addresses of the intercluster LIFs created in
cluster02 are 192.168.2.203 and 192.168.2.204. These IP addresses are used to create the cluster
peer relationship.
cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204
Please type the passphrase:
Please type the passphrase again:

cluster02::> cluster peer create -peer-addrs 192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
If DNS is configured to resolve host names for the intercluster IP addresses, you can use host names in the -peer-addrs option. Intercluster IP addresses are unlikely to change frequently; however, using host names allows intercluster IP addresses to change without your having to modify the cluster peer relationship.
Example
In the following example, an IPspace called IP01A was created on cluster01 for intercluster
connectivity. The IP addresses used in the previous example are used in this example to create the
cluster peer relationship.
cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204 -ipspace IP01A
Please type the passphrase:
Please type the passphrase again:

cluster02::> cluster peer create -peer-addrs 192.168.2.201,192.168.2.202
Please type the passphrase:
Please type the passphrase again:
2. Display the cluster peer relationship by using the cluster peer show command with the -instance parameter.
Displaying the cluster peer relationship verifies that the relationship was established successfully.
Example
cluster01::> cluster peer show -instance
Peer Cluster Name: cluster02
Remote Intercluster Addresses: 192.168.2.203,192.168.2.204
Availability: Available
Remote Cluster Name: cluster02
Active IP Addresses: 192.168.2.203,192.168.2.204
Cluster Serial Number: 1-80-000013
3. Preview the health of the nodes in the peer cluster by using the cluster peer health show
command.
Previewing the health checks the connectivity and status of the nodes on the peer cluster.
Example
cluster01::> cluster peer health show
Node       cluster-Name                Node-Name
           Ping-Status                 RDB-Health Cluster-Health  Avail…
---------- --------------------------- ---------  --------------- --------
cluster01-01
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true

                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true

cluster01-02
           cluster02                   cluster02-01
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true

                                       cluster02-02
           Data: interface_reachable
           ICMP: interface_reachable   true       true            true
Mirroring the root aggregates
You must mirror the root aggregates in your MetroCluster configuration to ensure data protection.
Before you begin
You must have ensured that the SyncMirror requirements for the MetroCluster configuration with
array LUNs are satisfied.
Requirements for a MetroCluster configuration with array LUNs on page 191
About this task
You must repeat this task for each controller in the MetroCluster configuration.
Step
1. Use the storage aggregate mirror command to mirror the unmirrored root aggregate.
Example
The following command mirrors the root aggregate for controller_A_1:
controller_A_1::> storage aggregate mirror aggr0_controller_A_1
The root aggregate is mirrored with array LUNs from pool1.
Creating a mirrored data aggregate on each node
You must create a mirrored data aggregate on each node in the DR group.
Before you begin
• You should know what drives or array LUNs will be used in the new aggregate.
• If you have multiple drive types in your system (heterogeneous storage), you should understand how you can ensure that the correct drive type is selected.

About this task

• Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.
• Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.
• The Clustered Data ONTAP Data Protection Guide contains more information about mirroring aggregates.
Steps
1. Display a list of available spares:
storage disk show -spare -owner node_name
2. Create the aggregate by using the storage aggregate create -mirror true command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node,
use the -node parameter or specify drives that are owned by that node.
You can specify the following options:

• Aggregate's home node (that is, the node that owns the aggregate in normal operation)
• List of specific drives or array LUNs that are to be added to the aggregate
• Number of drives to include
• Checksum style to use for the aggregate
• Type of drives to use
• Size of drives to use
• Drive speed to use
• RAID type for RAID groups on the aggregate
• Maximum number of drives or array LUNs that can be included in a RAID group
• Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
Example
The following command creates a mirrored aggregate with 10 disks:
controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1 -mirror true
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
3. Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
Creating unmirrored data aggregates
You can optionally create unmirrored data aggregates for data that does not require the redundant
mirroring provided by MetroCluster configurations. Unmirrored aggregates are not protected in the
event of a site disaster or if a site wide maintenance in one site is performed.
Before you begin
• You should know what drives or array LUNs will be used in the new aggregate.
• If you have multiple drive types in your system (heterogeneous storage), you should understand how you can be sure that the correct drive type is selected.

About this task

Attention: Data in unmirrored aggregates in the MetroCluster configuration is not protected in the event of a MetroCluster switchover operation.

• Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.
• Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.
• The Clustered Data ONTAP Data Protection Guide contains more information about mirroring aggregates.
Steps
1. Display a list of available spares:
storage disk show -spare -owner node_name
2. Create the aggregate by using the storage aggregate create command.
If you are logged in to the cluster on the cluster management interface, you can create an aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use the -node parameter or specify drives that are owned by that node.
You can specify the following options:

• Aggregate's home node (that is, the node that owns the aggregate in normal operation)
• List of specific drives or array LUNs that are to be added to the aggregate
• Number of drives to include
• Checksum style to use for the aggregate
• Type of drives to use
• Size of drives to use
• Drive speed to use
• RAID type for RAID groups on the aggregate
• Maximum number of drives or array LUNs that can be included in a RAID group
• Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
Example
The following command creates an unmirrored aggregate with 10 disks:

controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
3. Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
Implementing the MetroCluster configuration
You must run the metrocluster configure command to start data protection in the MetroCluster
configuration.
Before you begin
• There must be at least two non-root mirrored data aggregates on each cluster, and additional data aggregates can be either mirrored or unmirrored.
  You can verify this with the storage aggregate show command.
• The ha-config state of the controllers and chassis must be mcc.
  This state is preconfigured on systems shipped from the factory.
About this task
You issue the metrocluster configure command once, on any of the nodes, to enable the
MetroCluster configuration. You do not need to issue the command on each of the sites or nodes, and
it does not matter which node or site you choose to issue the command on.
The metrocluster configure command automatically pairs the two nodes with the lowest
system IDs in each of the two clusters as disaster recovery (DR) partners. In a four-node
MetroCluster, there are two DR partner pairs. The second DR pair is created from the two nodes with
higher system IDs.
Steps
1. Enter the metrocluster configure command in the following format:
If your MetroCluster configuration has multiple data aggregates (this is the best practice), from any node's prompt, perform the MetroCluster configuration operation:

metrocluster configure node-name

If your MetroCluster configuration has a single mirrored data aggregate:

a. From any node's prompt, change to the advanced privilege level:
   set -privilege advanced
   You need to respond with y when prompted to continue into advanced mode and see the advanced mode prompt (*>).
b. Perform the MetroCluster configuration operation with the -allow-with-one-aggregate true parameter:
   metrocluster configure -allow-with-one-aggregate true node-name
c. Return to the admin privilege level:
   set -privilege admin
Example
The following command enables MetroCluster configuration on all nodes in the DR group that
contains controller_A_1:
controller_A_1::*> metrocluster configure -node-name controller_A_1
[Job 121] Job succeeded: Configure is successful.
2. Check the networking status on site A:
network port show
Example
The following example shows the network port usage on a four-node MetroCluster configuration:
cluster_A::> network port show
                                                           Speed (Mbps)
Node   Port      IPspace   Broadcast Domain Link   MTU    Admin/Oper
------ --------- --------- ---------------- ----- ------- ------------
controller_A_1
       e0a       Cluster   Cluster          up     9000   auto/1000
       e0b       Cluster   Cluster          up     9000   auto/1000
       e0c       Default   Default          up     1500   auto/1000
       e0d       Default   Default          up     1500   auto/1000
       e0e       Default   Default          up     1500   auto/1000
       e0f       Default   Default          up     1500   auto/1000
       e0g       Default   Default          up     1500   auto/1000
controller_A_2
       e0a       Cluster   Cluster          up     9000   auto/1000
       e0b       Cluster   Cluster          up     9000   auto/1000
       e0c       Default   Default          up     1500   auto/1000
       e0d       Default   Default          up     1500   auto/1000
       e0e       Default   Default          up     1500   auto/1000
       e0f       Default   Default          up     1500   auto/1000
       e0g       Default   Default          up     1500   auto/1000
14 entries were displayed.
3. Confirm the MetroCluster configuration from both sites.
a. Confirm the configuration from site A:
metrocluster show
Example
cluster_A::> metrocluster show
Cluster                   Configuration State  Mode
------------------------- -------------------- -----------
Local: cluster_A          configured           normal
Remote: cluster_B         configured           normal
b. Confirm the configuration from site B:
metrocluster show
Example
cluster_B::> metrocluster show
Cluster                   Configuration State  Mode
------------------------- -------------------- -----------
Local: cluster_B          configured           normal
Remote: cluster_A         configured           normal
Configuring the MetroCluster FC switches for health monitoring
In a fabric-attached MetroCluster configuration, you must perform some special configuration steps
to monitor the FC switches.
Steps
1. Issue the following command on each MetroCluster node to add a switch with an IP address:
storage switch add -address ipaddress
This command must be repeated on all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with IP address 10.10.10.10:
controller_A_1::> storage switch add -address 10.10.10.10
2. Verify that all switches are properly configured:
storage switch show
It might take up to 15 minutes to reflect all data due to the 15-minute polling interval.
Example
The following example shows the command given to verify that the MetroCluster FC switches are
configured:
controller_A_1::> storage switch show
Fabric           Switch Name     Vendor   Model        Switch WWN       Status
---------------- --------------- -------- ------------ ---------------- ------
1000000533a9e7a6 brcd6505-fcs40  Brocade  Brocade6505  1000000533a9e7a6 OK
1000000533a9e7a6 brcd6505-fcs42  Brocade  Brocade6505  1000000533d3660a OK
1000000533ed94d1 brcd6510-fcs44  Brocade  Brocade6510  1000000533eda031 OK
1000000533ed94d1 brcd6510-fcs45  Brocade  Brocade6510  1000000533ed94d1 OK
4 entries were displayed.

controller_A_1::>
If the worldwide name (WWN) of the switch is shown, the ONTAP health monitor can contact
and monitor the FC switch.
Related information
System administration
Checking the MetroCluster configuration
You can check that the components and relationships in the MetroCluster configuration are working correctly. You should do a check after initial configuration, after making any changes to the MetroCluster configuration, and before a negotiated (planned) switchover or switchback operation.
After you run the metrocluster check run command, you then display the results of the check
with various metrocluster check show commands.
About this task
If the metrocluster check run command is issued twice within a short time, on either or both
clusters, a conflict can occur and the command might not collect all data. Subsequent
metrocluster check show commands will not show the expected output.
Steps
1. Check the configuration:
metrocluster check run
Example
The command will run as a background job:
controller_A_1::> metrocluster check run
[Job 60] Job succeeded: Check is successful. Run "metrocluster check
show" command to view the results of this operation.

controller_A_1::> metrocluster check show
Last Checked On: 5/22/2015 12:23:28

Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
clusters            ok
5 entries were displayed.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy four-node MetroCluster configuration:
controller_A_1::> metrocluster check aggregate show
Last Checked On: 8/5/2014 00:42:58

Node                  Aggregate             Check                 Result
--------------------- --------------------- --------------------- ---------
controller_A_1        aggr0_controller_A_1_0
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
                      controller_A_1_aggr1
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
controller_A_2        aggr0_controller_A_2
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
                      controller_A_2_aggr1
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
controller_B_1        aggr0_controller_B_1_0
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
                      controller_B_1_aggr1
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
controller_B_2        aggr0_controller_B_2
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
                      controller_B_2_aggr1
                                            mirroring-status      ok
                                            disk-pool-allocation  ok
16 entries were displayed.
The following example shows the metrocluster check cluster show output for a healthy
four-node MetroCluster configuration. It indicates that the clusters are ready to perform a
negotiated switchover if necessary.
Last Checked On: 5/22/2015 12:23:28

Cluster               Check                                Result
--------------------- ------------------------------------ ---------------
cluster_A             negotiated-switchover-ready          ok
                      switchback-ready                     not-applicable
                      job-schedules                        ok
                      licenses                             ok
cluster_B             negotiated-switchover-ready          ok
                      switchback-ready                     not-applicable
                      job-schedules                        ok
                      licenses                             ok
8 entries were displayed.
Related information
Disk and aggregate management
Network and LIF management
Data protection using SnapMirror and SnapVault technology
Checking for MetroCluster configuration errors with Config Advisor
You can go to the NetApp Support Site and download the Config Advisor tool to check for common
configuration errors.
About this task
Config Advisor is a configuration validation and health check tool. You can deploy it at both secure
sites and non-secure sites for data collection and system analysis.
Note: Support for Config Advisor is limited, and available only online.
Steps
1. Go to the Config Advisor download page and download the tool.
NetApp Downloads: Config Advisor
2. Run Config Advisor, review the tool's output and follow the recommendations in the output to
address any issues discovered.
Verifying switchover, healing, and switchback
You should verify the switchover, healing, and switchback operations of the MetroCluster
configuration.
Step
1. Use the procedures for negotiated switchover, healing, and switchback that are mentioned in the
MetroCluster Management and Disaster Recovery Guide.
MetroCluster management and disaster recovery
Installing the MetroCluster Tiebreaker software
You can download and install Tiebreaker software to monitor the two clusters and the connectivity
status between them from a third site. Doing so enables each partner in a cluster to distinguish
between an ISL failure (when inter-site links are down) and a site failure.
Before you begin
You must have a Linux host available that has network connectivity to both clusters in the
MetroCluster configuration.
Steps
1. Go to the MetroCluster Tiebreaker Software Download page.
NetApp Downloads: MetroCluster Tiebreaker for Linux
2. Follow the directions to download the Tiebreaker software and documentation.
Protecting configuration backup files
You can provide additional protection for the cluster configuration backup files by specifying a
remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in addition
to the default locations in the local cluster.
Step
1. Set the URL of the remote destination for the configuration backup files:
system configuration backup settings modify URL-of-destination
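Example

The following sketch sets an FTP destination; the URL and user name are placeholders:

cluster_A::> system configuration backup settings modify -destination ftp://backup.example.com/configs -username backup_user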
Related information
System administration
Implementing a MetroCluster configuration with both disks
and array LUNs
To implement a MetroCluster configuration with native disks and array LUNs, you must ensure that
the ONTAP systems used in the configuration can attach to storage arrays.
A MetroCluster configuration with disks and array LUNs can have either two or four nodes.
Although the four-node MetroCluster configuration must be fabric-attached, the two-node
configuration can either be stretch or fabric-attached.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and ONTAP version to refine your search.
You can click Show Results to display the list of supported configurations that match the criteria.
Related concepts
Example of a four-node MetroCluster configuration with disks and array LUNs on page 238
Related references
Example of a two-node fabric-attached MetroCluster configuration with disks and array LUNs on
page 237
Considerations when implementing a MetroCluster configuration with disks
and array LUNs
When planning your MetroCluster configuration for use with disks and array LUNs, you must
consider various factors, such as the order of setting up access to storage, root aggregate location, and
the usage of FC initiator ports, switches, and FC-to-SAS bridges.
Consider the following guidelines when planning your configuration:

• Order of setting up access to the storage: You can set up access to either disks or array LUNs first. You must complete all setup for that type of storage and verify that it is set up correctly before setting up the other type of storage.

• Location of the root aggregate:
  ◦ If you are setting up a new MetroCluster deployment with both disks and array LUNs, you must create the root aggregate on native disks. When doing this, ensure that at least one disk shelf (with 24 disk drives) is set up at each of the sites.
  ◦ If you are adding native disks to an existing MetroCluster configuration that uses array LUNs, the root aggregate can remain on an array LUN.

• Using switches and FC-to-SAS bridges: FC-to-SAS bridges are required in four-node configurations and two-node fabric-attached configurations to connect the Data ONTAP systems to the disk shelves through the switches. You must use the same switches to connect to the storage arrays and the FC-to-SAS bridges.

• Using FC initiator ports: The initiator ports used to connect to an FC-to-SAS bridge must be different from the ports used to connect to the switches, which connect to the storage arrays. A minimum of eight initiator ports is required to connect a Data ONTAP system to both disks and array LUNs.
Related concepts
Example of switch zoning in a four-node MetroCluster configuration with array LUNs on page 215
Example of switch zoning in an eight-node MetroCluster configuration with array LUNs on page 217
Related tasks
Configuring the Cisco or Brocade FC switches manually on page 52
Installing FC-to-SAS bridges and SAS disk shelves on page 122
Related information
NetApp Hardware Universe
Example of a two-node fabric-attached MetroCluster configuration with
disks and array LUNs
For setting up a two-node fabric-attached MetroCluster configuration with native disks and array
LUNs, you must use FC-to-SAS bridges to connect the ONTAP systems with the disk shelves
through the FC switches. You can connect array LUNs through the FC switches to the ONTAP
systems.
The following illustrations represent examples of a two-node fabric-attached MetroCluster
configuration with disks and array LUNs. They both represent the same MetroCluster configuration;
the representations for disks and array LUNs are separated only for simplification.
In the following illustration showing the connectivity between ONTAP systems and disks, the HBA
ports 1a through 1d are used for connectivity with disks through the FC-to-SAS bridges:
In the following illustration showing the connectivity between ONTAP systems and array LUNs, the
HBA ports 0a through 0d are used for connectivity with array LUNs because ports 1a through 1d are
used for connectivity with disks:
Example of a four-node MetroCluster configuration with disks and array
LUNs
For setting up a four-node MetroCluster configuration with native disks and array LUNs, you must
use FC-to-SAS bridges to connect the ONTAP systems with the disk shelves through the FC
switches. You can connect array LUNs through the FC switches to the ONTAP systems.
A minimum of eight initiator ports is required for an ONTAP system to connect to both native disks
and array LUNs.
The following illustrations represent examples of a MetroCluster configuration with disks and array
LUNs. They both represent the same MetroCluster configuration; the representations for disks and
array LUNs are separated only for simplification.
In the following illustration that shows the connectivity between ONTAP systems and disks, the HBA
ports 1a through 1d are used for connectivity with disks through the FC-to-SAS bridges:
In the following illustration that shows the connectivity between ONTAP systems and array LUNs,
the HBA ports 0a through 0d are used for connectivity with array LUNs because ports 1a through 1d
are used for connectivity with disks:
Using the OnCommand management tools for
further configuration and monitoring
The OnCommand management tools can be used for GUI management of the clusters and
monitoring of the configuration.
Each node has OnCommand System Manager pre-installed. To load System Manager, enter the
cluster management LIF address as the URL in a web browser that has connectivity to the node.
You can also use OnCommand Unified Manager and OnCommand Performance Manager to monitor
the MetroCluster configuration.
Related information
NetApp Documentation: OnCommand Unified Manager Core Package (current releases)
NetApp Documentation: OnCommand System Manager (current releases)
Synchronizing the system time using NTP
Each cluster needs its own Network Time Protocol (NTP) server to synchronize the time between the
nodes and their clients. You can use the Edit DateTime dialog box in System Manager to configure
the NTP server.
Before you begin
You must have downloaded and installed System Manager. System Manager is available from the
NetApp Support Site.
About this task
• You cannot modify the time zone settings for a failed node or the partner node after a takeover occurs.
• Each cluster in the MetroCluster configuration should have its own separate NTP server or servers used by the nodes, FC switches, and FC-to-SAS bridges at that MetroCluster site.
  If you are using the MetroCluster Tiebreaker software, it should also have its own separate NTP server.
Steps
1. From the home page, double-click the appropriate storage system.
2. Expand the Cluster hierarchy in the left navigation pane.
3. In the navigation pane, click Configuration > System Tools > DateTime.
4. Click Edit.
5. Select the time zone.
6. Specify the IP addresses of the time servers, and then click Add.
You must add an NTP server to the list of time servers. The domain controller can be an
authoritative server.
7. Click OK.
8. Verify the changes you made to the date and time settings in the Date and Time window.
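Alternatively, you can configure the NTP server from the CLI (a sketch; the server address is a placeholder):

cluster_A::> cluster time-service ntp server create -server ntp1.example.com
cluster_A::> cluster time-service ntp server show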
Requirements and limitations when using ONTAP
in a MetroCluster configuration
When using ONTAP in a MetroCluster configuration, you should be aware of certain requirements and limitations for licensing, peering to clusters outside the MetroCluster configuration, performing volume operations, NVFAIL operations, and other ONTAP operations.
•
Both sites should be licensed for the same site-licensed features.
•
All nodes should be licensed for the same node-locked features.
Cluster peering from the MetroCluster site to a third cluster
Because the peering configuration is not replicated, if you peer one of the clusters in the MetroCluster configuration to a third cluster outside of that configuration, you must also configure the peering on the partner MetroCluster cluster so that peering can be maintained if a switchover occurs.
The non-MetroCluster cluster must be running ONTAP 8.3 or later; otherwise, peering is lost if a switchover occurs.
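As a sketch of this requirement, you would create the peer relationship with the third cluster from both MetroCluster clusters; the intercluster LIF addresses shown here are placeholders:
cluster_A::> cluster peer create -peer-addrs 192.0.2.50,192.0.2.51
cluster_B::> cluster peer create -peer-addrs 192.0.2.50,192.0.2.51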
Volume creation on a root aggregate
The system does not allow the creation of new volumes on the root aggregate (an aggregate with an
HA policy of CFO) of a node in a MetroCluster configuration.
Because of this restriction, root aggregates cannot be added to an SVM by using the vserver add-aggregates command.
Networking and LIF creation guidelines for MetroCluster configurations
You should be aware of how LIFs are created and replicated in the MetroCluster configuration. You
must also know about the requirement for consistency so that you can make proper decisions when
configuring your network.
IPspace configuration
IPspace names of the two sites must match.
IPspace objects must be manually replicated to the partner cluster. Any SVMs that are created and
assigned to an IPspace before the IPspace is replicated will not be replicated to the partner cluster.
IPv6 configuration
If IPv6 is configured on one site, it must be configured on the other site.
LIF creation
You can confirm the successful creation of a LIF in a MetroCluster configuration by running the metrocluster check lif show command. If there are issues, you can use the metrocluster check lif repair-placement command to repair them.
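For example, you might verify and repair LIF placement with commands along these lines; the SVM name is a placeholder:
cluster_A::> metrocluster check lif show
cluster_A::> metrocluster check lif repair-placement -vserver vs1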
Duplicate LIFs
You should not create duplicate LIFs (multiple LIFs with the same IP address) in an IPspace.
Replication of LIFs to the partner cluster
When you create a LIF on a cluster in a MetroCluster configuration, the LIF is replicated on the
partner cluster. The system must meet the following conditions to place the replicated LIF on the
partner cluster:
1. DR partner availability
The system attempts to place the replicated LIF on the DR partner of the node on which it was created.
2. Connectivity
• For IP or iSCSI LIFs, the system places the replicated LIF on a reachable subnet.
• For FCP LIFs, the system attempts to place the replicated LIF on a reachable FC fabric.
3. Port attributes
The system attempts to place the replicated LIF on a port with the desired VLAN, adapter type, and speed attributes.
An EMS message is displayed if LIF replication fails.
You can also check whether LIF replication was successful by using the metrocluster check
lif show command. Failures can be corrected by running the metrocluster check lif
repair-placement command for any LIF that fails to find a correct port. You should resolve any
LIF failures as soon as possible to ensure LIF availability during a MetroCluster switchover
operation.
Note: Even if the source Storage Virtual Machine (SVM) is down, LIF placement might proceed normally if there is a LIF belonging to a different SVM on a port with the same IPspace and network at the destination.
Placement of replicated LIFs when the DR partner node is down
When an iSCSI or FCP LIF is created on a node whose DR partner has been taken over, the
replicated LIF is placed on the DR auxiliary partner node. After a subsequent giveback, the LIFs are
not automatically moved to the DR partner. This could lead to LIFs being concentrated on a single
node in the partner cluster. In the event of a MetroCluster switchover operation, subsequent attempts
to map LUNs belonging to the SVM will fail.
You should run the metrocluster check lif show command after a takeover or giveback operation to ensure correct LIF placement. If errors exist, you can run the metrocluster check lif repair-placement command to resolve them.
LIF placement errors
Starting with Data ONTAP 8.3.1, LIF placement errors that are displayed by the metrocluster
check lif show command are retained after a switchover. If the network interface modify,
network interface rename, or network interface delete command is issued for a LIF
with a placement error, the error is removed and does not appear in the output of the metrocluster
check lif show command.
Related information
Network and LIF management
Volume or FlexClone command VLDB errors
If a volume or FlexClone volume command (such as volume create or volume delete) fails and the
error message indicates that the failure is due to a VLDB error, you should manually retry the job.
If the retry fails with an error that indicates a duplicate volume name, there is a stale entry in the
internal volume database. Contact technical support for assistance in removing the stale entry.
Removing the entry helps ensure that configuration inconsistencies do not develop between the two
MetroCluster clusters.
Output for storage disk show and storage shelf show commands in a two-node MetroCluster configuration
In a two-node MetroCluster configuration, the is-local-attach field of the storage disk
show and storage shelf show commands shows all disks and storage shelves as local, regardless
of the node to which they are attached.
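For example, a command along these lines displays the field for all disks (a sketch, assuming the field name can be passed to the -fields parameter):
cluster_A::> storage disk show -fields is-local-attach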
Output for the storage aggregate plex show command after a MetroCluster switchover is indeterminate
When you run the storage aggregate plex show command after a MetroCluster switchover, the status of plex0 of the switched-over root aggregate is indeterminate and is displayed as failed. During this time, the switched-over root aggregate is not updated. The actual status of this plex can be determined only after running the metrocluster heal -phase aggregates command.
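For example, after the heal operation you could recheck the plex status with commands along these lines; the aggregate name is a placeholder:
cluster_A::> metrocluster heal -phase aggregates
cluster_A::> storage aggregate plex show -aggregate aggr0_node_A_1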
Modifying volumes to set NVFAIL in case of switchover
You can modify a volume so that, in the event of a MetroCluster switchover, the NVFAIL flag is set on
the volume. The NVFAIL flag causes the volume to be fenced off from any modification. This is
required for volumes that need to be handled as if committed writes to the volume were lost after the
switchover.
Step
1. Enable MetroCluster to trigger NVFAIL on switchover by setting the -dr-force-nvfail parameter of the volume modify command to on:
vol modify -vserver vserver-name -volume volume-name -dr-force-nvfail on
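You can then confirm the setting with a command along these lines; the SVM and volume names are placeholders:
cluster_A::> volume show -vserver vs1 -volume dbvol -fields dr-force-nvfail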
Monitoring and protecting database validity by using NVFAIL
The -nvfail parameter of the volume modify command enables Data ONTAP to detect
nonvolatile RAM (NVRAM) inconsistencies when the system is booting or after a switchover
operation. It also warns you and protects the system against data access and modification until the
volume can be manually recovered.
If Data ONTAP detects any problems, database or file system instances stop responding or shut
down. Data ONTAP then sends error messages to the console to alert you to check the state of the
database or file system. You can enable NVFAIL to warn database administrators of NVRAM
inconsistencies among clustered nodes that can compromise database validity.
After a system crash or switchover operation, NFS clients cannot access data from any of the nodes
until the NVFAIL state is cleared. CIFS clients are unaffected.
How NVFAIL protects database files
The NVFAIL state is set in two cases: when ONTAP detects NVRAM errors during boot, or when a MetroCluster switchover operation occurs. If no errors are detected at startup, file service is started normally. However, if NVRAM errors are detected, or if the force-fail option was set and a switchover then occurred, ONTAP stops database instances from responding.
When you enable the NVFAIL option, one of the following processes takes place during a bootup:
If ONTAP detects no NVRAM errors:
• File service starts normally.
If ONTAP detects NVRAM errors:
• ONTAP returns a stale file handle (ESTALE) error to NFS clients trying to access the database, causing the application to stop responding, crash, or shut down. ONTAP then sends an error message to the system console and log file.
• When the application restarts, files are available to CIFS clients even if you have not verified that they are valid. For NFS clients, files remain inaccessible until you reset the in-nvfailed-state option on the affected volume.
If ONTAP detects NVRAM errors on a volume that contains LUNs:
• LUNs in that volume are brought offline. The in-nvfailed-state option on the volume must be cleared, and the NVFAIL attribute on the LUNs must be cleared by bringing each LUN in the affected volume online. You can perform the steps to check the integrity of the LUNs and recover the LUNs from a Snapshot copy or backup as necessary. After all of the LUNs in the volume are recovered, the in-nvfailed-state option on the affected volume is cleared.
Commands for monitoring data loss events
If you enable the NVFAIL option, you receive notification when a system crash caused by NVRAM
inconsistencies or a MetroCluster switchover occurs.
By default, the NVFAIL parameter is not enabled.
• To create a new volume with NVFAIL enabled, use the volume create command with the -nvfail option set to on.
• To enable NVFAIL on an existing volume, use the volume modify command with the -nvfail option set to on.
• To display whether NVFAIL is currently enabled for a specified volume, use the volume show command with the -fields parameter set to nvfail.
See the man page for each command for more information.
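For example, the following sketch creates a volume with NVFAIL enabled and then displays the attribute; the SVM, volume, aggregate, and size values are placeholders:
cluster_A::> volume create -vserver vs1 -volume dbvol -aggregate aggr1 -size 100g -nvfail on
cluster_A::> volume show -vserver vs1 -volume dbvol -fields nvfail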
Accessing volumes in NVFAIL state after a switchover
After a switchover, you must clear the NVFAIL state by resetting the -in-nvfailed-state parameter of the volume modify command to remove the restriction that prevents clients from accessing data.
Before you begin
The database or file system must not be running or trying to access the affected volume.
About this task
Setting the -in-nvfailed-state parameter requires advanced privilege level.
Step
1. Recover the volume by using the volume modify command with the -in-nvfailed-state
parameter set to false.
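The following is a minimal sketch of this step; the SVM and volume names are placeholders:
cluster_A::> set -privilege advanced
cluster_A::*> volume modify -vserver vs1 -volume dbvol -in-nvfailed-state false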
After you finish
For instructions about examining database file validity, see the documentation for your specific
database software.
If your database uses LUNs, review the steps to make the LUNs accessible to the host after an
NVRAM failure.
Recovering LUNs in NVFAIL states after switchover
After a switchover, the host no longer has access to data on the LUNs that are in NVFAIL states. You
must perform a number of actions before the database has access to the LUNs.
Before you begin
The database must not be running.
Steps
1. Clear the NVFAIL state on the affected volume that hosts the LUNs by resetting the -in-nvfailed-state parameter of the volume modify command.
2. Bring the affected LUNs online.
3. Examine the LUNs for any data inconsistencies and resolve them.
This might involve host-based recovery or recovery done on the storage controller using
SnapRestore.
4. Bring the database application online after recovering the LUNs.
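A minimal sketch of steps 1 and 2, assuming placeholder SVM, volume, and LUN names:
cluster_A::> set -privilege advanced
cluster_A::*> volume modify -vserver vs1 -volume dbvol -in-nvfailed-state false
cluster_A::*> lun online -vserver vs1 -path /vol/dbvol/lun1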
Considerations for using TDM/xWDM equipment with fabric-attached MetroCluster configurations
The Interoperability Matrix provides some notes about the requirements that TDM/xWDM
equipment must meet to work with a fabric-attached MetroCluster configuration. These notes also
include information about various configurations, which can help you to determine when to use in-order delivery (IOD) of frames or out-of-order delivery (OOD) of frames.
An example of such requirements is that the TDM/xWDM equipment must support the link
aggregation (trunking) feature with routing policies. The order of delivery (IOD or OOD) of frames
is maintained within a switch, and is determined by the routing policy that is in effect.
The following routing policies apply to configurations containing Brocade switches and Cisco switches:
Brocade
• To configure MetroCluster configurations for IOD: AptPolicy must be set to 1, DLS must be set to off, and IOD must be set to on.
• To configure MetroCluster configurations for OOD: AptPolicy must be set to 3, DLS must be set to on, and IOD must be set to off.
Cisco
• To configure MetroCluster configurations for IOD, set the following policies for the FCVI-designated VSAN: the load balancing policy must be srcid and dstid, and IOD must be set to on. For the storage-designated VSAN: the load balancing policy must be srcid, dstid, and oxid, and the VSAN must not have the in-order-guarantee option set.
• An OOD-only Cisco configuration is not applicable.
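On Brocade switches, for example, these IOD settings correspond to FOS commands along the following lines (a sketch; confirm the commands against your switch documentation before use):
switch_A_1:admin> iodset
switch_A_1:admin> dlsreset
switch_A_1:admin> aptpolicy 1
Here iodset turns IOD on, dlsreset turns DLS off, and aptpolicy 1 selects the port-based routing policy.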
When to use in-order delivery
It is best to use IOD if it is supported by the links. The following configurations support IOD:
• A single ISL, where the ISL and the link (and the link equipment, such as TDM/xWDM, if used) support IOD.
• A single trunk, where the ISLs and the links (and the link equipment, such as TDM/xWDM, if used) support IOD.
When to use out-of-order delivery
You can use OOD for all configurations that do not support IOD.
Glossary of MetroCluster terms
aggregate
A grouping of physical storage resources (disks or array LUNs) that provides storage to
volumes associated with the aggregate. Aggregates provide the ability to control the RAID
configuration for all associated volumes.
data SVM
Formerly known as data Vserver. In clustered Data ONTAP, a Storage Virtual Machine
(SVM) that facilitates data access from the cluster; the hardware and storage resources of
the cluster are dynamically shared by data SVMs within a cluster.
admin SVM
Formerly known as admin Vserver. In clustered Data ONTAP, a Storage Virtual Machine
(SVM) that has overall administrative access to all objects in the cluster, including all
objects owned by other SVMs, but does not provide data access to clients or hosts.
inter-switch link (ISL)
A connection between two switches using the E-port.
destination
The storage to which source data is backed up, mirrored, or migrated.
disaster recovery (DR) group
The four nodes in a MetroCluster configuration that synchronously replicate each other's
configuration and data.
disaster recovery (DR) partner
A node's partner at the remote MetroCluster site. The node mirrors its DR partner's
NVRAM or NVMEM partition.
disaster recovery auxiliary (DR auxiliary) partner
The HA partner of a node's DR partner. The DR auxiliary partner mirrors a node's
NVRAM or NVMEM partition in the event of an HA takeover after a MetroCluster
switchover operation.
HA pair
• In Data ONTAP 8.x, a pair of nodes whose controllers are configured to serve data for each other if one of the two nodes stops functioning.
Depending on the system model, both controllers can be in a single chassis, or one controller can be in one chassis and the other controller can be in a separate chassis.
• In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an active/active configuration.
HA partner
A node's partner within the local HA pair. The node mirrors its HA partner's NVRAM or
NVMEM cache.
high availability (HA)
In Data ONTAP 8.x, the recovery capability provided by a pair of nodes (storage systems),
called an HA pair, that are configured to serve data for each other if one of the two nodes
stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is
referred to as an active/active configuration.
healing
The two required MetroCluster operations that prepare the storage located at the DR site
for switchback. The first heal operation resynchronizes the mirrored plexes. The second
heal operation returns ownership of root aggregates to the DR nodes.
LIF (logical interface)
A logical network interface, representing a network access point to a node. LIFs currently
correspond to IP addresses, but could be implemented by any interconnect. A LIF is
generally bound to a physical network port; that is, an Ethernet port. LIFs can fail over to
other physical ports (potentially on other nodes) based on policies interpreted by the LIF
manager.
NVRAM
Nonvolatile random-access memory.
NVRAM cache
Nonvolatile RAM in a storage system, used for logging incoming write data and NFS
requests. Improves system performance and prevents loss of data in case of a storage
system or power failure.
NVRAM mirror
A synchronously updated copy of the storage system NVRAM (nonvolatile random-access memory) contents, kept on the partner storage system.
node
• In Data ONTAP, one of the systems in a cluster or an HA pair.
To distinguish between the two nodes in an HA pair, one node is sometimes called the local node and the other node is sometimes called the partner node or remote node.
• In Protection Manager and Provisioning Manager, the set of storage containers
(storage systems, aggregates, volumes, or qtrees) that are assigned to a dataset and
designated either primary data (primary node), secondary data (secondary node), or
tertiary data (tertiary node).
A dataset node refers to any of the nodes configured for a dataset.
A backup node refers to either a secondary or tertiary node that is the destination of a
backup or mirror operation.
A disaster recovery node refers to the dataset node that is the destination of a failover
operation.
remote storage
The storage that is accessible to the local node, but is at the location of the remote node.
root volume
A special volume on each Data ONTAP system. The root volume contains system files
and configuration information, and can also contain data. It is required for the system to
be able to boot and to function properly. Core dump files, which are important for
troubleshooting, are written to the root volume if there is enough space.
switchback
The MetroCluster operation that restores service back to one of the MetroCluster sites.
switchover
The MetroCluster operation that transfers service from one of the MetroCluster sites.
• A negotiated switchover is planned in advance and cleanly shuts down components of the target MetroCluster site.
• A forced switchover immediately transfers service; the shutdown of the target site might not be clean.
Copyright information
Copyright © 1994–2017 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
Active IQ, AltaVault, Arch Design, ASUP, AutoSupport, Campaign Express, Clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool,
FlexArray, FlexCache, FlexClone, FlexGroup, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy,
Fueled by SolidFire, GetSuccessful, Helix Design, LockVault, Manage ONTAP, MetroCluster,
MultiStore, NetApp, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC,
SANscreen, SANshare, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator,
SnapCenter, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror,
SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, SolidFire, SolidFire
Helix, StorageGRID, SyncMirror, Tech OnTap, Unbound Cloud, and WAFL and other names are
trademarks or registered trademarks of NetApp, Inc., in the United States, and/or other countries. All
other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such. A current list of NetApp trademarks is available on the web.
http://www.netapp.com/us/legal/netapptmlist.aspx
How to send comments about documentation and receive update notifications
You can help us to improve the quality of our documentation by sending us your feedback. You can
receive automatic notification when production-level (GA/FCS) documentation is initially released or
important changes are made to existing production-level documents.
If you have suggestions for improving this document, send us your comments by email.
doccomments@netapp.com
To help us direct your comments to the correct division, include in the subject line the product name,
version, and operating system.
If you want to be notified automatically when production-level documentation is released or
important changes are made to existing production-level documents, follow Twitter account
@NetAppDoc.
You can also contact us in the following ways:
• NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.
• Telephone: +1 (408) 822-6000
• Fax: +1 (408) 822-4501
• Support telephone: +1 (888) 463-8277
Index
4-node MetroCluster configurations
cabling the HA interconnect 49, 207
6500N bridges
FibreBridge, cabling with disk shelves using IOM6
or IOM3 modules 130
6500N FibreBridge bridges
configuring for health monitoring 183
7-Mode fabric MetroClusters
sharing FC switch fabrics 21
7-Mode MetroCluster configurations
differences from clustered MetroCluster
configurations 10
7500N bridges
configuring zoning on Brocade FC switches using 84
FibreBridge, cabling with disk shelves using IOM6
or IOM3 modules 129
7500N bridges, FibreBridge
examples of zoning in eight-node or four-node
MetroCluster configurations 66, 69, 75
7500N FibreBridge bridges
configuring for health monitoring 183
8-node MetroCluster configurations
cabling the HA interconnect 49, 207
9250i FC switches, Cisco
configuring FCIP ports for a single ISL 105
A
about this guide
deciding whether to use the Fabric-attached
MetroCluster Installation and Configuration Guide 7
addresses
configuring NTP server, to synchronize system time
240
gathering for site A 149
gathering for site B 151
AFF systems
assigning disk ownership in MetroCluster
configurations 161
aggregates
mirrored data, creating on each node of a
MetroCluster configuration 176, 177, 228, 229
aggregates, root
mirroring 175, 228
architectures
parts of fabric MetroCluster 24
array LUNs
cabling to FC switches in a MetroCluster
configuration 208
cabling to FC switches in an eight-node
MetroCluster configuration 212
configuration requirements for MetroCluster
configurations 191
configuring ONTAP on systems that use 219
example of cabling to FC switches in two-node
MetroCluster configuration 209
in a MetroCluster configuration 14
installing and cabling MetroCluster components for
192
installing license for using 222
introduction to implementing a MetroCluster
configuration with disks and 236
introduction to planning and installing with a
MetroCluster configuration 190
planning for a MetroCluster configuration with 190
supported MetroCluster configurations with 190
switch zoning requirements in MetroCluster
configurations 213
array LUNs and disks
considerations when implementing MetroCluster
configuration with 236
example four-node MetroCluster configuration 238
example two-node fabric-attached MetroCluster
configuration 237
array LUNs in MetroCluster configurations
cabling the ISLs in 205
arrays, storage
preparing for use with ONTAP systems 193
assigning
disk ownership in MetroCluster configurations with
AFF systems 161
disk ownership in MetroCluster configurations with
non-AFF systems 160
assignments, port
for FC switches in configuration with array LUNs
194
for FC switches in eight-node configuration 38
ATTO FibreBridge bridges
See FC-to-SAS bridges
authentication policies
setting for Brocade switches in MetroCluster
configurations 87
automatic switchover
supported MetroCluster configurations 11
autosupport configuration
for site A 149
for site B 151
B
bridges
FibreBridge 6500N, cabling with disk shelves using
IOM6 or IOM3 modules 130
FibreBridge 7500N, cabling with disk shelves using
IOM6 or IOM3 modules 129
installing FC-to-SAS 122
bridges, FC-to-SAS
cabling and verifying connectivity 131
configuration worksheets 34
considerations for installing SAS shelves and 125
disabling unused ports 137
bridges, FibreBridge 7500N
configuring zoning on Brocade FC switches using 84
Brocade 6510s
sharing during transition 139
Brocade FC switch configuration
configuring ISL ports 64
setting to default values 54
Brocade FC switches
configuring 52
configuring basic settings 56
configuring E-ports (ISL ports) on 59
configuring zoning with FibreBridge 7500N bridges
84
introduction to manually configuring 52
port assignments in configuration with array LUNs
194
port assignments in eight-node configuration 38
requirements for configuring 52
Brocade switches
enabling ISL encryption 89
license requirements 53, 140
setting authentication policy in MetroCluster
configurations 87
setting ISL encryption 86
setting payload for 87
zoning examples for a FibreBridge 7500N bridge in
eight-node or four-node MetroCluster configurations
66, 69, 75
C
cabling
data ports 50, 208
FC-to-SAS bridge FC ports 131
HA interconnects in MetroCluster configurations 49,
207
management ports 50, 208
MetroCluster components, array LUNs 192
cabling paths
between ONTAP systems and specific array LUN
208
cabling the new MetroCluster controllers
cabling to specific ports on existing FC switches 141
cabling)
Inter-Switch Links in MetroCluster configurations
with array LUNs 205
Inter-Switch Links in the MetroCluster configuration
38
cascade configurations
networking requirements for cluster peering 12
chassis
cabling HA interconnect between, in MetroCluster
configurations 49, 207
verifying and configuring HA state in a MetroCluster
configuration 164, 218
checking
MetroCluster configuration operations 184, 233
checklists, hardware setup
for factory configured clusters 16
checklists, software setup
for factory-configured MetroCluster configurations
18
Cisco 9148 switches
manually enabling ports 94
Cisco 9250i FC switches
configuring FCIP ports for a dual ISL 111
configuring FCIP ports for a single ISL 105
Cisco FC switch configuration
calculating buffer-to-buffer credits 102
community string 92
configuring ISL ports 96, 102
configuring port channels 102
setting basic settings 92
setting to default values 91
Cisco FC switches
configuration requirements 90
configuring with configuration files 52
configuring zoning on 119
creating and configuring VSANs on 98
introduction to manually configuring 52
ISL requirements 90
licensing requirements 90
port assignments in configuration with array LUNs
194
port assignments in eight-node configuration 38
storage connection requirements 90
Cisco switch configuration
saving 122
Cisco switches
manually enabling ports in MDS 9148 94
port licensing 92
zoning examples for a FibreBridge 7500N bridge in
eight-node or four-node MetroCluster configurations
66, 69, 75
cluster configurations
similarities and differences between MetroCluster
configurations and 154
cluster interconnects
cabling in MetroCluster configurations 48, 206
cluster peer relationships
requirements for 12
cluster peering
considerations when peering a cluster from the
MetroCluster site to an outside cluster 242
illustration 30
introduction to MetroCluster configuration 167, 225
MetroCluster 30
port usage for site A 149
port usage for site B 151
cluster peering connections
cabling in MetroCluster configurations 49, 207
cluster peers
creating relationships between 174, 226
cluster setup
introduction to the process for 222
Cluster Setup wizard
using to set up a cluster in a two-node MetroCluster
configuration 165
clustered MetroCluster configurations
differences between the types 11
differences from 7-Mode MetroCluster
configurations 10
clusters
example naming conventions in the documentation
31
introduction to manually peering 168
naming requirements for cluster peering 12
setting up in a two-node MetroCluster configuration
165
clusters, factory configured
hardware setup checklist 16
commands
using metrocluster configure to start data protection
in MetroCluster configurations 179, 230
volume 245
commands, storage aggregate plex show
output after a MetroCluster switchover is
indeterminate 244
comments
how to send feedback about documentation 252
community string
FC switches 92
Health Monitors 92
components
preconfigured when new 16
required and supported hardware and software 31
components, MetroCluster
cabling the HA interconnect 49, 207
introduction to configuring for health monitoring
182
racking disk shelves 36, 140
racking FC switches 36, 140
racking storage controllers 36, 140
Config Advisor
checking for common configuration errors 186, 235
checking for MetroCluster configuration errors 186,
235
configuration
of MetroCluster systems received from the factory
16
configuration backup files
setting remote destinations for preservation 188, 236
configuration files
configuring Brocade FC switches 51
configuring Cisco FC switches with 52
configuration networking
how LIFs are created 242
how LIFs are replicated 242
IPv6 configuration 242
configuration requirements
Cisco FC switches 90
configuration worksheets
FC switch 34
FC-to-SAS bridge 34
configurations
cascade, for cluster peering 12
fan-out, for cluster peering 12
to determine whether to use in-order delivery or out-of-order delivery of frames 15, 247
configurations, cluster
similarities and differences between MetroCluster
configurations and 154
configurations, eight-node
port assignments for FC switches 38
configurations, MetroCluster
array LUN requirements 191
cabling FC-VI and HBA connections to FC switches
37
cabling the HA interconnect 49, 207
differences between 7-Mode and clustered ONTAP
10
implementing 179, 230
introduction to implementing with disks and array
LUNs 236
requirements for TDM/xWDM equipment to work
with fabric-attached 15, 247
setting authentication policy for Brocade switches 87
similarities and differences between standard cluster
configurations and 154
supported with array LUNs 190
configurations, site
worksheets for site A 149
worksheets for site B 151
configurations, with array LUNs
port assignments for FC switches 194
configuring
basic switch settings 56
basic switch settings for Brocade, domain ID 56
in-order delivery (IOD) of frames 180
NTP server, to synchronize system time 240
ONTAP on systems that use only array LUNs 219
out-of-order delivery (OOD) of frames 180
configuring E-ports (ISL ports)
on Brocade FC switches 59
configuring IOD
in ONTAP 180
configuring OOD
in ONTAP 180
connections
cabling to FC-VI and HBA, to FC switches in
MetroCluster configurations 37
connections, cluster peering
cabling in MetroCluster configurations 49, 207
connectivity
verifying bridges 131
considerations
MetroCluster configuration with disks and array
LUNs 236
when removing MetroCluster configurations 189
controller modules
cabling FC-VI and HBA connections to FC switches
in MetroCluster configurations 37
resetting to system defaults when reusing 154
verifying and configuring HA state in a MetroCluster
configuration 164, 218
controller ports
checking connectivity with the partner site in
MetroCluster configurations 49, 207
controller, storage
cabling management and data connections 50, 208
controllers
racking in MetroCluster configurations 36, 140
controllers, storage
cabling HA interconnect between, in MetroCluster
configurations 49, 207
creation, LIFs
in a MetroCluster configuration 242
D
data aggregates
mirrored, creating on each node of a MetroCluster
configuration 176, 177, 228, 229
data field size
setting 87
data ports
cabling 50, 208
configuring intercluster LIFs to share 172
considerations when sharing intercluster and 13
data ports, dedicated intercluster
configuring intercluster LIFs to use 168
data protection
mirroring root aggregates to provide 175
starting in MetroCluster configurations 179, 230
database files
how NVFAIL protects 245
databases
accessing after a switchover 246
introduction to using NVFAIL to monitor and
protect validity of 244
dedicated data ports, intercluster
configuring intercluster LIFs to use 168
dedicated ports
considerations when using for intercluster replication
13
defaults, system
resetting reused controller modules to 154
destinations
specifying the URL for configuration backup 188,
236
disabling
virtual fabric in a Brocade switch 86
disaster recovery group 30
disk assignments
verifying in a eight-node or four-node MetroCluster
configuration 157
verifying in a two-node MetroCluster configuration
163
disk drives
verifying connectivity via bridges 131
disk ownership
assigning array LUNs 225
assigning in MetroCluster configurations with non-AFF systems 160
disk pools
in an eight-node or four-node MetroCluster
configuration 157
disk shelves
racking in MetroCluster configurations 36, 140
using IOM12 modules, cabling a FibreBridge 7500N
bridge with 128
using IOM6 or IOM3 modules, cabling a
FibreBridge 6500N bridge with 130
using IOM6 or IOM3 modules, cabling a
FibreBridge 7500N bridge with 129
disk shelves, SAS
installing 122
disks
introduction to implementing a MetroCluster
configuration with array LUNs and 236
disks and array LUNs
considerations when implementing MetroCluster
configuration with 236
documentation
how to receive automatic notification of changes to
252
how to send feedback about 252
where to find MetroCluster documentation 8
drives
verifying connectivity via bridges 131
dual ISL ports
configuring FCIP ports for, on Cisco 9250i switches
111
E
E-ports
configuring on Brocade FC switches 59
eight-node configurations
port assignments for FC switches 38
eight-node configurations with array LUNs
port assignments for FC switches 194
eight-node MetroCluster configurations
cabling array LUNs and switches in 212
cabling the HA interconnect 49, 207
eight-node MetroCluster configurations with array
LUNs
cabling FC-VI ports in 202
cabling HBA ports in 202
eight-node or four-node MetroCluster configurations
example zoning for a FibreBridge 7500N bridge and
Brocade switch 66, 69, 75
example zoning with a FibreBridge 7500N bridge
66, 69, 75
events
monitoring data loss 245
examples, zoning
FibreBridge 7500N bridge in eight-node or four-node MetroCluster configurations 66, 69, 75
F
fabric MetroCluster configurations
illustration of 24
parts of 24
fabric-attached MetroCluster configuration with array
LUNs, eight-node
cabling FC-VI and HBA ports 202
fabric-attached MetroCluster configurations
array LUN requirements 191
example two-node configuration with array LUNs
and disks 237
requirements for TDM/xWDM equipment to work
with 15, 247
Fabric-attached MetroCluster Installation and
Configuration Guide
requirements for using 7
fabrics, Brocade FC switch
configuring 52
requirements for configuring 52
fabrics, switch
configuring switch ports for 59
factory configured clusters
hardware setup checklist 16
failover and giveback
verifying in a MetroCluster configuration 186
fan-out configurations
networking requirements for cluster peering 12
FC ports
bridges 131
FC switch
verifying 146
FC switch configuration
configuring ISL ports
Cisco 96
configuring ports
Cisco 95
setting basic settings
Cisco 92
community string 92
setting to default values
Brocade 54
Cisco 91
switch names
setting for Brocade switches 54
FC switch configurations
configuring basic settings for Brocade 56
downloading configuration files, Brocade 51
FC switch fabric
reenabling 146
supported MetroCluster configurations 11
FC switch fabrics
redundant configuration in the MetroCluster
architecture 29
sharing during 7-Mode transition 139
FC switch fabrics in MetroCluster configurations with
array LUNs
cabling the ISLs in 205
FC switches
cabling HBAs to, in MetroCluster configurations 37
cabling storage arrays to, in a MetroCluster
configuration 208
configuration requirements for MetroCluster
configurations with array LUNs 191
configuration worksheet 34
example naming conventions in the documentation
31
introduction to configuring 51
introduction to manually configuring Cisco and
Brocade 52
port assignments in configuration with array LUNs
194
port assignments in eight-node configuration 38
racking in a MetroCluster configuration with array
LUNs 192
racking in MetroCluster configurations 36, 140
FC switches, Brocade
configuring 52
configuring E-ports (ISL ports) on 59
configuring zoning with FibreBridge 7500N bridges
84
requirements for configuring 52
FC switches, Cisco
configuration requirements 90
configuring zoning on 119
ISL requirements 90
storage connection requirements 90
FC switches, Cisco 9250i
configuring FCIP ports for a single ISL 105
FC switches, MetroCluster
configuring for health monitoring 182, 232
FC-to-SAS bridge ports
cabling during 7-Mode transition 141
FC-to-SAS bridges
cabling and verifying connectivity 131
configuration worksheets 34
configuring for health monitoring 183
considerations for installing SAS shelves and 125
disabling unused ports 137
example naming conventions in the documentation
31
in the MetroCluster architecture 29
installing 122
meeting preinstallation requirements 123
supported MetroCluster configurations 11
FC-VI connections
cabling to FC switches in MetroCluster
configurations 37
FC-VI ports
cabling during 7-Mode transition 141
cabling in eight-node MetroCluster configurations
with array LUNs 202
cabling in four-node configurations 200
cabling in two-node configurations 199
configuring on a QLE2564 quad-port card on
FAS8020 systems 155, 223
creating and configuring VSANs for, on Cisco FC
switches 98
zoning examples when using FibreBridge 7500
bridges 66, 69, 75
FCIP ports
configuring for a dual ISL on Cisco 9250i switches
111
configuring for a single ISL on Cisco 9250i switches
105
feedback
how to send comments about documentation 252
FibreBridge 6500N bridges
cabling for disk shelves with IOM6 or IOM3
modules 130
FibreBridge 7500N bridges
cabling for disk shelves with IOM6 or IOM3
modules 129
cabling with disk shelves using IOM12 modules 128
configuring zoning on Brocade FC switches using 84
examples of zoning in eight-node or four-node
MetroCluster configurations 66, 69, 75
FibreBridge bridges
configuring bridges for health monitoring 183
See also FC-to-SAS bridges
files, database
how NVFAIL protects 245
firewalls
requirements for cluster peering 12
FlexClone command errors
in MetroCluster configurations 244
four-node MetroCluster configurations
cabling array LUNs and switches in 210
cabling FC-VI ports in 200
cabling HBA ports in 200
cabling the HA interconnect 49, 207
implementing 179, 230
full-mesh connectivity
description 12
G
guides
requirements for using the Fabric-attached
MetroCluster Installation and Configuration Guide 7
H
HA interconnects
cabling in MetroCluster configurations 49, 207
HA pair operations
verifying in a MetroCluster configuration 186
HA pairs
cabling HA interconnect between, in MetroCluster
configurations 49, 207
illustration of local MetroCluster 28
MetroCluster configurations
illustration of local HA pairs 28
HA state in a MetroCluster configuration
verifying and configuring controller modules and
chassis 164, 218
hardware components
racking in a MetroCluster configuration with array
LUNs 192
hardware components, MetroCluster
racking in MetroCluster configurations 36, 140
hardware setup
checklist for factory configured clusters 16
HBA ports
cabling during 7-Mode transition 141
cabling in eight-node MetroCluster configurations
with array LUNs 202
cabling in four-node configurations 200
cabling in two-node configurations 199
HBAs
cabling to FC switches in MetroCluster
configurations 37
healing
verifying in a MetroCluster configuration 188, 235
health monitoring
configuring MetroCluster FC switches for 182, 232
introduction to configuring MetroCluster
components for 182
Health Monitors
community string 92
host names
gathering for site A 149
gathering for site B 151
I
illustration
FC switch fabrics in the MetroCluster architecture
29
FC-to-SAS bridges in the MetroCluster architecture
29
in-order delivery, frames
various configurations, to determine whether to use
15, 247
information
how to send feedback about improving
documentation 252
information gathering worksheets, configuration
FC switch 34
FC-to-SAS bridge 34
initiator ports
zoning examples when using FibreBridge 7500
bridges 66, 69, 75
installation
for systems sharing an FC switch fabric 21
for systems with array LUNs 21
for systems with native disks 21
preparations for installing FC-to-SAS bridges 123
installing
MetroCluster components, array LUNs 192
Inter-Switch Link ports
cabling in the MetroCluster configuration 38
Inter-Switch Links
cabling in MetroCluster configurations with array
LUNs 205
cabling in the MetroCluster configuration 38
intercluster LIFs
configuring to share data ports 172
configuring to use dedicated intercluster ports 168
considerations when sharing with data ports 13
intercluster networks
configuring intercluster LIFs for 168, 172
considerations when sharing data and intercluster
ports 13
intercluster ports
configuring intercluster LIFs to use dedicated 168
considerations when using dedicated 13
interconnects, cluster
cabling in a MetroCluster configuration 48, 206
interconnects, HA
cabling in MetroCluster configurations 49, 207
interfaces, IPStorage1/1
configuring FCIP ports for a single ISL on Cisco
9250i switches 105
IOD settings
deleting TI zoning and configuring 144
IOM12 modules
cabling a FibreBridge 7500N bridge with disk
shelves using 128
IOM3 modules
cabling a FibreBridge 6500N bridge with disk
shelves using 130
cabling a FibreBridge 7500N bridge with disk
shelves using 129
IOM6 modules
cabling a FibreBridge 6500N bridge with disk
shelves using 130
cabling a FibreBridge 7500N bridge with disk
shelves using 129
IP addresses
gathering for site A 149
gathering for site B 151
requirements for cluster peering 12
IPspaces
requirements for cluster peering 12
IPStorage1/1 interfaces
configuring FCIP ports for a dual ISL on Cisco
9250i switches 111
configuring FCIP ports for a single ISL on Cisco
9250i switches 105
ISL encryption
enabling in a MetroCluster configuration 89
enabling in switches 89
setting on Brocade switches 86
ISL port group
sharing a switch fabric 145
ISL ports
configuring on Brocade FC switches 59
configuring on Brocade switches 64
configuring on Cisco switches 102
ISL ports, single
configuring FCIP ports for, on Cisco 9250i switches
105
ISLs
cabling in a MetroCluster configuration with array
LUNs 205
See also Inter-Switch Links
L
license requirements
Brocade switch 53, 140
licenses
installing for using array LUNs in MetroCluster
configurations 222
licensing
in a MetroCluster configuration 242
licensing requirements
Cisco FC switches 90
LIF creation
in a MetroCluster configuration 242
LIF replication
in a MetroCluster configuration 242
LIFs, intercluster
configuring to share data ports 172
configuring to use dedicated intercluster ports 168
local HA
supported MetroCluster configurations 11
local HA pairs
illustration of MetroCluster 28
verifying operation of, in a MetroCluster
configuration 186
LUNs
recovering after NVRAM failures 246
LUNs (array)
assigning ownership of 225
LUNs, array
configuration requirements for MetroCluster
configurations 191
configuring ONTAP on systems that use 219
installing license for using 222
introduction to implementing a MetroCluster
configuration with disks and 236
supported MetroCluster configurations with 190
M
management ports
cabling 50, 208
manually peering clusters
introduction to 168
MetroCluster
sharing existing switch fabric 139
MetroCluster architecture
cluster peering 30
FC switch fabrics 29
FC-to-SAS bridges 29
illustration
cluster peering in the MetroCluster architecture
30
MetroCluster components
cabling cluster interconnects in 48, 206
installing and cabling 192
introduction to configuring for health monitoring
182
racking disk shelves 36, 140
racking FC switches 36, 140
racking storage controllers 36, 140
MetroCluster configuration
cabling cluster peering connections 49, 207
checking for configuration errors with Config
Advisor 186, 235
fabric-attached, workflow for cabling four or two
nodes 23
what is preconfigured in the factory 16
workflow for 122
MetroCluster configuration with array LUNs
cabling FC-VI ports in 199
cabling HBA ports in 199
introduction to planning and installing a 190
racking FC switches in a 192
racking storage controllers in a 192
setting up ONTAP in a 218
switch zoning in a 213
MetroCluster configuration with disks and array LUNs
introduction to planning and installing a 190
MetroCluster configurations
array LUN requirements 191
cabling FC-VI and HBA connections to FC switches
37
cabling Inter-Switch Links 38
cabling storage arrays to FC switches 208
cabling the HA interconnect 49, 207
configuring FibreBridge bridges for health
monitoring 183
considerations when removing 189
creating mirrored data aggregates on each node of
176, 177, 228, 229
differences between 7-Mode and clustered ONTAP
10
enabling ISL encryption on Brocade switches 89
example four-node configuration with array LUNs
and disks 238
implementing 179, 230
installing license for array LUNs in 222
introduction to implementing with disks and array
LUNs 236
requirements for TDM/xWDM equipment to work
with 15, 247
setting authentication policy for Brocade switches 87
setting ISL encryption on Brocade switch 86
similarities and differences between standard cluster
configurations and 154
supported with array LUNs 190
verifying correct operation of 184, 233
MetroCluster configurations with array LUNs
cabling the ISLs in 205
requirements for zoning in 213
MetroCluster configurations, fabric
illustration of 24
parts of 24
MetroCluster configurations, two-node
example of cabling array LUNs and switches in 209
metrocluster configure command
creating MetroCluster relationships using 179, 230
MetroCluster FC switches
configuring for health monitoring 182, 232
MetroCluster monitoring
using Tiebreaker software 188, 235
MetroCluster switchovers
output for the storage aggregate plex
show command after 244
mirroring
root aggregates 228
modules, controller
cabling FC-VI and HBA connections to FC switches
in MetroCluster configurations 37
resetting to system defaults when reusing 154
N
naming conventions
for examples in the documentation 31
native disk shelves
in a MetroCluster configuration 14
network
full-mesh connectivity described 12
requirements for cluster peering 12
network information
gathering for site A 149
gathering for site B 151
networks, intercluster
configuring intercluster LIFs for 168
new MetroCluster controllers, cabling
cabling to specific ports on existing FC switches 141
node configuration after MetroCluster setup
additional configuration with OnCommand System
Manager 240
nodes
creating mirrored data aggregates on each
MetroCluster 176, 177, 228, 229
similarities and differences between standard cluster
and MetroCluster configurations 154
using the Cluster Setup wizard to set up, in a two-node MetroCluster configuration 165
NTP servers
configuring to synchronize system time 240
numbers, port
for Brocade FC switches in configurations with array
LUNs 194
for Brocade FC switches in eight-node
configurations 38
for Cisco FC switches in configurations with array
LUNs 194
for Cisco FC switches in eight-node configurations
38
NVFAIL
description of 244
how it protects database files 245
modifying volumes to set NVFAIL in case of
switchover 244
NVRAM failures
recovering LUNs after 246
O
OnCommand Performance Manager
monitoring with 240
OnCommand System Manager
node configuration with 240
OnCommand Unified Manager
monitoring with 240
ONTAP health monitoring
configuring FC-to-SAS bridges for 183
out-of-order delivery, frames
various configurations, to determine whether to use
15, 247
output for storage aggregate plex show
command
after a MetroCluster switchover is indeterminate 244
ownership
assigning array LUNs 225
P
partner clusters
cabling cluster peering connections in MetroCluster
configurations 49, 207
passwords
preconfigured in a MetroCluster configuration 16
payload
setting 87
peer relationships
creating cluster 174, 226
requirements for clusters 12
peering clusters
introduction to manually 168
MetroCluster configuration, introduction to 167, 225
peering connections, cluster
cabling in MetroCluster configurations 49, 207
planning
gathering required network information for site A
149
gathering required network information for site B
151
policies, authentication
setting for Brocade switches in MetroCluster
configurations 87
pool 0 disks
assigning ownership in MetroCluster configurations
with AFF systems 161
assigning ownership in MetroCluster configurations
with non-AFF systems 160
pool 1 disks
assigning ownership in MetroCluster configurations
with AFF systems 161
assigning ownership in MetroCluster configurations
with non-AFF systems 160
pools, disk
in an eight-node or four-node MetroCluster
configuration 157
port assignments
for FC switches in configuration with array LUNs
194
for FC switches in eight-node configuration 38
port channels
configuring on Cisco switches 102
port configuration
on FC switch
Cisco 95
port licensing
Cisco switches 92
port numbers
for Brocade FC switches in configurations with array
LUNs 194
for Brocade FC switches in eight-node
configurations 38
for Cisco FC switches in configuration with array
LUNs 194
for Cisco FC switches in eight-node configurations
38
ports
considerations when sharing data and intercluster
roles on 13
considerations when using dedicated intercluster 13
manually enabling in Cisco MDS 9148 switches 94
requirements for cluster peering 12
ports, controller
checking connectivity with the partner site in
MetroCluster configurations 49, 207
ports, data
cabling 50, 208
configuring intercluster LIFs to share 172
ports, FC-VI
cabling to FC switches in MetroCluster
configurations 37
ports, FCIP
configuring for a single ISL on Cisco 9250i switches
105
ports, HBA
cabling to FC switches in MetroCluster
configurations 37
ports, intercluster
configuring intercluster LIFs to use dedicated 168
ports, management
cabling 50, 208
ports, SAS
disabling on FC-to-SAS bridges 137
ports, single ISL
configuring FCIP ports for, on Cisco 9250i switches
105
ports, switch
configuring E-ports (ISL ports) on Brocade FC
switches 59
preconfiguration
MetroCluster components 16
preparing storage arrays
for use with ONTAP systems 193
protection, data
starting in MetroCluster configurations 179, 230
Q
QLE2564 quad-port cards
configuring FC-VI ports on FAS8020 systems 155, 223
R
relationships
creating cluster peer 174, 226
creating MetroCluster 179, 230
removing
MetroCluster configurations, considerations 189
replication, LIFs
in a MetroCluster configuration 242
requirements
Cisco FC switch licensing 90
Cisco FC switches 90
cluster naming when peering 12
firewall for cluster peering 12
for TDM/xWDM equipment to work with fabric-attached MetroCluster configurations 15, 247
gathering network information for site A 149
gathering network information for site B 151
IP addresses for cluster peering 12
IPspaces for cluster peering 12
ISL connections 90
network for cluster peering 12
ports for cluster peering 12
subnets for cluster peering 12
requirements for zoning
in MetroCluster configurations with array LUNs 213
root aggregates
mirroring 175, 228
volume creation on 242
S
SAS disk shelves
installing 122
SAS ports
disabling on FC-to-SAS bridges 137
SAS shelves
considerations for installing FC-to-SAS bridges and
125
SAS storage, bridge-attached
supported MetroCluster configurations 11
SAS storage, direct-attached
supported MetroCluster configurations 11
servers
configuring NTP, to synchronize system time 240
Service Processor configuration
for site A 149
for site B 151
setting
authentication policy for Brocade switches in
MetroCluster configurations 87
setup, hardware
checklist for factory configured clusters 16
setup, software
checklist for factory-configured MetroCluster
configurations 18
shelves
racking in MetroCluster configurations 36, 140
shelves, SAS
considerations for installing FC-to-SAS bridges and
125
installing 122
single ISL ports
configuring FCIP ports for, on Cisco 9250i switches
105
site configurations
worksheets for site A 149
worksheets for site B 151
site FC switches
cabling HBAs to, in MetroCluster configurations 37
software
settings already enabled 16
software setup
checklist for factory-configured MetroCluster
configurations 18
software, configuring
workflows for MetroCluster in ONTAP 148
storage aggregate plex show command
output
after a MetroCluster switchover is indeterminate 244
storage array ports
cabling to FC switches in an eight-node
MetroCluster configuration 212
storage arrays
cabling to FC switches in a MetroCluster
configuration 208
configuration requirements for MetroCluster
configurations 191
preparing for use with ONTAP systems 193
storage connections
requirements 90
storage controller
cabling management and data connections 50, 208
storage controllers
cabling HA interconnect between, in MetroCluster
configurations 49, 207
example naming conventions in the documentation
31
racking in a Metrocluster configuration 192
racking in MetroCluster configurations 36, 140
storage disk show command
output in a two-node MetroCluster configuration 244
storage ports
creating and configuring VSANs for, on Cisco FC
switches 98
storage shelf show command
output in a two-node MetroCluster configuration 244
subnets
requirements for cluster peering 12
suggestions
how to send feedback about documentation 252
switch configuration, Cisco
saving 122
switch configurations, FC
configuring basic settings for Brocade 56
switch fabrics
configuring switch ports for 59
disabling before modifying configuration 143
enabling sharing for a port group 145
Fibre Channel 29
reenabling 143
switch fabrics in MetroCluster configurations with array
LUNs
cabling the ISLs in 205
switch parameters
setting 87
switch zoning
example of eight-node MetroCluster configuration
with array LUNs 217
example of four-node MetroCluster configuration
with array LUNs 215
example of two-node MetroCluster configuration
with array LUNs 214
MetroCluster configuration with array LUNs 213
requirements in MetroCluster configurations for 213
switchback
verifying in a MetroCluster configuration 188, 235
switches
Cisco licensing requirements 90
switches, Brocade
setting authentication policy in MetroCluster
configurations 87
switches, Brocade FC
configuring 52
configuring zoning with FibreBridge 7500N bridges
84
requirements for configuring 52
switches, Cisco 9148
manually enabling ports 94
switches, Cisco 9250i FC
configuring FCIP ports for a single ISL 105
switches, Cisco FC
configuration requirements 90
configuring zoning on 119
ISL requirements 90
storage connection requirements 90
switches, cluster interconnect
cabling in MetroCluster configurations 48, 206
switches, FC
cabling HBAs to, in MetroCluster configurations 37
configuration requirements for MetroCluster
configurations with array LUNs 191
configuration worksheet 34
introduction to configuring 51
introduction to manually configuring Cisco and
Brocade 52
port assignments in configuration with array LUNs
194
port assignments in eight-node configuration 38
racking in MetroCluster configurations 36, 140
switches, FC Brocade
configuring E-ports (ISL ports) on 59
switches, MetroCluster FC
configuring for health monitoring 182, 232
switchover
accessing the database after 246
verifying in a MetroCluster configuration 188, 235
switchovers, MetroCluster
output for the storage aggregate plex
show command after 244
synchronizing system time
using NTP 240
SyncMirror
configuration requirements for MetroCluster
configurations with array LUNs 191
system defaults
resetting reused controller modules to 154
system time
synchronizing using NTP 240
T
TDM/xWDM equipment
requirements to work with fabric-attached
MetroCluster configurations 15, 247
third-party storage
requirements for MetroCluster configurations with
array LUNs 191
TI zoning
deleting and configuring IOD settings 144
Tiebreaker software
installing 188, 235
using to identify failures 188, 235
time
synchronizing system, using NTP 240
tools
checking for MetroCluster configuration errors with
Config Advisor 186, 235
downloading and running Config Advisor 186, 235
transition
7-Mode to clustered ONTAP 14
sharing FC switch fabric 14
Twitter
how to receive automatic notification of
documentation changes 252
two-node MetroCluster configurations
cabling FC-VI ports in 199
cabling HBA ports in 199
example of cabling array LUNs and switches in 209
implementing 179, 230
setting up clusters in 165
U
unused ports
disabling on FC-to-SAS bridges 137
user names
preconfigured in a MetroCluster configuration 16
utilities
checking for MetroCluster configuration errors with
Config Advisor 186, 235
downloading and running Config Advisor 186, 235
V
verification of disk assignment
in an eight-node or four-node MetroCluster
configuration 157
verification of disk assignment in a two-node
MetroCluster configuration
booting to Maintenance mode 163
performing before booting to ONTAP 163
verifying
MetroCluster configuration operations 184, 233
Virtual fabric
disabling 86
VLDB errors
in MetroCluster configurations 244
volume command errors
in MetroCluster configurations 244
volume creation
in a MetroCluster configuration 242
volumes
commands 245
recovering after a switchover 246
VSANs
creating and configuring on Cisco FC switches 98
Vservers
See SVMs
W
workflows
cabling a four-node or two-node fabric-attached
MetroCluster configuration 23
MetroCluster software configurations in ONTAP 148
worksheets
for site A 149
for site B 151
Z
zoning
configuring on a Brocade FC switch with
FibreBridge 7500N bridges 84
configuring on Cisco FC switches 119
differences between eight-node and four-node
MetroCluster configurations 66, 69, 75
sharing a switch fabric 145
zoning examples
FibreBridge 7500N bridge in eight-node or four-node MetroCluster configurations 66, 69, 75
zoning, switch
example of eight-node MetroCluster configuration
with array LUNs 217