EMC® Replication Manager



Version 5.2.3

Administrator’s Guide

P/N 300-010-265

REV A02

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

Copyright © 2003-2010 EMC Corporation. All rights reserved.

Published April, 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Preface

Chapter 1 System Requirements

Preliminary storage array setup ..................................................... 22

Preparing your storage array ................................................... 22

Installing prerequisite software ............................................... 23

Obtaining Replication Manager software license keys ........ 24

Minimum hardware requirements ................................................. 25

Replication Manager Server..................................................... 25

Replication Manager Console .................................................. 26

Replication Manager Agent ..................................................... 26

HBA driver support.......................................................................... 27

IPv6 support ...................................................................................... 28

Internet Protocol support ......................................................... 28

Veritas volumes ................................................................................. 31

Windows disk types ......................................................................... 33

Replication Manager Server requirements.................................... 34

VMware support........................................................................ 34

Email requirements ................................................................... 34

Restrictions for 64-bit configurations...................................... 35

MOM/SCOM server requirement .......................................... 35
serverdb disk storage requirements........................................ 35

Replication Manager Agent requirements .................................... 38

Replication Manager Console requirements................................. 40

Cluster requirements ........................................................................ 41

Replication Manager Server component ................................ 41

Cluster configuration for Windows 2008 failover cluster.... 42

Replication Manager Server on multiple virtual nodes....... 42

Replication Manager Agent and Console components........ 42


Chapter 2 Installation

Overview of Replication Manager components........................... 46

Contents of the product DVD ......................................................... 47

Preinstallation tasks.......................................................................... 47

Viewing and printing the documentation ............................. 48

Obtaining a license file.............................................................. 48

Setting up Replication Manager in a secure environment .. 50

Mounting a DVD on HP-UX.................................................... 52

Installing the Replication Manager Server software ................... 54

Installing the Replication Manager Agent software.................... 57

Using the Deployment Wizard ................................................ 59

Installing the Replication Manager Console software ................ 61

Troubleshooting install using log files........................................... 62

Modifying or removing Replication Manager.............................. 62

Modifying or removing components on Windows .............. 62

Removing Replication Manager Agent (UNIX/Linux) ....... 63

Removing Replication Manager from a cluster .................... 63

Upgrading Replication Manager.................................................... 64

Upgrade considerations............................................................ 65

Upgrading a Replication Manager 5.x Server............................... 66

Upgrading a Replication Manager 5.x Agent........................ 66

Upgrading a Replication Manager 5.x Console .................... 67

Migrating Replication Manager Server to a different host ......... 67

Viewing current version and patch level ...................................... 67

Replication Manager Config Checker............................................ 68

Installing Config Checker......................................................... 68

Running Config Checker.......................................................... 68

cfgchk command syntax........................................................... 70

Chapter 3 Administration Tasks

Setting the Administrator password.............................................. 72

Setting the default Administrator password......................... 72

Changing the default Administrator password.................... 73

Managing user accounts .................................................................. 74

Adding a user............................................................................. 74

Modifying a user account......................................................... 75

Deleting a user account ............................................................ 75

Managing hosts................................................................................. 76

Registering a host ...................................................................... 76

Defining user access to hosts ................................................... 78

Modifying a host........................................................................ 80


Viewing host properties ............................................................ 81

Deleting a host............................................................................ 83

Managing general server options ................................................... 85

Publishing management events ...................................................... 86

Overview..................................................................................... 86

Enabling management event publishing................................ 86

Managing server security options .................................................. 87

Configuring security events ..................................................... 87

Creating a logon banner............................................................ 89

Working with licenses....................................................................... 90

License types............................................................................... 90

License status.............................................................................. 90

Grace period ............................................................................... 91

Using host licenses from an earlier version ........................... 92

License management ................................................................. 92

License warnings........................................................................ 92

Using the Licenses tab ............................................................... 93

Adding a license......................................................................... 94

Setting the client security mode ...................................................... 95

Changing to secure client mode............................................... 95

Managing replica storage................................................................. 97

Managing Celerra storage......................................................... 97

Discovering storage arrays ....................................................... 98

Specifying credentials for a new RecoverPoint RPA ............ 98

Adding storage........................................................................... 99

Viewing storage........................................................................ 100

Viewing storage properties..................................................... 102

Removing storage .................................................................... 104

Creating storage pools............................................................. 105

Modifying storage pools ......................................................... 107

Deleting storage pools............................................................. 107

Removing a storage service .................................................... 108

Set SAN policy on Windows Server 2008 Standard Edition ............................................................. 108

Protecting the internal database.................................................... 109

Deploying the internal database directories ........................ 109

Backing up and restoring the internal database .................. 110

Changing the internal database password........................... 111

Viewing Replication Manager from EMC ControlCenter ......... 112

Free storage devices................................................................. 112

Viewing Symmetrix device groups ....................................... 113

Viewing CLARiiON storage groups ..................................... 113

EMC storage and storage-based groups............................... 113



Replica-in-progress device group (Symmetrix only) ......... 114

Replica device group (Symmetrix only) ..................................................................... 114

Replication Manager log files ....................................................... 116

Installation log files ................................................................. 116

Server and client log files ....................................................... 116

Replication Manager Server internal database logs ........... 118

Setting the logging level and log file size ............................ 118

Replication Manager event messages in Windows ................... 119

Viewing event messages......................................................... 119

Chapter 4 CLARiiON Array Setup

CLARiiON setup checklist ............................................................ 128

Supported CLARiiON storage arrays.......................................... 131

CLARiiON hardware and software configurations................... 132

CLARiiON firmware requirements ...................................... 132

Minimum hardware and connectivity required by CLARiiON ................................................................................ 132

LUN sizing................................................................................ 133

Minimum software required by CLARiiON ....................... 133

Minimum software required in CLARiiON environments 133

Multiple Replication Manager Servers per CLARiiON array...................................................................... 134

iSCSI configuration for CLARiiON arrays.................................. 135

CLARiiON iSCSI prerequisites.............................................. 135

Using CHAP security.............................................................. 135

Installing and configuring the iSCSI initiator ..................... 136

Log in to iSCSI target .............................................................. 139

CLARiiON RAID groups and LUNs ........................................... 141

Avoiding inadvertent overwrites ................................................. 142

Setting up clients to use CLARiiON storage arrays .................. 143

Install software on the client .................................................. 143

Create account with administrator privileges..................... 143

Update the sd.conf entries for dynamic mounts................. 143

Update the Navisphere host agent configuration file........ 144

Verify connectivity record settings........................................ 144

Set SAN Policy on Windows Server 2008 Standard Edition...................................................................... 145

Configuring CLARiiON alternate MSCS cluster mount........... 146

Mounting to alternate cluster while multiple nodes are visible ........................................................................................ 147

Using CLARiiON protected restore ............................................. 148


CLARiiON snapshots ..................................................................... 149

Snaps versus clones ................................................................. 150

Configuring snapshots ............................................................ 150

Viewing snapshot information............................................... 151

Monitoring the snapshot cache .............................................. 152

Discovering storage ................................................................. 154

Planning CLARiiON drives for clone replications..................... 156

What is a replica on the CLARiiON array? .......................... 156

Planning replica storage needs .............................................. 156

Planning for temporary snapshots of clones............................... 158

Support for CLARiiON thin LUNs............................................... 159

Managing CLARiiON storage....................................................... 160

Configuring RAID groups and thin pools ........................... 160

Creating and binding LUNs ................................................... 161

Assigning LUNs to storage groups ....................................... 161

Exposing selected storage to clients ...................................... 163

LUN surfacing issues...................................................................... 164

Background ............................................................................... 164

Affected environments ............................................................ 164

Preventing LUN surfacing issues in Windows.................... 164

Reducing LUN surfacing issues on Solaris .......................... 165

Configuring CLARiiON arrays for SAN Copy replications ..... 166

Requirements for CLARiiON-to-CLARiiON SAN Copy ............................................................ 166

Install Navisphere and Admsnap.......................................... 169

Install SAN Copy on the CLARiiON array .......................... 169

Zone SAN Copy ports ............................................................. 169

Create a SAN Copy storage group ........................................ 170

Connect the SAN Copy storage group to the remote CLARiiON array.............................................. 171

Snap cache devices for intermediate copy ........................... 171

Create a storage pool for SAN Copy..................................... 171

Incremental SAN Copy ........................................................... 171

Special configuration steps for Linux clients .............................. 174

Preparing static LUNs for mountable clone replicas .......... 174

Preparing static LUNs for mountable snapshot replicas...................................................................... 174

Planning static LUNs for temporary snapshots of clones ......................................................................................... 177

Steps to perform after an array upgrade ..................................... 178

Cleaning up CLARiiON resources affecting the LUN limit ..... 181

Requirements for MirrorView support ........................................ 182

Reducing storage discovery time with a claravoid file ............. 183


Chapter 5 Symmetrix Array Setup

Symmetrix setup checklist............................................................. 186

Additional setup steps for Windows Server 2008 environments ........................................................................... 187

Symmetrix array hardware and software requirements........... 189

Supported hardware and microcode levels......................... 189

Gatekeeper requirements ....................................................... 189

EMC Solutions Enabler........................................................... 189

LUN sizing................................................................................ 189

Set SAN Policy on Windows Server 2008 Standard Edition...................................................................... 189

Support for 128 TimeFinder/Snap sessions................................ 191

Expiring sessions after Enginuity upgrade.......................... 191

Identifying a 128-snap session............................................... 191

Support for thin devices ................................................................ 193

TimeFinder/Clone setup........................................................ 193

TimeFinder/Snap setup ......................................................... 193

Configuring the Symmetrix array for SAN Copy replications 194

Devices for intermediate copy............................................... 194

Configuration steps................................................................. 194

SAN Copy configuration recommendations ....................... 197

Configuring for remote replications across SRDF/S................. 198

Configuring Symmetrix alternate MSCS cluster mount ........... 200

Preparing to mount a physical cluster node to Symmetrix .................................................................. 200

Preparing to mount a virtual cluster node to Symmetrix.. 200

Using Symmetrix protected restore ............................................. 202

Long Duration Locks...................................................................... 203

LDLs and Replication Manager operations......................... 203

When Replication Manager removes an LDL ..................... 203

Checking the state of LDLs on Replication Manager......... 204

Disabling the LDL feature in Replication Manager............ 204

Enabling the LDL feature in Replication Manager............. 204

Viewing an LDL....................................................................... 204

Removing an LDL ................................................................... 204

Configuring virtual devices (VDEVs).......................................... 205

Configuring TimeFinder clones for Replication Manager........ 206

Restoring TimeFinder clones with long duration locks..... 206

Steps to perform after an array upgrade..................................... 208

Reducing storage discovery time with symavoid...................... 210


Chapter 6 Celerra Array Setup

Celerra hardware and software configurations.......................... 212

Supported Celerra storage arrays.......................................... 212

Minimum required hardware and connectivity.................. 212

QLogic iSCSI TOE card setup................................................. 212

Minimum required software for hosts.................................. 213

Set SAN policy on Windows Server 2008 Standard Edition ...................................................................... 213

Install patches and service packs........................................... 214

Configuring iSCSI targets .............................................................. 215
iSCSI target configuration checklist ...................................... 215

Create iSCSI targets ................................................................. 215

Create file systems for iSCSI LUNs ....................................... 217

Create iSCSI LUNs................................................................... 219

Creating an iSCSI LUN ........................................................... 219

Create iSCSI LUN masks ........................................................ 220

Configure iSNS on the Data Mover....................................... 222

Create CHAP entries ............................................................... 223

Start the iSCSI service.............................................................. 225

Configuring the iSCSI initiator...................................................... 226

Register the initiator with the Windows Registry............... 226

Configure CHAP secret for reverse authentication ............ 227

Set CHAP secret for reverse authentication ......................... 228

Configure iSCSI discovery on the initiator........................... 229

SendTargets discovery............................................................. 229

Automatic discovery ............................................................... 231

Log in to iSCSI target............................................................... 232

Configuring iSCSI LUNs as disk drives ...................................... 233

Configure volumes on iSCSI LUNs....................................... 234

Bind volumes in the iSCSI initiator ....................................... 235

Make Exchange service dependent on iSCSI service.......... 236

Make SQL Server service dependent on iSCSI service....... 236

Verifying host connectivity to the Celerra server ....................... 238

Configuring Celerra network file system targets ....................... 239

NFS configuration checklist ................................................... 239

Naming convention for Celerra NFS snap replicas ............ 240

Create the destination Celerra network file system............ 240

Celerra NFS restore considerations ....................................... 241

Creating iSCSI replicas using Celerra SnapSure technology .... 242

Creating iSCSI replicas using Celerra Replicator technology... 243

Celerra Replicator for iSCSI concepts ................................... 243

How Celerra Replicator works in an iSCSI Environment.. 244

System requirements, restrictions, and cautions ................. 246



Planning considerations ........................................................ 248

Configuring Celerra Replicator remote replication............ 248

Managing iSCSI LUN replication ......................................... 255

Using the Celerra Replicator clone replica for disaster recovery................................................... 258

Using the Celerra Replicator clone replica for repurposing ........................................................................ 258

Troubleshooting Replication Manager with Celerra arrays ..... 261

Checking for stranded snapshots on a Celerra array......... 261

Cleaning stranded snapshots from the Celerra array ........ 261

Unable to mark the Celerra iSCSI session............................ 262

Expiring all replicas in a Celerra-based application set..... 262

Troubleshooting iSCSI LUN replication .............................. 262

Troubleshooting Celerra network connections ................... 262

Chapter 7 Invista Setup

Invista setup checklist .................................................................... 266

Supported Invista switches ........................................................... 268

Invista hardware and software configurations .......................... 269

Minimum required hardware and connectivity ................. 269

Minimum required software for Invista hosts .................... 269

Minimum software for hosts ................................................. 270

Assigning Symmetrix devices to Invista initiators ............. 270

Discovering Invista storage.................................................... 271

Planning your configuration for clone replications................... 272

What is a replica on the Invista instance? ............................ 272

Planning replica storage needs.............................................. 272

Managing virtual frames ............................................................... 274

Exposing selected storage to clients...................................... 275

Replication Manager virtual volume surfacing issues.............. 276

Affected environments ........................................................... 276

Preventing virtual volume surfacing issues in Windows.. 276

Chapter 8 RecoverPoint Setup

RecoverPoint setup checklist ........................................................ 280

RecoverPoint hardware and software configurations............... 281

Components of Replication Manager and RecoverPoint.......... 282

Supported replication options ............................................... 282

RecoverPoint Appliance ......................................................... 283

Synchronizing time on hosts......................................................... 284

Replication Manager configuration steps for RecoverPoint .... 285


Chapter 9 VMware Setup

Supported VMware configurations and prerequisites .............. 288

Replicating a VMware VMFS ........................................................ 289

General configuration prerequisites...................................... 289

Replicating a VMware NFS Datastore ......................................... 298

General configuration prerequisites...................................... 298

Replicating a VMware virtual disk............................................... 304

General configuration prerequisites...................................... 308

Symmetrix configuration prerequisites ................................ 309

CLARiiON configuration prerequisites................................ 310

Celerra configuration prerequisites....................................... 311

RecoverPoint configuration prerequisites ............................ 311

Virtual disk mount, unmount, and restore considerations 312

Installing VMware Tools on the virtual machine ................ 312

Installing VMware Tools on a physical machine................. 313

Additional support information ............................................ 313

Replicating an RDM or iSCSI initiator LUN on CLARiiON ..... 314

Replication host attached to CLARiiON iSCSI system....... 314

Mount host attached to CLARiiON iSCSI system......................................................... 315

Replication host attached to CLARiiON Fibre Channel system ......................................... 316

Mount host attached to CLARiiON Fibre Channel system ......................................... 316

VMware replication and mount options for combined CLARiiON arrays..................................................................... 316

CLARiiON configuration steps for VMware ....................... 317

Replicating an RDM on Invista Virtual Volumes ....................... 320

Configuring Invista environments to replicate VMware RDMs......................................................... 320

Configuring Invista environments to mount VMware RDMs ............................................................ 321

About mounting an RDM replica to a physical host .......... 321

Replicating VMware virtual disks on Invista virtual volumes 322

Configuring Invista environments to replicate VMware virtual disk ................................................................................ 322

Restrictions when mounting virtual disks in an Invista environment.............................................................................. 324

Replicating a VMware RDM on Symmetrix................................ 325

Mount host attached to Symmetrix Fibre Channel ............. 325

Replicating hosts attached to Symmetrix Fibre Channel ... 325

Replicating a VMware RDM with RecoverPoint........................ 326

Replicating a VMware RDM on Celerra ...................................... 326


Replication host attached to Celerra..................................... 326

Mount host attached to Celerra............................................. 327

Chapter 10 Hyper-V Setup

Hyper-V overview .......................................................................... 330

Supported Hyper-V configurations and prerequisites.............. 331

Replication Manager components ........................................ 331

Virtual machines ...................................................................... 331

Guest operating systems ........................................................ 331

Storage services........................................................................ 331

High-availability...................................................................... 332

Applications ............................................................................. 332

Licensing ................................................................................... 332

Chapter 11 VIO Setup

IBM AIX VIO overview ................................................................. 334

Supported VIO LPAR configurations and prerequisites .......... 335

Replication Manager components ........................................ 335

VIO requirements .................................................................... 335

Storage services........................................................................ 336

High-availability...................................................................... 336

Applications ............................................................................. 337

Licensing ................................................................................... 337

Chapter 12 Disaster Recovery

Replication Manager Server disaster recovery........................... 340

Setting up Replication Manager Server disaster recovery........ 342

Prerequisites for Server DR .................................................... 342

Installing Server DR ................................................................ 342

Converting from a standalone server to a disaster recovery server ......................................................................................... 343

Replication Manager database synchronization ................. 345

Upgrading a Server DR configuration......................................... 346

Prerequisites ............................................................................. 346

Procedure.................................................................................. 346

Performing server disaster recovery............................................ 348

Verifying the disaster recovery server configuration ......... 348

Failing over to a secondary server ........................................ 349

Failing back to the original primary server ......................... 350

Troubleshooting server disaster recovery ............................ 351

Celerra disaster recovery ............................................................... 353


Celerra NFS failover ................................................................ 353

Celerra iSCSI disaster recovery.............................................. 354

Types of disaster recovery ...................................................... 355

Disaster recovery phases......................................................... 356

Limitations and minimum system requirements ................ 356

Setting up Celerra disaster recovery ..................................... 358

Setting up the production and disaster recovery servers .. 359

Setting up replication .............................................................. 364

Failing Over .............................................................................. 370

Post-failover considerations ................................................... 377

Recover applications................................................................ 379

Troubleshooting Celerra Disaster Recovery......................... 382

VMware Site Recovery Manager (SRM) disaster recovery ....... 384

Understanding VMware Site Recovery Manager ............... 384

Role of VMware SRM .............................................................. 385

Role of Replication Manager in a VMware SRM environment.................................................... 386

VMware SRM disaster recovery in CLARiiON environments ....................................................... 387

VMware SRM disaster recovery in Celerra environments 387

VMware SRM disaster recovery in RecoverPoint environments ............................................................................ 389

Configuring SRM and Replication Manager on CLARiiON ........................................................................... 389

Protecting the VMware SRM environment before failover on CLARiiON................................................................................. 394

Replication Manager after SRM failover on CLARiiON .... 397

Configuring SRM and Replication Manager on Celerra .... 399

Replication Manager before SRM failover on Celerra........ 404

Mounting VMFS replicas of SRM environments................. 404

Replication Manager after SRM failover on Celerra........... 405

Configuring SRM and Replication Manager on RecoverPoint............................................................................. 406

Replication Manager before SRM failover on RecoverPoint............................................................................. 408

Mounting VMFS replicas of SRM environments................. 408

Replication Manager after SRM failover on RecoverPoint 409

Appendix A Using the Command Line Interface

Installation with the setup command .......................................... 412

Sample commands................................................................... 413

Using a response file (UNIX).................................................. 413



Using a response file (Windows)........................................... 416

Configuration and replication with rmcli ................................... 417

Interactive and batch............................................................... 417

Recommended batch script.................................................... 418

How to run rmcli in interactive mode.................................. 418

How to run rmcli in batch mode ........................................... 418

Script structure......................................................................... 419

Script example.......................................................................... 420

rmcli commands ...................................................................... 421

Index


Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this guide, please contact your service representative.

Audience

This guide is part of the Replication Manager documentation set, and is intended for use by storage administrators and operators to learn about the following:

Hardware and software requirements

Installing Replication Manager, including upgrading from previous versions

Performing Replication Manager administration tasks, such as managing hosts, replica storage, user accounts, and licenses

Setting up the storage arrays supported by Replication Manager

Using the command line interface

Considerations in a cluster environment

Readers are expected to be familiar with the following topics:

Operation of application software used in conjunction with Replication Manager

Operation of all operating systems on hosts attached to Replication Manager


Hardware and software components of your storage environment

Third-party software used in conjunction with Replication Manager, such as volume managers

Organization

Here is a list of where information is located in this guide.

Chapter 1, “System Requirements,” describes preliminary setup tasks and the minimum hardware and software requirements for each Replication Manager component.

Chapter 2, “Installation,” describes how to install Replication Manager and begin to use it. It also explains how to upgrade from previous releases and uninstall the product.

Chapter 3, “Administration Tasks,” explains how to perform various administrative tasks, such as setting up user accounts, registering hosts, and managing server options.

Chapter 4, “CLARiiON Array Setup,” describes how to prepare the CLARiiON storage array to work with Replication Manager.

Chapter 5, “Symmetrix Array Setup,” describes how to set up the Symmetrix storage array to work with Replication Manager.

Chapter 6, “Celerra Array Setup,” describes how to set up the Celerra storage array to work with Replication Manager.

Chapter 7, “Invista Setup,” describes how to prepare the Invista environment to work with Replication Manager.

Chapter 8, “RecoverPoint Setup,” describes how to set up RecoverPoint to work with Replication Manager.

Chapter 9, “VMware Setup,” describes how to configure Replication Manager to work with VMware platforms.

Chapter 10, “Hyper-V Setup,” describes how to configure Replication Manager to work with the Hyper-V platform.

Chapter 11, “VIO Setup,” describes how to configure Replication Manager to work in an AIX VIO environment.

Chapter 12, “Disaster Recovery,” describes how to set up and perform Replication Manager Server Disaster Recovery, and how to use Replication Manager to implement a Celerra DR solution.

Appendix A, “Using the Command Line Interface,” describes how users can install, configure, and create replicas through a command line interface.


Related documentation

Related documents include:

EMC Replication Manager Product Guide — Provides an overview of the Replication Manager product along with a description of how to perform general tasks once the Replication Manager product has been installed and configured. This document also provides information specific to Replication Manager’s integration with various applications.

EMC Replication Manager Administrator’s Guide (this document) — Provides information about installing Replication Manager and configuring the product and related storage services to integrate with one another.

EMC Replication Manager Release Notes — Provides information about fixed and known defects in the release and also provides information about installation of the release.

EMC Replication Manager online help — Provides detailed context-sensitive information about each screen of the product to help customers learn and understand how to use Replication Manager.

Conventions used in this guide

EMC uses the following conventions for notes, cautions, warnings, and danger notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION
A caution contains information essential to avoid damage to the system or equipment. The caution may apply to hardware or software.

WARNING

A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.


Typographical conventions

EMC uses the following type style conventions in this guide:

normal font  In running text:
• Interface elements (for example, button names, dialog box names) outside of procedures
• Items that user selects outside of procedures
• Java classes and interface names
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, filenames, functions, menu names, utilities
• Pathnames, URLs, filenames, directory names, computer names, links, groups, service keys, file systems, environment variables (for example, command line and text), notifications

bold
• User actions (what the user clicks, presses, or selects)
• Interface elements (button names, dialog box names)
• Names of keys, commands, programs, scripts, applications, utilities, processes, notifications, system calls, services, applications, and utilities in text

italic
• Book titles
• New terms in text
• Emphasis in text

Courier
• Prompts
• System output
• Filenames
• Pathnames
• URLs
• Syntax when shown in command line or other examples

Courier, bold
• User entry
• Options in command-line syntax

Courier italic
• Arguments in examples of command-line syntax
• Variables in examples of screen or file output
• Variables in pathnames

<>   Angle brackets for parameter values (variables) supplied by user.
[]   Square brackets for optional values.
|    Vertical bar symbol for alternate selections. The bar means or.
...  Ellipsis for nonessential information omitted from the example.


Where to get help

For questions about technical support, call your local sales office or service provider.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at: http://Powerlink.EMC.com

Technical support — For technical support, go to EMC WebSupport on Powerlink. To open a case on EMC WebSupport, you must be a WebSupport customer. Information about your site configuration and the circumstances under which the problem occurred is required.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send a message to [email protected] with your opinions of this guide.



System Requirements

This chapter describes the following topics:

Preliminary storage array setup ...................................................... 22

Minimum hardware requirements .................................................. 25

HBA driver support........................................................................... 27

IPv6 support ....................................................................................... 28

Veritas volumes .................................................................................. 31

Windows disk types .......................................................................... 33

Replication Manager Server requirements..................................... 34

Replication Manager Agent requirements ..................................... 38

Replication Manager Console requirements.................................. 40

Cluster requirements ......................................................................... 41


Preliminary storage array setup

Replication Manager can create replicas on the following set of storage arrays (refer to the EMC® E-Lab Interoperability Navigator on http://Powerlink.EMC.com for the authoritative list):

EMC Symmetrix® series 8000, and DMX storage arrays

EMC CLARiiON® storage array models:

• CX300, 300i, 400, 500, 500i, 600, 700, CX3-20-CL, CX3-40-CL, CX3-80-CL, AX4-5
• CX3-80/CX3-40(F)/CX3-20(F)
• CX3-40c/CX3-20c, CX3-10c
• CX4-120/CX4-240/CX4-480/CX4-960

EMC Celerra® NSX Series, NSX Series/Gateway, NS Series/Integrated, NS Series/Gateway, CNS with 510 Data Mover model, and CNS with 514 Data Mover model

EMC Invista® instances

These storage arrays require a certain level of firmware and software installed before they can operate with Replication Manager.

Do the following before installing Replication Manager:

1. Prepare your storage array.

2. Install prerequisite software.

3. Obtain a Replication Manager license.

Preparing your storage array

This section describes the high-level tasks required for each array.

CLARiiON array

For CLARiiON, you must:

1. Attach CLARiiON storage to the host machines, or establish iSCSI connectivity between the CLARiiON storage and the host machines.

2. Configure CLARiiON storage hardware.

3. Install CLARiiON storage software.

Chapter 4, ”CLARiiON Array Setup,” describes these tasks.


Symmetrix array

For Symmetrix, you must:

1. Attach Symmetrix storage to the host machines.

2. Configure Symmetrix storage hardware.

3. Install Symmetrix storage software.

Chapter 5, ”Symmetrix Array Setup,” provides more information.

Celerra array

For Celerra, you must:

1. Configure Celerra storage hardware.

2. Install Celerra storage software.

3. Establish iSCSI connectivity between Celerra storage and the host machines.

Chapter 6, ”Celerra Array Setup,” provides more information.

Invista instance

For Invista, you must:

1. Attach the Invista instance to the host machines through a SAN connection.

2. Zone the fibre switch so the hosts can access all storage they are using.

3. Install Invista admreplicate on the host where the Replication Manager Agent is running.

4. Install Invista INVCLI on the host where the Replication Manager Agent is running.

Chapter 7, “Invista Setup,” provides more information.

RecoverPoint

For RecoverPoint, install and configure according to the RecoverPoint documentation and the additional information in Chapter 8, “RecoverPoint Setup.”

Installing prerequisite software

Perform the following steps before installing Replication Manager:

1. Install the operating system, including required OS patches, on the computer you have designated to run Replication Manager Server.

2. Install the operating system, including any required OS patches, on each host.


3. Install the application software (such as Exchange, SQL Server, UDB DB2, Oracle) on the hosts.

Obtaining Replication Manager software license keys

License keys control access to Replication Manager. Follow the instructions on the Software Key Request Card (shipped in the Replication Manager package) to obtain software license keys. You will need to specify the path to the file containing the keys when you install Replication Manager. Refer to “Obtaining a license file” on page 48.


Minimum hardware requirements

The following tables list the minimum hardware requirements to run each Replication Manager component on the supported platform.

Note the following considerations:

If you are running multiple components on the same system, you need the sum of the memory requirements for those components.

Upgrade installations require the same disk space as initial installations.

For the latest support information, refer to the EMC Replication Manager Support Matrix, which is available from the EMC Powerlink® website at http://Powerlink.EMC.com.

Replication Manager Server

Table 1 on page 25 lists the minimum Replication Manager Server requirements.

Note: The Replication Manager Server can be installed on a VMware or Hyper-V virtual machine.

Table 1  Minimum requirements for the Replication Manager Server

Memory: 256 MB (1 GB recommended)

Disk space: 1 GB. Includes 500 MB for the initial size of the Replication Manager Server internal database. Each replica uses between 2 MB and 4 MB in the database, so 500 MB allows for between 125 and 250 replicas, depending on the size of control files that must be stored in the internal database. Includes space required by temporary files. Refer also to “Replication Manager log files” on page 116 regarding log file size.

Paging file: 512 MB

Connections: Network connection


Replication Manager Console

Table 2 on page 26 lists the minimum requirements for the Replication Manager Console.

Table 2  Minimum requirements for the Replication Manager Console

Memory: 256 MB

Disk space: 200 MB

Connections: Network connection

Replication Manager Agent

Table 3 on page 26 lists the minimum requirements for the Replication Manager Agent on each supported platform.

Table 3  Minimum requirements for the Replication Manager Agent

Memory: 256 MB (UNIX and Linux); 256 MB (Windows). 2 GB recommended.

Disk space: 1 GB (UNIX and Linux); 500 MB (Windows). Refer to “Replication Manager log files” on page 116 regarding log file size.

Connections:
• UNIX and Linux: Network connection or Fibre (to CLARiiON); Fibre (Symmetrix)
• Windows: Network connection, Fibre, or iSCSI connection (to CLARiiON); Network connection or iSCSI connection (to Celerra); Fibre (Symmetrix)


HBA driver support

This section describes Replication Manager HBA driver support.

Sun StorEdge SAN Foundation (Leadville)

Replication Manager supports CLARiiON replication activities using the Sun StorEdge SAN Foundation (Leadville) HBAs and drivers on Solaris clients, but you must meet the following specific requirements for these HBAs to work correctly with Replication Manager:

The following line should be added to the /etc/system file (see the example at the end of this section). A system restart is normally necessary for this setting to take effect:

set fcp:ssfcp_enable_auto_configuration=1

The use of Sun StorEdge Traffic Manager (MPxIO) is not supported at this time. We recommend that EMC PowerPath® be used for failover and load balancing. The EMC Replication Manager Support Matrix provides the current PowerPath supported versions.

When using Leadville in combination with Veritas Volume Manager (VxVM), use only enclosure-based naming. Refer to the Veritas doc ID 252395 at the following URL: http://support.veritas.com/docs/252395

Veritas Volume Manager is supported in this environment. The EMC Replication Manager Support Matrix provides specific details.

Support of Leadville (both Symmetrix and CLARiiON) requires a minimum Solutions Enabler version. The EMC Replication Manager Support Matrix provides specific details.
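The commands below are a minimal sketch of how the /etc/system requirement in the first list item above might be applied on a Solaris client; they assume root access and that the parameter is not already present in the file. The first command backs up the current file, the second appends the setting, the third confirms the entry (its expected output is shown), and the last restarts the host so the change takes effect:

# cp /etc/system /etc/system.orig
# echo "set fcp:ssfcp_enable_auto_configuration=1" >> /etc/system
# grep ssfcp_enable_auto_configuration /etc/system
set fcp:ssfcp_enable_auto_configuration=1
# shutdown -y -g0 -i6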


IPv6 support

Internet Protocol version 6 (IPv6) is the next generation protocol for IP addressing, designed to solve some problems with the existing Internet Protocol version 4 (IPv4) implementation. The most pressing problem solved by IPv6 is the limited number of available IPv4 addresses. This section describes Replication Manager’s support for IPv6 and explains how to ensure that you have connectivity between Replication Manager Servers, Agents, and Consoles.

Internet Protocol support

Windows Server 2008 supports both IPv4 and IPv6 protocols. This is referred to as a dual stack configuration. In Windows Server 2008 environments, it is possible to disable IPv4, disable IPv6 or leave both enabled.

In addition, Windows Server 2003 can be configured to support dual stack or only traditional IPv4 connections (which is the default). The following tables help you understand what combinations are supported by Replication Manager.

Console to server connections

Table 4 and Table 5 on page 29 contain the Internet Protocol configurations that are supported when connecting the Replication Manager Console to the Replication Manager Server.

Table 4  IP configurations for Replication Manager Server (Windows 2003) to Console

Console environment                  IPv4 only server    Dual stack server
Windows 2008 (IPv6 only)             N/A                 Supported (a)
Windows 2008 (IPv4 only)             Supported           Not supported
Windows 2008 (dual stack)            Supported           Supported (a)
Windows 2003 (IPv4 only)             Supported           Not supported
Windows 2003 (dual stack)            Supported           Supported (a)
Windows 2000/XP/etc. (IPv4 only)     Supported           Not supported (a)

a. In environments where the Console and the Server are both dual stack, you must log in to the server using an IPv6 address only; you cannot use a traditional IPv4 address or server name to access the server in that configuration.


Table 5  IP configurations for Server (Windows 2008) to Console

Console environment                  IPv4 only server    Dual stack server    IPv6 only server
Windows 2008 (IPv6 only)             N/A                 Supported            Supported
Windows 2008 (IPv4 only)             Supported           Supported            N/A
Windows 2008 (dual stack)            Supported           Supported            Supported
Windows 2003 (IPv4 only)             Supported           Supported            N/A
Windows 2003 (dual stack)            Supported           Supported            Supported
Windows 2000/XP/etc. (IPv4 only)     Supported           Supported            N/A

Agent to server connections

Table 6 on page 29 indicates the Internet Protocol configurations that are supported when connecting the Replication Manager Agent to a Replication Manager Server on Windows Server 2003.

Table 6  IP configurations for Server (Windows 2003) to Agent

Agent environment               IPv4 only server    Dual stack server
Windows 2003 (IPv4 only)        Supported           Supported
Windows 2003 (dual stack)       Not supported       Supported*
Windows 2008 (IPv4 only)        Supported           Supported
Windows 2008 (dual stack)       Supported           Supported
Windows 2008 (IPv6 only)        N/A                 Supported*
UNIX/Linux (IPv4 only)          Supported           Supported
UNIX/Linux (dual stack)         Not supported       Supported

* Host must be registered to the Replication Manager Server using the IPv6 IP address, not the network name.


Table 7 on page 30 indicates the Internet Protocol configurations that are supported when connecting the Replication Manager Agent to a Server on Windows Server 2008.

Table 7  IP configurations for Replication Manager Server (Windows 2008) to Agent

Agent environment               IPv4 only server    Dual stack server    IPv6 only server
Windows 2003 (IPv4 only)        Supported           Supported            N/A
Windows 2003 (dual stack)       Not supported       Supported*           Supported*
Windows 2008 (IPv4 only)        Supported           Supported            N/A
Windows 2008 (dual stack)       Supported           Supported            Supported
Windows 2008 (IPv6 only)        N/A                 Supported            Supported
UNIX/Linux (IPv4 only)          Supported           Supported            Not supported
UNIX/Linux (dual stack)         Not supported       Supported            Supported

* Host must be registered to the Replication Manager Server using the IPv6 IP address, not the network name.

Replication Manager Server disaster recovery

Replication Manager Server disaster recovery (Server DR) requires an IPv4-only environment. Dual-stack and IPv6-only environments are not supported for Server DR. “Replication Manager Server disaster recovery” on page 340 describes Server DR in detail.


Veritas volumes

Replication Manager supports Veritas Volume Manager (VxVM) in Windows 2000, Solaris, HP-UX, and Linux environments. Refer to the EMC Replication Manager Support Matrix for details.

General considerations

Note the following general considerations for Replication Manager in a VxVM environment:

The VxVM configuration must match exactly on the production host and mount host. Be sure to check versions of the OS, Replication Manager, VxVM, PowerPath, filesystem type, and Solutions Enabler.

(Exception: Replication Manager can support systems that have a different naming scheme on the production host and mount host.)

For Replication Manager to mount Veritas volumes correctly, decide whether to use enclosure-based names before any subdisks, plexes, or volumes are created in a disk group; do not switch to or from enclosure-based names after that point.

Replication Manager is tolerant of VxVM Cross Platform Data

Sharing (CDS) disks. The EMC Replication Manager Support Matrix provides specific details.

Replication Manager does not support the mntlock functionality in VxVM version 5.0 MP3 environments.

As mentioned in “HBA driver support” on page 27, enclosure-based naming is required when using this type of HBA driver.

Replication Manager does not support customized names for

Dynamic Multipathing (DMP) devices.

Veritas naming conventions

Replication Manager supports all Veritas naming conventions, and can be deployed in any existing VxVM environment.

Note: There is one exception: enclosure-based naming is not supported in Linux environments.

For newer, non-Linux environments, enclosure-based naming is recommended over native naming; pseudo naming is the least recommended option (but see the PowerPath note below).


Veritas third-party driver

The Veritas third-party driver (TPD) dictates the physical device name for Veritas devices. This means that even if you use enclosure-based naming, you still set the TPD mode to either native or pseudo device names.

Replication Manager can work with either TPD setting, as long as both production and mount hosts are using the same setting.

If you are using PowerPath, EMC recommends using pseudo names for the TPD setting. You can set the TPD mode with the following Veritas command, where enclosure_name is the enclosure to modify:

# vxdmpadm setattr enclosure enclosure_name tpdmode=[native|pseudo]

To check the TPD setting, run the Veritas command vxdisk list on both the production and mount hosts and compare the Device column in the command output.
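For example, assuming VxVM reports a PowerPath-managed enclosure named emc_clariion0 (the enclosure name here is only illustrative), you could switch it to pseudo names and then confirm the result:

# vxdmpadm setattr enclosure emc_clariion0 tpdmode=pseudo
# vxdisk list

If the Device column shows PowerPath pseudo devices (emcpower names) on both the production host and the mount host, the two hosts are using the same TPD setting.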


Windows disk types

Replication of Windows basic disks is supported. Refer to the EMC

Replication Manager Support Matrix for the latest information on supported versions.

One partition per physical disk is supported. Volumes that reside on multiple array types within a single volume group are not supported.


Replication Manager Server requirements

The Replication Manager Server installation requires one of the following operating systems:

Windows Server 2008

Windows Server 2003

Additionally, the system must meet the Windows service pack levels and hotfixes outlined in the EMC Replication Manager Support Matrix.

VMware support

The Replication Manager Server supports Windows 2003 and

Windows 2008 under VMware. The minimum supported version is

VMware 2.5.1.

The memory and disk requirements are the same as those listed in

Table 1 on page 25 for the Replication Manager Server for the

Windows platform. The recommended network for a VMware machine is GigE.

Chapter 9, ”VMware Setup”

, offers more information about how to configure Replication Manager to work with VMware.


Email requirements

Replication Manager email functionality depends on the Internet

Information Service (IIS). IIS must be installed and running on the server machine before you install Replication Manager Server software.

In Windows 2003 and 2008, IIS is not automatically installed. Verify that IIS is installed on the system before installing Replication

Manager Server:

On Windows 2003, when you install IIS, be sure to select the SMTP Service option. To see the IIS installation options, including SMTP Service, click the Details button on the first panel of the Add/Remove Windows Components wizard.

On Windows 2008, open Server Manager and select Action > Add

Features, then SMTP Server. If IIS is not enabled you are automatically prompted to add the required role service.

Be sure the Simple Mail Transfer Protocol (SMTP) service is started in

Service Manager.

Refer to the Microsoft Support Knowledge Base for details on installing and configuring IIS and the SMTP Service.
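As a quick check, you can confirm that the SMTP service is installed and running from a command prompt; the SMTP service installed by IIS is typically registered under the service name SMTPSVC:

> sc query SMTPSVC
> net start SMTPSVC

The second command is only needed if the query shows that the service is stopped.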


Setting the Replication Manager email sender address

To set the email sender address on the Replication Manager Server:

1. Set the MailSender String value under the Registry key HKEY_LOCAL_MACHINE\SOFTWARE\emc\EMC ControlCenter Replication Manager\Server to the sender email address that you want outgoing messages to show.

2. Restart Replication Manager Server (while no replications are running).
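For example, the value can be set from a command prompt; the address shown here is only a placeholder, so substitute a valid sender address for your site:

> reg add "HKLM\SOFTWARE\emc\EMC ControlCenter Replication Manager\Server" /v MailSender /t REG_SZ /d rm-alerts@example.com

Restart the Replication Manager Server afterward (while no replications are running) so that the new value takes effect.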

Restrictions for 64-bit configurations

If you are installing a Replication Manager Agent in a 64-bit environment (for example, for EMC SRDF®/CE or EMC MirrorView™/CE), avoid installing Replication Manager Server on the same system. The 64-bit Solutions Enabler required by the 64-bit agent prevents the Replication Manager Server (a 32-bit application) from performing actions such as locking and unlocking devices, and from integrating with EMC ControlCenter® (ECC).

Additionally, there will be a minor decrease in performance as the

Replication Manager Server will not be able to communicate directly with the Symmetrix array and will have to rely on the Agent.

MOM/SCOM server requirement

For support of CIM event publishing, the MOM/SCOM server and

Replication Manager Server must be in the same domain, or a trust relationship must be created between the domains.

serverdb disk storage requirements

EMC recommends that you locate the Replication Manager Server’s internal database (serverdb) directory on its own device, separate from the other server components, to accommodate its growth. The serverdb directory contains logs for replicated application data, as well as the Replication Manager internal database.

This section describes how to estimate the disk size required for serverdb so you can choose an appropriate device on which to locate it. During installation of Replication Manager Server, you will specify the location of serverdb. The server database should be located on a storage device that is protected by a method such as mirroring or

RAID.


Disk size considerations

Consider the following when determining disk size requirements for the serverdb:

Oracle logs can be quite large depending on the complexity of the database and how active it is. Allow up to 200 MB per replica of an Oracle database with moderate to high activity.

SQL Server metadata generated during replication is relatively small. Allow 100 KB per replica.

Allow 100 KB on the serverdb device for each file system replica.

Your calculations should consider all replication technologies: snaps, clones, business continuance volumes (BCVs), and so on.

The previous values are per replica. If you schedule a job with a rotation of 7, multiply the value by 7.

Sample calculations

The following are sample calculations estimating the size needed for serverdb.

Table 8 on page 36 calculates the amount of disk space required for serverdb for two Oracle databases when replicas are made twice a day for a week.

Table 8    Calculating required serverdb size for two Oracle databases

Object              Replicas per recovery window     Recommended per replica    Total
Oracle instance 1   14 (twice a day over a week)     200 MB                     2.8 GB
Oracle instance 2   14 (twice a day over a week)     200 MB                     2.8 GB
Total                                                                           5.6 GB

Table 9 on page 37 calculates the amount of disk space required for

serverdb for three file systems for which replicas are made once a day for a week.


Table 9    Calculating required serverdb size for three file systems

Object          Replicas per recovery window    Recommended per replica    Total
File system 1   7                               0.7 MB                     5 MB
File system 2   7                               0.7 MB                     5 MB
File system 3   7                               0.7 MB                     5 MB
Total                                                                      15 MB


Replication Manager Agent requirements

Table 10 on page 38

summarizes the operating systems, logical volume managers, file systems, and databases that the Replication

Manager Agent software supports.

The EMC Replication Manager Support Matrix provides specific information about supported applications, operating systems, architectures (x86, x64, or IA64), high-availability environments, volume managers, and other supported software. This documentation is available from the Powerlink website at: http://Powerlink.EMC.com

Table 10    Replication Manager Agent support information

Operating systems:               Solaris, IBM AIX, HP-UX, Linux, Windows 2000, Windows Server 2003, and Windows Server 2008
High-availability environments:  Oracle RAC, PowerPath, Sun Clusters, VCS, MSCS, MC/ServiceGuard, HACMP
Logical volume managers:         VxVM, AIX native LVM, HP-LVM, RedHat Native LVM
File systems:                    VxFS, JFS, JFS2, Solaris UFS, HP HFS, NTFS, Linux ext3 FS
Databases/Applications:          Oracle, DB2 UDB, Microsoft SQL Server, Microsoft Exchange, Microsoft SharePoint

The agent host system must meet the operating system patch levels outlined in the EMC Replication Manager Support Matrix.

EMC does not recommend replicating data that resides on more than one logical volume manager at the same time. Mount hosts must have a configuration similar to that of the production host from which the data originated. The EMC Replication Manager Product Guide provides more information on mount restrictions.

Replication Manager may install 32-bit binaries, 64-bit binaries, or a combination of these on a system. Which binaries are installed on a system depends upon the architecture of the system, where you are installing the software, and whether or not a native 64-bit binary is available for the application in the given environment.

On Windows hosts, Replication Manager requires Microsoft .NET

Framework Version 2.0 for Replication Manager Config Checker and for the Exchange 2007/2010 agent. .NET Framework Version 2.0 is


included on the Replication Manager distribution and is downloadable from: http://www.microsoft.com/downloads
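If you are not sure whether .NET Framework Version 2.0 is already present on a Windows host, one way to check is to query the standard .NET 2.0 setup Registry key; a value of 0x1 indicates that the framework is installed:

> reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727" /v Install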

The HP Serviceguard and IBM HACMP cluster mount feature requires that cluster nodes have the Perl interpreter version 5.8.0 or higher, located in /usr/bin/perl.

On Solaris hosts, non-disk devices (for example, fibre-attached tape devices) should not exist on the same fibre controller as a disk.


Replication Manager Console requirements

The Replication Manager Console software supports these operating systems:

Windows Server 2008

Windows Server 2003

Windows Vista

Windows XP

Windows 2000 Advanced Server/Server/Professional

The Replication Manager Support Matrix on http://Powerlink.EMC.com is the authoritative source of information about supported operating systems for Replication Manager Console.


Cluster requirements

This section describes requirements for Replication Manager components in a cluster environment.

Replication Manager Server component

Replication Manager Server can be installed as a cluster-aware application. When you install the Replication Manager Server on a computer running on a cluster:

You install the Replication Manager Server component on only one node of the cluster.

Replication Manager fails over and installs itself when the

Replication Manager Server resource group fails over.

Replication Manager jobs that are in progress during a cluster failover will fail. But if a job is on a repeating schedule, the next scheduled job will run.

Cluster checklist

Before installing the Replication Manager Server component in a clustered environment, verify that:

The appropriate Microsoft hotfixes are installed. The EMC

Replication Manager Support Matrix provides more information.

A supported version of cluster software is installed. The EMC

Replication Manager Support Matrix provides the latest support information.

Cluster configuration for Windows 2003

Before installing Replication Manager Server in an MSCS environment, perform the following steps:

1. Create a resource group for the Replication Manager Server.

2. Add an IP address resource to the group.

3. Add a Network Name resource to the group. Make this resource dependent on the IP address resource that you added in the previous step. This Network Name resource will be the virtual server name that Replication Manager Server will use.

4. Add a physical disk resource for the Replication Manager Server software and database.
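If you prefer the command line to Cluster Administrator, the preceding resource group can be sketched with cluster.exe along the following lines; the group and resource names are placeholders, and you must still configure the IP address, network, and disk parameters before bringing the group online:

> cluster group "RM Server" /create
> cluster resource "RM Server IP" /create /group:"RM Server" /type:"IP Address"
> cluster resource "RM Server Name" /create /group:"RM Server" /type:"Network Name"
> cluster resource "RM Server Name" /adddep:"RM Server IP"
> cluster resource "RM Server Disk" /create /group:"RM Server" /type:"Physical Disk"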


Cluster configuration for Windows 2008 failover cluster

Before installing Replication Manager Server in a Windows 2008 failover cluster environment, perform the following steps:

1. Using the failover cluster administrator, create an empty service or application for the Replication Manager Server.

2. Add a client access point resource to the application you created in step 1.

3. Add a physical disk resource for the Replication Manager Server software and database.

Replication Manager Server on multiple virtual nodes

If you are running multiple virtual nodes of a cluster on the same physical node and only one of those virtual nodes is running

Replication Manager Server, it is possible to access that Replication

Manager Server by entering the virtual IP address or virtual server name of one of the other virtual nodes. In other words, in this configuration, you can access and run Replication Manager on a virtual node of the cluster on which the Replication Manager software is not installed. When working with Replication Manager in a cluster, always use the virtual server name or IP address of the virtual server where the Replication Manager Server is installed. Do not use IP addresses from other virtual servers on the same cluster.

Replication Manager Agent and Console components

Replication Manager Agent and Console components do not fail over with the cluster. If you install agent or console components on a clustered system, you should install them locally to all nodes.

MSCS checklist

Before installing Replication Manager Agent software on a clustered system, verify that the following requirements are met for your cluster type:

All cluster nodes are operational.

All shared devices are online.

All resources are owned by the same node throughout installation and configuration.

Application software (such as SQL Server and Exchange) is running.

CLARiiON storage for all nodes of the cluster must reside in the same storage group.


HP Serviceguard and AIX HACMP checklist

If the cluster is being used for mount, Replication Manager requires that the passive node be in a separate CLARiiON storage group.

The cluster mount feature requires Perl to be installed on all cluster nodes.

CLARiiON storage for all nodes of the cluster must reside in the same storage group.

Required AIX ODM packages

The following ODM packages are required for AIX hosts:

EMC.CLARiiON.aix.rte 5.3.0.0 (EMC CLARiiON AIX Support Software)
EMC.CLARiiON.fcp.rte 5.3.0.0 (EMC CLARiiON FCP Support Software)
EMC.CLARiiON.ha.rte 5.3.0.0 (EMC CLARiiON HA Concurrent Support)
EMC.Symmetrix.aix.rte 5.3.0.0 (EMC Symmetrix AIX Support Software)
EMC.Symmetrix.fcp.rte 5.3.0.0 (EMC Symmetrix FCP Support Software)
EMC.Symmetrix.ha.rte 5.3.0.0 (EMC Symmetrix HA Concurrent Support)
EMCpower.base 5.0.0.0 (PowerPath Base Driver and Utilities)
EMCpower.consistency_grp 5.0.0.0 (PowerPath Consistency Group Extension and Utilities)
EMCpower.migration_enabler 5.0.0.0 (PowerPath Migration Enabler and Utilities)
EMCpower.mpx 5.0.0.0 (PowerPath Multi_Pathing Extension and Utilities)
EMCpower.multi_path_clariion 5.0.0.0 (PowerPath Multi_Pathing Extension for CLARiiON)


2

Installation

This chapter describes how to install Replication Manager. It covers the following subjects:

Overview of Replication Manager components............................ 46

Contents of the product DVD .......................................................... 47

Preinstallation tasks ........................................................................... 47

Installing the Replication Manager Server software..................... 54

Installing the Replication Manager Agent software ..................... 57

Installing the Replication Manager Console software.................. 61

Troubleshooting install using log files ............................................ 62

Modifying or removing Replication Manager............................... 62

Upgrading Replication Manager ..................................................... 64

Migrating Replication Manager Server to a different host .......... 67

Viewing current version and patch level........................................ 67

Replication Manager Config Checker ............................................. 68


Overview of Replication Manager components

Replication Manager consists of the software components listed in this section. Some hosts may need only one of these components, while others may need two or even all three components. To decide which components to install on each machine, consider the tasks performed by each component:

The Replication Manager Server component manages replicas.

Typically, one Replication Manager Server can meet the needs of several hosts.

Note:

The Replication Manager Server is a 32-bit application only. In supported 64-bit environments the server installation adds 32-bit server binaries to the system.

The Replication Manager Agent component prepares a host machine to create or mount a replica by performing tasks such as quiescing the data before replication. You install the software on the same host where the application software resides. The following agents are available:

• Oracle

• UDB

• SQL Server

• Exchange 2000/Exchange 2003

• Exchange 2007/Exchange 2010

• SharePoint 2007

• File system (always installed when one of the agents listed above is installed)

“Replication Manager Agent requirements” on page 38

provides detailed information.

Native 64-bit agents are available for Exchange and SQL Server. For SRDF/CE and MirrorView/CE support, a 64-bit agent is required. In Windows Server 2008, all agents are 64-bit native; therefore, no special agent is required for SRDF/CE or MirrorView/CE.


The Replication Manager Console component is the graphical user interface that lets you control the server and agent software locally or remotely. Install this software on a supported Windows machine.

Contents of the product DVD

The Replication Manager base release is shipped on a single DVD.

The DVD contains all Replication Manager components for all supported operating systems, including:

Replication Manager Server

Replication Manager Agents

Replication Manager Config Checker (installed with the Agents)

Replication Manager Console

Note:

The user documentation is no longer available from the product DVD.

To access the EMC Replication Manager Product Guide and the EMC Replication

Manager Administrator’s Guide (this manual), go to the Powerlink website.

Preinstallation tasks

Before you start the installation, ensure you have met the following conditions:

Read the Replication Manager documentation thoroughly.

“Viewing and printing the documentation” on page 48 describes

the Replication Manager documentation set.

Accessed the Replication Manager Software Key Request Card or the license key file. Refer to

“Obtaining a license file” on page 48

for detailed instructions.

If you are installing Replication Manager on an HP-UX host, you must

“Mounting a DVD on HP-UX” on page 52

provides special instructions.

Opened the appropriate Replication Manager ports, if a firewall is present in the environment.

“Setting up Replication Manager in a secure environment” on page 50 provides default port

information.



Performed all of the preliminary setup tasks described in

Chapter 1, ”System Requirements,”

and prepared the storage environment according to the procedures described for your

storage array in Chapter 4, ”CLARiiON Array Setup,”

Chapter 5,

”Symmetrix Array Setup,”

Chapter 6, ”Celerra Array Setup,”

and

Chapter 7, ”Invista Setup.”

Have root access (UNIX) or Administrator access (Windows) on the Replication Manager Server, hosts, and on systems that will host the console.

On UNIX systems, make sure that symcli is installed in the default location. If not, add the symcli location to the PATH (see the example after this checklist).

On a Windows computer, you have set the system variable TEMP.

Install logs are saved in TEMP.

Installed the required operating system patches, as described in the EMC Replication Manager Software Support Matrix on: http://Powerlink.EMC.com
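For example, on a UNIX host where the symcli binaries are not already on the path, you might append their installation directory; the directory shown is a common Solutions Enabler location, so substitute the path used on your system:

# PATH=$PATH:/usr/symcli/bin
# export PATH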

Viewing and printing the documentation

In addition to this guide, EMC recommends that you read the following documentation available on Powerlink before installing and using Replication Manager:

EMC Replication Manager Release Notes

EMC Replication Manager Product Guide

You must have Adobe Acrobat Reader to view and print the documentation. You can download Adobe Acrobat Reader from

Adobe’s website at: http://www.adobe.com

Obtaining a license file

Replication Manager is license-controlled. Any license obtained for

Replication Manager 5.x is valid in this release.

During installation you need to specify a license file. To obtain a license file:

1. Obtain a License Authorization Code (LAC) in one of the following ways:

• Locate the LAC on the Replication Manager Software Key

Request Card in the product kit.



• If you lose your code, you can receive a duplicate LAC by sending an email to the following address. Include your site name, site ID, phone number, and product serial number:

[email protected]

The License Authorization Code enables you to retrieve your

Replication Manager license file from Powerlink Licensing, the web-based license management system.

A single LAC generates all of the licenses for your configuration.

2. Obtain a license key file at http://Powerlink.EMC.com.

Navigate to Support>Software Download and Licensing>License

Management. Refer to the EMC License Authorization email you received from EMC. The email contains your LAC, which enables you to retrieve your Replication Manager license file.

3. Follow the directions on the licensing main page.

You will be asked for the MAC address of the computer to be used as the Replication Manager Server. You can enter any MAC address associated with the computer (for example, the MAC address of the HBA or a network card).

Make sure you save the license file with a .lic extension (the actual license filename can be unique). When saving the .lic file, the filename extension may default to .txt, which is not recognized by

Replication Manager. To ensure it saves with a .lic extension, select All Files in the Save as type field before saving.

4. After you have the license key file, apply it to the system when installing the Replication Manager Server, or afterwards using the

Console.

For information on how licensing works in Replication Manager,

refer to “License types” on page 90 .

License requirements for Replication Manager Server disaster recovery

Replication Manager requires two Replication Manager Server licenses when configuring server disaster recovery. Both of these licenses must be installed on the primary Replication Manager

Server. You cannot add a license to the secondary server. When the

Replication Manager Server fails over to the secondary server, the same licenses that were required on the primary will be used on the secondary.

For new installations of Replication Manager, you need to provide two licenses on the primary server. If you are upgrading from a


previous release, you need to add an additional server license to the primary Replication Manager Server’s existing license file.

“Adding a license” on page 94

describes how to add a license to the license file.

Setting up Replication Manager in a secure environment

This section describes considerations when setting up Replication Manager in a secure environment.

Default Replication Manager ports

Table 11 on page 50 lists the ports used by Replication Manager and supporting software. These ports need to be open when a firewall is present for proper operation of Replication Manager. Default ports for Replication Manager can be changed during installation or by modifying host and server properties in the Replication Manager Console.

Table 11    Default ports used by Replication Manager and other software

From/To                               Port
Server Control Port                   65432
Server Data Port                      65431
Agent Control Port                    6542
Agent Data Port                       6543
Agent to Server                       443 (if connected to Replication Manager Server using SSL)
Console                               80
Navisphere® Manager                   80/443 or 2162/2163
Navisphere Secure CLI                 443 or 2163
iSCSI                                 3260
SSH (for RecoverPoint and Celerra)    22

Replication Manager Console

Hosts running Replication Manager Console must allow outbound traffic on all ephemeral ports.
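If the Windows Firewall is enabled on the Replication Manager Server host, the server control and data ports can be opened with rules along these lines; the syntax shown is the netsh advfirewall form available on Windows Server 2008, and the rule names are arbitrary:

> netsh advfirewall firewall add rule name="RM Server Control" dir=in action=allow protocol=TCP localport=65432
> netsh advfirewall firewall add rule name="RM Server Data" dir=in action=allow protocol=TCP localport=65431

Open the agent control and data ports (6542 and 6543) on agent hosts in the same way.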


Firewall placement and ports

This section describes sample placement of firewalls in a Replication

Manager environment, indicating which open ports are required.

Note:

Placement of firewalls in these figures is for illustrative purposes only.

Figure 1 on page 51

shows a sample Replication Manager configuration with a CLARiiON array:

The firewall between the Replication Manager Console and the

Server needs ports 65431 and 65432 open so that commands from the console can reach the server.

The firewall between the host running Replication Manager

Agent software and the internal network requires ports 6542 and

6543.

[Figure 1 depicts the Replication Manager Server, the Replication Manager Agent host, and the storage array on the internal network. A firewall between the Replication Manager Console and the Server passes the server control and data ports (65432 and 65431), and a firewall between the Agent host and the internal network passes the client control and data ports (6542 and 6543).]

Figure 1    Sample firewall placement in Replication Manager environment


Mounting a DVD on HP-UX

If you are installing Replication Manager on an HP-UX host, follow these instructions to mount the DVD on the system before you install the software on that host:

1. Insert the DVD into the drive on the HP-UX machine and determine the device address by executing the following command:

ioscan -fun

2. Examine the output for a device capable of reading DVDs. Look for something like:

   disk 2 8/16/5.2.0 sdisk CLAIMED DEVICE TOSHIBA DVD XM-5701TA /dev/dsk/c2t2d0 /dev/rdsk/c2t2d0

Make a note of the device address for later use. The device address looks like:

/dev/dsk/c#t#d0

3. Ensure that the following binaries exist in /usr/sbin/:

   pfs_mountd
   pfsd
   pfs_mount
   pfs_umount

4. Create a directory to serve as the mount point for this process:

   mkdir /mount_dir

5. Create or edit /etc/pfs_fstab and add the following entry:

   device mount_dir pfs-rrip xlat-rrip 0 0

   where device is the block device corresponding to the DVD and mount_dir is the mount point that you created in Step 4.

6. Execute the following commands:

   pfs_mountd &
   pfsd 4 &

7. Execute the following command:

   pfs_mount device /mount_dir


   where device is the device address you noted in Step 2 and /mount_dir is the directory you created in Step 4.

The DVD is now available for use.
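For example, using the device address from the sample ioscan output in Step 2 and a mount point of /mount_dir (both of which may differ on your system), the /etc/pfs_fstab entry and the mount command would be:

   /dev/dsk/c2t2d0 /mount_dir pfs-rrip xlat-rrip 0 0

   # pfs_mount /dev/dsk/c2t2d0 /mount_dir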

To unmount the DVD:

1. Run the following command:

   pfs_umount device

2. Eject the DVD from the drive.


Installing the Replication Manager Server software

This section describes a fresh installation of Replication Manager

Server software by the installation wizard. To install using the

command line, refer to Appendix A, “Using the Command Line

Interface.”

To install Replication Manager as an upgrade to an existing installation, refer to

“Upgrading Replication Manager” on page 64 .

Although it is possible to install all types of components (server, agent, and console) with a single pass through the installation wizard, this section describes only the installation wizard panels that you see when you install the server component.

When installing the server component with other Replication

Manager components in a cluster environment, EMC recommends that you install them with a single pass through the installation wizard.

Note:

Replication Manager Server is installable on supported Windows hosts only.

To install the Replication Manager Server component:

1. Log in as Administrator. If you are installing in a cluster environment, log in to the active node of the cluster.

2. From the top level of the Replication Manager distribution, run ermpatch.bat.

3. When the Welcome panel appears, click Next.

4. On an update installation only: Verify that you have read and understand the update preparation steps and click Next.

5. Read the License Agreement panel. Select I accept and click Next to continue.

6. Be sure Server is selected and click Next.

7. If cluster software is detected, you must specify whether you want to perform a cluster install or a non-cluster install.

In a cluster environment, the Replication Manager Server component must be installed on only one cluster node.

“Cluster requirements” on page 41 describes setup requirements

for a clustered install of Replication Manager Server.


8. If you selected Cluster Install, read the prerequisites listed in the next panel. Click the button that indicates you have read the prerequisites and click Next.

9. Specify the location in which to install Replication Manager and click Next.

If you enter a location other than the default location, install the software in a subdirectory named rm to isolate Replication

Manager files. If you specify a directory that does not exist, the directory will be created for you. The default location is

C:\Program Files\EMC\rm or C:\Program Files (x86)\EMC\rm.

If you are performing a cluster installation, there is no default location. Enter the volume for the physical disk resource that you added to the Replication Manager Server resource group. The

Replication Manager internal database and logs are also installed in this shared location.

10. Specify the name of the Replication Manager license key file and

click Next. The license key filename must end in .lic. “Obtaining a license file” on page 48 describes how to obtain a license file.

11. Specify the location for the internal database (this panel is not part of a cluster install):

• The default location for the internal database is C:\Program

Files\EMC\rm\serverdb or C:\Program Files

(x86)\EMC\rm\serverdb. The install program automatically creates the subdirectory serverdb at this location.

If you are performing a cluster installation, specify a path on shared clustered storage.

• EMC recommends that you locate the serverdb directory on its own device, separate from the other server components, to accommodate growth of the database. The rate at which the server database grows depends on the number and

complexity of the replicas. “serverdb disk storage requirements” on page 35 provides more information.

This installation assumes the internal database is located on storage devices that are protected by methods such as mirroring or RAID.

Click Next to continue.


12. Specify the following server options:

• Specify the ports you want to use for server control and data communications. (When you click Next, the wizard verifies that the ports are not already in use.)

• Select whether to use Secure Sockets Layer (SSL) communications. Selecting SSL at the server level causes all

Replication Manager Consoles to connect to the server using the SSL protocol. This provides strong authentication to protect against unauthorized access to communications, but may have an impact on console and server performance.

13. Record the hostname and port number of the computer on which you install the server software. You will need this information to log in to the server. Click Next to continue.

Hostname: ______________    Server control port: ______________

14. Specify the user account to be used to create schedules.

Replication Manager uses Windows Scheduler to create job schedules. The user account can be a local or domain account, unless you are performing a cluster installation, in which case you must specify a domain name. The user must be a member of the local Windows Administrators group. Click Next.

Note:

Type the username, password, and domain carefully—Replication

Manager does not validate this information during the setup process.

15. Click Install to begin the installation. After you click Install, a panel displays the progress of the installation.

16. Select the Disaster Recovery role for the Replication Manager Server. If you are not implementing a Replication Manager Server disaster recovery solution, select Standalone Server. “Setting up Replication Manager Server disaster recovery” on page 342 provides more information.

Once the installation is complete, Replication Manager configures the installed components and starts the Replication Manager internal database and the Replication Manager Server service.

17. Click Finish to exit the installation wizard.


Note:

When you first log in to the Console using the default Administrator

account, Replication Manager prompts for a new password. “Setting the

Administrator password” on page 72

provides more information.

Installing the Replication Manager Agent software

This section describes a fresh installation of the Replication Manager

Agent software using the installation wizard. To install using the

command line, refer to Appendix A, “Using the Command Line

Interface.”

To install Replication Manager as an upgrade to an existing installation, refer to

“Upgrading Replication Manager” on page 64 .

Fresh installation of this service pack of Replication Manager Agent is not supported on UNIX/Linux platforms. For UNIX/Linux, you must install Replication Manager 5.2.0 first, then upgrade to the

Replication Manager service pack. Refer to

“Upgrading Replication

Manager” on page 64

.

Note:

Replication Manager Agent does not fail over with the cluster. If you install agent components on a clustered system, you should install them locally to all nodes.

To install Replication Manager Agent software:

1. Log in as a user with Windows Administrator privileges.

2. Execute ermpatch.bat:

> ermpatch.bat

3. When the Welcome panel appears, click Next.

4. On an update installation only: Verify that you have read and understand the update preparation steps and click Next.

5. Read the License Agreement panel. Select I accept and click Next to continue.

6. Select the Agent software you want to install and click Next.

Support for filesystems is also installed when you select any application agent. If you select only the top level node (“Agent”), support for filesystems only is installed.


Replication Manager Agent for SharePoint must be installed on all nodes that have content databases or search index files, or where the SharePoint writer is registered. The SharePoint appendix in the EMC Replication Manager Product Guide contains additional information.

On Windows Server 2003 x64 or IA64 hosts, to install a 64-bit version of the agent, select from the “64-bit Agent (SRDF/CE and

MirrorView/CE)” list. The 64-bit version is required for

SRDF/CE and MirrorView/CE support.

On Windows Server 2008 hosts, the installation wizard always installs the appropriate RM Agent version (32-bit or 64-bit) automatically.

7. If prompted, specify the location in which to install the agent software and click Next. You will see this panel only if you are installing the first Replication Manager component on this computer, or if you are installing agent software and the server software on a cluster in the same pass through the installation wizard.

8. Select the following options:

• Accept the default values for the agent control port (6542) and agent data port (6543), or enter different values. When you click Next, the wizard will verify that the ports are not already in use.

• Select whether to use SSL communications. Selecting SSL causes the host to communicate with the server using SSL protocol. This provides strong protection against unauthorized access to communications, but may have an impact on host and server performance.

• Select whether to run scripts in secure mode. If you enable this option, users will need to provide a username and password whenever logging in to the system or running pre- and post-replication scripts, backup scripts, and mount scripts.

Otherwise, the scripts will run as root or Administrator.


9. Record the hostname and the agent control port number of each computer on which you install agent software. You will need to enter this information when you register the host, as described in

“Registering a host” on page 76 .

Hostname: ______________    Agent control port: ______________


10. On the Ready to Install panel, click Next. A panel displays the progress of the installation.

11. At the end of agent installation, specify whether you want to run the Deployment Wizard. The Deployment Wizard guides you through the recommended steps to help you successfully prepare

and configure Replication Manager. Refer to “Using the

Deployment Wizard” on page 59

.

12. Click Finish to close the wizard when installation is complete.

The installation software starts the agent software.

Using the Deployment Wizard

The Deployment Wizard provides step-by-step instructions on how to install and configure prerequisite software on Replication Manager production and mount hosts. The wizard also provides helpful instructions on configuring storage arrays for use with Replication Manager.

Note:

The Deployment Wizard is only available when installing Replication

Manager Agent software on Windows machines.


You can start the Deployment Wizard by performing any of the following steps:

Select Run Deployment Wizard on the InstallShield Wizard

Complete panel.

Open the rmdeploy.hta file on your hard disk following installation (by default, EMC\rm\client\bin in Program Files\ or

Program Files (x86)\).

Choose the deployment scenario that best describes your storage environment, and proceed through the wizard.

Note:

The wizard includes a printable Help system that is available as a

.CHM file (rmdeploy.chm) in the client\bin subdirectory.


Installing the Replication Manager Console software

This section describes a fresh installation of Replication Manager

Console using the installation wizard. To install using the command line, refer to

Appendix A, “Using the Command Line Interface.” To

install Replication Manager service pack as an upgrade to an existing

installation, refer to “Upgrading Replication Manager” on page 64

.

Note: The Replication Manager Console component does not fail over with a cluster. If you install the Console on a clustered system and you want it to be available after failover, you should install it locally on all nodes.

To install the Replication Manager Console:

1. Log in as a user with Windows Administrator privileges.

2. Change directory to the top level of the Replication Manager distribution and run ermpatch.bat.

3. When the Welcome panel appears, click Next.

4. On an update installation only: Verify that you have read and understand the update preparation steps and click Next.

5. Read the License Agreement panel. Select I accept and click Next to continue.

6. Select Console and click Next.

7. Select the location in which to install the console component. The install program automatically creates the subdirectory named gui at this location. The default location is Program Files\EMC\rm or

Program Files (x86)\EMC\rm.

Note:

If you change the default directory, EMC recommends that you manually type in the subdirectory rm to isolate Replication Manager files in their own directory.

8. On the Ready to Install panel, click Install. A new panel displays the progress of the installation.

9. Optionally, select the option to create a desktop shortcut.

10. Click Finish to exit the wizard.


Troubleshooting install using log files

Installation information, errors, and warnings are logged in the following files:

IRInstall.log (UNIX only) is created in the /tmp directory of the system where Replication Manager is installed. This is the only installation log file for UNIX clients.

rm_setup.log (Windows only) is created in the installing user’s

TEMP directory. This is the main log file on Windows.

rm_install.log (Windows only) is created in the installing user’s

TEMP directory only when command line installation ("silent" install) is used. It contains output from InstallShield’s silent mode.

Modifying or removing Replication Manager

To modify or remove a Replication Manager software component, follow the appropriate procedure described in this section.

Modifying or removing components on Windows

Before removing Replication Manager Agent from a host, use the

Replication Manager Console to delete the host.

To remove or modify an individual Replication Manager component on a Windows system, or to completely remove Replication Manager software from a Windows system:

1. Be sure you are logged in as a user with Windows Administrator privileges.

2. Run ermpatch.bat from the Replication Manager distribution.

Select the operation you want to perform:

Modify — Removes an installed component or adds a new component.

Repair — Reinstalls all previously installed components.

Remove — Removes all Replication Manager software.

An alternative method to remove all Replication Manager software from a computer is to use Windows Add/Remove

Programs.


Removing Replication Manager Agent (UNIX/Linux)

Before removing Replication Manager Agent from a host, use the

Console to delete the host.

To completely remove Replication Manager Agent from a UNIX or

Linux system:

1. Log in as root.

2. Change directory to the _uninst directory in the location where you installed Replication Manager. For example:

# cd /opt/emc/rm/_uninst

3. Run uninstaller.bin. Follow the instructions on the screen to complete the removal of the component.

Use the -silent option (uninstaller.bin -silent) to uninstall a component without additional user interaction.

If you use the -silent option when uninstalling an agent, all installed agents will be removed. To remove an individual agent, run the uninstaller.bin program without the -silent option. This will run the Uninstall wizard, where you can select the agents that you want to remove.

Removing Replication Manager from a cluster

To remove Replication Manager from a cluster:

1. Remove the Replication Manager Server services from the failover nodes (not from the node where Replication Manager was originally installed):

   a. Make sure Replication Manager Server is running on the node on which it was originally installed, and make a network share of the location where you installed Replication Manager Server (for example, F:\rm\server\bin).

   b. Map the share that you created above to a drive on the failover node.

   c. Verify that you have access to the files located in the server\bin directory of the share above.

   d. On the failover node, open a command prompt window and go into the mapped drive.

   e. Run the following command from the directory you shared out in step a:

      ins_serv.exe uninstall -cluster


f. On the failover node, run Add/Remove Programs and select

Replication Manager

to run the uninstall wizard. Select the remaining Replication Manager components (agent, console) that you want to uninstall.

g. On the failover node, disconnect from the network share.

h. Repeat steps b–f for the other failover nodes in the cluster.

(Do not perform steps b–f for the original install node.

Uninstalling from the original install node is covered in

Step 2 .)

2. On the original install node of the cluster, run Add/Remove

Programs

and select Replication Manager to run the uninstall wizard. Select the remaining Replication Manager components that you want to uninstall.

Upgrading Replication Manager

To install a Replication Manager service pack as an upgrade from an earlier version, follow the appropriate procedure in this section.

See

“Upgrade considerations” on page 65

and the corresponding upgrade instructions for Server, Agent, or Console that follow that section.

Table 12 on page 64

describes all supported upgrades to Replication

Manager 5.2.3.

Table 12    Supported upgrades to Replication Manager 5.2.3

Version      Upgrade path to 5.2.3
RM 5.2.2.1   Upgrade directly to RM 5.2.3
RM 5.2.2     Upgrade directly to RM 5.2.3
RM 5.2.1     Upgrade directly to RM 5.2.3
RM 5.2.0     Upgrade directly to RM 5.2.3
RM 5.1.3     Must first upgrade to RM 5.2, then to RM 5.2.3
RM 5.1.2     Must first upgrade to RM 5.2, then to RM 5.2.3
RM 5.1.1     Must first upgrade to RM 5.2, then to RM 5.2.3
RM 5.0.x     Must first upgrade to RM 5.2, then to RM 5.2.3


Upgrade considerations

Review the following before upgrading Replication Manager software:

Review the EMC Replication Manager Support Matrix on Powerlink to verify that the system meets all current requirements.

Be sure you have a recent backup of the serverdb directory. Refer to

“Protecting the internal database” on page 109

.

Agent and server versions

Review the following agent and server considerations before upgrading Replication Manager software:

Upgrade the server before upgrading agent systems.

Verify that agent components are no more than one version behind the server. For example, if the server is at Version 5.2.x, all agent components must be at Version 5.1.0 or later.

Verify that the Replication Manager Agent software on the production host and corresponding mount host are at the same version.

If both Replication Manager Server and Agent reside on the same host, they must be upgraded at the same pass through the

Installation wizard.

If a system has agents installed that use 32-bit binaries, this can prevent you from upgrading to native 64-bit binaries. For example, it can preclude upgrading to agents compatible with SRDF/CE and MirrorView/CE for MSCS. If this situation occurs, uninstall all agents on the system that use 32-bit binaries and then retry the upgrade.

If you are upgrading an HP-UX agent on an Itanium-based system, you must uninstall the agent prior to upgrading to

Replication Manager 5.2. Beginning with Replication Manager

5.2, the HP-UX agent runs as a native 64-bit agent on

Itanium-based systems.

If your Windows CLARiiON storage environment (EMC

SnapView™ clone, SnapView snap, or EMC SAN Copy™) contains clients that use greater than 2 TB LUNs for replication jobs, ensure that these clients are running a minimum of

Replication Manager 5.1.2 in order to properly update the storage db.

Perform a storage discovery after upgrading clients and the server in order to update the Replication Manager database.


Symmetrix long duration locks

Before Version 5.2.3, Replication Manager applied long duration locks (LDLs) when adding Symmetrix devices. When upgrading a

Replication Manager Server to Version 5.2.3 in a configuration that contains a Symmetrix array, you are asked if you want to disable the long duration locks feature for future operations and remove LDLs from currently added Symmetrix devices.

Celerra NFS host registered with private IP address

If you are upgrading from Version 5.2.2 and the Celerra NFS environment variable IR_ALT_HOST_IP_hostname was used to specify an alternate private IP address for the mount host name, EMC recommends that you follow these steps:

1. Upgrade the Replication Manager Server and Console.

2. Upgrade the Agent.

3. For existing jobs, modify the NFS network host interface value under Job properties, Mount tab. Specify the alternate IP address or hostname. If this is a Celerra NFS datastore job with ESX as the mount host, enter the hostname of the VMkernel interface. Be sure the IP or hostname has connectivity to Celerra data mover.

4. Unset IR_ALT_HOST_IP_hostname on the mount host.

Cluster upgrade

When upgrading a clustered Replication Manager Server, install the

Replication Manager Server component on the cluster node where

Replication Manager was originally installed.

Upgraded Oracle jobs

Existing Oracle jobs that are modified after upgrade require a value for the SYS password. The Oracle appendix in the EMC Replication

Manager Product Guide contains more information.

Upgrading a Replication Manager 5.x Server

To upgrade an existing Replication Manager 5.x Server to version

5.2.3:

1. Refer to the notes in “Upgrade considerations” on page 65

.

2. Run the installation wizard. Refer to

“Installing the Replication

Manager Server software” on page 54

.

Upgrading a Replication Manager 5.x Agent

To upgrade an existing Replication Manager 5.x Agent to version

5.2.3:

1. Refer to the notes in “Upgrade considerations” on page 65

.


2. Be sure you have upgraded the Replication Manager Server before upgrading the Agent software.

3. Upgrade the Replication Manager Agent software by running the installation wizard. Refer to

“Installing the Replication Manager

Agent software” on page 57

.

Note:

Replication Manager Agent software on the production host and corresponding mount host must be at the same version.

Upgrading a Replication Manager 5.x Console

To upgrade an existing Replication Manager 5.x Console to version

5.2.x:

1. Refer to the notes in “Upgrade considerations” on page 65

.

2. Run the installation wizard. Refer to

“Installing the Replication

Manager Console software” on page 61

.

Migrating Replication Manager Server to a different host

Support for Replication Manager Server on certain platforms and operating systems ended in version 5.2.2 (refer to the Replication

Manager Release Notes for details). Migration of an existing 5.2.0 or

5.2.1 Replication Manager Server to a new host should be performed by EMC customer service only. Customer service should refer to EMC

Knowledgebase solution emc218841 for the migration procedure.

Viewing current version and patch level

Check the version and patch level of Replication Manager with the

ermPatchLevel

utility. To use the utility:

1. Change directory to the location where you installed Replication

Manager

(for example, /opt/emc/rm or Program Files\EMC\rm).

2. Change to the directory where binaries are stored for the particular component you want to check (for example, rm\server\bin).

3. Execute the appropriate script, depending on the platform. On UNIX, run ermPatchLevel.sh. On Windows, run ermPatchLevel.bat.
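For example, to check the server component on a Windows host installed in the default location:

> cd "C:\Program Files\EMC\rm\server\bin"
> ermPatchLevel.bat

The script reports the installed Replication Manager version and patch level for that component.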


Replication Manager Config Checker

Replication Manager Config Checker verifies that your software,

CLARiiON and Celerra arrays, and hosts are properly configured.

Refer to the readme_cfgchk.txt file (in the client/bin directory under the Replication Manager installation location) for the list of tests that

Replication Manager Config Checker performs.

WARNING

It is highly recommended that you remedy any failures before continuing with configuration. Replications will not work properly if you proceed without correcting these errors.

Installing Config Checker

Replication Manager Config Checker is installed when you install a Replication Manager Agent component on a computer running Windows, Solaris, Linux, HP-UX, or AIX.

Running Config Checker

On Solaris, Linux, HP-UX, and AIX, you can run Config Checker on demand with the cfgchk.sh command line interface. On Windows you can run the cfgchk CLI, or select Start > Programs > Replication

Manager > Replication Manager Config Checker to run the GUI.

Config Checker runs on physical and virtual machines.

Running the CLI

To run the Config Checker CLI:

1. Change directory to the location where Config Checker was installed (the following are the default locations):

C:\> cd \Program Files\EMC\rm\client\bin

(Windows 32-bit)

# cd /opt/emc/rm/client/bin

(Solaris, Linux, HP-UX, or AIX)

2. Run the cfgchk command. The following example saves the

Config Checker report in HTML format:

> cfgchk -html > cfgchkfile.html

On Solaris, Linux, HP-UX, and AIX, the command is cfgchk.sh.

The following example saves the report in HTML format:

# cfgchk.sh -html > cfgchkfile.html


Running the Config Checker GUI (Windows only)

To run the Config Checker GUI:

1. Select Start > Programs > Replication Manager > Replication Manager Config Checker. The Config Checker interface appears.

2. From the Action menu, select Register CLARiiON or Register Celerra to register the storage arrays to be checked.

3. For a CLARiiON array:

   a. On the Register CLARiiON dialog box, click Add a new storage array.

b. In the Add Storage dialog box, specify the following:

– Serial number

– IP address for SP A

– IP address for SP B

– Username and password of an administrator account to access the array

– Network ports used by Navisphere Manager for this array

   c. Click OK to save and close the Add Storage dialog box.

   d. Click OK to save and close the Register CLARiiON dialog box.

4. For a Celerra array:

   a. On the Register Celerra dialog box, click Add a new destination.

b. When the Add Destination dialog box appears, specify the following:

– Source Data Mover IP Address

– Destination Data Mover IP Address

– Destination Celerra Name

   c. Click OK to save and close the Add Destination dialog box.

   d. Click OK to save and close the Register Celerra dialog box.

5. Select Run from the Action menu to check the configuration.

Several command prompt windows will appear during a discovery or configuration check. This is normal, and these extra windows will close at the conclusion of the discovery or check.

6. Select Save Results from the File menu to save an ASCII copy of the Config Checker results. The results list which tests have passed and failed, and how to remedy failures.


cfgchk command syntax

Table 13 on page 70 lists options for the cfgchk command.

Table 13  cfgchk CLI options

-technology {CELERRA | CLARIION}
Indicates the array technology to test. If omitted, all array types are tested.

-xml, -html, -ascii
Generates the output in the specified format. These options are mutually exclusive. The default is -ascii.

-discover
Searches for storage arrays. If arrays are found, asks for a username and password.
Note: This option cannot be used with the -silent option in the same call.

-silent
Runs in silent mode (no prompting).

-clariion spa,spb,username,password[, 80 | 2162]
Specifies the CLARiiON array to check. Enter the service processor address pair, username, and password, and optionally, the port. To check multiple CLARiiON arrays, repeat the -clariion option and arguments. If you omit this option, cfgchk does not check any arrays. The default port is 80.

-celerra srcIP,dstIP,dstName
Specifies a source and destination Celerra replication pair. To specify multiple arrays, repeat the option.

-defaultport {80 | 2162}
Specifies the default port number for CLARiiON arrays found by the -discover option.

-defaultuser username,password
Default username and password for CLARiiON arrays found by the -discover option.

-recoverpointoptions username,password,hostname
RecoverPoint Appliance username/password and RecoverPoint Appliance hostname for RecoverPoint tests.

-help
Displays the usage statement for cfgchk.
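For example, the following command is a sketch of how these options combine; the SP addresses, credentials, and output filename are placeholders only and must be replaced with values from your environment. It checks a single CLARiiON array in silent mode and saves an HTML report:

> cfgchk -technology CLARIION -clariion 10.6.1.10,10.6.1.11,clariionadmin,password -silent -html > clariion_cfgchk.html

On Solaris, Linux, HP-UX, or AIX, substitute cfgchk.sh for cfgchk.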

Chapter 3 Administration Tasks

This chapter describes the following Replication Manager administration tasks:

Setting the Administrator password ............................................... 72

Managing user accounts ................................................................... 74

Managing hosts .................................................................................. 76

Managing general server options .................................................... 85

Publishing management events....................................................... 86

Managing server security options ................................................... 87

Working with licenses ....................................................................... 90

Setting the client security mode....................................................... 95

Managing replica storage.................................................................. 97

Protecting the internal database .................................................... 109

Viewing Replication Manager from EMC ControlCenter.......... 112

Replication Manager log files......................................................... 116

Replication Manager event messages in Windows..................... 119

Setting the Administrator password

When you first log in to the Replication Manager Console with the default Administrator account, Replication Manager prompts for a new password.

Note:

If you are upgrading Replication Manager to the present version and have already registered a host, you will not be prompted for a new password if you are using the default Administrator account.

Setting the default Administrator password

To initially set the default Administrator password:

1. Open the Replication Manager Console. If you are logging in for the first time, enter Administrator as the username and enter any password. The Required Password Change dialog box appears, as shown in Figure 2 on page 72.

Figure 2  Required Password Change dialog box

2. Enter a password and confirm the password. Restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).

3. Click OK.

You must have at least one user that is defined with the ERM Administrator role.

Changing the default Administrator password

Note: In a Replication Manager Server disaster recovery environment, the Administrator password that you set on the primary Replication Manager Server will automatically become the Administrator password on the secondary Replication Manager Server.

You can change the default Administrator’s password by editing the user properties of that Administrator account.

To change the default Administrator’s password:

1. Open the Replication Manager Console. If you are logging in for the first time, enter Administrator as the username and enter any password.

2. Click Users in the tree panel. The Administrator user appears in the content panel.

3. Right-click the Administrator user, and select Properties. The User Properties dialog box appears.

4. Enter a password and confirm the password. Restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).

5. Click OK to complete the operation.

You might also consider creating a new account with ERM Administrator privileges and deleting the Administrator account to increase the security of the system.

Managing user accounts

ERM Administrators authorize others to use Replication Manager by creating user accounts.

Adding a user

To add a user:

1. Right-click Users from the tree and select New User. The New User dialog box appears, as shown in Figure 3 on page 74.

Figure 3  New User dialog box

2. Specify the following information:

• Username

• Password, Confirm Password

Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).

• Role

• User Description

• User Access

3. Click Grant and select the objects to which this user will have access. Amount of access depends on the user’s role.

4. Click Create to create the new user.

Modifying a user account

Users specified as ERM Administrators can modify all user accounts.

Each user can change his or her own password.

To modify a user account:

1. Right-click the username in the content panel, and select Properties.

2. Make the appropriate changes to the user account. An ERM Administrator can change any of the following:

• Username

• Password

• User Description

• Role

Changing a user’s role can restrict access to features that they currently control (including storage, users, hosts, particular jobs, activities, and replicas). Use caution when changing user roles. Make sure at least one Database Administrator or ERM Administrator has access to each existing job, activity, and replica.

• Access control list

3. Click OK to complete the operation.

Deleting a user account

Users specified as ERM Administrators can delete user accounts.

When you delete a user who has the Database Administrator role, you restrict access to the existing entities (such as jobs, application sets, and replicas) that the user currently controls. Make sure there is at least one Database Administrator or ERM Administrator who has access to each entity.

To delete a user account:

1. Right-click the username from the content panel, and select Delete.

2. Verify that you want to delete the user.

Managing hosts

Registering a host

ERM Administrators must create a connection between the Replication Manager Agent software on each host and the Replication Manager Server. This is referred to as registering a host.

ERM Administrators with root or administrator access to a client have the privileges necessary to register a host with a server. When you register a host, you can also specify which users have access to the host. Users granted access to the host can replicate from or mount replicas to that host.

To register a host:

1. Right-click Hosts from the tree and select New Host. The Register New Host dialog box appears, as shown in Figure 4 on page 76.

Figure 4  Register New Host dialog box

2. Enter the following information in the fields provided:

Host Name — Enter a hostname or IP address. In a cluster environment, enter the IP address or network name of the resource group (virtual node).

Servers collect host names by querying a DNS server. This means that DNS services must be set up and working on the server.

You must enter the name or IP address of a host on which the Replication Manager Agent software is installed and running.

Port — Enter the client control port number for communications with the server. The client control port is the first of the two port numbers that were entered when the agent software was installed.

If you do not remember the host’s control port number, run the following:

On UNIX, run:

/opt/emc/rm/client/bin/rc.irclient parameters

On Windows, open a command prompt window, access the client\bin folder under the location where Replication Manager was installed, and run:

irccd -s

This command displays the parameter information, including the control port number.

Enable RecoverPoint — Check this box to enable RecoverPoint support for the host. Enter the RPA site management IP address or hostname.

• Enable VMware — To enable VMware support for the host, click Enable VMware and complete the following fields:

Note:

Not required for configurations using only RDM.

– Specify the name or IP address of the machine that hosts the VirtualCenter software in the VirtualCenter host name field.

Note: If a non-default port is to be used to communicate with the VirtualCenter, then specify the port number in the following format: <Name/IP address>:<port number>

– Specify the Username Replication Manager should use to log in to VirtualCenter.

– Specify the Password Replication Manager should use to log in to VirtualCenter.

– Decide whether to require users to enter these credentials again whenever they create an application set for a VMFS.

3. Click the Log Settings tab and complete the following fields:

• Enter a value for Maximum Directory Size. The value must be larger than the current Maximum File Size value.

• Enter a value for the Maximum File Size in bytes. The range is 100 KB to 200 MB.

• Accept the default Logging Level of Debug.

4. Click the User Access tab to modify the host access control list. If you do not specify an access control list, the host will be accessible to ERM Administrators only. If you assign access controls, the ERM Administrator must define which users have access to that host through Replication Manager.

5. Click OK to create the host entry. At this point, Replication Manager establishes and tests communication with the host. Therefore, the host must have the Replication Manager Agent software installed and running on it for the host connection to be created successfully.

Defining user access to hosts

The User Access tab lets you grant or deny user access to the host.

To grant a user access to the host you are creating or modifying:

1. Click the User Access tab in the Register New Host dialog box.

2. Click Grant. The Grant User Access dialog box appears, as shown in Figure 5 on page 79.

Figure 5  Grant User Access dialog box

3. Select the box next to the username. To deny a user access to the host, clear the checkbox. Users with the ERM Administrator role can never be denied access to a host. Their names appear with a checkbox that is automatically selected. You cannot clear the checkbox for an ERM Administrator.

Table 14 on page 79 shows who can access which hosts.

Table 14  User access

ERM Administrator: Can access all hosts previously registered in Replication Manager. (ERM Administrators can register any host to which they have root or Administrative access.)

Power DBA: Can access the database environment and other information system applications. This role has all the privileges of the DBA role, but also has the rights necessary to configure storage pools. This role controls the operation of one or more database applications and is often accountable for those databases.

Database Administrator: Has all the privileges of the Power User as well as the ability to restore and promote selected replicas to the production host.

Power User: Has all the privileges of the Operator role, as well as the ability to perform the following tasks:
• Configure application sets.
• Configure jobs for each application set.
• Schedule each replication job.
• Mount replicas to the production or alternate host.
• Unmount replicas.
• Delete replicas.

Operator: Runs the Replication Manager software on a day-to-day basis and monitors jobs and replicas to ensure that the system runs smoothly. Operators perform the following tasks:
• View status of jobs that are currently running.
• Run selected replication jobs.
• Schedule existing replication jobs to run at certain times.
• View properties.
• Operators run jobs created by the ERM Administrator, the various Database Administrator roles, and power users.

If an ERM Administrator denies a certain user access to a host, the administrator may remove that user’s ability to mount or back up data to that host.

Modifying a host

To modify a host:

1. Make sure the client daemon or client process is running on the host before you attempt to modify the host.

2. Click Hosts in the tree panel.

3. Right-click the host in the content panel, and select Properties.

4. Make the appropriate modifications to the host.

5. Click OK to complete the operation.

Viewing host properties
To view the properties of the hosts that you have added:

1. Right-click the specific host under Hosts and select Properties. The Host Properties dialog box appears, as shown in Figure 6 on page 81.

Figure 6 Host Properties dialog box

The Host Properties dialog box displays the following information about the host:

Name or IP address of the host

Port number

Date on which the host was registered

Whether the host is connected through SSL

Date and time of the last update

In addition, the name and version of all Replication Manager-dependent software that is currently installed on the host is displayed in the Software Properties table:

Property — Displays the name of the specific software application that is installed on the host

Value — Displays the version of the software application

Updating host software properties

All software properties of the host are initially populated when you register the host with Replication Manager. If you select the checkbox Update properties on failure, Replication Manager automatically updates the properties when a job fails. If you change your configuration (for example, if you install a more recent version of EMC Solutions Enabler), you should manually run an update to allow Replication Manager to refresh the host software version information that is displayed in the Host Properties dialog box.

To update host software properties:

1. Right-click the specific host from Hosts and select Properties.

2. On the General tab, click Update.

After the Update Properties progress dialog box completes, the latest information for all installed software appears in the Software Properties table. You can view more information about the most recent update by clicking Details.

Viewing host software update history

Replication Manager allows you to view a history of the software updates that have been performed for a particular host. To view the software update history of a host:

1. Right-click the host from Hosts and select Properties.

2. On the General tab, click History. The Host Software Version History dialog box displays the name and version of all Replication Manager-dependent software applications that are installed on the host, organized by the date and time on which the update was performed.

The Notes column provides information about how each host software application was affected by a specific update:

Initial value — The software version is the original version that was set during host registration

Differs from previous update — The software version has changed since the previous update

No change — The software version has not changed since the previous update

By default, the Host Software Version History provides information about the previous 31 software updates for the host. You can change this value by editing the EMC_ERM_RECORDS_IN_HISTORY environment variable on the Replication Manager Server.
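For example, one way to change this value is to set the variable system-wide from an elevated command prompt on the Replication Manager Server; this is only a sketch, the value 60 is an arbitrary illustration, and the Replication Manager Server service generally needs to be restarted before a new environment variable value takes effect:

> setx EMC_ERM_RECORDS_IN_HISTORY 60 /M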

Deleting a host

Make sure the client daemon or client process is running on the host before you attempt to delete the host.

You cannot delete a host that is associated with any application sets.

To delete a host, first delete any application sets that rely on that host.

These conditions of an application set can prevent a host from being deleted:

The information in the application set is associated with a specific production host.

The application set has jobs that can mount replicas of that application set to the host.

The application set has jobs that can back up using a backup application on that host.

To delete a host:

1. Right-click the host in the content panel, and select Delete.

2. Verify that you want to delete the host.

This process unregisters the host, making it unknown to Replication Manager. The client software continues to run on the host until you stop and remove it manually. Once you delete a connection to a host, you can reconnect the host to the same Replication Manager Server by following the steps in “Registering a host” on page 76.

For security reasons, only one Replication Manager Server can connect to a host at a time. Once the connection is established, the host accepts commands from that one server only. To change which Replication Manager Server a host takes commands from, access the server that is currently attached to the host and select Delete. Next, access the new server and register the host with that new server.

If you delete the last host that has access to a particular Symmetrix array, you should also remove the array from Replication Manager. This is important if the Symmetrix array has LDLs enabled. If you do not remove the array, Replication Manager will not be able to unlock its devices, and manual cleanup will be required.

Managing general server options

ERM Administrators can set certain server options, such as the logging level of the messages that are captured to the logs.

To manage general server options:

1. Right-click the ERM Server and select Properties.

2. To change the username or password of the account that Replication Manager uses to create a task in the Windows scheduler, use the Schedule User, Password, and Domain fields on the General tab.

3. On the Log Settings tab, choose a logging level, Normal or Debug. Table 18 on page 118 provides information on these level settings:

• Enter the maximum log directory size. The default is 80 MB.

• Enter the maximum size of each log file. The default is 2 MB.

4. Click OK to save the server options you have selected.

Note: Replication Manager logs certain event messages to the Windows Application Event Log.

Publishing management events

This section describes publishing of Replication Manager management events.

Overview

The Replication Manager Server can publish CIM-compliant management events, allowing the monitoring of Replication Manager activities by management tools such as Microsoft Operations Manager (MOM) and Microsoft System Center Operations Manager (SCOM).

Replication Manager events have the source string EMC_ProcessEvent. Use this string when creating Replication Manager rules in MOM or SCOM.

For support of management event publishing, the MOM or SCOM server and Replication Manager Server must be in the same domain, or a trust relationship must be created between the domains in which the servers are located.

Enabling management event publishing

To enable or disable management event publishing:

1. Right-click the ERM Server and select Properties.

2. Click the General tab.

3. Select the Publish Management (CIM) Events checkbox to enable the option; clear to disable.

4. Click OK.

The Management (CIM) Events contain information about events published to the event log, including:

Importance attribute — One of the three importance levels (Information/Warning/Error).

Source attribute — Name of the component that originated the error (ECCRMServer).

Data attribute — An XML representation of the original event including the category, source, time generated, and the information that Replication Manager logged.

For a list of events that can be published, refer to Table 19 on page 120.

Managing server security options

ERM Administrators can also set security options for the Replication Manager Server, which allow them to:

Audit their operations by logging certain Replication Manager activities to the Windows Application Event log.

Display a pop-up message banner upon login.

Configuring security events

To configure the activities that Replication Manager logs to the Windows Application Event log:

1. Right-click the ERM Server and select Properties.

2. Click the Security tab, shown in Figure 7 on page 88. In the Security Events box, choose one or more of the following Replication Manager activity types that you want to log to the Windows Application Event log:

• Account administration

• Object management

• Replica management

• Security

• User login

Figure 7 Server Properties Security tab

3. Click OK to save the server options you have selected.

Creating a logon banner

To create a pop-up message banner that displays upon login:

1. Right-click the ERM Server and select Properties.

2. Click the Security tab. In the Logon Banner box, shown in Figure 8 on page 89, type the message that you want to display upon successful login.

Figure 8  Login Banner text box

3. Click OK to save the server options you have selected. The message you typed into the Logon Banner field appears in a pop-up message each time you log in to the Replication Manager Server.

Working with licenses

This section describes how licensing works in Replication Manager.

License types

A license is needed for:

Replication Manager Server

Only one server license is needed. The license applies only to the system to which it has been added.

The server license is for Replication Manager Server functionality only. If you also install Replication Manager Agent software on the Replication Manager Server machine, a Replication Manager Agent license is also needed.

Replication Manager Agent

This license allows you to run Replication Manager Agent on any OS platform and with any replication technology that Replication Manager supports.

Replication Manager Virtual Proxy Host

This license allows you to run Replication Manager Agent software on a virtual proxy host, and also supports replication of data that resides in supported virtual environments. An example is a proxy host used for VMFS operations. Not required for RDM, MS iSCSI discovered, or VMware virtual disks. One Replication Manager Virtual Proxy Host license is needed per proxy host.

License status

Each license has a status, either Free or In Use. The Replication Manager Server license is always In Use. A Replication Manager Agent or Replication Manager Virtual Proxy Host license, on the other hand, can change from Free to In Use and back to Free again.

Initially, the status of a Replication Manager Agent license is Free and the license is not associated with any host. You can view license status from the Replication Manager Console under the Licenses tab of Server Properties.

When a license status changes to In Use

The status of a Replication Manager Agent or Replication Manager Virtual Proxy Host license changes from Free to In Use, and is associated with a specific host, when you perform one of the following actions for the first time on the host:

Create a job. In this case, a license is used by the local host. Creating a job on an Exchange CCR cluster requires a Replication Manager Agent license for the active physical node and the passive physical node, but not for the Clustered Mailbox Server (CMS).

Specify a mount host other than the local host. In this case, the mount host uses a license.

Perform an on-demand mount to a host. The mount host uses a license.

A host uses no more than one Replication Manager Agent license at a time. If a host is already using a Replication Manager Agent license, additional licenses are not needed when jobs are created, or when replicas are mounted to the host.

Freeing a Replication Manager Agent license

A Replication Manager Agent license becomes Free when you do all of the following:

Delete all jobs from a host.

Unmount all replicas from the host.

Grace period

If no license is found, Replication Manager operates with full functionality for the first 90 days after installation. When you run the Replication Manager Console during this time, a pop-up reminds you how many days remain in the grace period.

After the grace period, if no valid Replication Manager Server license is found, host registration and job creation are disabled. No jobs will run. Restores are permitted, but you cannot mount replicas. At this point you should use the Replication Manager Console to add licenses (in the Replication Manager Console, under Server Properties, Licenses tab).

When a Replication Manager Server license is applied, the grace period ends, no matter what day of the grace period you are on.

Using host licenses from an earlier version

No action is required to use host licenses from earlier 5.x versions of Replication Manager. Note the following on how Replication Manager handles existing host licenses:

A host license from an earlier release (specifying OS and replication technology type) will be counted as a Replication Manager Agent license.

A host (whether logical or physical) now uses only one Replication Manager Agent license, regardless of the number of licenses it used in the previous licensing scheme. For example, a host that used both TimeFinder® and SAN Copy formerly required two licenses. In the new scheme the host uses only one license.

All existing host licenses, which formerly had technology/OS-specific names (for example, SnapView on Windows), are now displayed as Replication Manager Agent in the Replication Manager Console.

License management

The Replication Manager Server stores and manages all licenses (Server, Agent, and Virtual Proxy Host). You can add licenses during Replication Manager Server installation, or later with the Console (under Server Properties > Licenses tab). The Console serves as the user interface for adding licenses and viewing license status. Refer to “Using the Licenses tab” on page 93.

License requirements when a replica is mounted

A Replication Manager Agent license is required for the production host and the mount host. If the production host and mount host are the same machine, the same license is shared.

License warnings

Table 15 on page 93 summarizes limits to functionality that you may encounter due to a lack of required licenses.

Table 15  Limitations to functionality due to insufficient licenses

Limitation: Cannot register a host.
Reason: Beyond end of grace period.
Remedy: Add a server license.

Limitation: Cannot mount a replica. Cannot create a job. Cannot run a job. Scheduled jobs do not run.
Reason: Beyond end of grace period, or insufficient free licenses.
Remedy: Free up an Agent license by unmounting all replicas or deleting all jobs from some other host. Or add an Agent license.

Limitation: Job for federated application set fails.
Reason: Beyond end of grace period, or insufficient free licenses for one or more hosts in the federated application set.
Remedy: Free up an Agent license by unmounting all replicas or deleting all jobs from a host that is not part of the federated application set, or add an Agent license.

Using the Licenses tab

Use the Licenses tab under Server Properties to add new licenses, and view information about your licenses.

The Licenses tab displays the following information for each Replication Manager license:

License Key — Unique license ID.

License Type — Refer to “License types” on page 90.

License Status — Total number of licenses installed for this license type, the number in use, and the number that are free.

Expiration Date — Some licenses are valid for a limited time. If the license has an expiration date, it is listed here. Otherwise, the value is Permanent.

License File Location — The location where Replication Manager stores the license file, which is the server\license folder where the server was installed. This information is provided for customer service use.

Note:

All license files are renamed to rm_licensennnnnnnnnn.lic when copied to the server\license folder.

Adding a license

To add a license:

1. Right-click the Replication Manager Server and select Properties.

2. Click Licenses.

3. Click the link Click here to add a license.

4. Enter the name of the license file (including the path) and click OK, or click Browse to browse to the license file. The filename must end in .lic.

5. Exit the Replication Manager Console.

6. Stop and restart the Replication Manager Server service.

7. Open the Replication Manager Console.

Note:

You can add new licenses but you cannot remove licenses.

Setting the client security mode

You initially set either secure client mode or standard client mode when you install Replication Manager Agent software, as described in “Installing the Replication Manager Agent software” on page 57.

Standard client mode is the default mode. It does not require a user to specify a username and password for pre/postreplication scripts, mount scripts, or backup scripts. Standard client mode logs in to the system as root or Local System and allows any user with appropriate Replication Manager permissions to perform secure operations with that client.

Secure client mode requires users to enter valid user credentials in order to perform secure operations with that client.

Changing to secure client mode

This section describes how to change the client mode setting.

UNIX

To stop and restart a UNIX client in secure client mode:

1. On a Replication Manager client, open a terminal window.

2. Enter the following commands at the system prompt:

# rc.irclient stop

# rc.irclient parameters -S SECURE -s

# rc.irclient start

Windows

To start a Windows client in secure client mode:

1. Right-click My Computer and select Manage.

2. Select Services and Applications and open the Services folder.

3. Right-click Replication Manager Client and select Stop.

4. Open a command prompt window.

5. Enter the following command to reset the client service to secure mode:

> irccd -S SECURE -s

6. Right-click Replication Manager Client and select Start.

7. Close the command prompt window.

Changing to standard client mode

UNIX

To stop and restart a UNIX client in standard client mode:

1. On a Replication Manager client, open a terminal window.

2. Enter the following commands at the system prompt:

# rc.irclient stop

# rc.irclient parameters -S UNSECURED -s

# rc.irclient start

Windows

To start a Windows client in standard client mode (unsecured mode):

1. Right-click My Computer and select Manage.

2. Select Services and Applications and open the Services folder.

3. Right-click Replication Manager Client and select Stop.

4. Open a command prompt window.

5. Enter the following command to reset the client service to unsecured mode:

> irccd -S UNSECURED -s

6. Right-click Replication Manager Client and select Start.

Managing replica storage

ERM Administrators can specify the storage where Replication Manager stores replicas.

ERM Administrators can:

Define how Replication Manager runs scripts for users.

Discover storage arrays that are attached to Replication Manager hosts.

Configure storage arrays by discovering available storage devices within those arrays and adding storage for Replication Manager to use.

View storage that has been added for Replication Manager to use.

Put storage devices into pools and grant user access to those pools.

Managing Celerra storage

Replication Manager manages Celerra storage differently than other storage arrays. As such, the procedures in this section that describe how to discover arrays, add and remove storage, and create storage pools do not apply to Celerra network servers. Instead, you must perform the following configuration steps to make Celerra storage available to Replication Manager:

Create an application set with objects that reside on Celerra storage, and create a job using this application set.

When you run the job, Replication Manager automatically discovers the target LUNs.

For iSCSI connections, Replication Manager adds the IQNs it used to connect to the iSCSI target LUNs to the Storage Services folder in the Console tree.

For Celerra NFS local connections, the Celerra creates a new filesystem on the same Celerra.

For Celerra NFS remote, the Celerra Replicator session to the remote filesystem must already exist.

See Chapter 6, ”Celerra Array Setup,” for more information.

Discovering storage arrays

First set up the arrays so that they are visible to the Replication Manager client. For information on how to make storage arrays visible to Replication Manager clients, see the appropriate array setup chapter in this guide.

When your host is first registered, Replication Manager discovers all the storage arrays connected to that host. To discover additional storage arrays on a specific host:

1. Click Hosts in the tree panel. A list of registered hosts appears in the content panel.

2. Right-click the hostname in the content panel and select Discover Arrays. A window appears with a progress bar to track discovery of the arrays that are visible to this host. Replication Manager does not automatically discover arrays.

The system adds newly discovered arrays to the list of arrays that will appear in the Add Storage wizard.

3. Click Close.

Discovered arrays will not appear under the Storage Services node until you add storage from those arrays for Replication Manager to use.

Specifying credentials for a new RecoverPoint RPA

RecoverPoint RPA credentials must be entered in order for Replication Manager to communicate with the RPA. To specify credentials for a new RecoverPoint RPA, run the Add Storage wizard, described in “Adding storage” on page 99.

Adding storage

Once arrays have been discovered, you can add storage.

CAUTION
Make sure that you are not sharing the storage devices with other applications on the same storage array. If you choose to add storage that is under the control of other applications, Replication Manager might overwrite data previously written to the mirror by another application.

1. Right-click Storage Services and choose Add Storage from the context menu. The Add Storage Wizard appears. Click Next on the welcome screen of the wizard.

2. The next panel shows a list of storage arrays that are visible from all the registered hosts. Storage arrays from which storage has not previously been added (or, in the case of CLARiiON arrays and RecoverPoint RPAs, that have not had credentials entered in Replication Manager) appear with a question mark (?) over their icon.

3. Select the checkbox next to the storage arrays from which you want to add storage.

Note: Adding Symmetrix storage may be affected by long duration locks. If locks have been set on certain storage, you may not be able to use that storage. Long duration locks are described in Chapter 5, ”Symmetrix Array Setup.”

4. If Replication Manager already has a list of storage for any of the selected arrays, a pop-up box asks if you want to update the list of devices on existing storage arrays. Click Yes to continue with the storage discovery. If you are configuring a Symmetrix, skip to step 6.

5. For a CLARiiON or Invista array, enter the username and password of the array and select the appropriate port from the drop-down list. For a RecoverPoint RPA, enter the RPA username and password.

If you select a RecoverPoint RPA for a configuration that uses CLARiiON storage, you must also select the associated CLARiiON array in order to provide CLARiiON array credentials.

6. Click Next. Replication Manager starts the process of adding storage.

In the case of RecoverPoint RPA, entering credentials is the last step. There is no step for adding storage.

7. The tree lists devices grouped under certain categories. For example in a Symmetrix all the BCVs, STD, and VDEV devices are listed in separate groups. Select the checkboxes next to each device that you want to add.

You can click a device at the top of a range, press and hold Shift and click the device at the bottom of the continuous range. Click Next to add the selected devices.

8. If you want Replication Manager to perform a discovery of all array resident software (and versions) after storage discovery completes, select Run array software discovery after Add Storage. Array resident software discovery occurs automatically during initial storage array discovery.

9. Review your selections and click Finish to save your changes.

Once you select at least one device in an array, the storage array appears in the Storage Services part of the Tree Panel.

Viewing storage

To view configured storage arrays and the storage devices that you have added:

1. Expand Storage Services. The tree lists each kind of storage array as follows:

Symmetrix Storage Arrays — Listed using their Symmetrix serial number. Symmetrix storage arrays are discovered automatically based on which hosts have been registered. If a host can connect to the Symmetrix array, it is added to the Replication Manager Storage tree.

Invista Instances — Listed using the serial number of the Invista instance. Replication Manager can discover Invista instances if they are zoned and connected to a registered host.

CLARiiON Storage Arrays — Listed using the CLARiiON serial number for the CLARiiON array. Replication Manager can discover CLARiiON storage arrays if they are zoned and connected to a registered host.

Celerra Storage Arrays — (Celerra iSCSI) Arrays are listed using the iSCSI target name of the Celerra array. (Celerra NFS) Arrays are listed using a hostname or IP address. All storage in the Celerra is automatically discovered and added when you discover the storage array.

RecoverPoint — Listed by the fully-qualified DNS name or IP address of the RPA.

For more information on how to prepare these arrays and services for use with Replication Manager, refer to later chapters devoted to these subjects.

Storage services are listed as shown in Figure 9 on page 101.

Figure 9 Storage services

2. Select a storage service on the tree. The content panel displays the following information about each device in the array:

Device — Device name

State — State of the device (for example, In Use, Not In Use, Excluded)

Size — Size of the device in GB

Type — Type of storage device (for example, Local Clone, Local Snap, and so on)

Visible to Hosts — List of hosts that can see the device

Capabilities — Replication technologies in which this device is used

Viewing storage properties

To view the properties of the storage arrays that you have added:

1. Right-click the storage array under Storage Services and select Properties. The Storage Properties dialog box appears, as shown in Figure 10 on page 102.

Figure 10  Storage Properties dialog box

The Storage Properties dialog box displays the following information about your storage array:

Name

Model

User-defined description

State (Ready or Not Ready)

Date that storage was last discovered (Symmetrix and CLARiiON)

Date of the last refresh

Host that was used to perform the latest refresh

Details of all the properties are displayed in the Extended Properties table:

Property — Name of the property.

Value — Value of the property. For Symmetrix standard devices, the value is calculated as the sum of unprotected devices + RAID5 + RAID6 + two-way mirror + four-way mirror devices.

Description — Provides more specific information about the property and its purpose.

Refreshing storage properties on demand

All properties of the array are initially populated when you add the array to Replication Manager. If you change your configuration (for example, if you install a more recent version of SAN Copy software), you should refresh to allow Replication Manager to display the latest property information in the Array Properties dialog box.

To refresh array properties:

1. Right-click the array from Storage Services and select Properties.

2. On the General tab, select the host that you want to refresh the information for. Click the hostname next to Refresh using host: at the bottom left of the dialog box. This can be any host running Replication Manager 5.2.1 (or later) software, and does not have to be visible to the array whose properties you want to refresh.

3. Click Refresh.

After the Refresh progress dialog box completes, the latest information for all properties appears in the Extended Properties table. You can view more information about the most recent update by clicking Details.

Note: If Replication Manager is unable to get the value for a specific property during a refresh, the Value column for that property will display Refresh Failed. If this occurs, try refreshing again.

Viewing storage refresh history

Replication Manager allows you to view a history of extended property refresh operations that have been performed for a particular storage array. To view the refresh history of an array:

1. Right-click the array from Storage Services and select Properties.

2. On the General tab, click History. The array Refresh History dialog box displays the host that performed the refresh and the results, organized by date and time. The Notes column provides information about how each property was affected by a specific refresh:

Initial value — The original value that was set during initial array software discovery

Differs from previous refresh — The value was changed for the property during this refresh

No change — No change was made to this property during this refresh

Removing storage

To remove storage so that it cannot be used by Replication Manager, first choose what storage devices you want to remove. You can remove storage devices at any of the following granularity levels:

All storage in your environment

All storage in one or more storage arrays

One or more devices

Removing storage releases a Symmetrix long duration lock. “Long Duration Locks” on page 203 provides more information.

To remove storage:

1. Expand the Storage Services folder to view the storage arrays.

2. Select an array. The content panel displays a list of the included devices.

3. Select the devices to remove:

• To select multiple arrays, hold down the Ctrl key while clicking.

• To select multiple storage devices, do one of the following:

– Press and hold the Ctrl key while clicking each of a set of noncontiguous storage nodes to select a noncontiguous list of devices within those nodes.

or

– Click the first storage node of a range of nodes; then hold the Shift key; then click the last node in a contiguous range of nodes to select them all.

4. Right-click and select Remove Storage.

Creating storage pools

A storage pool is a group of devices that you define to be used by a job or similar jobs. You can name storage pools, add storage devices to them, and set user access to the storage pool.

If you create storage pools, the Database Administrator can specify which storage pool to use when creating a job. If a storage pool is specified, the job can only use storage from that pool.

If the Database Administrator decides not to specify a pool when creating a job, the job will use an available local clone for CLARiiON and an available local BCV for Symmetrix.

Note:

Storage pools are not supported for Celerra storage or RecoverPoint.

To create a storage pool:

1. Right-click Storage Pools and select New Storage Pool. The New Pool dialog box appears.

2. Enter a name and description for the pool.

3. Click the Storage tab and click Add. The Add Storage to Pool dialog box appears, as shown in Figure 11 on page 106.

Figure 11  Add Storage to Pool dialog box

4. Expand an array and choose devices to add to the pool.

There are no restrictions on the types of devices you can add to a pool. It is possible to assign the same storage to multiple storage pools. You can avoid unpredictable results if all storage pools containing the same storage have the same permissions.

For full SAN Copy, assign the following devices to the storage pool:

• Devices on the target CLARiiON array that are to be used as SAN Copy targets. (Do not include devices from more than one CLARiiON array.)

• Intermediate devices that Replication Manager uses for the temporary copies in a Symmetrix-to-CLARiiON SAN Copy configuration. “Devices for intermediate copy” on page 194 provides more information.

For incremental SAN Copy, assign these devices:

• Devices on the target CLARiiON array that are to be used as SAN Copy targets. (Do not include devices from more than one CLARiiON array.)

• Snap cache devices on the target array.

Click OK to close the Add Storage to Pool dialog box.

5. If you want to limit access to the storage pool, click the User Access tab, then click Grant to select a user from the Grant User Access dialog box. Click OK on the Grant User Access dialog box to save user access selections.

You can assign the same storage to multiple storage pools. You can avoid unpredictable results if all storage pools containing the same storage have the same permissions.

6. On the New Pool dialog box, click Create to create the pool.

Modifying storage pools

To modify an existing storage pool:

1. Right-click the storage pool you want to modify. (Only the storage pools to which you have been granted access are listed.)

2. Select Properties.

3. Make the appropriate changes in the storage pool.

4. Click OK to save your changes.

Deleting storage pools

To delete a storage pool:

1. Right-click the storage pool you want to delete.

2. Select Delete.

3. Verify that you want to delete the storage pool.

Do not delete a storage pool that is currently used in a job. You must delete or modify the job first.

Removing a storage service

When you remove a storage service, Replication Manager will remove it from the Replication Manager Console and will not use any of the storage for replicas. You cannot remove a storage service if its devices are In Use.

To remove an array:

1. Expand Storage Services to view all the storage arrays.

2. Right-click the specific array and select Remove Array.

3. Click OK to confirm.

If you delete the last host that has access to a particular Symmetrix array, you should also remove the array from Replication Manager. This is important if the Symmetrix array has LDLs enabled. If you do not remove the array, Replication Manager will not be able to unlock its devices, and manual cleanup will be required.

Set SAN policy on Windows Server 2008 Standard Edition

On Windows Server 2008, the SAN policy determines whether a disk comes online or offline when it is surfaced on the system. For Enterprise Edition systems the default policy is offline, but on Standard Edition the default policy is online. You need to set the policy to offlineshared in order to prevent mount failures.

To set the SAN policy to offlineshared on a Windows Server 2008 Standard Edition host, open a command line window and run the following commands:

C:\>diskpart

Microsoft DiskPart version 6.0.6001

Copyright (C) 1999-2007 Microsoft Corporation.

On computer: abcxyz

DISKPART> san policy=offlineshared

DiskPart successfully changed the SAN policy for the current operating system.

Protecting the internal database

The Replication Manager internal database holds information about the replicas, jobs, users, clients, and other details that are needed for the application to run and maintain its existing state.

Deploying the internal database directories

Replication Manager internal database files are installed in a single file system, and it is the administrator’s responsibility to protect these files. To protect the internal database, you should locate the serverdb directory on protected storage devices (for example, on mirroring or RAID storage).

The default location of the serverdb directory is \Program Files\EMC\rm. Figure 12 on page 109 illustrates its structure.

Figure 12  Structure of internal database directory

serverdb
    data (internal database files)
    log (log files)
    backups (copies of internal database files and recent logs)
    store (replica catalog metadata)

If protected storage devices are not available, you can give some protection against loss by redeploying the serverdb\data, log, backups, and store directories onto different devices as follows:

1. Stop the Replication Manager Server service: Run the Replication Manager Services Wizard (server\bin\erm_service_gui.exe), select Replication Manager Server, and then select Stop and complete the wizard.

2. Create symbolic links from backups, data, log, and store to the new locations (see the example after step 4).

3. Move the data to these new locations.

4. Start the Replication Manager Server service: Run the Replication Manager Services Wizard (server\bin\erm_service_gui.exe), select Replication Manager Server, and then select Start and complete the wizard.
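For example, on a default installation, relocating the data directory in steps 2 and 3 might look like the following sketch. The D:\rmdb path is a placeholder for your own protected volume, one workable ordering is to move each directory first and then link it in place, and the same pattern is repeated for log, backups, and store:

C:\> mkdir D:\rmdb
C:\> cd "C:\Program Files\EMC\rm\serverdb"
C:\> move data D:\rmdb\data
C:\> mklink /D data D:\rmdb\data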

Backing up and restoring the internal database

Replication Manager backs up the internal database to the backups directory once a day, provided the database is live. The restore script erm_restore restores backups from the backups directory, if necessary.

To back up the internal database, run a regular tape backup of the serverdb\backups directory. If the backups directory is damaged, you can restore from tape to the backups directory, and then restore the internal database.

CAUTION
The store subdirectory is not backed up or restored by any Replication Manager utility. Use your backup software to explicitly schedule a backup of the store subdirectory, and to perform a restore, if necessary.
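If you also want an extra on-disk copy of these directories between tape backups, one simple option on Windows Server 2008 is a scheduled copy such as the following sketch. This does not replace your regular backup software, and the E:\rm_protect destination is a placeholder only:

> robocopy "C:\Program Files\EMC\rm\serverdb\backups" E:\rm_protect\backups /E
> robocopy "C:\Program Files\EMC\rm\serverdb\store" E:\rm_protect\store /E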

To restore the internal database:

1. Stop the Replication Manager Server service: Run the Replication Manager Services Wizard (server\bin\erm_service_gui.exe), select Replication Manager Server, and then select Stop and complete the wizard.

2. Open a command window and run the following commands:

> cd c:\program files\emc\rm\server\bin

> erm_restore

This script copies the original (damaged) database to the backup directory, then copies the backed up copy of the database to the live location in serverdb\data. If the server database fails to restart, run:

erm_restore -nologs

With the -nologs option, the script deletes the log files in the serverdb\log directory before copying the backups to the live location. As a result, the restored database is the database as of the last backup.

CAUTION
If you use the -nologs option, the database represents the state at the time of the last backup. All transactions after the last backup are lost.

3. Start the Replication Manager Server service: Run the Replication Manager Services Wizard (server\bin\erm_service_gui.exe), select Replication Manager Server, and then select Start and complete the wizard.

Changing the internal database password

Replication Manager accesses its internal database in a password-controlled manner. Replication Manager includes a default password and no change to the password is needed for proper operation. If you need to change the password for security or other reasons:

1. Stop the Replication Manager Server service: Run the Replication Manager Services Wizard (server\bin\erm_service_gui.exe), select Replication Manager Server, and then select Stop and complete the wizard.

2. Run the following command in server\bin from the installation location on the Replication Manager Server:

ird -P newpassword -s

Do not use any of the following characters in newpassword:

# (number sign), | (vertical line), ' (apostrophe), " (double quote)

3. Restart the Replication Manager Server service. Run the Replication Manager Services Wizard, select Replication Manager Server, and then select Start and complete the wizard.

Viewing Replication Manager from EMC ControlCenter

Replication Manager integrates closely with the EMC ControlCenter Console. You can use EMC ControlCenter to see the devices that are available to Replication Manager and the devices Replication Manager is using to store each replica.

To monitor Replication Manager and a Symmetrix array using EMC ControlCenter, a physical connection is required between the Replication Manager Server and each Symmetrix array involved in the replication. However, a physical connection is not a requirement of Replication Manager.

Free storage devices

When you include storage to make it available to Replication Manager in Symmetrix or CLARiiON arrays, that storage is added to a special group. The name of this group depends on the type of storage array you are using:

In Symmetrix storage arrays, the free storage is added to a device group named ERM_AVAILABLE_nnnn, where nnnn equals the last four digits of the Symmetrix ID.

Device groups cannot span Symmetrix arrays; therefore, if you include storage from more than one Symmetrix array, Replication Manager creates an available device group for each Symmetrix array that has storage in use by Replication Manager. If storage is excluded from Replication Manager, the excluded storage devices are removed from the ERM_AVAILABLE_nnnn device group.

EMC ControlCenter polls the Symmetrix array every 15 minutes to discover what device groups exist and what devices have been included in those groups.

Only devices excluded from use by Replication Manager or those currently being used to store a replica are excluded from the ERM_AVAILABLE_nnnn device group.

In CLARiiON arrays, Replication Manager requires a storage group with the reserved name EMC Replication Storage. This storage group contains all disks (whether in use or not) that can be used by Replication Manager to create replicas. Each array must have a storage group with the name EMC Replication Storage for the disks that reside in that array and that are available to Replication Manager.

Viewing Symmetrix device groups

EMC ControlCenter displays information about the devices in the device groups that Replication Manager uses.

To view Replication Manager device groups from the EMC ControlCenter Console:

1. Start the EMC ControlCenter Console.

2. Click Monitoring on the button bar at the top of the screen.

3. Expand the Device Groups folder to see all the device groups on a Symmetrix array.

4. Scan the list for device groups that start with ERM_.

5. Click the device group name to see information about that group.

Viewing CLARiiON storage groups

To view Replication Manager storage groups from the EMC ControlCenter Console:

1. Start the EMC ControlCenter Console.

2. Click Monitoring on the button bar at the top of the screen.

3. Expand the Storage Systems folder.

4. Expand a CLARiiON array.

5. Expand the Storage Groups folder.

6. Scan the list for the EMC RM storage group.

7. Click the storage group name to see more information.

EMC storage and storage-based groups

Symmetrix device groups

You can organize the storage that resides in Symmetrix storage arrays into device groups and storage that resides in CLARiiON storage arrays into storage groups.

In the Symmetrix array, device groups are sets of storage devices or drives that can be monitored using a single device group name. For example, a BCV device must be associated with a device group in SYMAPI before it can be paired with a standard device in the same group. Using EMC ControlCenter, you can view these device groups and see what devices reside in each.

Replication Manager uses device groups to help you keep track of the storage devices it uses. If you have installed EMC ControlCenter, you can use it to determine which:


BCVs are included for use by Replication Manager.

BCVs are part of each replica.

Note: If you do not want Replication Manager to create device groups, you can set the environment variable ERM_DEVICE_GROUP_DISABLE to TRUE. On Solaris, set the value in rc.irclient; on Windows, set it as a system-wide variable. When this environment variable is set, Replication Manager does not create device groups as described in this section.
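As an illustration only (where the line goes in rc.irclient depends on your installation, so treat these as sketches rather than exact procedures), the variable could be set with standard shell syntax on Solaris and with the setx utility on Windows:

ERM_DEVICE_GROUP_DISABLE=TRUE; export ERM_DEVICE_GROUP_DISABLE     (line added to rc.irclient on Solaris)

setx ERM_DEVICE_GROUP_DISABLE TRUE /M     (run from an elevated command prompt on Windows to set a system-wide variable)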

CLARiiON storage groups

In the CLARiiON storage array, storage groups organize the drives into sets that can be used for Replication Manager or for other purposes. This concept is similar to device groups except that disks can reside in more than one storage group at a time.

Replica-in-progress device group

(Symmetrix only)

When you start a replication operation in the Symmetrix array, Replication Manager adds the production and mirror devices to a special device group called ERM_jobname_INPROGRES_nnnn, where jobname equals the first 12 alphabetical characters of the job name and nnnn equals the last four digits of the Symmetrix ID.

This in-progress device group contains both the standard production volumes and the mirror volumes for each replica. It exists only while Replication Manager is creating the replica (during synchronization and up to the split).

If you check the in-progress device group in Replication Manager, you can get information about how long it will take for the tracks to synchronize. In-progress device groups are not created during a restore.

Replica device group

(Symmetrix only)

If storage devices are used in a replica, they are removed from the ERM_AVAILABLE_nnnn device group and added to another special device group called ERM_jobname_MMdd-hhmm_nnnn, where jobname equals the first 12 alphabetical characters of the job name, MMdd-hhmm is the date and time when the job was created, and nnnn is the last four digits of the Symmetrix ID.

For example, if you create a replica on a Symmetrix array with an ID ending in 4356, using a job named payroll image, at 2:00 A.M. on June 18th, the devices used to create that replica are first removed from ERM_AVAILABLE_4356 and added to the in-progress pool for this replica (while the replica is being created). Once the replica is complete, the mirror devices are added to a device group named ERM_payroll_imag_0618-0200_4356 and removed from the in-progress device group ERM_payroll_imag_INPROGRES_4356.
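If EMC Solutions Enabler is installed on a host with access to the array, these device groups can also be listed from the command line. A minimal sketch, using the hypothetical group name from the example above:

symdg list

symdg show ERM_payroll_imag_0618-0200_4356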


Replication Manager log files

Replication Manager creates three types of log files:

Installation

Server and client

Internal database

Installation log files

Installation information, errors, and warnings are logged in the following files:

IRInstall.log (UNIX only) is created in the /tmp directory of the system where Replication Manager is installed. This is the only installation log file for UNIX clients.

rm_setup.log (Windows only) is created in the installing user's TEMP directory. This is the main log file on Windows.

rm_install.log (Windows only) is created in the installing user's TEMP directory only when command line installation ("silent" install) is used. It contains output from InstallShield's silent mode.

Server and client log files

Message types

Replication Manager creates log files for the server and client components. There are three log files for each component. The content of the logs depends on the log file type and on the logging level that you set.

Replication Manager generates messages as described in Table 16 on page 116.

Table 16    Message types

Message type                  Description
Console messages              The same messages that appear in the Console. They provide users with information about the state of the system.
Level 1, 2, and 3 messages    Increasingly detailed messages, useful for troubleshooting problems. Level 1 is least detailed; level 3 is most.


Log file types

Table 17 on page 117 lists the types of log files and the messages that they contain.

Table 17    Contents of log files

Log file type    Messages in log file
_summary         Console
_detail          Console, Level 1
_debug           Console, Level 1, 2 (if Logging Level is set to Normal)
                 Console, Level 1, 2, and 3 (if Logging Level is set to Debug)

Log file names and locations

The default paths and sample filenames for server and client logs are listed below:

Note: On 64-bit Windows, the location begins at Program Files (x86).

_summary logs:

/opt/emc/rm/logs/server/erm_server2004_05_06_13_56_00_summary.log

/opt/emc/rm/logs/client/erm_client2004_05_06_13_56_00_summary.log

C:\Program Files\EMC\RM\logs\server\erm_server2004_05_06_13_56_00_summary.log

C:\Program Files\EMC\RM\logs\client\erm_client2004_05_06_13_56_00_summary.log

_detail logs:

/opt/emc/rm/logs/server/erm_server2004_05_06_13_56_00_detail.log

/opt/emc/rm/logs/client/erm_client2004_05_06_13_56_00_detail.log

C:\Program Files\EMC\RM\logs\server\erm_server2004_05_06_13_56_00_detail.log

C:\Program Files\EMC\RM\logs\client\erm_client2004_05_06_13_56_00_detail.log

_debug logs:

/opt/emc/rm/logs/server/erm_server2004_05_06_13_56_00_debug.log

/opt/emc/rm/logs/client/erm_client2004_05_06_13_56_00_debug.log

C:\Program Files\EMC\RM\logs\server\erm_server2004_05_06_13_56_00_debug.log

C:\Program Files\EMC\RM\logs\client\erm_client2004_05_06_13_56_00_debug.log

The filename pattern for the logs is: erm_componentDateTime_logType.log

where component is either server or client, DateTime is the date and time when the log was started, and logType is one of summary, detail, or debug.


Replication Manager Server internal database logs

Logs associated with the Replication Manager Server's internal database are located in the base_dir/serverdb/log directory.

Setting the logging level and log file size

To set Replication Manager logging level and the maximum storage allocated to the logs:

1. In the Replication Manager Console, right-click the Replication Manager Server and select Properties.

2. Click the Log Settings tab.

3. Set the logging level, Normal or Debug. Table 18 on page 118 describes the effect of this option on the contents of the log files.

Table 18    Effect of logging level setting

Log file type    Setting of logging level option    Messages in log file
_summary         Normal                             Console
_summary         Debug                              Console
_detail          Normal                             Console, Level 1
_detail          Debug                              Console, Level 1
_debug           Normal                             Console, Level 1 and 2
_debug           Debug                              Console, Level 1, 2, and 3

• Enter a value in number of bytes for Maximum Directory Size. The value must be larger than the current value of Maximum File Size, up to 2 GB. Replication Manager always retains at least two sets of the three log files (_summary, _detail, and _debug), regardless of the Maximum Directory Size.

• Enter a value for the Maximum File Size in bytes. The range is 100 KB to 200 MB.

4. Click OK to save the server options you have selected.
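As an illustration of the byte values these fields expect (hypothetical numbers chosen within the documented ranges), entering 10485760 for Maximum File Size allows individual log files of roughly 10 MB, and entering 104857600 for Maximum Directory Size caps the log directory at roughly 100 MB.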


Replication Manager event messages in Windows

On Windows, the Replication Manager Server logs certain event messages to the Windows Application Event log and to the Replication Manager Server log files in the rm\logs\server directory.

The Replication Manager Server logs are circular, self-cycling log files. The files do not fill up, but rather are recycled to keep the most recent activity.

Viewing event messages

Using Windows Event Viewer, you can gather information about Replication Manager problems by tracing the events that Replication Manager records to the Application log.

To open the Windows Event Viewer and view the Application log:

1. Select Start > Settings > Control Panel > Administrative Tools > Event Viewer.

2. From the Event Viewer console tree, select Application Log. Look for any events in which the source is listed as ECCRMServer.
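As an alternative to browsing in Event Viewer, on Windows Server 2008 and later you can also query the Application log from a command prompt with the built-in wevtutil utility. A minimal sketch (the count of 10 is arbitrary):

wevtutil qe Application /q:"*[System[Provider[@Name='ECCRMServer']]]" /f:text /rd:true /c:10

This returns the ten most recent events whose source is ECCRMServer in readable text form.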

Replication Manager logs the following types of events to the Event Viewer's Application log:

Error — A significant problem, such as loss of data or loss of functionality. For example, if an on-demand mount fails, an Error event will be logged.

Information — An event that describes a successful operation. For example, when a replication completes successfully, an Information event will be logged.


Table 19 on page 120 lists and describes all Replication Manager events that are logged in the Windows Event Viewer.

Table 19    Replication Manager Windows Event Viewer messages

Event ID  Event name                       Type         Description
02145     UnconfigureArrayWarning          Warning      Unable to unlock storage for array array. You may need to unlock storage manually.
5003      ECCRMOpenRegKeyFailed            ERROR        Error occurred while opening the registry key key.
5004      ECCRMCreateRegKeyFailed          ERROR        Error occurred while creating/opening the registry key key.
5005      ECCRMQueryValueFailed            ERROR        Error occurred while querying the value of value from the registry key key.
5006      ECCRMWriteConfigSettingsFailed   ERROR        Error occurred while writing the configuration settings to the ird.data file or to the Windows registry.
5007      ECCRMReadConfigSettingsFailed    ERROR        Error occurred while reading the configuration settings from the ird.data file or from the Windows registry.
5008      ECCRMDecryptPasswdFailed         ERROR        Error occurred while decrypting the password for the solid database. (The solid database is the Replication Manager internal database.)
5009      ECCRMEncryptPasswdFailed         ERROR        Error occurred while encrypting the password for the solid database.
5011      ECCRMErmServiceStarted           Information  Successfully started the servicename service.
5012      ECCRMErmServiceStopped           Information  servicename service stopped.
5013      ECCRMErmClientListEmpty          ERROR        Replication Manager could not generate a list of available hosts for display.
5018      EndShowStorage                   Information  End of storage discovery.
5020      EndReplication                   Information  Replication of application set: app_set / job: job successfully completed at time time.
5022      EndMount                         Information  Mount of the replica created at time from application set app_set, successfully completed at time date time.
5024      EndUnmount                       Information  Unmount of the replica created at time from application set app_set, successfully completed at time time from host hostname.
5026      ExpireComplete                   Information  Expire finished.
5027      BeginCDPUnmount                  Information  Another CDP replica is mounted on this appset. Attempting to unmount it.
5030      EndRestore                       Information  Restore of the replica created at time from application set app_set, successfully completed at time time.
5032      EndRestore                       Information  APIT restore of user specified time time from application set app_set, successfully completed at time time.
5036      BeginReplication                 Information  Starting Replication of [application set: app_set / job: job] at time time.
5037      BeginShowStorage                 Information  Starting storage discovery.
5038      BeginMount                       Information  Starting mount of the replica created at time from application set app_set, at time time on host host.
5039      BeginUnmount                     Information  Starting unmount of the replica created at time from application set app_set, at time time on host hostname.
5040      ExpiringCheckpointY              Information  Deleting replica replica_time from date date of application set app_set.
5041      BeginRestore                     Information  Starting Restore of the replica created at time from application set app_set, at time time.
5043      BeginAPITRestore                 Information  Starting APIT restore for user specified time time, from application set app_set, at time time.
5045      BeginESXMount                    Information  Starting mount of the replica created at time from application set app_set, at time time on ESX server using proxy host by user.
5046      ECCRMCheckpointError             ERROR        Check the Replication Manager server and client logs for more information.
5051      ECCRMStartupCheckReboot          Information  Replication Manager has attempted to repair some disk signature problems and will initiate a reboot of the system.
5052      ECCRMRepairedSignature           Information  Replication Manager has changed the signature for a to x from y.
5053      ECCRMDidnotRepairSignature       ERROR        Replication Manager could not set the signature for a to y, leaving it as x.
5054      ECCRMStartupCheckNoReboot        Warning      Replication Manager will not reboot the system after n attempts. The system may need to be manually rebooted.
5061      BeginAPITMount                   Information  Starting APIT mount for user specified time time, for application set app_set, at time time on host hostname.
5062      EndAPITMount                     Information  APIT mount of user specified time time, successfully completed at time time.
5063      BeginCopyReplication             Information  Starting Copy Replication [application set: app_set / job: job] at time time.
5064      BeginShowArrays                  Information  Starting array discovery.
5065      EndShowArrays                    Information  End of array discovery.
5067      LicensePassGracePeriod           ERROR        The license file on hostname is invalid or not found. You are past the 30-day evaluation period. See your sales representative for further information.
5068      MountFailedEvent                 ERROR        The mount of replica from application set app_set created at time to host hostname failed.
5069      UnmountFailedEvent               ERROR        The unmount of replica from application set app_set created at time from host hostname failed.
5070      RestoreFailedEvent               ERROR        The restore of replica from application set app_set created at time to host hostname failed.
5071      FailedLogin                      ERROR        Login for user username failed.
5072      Replication_Failed               Information  Replication failed.
5073      SuccessfulLogin                  Information  Login for user username succeeded.
5074      NewUser                          Information  New user username created by username.
5075      ModifyUser                       Information  User username modified by username.
5076      DeleteUser                       Information  User username deleted by username.
5077      ReplicaCreated                   Information  Replica replica_time created from application set app_set, job job by username.
5078      ReplicaDeleted                   Information  Replica replica_time deleted from application set app_set by username.
5079      ReplicaMounted                   Information  Replica replica_time mounted from application set app_set to host hostname by username.
5080      ReplicaUnMounted                 Information  Replica replica_time unmounted from application set app_set from hostname by username.
5081      ReplicaRestored                  Information  Replica restored replica_time from application set app_set by username.
5082      NewAppSet                        Information  New application set app_set created by username.
5083      ModifyAppSet                     Information  Application Set app_set modified by username.
5084      DeleteAppSet                     Information  Application Set app_set deleted by username.
5085      NewJob                           Information  New job job in application set app_set created by username.
5086      ModifyJob                        Information  Job job from application set app_set modified by username.
5087      DeleteJob                        Information  Job jobname from application set app_set deleted by username.
5088      AuditConfiguration               Information  Audit configuration modified by username.
5089      DRServersNotConnected            Warning      The DR Primary Server is unable to connect to the Secondary Server. Make sure the Secondary RM Server is running.
5090      FederatedBeginReplication        Information  Beginning replication for federated application set app_set.
5091      FEDERATED_ENDREPLICATION         Information  Replication for all hosts in federated application set app_set has completed. Performing any post-replication tasks now.
5092      ReplicaMountedESX                Information  Replica time mounted from application set app_set to ESX server using proxy host by user.


4

CLARiiON Array Setup

This chapter describes how to prepare the CLARiiON storage array to work with Replication Manager. It covers the following subjects:

CLARiiON setup checklist.............................................................. 128

Supported CLARiiON storage arrays........................................... 131

CLARiiON hardware and software configurations.................... 132

iSCSI configuration for CLARiiON arrays ................................... 135

CLARiiON RAID groups and LUNs............................................. 141

Avoiding inadvertent overwrites .................................................. 142

Setting up clients to use CLARiiON storage arrays.................... 143

Configuring CLARiiON alternate MSCS cluster mount ............ 146

Using CLARiiON protected restore .............................................. 148

CLARiiON snapshots ...................................................................... 149

Planning CLARiiON drives for clone replications...................... 156

Planning for temporary snapshots of clones................................ 158

Support for CLARiiON thin LUNs ............................................... 159

Managing CLARiiON storage........................................................ 160

LUN surfacing issues....................................................................... 164

Configuring CLARiiON arrays for SAN Copy replications...... 166

Special configuration steps for Linux clients ............................... 174

Steps to perform after an array upgrade ...................................... 178

Cleaning up CLARiiON resources affecting the LUN limit ...... 181

Requirements for MirrorView support ......................................... 182

Reducing storage discovery time with a claravoid file .............. 183


CLARiiON setup checklist

To set up the CLARiiON array for use with Replication Manager, you must:

Verify that your environment has the minimum required storage hardware and that the hardware has a standard CLARiiON configuration. "Minimum hardware and connectivity required by CLARiiON" on page 132 provides additional information.

Confirm that your Replication Manager hosts are connected to the CLARiiON environment through a LAN connection.

Ensure Fibre or iSCSI connectivity to Replication Manager Agent hosts. The clients must be able to access all storage arrays they are using and the mount hosts must be able to access all storage arrays from which replica LUNs may be mounted.

Virtual machines can reside on a physical RDM or a virtual disk on a datastore on the CLARiiON.

Install all necessary software on each Replication Manager client, server, and mount host. Also install the appropriate firmware and software on the CLARiiON array itself. "CLARiiON hardware and software configurations" on page 132 provides additional information.

If no administrator account exists on the CLARiiON array, create a new global level user account and give the account privileges as an administrator. Replication Manager can use this account to access and manipulate the CLARiiON as necessary. "Create account with administrator privileges" on page 143 provides additional information.

On Solaris hosts, verify that there are enough entries in the sd.conf file to support all dynamic mounts of replica LUNs. "Update the sd.conf entries for dynamic mounts" on page 143 provides additional information.

On an AIX host, use PowerPath pseudo devices (for example, rhdiskpower4) when creating a volume group. Do not use native devices in a PPVM volume group.

On an AIX host, update the ODM definitions specific to your AIX release. You can download the ODM definitions from the following location: ftp://ftp.EMC.com/pub/elab/aix/ODM_DEFINITIONS/


Install Replication Manager Agent software on each host that has an application with data from which you plan to create replicas or perform mounts. Chapter 2, "Installation," provides more information.

Update the agent.config file on each client where Replication Manager is installed to include a line of the form user system@ip_address, where ip_address is the IP address of a storage processor (SP). You should add a line for each SP in each CLARiiON array that you are using. "Update the Navisphere host agent configuration file" on page 144 provides additional information.

Verify that the network ports between the network switch and the CLARiiON SPs are set to auto-negotiate.

Verify that the CLARiiON connectivity records' Failover mode in Navisphere is set correctly and Array CommPath is enabled. "Verify connectivity record settings" on page 144 provides additional information.

For HP-UX hosts: Verify that the Initiator Type is set to HP No Auto Trespass.

Cluster nodes in an HP Serviceguard or HACMP cluster must be in the same storage group.

If you want to use the clone instant restore functionality, enable Allow Protected Restore on the CLARiiON array.

Set up snapshot cache. "CLARiiON snapshots" on page 149 provides additional information.

Verify that you have Clone Private LUNs set up on your CLARiiON storage array. These are not required if you are only using snapshots. The CLARiiON documentation provides more information about how to set these up.

Ensure that CLARiiON clone group names used in conjunction with Replication Manager do not contain a colon. CLARiiON clone groups with a colon in the name cause mount operations with those clone groups to fail.

Create a mount storage group for each mount host. Make sure that the mount storage group contains at least one LUN, and that this LUN is visible to the mount host (ESX mount hosts excepted). If no LUNs are visible to the Replication Manager mount host, Replication Manager will not operate because it will not be aware of the relationship between the mount host and the CLARiiON. Replication Manager does not actually perform any operations on this LUN, so it can serve another purpose in addition to existence in the mount storage group. A reboot may be needed after this step.

For clones and SAN Copy (not for snapshots): Create one or more storage groups with names that start with "EMC Replication Storage" and populate them with free LUNs that you created in advance for Replication Manager to use for storing replicas. "Creating EMC Replication Storage storage groups" on page 162 provides additional information; a command-line sketch also follows this checklist.

For SAN Copy support, create a storage group called EMC RM SANCopy SG. "Configuring CLARiiON arrays for SAN Copy replications" on page 166 provides additional information.

For information on configuring your CLARiiON storage to work with Replication Manager and VMware, refer to "Replicating a VMware VMFS" on page 289 or "Replicating a VMware virtual disk" on page 304.

Start the Replication Manager Console and connect to your Replication Manager Server. You must perform the following steps:

• Register all Replication Manager hosts.

• Run Discover Arrays from each host.

• Run Discover Storage for each discovered array.

The EMC Replication Manager Product Guide provides more information about the operation of the processes listed above.
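As referenced in the checklist above, the replica storage groups can also be created with Navisphere Secure CLI rather than Navisphere Manager. A minimal sketch, assuming naviseccli is installed and that the SP address and credentials shown are placeholders for your environment:

naviseccli -h <SP_address> -user <username> -password <password> -scope 0 storagegroup -create -gname "EMC Replication Storage"

The same command with -gname "EMC RM SANCopy SG" would create the SAN Copy storage group described above.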


Supported CLARiiON storage arrays

The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com is the authoritative list of supported arrays.

You can configure supported arrays to perform the following tasks in conjunction with Replication Manager:

Create replicas using CLARiiON SnapView and SAN Copy software.

Mount those replicas to the same or alternate hosts.

Restore information from those replicas.

Be sure to follow all required installation and configuration steps listed in the documentation for these arrays.


CLARiiON hardware and software configurations

The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com provides specific information about supported applications, operating systems, high-availability environments, volume managers, and other supported software.

CLARiiON firmware requirements

CLARiiON storage arrays require Navisphere Manager firmware in order to work reliably with Replication Manager. The EMC Replication Manager Support Matrix provides the minimum required FLARE® release versions.

Minimum hardware and connectivity required by CLARiiON

The following storage and connectivity hardware components are required for any implementation of Replication Manager in a CLARiiON environment:

One or more supported CLARiiON storage arrays

Replication Manager Server

Replication Manager Agent software (on hosts)

Local area network (LAN) connectivity between the components listed above

Connectivity, which can be either of the following:

• Fibre connectivity between the Replication Manager hosts, with the agent software (IRCCD) installed, and the CLARiiON storage arrays—direct connect or through switched fibre connections.

or

• iSCSI connectivity between the Replication Manager hosts, with the agent software (IRCCD) installed, and the CLARiiON storage arrays.


LUN sizing

Replication Manager operates with LUNs that are less than 2 TB in all supported operating systems. In selected Windows environments, Replication Manager can support LUNs that are greater than 2 TB. Refer to the E-Lab Interoperability Navigator on Powerlink.EMC.com for more information.

Minimum software required by CLARiiON

The following software must be installed on the CLARiiON storage array before you can run Replication Manager. The EMC Replication Manager Support Matrix provides the minimum required versions:

SnapView enabler

EMC Access Logix enabler

SAN Copy enabler (required if creating SAN Copy replications)

MirrorView/A or MirrorView/S enabler (required if creating snaps or clones of MirrorView secondary devices)

Virtual provisioning enabler (required if thin LUNs)

Be sure to follow all required installation and configuration steps listed in the documentation for these products.

Minimum software required in CLARiiON environments

The following software must be installed on all Replication Manager hosts (production and mount) in order to run in a CLARiiON environment:

Solutions Enabler (In a UNIX environment, the multithreaded version must be installed.)

Note: When installing Solutions Enabler on Windows Server 2003 or 2008, do not install VSS Provider, VDS Provider, or iVDS Provider.

Navisphere Agent

Navisphere Secure CLI

Replication Manager does not use the security file that you have the option to create during Navisphere Secure CLI installation.

On Windows clients, install in C:\Program Files\EMC\Navisphere CLI. On 64-bit Windows systems, these locations begin at Program Files (x86).

Not required on VMware virtual hosts.

EMC Admsnap


On Windows clients, install in C:\Program Files\EMC\Navisphere Admsnap.

Microsoft iSCSI Initiator (if connecting to CLARiiON through iSCSI)

PowerPath is highly recommended, but note the following exceptions:

• Do not use PowerPath if failover mode 4, ALUA, is set.

• PowerPath should not be installed on VMware virtual machines unless iSCSI connectivity with the iSCSI initiator is used on the VM.

• For Hyper-V, the Hyper-V server should have PowerPath, but not the children (unless iSCSI is used).

If you have more than one path to the CLARiiON devices, you must install PowerPath to manage those multiple paths. Install the minimum version specified in the EMC Replication Manager Support Matrix. PowerPath on Windows must be installed in the default location, C:\Program Files\EMC\PowerPath.

Microsoft .NET Framework Version 2.0 (if using the Replication Manager Config Checker). The .NET Framework is included in the Replication Manager distribution and is also downloadable from www.microsoft.com/downloads.

Be sure to follow all required installation and configuration steps listed in the documentation for these products.

Multiple Replication Manager Servers per CLARiiON array

Configurations with multiple Replication Manager Servers per CLARiiON array may be supported by RPQ.


iSCSI configuration for CLARiiON arrays

Replication Manager provides support for iSCSI-based CLARiiON arrays on Windows. The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com lists supported models and FLARE versions.

CLARiiON iSCSI prerequisites

Before a server is able to initiate server I/O to the iSCSI storage system, it must be configured as follows:

You have installed one of the following interface cards and relevant drivers:

• Supported iSCSI HBA cards that have a driver and configuration tool (for example, QLogic).

• Gigabit Ethernet network interface cards (NICs) running Microsoft software that provides HBA functionality.

Note: Replication Manager supports 10 Mb, 100 Mb, and 1000 Mb (Gigabit) Ethernet interfaces, but the storage system only supports 1000 Mb for iSCSI connectivity. The 10 Mb and 100 Mb are sufficient for connections to the CLARiiON SPs. If your NIC does not run GigE, then you need to connect to the storage system using a GigE router or switch.

You have cabled the storage system properly.

You have installed PowerPath software on the hosts for multipathing. The EMC PowerPath Product Guide provides installation information.

You have set the network parameters and security for the SP management ports on the storage system.

Detailed information on configuring iSCSI servers and storage systems for I/O is provided in the setup guide that shipped with the storage system.

Using CHAP security

The iSCSI interface uses Challenge Handshake Authentication Protocol (CHAP) to protect the storage system's iSCSI ports from unwanted access. CHAP is optional, but if your storage system might be accessed from a public IP network, we strongly recommend that you use CHAP security.


CHAP is a method for authenticating iSCSI users (initiators and targets). The iSCSI storage system can use CHAP to authenticate server initiators and initiators can authenticate targets such as the storage system. To use CHAP security, you must configure CHAP credentials for the storage-system iSCSI ports and any servers that will access the storage-system data.

Note: If you will be using CHAP security, we strongly recommend that you configure it on both the storage system and the server before initiating server I/O.

Detailed information on initializing an iSCSI storage system and setting up iSCSI characteristics such as iSCSI HBAs, iSCSI ports, and CHAP authentication is provided in the Navisphere Manager documentation.

Installing and configuring the iSCSI initiator

To connect to the iSCSI targets on a CLARiiON, you must install and configure the iSCSI initiator on your Replication Manager production and mount host:

1. Download and install the Microsoft iSCSI Initiator if necessary.

2. Discover target portals on the initiator.

3. (Optional) Enable the initiator to authenticate iSCSI targets by configuring the initiator with a CHAP secret for reverse authentication.

4. Register the initiator with the Windows Registry.

5. Log in to the iSCSI target and connect to the CLARiiON array’s iSCSI portal.

This section covers the basic configuration tasks you must perform so that your Replication Manager hosts can connect to a CLARiiON storage array through iSCSI. For detailed configuration information, refer to the Microsoft iSCSI Software Initiator User Guide that was installed with the Microsoft iSCSI initiator, and the Navisphere documentation.

Install the Microsoft iSCSI initiator

EMC CLARiiON iSCSI currently supports the Microsoft iSCSI Software Initiator that is available as a free download from the Microsoft website.


To install the Microsoft iSCSI Software Initiator:

1. Download the latest Microsoft iSCSI Software Initiator from: www.microsoft.com

To locate the latest initiator, go to www.microsoft.com and search for iSCSI initiator. The EMC Replication Manager Support Matrix on EMC Powerlink provides the currently supported versions. The following procedures are for Microsoft iSCSI Software Initiator Version 2.0.

2. Run the initiator executable. The Microsoft iSCSI Initiator wizard screen appears.

3. Click Next. The Installation Options screen appears. Accept the default installation options.

4. Click Next. The License Agreement screen appears.

5. Select I agree and click Next. The initiator is installed.

6. At the Installation Complete screen, click Finish.

Discover target portals on the initiator

Before an initiator can establish a session with a target, the initiator must first discover where targets are located and the names of the targets available to it. The initiator obtains this information through the process of iSCSI discovery. You must manually configure the initiator with a target’s network portal, and then the initiator uses the portal to discover all the targets accessible from that portal.

Note: More information on iSCSI Discovery is provided in the Navisphere Manager documentation.

You can add multiple network portals for multiple iSCSI targets. To add a network portal for manual discovery:

1. On the iSCSI Initiator, click the Discovery tab. The Target Portals properties sheet appears.

2. Click Add. The Add Target Portal dialog box appears.

3. Enter the IP address of the target’s network portal. If the target uses something other than the default iSCSI port of 3260, enter the port number in the Socket field.


Note: To ensure that the network is available, you should ping the target's IP address before configuring it in the iSCSI initiator. If the CLARiiON array is not available, or if you have entered an invalid IP address, you will receive a Connection Failed error.

4. (Optional) If you want to use forward CHAP authentication (where the target challenges the initiator), click Advanced. The Advanced Settings dialog box appears.

5. (Optional) Do the following to enter the CHAP secret:

a. Select CHAP logon information.

b. In the Target secret field, enter the secret that you configured for the iSCSI target. Microsoft only supports CHAP secrets that are 12 to 16 characters long.

c. If you also want the initiator to authenticate the target for iSCSI discovery, select Perform mutual authentication. The Navisphere Manager documentation provides information on configuring reverse authentication on the initiator.

d. Click OK. You return to the Add Target Portal dialog box.

6. Click OK. The network portal is added to the Available portals list.

Register the Initiator with the Windows Registry

You must perform the following procedure so that the initiator’s iSCSI Qualified Name (IQN) is written to the Windows Registry. You must complete this procedure for each iSCSI initiator that will connect to the CLARiiON array:

1. Open the Microsoft iSCSI Initiator.

2. On the General tab, click Change but do not change the initiator’s IQN.

Note: EMC recommends that you do not change the initiator's IQN. IQNs must be globally unique and adhere to IQN naming rules.

3. Click OK to write the existing IQN to the Windows Registry.


Log in to iSCSI target

After you configure the initiator with the target’s network portal, a list of available targets appears on the initiator’s Targets properties page. To access the target’s LUNs, the initiator must log in to the target.

Note: Make sure that the Replication Manager production host and mount host are logged in to the same target.

To log in to a target:

1. On the iSCSI Initiator, click the Targets tab. The list of available targets appears.

2. Select the target you want to connect to and click Log On. The Log On to Target dialog box appears.

3. (Optional) If the iSCSI target requires CHAP authentication, click Advanced. The Advanced Settings dialog box appears.

Note: Information on enabling CHAP authentication is available in the Navisphere Manager documentation.

4. (Optional) Do the following to enter the CHAP secret:

a. Select CHAP logon information.

b. In the Target secret field, enter the secret that you configured for the iSCSI target. Microsoft only supports CHAP secrets that are 12 to 16 characters long.

c. If you also want the initiator to authenticate the target, select Perform mutual authentication. The Navisphere Manager documentation provides information on configuring reverse authentication on the initiator.

d. Click OK. You return to the Log On to Target dialog box.

5. Do the following:

a. Select Automatically restore this connection when the system boots.

b. If PowerPath is installed, select Enable multi-path.

c. Click OK. The initiator connects to the target.

6. Click the Targets tab. Select the target and click Details. The Target Properties screen appears.


7. Click the Devices tab. This screen shows the storage devices to which the initiator is connected.


CLARiiON RAID groups and LUNs

On a CLARiiON array, a RAID group is a set of disks on which you can bind one or more logical units (LUNs). A logical unit is a portion of a RAID group that is made available to the client as a logical disk.

RAID groups can contain RAID 0, 0+1, 1, 3, 5, 6, and DISK types, all of which are supported by Replication Manager.

Replication Manager works at the LUN level. To create a replica of a certain LUN using clones or SAN Copy, Replication Manager must have access to another LUN of exactly the same logical size. However, the LUN that is the target for the replica does not have to have the same RAID type as the source LUN.

If data is spread across multiple LUNs, Replication Manager must have access to enough free LUNs of the same size in order to create the replica.


Avoiding inadvertent overwrites

If you use Replication Manager to create a replica of one set of data that shares a LUN with some other data, the replica will contain all the data on that LUN. During restore, you may unintentionally write older data over newer data. The entities that are overwritten inadvertently are called affected entities. It is important to configure your data so that these affected-entity problems are reduced or eliminated.


Setting up clients to use CLARiiON storage arrays

You need to install the software described next on the client machines used for Replication Manager and configure the system to recognize the CLARiiON storage arrays to which you plan to connect.

Install software on the client

Install the following software on each Replication Manager client:

EMC Solutions Enabler (multithreaded)

EMC CLARiiON Navisphere Agent

EMC Navisphere Secure CLI (not required on VMware virtual hosts)

EMC Admsnap

Be sure to follow all required installation and configuration steps listed in the documentation for these products.

Create account with administrator privileges

If no administrator account exists on the CLARiiON array, create a new user account on the CLARiiON array and give the new account privileges as an administrator. Replication Manager can use this account to access and manipulate the CLARiiON array as necessary:

1. Open Navisphere Manager and select Security > User Management.

2. Create a new user or modify an existing user to an administrator role. Use this user's credentials when you configure the array in Replication Manager.

Update the sd.conf entries for dynamic mounts

On Solaris hosts, make sure the sd.conf file has enough entries to support all dynamic mounts of replica LUNs:

Refer to the documentation for your host bus adapter for more information about how to modify your sd.conf files.

Restart each client after making any changes to the sd.conf file.
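For reference only (the exact entry format depends on your HBA driver, so check its documentation before editing the file), additional sd.conf entries typically take a form similar to:

name="sd" class="scsi" target=2 lun=1;

One such entry is needed for each additional target/LUN combination that may be surfaced during a dynamic mount.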


Update the Navisphere host agent configuration file

Install Navisphere Host Agent and modify the agent.config file on each client machine that will host Replication Manager software. The agent.config file should include a line for each SP in each array with which the client machine will interact. There are always two SPs per array.

The line in the agent.config file should have the following format:

user system@ip_address

where ip_address is the IP address of each SP.

A sample agent.config file is shown in Figure 13 on page 144.

LastSave 3213203117
clarDescr Mount Unix Host
clarContact Joe Awos x93629
device auto auto
user [email protected]
user [email protected]
user [email protected]
user [email protected]
user [email protected]
poll 60
eventlog 100
baud 19200

Figure 13    Sample agent.config file

Verify connectivity record settings

For each connectivity record on the CLARiiON storage system, use Navisphere to verify that the:

ArrayCommPath setting is enabled.

Failover mode value is set according to the settings listed in Table 20 on page 145.


Table 20    Failover mode values

Failover method used                    Setting for failover mode
PowerPath                               1 (3 on AIX)
VxVM with DMP                           2
Neither                                 0
No PowerPath and FLARE 26 or later      4 (ALUA)

Note: If you are using PowerPath on a machine that also uses VERITAS Dynamic Multipathing (DMP) Version 4.0 or older, you may want to consider disabling DMP. For the procedure to disable DMP, refer to your VERITAS documentation.

Set SAN Policy on Windows Server 2008 Standard Edition

On Windows Server 2008, the SAN policy determines whether a disk comes in online or offline when it is surfaced on the system. For Enterprise Edition systems the default policy is offline, but on Standard Edition the default policy is online. You need to set the policy to offlineshared in order to prevent mount failures.

To set the SAN policy to offline on a Windows Server 2008 Standard Edition host, open a command line window and run the following commands:

C:\>diskpart

Microsoft DiskPart version 6.0.6001

Copyright (C) 1999-2007 Microsoft Corporation.

On computer: abcxyz

DISKPART> san policy=offlineshared

DiskPart successfully changed the SAN policy for the current operating system.
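To confirm the change, you can display the policy currently in effect by entering the san command with no arguments at the same prompt (a quick check, not part of the documented procedure):

DISKPART> san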


Configuring CLARiiON alternate MSCS cluster mount

This section outlines the CLARiiON configuration required to perform alternate MSCS cluster mounts. The EMC Replication Manager Product Guide provides more information about the procedures involved in mounting to an alternate cluster.

When you configure a CLARiiON storage array for an alternate MSCS cluster mount, follow these guidelines.

To mount to a specific node of an alternate MSCS cluster in a CLARiiON environment, ensure that the mount host has visibility to only one node (usually the passive node) of the alternate cluster. This is accomplished by using Navisphere Manager to create an unshared storage group on the CLARiiON and populating it with the devices that contain the replica to be mounted as shown in Figure 14.

Figure 14    Mounting to an alternate cluster on CLARiiON (diagram: the production host in Cluster 1 has standard visibility through the EMC Replication Storage storage group, while the mount host has restricted visibility to a single node of Cluster 2 through an unshared Mount Node SG storage group)

When you mount to a cluster node that resides on a CLARiiON, Replication Manager checks to make sure that the mount host is only visible to one cluster node. If more than one node is visible, the mount fails.

Note: Advanced users may want to allow a mount in the situation where more than one node is visible to the mount host. This can be done but requires additional configuration described in "Mounting to alternate cluster while multiple nodes are visible" on page 147.

Mounting to alternate cluster while multiple nodes are visible

Mounting to an alternate cluster in a CLARiiON environment while multiple nodes are visible to the mount host is not a standard procedure and is not recommended unless it is performed by an advanced user. In some cases, this mount technique may be desirable.

To mount a replica to an alternate cluster when more than one node is visible to the mount host:

WARNING

Incorrectly modifying the registry may cause serious system-wide problems that may require you to reinstall your system. Use this tool at your own risk.

1. Run regedit.exe.

2. Create the following key as a DWORD entry:

\HKEY_LOCAL_MACHINE\Software\EMC\EMC ControlCenter Replication Manager\Client\<version>\CC_CLUSTERMOUNT_NOSGCHECK

where <version> is the Replication Manager version.

3. Add any non-zero, non-null value.

WARNING

Do not alter any of the other values stored in the registry.

4. Click OK to apply.
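Equivalently, a sketch of creating this value from an elevated command prompt with the built-in reg utility (substitute your installed Replication Manager version for <version>); the same registry caution applies:

reg add "HKEY_LOCAL_MACHINE\Software\EMC\EMC ControlCenter Replication Manager\Client\<version>" /v CC_CLUSTERMOUNT_NOSGCHECK /t REG_DWORD /d 1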


Using CLARiiON protected restore

When you restore a replica on a CLARiiON storage array that has protected restore enabled, that Replication Manager restore automatically takes advantage of the CLARiiON array’s protected restore feature. CLARiiON protected restore protects the data on a clone during a reverse synchronization (or restore) procedure by fracturing all other clones in the clone group in order to protect them from corruption should the reverse synchronization option fail.

In addition, since CLARiiON arrays with protected restore capability allow immediate writes to the source LUNs, while the reverse synchronization is taking place, the protected restore capability (if enabled) also prevents writes to the source LUN from being written to the clones that contain the replica while the reverse synchronization (restore) is happening. This maintains the integrity of the replica, but allows, in effect, an instant restore of the data on the replica.

To take advantage of CLARiiON protected restore in Replication Manager, you only need to enable it on the CLARiiON array. There is no setting to enable protected restore in Replication Manager.

Protected restore is a configurable feature of clones, not of snapshots. But the behavior you see when restoring a snapshot replica is like that of a protected restore for clones. Control is returned quickly back to the user while the restore happens in the background. The snapshot replica is protected during the background restore process. This is normal behavior for a snapshot replica; there is no setting on the CLARiiON array to enable or disable.


CLARiiON snapshots

Clones are complete copies of source LUNs, and can be an effective means of storing critical data for long durations. However, clones have some disadvantages, and may not be the best solutions for all replication types:

Source LUNs that are replicated but do not change significantly waste potential storage space when using clones since the entire source LUN is copied to the clone LUN.

Even though clones use incremental establish capabilities similar to Symmetrix BCVs, the initial establish time of a clone LUN to a source LUN can be quite lengthy.

Administrators must predefine clone LUNs to the exact size of the LUNs they will be replicating. Depending on the amount and size of replication objects, this could be a cumbersome process for the administrator.

CLARiiON SnapView provides another method of creating different types of copies of source LUNs on a CLARiiON array. The method is commonly referred to as snapshots. Unlike a clone, which is an actual copy of a LUN, a snapshot is a virtual point-in-time copy of a LUN. You determine the point in time when you start a snapshot session. The snapshot session keeps track of how the source LUN looks at a particular point in time. SnapView includes the ability to roll back, or restore, a SnapView session.

Snapshots consist of data that resides on the source LUN and on the snapshot cache. The data on the source LUN has not been modified since you started the session. The data on the snapshot cache consists of copies of the original source LUN data that have been modified since you started the session. The snapshot appears as a LUN to other hosts, but does not reside on a disk like a conventional LUN.

During a snapshot session, the production host is still able to write to the source LUN and modify data. When this occurs, the software stores a copy of the original data in the snapshot cache. This operation is referred to as copy on first write and occurs only once for each chunk of data that is modified on the source LUN.

As the session continues and additional I/O modifies data chunks on the source LUN, the amount of data stored in the snapshot cache grows. If needed, you can increase the size of the snapshot cache by adding more LUNs. The EMC SnapView Version 2.X for Navisphere 6.X Administrator's Guide describes how to allocate snapshot cache LUNs.

Note: An adequate number of snapshot cache LUNs is essential since the CLARiiON software terminates sessions if the snapshot cache runs out of space. "Monitoring the snapshot cache" on page 152 provides more information.

By using CLARiiON snapshots as an alternative, or a complement, to CLARiiON clones, Replication Manager can allow an administrator to define activities that better suit the needs of the customer.

Snaps versus clones

Depending on your application needs, you can create clones or snapshots. In general, you should use clones for large amounts of data that change frequently, and snaps for data that is more static.

Clones always require the same amount of disk space as the source LUN. (Thin LUNs as clones are an exception, as they do not always require the full amount.) Snapshots, on the other hand, only require enough snapshot cache space to support the data that has changed on the source LUN (typically 10 to 20 percent of the source LUN size, but will vary depending on how much data has changed).

Also, clones typically take longer to create than snaps. However, snapshots may have a bigger impact on performance due to the overhead of the CLARiiON Copy on First Write technology for snapshots.
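As a rough illustration of that guideline (hypothetical numbers, not a sizing rule): for a 500 GB source LUN, 10 to 20 percent works out to roughly 50 to 100 GB of snapshot cache, whereas a clone of the same LUN would always consume the full 500 GB.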

Configuring snapshots

Using Navisphere, the storage administrator needs to:

Decide which LUNs are snapshot candidates.

Verify that existing SnapCache space exists.

Dedicate enough free LUNs in the SnapCache to support the associated replications.

In Navisphere Manager, snapshot cache LUNs are shown as either allocated or free:

Allocated — The cache LUN has one or more snapshot sessions associated with a single source LUN.

Free — The cache LUN is not currently associated with any snapshot sessions for any LUN.

The EMC SnapView Version 2.X for Navisphere 6.X Administrator's Guide provides more information on setting up cache LUNs, and creating and managing snapshots.

Viewing snapshot information

Although Navisphere provides an intuitive interface for monitoring the snapshot information, access to Navisphere is usually limited to storage administrators. For this reason, Replication Manager also provides information on snapshots, active snapshot sessions, and the SnapCache for database administrators who may not have access to Navisphere. This allows them to track their snapshot replications.

Snapshot information is listed in the content panel when you select a CLARiiON array in the tree:

Snapshot cache.

Snapshot sessions that are associated with replications for the selected array are also displayed. Storage administrators use Navisphere to create snapshot sessions and dedicate them to replications. Each session is uniquely named in the following format:

Hostname-LUNname-DatabaseID

where Hostname is the name of the Replication Manager host, LUNname is the name of the CLARiiON source LUN that is being snapped, and DatabaseID is a unique identifier.
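For example (hypothetical names), a session created for source LUN LUN_12 on production host dbhost01 might appear as dbhost01-LUN_12-20481, where the trailing number stands in for the unique identifier.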

Note: SnapCache and snapshot sessions will not appear for CLARiiON arrays that do not have the proper SnapView software installed. These arrays are incapable of snap replications.

If you attempt to mount, unmount, or restore a replica that has a session that has been deleted on Navisphere, Replication Manager changes the replica’s icon to a red color, signifying that the replica is now invalid. (Replication Manager attempts to auto-unmount when it makes a mounted replica invalid.) The only action you can take on invalid replicas is to delete them.


Monitoring the snapshot cache

IMPORTANT

The frequency at which the original source data changes directly affects how much cache space is required. If the SnapCache fills up to the point where no cache space is available, the CLARiiON array removes the session and frees up the data, thus invalidating the replica associated with the snapshot.

If the CLARiiON snapshot cache runs out of disk space, the CLARiiON array will attempt to make space and you could potentially lose replicas. For this reason, it is very important to monitor the SnapCache to ensure that you have enough space to handle replications of changing data.

Replication Manager allows you to view SnapCache information directly from the user interface. It also allows you to configure Replication Manager to send email to the storage administrator when the free cache space or free number of LUNs falls below a configurable low-limit threshold.

Replication Manager updates current usage of CLARiiON SnapCache when a snap replica is run for source LUNs on the CLARiiON array, and when a storage discovery is run to the CLARiiON array.

To monitor the SnapCache:

1. Expand the Storage Services folder.

2. Right-click the specific CLARiiON array and select Properties. The CLARiiON Properties dialog box appears.

3. On the General tab, specify whether this array is used for replications and enter a description of the array.

4. On the Connectivity tab, enter the following information:

• SP A and SP B names

• Username and password

5. On the Snap Cache tab, you can configure Replication Manager to send an email to the storage administrator whenever the amount of free cache disk space or free cache LUNs falls below a preconfigured limit. This effectively allows you to monitor the SnapCache even when you are not actively viewing the Array Snap Cache Management panel. The Snap Cache tab is shown in Figure 15 on page 153.

Figure 15   CLARiiON Properties, Snap Cache tab

6. Select Monitor Low Snap Cache Levels on the Snap Cache tab.

7. Enter low-level thresholds for the amount of free cache disk space and free cache LUNs:

• Low Free Space Level is the percentage of free snapshot cache available relative to the total snapshot cache space.

• Low Free LUN Level is the percentage of free LUNs available relative to the total LUNs that make up the Reserved LUN Pool.

8. Enter the email address of the storage administrator, or person responsible for array administration. When either of the thresholds is reached, Replication Manager sends an email to warn the storage administrator.


Storage discovery is a prerequisite to email notification of low snap cache levels. In addition, be sure to follow the Replication Manager Server setup procedure for email notification described in “Email requirements” on page 34.
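As a quick cross-check outside the Replication Manager Console, a storage administrator who receives the notification can also list the reserved LUN pool (SnapCache) from the Navisphere CLI. This is an illustrative sketch only: the storage processor name and credentials are placeholders, and the exact options depend on your FLARE and Navisphere CLI release.

naviseccli -h spa_hostname -user admin -password mypassword -scope 0 reserved -lunpool -list

The output lists the LUNs in the reserved LUN pool and which of them are currently allocated to sessions, which can be compared against the values shown on the Snap Cache tab.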

Discovering storage

Replication Manager performs a new storage discovery whenever it requires the latest storage information to perform a storage-related task. To have the latest information on snap properties, you can manually discover storage.

To manually discover storage, right-click a storage array and select Discover Storage.

Replication Manager uses the previously established connections to access the array and discover the storage that is available for use by Replication Manager and the storage already in use by Replication Manager (if any).

Discover Storage discovers information about Replication Manager’s replication target devices. Compare this to Discover Arrays, which discovers information about a client’s use of source devices.

This process may take several minutes.

When discovering storage, Replication Manager provides updated information on the following:

Associated EMC Replication Storage clone and thin LUNs

Snapshot cache

Associated snapshot sessions

In addition, Replication Manager will purge all out-of-date and unused snapshot sessions during a storage discovery. Replication Manager only purges sessions that were Replication Manager sessions (based on the name). Normally, sessions are removed when a replica is deleted, but if clients are down or inaccessible when a snapshot replica is deleted, then the sessions will be purged at a later time. User-defined sessions are not affected.

EMC recommends that you set a higher threshold in the initial stages to get a feel for how many cache LUNs your specific configuration will require.


Note: If you change the CLARiiON array’s username and password after discovering CLARiiON storage in Replication Manager, you should run Add Storage from the Replication Manager Console again to update the symcfg auth entries.


Planning CLARiiON drives for clone replications

To configure a CLARiiON storage array for use with Replication Manager:

1. Plan replica storage needs.

2. Manage RAID groups and LUNs according to your plan.

3. Create storage groups and assign LUNs to the appropriate storage group.

What is a replica on the CLARiiON array?

Before we describe how to plan for replicas, it is important to understand what a replica is in the context of the CLARiiON storage array. Replicas are full copies of the production data created using SnapView clones. They are not created using snapshot or any other technology. Refer to the SnapView documentation for more specifics on SnapView clones.

Planning replica storage needs

Before you use Replication Manager with your CLARiiON storage array, it is very important to plan your replica storage needs. Planning your storage needs means:

Analyzing your existing storage usage to determine what storage you will need in the future.

Calculating the size of the existing or new LUNs you want to replicate.

Determining how many replicas of each LUN you will need.

Choosing a RAID type for the replica LUNs.

If you can create all your LUNs the same size or limit the number of different-sized LUNs, you will have more flexibility when selecting which storage you can use for your replica because replica storage must exactly match the logical size of the source LUN.

Replication Manager does not create LUNs automatically to support your replica needs. Therefore, as part of the CLARiiON setup, you must create all replica LUNs before you can use Replication Manager to create replicas. Remember that although the logical size of the replica LUN must exactly match the logical size of the source LUN, your replica can have a different RAID type than the source.


For example, assume you have three databases stored on your CLARiiON array:

Database 1 on a RAID 5 LUN (50 GB capacity)

Database 2 on a RAID 1 LUN (70 GB capacity)

Database 3 on a RAID 1+0 LUN (40 GB capacity)

To be able to maintain five simultaneous replicas of each database on that array, you must create the following LUNs:

At least five LUNs exactly 50 GB (logical size), for Database 1

At least five LUNs exactly 70 GB (logical size), for Database 2

At least five LUNs exactly 40 GB (logical size), for Database 3

Figure 16 on page 157 shows a graphical representation of this configuration.

Figure 16   Replica LUNs must match the production LUN size
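Working through the arithmetic of this example, the replica capacity that must be provisioned in addition to the production LUNs is:

5 replicas x (50 GB + 70 GB + 40 GB) = 5 x 160 GB = 800 GB of replica LUNs (fifteen LUNs in total), on top of the 160 GB used by the three production LUNs.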


Planning for temporary snapshots of clones

Replication Manager lets you mount a temporary snapshot of a clone replica. You can make changes to the mounted snapshot without changing the original contents of the clone replica. When you unmount the replica, the temporary snapshot is destroyed. This feature is sometimes referred to as snaps of clones.

To specify this temporary snapshot when you mount the replica, select Create and mount a snap of the replica under General Mount Options. If you plan to take advantage of this feature, be sure to allocate additional SnapCache LUNs for the temporary snapshots.

The EMC Replication Manager Product Guide provides more information.

For Linux hosts, refer to “Planning static LUNs for temporary snapshots of clones” on page 177.


Support for CLARiiON thin LUNs

Replication Manager supports CLARiiON virtual provisioning (thin LUNs). Replication Manager discovers thin LUNs during storage discovery and displays them with a special icon in the Replication Manager Console and with the device type “Thin LUN.”

Clone replication combinations other than thin-to-thin or traditional-to-traditional are supported (to allow a replica to run), but they generate a warning in Replication Manager indicating that the replication is making less-than-optimal use of array resources. As a best practice, make sure you have enough traditional and thin LUNs to allow replications to use the same type of LUN as the source.

Once a thin-to-traditional or traditional-to-thin replication is created, the clone will remain in the clone group on the array after replica expiration, and will be used for a future replication of that source LUN, even if a thin-to-thin or traditional-to-traditional replication would be possible using a different LUN. (The Resource Cleanup command in Replication Manager removes unused clone group items.)

Disks that are dedicated for use by thin LUNs are in one or more thin pools on the array. Replication Manager does not monitor thin pool capacity or utilization.


Managing CLARiiON storage

This section provides information about the following:

Configuring RAID groups and thin pools

Creating and binding LUNs

Assigning LUNs to the storage groups

Configuring RAID groups and thin pools

You can set up RAID groups and (on storage systems supporting virtual provisioning) thin pools by following the steps outlined in the Navisphere Manager documentation.

In general, the process to create a RAID group or thin pool is as follows (a Navisphere CLI sketch follows these steps):

1. Using the Navisphere Manager software, open the Enterprise Storage dialog box and click the Equipment or Storage tab.

2. Expand the storage system on which you want to create the RAID group or thin pool and select Create RAID Group or Create Thin Pool (the exact location of the command depends on the software version).

3. For a RAID group, specify the following characteristics of the RAID group:

• RAID group ID

• Number of disks in the RAID group

• RAID type (RAID 0, 1, 1/0, 3, 5, 6)

If you want to specify exactly which spindles to use or specify other features of the RAID group, click Manual disk selection.

Note: Keep in mind that RAID 0 offers no hardware protection against disk failure.

4. For a thin pool, assign a RAID type and let the software select the disks for the pool, or select the disks yourself. You can also specify the consumed capacity that will trigger an alert.
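If you prefer to script this step, a RAID group can typically also be created with the Navisphere Secure CLI. This is an illustrative sketch only: the storage processor name, RAID group ID, and disk positions (bus_enclosure_disk) are placeholders, and exact options depend on your Navisphere CLI release.

naviseccli -h spa_hostname createrg 10 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8

This creates RAID group 10 from five disks; the RAID type is chosen later, when LUNs are bound to the group.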


Creating and binding LUNs

When you store data on a CLARiiON system, the basic unit of storage is the LUN. Replication Manager therefore has set the granularity for replicas at the LUN level as well. You should size the LUNs you plan to use for replicas to exactly the same size as the LUNs you plan to replicate.

Note: If you have installed a logical volume manager, such as VERITAS, replication granularity is at the volume group level.

Once all the desired RAID groups have been created, create (or bind) LUNs to the RAID groups as described in the Navisphere Manager documentation:

1. Open Navisphere Manager.

2. Open the Enterprise Storage dialog box and click the Equipment or Storage tab.

3. Expand the storage system.

4. Right-click the RAID group or thin pool and select Bind LUN or Create LUN.

5. In the dialog box, specify the RAID group or thin pool and properties of the LUN. For RAID type, you will be choosing from those RAID groups created earlier.

Be sure to create enough LUNs of the correct size in the proper RAID groups or thin pools to support all the replicas of existing data that you intend to create.
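For example, a 50 GB RAID 5 replica LUN (matching Database 1 in the earlier sizing example) could typically be bound from the Navisphere Secure CLI as sketched below. The SP address, LUN number, RAID group ID, and owning SP are placeholders; verify the options against your Navisphere CLI documentation. Binding with -sq bc and a block count, rather than -sq gb, is the safest way to match the source LUN size exactly.

naviseccli -h spa_hostname bind r5 205 -rg 10 -cap 50 -sq gb -sp a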

Assigning LUNs to storage groups

Once you have created the LUNs that you need for data storage of the replicas that you plan to make, you must add LUNs to storage groups. You will need:

One storage group for LUNs that Replication Manager can use to create replicas.

Storage groups for each production host (these may already exist).

Storage groups for each mount host to use (these may already exist).

You can create storage groups on the CLARiiON array by following the steps outlined in Navisphere Manager documentation.


Creating EMC Replication Storage storage groups

For Linux clients, refer also to “Special configuration steps for Linux clients” on page 174.

Before you start to use Replication Manager, you must create and populate one or more storage groups that track the LUNs that Replication Manager is allowed to use for clone replicas of production data. The storage group names must begin with “EMC Replication Storage”.

In most cases a single EMC Replication Storage storage group is sufficient. Use more than one if you need to use more LUNs than the maximum allowed by the array for a single storage group.

To create and populate an EMC Replication Storage storage group (a Navisphere CLI sketch follows this procedure):

1. In the Navisphere Enterprise Storage dialog box, click the Equipment or Storage tab.

2. Right-click the icon for the storage system on which you want to create the storage group, and then click Create Storage Groups.

3. Specify the following characteristics of the storage group:

• Each storage group name must begin with the string EMC Replication Storage (for example, EMC Replication Storage 1, EMC Replication Storage 2).

• Choose whether LUNs are shared (usually only in clustered environments).

• Add all the LUNs that Replication Manager is authorized to use for making replicas of your production data.

• Do not connect any hosts to these storage groups.
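The following Navisphere Secure CLI sketch creates one such storage group and adds a LUN to it. The SP address, LUN number (ALU), and host LUN number (HLU) are placeholders, and command options may vary by Navisphere CLI release.

naviseccli -h spa_hostname storagegroup -create -gname "EMC Replication Storage 1"
naviseccli -h spa_hostname storagegroup -addhlu -gname "EMC Replication Storage 1" -hlu 0 -alu 25

Because no host is connected to this storage group, the HLU value is only a placeholder required by the -addhlu command.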

Creating storage groups for production hosts

If you are building a new CLARiiON system, you will need to define which hosts have access to which sets of data. You use storage groups to define this. In most systems, each production host has access to its own set of production data and the LUNs that hold that data reside in that host’s storage group. By connecting the host to that storage group, you effectively give that host access to that data on the CLARiiON array. Refer to the steps in the previous section for information on how to create a storage group. Replication Manager requires that each client see only one storage group per CLARiiON array.

Usually, only one host is attached to each storage group, except in the case of some clustered implementations.


Creating storage groups for mount hosts

Just as you need to create storage groups for production hosts (if they do not already exist), you must create storage groups for each mount host and add at least one LUN to each of these storage groups. Replication Manager automatically populates these storage groups with the appropriate LUNs after the replicas have been created and before they are mounted to the mount host. For Linux clients, refer to “Special configuration steps for Linux clients” on page 174.

You may need to reboot the host after this step in order to make the LUN visible.

Exposing selected storage to clients

At this point, your storage environment is configured and you can perform the final steps in the Replication Manager Console to expose the arrays and storage to Replication Manager:

1. Register all hosts on the Replication Manager Server.

2. Run Discover Arrays from each client.

3. Run Discover Storage for each discovered array.

“Managing replica storage” on page 97 provides more information about how to complete these steps.


LUN surfacing issues

This section describes techniques for preventing LUN surfacing issues in Windows and Solaris environments.

Background

The term LUN surfacing refers to Replication Manager’s ability to dynamically present LUNs to a mount host during a mount operation. A LUN surfacing issue arises when a mount host is unable to access a volume that contains replica data.

Note: Replication Manager experiences LUN surfacing issues during mount operations only; they do not affect replication or restore operations.

Affected environments

LUN surfacing issues affect only Replication Manager clients attached to CLARiiON storage arrays using Windows and Solaris mount hosts. Replication Manager systems that use only Symmetrix storage arrays are not affected since Replication Manager is not required to present BCVs to the mount host dynamically.

Preventing LUN surfacing issues in Windows

You can ensure reliable LUN surfacing during mount operations on Windows machines by doing the following:

Install Admsnap on all Replication Manager clients that connect to CLARiiON storage arrays. The EMC Replication Manager Support Matrix lists supported Admsnap versions.

Be sure to download the Microsoft signed driver from the Emulex website.

To verify that you have the correct HBA driver installed, open the Windows Device Manager. The driver for the Emulex fibre controller shows the version, for example 5.2.20.12 (which is another designation for 2.20a12), and the digital signer should be identified as Microsoft Windows Hardware Compatibility Publisher.

On Windows Server 2003, use Storport drivers.

If you have more than one path to the CLARiiON devices, you must install PowerPath on your systems to manage those multiple paths.


One connection each to SP A and SP B counts as multiple paths (two in this case) to CLARiiON devices. Replication Manager requires that SP A and SP B are both up and accessible. In other words, there must be IP connectivity to both service processors.

Verify that the value returned by the hostname command on the host matches the value shown in Navisphere under Storage Groups > Hosts tab > Hosts to be Connected.

Refer to EMC Replication Manager Product Guide.

Reducing LUN surfacing issues on Solaris

You can minimize problems with LUN surfacing during mount operations on Solaris machines by doing the following:

Upgrade all Solaris systems to the operating-system patch level specified in the EMC Support Matrix. To access the appropriate patch level information, navigate to the CLARiiON > Single Host > Sun Solaris > Sun section of the document. Then view the footnotes for the appropriate operating system.

Set up the driver configuration files (for example, /kernel/drv/sd.conf and /kernel/drv/fcaw.conf) so that they can recognize additional LUNs during dynamic mounts. System engineers typically modify these configuration files when the CLARiiON array is originally connected to the Solaris environment.

Refer to EMC Replication Manager Product Guide.


Configuring CLARiiON arrays for SAN Copy replications

Using EMC SAN Copy software, Replication Manager can create replicas of production data that has been copied:

From one CLARiiON storage array to another CLARiiON array

Within a CLARiiON array (in-frame SAN Copy)

From a Symmetrix array to a CLARiiON array

This section describes configuration steps for SAN Copy replication that involves only CLARiiON arrays. For information on Symmetrix-to-CLARiiON SAN Copy, refer to “Configuring the Symmetrix array for SAN Copy replications” on page 194.

Full and incremental CLARiiON-to-CLARiiON SAN Copy are supported.

Requirements for CLARiiON-to-CLARiiON SAN Copy

Complete the following tasks to ensure that the CLARiiON arrays meet the requirements for SAN Copy support. Unless indicated, the tasks apply both to SAN Copy between arrays, and within an array (in-frame):

Install Navisphere Agent, Navisphere Secure CLI, and Admsnap on the production and mount hosts. (AIX hosts require minimum Navisphere 6.24.)

Configure the CLARiiON arrays and clients as described earlier in this chapter. The EMC Replication Storage groups on the SAN Copy target array must contain devices to be used for SAN Copy replications. “Creating EMC Replication Storage storage groups” on page 162 provides more information.

Ensure that there are target volumes that are equal in size to the SAN Copy source volumes.

Install SAN Copy software on at least one of the CLARiiON storage systems, preferably on both the target and source. The term SAN Copy storage system refers to a CLARiiON array that has SAN Copy software installed:

• If you install SAN Copy on only one array, EMC recommends that you install it on the source.

• For incremental SAN Copy, you must install SAN Copy software on the source CLARiiON array.


Install SnapView on the source and target CLARiiON array if application replicas using full SAN Copy, snap of clone replicas, or snap of clone mounts are to be made.

Zone the storage systems according to the recommendations in the EMC SAN Copy Administrator’s Guide. Also, “Zone SAN Copy ports” on page 169 provides additional information. (Not required for in-frame SAN Copy.)

Use Navisphere to create a SAN Copy storage group on the array that does not have SAN Copy installed (the array that is not a SAN Copy storage system). If both arrays are SAN Copy storage systems, you need to create a SAN Copy storage group on both arrays. Name the storage group EMC RM SANCopy SG. (Not required for in-frame SAN Copy.)

Assign the SAN Copy storage group to the initiator ports of the remote CLARiiON array. This storage group is used by Replication Manager to make LUNs visible to the CLARiiON array. Do not assign any LUNs to this storage group. (Not required for in-frame SAN Copy.)

Configure snap cache devices on the source array for the source LUNs. “Snap cache devices for intermediate copy” on page 171 provides more information.

Verify that sufficient snapshot cache space is available on the source array.

Verify that at least one LUN is visible to the mount host from the target array.

SAN Copy using CLARiiON virtual provisioning (thin LUNs) is supported if both arrays are running at least FLARE 29.

Create a Replication Manager storage pool and add devices to it according to the instructions in “Create a storage pool for SAN Copy” on page 171.


Figure 17 on page 168 illustrates these configuration guidelines.

Figure 17   Example of full SAN Copy configuration with Replication Manager

Figure 18 on page 168 illustrates the guidelines for in-frame SAN Copy.

Figure 18   Example of in-frame SAN Copy


Install Navisphere and Admsnap

Be sure Navisphere Secure CLI, Navisphere Agent, and Admsnap are installed on production and mount hosts.

Install SAN Copy on the CLARiiON array

Install SAN Copy software on at least one of the CLARiiON storage arrays. If SAN Copy software is installed on only one CLARiiON array, install it on the source array. For the best performance during replication and restore, we recommend installing SAN Copy on both CLARiiON arrays.

Zone SAN Copy ports

SAN Copy ports must be correctly zoned so that SAN Copy can have access to the arrays.

Figure 19 on page 169 illustrates the recommended SAN Copy zoning method for CLARiiON arrays that contain two ports (0, 1) per SP.

Note: If only in-frame SAN Copy is used, this step is not required.

Figure 19   SAN Copy zoning example


This example applies to CLARiiON arrays that have two ports per SP. Figure 19 on page 169 shows the recommended zoning method, which is to have eight zones connecting CLARiiON array 1 and CLARiiON array 2. If your CLARiiON array has four ports per SP, you can set up a similar zoning strategy by configuring additional zones using the additional ports. Table 21 on page 170 explains the SAN Copy zoning per CLARiiON port.

Table 21   SAN Copy zoning for CLARiiON arrays with 2 ports

Zone   CLARiiON array 1 port   CLARiiON array 2 port
1      SP A Port 0             SP A Port 0
2      SP A Port 0             SP B Port 0
3      SP B Port 0             SP A Port 0
4      SP B Port 0             SP B Port 0
5      SP A Port 1             SP A Port 1
6      SP A Port 1             SP B Port 1
7      SP B Port 1             SP A Port 1
8      SP B Port 1             SP B Port 1

Note: If MirrorView is installed on either CLARiiON array, the highest port will be reserved for use with MirrorView, regardless of whether your CLARiiON SPs have two or four ports. For example, if your CLARiiON array has MirrorView installed, and the array’s SPs have two ports, you should not create and use zones 5–8 for SAN Copy; see Figure 19 on page 169.

Once the CLARiiON arrays have been properly zoned, use Navisphere to update connections on the SAN Copy CLARiiON arrays. The EMC SAN Copy documentation provides more information.

Create a SAN Copy storage group

To give Replication Manager access to SAN Copy sessions, create a storage group that can be used to make LUNs visible to the SAN Copy storage system (a CLI example follows the note at the end of this list):

Create a new storage group (do not use an existing one). You must name the storage group EMC RM SANCopy SG.

Do not assign any LUNs to this storage group.


If both CLARiiON arrays have SAN Copy software installed, create this storage group on both CLARiiON arrays.

If only one CLARiiON array has SAN Copy installed, create the storage group on the CLARiiON array that does not have SAN Copy software installed.

In order for the remote array to have access to the SAN Copy storage group, you must assign the ports of the remote CLARiiON array to it.

Note: If only in-frame SAN Copy (SAN Copy within the same array) is used, no storage group is required.
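For reference, the storage group described above can typically be created from the Navisphere Secure CLI as in the sketch below. The SP address is a placeholder, no LUNs are added, and options may vary by Navisphere CLI release.

naviseccli -h spa_hostname storagegroup -create -gname "EMC RM SANCopy SG"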

Connect the SAN Copy storage group to the remote CLARiiON array

In Navisphere, configure the SAN Copy connection of the EMC RM SANCopy SG storage group to the remote CLARiiON array by selecting the ports that were zoned for SAN Copy. For each remote port, use the Advanced option to select the local ports that were zoned for SAN Copy. (Not required for in-frame SAN Copy.)

Snap cache devices for intermediate copy

Replication Manager makes an intermediate, temporary copy of the source data before it is copied by SAN Copy software to the target array. For CLARiiON-to-CLARiiON SAN Copy, use SnapView snapshot as a temporary copy.

Create a storage pool for SAN Copy

For Symmetrix-to-CLARiiON SAN Copy, you must create a storage pool. As with all LUNs used by Replication Manager, the LUNs assigned to this storage pool will be those from the EMC Replication Storage groups on the target array. Assign to the storage pool the Symmetrix LUNs (BCV/VDEV) that are configured as the intermediate devices.

For full SAN Copy, assign to the storage pool the CLARiiON devices to be used as the target for the full SAN Copy. For incremental SAN Copy, assign to the storage pool the target devices to be used for incremental SAN Copy.

“Creating storage pools” on page 105 provides more information.

Incremental SAN Copy

When you create an incremental SAN Copy replica, the source data must be located on a source LUN in a CLARiiON storage array. For incremental SAN Copy sessions, the source CLARiiON array must be running SAN Copy software.


Replication Manager performs the following steps:

1. Checks to see if a previous replica has been created by the job. If not, an incremental SAN Copy session will be created to a remote CLARiiON LUN and the SAN Copy session will be marked to start tracking changes to create an incremental copy in the future. If the job has previously created a replica, then the changes will be updated on the same remote LUN that was previously used.

2. Runs a user-configured job that replicates the remote target LUN using a SnapView snap, a clone, or other replication technology. This replica is used to protect the integrity of the target LUN in case of future incremental replicas. To support an incremental SAN Copy replica, you must include CLARiiON LUNs on the target storage array to hold the incremental SAN Copy data, along with enough devices on the target storage array to hold the replica itself.

Figure 20 on page 172 illustrates an example.

Figure 20   Example of incremental SAN Copy replications

(For in-frame SAN Copy, the target LUNs and snapshots reside in the same array as the source.)

Table 22 on page 173 describes step-by-step how incremental SAN Copy replications work.


Table 22   How incremental SAN Copy replications work

Step  Description
1     Replication Manager creates an incremental SAN Copy session on the SAN Copy storage system.
2     Replication Manager quiesces the application on the production host.
3     Replication Manager marks the data to be replicated in the incremental SAN Copy session.
4     Replication Manager enables the application on the production host.
5     If this is the first replica, it will be a full copy to the destination LUN (target), with the point-in-time image as the source. Subsequent replicas will be incremental copies of the point-in-time image as marked in step 3.
6     A Replication Manager copy job makes a replica of the incremental SAN Copy target device, which is cataloged as the replica.


Special configuration steps for Linux clients

Replication Manager requires that you set up your CLARiiON storage arrays to use pre-exposed LUNs for both CLARiiON clone and CLARiiON snapshot replications. Dynamic LUN surfacing on CLARiiON storage arrays is not supported by the Linux operating system.

This section describes how to configure your CLARiiON storage arrays and Replication Manager to create and use static LUNs for mounting replicas in a Linux environment.

Preparing static LUNs for mountable clone replicas

To create and mount replicas that use CLARiiON clones in a Linux environment, you must follow these instructions to prepare static LUNs that support the replication:

1. Place the storage devices that you plan to use for the replicas into the EMC Replication Storage storage groups.

2. Using Navisphere, add all devices that you plan to use for mountable replicas into the CLARiiON storage group for the host to which you plan to mount the replica.

3. Reboot the mount host to make those devices visible to the host.

Note: For Linux mount VMs, EMC recommends that you set up the storage groups so that any clone can be mounted onto only one mount VM. Once you decide which VM can mount which replicas and where those replicas will be mounted, you can develop storage pools to ensure that the product chooses clones that can be mounted to the selected mount VM, based on the LUNs you have created.

Do not create a storage group for the mount VM. Instead, find the storage group of the mount VM’s ESX Server and place the replication LUNs into it.

Preparing static LUNs for mountable snapshot replicas

When you create a replica that uses CLARiiON snapshots in a Linux environment, Replication Manager creates a snapshot session. The snapshot session is a point-in-time copy of the source LUN.

To mount this snapshot session, you must create a snapshot device. The snapshot device is the handle used to access the contents of the snapshot session.


Mounting multiple snapshot replicas

For a mount host to access a snapshot session, there are three prerequisites:

A snapshot device must exist.

The snapshot device must be visible to the selected mount host.

The snapshot device must not be activated for a different session.

In this scenario, we have two existing replicas of the same two source LUNs taken at different times. The replicas have been created on snapshot sessions. We also have created two snapshot devices (one associated with each of the two source LUNs) and made those devices visible to our mount host. Figure 21 on page 175 provides an illustration.

Figure 21   Two unmounted replicas using snapshot sessions

Let us say we want to mount one of these replicas to the mount host.

In the current configuration, this is not a problem. There are two unused snapshot devices, one for each session in the replica. Also, both of these devices are visible to the mount host. See Figure 22 on page 176.


Figure 22   One replica mounted using snapshot sessions

If you now attempt to mount the second replica to the mount host, the mount operation fails because there are no available snapshot devices that are visible to the mount host. Even if you unmount the first replica, the association between the snapshot device and the snapshot session remains until the replica is deleted.

Be sure to provide enough snapshot devices to accommodate your needs before a mount operation occurs. You should use the following formula to determine how many snapshot devices you need:

(#_of_mountable_replicas) X (#_of_LUNs_per_replica)
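Applying the formula to the scenario above, with two mountable replicas of an application set that spans two source LUNs:

2 mountable replicas x 2 LUNs per replica = 4 snapshot devices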

For example, to mount the second replica in our scenario to the mount host, you need to create two more snapshot devices (one each associated with the source LUNs) as shown in Figure 23 on page 177.


Figure 23   Second replica mounted using additional snapshot devices

To prepare snapshot devices for use with Replication Manager (a Navisphere CLI sketch follows these steps):

1. Use Navisphere to create enough snapshot devices for each source LUN from which you plan to create one or more replicas and mount those replicas. Your CLARiiON documentation provides specific information about creating a snapshot device.

2. Use Navisphere to put the snapshot devices into the CLARiiON storage group for the host to which you plan to mount these replicas.

3. Reboot the mount hosts to make the snapshot devices visible to the mount hosts.
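A minimal Navisphere Secure CLI sketch of steps 1 and 2 follows. The SP address, source LUN number, snapshot device name, and storage group name are placeholders, and the exact options depend on your Navisphere CLI release.

naviseccli -h spa_hostname snapview -createsnapshot 25 -snapshotname RM_snapdevice_LUN25
naviseccli -h spa_hostname storagegroup -addsnapshot -gname "Mount_Host_SG" -snapshotname RM_snapdevice_LUN25

Repeat for each snapshot device you need, as calculated with the formula in the previous section.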

Planning static LUNs for temporary snapshots of clones

For Linux clients, follow these steps if you are planning to use temporary snapshots of clones (snaps of clones):

1. Be sure all clones are added to an EMC Replication Storage storage group.

2. In Navisphere, create a snapshot device of each clone to be used for replication and add the snapshot device to the mount host’s storage group. (This manual step is not required for other host operating systems.)


Steps to perform after an array upgrade

When you upgrade a CLARiiON array that is already discovered in Replication Manager, perform the following procedure to ensure that the new array is seen by Replication Manager.

In this procedure you rename Replication Manager’s symapi_db_emcrm_client.bin file to a "*.old" version to prevent discovery issues that can arise when data in this file is marked as bad or out of date.

Note: symapi_db_emcrm_client.bin is almost identical to the symapi_db.bin file used by Solutions Enabler. The symapi_db_emcrm_client.bin file contains information about all the arrays that are visible to the Replication Manager client and provides some additional information.

To ensure that Replication Manager sees the new array:

1. Using the Replication Manager Console, uncheck Use this array for replication.

2. Right-click the array and select Remove Array.

3. Open a command window on the client and run the following command:

cd c:\Program Files\EMC\symcli\bin

or

cd c:\Program Files (x86)\EMC\symcli\bin

4. Run the following symcfg command:

symcfg -clar discover

This command builds the Solutions Enabler database that contains the list of the CLARiiON arrays this client can discover.

5. In the same command window, run

symcfg -clar list

This command returns the list of visible CLARiiON arrays. Verify that the upgraded CLARiiON array is in the list. If the command fails to return a list of CLARiiONs, Replication Manager will fail to discover the array. Replication Manager will run a similar discovery in the next step. Troubleshoot CLARiiON-host connectivity at this point.

6. With the new array in place (agent.config updated, and so on), using the Replication Manager Console, right-click on the attached host and select Discover Arrays. The array will be automatically placed into the clarcnfg and the symapi_db_emcrm_client.bin file.

At this point the array should be usable by Replication Manager.

If discovery continues to fail, complete the steps below. These steps require that the array was visible in the output from the symcfg -clar list command in step 5.

Note: The old array still appears in the Replication Manager Console; it does not affect Replication Manager.

7. Under Program Files or Program Files (x86), open the file EMC\symapi\config\clarcnfg.

8. Delete all array information from the clarcnfg file. (The new array will be automatically added by Replication Manager during discovery.)

9. Stop the Replication Manager Client service.

10. Under Program Files or Program Files (x86), rename EMC\RM\Client\bin\symapi_db_emcrm_client.bin to EMC\RM\Client\bin\symapi_db_emcrm_client.bin.old.

Note: This file is being renamed because it may contain invalid information that can affect the creation of the clarcnfg file. Replication Manager recreates this file if it does not exist upon startup of the Replication Manager Client service. The updated array information will be added during discovery.

11. Start the Replication Manager Client service.


12. Using the Replication Manager Console, right-click the host and select Discover Arrays. This command adds the new array information to symapi_db_emcrm_client.bin and populates the clarcnfg file. The array should now be available for further configuration in the Replication Manager Console.


Cleaning up CLARiiON resources affecting the LUN limit

Replication Manager can free unused clone group items associated with LUNs that have been included for Replication Manager use. If these LUNs are used again to create a replica of the same job, a full establish of the mirror is necessary. This has an effect on performance but it helps you to avoid reaching the maximum clone group item limit.

To free clone group items, right-click the CLARiiON storage array in the tree and select Resource Cleanup.

Replication Manager then frees LUNs from clone groups. This action prevents Replication Manager from subsequently performing a quick incremental establish to create a new replica. The next several replications must perform a full establish, which takes longer to complete.

The Resource Cleanup utility is designed to free CLARiiON clone group items that are contributing to a CLARiiON array reaching its internal maximum combined limit of clones and sources. Refer to CLARiiON array documentation for limit information.

The following factors contribute to this issue:

CLARiiON SPs always attempt to use a LUN that is not currently residing in the clone group, even when there are LUNs in the clone group that contain the contents of an already deleted replica.

Source LUNs that reside in the clone group also count towards the maximum number of clone group items.

Note: The Resource Cleanup utility is designed to free only those CLARiiON clones and sources that are under the control of Replication Manager. The utility frees only resources that are not part of replicas. You may need to delete replicas first.


Requirements for MirrorView support

Replication Manager supports creation of snaps and clones of:

MirrorView/A or MirrorView/S primary volumes

MirrorView/A or MirrorView/S secondary volumes

Note: One application set cannot contain both MirrorView/A and MirrorView/S sources.

Replication Manager does not create MirrorView secondaries nor does it manage the MirrorView link.

Note the following configuration requirements related to MirrorView support:

When configuring MirrorView/S or MirrorView/A secondaries, note that Replication Manager can create snaps or clones of the secondary only when there is a single secondary image per primary volume.

The Replication Manager server requires IP connectivity to the array containing the MirrorView/S or MirrorView/A secondary device.

For Linux only, Fibre Channel connectivity of the production host to the remote CLARiiON array is required.

At least one Replication Manager client must be connected and zoned to the remote CLARiiON array.

The MirrorView image state must be Synchronized or Consistent and the secondary condition should not be in fractured state.

MirrorView/A sessions must have the update frequency set to ”manual.”

All mirrors defined in a given job must be either individual mirrors that are not part of a consistency group, or all mirrors must be members of a single consistency group, with no additional LUNs in the consistency group that are not part of the replica.

If a MirrorView/A application set contains multiple LUNs, then EMC recommends that all LUNs that are part of the replica be placed together into a single consistency group with no additional LUNs in that group.


Reducing storage discovery time with a claravoid file

In environments containing a large number of CLARiiON arrays, replication can take a long time because Replication Manager discovers arrays with each replication. To reduce the amount of time it takes for a discovery, consider using an avoidance file (claravoid) to specify the arrays to be skipped during discovery.

The avoidance file should contain one CLARiiON ID per line, and should be saved in the following locations on each Replication Manager client host that has arrays you want to skip (an example follows the list):

Program Files\EMC\SYMAPI\config (Windows)

/var/symapi/config (UNIX and Linux)
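For example, a claravoid file that skips two arrays would simply list their IDs, one per line. The IDs below are hypothetical; use the serial numbers reported for your own CLARiiON systems:

APM00064401234
APM00071502345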


5

Symmetrix Array Setup

This chapter describes how to set up the Symmetrix array for use with Replication Manager. It covers the following subjects:

Symmetrix setup checklist .............................................................. 186

Symmetrix array hardware and software requirements ............ 189

Support for 128 TimeFinder/Snap sessions................................. 191

Support for thin devices.................................................................. 193

Configuring the Symmetrix array for SAN Copy replications . 194

Configuring for remote replications across SRDF/S .................. 198

Configuring Symmetrix alternate MSCS cluster mount ............ 200

Using Symmetrix protected restore............................................... 202

Long Duration Locks ....................................................................... 203

Configuring virtual devices (VDEVs) ........................................... 205

Configuring TimeFinder clones for Replication Manager ......... 206

Steps to perform after an array upgrade ...................................... 208

Reducing storage discovery time with symavoid....................... 210


Symmetrix setup checklist

Symmetrix array setup is performed by service personnel:

Verify that the Symmetrix array meets the requirements listed in “Symmetrix array hardware and software requirements” on page 189.

BCVs must be the same size and have the same geometry as the STD volumes.

On an AIX host, use PowerPath pseudo devices (for example, rhdiskpower4) when creating a volume group. Do not use native devices in a PPVM volume group.

Configure the mount host to access the appropriate BCVs.

Remember that when you are setting up host access to storage for hosts that you plan to make part of a federated application set, all storage devices that contain all data in all components of the federated application set must be visible to all hosts that take part in that application set.

Verify that BCVs that are planned to be used by the mount host are visible.

Be sure that no BCV is visible to more than one Windows-based host at a time.

Create four six-cylinder gatekeeper devices and export them for each production host and mount host.

Install all necessary software on each Replication Manager client, server, and mount host. Refer to Chapter 2, ”Installation.”

For TimeFinder/Snap support, be sure virtual devices (VDEVs) have been created and exported, and a save area has been configured.

If you are using Symmetrix TimeFinder clones, verify that the clones (STDs and BCVs) are not being used for production data. Refer to “Configuring TimeFinder clones for Replication Manager” on page 206 for additional configuration information.

If you are running Windows 2003/SP1 on MSCS attached to Symmetrix storage, use the SPC-2 bit. For detailed information on the SPC-2 bit, and its requirements and restrictions, refer to ETA emc117300 and ETA emc134969.




Start the Replication Manager Console and connect to the Replication Manager Server. Perform the following steps:

• Register all Replication Manager hosts.

• Run Discover Arrays for each host.

• Run Discover Storage for each discovered array.

The EMC Replication Manager Product Guide provides more information about the operation of these procedures.

For TimeFinder clone support:

• Determine which STDs and BCVs to use as targets.

• Verify that the STDs or BCVs are the same size as the STD being replicated.

• Export the target STDs and BCVs to the mount host. For Windows hosts, be sure that no BCV is visible to more than one Windows-based host at a time.

• If you are using STDs, be absolutely sure that the STDs are not being used by another application. Replication Manager will change the state of these devices to Not Ready (NR) on inclusion.

Configuration steps related to Symmetrix arrays in a VMware environment are described in Chapter 9, ”VMware Setup.”

Additional setup steps for Windows Server 2008 environments

Windows Server 2008 hosts require the following FA director flag settings to be configured on the Symmetrix:

• Common Serial Number (C)

• Enable Point-to-point (PP)

• Host SCSI Compliance 2007 (OS2007)

• SCSI-3 SPC-2 Compliance (SPC2)

• SCSI-3 compliance (SC3)

• For FC Switch Base Topology (FC-SW), please XOR the following base setting with the specific operating system setting specified in the dir bit setting table: Enable Auto Negotiation (EAN), Point-to-Point (PP), Unique WWN (UWN).

• For FC Loop Base Topology (FC-AL), please XOR the following base setting with the specific operating system setting specified in the dir bit setting table: Enable Auto Negotiation (EAN) and Unique World Wide Name (UWN).


Additionally, for Windows 2008 Failover Cluster, the Persistent Reservation attribute SCSI3_persist_reserv must be enabled on each Symmetrix DMX device accessed by Windows Server 2008.

For detailed information on changing these settings, refer to the Solutions Enabler Symmetrix Device Masking CLI Product Guide, Solutions Enabler Symmetrix Configuration Change CLI Product Guide, and Solutions Enabler Symmetrix Array Controls CLI Product Guide available on Powerlink.


Symmetrix array hardware and software requirements

This section summarizes Symmetrix array hardware and software requirements.

Supported hardware and microcode levels

Refer to the E-Lab Interoperability Navigator on http://Powerlink.EMC.com for the list of supported models and microcode levels.

Gatekeeper requirements

Each Symmetrix array must export four six-cylinder gatekeeper devices per production host and mount host. Replication Manager uses the gatekeeper devices to perform discovery of BCVs and to perform replication and restores.

EMC Solutions Enabler

EMC Solutions Enabler storage software should be installed on Replication Manager hosts. The EMC Replication Manager Support Matrix lists the minimum versions. Be sure to follow all required installation and post-installation steps in the EMC Solutions Enabler Installation Guide.

LUN sizing

Replication Manager operates with LUNs that are less than 2 TB in all supported operating systems. In selected Windows environments, Replication Manager supports LUNs that are greater than 2 TB. Refer to the E-Lab Interoperability Navigator on Powerlink.EMC.com for more information.

Set SAN Policy on Windows Server 2008 Standard Edition

On Windows Server 2008, the SAN policy determines whether a disk comes in online or offline when it is surfaced on the system. For Enterprise Edition systems the default policy is offline, but on Standard Edition the default policy is online. You need to set the policy to offlineshared in order to prevent mount failures.

To set the SAN policy to offline on a Windows Server 2008 Standard Edition host, open a command line window and run the following commands:

C:\>diskpart

Microsoft DiskPart version 6.0.6001
Copyright (C) 1999-2007 Microsoft Corporation.
On computer: abcxyz

DISKPART> san policy=offlineshared

DiskPart successfully changed the SAN policy for the current operating system.


Support for 128 TimeFinder/Snap sessions

Replication Manager supports the ability to create up to 128 TimeFinder/Snap sessions from a single source device. The Symmetrix array must be at Enginuity 5772.83 or later.

Expiring sessions after Enginuity upgrade

When you want to use 128-snap session functionality for a given application set, expire TimeFinder/Snap sessions (whether made from Replication Manager or not) after you upgrade Enginuity to release 5772.83. Source devices cannot have snap targets of mixed types.

Replication Manager will return an error if an attempt is made to run a snap replica in which some sources have replicas using the 16-snap technology and others have sources using 128-snap technology.

Identifying a 128-snap session

To determine if the snap for a given source LUN is made with 128-snap technology:

1. Run the following command to determine which LUNs have snapshots. ## is the last two digits of the Symmetrix serial number:

symsnap list -sid ##

In the example below, three snap sessions are returned by symsnap list -sid 69:

Symmetrix ID: 000190102669

Source Device             Target Device    Status        SaveDev
------------------------- ---------------- ------------- ------------
           Protected
Sym        Tracks         Sym   G          SRC <=> TGT   PoolName
------------------------- ---------------- ------------- ------------
00E5       138050         04D2  .          CopyOnWrite   DEFAULT_POOL
00E7       138050         04D3  .          CopyOnWrite   DEFAULT_POOL
00E5       138050         04D4  .          CopyOnWrite   DEFAULT_POOL


2. To see if the snap on a device is a 16-session or a 128-session snap, run the following command. The example uses 69 for the Symmetrix ID and E7 for the device name:

symdev show -sid 69 E7

Snap Device Information

{

Source (SRC) Device Symmetrix Name : 00E7

Source (SRC) Device Group Name : N/A

Source (SRC) Composite Group Name : N/A

Target (TGT) Device Symmetrix Name : 04D3

Target (TGT) Device Group Name : N/A

Target (TGT) Composite Group Name : N/A

Save Pool Name : DEFAULT_POOL

State of Session (SRC ===> TGT) : CopyOnWrite

Percent Copied : 0%

Time of Last Snap Action : Thu Nov 08 16:13:14 2007

Snap State Flags : (MultiVirtual)

}

The Snap State Flags value MultiVirtual indicates that the snap on VDEV 04D3 for source LUN 00E7 is a 128-session technology snap, also known as a MultiVirtual snap. If the Snap State Flags value is None, the snap is a 16-session snap.


Support for thin devices

Replication Manager supports virtual provisioning (“thin devices”) for TimeFinder/Snap and TimeFinder/Clone technology. The EMC Replication Manager Support Matrix lists the minimum required Symmetrix microcode version.

TimeFinder/Clone setup

Before performing TimeFinder/Clone replication of thin devices, be sure to:

Create a thin pool

Add data devices to the thin pool

Bind source and target thin devices to the thin pool

The EMC Solutions Enabler Symmetrix CLI Command Reference describes how to perform these tasks.
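As an illustration only, the tasks above map to symconfigure statements similar to the sketch below. The pool name and device ranges are placeholders, and the exact statement syntax varies by Enginuity and Solutions Enabler release, so verify each statement against the CLI Command Reference cited above before using it:

symconfigure -sid 1234 -cmd "create pool ThinPool1 type=thin;" commit
symconfigure -sid 1234 -cmd "add dev 0F00:0F0F to pool ThinPool1 type=thin, member_state=ENABLE;" commit
symconfigure -sid 1234 -cmd "bind tdev 0A00:0A0F to pool ThinPool1;" commit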

Note the following Symmetrix Virtual Provisioning restrictions:

Only thin-to-thin source to target pairing is supported for TimeFinder/Clone replication.

Symmetrix Virtual Provisioning does not support RAID 5 devices. Create data devices as RAID 1 (mirrored) or RAID 6.

TimeFinder/Snap setup

For TimeFinder/Snap replication, bind only the source thin devices to the thin pool.


Configuring the Symmetrix array for SAN Copy replications

Replication Manager can create exact replicas of production data from a Symmetrix array to a CLARiiON array using EMC SAN Copy software. This section describes configuration steps for Symmetrix to CLARiiON SAN Copy replication. CLARiiON to CLARiiON SAN Copy is described in “Configuring CLARiiON arrays for SAN Copy replications” on page 166.

Devices for intermediate copy

Replication Manager makes an intermediate, temporary copy of the source data before it is copied by SAN Copy software to the target array. In the case of Symmetrix-to-CLARiiON SAN Copy, use TimeFinder/Snap virtual devices (VDEVs) to store the intermediate copy. When you create the SAN Copy storage pool you identify the VDEVs that Replication Manager is to use. One VDEV per source device is recommended.

Note: In the absence of available VDEV resources, you can use BCV devices for the intermediate copies. But VDEVs are the recommended choice when they are available.

Configuration steps

Complete the following tasks to ensure that the arrays meet the requirements for SAN Copy support:

Install Navisphere Agent, Navisphere Secure CLI, and admsnap on the production and mount hosts. (AIX hosts require minimum Navisphere CLI 6.24.)

Zone the Symmetrix and CLARiiON arrays as described in “Zone SAN Copy ports” on page 195.

Register SAN Copy ports with the Symmetrix array as described in “Register SAN Copy ports” on page 196.

Assign Symmetrix volumes to SAN Copy ports as described in “Assign Symmetrix volumes to SAN Copy ports” on page 196.

Create a Replication Manager storage pool as described in “Create a storage pool” on page 196.


Figure 24 on page 195 shows these configuration guidelines.

Figure 24   SAN Copy configuration example

Install Navisphere and Admsnap

Be sure Navisphere Agent, Navisphere Secure CLI, and Admsnap are installed on production and mount hosts.

Zone SAN Copy ports

Use the native switch management tools to zone at least one port from each SP on the CLARiiON array to one or more FA ports on the participating Symmetrix storage systems. SAN Copy ports act as host initiators to the Symmetrix array.

Note: If MirrorView is installed on either CLARiiON array, the highest port will be reserved for use with MirrorView, regardless of whether your CLARiiON SPs have two or four ports. Do not use the highest port for SAN Copy. For example, if your CLARiiON array has MirrorView installed, and the array’s SPs have two ports, use port 0 on each SP for SAN Copy.


Register SAN Copy ports

Use Navisphere Manager to register the SAN Copy ports with the Symmetrix array as follows:

1. In the Storage tree in the Navisphere Enterprise Storage window, right-click the icon for the SAN Copy storage system (the CLARiiON array in a Symmetrix-to-CLARiiON configuration).

2. Select SAN Copy > Update Connections.

3. To verify that the SAN Copy ports have registered with the Symmetrix storage system, use the Symmetrix CLI command on the production host or the host connected to the Symmetrix array:

symmask -sid symmID list logins

where symmID is the Symmetrix serial ID.

Use the SymmCLI command sympd list -sid to determine the pathname for the VCM database, or run inq on the host, if available.
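For example, the verification might look like the following sketch; the Symmetrix serial ID is a placeholder.

# Confirm that the CLARiiON SP ports appear as logged-in initiators
symmask -sid 000187900509 list logins

# List physical devices to locate the VCM database device
sympd list -sid 000187900509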

Assign Symmetrix volumes to SAN Copy ports

Assign each source device and intermediate device on the Symmetrix array to at least one port on each SP of the CLARiiON array.

You can assign the Symmetrix volumes to the SAN Copy ports using EMC ControlCenter, ESN Manager, or the SymmCLI command symmask. The following example uses the symmask command.

The commands in the following example export BCVs 211 through 215 on director 1d, port 0, to a CLARiiON port 0 (5006016010601fd5) on SP A and port 0 (5006016810601fd5) on SP B:

symmask -sid 000187900509 -wwn 5006016010601fd5 -dir 1d -p 0 add devs 211,215
symmask -sid 000187900509 -wwn 5006016810601fd5 -dir 1d -p 0 add devs 211,215
symmask -sid 000187900509 refresh

You need to run these commands for the STDs (to make SAN Copy restore possible) and for any VDEVs or BCVs that will participate in SAN Copy for Replication Manager.

Create a storage pool

You must create a Replication Manager storage pool in order to make SAN Copy replicas. For Symmetrix-to-CLARiiON SAN Copy, include the following devices in the storage pool:

The intermediate devices to be used for the temporary SAN Copy replicas

Target CLARiiON devices that are to be used for SAN Copy

“Creating storage pools” on page 105 describes how to make a storage pool.


SAN Copy configuration recommendations

Note the following best practices and restrictions for Symmetrix to CLARiiON SAN Copy configurations:

As a best practice, avoid exporting the intermediate SAN Copy devices to the production host or mount host, especially in a Windows environment.

On Windows, do not export BCVs and VDEVs to multiple Windows machines.

Source, intermediate, and target devices must be the same size when measured in number of blocks. Match the LUN size on the CLARiiON array to the Symmetrix device size. To determine the size of Symmetrix devices, run the symdev command with the show option and make a note of the value for 512-byte Blocks. Use this same value in Navisphere when configuring CLARiiON devices with the Bind LUN option. In Navisphere, be sure you set the LUN size to be in Block Count units.
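A quick sketch of the size check follows; the Symmetrix ID and device number are placeholders.

# Display the device details and note the value reported for 512-byte Blocks
symdev -sid 000187900509 show 0211

Use the reported block count as the Block Count value when you bind the corresponding CLARiiON LUN in Navisphere.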


Configuring for remote replications across SRDF/S

Replication Manager can create replicas of production data on remote BCVs, TimeFinder/Snaps and TimeFinder/Clones across an SRDF/S link. Figure 25 on page 198 shows the configuration.

[Figure 25 Remote BCV replication across synchronous SRDF: the production host uses R1 devices on the local Symmetrix array; each R1 device is paired in synchronous mode with an R2 device on the remote Symmetrix array, where BCV, TimeFinder/Clone, and TimeFinder/Snap replicas are created and presented to the mount host.]

The previous illustration shows:

A source Symmetrix array that is connected to a Replication Manager host. Replications involve devices that are seen by this host.

All the sources to be replicated are R1 devices. Each of these R1 devices has an R2 counterpart in a remote Symmetrix array. The SRDF link between the R1 and the R2 devices is in synchronous mode.

The target replica devices are BCVs, TimeFinder snaps, and TimeFinder clones in the remote Symmetrix array. The sources of these BCVs are the R2 devices that are the counterparts of the R1 devices on which the source resides.

Before installing Replication Manager, verify that:

The SRDF link is online.

R1 and R2 are in synchronous mode. Replication Manager will not put the devices in synchronous mode.
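A minimal sketch of this check from the production host, assuming the R1 devices have been added to a hypothetical Solutions Enabler device group named rm_r1_dg:

# Query the RDF pairs; the output should show Synchronous mode and a Synchronized pair state
symrdf -g rm_r1_dg query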


Configuring Symmetrix alternate MSCS cluster mount

This section outlines the Symmetrix configuration required to perform alternate MSCS cluster mounts.

Follow the guidelines in this section to configure a Symmetrix storage array for an alternate MSCS cluster mount. The EMC Replication Manager Product Guide provides more information about the procedures involved in mounting to an alternate cluster.

Preparing to mount a physical cluster node to Symmetrix

To mount to a physical MSCS cluster node, zone the storage in such a way that only a single node of the cluster (usually the passive node) is visible to the mount host. Replication Manager mounts the replica to the zoned node as shown in Figure 26.

[Figure 26 Mounting a physical node to an alternate cluster on Symmetrix: in this alternate cluster mount, the production host in Cluster 1 has static visibility to all nodes, while the mount host in Cluster 2 has static visibility to just one node.]

Preparing to mount a virtual cluster node to Symmetrix

To mount to a virtual MSCS cluster node, ensure that the devices where the replica resides are visible to the active physical cluster node. To do this, zone those devices to be visible to all physical nodes that own the resource group containing that virtual cluster.

[Figure 27 Mounting a virtual node to an alternate cluster on Symmetrix: in this alternate cluster mount, the production host in Cluster 1 has static visibility to all nodes, while in Cluster 2 the replica devices have static visibility to all nodes that own the virtual node resource.]

Replication Manager does not automatically apply full cluster support to replicas mounted on an alternate cluster. To access the application through the cluster, you must manually add the replica disks to the resource group of the selected node; Replication Manager does not perform this step for you.


Using Symmetrix protected restore

When you restore a replica on a Symmetrix storage array that has protected restore enabled, the restore automatically takes advantage of the Symmetrix array’s protected restore feature. No special configuration step is required.

The protected restore feature allows the contents of a BCV to remain unchanged during and after a restore operation, even while the BCV and the standard are joined. Subsequently, any writes to the BCV pair are not propagated to the BCV while the standard and the BCV are joined in a RestInProg or Restored state. This protection offers the same advantage as a reverse split, but without the need for a mirrored BCV.

The effect is that you will see all the restored data immediately. You do not have to wait for the restore to finish. However, if you attempt another restore before the first one finishes, you have to wait for the first restore to finish.


Long Duration Locks

Beginning with Version 5.2.3, new installations of Replication Manager no longer set a long duration lock (LDL) by default when adding a Symmetrix device or creating a replica on a Symmetrix device.

Upgrade installations of Version 5.2.3 can optionally continue to set LDLs. During upgrade, the Replication Manager installation wizard asks if you want to disable the LDL feature for future operations and remove LDLs from currently added Symmetrix devices. If you say no, LDL behavior will continue as in previous versions.

“Enabling the LDL feature in Replication Manager” on page 204 describes how to turn on the LDL feature manually.

LDLs and Replication Manager operations

When LDLs are enabled, Replication Manager does the following during replication, restore, mount, or unmount:

Temporarily removes the LDL from the replication device and any other devices included in Replication Manager that are associated with this source.

Performs the replication, restore, mount or unmount operation.

Applies a new LDL with a new lock ID to devices from which the LDL was removed.

A small window of time exists between the moment the replication device is unlocked after any of the above operations and the moment Replication Manager applies a new LDL to that same device. If another application accesses the device and applies its own LDL during this short interval, Replication Manager will be unable to relock the device and will fail its own operation.

When Replication Manager removes an LDL

When LDLs are enabled, Replication Manager removes a Long Duration Lock from a device when:

The replication device is removed; Replication Manager then resets the lock ID in the internal database.

A Symmetrix array is configured to be ignored; LDLs are removed for all devices that contain no active replicas on this ignored array. For a replication device that contains an active replica, the system removes the LDL when the replica is deleted. Replication Manager updates the database and deletes the lock ID for each device that has been removed.

Checking the state of LDLs on Replication Manager

To see the state of LDLs in Replication Manager, run the following command from rm\server\bin:

ird -s

Disabling the LDL feature in Replication Manager

To disable the LDL feature manually:

1. Stop the Replication Manager Server (ird) service.

2. Run the following command from rm\server\bin:

ird -s -L OFF

3. Start the Replication Manager Server (ird) service.

Enabling the LDL feature in Replication Manager

To enable the LDL feature manually:

1. Stop the Replication Manager Server (ird) service.

2. Run the following command from rm\server\bin:

ird -s -L ON

3. Start the Replication Manager Server (ird) service.

Viewing an LDL

To view a device’s LDL, run the Solutions Enabler command:

symdev -sid SymmetrixID -lock -range start:end list

Removing an LDL

To remove an LDL from a device, run the Solutions Enabler command:

symdev -sid SymmetrixID -lock -range start:end release

CAUTION

You should not release a long duration lock without fully understanding the impact to other applications, especially the application that placed the lock on the devices.


Configuring virtual devices (VDEVs)

Follow the instructions in the EMC Solutions Enabler Symmetrix TimeFinder Family CLI Product Guide to set up your VDEVs in such a way that they will be visible to and usable by Replication Manager.

The high-level steps are:

Be sure the TimeFinder/Snap component is installed on the host.

Create VDEVs.

Configure TimeFinder/Snap save devices.

Be sure the VDEVs are visible.
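A short verification sketch is shown below. The Symmetrix ID is a placeholder, and the reporting options assumed here may differ by Solutions Enabler version, so check the referenced product guide if the flags are not accepted.

# List devices and confirm that the VDEVs are present and visible to the host
symdev -sid 000187900509 list

# List the TimeFinder/Snap save devices that back the VDEVs
symsnap -sid 000187900509 list -savedevs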


Configuring TimeFinder clones for Replication Manager

Replication Manager supports Symmetrix TimeFinder clone functionality to create one or more TimeFinder/Clone copies of standard devices, R1 devices, or SRDF/S R2 devices.

TimeFinder clone capability is controlled using copy sessions, which pair the source and target devices. The target devices can be either standard (STD) or BCV devices as long as they are all the same size and emulation type. Clone copies of striped or concatenated meta devices can also be created, but the source and target meta devices must be identical in stripe count, stripe size, and capacity.

There are several device type considerations when using BCV and standard devices to create TimeFinder clone replicas. For example:

If the source or target of a create or activate action is a BCV, the BCV pair state must be Split.

If a source or target device is used in BCV emulation mode, the device cannot be used in a TimeFinder clone session.

Clones used for replication storage should not be the source for any other clone, VDEV, or BCV relationship.

In addition, certain TimeFinder clone operations are not allowed within Symmetrix storage arrays employing SRDF for remote mirroring. The availability of some actions depends on the current state of SRDF pairs. For example:

STD/R1 devices used as a source must be in one of the following states: synchronized, split, failed over, suspended, or partitioned.

STD/R1 devices used as a target must be in one of the following states: synchronized, split, suspended, or partitioned.
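A minimal sketch for checking these prerequisites, assuming the devices have been added to a hypothetical device group named rm_clone_dg:

# BCV pairs used in a clone create or activate action must be in the Split state
symmir -g rm_clone_dg query

# STD/R1 devices must be in one of the SRDF pair states listed above
symrdf -g rm_clone_dg query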

Restoring TimeFinder clones with long duration locks

Replication Manager cannot perform a TimeFinder clone restore to a source device that has an LDL. The lock must be removed. To view locks, run the command:

symdev -sid SymmetrixID -lock -range start:end list

If it is safe to do so, release the locks using the following command:

symdev -sid SymmetrixID -lock -range start:end release


CAUTION

You should not release a long duration lock without fully understanding the impact to other applications, especially the application that placed the lock on the devices.

For more information on long duration locks, refer to “Long Duration Locks” on page 203.


Steps to perform after an array upgrade

When you upgrade a Symmetrix array that is already discovered in Replication Manager, perform the following procedure to ensure that the new array is seen by Replication Manager:

1. Delete all relevant Replication Manager replicas.

2. Exclude all storage from the Symmetrix.

3. Right-click the array and select Remove Array.

4. De-zone the array from relevant hosts.

5. Open a command window on the client and run the following command:

cd c:\Program Files\EMC\symcli\bin

6. Run the following symcfg command:

symcfg -symm discover

This command builds the Solutions Enabler database that contains the list of the Symmetrix arrays this client can discover.

7. In the same command window, run:

symcfg list

This command returns the list of visible Symmetrix arrays. Verify that the upgraded Symmetrix array is in the list. If the command fails to return a list of Symmetrix arrays, Replication Manager, which runs a similar discovery in the next step, will also fail to discover the array; troubleshoot Symmetrix-to-host connectivity before continuing.

8. With the new array in place, open the Replication Manager Console, right-click the attached host, and select Discover Arrays. The array information is automatically placed into the symapi_db_emcrm_client.bin file.

At this point the array should be usable by Replication Manager.

If discovery continues to fail, complete the steps below. These steps require that the array was visible in the output from the symcfg list command above.


Note:

You will still see the old array in the Replication Manager Console; it does not affect Replication Manager.

9. Stop the Replication Manager Client service.

10. Start the Replication Manager Client service.

11. Using the Replication Manager Console, right-click the host and select Discover Arrays. This command adds the new array information to symapi_db_emcrm_client.bin and populates the symcfg file. The array should now be available for further configuration in the Replication Manager Console.


Reducing storage discovery time with symavoid

In host environments where a large number of Symmetrix arrays are visible, replication can take a long time because Replication Manager discovers arrays with each replication. To reduce the amount of time it takes for a discovery, consider using an avoidance file (symavoid) to specify the arrays to be skipped during discovery.

The Symmetrix avoidance file should contain Symmetrix IDs of the arrays to be ignored, with one ID per line. The avoidance file should be saved in the following location on each Replication Manager client host that has arrays you want to skip:

Program Files\EMC\SYMAPI\config (Windows)

/var/symapi/config (UNIX and Linux)
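For example, a symavoid file that skips two arrays contains nothing but their Symmetrix IDs, one per line (the IDs below are placeholders):

000187900509
000190101234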


6 Celerra Array Setup

This chapter describes how to prepare the Celerra storage array to work with Replication Manager. It covers the following subjects:

Celerra hardware and software configurations........................... 212

Configuring iSCSI targets ............................................................... 215

Configuring the iSCSI initiator ...................................................... 226

Configuring iSCSI LUNs as disk drives ....................................... 233

Verifying host connectivity to the Celerra server........................ 238

Configuring Celerra network file system targets ........................ 239

Creating iSCSI replicas using Celerra SnapSure technology..... 242

Creating iSCSI replicas using Celerra Replicator technology ... 243

Troubleshooting Replication Manager with Celerra arrays ...... 261


Celerra hardware and software configurations

The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com provides specific information about supported applications, operating systems, high-availability environments, volume managers, and other supported software.

Supported Celerra storage arrays

One or more of the following Celerra Servers are supported in a Replication Manager environment:

NSX Series

NSX Series/Gateway

NS Series/Integrated

NS Series/Gateway

CNS with 510 Data Mover Model

CNS with 514 Data Mover Model

Minimum required hardware and connectivity

The following storage and connectivity hardware components are required for any implementation of Replication Manager in a Celerra environment:

One or more supported Celerra storage arrays

Replication Manager Server software (on hosts)

Replication Manager Agent software (on hosts)

iSCSI or NFS connectivity between the Replication Manager hosts, with the agent software (IRCCD) installed, and the Celerra storage arrays

TOE card (optional)

QLogic iSCSI TOE card setup

If you are using QLogic iSCSI TOE cards, there are a few additional prerequisite setup steps that you must complete, as follows:

1. Ensure that the NIC in the Windows host can ping the iSCSI target IP address.

2. Download and install the Microsoft iSCSI Initiator service (v. 2.0 or later). You can do this by choosing Install iSCSI Service Only during the iSCSI Initiator installation.

3. On the Celerra workstation or in the Celerra Manager, run the following CLI command:

server_param server_2 -facility iscsi -modify AsyncEvent -value 0

where server_2 is the Celerra Data Mover name and may be different on any given Celerra unit.

Minimum required software for hosts

The following software must be installed on all Replication Manager production and mount hosts:

Microsoft iSCSI initiator (Windows hosts only)

VMware initiator (optional; not required for virtual machines with Celerra-based virtual disks)

Note:

For information on configuring your Celerra to work with Replication Manager and VMware, refer to “VMware Setup” on page 287.

Set SAN policy on Windows Server 2008 Standard Edition

On Windows Server 2008, the SAN policy determines whether a disk comes up online or offline when it is surfaced on the system. On Enterprise Edition systems the default policy is offline, but on Standard Edition the default policy is online. You need to set the policy to offlineshared in order to prevent mount failures.

To set the SAN policy to offlineshared on a Windows Server 2008 Standard Edition host, open a command-line window and run the following commands:

C:\>diskpart

Microsoft DiskPart version 6.0.6001

Copyright (C) 1999-2007 Microsoft Corporation.

On computer: abcxyz

DISKPART> san policy=offlineshared

DiskPart successfully changed the SAN policy for the current operating system.


Install patches and service packs

Depending on the version of Windows and the applications you are running, there may be OS and application patches, hotfixes, and service packs that you need to install on the Replication Manager Server:

1. Refer to the EMC Replication Manager Support Matrix on: http://Powerlink.EMC.com

2. Obtain the required patches from the manufacturer’s website and install.


Configuring iSCSI targets

This section lists the prerequisites and the tasks you must perform to configure iSCSI targets and LUNs on the Celerra Network Server if you are using iSCSI connections for your replication tasks.

For information on configuring Replication Manager to use NFS in Celerra NAS environments, refer to “Configuring Celerra network file system targets” on page 239.

Prerequisites

In order to use targets on the Celerra Network Server, you must have one or more external network interfaces available for the targets to use.

iSCSI target configuration checklist

The checklist below lists the tasks for configuring iSCSI targets:

Start the iSCSI service on the Data Mover.

Create one or more iSCSI targets on the Data Mover.

Create one or more file systems to hold the iSCSI LUNs.

Create one or more iSCSI LUNs on an iSCSI target.

Permit iSCSI initiators to contact specific LUNs by configuring a LUN mask for the target.

(Optional) Configure the iSNS client on the Data Mover.

(Optional) Enable authentication by creating CHAP entries for initiators and for the Data Mover.

Create iSCSI targets

You must create one or more iSCSI targets on the Data Mover so that an iSCSI initiator can establish a session and exchange data with the Celerra Network Server.

To configure an iSCSI target on the Data Mover:

1. From Celerra Manager: a. Click iSCSI.

b. Click the Targets tab.

c. Click New. The New iSCSI Target panel appears.


2. From the Choose Data Mover list, choose the Data Mover on which you want to create the target.

3. Complete the following fields:

• Name

Enter a user-friendly name for the new iSCSI target. This name is an alias for the IQN. A target alias is any alphanumeric string containing a maximum of 255 characters.

An alias can include Unicode characters in UTF-8 encoding.

Note:

Since the alias is the key field in the Celerra Network Server’s iSCSI databases, aliases must be unique on the Data Mover.

• iSCSI Qualified Target Name (Optional)

You can enter a globally unique name for the target in IQN or EUI format; however, it is strongly recommended that you do not enter an iSCSI qualified target name and instead let the Celerra Network Server automatically generate a legal iSCSI name.

Note:

IQNs can have a maximum of 223 characters.

4. (This step is optional unless you plan to use the SendTargets means of target discovery.) Enter one or more network portals for the target to use. An iSCSI target uses network portals to communicate with iSCSI initiators. For the Celerra Network Server, a network portal is an interface on the Data Mover.


Enter network portals in the following formats:

• Without a portal group: <IP address>

• With a portal group: <portal group tag>:<IP address>:<port number> (for example, 3:112.32.168.21:5711)

Multiple portals must be separated by commas. For example, 1:112.32.168.12:3260,3:112.32.168.21:5711

If you do not enter a <portal group tag> and a <port number>, the default portal group tag (1) and default iSCSI port (3260) are used. If you leave this field blank, initiators cannot contact the target until you add a network portal to the target.

For more information on network portals and portal groups, refer to the Configuring iSCSI Targets on Celerra technical module.

5. Click OK. The new target is created.

Create file systems for iSCSI LUNs

The first step in configuring iSCSI targets on your Celerra Network Server is to create and mount one or more file systems to provide a dedicated storage resource for the iSCSI LUNs.

Note:

A file system that holds iSCSI LUNs should be dedicated to iSCSI and not used for other purposes. For example, an iSCSI file system should not be exported through a CIFS share or NFS export.

Create a file system through the Celerra Manager or the CLI. Refer to the Celerra Manager help system or the Managing Celerra Volumes and File Systems technical module for instructions. For important information on determining the proper size of an iSCSI file system, refer to the next section.


Sizing the file system

The file system should be large enough to accommodate your iSCSI LUNs, any snaps you plan to take of your iSCSI LUNs, and support changes to replicas that you mount using Replication Manager. iSCSI snaps are point-in-time copies of all the data on a LUN. The file system space required to store iSCSI snaps depends on several factors, including your Celerra configuration and the rate of data change that occurs in the interval between each snapshot.

Effect of virtual provisioning and sparse TWS on sizing

Traditionally, iSCSI LUNs allocate space equivalent to their published size in their file system. For example, a 100 GB iSCSI LUN allocates approximately 100 GB in the file system. If only 1 GB of data is actually stored on the iSCSI LUN, the remaining space is wasted because it is unavailable for any other use.

Celerra arrays support a technology called Virtual Provisioning™, which seeks to address this concern. Virtual Provisioning allows the storage administrator to publish a size for a LUN, but allocate space only on an as needed basis. Because of this technology, the host views the LUN with a constant size. However, on Celerra, only the space currently in use is allocated. If this technology is being employed in your environment it could greatly affect the sizing requirements for snapshot replicas.

Similarly, sparse TWS is supported for replica mounts. This reduces the amount of storage required for mounted replicas. You should contact EMC Customer Service to configure sparse TWS, and understand the impact of this configuration.

Where to find detailed sizing information

For detailed information about how to size your Celerra file system for use with Replication Manager, including formulas and detailed examples, refer to the technical note entitled Sizing Considerations for iSCSI Replication on EMC Celerra, available on Powerlink.

Ensure that you adhere to these sizing guidelines. If there is not enough free space in the file system, you cannot take iSCSI snaps or mount snaps. Also, the replica may become unresponsive while waiting for sufficient free space. If you notice that a Celerra replication seems to be taking an abnormally long time to complete, check the free space on the file system and if the file system is over 90 percent full, extend the file system. You can create a file system watermark or warning alert condition in Celerra Monitor to help you monitor the space usage for a file system.


Create iSCSI LUNs

After creating an iSCSI target, you must create iSCSI LUNs on the target. The LUNs provide access to the storage space on the Celerra Network Server. In effect, when you create a LUN, you reserve space on a Celerra file system. From a client system, a Celerra iSCSI LUN appears as any other disk device.

Sizing LUNs

Since you cannot change the size of a partition after you create it, you should create LUNs that are large enough to meet your potential data storage needs. LUNs require a minimum size of 3 MB and cannot exceed 2 TB minus 1 MB (2,097,151 MB). When you create a LUN, the space taken from the file system is slightly larger than the size you specify for the LUN. This is attributable to overhead in allocating and tracking the LUN by the underlying file system.

LUN numbering

A LUN is identified by a unique number per target. According to the iSCSI protocol, a target can have up to 256 LUNs (numbered 0 to 255); however, the usable range of LUNs is limited by the operating system on the iSCSI host. On Windows, LUNs are limited to a total of 255 LUNs (numbered 0 to 254). The Celerra Network Server complies with the Windows limitation, and by default supports 255 LUNs per target.

Use only LUN numbers 0–127 for production LUNs (PLUs). A PLU is a logical unit that serves as a primary (or production) storage device. Doing this leaves the remaining LUN numbers (128–254) available for mounting iSCSI snaps as snap LUNs (SLUs). SLUs are point-in-time representations (iSCSI snaps) of PLUs that have been promoted to LUN status so that they can be accessed.

If you grant initiator access to these LUNs through LUN masking, you will frequently see phantom disks appear and disappear in the iSCSI host’s device manager.

Creating an iSCSI LUN

To create an iSCSI LUN:

1. From Celerra Manager: a. Click iSCSI.

b. Click the LUNs tab.

c. Click New. The New iSCSI LUN page appears.

2. From the Choose Data Mover list, choose the Data Mover on which you want to create the LUN.



3. Complete the following fields:

• Target

Choose the iSCSI target on which you want to create the new LUN.

• LUN

Enter the number of the new LUN. You cannot use an existing LUN number. The LUNs already defined for the target are listed above this field.

• File System

Choose the file system on which you want to create the new LUN.

Note:

The Celerra Manager shows all available file systems in the File System list—some of the file systems displayed in the File System list may already be in use.

• Size

Enter the size of the new LUN in MBs.

Note:

After creation, the size of the LUN is smaller than the size you indicated. Some of the space on the LUN is needed for the underlying file system to fully allocate the LUN.

• Read Only

(Celerra Manager 5.6 or later only)

Check this box to mark the new LUN as read-only for Celerra Replicator™ purposes.

4. Click OK. The LUN is created.

Create iSCSI LUN masks

To control initiator access to an iSCSI LUN, you must configure an iSCSI LUN mask. A LUN mask controls incoming iSCSI access by granting or denying specific iSCSI initiators access to specific iSCSI LUNs on a target.


By default, all initial LUN masks are set to deny access to all iSCSI initiators. To enable an initiator to access the LUNs on your targets, you must create a LUN mask explicitly granting access to that initiator:

You should create a LUN mask for the production host, not the mount host.

Changes to LUN masks take effect immediately. Be careful when deleting or modifying a LUN mask for an initiator with an active session. When you delete a LUN mask or remove grants from a mask, initiator access to LUNs currently in use is cut off and will interrupt applications using those LUNs.

To create an iSCSI LUN mask:

1. From Celerra Manager: a. Click iSCSI.

b. Click the Targets tab.

c. Right-click the name of the target on which you want to create the new LUN mask. d. Select Properties. The iSCSI Target Properties page appears.

2. From the iSCSI Target Properties page: a. Click the LUN Mask tab. The LUN Mask page appears.

b. Click New. The New iSCSI Mask page appears.



3. Complete the following fields:

• Initiator

Enter the qualified name of the iSCSI initiator on the Windows host to which you want to grant access.

Note:

In Celerra Manager 5.6 or later, you can use the drop-down box to choose from a list of all available initiators that are currently logged in to this IQN. For earlier versions of Celerra Manager, you can usually copy the qualified name of the initiator from somewhere on the initiator configuration interface instead of typing it. For the Microsoft iSCSI Initiator, you can copy the qualified name from the Initiator Node Name field on the General tab.

• Grant LUNs

Enter the LUNs to which you are granting access. LUNs can be entered as single integers (for example, 2) or as a range of integers (for example, 2–6). Each LUN or range of LUNs must be separated by commas. For example, you can enter LUNs such as:

0,2-4,57

4. Click OK. The new LUN mask is created.

Configure iSNS on the Data Mover

(Optional task) If you want iSCSI initiators to automatically discover the different iSCSI targets on a Data Mover, you can configure an iSNS client on the Data Mover. Configuring an iSNS client on the Data Mover causes the Data Mover to register all of its iSCSI targets with an external iSNS server. iSCSI initiators can then query the iSNS server to discover available targets on your Data Movers.

To configure iSNS on a Data Mover:

1. From Celerra Manager: a. Click iSCSI.

b. Click the Configuration tab. The Configuration page appears.

2. From the Show iSCSI Configuration for list, choose the Data Mover on which you want to configure the iSNS client.


3. Complete the following fields:

• iSNS Server

Enter the IP address of the iSNS server. If you do not include a port number, the Data Mover uses the default port of 3205. If you want to specify a different port, enter the IP address and port number in the following format:

<IP address>:<port number>

Note:

If your iSNS server is on a Microsoft cluster, enter the IP address of the cluster.

Or

• ESI Port (Optional)

Enter a port number for the ESI (Entity Status Inquiry) port. This port is used by the iSNS server to monitor the status of the iSCSI service on the Data Mover. If you do not provide a port, the ESI port is dynamically selected when you start the iSCSI service.

4. Click Apply. The iSNS client is configured.

Create CHAP entries

(Optional task) If you want your Data Mover to authenticate the identity of iSCSI initiators contacting it, you can configure CHAP authentication on the Data Mover. To configure CHAP, you must:

Edit the RequireChap and/or RequireDiscoveryChap parameters so that targets on the Data Mover require CHAP authentication.

For instructions, refer to the Configuring iSCSI Targets on Celerra technical module.

Create a CHAP entry for each initiator that will contact the Data Mover. CHAP entries are configured on a per Data Mover basis—each initiator contacting a Data Mover has a unique CHAP secret for all targets on the Data Mover.

In some cases, initiators authenticate the identity of the targets they contact. If this situation exists, you must configure a CHAP entry for reverse authentication. Reverse authentication entries are different from regular CHAP entries because there is only one CHAP secret per Data Mover. The Data Mover uses the same CHAP secret when any iSCSI initiator authenticates a target on the Data Mover.


To create a regular CHAP entry or a reverse authentication CHAP entry:

1. From Celerra Manager: a. Click iSCSI.

b. Click the CHAP Access tab.

c. Click New. The New iSCSI CHAP Entry page appears.

2. From the Data Mover list, choose the Data Mover on which you want to create the new CHAP entry.

3. For Type, select either:

Specify a new initiator name: Allows you to specify what CHAP secret the Data Mover uses to authenticate the iSCSI initiator. If you choose to create CHAP entries, every iSCSI initiator can have a different secret.

Or

Reverse authentication for the Data Mover: Allows you to set a secret for reverse authentication when the iSCSI initiator authenticates the iSCSI target. If you choose to configure reverse authentication, you can only set one secret per Data Mover.

4. Complete the following fields:

• Name

The qualified name of the iSCSI initiator for which you are configuring the CHAP entry.

Note:

If you chose to configure reverse authentication, the value for this field is set automatically to ReverseAuthentication.

• Secret

Enter the secret exchanged between the iSCSI initiator and the iSCSI target during authentication. Secrets must be a minimum of 12 characters and a maximum of 255 characters.

Windows operating systems support CHAP secrets of 12 to 16 characters.

• Re-enter Secret

Enter the secret again.

5. Click OK. The CHAP entry is saved.

Start the iSCSI service

Before using iSCSI targets on the Celerra Network Server, you must start the iSCSI service on the Data Mover.

To start the iSCSI service on the Data Mover:

1. From Celerra Manager: a. Click iSCSI.

b. Click the Configuration tab. The Configuration page appears.

2. From the Show iSCSI configuration for list, choose the Data Mover on which you want to start the iSCSI service.

3. Select the iSCSI Enabled checkbox.

4. Click Apply. The iSCSI service is started.
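The iSCSI service can also be started from the Celerra Control Station command line. The sketch below assumes a Data Mover named server_2; the server_iscsi options shown are typical but may vary by Celerra software version, so confirm them against the Celerra man pages.

# Start the iSCSI service on the Data Mover and confirm its status
server_iscsi server_2 -service -start
server_iscsi server_2 -service -status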


Configuring the iSCSI initiator

To configure the iSCSI initiator on your Replication Manager production and mount host:

1. Register the initiator with the Windows Registry so that you can browse for the initiator name from the Celerra iSCSI host applications.

2. (Optional) Enable the initiator to authenticate iSCSI targets by configuring the initiator with a CHAP secret for reverse authentication.

3. Enable the initiator to find iSCSI targets on the network by manually configuring a target portal’s IP address and/or providing the IP address of an iSNS server.

4. Log the initiator into a target.

This section covers the basic configuration tasks you must perform so that your Replication Manager hosts can connect to a target on a Celerra Network Server. For detailed configuration information, refer to the Microsoft iSCSI Software Initiator User Guide that was installed with the Microsoft iSCSI initiator.

Note:

Depending on the Windows version, iSCSI initiator screens may vary from those shown. If you are running on Windows Server 2008, the iSCSI Initiator is bundled with the operating system.

Register the initiator with the Windows Registry

You must perform the following procedure so that the initiator’s iSCSI Qualified Name (IQN) is written to the Windows Registry. If you do not follow this procedure, the Browse dialog boxes in the Celerra iSCSI host applications may not be able to find the initiator’s IQN.

In addition, this procedure converts any uppercase characters in the initiator’s IQN to lowercase. The Celerra Network Server only supports lowercase IQNs.

You must complete this procedure for each iSCSI initiator that will connect to the Celerra Network Server:

1. Open the Microsoft iSCSI Initiator.


2. On the General tab, as shown in Figure 28, click Change but do not change the initiator’s IQN.

[Figure 28 General tab of the iSCSI Initiator Properties window]

Note:

EMC recommends that you do not change the initiator’s IQN.

IQNs must be globally unique and adhere to IQN naming rules.

3. Click OK to write the existing IQN to the Windows Registry.

Configure CHAP secret for reverse authentication

(Optional) If you plan to use reverse authentication for your iSCSI sessions, you must configure the iSCSI initiator with a CHAP secret.

With regular CHAP, the target challenges the initiator for a CHAP secret; however, with reverse authentication, the initiator challenges the target for a secret. The Microsoft iSCSI initiator only supports one secret per initiator.

You can set your initiator so that it requires reverse authentication for discovery, as described in “Configure iSCSI discovery on the initiator” on page 229, and/or for sessions, as described in “Log in to iSCSI target” on page 232.

About CHAP authentication

CHAP provides a method for iSCSI initiators and targets to authenticate each other by the exchange of a secret, or password, that is shared between the initiator and target.

Because CHAP secrets are shared between the initiator and target, you must configure the same CHAP secret on both the initiator and the target. Both the initiator system and the target system maintain databases of CHAP entries.

By default, targets on the Celerra Network Server do not require CHAP authentication; however, depending on your needs, you may want to require CHAP authentication. You can enable CHAP authentication at two different points of the iSCSI login:

Discovery authentication — CHAP authentication is required before the initiator can contact the Celerra Network Server to establish a discovery session during which the initiator attempts to discover available targets. If you have enabled discovery authentication on the iSCSI target, you must provide the CHAP secret when you configure iSCSI discovery. Refer to “Configure iSCSI discovery on the initiator” on page 229 for procedures.

Session authentication — CHAP authentication is required before the initiator can establish a regular iSCSI session with the target. If you have enabled session authentication on the Celerra Network Server, you must provide the CHAP secret when you log in to the iSCSI target. Refer to “Log in to iSCSI target” on page 232 for information.

Refer to the Configuring iSCSI Targets on Celerra technical module for information on enabling and configuring CHAP authentication on the Celerra Network Server.

Set CHAP secret for reverse authentication

(Optional) To configure a CHAP secret for the initiator:

1. Open the Microsoft iSCSI Initiator.


2. On the General tab, click Secret. The Chap Secret Setup dialog box appears, as shown in Figure 29 on page 229.

[Figure 29 Chap Secret Setup dialog box]

3. Enter a CHAP secret that is 12 to 16 characters long.

4. Click OK.

Configure iSCSI discovery on the initiator

Before an initiator can establish a session with a target, the initiator must first discover where targets are located and the names of the targets available to it. The initiator obtains this information through the process of iSCSI discovery. You can configure the initiator to use the following discovery methods:

SendTargets discovery where you manually configure the initiator with a target’s network portal and then the initiator uses the portal to discover all the targets accessible from that portal.

Automatic discovery using an iSNS server where the initiators and targets all automatically register themselves with the iSNS server. Then when an initiator wants to discover what targets are available, it queries the iSNS server.

These discovery methods are not mutually exclusive—you can use both techniques at the same time. The following sections provide instructions on configuring the initiator to use these discovery methods. For more information on iSCSI Discovery, refer to the Configuring iSCSI Targets on Celerra technical module.

SendTargets discovery

Use this procedure to add a network portal for manual discovery. You can add multiple network portals for multiple iSCSI targets:

1. Open the Microsoft iSCSI Initiator.


2. Click the Discovery tab.

3. In the Target Portals area, click Add or Add Target Portal. The Add Target Portal dialog box appears, as shown in Figure 30 on page 230.

[Figure 30 Add Target Portal dialog box]

Enter the IP address of the target’s network portal. If the target uses something other than the default iSCSI port of 3260, enter the port number.

Note:

To ensure that the network is available, you should ping the target’s IP address before configuring it in the iSCSI initiator. If the Celerra Network Server is not available, or if you have entered an invalid IP address, you receive a Connection Failed error.

4. (Optional) If you want to use forward CHAP authentication (where the target challenges the initiator), click Advanced. The Advanced Settings dialog box appears.

This step is only optional if the target does not require CHAP authentication. If the target requires authentication, and you do not configure a forward CHAP secret on the initiator, the initiator cannot log in to the target. If the target does not require CHAP authentication, but the initiator offers it, the target will comply with the initiator’s request. For more information on enabling CHAP authentication, refer to the Configuring iSCSI Targets on Celerra technical module.

5. (Optional) Do the following to enter the CHAP secret: a. Select CHAP logon information.


b. In the Target secret field, enter the secret that you configured for the iSCSI target. Microsoft only supports CHAP secrets that are 12 to 16 characters long.

c. If you also want the initiator to authenticate the target for iSCSI discovery, select Perform mutual authentication. Refer to “Set CHAP secret for reverse authentication” on page 228 for information on configuring reverse authentication on the initiator.

d. Click OK. You return to the Add Target Portal dialog box.

6. Click OK. The network portal is added to the Target Portals list.
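The same target portal can also be added from a Windows command prompt with the iscsicli utility, which is useful for scripting. The IP address below is a placeholder, and the default iSCSI port 3260 is assumed.

REM Add the target portal, then list the targets discovered through it
iscsicli AddTargetPortal 112.32.168.12 3260
iscsicli ListTargets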

Automatic discovery

If you plan to use automatic discovery to discover targets on your iSCSI network, you must also configure the iSNS server’s IP address on each Data Mover to which you plan to connect. Refer to the Configuring iSCSI on the Celerra Network Server technical module for instructions on configuring iSNS on the Celerra.

To configure the iSNS server’s IP address on the initiator:

1. Open the Microsoft iSCSI Initiator.

2. Click the Discovery tab.

3. In the iSNS Servers area, click Add. The Add iSNS Server dialog box appears.

4. Enter the IP address or fully-qualified DNS name of the iSNS server and click OK. The iSNS server is added to the initiator’s configuration.


Log in to iSCSI target

After you configure the initiator with the target’s network portal or the iSNS server’s IP address, a list of available targets appears on the initiator’s Targets properties page. In order to access the target’s LUNs, the initiator must log in to the target:

1. Open the Microsoft iSCSI Initiator and click the Targets tab. The list of available targets appears.

2. Select the target you want to connect to and click Log On. The Log On to Target dialog box appears.

3. Select Automatically restore this connection when the system boots.

4. Select Enable multi-path if multiple network paths exist between the Celerra data movers and the host.

5. (Optional) If the iSCSI target requires CHAP authentication, click Advanced. The Advanced Settings dialog box appears.

For information on enabling CHAP authentication, refer to the Configuring iSCSI Targets on Celerra technical module.

6. (Optional) Do the following to enter the CHAP secret: a. Select CHAP logon information.

b. In the Target secret field, enter the secret that you configured for the iSCSI target. Microsoft only supports CHAP secrets that are 12 to 16 characters long.

c. If you also want the initiator to authenticate the target, select Perform mutual authentication. Refer to “Set CHAP secret for reverse authentication” on page 228 for information on configuring reverse authentication on the initiator.

d. Click OK. You return to the Log On to Target dialog box.

7. Click OK. The initiator connects to the target and displays the connection status of the target on the Targets tab. Optionally, select the connected target and click Details for additional session and device properties.
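For scripted environments, the iscsicli utility can also perform a quick (non-persistent) login by target name. The IQN below is a placeholder; persistent logins and CHAP secrets require additional iscsicli parameters that are not shown here.

REM Log in to the target, then list the active iSCSI sessions
iscsicli QLoginTarget iqn.1992-05.com.emc:celerra-target-example
iscsicli SessionList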


Configuring iSCSI LUNs as disk drives

This section provides information about how to configure iSCSI LUNs on your Replication Manager production host only. This information should not be used to configure mount hosts.

After the initiator logs in to a target, each of the target’s LUNs to which the initiator has access appears as an unknown disk in Windows Disk Manager. Before you can use the iSCSI LUN from Windows, you must configure the LUN as an accessible disk:

1. Use Windows Disk Manager to configure volumes on the LUN.

2. Bind the iSCSI initiator to the new volumes.

3. (For Microsoft Exchange Servers only) Ensure that the iSCSI service starts before Microsoft Exchange.

4. (For Microsoft SQL Servers only) Ensure that the iSCSI service starts before Microsoft SQL Server.

If you have unformatted disks, when you open Windows Disk Manager the Initialize and Convert Disk Wizard opens. Be very careful using this wizard to format your iSCSI disks. The default behavior of the wizard is to format disks as dynamic disks, and the Microsoft initiator does not support dynamic disks with iSCSI. You can use the wizard to write signatures to the iSCSI LUNs but do not use it to upgrade the disks. Refer to “Configure volumes on iSCSI LUNs” on page 234 for instructions on formatting iSCSI drives through Disk Manager.

Notes on Disk Numbering

When working with Windows Disk Manager, it is important to understand the difference between iSCSI LUN numbering and Windows disk numbering. The LUN number identifies the LUN in relation to the iSCSI target on the Celerra Network Server.

When you log in to a target with the Microsoft iSCSI initiator, Windows assigns disk numbers to each of the available LUNs. The disk number identifies the LUN in relation to the set of disks on the Windows system. The LUN number and the disk number do not necessarily correspond.



You can see the LUN number-disk number association through the iSCSI initiator. Use this procedure to view the LUN number and the associated disk number:

1. Open the Microsoft iSCSI initiator and click the Targets tab.

2. Select the desired target and click Details.

3. On the Target Properties dialog box, click the Devices tab.

4. Select the desired device and click Advanced. The Device Details dialog box opens, showing the Windows Disk Number of each LUN.

In addition to LUN number and disk number, iSCSI LUNs, when partitioned and formatted through Windows Disk Manager, may be assigned a drive letter if unused drive letters are available. The order of drive letters may have no relationship to the disk number or the LUN number.

The disk number of a LUN may change after system reboot or after logging off and logging in to a target. This behavior should not affect Microsoft Exchange since Exchange uses logical drive mappings, not physical disk numbers. Since disk numbers may change, it is a good idea to change the volume label from the default Microsoft label (New Volume) to something unique when you first format an iSCSI LUN in Disk Manager.

Configure volumes on iSCSI LUNs

Configuring LUNs in Windows consists of the following steps:

Writing a signature to the disk.

Creating a partition on the disk.

Formatting a volume on the partition.

Mounting the partition on a drive letter.

To configure the iSCSI disks on your iSCSI host:

1. Open the Microsoft Disk Manager. The iSCSI LUNs to which the initiator is connected appear as unallocated disks.

2. Right-click the unallocated space and select New Partition from the context menu. The New Partition Wizard starts.


3. Follow the instructions in the wizard to create a partition on the new disk:

Note:

The Celerra iSCSI host applications require that the logical volume uses a 1-to-1 configuration. A 1-to-1 configuration is when the partition is the only partition on the disk.

• You can use any available format on the partition.

• For convenience when working with multiple iSCSI disks, change the volume label from the generic New Volume label to something more descriptive.

• The disk is formatted and assigned the drive letter you indicated in the wizard.

The iSCSI disk now appears as a local disk from Windows Explorer.

4. Close Disk Manager.

Bind volumes in the iSCSI initiator

Binding volumes causes the initiator to mark volumes that are on iSCSI disks. Then, whenever the system reboots, the system pauses until all bound volumes are available. This feature is very important when using applications like Microsoft Exchange where the application can fail if volumes containing the databases are unavailable. By binding volumes, you ensure that all volumes used by Exchange come back automatically after the system reboots.

Note:

Binding marks all volumes that are on iSCSI LUNs including any iSCSI snaps (SLUs) that have been promoted to LUN status. You should not bind volumes when any SLUs are mounted on the system.

Prerequisite

In order to bind volumes on a target, the target has to be on the persistent target list. Your targets should be on that list if you selected Automatically restore this connection when the system boots before logging in to the target. For more information on this setting, refer to “Log in to iSCSI target” on page 232.

Use this procedure to bind the new volumes:

1. Open the Microsoft iSCSI Initiator.

2. Click the Bound Volumes/Devices tab.

3. Click Bind All to have the initiator persistently bind all volumes that have been created using iSCSI disks.


If you make any volume changes (re-partition or reformatting), you need to click Bind All again to update the bound volume list.

Note:

Some of the user interface elements referenced above are named differently in the Windows Server 2008 version of the iSCSI Initiator. For example, the Bound Volumes/Devices tab is named Volumes and Devices, and the Bind All button is named Autoconfigure.
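The equivalent operation from a command prompt is sketched below; on Windows Server 2008 this corresponds to the Autoconfigure button.

REM Persistently bind all volumes that reside on iSCSI disks
iscsicli BindPersistentVolumes
REM Review the list of persistently bound devices
iscsicli ReportPersistentDevices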

Make Exchange service dependent on iSCSI service

(Exchange using Microsoft iSCSI Initiator only) To ensure that iSCSI disks are available when Microsoft Exchange starts, in addition to binding volumes you must also ensure that the Exchange service does not start until the iSCSI service is up and running.

To make the Exchange service dependent on the iSCSI service:

1. Run regedit.exe.

WARNING

Incorrectly modifying the Registry may cause serious system-wide problems that may require you to reinstall your system. Use this tool at your own risk.

2. Locate the following key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\DependOnService

3. Add the following value to the DependOnService key: msiscsi

WARNING

Do not alter any of the other values stored in this key.

4. Click OK to apply.

Make SQL Server service dependent on iSCSI service

(SQL Server using Microsoft iSCSI Initiator only) To ensure that iSCSI disks are available when Microsoft SQL Server starts, in addition to binding volumes you must also ensure that the SQL Server service does not start until the iSCSI service is up and running.


To make the SQL Server service dependent on the iSCSI service:

1. Run regedit.exe.

WARNING

Incorrectly modifying the Registry may cause serious system-wide problems that may require you to reinstall your system. Use this tool at your own risk.

2. Locate the following key, or add it if it does not exist:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSSQLSERVER\DependOnService

3. Add the following value to the DependOnService key: msiscsi

WARNING

Do not alter any of the other values stored in this key.

4. Click OK to apply.
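As an alternative to editing the Registry directly, you can set the same dependency from an elevated command prompt with the built-in sc.exe tool. This is a minimal sketch; it assumes the service names MSExchangeIS, MSSQLSERVER, and MSiSCSI. Note that sc config depend= replaces the entire dependency list, so first run sc qc to record the existing dependencies and include them (separated by forward slashes) along with MSiSCSI.

sc qc MSExchangeIS
sc config MSExchangeIS depend= <existing dependencies>/MSiSCSI

sc qc MSSQLSERVER
sc config MSSQLSERVER depend= <existing dependencies>/MSiSCSI

The space after depend= is required. The new dependency takes effect the next time the service starts.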


Verifying host connectivity to the Celerra server

You can quickly verify iSCSI connectivity between your production or mount host and your Celerra by examining the Targets tab of the iSCSI Initiator Properties dialog box. The Targets tab lists all iSCSI devices that are available to the host.

To view the connectivity status of all targets:

1. From the Replication Manager host, open Microsoft iSCSI Initiator.

2. Click the Targets tab. The connected status of all targets is shown.
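You can perform a similar check from a command prompt with iscsicli.exe, which can be convenient on hosts that you administer remotely. This is a sketch that assumes the standard iscsicli commands; the output format varies by Initiator version.

iscsicli ListTargets

iscsicli SessionList

ListTargets shows the targets the host has discovered, and SessionList shows the active sessions, confirming which targets the host is actually logged in to.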


Configuring Celerra network file system targets

Replication Manager can create replicas of data stored in Celerra network file systems (NFS) as well as in Celerra iSCSI. This section lists the prerequisites and the tasks you must perform to configure the Celerra Network Server when you use network file systems for your replication tasks.

For information on configuring Replication Manager for Celerra iSCSI environments, refer to “Configuring iSCSI targets” on page 215.

Prerequisites

In order to use targets on the Celerra Network Server, you must have one or more external network interfaces available for the targets to use and you must zone the Celerra and use the Add Storage Wizard to attach the Celerra Network Server to the Replication Manager instance.

NFS configuration checklist

The checklist below lists the tasks for configuring Celerra NFS:

In a new Celerra environment, perform these steps to create, export, and mount a network file system:

Create a source file system on the Celerra Network Server

Export the file system to the host

Mount the file system on the Linux host

If you are adding Oracle to your Celerra network file system, perform the additional setup steps described in the EMC Replication Manager Product Guide under the section entitled “Configuring Oracle Celerra NFS DR environment for Replication Manager.”

Refer to the Celerra documentation for more information about how to complete the steps outlined above; a hedged example command sequence is shown below.
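The following is a minimal sketch of that command sequence from the Celerra Control Station. The file system name (src_ufs), storage pool (clar_r5_performance), Data Mover (server_2), size, and host address (10.6.50.25) are hypothetical; substitute values for your environment and verify the options against the Celerra Network Server Command Reference Manual.

$ nas_fs -name src_ufs -create size=100G pool=clar_r5_performance
$ server_mountpoint server_2 -create /src_ufs
$ server_mount server_2 src_ufs /src_ufs
$ server_export server_2 -Protocol nfs -option root=10.6.50.25,access=10.6.50.25 /src_ufs

Then, on the Linux host, mount the exported file system (replace <data_mover_ip> with the Data Mover network interface address):

# mount -t nfs <data_mover_ip>:/src_ufs /mnt/src_ufs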

Once NFS has been set up on the Celerra Network Server:

Add the Celerra Network Server(s) to the Replication Manager environment through Replication Manager Storage Discovery.

Before you can create replicas on a remote target Celerra, you must create a read-only file system on the target Celerra to which the data will be transferred and create a Celerra Replicator session between the source and target file systems using Celerra Manager. This can be accomplished by creating a Celerra Replication of the file system within Celerra Manager. Refer to “Create the destination Celerra network file system” on page 240 for more information.

Note:

In order to perform a remote replication, the source file system on a production Celerra should have a Celerra Replicator session established to one and only one target file system on the target Celerra. While it may be possible for Celerra to have one source file system replicating to two target file systems on the same target Celerra, this is an atypical configuration, and is not supported by Replication Manager.

Create an application set to specify the source data to replicate.

Refer to the EMC Replication Manager Product Guide for more information.

Create one or more jobs to set the replication options for the replica. Refer to the EMC Replication Manager Product Guide for more information.

Note:

In Celerra NAS environments, do not schedule jobs to run while the NASDB Backup process is running. The NASDB Backup process locks the NAS database and prevents Replication Manager from completing required tasks.

Naming convention for Celerra NFS snap replicas

When a Replication Manager job creates a Celerra NFS snap, the snap sessions are named using the following naming schema:

<job_name>-<filesystem_name>-Snap<n>

where:

<job_name> is the name of the Replication Manager job.

<filesystem_name> is the name of the filesystem that is part of the replica.

<n> is a number between 1 and the number of replicas in the job.

You can use this schema to predict the name of the session if the administrator needs the session name in order to configure NDMP backups of the replica snaps outside of Replication Manager. For example:
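A job named ExchNightly (a hypothetical name) that replicates the file system fs_mail would produce snap sessions named ExchNightly-fs_mail-Snap1, ExchNightly-fs_mail-Snap2, and so on, up to the number of replicas kept by the job.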

Create the destination Celerra network file system

The easiest way to create a read-only file system on the target Celerra to which the data will be transferred is to use Celerra Manager to create a replication session between the source and target file systems.

Note:

When creating the asynchronous replication between the source and destination Celerra, EMC recommends using a Time out of Sync value of 1 minute. This is the minimum value; using higher Time out of Sync values may result in Replication Manager taking longer to execute a job.

Alternatively, you can follow these steps to complete this operation:

1. Create a destination file system. To create a mount point on the destination Data Mover, type:

$ server_mountpoint server_2 -create /dst_ufs1

Local replication example:

$ server_mountpoint server_2 -create /local_dst

2. To mount the file system as read-only on the destination Data Mover, type:

$ server_mount server_2 -option ro dst_ufs1 /dst_ufs1

Local replication example:

$ server_mount server_2 -option ro local_dst /local_dst

Note:

The destination file system can only be mounted on one Data Mover, even though it is read-only.

Celerra NFS restore considerations

Whenever you restore a Celerra NFS replica, Replication Manager creates a rollback snapshot for every Celerra file system that has been restored. The name of each rollback snapshot can be found in the restore details. The rollback snapshot may be deleted manually after the contents of the restore have been verified and the rollback snapshot is no longer needed. Retaining these snapshots beyond their useful life can fill the Celerra’s snap cache and cause resource issues.


Creating iSCSI replicas using Celerra SnapSure technology

Replication Manager allows you to create snapshot replicas of Celerra LUNs. When you create a replica using Celerra SnapSure™ functionality, Replication Manager creates a point-in-time copy of all the data on the source LUN. For the initial iSCSI snapshot, Replication Manager creates a full copy of the original LUN, therefore requiring the same amount of space on the file system as the LUN. The space used by subsequent iSCSI snapshots depends on how much the data has changed since the last snapshot was taken. See “Create file systems for iSCSI LUNs” on page 217 for more information.

For more information about how to create a replication job, see the EMC Replication Manager Product Guide.


Creating iSCSI replicas using Celerra Replicator technology

Replication Manager allows you to create Celerra replication copies of LUNs on remote Celerras using the Celerra Replicator replication technology. The next sections are intended for system administrators who plan to use Celerra Replicator for iSCSI to replicate production iSCSI LUNs by asynchronously distributing local snapshots (point-in-time copies of the LUNs) to a destination.

Celerra Replicator for iSCSI concepts

Celerra Replicator for iSCSI takes advantage of Celerra replication technology to replicate the source (production) LUN to a destination iSCSI LUN over an IP network. Then, a snapshot (a read-only, point-in-time copy of a LUN) of the production LUN is taken, transferred to, and associated with the destination LUN. Multiple snapshots of the destination LUN are preserved.

Celerra Replicator for iSCSI provides a one-to-one relationship between the source and destination, although a source can have multiple one-to-one relationships to replicate data to several destinations independently.

In addition, because the production LUN is subject to ongoing activity, it is typically in an inconsistent state (with pending writes and cached operations). Celerra Replicator for iSCSI therefore uses snapshots to clone the production LUN to the destination LUN.

Replication Manager generates snapshots of the production LUN consistent from the point of view of the host and client applications.

Celerra Replicator for iSCSI provides data protection and integrity. If data on the production LUN is deleted or corrupted, it can be rebuilt from the replica of the destination LUN.

The replica of the destination LUN is protected (read-only), preventing the iSCSI initiator from writing to it. This replica of the destination LUN can be used for content distribution, data mining, testing, or other applications that offload activity from the production LUN.

There are three Celerra Replicator replication types:

Remote replication — The Data Movers for the source and destination LUNs reside in separate Celerra cabinets. This is the typical configuration. For additional protection in case of disaster, the source and destination Celerra cabinets should be geographically distant.


Local replication — The Data Movers for the source and destination LUNs are separate, but reside within the same Celerra cabinet.

Loopback replication — A form of local replication using the same Data Mover for the source and destination LUN. With loopback replication, the data does not travel over the network.

How Celerra Replicator works in an iSCSI Environment

Replication Manager manages iSCSI snapshots: it creates snapshots of the production LUN, marks them for replication, and initiates the copy job from the source to the destination. Before creating a snapshot, Replication Manager ensures applications are in a quiescent state and the cache is flushed so that the snapshot is consistent from the point of view of client applications.

When the Celerra Replicator replication is first initiated, a snapshot of the source iSCSI LUN is taken and marked for replication. This snapshot is copied to the destination iSCSI LUN as the baseline snapshot. Because the baseline snapshot is a full copy of the source LUN, this initial transfer may be time-consuming.

For subsequent replications, the data differential between the latest snapshot of the source LUN and the previous snapshot is calculated and only this delta is sent to the destination LUN. With the delta, the latest snapshot is re-created on the destination. These snapshots kept on the destination can be used either to restore corrupted or deleted data on the production LUN or to enable the destination to take over as the production LUN in a failover scenario.

Replication of iSCSI LUNs requires the source and destination LUNs to be the same size, and the destination LUN must be read-only. For remote replication, the source and destination Data Movers must share a common passphrase (a preconfigured password) for authentication.

When Replication Manager performs a Celerra Replicator remote replication, a clone replica is created the first time the job is run. This replica is created and managed by Replication Manager for use in Celerra disaster recovery operations. This specialized clone replica is named similar to the following:

Clone_lrmc123_10.241.180.124


where Clone is the prefix that is prepended to all Celerra Replicator clone replicas, lrmc123 is the destination Celerra name, and 10.241.180.124 is the destination Data Mover IP address.

The timestamp and history for the iSCSI clone replica are the same as those of the most recent snap replica that used the clone as a source for the snap, and are updated every time the Celerra Replicator job is run. If you delete the most recent snap replica, the timestamp and history of the Celerra Replicator clone replica remain the same.

The Celerra Replicator clone replica is automatically deleted when the last Celerra Replicator snap replica that uses the clone is deleted.

Note:

The Celerra Replicator clone replica cannot be mounted, restored, or deleted. The only actions that you can perform on this replica are to fail it over or promote it to an alternate host. See “Using the Celerra Replicator clone replica for disaster recovery” on page 258 and “Using the Celerra Replicator clone replica for repurposing” on page 258 for more information.


System requirements, restrictions, and cautions

This section lists system requirements, restrictions, and cautions that you should be aware of when using Celerra Replicator for iSCSI.

System requirements

The following Celerra Network Server software, hardware, network, and storage configurations are required for using Celerra Replicator for iSCSI:

Celerra Network Server version 5.5 or later

Replication Manager version 5.0 or later

An Ethernet 10/100/1000 network with one or more iSCSI hosts configured with the most recent version of the Microsoft iSCSI Initiator. The minimum iSCSI Initiator version is 2.0.4.

The EMC NAS Interoperability Matrix is available on Powerlink. It contains detailed information on supported software and hardware, such as backup software, Fibre Channel switches, and application support for Celerra network-attached storage (NAS) products.

Restrictions

When using Celerra Replicator for iSCSI, the following restrictions and limitations apply:

The maximum iSCSI LUN size is 1 megabyte less than 2 terabytes (2,097,151 MB).

Replication Manager allows you to take a maximum of 1,000 snapshots, all of which can be mounted, for each iSCSI LUN.

The maximum number of concurrent active iSCSI replication sessions for a Data Mover is 128.

The maximum number of configured iSCSI replication sessions for a Data Mover is 1024.

A single source can be replicated to multiple destinations through multiple one-to-one iSCSI replication relationships. These replications are independent and not synchronized with each other. In order to support this configuration, you need at least one job configured for each destination. If you have multiple jobs, you must either specify the target IQN for all jobs, or none of the jobs (Celerra name and IP address only).


One-to-many replication is supported. If you fail over one of the target clone replicas, then the only option for the other clone replica is to promote it to another host, or delete it altogether. This is because the failover operation changes the production LUN to read-only.

Multihop (cascading) replication, where a destination is replicated to one or more additional sites, is not supported in Replication Manager.

You can use the iSCSI LUN extension feature with the source LUN in a replication relationship. After you extend the source LUN, if you take a snapshot and mark the snapshot for replication, the destination LUN is automatically extended to accommodate the snapshot. If the destination LUN cannot be extended, the replication session is suspended. Then you can either extend the file system that holds the destination LUN and allow the suspended session to continue, or destroy the replication session.

When using a virtually provisioned iSCSI LUN, monitoring file system space is very important. Virtual file system provisioning can aid in ensuring enough space is available. If a virtually provisioned destination LUN runs out of space, the replication session is suspended. You can either destroy the replication session or increase available space on the destination file system and allow the suspended session to continue.

Some management tasks require using a specific user interface. “Managing iSCSI LUN replication” on page 255 lists the management tasks you can perform with each interface choice.

Cautions

This section lists the cautions for using this feature on the Celerra Network Server. If any of this information is unclear, contact an EMC Customer Support Representative for assistance:

To provide a graceful shutdown in the case of an electrical power loss, both the Celerra Network Server and the storage array require uninterruptible power supply (UPS) protection. Without this protection, iSCSI replication becomes inactive, with possible data loss.

The source and destination Data Movers must use the same internationalization mode (either ASCII or Unicode). You cannot replicate a file system from a Unicode-enabled Data Mover to an ASCII-based Data Mover.


Planning considerations

This section includes the following important information for setting up Celerra Replicator remote replication:

“Setting up destination LUNs for Celerra Replicator remote replication” on page 248

“Choosing the Celerra Replicator replication type” on page 248

“File system size requirements” on page 248

Setting up destination LUNs for Celerra Replicator remote replication

You need to set up one iSCSI LUN as an available destination for each source LUN. The destination LUN must be read-only and the same size as the source.

You can set up multiple destinations for a given source, but each source-destination pair is a one-to-one replication relationship. The technical module Configuring iSCSI Targets on Celerra provides information and instructions for estimating file system space requirements, creating iSCSI LUNs, and configuring LUN masks.

Choosing the Celerra Replicator replication type

Use remote replication for data protection or to offload activity on the local Celerra. For additional protection against disaster, ensure the production and destination systems are geographically distant.

Use local replication to reduce network bandwidth usage, to create a copy of a LUN for test bed purposes, or to perform load balancing.

File system size requirements

When estimating file system requirements to support regular iSCSI LUNs (as described in the technical module Configuring iSCSI Targets on Celerra), consider the number of snapshots to be used for iSCSI LUN replication, and the number that will be mounted.

Configuring Celerra Replicator remote replication

This section provides instructions for configuring the replication of an iSCSI LUN. The online Celerra man pages and the Celerra Network Server Command Reference Manual provide detailed information about the commands used in this section. The technical module Configuring iSCSI Targets on Celerra provides detailed instructions and related information on creating and managing iSCSI targets.



The following tasks are required to set up Celerra Replicator remote replication:

Configure interconnect security for remote replication

Create an interconnect communication path (DART 5.6 or later)

Create a destination LUN for iSCSI replication

Configure a Celerra Replicator replication job

Mounting an iSCSI replica on a remote host

To perform a mount of an iSCSI replica on a remote host, the iSCSI Initiator must be logged in to the target IQN but does not necessarily need to be logged in to the source IQN.

During a mount operation, Replication Manager attempts to connect to the source IQN first, and if it fails, will connect to the target IQN to perform the mount. If the iSCSI Initiator is not logged in to the source IQN on the mount host, you may see warnings during the mount phase of the job similar to the following:

Host is not connected to <source IQN>...

These warnings can be ignored, and your mount should still complete successfully on the target Celerra.

Configuring interconnect security for remote replication

A remote iSCSI replication relationship includes Data Movers in two different Celerra cabinets, which requires you to establish a trusted relationship between them. To do so, issue the nas_cel command on the Control Station for the Data Mover of the source iSCSI LUN. Then, do so on the Control Station for the Data Mover of the destination iSCSI LUN.

Use this procedure to establish a trusted relationship between the source and destination sites for remote iSCSI replication. For this procedure, you need to be the root user.

Note:

This procedure describes how to establish a trusted relationship between data movers using the Celerra control station command line interface. Alternatively, you could use the Celerra Manager 5.6 (or later) user interface to create the trusted relationship.


Action

To establish a trusted relationship between both sides for remote iSCSI replication, use this command syntax (first on the Data Mover for the source iSCSI LUN and then on the Data Mover for the destination iSCSI LUN):

# nas_cel -name <cel_name> -create <ip> -passphrase <passphrase>

where:

<cel_name> = name for the primary Control Station on the opposite side.

<ip> = the IP address of the primary Control Station on the opposite side.

<passphrase> = a string of 6 to 15 characters. Configure the same passphrase on the source and destination.

Example:

To create the source side of a trusted relationship with the destination cs_110 with the passphrase 7Wq9Xt50y, type:

# nas_cel -name cs_110 -create 172.24.108.0 -passphrase 7Wq9Xt50y

Output

operation in progress (not interruptible)...
id         = 1
name       = cs_110
owner      = 0
device     =
channel    =
net_path   = 172.24.173.100
celerra_id = APM000420008150000
passphrase = 7Wq9Xt50y

Note

Make note of the destination <cel_name>, which you provide to Replication Manager as the authentication name when creating the replication job.

Create an interconnect communication path (DART 5.6 or later only)

If you are using Celerra Manager 5.6 or later, after setting up the trust relationship, you must establish communication between the pair of remote Data Movers that will support replication sessions. For remote replication between two different Data Movers you must create a communication path, called an interconnect, on both Data Movers in the pair.

Only one interconnect can be established between a given Data Mover pair. A Data Mover interconnect supports Celerra Replicator V2 sessions by defining the communication path between a given Data Mover pair located on the same cabinet or different cabinets.


For remote replication, create the local side of the interconnect, then repeat the process on the remote side to create the peer side of the interconnect. You must create both sides of the interconnect before you can successfully create a remote replication session.

To set up one side of a new Data Mover interconnect:

1. In Celerra Manager, select Celerras > [Celerra_name] > Replications, click the Data Mover Interconnects tab, and click New. The New Data Mover Interconnect panel appears.

2. In the Data Mover Interconnect Name field, type the name to assign to this locally defined interconnect.

3. In the Celerra Network Server field, keep the default of Local System if the destination is on the same system for local replication, or select another available Celerra from the list for remote replication.

4. In the Peer Data Mover Interconnect Name field, select a Data Mover for the peer side of the interconnect. This field does not appear when configuring a remote interconnect.

5. In the Data Mover field, select a Data Mover from the list to serve as the local side of the interconnect.

6. In the Interfaces field, select the IP addresses to make available to the interconnect, and/or type the name service interface names to be used.

You must define at least one IP address or name service interface name. Name service interface names should be fully qualified names and must resolve to a single IP address (for example, resolve by using a DNS server).

7. In the Name Service Interface Names field, type a comma-separated list of the interface names available for the local side of the interconnect. These names must resolve to a single IP address (for example, resolve by using a DNS server) and should be fully qualified names.

8. In the Peer Data Mover field, select the peer Data Mover to use for the peer side of the interconnect.

9. In the Peer Interfaces field, select the IP addresses to make available on the peer side of the interconnect, and/or type any name service interface names to use.

Creating iSCSI replicas using Celerra Replicator technology

251

Celerra Array Setup

You must select at least one IP address or name service interface name. Name service interface names must resolve to a single IP address (for example, resolve by using a DNS server). Although not required, these names should be fully qualified names.

10. In the Peer Name Service Interface Names field, type a comma-separated list of the name service interface names available for the peer side of the interconnect. These names must resolve to a single IP address (for example, resolve by using a DNS server). Although not required, these names should be fully qualified names.

11. In the Bandwidth Schedule field, specify a schedule to control the interconnect bandwidth used on specific days and/or times.

By default, all available bandwidth is used at all times for the interconnect. The schedule applies to all sessions using the interconnect.

You can set overlapping bandwidth schedules to specify smaller time intervals first and bigger time intervals next. The limit set for the first applicable time period is taken into account, so the order in which the schedule is created is important.

Specify a schedule by using comma-separated entries, most specific to least specific, as follows:

[{Su|Mo|Tu|We|Th|Fr|Sa}][HH:00-HH:00][/Kbps],[<next_entry>],[...]

Example:

MoTuWeThFr07:00-18:00/2000,/8000 means use a limit of 2000 Kbits/second from 7 a.m. to 6 p.m. Monday through Friday. Otherwise, use a bandwidth limit of 8000 Kbits/second.

12. In the CRC Enabled field, use the default to enable the Cyclic Redundancy Check (CRC) method for verifying the integrity of data sent over the interconnect, or disable the CRC checking.

13. Click OK. The interconnect is created. Validate the interconnect to detect any connectivity issues before you create a replication session.

From the other Data Mover, create the peer side of the interconnect. Make the same set of IP addresses and/or name service interface names available on both sides of the interconnect to avoid any issues with interface selection.
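If you prefer to create the interconnect from the Control Station command line rather than Celerra Manager, the nas_cel -interconnect command provides an equivalent path. The following is only a sketch: the interconnect name, Data Mover names, destination system name, and IP addresses are hypothetical, and the exact option syntax should be verified against the Celerra Network Server Command Reference Manual for your DART release.

$ nas_cel -interconnect -create NYs2_LAs2 -source_server server_2 -destination_system cs_110 -destination_server server_2 -source_interfaces ip=172.24.108.10 -destination_interfaces ip=172.24.173.10

Run an equivalent command on the peer Celerra to create the other side of the interconnect before creating a replication session.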


Creating a destination LUN for Celerra Replicator remote replication

The destination iSCSI LUN stores the replica of the source iSCSI LUN. The destination LUN must be the same size as the source LUN and be read-only. Use this procedure to create an available destination LUN for Celerra Replicator remote replication.

Note:

This procedure describes how to create read-only destination LUNs using the Celerra control station command line interface. Alternatively, you could create them using the Celerra Manager 5.6 (or later) user interface.

Action

To create a destination iSCSI LUN, use this command syntax:

$ server_iscsi <mover_name> -lun -number <lun_number> -create <target_alias_name> -size <size>[M|G|T] -fs <fs_name> [-vp {yes|no}] -readonly

where:

<mover_name> = name of the Data Mover for the destination iSCSI LUN

<lun_number> = unique number for the LUN, between 0 and 127

<target_alias_name> = alias name for the iSCSI target

<size> = size of the corresponding source iSCSI LUN in megabytes (M, the default), gigabytes (G), or terabytes (T)

<fs_name> = name of the file system

Example:

To create a regular (not virtually provisioned) destination iSCSI LUN of 40 gigabytes in file system ufs1 on server_2, type:

$ server_iscsi server_2 -lun -number 24 -create t1 -size 40G -fs ufs1 -readonly

Output

server_2: done

If the file system lacks the required space to create the destination LUN, an error occurs.


Configuring an iSCSI LUN replication session

After the source and destination iSCSI targets are configured to support a replication relationship, perform the following tasks to begin the replication process:

1. Using Replication Manager on the production host, create a Celerra Replicator job by providing the authentication name for the destination Celerra Network Server and the IP address of the destination Data Mover:

Figure 31 Configuring Celerra replication storage

• For remote replication, enter the authentication name assigned through the <cel_name> value in the nas_cel command for the Celerra name field. To verify the value, use the nas_cel -list command on the Celerra (an example appears after this list).

• For local or loopback replication, enter the authentication name local for the Celerra name field.

• For loopback replication, enter the IP address 127.0.0.1 for the Data Mover IP address field.

• Optionally, you can specify a particular target LUN for the replication by entering a specific IQN in the Celerra IQN field. Otherwise, the Celerra will choose any read-only target LUN that matches the size of the source device.

If you specify an IQN for a job, you must specify an IQN for all other jobs that share the same application set.
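For reference, this is a sketch of the verification step on the source Celerra Control Station; the entry name shown (cs_110) is hypothetical and the output columns vary by DART release.

$ nas_cel -list

Look for the name column in the output (for example, cs_110) and enter that value in the Celerra name field of the job wizard.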


Managing iSCSI LUN replication


This section describes the following tasks you can use to manage Celerra Replicator:

Determine whether an iSCSI LUN is part of a replication relationship, and if so, its role.

List configured iSCSI replication sessions.

Obtain the status of an iSCSI replication session.

Restore data from a replicated snapshot.

The online Celerra man pages and the Celerra Network Server Command Reference Manual provide detailed descriptions of the commands used in these procedures.

Viewing the replication role of iSCSI LUNs

Use this procedure to display information about iSCSI LUNs in the Celerra cabinet, including their role, if any, in LUN replication:

1. From Celerra Manager, click iSCSI.

2. If it is not already selected, click the LUNs tab.

The LUN Replication column shows the replication role for each LUN (or blank, if none).

Source — A LUN being replicated (production LUN).

Destination — A LUN on which replicas of a source are currently stored.

Available Destination — A read-only LUN created for use as a replication destination, as described in “Creating a destination LUN for Celerra Replicator remote replication” on page 253.

Cascade — A LUN that is both a source and a destination in different replication sessions (because of a reversed replication; true multihop replication is not supported).

Listing all iSCSI replication sessions

Each replication session is identified by a globally unique identifier generated by the Celerra Network Server. Use this session_id to specify a session when managing iSCSI replication through the nas_replicate command.

Note:

Some CLI commands require you to specify a replication session identifier. Instead of typing the lengthy ID, you can use this command to list session IDs and then copy the ID and paste it into the command.


Use this procedure to list the identifiers for all configured iSCSI replication sessions.

Note:

The following procedure is specific to DART 5.5. If you are using DART 5.6 or later, use the nas_replicate -info -all command to list all iSCSI replication sessions.

To list the configured iSCSI replication sessions for all Data Movers, type:

$ nas_replicate -list

Sample output

server_2:

Local Source
session_id = fs35_T30_LUN2_APM00043500940_0000_fs44_T40_LUN10_APM00045005996_0000
application_label = RM_lrmc107_K:\_ContainsRmDatabase

Local Destination
session_id = fs29_T9_LUN60_APM00045005995_0000_fs29_T15_LUN94_APM00043500940_0000
application_label = RM_lrmc102_E:\_ContainsRmDatabase
session_id = fs61_T65_LUN2_APM00045005996_0000_fs22_T1_LUN23_APM00043500940_0000
application_label = RM_lrmb146_P:\
session_id = fs49_T54_LUN117_APM00045005996_0000_fs30_T18_LUN10_APM00043500940_0000
application_label = RM_lrmc090_I:\LOG1\

Querying the status of an iSCSI replication session

Use this procedure to query the status of a single iSCSI replication session or for all sessions on the Data Mover.

Action

To query the status of a specific iSCSI replication session or all replication sessions, use this command syntax:

$ nas_replicate -info {id=<session_id> | -all}

where:

<session_id> = unique session identifier

Examples:

To query the status of all replication sessions on the Data Mover, type:

$ nas_replicate -info -all

To query the status of a specific replication session, type:

$ nas_replicate -info id=fs22_T3_LUN0_APM00043807043_0000_fs29_T8_LUN0_APM00043807043_0000

Note:

Type the entire command without breaking; press Enter at the end only.


Output

server_2:
session_id = fs22_T3_LUN0_APM00043807043_0000_fs29_T8_LUN0_APM00043807043_0000
alias = iscsi_rep1
application_label = SQLServer
replication source:
src_target_name = iqn.1992-05.com.emc:apm000438070430000-3
src_LUN_number = 0
name = fs22_T3_LUN0_APM00043807043_0000
replicator_state = WaitingForRequest
latest_snap_replicated = fs29_T8_LUN0_APM00043807043_0000.ckpt000
current_number_of_blocks = 0
transport_type = DP_REP_TRANSPORT_TYPE_RCP
replication destination:
dst_target_name = iqn.1992-05.com.emc:apm000438070430000-8
dst_LUN_number = 0
name = fs29_T8_LUN0_APM00043807043_0000
playback_state = WaitingForRequest
outstanding snaps:
version_name                     create_time                  blocks
-------------------------------- ---------------------------- ----------
data_communication_state = DP_REP_NETWORK_CONNECTION_NOT_IN_USE
ctrl_communication_state = DP_REP_NETWORK_CONNECTION_ALIVE
recent_transfer_rate = ~39914 kbits/second
avg_transfer_rate = ~39914 kbits/second
source_ctrl_ip = 172.24.102.242
source_ctrl_port = 0
destination_ctrl_ip = 172.24.102.243
destination_ctrl_port = 5081
source_data_ip = 172.24.102.242
source_data_port = 0
destination_data_ip = 172.24.102.243
destination_data_port = 8888
QOS_bandwidth = 2147483647 kbits/second


Restoring data from a replicated snapshot

If data on the production LUN is deleted or damaged, you can recover the data from the replica on the destination LUN up to the point of the last snapshot.

To restore data from the replica to the production LUN, use Replication Manager. When the restore operation is initiated, the following events occur:

The source iSCSI LUN is made read-only.

The replication direction is reversed from the original destination LUN to the original source LUN.

Data is transferred from the replica to the original source LUN.

The original source LUN is made read-write.

You can then resume the original replication session, as described in the EMC Replication Manager Product Guide.

Using the Celerra Replicator clone replica for disaster recovery

Replication Manager uses the Celerra Replicator clone replica to facilitate disaster recovery of the Celerra array. When a disaster occurs, you can use the Failover command to fail over the clone replica from the production Celerra and host to a disaster recovery Celerra and host. See “Celerra disaster recovery” on page 353 for more information about setting up and performing Celerra-based disaster recovery.

Note:

The Failover command is available in Celerra Manager 5.6 (or later) environments only.

VMware Site Recovery Manager support is available for Celerra storage in a VMware environment. See “VMware Site Recovery Manager (SRM) disaster recovery” on page 384 for more information.

Using the Celerra Replicator clone replica for repurposing

Replication Manager also provides a Promote to Production command for Celerra Replicator clone replicas that allows you to make the clone LUN (and the snaps of this clone) from the original production host's application set operational on an alternate host. When you promote a Celerra Replicator clone replica, you make it available for repurposing on another production host, or for testing disaster recovery procedures without disrupting your production environment.

You cannot promote a clone replica if any snaps of the clone are currently mounted; if you attempt to, the promotion will fail. Unmount any snaps of the clone replica before starting the promotion.

Note:

The Promote to Production command is available in Celerra Manager 5.6 (or later) environments only.

To repurpose a replica by promoting it to production:

1. Right-click the Celerra Replicator clone replica and select Promote to Production. The Promote Replica dialog box appears, as shown in Figure 32 on page 259.

Figure 32 Promote Replica Dialog Box

2. Select the name of the alternate production host on which to promote the Celerra Replicator clone replica. The drive letters being used on the production host must be available on the new production host, since the replica promotion will mount the clone LUNs to the same drive letters.

3. Enter a new application set name for the Celerra Replicator clone replica. This application set will be created to allow you to begin protecting your promoted data.


4. Click the Options button to change the following option:

Automatically create jobs for the new application set — Makes a copy of all local snap jobs of the source application set and copies them to the new destination application set. If there are no local snap jobs defined for the source application set, a general local snap job with no options is created.

5. Click OK. The Promote Replica progress panel appears and displays the current status of the replica promotion.

Once the replication storage has been promoted to the alternate production host, you can begin to protect this data by running the local snap job in the new application set. After the promotion, the clone replica is deleted and the snaps are copied to the new application set. These snaps are now considered to be local snaps, which means that future restores of them are destructive (newer snap replicas are deleted).

After a clone replica is promoted, the original job that created it will need to find a new target clone LUN when that job is run. When you are finished with promoted clone replica storage, you can make it available for Celerra Replicator again by doing the following:

1. Unmask the promoted clone LUNs from the host using the Celerra Manager user interface.

2. Use the server_iscsi command on the Control Station to make the LUN read-only again.

For example, the following command would make LUN 15 of the lrmh178_target on the server_3 Data Mover read-only:

server_iscsi server_3 -lun -modify 15 -target lrmh178_target -readonly yes


Troubleshooting Replication Manager with Celerra arrays

This section offers tips on troubleshooting Replication Manager with Celerra arrays.

Checking for stranded snapshots on a Celerra array

In some cases, the Replication Manager database may contain invalid information, or snapshots may become stranded on your Celerra array. These situations may occur if:

Replication Manager fails during an operation, or is uninstalled.

You restore an old Replication Manager database that does not reference more recent snapshots.

You can use Replication Manager’s Check feature to check for obsolete snaps and Replication Manager database inconsistency.

When you run the Check command, Replication Manager compares all snaps on the array that contain Replication Manager as their label with all snaps in the Replication Manager repository. When it finds a mismatch, it lists the obsolete snapshots and recommends that you run the Resource Cleanup command.

To perform a Check on a specific IQN, verify that at least one valid replica has devices that reside on that IQN.

To run the Check command:

1. In Replication Manager, right-click a specific IQN from the Storage Services folder and select Check. The Resource Cleanup Check dialog box appears.

2. Examine the progress details for potentially obsolete snapshots. Click OK.

Replication Manager does not actually remove these obsolete snapshots until you select the Resource Cleanup command.

Cleaning stranded snapshots from the Celerra array

Once you have verified that the stranded snapshots listed by the Check command are no longer needed, you can use the Resource Cleanup command to permanently remove these snapshots to free up space for reallocation.

To remove obsolete snapshots from your Celerra array:

1. In Replication Manager, right-click a specific IQN from the Storage Services folder and select Resource Cleanup.


2. When Replication Manager prompts you to confirm, click Yes if you are sure that these snapshots are no longer needed.

Note:

To perform a Resource Cleanup on a specific IQN, verify that at least one valid replica has devices that reside on that IQN.

Unable to mark the Celerra iSCSI session

If you see the following error message in the progress window when creating, mounting, or restoring a Celerra Replicator replica:

Replication Manager was unable to mark the Celerra iSCSI session...

Try restarting the Microsoft iSCSI Initiator service. If that does not resolve the error, reboot the hosts.

Expiring all replicas in a Celerra-based application set

If you expire all replicas in a Celerra-based application set, Replication Manager will perform a full establish the next time you run a job for this application set. This may take longer than expected.

Troubleshooting iSCSI LUN replication

You can query the EMC WebSupport database for problem information, obtain release notes, or report a Celerra technical problem to EMC on Powerlink, the EMC secure extranet site. The Celerra Problem Resolution Roadmap contains additional information about using Powerlink and resolving problems.

Consider the following when troubleshooting iSCSI LUN replication:

An “out of space” error occurs if the destination file system does not have enough free space to create the destination iSCSI LUN. You can either free up space on the file system or extend it.

An error occurs if an extended source iSCSI LUN is replicated but the destination cannot be extended to accommodate the replicated snapshot. Any active replication session is suspended until you destroy the session, free up space on the destination file system by deleting old snapshots, or extend the file system.

Troubleshooting Celerra network connections

If you are experiencing network connectivity issues between Replication Manager and Celerra, try the following to troubleshoot the issue.

From the production host:


◆ ping the production Celerra control station

◆ ping the iSCSI IP of the production Celerra data mover

From the production Celerra control station:

◆ ping the target Celerra hostname

◆ ping (server_ping) the IP address of the target Celerra Data Mover used for replication

You can have different IP addresses for iSCSI and for replication; just remember to enter the replication IP address in the Replication Manager Console when you configure the copy job.
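As a minimal sketch of the checks described above (the Data Mover name and addresses are hypothetical):

From the production host:

ping <production_celerra_control_station_ip>
ping <production_data_mover_iscsi_ip>

From the production Celerra Control Station:

$ ping <target_celerra_hostname>
$ server_ping server_2 172.24.173.10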


Chapter 7 Invista Setup

This chapter describes how to prepare the Invista environment to work with Replication Manager. It covers the following subjects:

Invista setup checklist ..................................................................... 266

Supported Invista switches ............................................................ 268

Invista hardware and software configurations............................ 269

Planning your configuration for clone replications .................... 272

Managing virtual frames ................................................................ 274

Replication Manager virtual volume surfacing issues ............... 276


Invista setup checklist

To set up the Invista instance for use with Replication Manager:

Verify that your environment has the minimum required storage hardware and that the hardware has a standard configuration. “Minimum required hardware and connectivity” on page 269 provides this information.

Set up the backend storage for the Invista instance. Refer to Chapter 2 of the EMC Invista Element Manager Administrator’s Guide for more information:

• Allocate the appropriate Symmetrix devices through both DAE ports (two for redundancy) that will be zoned to the Invista switch.

• Assign these Symmetrix devices to the nine Invista initiator Worldwide Names (WWNs). For information on finding these initiators, refer to “Assigning Symmetrix devices to Invista initiators” on page 270.

Confirm that your Replication Manager hosts are connected to the Invista instance through a SAN connection.

Zone the fibre switch appropriately (if applicable). The hosts must be able to access all storage they are using and the mount hosts must be able to access all storage arrays from which replica virtual volumes may be mounted.

Install all necessary software on each Replication Manager host. Also install the appropriate firmware and software on the switch itself. “Invista hardware and software configurations” on page 269 provides this information.

On Solaris hosts, verify that there are enough entries in the sd.conf file to support all dynamic mounts of replica LUNs. “Update the sd.conf entries for dynamic mounts” on page 143 provides additional information.

Verify that the network ports between the network switch and the CLARiiON SPs are set to auto-negotiate.

Install Replication Manager Agent software on each production and mount host. Chapter 2, ”Installation,” provides more information.


Create a mount virtual frame for each mount host and make sure that virtual frame contains at least one virtual volume, and that the virtual volume is visible to the mount host. This virtual volume does not have to be dedicated or remain empty; you can use it for any purpose. However, if no virtual volumes are visible to the Replication Manager mount host, Replication Manager will not operate. A restart may be needed after this step.

For clones, create a virtual frame named EMC Replication Storage and populate it with free virtual volumes that you created in advance for Replication Manager to use for storing replicas. “Creating the EMC Replication Storage virtual frame” on page 274 describes this procedure.

Start the Replication Manager Console and connect to your Replication Manager Server. You must perform the following steps:

• Register all Replication Manager hosts.

• Run Discover Arrays on each host.

• Run Discover Storage for each discovered array.

The EMC Replication Manager Product Guide provides more information about the operation of the processes listed above.


Supported Invista switches

The EMC Support Matrix lists the switches that are supported for use with Invista software.

The EMC Invista Element Manager Administration Guide provides specifics of the Invista switch support.

You can configure any Invista switch to perform the following tasks in conjunction with Replication Manager:

Create replicas using Invista.

Mount those replicas to the same or alternate hosts.

Restore information from those replicas.


Invista hardware and software configurations

EMC Invista is a network-based storage virtualization solution. Invista combines EMC application software and hardware with Fibre Channel switches to deliver centralized storage services that span heterogeneous storage systems.

The components of Invista can be installed into new or existing storage area networks (SANs), and are compatible with a range of host environments and storage devices. The EMC Invista Interoperability Guide provides detailed information.

The EMC Replication Manager Support Matrix provides specific information about supported applications, operating systems, high-availability environments, volume managers, and other supported software.

Minimum required hardware and connectivity

The following storage and connectivity hardware components are required for any implementation of Replication Manager in an Invista environment:

One or more supported Invista instance

Replication Manager Server (on a Windows server)

Replication Manager Agent software (on hosts)

Storage area network (SAN) connectivity between the components listed above

Fibre connectivity between the Replication Manager Agents and Invista through switched fibre connections

Minimum required software for Invista hosts

The following software must be installed on the Invista host:

◆ admreplicate

Invista Java CLI

Java 2 Platform Standard Edition (J2SE) version 1.5_11


Minimum software for hosts

The following additional software products must be installed on all Replication Manager hosts (production and mount) in order to run in an Invista environment:

PowerPath (highly recommended)

If you have more than one path to the Invista environment, you must install PowerPath to manage those multiple paths. Install the minimum version specified in the EMC Replication Manager Support Matrix.

Solutions Enabler (in a UNIX environment, the multithreaded version must be installed).

For Invista support, Solutions Enabler software must be installed on all Replication Manager production and mount hosts. Be sure to follow all required installation and configuration steps listed in the Solutions Enabler documentation.

When installing Solutions Enabler on Windows Server 2003, do not install VSS Provider or VDS Provider.

Assigning Symmetrix devices to Invista initiators

You must assign Symmetrix devices to Invista host initiators. In order to do so, you must add host initiator records for Invista virtual initiators:

1. Determine the WWNs of the virtual initiators in the Invista instance. Display these through the Invista Element Manager (using either the Java CLI or GUI).

2. Open the Invista Element Manager GUI, and expand the component tree. Highlight Virtual Initiator 1. The WWN appears on the right pane. Select each subsequent Virtual Initiator (one at a time) to display the WWNs.

The WWNs with the following format represent the initiators to which Symmetrix storage should be assigned:

50:00:XX:XX:XX:XX:XX:81 through 50:00:XX:XX:XX:XX:XX:89


Discovering Invista storage

Replication Manager performs a new storage discovery whenever it requires the latest storage information to perform a storage-related task. However, to ensure that the latest information is displayed when you view storage properties, you should manually discover storage.

To manually discover storage, right-click a storage array and select Discover Storage.

Replication Manager uses the previously established connections to access the array and discover the storage that is available for use by Replication Manager and the storage already in use by Replication Manager (if any). This process may take several minutes.

When discovering storage, Replication Manager provides updated information on associated EMC replication storage clone virtual volumes and snaps.

Note:

Invista Metavolumes are not supported for use with Replication Manager.


Planning your configuration for clone replications

To configure an Invista instance for use with Replication Manager:

1. Plan replica storage needs.

2. Manage virtual volumes according to your plan.

3. Create virtual frames and assign virtual volumes to the appropriate virtual frame.

What is a replica on the Invista instance?

Before we describe how to plan for replicas, it is important to understand what a replica is in the Invista context. Replicas are full copies of the production data.

Planning replica storage needs

Before you use Replication Manager with your Invista instance, you must plan your replica storage needs:

Analyze your existing storage usage to determine what storage you will need in the future.

Calculate the size of the existing or new virtual volumes you want to replicate.

Determine how many replicas of each virtual volume you will need.

EMC recommends that you create all your virtual volumes the same size to provide flexibility when selecting storage for a replica. Replica storage must match the logical size of the source virtual volume.

Replication Manager does not create virtual volumes automatically to support your replica needs. Therefore, as part of the setup, you must create all replica virtual volumes before you can use Replication Manager to create replicas. Remember that the logical size of the replica virtual volume must match the logical size of the source virtual volume.

For example, assume you have three databases stored on your Invista instance:

Database 1 (50 GB capacity)

Database 2 (70 GB capacity)

Database 3 (40 GB capacity)


To be able to maintain five simultaneous replicas of each database on that array, you must create the following virtual volumes:

For Database 1, at least five virtual volumes exactly 50 GB (logical size)

For Database 2, at least five virtual volumes exactly 70 GB (logical size)

For Database 3, at least five virtual volumes exactly 40 GB (logical size)

Figure 33 on page 273 illustrates this configuration.

Figure 33 Replica virtual volumes must match the production virtual volume size (production LUNs of 50 GB, 70 GB, and 40 GB, each backed by five replica LUNs of matching size)


Managing virtual frames

Creating the EMC Replication Storage virtual frame

Before you start to use Replication Manager, you must create and populate a virtual frame that tracks all the virtual volumes that Replication Manager is allowed to use for clone replicas of production data. This virtual frame must be named EMC Replication Storage.

To create and populate the virtual frame EMC Replication Storage:

1. Open the Invista Element Manager.

2. Right-click Virtual Frames and select Create Virtual Frame. The Create Virtual Frame dialog box appears.

3. Assign the name EMC Replication Storage to the Virtual Frame, and click OK to save your changes.


Note:

A volume can be a member of only one virtual frame at a time.

4. Expand Storage > Virtual Frames.

5. Right-click the EMC Replication Storage virtual frame that you created in steps 1–3 and select Select Virtual Volumes.

6. Click the Virtual Volumes tab (if not selected).

7. In Show Virtual Volumes, select which volumes you want to display in the Available Virtual Volumes box.

8. Select the volumes in the Available Virtual Volumes list that you want to add to this virtual frame, and click the right arrow to move them into the Assigned Virtual Volumes list.

9. Click OK to assign the volumes to the frame.

Creating virtual frames for production hosts

If you are building a new system, you will need to define which hosts have access to which sets of data. You use virtual frames to define this. In most systems, each production host has access to its own set of production data and the virtual volumes that hold that data reside in that host’s virtual frame. By connecting the host to that virtual frame, you effectively give that host access to that data on the Invista instance. Refer to the steps in the previous section for information on how to create a virtual frame. Replication Manager requires that each client see only one virtual frame per Invista instance.


Creating virtual frames for mount hosts

Usually, only one host is attached to each virtual frame, except in the case of some clustered implementations.

Just as you need to create virtual frames for production hosts (if they do not already exist), you must create virtual frames for each mount host and add at least one virtual volume to each of these virtual frames. Replication Manager automatically populates these virtual frames with the appropriate virtual volumes after the replicas have been created and before they are mounted to the mount host.

Making volumes in a Virtual Frame visible to a host

Additional steps are necessary to make the volumes in the virtual frames that you have created visible to the appropriate production and mount hosts. To do that, refer to the section entitled "Making volumes in a Virtual Frame visible to a host" in Chapter 10 of the Invista Element Manager Administrator's Guide.

You may need to reboot the host after this step in order to make the virtual volume visible.

Exposing selected storage to clients

At this point, your storage environment is configured and you can perform the remaining steps (in the Replication Manager Console) to expose the arrays and the storage to Replication Manager. These last configuration steps are as follows:

1. Register all hosts on the Replication Manager Server.

2. Run Discover Arrays from each host.

3. Run Discover Storage on each discovered array.

"Managing replica storage" on page 97 describes how to complete these last steps.


Replication Manager virtual volume surfacing issues

The term virtual volume surfacing refers to Replication Manager’s ability to dynamically present virtual volumes to a mount host during a mount operation. A virtual volume surfacing issue arises when a mount host is unable to access a volume that contains replica data.

Note: Virtual volume surfacing issues occur during mount operations only; they do not affect replication or restore operations.

Affected environments

Virtual volume surfacing issues affect Replication Manager clients attached to Invista instances. Replication Manager systems that use only Symmetrix storage arrays are not affected since Replication Manager is not required to present BCVs to the mount host dynamically.

The following sections describe techniques for preventing virtual volume surfacing issues in these environments.

Preventing virtual volume surfacing issues in Windows

You can ensure reliable virtual volume surfacing during mount operations on Windows machines by doing the following:

Install admreplicate on all Replication Manager hosts that connect to the Invista Instance.

Be sure all HBAs and their associated software are supported for the instance of Invista that you are using. The EMC Support Matrix lists supported HBAs.

To verify that you have the correct HBA driver installed, open the Windows Device Manager. The driver for the Emulex fibre controller shows the version, for example 5.2.20.12 (which is another designation for 2.20a12), and the digital signer should be identified as Microsoft Windows Hardware Compatibility Publisher.

On Windows Server 2003, use Storport drivers.

If you have more than one path to the Invista instance, you must install PowerPath on your Windows systems to manage those multiple paths.


Verify that Java is at Java 2 Platform Standard Edition (J2SE) version 1.5_11 and that the ERM_JAVA_PATH environment variable is set properly.
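A quick way to confirm both settings on a Windows host is sketched below; the commands themselves are standard, but the value returned for ERM_JAVA_PATH is only an illustration of what a correctly set variable might look like in your environment:

java -version
echo %ERM_JAVA_PATH%

The first command reports the installed Java version; the second echoes the current value of the ERM_JAVA_PATH environment variable so that you can confirm it points at the expected J2SE installation.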


8  RecoverPoint Setup

This chapter describes how to prepare RecoverPoint to work with Replication Manager. It covers the following subjects:

RecoverPoint setup checklist.......................................................... 280

RecoverPoint hardware and software configurations ................ 281

Components of Replication Manager and RecoverPoint........... 282

Synchronizing time on hosts .......................................................... 284

Replication Manager configuration steps for RecoverPoint...... 285


RecoverPoint setup checklist

To set up RecoverPoint for use with Replication Manager:

Install and configure RecoverPoint according to the RecoverPoint documentation.

Ensure that the splitters for all mount hosts are attached to the RecoverPoint target volumes they are going to use.

Synchronize time on all systems.

“Synchronizing time on hosts” on page 284 describes this step.

Identify all volumes belonging to those RecoverPoint consistency groups that are to be covered by Replication Manager. Replication Manager requires that all volumes in an application set map to the volumes in a consistency group.

Failover preparation: Replication Manager requires that RecoverPoint CLR consistency groups have both local and remote copies, even in a failover situation. This may require RecoverPoint administrator configuration steps after failover to configure a local copy on the remote site; Replication Manager CLR jobs will fail otherwise.

If underlying storage is on a CLARiiON array, Replication Manager requires one of the following:

• SnapView enabler installed on the CLARiiON, or

• A storage group named EMC Replication Storage, populated with one non-target LUN
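If you take the storage group approach, the group can be created either in Navisphere Manager or with the Navisphere Secure CLI. The following is a hedged sketch only: the SP address, host LUN number (hlu), and array LUN number (alu) are placeholders for values from your own CLARiiON, and exact syntax can vary by naviseccli release.

naviseccli -h 10.0.0.1 storagegroup -create -gname "EMC Replication Storage"
naviseccli -h 10.0.0.1 storagegroup -addhlu -gname "EMC Replication Storage" -hlu 0 -alu 25

The first command creates the storage group with the required name; the second populates it with a single non-target LUN (ALU 25 in this sketch).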

Chapter 9, "VMware Setup," describes requirements related to Replication Manager support of RecoverPoint in a VMware environment.


RecoverPoint hardware and software configurations

The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com provides specific information about supported applications, operating systems, high-availability environments, and volume managers related to its support of RecoverPoint.


Components of Replication Manager and RecoverPoint

Figure 34 on page 282 illustrates the major components of Replication Manager and RecoverPoint. The components are described in this section.

Figure 34  Components of Replication Manager with RecoverPoint (the RM Server, application servers running the RM client at the local and remote sites, SAN-attached RecoverPoint appliances with CDP and CRR journals, and a WAN link for continuous remote replication)

Supported replication options

Replication Manager supports the following RecoverPoint replication options:

Continuous Data Protection

In Continuous Data Protection (CDP), RecoverPoint replicates to another storage array at the same site. In a RecoverPoint installation that is used exclusively for CDP, you install RPAs at only one site and do not specify a WAN interface.

Continuous Remote Replication

In Continuous Remote Replication (CRR), RecoverPoint replicates over a WAN to a remote site. There is no limit to the replication distance.

Concurrent Local and Remote

In Concurrent Local and Remote Replication (CLR), RecoverPoint protects production LUNs locally using CDP and remotely using CRR. Both copies have different protection windows and RPO policies.

RecoverPoint Appliance

The RecoverPoint Appliance (RPA) manages all aspects of replication. RecoverPoint supplies continuous data protection services for applications operating on production hosts and using production storage. The RPA receives all writes to production storage from the RecoverPoint splitter and coordinates the journaling of these updates to the journal storage.

The RPA is also the component that manages a replica of your production storage for the point in time that you specify. Your application can use the replica as a production LUN, providing near-instantaneous recovery.

The RPA runs on a dedicated server that is external to the Replication Manager server and to the production and recovery hosts.


Synchronizing time on hosts

Replication Manager support for RecoverPoint requires system time to be synchronized on all systems. Follow the steps in your operating system documentation to configure a production host or mount host to synchronize with a time server.
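As an illustration only (the time server name is a placeholder, and your site may already standardize on a different NTP or domain time source), the following commands show one typical way to point a Windows host at a time server:

w32tm /config /manualpeerlist:ntp.example.com /syncfromflags:manual /update
w32tm /resync

On a Linux host you might add a line such as "server ntp.example.com iburst" to the NTP daemon configuration and restart the service, or run ntpdate -u ntp.example.com for a one-time adjustment.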


Replication Manager configuration steps for RecoverPoint

After RecoverPoint setup is complete, perform the following steps in Replication Manager:

1. Register production hosts and mount hosts. During host registration, you will enable RecoverPoint support for the hosts, and identify an RPA to be used.

2. Right-click on Storage Services in the Replication Manager Console window, and select Add Storage to discover the RPA.

If the configuration uses CLARiiON storage, you must also select the associated CLARiiON array in order to provide CLARiiON array credentials.

In a VMware virtual disk configuration, a CLARiiON array containing target storage must be visible to at least one registered host.

3. When the RPA has been added to the Storage Services list, create an application set. Devices found in a RecoverPoint consistency group must match the devices that you specify in the corresponding application set and job options. If the devices do not match, an error is generated when the job is run.

4. Create jobs.


9  VMware Setup

This chapter describes how to prepare VMware environments to work with Replication Manager. It covers the following subjects:

Supported VMware configurations and prerequisites ............... 288

Replicating a VMware VMFS ......................................................... 289

Replicating a VMware NFS Datastore .......................................... 298

Replicating a VMware virtual disk................................................ 304

Replicating an RDM or iSCSI initiator LUN on CLARiiON...... 314

Replicating an RDM on Invista Virtual Volumes ........................ 320

Replicating VMware virtual disks on Invista virtual volumes . 322

Replicating a VMware RDM on Symmetrix ................................ 325

Replicating a VMware RDM with RecoverPoint......................... 326

Replicating a VMware RDM on Celerra....................................... 326


Supported VMware configurations and prerequisites

Replication Manager can operate (create replicas, mount replicas, and restore replicas) in the following VMware configurations:

VMware VMFS and NFS Datastore

VMware virtual disk

VMware with RDM disks and Microsoft initiator discovered disks

Note: When creating application sets, do not mix VMFS environments with RDM environments in the same application set; doing so causes the replication to fail.

This chapter describes the supported configurations and the configuration prerequisites required prior to integrating VMware with Replication Manager in each of these supported environments.

Table 23 on page 288 summarizes Replication Manager support for VMware configurations. For details beyond the scope of the table, refer to the rest of this chapter and to the EMC Replication Manager Product Guide and EMC Replication Manager Support Matrix.

Table 23  Summary of Replication Manager support for VMware configurations

The table maps the VMFS, RDM, MS iSCSI initiator discovered, VMDK, and NFS configurations against the supported replication technologies (CLARiiON SnapView clone, SnapView snap, and SAN Copy; Symmetrix TimeFinder/Clone, TimeFinder/Mirror, TimeFinder/Snap, SRDF/A, and SRDF/S; Celerra iSCSI and NFS with SnapSure and Replicator; Invista clone; and RecoverPoint), indicating for each combination whether supported Windows versions (W), supported Linux versions (L), or both apply.

SV=SnapView, MV=MirrorView, TF/C, M, S=TimeFinder Clone, Mirror, Snap, RP=RecoverPoint

W=Supported Windows versions, L=Supported Linux versions

VMFS requires a proxy host on Windows; NFS requires a proxy host on Linux


Replicating a VMware VMFS

Replication Manager can replicate, mount, and restore a VMFS that resides on an ESX server managed by a VMware VirtualCenter and is attached to CLARiiON, Celerra, RecoverPoint, or Symmetrix. Figure 35 on page 291 illustrates this configuration.

Since all operations are performed through the VMware VirtualCenter, neither Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the VMFS resides. Operations are sent from a proxy host that is either a physical host or a separate virtual host. The Replication Manager proxy host can be the same physical or virtual host that serves as the Replication Manager Server.

General configuration prerequisites

The following configuration prerequisites are required to integrate Replication Manager with VMware VMFS replication:

VMware VirtualCenter must be used in the environment.

The Replication Manager proxy agent (a Windows virtual machine or physical host with Replication Manager client installed) must be able to connect to the VMware VirtualCenter.

Register the Replication Manager proxy host with the Replication Manager Server and provide the credentials for the VMware VirtualCenter that manages that proxy host.

The Replication Manager proxy host communicates with the VMware VirtualCenter over port 443. Make sure that port is open between the Replication Manager proxy host and the VMware VirtualCenter.
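One simple way to spot-check this connectivity from the proxy host, shown here only as a sketch with a placeholder hostname, is to open a TCP connection to port 443 on the VirtualCenter system:

telnet virtualcenter.example.com 443

If the connection is refused or times out, review the firewall rules between the proxy host and the VirtualCenter before troubleshooting Replication Manager itself.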

Replication Manager supports VMware's use of VSS with VM snapshots when ESX 3.5 Update 2 or later is installed and VMware Tools are present on the virtual machine on the VMFS you are replicating. Refer to VMware documentation for use of the VSS-related characteristics in the Replication Manager replica and contact VMware regarding considerations related to VSS in this configuration.

In CLARiiON environments, the Replication Manager proxy agent must be able to access at least one placeholder LUN from the CLARiiON where the VMFS data resides.


In Symmetrix environments, the Replication Manager proxy agent must be zoned to at least four six-cylinder gatekeeper devices on the Symmetrix where the VMFS data resides.

LVM Resignature must be enabled in the ESX Server. "Enabling LVM Resignature" on page 297 describes how to enable LVM Resignature.

Whenever you create a new VMFS through the VMware Virtual Infrastructure Client, rescan the VMFS and refresh storage before you try to create a Replication Manager application set.

In Celerra environments, the Replication Manager proxy agent must have IP access to the Celerra Datamovers. Ping the datamover IP addresses to verify this access and open firewall ports if necessary to ensure that access.

In Celerra environments, Replication Manager uses Celerra SnapSure to create local replicas of VMware VMFS data and Celerra Replicator to create remote replicas of VMware VMFS data.

In Celerra environments, VMFS data may reside on more than one LUN, however all LUNs must be from the same Celerra and share the same target IQN.

In Celerra environments, Microsoft iSCSI Initiator can be present in the Celerra environment but is optional.


Figure 35  VMware configuration with Replication Manager proxy server (virtual machines running applications on ESX servers with VMFS datastores, a Replication Manager agent proxy on either a physical host or a virtual machine, and the VMware VirtualCenter with its VI Client)

RecoverPoint configuration prerequisites

The following configuration prerequisites are specific to VMFS support in RecoverPoint environments:

RecoverPoint and VMware must be configured according to those products' documentation.

The Replication Manager proxy agent must be able to connect to the RPA through the network.

Symmetrix configuration prerequisites

For VMFS support in Symmetrix environments, the proxy host must have gatekeepers from the Symmetrix visible as a physical RDM. A minimum of four six-cylinder gatekeepers is recommended.


Considerations when replicating VMFS environments

When Replication Manager is introduced into a VMware environment for the purpose of creating replicas of VMFS filesystems, the following considerations apply:

All VMware specific operations occur through the VMware VirtualCenter.

Replication Manager can be configured to require VirtualCenter Server login credentials in order to allow replication of a certain VMFS for security purposes.

Unless you instruct Replication Manager to omit this feature for one or more virtual machines, Replication Manager takes a VMware Snapshot for each virtual machine that is online and residing on the VMFS just prior to replication in order to ensure operating system consistency of the resulting replica. The procedure described in "Disabling the virtual machine snapshot feature" on page 293 instructs Replication Manager to omit this step.

Replication Manager supports VMware's use of VSS with VM snapshots when ESX 3.5 Update 2 or later is installed and VMware Tools are present on the virtual machine on the VMFS you are replicating. Refer to VMware documentation for use of the VSS-related characteristics in the Replication Manager replica and contact VMware regarding considerations related to VSS in this configuration.

In Celerra environments, Replication Manager uses Celerra Replicator technology to create remote replicas (by creating a copy job of a local VMFS replica). These replicas are actually snapshots that represent a crash-consistent replica of the entire VMFS.

Federated application sets are not supported in a VMFS environment.

If virtual machines in the datastore have RDMs or iSCSI LUNs visible to them, the resulting replica will not contain those LUNs.

If the virtual machine has virtual disks other than the boot drive, it is possible to capture those as well, provided the underlying datastore is a part of the same application set.


Disabling the virtual machine snapshot feature

By default, Replication Manager creates a VMware snapshot of each virtual machine that is part of a VMFS or NFS Datastore replica prior to creating the hardware-based replica. When Replication Manager performs the snapshot, the replica maintains operating system consistency. If the snapshot is disabled, the replica will be in a crash-consistent state. Performing the snapshot does impact the speed and performance of the replication process, especially if there is a large number of virtual machines in the VMFS or NFS Datastore for which the replica is being created. Therefore, an administrator may choose to disable the snapshot by completing the following steps:

Disabling VMware Snapshots at the datastore level

To disable VMware snapshots at the datastore level:

1. Access the server where the Replication Manager agent used as the VMware Proxy resides.

2. Navigate to the /rm/client/bin folder and edit the file named VMSnapDeny.cfg in that folder.

3. Add lines in the following format for each datastore for which you want to disable VMware snapshots.

<name_of_datastore>::

where <name_of_datastore> is the name of the datastore for which you want to omit the VMware snapshots.
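For example (the datastore name is hypothetical), the following single entry suppresses VMware snapshots for every virtual machine on a datastore named sales-datastore:

sales-datastore::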

Disabling VMware Snapshots at the virtual machine level

To disable VMware snapshots at the virtual machine level:

1. Access the server where the Replication Manager agent used as the VMware Proxy resides.

2. Navigate to the /rm/client/bin folder and edit the file named VMSnapDeny.cfg in that folder.

3. Add lines in the following format for each virtual machine for which you want to disable VMware snapshots.

<name_of_datastore>::<name_of_vm>

where <name_of_datastore> is the name of the datastore for which you want to omit the VMware snapshots.


<name_of_vm> is the name of the virtual machine as it is defined in the VI Client.
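For example (both names are hypothetical), the following entry suppresses the VMware snapshot only for a virtual machine named SQL-VM-01 that resides on a datastore named finance-datastore:

finance-datastore::SQL-VM-01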

Figure 36 on page 294 shows an example of how to create the VMSnapDeny.cfg file.

# pwd

/opt/emc/rm/client/bin

# cat VMSnapDeny.cfg

#

# Comments can be added if line starts with a #

#

#

# The next line prevents Replication Manager from

# taking a VMware Snapshot for any VM on the datastore

# named nfs-datastore-1.

# nfs-datastore-1::

#

# The next line prevents Replication Manager from

# taking a VMware Snapshot for the specific VMs

# listed.

# vmfs-datastore-2::lrmh179 (RM Server 1)

# vmfs-datastore-2::lrmh179 (RM Server 2)

#

#End of file

#

Figure 36  Creating a VMSnapDeny.cfg file


Considerations when mounting a VMFS replica

When you mount a VMFS replica to an alternate ESX Server, Replication Manager performs all tasks necessary to make the VMFS visible to the ESX Server. Once that is complete, further administration tasks such as restarting the virtual machines and the applications must be completed by scripts or manual intervention.

Note: VMFS replicas can be mounted to any ESX Server; technologies other than CLARiiON and Celerra require that the ESX Server see the replication devices. When registering the ESX server and VM hosts in the VirtualCenter and the array, you need to use the fully qualified domain name, not the IP address.

When you mount a VMFS, consider whether the deployment is temporary or permanent. The mount operations differ depending upon whether you intend to keep the VMFS mounted for an extended period or not.

Note: Permanent deployments are not supported in Celerra environments. Also, EMC does not recommend using VMotion technology with mounted Celerra replicas.

Temporary deployment of a VMFS replica

If you are mounting a VMFS replica for short-term operations such as backup, data mining, or a manual restore of a single virtual machine, follow these steps to mount and unmount the replica:

1. Mount the replica using a Replication Manager job or mount on demand operation.

2. Perform one of the sets of steps below, depending upon your reasons for creating the replica:

Steps when replica is for data mining or testing operations

a. Add the virtual machine to the VirtualCenter inventory using the Virtual Center console.

b. Power on the virtual machines within the mounted VMFS.

c. Perform data mining or testing operations.

d. Power off the virtual machines within the mounted VMFS.

e. Remove the virtual machines from the VirtualCenter inventory.


Step when replica is for offline backup only

Perform a backup of the virtual machine files in the replica to offline storage.

Steps when performing a single virtual machine restore

a. Use the VirtualCenter GUI to browse your mounted datastore.

b. Copy and paste the virtual machine to the production datastore.

3. Unmount the replica through Replication Manager.

Permanent deployment of a CLARiiON VMFS replica

If you are planning to perform a permanent or extended redeployment of a CLARiiON VMFS replica and you do not wish to keep that replica under the control of Replication Manager, follow these steps:

1. Mount the replica using a Replication Manager job or mount on demand operation. During the mount, select the Maintain LUN visibility after unmount option.

2. Once the replica mounts, ensure that all virtual machines are shut down and offline.

3. Power off the virtual machines and remove them from the VirtualCenter inventory. If you do not perform this step, the Replication Manager unmount operation will fail.

4. Unmount the replica (which should leave the LUNs visible on the mount location, due to the option you selected during mount).

5. Expire the replica in Replication Manager.

6. Free the storage devices on which this replica resides from Replication Manager control. For information on how to free storage devices from Replication Manager control, refer to "Free storage devices" on page 112.

7. Rediscover the storage array.

Permanent deployment of a Symmetrix VMFS replica

If you are planning to perform a permanent or extended redeployment of a Symmetrix VMFS replica and you do not wish to keep that replica under the control of Replication Manager, follow these steps:

1. Delete the replica using the Replication Manager Console.


2. Remove the replication storage using the Replication Manager Console.

3. Manually ready the device using symcli.

Enabling LVM Resignature

LVM Resignature allows VMware to write a new signature to the LUNs when necessary. This switch must be enabled for Replication Manager so that VMFS can be made visible on the replicated LUNs to the ESX server. This setting should also be enabled on the production ESX server if you plan to restore to that ESX server at any time in the future. For details on how to set this switch, refer to your VMware documentation.
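As a hedged illustration only (consult your VMware documentation for the authoritative procedure for your ESX release), on ESX 3.x hosts this switch is commonly exposed as the LVM.EnableResignature advanced setting, which can be changed from the VI Client (host Configuration tab > Advanced Settings > LVM) or from the service console:

esxcfg-advcfg -s 1 /LVM/EnableResignature

Setting the value back to 0 disables resignaturing again once the mount or restore work is complete.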


Replicating a VMware NFS Datastore

Replication Manager can replicate, mount, and restore an NFS Datastore that resides on an ESX server managed by a VMware VirtualCenter and is attached to Celerra. Figure 37 on page 300 illustrates this configuration.

Since all operations are performed through the VMware VirtualCenter, neither Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the NFS Datastore resides. Operations are sent from a proxy host that is either a physical host or a separate virtual host.

General configuration prerequisites

The following configuration prerequisites are required to integrate Replication Manager with VMware NFS Datastore replication:

VMware VirtualCenter must be used in the environment.

The Replication Manager proxy agent (a Linux virtual machine or physical host with Replication Manager client installed) must have IP connectivity to the VMware VirtualCenter and the Celerra control station. Ping the Celerra control station IP addresses to verify this access and open firewall ports if necessary to ensure that access.

DNS should be configured on the Replication Manager hosts. For NFS operations, the hosts must be able to resolve hostnames to corresponding IP addresses. Also, provide fully qualified host names when configuring replications or mounts.
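A quick sanity check, shown here only as a sketch with placeholder names, is to confirm forward resolution from the Linux proxy host for each system involved:

nslookup rmserver.example.com
nslookup mounthost.example.com
nslookup celerra-cs0.example.com

Each lookup should return the IP address that the other components use to reach that host; if it does not, correct DNS (or the host's resolver configuration) before running Replication Manager jobs.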

Register the Replication Manager proxy host with the Replication Manager Server and provide the credentials for the VMware VirtualCenter.

The Linux proxy host should be able to resolve the addresses of the Replication Manager server and mount host and the Celerra control station using DNS.

The Replication Manager proxy host communicates with the VMware VirtualCenter over port 443. Make sure that port is open between the Replication Manager proxy host and the VMware VirtualCenter.

Replication Manager supports VMware's use of VSS with VM snapshots when ESX 3.5 Update 2 or later is installed and VMware Tools are present on the virtual machine on the NFS Datastore you are replicating. Refer to VMware documentation for use of the VSS-related characteristics in the Replication Manager replica and contact VMware regarding considerations related to VSS in this configuration.

LVM Resignature need not be enabled in the ESX Server for NFS datastore, only for VMware VMFS environments.

The administrator must manually configure and discover all Celerra control stations within Replication Manager.

Replication Manager uses Celerra SnapSure to create local replicas of VMware NFS Datastores and Celerra Replicator to create remote replicas of VMware NFS Datastores.

For remote replication, you must manually create a Celerra Replicator session for the Celerra NFS file system that corresponds to the NFS Datastore being replicated; each NFS file system replicated to a remote Celerra requires its own unique Replicator session. The session can be set up for manual or automatic sync.

Note the following examples of entries in the /etc/hosts file on the NFS proxy host for the loopback address.

The following entry is valid:

127.0.0.1 localhost.localdomain localhost

The following entry, containing the hostname for the loopback address, is invalid and can cause problems when you try to run a

Replication Manager job:

127.0.0.1 localhost.localdomain localhost ProxyHostName


Figure 37  VMware NFS Datastore configuration with Replication Manager proxy server (virtual machines running applications on ESX servers with NFS Datastores, a Linux Replication Manager agent proxy on either a physical host or a virtual machine, and the VMware VirtualCenter with its VI Client)


Considerations when replicating NFS Datastore environments

When Replication Manager is introduced into a VMware environment for the purpose of creating replicas of NFS datastores, the following considerations apply:

All VMware specific operations occur through the VMware VirtualCenter.

Replication Manager can be configured to require VirtualCenter Server login credentials in order to allow replication of certain NFS Datastores for security purposes.

Unless you instruct Replication Manager to omit this feature for one or more virtual machines, Replication Manager takes VMware Snapshots for all virtual machines that are online and residing on the NFS Datastore just prior to replication in order to ensure operating system consistency of the resulting replica. The procedure described in "Disabling the virtual machine snapshot feature" on page 293 instructs Replication Manager to omit this step.

Replication Manager supports VMware's use of VSS with VM snapshots when ESX 3.5 Update 2 or later is installed and VMware Tools are present on the virtual machine on the NFS Datastore you are replicating. Refer to VMware documentation for use of the VSS-related characteristics in the Replication Manager replica and contact VMware regarding considerations related to VSS in this configuration.

Replication Manager uses Celerra Replicator technology to create remote replicas. These replicas are actually snapshots that represent a crash-consistent replica of the entire NFS Datastore.

Federated and/or composite application sets are not supported in an NFS Datastore environment.

Considerations when mounting an NFS Datastore replica

When you mount an NFS Datastore replica to an alternate ESX Server, Replication Manager performs all tasks necessary to make the NFS Datastore visible to the ESX Server. Once that is complete, further administration tasks such as restarting the virtual machines and the applications must be completed by scripts or manual intervention.

Note: NFS Datastore replicas can be mounted to any ESX Server. When registering the ESX server and VM hosts in the VirtualCenter and the array, you need to use the fully qualified domain name, not the IP address.

It is possible to mount an NFS Datastore replica to an ESX Server as an NFS Datastore, or to mount directly to a Linux host as an NFS filesystem. When you mount an NFS Datastore, consider the deployment as a temporary mount operation.

Note: Permanent deployments are not supported in Celerra environments. Also, EMC does not recommend using VMotion technology with mounted Celerra replicas.

Temporary deployment of an NFS Datastore replica

If you are mounting an NFS Datastore replica for short-term operations such as backup, data mining, or a manual restore of a single virtual machine, follow these steps to mount and unmount the replica:

1. Mount the replica on an ESX Server using a Replication Manager job or mount on demand operation. For a short term operation, mount the replica read-write.

2. Perform one of the sets of steps below, depending upon your reasons for creating the replica:

Steps when replica is for data mining or testing operations

a. Add the virtual machine to the VirtualCenter inventory using the Virtual Center console.

b. Power on the virtual machines within the mounted NFS Datastore.

c. Perform data mining or testing operations.

d. Power off the virtual machines within the mounted NFS Datastore.

e. Remove the virtual machines from the VirtualCenter inventory.

Step when replica is for offline backup only

Perform a backup of the virtual machine files in the replica to offline storage.

Steps when performing a single virtual machine restore

a. Use the VirtualCenter GUI to browse your mounted datastore.

b. Copy and paste the virtual machine to the production datastore.


3. Unmount the replica through Replication Manager. (Remove any deployed virtual machines from the VirtualCenter inventory before unmount.)


Replicating a VMware virtual disk

The Replication Manager Agent can also replicate, mount, and restore application data that resides on a VMware virtual disk. Figure 38 on page 304 through Figure 42 on page 308 illustrate the valid configurations. Use this replication method to create application-consistent replicas of data on the virtual disk.

Figure 38  VMware support for virtual disk replication on CLARiiON (virtual machines with no Naviagent and no PowerPath, virtual disks on a VMFS on the ESX server, attached to CLARiiON iSCSI or CLARiiON Fibre Channel storage)


Figure 39  VMware support for virtual disk replication on Celerra (virtual machines with no Naviagent and no PowerPath, virtual disks on a VMFS on the ESX server, attached to Celerra iSCSI storage)


Figure 40  VMware support for virtual disk replication on Symmetrix


Figure 41  VMware support for virtual disk replication on RecoverPoint (virtual machines with no Naviagent and no PowerPath, virtual disks on a VMFS on the ESX server, attached to RecoverPoint on CLARiiON iSCSI or to RecoverPoint over Fibre Channel on CLARiiON or Symmetrix)


Figure 42  Replication Manager VMware virtual disk support (virtual machines, including one running the RM agent, on ESX servers; each replicated virtual disk is the only virtual disk on its VMFS and is backed by at most one LUN on the storage array; all operations are managed through the VirtualCenter management interface)

General configuration prerequisites

The following configuration prerequisites are required for integrating Replication Manager into a virtual disk environment, regardless of the storage technology:

Virtual disk support by Replication Manager requires that each virtual disk be the only virtual disk in its datastore (VMFS), and only one physical LUN (Celerra, Symmetrix, or CLARiiON) may be assigned to the datastore.


Note: Replication Manager can replicate configurations with multiple virtual disks per datastore (VMFS), but mount and restore will fail in these configurations.

VMware Tools must be installed in this environment. "Installing VMware Tools on the virtual machine" on page 312 describes how to install VMware Tools.

Register the production and mount virtual machines with the Replication Manager Server and provide the credentials for the VMware VirtualCenter that manages that virtual machine.

Replication Manager is tolerant of VMware VMotion of data stored on the production host only.

LVM Resignature must be enabled in this configuration. "Enabling LVM Resignature" on page 297 describes how to enable LVM Resignature.

Replication Manager does not support VSS replicas in a virtual disk environment, so Microsoft Exchange and SharePoint are not supported in this configuration.

Symmetrix configuration prerequisites

The following additional prerequisites are required for Replication Manager support of Symmetrix on virtual disks:

Replication Manager can only replicate and mount virtual disks in an environment in which the virtual machine has been configured with four six-cylinder Symmetrix gatekeeper devices visible to it via RDM in physical compatibility mode. This applies to production and mount hosts.

In order to mount Symmetrix replicas, the target storage (where the replica resides) must be made visible to the ESX Server of the mount host.

Replication Manager can only replicate environments with one unique SCSI target across all SCSI controllers on the virtual machine. The SCSI target of the virtual disk being replicated must not be used on other SCSI controllers. Refer to Figure 43 on page 310 for a graphical representation of this concept.


CLARiiON configuration prerequisites

The following additional prerequisites are required for Replication Manager support of CLARiiON on virtual disks:

The Keep LUNs Visible option is not supported if you are mounting CLARiiON snap replicas of the VMware environment.

Replication Manager can only replicate environments with one unique SCSI target across all SCSI controllers on the virtual machine. The SCSI target of the virtual disk being replicated must not be used on other SCSI controllers. Refer to Figure 43 on page 310 for a graphical representation of this concept.

Figure 43  Unique target across controllers requirement

The number before the colon is the controller; the number after the colon is the target. The target must be unique and not used on more than one controller, as shown in the second configuration.

For SAN Copy over remote arrays, at least one of the following additional configuration prerequisites is required:

• At least one Replication Manager client, residing on a virtual machine, must have a virtual disk on a VMFS created using a LUN from the remote CLARiiON storage array.


• At least one Replication Manager client, residing on a virtual machine, must have an RDM device configured from the remote CLARiiON storage array.

LVM Resignature must be enabled in this configuration. "Enabling LVM Resignature" on page 297 describes how to enable LVM Resignature.

Celerra configuration prerequisites

The following additional prerequisites are required for Replication Manager support of Celerra on virtual disks:

Any VMFS created on a Celerra DART 5.5 iSCSI LUN is not supported for virtual disk replication.

Failover or promote of Celerra Replicator clone replicas is supported with VMware SRM installations only. Refer to Chapter 12, "Disaster Recovery," for more information.

Replication Manager can only replicate environments with one unique SCSI target across all SCSI controllers on the virtual machine. The SCSI target of the virtual disk being replicated must not be used on other SCSI controllers. Refer to Figure 43 on page 310 for a graphical representation of this concept.

RecoverPoint configuration prerequisites

The following are additional prerequisites for Replication Manager support of RecoverPoint on virtual disks:

RecoverPoint storage must be on a CLARiiON (Fibre Channel and Microsoft iSCSI connectivity supported) or Symmetrix array.

When registering a VM, register the following:

• VMware VirtualCenter credentials

• RecoverPoint RPA name and address

Array-based and switch-based splitters are supported.

Host-based splitters are not supported.

In order to mount RecoverPoint replicas, the target storage (where the replica resides) must be made visible to the ESX Server of the mount host.

For RecoverPoint support on Symmetrix, the virtual machine must be configured with four six-cylinder Symmetrix gatekeeper devices visible to it via RDM in physical compatibility mode.


Replication Manager requires one unique SCSI target across all SCSI controllers on the virtual machine. The SCSI target of the virtual disk being replicated must not be used on other SCSI controllers.

Virtual disk mount, unmount, and restore considerations

During mount operations in virtual disk environments, keep the following considerations in mind:

Production mount of virtual disk replicas is not supported.

Multiple replicas of the same virtual disk should not be mounted to the same mount host simultaneously.

The Maintain LUN visibility after unmount option instructs Replication Manager to keep LUNs visible to the ESX server, even after the virtual disk is removed from the virtual machine on unmount.

In CLARiiON environments, VMotion operations can be performed on the mounted virtual machine provided the LUNs are manually made visible to the appropriate ESX servers in advance and the Maintain LUN visibility after unmount option is selected.

A unique SCSI target must be free among all SCSI controllers on the virtual machine.

Installing VMware Tools on the virtual machine

VMware Tools are a set of useful VMware utilities that Replication Manager uses to perform its tasks. VMware Tools are a prerequisite for replicating virtual disks. Follow these steps to install VMware Tools:

1. Open the VMware Infrastructure Client (interface to the VMware VirtualCenter).

2. In the tree to the left, right-click the virtual machine where the Replication Manager Agent is installed.

3. Choose Install/Upgrade VMware Tools.

4. Log in to the virtual machine.

5. Open the CD ROM drive and run the setup program.

6. Follow the instructions on the screen to install the toolset.


Installing VMware Tools on a physical machine

For more information about installing VMware Tools on a physical machine, refer to VMware's documentation located at http://www.vmware.com.

Additional support information

The EMC Replication Manager Support Matrix on http://Powerlink.EMC.com provides specific information about supported applications, operating systems, high-availability environments, resource schedulers (DRS), volume managers, virtualization technologies, and other supported software.


Replicating an RDM or iSCSI initiator LUN on CLARiiON

This section describes Replication Manager support for replication and mount on VMware virtual machines running Windows that use Raw Device Mapping (RDM) or Microsoft iSCSI initiator discovered LUNs in a CLARiiON environment.

Note: Replication Manager only supports RDMs in physical compatibility mode. There is no support for RDMs in virtual mode.

Refer to the Replication Manager support information available from the E-Lab Interoperability Navigator on Powerlink for supported VMware and Windows versions.

Replication host attached to CLARiiON iSCSI system

When configuring a Windows virtual machine as a replication host attached to a CLARiiON iSCSI system, you can use either the VMware iSCSI Initiator on the ESX Server or the Microsoft iSCSI Initiator to present CLARiiON storage to the virtual machine:

If you use the VMware iSCSI Initiator, the device is made visible to the virtual machine using raw device mapping (RDM). Follow these guidelines when using VMware iSCSI Initiator:

• Do not install Naviagent on the virtual machine.

• Do not install PowerPath on the virtual machine.

• Do not install Microsoft iSCSI Initiator.

Follow these guidelines when using Microsoft iSCSI Initiator:

• Install Microsoft iSCSI Initiator.

• Install PowerPath (for use as a single NIC and SP failover only).

• Do not install Naviagent on the virtual machine.

• For easier management, you may want to make the virtual machine's name available by manually registering it with the CLARiiON array using Navisphere.

No special installation is required if you use the VMware iSCSI Initiator, which is installed by default on the ESX Server. Devices presented through the VMware iSCSI Initiator are made visible to the virtual machine through RDM.


Mount host attached to CLARiiON iSCSI system

When configuring a Windows virtual machine as a mount host, you can use either the Microsoft iSCSI Initiator or the VMware iSCSI Initiator. The requirements are the same as those listed in the replication host section above.

Note: You cannot perform dynamic mounts if you are using the VMware iSCSI Initiator on the mount host.

Figure 44 on page 315 shows supported configurations.

Figure 44  VMware support with CLARiiON iSCSI system (supported replication hosts: a physical machine with Naviagent, PowerPath, and the MS iSCSI Initiator; a virtual machine using RDM with no Naviagent and no PowerPath; and a virtual machine with the MS iSCSI Initiator and PowerPath but no Naviagent. Supported mount hosts: the same three configurations, with static mount only for the RDM virtual machine.)

1 - When using RDM on a replication host, manually register the virtual machine on the CLARiiON.

2 - PowerPath is supported as single NIC and SP failover mode only.

3 - Manually place replication LUNs into the ESX storage group and then make the LUN visible to the virtual machine as RDM prior to replication.


Replication host attached to CLARiiON Fibre Channel system

Devices on a CLARiiON Fibre Channel system are presented to the replication host virtual machine through RDM. When configuring a Windows virtual machine as a replication host attached to a CLARiiON Fibre Channel system, follow these guidelines:

Do not install Naviagent on the virtual machine.

Do not install PowerPath.

Mount host attached to CLARiiON Fibre Channel system

Replication Manager supports mounting to a virtual or physical machine on a CLARiiON Fibre Channel system.

Figure 45 on page 316 illustrates supported configurations with a CLARiiON Fibre Channel system.

Figure 45  VMware support with CLARiiON Fibre Channel system (supported replication hosts: a physical machine with Naviagent and PowerPath, or a virtual machine using RDM with no Naviagent and no PowerPath; supported mount hosts: the same configurations, with static mount only for the RDM virtual machine)

Manually place replication LUN into the ESX storage group and then make the LUN visible to the virtual machine as RDM prior to replication.

VMware replication and mount options for combined CLARiiON arrays

When a CLARiiON array has both Fibre Channel and iSCSI ports, replicas created using Fibre Channel can be mounted using iSCSI, as long as all requirements for the mount host are met.


CLARiiON configuration steps for VMware

In a VMware environment, Replication Manager requires that you set up your CLARiiON storage arrays to use pre-exposed LUNs. Dynamic LUN surfacing on CLARiiON storage arrays is not supported in a VMware setup for RDM.

This section describes how to configure your CLARiiON storage arrays and Replication Manager to create and use static LUNs for mounting replicas in a VMware environment.

Preparing static LUNs for mountable clone replicas

To create and mount replicas that use CLARiiON clones in a VMware environment, you must follow these instructions to prepare static LUNs that support the replication:

1. Place the clone LUNs that you plan to use for the replicas into the ESX Server Storage Group.

2. Using VMware VirtualCenter, assign LUNs to the virtual machine as RDM volumes.

EMC recommends that you set up the storage pools so that any clone can be mounted onto only one mount VM. Once you decide which VM can mount which replicas and where those replicas will be mounted, you can develop storage pools to ensure that the product chooses clones that can be mounted to the selected mount VM, based on the LUNs you have created.

Do not create a storage group for the mount VM. Instead, find the storage group of the mount VM's ESX Server and place the replication LUNs into it.

Preparing static LUNs for mountable snapshot replicas

When you create a replica that uses CLARiiON snapshots in a VMware environment, Replication Manager creates a snapshot session. The snapshot session is a point-in-time copy of the source LUN.

To mount this snapshot session, you must create a snapshot device.

The snapshot device is the handle used to access the contents of the snapshot session.

For a mount host to access a snapshot session, the following prerequisites must be met:

A snapshot device must exist.

The snapshot device must be visible to the selected mount host.

The snapshot device must not be activated for a different session.


Be sure to provide enough snapshot devices to accommodate your needs before a mount operation occurs. You should use the following formula to determine how many snapshot devices you need:

(#_of_mountable_replica) X (#_of_LUNs_per_replica)
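As a worked example with hypothetical numbers, two mountable replicas of an application set whose source spans three LUNs would require:

(2 mountable replicas) X (3 LUNs per replica) = 6 snapshot devices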

To prepare snapshot devices for use with Replication Manager:

1. Use Navisphere to create enough snapshot devices for each source LUN from which you plan to create one or more replicas and mount those replicas. Your CLARiiON documentation provides specific information about creating a snapshot device.

Note: When creating the snap, simply create the snapshot. It is not necessary to start a snap session.

2. Use Navisphere to put the snapshot devices into the ESX Server Storage Group of the virtual machine to which you plan to mount these replicas.

3. Allocate snap devices to the virtual machine as RDM, using VirtualCenter.

4. After devices are added as RDMs, perform a bus rescan of the virtual machine so that the snap device will be surfaced. You can do this from Windows Disk Management. Replication Manager will not detect the devices on the virtual machine until they are visible in Windows Disk Management.
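If the snap device does not appear after the rescan, you can also trigger a rescan from an elevated command prompt on the virtual machine; this is simply an alternative to the Disk Management rescan described above:

diskpart
DISKPART> rescan
DISKPART> exit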

5. Configure the Replication Manager job that includes a mount to the new virtual machine host. Make sure you select the mount option Maintain LUN visibility after unmount.

When the Replication Manager job runs, Replication Manager detects the snapshot device already present on the virtual machine and it will reuse that device for the new snap. Replication Manager also renames the snapshot you created using the standard Replication Manager naming convention. Replication Manager continues to use this device for subsequent snaps, renaming it appropriately each time.

Note: The EMC Knowledgebase article emc184439 offers detailed information on how to perform these steps in a VMware environment.


Planning static LUNs for temporary snapshots of clones

For VMware virtual machines, follow these steps if you are planning to use temporary snapshots of clones (snaps of clones):

1. Use Navisphere to create snapshot devices for each clone LUN.

2. Place the snapshot devices in the ESX Server Storage Group.

3. Allocate snap devices to the virtual machine as RDM, using VirtualCenter.


Replicating an RDM on Invista Virtual Volumes

This section describes Replication Manager support for replication and mount on VMware with Raw Device Mapping (RDM) in an Invista environment.

Refer to the Replication Manager support information available from the E-Lab Interoperability Navigator on Powerlink for supported VMware and Windows versions.

Certain tasks must be completed to configure your environment prior to creating or mounting replicas of Windows virtual machines that use RDM Invista virtual volumes. Figure 46 on page 320 illustrates VMware RDM support with Invista virtual volumes.

Figure 46  VMware RDM support with Invista virtual volumes (physical machines and virtual machines using RDM on ESX servers, attached to Invista virtual volumes over iSCSI or Fibre Channel)

Configuring Invista environments to replicate VMware RDMs

Complete these steps before Replication Manager creates replicas in an Invista environment:

1. Expose Invista virtual devices as RDM devices to the VMware host using ESX Server Infrastructure Client.

2. Use Invista Element Manager to create an Invista Virtual Frame for Replication Manager.


3. Add the Invista virtual volumes you plan to use for Replication Manager replicas to that Invista virtual frame used to specify devices for Replication Manager use.

4. Add these RDM devices into a Replication Manager pool to isolate them for use when you create replicas that should be replicated onto RDM disks.

Note: Replicas of RDM Invista virtual volumes are created using Microsoft Volume Shadow Copy Service (VSS).

Configuring Invista environments to mount VMware RDMs

Before Replication Manager can mount VMware RDM replicas that reside on Invista virtual volumes, the user must meet the following prerequisites:

Pre-expose all Invista virtual volumes on which the replica resides to the target VMware virtual host using the ESX Server Infrastructure Client. You can do this by putting these volumes in the mount VM ESX Server's Virtual Frame using Invista Element Manager. These volumes then need to be exposed to the mount VM as RDM devices using the Virtual Infrastructure Client.

Invista virtual volumes storing the replica must be managed by the same Invista Instance as the source virtual volumes.

Production mounts are allowed when you are mounting RDM Invista virtual volumes.

During the mount to a VMware virtual host, Replication Manager checks to see if those volumes are visible to the mount host. If not, Replication Manager fails the mount operation.

About mounting an RDM replica to a physical host

Replication Manager also supports mounts to a physical mount host for Invista RDM disk replicas. In this case, dynamic mounts are supported; in other words, there is no need to make replica volumes visible to the physical mount host before the mount when mounting to a physical host.

The physical mount host should have a virtual frame associated with it in the Invista instance and should have at least one virtual volume in the virtual frame. When you are mounting to a physical host, volumes need not be visible to the mount host in advance; however, mounts will also succeed if volumes are made visible to the physical host before the mount.


Replicating VMware virtual disks on Invista virtual volumes

Certain tasks must be completed to configure your environment prior to creating replicas of VMware virtual disks stored on Invista virtual volumes. The configuration is illustrated in Figure 47 on page 322.

Figure 47  VMware virtual disk support with Invista virtual volumes (virtual machines with no Naviagent and no PowerPath, a virtual disk on a VMFS on the ESX server, backed by an Invista virtual volume)

Configuring Invista environments to replicate VMware virtual disk

Complete these steps before Replication Manager creates replicas in this environment:

1. Ensure that your environment supports this configuration. The following restrictions apply:

• Virtual machine must be running Windows 2003 guest operating system.

• The VMFS must contain only one virtual machine.

• The VMFS should consist of only one Invista virtual volume.

• LVM Resignature must be enabled in the ESX Server. For more information refer to "Enabling LVM Resignature" on page 297.

• Replication Manager can only replicate environments with one unique SCSI target across all SCSI controllers on the virtual machine. The SCSI target of the virtual disk being replicated must not be used on other SCSI controllers.

322

EMC Replication Manager Administrator’s Guide

VMware Setup

2. Expose at least one RDM Invista virtual device to the VMware host using ESX Server Infrastructure Client.

3. Register the production and mount virtual machines with the Replication Manager Server and provide the credentials for the VMware VirtualCenter that manages those virtual machines.

Figure 48 Register VMware hosts and define VirtualCenter credentials

4. Ensure that VMware Tools is installed on the virtual machine that is hosting the Replication Manager Agent. Replication Manager uses these tools to perform the replications.


Restrictions when mounting virtual disks in an Invista environment

The following restrictions apply when mounting VMware virtual disks in an Invista environment:

This type of virtual disk replication does not use VSS.

Replication Manager mounts the entire VMFS, so adhere to the

Replication Manager guideline that restricts you to one virtual disk per VMFS.

Replication Manager is tolerant of VMware VMotion of virtual production and mount hosts.

Do not mount two copies of the same virtual disk to the same virtual machine. You may, however, mount the same virtual disk to two separate virtual machines on the same ESX Server.

Remember to register the virtual machine to which the replica will be mounted (with the Replication Manager Server) and provide the credentials for the VMware VirtualCenter that manages the virtual machine.


Replicating a VMware RDM on Symmetrix

This section describes Replication Manager support for replication and mount on VMware virtual machines running Windows in a Symmetrix Fibre Channel environment. Figure 49 on page 325 illustrates this support. Refer to the Replication Manager support information available from E-Lab Interoperability Navigator on http://Powerlink.EMC.com for supported VMware and Windows versions.

Mount host attached to Symmetrix Fibre Channel

Replication Manager supports mounting to physical and virtual machines on Symmetrix Fibre Channel attached storage.

Figure 49 VMware support with Symmetrix arrays (supported replication hosts include physical machines and virtual machines with RDM devices on ESX Servers, attached to Symmetrix Fibre Channel storage)

Replicating hosts attached to Symmetrix Fibre Channel

When configuring a Windows virtual machine as a replication host attached to a Symmetrix array, the device can be made visible to the virtual machine through RDM. The Symmetrix environment requires that you configure four six-cylinder gatekeeper devices for each production and mount host and export them to those hosts in physical compatibility mode.
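Gatekeeper devices are typically created with Solutions Enabler. The following is a minimal sketch only, not a definitive procedure: it assumes Solutions Enabler is installed on a host with access to the array, the Symmetrix ID (1234) is a placeholder, and the exact create gatekeeper options should be verified against the Solutions Enabler documentation for your version:

symconfigure -sid 1234 -cmd "create gatekeeper count=4, emulation=FBA;" preview
symconfigure -sid 1234 -cmd "create gatekeeper count=4, emulation=FBA;" commit

After the devices exist and are masked to the ESX Server, present them to each production and mount virtual machine as RDMs in physical compatibility mode using the Virtual Infrastructure Client.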


Replicating a VMware RDM with RecoverPoint

When configuring a Windows virtual machine as a replication host in a RecoverPoint configuration, be sure that RDM devices are statically visible to mount hosts.

Refer to the Replication Manager support information available from E-Lab Interoperability Navigator on http://Powerlink.EMC.com for supported VMware and Windows versions.

Replicating a VMware RDM on Celerra

This section describes Replication Manager support for replication and mount on VMware virtual machines running Windows in a Celerra environment. Figure 50 on page 327 illustrates the current support.

Note:

Celerra RDM devices are not supported for Windows 2000 virtual machines. You must use the Microsoft iSCSI initiator instead in that environment.

Refer to the Replication Manager support information available from the EMC E-Lab Interoperability Navigator on Powerlink for complete support information regarding supported VMware and Windows versions.

Replication host attached to Celerra

When configuring a Windows virtual machine as a replication host attached to a Celerra, you can use either the VMware iSCSI Initiator on the ESX Server, or the Microsoft iSCSI Initiator, to present Celerra storage to the virtual machine.

If you use the Microsoft iSCSI Initiator, you must install it on the virtual machine.

If you use the VMware iSCSI Initiator, which is installed by default on the ESX Server, you must still install the Microsoft iSCSI Initiator on the virtual machine. Devices presented through the VMware iSCSI Initiator are made visible to the virtual machine through RDM.


Mount host attached to Celerra

When configuring a Windows virtual machine as a mount host, you must install Microsoft iSCSI Initiator on the virtual machine; VMware iSCSI Initiator is not an option in this case.

Figure 50 VMware support with Celerra (physical machines and virtual machines running the MS iSCSI Initiator are supported as replication and mount hosts; on the virtual machine presented with RDM devices, the MS iSCSI Initiator is used for replication only)


10

Hyper-V Setup

This chapter describes how to prepare Hyper-V environments to work with Replication Manager. It covers the following subjects:

Hyper-V overview ........................................................................... 330

Supported Hyper-V configurations and prerequisites............... 331


Hyper-V overview

Replication Manager can be installed on Hyper-V virtual machines and perform replications, mounts, and restores of Microsoft iSCSI Initiator-discovered devices residing on CLARiiON and Celerra storage.

Figure 51 on page 330 illustrates a sample configuration of Replication Manager Agent installed on three virtual machines for replication of CLARiiON and Celerra storage.

Figure 51 Replication Manager in a Hyper-V environment (three virtual machines on a Hyper-V physical server, each running an application, the RM Agent, and the Microsoft iSCSI Initiator, with SAN connectivity to Celerra and CLARiiON storage)


Supported Hyper-V configurations and prerequisites

This section describes the supported configurations for Replication Manager in a Hyper-V environment.

Replication Manager components

Replication Manager Server, Agent, and Console can be installed on Hyper-V virtual machines.

Virtual machines

Hyper-V virtual machines must meet host requirements as described in Chapter 1 and in the Replication Manager Support Matrix.

Guest operating systems

Replication Manager Agent is supported on the following Hyper-V guest operating systems:

Windows Server 2003

Windows Server 2008

Replication Manager Server and Console are supported on the same operating systems for Hyper-V virtual machines as they are on physical machines.

Storage services

Replication Manager supports the following storage services in a Hyper-V environment:

Celerra File Server

CLARiiON array

Symmetrix array

Refer to Replication Manager Support Matrix for supported array models.

Storage devices must be exposed to the guest OS as iSCSI LUNs.

There is no support for SCSI passthrough or virtual hard disks (.vhd) in this release.
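Devices are typically discovered from inside the guest OS with the Microsoft iSCSI Initiator. The following is a minimal sketch only; the portal address and target IQN are hypothetical placeholders, and the real values come from your Celerra or CLARiiON configuration:

iscsicli QAddTargetPortal 10.6.29.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-05.com.emc:apm000548000410000-t1

Once the session is logged in, the LUNs appear in Disk Management inside the guest, where they can be brought online and formatted for use by the application.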


CLARiiON settings

For the CLARiiON iSCSI configuration, the following settings are recommended:

On a Windows Server 2008 virtual machine with CLARiiON storage, the Microsoft iSCSI Initiator setting for the Load Balance Policy must be set to Round Robin; this is not the default setting.

All iSCSI NICs should have the registry setting TcpAckFrequency=1, as shown in the example that follows.
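As a minimal sketch only, the setting can be applied from an elevated command prompt with reg add; the <interface-GUID> placeholder must be replaced with the GUID of each iSCSI NIC listed under the Interfaces key, and a reboot is typically required for the change to take effect:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<interface-GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f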

High-availability

As with physical machines, PowerPath is highly recommended in a Hyper-V environment for improved load balancing and fault tolerance.

Applications

Supported applications (Exchange, SQL Server, SharePoint, Oracle, UDB, filesystems) are the same on Hyper-V virtual machines as on physical machines.

Licensing

Replication Manager Server and Agent licenses are consumed on Hyper-V virtual machines following the same rules as on physical machines.

Hyper-V support does not involve a proxy host, so a Replication Manager proxy host license is not needed.


11

VIO Setup

This chapter describes how to prepare IBM AIX Virtual I/O (VIO) Logical Partition environments to work with Replication Manager. It covers the following subjects:

IBM AIX VIO overview................................................................... 334

Supported VIO LPAR configurations and prerequisites............ 335


IBM AIX VIO overview

Replication Manager can be installed on IBM AIX Virtual I/O (VIO) Logical Partitions (LPARs) and perform replications, mounts, and restores of VIO LPARs in either of the following configurations:

Physical HBAs connected to each LPAR (behaves as any other AIX environment)

NPIV configuration for each LPAR (described in this chapter)

Replication Manager can leverage the NPIV configuration to manage the storage and create replicas of applications on CLARiiON and Symmetrix storage.

Figure 52 on page 334 illustrates a sample configuration of Replication Manager Agent installed on an IBM AIX VIO LPAR.

Figure 52 Replication Manager in an IBM VIO LPAR environment (the Replication Manager Agent is installed on LPARs on the IBM AIX VIO client, which connects through virtual Fibre Channel adapters and the IBM AIX VIO Server to the SAN and to EMC CLARiiON or Symmetrix storage holding the replica)


Supported VIO LPAR configurations and prerequisites

This section describes the supported configurations for Replication Manager in a VIO LPAR environment.

Replication Manager components

Replication Manager Agent can be installed on IBM AIX VIO Clients.

VIO requirements

VIO environments must follow these configuration steps:

1. Deploy a POWER6-based system.

2. Deploy an NPIV-enabled SAN switch.

3. Confirm that the following software exists in the environment:

• Virtual I/O Server

• AIX

• SDD

• SDDPCM

(Refer to the latest IBM documentation and the Replication Manager eLab Interoperability Support Matrix to determine the versions required to support NPIV and Replication Manager.)

4. Create logical partitions (LPARs).

5. Create virtual Fibre Channel server adapters on the Virtual I/O Server partition and associate them with virtual Fibre Channel client adapters on the virtual I/O client partitions.

6. Create virtual Fibre Channel client adapters on each virtual I/O client partition and associate them with virtual Fibre Channel server adapters on the Virtual I/O Server partition.

7. Map the virtual Fibre Channel server adapters on the Virtual I/O Server to the physical port of the physical Fibre Channel adapter by running the vfcmap command on the Virtual I/O Server (see the sketch after this list).

8. Deploy the Replication Manager client onto the LPARs along with the applications.

9. Configure a physical or virtual mount host to accommodate mounting the replica.
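The following is a minimal sketch of step 7 as run from the Virtual I/O Server command line; the adapter names vfchost0 and fcs0 are placeholder examples and must be replaced with the names reported in your environment:

$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -all -npiv

The lsmap -all -npiv command lists the NPIV mappings so that you can confirm each virtual Fibre Channel server adapter is bound to the intended physical port before zoning the client WWPNs to the storage array.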

Storage services

Replication Manager supports the following storage services in an IBM AIX VIO environment:

Symmetrix array

• TimeFinder Clones

• TimeFinder Snaps (local/remote)

• TimeFinder Mirror (local/remote)

• SRDF/S

• SAN Copy

CLARiiON array

• SnapView Clones

• SnapView Snaps

• CLARiiON-to-CLARiiON SAN Copy

Refer to Replication Manager Support Matrix for supported array models.

Storage devices must be exposed to the guest OS using N-Port ID Virtualization (NPIV). This configuration assigns a physical Fibre Channel adapter to multiple unique worldwide port names (WWPN). To access physical storage from the SAN, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical Fibre Channel adapters. Then the Virtual I/O Server uses the maps to connect the LUNs to the virtual Fibre Channel adapter of the virtual I/O client.

Configuring VIO for mounts

In order to facilitate mounts, the target storage must be zoned to the VIO client or physical server used as the mount host.

Additional Symmetrix configuration

For the Symmetrix, the following additional configuration is required:

The virtual machine must be zoned to at least four six-cylinder gatekeepers on the Symmetrix.

Export the gatekeepers in physical compatibility mode.

High-availability

As with physical machines, PowerPath is highly recommended in a VIO environment.

Applications

Supported applications include Oracle, UDB, and filesystems. These are the same on VIO LPARs as on physical machines.

Licensing

Replication Manager Agent licenses are consumed on VIO LPARs following the same rules as on physical machines.

VIO support does not involve a proxy host, so a Replication Manager proxy host license is not needed.


12

Disaster Recovery

This chapter describes how to set up and perform Replication Manager Server disaster recovery, and how to use Replication Manager to implement a Celerra DR solution. It covers the following subjects:

Replication Manager Server disaster recovery ............................ 340

Setting up Replication Manager Server disaster recovery ......... 342

Upgrading a Server DR configuration .......................................... 346

Performing server disaster recovery ............................................. 348

Celerra disaster recovery ................................................................ 353

VMware Site Recovery Manager (SRM) disaster recovery........ 384


Replication Manager Server disaster recovery

The typical configuration for Replication Manager Server disaster recovery consists of three hosts:

Primary Replication Manager Server host — Host that is running Replication Manager server software, and that controls the replication management software environment.

Secondary Replication Manager Server host — Host that is running Replication Manager server software with a read-only configuration. The Replication Manager database is automatically kept synchronized with the primary server host's Replication Manager database.

Production host — Host that is running your database/application and contains all of the database production data.

In the event that the primary Replication Manager Server host becomes unavailable, you can designate the secondary Replication Manager Server as the primary server and have it take over and manage the replication process. Because the secondary server's Replication Manager database has been kept in synch with the primary server's database, there is no loss of data.

Figure 53 on page 341 illustrates this concept.

Figure 53 How Replication Manager Server disaster recovery works (when the primary RM server, rm1_host, becomes unavailable, the secondary RM server, rm2_host, becomes the primary and its RM database becomes read/write; when rm1_host returns, it becomes the secondary with a read-only copy of the database)


Setting up Replication Manager Server disaster recovery

This section describes setting up Replication Manager Server disaster recovery.

Prerequisites for Server DR

The prerequisites for Server DR are:

Server DR is not supported in environments where production servers are running Oracle databases.

Server DR requires an IPv4-only environment. Dual-stack and IPv6-only environments are not supported for Server DR.

Make sure that your primary and secondary Replication Manager Servers have a method to resolve the name of their peer server into a network address (for example, DNS). This procedure assumes, and requires, that the primary and secondary Replication Manager Servers are able to address each other by name, and will not work properly if this assumption is not met. If this setup is not possible in your environment, contact your EMC support representative. Failure to observe the limitations may result in data loss or other problems during disaster recovery.

Installing Server DR

It is recommended that you install Replication Manager on the secondary Replication Manager Server before installing the software on the primary Replication Manager Server:

1. Install Replication Manager Agent software on the production host (exchsrv1 from Figure 53 on page 341), using the installation procedure specified in "Installing the Replication Manager Agent software" on page 57.

2. Install Replication Manager Server software on the secondary Replication Manager Server (rm2_host from Figure 53 on page 341), using the installation procedure specified in "Installing the Replication Manager Server software" on page 54. During the secondary Replication Manager Server installation, Setup prompts for the following information:

a. Server type (Secondary, Standalone, or Primary) — Choose Secondary to designate this server as the secondary Replication Manager Server to be used specifically for disaster recovery.


b. Solid hot backup mode communication port — Enter the desired communication port through which the primary server will communicate with the secondary server to synchronize data in their respective Replication Manager databases. The default communication port is 1964. The solid hot backup mode communication port number for the secondary server does not have to match the port number for the primary server.

c. Primary Replication Manager Server installed? — Select No when asked if the primary server is installed.

3. Install Replication Manager Server software on the primary Replication Manager Server (rm1_host from Figure 53 on page 341). During the primary Replication Manager Server installation, Setup prompts for the following information:

a. Server type (Secondary, Standalone, or Primary) — Choose Primary to designate this server as the primary Replication Manager Server.

b. Solid hot backup mode communication port — Enter the desired communication port through which the primary server will communicate with the secondary server to synchronize data in their respective Replication Manager databases. The default communication port is 1964. The solid hot backup mode communication port number for the primary server does not have to match the port number that you entered for the secondary server.

c. Secondary Replication Manager Server installed? — Select Yes when asked if the secondary server is installed.

d. Name and port number of the Secondary Server — Enter the name of the secondary Replication Manager Server and the hot backup mode communication port number that you entered for the secondary Replication Manager Server in step 2.

Converting from a standalone server to a disaster recovery server

If you originally chose Standalone Server as the Replication Manager Server type when you installed your server, and later decide to implement a disaster recovery solution:

1. Verify that Replication Manager Agent software is installed on the production host.


2. Install Replication Manager Server software on the secondary Replication Manager Server (rm2_host from Figure 53 on page 341), using the installation procedure specified in "Installing the Replication Manager Server software" on page 54. During the secondary Replication Manager Server installation, Setup prompts for the following information:

a. Server type (Secondary, Standalone, or Primary) — Choose Secondary to designate this server as the secondary Replication Manager Server to be used specifically for disaster recovery.

b. Solid hot backup mode communication port — Enter the desired communication port through which the primary server will communicate with the secondary server to synchronize data in their respective Replication Manager databases. The default communication port is 1964. The solid hot backup mode communication port number for the secondary server does not have to match the port number for the primary server.

c. Primary Replication Manager Server installed? — Select No when asked if the primary server is installed, even though the server you plan to use as the primary already exists in the configuration.

3. Install Replication Manager Server software on the standalone server:

a. Choose Repair when prompted to modify, repair, or remove the program.

b. When prompted for the disaster recovery role, choose Primary Server to designate this server as the primary Replication Manager Server.

c. Enter the desired communication port through which the primary server will communicate with the secondary server to synchronize data in their respective Replication Manager databases.

d. Select Yes when asked if the secondary server is installed.

e. Enter the name of the secondary Replication Manager Server and the hot backup mode communication port number that you entered for the secondary Replication Manager Server in step 2.

For more information about Replication Manager Server disaster recovery, see "Performing server disaster recovery" on page 348.

Replication Manager database synchronization

When installation completes, the secondary Replication Manager Server's solid database synchronizes with the primary Replication Manager Server's solid database. Keep in mind that larger databases, such as those containing many replication jobs, may take longer to synchronize than smaller databases.

Once the initial synchronization completes, the Replication Manager databases continue to synchronize in real-time whenever the primary database changes. For example, each time a replication job is created or run on the primary server, the databases synchronize.

The secondary Replication Manager Server's database is a read-only copy of the primary Replication Manager Server's database. Therefore, you cannot execute any action on the secondary server that would require a database write, such as editing or running jobs. You also cannot view job progress on the secondary server. You may, however, open the Replication Manager Console on the secondary server for monitoring purposes. For example, you could open a Replication Manager Console on the secondary Replication Manager Server to verify that the databases are synchronized, or view properties of all items stored in the Replication Manager database (jobs, replicas, application sets, and so on).


Upgrading a Server DR configuration

This section describes how to upgrade an existing Replication Manager Server DR configuration to the current Replication Manager version.

Prerequisites

This procedure requires the existing Replication Manager Server DR configuration to be running version 5.2.0, 5.2.1, 5.2.2, or 5.2.2.1. The Replication Manager Support Matrix lists supported OS versions and platforms.

If the Replication Manager Server (primary or secondary) is installed as a cluster resource, make sure the cluster resource group containing Replication Manager Server is running on the node where Replication Manager Server was originally installed.

Procedure

To upgrade a Replication Manager Server DR configuration:

1. On the secondary DR server, stop the Replication Manager Server Service.

2. If the secondary DR server is in a clustered setup, stop the Replication Manager Server Service running as a clustered resource.

3. Upgrade the Replication Manager Server on the primary DR server.

4. On the primary server, run the following CLI commands:

C:\Program Files\EMC\rm\gui> rmcli connect host=primaryhostname port=serverport
Connected to 'primaryhostname' (port=port).
RM-CLI : login user=Administrator password=password
Login as user 'Administrator' successful.
RM-CLI : dr-get-state
0 PRIMARY ALONE

The output should say PRIMARY ALONE.

5. Upgrade Replication Manager Server on the secondary DR server.

6. The install wizard asks if you want to leave the Replication Manager Server Service stopped. Select Start the RM Server Service.

7. On the primary server, run the following CLI commands:

C:\Program Files\EMC\rm\gui> rmcli connect host=primaryhostname port=serverport
Connected to 'primaryhostname' (port=port).
RM-CLI : login user=Administrator password=password
Login as user 'Administrator' successful.
RM-CLI : dr-get-state
0 PRIMARY ACTIVE

The output should say PRIMARY ACTIVE. If the value is not PRIMARY ACTIVE, shut down both the primary and the secondary servers and start the Replication Manager Server Services one at a time, starting with the primary.

8. Re-check the status of the primary server with dr-get-state.


Performing server disaster recovery

This section details the steps to perform disaster recovery of the

Replication Manager server.

Verifying the disaster recovery server configuration

The Replication Manager command line interface (CLI) provides a helpful command, dr-get-state, to help you verify the status of your disaster recovery configuration. The following example illustrates the use of the dr-get-state command:

C:\Program Files\EMC\rm\gui> rmcli connect host=rm2_host port=65432
Connected to 'rm2_host' (port=65432).
RM-CLI : login user=Administrator password=mypassword
Login as user 'Administrator' successful.
RM-CLI : dr-get-state
0 SECONDARY ACTIVE

In the previous example:

The Replication Manager Server is actively designated as the secondary Replication Manager Server.

The command completed successfully (0).

Table 24 on page 348 lists all possible results for the dr-get-state command.

Table 24 Descriptions of dr-get-state results

SECONDARY ACTIVE — Server is a secondary Replication Manager Server, and is fully synchronized with a primary server. Changes to the primary server's Replication Manager database are also written to this server's database.

SECONDARY ALONE — Server is a secondary Replication Manager Server, but is not synchronized to a primary server. Changes to the primary server's Replication Manager database are not written to this server's database. This may occur if the primary server is down or unreachable.

PRIMARY ACTIVE — Server is a primary Replication Manager Server, and a secondary server is synchronized to it. Changes to this server's Replication Manager database are also written to the secondary server's database.

PRIMARY ALONE — Server is a primary Replication Manager Server, but a secondary server is not synchronized to it. Changes to this server's Replication Manager database are not written to the secondary server's database. This may occur if the secondary server is down or unreachable. Once communication with the secondary server resumes, its database is resynchronized with the latest data.

PRIMARY UNCERTAIN — Server is a primary Replication Manager Server, but a secondary server is not synchronized to it due to an unknown event. This may occur if the secondary server was not shut down properly, or experienced an unexpected failure.

This is a non-hotstandby server — Server is not designated for disaster recovery as a primary or secondary Replication Manager Server.

For more information on using the Replication Manager CLI, see Appendix A, "Using the Command Line Interface."

Failing over to a secondary server

In the event that the primary Replication Manager Server becomes unavailable, Replication Manager breaks the connection between the primary and secondary Replication Manager Server’s databases and reclassifies both servers as standalone servers.

WARNING

Be certain that the original primary server is not running at this time. If both servers are running as primary, data loss can occur.

To designate the secondary server as the primary server:

1. Run the following command on a command line on the secondary Replication Manager Server:

RM-CLI : dr-set-primary

The secondary Replication Manager Server now becomes the primary Replication Manager Server.

2. Verify that the Schedule User listed in the Replication Manager Server properties on the new primary server has permissions to create scheduled tasks.

3. Set the desired logging levels (Normal and Debug) on the new primary server. Logging levels are not synchronized between the old and new primary servers because they are stored in the registry, not in the internal database.

4. Restart the Replication Manager Server service on this server.

5. On the new primary server, enable CIM event publishing if it was used on the old primary. It is not automatically enabled on the new primary server after failover.

Failing back to the original primary server

When the original primary Replication Manager Server becomes available, you can fail back to the original server by doing the following:

1. Verify that the Replication Manager Server service on the alternate server (the current primary—rm2_host) is running (for example, has a status of Started).

WARNING

It is very important that the Replication Manager Server service on the alternate server (rm2_host) is running at this point. If rm2_host is not running, rm1_host will be set as primary, causing there to be two primary servers, which can result in data loss.

2. Start the Replication Manager Server service on the original primary server (rm1_host). This server now becomes the secondary server.

3. Stop the Replication Manager Server service on the original secondary server (rm2_host), which is now the primary server.

4. Run the following CLI command on the original primary server

(rm1_host):

RM-CLI : dr-set-primary

5. Restart the Replication Manager Server service on the original primary server (rm1_host), which now becomes the primary server again.

6. Start the Replication Manager Server service on the original secondary server (rm2_host), which now becomes the secondary server again.
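As a reference sketch only (the host name, port, and credentials are the placeholder values used in the earlier examples in this chapter), step 4 can be issued through the CLI on rm1_host as follows:

C:\Program Files\EMC\rm\gui> rmcli connect host=rm1_host port=65432
RM-CLI : login user=Administrator password=mypassword
RM-CLI : dr-set-primary

After steps 5 and 6 complete and both services are running, dr-get-state on rm1_host should report PRIMARY ACTIVE, confirming that the original roles have been restored.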


Troubleshooting server disaster recovery

This section describes symptoms and resolutions for problems related to configuration of primary and secondary Replication Manager Servers.

Properties do not match

Symptom:

Properties of objects on the secondary server do not match those on the primary. This is caused when incorrect data is entered during installation.

Resolution:

Reinstall the secondary server with the correct information.

Both servers running as primary

Symptom:

Both servers are running as primary. This can happen if the two servers were not able to connect to each other and the dr-set-primary CLI command was mistakenly issued on both servers.

Resolution:

1. Stop the Replication Manager Server service on both Replication Manager Servers.

2. Decide which server is more accurate and is to be the primary server.

3. Start the Replication Manager Server service on the Replication Manager Server that is to be the primary.

4. Issue the dr-get-state command to verify that the server is in PRIMARY ALONE state.

5. Start the Replication Manager Server service on the Replication Manager Server that is to be secondary.

WARNING

When this server starts up, its database will be overwritten with the primary's database.

6. If the Replication Manager Server which is to become secondary has changed the state of any device, these changes will be overwritten and manual cleanup of the storage devices may be necessary.


Scheduled jobs do not run after failover

Symptoms:

Jobs run when started on demand but not when scheduled.

Inability to schedule new jobs or change existing scheduled jobs.

Resolution:

Make sure the credentials for the Schedule User in Replication

Manager Server properties are correct.


Celerra disaster recovery

This section outlines all phases of the disaster recovery (DR) solution for Replication Manager Celerra environments, from setup through failover and failback. It details the exact sequence of steps and events that must take place for DR processes to operate successfully in the Replication Manager Celerra environment. It is important that each step be completed in its entirety as described. It is also important that DR procedures be tested and documented at each site before being deployed into production.

Replication Manager supports various forms of disaster recovery in Celerra environments. This chapter discusses:

Celerra failover for installations with Celerra LUNs presented to the host using network file systems.

Celerra disaster recovery for installations with Celerra LUNs presented to the host using the Microsoft iSCSI Initiator. See "Celerra iSCSI disaster recovery" on page 354.

Note:

Disaster recovery of Celerra LUNs presented to a virtual machine using Raw Device Mapping requires an RPQ. Disaster recovery of Celerra LUNs in a VMware VMFS or virtual disk environment is discussed in "VMware Site Recovery Manager (SRM) disaster recovery" on page 384.

Celerra NFS failover

Replication Manager operation with Celerra NFS to remote storage is unique. For other remote technologies, such as Symmetrix SRDF or CLARiiON MirrorView, failover of the storage to the remote site may require you to create new application sets and jobs. With Celerra NFS support, it is possible to continue to use the same replicas before and after a failover, provided your environment has been correctly configured.

In environments where Celerra LUNs are presented to the host using network file systems (NFS), Replication Manager is tolerant of Celerra Replicator failovers. This means the following is true in these environments:

Replication Manager can be configured to create replicas based on Celerra Replicator data stored on either the source or target Celerra.


Remote replications of the target Celerra in these configurations can be mounted or restored on the remote site either before or after a Celerra Replicator failover occurs.

Formerly remote replicas become local replicas after failover, provided that the DR host (new production host) is referenced using the same host name or IP address as the original production host. This can be achieved using a virtual IP, DNS aliasing, or by naming the DR host the same name as the former production host.

There is a specific set of steps that must be followed in order to configure your Celerra Replicator environment to work with Replication Manager. "Configuring Celerra network file system targets" on page 239 outlines the steps necessary to configure your Celerra to be failover tolerant in a network file system environment.

For Celerra NFS, Replication Manager does not perform any Celerra failover processing. Failover must be performed external to Replication Manager.

Celerra iSCSI disaster recovery

The remainder of this chapter describes disaster recovery in Celerra iSCSI environments.

The DR solution described in this section requires that you are using Replication Manager 5.1.1 or later and DART 5.6 or later. For information on DR solutions involving earlier versions of Replication Manager or DART, see the following solutions guides on EMC Powerlink:

iSCSI Nonclustered Disaster Recovery Solution Implementation Guide

iSCSI Clustered Disaster Recovery Solution Implementation Guide

Application specific notes

While reading this chapter, pay particular attention to information regarding specific tips for environments running Microsoft SQL Server and Microsoft Exchange Server. Tips specific to these applications are called out under subheadings such as the following:

Microsoft SQL Server

If you are protecting a SQL Server application, make sure you read the information under this heading as it occurs throughout the section. EMC Replication Manager and Microsoft SQL Server - A Detailed Review, available on Powerlink, provides more information on how Replication Manager integrates with Microsoft SQL Server.

Microsoft Exchange Server

If you are protecting a Microsoft Exchange Server application, make sure you read the information under this heading as it occurs throughout the section. The following documents, available on EMC Powerlink, provide more information on how Replication Manager integrates with Microsoft Exchange, as well as recommendations for configuration, mount, restore, and mailbox recovery:

EMC Replication Manager and Microsoft Exchange Server 2007 - A Detailed Review

EMC Replication Manager and Microsoft Exchange Server 2000/2003 - A Detailed Review

Types of disaster recovery

Replication Manager allows for two types of disaster recovery:

Real disaster recovery — This type of disaster recovery is unplanned, involving a situation in which the production Celerra is unavailable due to a true disaster at the production site.

Test disaster recovery — This type of disaster recovery is planned for ahead of time, and falls under one of two categories:

Non-disruptive — Involves using Replication Manager to promote a replica on the production Celerra to make it available on another Celerra. This allows you to test your DR procedures without bringing down the application on the production array.

Disruptive — Involves intentionally simulating a disaster.

This section focuses on real and disruptive test disaster recovery scenarios which involve failover of the production replica to a secondary production environment and, optionally, the failback of the replica to the original production environment. In this chapter, the term failback is used to describe a failover back to the original host.

Non-disruptive disaster recovery is less involved than these other types of disaster recovery and is not covered in this section. To implement non-disruptive disaster recovery, you simply need to repurpose the production replica to another Celerra array and host using the Promote the Production operation in the Replication Manager user interface. To learn more about promoting a replica to make it available on another host, see "Using the Celerra Replicator clone replica for repurposing" on page 258.


Disaster recovery phases

The subsections within this section correspond to the phases in the disaster recovery solution, as shown in Figure 54 on page 356.

Figure 54 Phases of Celerra disaster recovery solution

The phases shown in the figure are: setting up disaster recovery (install hardware and software, establish trust, create and run replication jobs); failing over (prepare the DR system, fail over live data to the DR system); preparing for failback (install the new production system, establish trust, and optionally create and run replication jobs if not failing back immediately); and failing back and final recovery (fail back from the DR system to the new production system, then initiate replication from the new production system back to the DR system).

Each DR phase consists of a sequence of steps and procedures, summarized in Figure 54 on page 356 and detailed in the following sections.

Limitations and minimum system requirements

Disaster recovery requires a defined set of hardware, software, documentation, and networking components. Before using the procedures in this chapter, ensure that you understand the limitations and minimum system requirements.


Limitations

The Windows server in the secondary data center must be a dedicated disaster recovery server. You should not use it for serving any primary applications or data. The failover process described in "Failing Over" on page 370 assumes that this is a real or disruptive test disaster recovery, and therefore the primary site is unavailable.

Hardware

The following hardware should be installed in both the primary and secondary data centers. The DR system hardware configuration should match the production system:

Windows server (which could be a VMware virtual host), configured for the applications you are running, and in the case of the secondary data center, dedicated only to the DR function

Celerra Network Server with at least one Data Mover

IP network between the two data centers

Software

The following software is required in both the primary and secondary data centers. The DR system software should exactly match the production system software. Software revision differences between the two systems are not supported:

Celerra Network Server

Microsoft Windows Server 2003 or 2008 and latest patches

Microsoft iSCSI Initiator

Solutions Enabler

Replication Manager

For the latest support information about the software listed above, including all of the specific versions supported, see the EMC Replication Manager Support Matrix, which is available from the Powerlink website at http://Powerlink.EMC.com.


Setting up Celerra disaster recovery

Overview

This section details the steps to set up a primary data center and a secondary data center for disaster recovery in a Celerra environment. Successful disaster recovery begins with proper setup. At the primary data center that hosts the production system, you need to provide certain parameters to ensure a successful failover to the disaster recovery system in the event of failure at the primary data center.

The setup phase consists of activities that take place at two different sites for two different systems, as shown in Figure 55 on page 359. The primary data center (PDC) represents the location where the production system resides, serving live data. The secondary data center (SDC) represents the location where the disaster recovery system resides. It stores the replicas of the live data.

Replication Manager Server DR and Celerra DR

Depending on your replication environment, your Replication Manager Server may or may not exist on the production host. To limit the effects that a disaster may have on your environment, it is recommended that the Replication Manager Server be located on a separate host from your production host. Before continuing, it is recommended that you perform the Replication Manager Server setup procedures described in "Setting up Replication Manager Server disaster recovery" on page 342.

In the event that your Replication Manager Server is also affected by the disaster, you need to perform the server DR steps in "Performing server disaster recovery" on page 348 prior to performing Celerra disaster recovery efforts.


Figure 55 Setup locations and activities (at the primary data center, create read/write iSCSI LUNs on pdc-celerra, create the trust relationship, and install the RM Agent on pdcserver; at the secondary data center, create read-only iSCSI LUNs on sdc-celerra, create the trust relationship, and install the RM Agent on sdcserver; the two sites are connected by an IP network)

Setting up the production and disaster recovery servers

The production server at the primary data center (PDC) and the disaster recovery server at the secondary data center (SDC) need to be configured according to the appropriate recommendations for the applications you are running. The remainder of this chapter refers to the production server installed at the primary data center as pdcserver and the disaster recovery server at the secondary data center as sdcserver.

It is recommended that you perform the following procedures on the production server, disaster recovery server, and failback server.


Install Microsoft Windows

It is assumed that this procedure is well understood so it is not covered in any detail here:

1. If not already installed, install a supported version of Windows Server on the host.

2. Install all recommended updates from Microsoft, as well as any of those recommended when you run the Replication Manager Config Checker. Install any optional updates listed in this section as well. Updates for Microsoft software are available using Windows Update at windowsupdate.microsoft.com.

Install Microsoft iSCSI Initiator

The Initiator is used to connect to the iSCSI LUNs on the Celerra systems. Full details on installing and using the iSCSI Initiator can be found in the accompanying documentation.

Install the version of the Microsoft iSCSI Initiator that is specified in the EMC Replication Manager Support Matrix. This document is available from Powerlink.

Install EMC Solutions Enabler

Solutions Enabler provides a common set of tools for communicating with other EMC products.

As with the iSCSI Initiator, install the version of Solutions Enabler that is specified in the EMC Replication Manager Support Matrix. Accept all default options during installation. Refer to the Solutions Enabler documentation for more information.

Install the Replication Manager Agent software on each host

Install Replication Manager Agent software on each host. During the installation process, make sure to set up Replication Manager Server disaster recovery on each host, using the instructions in "Setting up Replication Manager Server disaster recovery" on page 342.

If you are installing Microsoft Exchange 2007 Management Tools, those tools must be installed before installing the Replication Manager Exchange 2007 agent. You must also reboot the system after the Exchange 2007 Management Tools are installed to ensure that the Replication Manager Exchange Interface component will register properly.


Create source LUNs

Using the Celerra Manager user interface, create iSCSI LUNs for each application that you want to protect in the DR environment. The following list gives an example of which LUNs need to be replicated.

All LUNs containing files that are being shared from this server.

All Microsoft SQL Server database and transaction log files. In most cases, these will be on separate iSCSI LUNs. All of these LUNs must be replicated.

All Microsoft Exchange Server storage groups and mailbox servers.

Note:

Each Replication Manager application set must contain iSCSI LUNs from only one Celerra data mover. Otherwise, you will not be able to perform a failback operation.
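Although this procedure uses the Celerra Manager user interface, source LUNs can also be created from the Control Station command line with the same server_iscsi syntax shown later in this chapter for destination LUNs, just without the -readonly option. The following is a minimal sketch only; the Data Mover, LUN number, target alias, size, and file system are placeholder values:

$> server_iscsi server_2 -lun -number 10 -create NonClust_tgt -size 25G -fs primary_site_iscsi

The resulting read/write LUN can then be presented to pdcserver through the Microsoft iSCSI Initiator and used by the application.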

Set up your user application

It is assumed that this procedure is well understood outside of the DR context. The following sections outline specific considerations for applications in the DR environment. When installing each application, ensure that you select the appropriate iSCSI LUN drives for data storage during the installation process.

For detailed instructions for creating and using iSCSI LUNs, see the

Configuring iSCSI Targets on Celerra iSCSI technical module, available on Powerlink.

NTFS

This procedure protects only shared data that resides on iSCSI

LUNs on pdc-celerra.

Microsoft SQL Server

Your Microsoft SQL Server system databases may reside on local storage or on iSCSI LUNs if they do not need to be replicated. All databases that require protection must be on iSCSI LUNs on pdc-celerra.

Microsoft Exchange Server

All of the Microsoft Exchange storage groups for the server must be located on iSCSI LUNs on pdc-celerra.


Create destination LUNs

In the Celerra Manager user interface, create read-only destination LUNs on sdc-celerra for each application that you want to protect in the DR environment. Make sure to create the LUNs on an appropriately sized file system as described in "Sizing the file system" on page 218. Use the guidelines in "Create source LUNs" on page 361 for an example of which LUNs need to be replicated.

The following steps show you how to set up a destination LUN.

CAUTION: All drive letters and mount points on pdcserver that are associated with data to be replicated must be reserved on sdcserver and may not be used at this time.

Before you begin

As you go through the procedure in this step, notice that while you create replication destinations based on iSCSI LUNs, the replication jobs are created based on the applications that use those LUNs. For that reason, create all iSCSI destination LUNs for each application before you create a replication job for that application.

Note:

This procedure describes how to create read-only LUNs using the Celerra Control Station command line interface. Alternatively, you could use the Celerra Manager user interface to create the read-only LUNs.

1. Determine the size of the LUN to replicate.

The DR LUN must be the same size as the production LUN. This was specified when you set up the LUN, but if you need to verify the size, use the following command:

Note:

All of the commands in step 1 should be run from pdc-celerra.

$> server_iscsi <movername> -lun -list

The resulting list displays iSCSI LUNs configured for a specific datamover. Obtain the LUN number that you need for the next command:

$> server_iscsi <movername> -lun -info <lun_number>

The resulting information lists the LUN size and the target with which it is associated.


2. Create the target LUN from the CLI on sdc-celerra.

Note:

All of the commands in step 2 should be run from sdc-celerra.

The command is as follows:

$> server_iscsi <movername> -lun -number <lun_number> -create <target_alias_name> -size <size> [M|G|T] -fs <fs_name> -readonly yes

Note:

The read-only option is required. It designates the LUN as a replication target. You can see this by getting LUN information for the newly created LUN.

Example 1 Creating an iSCSI LUN from sdc-celerra

[root@sdc-celerra root]# server_iscsi server_2 -lun -number 25 -create NonClust_tgt -size 25G -fs secondary_site_iscsi -readonly yes
server_2 : done

Install Microsoft SQL Server and NTFS shares

Microsoft SQL Server and NTFS

If you are running Microsoft SQL Server or NTFS, you may set up the application as described in the published best practices. Doing so now will save time if you need to fail over to the disaster recovery system.

The configuration of the sdcserver SQL server instances must match those on the pdcserver.

Microsoft Exchange Server

With Exchange 2003, you can do the installation in disaster recovery mode and then disable the Exchange services. With Exchange 2007, you cannot pre-install the mailbox server role in disaster recovery mode. During its installation it will look for the drive letters used at the production site. Hence, the setup will fail.


Setting up replication

Replication requires that you set up a Celerra Network Server at both the primary and secondary data centers. Both systems must be running at least version 5.6 of the Celerra Network Server software.

Set up trust relationship

At this point our effort has been directed at setting up two servers that are almost identical. Now we need to set up replication between them so that disaster recovery becomes a possibility. To do so, the production and disaster recovery Celerra systems must have a trust relationship. The trust relationship establishes bidirectional communication between the systems.

The following steps summarize the procedure for the purposes of disaster recovery. For a complete description of how to set up a trust relationship, see the Using Celerra Replicator for iSCSI technical module, available on EMC Powerlink:

1. Determine a passphrase to be shared between systems. The passphrase must be the same for both pdc-celerra and sdc-celerra.

2. Run the following command as root on pdc-celerra:

#> nas_cel -create <cel_name> -ip <ipaddr> -passphrase <passphrase>

where <cel_name> is the name of sdc-celerra, <ipaddr> is the IP address of the primary Control Station on sdc-celerra, and <passphrase> is the passphrase that you chose in the previous step. Make note of the values returned from this command. Refer to Example 2 for an example illustration.

Example 2 Establish trust relationship from production Celerra to disaster recovery Celerra

[root@pdc-celerra root]# nas_cel -create sdc-celerra -ip 10.6.29.126 -passphrase passphrase
operation in progress <not interruptible>...
id         = 4
name       = sdc-celerra
owner      = 0
device     = -
channel    = -
net_path   = 10.6.29.126
celerra_id = APM000548000410000
passphrase = passphrase
[root@pdc-celerra root]#


3. In the same way, establish trust from the disaster recovery Celerra in the secondary data center to the production Celerra in the primary data center. Run the command from the previous step from sdc-celerra, with the appropriate values:

#> nas_cel -create <cel_name> -ip <ipaddr> -passphrase <passphrase>

where <cel_name> is the name of pdc-celerra, <ipaddr> is the IP address of the primary Control Station on pdc-celerra, and <passphrase> is the passphrase that you chose. Note these values as well.

Create an interconnect communication path

Once you have established the trust relationship between the production Celerra and the disaster recovery Celerra, each Celerra Network Server should appear in the other's Replications panel in Celerra Manager. For example, the disaster recovery Celerra should appear in the production Celerra's Replications panel, as shown in Figure 56.

Figure 56 Celerra Manager Replications panel

Next, you need to create a communication path, called an interconnect, between the pair of remote Data Movers that will support replication sessions. First, create the local side of the interconnect on the production Celerra, then repeat the process on the remote side to create the peer side of the interconnect on the disaster recovery Celerra. You must create both sides of the interconnect before you can successfully create a remote replication session:

1. In Celerra Manager, select Celerras > [Celerra_name] > Replications, and click the Data Mover Interconnects tab. The Data Mover Interconnects panel appears and displays all existing Data Mover interconnects.

2. Click New. The New Data Mover Interconnect panel appears.

3. Fill out the appropriate fields in the New Data Mover Interconnect panel in order to set up the Data Mover interconnect between the production Celerra and the disaster recovery Celerra. See "Create an interconnect communication path (DART 5.6 or later only)" on page 250 or the Celerra Manager online help for more information on these fields.
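The interconnect can also be created from the Control Station command line. The following is a sketch based on an assumption about the DART 5.6 nas_cel -interconnect syntax; the interconnect names, Data Mover names, and interface addresses are placeholders, and the exact options should be confirmed against the Celerra Network Server documentation for your DART version. Run the first command on pdc-celerra and the second on sdc-celerra so that both sides of the interconnect exist:

#> nas_cel -interconnect -create pdc_sdc_icon -source_server server_2 -destination_system sdc-celerra -destination_server server_2 -source_interfaces ip=10.6.29.125 -destination_interfaces ip=10.6.29.126

#> nas_cel -interconnect -create sdc_pdc_icon -source_server server_2 -destination_system pdc-celerra -destination_server server_2 -source_interfaces ip=10.6.29.126 -destination_interfaces ip=10.6.29.125

Both sides can then be verified with nas_cel -interconnect -list.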

Create replication jobs

You will need to repeat this procedure for each application set containing data to be protected. You should create additional jobs for each application set that needs to be protected:

1. Start the Replication Manager Console on the primary Replication Manager Server.

2. If the production host has not already been registered, right-click Hosts and select New Host to add the production host to the list of managed hosts.

Microsoft SQL Server

In the case of Microsoft SQL Server, you also need to register the DR Server to the list of managed hosts. In the case of clustered SQL Server, you should also register the SQL virtual instance.

Microsoft Exchange Server

In the case of Microsoft Exchange servers, it is not necessary to register the DR host, because eventually the DR server will be renamed to the production host name.

Note: For complete details on building a replication job, refer to the EMC Replication Manager Product Guide. The following procedure highlights significant Replication Manager dialog boxes related to disaster recovery setup.

3. Right-click Application Sets and select New Application Set. Use the Application Set Wizard to configure an application set containing the production host's databases and/or file systems.

Microsoft SQL Server, Microsoft Exchange Server

If prompted to do so, supply an administrative username and password for these applications.

4. Right-click the application set and select Create job to create a replication job for the application set.


5. In the Replication Technology drop-down box, select Celerra Replicator and then click Next.

6. In the Celerra replication storage panel, enter the name and Data Mover IP address for the secondary Celerra (sdc-celerra) as shown in Figure 57.

Figure 57 Job Wizard — Celerra replication storage

Note: When specifying the disaster recovery Celerra for the replication, use the IP address for the Data Mover where the destination iSCSI target is listening.

7. If backup to disk or tape is part of your data recovery policy, you can (optionally) set up this job to automatically mount the replica on another host using the Job Wizard Mount Options panel. This can be used for various operations that are out of the scope of this procedure. These include, but are not limited to:

• Backup operations

• Consistency checking

• Reporting

• Disaster recovery testing

See the EMC Replication Manager Product Guide for more information.

Note: By default, the remote replica will be mounted as a read-only device. There are options to mount it as a temporarily writable device.

8. Set up replication for your other applications at this time.

The following points are important to remember:

• One of the available LUNs with the proper size will be selected from the destination machine, but the selection order is not guaranteed to be predictable. If your environment requires that a specific LUN number be used as the destination, ensure that when each replication job is set up, there is only one acceptable destination LUN available. (Optionally, you can select a specific Celerra IQN from the drop-down list in the Job Wizard to further ensure that the correct target LUN will be selected.)

• Although it is technically possible to replicate LUNs that contain more than one logical partition, this configuration is not supported.

• In a Microsoft SQL Server environment, you cannot replicate the system databases. Replicate user databases only. If you need to make copies of the system databases, follow the documented procedures from Microsoft.

9. Select Jobs in the tree panel. Individual jobs appear in the content panel. Right-click a job in the content panel and select Run. The job starts processing after you confirm that you want to run the job.

Verify replication

Run nas_replicate -info -all on each Celerra Network Server to verify that both servers know about the replication. The output should show one job for each volume that you are replicating, and each job should be tagged with the hostname that currently owns the volume as well as the drive letter on which that volume is mounted. Refer to Example 3 on page 369 for examples.
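As a quick check from the Control Station of either Celerra, the commands below can be run; the second form is only a convenience that lists the host and drive-letter tags, and it assumes the standard Linux grep available on the Control Station:

#> nas_replicate -info -all
#> nas_replicate -info -all | grep "Application Data"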

Note: You can also view all remote replications by viewing the contents of the Replications folder within the Celerra Manager user interface. If you choose this method to view remote replications, do not use any of the dialog box options on these replications unless recommended by an EMC support representative.


Example 3 Replication information from pdc-celerra

ID = fs37_T15_LUN7_APM00074001148_0000_fs33_T10_LUN19_APM00074001147_0000

Name =

Source Status = OK

Network Status = OK

Destination Status = OK

Last Sync Time =

Type = iscsiLun

Celerra Network Server = lrmc154

Dart Interconnect = DM2_lrmc154

Peer Dart Interconnect = 20004

Replication Role = destination

Source Target = iqn.1992-05.com.emc:apm000740011480000-15

Source LUN = 7

Source Data Mover = server_2

Source Interface = 10.241.180.165

Source Control Port = 0

Source Current Data Port = 0

Destination Target = iqn.1992-05.com.emc:apm000740011470000-10

Destination LUN = 19

Destination Data Mover = server_2

Destination Interface = 10.241.180.206

Destination Control Port = 5085

Destination Data Port = 8888

Max Out of Sync Time (minutes) = Manual Refresh

Application Data = RM_lrmg151_N:\

Next Transfer Size (Kb) = 0

Latest Snap on Source = fs37_T15_LUN7_APM00074001148_0000.ckpt009

Latest Snap on Destination = fs33_T10_LUN19_APM00074001147_0000.ckpt000

Current Transfer Size (KB) = 0

Current Transfer Remain (KB) = 0

Estimated Completion Time =

Current Transfer is Full Copy = No

Current Transfer Rate (KB/s) = 0

Current Read Rate (KB/s) = 14369

Current Write Rate (KB/s) = 3732

Previous Transfer Rate (KB/s) = 0

Previous Read Rate (KB/s) = 0

Previous Write Rate (KB/s) = 0

Average Transfer Rate (KB/s) = 0

Average Read Rate (KB/s) = 0

Average Write Rate (KB/s) = 0

The disaster recovery solution is now configured and ready to respond in the event of a catastrophic failure at the primary data center. Continue to "Failing Over" on page 370 to implement the failover phase of DR.


Failing Over

Failover overview

This section contains the procedures for failing over the live data from a failed production system to the disaster recovery system in the secondary data center. Although this section discusses steps to perform in the event of a true disaster, the steps for a failover test are the same. Prior to performing a failover test, remember to stop the production application from writing to the Celerra LUN and then run the Celerra Replicator job one more time before failing over. This guarantees that the data on the target Celerra LUN is current with the production LUN.

The failover phase is shown in Figure 58 on page 370. The primary data center (PDC) has failed and the live data services move to the secondary data center (SDC).

Figure 58 Failover locations and activities

If you have a Celerra-based application set that includes replicas from local SnapSure jobs and remote Celerra Replicator jobs, then a restore of the local SnapSure replica will cause the session to the target Celerra to be removed. This is not an error; this is how Celerra maintains consistency. This action also causes the clone replica to be removed. As a result, until you re-run a Celerra Replicator job, you will not be able to perform a failover, since the clone replica has been removed.

Likewise, if you perform a clone replica failover using Replication Manager, and then perform a restore of any of the Celerra Replicator remote snap replicas, the session will be removed, since the remote replicas are now considered local snaps to the failed-over LUN. If you perform a restore after a failover, you will have to reconfigure the Celerra Replicator job and run it to redefine the session and the clone replica.

Preparing for application data failover

Determine necessity of failover

At this point in the procedure, a disaster has caused the primary data center to cease normal operations.

Before deciding to fail over services, perform the following actions:

1. Determine if failover is appropriate for your situation. Failover is recommended if:

• The magnitude of the primary data center outage makes the overhead of failover worthwhile.

or

• The primary data center has experienced data loss or data corruption.

2. If you determine that failover is the necessary course of action, verify that the secondary data center is ready to accept the services that you need to fail over:

a. Ensure that all required network services such as Active Directory, DNS, and NTP are up and running at the secondary data center. The exact list of necessary services depends on the applications being protected and is beyond the scope of this document.

b. Identify the DR server at the secondary data center that will take over the responsibilities of the production server. In our example, the disaster recovery server at the secondary data center is called sdcserver. As detailed in "Setting up the production and disaster recovery servers" on page 359, there are several requirements for this system related to drive letter mapping and network name. These are not covered in detail here.


Install user applications

See the following application-specific notes covering installation requirements:

NTFS

There are no additional requirements.

Microsoft SQL Server

For Microsoft SQL Server, install the user application on sdcserver using the appropriate documentation from Microsoft. You may have installed this as an optional step in DR setup. The SQL Server instances on sdcserver must match the instances that are on pdcserver.

Microsoft Exchange Server

If you are failing over Microsoft Exchange Server, then you must first change the network identity of the disaster recovery server to match the server it is replacing. This is not required for Microsoft SQL Server or NTFS. To accomplish the identity change, do the following:

1. Reset the Active Directory Computer Account for pdcserver.

2. Rename sdcserver to pdcserver.

3. Join the new pdcserver to the Active Directory domain.

We will continue to refer to the server at the secondary data center as sdcserver even though it now has the same Active Directory name as pdcserver.
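The identity change can also be scripted. The following is only a rough sketch from an elevated command prompt, using hypothetical names (the EXAMPLE domain, the example.local DNS name, and the default Computers container); verify the exact netdom and dsmod options against your Windows version before using them:

rem Reset the Active Directory computer account for pdcserver (run with domain admin rights)
dsmod computer "CN=pdcserver,CN=Computers,DC=example,DC=local" -reset

rem Rename sdcserver to pdcserver (run on sdcserver)
netdom renamecomputer sdcserver /newname:pdcserver /userd:EXAMPLE\administrator /passwordd:* /reboot

rem Join the renamed server to the domain if it is not already a member (run on the renamed server)
netdom join pdcserver /domain:example.local /userd:EXAMPLE\administrator /passwordd:* /reboot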

Pre-failover application steps

If you are performing a DR test or any other sort of scheduled failover, you must make sure that you run a final Celerra Replicator job to the target Celerra prior to the failover to ensure that the production LUN and target LUN are logically the same. You should run the job after you stop the application that may be writing to the source data.

For example, if you are replicating and failing over databases from a SQL Server instance, the application that is writing to the SQL Server databases should be suspended before you run the Celerra Replicator job. For Exchange, you will need to stop the Exchange services to prevent users from writing to the database.

In either case, Exchange and SQL Server must be online so that Replication Manager can take the last replica prior to the failover.


Failing over to the secondary data center

Clustered environments

If you are performing a DR test with a clustered production server, you must remove any dependencies and take the appropriate clustered resources offline prior to the failover, since the failover will result in the production LUNs becoming read-only. This would cause the production cluster to perform repeated cluster failovers searching for writable storage.

If you fail over to a cluster and plan to make the failed-over storage clustered resources, and you also plan to fail back to the production cluster, you must remove the clustered resources on the production cluster for the storage that will be failed over. The reason is that the failover cluster marks the storage with its own cluster ID; if you perform a failback to the original production cluster without removing the clustered resources, the production cluster will reject the failback storage.

Microsoft SQL Server

For SQL Server, if you intend to fail back to the production environment (for example, a DR test), stop the SQL Server instance containing the databases after running the Celerra Replicator job to refresh the target clones. Shutting down the instance allows it to come up quickly after failback. If you do not shut down the production instance, SQL Server may mark databases as suspect when the failover process makes the LUNs read-only on the production Celerra. If you are failing over to a clustered SQL Server, you should select the virtual SQL Server instance as the failover host.

Microsoft Exchange Server

For Exchange, you must stop the Exchange instance prior to the failover.

Replication Manager uses the Celerra Replicator clone replica to facilitate disaster recovery of the Celerra array. When a disaster occurs, you can fail over this replica, allowing Replication Manager to make the clone LUN (and the snaps of this clone) from the original production host's application set operational on the disaster recovery host. Failover also makes the production storage read-only.

If an application set on Celerra storage contains both local and remote replications, the local replications will be marked "Not Restorable" when the clone replica is failed over. This situation occurs because there is no restore path from these snapshots to the failed-over clone LUNs. Upon failback to the original production LUNs, the local snap replicas will be marked restorable again.

Note: Before running the failover job, you should ensure that both the iSCSI Initiator and the target IQN are connected from the Microsoft iSCSI Initiator on the DR server.
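One quick way to confirm the initiator's view from a command prompt on the DR server is with the Microsoft iSCSI Initiator command-line interface; this is only a sketch, and the exact output varies with the initiator version:

C:\> iscsicli ListTargets
C:\> iscsicli SessionList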

Microsoft Exchange 2003 Server

Before running the failover job, you should keep a copy of eseutil.exe and ese.dll in a directory that is included in the system PATH environment variable.

Note: You cannot perform a failover of a clone replica if any local or remote replicas of the clone are currently mounted; if any are, the failover will fail. Unmount any local or remote replicas of the clone before starting the failover of the clone replica.

To fail over data from the primary data center to the secondary data center, perform the following steps on the production Replication Manager Server:

1. Right-click the Celerra Replicator clone replica and select Failover. The Failover dialog box appears, as shown in Figure 59 on page 374.

Figure 59 Failover dialog box

2. Select the name of the new production host on which to fail over the iSCSI clone replica. If failing over to a cluster, be sure to select the appropriate virtual cluster name.


Note: Ensure that the new production host selected is logged in to the appropriate IQN(s) on the target Celerra. Furthermore, that IQN must contain the target LUNs and snaps; otherwise, the failover will not complete successfully.


3. Click the Options button to select from the following options:

Edit job properties after failover — Allows you to edit the replica's job properties once a failover successfully completes. This option is useful if you plan to run for a period of time on the dedicated DR Celerra array before failing back to the normal production array. Depending on your Celerra Replicator job configuration, you may need to modify the Storage, Mount, and Startup tabs after the failover has completed.

Don't fail if drive(s) already mounted (failback scenario)

Under normal circumstances, if you attempt to mount a replica on a drive that is currently in use, the mount operation will fail. This option overrides that behavior and informs Replication Manager that the replica previously existed on this particular DR site and the drives are still assigned. When this option is enabled, Replication Manager will not fail the mount operation when attempting to mount the drive on a LUN that is currently in use. You should select this option when you know that the LUNs and drives exist and are read-only.

Note: If you are not in a failback situation and you are not sure whether the necessary drives are in use on the DR host, it is recommended that you do not select the Don't fail if drive(s) already mounted option. Replication Manager will inform you of any conflicts, which you can then address before restarting the failover.


4. Click OK. The Failover Replica progress panel appears as shown in Figure 60 on page 377 and displays the current status of the replica failover.

Figure 60 Failover progress panel

Once this process completes without errors, continue to "Post-failover considerations" on page 377.

Prior to a failover, restoring a local SnapSure replica when a more current (newer) SnapSure replica exists will result in the deletion of the newer replica. This is normal Celerra behavior that is used to maintain consistency, and is referred to as a destructive restore.

Celerra Replicator replications do not behave this way unless the clone replica for the Celerra Replicator snap replications is failed over. After a failover, a Celerra Replicator replica is treated like a local SnapSure replica. If you restore a replica that has a more current snapshot replica, then all replicas newer than the one that was restored will be deleted.

Post-failover considerations

How you proceed from the replica failover is largely dependent on your original intent for the disaster recovery. Are you planning to run production on the secondary data center permanently? Are you planning to fail back immediately to the primary data center? A true disaster may require that you run on the secondary data center for a significant period of time. Consider the following post-failover options to decide which one best suits your needs:

Immediate failback to original data center

This was a planned disruptive disaster recovery test, and you still intend to keep the original production center (consisting of pdc-celerra and pdcserver) operational.

Course of action: Fail back the production data to the original production center

Eventual failback to original data center

This was an unplanned disaster, causing the production center to be down temporarily, but it has or will eventually become available.

Course of action:

Reconfigure the replication jobs (adjusting schedules if necessary)

Run the replication jobs (manually or using schedules)

Fail back the production data to the original production center when it becomes available


Failover to a new primary data center

This was an unplanned disaster, causing unrecoverable loss to the original production center.

Course of action:

Set up new production host and Celerra array

Configure new replication jobs

Run the replication jobs

Fail over the production data to the new production center

Recover applications

Now that you have successfully failed over to a new production host (or back to the original production host), you may need to perform varying degrees of file system or application recovery. If your environment includes Microsoft Exchange or SQL Server, you may need to perform extra steps to get the production databases up and running on the new host. Specific application database recovery steps are beyond the scope of the Replication Manager documentation.

However, this section does offer generic application recovery steps that are designed to apply to many environments. Since disaster recovery is a complex process that varies based on the configuration of the data center, proper testing and documentation of the specific steps should be performed to ensure a consistent and repeatable process. It is recommended that you follow the disaster recovery processes and strategies related to your specific application that have been published by Microsoft and are available on their website.

NTFS

After a failover of a file system application set to a non-clustered host, the drives are mounted to the same locations on the sdcserver as they were on the pdcserver. Access to the drives is immediately available and the only step that you should need to perform is to modify the Replication Manager jobs for the application set to make sure that jobs are correctly configured and scheduled to run.

Clustered NTFS

After a failover of a file system application set to a clustered virtual server, the drives are mounted to the same locations on one of the physical nodes of the cluster as non-clustered physical drives.

If you wish to make a clustered resource out of the failed over storage, you should use the cluster administration tool to create the appropriate clustered resource. You will also then have to manually mask the failed over storage LUNs to the other nodes of the cluster using the Celerra Manager user interface.


Note: During failover, Replication Manager displays the names of all physical nodes of the cluster in the Failover progress window to help you finalize the cluster portion of the failover.

Microsoft SQL Server

After an initial failover of a SQL Server application set to a non-clustered host that has never seen the production databases, attach each of the SQL Server databases that exist for the application set using the SQL Server Management Studio.
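Attaching can be done in SQL Server Management Studio or from a command prompt with sqlcmd. The following one-liner is only a sketch; the instance name, database name, and file paths are hypothetical and must be replaced with the values used by your application set:

C:\> sqlcmd -S sdcserver\PRODINSTANCE -E -Q "CREATE DATABASE SalesDB ON (FILENAME = 'N:\SQLData\SalesDB.mdf'), (FILENAME = 'N:\SQLData\SalesDB_log.ldf') FOR ATTACH"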

If you had previously performed a failover of this application set to this host, and the SQL Server instance was shut down during the failover, you can start up the SQL Server and the databases will be available.

You can also restore the most recent Celerra Replicator snapshot replica that was part of the failed over clone replica. This is known as a recovery restore. By restoring this snapshot replica, you allow Replication Manager to perform the SQL database recovery. This causes the Celerra Replicator session to be removed, and will require that you reconfigure the Celerra Replicator job to use the original production Celerra Data Mover (or a new Celerra). You must then run the job back to the production Celerra before performing a failback.

Clustered Microsoft SQL Server

After the failover:

Add the appropriate disk resources.

Add the appropriate drive resource dependencies to the SQL Server instance resource group and bring it online.

Check for the databases using the SQL Server Management Studio.

If this is the first time that you have failed over to this cluster, you may need to attach each of the databases, or you can use Replication Manager to perform a restore of the most recent Celerra Replicator snapshot replica that was part of the failed-over clone replica. (The same caveat about the Celerra Replicator session described in the non-clustered SQL Server section above applies.)

If it is not the first time that you have failed over to this cluster (for example, if this is a failback), the new databases should be visible when you bring the SQL Server instance resource group online.


If you will be running for a period of time on the DR cluster, you should use the Celerra Manager user interface to mask the failed-over LUNs to the other physical nodes of the cluster.

Microsoft Exchange Server

Note: There are other methods for performing DR of Exchange databases. This section lists one such method for Exchange 2003 and Exchange 2007.

Prerequisite steps:

Reset the Active Directory Computer Account for the production host.

Rename the DR host (sdcserver) to the production host (pdcserver).

Join the new production host to the Active Directory domain.

Exchange 2003

1. Before running the failover job, keep a copy of eseutil.exe and ese.dll in a directory that is included in the system PATH environment variable. Otherwise, the failover job will fail.

2. Install Exchange 2003 in disaster recovery mode by running the following command from the command prompt:

%setup_file_location%> setup.exe /disasterrecovery

3. Run the Replica failover job.

4. Mount the Mailbox databases.

Note: Steps 2 and 3 can be interchanged.

Exchange 2007

1. Before installing the Replication Manager Exchange 2007 agent on sdcserver, the Exchange 2007 management tools have to be installed on the secondary server, and you must reboot the system after the installation to ensure that the Replication Manager Exchange Interface component is registered properly.

2. Run the Replica failover job.

3. To recover the Exchange 2007 server roles, run the following command from the command prompt:

%setup_file_location%> setup /m:recoverserver

4. Mount the Mailbox databases.


Note: Steps 2 and 3 cannot be interchanged. To recover the mailbox server role, the drives used to store the database and log files should be mounted to the sdcserver.

Troubleshooting Celerra Disaster Recovery

In the event of a problem during a Celerra clone replica failover, the administrator may need to perform some manual recovery operations to return Celerra LUNs to a usable state on the appropriate host and Celerra.

Preparing for a failover

Once Celerra Replicator jobs are running, check the Replications folder of the Celerra Manager user interface on both the production and target Celerras. Here you will find a list of the current Celerra Replicator sessions.

The entries listed on the production Celerra (lrmc154) are shown as depicted in Figure 61 on page 382, and on the target Celerra the entries are shown as depicted in Figure 62 on page 382.

Figure 61 Celerra Manager User Interface showing source sessions

Figure 62 Celerra Manager User Interface showing target sessions

The Data Mover Interconnect icon identifies whether the Celerra is the source of the sessions (right pointing arrow) or the target (left pointing arrow). Once established, these session definitions should not change unless a failover occurs. It is a good practice to record the information contained in these screens (perhaps by taking a screenshot of the user interface on each Celerra) so that in the case of a failover, you can revert your systems to their original configuration.


Reverting back to the production Celerra after a failover failure

Before any failover operation is performed, Replication Manager performs a "pre-check" to determine whether the failover will be successful. If the "pre-check" fails, no changes are made to the storage or the Replication Manager database.

In the unlikely event that the "pre-check" succeeds but the failover fails, Replication Manager does not update its internal databases to transfer ownership of the application sets, jobs, or replicas to the target Celerra. If this occurs, complete the following:

1. Determine whether or not Replication Manager in fact failed over sessions. (Check the screenshots you captured above against the current Celerra Manager user interface.)

2. If so, use the Failover button on each session and instruct the Celerra to fail over the session. (When asked, instruct the Celerra to perform a failover on both the source and target Celerras.)

3. After all sessions have been failed back to the original Celerra, run the original Celerra Replicator job that created the remote replicas.

Production Celerra not available after a failover failure

If there is a failover failure, the sessions have failed over, and the original Celerra is no longer available, follow these steps:

1. Manually mask the failed over LUNs to the new disaster recovery host using the Celerra Manager user interface.

2. Manually map the disks on the disaster recovery host to the appropriate drive letters. (Use the Microsoft iSCSI Initiator to identify which LUN numbers on the target Celerra correspond to each physical drive on the disaster recovery host.)

3. Use the rmmountvolume.bat script in the rm/client/bin folder to clear the VSS bits and allow the device to become writable by Windows. The syntax of that command is as follows:

rmmountvolume <physicaldrive> <driveletter>

For example, if the target LUN on the disaster recovery host was PHYSICALDRIVE5, and the drive should be mapped to X:\, the syntax for the rmmountvolume command would be:

rmmountvolume 5 x:

For more information about this manual recovery process, refer to either the iSCSI Nonclustered Disaster Recovery Solutions Implementation Guide or the iSCSI Clustered Disaster Recovery Solutions Implementation Guide, available on Powerlink.


VMware Site Recovery Manager (SRM) disaster recovery

Replication Manager can integrate into VMware SRM environments and help facilitate disaster recovery operations in that environment if the VMware SRM environment has been implemented on either CLARiiON or Celerra storage. Below is a general overview of these environments. Later sections provide step-by-step instructions for each environment.

Understanding VMware Site Recovery Manager

VMware Site Recovery Manager (SRM) is a disaster recovery management tool that integrates tightly with VMware's VirtualCenter software to automate and simplify failover and disaster recovery of a VMware Virtual Infrastructure.

VMware SRM protects the virtual infrastructure by creating a disaster recovery environment with the following components:

Protection site (or primary site) — This site hosts the production virtual machines residing on ESX servers. The protection site is managed by a particular instance of VMware VirtualCenter software. The VMware SRM Administrator creates a protection group to identify which components on the protection site should be failed over in the event of a disaster. Refer to VMware documentation for information on how to configure a VMware SRM protection group.

Recovery site (or paired site) — Each protection site has a corresponding recovery site, which is another set of ESX servers, managed by a separate VirtualCenter instance, to which the protection site can fail over. The administrator configures a recovery plan to recover the failed-over protection group onto the recovery site. Once again, refer to the VMware SRM documentation for specifics on how to configure a recovery plan that administrators can use to recover a site after a failover.


VMware SRM facilitates the failover and the coordination of the disaster recovery environment. Figure 63 on page 385 illustrates a VMware SRM environment with Replication Manager integration.

Figure 63 VMware SRM Environment with Replication Manager Integration

Role of VMware SRM

VMware SRM manages the following tasks to facilitate failover and disaster recovery operations in a VMware environment:

Controls which site plays the role of protection (or primary) site and which site plays the role of recovery (or paired) site. The protection site has read/write permissions on the LUNs; the recovery site has read-only permissions on the LUNs. SRM manages those permissions.

Reverses the direction of the link between the sites. The two sites can be linked using either MirrorView (for CLARiiON-based sites) or Celerra Replicator (for Celerra-based sites). SRM swaps the direction of the link upon failover, unless the failover is due to a real disaster, in which case there is no protection site to link to.

Controls the powered on status of virtual machines. Virtual machines in the protection site are powered on while virtual machines in the recovery site are powered off.


Role of Replication Manager in a VMware SRM environment

When Replication Manager resides in a VMware SRM environment, it facilitates the following tasks:

Creates replicas of protection or recovery site data

Mounts those replicas

Restores certain replicas (restrictions apply)

Automatically manages changes to jobs and replicas after a failover (in Celerra environments)

Note: In CLARiiON environments, certain manual steps are required after a failover. Those manual tasks are described in this chapter.


VMware SRM disaster recovery in CLARiiON environments

Replication Manager can integrate with an existing VMware SRM environment that has been configured with MirrorView links between the protection and the recovery site. In the case of a CLARiiON integration, the MirrorView links must be set up prior to incorporating Replication Manager into the environment. Replication Manager does not set up the MirrorView links. Figure 64 illustrates a VMware SRM CLARiiON DR environment.

Figure 64 VMware SRM CLARiiON DR environment

VMware SRM disaster recovery in Celerra environments

Replication Manager in a Celerra environment works differently from that of a CLARiiON environment described above. In a Celerra environment, it is necessary to use Replication Manager to set up all the Celerra Replicator links between the protection and recovery site. Replication Manager cannot manage Celerra Replicator environments that were not created as a result of running a Replication Manager job. Figure 65 illustrates a VMware SRM Celerra DR environment.

Figure 65 VMware SRM Celerra DR environment

In a Celerra environment, you must create two different application sets/jobs in order to protect the entire environment from disaster. These two different jobs create the following replicas:

VMFS replica — Protects the system drive (for example, the C:\ drive) of the virtual machine where the SQL Server application is installed.

Virtual disk replica — Protects the SQL Server database (with application-level consistency) or file system data. These replicas contain the virtual disks that hold the application data on the virtual machine. It protects the drives that hold the application data and logs, but not the system drive (for example, the C:\ drive).

Note: It is possible to set up the job that creates the virtual disk replica as a link job from the job that creates the VMFS replica. However, this does not guarantee consistency between the VMFS replica and the virtual disk replica in this environment.


The remainder of this chapter describes how to integrate Replication Manager into a VMware SRM environment and configure both SRM and Replication Manager to work together.

VMware SRM disaster recovery in RecoverPoint environments

For Replication Manager in a RecoverPoint environment, it is necessary only to set up a consistency group on the storage you want to protect with SRM.

Figure 66 VMware SRM RecoverPoint DR environment

Configuring SRM and Replication Manager on CLARiiON

Replication Manager can create two different kinds of VMware replicas in an SRM environment as follows:

VMFS replicas — A crash-consistent replica of the VMware VMFS (datastore). Prior to the replication, Replication Manager will create a VMware snapshot for every live virtual machine in the datastore. This VMware snapshot remains for each VM on the replica, and will be removed for each live VM after the replica is created.


Note: When Replication Manager creates the VMware snapshot, the snapshot provides a level of consistency for the VMs in the datastore, depending on which version of VMware the datastore resides on. At a minimum, the memory for the VMs is flushed to disk prior to creating the replication. More recent versions of ESX (3.5 update 2 or later) create VSS snapshots for all VSS writers on a Windows VM when a VMware snapshot is created by replicating a VMFS with live VMs.

Virtual disk replicas — An application-consistent replica of the data located on a virtual disk mounted to a virtual machine. This does not replicate the C: drive but a separate virtual disk that contains application data.

This section describes how to configure your VMware SRM environment to facilitate each of these two types of VMware replications.

Creating a MirrorView relationship for each LUN

A MirrorView relationship must be configured ahead of time between the protection and recovery site LUNs on the two CLARiiONs as a prerequisite to using SRM to protect the VMware environment. If you add LUNs to your environment, the MirrorView relationships must be configured for the new LUNs and the VMware SRM configuration described must be completed for those new LUNs as well. This manual procedure is described in MirrorView documentation.

Configuring your environment for VMFS replicas

Perform the following steps to configure your environment for VMFS replication:

1. Create a virtual machine or a physical host with VirtualCenter credentials in your VMware SRM protection site and add it to the protection group of the site. This becomes the Replication Manager proxy host.

Note: Replication Manager proxy hosts do not have to be protected by VMware SRM in order to work as proxy hosts in this environment; however, in the case of a disaster, you will lose the configuration controlled by the client if the proxy host is not protected in this way.

2. Install the Replication Manager Agent software on that virtual machine. This host will be available in the protection site prior to failover and in the recovery site in the event of a failover.


Replication Manager uses this proxy host to communicate with the VMware VirtualCenter software for the protection site. This allows Replication Manager to interact with VMware and perform the tasks necessary to create, mount, unmount, restore, and expire VMFS replicas between any ESX server on the site.

3. Choose either a virtual machine or a physical machine in your VMware SRM recovery site and install Replication Manager Agent software on that recovery site location to act as a proxy server for the recovery site.

Note: Replication Manager proxy hosts on both the protection site and the recovery site must have IP access to the storage arrays on both sites.

4. Install the Replication Manager server software and protect it from a disaster. Disaster protection for the server can be provided either as described in "Setting up Replication Manager Server disaster recovery" on page 342 or by installing the server on a virtual machine that is protected using VMware SRM, just as the Replication Manager Proxy Agent is protected.

CAUTION: Replication Manager Server can optionally be on an unprotected virtual machine or physical host, but these configurations are not optimal and could result in data loss in the event of a disaster.

5. Register the two new proxy hosts on the Replication Manager Server. Refer to the EMC Replication Manager Product Guide for instructions on how to register a VMware proxy host on the Replication Manager Server.

6. Make the following modifications to the VMware Infrastructure Clients that constitute the VMware SRM environment:

• Set the LVM EnableResignature flag to 1.


• Install the VMware Site Recovery Map extension (Plugins > Manage Plugins on the VMware Virtual Infrastructure menu).
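If you prefer to set the resignature flag from the ESX service console instead of the VI Client advanced settings dialog, a command along the following lines can be used on ESX 3.x hosts; this is only a sketch, so verify the option path against your ESX version before relying on it:

# esxcfg-advcfg -s 1 /LVM/EnableResignature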

Figure 67 illustrates the location of Replication Manager Proxy Servers in a CLARiiON VMware SRM environment.

Figure 67 Location of Replication Manager Proxy Servers in a CLARiiON VMware SRM environment

Configuring a CLARiiON VMware SRM setup for virtual disk replicas

Configure your VMware SRM/CLARiiON storage environment to use Replication Manager for virtual disk replication. If you have not already done so, configure the MirrorView relationship as described in "Creating a MirrorView relationship for each LUN" on page 390.

Perform the following steps to complete the configuration of the environment:

1. Install Replication Manager Agent software on the production and mount virtual machines associated with the virtual disks you want to replicate in the protection site.

2. Install Replication Manager Agent software on the mount virtual machines associated with the virtual disks you want to mount in the recovery site.


3. Install the Replication Manager server software and protect it from a disaster. Disaster protection for the server can be provided either as described in "Setting up Replication Manager Server disaster recovery" on page 342 or by installing the server on a virtual machine that is protected using VMware SRM, just as the Replication Manager Proxy Agent is protected.

4. Register the virtual machine hosts to the Replication Manager Server.

5. Register a virtual machine for mounting virtual disks on each site registered to the respective VirtualCenter. The mount virtual machine on the recovery site can be used to mount a remote replica. The mount virtual machine can be configured as a protected virtual machine for SRM failover.

6. Make the following modifications to the VMware Infrastructure Clients that constitute the VMware SRM environment:

• Set the LVM EnableResignature flag to 1.

• Install the VMware Site Recovery Map extension (Plugins > Manage Plugins on the VMware Virtual Infrastructure menu).

Additional VMware SRM tasks in a CLARiiON environment

Additional tasks must be performed in VMware SRM in order to prepare your CLARiiON/VMware SRM environment to work with Replication Manager.

If you have not already done so, perform these VMware SRM steps to configure your VMware SRM environment:

1. Create the structure of the protection and recovery sites with ESX servers in the recovery site that mirror the protection site.

2. On the protection site, create a protection group including each datastore you intend to protect with VMware SRM.

Note: Make sure that no Replication Manager replicas of the protection site are mounted while you create the protection group.

3. Use the VMware Virtual Infrastructure Manager to configure each virtual machine within the protection group that you want to fail over. A placeholder for each virtual machine you configure for failover appears in the recovery site.

4. On the recovery site, create a recovery plan for each protection group you created on the protection site. Choose the corresponding protection group.


For more information on how to complete these VMware SRM tasks, refer to your VMware documentation.

Protecting the VMware SRM environment before failover on CLARiiON

This section describes how to use Replication Manager to protect the VMware SRM environment before a VMware SRM failover has occurred.

This section includes the following topics:

Configuring VMware SRM application sets

Configuring jobs for those application sets

Mounting replicas created by those jobs

Configuring application sets of VMware SRM data before a failover

Configure the application sets to protect data in the VMware SRM environment:

Note: For a full description of how to use Replication Manager to create application sets and jobs, refer to the EMC Replication Manager Product Guide and the online help.

1. Discover CLARiiON storage and add the appropriate LUNs for use in Replication Manager. Refer to the EMC Replication Manager Product Guide for step-by-step instructions.

2. Configure an application set to protect data in the VMware SRM environment.


Application set differences for VMFS versus virtual disk replication

When creating virtual disk replicas, the selections are identical to working in non-SRM environments. When choosing objects, expand applications and select application objects.

When creating VMFS replicas, the selections are the same as in non-SRM VMware environments. Expand the tree and select a VMFS datastore to replicate.

Note: VMFS replicas are crash-consistent copies of the VMFS and include snapshots of each virtual machine that allow you to revert the virtual machines to a consistent state. These replicas do not provide application consistency.

Configuring Jobs for VMware SRM application sets

Configure Replication Manager jobs to create replicas of the application sets that have been created to define the VMware SRM objects you want to replicate.

Note: For a full description of how to use Replication Manager to complete the tasks mentioned here, refer to the EMC Replication Manager Product Guide and the online help.

While configuring jobs in a CLARiiON VMware SRM environment, set the Replication Source to MirrorView Primary or MirrorView Secondary on the Job Name and Settings panel of the Job Wizard. The panel is shown in Figure 68.


Figure 68 Job wizard choosing replication source

The following describes each of these options:

MirrorView Primary — Choose this option to make a replica of the data on the protection site. This type of replica exists only on the protection site. It is restorable on the protection site, but upon failover, the contents of this type of replica would be lost.

MirrorView Secondary — Choose this option to make a replica of data on the recovery site. This uses the proxy host on the protection site to create the replica and the proxy host on the recovery site to mount the replica. The replica can be mounted on the recovery site and would still be available in the event of a failover to that recovery site. This type of replica cannot be restored after a failover; however, it could be mounted and data could be transferred manually.

Mounting VMFS replicas of SRM environments

When mounting a VMFS replica of a VMware SRM environment as part of the job, or on demand, you must type in the name of the ESX Server you plan to mount to in the Mount host field.


Note: Replication Manager does not validate the ESX Server name you enter in the mount host field. The name you enter must be part of the site on which you are performing the replication. If you chose MirrorView Primary, the mount host ESX Server must be part of the protection site. If you chose MirrorView Secondary, your mount host ESX Server must be part of the recovery site. You must also be sure to choose the appropriate Replication Manager Proxy Host.

Replication Manager after SRM failover on CLARiiON

Results of failover on a protected VMFS environment

When a failover occurs in a VMware SRM environment, the effect of the failover differs depending on your SRM configuration.

This section assumes your environment has the following configuration before the failover:

Replication Manager Proxy hosts (hosting the Replication Manager Agent) are installed on virtual machines protected using VMware SRM.

Replication Manager Server is protected with Replication Manager Server disaster recovery as described in "Setting up Replication Manager Server disaster recovery" on page 342, or the Replication Manager Server is protected using VMware SRM.

If these prerequisites have been met, failover has the following impact on VMFS replicas:

Application sets are still valid.

Jobs for the protection site are still valid.

Replicas created on the protection site (by choosing MirrorView Primary as the source) are invalid.

Replicas created on the recovery site (by choosing MirrorView Secondary as the source) are valid for mounting only.

Replicas created on the recovery site cannot be restored.

Jobs that create replicas on the recovery site are invalid.

The procedures that you must follow to reinstate your Replication Manager environment are described below.


Results of failover on an unprotected VMFS environment

If you have not protected your environment with the prerequisites described above, all of the following objects become invalid upon failover:

Application Sets

Jobs

Replicas (mounted and unmounted)

The procedures that you must follow to reinstate your Replication Manager environment are described below.

Reinstating VMFS replication after SRM failover

To reinstate VMFS replication after a VMware SRM failover, follow these steps:

1. Force unmount all mounted VMFS replicas.

2. Expire all protection site VMFS replicas (they are no longer valid after failover).

3. Delete all recovery site VMFS jobs (they are no longer valid after failover).

4. Change the mount ESX Server on all former protection site jobs that include mount operations so that they point to a mount ESX Server on the recovery site.

5. Rename all the failed-over VMFS datastores on the DR ESX server that have been renamed due to the LVM EnableResignature flag. Each failed-over VMFS datastore will have a prefix starting with snap-<ID>- followed by the original VMFS datastore name.

In the case of an unprotected VMFS environment, delete and recreate all application sets and jobs, since they are all invalid, and rerun the jobs to recreate the replicas of the data after failover.

Results of a failover on virtual disk hosts protected by SRM

If the virtual machines hosting your Replication Manager Agent and being used to create virtual disk replicas are also protected with VMware SRM, failover has the following impact on the virtual disk replicas:

Hosts that have failed over require a change to their properties to point to the VirtualCenter on the recovery site and include the appropriate credentials for that new VirtualCenter.

Application sets are still valid.

Jobs for the protection site are still valid.


Jobs for the protection site that mount replicas are valid without modification if the mount hosts are protected by VMware SRM.

Replicas created on the protection site (by choosing MirrorView Primary as the source) are invalid.

Replicas created on the recovery site (by choosing MirrorView Secondary as the source) are valid for mounting only.

Replicas created on the recovery site cannot be restored.

Jobs that create replicas on the recovery site are invalid.

Configuring SRM and Replication Manager on Celerra

Replication Manager can create two different kinds of VMware replicas in an SRM environment as follows:

VMFS replicas — A crash-consistent replica of the VMware VMFS (datastore). Prior to the replication, Replication Manager will create a VMware snapshot for every live virtual machine in the datastore. This VMware snapshot remains for each VM on the replica, and will be removed for each live VM after the replica is created.

Note: When Replication Manager creates the VMware snapshot, the snapshot provides a level of consistency for the VMs in the datastore, depending on which version of VMware the datastore resides on. At a minimum, the memory for the VMs is flushed to disk prior to creating the replication.

Virtual disk replicas — An application-consistent replica of the data located on a virtual disk mounted to a virtual machine. This does not replicate the system drive but a separate virtual disk that contains application data.

This section describes how to configure your Celerra-based VMware SRM environment to facilitate each of these two types of VMware replications.

Configuring a Celerra VMware SRM setup for VMFS replicas

Perform the following steps to configure your environment for VMFS replication:

1. Create a physical or virtual machine in your VMware SRM protection site. You should configure one (or more) proxy hosts in the protection site that are configured for the VirtualCenter that serves the protection site. You should also configure one (or more) proxy hosts in the recovery site that are configured for the VirtualCenter that serves the recovery site.


Replication Manager proxy hosts must not be protected by VMware SRM in Celerra environments. Replication Manager failover of the replica reconfigures which proxy host should be used for replication tasks in Celerra environments. The Replication Manager proxy host may be the same physical or virtual host that serves as the Replication Manager server.

2. Install the Replication Manager Agent software on the selected physical or virtual machine. This host will be available in the protection site.

Replication Manager uses this proxy host to communicate with the VMware VirtualCenter software for the protection site. This allows Replication Manager to interact with VMware and perform the tasks necessary to create, mount, unmount, restore, and expire VMFS replicas between any ESX server on the site.

3. Choose either a virtual machine or a physical machine in your VMware SRM recovery site and install Replication Manager Agent software on that recovery site location to act as a proxy server for the recovery site.

Note: Replication Manager Proxy hosts on both the protection site and the recovery site must have IP access to the storage arrays on both sites.

4. Install the Replication Manager server software and protect it from a disaster. Disaster protection for the server must be provided as described in "Setting up Replication Manager Server disaster recovery" on page 342. In Celerra environments, the server cannot be protected using VMware SRM.

5. Register the two new proxy hosts on the Replication Manager Server. Refer to the EMC Replication Manager Product Guide for instructions on how to register a VMware proxy host on the Replication Manager Server.

6. Make the following modifications to the VMware Infrastructure Clients that constitute the VMware SRM environment:

• Set the LVM EnableResignature flag to 1.

• Install the VMware Site Recovery Map extension (Plugins > Manage Plugins on the VMware Virtual Infrastructure menu).

• Install the Celerra Site Recovery Adaptor (SRA).

Figure 69 illustrates the location of Replication Manager Proxy Servers in a Celerra VMware SRM environment.


VirtualCenter VMware SRM

RM Proxy 1

Virtual Machines

App App App

OS OS OS

VMFS VMFS VMFS

Celerra

Replicator

Link

(created by RM)

VirtualCenter VMware SRM RM Proxy 2

Virtual Machines

App App App

OS OS OS

VMFS VMFS VMFS

VMware Virtual Infrastructure

Protection Site

VMware Virtual Infrastructure

Recovery Site

Celerra

Replicator

Link

(created by RM)

Celerra

Protection site

Celerra Recovery site

Figure 69

Configuring a Celerra

VMware SRM setup for virtual disk replicas

Location of Replication Manager Proxy Servers in a Celerra VMware

SRM environment

To configure a Celerra VMware SRM setup for virtual disk replicas:

1. Install Replication Manager Agent software on the production and mount virtual machines associated with the virtual disks you want to replicate in the protection site.

2. Install Replication Manager Agent software on the production and mount virtual machines associated with the virtual disks you want to replicate in the recovery site.

3. Install the Replication Manager server software and protect it from a disaster. Disaster protection for the server must be provided as described in "Setting up Replication Manager Server disaster recovery" on page 342. In Celerra environments, the server cannot be protected using VMware SRM.

4. Register the virtual machine hosts to the Replication Manager Server.

5. Register a virtual machine for mounting virtual disks on each site, registered to the respective VirtualCenter. The mount virtual machine on the recovery site can be used to mount a remote replica. The mount virtual machine can also be configured as a protected virtual machine for SRM failover.

6. Make the following modifications to the VMware Infrastructure Clients that constitute the VMware SRM environment:
• Set the LVM EnableResignature flag to 1 (see the command-line sketch after this list).
• Install the VMware Site Recovery Map extension (Plugins > Manage Plugins on the VMware Virtual Infrastructure menu).
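The documented method for setting the resignature flag is the Advanced Settings dialog of the VMware Infrastructure Client. As a sketch only, assuming an ESX 3.x host with service console access, the same flag can be checked and set from the command line:

# Check the current value of the LVM volume resignaturing flag
esxcfg-advcfg -g /LVM/EnableResignature
# Set the flag to 1
esxcfg-advcfg -s 1 /LVM/EnableResignature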

Creating a Celerra Replicator relationship

In a Celerra environment, you can use Replication Manager to create a Celerra Replicator relationship for you automatically as part of your VMware SRM configuration.

Follow these steps for all the entities that you plan to protect with SRM:

1. Configure an application set.

• When creating virtual disk replicas the selections are identical to working in non-SRM environments. When choosing objects, expand applications and select application objects.

• When creating VMFS replicas the selections are also similar to non-SRM environments, with the exception of the choice of objects. Expand the tree and select one or more VMFS datastores to replicate. If you are replicating virtual disks in a separate application set, you should not specify the datastores for those virtual disks.

Note:

VMFS replicas are crash consistent copies of the VMFS and include snapshots of each virtual machine that allow you to revert the virtual machines to a consistent state. These replicas do not provide application consistency.

2. Configure jobs. When configuring jobs to prepare for a VMware SRM environment, you have the option to set the Replication Source to Celerra SnapSure or Celerra Replicator on the Job Name and Settings panel of the Job Wizard.

Note:

Currently the Celerra adaptor prevents the protection and recovery Celerras from having Celerra Replicator sessions with any other Celerras (the adaptor supports peer-to-peer connections only). Additionally, SRM is not supported if the Celerra Replicator sessions are between movers on the same Celerra, referred to as "local configuration."

The following describes the effect of choosing each of these options:

Celerra SnapSure — Choose this option to make a replica of the data on the protection site. This type of replica exists only on the protection site. It is restorable on the protection site. Upon failover, the contents of this type of replica are accessible after you perform the Replication Manager failover operation.

Celerra Replicator — Choose this option to make a replica of data on the recovery site. This uses the proxy host on the protection site to create the replica and the proxy host on the recovery site to mount the replica. The replica can be mounted on the recovery site and would still be available in the event of a failover as a local replica (once you perform the Replication Manager failover operation).

3. Run the jobs to create the Celerra Replicator link necessary to protect your data with VMware SRM.


VMware SRM steps

If you have not already done so, perform these VMware SRM steps to configure your VMware SRM environment:

1. Create the structure of the protection and recovery sites with ESX servers in the recovery site that mirror the protection site.

2. On the protection site, create a protection group including each datastore you intend to protect with VMware SRM.

Note:

Make sure that no Replication Manager replicas of the protection site are mounted while you create the protection group.

3. Use the VMware Virtual Infrastructure Manager to configure each virtual machine within the protection group that you want to fail over. A placeholder for each virtual machine you configure for failover appears in the recovery site.

4. On the recovery site, create a recovery plan for each protection group you created on the protection site. Choose the corresponding protection group.

For more information on how to complete these VMware SRM tasks, refer to your VMware documentation.

Replication Manager before SRM failover on Celerra

VMware SRM application sets and jobs

Once the VMware SRM environment has been configured, this section describes how to use Replication Manager in a VMware SRM Celerra environment.

Remember to create a protection group on the protection site of your SRM environment, and a recovery plan on the recovery site.

In a Celerra environment, the setup of your VMware SRM environment requires you to create application sets and jobs and then run the jobs to set up the Celerra Replicator links. Therefore, once the setup steps have been completed, all the Replication Manager application sets and jobs already exist. To use them, you can run them on demand or schedule them to run as needed.

For a full description of how to use Replication Manager to create application sets and jobs, refer to the EMC Replication Manager Product Guide and the online help.

Mounting VMFS replicas of SRM environments

When mounting a VMFS replica of a VMware SRM environment as part of the job, or on demand, you must type in the name of the ESX Server you plan to mount to in the Mount host field.

Note:

Replication Manager does not validate the ESX Server name you enter in the mount host field. The name you enter must be part of the site on which you are performing the replication. If you chose Celerra SnapSure, the mount host ESX Server must be part of the protection site. If you chose Celerra Replicator, your mount host ESX Server must be part of the recovery site. You must also be sure to choose the appropriate Replication Manager Proxy Host.

Replication Manager after SRM failover on Celerra

In a Celerra environment, managing a VMware SRM failover requires that you follow these steps:

1. If this is a real disaster, perform the Replication Manager Server disaster recovery steps outlined in the Celerra DR section. If the original Replication Manager Server is still valid, there is no need to perform Replication Manager Server disaster recovery steps.

2. Right-click each Celerra Replicator replica and select Perform Post Failover SRM Steps.

3. When prompted for a host name, select the appropriate host as follows:

• For a clone replica of a VMFS application set, select the new proxy host on the recovery site.

• For a clone replica of a virtual disk replica, select the production host name of the VM(s) that were failed over. In most cases, the production host name will be the same.

4. After failover to the new proxy host completes, reconfigure the Replication Manager Celerra Replicator jobs to point back to the original Celerra (if it still exists) or a new target Celerra.

5. Rename all the failed-over VMFS datastores on the DR ESX server that have been renamed due to the LVM EnableResignature flag. Each failed-over VMFS datastore will have a prefix starting with snap-<ID>- followed by the original VMFS datastore name.

Results of a failover in a Celerra environment

Replication Manager components are now failed over and ready to start running again in the new environment. The following statements are true in the new environment:

• No replicas that existed before failover have been lost.
• Celerra Replicator replicas that existed prior to the SRM failover are now considered “local” replicas and may be restored to the failed-over VMFS, or mounted.

• Celerra SnapSure replicas that existed prior to the SRM failover are now considered “remote” replicas and may be mounted (assuming the original production Celerra is still operational).

Configuring SRM and Replication Manager on RecoverPoint

Configuring a RecoverPoint VMware SRM setup for VMFS replicas

Replication Manager can create two different kinds of VMware replicas in an SRM environment as follows:

VMFS replicas — A crash-consistent replica of the VMware VMFS (datastore). Prior to the replication, Replication Manager will create a VMware snapshot for every live virtual machine in the datastore. This VMware snapshot remains for each VM on the replica, and will be removed for each live VM after the replica is created.

Note:

When Replication Manager creates the VMware snapshot, the snapshot provides a level of consistency for the VMs in the datastore, depending on which version of VMware the datastore resides on. At a minimum, the memory for the VMs is flushed to disk prior to creating the replication.

Virtual disk replicas — An application-consistent replica of the data located on a virtual disk mounted to a virtual machine. This does not replicate the system drive but a separate virtual disk that contains application data. Due to the consistency group requirements for RecoverPoint in an SRM environment, you cannot make virtual disk replicas of SRM-protected disks. You must use Replication Manager to replicate the VMFS that the virtual disks reside on.

This section describes how to configure your RecoverPoint-based VMware SRM environment to facilitate each of these two types of VMware replications.

Perform the following steps to configure your environment for VMFS replication:

1. Create a physical or virtual machine in your VMware SRM protection site. You should configure one (or more) proxy hosts in the protection site that are configured for the VirtualCenter that serves the protection site. You should also configure one (or more) proxy hosts in the recovery site that are configured for the VirtualCenter that serves the recovery site.

Replication Manager proxy hosts must not be protected by VMware SRM in RecoverPoint environments. Replication Manager failover of the replica reconfigures which proxy host should be used for replication tasks in RecoverPoint environments. The Replication Manager proxy host may be the same physical or virtual host that serves as the Replication Manager server.

2. Install the Replication Manager Agent software on the selected physical or virtual machine. This host will be available in the protection site.

Replication Manager uses this proxy host to communicate with the VMware VirtualCenter software for the protection site. This allows Replication Manager to interact with VMware and perform the tasks necessary to create, mount, unmount, restore, and expire VMFS replicas between any ESX server on the site.

3. Choose either a virtual machine or a physical machine in your VMware SRM recovery site and install Replication Manager agent software on that recovery site location to act as a proxy server for the recovery site.

Note:

Replication Manager Proxy hosts on both the protection site and the recovery site must have IP access to the storage arrays on both sites.

4. Install the Replication Manager server software and protect it from a disaster. Disaster protection for the server must be provided as described in "Setting up Replication Manager Server disaster recovery" on page 342. In RecoverPoint environments, the server cannot be protected using VMware SRM.

5. Register all proxy hosts on the Replication Manager Server. Refer to the EMC Replication Manager Product Guide for instructions on how to register a VMware proxy host on the Replication Manager Server.

6. Make the following modifications to the VMware Infrastructure Clients that constitute the VMware SRM environment:
• Set the LVM EnableResignature flag to 1.
• Install the VMware Site Recovery Map extension (Plugins > Manage Plugins on the VMware Virtual Infrastructure menu).
• Install the RecoverPoint Site Recovery Adaptor (SRA).


VMware SRM steps

If you have not already done so, perform these VMware SRM steps to configure your VMware SRM environment:

1. Create the structure of the protection and recovery sites with ESX servers in the recovery site that mirror the protection site.

2. On the protection site, create a protection group including each datastore you intend to protect with VMware SRM.

Note:

Make sure that no Replication Manager replicas of the protection site are mounted while you create the protection group.

3. Use the VMware Virtual Infrastructure Manager to configure each virtual machine within the protection group that you want to fail over. A placeholder for each virtual machine you configure for failover appears in the recovery site.

4. On the recovery site, create a recovery plan for each protection group you created on the protection site. Choose the corresponding protection group.

For more information on how to complete these VMware SRM tasks, refer to your VMware documentation.

Replication Manager before SRM failover on RecoverPoint

VMware SRM application sets

Once the VMware SRM environment has been configured, this section describes how to use Replication Manager in a VMware SRM RecoverPoint environment.

Remember to create a protection group on the protection site of your SRM environment, and a recovery plan on the recovery site.

In a RecoverPoint environment, the Replication Manager application set should have all the VMFS datastores that correspond to the LUNs in the RecoverPoint consistency group that is being protected by SRM.

Mounting VMFS replicas of SRM environments

When mounting a VMFS replica of a VMware SRM environment as part of the job, or on demand, you must type in the name of the ESX Server you plan to mount to in the Mount host field.

Note:

Replication Manager does not validate the ESX Server name you enter in the mount host field. The name you enter must be part of the site on which you are performing the replication. If the replica resides on the protection site, the mount host ESX Server must be part of the protection site. If the replica resides on the recovery site, your mount host ESX Server must be part of the recovery site. You must also be sure to choose the appropriate Replication Manager Proxy Host.

Replication Manager after SRM failover on RecoverPoint

In a RecoverPoint environment, managing a VMware SRM failover requires that you follow these steps:

1. If this is a real disaster, perform the Replication Manager Server disaster recovery steps. If the original Replication Manager Server is still valid, there is no need to perform Replication Manager Server disaster recovery steps.

2. If you are using just one proxy host for both the protection and recovery sites, use the RM Console to select the VirtualCenter host and change the VirtualCenter host name to the name of the ESX server on the recovery side. (If you are using separate proxy hosts for the protection and recovery sites, this step is not required.)

3. Rename all the failed-over VMFS datastores on the DR ESX server that have been renamed due to the LVM EnableResignature flag. Each failed-over VMFS datastore will have a prefix starting with snap-<ID>- followed by the original VMFS datastore name.

4. If you are using separate proxy hosts for the protection and recovery sites, configure the proxy host on the recovery site and create application sets and jobs similar to the ones that were created for the protection site.

Replication Manager components are now failed over and ready to start running again in the new environment. All bookmarks are now updated based on the new journals. When the next job runs, Replication Manager automatically updates the replicas to reflect the RecoverPoint bookmarks available for the consistency group.

Appendix A
Using the Command Line Interface

You can perform certain Replication Manager tasks using a command line interface (CLI). This appendix describes how to install, configure, and create replicas using the CLI:

Installation with the setup command ........................................... 412

Configuration and replication with rmcli .................................... 417

Installation with the setup command

Replication Manager provides a command line interface (CLI) that allows installation of the Replication Manager Server, Agent, and Console components.

To install Replication Manager using the CLI:

1. Log in as root (UNIX) or Administrator (Windows).

2. Open a command prompt window.

3. On UNIX, execute setup.sh with the -silent option, as follows:

# cd /cdrom/cdrom0

# setup.sh -silent

On Windows, execute setup.bat with the -silent option. In the following example, D:\ is the drive letter where the DVD device is mounted:

> cd D:\

> setup.bat -silent

On UNIX, the command structure is as follows:

setup.sh -silent
[-install_oracle]
[-install_udb]
[-options_file optionsfile]

On Windows, the command structure is:

setup.bat -silent
[-options_file optionsfile]

If you omit an option, install uses the corresponding default value for that option. Table 25 on page 412 describes each option.

Table 25  Install CLI options

Options                       Description
-silent                       Command line installation of Replication Manager
-options_file optionsfile     Command line installation of Replication Manager components and options specified in the file optionsfile
-install_oracle               Installs Oracle agent
-install_udb                  Installs UDB agent

Sample commands

This section lists some sample setup commands for installing Replication Manager.

The following command installs Replication Manager software on a Windows system with all default install wizard selections:

setup.bat -silent

The following command installs the UDB agent and takes the defaults for all options:

setup.sh -silent -install_udb
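If you have edited a copy of the response file (described in the next section), you can point the silent installation at it with the -options_file option. The path shown here is only an example:

setup.sh -silent -options_file /tmp/IRresponse.txt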

Using a response file (UNIX)

For UNIX installation, you can specify additional installation options by editing a copy of the IRresponse.txt file and using the -options_file option of the setup.sh command. The IRresponse.txt file lets you specify any option that can be set from the installation wizard.

To use a response file to install Replication Manager on a UNIX machine:

1. Make a copy of IRresponse.txt, which is located at the top level of the distribution DVD.

2. Edit the copy. Remove leading '###' characters to activate a setting. "Response file options (UNIX)" on page 414 provides option descriptions and values.

3. Save the IRresponse.txt file.

4. Specify the -options_file option when you run the setup.sh command. "Sample commands" on page 413 provides examples.

Response file options (UNIX)

This section describes the options contained in the IRresponse.txt file. The options are also described in the IRresponse.txt file itself.

-P installLocation=value

The install location of the product. Specify a valid directory into which the product should be installed. If the directory contains spaces, enclose it in double-quotes. For example, to install the product to /opt/myrm:

-P installLocation=/opt/myrm

-P replicationmanageragentoracle.active=value

Installs Oracle Agent.
true — Indicates that the feature is selected for installation.
false — Indicates that the feature is not selected for installation.

-P replicationmanageragentudb.active=value

Installs UDB Agent.
true — Indicates that the feature is selected for installation.
false — Indicates that the feature is not selected for installation.

-W agentportsslandsecuremodeselectionpanel.AgentControlPort="value"

Agent control port — Used to communicate control information to the agent.

-W agentportsslandsecuremodeselectionpanel.AgentDataPort="value"

Agent data port — Used to pass data to the agent.

-W agentportsslandsecuremodeselectionpanel.AgentSSLEnabler="value"

Select SSL communication — Selecting SSL causes the agent to communicate with the server using the SSL protocol.

true — Turns on SSL.

-W agentportsslandsecuremodeselectionpanel.AgentSecureMode="value"

Agent secure mode — Determines whether scripts run on the agent are run in secure mode with a username and password.
true — Turns on secure mode.

-W configcheckerprompt.ConfigChecker="value"

Run Config Checker on the system. Determines whether Config Checker should be run on the system during client install.
yes — Indicates run Config Checker.
no — Indicates do not run Config Checker (default).
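For illustration, a minimal edited copy of IRresponse.txt might activate a few of these settings. The values shown are examples only, not recommended defaults:

# Example IRresponse.txt settings (leading ### removed to activate each line)
-P installLocation=/opt/emc/rm
-P replicationmanageragentoracle.active=true
-W agentportsslandsecuremodeselectionpanel.AgentSSLEnabler="true"
-W configcheckerprompt.ConfigChecker="yes"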

Using a response file (Windows)

Creating the response file

For a Windows installation, you create a response file by running the wizard once, and capturing the responses in a file. Unlike UNIX, you do not start with a template.

To create a response file for a Windows installation:

1. Mount the Replication Manager distribution DVD.

2. Run the following command:

E:\> setup.exe /r

where E:\ is the top level of the Replication Manager distribution DVD.

Note:

In this case, use setup.exe (not setup.bat).

The Install Wizard appears.

3. Follow the steps in the Install Wizard. The selections you make will be stored in a response file named setup.iss in the Windows directory, for example:

C:\WINDOWS\setup.iss

Installing Replication Manager with the response file

To install Replication Manager on a Windows system using a response file:

1. Mount the Replication Manager distribution DVD.

2. Run the setup.bat command with the -options_file option, specifying the setup.iss file that you created above. A sample command is:

E:\> setup.bat -silent -options_file C:\WINDOWS\setup.iss

where E:\ is the top level of the distribution DVD.

Configuration and replication with rmcli

You can use the rmcli command to perform certain configuration operations instead of using the GUI. The rmcli command is installed with Replication Manager Server, Agent, and Console.

Interactive and batch

You can run rmcli interactively or in batch mode.

To run in batch mode, use the file= option to specify the name of your batch script:

unix# rmcli file=/user/rmadmin/scripts/mybatchfile.txt

Connected to 'rmsrv02' (port=8600).

Login as user 'Admin1' successful.

/Users/

/Hosts/

/Schedules/

/Application Sets/

/Storage Pools/

/Storage/

/Tasks/

In this example, the option file= specifies a batch file that contains CLI commands (in this case, commands specifying the host, port number, user credentials, and the list command).
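For reference, a minimal sketch of such a batch file follows; the host, port, and credentials are the same example values used above:

# mybatchfile.txt - example rmcli batch script
connect host=rmsrv02 port=8600
login user=Admin1 password=E13y$em8
list
exit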

To run the command in interactive mode, omit the file= option:

> rmcli

RM-CLI (Not connected): connect host=rmsrv02 port=8600

Connected to ’rmsrv02’ (port=8600)

RM-CLI : login user=Admin1 password=E13y$em8

Login as user ’Admin1’ successful.

RM-CLI : list

/Users/

/Hosts/

/Schedules/

/Application Sets/

/Storage Pools/

/Storage/

/Tasks/

In this example, the connection, login, and list commands are entered at the RM-CLI prompt instead of in the batch script.

Recommended batch script

Previous versions of Replication Manager recommended the use of an initialization file, .erm-cli.init, to pass connection information to the CLI. Use of the file is indicated by the init=yes option, and the file is required to be located in the user’s home directory.

The initialization file continues to be supported, but its use is no longer recommended. EMC recommends use of the file= option alone for specifying all batch commands, including those for connection and login information as well as for Replication Manager operations.

Refer to the Replication Manager Administrator’s Guide, versions 5.2.1 and earlier, for information on the initialization file.

How to run rmcli in interactive mode

To run rmcli in interactive mode:

1. Open a command window on a computer running Replication Manager Server or Console, or access it using Telnet or some other mechanism.

2. Change directory to the gui directory (typically under C:\Program Files\EMC\rm or /opt/emc/rm).

3. Enter the rmcli command:

> rmcli

RM-CLI (Not connected) :

4. Enter commands at the RM-CLI prompt. "rmcli commands" on page 421 describes each command.

5. To exit rmcli, use the exit command.

How to run rmcli in batch mode

To run rmcli in batch mode:

1. Create a batch file containing rmcli commands. "Script structure" on page 419 and "rmcli commands" on page 421 provide more information.

2. Run the following command. The option file=batch_file specifies the batch file you created in step 1:

> rmcli file=batch_file

Connected to 'rmsrv02' (port=8600).

Login as user 'Admin1' successful.

output from batch file is displayed here

Script structure

This section describes script structure in Replication Manager:

Each command must be on its own line. For example:
connect host=rmsrv02 port=65432
login user=Administrator password=E13y$em8
list name=Application Sets
exit

Start a comment with the # character:

# Here is my comment on the script.

Use the print command to print out a literal string to the console while the script runs. A sample command is:
print "This is a literal string."

Use the exit command as the last command in the batch file:
command
command
exit

Using if/then/else conditionals

You can incorporate if/then/else statements into your scripts to perform a task only if another task is completed successfully. The following is an example of a script segment with this kind of conditional:

...
if mount-replica job=oracle1 name=05/03/2004 15:08:23 client=oraclehostc1 auto-mount=yes auto-unmount=yes mount-options=Read Only
then delete-replica name="05/03/2004 15:08:23" appset=oracle1
else print "The script did not run successfully, did not delete replica."

In this script segment, Replication Manager attempts to mount an existing replica and run a script to print reports from that replica. If the replica mounts and the script runs successfully (meaning it returns a zero on the exit code), then Replication Manager deletes the replica immediately, freeing that storage for other uses since it is no longer needed for report generation. If the script fails, Replication Manager prints an error message to the log.

Note:

The if/then/else structure in the CLI does not allow you to include more than one command in the then or else sections of the if/then/else structure.

Using if-not/then/else conditionals

The conditional command if-not/then/else is similar to if/then/else, but it performs a certain task only if another task does not run successfully. The following script segment shows an example using if-not/then/else:

...

if-not run-job name="scrub data" appset=oracle1
then run-job name="run reports" appset=oracle1
else exit -1
exit

In the previous script segment, Replication Manager checks for the existence of an activity named "scrub data" on the job. If that activity exists, it runs that activity. If the activity does not exist, Replication Manager attempts to run the activity named "run reports." If neither of these activities exists, Replication Manager aborts the operation and exits with an error status of -1 (failure).

If either of the commands succeeds, Replication Manager exits with a 0 status (success).

Note:

The if-not/then/else structure in the CLI does not allow you to include more than one command in the then or else sections of the structure.

Script example

To run in batch mode, you must provide all the required information at once, either in a file or on the command line. Use batch mode to create scripts that can perform tasks automatically. The following is an example of commands as they would appear in a script:
login user=jdoe password=mypassword
run-job job=Department name=Department Test Activity
exit

Once you create the file with the batch commands, you can run it using a command such as:

rmcli host=rmsrv02 port=23456 file=file

where file is the name of the file that contains the batch commands.

rmcli commands

This section describes the commands that you can enter at the RM-CLI prompt or in a script file:

change-app-creds
connect
delete-replica
disable-schedule
dr-set-primary
dr-get-state
edit-replica
enable-schedule
exit
help
list
list-properties
login
mount-replica
print
run-job
set prompt
set-client-options
set-server-options
simulate-job
sleep
unmount-replica

The change-app-creds command

The change-app-creds command changes the username, password, and related values that Replication Manager uses to access an application. The syntax is:
app-type=[oracle|sql|exchange|udb]
Specifies the application. For SharePoint, use the app-type=sql option for each SQL Server instance in the farm; the password must be the same for each instance.


host=hostname

Specifies the application host.

app-name=application_instance_name

Specifies the application instance.

user-name=instance_username

Specifies the username that Replication Manager will use to access the application instance.

password=instance_password

Specifies the password that Replication Manager will use to access the application. ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).

confirm-password=instance_password

Specifies the confirmation password that Replication Manager will use to access the application.

[use-windows-auth=[yes|no]]

Specifies whether Windows authentication is used. This is mandatory for Exchange. Use it for SQL Server if Windows authentication is in use.

[windows-domain=domain_name]

Specifies the Windows domain name. Required for Exchange, and for SQL Server if Windows authentication is used.

[udb-db-name=UDB_database_name]

[udb-db-user-name=UDB_database_user_name]

[udb-db-password=UDB_database_password]

[udb-db-confirm-password=UDB_database_confirmation_password]

Specifies the UDB database name, username, password, and confirmation password. Use these options if you want to change the credentials of a specific UDB database.

(To change the credentials of the UDB instance and all its databases when all of them are the same, use the user-name=instance_username, password=instance_password, and confirm-password=instance_password options listed earlier.)

Only one set of udb-db-* options can be used for each change-app-creds command.

Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).
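For example, the following sketch changes the credentials that Replication Manager uses for a SQL Server instance; the host, instance, domain, and account values are hypothetical:

change-app-creds app-type=sql host=sqlhost01 app-name=SQLINST1 user-name=rmsvc password=NewPass1 confirm-password=NewPass1 use-windows-auth=yes windows-domain=CORP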

The connect command

Use the connect command to connect to a Replication Manager Server. The syntax is:
connect host=host port=port
host=host specifies the Replication Manager Server to which to connect.
port=port specifies the port number with which to communicate with the server.

For example:
connect host=rmsrv02 port=23456

Use the connect command only when you are not using the host and port parameters directly as part of the rmcli command line. This connect command is an alternative to that connection method.

The delete-replica command

Use the delete-replica command to delete any replica (not only those created by the run-job command). The command syntax is:
delete-replica [ name=replica_name | position=[first|last] ] appset=application_set_name
name=replica_name specifies the replica to delete by name.
position=first deletes the first (oldest) replica for the application set; position=last deletes the newest.
appset=application_set_name specifies the name of the application set that created the replica.

Note:

The delete-replica command replaces the old expire-replica command. Scripts using expire-replica will continue to work. New scripts should use delete-replica only.
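For example, the following sketch deletes the oldest replica of a hypothetical application set named oracle1:

delete-replica position=first appset=oracle1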

The disable-schedule command

The disable-schedule command accepts a schedule name and disables that schedule until it is reenabled. The command syntax is:
disable-schedule name=schedule_name

The dr-set-primary command

The dr-set-primary command changes the state of a secondary Replication Manager Server to a primary server, in a Replication Manager Server disaster recovery configuration.

The dr-get-state command

The dr-get-state command returns the disaster recovery state of the Replication Manager Server. The syntax is:
dr-get-state

For example:
dr-get-state
0 SECONDARY ACTIVE

Table 24 on page 348 describes dr-get-state results.

The edit-replica command

The edit-replica command is used to modify the display name of a replica. The command syntax is:
edit-replica [name=replica_name | position=first|last] appset=application_set_name display-name=replica_display_name
name=replica_name or position=[first|last] specifies the replica to rename. position=first renames the first (oldest) replica for the application set. position=last renames the newest.
appset=application_set_name specifies the application set for the replica.
display-name=replica_display_name is the option that specifies the new name for the replica.

The enable-schedule command

The enable-schedule command accepts a schedule name and enables a schedule that was previously disabled. The command syntax is:
enable-schedule name=schedule_name

The exit command

Use the exit command to exit CLI mode, in either batch or interactive mode. Use the exit command at the end of a batch script. The syntax is:
exit [ n ]
rmcli will exit with the return value specified by n.

The help command

The help command lists the usage statement for all CLI commands that are defined in this section.

The list command

Use the list command to list folders and elements in the Replication Manager tree (as shown in the console). The command syntax is:
list name=node recursive=[yes|no] format=[long|short] [add-replica-name=yes|no]
name=node is the name of the node or folder in the tree to start the list.
recursive=yes displays all nodes or subfolders. recursive=no displays only one level of the tree.
format=long displays the full path to each node. format=short displays an indented list of nodes.
add-replica-name=yes also shows the names of existing replicas.

The following example lists the top-level nodes of the Replication Manager tree:

CLI> : list

CLI> : list

/Users/

/Hosts/

/Schedules/

/Application Sets/

/Storage Pools/

/Storage/

/Tasks/

The next example lists the nodes under the Application Sets folder:

CLI> : list name=Application Sets recursive=yes

/Application Sets/

/Application Sets/localsql1/

/Application Sets/localsql1/Jobs/

/Application Sets/localsql1/Jobs/2sqldbjobforlocalsql1

/Application Sets/localsql1/Jobs/newjob

/Application Sets/localsql1/Jobs/recoversqljob

/Application Sets/localsql1/Replicas/

/Application Sets/localsql1/Replicas/12-21-2004 13:43:41

/Application Sets/localsql1/Replicas/12-21-2004 13:58:14

/Application Sets/localsql1/Replicas/12-21-2004 15:16:21

/Application Sets/localsql1/Replicas/12-21-2004 15:19:49

/Application Sets/localsql1/Replicas/12-21-2004 15:23:59

The list-properties command

Use the list-properties command to display the details of an element in the Replication Manager tree. This command displays the same information that is displayed in the content panel of the Replication Manager Console. The command syntax is:
list-properties name=node

name=node specifies the name of the node in the tree whose properties you want to display. Use the full path as shown below.

The following example lists the properties of the selected node:

list-properties name=/Application Sets/localsql1

Application Set Name: localsql1

Description: Testing local sql db

Expiration Enabled: Yes

Owner: Administrator

Host Name: rmsrv02

Administrator: Access

DBA: Access

Operator: Access

Powerdba: Access

Poweruser: Access

The login command

Use the login command to log in after connecting to a server. The command syntax is:
login user=username password=password
user=username specifies a Replication Manager user.
password=password is the user's password. Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).

For example:
login user=jdoe password=E13y$em8
logs the user into the server using the account jdoe and the password E13y$em8.

The mount-replica command

Use the mount-replica command to mount a replica to one or more alternate host(s). The basic command syntax is:

mount-replica appset=application_set_name
[name=replica_name] | [position=first|last]
[apit=yes|no] time= (mandatory if apit=yes)
[replica-name=replica_name]
[job-name=job_name]
[source-client=original_hostname]
client=alternate_host
client-interface=alternate_host_interface
discard-mount-changes=yes|no
mount-read-only=yes|no
cluster-volume-group-import=yes|no
additional options ... (see additional sections below)

appset=application_set_name is the name of the application set from which the replica was created.
Indicate a specific replica with the name=replica_name option, or with position=first (oldest replica) or position=last (newest replica).
apit=yes mounts an any-point-in-time replica for the date and time given in the time= parameter, which specifies the point in time that should be mounted.
replica-name=replica_name is the name of the replica that this job is designed to copy.
job-name=job_name is the job that creates the replica that this job is designed to copy.
source-client= is the name of the host on which the data was residing when the replica was taken. This field is required for mounts in SharePoint and federated environments.
client=alternate_host specifies the alternate host where the replica is to be mounted.
client-interface=alternate_host_interface (Celerra NFS only) indicates a hostname or IP address for a specific network interface on the mount host (or ESX server) when a Celerra NFS replica is mounted. If a hostname is used, it must be resolvable to an IP address on the mount host or ESX server as well as the Celerra Data Mover.
discard-mount-changes specifies whether changes made while the replica was mounted should be retained.
mount-read-only specifies whether the replica is mounted read-only. This option is valid for Windows Server 2003 and Windows Server 2008 only.
cluster-volume-group-import specifies whether to import the volume group to all cluster nodes. Applies to mounts on HP Serviceguard or IBM HACMP only.

maintain_lun_visibility specifies whether to maintain visibility of CLARiiON SnapView snaps and clones, or Full SAN Copy LUNs, after a replica is unmounted.

Additional options to the mount-replica command are listed in the following sections:

Mount path types

mount-path-type=Use original production host location
Use this option to mount the replica to its original path. Use it only if you are mounting the replica to an alternate host. Do not use when mounting a VMware NFS datastore replica if the proxy (Linux) host is also the mount host.

mount-path-type=Use an alternate root with the original path(s) appended
alternate-root=alt_root
alternate-root=alt_root specifies the new root path to which the replica will be mounted. For example, if you specify alternate-root=/mymount, and your application data files reside in a path called /payroll/reports/, the replica will be mounted at /mymount/payroll/reports/.

mount-path-type=Use pathname substitution table
opath1=original_path_1
apath1=alternate_path_1
opath2=original_path_2
apath2=alternate_path_2
...
opathn=original_path_n
apathn=alternate_path_n
Use this option and the opathn=original_path_n and apathn=alternate_path_n pairs to specify alternate paths on which to mount one or more filesystems.

File system options

app-name=filesystem
Use this option to specify the file system to be mounted.

Oracle options

app-name=oracle_instance_name
mount-options=[No Recovery | Read Only | Recover | FSMount | Catalog RMAN]
os-user=user
oracle-home=dir
alternate-database=alt_database
alt-sid=[yes | no]
alt-sid-name=alt_sid_name
oracle-password=password

app-name=oracle_instance_name specifies the Oracle instance to mount.
mount-options=No Recovery performs a file system mount only.
mount-options=Read Only performs a recovery and restarts the database in read-only mode. Use this option for performing tests on the database.
mount-options=Recover performs a recovery and restarts the database in read-write mode. This option could prevent you from restoring this replica in the future (since it applies the reset logs to the database when recovered in read-write mode). Make sure you understand the implications before you specify this option.
mount-options=FSMount performs a filesystem mount, which allows access to the files on that filesystem without using any Oracle functions. You can decide to attach the database files to the Oracle instance on the mount host if desired. Specifying this option could prevent you from restoring this replica in the future since it applies the reset logs to the database. Make sure you understand the implications before specifying this option.
mount-options=Catalog RMAN performs a mount, which allows Replication Manager to prepare the initfile, start the Oracle instance, and mount the database from the Oracle replica so that the Oracle Recovery Manager (RMAN) can catalog it and use that catalog information to perform RMAN operations with that replica. This uses the STARTUP MOUNT; command.
os-user=user specifies the username associated with the operating system user that is the owner of the Oracle binaries. This field is mandatory for UNIX-based hosts.

Note:

In previous releases there was also an os-user-pw=password option, but that has been deprecated. Replication Manager no longer requires the password associated with the os-user. If you submit this option, Replication Manager ignores it.

oracle-home=dir specifies the location of the Oracle user's home directory on the mount host.
alternate-database=alt_database specifies an alternate database name to which Replication Manager will perform the mount.
alt-sid=yes and alt-sid-name=alt_sid_name mounts the database using an alternate Oracle SID name. If you choose to mount the database onto the production host, you may need to mount it to an alternate SID name for the mount operation to complete successfully. Replication Manager creates the SID if it does not already exist.
oracle-password=password specifies the SYS user password. Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).
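As an illustration only (the application set, mount host, instance, and path values are hypothetical), the following mounts the newest Oracle replica read-only on an alternate host:

mount-replica appset=oracle1 position=last client=orahost02 app-name=ORCL mount-options=Read Only os-user=oracle oracle-home=/home/oracle oracle-password=syspassword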

ASM options

dg-rename-prefix=prefix
asm-oracle-home=oracle_home

dg-rename-prefix=prefix specifies the string to be used as a prefix when renaming an Oracle ASM disk group.
asm-oracle-home=oracle_home specifies the ORACLE_HOME value for ASM.

RAC options

mount-to-rac=[ yes | no ]
mount-to-rac= specifies whether you are mounting to a RAC cluster or not.

Catalog RMAN options

rman-username=username
rman-password=password
rman-connectstring=connectstring
rman-tnsadmin=tnsadmin

rman-username= username of the account used to access RMAN. This is also known as the Recovery Catalog Owner.
rman-password= password of the account used to access RMAN.
rman-connectstring= connect string Replication Manager should use to communicate with RMAN.
rman-tnsadmin= points to the directory where the network configuration files (such as TNSNAMES.ora) for RMAN reside.

UDB DB2 options

app-name=udb_instance_name[/db_name]
mount-options=[ No Recovery | Snapshot | FSMount ]
udb-inst-username=username
udb-inst-password=password
udb-altinstance-name=instance_name
udb-altdatabase-name=db_name

app-name=udb_instance_name specifies the UDB instance to mount. To specify mount options for an individual database, use the syntax app-name=udb_instance_name/db_name.
mount-options=Snapshot recovers the database in Snapshot mode. This results in a crash recovery, making the database consistent (the system starts a new log chain and applies the archive logs). This option does not allow you to roll forward using any logs from the original database. The database will be fully available after the mount, and can perform any operations, including backup.
mount-options=No Recovery is not a standard UDB recovery mode. Replication Manager can import archive logs and control files to the ERM_TEMP_BASE directory (or /tmp if ERM_TEMP_BASE has not been defined) and restores the archive logs and control files on the new host. Subsequently, the Database Administrator can log in to the instance and recover the database manually by using the stored control file to bring the database online.
mount-options=FSMount performs a filesystem mount, which allows access to the files on that filesystem without using any UDB functions.
udb-inst-username=username and udb-inst-password=password specify the username and password of the UDB database in the replica. The account must have the proper permissions to access the database or the mount will fail. Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).
udb-altinstance-name=instance_name specifies the name of the alternate instance on the server where you plan to mount the replica. If you are mounting to the production host, you must use a separate instance to mount the database. Refer to the IBM UDB documentation for more information about how to create a separate instance.
udb-altdatabase-name=db_name mounts the UDB database under a different name. Replication Manager creates and mounts the replicated database using the new name you specify. This option is only available if mount-options=Snapshot.

SQL Server options

app-name=sqlserver_instance_name[/db_name]
mount-options=[ No Recovery | Recovery | Standby | Attach Database | Filesystem Mount ]
sqlserver-instance-name=instance_name
sqlserver-username=username
sqlserver-password=password
sqlserver-windows-domain=domain_name
alternate-database=db_name
metadata-path=path

app-name=sqlserver_instance_name specifies the SQL Server instance to mount. To specify mount options for an individual database, use the syntax app-name=sqlserver_instance_name/db_name. (The option sqlserver-instance-name=instance_name is for compatibility with earlier versions of Replication Manager.)
mount-options=No Recovery instructs the restore operation not to roll back any uncommitted transactions. Use this option if you want to apply transaction logs to the mounted copy of the database. The database is unusable in this intermediate, non-recovered state. Available with VDI mode only.
mount-options=Recovery instructs the restore operation to roll back any uncommitted transactions. After the recovery process, the database is ready for use. Available with VDI mode only.
mount-options=Standby restores files and opens the database in read-only mode. Subsequently, the Database Administrator can manually apply additional transaction log backups. Available with VDI mode only.
mount-options=Attach Database mounts the file system on which the database files are located and then attaches the database to SQL Server. Available only for replicas created using consistent split and not using VDI.
mount-options=Filesystem Mount performs a filesystem mount, which allows users to access the files on that filesystem without using any SQL Server functions. Users can decide to attach the database files to the SQL Server instance on the mount host if desired. Specifying this option could prevent you from restoring this replica in the future since it applies the reset logs to the database. Make sure you understand the implications before specifying this option.
metadata-path=path specifies the path to the metadata on SQL Server on Windows Server 2003 only.
Password restrictions: ASCII only. Invalid characters: | (pipe) # (pound) ' (apostrophe) " (double quote).
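Similarly, a sketch of a SQL Server mount with recovery; all names and credentials are hypothetical:

mount-replica appset=sqlprod position=last client=sqlmount01 app-name=INST1/SalesDB mount-options=Recovery sqlserver-username=rmsvc sqlserver-password=NewPass1 sqlserver-windows-domain=CORP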

Exchange options

app-name=exchange_instance_name
eseutil=[yes|no]
utildir=dir
[run-eseutil-in-parallel=yes|no]
[required-logs=yes|no]
[required-logs-path=path_used_for_working_directory]
[throttle-eseutil=yes|no]
[throttle-pause=i/o_count_sent_betwn_one_sec_pauses]
OR
[throttle-pause-duration=time_in_milliseconds]
metadata-path=path

app-name=exchange_instance_name specifies the Exchange instance to mount.
eseutil=yes automatically runs the appropriate Exchange consistency checking utility. Use utildir=dir to specify the location of the utility.
run-eseutil-in-parallel=yes instructs Replication Manager to run eseutil checks in parallel; run-eseutil-in-parallel=no runs eseutil checks sequentially, one after the other.
required-logs=yes instructs Replication Manager to minimize log checking to required logs only and uses required-logs-path=dir as the working directory for collecting the logs during the operation.
throttle-eseutil=yes throttles the bandwidth used by the Exchange consistency checking utility using one of the following additional methods:
Use throttle-pause=number_of_I/Os_to_send to choose how many I/Os can occur between one-second pauses. You can choose any value between 100 and 10000 I/Os.
Use throttle-pause-duration=time_in_milliseconds (available in Exchange 2007/2010 only). This option allows you to specify the duration of the pause in milliseconds. 1000 milliseconds = 1 second. If this option is not available or not set, the pause will be one second long.
metadata-path=path specifies the path to the Exchange metadata for Exchange 2003 on Windows 2003, Exchange 2007, or Exchange 2010.

VMware options

proxy-host=VMFS_proxy_host
Host-Center=esx_server

proxy-host=VMFS_proxy_host identifies the machine name of the host where the Replication Manager proxy software is installed. This is the Replication Manager agent software that communicates with VMware VirtualCenter and manages the replication activities through VirtualCenter.
Host-Center=esx_server identifies the ESX server that is associated with the mount operation.

RecoverPoint options

name=replica_name
position=[first|last]
apit=[yes|no]
time=time
replica-name=name

Indicate a specific replica with the name=replica_name option, or with position=first (oldest replica) or position=last (latest replica).
apit differentiates between an any-point-in-time mount (apit=yes) and a specific-point-in-time mount (apit=no).

Note:

An any-point-in-time replica is referred to elsewhere as a crash-consistent replica. Specific-point-in-time replicas are application-consistent replicas.

If apit=yes is specified, RecoverPoint must be active on the application set. The time and replica-name parameters must be specified in this case. For an APIT, the replica-name can be any value that you want to use to identify the replica. When apit=yes is specified, the name, position, and job-name parameters are ignored.
If apit=no is specified, then a regular mount is performed (default).
time specifies the time in the format yyyy mm dd hh:mm:ss (for example, time=2007 03 06 12:34:56). Quotation marks are not allowed.

Specifying different mount options for multiple objects in an application set

If an application set has multiple objects, use the following syntax when you need to specify different mount options for the objects. The example shows the command structure for specifying multiple SQL Server objects:

mount-replica appset=application_set_name name=replica_name

...

appname=sqlserver_instance_name1

mount-options=mount_option sqlserver-username=username sqlserver-password=password

...

appname=sqlserver_instance_name2

mount-options=mount_option sqlserver-username=username sqlserver-password=password

...

Specifying different mount options for multiple hosts in an application set

If an application set has multiple hosts (a federated application set), specify each source-client that you want to mount, followed immediately by the client to which you want to mount that production host's data. Omit any source-clients that you do not want to mount.

Use the following syntax when you need to specify different mount options for each production host:

mount-replica appset=application_set_name name=replica_name

...

source-client=production_host1_name client=mount_host_for_production_host1_data

info for one or more app on that host goes here

...

source-client=production_host2_name client=mount_host_for_production_host2_data

info for one or more app on that host goes here

...

In the example above, the mount host specified in the client parameter of each set can be the same mount host (to mount more than one set of production data to the same host) or different mount hosts for each production host.

The print command

Use the print command to print out a literal string to the console while the script runs. The following is an example:
print "This is a literal string."

The run-job command

Use the run-job command to create replicas, mount those replicas, run scripts on a replica, and perform backups of replicas. The appset and its corresponding job must be pre-defined in the Replication Manager Console before you can run this command. The command syntax is:

run-job name=job_name appset=application_set

appset=application_set specifies the name of the application set as you defined it in the console.
name=job_name specifies the name of the job to run.
retry-count=number_of_retries specifies how many times you want Replication Manager to retry the job after a failure.
retry_wait=seconds_to_wait specifies how many seconds (0 to 3600) to wait before starting the next retry.

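For example, the following command runs a predefined job and retries it up to three times, waiting five minutes between attempts (the job and application set names are hypothetical):

run-job name=Nightly_Clone appset=Sales_DB retry-count=3 retry_wait=300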
The set prompt command

Use the set prompt command to set the command prompt to be used in interactive mode. The command structure for setting a prompt is as follows:

set prompt=new_prompt


where new_prompt is the text you want to use as a prompt. For example:

RM-CLI: set prompt=hello>
hello>

The set-client-options command

Use the set-client-options command to set the client name, debug level, and the maximum size of the log directory and log file. The command syntax is:

set-client-options name=client_name debug-level=[Normal | Debug] maximum-log-directory-size=max_dir_size maximum-size-per-log-file=max_file_size

name=client_name specifies the name of the client.

debug-level=[Normal | Debug]: Refer to Table 18 on page 118 for information on debug-level=Normal and debug-level=Debug.

maximum-log-directory-size=max_dir_size specifies the maximum size of the log file directory in bytes.

maximum-size-per-log-file=max_file_size specifies the maximum size of each log file in bytes.

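For example, the following command keeps the client log directory under roughly 100 MB and each log file under 10 MB (the client name and sizes are illustrative):

set-client-options name=prodhost1 debug-level=Normal maximum-log-directory-size=104857600 maximum-size-per-log-file=10485760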
The set-server-options command

You can set the server options in the CLI using this command. The command structure is as follows:

set-server-options name=server_name debug-level=[Normal | Debug] maximum-log-directory-size=max_dir_size maximum-size-per-log-file=max_file_size

name=server_name specifies the name of the server.

debug-level=[Normal | Debug]: Refer to Table 18 on page 118 for information on debug-level=Normal and debug-level=Debug. Do not set debug-level=Debug unless instructed to do so by Customer Service. The debug log fills quickly at this level.

maximum-log-directory-size=max_dir_size specifies the maximum size of the log file directory in bytes.


maximum-size-per-log-file=max_file_size specifies the maximum size of each log file in bytes.

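A similar sketch for the server, with an illustrative server name and sizes:

set-server-options name=rmserver1 debug-level=Normal maximum-log-directory-size=209715200 maximum-size-per-log-file=10485760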
The simulate-job command

Use simulate-job to simulate all actions of the run-job command (create replicas, mount those replicas, run scripts on a replica, and perform backups of replicas) without actually performing them. The activity and its corresponding job must be pre-defined in the Replication Manager Console before you can run this command. The command syntax is:

simulate-job name=job_name appset=application_set_name run-time=HH:MM

name=job_name specifies the job as you defined it in the console.

appset=application_set_name specifies the application set to which this job is assigned.

run-time=HH:MM is the hour (HH) and minute (MM) at which to start the job (in 24-hour format). This is an optional parameter.

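For example, the following command simulates the job as though it were started at 11:30 PM (the job and application set names are hypothetical):

simulate-job name=Nightly_Clone appset=Sales_DB run-time=23:30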
The sleep command

The sleep command suspends execution for a period of time defined by the seconds option. The command syntax is:

sleep seconds=noOfSecs

seconds=noOfSecs specifies the number of seconds to suspend execution.

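For example, sleep seconds=30 suspends execution for 30 seconds, which can be useful between dependent commands in a script.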
The unmount-replica command

Use the unmount-replica command to unmount a replica that has been mounted to an alternate host. The command syntax is:

unmount-replica [name=replica_name | position=[first|last]] appset=appset_name

Use either the name= option or the position= option to specify the replica to unmount:

name=replica_name specifies the name of the replica (usually a date format describing when the replica was created).

position=first specifies the first (oldest) replica for the application set; position=last specifies the newest.

appset=appset_name is the name of the application set from which this replica was created.

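For example, the following command unmounts the most recent replica of a hypothetical application set:

unmount-replica position=last appset=Sales_DB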

If you want to unmount a replica of a federated application set, you can optionally choose to unmount only selected parts of the replica, based on where those parts of the replica originated.

For example, if three production hosts are part of the replica and all three are mounted, but you want to unmount only two of them, use the following syntax:

unmount-replica name=replica_name appset=appset_name
source-clients=production_host1, production_host2
[proxy-host=VMFS_proxy_host]
[Host-Center=esx_server]
[cdp-mount=cdp|crr]

The data that originally resided on the production hosts listed in the source-clients parameter will be unmounted. Any data that resided on hosts that are not listed remains mounted.

If source-clients is omitted, all mounted data will be unmounted.

proxy-host=VMFS_proxy_host identifies the machine name of the host where the Replication Manager proxy software is installed. This is the Replication Manager agent software that communicates with VMware VirtualCenter and manages the replication activities through VirtualCenter.

Host-Center=esx_server identifies the ESX server that is associated with the unmount operation.


Index

A

administrator

CLARiiON account 143

host access privileges 79

password 72

agent component

described 46

installing 57

agent.config file 144

AIX

required ODM packages 43

application sets

federated 211

application sets, federated 186

automatic discovery 231

available targets 232

B

backing up the internal database 109

baseline snapshot 244

basic disks 33

C

Celerra Network Server

configuring iSCSI targets and LUNs 215, 239
create targets 215, 239

disaster recovery 353

failover 370

post-failover considerations 377

Celerra Replicator

available destination 248

concepts 243

delta between snapshots 244

destination LUN 248

failing over clone replica 258

how it works 244

interconnect security 249

local replication 243
loopback replication 243

planning 248

promoting clone replica 258

remote replication 243
replication types 243

setting up replication 248

CHAP authentication

about 228

configuring secret 227

protecting a CLARiiON 135

CIM events 86

CLARiiON iSCSI 135
configuration for CLARiiON arrays 135
prerequisites 135
using CHAP security 135

CLARiiON storage array

"EMC Replication Storage" group 162

avoiding inadvertent overwrites 142

cleaning up resources 99

connectivity record 144

drive configuration, planning 156

exposing storage to clients 163

hardware requirements 132

Leadville HBA driver support 27

Linux clients 174

LUNs 142, 161

preparing for SAN Copy replicas 166

protected restore 148


RAID groups 160

required host software 133

setting up clients 143

setup checklist 128

snapshots 149

snapshots of clones 158

supported models 131

upgrading 178

client

security mode 95

software requirements 42

cluster

requirements 38

components

overview 46

Config Checker

agent installation 59

command syntax 70

installing and running 68

connectivity record 144

console

described 47

installing 61

requirements 40

ControlCenter

integration with Replication Manager 112

D

data mover, configuring IP address 231

databases

federated 186, 211

debug messages, turning on 118

disaster recovery

application recovery 379

Celerra 353

phases 356

post-failover considerations 377

RM Server 340, 348

disk numbering 233

disk types 27

DVD contents 46

dynamic disks 33

E

EMC ControlCenter

viewing Replication Manager from 112

event messages 119

F

failback

to original primary server 350

failover

Celerra 258, 370

to secondary server 349

federated application sets 186, 211, 292, 301

file system

agent type 46, 47

requirements for Celerra Replicator 248

supported 38

H

host

defining user access to 78

deleting 83

modifying 80

user access 79

HP-UX, mounting a DVD on 57

Hyper-V 329

I

initiator, registering 138, 226

installation

agent component 57

before installing 47

console component 61

log file 62

server 54

setup command 412
using the command line 412

internal database

backup and restore 109

changing password for 111

deployment 109

installation 55

Invista

checklist 266

clone replications 272

configurations 269

planning 272

supported switches 268

virtual frames 274

virtual volume surfacing issues 276

iSCSI

configuration for CLARiiON arrays 135

iSCSI discovery 229

manual 137
multiple network portals 137

iSCSI LUN replication

configuring LUNs 233

configuring replication session 254

configuring volumes in Windows 234

managing 255

numbering LUNs 233

restoring data from remote snap 258

session listing 255

troubleshooting 262

viewing replication role 255

iSCSI replication session

querying status 256

K

KDriver

component overview 283

L

Leadville driver support 27

license

adding 94

as prerequisite 47

freeing 91
grace period 91

Hyper-V 332, 337

license file, specifying during installation 55

management 92, 93

obtaining 48

removing 94

status 90
types 90

viewing information about 93

warnings 92

log files

installation logs 62

server and client 116

sizes, setting 118

Logical Partitions 333

logon banner 89

LPARs 333

LUN

creating iSCSI 219

destination, Celerra Replicator 248

number 233

numbering 219
sizing 219

M

management events

logging 86
overview 86
management events, publishing 86

manual discovery 229

Microsoft Cluster Server 38

Microsoft Exchange

VMware virtual disk support 309

Microsoft iSCSI Initiator

available targets, list of 139

configuring on Replication Manager hosts 136

discovering target portals 137
installing 137

log in procedure 139

login procedure 232

MirrorView

configuration requirements 182

mounting a DVD on HP-UX 57

multiple network portals 229

N

nas_cel command 249

nas_replicate command 255, 256

Navisphere

agent.config file 144

configuring RAID groups 160

connecting SAN Copy SG to remote CLARiiON 171

connectivity record 144

creating administrator account 143

zoning SAN Copy 170, 196

network portal, Microsoft iSCSI Initiator

adding 137, 229

NTP 284


O

Oracle

and Replication Manager internal database 36

overview of components 46

P

patches

viewing patch level 67

ports used by RM 50

primary RM Server 340

promoting

Celerra Replicator clone 258

protected restore

Symmetrix array 202

R

RecoverPoint

components 282

enabling 77

RPA credentials 98

VMware environment 291, 311

RecoverPoint Appliance (RPA)

described 283

repurposing

Celerra Replicator clone 258

requirements

Celerra Replicator 246

client software 42

connectivity 212

console 40

hardware 25, 212

patches and service packs 214

server software 34

software 213

reverse authentication 227

rmcli

batch mode 419

change-app-creds command 421

connect command 423
delete-replica command 423

example script 420

exit command 423
expire-replica command 423

help command 424


initialization files 418

interactive mode 419

list command 425
list-properties command 425

login command 426
mount-replica command 426

Exchange options 433

Oracle options 428

RecoverPoint options 434

SQL Server options 432

UDB options 431

print command 436
run-job command 436

script structure 419

set prompt command 436

set-client-options command 437
set-server-options command 437

simulate-job command 438
sleep command 438
unmount-replica command 438

S

SAN Copy configuration checklist

CLARiiON-to-CLARiiON 166

Symmetrix-to-CLARiiON 194
intermediate device 171, 194

recommendations 197

storage group 170

storage pool 171, 196

zoning 169

secondary RM Server 340

server

description 46

disaster recovery 340, 348

installing 54

log settings 85

security options 87

software requirements 34

setup

Celerra disaster recovery 358

overview 22

RM Server disaster recovery 342

snaps of clones 158, 177

snapshots

baseline 244

monitoring snapshot cache 152

overview 149

viewing 151

vs. clones 150

SQL Server

and Replication Manager internal database 36

storage

discovering 98

excluding 104

including 99, 101

viewing 100

viewing properties 102

storage array

discovering 98

storage pool

creating 105

deleting 107
editing 107
modifying 107

symmask command 196

Symmetrix clones, configuring 206

Symmetrix storage array

array upgrade steps 208

gatekeeper requirements 189
hardware requirements 189

preparing for SAN Copy replicas 194

protected restore 202

setup checklist 186

thin device support 193

T

target portals, Microsoft iSCSI Initiator 137

time synchronizing 284

TimeFinder clones

configuring 206
restoring with application locks 206

TimeFinder snap

128-session 191

configuring VDEVs 205

multivirtual 191

troubleshooting

RM Server DR 351

U

unknown disk, configuring iSCSI LUNs 233

upgrade

general information 64

RM Server DR 346

supported upgrade paths 64

user account

adding 74

deleting 75
modifying 75

V

VDEV devices, configuring 203

VIO 333

Virtual I/O 333

VMware

Celerra environment 326

CLARiiON environment 289

Invista environment 320

RecoverPoint environment 291, 311

support 34

Symmetrix environment 320, 325

virtual disk support 304

VMFS support 289, 298

W

Windows

Disk Manager 233
disk numbering 233

Event Viewer 119

