EMC VNX Replication Technologies – An Overview
Abstract
This white paper highlights the VNX replication technologies. It provides
information on EMC® MirrorView™, VNX Replicator, RecoverPoint,
Replication Manager, Symmetrix® Remote Data Facility (SRDF), and
VPLEX®.
August 2013
Copyright © 2013 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as
of its publication date. The information is subject to change
without notice.
The information in this publication is provided “as is.” EMC
Corporation makes no representations or warranties of any kind
with respect to the information in this publication, and
specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in
this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC
Corporation Trademarks on EMC.com.
VMware is a registered trademark of VMware, Inc. All other
trademarks used herein are the property of their respective
owners.
Part Number h12079
VNX Replication Technologies
2
Table of Contents
Executive summary.................................................................................................. 4
Audience ............................................................................................................................ 4
Terminology ............................................................................................................ 5
Introduction ............................................................................................................ 6
Supported Replication Technologies .................................................................................. 6
Replication Technologies Overview .......................................................................... 7
RecoverPoint ...................................................................................................................... 7
RecoverPoint Topologies ................................................................................................ 7
RecoverPoint Integration with VNX Unisphere ............................................................... 10
Virtual RecoverPoint Appliance (vRPA) .......................................................................... 11
Benefits of RecoverPoint .............................................................................................. 11
VNX Replicator.................................................................................................................. 12
Virtual Data Mover ........................................................................................................ 13
Advanced Topologies ................................................................................................... 14
Checkpoint and Incremental Attach .............................................................................. 14
Benefits of VNX Replicator ............................................................................................ 15
MirrorView ........................................................................................................................ 15
MirrorView/Synchronous .............................................................................................. 15
MirrorView/Asynchronous ............................................................................................ 16
Benefits of MirrorView .................................................................................................. 17
Symmetrix Remote Data Facility (SRDF)............................................................................. 18
SRDF Replication Modes ............................................................................................... 18
VNX Gateway ................................................................................................................ 19
Unisphere Link and Launch .......................................................................................... 21
VPLEX ............................................................................................................................... 22
Benefits of VPLEX ......................................................................................................... 23
Replication Manager ........................................................................................................ 24
Benefits of Replication Manager ................................................................................... 25
Use Cases ............................................................................................................. 26
Use Case 1: Big Telecommunications Company ................................................................ 26
Use Case 2: Retail Distribution Company .......................................................................... 27
Use Case 3: Small School District ..................................................................... 29
Use Case 4: Financial Firm ................................................................................................ 30
Conclusion ............................................................................................................ 32
References ............................................................................................................ 32
Executive summary
The amount of data being generated across the world is increasing exponentially.
Data generated by various organizations is being stored, mined, transformed, and
utilized continuously. Data is a critical component in the operation and function of
organizations. Implementing data protection methodologies enables data centers to
avoid disruptions in business operations. In every data center there is a need to
replicate data for disaster recovery (DR) and redundancy. To protect data from
disasters there is a need to implement replication technologies that enable you to
securely store multiple copies of data.
There are many factors involved in choosing the correct replication solution: for
example, the amount of data that can be lost, the time taken to recover, and the
distance between the sites. VNX systems support various replication technologies
developed by EMC that protect your data from disasters. When choosing a
replication technology, it is important to select the solution that best fits
your environment.
This paper provides information about the following replication technologies
supported by the VNX series:
• RecoverPoint
• VNX Replicator
• MirrorView
• SRDF
• VPLEX
• Replication Manager
Note: This paper does not cover data migrations. Please contact EMC professional
services for guidance on choosing the best solution for migration.
Audience
This white paper is intended for EMC customers, partners, and employees who want
to evaluate and choose a replication solution that best fits their VNX implementation.
Terminology
Asynchronous Replication – A replication mode that enables you to replicate data
over long distances, while maintaining a write-consistent copy of data at the remote
site.
Bandwidth – The amount of data that can be transferred in a given period of time.
Bandwidth is usually represented in bytes per second (Bps) or MB/s.
Common Internet File System (CIFS) – An access protocol that allows data access
from Windows/Linux hosts located on a network.
Data Mover – A Data Mover is a component that runs its own operating system. It
retrieves data from a storage device and makes it available to a network client.
iSCSI Protocol – The iSCSI (internet small computer system interface) protocol
provides a mechanism for accessing block-level data storage over network
connections. The iSCSI protocol is based on a network-standard client/server model
with iSCSI initiators (hosts) acting as storage clients and iSCSI targets acting as
storage servers.
Network-Attached Storage (NAS) – File-based storage for a wide range of clients and
applications that access storage over IP connectivity.
Network File System (NFS) – An access protocol that allows data access from
Linux/UNIX hosts located on a network.
Recovery Point Objective (RPO) – RPO is the maximum amount of data that an
organization is willing to lose in case of a disaster. For example, an RPO of 30
seconds means that in case of a disaster, the data that can be lost should not be
more than the data generated in 30 seconds.
Recovery Time Objective (RTO) – RTO is the duration of time within which a business
process must be restored after a disaster. For example, an RTO of 1 hour means that
in case of a disaster, operations must be restored within 1 hour.
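The two objectives above can be illustrated with a small sketch. This is a hypothetical model, not VNX behavior: with periodic replication, the worst-case data loss is one full update interval, and recovery must finish within the RTO.

```python
from datetime import timedelta

def meets_rpo(replication_interval: timedelta, rpo: timedelta) -> bool:
    # Worst-case data loss under periodic replication is one full interval,
    # so the interval must not exceed the RPO.
    return replication_interval <= rpo

def meets_rto(measured_recovery_time: timedelta, rto: timedelta) -> bool:
    # The measured time to restore the business process must fit in the RTO.
    return measured_recovery_time <= rto

# A 5-minute update interval against a 10-minute RPO, and a 45-minute
# recovery against a 1-hour RTO, both satisfy their objectives.
print(meets_rpo(timedelta(minutes=5), timedelta(minutes=10)))   # True
print(meets_rto(timedelta(minutes=45), timedelta(hours=1)))     # True
```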
Round Trip Time (RTT) – RTT is the length of time it takes for a signal to be sent plus
the length of time it takes for an acknowledgment of that signal to be received.
Synchronous Replication – A replication mode in which the host initiates a write to
the system at local site and the data must be successfully stored in both local and
remote sites before an acknowledgement is sent back to the host.
Standby Data Mover – A data mover held in reserve against a failure of an active
partner.
Unisphere – A Web-based EMC management interface for creating storage resources,
configuring and scheduling protection for stored data. Unisphere is also used for
managing and monitoring other storage operations.
Throughput – The rate at which data is transmitted in a given amount of time,
usually represented in I/O operations per second (IOPS).
Introduction
To protect against events that may disrupt production data availability, it is
essential to have a redundant copy of the data. You can use data replication to
create this copy. Replication is a process in which data is duplicated at a remote
location, providing an enhanced level of redundancy in case the storage systems at
the main production site fail. Having a proper disaster recovery site minimizes
downtime-associated costs and simplifies recovery from a disaster.
The VNX series is available in the following configurations: VNX Block only, VNX
File only, VNX Unified, and VNX Gateway. EMC provides different replication
solutions based on the VNX configuration and the data type that needs to be
replicated.
Supported Replication Technologies
Figure 1 provides a high level overview of the replication/mirroring technologies
supported by the VNX storage systems.
Figure 1 –Supported Replication Technologies with VNX Systems
VNX Unified systems combine File and Block capabilities. You can choose the
replication technology for a unified system based on the data type that needs to be
replicated.
MirrorView, RecoverPoint, and SRDF replicate at a block level and provide protection
for the entire NAS system. Hence, for file-system or Virtual Data Mover (VDM) level
replication granularity, use VNX Replicator.
Note: This white paper focuses on the information about the replication technologies
supported by the VNX series. The same replication technologies may also support
legacy EMC Celerra (File) and EMC CLARiiON (Block) systems. Please refer to the E-Lab
Interoperability Matrix on the EMC Online Support Website for more information.
Replication Technologies Overview
RecoverPoint
The EMC RecoverPoint family provides appliance-based, continuous data protection
solutions designed to ensure the integrity of production data at local and/or remote
sites. RecoverPoint enables you to centralize and simplify your data protection
management, and allows for the recovery of data to nearly any point in time.
RecoverPoint provides efficient asynchronous replication capability over Internet
Protocol (IP) or synchronous replication over Fibre Channel networks. With
RecoverPoint, you can create point-in-time, Fibre Channel/iSCSI LUN copies on local
or remote sites using one or more storage systems.
RecoverPoint Topologies
RecoverPoint supports three replication topologies, described below:
RecoverPoint Local Continuous Data Protection
RecoverPoint Local Continuous Data Protection provides a local replication mode
that enables you to roll back to any point in time for effective recovery from events
such as database corruption or human error. Local replication mode is efficient for
replication within the local Storage Area Network (SAN) environment.
RecoverPoint Continuous Remote Replication
RecoverPoint Continuous Remote Replication provides dynamic synchronous and
asynchronous replication for disaster recovery. It provides the option to switch
between modes based on user-defined policies for throughput, latency, and
bandwidth reduction. RecoverPoint features bi-directional replication (data can be
replicated in either direction, source to destination and destination to source
systems) and any-point-in-time recovery capability. This allows the destination LUNs
to be rolled back to a previous point in time and used for read/write operations
without affecting the ongoing replication or data protection.
Concurrent Local and Remote Replication
Concurrent Local and Remote Replication protects the same LUNs locally and remotely.
It provides simultaneous block-level local and remote replication for the same
application LUNs. Recovery of one copy can occur without affecting the other copy. It
also supports bi-directional replication and any-point-in-time recovery capability.
Figure 2 – RecoverPoint Continuous Local, Remote and Concurrent Local and Remote
Replication
Multi-Site
RecoverPoint multi-site support protects data across remote/branch offices. It
reduces infrastructure vulnerability and enables replication of a primary data center
to more than one remote site. Remote multi-site synchronous and asynchronous
replication helps to meet expanding business continuity requirements. With remote
multi-site replication RecoverPoint improves business continuity and affordably
replicates remote offices to one central location.
The RecoverPoint splitter is proprietary software that is embedded on storage
subsystems and is built into the VNX, Symmetrix, and VPLEX storage systems. The
splitter is used to “split” the application writes: it sends a copy of each write to
the RecoverPoint Appliance (RPA). The splitter carries out this activity efficiently,
with little perceptible impact on host performance, since all CPU-intensive
processing necessary for replication is performed by the RPA.
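The splitter behaviour described above can be sketched conceptually. This is a hypothetical illustration (all names are invented), not the actual splitter implementation: each application write goes to the production LUN as normal, and an identical copy is handed to the RPA.

```python
# Conceptual sketch of write splitting; names are hypothetical.
def split_write(write: dict, production_lun: list, rpa_queue: list) -> bool:
    """Forward the write down the primary I/O path and hand an identical
    copy to the RecoverPoint Appliance (RPA) for replication."""
    production_lun.append(write)   # normal write to the production LUN
    rpa_queue.append(write)        # copy handed to the RPA, which performs
                                   # the CPU-intensive replication work
    return True                    # host I/O proceeds with little impact

lun, rpa = [], []
split_write({"lba": 100, "data": b"app-write"}, lun, rpa)
print(lun == rpa)  # True: the RPA sees exactly what the LUN received
```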
The RecoverPoint family includes the following three products:
• RecoverPoint/SE is targeted for VNX and CLARiiON series systems. It supports the
replication of data between LUNs that reside within the same storage system or
between storage systems. It supports one array per RPA cluster, up to two RPA
clusters, and only one array per data center for replication. Licensing is per
storage system (full capacity). RecoverPoint/SE can be installed using the
Deployment Manager.
• RecoverPoint/EX is targeted for VNX, Symmetrix, and VPLEX systems. It can be
used for replication between multiple systems located in different data centers.
Licensing is per storage system for the VNX and per registered capacity on the
VMAX/VPLEX. It supports up to five RPA clusters.
• RecoverPoint/CL is targeted for VNX, Symmetrix, VPLEX, and third-party systems
via VPLEX. It can be used for replication between multiple systems located in
different data centers. Licensing is per replicated capacity, and local and remote
replication are licensed separately. It also supports up to five RPA clusters.
Table 1 represents the various features associated with the RecoverPoint product
family.
Table 1 – Comparison of the RecoverPoint products

Features | RecoverPoint/SE | RecoverPoint/EX | RecoverPoint/CL
Operating system | Heterogeneous¹ | Heterogeneous¹ | Heterogeneous¹
Storage systems supported | VNX, CLARiiON, CX and NS series | VNX, CLARiiON, CX and NS series, Symmetrix VMAX and VPLEX | EMC storage systems and third-party systems via VPLEX
Number of arrays and sites | One array per site with two-site limit | Unlimited arrays with five-site limit | Unlimited arrays with five-site limit
Splitter types | Array splitter | Array splitter | Array splitter
Licensing | Per array at each site | Per registered capacity per array | Per replicated capacity
Number of appliances | Two to eight per site | Two to eight per site | Two to eight per site
Bandwidth reduction | Built in | Built in | Built in
Journal compression | Not supported | Built in | Built in
Virtualization support | Hyper-V, VMware vCenter monitoring, VMware SRM | Hyper-V, VMware vCenter monitoring, VMware SRM | Hyper-V, VMware vCenter monitoring, VMware SRM
Multipathing | Heterogeneous¹ | Heterogeneous¹ | Heterogeneous¹
Capacity | Up to 2 petabytes (PB) | Up to 2 PB | Up to 2 PB
Virtual RecoverPoint Appliance (vRPA) | Supported | Supported | Supported
You can use RecoverPoint to make an initial copy over the network, or by physically
transporting the image via tape or an additional storage system to the remote
location. After the initial synchronization, RecoverPoint uses compressed differential
snapshots to send only the changes over the network.
For data recovery, you can make the local or remote secondary copy read/write, and
production can continue from the local or remote secondary copy. When the primary
copy becomes available, incremental changes at the secondary copy are used to resynchronize the primary copy.
¹ Indicates the type of operating systems/storage systems supported. Refer to the E-Lab Navigator for interoperability-related
information.
Figure 3 illustrates how you can use RecoverPoint for replication. Using the
RecoverPoint continuous local and remote replication, a local copy and remote copy
of the production data can be created.
Figure 3 – RecoverPoint Local and Remote Replication
In RecoverPoint, volumes are protected by consistency groups. If two data sets are
dependent on one another (such as a database and a database log), they should be
part of the same consistency group. Consider a motion picture film, for example: the
video frames are saved on one volume and the audio on another. Neither volume makes
sense without the other, so the saves must be coordinated to remain consistent with
one another. In other words, the volumes must be replicated together in one
consistency group to guarantee that at any point in time, the saved data represents
a true state of the film. The consistency group ensures that updates to the
production volumes are also written to the copies in consistent and correct write
order. The copies can then always be used to continue working from or to restore the
production source.
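The consistency-group behaviour above can be sketched as follows. This is a conceptual, journal-style illustration with hypothetical names, not RecoverPoint internals: writes to all member volumes are captured in arrival order and applied to the copies together, so the copies always reflect a single point in time.

```python
# Conceptual sketch of a consistency group; all names are hypothetical.
class ConsistencyGroup:
    def __init__(self, members):
        self.copies = {name: [] for name in members}  # replica per volume
        self.journal = []                             # writes in arrival order

    def write(self, volume, data):
        # Capture every write, across all member volumes, in global order.
        self.journal.append((volume, data))

    def distribute(self):
        # Apply the journaled writes to the copies in the same global order,
        # so the copies represent one consistent point in time.
        for volume, data in self.journal:
            self.copies[volume].append(data)
        self.journal.clear()

cg = ConsistencyGroup(["video", "audio"])
cg.write("video", "frame-1")
cg.write("audio", "tone-1")
cg.distribute()
print(cg.copies["video"])  # ['frame-1']
```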
RecoverPoint also supports simultaneous bi-directional replication, where the same
RPA can serve as the source RPA for one consistency group and the target RPA for
another consistency group.
RecoverPoint also supports replication with VNX File and VNX Gateway systems for
File replication. RecoverPoint replicates at a block level and provides protection for
the entire NAS system. For file system or VDM level replication granularity, use VNX
Replicator.
RecoverPoint Integration with VNX Unisphere
VNX Unisphere is designed to accept plug-ins that will extend its management
capabilities. Unisphere provides a plug-in for RecoverPoint which enables monitoring
and managing replication between VNX systems with RecoverPoint from a central
location.
Figure 4 – RecoverPoint Integration with VNX Unisphere
Virtual RecoverPoint Appliance (vRPA)
RecoverPoint 4.0 introduced virtual RecoverPoint Appliance (vRPA) for the EMC VNX
series. This software-only replication solution supports the advanced capabilities
RecoverPoint customers depend on and is packaged to run on a virtual machine.
vRPA runs on a VMware virtual machine (VM), which is ideal in environments that have
a virtualized infrastructure using VMware technologies. vRPA works similarly to the
physical RecoverPoint Appliances (RPAs), with some limitations. Instead of Host Bus
Adapters (HBAs) and Fibre Channel, the vRPA uses iSCSI over a standard IP network;
therefore, there are no hardware requirements for the vRPA other than a standard ESX
server. vRPA supports synchronous and asynchronous replication over IP.
Figure 5 – vRPA with RecoverPoint/SE
Benefits of RecoverPoint
The RecoverPoint solution has the following benefits:
• Any-point-in-time recovery – Recovery to the millisecond using a unique DVR-like
rollback mechanism.
• Any application – Support for the applications in your data center and application
consistency for application data stored across multiple systems.
• Global reach – Synchronous and asynchronous continuous local and remote
replication.
• Protect data on any system – RecoverPoint with VPLEX protects data for any
vendor’s storage.
• Reduced total cost of ownership – Using the software-only virtual appliance
(vRPA).
• Reduced bandwidth costs – With de-duplication and compression, RecoverPoint
can reduce the overall data on the network and associated WAN costs by up to
90%.
• Multi-site replication – Enhances protection by replicating data to and from
multiple sites.
VNX Replicator
VNX Replicator is an asynchronous file system level replication technology that
complies with customer-specified RPO. Replicator is included in the Remote
Protection Suite and the Total Protection Pack for VNX systems.
When disastrous events occur, access to the NAS storage objects that contain
production data can be lost. VNX Replicator provides organizations with the ability
to handle such events by transferring the NFS and/or CIFS responsibilities to the
disaster recovery site.
The Data Mover Interconnect is the communication channel used to transfer data
between the source and destination. VNX Replicator works by sending periodic
updates to the target file system. Configure the source and target VNX systems for
communication by creating a relationship between the systems, and then creating the
data interconnects between participating Data Movers.
Figure 6 represents the replication session and Data Mover interconnect configuration
between a source and destination file system.
Figure 6 – Replication Session Between Two File Systems (FS)
After communication is established, you can then set up a remote replication session
to create and periodically update the destination file system at a remote destination
site.
VNX Replicator provides a manual failover capability to remedy a disaster affecting a
production file system or an entire VNX system, making it unusable or unavailable.
After failover, target file systems are changed from read-only to read/write mode. You
can then use the target file systems to provide access to the data. Failover may cause
data loss, but the loss is bounded by the user-configurable RPO policy, known as
'max time out of sync', which is set to 10 minutes by default.
There are three types of replication sessions supported with VNX Replicator:
• Loopback Replication – Replication of a source object occurs within the same
Data Mover in the system. Communication is established by using a predefined
Data Mover interconnect, the communication channel used to transfer data between
the source and destination file systems within the same Data Mover. You can use
loopback replication to keep a copy of the file system on the same Data Mover.
• Local Replication – Replication occurs between two Data Movers within the same
system. Both Data Movers must be configured to communicate with one another
using Data Mover interconnects. You can use local replication to keep a copy of
the file system on two Data Movers within a system.
• Remote Replication – Replication occurs between a local Data Mover and a Data
Mover on a remote system. Both systems must be configured to communicate
with one another by Data Mover interconnects. After communication is
established, you can set up a remote replication session to create and periodically
update a destination object at a remote destination site. You can make an initial
copy of the source file system over an IP network. You can use remote replication
to keep a copy of the file system on a remote system.
Because the size of data transfers between replication pairs and network conditions
can fluctuate greatly, VNX Replicator dynamically monitors the rate of data change
and allocates resources to ensure that file systems are replicated in compliance
with customer-configurable RPOs.
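The RPO-compliance idea above can be sketched with a rough, hypothetical model (not the Replicator algorithm): given the accumulated change set and the available interconnect bandwidth, check whether the delta can be shipped within the RPO window.

```python
# Rough model for RPO compliance; names and the model itself are hypothetical.
def within_rpo(delta_bytes: float, bandwidth_bps: float,
               rpo_seconds: float) -> bool:
    """Can the accumulated changes be transferred to the destination
    before the RPO window elapses?"""
    transfer_time = delta_bytes / bandwidth_bps
    return transfer_time <= rpo_seconds

# 6 GB of changes over a 100 MB/s interconnect, against the default
# 10-minute (600 s) 'max time out of sync': 60 s of transfer, well within.
print(within_rpo(6e9, 100e6, 600))  # True
```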
Virtual Data Mover
A Virtual Data Mover (VDM) is a VNX software feature that enables the grouping of
CIFS or NFS file systems and servers into virtual containers. By using VDMs, it is
possible to separate CIFS or NFS servers from each other and from the associated
environment. VDMs allow the replication or movement of CIFS or NFS environments to
another local or remote Data Mover. They also support disaster recovery by isolating
and securing independent CIFS or NFS server configurations on the same Data Mover.
VDMs are implemented for several reasons:
• They enable replication of segregated CIFS environments.
• They enable you to separate or isolate CIFS or NFS servers to provide a higher level
of security.
• They allow a physical system to appear as many virtual servers for simplified
consolidation.
• You can move VDMs from one Data Mover to another in the same system to help
load balance Data Mover resources.
VNX Replicator supports VDMs and offers the same benefits provided for file systems
to VDMs.
Advanced Topologies
One-To-Many
In one-to-many replication, a single source object may be replicated to a maximum of
three destinations. Each replication destination object uses a separate and
independent replication session with its own RPO. For example, A->B and A->C.
Cascading
In a cascading topology, a file system can be replicated from a source file system to a
destination file system and have that destination file system act as a source to
another replication session. For example, A->B->C.
Figure 7 illustrates that the local DR site acts as both a destination and a source for
replication. Similar to the one-to-many topology, each hop uses a separate and
independent replication session with its own RPO. It is common to see a higher RPO
used for the second hop. Two hops are supported and those hops may be loopback,
local, or remote.
Figure 7 – Example of VNX Replicator cascading topology
Checkpoint and Incremental Attach
Checkpoint is a logical point-in-time view of a file system that is maintained by using
pointers and copies of any data that was modified since the establishment of the
checkpoint. VNX Replicator uses checkpoints to establish the common base and
subsequent delta sets, and to maintain consistency on the destination object during
data transfer.
Incremental attach is a software feature that enhances VNX Replicator functionality.
It allows you to use user-created checkpoints as a common base for a replication
pair, which is especially useful in advanced replication topologies such as
cascading and one-to-many.
You can leverage user-created checkpoints to start a replication session between file
systems in the advanced topologies that did not have a prior replication relationship
by transferring a differential copy for an initial synchronization.
Benefits of VNX Replicator
The solution has the following benefits:
• VNX Replicator replicates file systems and VDMs.
• Replication is supported to the same Data Mover (loopback), to another Data
Mover in the same system (local), and to a remote system (remote).
• Advanced topologies such as one-to-many, cascading, and incremental attach are
available.
• In one-to-many replication, a single source object may be replicated to up to
three destinations.
• Data Mover interconnects support bandwidth scheduling and throttling
capabilities.
MirrorView
EMC MirrorView offers two remote mirroring products—MirrorView/Synchronous
(MirrorView/S) and MirrorView/Asynchronous (MirrorView/A). For the VNX series,
both MirrorView products are included in the Remote Protection Suite and the Total
Protection Pack. MirrorView is a VNX technology that mirrors an active block data set
to a remote VNX system, which is usually located at a disaster recovery location.
MirrorView is LUN-centric. It provides end-to-end data protection by replicating the
contents of a primary LUN to a secondary remote LUN that resides on a different VNX
system.
MirrorView can be used in the following modes:
MirrorView/Synchronous
MirrorView/S is a limited-distance, synchronous remote mirroring facility that offers a
disaster recovery solution without data loss for VNX storage systems. In a failure
scenario, MirrorView/S enables you to perform a manual failover from a source site to
a destination site and then restore operations on the source site following a failover.
MirrorView/S is best suited for replication when the round trip time (RTT) between
the systems is up to 10 ms.
MirrorView/S with VNX File offers disaster recovery without data loss for VNX File and
VNX Gateway configurations. MirrorView ensures that the MirrorView/S-protected file
systems on a source VNX system are recoverable, even if the source VNX File system
is unavailable or not functioning.
MirrorView replicates at a block level and provides protection only for the entire NAS
system. For file system or VDM level replication granularity, use VNX Replicator.
With MirrorView/S replication, each server write on the primary side remains
unacknowledged to the host until the I/O is written to the secondary side. The primary
benefit of this is that the RPO is zero. Figure 8 illustrates the data flow of MirrorView/S
replication and the following steps:
1. Server attached to the primary VNX system initiates a write to the system.
2. The primary VNX system replicates the data to the secondary VNX system.
3. The primary VNX system waits for the acknowledgement from the secondary VNX
system.
4. Once the primary VNX system receives the acknowledgement, it sends an
acknowledgement back to the server.
In case of a disaster at the primary side, data at the secondary side is exactly the
same as data at the primary side.
Figure 8 – MirrorView/S Replication
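The four MirrorView/S steps above can be sketched as a minimal synchronous write path. This is a conceptual illustration with hypothetical names, not MirrorView code: the host is acknowledged only after the secondary confirms the write, which is why the RPO is zero.

```python
# Minimal sketch of the MirrorView/S write path; names are hypothetical.
def sync_write(data, primary: list, secondary: list) -> bool:
    primary.append(data)      # 1. host write lands on the primary system
    secondary.append(data)    # 2. primary replicates to the secondary
    secondary_ack = True      # 3. primary waits for the acknowledgement
    return secondary_ack      # 4. only then is the host acknowledged

primary, secondary = [], []
sync_write("io-1", primary, secondary)
print(primary == secondary)  # True: zero RPO, both sides always match
```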
MirrorView/Asynchronous
MirrorView/A provides replication over long distances. MirrorView/A can be used for
replication between VNX systems which are separated by more than 10ms RTT and up
to 200ms RTT. MirrorView/A is also optimized for low network bandwidth.
MirrorView/A works on a periodic update model that tracks changes on the primary
side, and then applies those changes to the secondary at a user-determined interval.
With MirrorView/A replication, writes are acknowledged to the server as soon as they
are received; unlike in MirrorView/S, the acknowledgement is not held for a response
from the secondary. If writes arrive at the primary VNX system faster than they can be
sent to the secondary VNX system, multiple I/Os are accumulated on the primary
system and sent together to the secondary VNX system at user-defined intervals.
This makes better use of the WAN link.
Figure 9 illustrates the data flow of MirrorView/A replication and the following steps:
1. Server attached to the primary VNX system initiates a write to the system.
2. The primary VNX system sends an acknowledgement to the server.
3. The primary VNX system tracks the changes and replicates the data to the
secondary VNX system at a user-defined frequency.
4. Once the secondary VNX system receives the data, it sends an acknowledgement back to the primary VNX system.
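The benefit of batching writes between updates can be illustrated with a small sketch (hypothetical Python, not the actual MirrorView/A implementation): repeated writes to the same block within one update interval cross the WAN only once.

```python
class AsyncMirror:
    """Toy model of MirrorView/A: host writes are acknowledged immediately,
    changed blocks are tracked, and one copy per block is shipped per cycle."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.dirty = {}          # blocks changed since the last update
        self.blocks_shipped = 0  # counts transfers over the WAN link

    def host_write(self, lba, data):
        self.primary[lba] = data
        self.dirty[lba] = data   # a later write to the same LBA replaces the earlier one
        return "ack"             # acknowledged without waiting for the secondary

    def update_cycle(self):      # runs at the user-defined interval
        for lba, data in self.dirty.items():
            self.secondary[lba] = data
            self.blocks_shipped += 1
        self.dirty.clear()

m = AsyncMirror()
for i in range(100):
    m.host_write(7, f"version-{i}")   # 100 writes, all to the same block
m.update_cycle()
assert m.blocks_shipped == 1          # only the latest version crossed the WAN
assert m.secondary[7] == "version-99"
```

The trade-off relative to MirrorView/S is visible in the model: between update cycles the secondary lags the primary, so the RPO is bounded by the update interval rather than being zero.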
Figure 9 – MirrorView/A Replication
MirrorView supports consistency groups. A consistency group is a collection of LUNs
that function together as a unit within a storage system. With consistency groups,
MirrorView maintains write ordering across secondary volumes in the event of an
interruption of service to one, some, or all of the write-order dependent volumes.
Consistency groups protect against data corruption in the event of partial failures, for
example on one SP, LUN, or disk. With partial failures, it is possible for the data set at
the secondary site to become out of order or corrupt.
For example, you can create a consistency group for a database application in which the LUNs containing the application data and the LUNs containing the application logs are grouped together. You can use MirrorView to replicate all the LUNs that are part of the consistency group. In this case, if one member of the consistency group is affected, then all members of the consistency group are affected, and data integrity is preserved across the set of secondary images.
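The all-or-nothing behavior can be sketched as follows (illustrative Python under simplified failure semantics; the function and LUN names are invented): an update set either advances every secondary image in the group or none of them, so the secondary set is never partially applied.

```python
def apply_group_update(secondary, updates, link_ok):
    """Apply one set of dependent writes to the secondary images either for
    every member of the consistency group or for none of them.
    `link_ok(lun)` reports whether that member can currently replicate."""
    if not all(link_ok(lun) for lun in updates):
        return False          # fracture: the secondary keeps its last consistent image
    secondary.update(updates)  # all members advance together
    return True

secondary = {"db_data": "v1", "db_log": "v1"}
# One member's link is down: nothing is applied, so data and log stay in step.
ok = apply_group_update(secondary, {"db_data": "v2", "db_log": "v2"},
                        link_ok=lambda lun: lun != "db_log")
assert ok is False and secondary == {"db_data": "v1", "db_log": "v1"}
```

Without the grouping, the data LUN could advance while the log LUN did not, leaving the secondary database unrecoverable, which is exactly the partial-failure corruption the text describes.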
Benefits of MirrorView
The MirrorView solution has the following benefits:
• Synchronous and asynchronous replication from the same VNX system.
• Bidirectional mirroring – any storage system can host primary and secondary images, as long as the primary and secondary images within any mirror reside on different storage systems.
• Replication over Fibre Channel and iSCSI front-end ports.
• MirrorView can be set up using VNX Unisphere.
• Consistency groups for maintaining data consistency across the dependent volumes.
Symmetrix Remote Data Facility (SRDF)
The EMC Symmetrix Remote Data Facility (SRDF) remote replication software offers
various levels of Symmetrix-based business continuance and disaster recovery
solutions. SRDF offers the capability to maintain multiple copies of data, independent
of the host and operating system. VMAX storage systems are the latest generation
storage systems of the Symmetrix family.
SRDF configurations can be established between two VMAX systems. The systems can
be located in the same room, in different buildings within the same campus, or
separated by up to 200ms of RTT. In an SRDF environment, the source LUNs are referred to as R1 devices and the destination LUNs as R2 devices.
SRDF Replication Modes
The SRDF family consists of three base solutions, each discussed further in this
section:
Symmetrix Remote Data Facility / Synchronous (SRDF/S)
The SRDF/S mode maintains a real-time mirror image of data between the R1 and R2
devices. Data must be successfully stored in Symmetrix cache at both the primary
and the secondary site before an acknowledgement is sent to the production host at
the primary site. SRDF/S is ideally suited in environments where the Symmetrix
systems are separated by a limited distance of up to 10ms RTT.
SRDF/S provides the following benefits:
• Provides a no-data-loss solution (zero RPO).
• No server resource contention for the remote mirroring operation.
• Can perform restoration of the primary site with minimal impact to application performance on the remote site.
• Enterprise disaster recovery solution.
• Supports replicating over IP and Fibre Channel protocols.
Symmetrix Remote Data Facility / Asynchronous (SRDF/A)
The SRDF/A mode mirrors R1 devices by maintaining a consistent copy of the data on the secondary (R2) site at all times. SRDF/A session data is transferred from the primary to the secondary site in cycles using delta sets. This mechanism eliminates the redundancy of multiple changes to the same data within one cycle being transferred over the SRDF links, potentially reducing network bandwidth requirements.
The point-in-time copy of the data at the secondary site is only slightly behind that on
the primary site. SRDF/A has little or no impact on performance at the primary site as
long as the SRDF links contain sufficient bandwidth, and the secondary system is
capable of accepting the data as quickly as it is being sent. This level of protection is
intended for users who require a fast host response time while maintaining a
dependent-write consistent image of data at the secondary site. SRDF/A is ideally
suited for long distance replication where the Symmetrix systems are separated by a
limited distance of up to 200ms RTT.
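The delta-set cycle mechanism can be modeled with a short sketch (illustrative Python, not SRDF internals): writes collect in a capture cycle where overwrites of the same track fold together, and R2 applies only complete cycles, so it always holds a dependent-write consistent point in time.

```python
from collections import deque

class SrdfA:
    """Toy delta-set model: capture the current cycle on R1, queue completed
    cycles for transmission, and apply whole cycles at a time on R2."""

    def __init__(self):
        self.r1, self.r2 = {}, {}
        self.capture = {}          # current cycle: collects (and folds) new writes
        self.in_transit = deque()  # completed delta sets waiting to reach R2

    def write(self, track, data):
        self.r1[track] = data
        self.capture[track] = data  # an overwrite within the cycle replaces the
                                    # earlier version: nothing extra to transmit

    def cycle_switch(self):         # promote the capture set to transmission
        if self.capture:
            self.in_transit.append(self.capture)
            self.capture = {}

    def receive_cycle(self):        # R2 applies one complete delta set at a time
        if self.in_transit:
            self.r2.update(self.in_transit.popleft())

s = SrdfA()
s.write("t1", "v1")
s.write("t1", "v2")                 # two writes, one track, one cycle
s.cycle_switch()
s.receive_cycle()
assert s.r2 == {"t1": "v2"}         # only the folded result crossed the link
```

Because R2 only ever moves from one completed cycle to the next, it is always a slightly older but internally consistent image of R1, matching the behavior described above.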
SRDF/A provides the following benefits:
• Extended-distance data replication that supports longer distances than SRDF/S.
• SRDF/A does not affect host performance, because host activity is decoupled from the remote copy process.
• Efficient link utilization that results in lower link-bandwidth requirements.
• Facilities to invoke failover and restore operations.
• Supports replicating over IP and Fibre Channel protocols.
Symmetrix Remote Data Facility / Data Mobility (SRDF/DM)
SRDF/DM is a two-site SRDF data migration and replication solution that operates only in adaptive copy modes. It enables fast data transfer from R1 to R2 devices over extended distances and is also referred to as SRDF/Adaptive Copy. Adaptive copy modes allow the R1 and R2 devices to be more than one IO out of synchronization. Unlike the asynchronous mode, adaptive copy modes do not guarantee a dependent-write consistent copy of data on R2 devices.
Figure 10 – Symmetrix Remote Data Facility (SRDF)
VNX Gateway
The VNX Series Gateway products are a set of dedicated network servers optimized
for File access and advanced functionality in a scalable, easy-to-use package.
• They deliver NAS capabilities to consolidate application storage and file servers in a gateway configuration connected to a VNX or VMAX system.
• The VG2/VNX VG10 supports one or two Data Mover configurations.
• The VG8/VNX VG50 supports two to eight Data Mover configurations.
• They also support the advanced functionality provided by VNX File systems.
The two new VNX Series Gateway products—the VNX VG10 and VNX VG50—are dedicated network servers optimized for File access and advanced functionality in a scalable, easy-to-use package. The VNX VG10 and VG50 can only connect to, boot from, and work with Symmetrix VMAX 10K, 20K, and 40K back-end system technologies. The VNX VG10 and VNX VG50 Gateways are also offered in an integrated model that provides physical integration into a Symmetrix VMAX 10K system.
The new VNX Gateway models VNX VG10 and VNX VG50 support the following VMAX features:
• Unisphere Link and Launch
• VMAX Compression
• Front End Quota
• Federated Tiered Services (FTS)
Figure 11 – VNX Gateway Configuration with VMAX
VNX Gateway support for SRDF includes SRDF/S and SRDF/A.
SRDF supports two types of configurations with VNX Gateways:
Active/Passive – Unidirectional setup where one VNX Gateway, with its attached
VMAX storage system, serves as the source (production) file server and another VNX
Gateway, with its attached VMAX storage system, serves as the destination (backup).
This configuration provides failover capabilities in the event that the source site is
unavailable.
Active/Active – Bidirectional configuration with two production sites, with each site acting as the standby for the other. Each VNX Gateway system has production and standby Data Movers. If one site fails, the other site takes over and serves the clients of both sites.
When planning the Data Mover configuration:
• For every source (production) Data Mover that you choose to protect with a remote SRDF standby Data Mover, you must provide a dedicated standby Data Mover at the destination site. There must be a one-to-one relationship between a source Data Mover that you choose to protect and its dedicated remote standby Data Mover at the destination site.
• If a source Data Mover with a remote SRDF standby Data Mover also has a local standby Data Mover, then that local standby must have a remote SRDF standby Data Mover of its own at the destination site. This prevents issues with failover. An SRDF standby Data Mover at the destination can be paired with only one source Data Mover.
SRDF replicates at a block level and provides protection for the entire Data Mover. For
File system or VDM level replication granularity, use VNX Replicator.
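The two planning rules above lend themselves to a mechanical check; here is a hedged sketch (plain Python; the data layout and Data Mover names are invented for illustration):

```python
def validate_srdf_standbys(protected, remote_standby_of, local_standby_of):
    """Check the two SRDF Data Mover planning rules:
    1. every protected source Data Mover has its own dedicated remote standby;
    2. a local standby of a protected source also has a remote standby.
    Returns a list of human-readable violations (empty means the plan is valid)."""
    problems = []
    # Rule 1: a one-to-one mapping from protected sources to remote standbys.
    standbys = [remote_standby_of.get(dm) for dm in protected]
    for dm, sb in zip(protected, standbys):
        if sb is None:
            problems.append(f"{dm}: no remote SRDF standby at the destination")
    assigned = [sb for sb in standbys if sb is not None]
    if len(assigned) != len(set(assigned)):
        problems.append("a remote standby is paired with more than one source")
    # Rule 2: local standbys of protected sources need remote standbys too.
    for dm in protected:
        local = local_standby_of.get(dm)
        if local and local not in remote_standby_of:
            problems.append(f"{local}: local standby of {dm} lacks a remote standby")
    return problems

plan = validate_srdf_standbys(
    protected=["server_2", "server_3"],
    remote_standby_of={"server_2": "dst_2", "server_3": "dst_2"},  # shared standby!
    local_standby_of={"server_2": "server_4"},
)
assert plan  # violations found: shared remote standby, server_4 unprotected
```

A check like this catches the shared-standby mistake at planning time, before a failover exposes it.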
Unisphere Link and Launch
VNX Unisphere 1.3 and later enable you to link and launch from Unisphere (on VNX
Gateway systems) directly to Unisphere for VMAX (UVMAX) for storage-related
operations. The link and launch features enable storage administrators working in
Unisphere for VNX Gateway to go directly to Unisphere for VMAX.
Figure 12 illustrates the link and launch capabilities provided by VNX Unisphere for
the VNX Gateways.
Figure 12 – Link and Launch for VMAX from VNX Unisphere
VPLEX
VPLEX is a mirroring technology rather than a replication technology: it maintains copies of data on multiple VNX systems and allows Active/Active access and seamless data mobility. VNX with VPLEX delivers local or remote fault tolerance by enabling you to build out an infrastructure that rides through component or system failures. In other words, VNX with VPLEX delivers zero RPO and zero RTO for mission-critical data. This high-availability model also enables you to deploy multiple entry-level VNX platforms as opposed to deploying one larger system.
The VPLEX family provides the capability to deliver information mobility and access
within, across, and between data centers. The current VPLEX family consists of the
following:
• VPLEX Local – For managing data availability and access across heterogeneous storage systems within a single local cluster.
• VPLEX Metro – For data availability and mobility with access across two VPLEX clusters separated by synchronous distances (up to 5ms RTT apart).
• VPLEX Geo – For data availability and mobility with access across two VPLEX clusters separated by asynchronous distances (up to 50ms RTT apart).
VPLEX technology enables new models of computing, leveraging
distributed/federated virtual storage. For example, VPLEX is specifically optimized for
virtual server platforms (for example, VMware, Hyper-V, and Oracle Virtual Machine)
and can streamline, and even accelerate, transparent workload relocation over
distances. This includes moving virtual machines over distance with VMware vMotion. VPLEX handles the storage vMotion, ensuring the data is available at both sites and saving time, because data does not have to be moved as part of application vMotion (certified by both VMware and EMC).
GeoSynchrony™ is the operating system running on VPLEX. GeoSynchrony is an
intelligent, multitasking, locality-aware operating environment that controls the data
flow for virtual storage. GeoSynchrony is:
• Optimized for mobility, availability, and collaboration.
• Designed for highly available, robust operation in geographically distributed environments.
• Driven by real-time IO operations.
• Intelligent about locality of access.
Using VPLEX for application and data mobility has several advantages over traditional
solutions. VPLEX is designed as a cluster, and two clusters can be connected together
for a VPLEX Metro. Once this is established, you can have immediate access to your
data, because VPLEX can present the same data at each cluster’s location
simultaneously.
You can automatically balance loads through VPLEX, using storage and compute
resources from either cluster’s location. And when combined with server
virtualization, VPLEX enables you to transparently move and relocate virtual machines
and their corresponding applications and data over distance. This provides a unique
capability that enables you to relocate, share, and balance infrastructure resources
between sites—which can be within a campus or between data centers up to 5ms
apart with VPLEX Metro, or 50ms apart across asynchronous distances with VPLEX
Geo.
With VPLEX, you no longer need to spend significant time and resources preparing to
move data and applications. You do not have to accept a forced outage and restart
the application after the move is completed. Instead, a move can be made instantly
between sites, over distance, and the data remains online and available during the
move—no outage or downtime is required. VPLEX also provides a single, all-inclusive
interface for both EMC and non-EMC systems. So even if you have a mixed storage
environment, VPLEX still provides an easy, all-encompassing solution.
VPLEX Local also provides ongoing non-disruptive data mobility across multi-vendor
systems within a single data center. But when you want to move full applications or
data over distance, ensure that you use VPLEX Metro or VPLEX Geo to fulfill your
requirement. Although VPLEX works with server virtualization to relocate virtual
machines over distance, all VPLEX products can also non-disruptively move and
relocate physical application data across heterogeneous back-end systems.
Benefits of VPLEX
The EMC VPLEX solution has the following benefits:
• Mobility – Once VNX with VPLEX is implemented, you can perform migrations and technology refreshes at will. Because there is no downtime associated with data movement, these migrations can be done even during work hours, and because migrations can be accelerated, you can get far higher value from your storage purchases. VPLEX also permits application instances to be actively relocated across distance.
• Continuous availability – VPLEX with distributed federation delivers the ability to present one or more LUNs on two separate clusters in an active/active access mode. This capability offers new models of high availability and nondisruptive planned outages/workload relocation.
• Stretched clusters – VPLEX enables stretched VMware, Oracle RAC, and other industry-leading clusters over distance for new levels of availability.
Figure 13 – VPLEX for High Availability, Mobility and Collaboration
Replication Manager
EMC Replication Manager is a software product that enables you to automate and
manage EMC’s point-in-time replication technologies for EMC VNX series, VNXe
series, Symmetrix VMAX, and RecoverPoint.
It enables you to have an application focus when managing your data copies. It
enables discovery and management of applications, and instruments them as
necessary to ensure consistent and recoverable copies. For example, Replication
Manager can place an Oracle database into hot backup mode, initiate a copy, and return
the application to normal operations. It also supports application-level restore and
recovery.
Replication Manager can simplify management of storage replication and integrate with critical business applications. You can use it to create, mount, and restore point-in-time replicas of databases or file systems residing on supported storage systems.
It automates management of point-in-time replicas from the context of the application
by placing them in a known state prior to taking a replica, essentially creating
application-consistent replicas. It discovers and maps applications on the host to the
underlying storage infrastructure to ensure it knows the location of the production
database prior to taking a replica.
A Replication Manager administrator, who can perform all the functions within Replication Manager, can delegate replication tasks to others within the organization through various user roles with varying degrees of privilege. This improves efficiency through the elimination of scripting activities and the delegation of replication tasks.
Replication Manager delivers point-and-click replica management for instant restore
back to production (creating a “gold copy” of production data for instant restore
should a corruption occur). Also ideal for backup acceleration, Replication Manager
streamlines the backup of production data without impacting performance. Similarly,
you can create copies of your production database for data repurposing activities
such as testing, development, reporting, and training to minimize the impact to
production and leverage your production data to perform double duty.
Figure 14 – Replication Manager for Backup Acceleration, Instant Restore, and Data Repurposing
Replication Manager offers the following:
• Application-consistent replication – When replicating Microsoft Exchange, SQL Server, and SharePoint, the calls to Volume Shadow Copy Service (VSS) and Virtual Device Interface (VDI) are part of the product specifications. Replication Manager automates the process and maintains application consistency.
• Simplicity – A single graphical user interface (GUI) controls a host's storage, replication jobs, schedules, and replicas. The GUI provides a very simple means of pointing to the applications and File systems, replicating them on demand or per schedule, and restoring them with correct choices for such details as log handling.
• Auto-discovery – Replication Manager automatically discovers applications on the host as well as the underlying storage and replication technology. If the environment changes, those changes are also discovered.
These benefits save the time, money, and human resources normally required for the custom scripting needed to keep pace with the changing needs of a growing business.
Benefits of Replication Manager
Replication Manager has the following benefits:
• Automated management of point-in-time replicas on VNX storage.
• Application-consistent replication of Microsoft, Oracle, and UDB applications.
• Reduces or eliminates the need for scripted replication solutions.
• Provides a single management console and wizards to simplify replication tasks.
• Improved recovery and restore features, including application recovery.
• Integration with physical, VMware, Hyper-V, or IBM AIX VIO virtual environments.
Use Cases
In this section, we review some use cases where a replication technology was chosen based on the requirements of the use case. There is no single solution that fits all scenarios, but EMC provides many options that can help you protect your data at all times. This section describes scenarios where customers chose the replication technology that best fit their needs.
Use Case 1: Big Telecommunications Company
A big telecommunications company based in Atlanta, GA has VMAX systems at their primary data center in Atlanta and VMAX systems in their secondary data center in Denver. They recently acquired another telecommunications company, located in Atlanta, and consolidated all the storage systems into one data center. The acquired company has two VNX7600s in their primary data center and a VNX5800 in their secondary data center, also located in Atlanta.
The goal of the storage administrators and network administrators was to choose a replication technology that would work in their heterogeneous storage environment. The WAN link between the two sites dedicated to replication was an OC-3 pipe (155Mbps).
The requirements for choosing the replication technology were:
• Support replication in a heterogeneous storage environment. The primary data center in Atlanta has VMAX and VNX systems, and the secondary site in Denver has VMAX systems.
• Support local and remote replication.
• Provide network bandwidth optimization features.
• Ability to restore data from any point in time and support consistency groups.
• Provide an RPO of 20 seconds and an RTO of 5 minutes.
• Ability to manage replication of their Microsoft Exchange environment.
After considering all the available options, the storage administrators chose
RecoverPoint/EX, which works with both VMAX and VNX systems. They chose
RecoverPoint/EX with continuous local and remote replication as the solution to
replicate between the primary data center and DR site.
The following factors influenced their decision:
• RecoverPoint local and remote replication enables them to choose a single solution for both local and remote replication.
• RecoverPoint provides WAN bandwidth optimization features such as compression and de-duplication, which help limit the throughput on the WAN link.
• RecoverPoint supports synchronous replication, which can enable them to achieve an RPO of 20 seconds and an RTO of 5 minutes for their environment.
• RecoverPoint supports replication from both VMAX and VNX systems, which simplifies management of replication.
• RecoverPoint also provides features that enable them to restore data from any point in time.
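As a rough sanity check of the 20-second RPO over the OC-3 link, the sustained change rate can be compared with the link capacity. In the sketch below, the numbers other than the 155Mbps line rate are assumptions for illustration, not figures from this use case:

```python
# Feasibility sketch: can the link drain writes fast enough for the RPO target?
link_mbps = 155                               # OC-3 line rate from the use case
link_bytes_per_s = link_mbps * 1_000_000 / 8  # ~19.4 MB/s of raw capacity
compression_ratio = 2.0                       # assumed gain from compression/dedupe
change_rate_bytes_per_s = 30 * 1_000_000      # assumed sustained write rate: 30 MB/s

# Data that must actually cross the WAN per second after reduction:
effective_send_rate = change_rate_bytes_per_s / compression_ratio
feasible = effective_send_rate <= link_bytes_per_s
assert feasible  # 15 MB/s to send vs. ~19.4 MB/s of capacity
```

If the reduced change rate exceeded the link rate, the replication lag would grow without bound and no fixed RPO could be met, which is why the bandwidth-optimization features were a hard requirement.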
Along with RecoverPoint, they also chose Replication Manager as it supports
RecoverPoint in Microsoft Exchange environments. This solution helps them restore Exchange to an application-consistent point in time. Through Replication Manager’s
single console, they can now manage application and crash-consistent point-in-time
copies locally and remotely. It coordinates, automates, and simplifies the entire data
replication process—from environment discovery to creation and management of
multiple, application-consistent, disk-based replicas. Replicas can be created on
demand or based on schedules and policies that they define.
Figure 15 illustrates the topology of the solution implemented where RecoverPoint is
used for replication between the primary data center in Atlanta and secondary data
center in Denver.
Figure 15 – RecoverPoint with Replication Manager
Use Case 2: Retail Distribution Company
A retail distribution company that delivers products to various supermarkets has
distribution centers spread across the United States. The distribution centers are
centrally located based on regions to be able to deliver products in a timely fashion.
The company headquarters is located in Colorado and the distribution centers are
located in Seattle, San Jose, Detroit, and Boston.
The distribution centers have local storage systems which host applications, data,
and inventory management systems. These are managed, provisioned, stored, and protected locally at the distribution centers and were traditionally isolated to their territories. In the past year, there has been a push to have a disaster recovery
solution.
The goal is to be able to access and provide data to all the distribution centers and
protect the data with a disaster recovery strategy. This will help them to manage their
inventory, evaluate the supply and demand of goods per region, and forecast based
on trends.
The distribution centers have VNX5400 systems. The goal of the project is to
implement a replication technology that will help them protect their data with remote
replication.
They want to implement a solution that can be used for CIFS and NFS, to replicate
data from the data center in Colorado to various distribution centers and between the
distribution centers. The WAN link for replication is a leased line where they are
charged based on its utilization.
After reviewing the various options, they chose VNX Replicator as the best solution to replicate from the distribution centers to the primary data center and between distribution centers. They chose this solution based on the following:
• Replicator supports asynchronous file system replication, which enables them to replicate to all the distribution centers.
• Replicator supports bandwidth throttling and scheduling, which allows optimal utilization of the leased WAN link.
• Replicator supports CIFS shares, NFS shares, and file systems.
• Replicator can replicate from one file system to another file system that may be mounted on the same Data Mover (loopback replication), on a different Data Mover in the same system (local replication), or on another VNX system (remote replication).
• Replicator is easy to configure using Unisphere.
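The bandwidth-scheduling idea can be illustrated with a generic sketch (plain Python, not Replicator's actual mechanism; the schedule shape and numbers are invented): a schedule caps how much replication traffic may be sent during each window of the day, which matters when the leased line is billed by utilization.

```python
def allowed_kbps(hour, schedule, uncapped=10_000):
    """Return the replication bandwidth cap (kbit/s) in effect at a given hour.
    `schedule` maps (start_hour, end_hour) windows to caps; outside every
    window the link may be used up to `uncapped`. Values are illustrative."""
    for (start, end), cap in schedule.items():
        if start <= hour < end:
            return cap
    return uncapped

# Throttle hard during business hours, open up overnight:
schedule = {(8, 18): 2_000}
assert allowed_kbps(12, schedule) == 2_000   # business hours: 2 Mbit/s cap
assert allowed_kbps(2, schedule) == 10_000   # overnight: full leased-line rate
```

Scheduling the bulk of replication traffic into off-peak windows keeps the metered utilization, and therefore the cost of the leased line, predictable.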
They implemented the following topology in their environment to replicate between the distribution centers and their data center in Colorado.
Figure 16 – Topology for VNX Replicator Implementation for Many to One Replication
Use Case 3: Small School District
A small school district, located in Boston, started to see an explosion of data that needed to be managed, stored, and protected. After careful consideration, they chose a VNX5400 Block storage system because of its leading storage efficiency features.
After setting up the primary data center and leveraging VNX Snapshots technology for
local data protection, they wanted to architect a disaster recovery solution. The
disaster recovery site was chosen to be in Nebraska.
The storage administrators were impressed with the VNX5400 storage efficiency
features, and the ease with which the system was managed and storage was
provisioned for their development and QA teams. So, they proposed to have another
VNX5400 system set up at their new DR data center in Nebraska.
They wanted a replication technology that fulfilled the following requirements:
• Block replication technology that supports replication over 2000 miles.
• RPO of 2 minutes and RTO of 30 minutes.
• Support for iSCSI replication, as they have leased a WAN link for IP replication.
• A solution that is scalable for future needs.
• No additional equipment to manage, due to the location of their DR site.
Figure 17 – MirrorView/A Solution between Nebraska and Boston
After considering the available options, they chose MirrorView/A replication based on
the following supported features:
• MirrorView supports asynchronous replication, which can be used for replication between Boston and Nebraska.
• MirrorView/A provides an RPO of less than 2 minutes and an RTO of less than 30 minutes.
• MirrorView supports iSCSI replication, which replicates data over an IP network.
• MirrorView is a system-based replication technology, so there would be no additional hardware expenditure.
• MirrorView is a license-based feature on the VNX systems.
• MirrorView can be managed, provisioned, and set up using VNX Unisphere, which they are already familiar with.
Use Case 4: Financial Firm
A financial trading firm has a primary data center in New York and a secondary data center in New Jersey. The firm deals with trading, where continuous availability of data at any given point in time is critical. The firm has multiple VNX systems and uses synchronous replication technologies such as MirrorView/S and RecoverPoint. This provides them with an Active/Passive disaster recovery solution. They have a fully virtualized data center that leverages various solutions provided by VMware.
They want to implement a solution that provides continuous availability with
Active/Active capability. They also have a new data center located in Iowa and would
like to use it as a disaster recovery site for asynchronous replication.
After reviewing the various options available in the market, they decided to choose
VPLEX for their environment. VPLEX not only provides them with a storage
virtualization solution, but also an Active/Active solution that provides continuous
availability and mobility of data within and between their primary and secondary data
centers.
VPLEX provides the high availability solution, which enables them to move more mission-critical applications to VMware. All of the mainstream VMware tools can now be leveraged across systems and over distance, powerfully improving the scope and value of VMware to the enterprise.
With VPLEX Metro, a distributed volume can be created between the two data centers. Distributed volumes are mirrored volumes that are spread across the two VPLEX clusters. VPLEX uses its own data synchronization mechanism to keep the data in sync between the two clusters, and it ensures that both sides have read/write access to the distributed virtual volume.
Using VPLEX has the following benefits:
• vMotion across data centers is non-disruptive for load balancing, maintenance, and workload relocation.
• They can also fail over automatically and restart between the sites using VMware High Availability (HA); no manual intervention is required.
• The loss of a storage system or planned downtime does not result in application interruption.
• VMware Distributed Resource Scheduler (DRS) can be used for full utilization of compute and storage resources across domains.
• VPLEX provides automated sharing, balancing, and failover of IO across clusters.
Figure 18 illustrates the implementation with VPLEX that provides an Active/Active
solution between the data centers in New York and New Jersey.
Figure 18 – VPLEX Metro
In addition, the company decided to implement RecoverPoint CRR with VPLEX for the DR solution. VPLEX supports the RecoverPoint splitter, so the solution combines the active/active configuration, high availability, and mobility provided by VPLEX with the continuous data protection and remote replication that RecoverPoint provides to the DR site in Iowa. This offers the highest level of protection, should a regional disaster disable the two primary data centers
located in New York and New Jersey. Figure 19 illustrates the solution implemented by
the financial firm.
Figure 19 – VPLEX with RecoverPoint
The complete solution provided with VPLEX and RecoverPoint delivers an
Active/Active solution along with a remote disaster recovery solution. This will enable
them to have a zero RPO and zero RTO with continuous availability.
Conclusion
This paper provided information on the replication technologies supported by the
VNX series. Implementing a replication technology allows you to have a redundant
copy of data at a remote location. Having a disaster recovery site minimizes the cost associated with downtime and simplifies the recovery process in the event of a disaster at the primary data center.
RecoverPoint is best suited to VNX Block and Unified systems when replicating block data. VNX Replicator is best suited to VNX File, Unified, and VNX Gateway systems for file system replication. VPLEX provides an Active/Active mirroring solution that enables continuous availability of block data. Replication Manager enables you to automate and manage application-consistent replicas with the supported replication technologies.
Each replication technology offers features and benefits that will help you implement
a data protection solution in your data center. Choose the appropriate solution to
meet your business needs.
References
For more information on the replication technologies mentioned in this paper, refer to
the following documents:
• RecoverPoint 4.0 Administrator's Guide
• EMC RecoverPoint On-Demand Operational Recovery with EMC VPLEX
• Using VNX Replicator
• MirrorView Knowledgebook
• VPLEX Product Guide
• SRDF Product Guide
• Symmetrix Remote Data Facility (SRDF) Connectivity Guide
• Replication Manager 5.4.4 Product Guide