Dell EMC FluidFS Migration Guide
FluidFS Systems Engineering
Dell EMC
February 2017
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS
OR IMPLIED WARRANTIES OF ANY KIND.
© 2017 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the
express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the DELL logo, and the DELL badge are trademarks of Dell Inc. Microsoft®, Windows®, Windows
Vista®, Windows Server®, and Active Directory® are either trademarks or registered trademarks of
Microsoft Corporation in the United States and/or other countries. Other trademarks and trade names
may be used in this document to refer to either the entities claiming the marks and names or their
products. Dell disclaims any proprietary interest in the marks and names of others.
Table of Contents

1. Preface
2. Introduction
   FluidFS Overview
   Migration Overview
3. Planning and Preparing for NAS Migration
   NAS Migration Methodology
      Discovery Phase
      Mapping and Design Phase
      Performance Analysis Phase
      Provisioning Phase
      Active Data Migration Phase
      Data Validation and Cutover Phase
      Post-Migration Phase
   NAS Migration Topologies
      Single File Server to a Single FluidFS NAS System
      Many File Servers to a Single FluidFS NAS System
4. FluidFS Considerations for NAS Migration
   FluidFS Network Load Balancing
   File System Daemons (FSDs) and NAS Volume Domains
   Optimizing Resource Utilization on FluidFS
   Optimizing Resource Utilization on the Migration Server
   Additional FluidFS Migration Guidelines
      Linux/NFS Environments
      Windows/SMB Environments
5. Performing Data Migration to FluidFS
   Prerequisites
   SMB Migration Procedures
      Robocopy
   NFS Migration Procedures
      rsync
   Mixed Mode Migration Procedures
      Migrating SMB Permissions
      Migrating NFS Permissions
   Using Global Namespace to Assist with Migration
6. Validation and Cutover
   Pre-Cutover Verification
   Cutover
   Updating Service Principal Names (SPNs)
   Post-Cutover Verification
   Post-Cutover Maintenance
7. Conclusion
8. Appendix A – Migrating Between FluidFS Products Using FluidFS Replication
   Overview and Scope
   Prerequisites
   Migration Workflow
   Replication: Conceptual Overview
   Creating Target Volumes
   Establishing Replication Partnership
   Creating a Replication Policy
   Performing the Replication
   Verifying Target Data
   Deleting the Replication Partnership
9. Appendix B – Migrating Using Quest Secure Copy (deprecated)
   Secure Copy
   Initial GUI View in Secure Copy 7
   Create a New Job
      New Job Configuration: Configuring File Copy Locations
      New Job Configuration: Synchronization
      New Job Configuration: Optional Filter Settings
      New Job Configuration: Performance and Throttling Settings
      New Job Configuration: Email Settings
      New Job Configuration: Scheduling
   Job Status
   Job Logs and Reports Dashboard
   Secure Copy – Command Line Interface (CLI)
10. Additional Resources
Acknowledgements
This white paper was produced by the FluidFS Systems Engineering Team and the Dell EMC Solutions
Engineering team.
Authors:
Bryan Lusk and the FluidFS Systems Engineering team
Feedback
Please give us feedback on the quality and usefulness of this document by sending an email to:
FluidFS-System-Engineering@Dell.com
Revision History

Revision A (July 2012) – Initial release.

Revision B (October 2013) – Updated for FluidFS v3 release.

Revision C (March 2014):
- Section 4.2: Added clarification around "safety valve" feature

Revision D (Sept 2014):
- Section 4.2: Added clarification around domain affinity
- Section 4.5.1: Updated FluidFS mount rsize and wsize
- Section 4.5.2: Added note to add migration user to "Backup Operators"
- Section 5.2.1: Updated for Secure Copy 7
- Section 5.2.2.6: Added section for Robocopy GUI
- Section 5.4.2: Made wording more generic regarding competitors' products
- Fixed references to Secure Copy throughout the document

Revision E (Jan 2015) – Updated for FluidFS v4 release. Updates include:
- Renamed all instances of "CIFS" to "SMB"
- Added version/date to filename
- Section 3.1.2: Added notes around quota directories and NAS volume subnet restrictions
- Section 3.1.4: Updated/added NAS volume attributes (file permission types, auditing, metadata redundancy, inode distribution, NAS volume subnet restrictions)
- Section 3.1.4: Added detail about metadata redundancy and quota directories
- Section 3.1.4: Added note about Storage Profile setting for FS8600
- Section 3.2.2: Added note about limiting NAS volumes to subnets
- Section 4.2: Updated section for inode distribution feature
- Section 4.3: Added clarification around Secure Copy 7 licensing
- Section 4.5.1: Updated NFS mount options
- Section 5.2.2.2: Updated Robocopy flags for SACL support
- Section 8.6: Updated for dissimilar replication feature

Revision F (Jan 2016) – Updated for FluidFS v5 release:
- Updated screenshots for Enterprise Manager redesign for FluidFS v5
- Section 3.1.4.2: Added section about the ability to set ACLs on SMB shares from EM
- Section 4.3: Added a note to method 2 to reinforce that there must be multiple source IPs and multiple Virtual IPs used

Revision G (Feb 2017) – Updated for FluidFS v6 release:
- Section 5.5: New section for using redirection folders to assist with migration
- Replaced Dell with Dell EMC where relevant
- Replaced Enterprise Manager with Dell Storage Manager where relevant
- Updated EM/DSM screenshots
- Moved Secure Copy to an appendix and removed Dell branding
- Updated section 3.1.4.3 regarding migrating data into tier 3
- Updated section 4.2 with screenshot on enabling few-writers optimization
- Updated section 6.2 to mention SPN updates, and added a section regarding updating SPNs
- Added section 3.1.4.4 – FluidFS Multitenancy and Migrations
- Added section 3.1.4.5 – FluidFS Metadata Tiering
1. Preface
Dell EMC network-attached storage (NAS) systems based on the Dell EMC Fluid File System (FluidFS)
deliver scalable and highly available enterprise-class file services to clients running Microsoft® Windows®,
Linux, UNIX®, and Mac OS X operating systems under the Server Message Block (SMB) and Network File
System (NFS) protocols. FluidFS supports protocol versions up to SMB 3.1.1 and NFS v4.1. The SMB
protocol is commonly referred to as the Common Internet File System (CIFS); in the context of this
white paper, and in the FluidFS user interface, it is referred to as SMB. Dell EMC FluidFS NAS systems
integrate seamlessly into existing environments, and consolidate file and block data into a unified storage
system.
The Dell EMC Fluid File System supports online capacity scaling, data deduplication and compression, thin
provisioning, thin volume clones, snapshots, asynchronous replication, quotas, the Network Data
Management Protocol (NDMP), and many other features.
This white paper presents guidance for migrating from non-FluidFS systems to the Dell EMC FluidFS NAS
system in an environment with clients running Microsoft® Windows® and Linux under the SMB and NFS
protocols. Please refer to Appendix A for guidance on FluidFS-to-FluidFS migrations. The following topics
are presented:
- Overview of Dell EMC FluidFS NAS systems
- Preparation and planning guidelines for migration to a Dell EMC FluidFS NAS system
- Insight into common migration topologies
- Special considerations for migrating to a Dell EMC FluidFS NAS system
- Guidance for migrating NAS data to a FluidFS NAS system using commonly available tools
- Guidance for validating data and functionality on a new FluidFS NAS system
- Description of failover capability from the old NAS storage system to the FluidFS NAS system
The target audience for this paper is solution architects, application and storage engineers, system
administrators, and IT managers. The reader is assumed to be generally knowledgeable about Microsoft
Windows and Linux operating systems, SMB and NFS protocols, network technologies, file system
permissions, and user authentication technologies.
Dell EMC Migration Services
For customers who would like personalized service, Dell EMC Migration Services has a full range of service
offerings for migrating file and block data. Dell EMC Migration Services will walk through every detail of
planning a migration, including providing estimates on the amount of downtime that will be needed. After
the planning phase, Dell EMC Migration Services will come on site to set up the migration software and
begin the migrations. Afterwards, the migrations will be monitored until they are completed to the
customer’s satisfaction. Please contact your Dell EMC Sales representative for more information on Dell
EMC Migration Services.
2. Introduction
FluidFS Overview
FluidFS is an enterprise-class distributed file system that gives customers tools for easily and efficiently
managing file data. FluidFS removes the scaling limitations of traditional file systems. It also supports
scale-out performance and scale-up capacity expansion, all within a single namespace for easier
administration. Because FluidFS optimizes performance and scalability, it is an excellent choice for a wide
range of use cases and deployment environments.
Figure 1. Dell EMC FluidFS NAS System
Migration Overview
In this paper, data migration refers to the process of transferring file data from one or more existing file
servers to a new Dell EMC FluidFS NAS system. The individual procedures required to migrate file data in
each environment can vary greatly, but the overall methodology remains the same. This paper presents
methodologies, common tools, and scenarios for a successful data migration to a Dell EMC FluidFS
platform.
3. Planning and Preparing for NAS Migration
Planning and preparing for a NAS migration is an important and integral part of ensuring that data is
migrated successfully to a Dell EMC FluidFS NAS system. In this document, we assume the following:
- The FluidFS NAS system is adequately sized for capacity and performance.
- The FluidFS NAS system is deployed using best practices as defined in relevant Dell EMC collateral.
- The FluidFS NAS system has already been integrated into the NAS environment: authentication services
  such as Active Directory (AD), Network Information Service (NIS), or Lightweight Directory Access
  Protocol (LDAP) are set up, and network configuration is complete.
- Existing backup infrastructure is in place to protect data before and after migration.
- Adequate IP and networking resources are available to be used during the migration.
NAS Migration Methodology
The overall data migration methodology includes the following phases:
1) Discovery
2) Mapping and Design
3) Performance Analysis
4) Provisioning
5) Active Data Migration
6) Data Validation and Cutover
7) Post-Migration
Figure 2 captures the general flow of these phases, which are detailed in the following sections.
Figure 2. FluidFS Data Migration Phases
Discovery Phase
During the discovery phase, the existing environment is analyzed to identify the data to be migrated and
its size. Existing NAS systems are assessed to locate source data sets to be migrated, along with
associated shares or exports. Dell EMC Migration Services has scripts that can be run against existing shares to
determine file count, total size of data, change rate, and other attributes that can help provide estimations on how
long the initial copy and differential scan/copy will take.
Other functions of this phase include documenting existing quotas, snapshots and snapshot schedules,
permissions, and user-authentication methods. Lastly, this phase is a good opportunity to identify stale
and unused data that should be deleted or archived to an archival system. Reducing the amount of data to
be migrated reduces the duration of the data migration window
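As an illustrative sketch of this assessment (not the Dell EMC Migration Services scripts themselves), a simple shell function can gather the basic attributes from a mounted source share. It assumes a Linux migration host with the share mounted locally; `du -sb` is GNU-specific.

```shell
# migration_inventory: report file count, directory count, and total size in
# bytes for a source path. Run against the mounted source share during the
# discovery phase to size the baseline copy.
migration_inventory() {
    src="$1"
    files=$(find "$src" -type f | wc -l)
    dirs=$(find "$src" -type d | wc -l)
    bytes=$(du -sb "$src" | awk '{print $1}')
    printf 'path=%s files=%s dirs=%s bytes=%s\n' "$src" "$files" "$dirs" "$bytes"
}
```

Running this per share (for example, `migration_inventory /mnt/source_share` against a hypothetical mount point) and recording the results over several days also gives a rough view of the change rate.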
Mapping and Design Phase
The outcome of the discovery phase plays a large role in how the data structure is planned and mapped
from the existing NAS system(s) to the FluidFS NAS system. The two most common scenarios are:
- Migration of a single file server to a single scalable Dell EMC FluidFS NAS system.
- Consolidation of many file servers to a single scalable Dell EMC FluidFS NAS system.
In these scenarios, data can be copied as is, or the structure can be reorganized during the migration. If
copied as is, the mapping and design phase is minimal, but the data may not end up in the most desirable
layout. It is also important to consider the Quota Directory feature during this phase: because Quota
Directories can only be created on empty directories, the directories themselves and their Quota
Directory definitions must be created on FluidFS prior to migration.
Furthermore, when many file servers are consolidated into FluidFS, the network separation between the
data on those file servers sometimes needs to be preserved after the data is migrated. FluidFS has the
ability to restrict all shares and exports on a NAS volume to one or more specific subnets, allowing the
administrator to preserve network separation between datasets.
Migrations are typically a good time to look at the layout of the data structure and ensure that it is in line
with the functional organization of the company it serves. When consolidating many file servers to a
single Dell EMC FluidFS NAS system, carefully consider how these separate shares and file systems will be
merged into a structured and consolidated environment.
Performance Analysis Phase
During this phase, various aspects of the existing and new infrastructure are analyzed to determine the
amount of time required for the data migration. The following should be documented:
- Total size of the data set to be migrated
- Composition of the data set (large or small files)
- End-to-end network throughput capabilities of the existing file server(s)
- Typical user workload characteristics
A rough estimate of the amount of time needed for the baseline copy of the migration can be derived
from full backups of the file server(s). However, many factors will affect the actual transfer time and
throughput, including user workload and simultaneous threads in the migration tool. Additional factors
that can limit transfer throughput include:
- CPU
- Memory
- Disk subsystem
- Number of network interfaces
- Speed of network interfaces (10 Mbit, 100 Mbit, 1 Gbit, 10 Gbit)
- Network load-balancing mechanisms
- Overall network infrastructure
NOTE: The overall throughput is determined by the slowest component in the data path
from the source system to the destination NAS system.
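As a rough illustration of such an estimate (the formula and inputs below are illustrative, not Dell EMC sizing guidance), the ideal baseline-copy time follows directly from the data size and the sustained throughput of the slowest component:

```shell
# estimate_copy_hours: back-of-the-envelope baseline copy time.
#   $1 = data size in GiB, $2 = sustained throughput in MiB/s
# seconds = (GiB * 1024 MiB) / (MiB/s); printed as hours with one decimal.
estimate_copy_hours() {
    awk -v gib="$1" -v mibps="$2" \
        'BEGIN { printf "%.1f\n", (gib * 1024) / mibps / 3600 }'
}
```

For example, `estimate_copy_hours 4096 110` (4 TiB at 110 MiB/s) prints `10.6`. Actual runs will be slower than this ideal figure once small-file overhead and concurrent user workload are factored in.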
Provisioning Phase
During the provisioning phase, NAS volumes, quotas, NFS exports, and SMB shares may be created
depending on the requirements determined in previous phases.
NOTE: The following features MUST be configured prior to migration: Quota Directories,
metadata redundancy, and inode distribution. This section describes how these
features work and how they should be configured based on the desired usage.
The illustration below shows the logical architecture of FluidFS. NAS Volumes serve as virtual file systems,
carving space out of the NAS Pool. They also function as an administrative entity, providing flexibility to
the storage administrator to separate datasets as needed, based on business demands.
NAS Volumes can host multiple SMB shares and NFS exports as well. Many folders of varying depths can
be shared or exported from the same NAS volume to help structure data.
Figure 3. FluidFS Logical Architecture
Some basic guidelines regarding when to create separate NAS Volumes, SMB Shares, or NFS Exports are
as follows.
- File permission types: FluidFS supports multiple file permission types: NTFS, UNIX, or mixed. Files on
  NTFS volumes can only have NTFS-style Access Control Lists (ACLs). Files on UNIX volumes can only
  have UNIX 12-bit permissions (managed with chmod, chown, and chgrp) or NFS v4 ACLs (managed with
  nfs4_getfacl and nfs4_setfacl). Files on mixed volumes can have either (only one or the other), but mixed
  volumes should only be used when required: on mixed volumes, a UNIX user can overwrite NTFS ACLs,
  and an NTFS administrator can overwrite UNIX permissions. It is highly recommended to choose either
  NTFS or UNIX, and use the user mapping feature to facilitate cross-protocol access.
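As a brief illustration of the UNIX permission model referenced above, the 12 bits are the nine rwx bits plus setuid, setgid, and sticky, all managed with chmod; `stat -c '%a'` (GNU coreutils) prints the resulting octal mode:

```shell
# show_mode: apply an octal mode to a path and echo the mode back, to
# illustrate the UNIX 12-bit permission bits used on UNIX-style NAS volumes.
show_mode() {
    chmod "$1" "$2" && stat -c '%a' "$2"
}
```

For example, `show_mode 2770 team_dir` on a hypothetical group directory sets group rwx plus setgid (so new files inherit the directory's group) and prints `2770`.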
- Application/user separation: The administrator might want to segregate space to prevent a certain
  application or set of users from consuming another's space or potentially accessing another's files.
  Typically, users' home shares are given their own NAS volume.
- Quotas: User and group quotas are set on a per-NAS-volume basis. A quota on a NAS volume can apply
  to a single user, to a group's total used space, or to every individual user in a group (each receiving the
  same limit). If the administrator wants a user to have different quotas on different datasets, those
  datasets must be split into separate NAS volumes. Alternatively, Directory Quotas can be used to limit
  the size of specific directories or shares.
- Data reduction: The administrator might want to enable data deduplication/compression on one
  volume and not on another, or might want to use different data reduction settings. For example, one
  volume may hold data that is accessed every 45 days which the administrator does not want
  deduplicated, so data reduction would be configured to run on files older than 46 days (one extra day
  on top of the 45-day window). Another volume might be set to archive mode (data reduce
  immediately), and a third might start data reduction when files are 5 days old or older. It is important
  to note that files containing deduplicated and/or compressed chunks incur a small performance hit
  when they are accessed.
- Thin provisioning/space settings: The administrator might want certain volumes to be thin provisioned,
  some thin with reserve space, and others thick provisioned, or might want different amounts of reserve
  space or total volume space.
- Auditing: FluidFS supports auditing using System Access Control Lists (SACLs), over SMB only. This
  feature is toggled at the NAS volume level; once enabled, audit events are triggered by the SACL set in
  the "Auditing" tab of each file and folder.
  Note: If SACLs are present on the migration source, auditing should be enabled on the destination
  NAS volumes (FluidFS) prior to migration.
- Snapshot schedule: For some NAS volumes, the administrator might want a very aggressive snapshot
  schedule, which consumes more space. For other NAS volumes, the administrator might not want
  snapshots at all, or might prefer a less frequent schedule.
- Replication: The administrator may wish to replicate some NAS volumes and not others. Replication
  policies are set on a NAS-volume-to-NAS-volume basis.
- Inode distribution: By default, all files on a FluidFS NAS volume are owned by the FSD that initially
  wrote them. However, FluidFS has the ability (on a per-NAS-volume basis) to actively distribute all new
  writes evenly across all FSDs. This mode is typically recommended for migrations, or for workloads that
  will have more readers (or that should favor readers) after the migration completes. See Section 4.2,
  File System Daemons (FSDs) and NAS Volume Domains, for more information.
- Subnet restrictions: Administrators often wish to have complete network separation between different
  datasets, such that some NAS volumes (and the shares or exports on them) are completely inaccessible
  from certain subnets. FluidFS has the ability to restrict access to shares and exports based on the
  subnet used to access them, on a per-NAS-volume basis.
- Default UNIX permissions: For UNIX-style NAS volumes, the administrator might want the default UNIX
  permissions for files created from Windows to differ between two NAS volumes, depending on the
  needs of the applications using those NAS volumes.

3.1.4.1 Setting up Quota Directories
Quota Directories allow the administrator to limit the amount of space a given folder can consume.
The most important aspect of Quota Directories with regard to migration is that they must be created
on empty directories. If the administrator attempts to assign a quota directory to a directory that
already contains data, FluidFS will deny the action and return a descriptive error message. For this
reason, it is very important to create the directories and the Quota Directory objects on FluidFS prior
to beginning the data migration.
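A minimal sketch of the pre-creation step, assuming the still-empty target NAS volume is mounted on the migration server (the mount point and directory names below are hypothetical); the quota limits themselves are then defined on these empty directories in Dell Storage Manager:

```shell
# precreate_quota_dirs: create the empty directory tree for planned Quota
# Directories on the target NAS volume before any data is copied, so the
# Quota Directory objects can be assigned while the directories are empty.
precreate_quota_dirs() {
    mount_point="$1"; shift
    for d in "$@"; do
        mkdir -p "$mount_point/$d"
    done
}
```

For example: `precreate_quota_dirs /mnt/fluidfs_vol1 projects/engineering projects/marketing users/shared` (paths illustrative).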
3.1.4.2 Setting permissions on SMB shares via Dell Storage Manager
Immediately following share creation, the administrator can set Share Level Permissions (SLP) on the
share, as well as define an Access Control List (ACL) for the root of the share. This is done by
right-clicking the SMB share in Dell Storage Manager and clicking Edit Settings. Share Level Permissions
are set in the "Share Security" tab, and the ACL for the root of the share can be set in the "Folder
Security" tab.
3.1.4.3 Migrating Data Into Tier 3 on Compellent FS8600
On the Dell Compellent FS8600 FluidFS NAS product, it is possible to configure the system such that
all new writes go into Tier 3 storage on the backing Storage Center. This is achieved by going to the
"NAS Pool" tab in Dell Storage Manager, clicking "Change Storage Profile", and selecting "Low Priority
(Tier 3)", as shown in Figure 4 below. If the administrator wishes for the FS8600 to use more than only
Tier 3 on the Storage Center, this setting can be changed after the data migration finishes, and Data
Progression will move the data into higher tiers accordingly.
Figure 4. Dell EMC FluidFS Storage Profile Setting
Additional information on provisioning the Dell EMC FluidFS NAS system can be found in the
Administrator's Guide associated with the FluidFS platform.
3.1.4.4 FluidFS Multitenancy and Migrations
For FluidFS NAS systems that utilize multitenancy, each tenant should have a completely separate
migration strategy and migration server(s). To preserve permissions properly, the migration server should
be joined to the same Active Directory domain/forest as the tenant and, for UNIX/NFS environments,
should use the same LDAP/NIS infrastructure. Additionally, each tenant has its own set of Virtual IP
addresses, effectively isolating data access between tenants.
3.1.4.5 FluidFS Metadata Tiering
FluidFS metadata tiering should be configured prior to beginning a migration in order to gain the most
performance benefit from the feature. Metadata tiering is enabled by expanding the NAS pool by some
small amount; on the same screen where the new NAS pool size is entered, the Metadata Tiering feature
can also be enabled. After the NAS pool is expanded and metadata tiering is enabled, dedicated metadata
LUNs are created on faster disks than the regular data LUNs, and FluidFS automatically begins placing all
new and modified metadata on them. However, FluidFS currently does not have the ability to migrate
existing metadata from regular data LUNs onto the metadata LUNs, which is why metadata tiering should
be enabled before performing a migration.
Active Data Migration Phase
This phase begins when the environment has been set up according to the recommendations in this
paper, and all previous phases are complete. In this phase, data is transferred from the source systems to
the FluidFS NAS system. A variety of tools can be used, including Microsoft Robocopy for Windows SMB
environments and GNU rsync for Linux or UNIX NFS environments. (See sections SMB Migration
Procedures and NFS Migration Procedures for more information about these tools.)
All migrations follow the same basic outline during active data migration:
1. Initial full copy – Set up the migration software to copy a set of data from the source to the
   destination while the source share is in read/write mode.
2. Online synchronization – After the initial full copy, scan the same dataset for any new or changed
   files, and copy only those files. This is repeated until the time an online re-synchronization takes to
   complete settles at a constant minimum value. This minimizes, and allows estimation of, the amount
   of time the source dataset must be made read-only (or taken offline for users) for the final offline
   synchronization.
3. Offline synchronization – One final differential scan and copy is performed between the source and
   destination while the source data cannot be changed. This is accomplished by making the source
   data read-only, or otherwise disabling users' access to it. Once this is done, the migration software
   performs one last differential scan and copy (offline re-sync).
Data Validation and Cutover Phase
In this phase, data is validated on the destination system to verify that it was copied consistently during
the “Active Data Migration” phase. Data can be validated using a variety of methods, including log files
from the migration process, spot-checking source and destination volumes for consistency, and
delegating verification to end users. After the data has been validated, the cutover from the original source
file server to the new FluidFS NAS system can occur.
Post-Migration Phase
This phase is used to perform tasks that need to occur after the migration has completed such as verifying
proper functionality on the new FluidFS NAS system and cleaning up previous configurations from the old
file server. Some examples include: verifying that backups are occurring on the new FluidFS NAS system,
decommissioning the old file server(s), verifying quotas and snapshot configurations are correct, and
verifying that the FluidFS NAS system is performing as expected.
NOTE: An important post-migration task is to remove the old, unused computer object in Active Directory
for the filer being decommissioned.
NAS Migration Topologies
Many potential migration topologies exist, based on customer environments and business requirements.
This paper focuses on two of the most common topologies, which can be broadly applied to most
environments.
•	Single file server to a single FluidFS NAS system
•	Many file servers to a single FluidFS NAS system
Single File Server to a Single FluidFS NAS System
This topology assumes that the customer has a single NAS storage system that will be migrated to a Dell
EMC FluidFS NAS system. In this scenario, there is a one-to-one ratio of source-to-destination
relationships.
Figure 5. Single File Server Migration
Within this topology, two migration approaches can be used: all-at-once or staggered.
3.2.1.1 “All-at-once” or “Bucket-Dump” Migration
This method migrates data in its entirety without any structural changes. The data is kept completely
intact and the folder hierarchy for the target is an exact replica of the source. All data is migrated in a
single migration time window with a single cutover, and the source system is typically decommissioned
after completion.
3.2.1.2 “Staggered” or “Staged” Migration
This method migrates data chunk-by-chunk in a staged or staggered manner. Typically, the data is
restructured as captured in the “Mapping and Design” phase, and there may be multiple “mini migrations”
and cutovers for each data set. This approach is typically used when careful structural design must be
adhered to, or when business use cases prevent simultaneous cutover for the entire storage system.
Additional considerations to ensure optimal performance and balanced resource utilization during and
after the data migration are outlined in section 4. FluidFS Considerations for NAS Migration.
Many File Servers to a Single FluidFS NAS System
This topology assumes that the customer has many NAS storage systems that will be migrated to a single
Dell EMC FluidFS NAS system. In this scenario, there is a many-to-one ratio of source-to-destination
relationships.
Typically, this topology is used when many islands of data must be consolidated to a single location. Due
to the nature of this type of migration, it is highly likely the data structure will need to change, as
determined in the “Mapping and Design” phase. As with one-to-one migrations, this type of migration can
be done all-at-once or staggered.
If the original separation of shares/exports is to be preserved after the data is migrated to FluidFS, the
administrator can use the FluidFS feature that allows NAS volumes to be restricted to specific subnets.
Figure 6. Multiple File Server Migration
Additional considerations to ensure optimal performance and balanced resource utilization during and
after the data migration are outlined in section 4. FluidFS Considerations for NAS Migration.
4. FluidFS Considerations for NAS Migration
FluidFS Network Load Balancing
The Dell EMC Fluid File System is designed to serve organizations that use flat networks, routed networks,
or a combination. FluidFS uses ARP (Address Resolution Protocol) to load balance on flat (non-routed)
networks, and DNS (Domain Name System) round robin to load balance in routed and combination type
networks.
The diagram below illustrates FluidFS load balancing in a flat network. FluidFS has an internal mechanism
called the “Client Load Balancer” that uses ARP to determine if a connection is coming from a new MAC
address that hasn’t previously connected to the FluidFS appliance. The Client Load Balancer will load
balance incoming connections across controllers, and across the network interfaces on each controller.
Additionally, MAC addresses that have connected to the FluidFS NAS cluster in the past will be connected
to the node and network interface that they were paired with the first time they connected to the cluster. If the cluster
ever becomes unbalanced due to this, individual connections can be moved to a different node/interface.
Alternatively, a “mass rebalance” operation can be performed to rebalance all connections evenly. This will
cause a very brief interruption in service.
Figure 7. Load Balancing in Flat Networks
The diagram below shows FluidFS load balancing when using DNS round robin. In this example, the
cluster name is “dellfluidfs01”. It is a single 10GbE FluidFS NAS system, so each controller in the appliance
has 2x 10GbE network interfaces. Since there are a total of four physical client network ports, four
Virtual IP addresses (VIPs) are used. The administrator creates four DNS A records, all using the same
name “dellfluidfs01”, each holding a different VIP. Every time a system accesses the NAS appliance using
the DNS name, DNS resolves to the next of the four VIPs, in round-robin fashion.
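As a sketch, the four A records described above would look like the following in a BIND-style zone file; the VIP addresses are illustrative placeholders, not values from this paper:

```text
; Four A records sharing one name, one record per FluidFS VIP
dellfluidfs01   IN  A   10.10.10.11
dellfluidfs01   IN  A   10.10.10.12
dellfluidfs01   IN  A   10.10.10.13
dellfluidfs01   IN  A   10.10.10.14
```

Successive lookups of `dellfluidfs01` then rotate through the four VIPs, spreading clients across controllers and interfaces.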
A point pertinent to migration is that a single MAC address, when accessing the NAS appliance with
multiple VIPs, will establish multiple connections to the appliance. However, all of these connections
will use only one physical network interface on the system that is accessing the NAS appliance. This
white paper describes later how to spread this load across multiple network interfaces on the system
accessing the NAS appliance by using static routes.
Figure 8. Load Balancing in Routed and Flat Networks
File System Daemons (FSD’s) and NAS Volume Domains
The Dell EMC Fluid File System is a highly distributed file system. The core building blocks of FluidFS are
the file system daemons (FSDs). The FSDs are the core file system processes that serve file system data to
the SMB and NFS protocol servers, to which the SMB and NFS users connect. On a FluidFS controller, the
number of FSDs running is equal to the number of CPU cores. FSDs are the mechanism FluidFS uses to
optimize and divide hardware resource utilization, in order to provide a scalable solution.
Each FS8600 controller has two quad core processors. The number of FSD’s running on the controller is
equal to the number of physical processor cores running on that controller. FluidFS will automatically load
balance new connections across FSD’s, in addition to the network load balancing that was described in
the previous section. Connections that have connected to the FluidFS NAS cluster previously will be
reconnected to the same controller, network port, and FSD they previously connected to. The time to live
value for this affinity is 4 days. In the case of a controller failure, the NAS volume domains that the FSD’s
on the failed controller were serving, prior to failure, fail over to the peer controller.
The table below illustrates the FSD and Domain architecture for a 2 appliance/4 controller NAS system,
which has 8 physical CPU cores per controller.
Controller0          Controller1
FSD0  Domain0        FSD0  Domain1
FSD1  Domain2        FSD1  Domain3
FSD2  Domain4        FSD2  Domain5
FSD3  Domain6        FSD3  Domain7
FSD4  Domain8        FSD4  Domain9
FSD5  Domain10       FSD5  Domain11
FSD6  Domain12       FSD6  Domain13
FSD7  Domain14       FSD7  Domain15

Controller2          Controller3
FSD0  Domain16       FSD0  Domain17
FSD1  Domain18       FSD1  Domain19
FSD2  Domain20       FSD2  Domain21
FSD3  Domain22       FSD3  Domain23
FSD4  Domain24       FSD4  Domain25
FSD5  Domain26       FSD5  Domain27
FSD6  Domain28       FSD6  Domain29
FSD7  Domain30       FSD7  Domain31

Table 9. FSD and Domain Architecture
When client systems disconnect from FluidFS and reconnect, they will be connected to the same
controller and FSD they were using prior to the disconnection. This is designed to maintain client/file
affinity. For environments where data is consumed by the same client that created it, this provides optimal
performance.
Figure 10. FluidFS FSDs and Domains
For the purpose of migrations, however, maintaining affinity might not be optimal, especially if the
number of clients performing the migration is small. In order to prevent performance bottlenecks, for
large NAS volumes, it becomes important to distribute the data across FSDs (and therefore across more
CPU cores and memory). To achieve that, FluidFS introduces NAS Volume Domains. NAS volumes in
FluidFS distribute their data and meta-data (of all levels) among the FSD’s by dissecting the NAS volume
contents into NAS volume domains.
For migrations, FluidFS includes a feature that will dynamically distribute new writes evenly across all NAS
volume domains. This feature (which is disabled by default) should be used when one or a few migration
servers are in use, but many users/hosts will access the data post-migration. In this scenario, using the
feature results in optimal use of CPU and memory resources after the FluidFS cluster is put into
production, because the FluidFS system distributes the files across all FSDs instead of overloading a few
FSDs that would otherwise serve a disproportionately larger amount of data than the rest.
During migration, in the simplest case, the topology in terms of FSDs will look like Figure 11.
Figure 11. Migration Topology
Most topologies look like Figure 11 during migration, but after migration completes and the FluidFS cluster
is put into production, the topology looks more like Figure 12. This is the most common case, and this is
the case in which the “optimize inode distribution to favor readers over writers” feature should be enabled.
Since the data will be distributed among all FSD’s (along with the many hosts accessing it) resources will
be utilized evenly and optimally. If the “optimize inode distribution to favor readers over writers” feature
was not enabled (in the Figure 11 example), the FSDs that the migration servers were connected to
while performing the migration could be disproportionately loaded once the cluster is put into
production.
Figure 12. Post Migration Topology
NOTE: In some cases, the server that will be modifying/reading the data post-migration is the same
server that actually performs the migration. In such cases, the “optimize inode distribution to
favor readers over writers” feature should be disabled. In other words, for NAS volumes/datasets in which
the hosts that initially write files will be predominantly the same host that is modifying the files later, the
“optimize inode distribution to favor readers over writers” feature should be left disabled. In this case, NAS
volume domain affinity will be maintained, providing optimal performance for these types of workloads.
This is the default behavior of any new FluidFS NAS Volume.
NOTE: The inode distribution feature only applies to new writes, and cannot be applied to data that has
been written prior to enabling the feature.
For customers who have already migrated in a single threaded manner, without the “optimize inode
distribution to favor readers over writers” feature enabled, it is not necessary to re-migrate the data.
FluidFS includes a “safety valve” that dynamically redistributes files into other NAS volume domains
during migrations when one NAS volume domain becomes overloaded, in terms of inode count, compared
to the rest. Once one NAS volume domain holds a large number of files/inodes while other domains
hold far fewer, FluidFS begins dynamically re-allocating NAS volume domain membership of
new files/inodes to other domains. The FSD that is communicating with the migration server will continue
to migrate files into its domain as well, but this functionality reduces the chances of creating a bottleneck
in single threaded migrations to FluidFS.
NOTE: There is no substitution for a well-distributed migration. Administrators should always strive to
distribute migrations as much as possible, using the “optimize inode distribution to favor readers over
writers” feature, multiple source IP’s, FluidFS VIP’s, or migration users, as described in this guide. The
mechanism described in the paragraph above is only intended to be used as a “safety valve” and kicks in
only in environments with millions of files. It is not intended to be used as the primary distribution
mechanism for migrations.
To enable the “optimize inode distribution to favor readers over writers” feature, select the
corresponding check box in the NAS volume properties in Dell Storage Manager.
Optimizing Resource Utilization on FluidFS
FluidFS is designed to serve many client systems simultaneously accessing data. As discussed in previous
sections, network load balancing is accomplished using multiple VIP’s. File system-level load balancing is
accomplished by using multiple FSD’s. During data migration, this distributed nature can be leveraged to
achieve better migration performance through multithreading.
Data migrations complete more quickly if different sets of the data are migrated in parallel. For
example, instead of migrating one share at a time, migrate several shares at once. If there is only
one very large share, break the migration up into groups of the top-level folders in that one large
share.
For each one of the separate migration jobs, the administrator has two options to invoke multithreading
on FluidFS.
Multithreaded Migration Options:
1. The first option, if using SMB to migrate, is to connect to the NAS system with multiple users. If X
users are logged onto the same machine, all accessing the NAS system using the same FluidFS VIP,
FluidFS will spawn X SMB sessions, and each SMB session will use a separate FSD. This migration
option works only for SMB (not NFS). The figure below illustrates this model. This method is
preferred because it can also minimize the number of licenses needed for migration software. It is
important to note that by default, Microsoft Windows Server allows only two simultaneous
logged-in users. A terminal server is best suited for this migration option, since it allows more
than two users to be logged into the same Windows machine simultaneously. It is also important
to optimize the resources of the migration server, so as not to create a bottleneck by using only
one NIC on the migration server. A 10GbE infrastructure is recommended for this type of
multi-threaded migration. The next section, 4.4 Optimizing Resource Utilization on the Migration
Server, describes optimization for the migration server.
Figure 13. Multi-streaming using multiple SMB users
2. The second option is to use multiple VIPs, and optionally static routes on the client. This option
works for both SMB and NFS migrations. A single user with one or more host IPs can access the
NAS system using multiple VIPs, and one SMB or NFS session will be spawned for each VIP used.
The downside is that some migration software requires a separate license for each VIP used;
starting with Dell SecureCopy 7, however, using multiple VIPs does not consume multiple
licenses. The next section, 4.4 Optimizing Resource Utilization on the Migration Server,
describes optimization for the migration server, including setting up static routes.
Note: The most important aspect to this topology is that FluidFS sees multiple source IP
addresses, AND each one of those unique source IP addresses is accessing its own, unique FluidFS
VIP.
Figure 14. Multi streaming using multiple VIPs
Optimizing Resource Utilization on the Migration Server
Dell EMC FluidFS is a highly scalable, distributed clustered file system. Performance can be scaled by
adding additional NAS controllers or back-end, block-based storage controllers and disks. The distributed
nature of FluidFS enables performance to scale as additional clients are added. This aspect of the FluidFS
architecture can be leveraged to increase migration performance. A migration server equipped with
multiple network interface controllers (NICs) that emulate multiple clients can be used to increase the
FluidFS resource utilization and boost migration throughput.
The migration can be performed by a dedicated server or by multiple clients. As described in the previous
section, FluidFS resources can be optimized by using multiple users to migrate, or multiple unique VIP’s,
or a combination of both.
Figure 15 shows a migration (or “staging”) server equipped with four NICs and four IP addresses. In this
scenario, the FluidFS NAS system is also configured with four virtual IP addresses (VIPs). Static routes are
defined on the migration server so that each SMB share or NFS mount reaches a different FluidFS VIP
through a different local NIC. Setting up the static routes optimizes resources on the migration server
by preventing a single NIC from becoming a bottleneck. An alternative to using static routes in this
manner is to use NIC teaming, access a single VIP, and use multiple users to cause FluidFS to spawn
multiple SMB sessions. This method works only for SMB-based migrations, however.
If the source file server has multiple NICs, additional alias IP addresses can be added to the old file server
(the one being migrated off of), with corresponding static routes defined on the migration server. This
approach ensures that data transfer between the existing file server and the migration server is
distributed across all available NICs. If the source file server has just a single NIC, the procedure
outlined in Figure 15 on the “Source Side” can be skipped, because it provides no additional throughput or
balancing advantages.
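As a sketch, the per-NIC static routes described above might look like the following. All VIP addresses, local IP addresses, interface names, and interface indexes are placeholders for the actual environment, and the commands require administrative privileges:

```shell
# Linux migration server: pin each FluidFS VIP to a different local NIC/IP
ip route add 10.10.10.11/32 dev eth0 src 192.168.1.10
ip route add 10.10.10.12/32 dev eth1 src 192.168.1.11

# Windows migration server: one persistent host route per VIP, bound to a
# specific local interface (IF is the interface index from "route print")
route -p ADD 10.10.10.13 MASK 255.255.255.255 192.168.1.1 IF 12
```

Each mount or mapped drive then targets the VIP that its route pins to a distinct NIC, so no single interface on the migration server carries all of the traffic.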
Figure 15. IP and Static Routing Configuration for Optimal Throughput
As mentioned earlier, the migration design that includes a migration server and associated static routes
can be avoided if multiple clients of adequate size are used for the migration process. Ideally, the
number of NICs and IP addresses on the migration server (or the number of clients used for the
migration) should equal the number of FSDs supported by the FluidFS platform being introduced into
the environment. However, such a high number is not always achievable, and it is also less necessary if
the “optimize inode distribution” feature is being used. The primary benefit of using more clients
and/or VIPs is to decrease the overall time taken to perform the migration. When using multiple
clients and/or VIPs, the data set must be logically partitioned into the same number of equally sized
chunks, each migrated across its own data path. As shown in Figure 16, this enables simultaneous data
copy operations that use separate dedicated storage and network resources within and across the
network and FluidFS.
Figure 16. Dataset Migration with FluidFS
During the migration, on the FS8600 platform, the administrator can monitor how much cache each FSD
is using, to verify that multiple FSDs are being utilized for the migration. Figure 17 shows the Dell
Storage Manager screen in which this can be monitored.
Figure 17. FSD Write Cache Utilization in Dell Storage Manager
Additional FluidFS Migration Guidelines
The following additional guidelines can facilitate a smooth migration to a FluidFS NAS system.
Linux/NFS Environments
1) Configure the Dell EMC FluidFS NAS system to use the local user repository, LDAP, or NIS,
depending on the environment. If using local users, migrate the local users manually, or use scripts
to synchronize local files.
2) Configure Kerberos authentication for NFS on the systems that will be performing the migration.
3) Ensure that the Domain Name System (DNS) or hosts file has been configured, and that name
resolution is working correctly on all systems.
4) Ensure that the Network Time Protocol (NTP) has been configured correctly on the FluidFS NAS
system.
5) Create NAS volumes and associated high-level NFS exports with appropriate permissions, based
on the discovery and mapping phase described earlier in this paper. Upon initial creation of an
NFS export, only the root user has access. First mount the export as root, and set appropriate
permissions to allow users and groups other than root to access the NFS export.
6) Map the source and destination NFS exports to the migration server or clients using distinct
mount points. An example for NFS v3 is shown below:
# mount -o proto=tcp,vers=3 FluidFS-VIP:/<NFS Export> /<Mount Point>
If NFS v4 ACLs are in use:
# mount -t nfs4 -o proto=tcp,vers=4,sec=sys,acl FluidFS-VIP:/<NFS Export> /<Mount Point>
Or NFS v4.1 with NFS v4 ACLs:
# mount -t nfs4 -o proto=tcp,vers=4,minorversion=1,sec=sys,acl FluidFS-VIP:/<NFS Export> /<Mount Point>
7) Apply appropriate permissions to the root level of each destination NFS export, based on the
source from which it will be migrated. This is especially important, because it is difficult to set
permissions after the entire data set has been migrated.
8) Decide whether permissions will be migrated with the data. Each tool mentioned in this
document is capable of migrating only the data or migrating the data along with associated user
identification (UID) and group identification (GID) permissions.
9) FluidFS uses UTF-8 character encoding. Many other NAS vendors use ISO-8859-1 as the default
character encoding scheme. If the character encoding needs to be converted during migration,
this can be done on the fly using rsync as detailed later in this document.
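The character-set conversion mentioned in step 9 can be sketched with rsync's `--iconv` option (available in rsync 3.0 and later when built with iconv support). The host and paths below are placeholders; `--iconv` takes the local charset first and the remote charset second, and applies when rsync runs over a remote connection:

```shell
# Pull from an ISO-8859-1 source server into a UTF-8 FluidFS mount,
# converting file names on the fly (host and paths are placeholders)
rsync -a --iconv=UTF-8,ISO-8859-1 sourceserver:/export/data/ /mnt/fluidfs-export/
```

If a dry run is wanted first, adding `-n -v` lists the names that would be transferred without copying anything.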
Windows/SMB Environments
1) Ensure that the Dell EMC FluidFS NAS system is configured to authenticate locally or to Active
Directory (AD). If using local authentication, migrate users manually or use scripts to synchronize
local files.
2) If configured to authenticate using AD, verify the FluidFS cluster is joined to the Active Directory
domain.
3) To ensure that the user(s) being utilized to perform the migration do not receive any
“Access Denied” type errors on the FluidFS NAS system, add the user(s) to the “Backup Operators”
local group. These can be local users or Active Directory users.
4) Ensure that the DNS or hosts file has been configured, and that name resolution is working
correctly on all systems.
5) Ensure that NTP is configured correctly on the FluidFS NAS system. This is especially important in
a Microsoft Active Directory environment, in which a time difference of more than five minutes
prevents communications.
6) Create NAS volumes and associated high-level SMB shares with appropriate permissions, based
on the discovery and mapping phase described earlier in this paper.
7) Map both the source and destination SMB shares to the migration server or clients using distinct
mount points.
a. Use Windows Explorer: go to Tools > Map Network Drive.
b. Use the CMD prompt, for example:
C:\> net use N: \\<FluidFS-VIP>\<Share-Name>
8) Apply appropriate permissions to the root level of each destination SMB share based on the
source from which it will be migrated. This is especially important, because it can take a lot of
time to set permissions after the entire data set has been migrated.
9) Decide whether permissions will be migrated with the data. Each tool mentioned in this paper is
capable of migrating only the data or migrating the data along with associated Access Control
Lists (ACL).
5. Performing Data Migration to FluidFS
Prerequisites
Before starting the data migration, the following prerequisites should be met:
1. Disable snapshots on FluidFS destination NAS volumes.
2. Disable antivirus scanning on FluidFS destination NAS volumes/SMB shares.
3. Disable NDMP backups of FluidFS.
4. Ensure that full backups of the source data have completed successfully.
5. Ensure that the root of the destination NAS volumes, SMB shares, and NFS exports is configured
with appropriate permissions that are equivalent to the source locations.
NOTE: It is important to disable FluidFS NAS volume snapshots before the migration
begins. The size of the snapshots will increase based on the change rate or amount of data
that is ingested. The snapshot space requirements could be as large as the size of the data
being copied, requiring twice the space in the NAS volume for the data set. Snapshots can
be re-enabled after the migration is complete.
SMB Migration Procedures
The following sections cover SMB migrations to a FluidFS NAS system with Microsoft Robocopy. Another
tool, Quest Secure Copy, a licensed product with an easy-to-use graphical user interface (GUI), is
covered in an appendix at the end of this document. Robocopy has been included with every version of
Microsoft Windows since Windows Vista®. For older versions of Windows, Robocopy can be
downloaded from the Microsoft Download Center; see section 9: Additional Resources for download
information.
Robocopy
Robocopy is an industry standard utility produced by Microsoft for migrating data and metadata in SMB
environments. It is a powerful and flexible tool, so it's important to use it properly to achieve efficient data
migrations to Dell EMC FluidFS NAS systems. When migrating data to a Dell EMC FluidFS NAS system, one
of the main concerns is preserving the Access Control Lists (ACLs) associated with each file and folder. As
the amount of migrated data is usually large, the migration process is lengthy and it is important to
complete the migration successfully on the first attempt.
5.2.1.1 File Permissions When Copying with Robocopy
If, on the source system, the files and directories being migrated carry only Access Control Entries
(ACEs) inherited from their parent directory, they acquire “inherited permissions” on the target FluidFS
system. In this case, the only ACEs applied to the migrated files on the destination share are those
inherited from the target parent directory.
If files carry explicit ACEs in addition to the inherited ones, those explicit ACEs are applied on the
destination share as well.
5.2.1.2 Recommended Robocopy Flags
Some administrators prefer to migrate permissions first, and then data. In the example shown below, the
root permissions are copied using Robocopy from the source volume (D) to the destination volume (N):
C:\>robocopy D: N: /COPYALL /MIR /XD * /XF *
The recommended Robocopy CLI flags to copy data and metadata are:
robocopy \\SourceSystem\Share\Path \\TargetSystem\Share\Path /MIR /COPYALL /V /MT:12
/FP /NP /LOG+:”c:\logs\Migration_LogX.log” /FFT /R:0 /W:0 /TEE
The Robocopy GUI options and command line options are comparable.
Source Path and Target Path
Specify the location of the data in the "Source Path" field and the new location in the "Target Path” field.
Quotes are required if the path contains spaces. Note that quotes can be problematic if the path ends
with a backslash, because the trailing \" will be interpreted as an escaped quote character.
It is possible to use local locations, such as c:\dataset, or mapped drives. In addition, network locations are
supported through the \\<computer-name>\<share>\<path> format.
To avoid typing errors, it is best to browse to the source and target using Windows Explorer and to copy
the location from there.
Make sure the share-level permissions for the migration user are set to “Full Access”.
Dataset
The /S flag is used to include subdirectories. The /E flag does the same, but also includes empty
subdirectories. The /MIR flag is equivalent to the /E flag, but adds /PURGE functionality, which removes
files and directories from the target that no longer exist in the source.
The /MIR flag is recommended in most cases.
It is very common to perform an initial migration at one point in time and then perform a delta migration
right before the switch from one system to another, thus reducing the service outage. The /MIR flag is
suitable for both the initial migration and the delta migration.
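A sketch of this two-pass approach follows; the share paths and log names are placeholders. The same /MIR command is run once for the initial copy while the source is live, and again for the delta pass once the source has been made read-only:

```batch
:: Initial migration (source still read/write)
robocopy \\SourceSystem\Share \\FluidFS\Share /MIR /COPYALL /FFT /R:0 /W:0 /LOG+:c:\logs\initial.log

:: Delta migration (source now read-only): the identical command copies only
:: changed files and purges files that were deleted from the source
robocopy \\SourceSystem\Share \\FluidFS\Share /MIR /COPYALL /FFT /R:0 /W:0 /LOG+:c:\logs\delta.log
```

Because /MIR is differential, the delta pass is typically far shorter than the initial pass, which is what keeps the cutover outage small.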
Attributes
The /COPY flag specifies the type of information to copy: Data, Attributes, Timestamps, Security,
Owner, and aUditing information (the capitalized letters form the flag values).
The full syntax, /COPY:DATSOU, is equivalent to /COPYALL, while the inclusion of the S is equivalent to
/SEC (security, ACLs).
Auditing information (SACL) is supported by FluidFS.
The recommended flag is /COPYALL, to copy all data and metadata. The /SEC flag is redundant when
using this flag.
/FFT
The Dell EMC FluidFS system uses an internal time granularity that differs from the Windows time
granularity. As a result, delta migrations can mistakenly identify some files as changed on the target,
causing them to be unnecessarily re-migrated. The default settings on FluidFS NAS volumes do not need
to be adjusted for migrations to function properly.
The /FFT flag causes Robocopy to ignore time differences of up to 2 seconds.
The use of /FFT is recommended on every execution, but it is effective only on delta migrations.
Retry Options
Robocopy assumes all operations will eventually succeed, so by default it retries failed copies up to
1 million times, with a 30-second pause between retries.
Using the /R:0 /W:0 flags ensures that failures are skipped quickly. Log analysis is required in any
case at the end of the execution.
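The log analysis step can be sketched as a simple search for error lines in the Robocopy log. The log content and path below are illustrative placeholders (a tiny sample log is generated so the sketch is self-contained); on Windows, `findstr` plays the same role as `grep`:

```shell
#!/bin/sh
# Create a small sample Robocopy-style log so the sketch runs standalone
LOG=/tmp/robocopy_demo.log
printf '%s\n' \
  'New File  1024  \\src\share\a.txt' \
  '2017/02/01 10:00:00 ERROR 5 (0x00000005) Copying File \\src\share\b.txt' \
  'Access is denied.' > "$LOG"

# List only the lines flagged as errors, with their line numbers,
# so skipped files (/R:0 /W:0) can be retried or investigated
grep -n "ERROR" "$LOG"
```

Any file reported here was skipped by /R:0 /W:0 and should be copied again manually or in a follow-up pass.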
Logging Options
It is very important to enable logging during migration, in order to record any issues, without having to
monitor the screen constantly.
The /LOG:<log file> option will write a log file instead of displaying progress on the screen. The
/LOG+:<log file> option will do the same, but will append the new logs instead of overwriting the log file.
The /TEE option adds screen output in addition to the log output, but it is missing from the Robocopy GUI
utility (which sends the job to the background). It is recommended for console executions.
The /V option will include verbose information (including skipped files), so using it during delta migrations
will cause the logs to be as large as the initial migration. On the other hand, it will show progress even if
no files were modified in the delta migration.
The /NP option disables the per-file progress meters, which are more suitable for screen output as
opposed to log files.
The /FP option displays the full path, which can be very useful if the migration is split between different
jobs (machines or processes).
The /MT flag is for multithreading.
The recommended flags are /LOG+:<log file> /TEE /V /NP /FP /MT:12.
5.2.1.3 Known Limitations and Untested Options
Other than the limitations described above, some Robocopy features are not compatible with Dell EMC
FluidFS systems.
Compression and Encryption
The Dell EMC FluidFS system does not support the Windows "Compression" and "Encryption" attributes.
The migration process does not suffer as a result; the data is simply migrated uncompressed and
unencrypted.
Empty Folder Removal
When using the /MIR option (or /E /PURGE), the target location is changed to give a "mirror" image of the
source. This includes removing folders from the target that do not exist in the source location.
This feature is usually useful when the migration is done in several stages, with only the delta being
synched each time, and some folders have been removed from the source in the meantime.
Manual removal of these folders is possible if needed, as Windows Explorer and Robocopy use different
methods for directory removal.
Backup Flag
In order for Administrators or Backup Operators to back up files that they have no permission to read, the
SMB protocol allows these special users to open files with a special "Backup" flag, which overrides the
security protection and allows them to read the files.
For FluidFS to honor the /B or /ZB flags, the migration user must be added to the local "Backup Operators"
group on the FluidFS NAS system.
Resume Flag
The /Z option allows resuming interrupted copies, and is intended for huge files or transfers over
unreliable lines (internet/VPN).
This option has been reported to slow transfers down significantly, and it was not tested.
The /ZB flag uses the same mechanism (equivalent to /Z + fallback to /B if access is denied), and is
supported. (See the Backup Flag option above for more information).
Drive Mapping
This option is supposed to create a drive mapping of the target share on the source machine.
It was not fully tested and is not recommended.
Monitoring Options
Robocopy can start a never-ending job that runs a single pass of the migration process and then listens
for file-system changes, running additional passes based on the /MOT (minutes between passes) and
/MON (minimum number of changes) parameters.
Listening to file-system changes on the Dell EMC FluidFS system is very limited, so using a Dell EMC
FluidFS system as the source is not supported.
See the Robocopy documentation for more info.
Share Level Permissions
It is recommended to set the share-level permissions to allow Full Control for the migration user. This can
be achieved by defining a dedicated share for the migration, or by changing share level permissions for an
existing share and then changing them back after the migration is complete.
5.2.1.4
Troubleshooting Robocopy
The following table summarizes problems you may encounter when migrating with Robocopy, and the
steps to take in order to solve them.
Problem: ERROR: You do not have the Manage Auditing user right
Possible Cause: Using /COPYALL, or /COPY:U
Solution: Use /COPY:DATSO

Problem: Migration doesn't start; Robocopy shows Error 53
Possible Cause: Source or Target is misspelled.
Solution: Try to access the path using Windows Explorer, and copy the path from there.
Possible Cause: Source or Target ends with a "\"
Solution: Remove the last "\" character from the path. The \" combination is confusing for Windows.
Alternatively, if the path contains no spaces, the quotes can be removed, as with mapped drives or drive
roots.

Problem: Migration doesn't start; Robocopy shows a "No Destination Directory specified" error
Possible Cause: Source or Target ends with a "\"
Solution: Remove the last "\" character from the path. The \" combination is confusing for Windows.
Alternatively, if the path contains no spaces, the quotes can be removed, as with mapped drives or drive
roots.

Problem: Migration doesn't start; Robocopy shows "Access is denied" right from the start, or produces
Error 5 on all files and folders
Possible Cause: The migration user doesn't have permission to write to the destination.
Solution: Fix the ACLs of the destination folder to allow the migration user full permissions on it.
Possible Cause: The "force unknown acl user" setting is not set to "yes" in the Samba configuration.
Solution: Properly configure Samba to handle un-translatable SIDs.
Possible Cause: The destination permissions were set properly, but after the failed migration they are no
longer valid.
Solution: The permissions of the source are migrated as well, and they could be inappropriate. One
possible reason is that the source has only inherited permissions. Add a "full access" ACE for the
migration user on the source folder.

Problem: Data is migrated, but ownership is not set
Possible Cause: The target has share-level permissions other than "Full Control".
Solution: The destination path should go through a share with full-access share-level permissions for the
migration user.
Possible Cause: An older version of Robocopy is used.
Solution: Use only the described XP026 version.
Possible Cause: The migration user is not part of the "Domain Admins" group, or does not have the "Take
Ownership" privilege.
Solution: Rerun the migration on the same dataset using an appropriate user. Please contact Dell EMC
Support for more information.
Possible Cause: The /COPY:DATSO flag is missing from the execution.
Solution: Make sure the /COPY:DATSO flag, including the O, is used.

Problem: The migration logs show "ERROR 5…Copying security information…Access is denied"
Possible Cause: The migration is being run with the /COPYALL or /COPY:DATSOU flags.
Solution: Do not try to replicate auditing information. Do not use /COPYALL. Instead use /COPY:DATSO,
without the U that represents auditing.

Problem: Delta migrations replicate files that were not changed. Data and security are successfully
migrated.
Possible Cause: The /FFT flag was not used, causing files and directories to be falsely identified as
changed, because the Dell EMC FluidFS system time granularity is different from the granularity used by
Windows.
Solution: Use the /FFT flag.
Possible Cause: The time zone on the Dell EMC FluidFS system has changed, resulting in a 1-hour
difference.
Solution: Changing the time zone manually on the Dell EMC FluidFS system may cause time shifts for
files. For daylight savings time, special time zones exist that make the transitions automatically, which
prevents this problem.

Problem: Migration speed is slow
Possible Cause: The /Z or /ZB flags were used.
Solution: Remove the /Z and/or /ZB flags, as they can reduce performance dramatically in some
scenarios. Increase the /MT:<value>.

Problem: No progress is shown during delta migrations
Possible Cause: The /V flag was not used, causing skipped files and directories not to be shown in the
logs.
Solution: Add the /V flag so all entries are logged.
5.2.1.5
Working with Robocopy Scripts
Robocopy is a command-line utility. While GUI front ends were added at a later stage, the same utility is
invoked with command-line arguments based on the flags chosen in the GUI utility.
While working with Robocopy GUI is convenient, using the command line has some advantages.
Robocopy GUI lets you enjoy both worlds by saving the current execution in the form of a .CMD file,
allowing scripts to be saved in addition to being executed, for more flexibility.
5.2.1.6
Robocopy GUI
Setting up Robocopy GUI
Microsoft maintains documentation for Robocopy and the Robocopy GUI. The Robocopy GUI
documentation is included in the download package from Microsoft. This section will give some basic
screenshots and guidelines on setting up Robocopy GUI for a migration job.
The Robocopy GUI, when first opened, will prompt the user for source and target path in the first tab.
Figure 18. Robocopy GUI Path Tab
The next tab will prompt the user for the copy options. They should be selected as follows.
Note: If the user hovers the mouse over any text or copy argument in Robocopy GUI, an explanation for
that argument is displayed. See below how “Time to wait between retries” is displayed when hovering over
the “/W:” argument. This is a very useful feature for those who are new to Robocopy.
Note: Multithreading cannot be selected in Robocopy GUI. If multithreading is desired, the job must be
saved as a script and the /MT argument added manually.
Figure 19. Robocopy GUI Copy Options
The user can also set up filters, to exclude certain files. Notice that the user does not need to remember
what the arguments mean. Hovering over them with the mouse displays an explanation of each
argument’s function.
Figure 20. Robocopy GUI Filtering
The Logging tab allows the user to customize the formatting of the log file. This is useful to review after
migration completes, to determine if any errors occurred during the migration.
Figure 21. Robocopy GUI Logging
At this point, the copy job can either be run by clicking the Run button, or saved as a script.
Saving Scripts in Robocopy GUI
After configuring all the flags in Robocopy GUI, you can tick the "Save Script" checkbox, which changes
the “Run” button into a “Save” button and adds the script path location.
The administrator can now edit the .CMD file to add the /TEE option (for screen output), add the /MT for
multithreading, or make several instances of the file in order to take advantage of multiple migration
machines and/or multiple instances per machine.
Note: Running the .CMD file in a new location may result in the use of the wrong Robocopy version. It is
recommended to place the Robocopy.exe executable in the same directory as the .CMD file.
NFS Migration Procedures
The following sections cover NFS migrations to FluidFS using the commonly available rsync software
found on most Unix and Linux distributions.
rsync
The rsync utility is a software application and network protocol for Unix-like and Linux systems. This utility
synchronizes files and directories from one location to another, using delta encoding when appropriate to
minimize data transfers. A key feature of rsync that is not included in most similar utilities is that mirroring
occurs with just one transmission in each direction. The rsync utility can copy or display directory
contents and copy files, with optional compression and recursion.
5.3.1.1
Copying Data and Permissions from Source to Destination Volume
In this step, data and permissions are copied from the source volume to the destination using rsync. The
syntax and switches used in this example are detailed after the example below.
To copy data and permissions, run the following command, substituting the appropriate parameters for
the source, destination, and log file locations:
rsync -av /<Source Directory> /<Destination Directory>
NOTE: Trailing slashes after the source and destination directories can drastically change
the folder organization. You must read and fully understand the impact of using trailing
slashes before starting the migration of important data.
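The difference is easiest to see on throwaway directories. In the following sketch, temporary paths stand in for the real source and destination; the only change between the two commands is the trailing slash.

```shell
#!/bin/sh
# Create a throwaway source tree and two empty destinations.
src=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
mkdir "$src/projects" && touch "$src/projects/file.txt"

# Without a trailing slash: the directory itself is copied into the destination.
rsync -a "$src/projects" "$dst1"    # creates $dst1/projects/file.txt

# With a trailing slash: only the directory's contents are copied.
rsync -a "$src/projects/" "$dst2"   # creates $dst2/file.txt
```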
The command line switches for the above command are shown in the table below.
Option               Description
-a                   archive mode; equals -rlptgoD
-v                   increase verbosity
-r                   recurse into directories
-p                   preserve permissions
-t                   preserve modification times
-g                   preserve group
-o                   preserve owner (super-user only)
-D                   same as --devices --specials
--iconv=source,dest  convert between character sets such as ISO-8859-1 and UTF-8
Table 22. Common rsync Arguments
5.3.1.2
Migrating Data from Solutions that Use Non-UTF-8 Encoding
FluidFS uses UTF-8 character encoding. When migrating data over NFS using rsync from a character set
other than UTF-8, it may be necessary to convert the character encoding of files on the fly. This is
typically only needed if filenames contain non-English characters. If you do have files with non-English
characters in the filename, failure to use --iconv will result in failures to "stat" files and other
"Permission Denied" types of errors. The correct rsync syntax to convert from an ISO-8859-1 character set
(very common) to UTF-8 is as follows:
rsync -av --iconv=iso88591,utf8 /<Source Directory> /<Destination Directory>
To find more command-line switches and information, run the man rsync command or visit
rsync.samba.org.
NOTE: In addition to the baseline copy, Dell EMC recommends performing multiple delta
copies of the data to reduce the cutover time. To achieve this, run the above command
manually at regular intervals, or schedule it as a cron job.
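As an illustration, a crontab entry along these lines would run a nightly delta copy at 01:00 and append its output to a log file. All paths here are hypothetical; adjust them and the schedule to your environment.

```shell
# m h dom mon dow   command
0 1 * * * rsync -av /srcdir/ /mnt/fluidfs_mount/ >> /var/log/rsync-migration.log 2>&1
```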
Mixed Mode Migration Procedures
Dell EMC recommends the use of mixed mode only when utilizing the user mapping feature is not an
option. FluidFS facilitates cross protocol interoperability using user mapping, and it is preferable to use this
option rather than mixed mode. Using mixed mode can lead to confusing permissions issues in the future.
A file on a mixed mode volume will have whatever permission type was last set on it. For example, if a
Domain Admin sets a complex ACL on a file, a Unix user can later overwrite that permission with a
“chmod” operation run on Linux. The inverse of this statement is also true, and can result in problematic
scenarios with permissions if they are not managed properly.
For business-critical data, it is recommended to use NTFS style only, or UNIX style only permissions. The
decision should depend on whether the administrator wants Windows clients to be able to set permissions
via SMB, or UNIX/Linux clients to be able to set permissions via NFS. Cross protocol interoperability (e.g.
Linux users over NFS accessing an NTFS style volume) is facilitated using either automatic user mapping or
manual user mapping.
If mixed mode permission style is required, and files must have the same mixed permissions after the
migration as before, use the following procedure:
1. Analyze the mixed data and determine whether the majority of the data has NTFS ACLs or UNIX
   style permissions.
2. If the majority are NTFS ACLs, the administrator will migrate using SMB. If the majority are UNIX
   style permissions, the administrator will migrate using NFS.
3. Follow the procedures documented in the previous sections based on whether you are migrating
   via SMB or NFS.
4. After the data has been migrated and verified, apply the other type of permissions:
   a. If SMB was used to migrate, use a script to scan the source for UNIX style files and apply
      the same permissions to the destination, all via NFS. This is described in section 5.4.2,
      Migrating NFS Permissions.
   b. If NFS was used to migrate, use Robocopy to synchronize permissions. This is described in
      section 5.4.1, Migrating SMB Permissions.
Migrating SMB Permissions
When migrating mixed mode data, the administrator sometimes needs to go back and overwrite NFS
UNIX-style permissions that were set during migration with valid NTFS Access Control Lists. Robocopy can
be used to copy permissions only, using the command below:
robocopy <source> <destination> /E /SECFIX /COPY:ASO /XO /XN /XC
Migrating NFS Permissions
When migrating mixed mode data, the administrator sometimes needs to go back and overwrite NTFS
ACLs that were set during migration with valid NFS UNIX-style permissions. Typically this is done using a
script that scans the source data for NFS files and generates another script that applies those
permissions. The following is an example procedure. The entire procedure should be run as the root user.
1. Mount your source and destination on a Linux system. For this example the source is
   /mnt/any_nfs_mount and the destination is /mnt/fluidfs_mount.
2. The script is fed using the "find" command:
   find <topLevelDirectory> -exec ./genPermSetScript \{\} \;
   The find command enumerates all files and folders below the top-level directory that the
   administrator inputs, and passes each entry to the script titled "genPermSetScript" below.
3. The genPermSetScript generates its own script that is used to set permissions on the migration
   destination. The administrator must set the filename and location of this output file in the script
   example below (outfile=<SET_THIS>).
   As an example, on NetApp filers, files with Windows ACLs set, when viewed via NFS, appear to be
   owned by user root and group bin. That correlates to UID=0 and GID=1. The script leaves these
   values to be set by the administrator, who must change the values <UID> and <GID> in the script
   below for it to function properly. The setting will be different for every vendor. For example, on
   FluidFS, files with Windows ACLs set, when viewed via NFS, show up as owned by nobody/nobody
   (UID=99 and GID=99).
genPermSetScript:
#!/bin/bash
# This must be set to the path and name of the generated output script
outfile=<SET_THIS>
# Get the permissions, UID, and GID from the stat output
perms=$(stat "$1" | grep "Access: (" | awk -F'[(/]' '{print $2}')
uid=$(stat "$1" | grep "Access: (" | awk -F'[(/]' '{print $4}' | awk '{sub(/^[ \t]+/, "")};1')
gid=$(stat "$1" | grep "Access: (" | awk -F'[(/]' '{print $6}' | awk '{sub(/^[ \t]+/, "")};1')
# Set the UID and GID that identify a Windows file; replace <UID> and <GID> below.
# Example: exit if it is owned by user root (UID=0) and group bin (GID=1), a Windows file on NetApp
# Example: exit if it is owned by user nobody (UID=99) and group nobody (GID=99), a Windows file on FS8600
if [[ $uid -eq <UID> && $gid -eq <GID> ]]; then
    exit
fi
# Write out the chmod and chown commands; the path is quoted so that
# filenames containing spaces or parentheses remain valid shell arguments
echo "chmod $perms \"$1\"" >> "$outfile"
echo "chown $uid:$gid \"$1\"" >> "$outfile"
# Restrict the output script so it can be executed by anybody but changed only by root
chown root:root "$outfile"
chmod 755 "$outfile"
echo "Completed entry for UNIX folder or file: $1"
echo "Entry added to output script: $outfile"
echo -e "\n"
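On Linux clients with GNU coreutils, the same three values can be read more robustly with a stat format string instead of parsing the human-readable output. A minimal sketch follows; the temporary file stands in for a real path under the source mount.

```shell
#!/bin/bash
# Read the octal mode, numeric UID, and numeric GID directly (GNU stat).
f=$(mktemp)               # stand-in for a file under /mnt/any_nfs_mount
chmod 640 "$f"
perms=$(stat -c '%a' "$f")
uid=$(stat -c '%u' "$f")
gid=$(stat -c '%g' "$f")
echo "chmod $perms \"$f\""
echo "chown $uid:$gid \"$f\""
```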
The script output will be similar to the following (paths are quoted so that names containing spaces or
parentheses remain valid shell arguments):
chmod 7777 "/mnt/any_nfs_mount/something"
chown 500:500 "/mnt/any_nfs_mount/something"
chmod 0755 "/mnt/any_nfs_mount/New Text Document (4).txt"
chown 99:99 "/mnt/any_nfs_mount/New Text Document (4).txt"
chmod 0755 "/mnt/any_nfs_mount/efasefasef.txt"
chown 99:99 "/mnt/any_nfs_mount/efasefasef.txt"
chmod 0755 "/mnt/any_nfs_mount"
chown 0:0 "/mnt/any_nfs_mount"
4. The next step is to modify the script output and replace “any_nfs_mount” with “fluidfs_mount”.
You can use your favorite text editor and search and replace to do this.
5. Finally, you can run the generated script to set permissions on the migration destination,
/mnt/fluidfs_mount.
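Step 4 can also be done non-interactively with sed. In this sketch the generated script is represented by a throwaway file; the substitution itself is exactly what a real run would use.

```shell
#!/bin/sh
# Stand-in for the generated permission script.
outfile=$(mktemp)
printf '%s\n' 'chmod 0755 "/mnt/any_nfs_mount/New Text Document (4).txt"' > "$outfile"
printf '%s\n' 'chown 99:99 "/mnt/any_nfs_mount/New Text Document (4).txt"' >> "$outfile"

# Replace the source mount point with the destination mount point in place (GNU sed).
sed -i 's|/mnt/any_nfs_mount|/mnt/fluidfs_mount|g' "$outfile"
cat "$outfile"
```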
Using Global Namespace to assist with migration
FluidFS includes a feature called “Global Namespace”, which is also referred to as “Redirection
Folders”, that allow an administrator to aggregate multiple NAS namespaces together, as long as all of
the NAS servers support SMB2 or NFS4, or greater. This feature can be a useful utility during a
migration, as it can allow the administrator to “cut-over” to the namespace hosted on FluidFS much
closer to day one of deployment, and perform the migration from the old NAS devices in the
background. This section describes the theory and procedure behind this.
Historically, the administrator could not cut over to a new NAS device until the new device had all of the
latest data on it. With the Global Namespace feature, this is no longer the case.
The administrator can create a redirection folder on an SMB share or NFS export that is hosted on
FluidFS, which redirects the end user to the existing/old SMB share or NFS export. DNS can be changed to
use the same hostname as the old NAS, but to point to the IPs of the new FluidFS NAS device. End users
will not notice any difference.
Here is an example of how this would work. We have a Windows file server at 192.10.58.5 (which is the
“old NAS” in this case). We also have a new FS8600 at 192.10.58.185.
The old Windows file server has a share on it called “sharedfolder” and the contents are shown below:
The new file server (the FS8600) has a share on it called “test” and it is empty, shown below:
Now, we will want to create shares out of the Department1 through Department3 folders. Redirection
Folders can only redirect a remote SMB share to a FluidFS folder. It cannot redirect a remote SMB
share plus a directory. The mapping is remote SMB share -> FluidFS redirection folder. Here is the
new list of shares on our old Windows file server. Notice how we now have shares called
“Department1” through “Department3”, and they are one level below the original “sharedfolder” share.
Now, we will go to Enterprise Manager/Dell Storage Manager, and create our Redirection Folders on
the FS8600.
Now they have been created:
Now, looking at the FS8600 “test” share, we see that we have redirection folders for “Department1”
through “Department3”
At this point, users can access the exact same data as they can when accessing 192.10.58.5 (the old
NAS), except they are accessing it "through" a redirection folder that is defined on the FS8600.
However, this is only a temporary state. Ultimately, the IT administrator will still want to migrate all the
data from the old Windows file server onto the FS8600, retire the Windows file server, and then delete
the redirection folders.
To facilitate this need, we can create some folders on the FS8600 to serve as the migration
destination, which have DIFFERENT names than the source. Once the data has been fully migrated, it
is simply a matter of deleting the redirection folders, and then renaming these migration destination
folders to match the source. For example, during the migration, the FS8600 share would look like this
below. Users are actively accessing data using the redirection folders, but in the background, a
migration is taking place into the “DeptXMigrationDest” folders.
Of course, this migration strategy will still require downtime during cutover to do the final copy (while
the source is read-only). The cut over steps for this migration strategy are as follows:
1. Make the old NAS (the migration source) read only (downtime starts here)
2. Perform the final differential copy from the old NAS to the FS8600
3. Delete the redirection folders on the FS8600
4. Rename the "DeptXMigrationDest" folders to "DepartmentX" so they match the source (downtime
   ends here)
With this migration strategy, the step where you change DNS entries to point to the Virtual IPs of the
FS8600 and update SPNs in Active Directory happens BEFORE the migration, instead of AFTER.
6. Validation and Cutover
Validation and cutover are the final steps required to complete the migration project and put the Dell EMC
FluidFS NAS system into production. Care should be taken to validate that all data has migrated correctly
and, if not, that appropriate corrective actions are taken.
Pre-Cutover Verification
In pre-cutover verification, Dell EMC recommends checking log files generated by the migration tool to
identify errors and skipped files. In addition, a manual comparison of selected source and destination files
and folders provides a good spot check.
Cutover
The cutover procedure will vary, depending on several factors. If downtime is acceptable in the customer
environment, the data is already accessible and the only copy required is the baseline copy. In all other
cases, downtime must be scheduled for this step of the migration process. A rough indicator of the length
of the downtime window is the time required to complete the previous delta copy.
Once the downtime window is scheduled, the following tasks must be completed:
1) Restrict end-user access to the source file server (preferably by making it read only)
2) Perform a final delta copy of the dataset using the chosen migration tool.
3) Create any remaining or additional SMB shares or NFS exports on the FluidFS NAS system using
the Dell Storage Manager (FS8600), Group Manager (FS76xx/FS7500) or the FluidFS NAS CLI.
4) Create any quotas on the FluidFS NAS system that correspond to the source file server.
5) Enable access to the FluidFS NAS system by reconfiguring it to look like the original file server(s)
through VIP modification, DNS aliases, SPN updates, and/or hostname modifications.
6) If the original file server(s) must coexist, redirect user mappings or inform users of the updated
network share structure.
7) Configure and enable FluidFS NAS volume snapshots to provide data protection and rapid
recovery.
Updating Service Principal Names (SPN)
This step must be performed for SMB access that uses Kerberos. If the source file server is being
accessed via SMB using a name (such as nas01.mycompany.com, as opposed to an IP address), then odds
are Kerberos is being used. In order for the Dell EMC FluidFS NAS cluster to provide service using the
original/source file server's DNS name, the Service Principal Names (SPNs) must be updated in Active
Directory.
This step must be performed by an Active Directory administrator, preferably someone with Domain
Administrator rights. This is done from the Windows cmd.exe:
1. Remove SPNs from the OriginalSystem. This takes away the ability of the original/old NAS device
   (computer object) to serve using its current hostname, because you now want the new FluidFS NAS
   computer object to serve using the same hostname the original/old NAS device was using.
   a. setspn -D HOST/<nas hostname> <OriginalSystem name>
   b. setspn -D nfs/<nas hostname> <OriginalSystem name>
   c. setspn -D nfs/<nas FQDN> <OriginalSystem name>
   d. setspn -D HOST/<nas FQDN> <OriginalSystem name>
2. Create SPNs on the NewFluidFSSystem. This gives the new FluidFS NAS device (computer object)
   permission to serve using the hostname that the original/old NAS device (computer object) was using.
   a. setspn -S HOST/<nas hostname> <NewFluidFSSystem name>
   b. setspn -S nfs/<nas hostname> <NewFluidFSSystem name>
   c. setspn -S nfs/<nas FQDN> <NewFluidFSSystem name>
   d. setspn -S HOST/<nas FQDN> <NewFluidFSSystem name>
The SPNs for a computer object can be listed using "setspn -L <ComputerObjectName>".
Example:
The original NAS system has a hostname of revdev5.reveille.lab and a computer object name of
revdev5. The new FluidFS system is revdev8.reveille.lab, with a computer object name of revdev8. You
may or may not have a hostname for the new NAS.
The default SPNs for these two systems are set automatically when joining Active Directory.
To have revdev8 provide services using the revdev5.reveille.lab hostname, the following commands
are run at the command prompt:
3. Remove SPNs from the OriginalSystem:
setspn -D nfs/revdev5 revdev5
setspn -D HOST/revdev5 revdev5
setspn -D nfs/revdev5.reveille.lab revdev5
setspn -D HOST/revdev5.reveille.lab revdev5
4. Create SPNs on the NewFluidFSSystem:
setspn -S nfs/revdev5 revdev8
setspn -S HOST/revdev5 revdev8
setspn -S nfs/revdev5.reveille.lab revdev8
setspn -S HOST/revdev5.reveille.lab revdev8
Post-Cutover Verification
Post-cutover verification typically happens after the system has gone live. This step focuses primarily on
ensuring that all data is available and consistent. It also involves checking that quotas, snapshots, SMB
share permissions, and NFS export permissions have been configured correctly. Backups should be
watched closely to ensure successful completion. In addition, backup logs from before and after the
migration can be compared for total file count and capacity.
Post-Cutover Maintenance
Post-cutover maintenance typically involves any remaining clean-up tasks, including:
• Decommissioning old file server(s)
  o This can be done immediately, or the servers can be kept for a specific period of time.
  o If the original file server(s) contain snapshot data not contained elsewhere, the servers
    can remain in service until the data is aged off.
• Decommissioning the migration server, if used, from the environment.
• Removing unused VIPs on the FluidFS NAS system to free up IP addresses.
NOTE: An important post-migration task is to remove the old/unused Computer object in Active Directory
for the filer that is being migrated off of.
7. Conclusion
Many decisions, complexities, and methodologies come into play when performing a NAS file server
migration. This document has outlined some of the most common scenarios, provided general guidance,
technical how-to, and discussed FluidFS specific considerations when planning, executing, and validating
a migration.
8. Appendix A – Migrating between FluidFS
Products Using FluidFS Replication
Overview and Scope
Replication is the recommended way of copying data between FluidFS systems, including migration
scenarios. FluidFS replication has multiple advantages compared to client based file replication:
1. FluidFS replication is significantly faster:
   a. The scan for changed data is local and snapshot-based, and therefore independent of
      the amount of data in the volume.
   b. Only changed blocks are replicated.
   c. Data is moved directly between the FluidFS systems.
2. FluidFS replication preserves FluidFS file semantics instead of resetting them. This includes:
   a. File security style and permissions
   b. File system Domain
As a result, replication is also less sensitive to network fluctuations and therefore more predictable.
This appendix describes the procedure for migrating data between FluidFS-based NAS systems using
FluidFS replication capabilities.
Prerequisites
This appendix is based on standard replication procedures and can be applied only where a standard
replication process can be applied. Not all combinations of FluidFS products can be replication partners.
Please check the FluidFS Support Matrix for the supported replication configurations and prerequisites. In
addition to the configurations specified in the Support Matrix, migration by replication is also supported
from NX3610 to FS8600 systems, given that all other prerequisites, especially regarding FluidFS version
compatibility, are also satisfied.
The actions in this appendix are described using NX3600 commands; however equivalent commands
(CLI, Dell Storage Manager, and EqualLogic Group Manager) can also be used. For the Dell Compellent
FS8600 product, the FS8600 Disaster Recovery Solutions Guide can be referenced to configure
replication. All of the steps detailed in this appendix can be performed without the assistance of copilot or
Dell EMC Support.
Dell PowerVault NX3610 to Dell Compellent FS8600
FluidFS Replication is a valid option for migrating from NX3610 to FS8600. This assumes the
appliance count is the same on the NX3610 cluster and FS8600 cluster (unless both the source and
destination are running FluidFS v4 or greater). Alternatively, the methods documented in this guide
can be used as well (using an external migration server). As a last option, an NDMP backup of a
NX3610 can be restored to a Dell Compellent FS8600.
Dell PowerVault NX3500, NX3600 to Dell Compellent FS8600
To migrate between these two solutions, the preferred method is to use the procedures documented
in this guide (by using an external migration server/application). As an alternative, an NDMP backup of
a NX3500 or NX3600 can be restored to a Dell Compellent FS8600 if needed.
Dell PowerVault NX3xx0 to Dell EqualLogic FS7xx0
To migrate between these two solutions, the preferred method is to use the procedures documented
in this guide (by using an external migration server/application). As an alternative, an NDMP backup of
a NX3500 or NX3600 can be restored to a Dell EqualLogic FS7xx0 if needed.
Migration Workflow
The table below outlines the workflow for migrating data.
Full details appear in the following sections, as referenced in the table below.
Step: Verify Migration Pre-Conditions
Description: Verify that the pre-conditions for the migration are met. This includes verifying
product/version compatibility and equal numbers of appliances and controllers.

Step: Creating Target Volumes
Description: Create a corresponding target volume for each source volume.

Step: Establish a Replication Partnership
Description: Set up a replication relationship between the source and target systems.

Step: Create Replication Policy
Description: Create a replication policy for the partnership. You can define a periodic replication
schedule in case the source system must remain live during the migration and new data must be
migrated periodically (during the migration process).

Step: Performing the Replication
Description: Trigger the replication process, or monitor a scheduled process, until the replication is
complete and the destination volumes are Idle.

Step: Disable Replication Policy and Verify Target Data
Description: Verify that the source data has been copied completely and correctly to the target
volumes.

Step: Deleting the Replication Partnership
Description: Delete the replication partnership to prevent future replication actions. The source system
can now be disabled.
Replication: Conceptual Overview
Replication in the Dell EMC FluidFS NAS solutions is block-based, snapshot-based, file-system-driven, and asynchronous:

 Block-based: Only blocks that have changed since the last replication are copied.
 Snapshot-based: Leverages a point-in-time copy of the primary NAS volume.
 File-system-driven: Utilizes the snapshot and replication logic of FluidFS, not the underlying block array.
 Asynchronous: Communication with the client continues even while the data is being replicated.
Replication is used in various scenarios to achieve different levels of data protection:

 Fast backup and restore: Maintain full copies of data for protection against data loss, corruption, or user mistakes.
 Disaster recovery: Mirror data to remote locations for failover.
 Remote data access: Applications can access mirrored data in read-only or read-write mode.
 Online data migration: Minimize the downtime associated with data migration.
Replication leverages the snapshot technology in FluidFS. After the first replication, only changed or new
data is replicated. This allows for faster replication and efficient use of processor cycles. It also saves
storage space while keeping data consistent.
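To make the incremental, block-based behavior concrete, the sketch below models a replication pass that hashes fixed-size blocks and transfers only those whose hash changed since the previous pass. This is an illustrative toy, not FluidFS code; the 4 KB block size and SHA-256 hashing are assumptions made for the example.

```python
# Toy model of block-based incremental replication: only blocks whose
# content hash differs from the previous pass are transferred.
import hashlib

BLOCK_SIZE = 4096  # assumed block size for the sketch


def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]


def replicate(source: bytes, target_blocks: list, seen_hashes: list) -> int:
    """Copy only blocks whose hash changed since the last replication.

    Returns the number of blocks actually transferred.
    """
    transferred = 0
    new_hashes = []
    for i, block in enumerate(split_blocks(source)):
        digest = hashlib.sha256(block).hexdigest()
        new_hashes.append(digest)
        if i >= len(seen_hashes) or seen_hashes[i] != digest:
            # Changed or new block: transfer it to the target.
            if i < len(target_blocks):
                target_blocks[i] = block
            else:
                target_blocks.append(block)
            transferred += 1
    del target_blocks[len(new_hashes):]  # source shrank: drop stale blocks
    seen_hashes[:] = new_hashes
    return transferred
```

The first call transfers every block; subsequent calls transfer only blocks that changed, which is why incremental replication is fast and bandwidth-efficient.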
Replication is NAS volume based and can be used to replicate NAS volumes on the same NAS cluster or to
a volume on another NAS cluster. When replicating a volume to another NAS cluster, it must be set up as
a replication partner.
Creating Target Volumes
For each source volume, create a volume in the target system.
The target volume must have enough space to accommodate all data from the corresponding source
volume. If, when defining the replication, the destination volume contains data, you will be asked to
approve its deletion.
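Before creating a target volume, it helps to know the source volume's data footprint. The short sketch below sums file sizes under a mounted source share; the mount point is a placeholder for illustration, not a FluidFS path.

```python
# Sum the on-disk footprint of a source volume so the target can be sized.
# The mount path below is a placeholder; point it at the mounted source share.
import os


def tree_size_bytes(root: str) -> int:
    """Return the total size, in bytes, of all regular files under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks
                total += os.path.getsize(path)
    return total


if __name__ == "__main__":
    source_mount = "/mnt/source_volume"  # hypothetical mount point
    print(f"{tree_size_bytes(source_mount) / 2**30:.2f} GiB")
```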
Establishing Replication Partnership
FluidFS replication supports bi-directional data transfer. One system may hold target volumes for the
other system as well as source volumes to replicate to the other system. Replication data flows through a
secure SSH tunnel over the network.
In the case of migration, the replication is uni-directional, from the source system to the target system. A
replication policy can be set up to run on a predefined schedule as well as on demand.
Volume related configuration information (SMB shares, NFS exports, snapshot schedules, quotas, etc.) is
stored on each volume and is replicated together with customer data. When removing the replication
policy and promoting the replica volume, an option is provided to enable these configuration settings on
the replica volume.
Figure 23. Partner Replication
To set up the replication partnership:
NOTE: The product documentation is always the most accurate source of information on
product functions such as setting up replication.
1. Log on to the source system.
2. Select Data Protection → Replication → Replication Partners.
The Replication Partners screen is displayed.
3. Click Add. The Add Replication Partner screen is displayed.
4. In Remote NAS management VIP, enter the management VIP address of the target system.
5. In User name and Password, enter the username and password of an administrator account on
the target system.
NOTE: User name and password values are not stored in the Dell EMC FluidFS System.
6. Click Save Changes.
Creating a Replication Policy
Replication between NAS volumes is managed through replication policies. Creating a NAS replication
policy (also known as attaching volumes) requires that a replication partnership (trust) be established
between the source and target systems.
Creating a replication policy requires selecting the source system and volume, the destination system and
volume, and specifying a periodic schedule for the replication, or specifying that the replication will be
performed manually on demand.
NOTE: The product documentation is always the most accurate source of information on
product functions such as setting up replication.
NOTE: If the destination system has data that is not available on the source system, a
warning is issued, and you are asked to approve losing this data.
If you want to continue working with the source volumes throughout the migration process, assign a
periodic replication schedule to each volume so that changes will continuously be reflected to the target
system.
Otherwise, select “Replicate on Demand” and start a one-time replication (see section 8.8 Performing the
Replication).
To create a replication policy:
1. Log on to the source system.
2. Select Data Protection → Replication → NAS Replication.
The NAS Replication page displays a list of existing NAS replication policies.
3. Click Add. The Add NAS Replication Policy page is displayed.
4. In Source NAS volume, enter the source NAS volume or click the Browse button and select the
appropriate NAS volume.
5. Select the Destination system from the list of trusted systems.
6. In Destination NAS volume, enter the destination NAS volume or click the Browse button and
select the appropriate NAS volume.
7. Select a recovery point schedule: hourly, daily, or on demand.
8. Click Save Changes.
Performing the Replication
If you have created a scheduled replication policy, the replication process is triggered automatically. If you
chose the “Replicate on Demand” policy, start the replication with:
NOTE: The product documentation is always the most accurate source of information on
product functions such as performing replication.
1. On the source system, go to Data Protection → Replication → NAS Replication.
2. Select the relevant source volumes.
3. Click Replicate Now.
4. The replication duration depends on the amount of data to be replicated and the bandwidth of the link between the sites.
5. After the replication process has been initiated, either automatically or manually, wait until all destination volumes reach the desired "Achieved Recovery Point" date and are Idle (Data Protection → Replication → NAS Replication).
6. The progress of the replication can be monitored on the Data Protection → Replication → NAS Replication screen.
Verifying Target Data
Upon conclusion of replication, the administrator may want to temporarily disable replication and verify
the secondary volumes.
To verify the migrated data, disable the Replication Policy and test the data available on the target system:
1. Log on to the source system.
2. Select Data Protection →Replication →NAS Replication.
3. Select all replicated volumes and click Disable. This suspends the replication relationship and
promotes the replica volumes.
4. Select the option to apply the source volumes’ configuration (NFS exports, SMB shares, etc.) to the
destination volume.
5. Connect/Mount the target shares/exports and verify the data from relevant clients/applications.
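For step 5, a scripted spot-check can supplement manual browsing. The sketch below compares two mounted trees by relative path, size, and SHA-256 digest; the mount points are assumptions made for the example, and the check ignores permissions and other metadata.

```python
# Spot-check that a migrated tree matches its source: compare the set of
# relative paths, then file sizes and SHA-256 digests. Point source/target
# at the mounted source and target shares.
import hashlib
import os


def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def relative_files(root: str) -> set:
    return {
        os.path.relpath(os.path.join(dirpath, name), root)
        for dirpath, _dirs, names in os.walk(root)
        for name in names
    }


def compare_trees(source: str, target: str) -> list:
    """Return a list of human-readable mismatches (empty list = match)."""
    src_files, dst_files = relative_files(source), relative_files(target)
    problems = [f"missing on target: {p}" for p in sorted(src_files - dst_files)]
    problems += [f"extra on target: {p}" for p in sorted(dst_files - src_files)]
    for rel in sorted(src_files & dst_files):
        s, t = os.path.join(source, rel), os.path.join(target, rel)
        if os.path.getsize(s) != os.path.getsize(t):
            problems.append(f"size mismatch: {rel}")
        elif file_digest(s) != file_digest(t):
            problems.append(f"content mismatch: {rel}")
    return problems
```

An empty result means every source file exists on the target with identical content; any entries point at files to re-replicate or investigate before cutover.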
NOTE: You can re-enable incremental replication, for example to make more up-to-date
information available on the target system, by selecting the volumes and clicking Enable.
This demotes the destination volumes to read-only and resumes the replication schedule
where it left off. Any changes applied to the destination volumes after the replication
partnership was disabled will be erased; only data replicated from the source volumes
will remain.
Deleting the Replication Partnership
Once the necessary data is available on the target system, permanently remove the replication policies for
the relevant volumes:
NOTE: The product documentation is always the most accurate source of information on
product functions such as deleting replication partnerships.
1. Log on to the source system.
2. Select Data Protection →Replication →NAS Replication.
The NAS Replication page displays a list of existing NAS replication policies.
3. Select each NAS volume that should stop replicating to the target system and click Delete.
4. When deleting a replication policy, the system prompts you to confirm applying the source volume
configuration to the target volume. This configuration includes SMB shares, NFS exports, quotas,
snapshot policies, security style, and other properties.
5. At this point the target NAS volumes are promoted to a standalone, read-write state.
The source volumes can now be taken offline (by removing their SMB shares or NFS exports), removed,
or repurposed for another application.
A full replication will be performed if the replication partnership needs to be re-established.
9. Appendix B – Migrating using Quest Secure
Copy (deprecated)
Secure Copy
Quest Secure Copy is a comprehensive migration solution that automates the copying of data between
file servers without the use of agents or scripts. It can easily copy files, folders, and NT File System (NTFS)
permissions. With its multi-threaded, high-speed architecture, Secure Copy simplifies and dramatically
reduces the time required for migration projects.
This section demonstrates the high level use of Secure Copy for file data migration from an existing file
server to the FluidFS NAS system.
For the most up-to-date information on Quest Secure Copy, please refer to the Quest Secure Copy
documentation. This document is meant to give a brief, high-level summary of setting up migration jobs
in Secure Copy.
Quest Secure Copy is a licensed product, and it has several advantages over free tools such as Robocopy:

 Easy-to-use GUI
 Easy-to-read reports
 Analyzes source data and copy speeds to estimate migration project time and downtime (this feature can also help reduce the amount of time a migration takes)
 Pre- and post-migration scripts can be automated to execute related tasks
 Job analysis and "test job" tools estimate time and also validate the migration, giving more confidence in the migration than can be attained with free tools
 Global 24x7 tech support is available
Initial GUI view in Secure Copy 7
Figure 24 shows the first screen of the Secure Copy application, which is the starting point for configuring
the jobs to be used during the migration process. The horizontal icon bar at the top contains shortcuts to
commonly used tasks. The left section has tabs for Jobs, Job Status, and Logs and Reports. Most of the
configuration will occur within the Jobs tab. The right section gives detailed configuration options based
on the task selected on the left pane. This interface also provides context-sensitive help for quick
reference.
Figure 24. Secure Copy Graphical User Interface
Create a New Job
To create a new file data migration job, click New on the horizontal icon bar at the top. In the Create New
Job window, Dell EMC recommends including the FluidFS virtual IP in the job name for tracking purposes.
Figure 25. Secure Copy New Job Creation with VIP Number for Reference
After entering the Job Name, click OK. The new job is created as shown below in Figure 26.
Figure 26. Secure Copy GUI Showing Newly Created Job
New Job Configuration: Configuring File Copy Locations
Under the newly created job, select Copy Locations. The right section will list the available input options,
such as setting the source and target folders. These should be configured based on the results of the
Mapping and Design phase discussed earlier in this paper. This section also provides the option to create a
VSS snapshot on the source (if supported) to ensure that open files are copied. If required, check the
Create initial source folder under target folder box shown below.
Figure 27. Secure Copy Source and Target Job Configuration
New Job Configuration: Synchronization
Under the newly created job, select Synchronization to access options for configuring which files to copy
or purge, how dates are handled, and whether or not to copy permissions.
Figure 28. Secure Copy Job Synchronization Settings
New Job Configuration: Optional Filter Settings
As shown below in Figure 29, select Filters under the new job to access file and folder filter settings
based on name, date, size, and folder depth or recursion level.
Figure 29. Secure Copy New Job Filter Settings
New Job Configuration: Performance and Throttling Settings
As shown below in Figure 30, select Performance under the new job to specify file verification, and set
parameters for multi-threading, bandwidth throttling, and how to handle locked files. These options
should already be defined as part of the migration plan.
NOTE: FluidFS supports running Secure Copy jobs at the maximum configurable
performance settings which are shown below. However, Dell EMC recommends using a
thread count of 20, batch count of 100, and batch size of 10.
Figure 30. Secure Copy New Job Performance Settings
New Job Configuration: Email Settings
As shown below in Figure 31, select Email under the new job to configure settings for email notifications
if you want to be notified when the job completes or other job thresholds are crossed.
Figure 31. Secure Copy New Job Email Settings
New Job Configuration: Scheduling
Select the new job in the left pane and click Schedule on the horizontal icon bar at the top of the GUI to
launch the scheduler as shown below in Figure 32.
Figure 32. Secure Copy New Job Task Tab
Click the Triggers tab, then click New to access the options shown below for specifying time, date,
duration, repetition, and other schedule-related parameters for the job.
Figure 33. Secure Copy New Job Schedule Tab
After completing the schedule configuration, click OK to save the scheduled task. View scheduled tasks by
accessing the Windows Task Scheduler Library in Microsoft Server Manager. An example screenshot from
Microsoft Windows Server 2008 R2 is shown below in Figure 34.
Figure 34. Microsoft Windows Task Scheduler Library
Job Status
To view the status of the job, click Job Status in the left pane. Double-click the currently running job to
view additional information as shown below in Figure 35.
Figure 35. Secure Copy Job Status Window
Job Logs and Reports Dashboard
To see the detailed dashboard view shown below in Figure 36, click Logs and Reports in the left pane and
select a job. This provides a high-level view of each completed job, with information such as processing
time, errors, job speed, file types, total data processed, and average file size.
These reports are valuable for determining the amount of downtime that will be needed for the last
synchronization. Most migrations follow the process of doing an initial copy, then online differential
scans and copies (to re-sync the data that has changed). Online re-syncs are repeated until they
complete in as short an amount of time as possible. A final downtime window is then planned, during
which users cannot write any more data, so that Secure Copy can do one last differential scan and
copy to re-sync the last bit of data changed since the last online re-sync. After this final offline re-sync
is done, Validation and Cutover are performed, as described in the next section.
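The initial-copy-then-repeated-re-sync process above can be sketched as a loop that runs differential passes until a pass touches few enough files to fit the planned downtime window. This is a simplified stand-in for what Secure Copy (or Robocopy) actually does; it compares only size and modification time, and the threshold is an assumed stopping rule.

```python
# Simplified model of the iterative re-sync process: copy everything once,
# then repeat differential passes until a pass changes few enough files
# that the final (offline) re-sync fits the planned downtime window.
import os
import shutil


def differential_sync(source: str, target: str) -> int:
    """Copy files that are new or changed (by size/mtime); return copy count."""
    copied = 0
    for dirpath, _dirs, names in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        dst_dir = os.path.join(target, rel) if rel != "." else target
        os.makedirs(dst_dir, exist_ok=True)
        for name in names:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getsize(src) != os.path.getsize(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves the modification time
                copied += 1
    return copied


def resync_until_small(source: str, target: str, threshold: int = 10,
                       max_passes: int = 20) -> int:
    """Run differential passes until one copies <= threshold files."""
    for passes in range(1, max_passes + 1):
        if differential_sync(source, target) <= threshold:
            return passes
    return max_passes
```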
Figure 36. Secure Copy Job Logs and Reports - Dashboard
Click the Reports tab to see available predefined reports.
Figure 37. Secure Copy Job Logs and Reports - Reports Tab
Secure Copy – Command Line Interface (CLI)
Secure Copy can also be run through a Command Line Interface (CLI). Different Secure Copy jobs can be
included in a batch file and executed simultaneously. A sample batch file that runs four predefined jobs
is shown below:
Secure Copy.bat:

rem Each start launches its job in a new window without waiting, so all four jobs run concurrently.
start "Window1" "C:\Program Files\ScriptLogic Corporation\Secure Copy 6\Secure CopyCmd.exe" /Job="FluidFS_VIP_1" /Q
start "Window2" "C:\Program Files\ScriptLogic Corporation\Secure Copy 6\Secure CopyCmd.exe" /Job="FluidFS_VIP_2" /Q
start "Window3" "C:\Program Files\ScriptLogic Corporation\Secure Copy 6\Secure CopyCmd.exe" /Job="FluidFS_VIP_3" /Q
start "Window4" "C:\Program Files\ScriptLogic Corporation\Secure Copy 6\Secure CopyCmd.exe" /Job="FluidFS_VIP_4" /Q
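The same launch-and-wait pattern can also be expressed as a short script. In the sketch below, the Secure CopyCmd path and job names mirror the batch example above and are placeholders for your installation; the concurrency helper itself is generic.

```python
# Launch several migration job command lines concurrently and wait for all
# of them to finish, mirroring the batch file above.
import subprocess

# Placeholder path/jobs, taken from the batch example; adjust for your install.
SECURE_COPY_CMD = (r"C:\Program Files\ScriptLogic Corporation"
                   r"\Secure Copy 6\Secure CopyCmd.exe")


def run_concurrently(commands):
    """Start every command line at once, then wait; return the exit codes."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]


def run_jobs(job_names):
    """Run one quiet Secure Copy job per name, all in parallel."""
    return run_concurrently(
        [[SECURE_COPY_CMD, f"/Job={name}", "/Q"] for name in job_names]
    )


# Example (requires Secure Copy installed):
# run_jobs(["FluidFS_VIP_1", "FluidFS_VIP_2", "FluidFS_VIP_3", "FluidFS_VIP_4"])
```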
10. Additional Resources
For more information about data migration solutions for your organization, contact your Dell EMC
account representative or visit www.dell.com/services.
http://DellTechCenter.com is an IT Community where you can connect with Dell EMC customers and
employees to share knowledge, best practices, and information about Dell EMC products and your
installations.
http://kc.compellent.com/ - Documentation and best practices for Dell Compellent FS8600 and Dell
Compellent Storage Center
https://eqlsupport.dell.com/support - Documentation and best practices for Dell EqualLogic FS7500 and
FS76x0, as well as EqualLogic arrays
http://support.dell.com - Documentation for all other Dell EMC products, including PowerVault NX series,
PowerEdge, PowerConnect, and Force10
Referenced or recommended Dell publications:

 FluidFS overview: Dell Fluid File System: http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/FluidFSOverview.pdf
 Robocopy for older versions of Microsoft Windows: Windows Download Center, Windows Server 2003 Resource Kit Tools: http://www.microsoft.com/en-us/download/details.aspx?id=17657