Hitachi Universal Replicator



3

Planning volumes, VSP systems

This chapter provides information and instructions for planning Universal Replicator volumes, VSP systems, and other important requirements and restrictions.

Plan and design workflow

Assessing business requirements for data recovery

Write-workload

Sizing journal volumes

Planning journals

Data transfer speed considerations

Planning journal volumes

Planning pair volumes

Disaster recovery considerations

Sharing volumes with other VSP software volumes

Planning UR in multiple VSPs using a consistency group

Planning for previous models

Guidelines for preparing systems for UR

Planning volumes, VSP systems

Hitachi Virtual Storage Platform Hitachi Universal Replicator User Guide

3–1

Plan and design workflow

Planning the Universal Replicator system is tied to your organization’s business requirements and production system workload. This means defining business requirements for disaster downtime and measuring the amount of changed data your storage system produces over time. With this information, you can calculate the size of journal volumes and the amount of bandwidth required to transfer update data over the data path network.

The plan and design workflow consists of the following:

• Assess your organization’s business requirements to determine recovery requirements.

• Measure your host application’s write-workload in MB per second and write-input/output per second (IOPS) to begin matching actual data loads with the planned UR system.

• Use collected data along with your organization’s recovery point objective (RPO) to size UR journal volumes. Journal volumes must have enough capacity to hold accumulating data over extended periods.

The sizing of journal volumes can be influenced by the amount of bandwidth you settle on. Both efforts are interrelated. You may actually adjust journal volume size in conjunction with bandwidth to fit the organization’s needs.

• Use IOPS to determine data transfer speed into and out of the journal volumes. Data transfer speed is determined by the number of Fibre Channel ports you assign to UR, and by RAID group configuration. You need to know port transfer capacity and the number of ports that your workload data will require.

• Use collected workload data to size bandwidth for the fibre-channel data path. As mentioned, bandwidth and journal volume sizing, along with data transfer speed, are interrelated. Bandwidth may be adjusted with the journal volume capacity and data transfer speed you plan to implement.

• Design the data path network configuration, based on supported configurations, fibre-channel switches, and the number of ports your data transfer requires.

• Plan data volumes (primary and secondary volumes), based on the sizing of P-VOL and S-VOL, RAID group considerations, and so on.

• Understand operating system requirements for data and journal volumes.

• Adjust cache memory capacity for UR.

Some tasks will be handled by Hitachi Data Systems personnel. The planning information you need to address is provided in the following topics.


Assessing business requirements for data recovery

In a UR system, as long as the data path keeps transferring changed data to the remote site, journals remain fairly empty. However, if a path failure occurs, or a prolonged spike in write-data exceeds bandwidth, data flow stops. Changed data that can no longer move to the remote system builds up in the master journal.

To ensure that journals can hold the amount of data that could accumulate, they must be sized according to the following:

• The maximum amount of time that journals could accumulate data. You develop this information by determining your operation’s recovery point objective (RPO).

• The amount of changed data that your application generates. This is done by measuring write-workload.

Determining your RPO

Your operation’s recovery point is the maximum time that can pass after a failure or disaster occurs before data loss is greater than the operation can survive.

For example, if the operation can survive one hour's worth of lost data, and a disaster occurs at 10:00 a.m., then the system must be corrected by 11:00 a.m.

In regard to journal sizing, the journal must have the capacity to hold the data that could accumulate in one hour. If the RPO is 4 hours, then the journal must be sized to hold 4 hours' worth of accumulating data.

To assess RPO, the host application’s write-workload must be known.

With write-workload and IOPS, you or your organization's decision-makers can analyze the number of transactions that write-workload represents, determine the number of transactions the operation could lose and still remain viable, determine the amount of time required to recover lost data from log files or key it in, and so on. The result is your RPO.

Write-workload

Write-workload is the amount of data that changes in your production system in MB per second. Write-workload varies according to the time of day, week, month, and quarter. That is why workload is measured over an extended period.

With the measurement data, you can calculate workload averages, locate the peak workload, and calculate peak rolling averages, which show an elevated average. Using one of these base figures, you calculate the amount of data that accumulates over your RPO time (for example, 2 hours). This becomes a base capacity for your journal volumes, or a base amount of bandwidth your system requires.


Whether you select average, rolling average, or peak workload is based on the amount of bandwidth you will provide the data path (which is also determined by write-workload). Bandwidth and journal volume capacity work together and depend on your strategy for protecting data.

Measuring write-workload

Workload data is collected using Hitachi Performance Monitor or your operating system's performance-monitoring feature. The number of read/write transactions, or input/output per second (IOPS), is also collected by the software. You will use IOPS to set up a proper data transfer speed, which you ensure through RAID group configuration and by establishing the number of fibre-channel ports your UR system requires. Each RAID group has a maximum transaction throughput; the ports and their microprocessors have an IOPS threshold.

Workload and IOPS collection is best performed during the busiest time of month, quarter, and year. This helps you to collect data that shows your system’s actual workloads during high peaks and spikes, when more data is changing, and when the demands on the system are greatest. Collecting data over these periods ensures that the UR design you develop will support your system in all workload levels.

To measure write-workload and IOPS

1. Using your performance monitoring software, collect the following:

Disk-write bytes-per-second (MB/s) for every physical volume that will be replicated.

Data should be collected over a 3 or 4-week period to cover a normal, full business cycle.

Data should be collected at 5-minute intervals. If you use averages, shorter intervals provide more accuracy.

2. At the end of the collection period, convert the data to MB/second, if needed, and import into a spreadsheet tool.
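Once the samples are in a spreadsheet (or any scriptable tool), reducing them to the averages discussed under Write-workload is straightforward. The following Python sketch computes the overall average, the single-interval peak, and a peak rolling average; the sample values and three-sample window are illustrative assumptions, not figures from a real system.

```python
# Sketch: reducing collected write-workload samples (MB/s, one per
# 5-minute interval) to average, peak, and peak rolling average.
# Sample values below are illustrative, not from a real system.

def peak_rolling_average(samples, window):
    """Highest average over any `window` consecutive samples."""
    best = 0.0
    for i in range(len(samples) - window + 1):
        avg = sum(samples[i:i + window]) / window
        best = max(best, avg)
    return best

samples = [12.0, 14.5, 30.2, 28.7, 9.1, 11.3]    # MB/s per 5-minute interval
average = sum(samples) / len(samples)            # overall average workload
peak = max(samples)                              # single-interval peak
rolling_peak = peak_rolling_average(samples, 3)  # 15-minute peak rolling average
```

Whether you then size against the average, the rolling average, or the peak depends on your data-protection strategy, as described in the next topic.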

Sizing journal volumes

You calculate the size of your journal volumes using write-workload and RPO.

To calculate journal size

1. Follow the instructions in Measuring write-workload on page 3-4.

2. Use your system’s peak write-workload and your organization’s RPO to calculate journal size. For example:

RPO = 2 hours

Write-workload = 30 MB/sec

Calculate write-workload for the RPO. In the example, write-workload over a two-hour period is calculated as follows:

30 MB/second x 60 seconds = 1800 MB/minute

1800 MB/minute x 60 minutes = 108,000 MB/hour


108,000 MB/hour x 2 hours = 216,000 MB

Basic journal volume size = 216,000 MB (216 GB)

Journal volume capacity and bandwidth size work together. Also, your strategy for protecting your data may allow you to adjust bandwidth or the size of your journal volumes. For a discussion of sizing strategies, see Five sizing strategies on page 4-2.

Note: If you are planning for disaster recovery, the remote system must be large enough to handle the production workload; therefore, remote journals must be the same size as master journals. If you are not planning a disaster recovery solution, remote journal volumes can be smaller than master journal volumes.
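The sizing arithmetic in the steps above reduces to a one-line formula. This Python sketch reproduces the worked example (30 MB/s peak write-workload, 2-hour RPO); the inputs are the ones from the text, not system-derived values.

```python
def journal_size_mb(write_workload_mb_s, rpo_hours):
    """Journal capacity (MB) needed to absorb `rpo_hours` of sustained
    write-workload at `write_workload_mb_s` MB/second."""
    return write_workload_mb_s * 3600 * rpo_hours

# Worked example from the text: 30 MB/s peak workload, 2-hour RPO.
size = journal_size_mb(30, 2)   # 216,000 MB, about 216 GB
```

Because journal capacity and bandwidth are interrelated, treat the result as a base figure to be adjusted alongside bandwidth, as described above.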

Planning journals

UR manages pair operations for data consistency through the use of journals. UR journals enable update sequence consistency to be maintained across a group of volumes.

Understanding the consistency requirements for an application (or group of applications) and their volumes will indicate how to structure journals.

For example, databases are typically implemented in two sections. The bulk of the data is resident in a central data store, while incoming transactions are written to logs that are subsequently applied to the data store.

If the log volume “gets ahead” of the data store, it is possible that transactions could be lost at recovery time. Therefore, to ensure a valid recovery image on a replication volume, it is important that both the data store and logs are I/O consistent by placing them in the same journal.

The following information about journal volumes and journals will help you plan your journals.

• A journal consists of one or more journal volumes and associated data volumes.

• A journal can contain either P-VOLs (as a master journal) or S-VOLs (as a restore journal), but not both.

• A journal cannot belong to more than one storage system (local or remote).

• All the P-VOLs, or S-VOLs, in a journal must belong to the same storage system.

• Journal numbers of master and restore journals that are paired can be different.

If using a consistency group number, the consistency group number of the P-VOL and S-VOL must be the same.

• Each pair relationship in a journal is called a "mirror". Each pair is assigned a mirror ID. The maximum number of mirror IDs is 4 (0 to 3) per system.

• When UR and URz are used in the same system, individual journals must be dedicated either to one or the other, not both.


• Master and restore journals are managed according to the journal number.

• Review journal specifications in System requirements on page 2-2.

• A journal can contain up to 64 journal volumes.

Data transfer speed considerations

The previous topics, and the topics later in this chapter on bandwidth, discuss the amount of data that must be stored temporarily in journal volumes and transferred over the data path network. This topic discusses the speed at which data must be transferred in order to maintain the UR system you are designing.

The ability of your UR system to transfer data in a timely manner depends directly on the following two factors:

• RAID group configuration

• Fibre-channel port configuration

Both of these elements must be planned to be able to handle the amount of data and number of transactions your system will move under extreme conditions.

RAID group configuration

A RAID group can consist of physical volumes with different rotational speeds, physical volumes of different capacities, and physical volumes of different RAID configurations (for example, RAID-1 and RAID-5). The data transfer speed of a RAID group is affected by its physical volumes and RAID configurations.

• The data transfer speed of a journal volume depends on the data transfer speed of the RAID group to which it belongs. A RAID group can consist of one or more volumes, including journal volumes.

• Each RAID group has a different throughput rating. The number of MB/sec that volumes in a RAID group are capable of processing is published in UR specifications.

• Journal volumes must be configured in RAID groups according to the group's throughput specification and your system's peak write-workload. If write-workload exceeds the RAID group's throughput rating, then the number of RAID groups must be increased.

• Frequent read/write activity to non-journal volumes in a RAID group results in fewer read/writes by journal volumes in the same RAID group. This can cause a drop in the data transfer speed of journal volumes. To avoid this effect, place journal volumes and frequently accessed non-journal volumes in different RAID groups.


Fibre-channel port configuration

The fibre-channel ports on your VSP system have an IOPS threshold. Use the performance monitoring information for the number of IOPS your production system generates to calculate the number of fibre-channel ports the UR system requires.

See Planning ports for data transfer on page 4-7 for a full discussion of the type and number of fibre-channel ports required for your system.
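The port-count arithmetic implied above is simple: divide your measured peak IOPS by the per-port IOPS threshold and round up. In this sketch the threshold and the peak IOPS are assumed placeholder numbers; substitute the ratings published in your system's UR specifications and your own Performance Monitor data.

```python
import math

def ports_required(peak_iops, port_iops_threshold):
    """Minimum number of fibre-channel ports to sustain the peak IOPS.
    `port_iops_threshold` is a placeholder; substitute the rating
    published for your system's ports."""
    return math.ceil(peak_iops / port_iops_threshold)

# Example with assumed numbers: 25,000 peak IOPS, 8,000 IOPS per port.
ports = ports_required(25_000, 8_000)   # 4 ports
```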

Planning journal volumes

In addition to sizing journal volumes, you should be aware of the following requirements and restrictions.

• Journal volumes must be registered in a journal before the initial pair-copy operation is performed.

• Journal volumes must be registered on both the local and remote systems.

• Emulation type for journal volumes must be OPEN-V.

• Journal volumes should be sized according to RPO and write-workload.

See Sizing journal volumes on page 3-4 for more information.

• If a path is defined from a host to a volume, the volume cannot be registered as a journal volume.

• Journal volumes in a journal can have different capacities.

• A master journal volume and the corresponding restore journal volume can have different capacities.

• A data volume and its associated journal volume can belong to only one journal.

• Do not register a volume to a journal during quick formatting. Doing so stalls the operation.

• Data volumes and journal volumes in the same journal must belong to the same controller.

• The number of journal volumes in the master journal does not have to be equal to the number of volumes in the restore journal.

• Journal volumes consist of two areas: One area is used for storing journal data, and the other area is used for storing metadata for remote copy.

• Journal volumes support all RAID configurations and physical volumes that are supported by VSP.

• Journal volume capacity is not included in accounting capacity.

• Customized volumes can be used for journal volumes.

See the following for more information about journals and journal volumes:

• The “Journals” item in System requirements on page 2-2

• Planning journals on page 3-5


Planning pair volumes

The following information can help you prepare volumes for configuration.

Also, see system requirements and specifications in Requirements and specifications on page 2-1 for more information.

• The emulation and capacity for the S-VOL must be the same as for the P-VOL.

• When the S-VOL is connected to the same host as the P-VOL, the S-VOL must be defined to remain offline.

• You can create a UR pair using a TrueCopy initial copy, which takes less time. To do this, system option 474 must be set on the primary and secondary systems. Also, a script is required to perform this operation. For more on system option 474 and how to perform this operation, see System option modes on page 3-18.

• UR supports the LUN Expansion (LUSE) feature, which allows you to configure a LUSE volume by using 2 to 36 sequential LDEVs. If two LUSE volumes are assigned to a UR pair, the capacity and configuration of the UR S-VOL must be the same as the UR P-VOL. For example, when the P-VOL is a LUSE volume in which 1-GB, 2-GB, and 3-GB volumes are combined in this order, the S-VOL must be a LUSE volume in which 1-GB, 2-GB, and 3-GB volumes are combined in this order. In addition, RAID1, RAID5, and RAID6 can coexist in a LUSE volume.

• UR supports the Virtual LUN feature, which allows you to configure custom LUs that are smaller than standard LUs. When custom LUs are assigned to a UR pair, the S-VOL must have the same capacity as the P-VOL. For details about the Virtual LUN feature, see the Provisioning Guide for Open Systems.

• Identify the volumes that will become the P-VOLs and S-VOLs. Note the port, group ID (GID), and LUN of each volume. This information is used during the initial copy operation.

• You can create multiple pairs at the same time. Review the prerequisites and steps in Creating the initial copy on page 6-2.

• When you create a UR pair, you will have the option to create only the relationship, without copying data from P-VOL to S-VOL. You can use this option only when data in the two volumes is identical.

• When configuring the pair, best practice is to specify different serial numbers for the primary and secondary systems.

Maximum number of pairs allowed

You can create up to 32,768 pairs on a VSP system. The maximum number for your system is limited by:

• The number of cylinders in the volumes.


• The number of bitmap areas required for Universal Replicator data and journal volumes. This is calculated using the number of cylinders.

Caution: The bitmap areas that are used for Universal Replicator are also used for Universal Replicator for Mainframe, TrueCopy, TrueCopy for Mainframe, and High Availability Manager. If you use UR with any of these products, use the total number of each pair's bitmap areas to calculate the maximum number of pairs. In addition, if UR and TC share the same volume, use the total number of both pairs regardless of whether the shared volume is primary or secondary.

Calculating maximum number of pairs

Note: In the calculations below, note the following:

• ceil () indicates that the value between the parentheses should be rounded up to the nearest integer.

• floor () indicates that the value between the parentheses should be rounded down to the nearest integer.

• Number of logical blocks = Volume capacity (in bytes) / 512

To calculate the number of cylinders

Use the following formula:

Number of cylinders = ceil( ceil(Number of logical blocks / 512) / 15 )

To calculate the number of required bitmap areas:

Use the following formula:

Number of bitmap areas = ceil( (Number of cylinders x 15) / 122,752 )

where:

• number of cylinders x 15 indicates the number of slots

• 122,752 is the number of slots that a bitmap area can manage

Note: Performing this calculation on combined volumes can produce an inaccurate result. Perform the calculation for each volume separately, then total the bitmap areas. The following examples show correct and incorrect calculations, using two volumes: one of 10,017 cylinders and another of 32,760 cylinders.

Correct calculation:

ceil ((10,017 x 15) / 122,752) = 2
ceil ((32,760 x 15) / 122,752) = 5

Total: 7

Incorrect calculation:

10,017 + 32,760 = 42,777 cylinders
ceil ((42,777 x 15) / 122,752) = 6

Note: If using LUSE volumes, add 1 to the required number of bitmap areas calculated in the formula above.
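The cylinder and bitmap-area formulas above translate directly into code. This Python sketch reproduces the per-volume calculation, including the per-volume totalling the note requires and the LUSE adjustment; the volume sizes are the ones from the example.

```python
import math

SLOTS_PER_BITMAP_AREA = 122_752   # slots one bitmap area can manage

def num_cylinders(logical_blocks):
    """Number of cylinders = ceil(ceil(logical blocks / 512) / 15)."""
    return math.ceil(math.ceil(logical_blocks / 512) / 15)

def bitmap_areas(cylinders, luse=False):
    """Bitmap areas for one volume; LUSE volumes need one extra area."""
    areas = math.ceil((cylinders * 15) / SLOTS_PER_BITMAP_AREA)
    return areas + 1 if luse else areas

# Correct: calculate per volume, then total (10,017- and 32,760-cylinder volumes).
total = bitmap_areas(10_017) + bitmap_areas(32_760)   # 2 + 5 = 7
# Incorrect: summing cylinders first understates the requirement.
wrong = bitmap_areas(10_017 + 32_760)                 # 6
```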

To calculate the maximum number of pairs

The maximum number of pairs is determined by the following:


• The number of bitmap areas required for Universal Replicator (calculated above).

• The total number of bitmap areas in the storage system, which is 65,536.

Bitmap areas reside in an additional shared memory, which is required for Universal Replicator.

Bitmap areas are used not only by Universal Replicator, but also by Universal Replicator for Mainframe, TrueCopy, TrueCopy for Mainframe, and High Availability Manager. Therefore, the number of bitmap areas used by these other program products (if any) must be subtracted from 65,536, with the difference used to calculate the maximum number of pairs for Universal Replicator.

If TrueCopy and Universal Replicator share the same volume, you must use the total number of bitmap areas for both pairs regardless of whether the shared volume is main or remote.

• The maximum number of pairs supported per storage system, which is 32,768. If CCI is used, it is 32,767.

Calculate the maximum number of pairs using the following formula.

Maximum number of pairs = floor( Number of bitmap areas / required number of bitmap areas )

If the calculated maximum number of pairs exceeds the total number of LDEVs, and the total number of LDEVs is less than 32,768, then the total number of LDEVs is the maximum number of pairs that can be created.
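Putting the limits above together, a short Python sketch of the maximum-pair calculation follows. The bitmap areas consumed by other program products, the areas per pair, and the LDEV count are illustrative inputs, not values from a real system.

```python
import math

TOTAL_BITMAP_AREAS = 65_536   # total bitmap areas in the storage system
SYSTEM_PAIR_LIMIT = 32_768    # per-system pair limit (32,767 with CCI)

def max_ur_pairs(areas_per_pair, areas_used_by_other_products, total_ldevs):
    """Maximum UR pairs: floor(free bitmap areas / areas per pair),
    capped by the total LDEV count and the per-system limit."""
    free_areas = TOTAL_BITMAP_AREAS - areas_used_by_other_products
    pairs = math.floor(free_areas / areas_per_pair)
    return min(pairs, total_ldevs, SYSTEM_PAIR_LIMIT)

# Illustrative inputs: 7 areas per pair, 1,000 areas used by TC/URz, etc.
limit = max_ur_pairs(7, 1_000, 32_768)   # 9,219 pairs
```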

Maximum initial copy operations and priorities

During configuration, you specify the maximum number of initial copies that can run at one time. The system allows up to 128 initial copies to run concurrently. Limiting this number is done for performance reasons: the more initial copies that run concurrently, the slower the performance.

You also specify the priority for each initial copy during the create pair operation. Priority is used when you create multiple initial copies in one operation, which is possible because you can specify multiple P-VOLs and S-VOLs in the Paircreate dialog box. The pair with priority 1 runs first, and so on, up to 256.

When you create more pairs than the maximum initial copy setting, the pairs with priorities within the maximum number specified run concurrently, while the pairs with priorities higher than the maximum number wait. When one pair completes, a waiting pair begins, and so on.

If you perform a pair operation for multiple pairs (for one kind of data, for example), and then perform another operation for multiple pairs (for another kind of data), the pairs in the first operation are completed in the order of their assigned priorities. The system begins processing pairs in the second set when the number of pairs left in the first set drops below the maximum initial copy setting. This is shown in the following example.
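The priority behavior described above can be sketched in a few lines. This simplified Python model only splits newly created pairs into the set that starts immediately and the set that waits; in the real system a waiting pair begins as each running copy completes.

```python
def initial_copy_schedule(priorities, max_concurrent):
    """Split pairs (identified by priority number) into those that start
    immediately and those that wait. Priority 1 runs first."""
    ordered = sorted(priorities)
    return ordered[:max_concurrent], ordered[max_concurrent:]

# Six pairs created in one operation, maximum initial copy setting of 4.
running, waiting = initial_copy_schedule([3, 1, 6, 2, 5, 4], 4)
# running == [1, 2, 3, 4]; waiting == [5, 6]
```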


See Specifying number of concurrent initial/resync copies on page 5-8 to set the maximum initial copies.

The Priority field is discussed in Creating the initial copy on page 6-2.

Disaster recovery considerations

You begin a disaster recovery solution when planning the UR system. The following are the main tasks for preparing for disaster recovery:

• Identify the data volumes that you want to back up for disaster recovery.

• Pair the identified volumes using UR.

• Establish file and database recovery procedures.

• Install and configure host failover software and error reporting communications (ERC) between the primary and secondary sites.

For more information on host failover error reporting, see the following topic. Also, review Disaster recovery operations on page 9-1 to become familiar with disaster recovery processes.

Host failover software

Host failover software is a critical component of any disaster recovery effort.

When a primary system fails to maintain synchronization of a UR pair, the primary system generates sense information. This information must be


transferred to the remote site using the host failover software for effective disaster recovery. CCI provides failover commands that interface with industry-standard failover products.

Sharing volumes with other VSP software volumes

Universal Replicator volumes can be shared with other product volumes.

Sharing pair volumes enhances replication solutions, for example, when Universal Replicator and TrueCopy or ShadowImage volumes are shared.

For planning information, see the following:

Sharing volumes on page B-1

Configurations with TrueCopy on page C-1

Configurations with ShadowImage on page D-1

Planning UR in multiple VSPs using a consistency group

Copy operations can be run simultaneously on multiple UR pairs residing in multiple primary and multiple secondary systems. This is done by placing the journals in the primary systems in a CCI consistency group. Data update order in copy processing is guaranteed across the secondary systems.

With multiple systems, the journals in the paired secondary systems are automatically placed in the consistency group.

With multiple systems, you can also place the journals from both OPEN and mainframe systems in the same CCI consistency group. When this configuration is used, URz journals cannot be placed in an EXCTG.

In addition, Universal Replicator volumes in multiple systems can be shared with other UR pairs and with TrueCopy pairs. See the following for more information:

3 UR data-center configurations on page A-1

Configurations with TrueCopy on page C-1

You can register up to four journals in a single consistency group.

Any combination of primary and secondary system journals can be used.

For example, you can include journals from four primary systems and four secondary systems, two primary systems and one secondary system, and so on.

An example configuration is shown in the following figure.


When data is sent to the secondary systems, the systems check time stamps, which are added when data is written by the hosts to the P-VOLs. The secondary systems then restore the data to the S-VOLs in chronological order (older data is restored first). This ensures that the update sequence is maintained.

Note the following when planning for multiple systems:

• Storage Navigator is required at the primary and secondary sites.

• CCI is required on the host at the primary and secondary sites.

• Journal data is updated in the secondary system based on the time stamp issued from CCI and the sequence number issued by the host with write requests to the primary system. Time and sequence information remain with the data as it moves to the master and restore journals and then to the secondary volume.

• With CCI consistency groups, when a pair is split from the S-VOL side (P-VOL status = PAIR), each storage system copies the latest data from the P-VOLs to the S-VOLs. P-VOL time stamps might differ by storage system, depending on when they were updated.

• Disaster recovery can be performed with multiple storage systems, including those with UR and URz journals, using CCI. See Switching host operations to the secondary site on page 9-3 for information.

• An error in one journal can cause suspension of all journals. See Suspension among journals on page 10-25 for more information.

• Time stamps issued by CCI and the mainframe host are different. The time stamps issued by the mainframe host become invalid when the URz journal is included in a CCI consistency group.


• Restoring data to the secondary system is performed when the time stamp of the copied journal is updated. The recommended interval between time stamps is one second.

Consider the following before setting the interval:

I/O response time slows while time stamps are being updated among multiple storage systems. If you shorten the interval, more time stamps are issued, resulting in even slower I/O response time.

If the interval is lengthened, the amount of time that journal data can accumulate increases, which results in an increased amount of data to be copied.

None of the above is true during the initial copy or resynchronization.

During these operations, lengthening the interval between time stamps does not result in more accumulated journal data, because data restoring takes place regardless of time stamp.

• The recommended method for executing CCI commands is the in-band (host-based) method. This prevents I/O response from deteriorating, which can occur with the out-of-band (LAN-based) method.

• Do not register a URz journal in an EXCTG if it is included in a CCI consistency group. A journal is automatically released from the EXCTG when the CCI pair resynchronization operation is performed.

• In a CCI consistency group containing both UR and URz journals, data consistency is maintained in a UR system when its storage system’s microcode is changed to a version previous to 70-03-0x. However, you cannot lower the microcode for the URz storage system.

• It is not possible to register a journal to multiple CCI consistency groups.

Multiple journals per CCI consistency group

Normally, only one journal can be registered in a CCI consistency group.

With multiple VSP systems, however, up to four journals, including URz and UR journals, can be registered in a CCI consistency group. The following figures show different configurations in which multiple journals are registered in a single CCI consistency group.


3DC configurations using 3 UR sites

With Universal Replicator, you normally use two data centers—the primary and secondary sites.

You can employ a third site to create a 3-data-center (3DC) configuration.

Using three sites makes a third copy of production data available in the event of primary and secondary site failure.

You can set up three UR sites in multi-target or cascade configurations. You also have the option of adding a delta resync pair.

For details, see 3 UR data-center configurations on page A-1 .

Planning for previous models

Universal Replicator can be used to perform remote copy operations between VSP and USP V/VM or TagmaStore USP.

To perform remote copy between VSP and USP V/VM or TagmaStore USP, observe the following:

• Configure a remote path between LDKC00 of the VSP system and the USP V/VM.

More than one USP V/VM can be connected to LDKC00 of VSP.

LDKC01 cannot be used.

Use the configuration instructions in Configuration operations on page 5-1.

• Both systems must be set up as shown in the figure below.


Figure 3-1 Remote path between LDKC00 of VSP and USP V/VM

• When connecting VSP with TagmaStore USP or USP V/VM, contact your HDS representative for information regarding supported microcode versions.

• When connecting VSP with TagmaStore USP, set up the VSP using a CU:LDEV number between 00:00 and 3F:FF; do not use 40:00 or higher.

• When connecting VSP with USP V/VM, set up the VSP volume using a CU:LDEV number between 00:00 and EF:FF. The volume must be on LDKC00.

• Up to 32,768 volumes can be used for volume pairs.

• VSP and USP V/VM can be set up in 3-data-center (3DC) cascade or multi-target configurations. These configurations are used when combining TrueCopy and Universal Replicator systems. See Configurations with TrueCopy on page C-1 to review these configurations. There are no restrictions for combining primary and secondary sites between VSP and USP V/VM.

Guidelines for preparing systems for UR

Use the following guidelines to ensure that your VSP systems are ready for UR:

• Identify the locations where your UR primary and secondary data volumes will be located, then install and configure the VSP systems.

• Make sure that primary and secondary systems are configured for Storage Navigator operations. See the Hitachi Storage Navigator User Guide for information.

• Make sure that primary and secondary systems are properly configured for UR operations, for example, cache memory considerations. See the entry for Cache and Nonvolatile Storage in the requirements table, System requirements on page 2-2. Also consider the amount of Cache Residency Manager data to be stored in cache when determining the required amount of cache.


• Make sure that primary and secondary systems have the system option modes specified that may be required for your UR configuration. See System option modes on page 3-18 for more information.

• Make sure that primary systems are configured to report sense information to the host. Secondary systems should also be attached to a host server to enable reporting of sense information in the event of a problem with an S-VOL or secondary system. If the remote system is not attached to a host, it should be attached to a primary site host server so that monitoring can be performed.

• If power sequence control cables are used, set the power select switch for the cluster to LOCAL to prevent the primary system from being powered off by the host. Make sure the secondary system is not powered off during UR operations.

• Install the UR remote copy connections (cables, switches, and so on) between the primary and secondary systems.

• When setting up data paths, distribute them between different storage clusters and switches to provide maximum flexibility and availability. The remote paths between the primary and secondary systems must be separate from the remote paths between the host and secondary system.

System option modes

To provide greater flexibility, the Virtual Storage Platform has additional operational parameters called system option modes (SOMs) that allow you to tailor the VSP to your unique operating requirements. The SOMs are set on the SVP by your Hitachi Data Systems representative.

The system option modes can be used for several kinds of UR customizations, including:

• 2DC configuration

• Delta resync configuration

• Configuring split options for mirrors

• Improving initial copy time

The following table lists and describes the SOMs for Universal Replicator. For a complete list of SOMs for the VSP, see the Hitachi Virtual Storage Platform User and Reference Guide.

Note: The SOM information may have changed since this document was published. Contact your Hitachi Data Systems representative for the latest SOM information.


Table 3-1 System option modes

Mode 448
When the SVP detects a blocked path:
ON: An error is assumed and the mirror is immediately suspended.
OFF: If the path does not recover within a specified period of time, an error is assumed and the mirror is suspended.
Note: The mode 448 setting is available only when mode 449 is set to OFF.

Mode 449
ON: The SVP does not detect blocked paths.
OFF: The SVP detects blocked paths and monitors the time until the mirrors are suspended.

Mode 466
It is strongly recommended that the path between the main and remote storage systems have a minimum data transfer speed of 100 Mbps. If the data transfer speed falls to 10 Mbps or lower, UR operations cannot be processed properly. As a result, many retries occur and UR pairs may be suspended. This SOM is provided to ensure proper system operation at data transfer speeds of at least 10 Mbps.
ON: Data transfer speeds of 10 Mbps and higher are supported. The journal read is performed as a 4-multiplexed read with a read size of 256 KB.
OFF (default): For conventional operations. Data transfer speeds of 100 Mbps and higher are supported. The journal read is performed as a 32-multiplexed read with a read size of 1 MB by default.
Note: The data transfer speed can be changed using the Change JNL Group options.


Mode 474
When mode 474 is set to ON, the initial copy time for a UR pair is improved by using the TC initial copy operation. The procedure requires a script that performs the following operations:
• UR initial copy operation, with "None" specified for the Initial Copy parameter.
• Split the UR pair.
• TC initial copy operation, using the split UR pair volumes.
• Delete the TC pair.
• Resynchronize the UR pair.
ON: Using CCI improves performance.
OFF: Operations run normally.
If the P-VOL and S-VOL are both DP-VOLs, initial copy performance might not improve with SOM 474 set to ON. With DP-VOLs, not all areas in a volume are allocated for UR, so not all areas in the P-VOL are copied to the S-VOL. With less than the full amount of data being copied, the initial copy already completes in a shorter time, which SOM 474 may not improve.
Notes:
1. Set this mode on both the MCU and RCU.
2. When this mode is set to ON:
- Execute all pair operations from CCI/BCM.
- Use a dedicated script.
- The initial copy operation is prioritized over update I/O, so the processing speed of update I/O slows down.
- Version downgrade is disabled.
- Take Over is not available.
3. If this mode is not set to ON on both sides, the behavior is as follows:
- Set on the MCU but not the RCU: TC Sync pair creation fails.
- Set on the RCU but not the MCU: The update data for the P-VOL is copied to the S-VOL synchronously.
4. This mode cannot be applied to a UR pair that is the second mirror in a URxUR multi-target configuration, a URxUR cascade configuration, or a 3DC multi-target or cascade configuration of three UR sites. If applied, TC pair creation is rejected with SSB=CBED output.

Mode 506
Enables the delta resync operation.
ON: The delta resync operation is performed if there are no update I/Os.
OFF: The copy processing of all data is performed if there are no update I/Os.
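The mode 474 script flow described above can be sketched as an ordered command sequence. The CCI command names below (paircreate, pairsplit, pairresync) are real RAID Manager commands, but the group names and the exact option combinations are illustrative assumptions only; build the production script with your HDS representative's guidance.

```python
# Hedged sketch of the SOM 474 dedicated-script flow. Options such as
# "-nocopy" (create pair without initial copy) and "-S" (delete pair)
# exist in CCI, but this is not a verified, ready-to-run sequence.

def som474_initial_copy_steps(ur_group: str, tc_group: str) -> list:
    return [
        # 1. UR pair creation with "None" for the Initial Copy parameter
        f"paircreate -g {ur_group} -f async -nocopy",
        # 2. Split the UR pair
        f"pairsplit -g {ur_group}",
        # 3. TC initial copy operation using the split UR pair volumes
        f"paircreate -g {tc_group}",
        # 4. Delete the TC pair
        f"pairsplit -g {tc_group} -S",
        # 5. Resynchronize the UR pair
        f"pairresync -g {ur_group}",
    ]

for cmd in som474_initial_copy_steps("URGRP", "TCGRP"):
    print(cmd)
```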


Mode 690
Controls whether Read JNL or JNL Restore is prevented when the Write Pending rate on the RCU exceeds 60%, as follows:
• When the CLPR of the JNL volume exceeds 60%, Read JNL is prevented.
• When the CLPR of the data (secondary) volume exceeds 60%, JNL Restore is prevented.
MCU/RCU: This SOM applies only to the RCU.
ON: Read JNL or JNL Restore is prevented when the Write Pending rate on the RCU exceeds 60%.
OFF (default): Read JNL or JNL Restore is not prevented when the Write Pending rate on the RCU exceeds 60% (the same behavior as before).
Notes:
1. This SOM can be set online.
2. If the Write Pending rate stays at 60% or more on the RCU for a long time, the initial copy takes extra time to complete while making up for the prevented copy operations.
3. If the Write Pending rate stays at 60% or more on the RCU for a long time, the pair status may become Suspend due to the JNL volume being full.
4. When TagmaStore USP/TagmaStore NSC is used on the P-VOL side, this SOM cannot be used. If this SOM is set to ON, SSB=8E08 on the P-VOL side and SSB=C8D1 on the S-VOL side may be output frequently.

Mode 908
Changes the cache memory (CM) capacity allocated to MPBs with different workloads.
ON: The difference in CM allocation capacity among MPBs with different workloads is large.
OFF (default): The difference in CM allocation capacity among MPBs with different workloads is small.
Notes:
1. Apply this SOM to CLPRs used only for UR JNLGs.
2. Because the CM capacity allocated to MPBs with a low workload is small, performance is affected by a sudden increase in workload.
3. This SOM is effective per CLPR. Therefore, when setting this SOM to ON or OFF, select the target "LPRXX (XX=00 to 31)". For example, even when only CLPR0 is defined (CLPR1 to 31 are not defined), select "LPR00" first and then set the SOM to ON or OFF.
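The mode 690 gating behavior described above can be modeled as two threshold checks. This is our simplified model of the documented behavior, not actual firmware logic; the function names and parameters are illustrative.

```python
# Minimal sketch of the SOM 690 gating on the RCU: journal reads and
# journal restores are held off while write-pending in the relevant
# CLPR exceeds 60% (only when the SOM is ON).

THRESHOLD_PCT = 60.0

def allow_read_jnl(jnl_clpr_write_pending: float, som690_on: bool) -> bool:
    # Read JNL is prevented when the JNL-volume CLPR exceeds 60%.
    return not (som690_on and jnl_clpr_write_pending > THRESHOLD_PCT)

def allow_jnl_restore(data_clpr_write_pending: float, som690_on: bool) -> bool:
    # JNL Restore is prevented when the data-volume CLPR exceeds 60%.
    return not (som690_on and data_clpr_write_pending > THRESHOLD_PCT)

print(allow_read_jnl(75.0, som690_on=True))     # False: Read JNL prevented
print(allow_read_jnl(75.0, som690_on=False))    # True: default behavior
print(allow_jnl_restore(40.0, som690_on=True))  # True: below threshold
```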

