
About MirrorView Remote Mirroring Software

The following figure shows two sites and a primary and secondary image that includes one LUN. Notice that the storage-system SP As and SP Bs are connected.

Figure 3-1 Sites with MirrorView Primary and Secondary Images (two sites, each with a storage system whose SP A and SP B own LUNs; the SPs at the two sites are connected through switch fabrics and extended distance connections)

The connections between storage systems require fibre channel cable and GigaBit Interface Converters (GBICs) at each SP. If the connections include extender boxes, then the distance between storage systems can be up to the maximum supported by the extender — generally 40-60 kilometers.

Without extender boxes, the maximum distance is 500 meters.


MirrorView Features and Benefits

MirrorView mirroring adds value to customer systems by offering the following features:

• Provision for disaster recovery with minimal overhead

• Local high availability

• Cross mirroring

• Integration with EMC SnapView LUN snapshot copy software

Provision for Disaster Recovery with Minimal Overhead

Provision for disaster recovery is the major benefit of MirrorView mirroring. Destruction of the primary data site would cripple or ruin many organizations. MirrorView lets data processing operations resume within a working day.

MirrorView is transparent to servers and their applications. Server applications do not know that a LUN is mirrored, and the effect on performance is minimal.

MirrorView uses synchronous writes, which means that server writes are acknowledged only after all secondary storage systems commit the data. This type of mirroring is in use by most disaster recovery systems sold today.

MirrorView is not server-based, so it uses no server I/O or CPU resources. The mirror processing is performed on the storage system.


Local High Availability

MirrorView operates in a highly available environment. There are two host bus adapters (HBAs) per host, and there are two SPs per storage system. If a single adapter or SP fails, the surviving SP can take control of (trespass) any LUNs owned by the failed adapter or SP, and I/O continues on the path through the surviving SP. The high availability features of RAID protect against disk failure. Mirrors are resilient to an SP failure in the primary or secondary storage system.

Cross Mirroring

The primary or secondary role applies to just one remote mirror. A storage system can maintain a primary image with one mirror and a secondary image with another mirror. This allows the use of server resources at both sites while maintaining duplicate copies of all data at both sites.

Integration with EMC SnapView LUN Copy Software

EMC SnapView software allows users to create a snapshot copy of an active LUN at any point in time. The snapshot copy is a consistent image that can serve for backup while I/O continues to the original LUN. You can use SnapView in conjunction with MirrorView to make a backup copy at a remote site.

A common situation for disaster recovery is to have a primary and a secondary site that are geographically separate. MirrorView ensures that the data from the primary site replicates to the secondary site. The secondary site sits idle until there is a failure of the primary site.

With the addition of SnapView at the secondary site, the secondary site can take snapshot copies of the replicated images and back them up to other media, providing time-of-day snapshots of data on the production host with minimal overhead.

How MirrorView Handles Failures

When a failure occurs during normal operations, MirrorView implements several actions to recover.

Primary Image Failure

When the server or storage system running the primary image fails, access to the mirror stops until a secondary image is promoted to primary or until the primary is repaired. If a secondary was promoted, the former primary is demoted to secondary and must be synchronized before it rejoins the mirror. If the primary was repaired, the mirror continues as before the failure.

For fast synchronization of the images after a primary failure, MirrorView provides a write intent log feature. The write intent log records the current activity so that a repaired primary need only copy over data that recently changed (instead of the entire image), thus greatly reducing the recovery time.
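To make the write intent log idea concrete, here is a minimal sketch (a toy model, not EMC's implementation; the region size, class, and method names are assumptions made for illustration). The log marks a region before a write is issued and clears it once the write is safely committed, so that after a primary failure only the regions still marked need to be re-copied rather than the entire image.

```python
class WriteIntentLog:
    """Toy model of a write intent log: remember which LUN regions have
    writes in flight, so recovery copies only those regions."""

    def __init__(self, region_blocks=128):
        self.region_blocks = region_blocks
        self.dirty_regions = set()      # the real log is persisted on the storage system

    def _region(self, block):
        return block // self.region_blocks

    def log_intent(self, block):
        # Record the intent *before* the data write is issued.
        self.dirty_regions.add(self._region(block))

    def clear_intent(self, block):
        # Safe to clear once the write has been committed.
        self.dirty_regions.discard(self._region(block))

    def regions_to_resync(self):
        # After a primary failure and repair, copy only these regions.
        return sorted(self.dirty_regions)


log = WriteIntentLog()
log.log_intent(4096)      # mark the region covering block 4096
# ... write the block on the primary, mirror it to the secondary ...
log.clear_intent(4096)    # the region no longer needs recovery copying
```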


Secondary Image Failure


A secondary image failure may bring the mirror below the minimum number of images required; if so, this triggers a mirror failure. When a primary cannot communicate with a secondary image, it marks the secondary as unreachable and stops trying to write to it. However, the secondary image remains a member of the mirror.

The primary also attempts to minimize the amount of work required to synchronize the secondary after it recovers. It does this by fracturing the mirror. This means that, while the secondary is unreachable, the primary keeps track of all write requests so that only those blocks that were modified need to be copied to the secondary during recovery. When the secondary is repaired, the software writes the modified blocks to it, and then starts mirrored writes to it.
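The fracture bookkeeping can be pictured as a dirty-block set that the primary maintains only while the secondary is unreachable. A minimal sketch follows (names and data structures are illustrative assumptions, not the product's internals):

```python
class FracturedMirror:
    """Toy model of mirror fracturing: while the secondary is unreachable,
    remember which blocks changed so recovery copies only those blocks."""

    def __init__(self):
        self.fractured = False
        self.modified_blocks = set()

    def on_secondary_unreachable(self):
        self.fractured = True                 # stop mirrored writes, start tracking

    def write(self, block, send_to_secondary):
        # Writes to the primary always proceed; mirroring depends on fracture state.
        if self.fractured:
            self.modified_blocks.add(block)
        else:
            send_to_secondary(block)

    def resynchronize(self, send_to_secondary):
        # Secondary repaired: copy only the blocks modified since the fracture,
        # then resume normal mirrored writes.
        for block in sorted(self.modified_blocks):
            send_to_secondary(block)
        self.modified_blocks.clear()
        self.fractured = False
```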

The following table shows how MirrorView might help you recover from system failure at the primary and secondary sites. It assumes that the mirror is active and is in the in-sync or consistent state.

Table 3-1 MirrorView Recovery Scenarios

Event: Server or storage system running the primary image fails.

Result and recovery:

Option 1 - Catastrophic failure; repair is difficult or impossible. The mirror goes to the attention state. If a host is attached to the secondary storage system, the administrator promotes the secondary image, and then takes the other prearranged recovery steps required for application startup on the standby host.

Note: Any writes in progress when the primary image fails may not propagate to the secondary image. Also, if the remote image was fractured at the time of the failure, any writes since the fracture will not have propagated.

Option 2 - Non-catastrophic failure; repair is feasible. The mirror goes to the attention state. The administrator has the problem fixed, and then synchronizes the secondary image. The write intent log, if used, shortens the sync time needed. If a write intent log is not used, or the secondary LUN was fractured at the time of failure, then a full synchronization is necessary.

Event: Storage system running the secondary image fails.

Result and recovery: The mirror goes to the attention state, rejecting I/O. The administrator has a choice: if the secondary can easily be fixed (for example, if someone pulled out a cable), the administrator can have it fixed and let operations resume. If the secondary cannot easily be fixed, the administrator can reduce the minimum number of secondary images required to let the mirror become active. Later, the secondary can be fixed and the minimum number of required images can be changed.


MirrorView Example

Figure 3-2 Sample MirrorView Configuration (at the primary site, a highly available cluster of File Server and Mail Server plus Database Server 1 connect through switch fabrics to Storage system 1, which holds the cluster and database server Storage Groups on SP A and SP B; at the secondary site, an Accounts Server and Database Server 2 connect to Storage system 2, which holds the Accounts Server Storage Group and the remote mirror; the two storage systems are linked by extended distance connections)

In the figure above, Database Server 1, the production host, executes customer applications. These applications access data on Storage system 1, in the database server Storage Group.

Storage system 2 is 40 km away and mirrors the data on the database server Storage Group. The mirroring is synchronous, so that Storage system 2 always contains all data modifications that are acknowledged by Storage system 1 to the production host.

Each server has two paths (one through each SP) to each storage system. If a failure occurs in a path, then the storage-system software may switch to the path through the other SP (transparent to any applications).

The server sends a write request to an SP in Storage system 1, which then writes data to its LUN. Next, the data is sent to the corresponding SP in Storage system 2, where it is stored on its LUN before the write is acknowledged to the production host.
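The ordering of these steps is what makes the mirroring synchronous: the host sees an acknowledgment only after both storage systems hold the data. A simplified sketch (not the actual SP firmware; the class and method names are invented for illustration):

```python
class SynchronousMirror:
    """Toy model of a synchronous mirrored write: acknowledge the host only
    after the primary and the secondary have both committed the data."""

    def __init__(self):
        self.primary_lun = {}
        self.secondary_lun = {}

    def host_write(self, block, data):
        self.primary_lun[block] = data     # SP in Storage system 1 writes its LUN
        self.secondary_lun[block] = data   # data forwarded to the SP in Storage system 2
        return "ack"                       # acknowledgment returned to the production host last


mirror = SynchronousMirror()
assert mirror.host_write(block=7, data=b"payload") == "ack"
assert mirror.secondary_lun[7] == b"payload"   # the secondary committed before the ack
```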

Database Server 2, the standby host, has no direct access to the mirrored data. (There need not be a server at all at the standby site; if there is none, the LAN connects to the SPs as shown.) This server runs applications that access other data stored on Storage system 2. If a failure occurs in either the production host or Storage system 1, an administrator can use the management station to promote the image on Storage system 2 to the primary image. Then the appropriate applications can start on any connected server (here, Database Server 2) with full access to the data. The mirror will be accessible in minutes, although the time needed for applications to recover will vary.


MirrorView Planning Worksheet

To plan, you must decide whether you want to use a write intent log and, if so, the LUNs you will bind for this. You will also need to complete a MirrorView mirroring worksheet.

Note that you must assign each primary image LUN to a Storage Group (as with any normal LUN), but must not assign a secondary image LUN to a Storage Group.

MirrorView Mirroring Worksheet

The worksheet records the following for each remote mirror:

• Production host name
• Primary LUN ID, size, and file system name
• Storage Group number/name
• Use write intent log: Y/N (about 256 Mbytes per storage system)
• SP (A/B)
• Remote mirror name
• Secondary image contact person
• Secondary image LUN ID

What Next?

This chapter explained the MirrorView remote mirroring software. For information on SnapView snapshot copy software, continue to the next chapter. To plan LUNs and file systems, skip to Chapter 5. For details on the storage-system hardware, skip to Chapter 6. For storage-system management utilities, skip to Chapter 7.


4

About SnapView Snapshot Copy Software

This chapter introduces EMC SnapView software, which creates LUN snapshots to be used for independent data analysis or backup with EMC FC4700 Fibre Channel disk-array storage systems.

Major sections are

• What Is EMC SnapView Software?

• Sample Snapshot Session

• Snapshot Planning Worksheet


What Is EMC SnapView Software?

EMC SnapView is a software application that captures a snapshot image of a LUN and retains the image independently of subsequent changes to the LUN. The snapshot image can serve as a base for decision support, revision testing, backup, or in any situation where you need a consistent, copyable image of real data.

SnapView can create or destroy a snapshot in seconds, regardless of the LUN size, since it does not actually copy data. The snapshot image consists of the unchanged LUN blocks and, for each block that changes from the snapshot moment, a copy of the original block. The software stores the copies of original blocks in a private LUN called the snapshot cache. For any block, the copy happens only once, when the block is first modified. In summary: snapshot copy = unchanged-blocks-on-source-LUN + blocks-cached

As time passes and I/O modifies the source LUN, the number of blocks stored in the snapshot cache grows. However, the snapshot copy, composed of all the unchanged blocks (some from the source LUN and some from the snapshot cache), remains unchanged.
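The copy-on-first-write behavior described above can be sketched as follows (a simplified model; the class and method names are invented, and chunk granularity, persistence, and locking are omitted):

```python
class SnapshotCopy:
    """Toy copy-on-first-write snapshot: the snapshot view is the source LUN's
    unchanged blocks plus the original copies saved in the snapshot cache."""

    def __init__(self, source_lun):
        self.source = source_lun    # dict: block number -> data
        self.cache = {}             # originals of blocks modified since the snapshot moment

    def write_source(self, block, data):
        # The first modification of a block saves the original to the cache;
        # later writes to the same block do not copy again.
        if block not in self.cache:
            self.cache[block] = self.source.get(block)
        self.source[block] = data

    def read_snapshot(self, block):
        # Cached original if the block has changed, otherwise the live source block.
        return self.cache[block] if block in self.cache else self.source.get(block)


lun = {0: b"alpha", 1: b"beta"}
snap = SnapshotCopy(lun)
snap.write_source(0, b"ALPHA*")            # block 0 changes after the snapshot moment
assert snap.read_snapshot(0) == b"alpha"   # the snapshot still shows the original
assert snap.read_snapshot(1) == b"beta"    # unchanged blocks are read from the source LUN
```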

The snapshot copy does not reside on disk modules like a conventional LUN. However, the snapshot copy appears as a conventional LUN to another host. Any other server can access the copy for data processing analysis, testing, or backup.

The following figure shows how a snapshot session works: the production host with the source LUN, the snapshot cache, and second host with access to the snapshot copy.

Figure 4-1 SnapView Operations Model (the production host drives continuous I/O to the source LUN on the storage system, while a second host accesses the snapshot; the snapshot is a composite of source LUN and cache data that is accessible as long as the session lasts)

SnapView offers several important benefits:

• It allows full access to production data with minimal impact on performance;

• For decision support or revision testing, it provides a coherent, readable and writable copy of real production data at any given point in time; and

• For backup, it practically eliminates the time that production data spends offline or in hot backup mode. And it off-loads the backup overhead from the production host to another host.

Snapshot Components

A snapshot session uses three components: a production host, a second host, and a snapshot copy session.

• The production host runs the customer applications on the LUN that you want to copy, and allows the management software to create, start, and stop snapshot sessions.

• The second host reads the snapshot during the snapshot session, and performs analysis or backup using the snapshot.

• A snapshot session makes the snapshot copy accessible to the second host; it starts and stops according to directives you give using Navisphere software on the production host.


Sample Snapshot Session

The following figure shows how a sample snapshot session starts, runs, and stops.

Figure 4-2 How a Snapshot Session Starts, Runs, and Stops (five stages: 1. before the session starts; 2. at session start, 2:00 p.m.; 3. at start of operation, 2:02 p.m.; 4. at end of operation, 4:15 p.m.; 5. at session end, 4:25 p.m. Each stage shows the production host, the second host, the source LUN, the snapshot cache, and the snapshot, which holds pointers to chunks. Key: unchanged chunks on the source LUN, changed chunks on the source LUN, and unchanged chunks in the cache and snapshot)


Snapshot Planning Worksheet

The following information is needed for system setup to let you bind one or more LUNs for the snapshot cache.

Snapshot Cache Setup Information (For Binding)

• Snapshot source LUN size
• SP (A or B)
• RAID type for snapshot cache
• RAID Group ID of parent RAID Group
• LUN size (Mbytes; we suggest 20% of the source LUN size, as in the sketch following this list)
• Cache LUN ID (complete after binding)
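The 20% sizing guideline in the worksheet can be expressed as a one-line calculation (a sketch of the rule of thumb only; the right size ultimately depends on how much of the source LUN changes during a session):

```python
def suggested_snapshot_cache_mb(source_lun_mb):
    """Apply the guideline above: size the snapshot cache LUN at about 20%
    of the source LUN's capacity."""
    return source_lun_mb // 5


print(suggested_snapshot_cache_mb(100_000))   # a 100,000-Mbyte source LUN -> 20,000 Mbytes
```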

For each session, you must complete a snapshot session worksheet.

Note that you must assign the LUN and snapshot to different Storage Groups. One Storage Group should include the production host and source LUN; another Storage Group should include the second host and the snapshot.

Snapshot Session Worksheet

The worksheet records the production host name, LUN ID, Storage Group ID, and size (Mb); the application, file system, or database name, with its LUN ID and size (Mb); the chunk (cache write) size; the SP (for both LUN and cache); the time of day to copy; and the session name.


What Next?

This chapter explained the SnapView snapshot copy software. To plan LUNs and file systems, continue to the next chapter. For details on the storage-system hardware, skip to Chapter 6. For storage-system management utilities, skip to Chapter 7.


5

Planning File Systems and LUNs

This chapter shows a sample RAID, LUN, and Storage Group installation with sample shared switched and unshared direct storage, and then provides worksheets for planning your own storage installation. Topics are

• Multiple Paths to LUNs

• Sample Shared Switched Installation

• Sample Unshared Direct Installation

• Planning Applications, LUNs, and Storage Groups


Multiple Paths to LUNs

A shared storage system includes one or more servers, two Fibre Channel switches, and one or more storage systems, each with two SPs and the Access Logix option.

With shared storage (switched or direct), there are at least two paths to each LUN in the storage system. The storage-system Base Software detects both paths and, using optional Application Transparent Failover (ATF) software, can automatically switch to the other path, without disrupting applications, if a device (such as a host-bus adapter or cable) fails.

With unshared storage (one server direct connection), if the server has two adapters and the storage system has two SPs, ATF performs the same function as with shared systems: it automatically switches to the other path if a device (such as a host-bus adapter or cable) fails.

And with two adapters and two SPs (switched or unshared), ATF can send I/O to each available path in round-robin sequence (multipath I/O) for dynamic load sharing and greater throughput.
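A rough sketch of the two path behaviors just described, failing over to the surviving path and rotating I/O across healthy paths (illustrative only; ATF's real policies, interfaces, and error handling are not shown):

```python
from itertools import cycle


class PathManager:
    """Toy model of host-side path handling: drop a failed path so I/O
    continues on the survivor, and rotate I/O round-robin across healthy paths."""

    def __init__(self, paths):
        self.healthy = list(paths)        # e.g. ["hba0->SP A", "hba1->SP B"]
        self._rr = cycle(self.healthy)

    def mark_failed(self, path):
        self.healthy.remove(path)         # the surviving SP takes over (trespasses) the LUNs
        self._rr = cycle(self.healthy)

    def next_path(self):
        return next(self._rr)             # round-robin multipath I/O for load sharing


pm = PathManager(["hba0->SP A", "hba1->SP B"])
print([pm.next_path() for _ in range(3)])   # alternates between the two paths
pm.mark_failed("hba0->SP A")                # adapter or cable failure
print(pm.next_path())                       # all I/O now uses hba1->SP B
```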


Sample Shared Switched Installation

The following figure shows a sample shared storage system connected to three servers: two servers in a cluster and one server running a database management program.

Disk IDs have the form b e d, where b is the FC4700 back-end bus number (0, which can be omitted, or 1), e is the enclosure number, set on the enclosure front panel (always 0 for the DPE), and d is the disk position in the enclosure (left is 0, right is 9).
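As a quick illustration of the b e d convention (a small helper sketch, not a utility shipped with the storage system; it also accepts the n-n-n and n_n_n forms used by Navisphere Manager and the CLI):

```python
def parse_disk_id(disk_id: str):
    """Split a disk ID of the form 'bed' into (bus, enclosure, disk).

    A two-digit ID omits the bus, which defaults to 0, so '13', '0-1-3',
    and '0_1_3' all mean bus 0, enclosure 1, disk position 3.
    """
    digits = disk_id.strip().replace("-", "").replace("_", "")
    if len(digits) == 2:          # bus omitted, so bus 0
        bus, enclosure, disk = 0, int(digits[0]), int(digits[1])
    elif len(digits) == 3:
        bus, enclosure, disk = int(digits[0]), int(digits[1]), int(digits[2])
    else:
        raise ValueError(f"unexpected disk ID: {disk_id!r}")
    return bus, enclosure, disk


assert parse_disk_id("105") == (1, 0, 5)   # bus 1, DPE enclosure 0, disk position 5
assert parse_disk_id("1-3") == (0, 1, 3)   # bus 0, enclosure 1, disk position 3
```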


Figure 5-1 Sample Shared Switched Storage Configuration (a Database Server, File Server, and Mail Server, two of them in a highly available cluster, connected through two switch fabrics to one storage system with SP A and SP B over two paths, Path 1 and Path 2; the enclosures, disk IDs 000-009 through 120-129, hold the cluster and database server Storage Groups with RAID 5 and RAID 1 LUNs and private storage, while enclosure 030-039 holds unbound disks)


The storage-system disk IDs and Storage Group LUNs are as follows.

Clustered System LUNs

Database Server LUNs (DS) - SP A

Disk IDs     RAID type, storage type
000, 001     RAID 1, Log file for database Dbase1
002, 003     RAID 1, Log file for database Dbase2
004-009      RAID 5 (6 disks), Dbase1
100-104      RAID 5, Users
105-109      RAID 5, Dbase2

File Server LUNs (FS) - SP B

Disk IDs     RAID type, storage type
010-014      RAID 5, Applications
015-019      RAID 5, Users
020-024      RAID 5, Files A
025-029      RAID 5, Files B

Mail Server LUNs (MS) - SP A

Disk IDs     RAID type, storage type
110-114      RAID 5, ISP A mail
115-119      RAID 5, ISP B mail
120-124      RAID 5, Users
125-129      RAID 5, Specs

020, 021     Hot spare (automatically replaces a failed disk in any server's LUN)

With 36-Gbyte disks, the LUN storage capacities and drive names are as follows.

Database Server: 540 Gbytes on five LUNs

• DS R5 Users: Unit users on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for user directories.
• DS R5 Dbase2: Unit dbase2 on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for the second database system.
• DS R1 Log 1: Unit logfDbase1 on two disks bound as a RAID 1 mirrored pair for 36 Gbytes of storage; for database 1 log files.
• DS R1 Log 2: Unit logfDbase2 on two disks bound as a RAID 1 mirrored pair for 36 Gbytes of storage; for database 2 log files.
• DS R5 Dbase1: Unit dbase on six disks bound as a RAID 5 Group for 180 Gbytes of storage; for the database 1 system.


File Server: 576 Gbytes on four LUNs

• FS R5 Apps: Unit S on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for applications.
• FS R5 Users: Unit T on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for user directories and files.
• FS R5 Files A: Unit U on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for file storage.
• FS R5 Files B: Unit V on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for file storage.

Mail Server: 576 Gbytes on four LUNs

• MS R5 Users: Unit Q on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for user directories and files.
• MS R5 Specs: Unit R on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for specifications.
• MS R5 ISP A mail: Unit O on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for the mail delivered via ISP A.
• MS R5 ISP B mail: Unit P on five disks bound as a RAID 5 Group for 144 Gbytes of storage; for the mail delivered via ISP B.


Sample Unshared Direct Installation

This section shows the disks and LUNs in an unshared direct storage-system installation.

To repeat from the previous section: disk IDs have the form b e d, where b is the FC4700 back-end bus number (0, which can be omitted, or 1), e is the enclosure number, set on the enclosure front panel (always 0 for the DPE), and d is the disk position in the enclosure (left is 0, right is 9).

Figure 5-2 Unshared Direct Storage Example (one server with two paths, Path 1 and Path 2, to SP A and SP B of a single storage system; SP A owns a RAID 5 Database LUN and a RAID 1 Sys LUN, and SP B owns RAID 5 Users and Clients/mail LUNs)

If each disk holds 36 Gbytes, then the storage system provides the server with 576 Gbytes of disk storage, all highly available. The storage-system disk IDs and LUNs are as follows.

LUNs - SP A and SP B, 576 Gbytes

Disk IDs     RAID type, storage type, capacity
000, 001     RAID 1, System disk, 36 Gbytes
002-009      RAID 5 (8 disks), Clients and Mail, 252 Gbytes
100-104      RAID 5, Database, 144 Gbytes
105-109      RAID 5, Users, 144 Gbytes


Planning Applications, LUNs, and Storage Groups

This section helps you plan your storage use: the applications to run, the LUNs that will hold them, and, for shared storage, the Storage Groups that will belong to each server. The worksheets to help you do this include

• Application and LUN planning worksheet: lets you outline your storage needs.

• LUN and Storage Group planning worksheet: lets you decide on the disks to compose the LUNs and the LUNs to compose the Storage Groups for each server. (Unshared storage systems do not use Storage Groups. For unshared storage, on the LUN and Storage Group worksheet, skip the Storage Group entry.)

• LUN details worksheet: lets you plan each LUN in detail.

Make as many copies of each blank worksheet as you need. You will need this information later when you configure the storage system(s).

Sample worksheets appear later in this chapter.

Application and LUN Planning

Use the following worksheet to list the applications you will run, and the RAID type and size of LUN to hold them. For each application that will run, write the application name, file system (if any), RAID type, LUN ID (ascending integers, starting with 0), disk space required, and finally the name of the servers and operating systems that will use the LUN.


Application and LUN Planning Worksheet

The blank worksheet has columns for the application; the file system, partition, or drive; the RAID type of the LUN; the LUN ID (hex); the disk space required (Gbytes); and the server hostname and operating system.

A sample worksheet begins as follows:

Application            RAID type of LUN   Disk space required (Gbytes)   Server hostname and operating system
Users                  RAID 5             72 GB                          Server1, UNIX
Dbase2                 RAID 5             72 GB                          Server1, UNIX
Log file for Dbase1    RAID 1             18 GB                          Server1, UNIX
Log file for Dbase2    RAID 1             18 GB                          Server1, UNIX
Dbase1                 RAID 1/0           90 GB                          Server2, UNIX

Completing the Application and LUN Planning Worksheet

Application. Enter the application name or type.

File system, partition, or drive. Write the drive letter (for Windows only) and the partition, file system, logical volume, or drive letter name, if any.

With a Windows operating system, the LUNs are identified by drive letter only. The letter does not help you identify the disk configuration (such as RAID 5). We suggest that later, when you use the operating system to create a partition on a LUN, you use the disk administrator software to assign a volume label that describes the RAID configuration. For example, for drive T, assign the volume ID RAID5_T. The volume label will then identify the drive letter.

RAID type of LUN. This is the RAID Group type you want for this partition, file system, or logical volume. The features of RAID types are explained in Chapter 3. For a RAID 5, RAID 1, RAID 1/0, or RAID 0 Group, you can create one or more LUNs on the RAID Group. For other RAID types, you can create only one LUN per RAID Group.

LUN ID. The LUN ID is a hexadecimal number assigned when you bind the disks into a LUN. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its Storage Group or RAID Group.

The maximum number of LUNs supported on one host-bus adapter depends on the operating system.

Disk space required (Gbytes). Consider the largest amount of disk space this application will need, and then add a factor for growth.

Server hostname and operating system. Enter the server hostname (or, if you don't know the name, a short description that identifies the server) and the operating system name, if you know it.

LUN and Storage Group Planning Worksheet

Use the following worksheet to select the disks that will make up the LUNs and Storage Groups in each storage system. A storage system is any group of enclosures connected to a DPE; it can include up to nine DAE enclosures for a total of 100 disks.

Unshared storage systems do not use Storage Groups. For unshared storage, skip the Storage Group entry.


LUN and Storage Group Planning Worksheet

The blank worksheet diagrams the Bus 0 and Bus 1 enclosures (the DPE, disks 000-009, and the DAE enclosures, disks 010-049 on bus 0 and 100-139 on bus 1) so that you can draw a circle around the disks that will compose each LUN. Navisphere Manager displays disk IDs as n-n-n; the CLI recognizes disk IDs as n_n_n.

For each storage system, record the storage-system number or name. For each Storage Group, record the Storage Group ID or name, the server hostname, and whether the group is dedicated or shared. For each LUN, record the LUN ID or name, RAID type, capacity (Gb), and disk IDs.

Part of a sample LUN and Storage Group worksheet follows.

In the sample, the Storage Group for Server1 contains:

• LUN 0: RAID 1, 18 Gbytes, disks 000 and 001
• LUN 1: RAID 1, 18 Gbytes, disks 002 and 003
• LUN 2: RAID 5, 90 Gbytes, disks 004 through 009
• LUN 3 and LUN 4: RAID 5, 72 Gbytes each, on disks 100 through 109 in the first DAE on bus 1

Completing the LUN and Storage Group Planning Worksheet

As shown, draw circles around the disks that will compose each LUN, and within each circle specify the RAID type (for example, RAID 5) and LUN ID. This is information you will use to bind the disks into LUNs. For disk IDs, use the form shown. This form is enclosure_diskID, where enclosure is the enclosure number (the bottom one is 0, above it 1, and so on) and diskID is the disk position (left is 0, next is 1, and so on).

None of the disks 000 through 008 may be used as a hot spare.

Next, complete as many of the Storage System sections as needed for all the Storage Groups in the SAN (or as needed for all the LUNs with unshared storage). Copy the (blank) worksheet as needed.


For shared storage, if a Storage Group will be dedicated (not accessible by another server in a cluster), mark the Dedicated box at the end of its line; if the Storage Group will be accessible to one or more other servers in a cluster, write the hostnames of all servers and mark the Shared box.

For unshared storage, ignore the Dedicated/Shared boxes.

LUN Details Worksheet

Use the LUN details worksheet to plan the individual LUNs. Blank and sample completed LUN worksheets follow.

Complete as many blank worksheets as needed for all LUNs in storage systems. For unshared storage, skip the Storage Group entries.


LUN Details Worksheet

Storage system (complete this section once for each storage system):

• Storage-system number or name
• Storage-system installation type: Unshared Direct, Shared-or-Clustered Direct, or Shared Switched
• SP information: for SP A and for SP B, the IP address or hostname, Port ALPA ID, and memory (Mbytes)
• Caching: read cache size (MB), write cache size (MB), and cache page size (Kbytes); or RAID 3

LUN entries (complete one for each LUN or hot spare):

• LUN ID; SP owner (A or B); SP bus (0 or 1)
• RAID Group ID; RAID Group size (GB); LUN size (GB); disk IDs
• RAID type: RAID 5, RAID 1 mirrored pair, RAID 1/0, RAID 0, RAID 3 (with memory, MB), individual disk, or hot spare
• Caching: read and write, write, read, or none
• Servers that can access this LUN's Storage Group
• Operating system information: device name; file system, partition, or drive


A sample completed LUN Details Worksheet, for storage system SS1, follows.

Storage system:
• Storage-system number or name: SS1
• Storage-system installation type: Unshared Direct
• SP information: SP A: hostname SS1spa, Port ALPA ID 0, 256 Mbytes of memory; SP B: hostname SS1spb, Port ALPA ID 1, 256 Mbytes of memory
• Caching: read cache size 80 MB, write cache size 160 MB, cache page size 2 Kbytes

LUN ID 0: SP bus 0; RAID Group ID 0, size 18 GB, LUN size 18 GB, disk IDs 000, 001; RAID type RAID 1 mirrored pair; servers that can access this LUN's Storage Group: Server1; file system, partition, or drive: V

LUN ID 1: RAID Group ID 1, size 18 GB, LUN size 18 GB, disk IDs 002, 003; RAID type RAID 1 mirrored pair; servers that can access this LUN's Storage Group: Server1; file system, partition, or drive: T

LUN ID 2: SP bus 1; RAID Group ID 2, size 72 GB, LUN size 72 GB, disk IDs 104, 105, 106, 107, 108, 109; RAID type RAID 5; servers that can access this LUN's Storage Group: Server1; file system, partition, or drive: U


Completing the LUN Details Worksheet

Complete the header portion of the worksheet for each storage system as described next. Copy the blank worksheet as needed.

Storage-System Entries

Storage-system installation type. Specify Unshared Direct, Shared-or-Clustered Direct, or Shared Switched.

SP information: IP address or hostname. The IP address is required for communication with the SP. You don’t need to complete it now, but you will need it when the storage system is installed so that you can set up communication with the SP.

Port ALPA ID. This must be unique for each SP in a storage system. The SP Port ALPA ID, like the IP address, is generally set at installation. One easy way to do this is to set the SP A port to ALPA ID 0 and the SP B port to ALPA ID 1.

Memory (Mbytes). Each SP can have 256 or 512 Mbytes of memory.

Caching. You can use SP memory for read/write caching or RAID 3. (Using both caching and RAID 3 in the same storage system is not recommended.) You can use different cache settings for different times of day. For example, for user I/O during the day, use more write cache; for sequential batch jobs at night, use more read cache. You enable caching for specific LUNs, allowing you to tailor your cache resources according to priority.

If you choose caching, check the box and continue to the next cache item; for RAID 3, skip to the LUN ID entries.

Read cache size. If you want a read cache, it should generally be about one third of the total available cache memory.

Write cache size. The write cache should be two thirds of the total available. Some memory is required for system overhead, so you cannot determine a precise figure at this time. For example, for 256 Mbytes of total memory, you might have 240 Mbytes available, and you would specify 80 Mbytes for the read cache and 160 Mbytes for the write cache.
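The one-third/two-thirds guideline can be written as a small calculation (a sketch of the rule of thumb above, not a tool supplied with the system; the 240-Mbyte figure comes from the example in the text):

```python
def suggested_cache_split(available_mbytes):
    """Apply the guideline: roughly one third of the available cache memory
    for the read cache and two thirds for the write cache."""
    read_cache = available_mbytes // 3
    write_cache = available_mbytes - read_cache
    return read_cache, write_cache


# 256 Mbytes of SP memory less system overhead leaves about 240 Mbytes.
print(suggested_cache_split(240))   # (80, 160), matching the example above
```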

Cache page size. This applies to both read and write caches. It can be 2, 4, 8, or 16 Kbytes. As a general guideline, we suggest 8 Kbytes. The ideal cache page size depends on the operating system and application.


RAID 3. If you want to use the SP memory for RAID 3, check the box.

RAID Group/LUN Entries

Complete a RAID Group/LUN entry for each LUN and hot spare.

LUN ID. The LUN ID is a hexadecimal number assigned when you bind the disks into a LUN. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its Storage Group or RAID Group.

The maximum number of LUNs supported on one host-bus adapter depends on the operating system.

SP owner. Specify the SP that will own the LUN: SP A or SP B. You can let the management program automatically select the SP to balance the workload between SPs; to do so, leave this entry blank.

SP bus (0 or 1). Each FC4700 SP has two back-end buses, 0 and 1. Ideally, you will place the same amount of load on each bus. This may mean placing two or three heavily used LUNs on one bus, and six or eight lightly used LUNs on the other bus. The bus designation appears in the disk ID (form bus-enclosure-disk). For disks on bus 0, you can omit the bus designation from the disk ID; that is, 0-1-3 and 1-3 both indicate the disk on bus 0, in enclosure 1, in the third position (fourth from left) in the storage system.

RAID Group ID. This ID is a hexadecimal number assigned when you create the RAID Group. By default, the number of the first RAID Group in a storage system is 0, the second 1, and so on, up to the maximum of 1F (31).

Size (RAID Group size). Enter the user-available capacity in gigabytes (Gbytes) of the whole RAID Group. You can determine the capacity as follows (a small calculation sketch appears after the examples below):

RAID 5 or RAID 3 Group: disk-size * (number-of-disks - 1)
RAID 1/0 or RAID 1 Group: (disk-size * number-of-disks) / 2
RAID 0 Group: disk-size * number-of-disks
Individual unit: disk-size


For example,

• A five-disk RAID 5 or RAID 3 Group of 18-Gbyte disks holds 72 Gbytes;

• An eight-disk RAID 1/0 Group of 18-Gbyte disks also holds 72 Gbytes;

• A RAID 1 mirrored pair of 18-Gbyte disks holds 18 Gbytes; and

• An individual unit on an 18-Gbyte disk also holds 18 Gbytes.

Each disk in the RAID Group must have the same capacity; otherwise, you will waste disk storage space.
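As referenced above, the capacity formulas can be collected into one small function (a sketch of the arithmetic in this section; the RAID-type labels are plain strings chosen for the example, not values used by any EMC tool):

```python
def raid_group_capacity(raid_type, disk_size_gb, num_disks):
    """User-available capacity of a RAID Group, per the formulas above."""
    if raid_type in ("RAID 5", "RAID 3"):
        return disk_size_gb * (num_disks - 1)    # one disk's worth of capacity goes to parity
    if raid_type in ("RAID 1", "RAID 1/0"):
        return disk_size_gb * num_disks // 2     # everything is mirrored
    if raid_type == "RAID 0":
        return disk_size_gb * num_disks          # striping only, no redundancy
    if raid_type == "individual":
        return disk_size_gb
    raise ValueError(f"unknown RAID type: {raid_type}")


assert raid_group_capacity("RAID 5", 18, 5) == 72     # five-disk RAID 5 Group of 18-Gbyte disks
assert raid_group_capacity("RAID 1/0", 18, 8) == 72   # eight-disk RAID 1/0 Group
assert raid_group_capacity("RAID 1", 18, 2) == 18     # RAID 1 mirrored pair
```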

LUN size. Enter the user-available capacity in gigabytes (Gbytes) of the LUN. You can make this the same size as the RAID Group, described previously. Or, for a RAID 5, RAID 1, RAID 1/0, or RAID 0 Group, you can make the LUN smaller than the RAID Group. You might do this if you wanted a RAID 5 Group with a large capacity and wanted to place many smaller capacity LUNs on it; for example, to specify a LUN for each user. However, having multiple LUNs per RAID Group may adversely impact performance. If you want multiple LUNs per RAID Group, then use a RAID Group/LUN series of entries for each LUN.

Disk IDs. Enter the IDs of all disks that will make up the LUN or hot spare. These are the same disk IDs you specified on the previous worksheet. For example, for a RAID 5 Group in the DPE (enclosure 0, disks 2 through 6), enter 003, 004, 005, 006, and 007.

RAID type. Copy the RAID type from the previous worksheet; for example, RAID 5 or hot spare. For a hot spare (not strictly speaking a LUN at all), skip the rest of this LUN entry and continue to the next LUN entry (if any). If this is a RAID 3 Group, specify the amount of SP memory for that group. To work efficiently, each RAID 3 Group needs at least 6 Mbytes of memory.

Caching. If you want to use caching (see the Caching entry earlier in this worksheet), you can specify the type of caching you want for this LUN: read and write, read, or write. Generally, write caching improves performance far more than read caching. The ability to specify caching on a LUN basis provides additional flexibility, since you can use caching for only the units that will benefit from it. Read and write caching recommendations follow.


Table 5-1 Cache Recommendations for Different RAID Types

RAID 5: Highly recommended
RAID 3: Not allowed
RAID 1: Recommended
RAID 1/0: Recommended
RAID 0: Recommended
Individual unit: Recommended

Servers that can access this LUN's Storage Group. For shared switched storage or shared-or-clustered direct storage, enter the name of each server (copied from the LUN and Storage Group worksheet). For unshared direct storage, this entry does not apply.

Operating system information: Device name. Enter the operating system device name, if this is important and if you know it. Depending on your operating system, you may not be able to complete this field now.

File system, partition, or drive. Write the name of the file system, partition, or drive letter you will create on this LUN. This is the same name you wrote on the application worksheet.

On the following line, write any pertinent notes; for example, the file system mount- or graft-point directory pathname (from the root directory). If any of this storage system's LUNs will be shared with another server, and the other server is the primary owner of this LUN, write secondary. (As mentioned earlier, if the storage system will be used by two servers, we suggest you complete one of these worksheets for each server.)

What Next?

This chapter outlined the LUN planning tasks for storage systems. If you have completed the worksheets to your satisfaction, you are ready to learn about the hardware needed for these systems, as explained in Chapter 6.

