Managing EMC Celerra Volumes
and File Systems with
Automatic Volume Management
P/N 300-004-148
Rev A06
Version 5.6.45
June 2009
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
System requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Cautions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
User interface choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Related information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
System-defined storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
System-defined virtual storage pools. . . . . . . . . . . . . . . . . . . . . . . . .15
User-defined storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
File system and automatic file system extension . . . . . . . . . . . . . . .15
AVM and automatic file system extension options . . . . . . . . . . . . . .16
Storage pool attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
System-defined storage pool volume and storage profiles . . . . . . .24
File system and storage pool relationship . . . . . . . . . . . . . . . . . . . . .30
Automatic file system extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
Virtual Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
Planning considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
Configuring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Configure disk volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Create file systems with AVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
Extend file systems with AVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Create file system checkpoints with AVM . . . . . . . . . . . . . . . . . . . . .69
Managing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70
List existing storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70
Display storage pool details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71
Display storage pool size information . . . . . . . . . . . . . . . . . . . . . . . .71
Modify system-defined and user-defined storage pool attributes . .74
Extend a user-defined storage pool . . . . . . . . . . . . . . . . . . . . . . . . . .80
Extend a system-defined storage pool . . . . . . . . . . . . . . . . . . . . . . . .81
Remove volumes from storage pools . . . . . . . . . . . . . . . . . . . . . . . .82
Delete user-defined storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . .83
Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
Where to get help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
EMC E-Lab Interoperability Navigator . . . . . . . . . . . . . . . . . . . . . . . . 86
Known problems and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Error messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
EMC Training and Professional Services . . . . . . . . . . . . . . . . . . . . . 87
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Introduction
Automatic Volume Management (AVM) is an EMC® Celerra® Network Server
feature that automates volume creation and management. By using the Celerra
command options and interfaces that support AVM, system administrators can
create and expand file systems without creating and managing the underlying
volumes.
The Celerra automatic file system extension feature automatically extends file
systems created with AVM when the file systems reach their specified high water
mark (HWM). Virtual Provisioning™, also known as thin provisioning, works with
automatic file system extension and allows the file system to grow on demand. With
Virtual Provisioning, the space presented to the user or application is the maximum
size setting, while only a portion of that space is actually allocated to the file system.
This document is part of the Celerra Network Server documentation set and is
intended for system administrators responsible for creating and managing Celerra
volumes and file systems by using AVM.
System requirements
Table 1 on page 3 describes the Celerra Network Server software, hardware,
network, and storage configurations.
Table 1. System requirements
Software: Celerra Network Server version 5.6.45
Hardware: No specific hardware requirements
Network: No specific network requirements
Storage: Any Celerra-qualified storage system
Restrictions
This section describes the restrictions that apply to AVM, automatic file system extension, Virtual Provisioning, and CLARiiON® storage.
AVM restrictions
The restrictions applicable to AVM are:
◆ Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or another compatible storage pool. Do not extend a file system across storage systems unless it is absolutely necessary.
◆ File systems might reside on multiple disk volumes. Ensure that all disk volumes used by a file system reside on the same storage system, for both file system creation and extension, to protect against storage-system and data unavailability.
◆ RAID 3 is supported only with EMC CLARiiON Advanced Technology-Attached (ATA) storage.
◆ When building volumes on a Celerra Network Server attached to an EMC Symmetrix® storage system, use standard Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.
◆ Use AVM to create the primary EMC TimeFinder®/FS (NearCopy or FarCopy) file system only if the storage pool attributes indicate that no sliced volumes are used in that storage pool. AVM does not support Business Continuance Volumes (BCVs) in a storage pool with other disk types.
◆ AVM storage pools must contain only one disk type; disk types cannot be mixed. Table 4 on page 17 provides a complete list of disk types. Table 5 on page 18 lists the storage pools and describes the associated disk types.
Automatic file system extension restrictions
The restrictions applicable to automatic file system extension are:
◆ Automatic file system extension does not work on MGFS, the EMC file system type used while migrating data from either CIFS or NFS to the Celerra Network Server by using CDMS.
◆ Automatic file system extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on a file system only if it was created or extended by using an AVM storage pool.
◆ Automatic file system extension is not supported on file systems used with TimeFinder NearCopy or FarCopy.
◆ While automatic file system extension is running, the Control Station blocks all other commands that apply to that file system. When the extension is complete, the Control Station allows the commands to run.
◆ The Control Station must be running and operating properly for automatic file system extension, or any other Celerra feature, to work correctly.
◆ Automatic file system extension cannot be used for any file system that is part of a Remote Data Facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF®.
◆ The options associated with automatic file system extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount it as read/write before modifying the automatic file system extension, HWM, or maximum size options.
◆ Enabling automatic file system extension and Virtual Provisioning does not automatically reserve space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, automatic file system extension extends the file system to use all the available storage. For example, if automatic file system extension requires 6 GB but only 3 GB is available, the file system automatically extends to use the 3 GB that is available. Although the file system was partially extended, an error message indicates that there was not enough storage space available to perform the automatic extension. When there is no available storage, automatic file system extension fails, and you must manually extend the file system to recover.
◆ Automatic file system extension is supported with EMC Celerra Replicator™. Enable automatic file system extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic file system extension on the destination file system.
◆ You cannot create iSCSI dense LUNs on file systems with automatic file system extension enabled, and you cannot enable automatic file system extension on a file system if a storage mode iSCSI LUN is present on it. You will receive the error: Error 2216: <fs_name>: item is currently in use by iSCSI. However, iSCSI virtually provisioned LUNs are supported on file systems with automatic file system extension enabled.
◆ Automatic file system extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).
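As a sketch of how these restrictions come together in practice, the following commands enable and later disable automatic extension on an eligible file system. This uses the nas_fs -modify syntax for this release; the file system name ufs1 and the thresholds are hypothetical, and the file system must have been created from an AVM storage pool and be mounted read/write.

```shell
# Enable automatic extension with a 90% high water mark and a 100 GB
# maximum size (ufs1 is a hypothetical AVM-backed file system).
nas_fs -modify ufs1 -auto_extend yes -hwm 90% -max_size 100G

# Disable automatic extension again; the HWM and maximum size settings
# are ignored while the feature is off.
nas_fs -modify ufs1 -auto_extend no
```

If ufs1 were mounted read-only, both commands would fail until the file system is remounted read/write.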
Virtual Provisioning restrictions
◆ Celerra supports Virtual Provisioning on Symmetrix DMX-4 and CLARiiON CX4 disk volumes.
◆ The options associated with Virtual Provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount it as read/write before modifying the Virtual Provisioning, HWM, or maximum size options.
◆ Celerra virtually provisioned objects (either iSCSI LUNs or file systems) should not be used with Symmetrix or CLARiiON virtually provisioned devices. A single file system should not span virtual and standard Symmetrix or CLARiiON volumes.
◆ Virtual Provisioning is supported with EMC Celerra Replicator. Enable Virtual Provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable Virtual Provisioning on the destination file system.
◆ With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size of the source file system. "Interoperability considerations" on page 35 provides more information on using automatic file system extension with Celerra Replicator.
◆ Virtual Provisioning is supported on the primary file system, but not with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure™ checkpoint file system.
◆ If a file system is created by using a virtual storage pool, the -vp option of the nas_fs command cannot be enabled, because Celerra Virtual Provisioning and CLARiiON Virtual Provisioning cannot coexist on a file system.
◆ Closely monitor Symmetrix Thin Pool space that contains virtually provisioned devices. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage.
CLARiiON restrictions
◆ EMC does not recommend creating system RAID group and control LUNs on CLARiiON virtual (thin) pools and virtual LUNs.
◆ CLARiiON virtual pools support only RAID 5 and RAID 6:
  • RAID 5 is the default, with a minimum of 3 drives (2+1). EMC recommends using multiples of 5 drives.
  • RAID 6 has a minimum of 4 drives (2+2). EMC recommends using multiples of 8 drives.
◆ CLARiiON virtual pools do not support SSD drives.
◆ Navisphere® Manager is required to provision virtual devices on the CLARiiON. Platforms that do not provide Navisphere access cannot use this feature.
◆ Closely monitor CLARiiON Thin Pool space that contains virtually provisioned devices. Use the command nas_pool -size <AVM virtual pool name> and look for the physical usage information. An alert is generated when a CLARiiON Thin Pool runs out of space.
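The two monitoring commands mentioned above can be run periodically from the Control Station. A minimal sketch, assuming Solutions Enabler is installed for the Symmetrix check; the pool name clarata_virtual is hypothetical, and the exact output columns vary by release.

```shell
# Symmetrix: display allocation for all thin pools so virtually
# provisioned devices never run the pool out of space.
/usr/symcli/bin/symcfg list -pool -thin -all

# CLARiiON: display size and physical usage for an AVM virtual pool
# (clarata_virtual is a hypothetical pool name); check the physical
# usage figures in the output.
nas_pool -size clarata_virtual
```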
Cautions
If any of this information is unclear, contact your EMC Customer Support
Representative for assistance:
◆ All parts of a file system must use the same type of disk storage and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss, data unavailability, or both.
◆ If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Turning on quotas later, when the file system is in use, can cause temporary file system disruption, including slow file system access. Using Quotas on EMC Celerra contains instructions on turning on quotas and general quotas information.
◆ If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the Celerra Network Server to support this feature before creating file systems. Using International Character Sets with EMC Celerra contains instructions to support and configure international character support on a Celerra Network Server.
◆ If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the Celerra Network Server. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.
◆ Automatic file system extension is interrupted during Celerra software upgrades. If automatic file system extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the Celerra upgrade process completes.
◆ Insufficient space on a Symmetrix Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the Thin Pool. If you do not use 100 percent pre-allocation, overallocation is possible, and you must closely monitor the pool usage.
◆ Insufficient space on a CLARiiON Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a CLARiiON Thin Pool, so you must closely monitor the thin pool usage to avoid running out of space.
User interface choices
The Celerra Network Server offers flexibility in managing networked storage that is
based on your support environment and interface preferences. This document
describes how to use AVM by using the command line interface (CLI). You can also
perform many of these tasks by using one of the Celerra management applications:
◆ Celerra Manager — Basic Edition
◆ Celerra Manager — Advanced Edition
◆ Celerra Monitor
◆ Microsoft Management Console (MMC) snap-ins
◆ Active Directory Users and Computers (ADUC) extensions
For additional information about managing your Celerra:
◆ Learning about EMC Celerra on the EMC Celerra Network Server Documentation CD
◆ Celerra Manager online help
◆ Application’s online help system on the EMC Celerra Network Server Documentation CD
Installing EMC Celerra Management Applications includes instructions on
launching Celerra Manager, and on installing the MMC snap-ins and the ADUC
extensions.
Table 2 on page 8 identifies the storage pool tasks you can perform in each interface, and the command syntax or the Celerra Manager path to use to perform each task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The EMC Celerra Network Server Command Reference Manual contains information on the commands described in Table 2 on page 8.
Table 2. Storage pool tasks supported by each interface

Task: Create a new user-defined storage pool.
CLI: nas_pool -create <name> -volumes <volumes>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and click New.
Note: Applies only to user-defined storage pools.

Task: List existing storage pools.
CLI: nas_pool -list
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools.

Task: Display storage pool details.
CLI: nas_pool -info <name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and double-click the storage pool name.
Note: When you perform this operation in the CLI, the total_potential_mb in the output does not include the space in the storage pool. When you perform it from Celerra Manager, the total_potential_mb represents the total available storage, including the storage pool.

Task: Display storage pool size information.
CLI: nas_pool -size <name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and view the Storage Capacity and Storage Used (%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or expand a file system.
CLI: nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space.
CLI: nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Automatic Extension Enabled as required.
Note: Applies only to system-defined storage pools.

Task: Specify whether AVM obtains new, unused disk volumes when creating or expanding a file system. Specifying y tells AVM to allocate new, unused disk volumes to the storage pool when creating or expanding, even if there is available space in the pool. Specifying n tells AVM to allocate all available storage pool space to create or expand a file system before adding volumes to the pool.
CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Obtain Unused Disk Volumes as required.
Note: Applies only to system-defined storage pools.

Task: Add volumes to a user-defined storage pool.
CLI: nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to extend, click Extend, and select one or more volumes to add to the pool.
Note: Applies only to user-defined storage pools.

Task: Extend a system-defined storage pool by size and specify a storage system from which to allocate storage.
CLI: nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G] -storage <system_name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB or GB.
Note: Applies only to system-defined storage pools, and only when the is_dynamic attribute for the storage pool is set to n. In Celerra Manager, the drop-down list shows all the available storage systems, and the volumes shown are only those created on the highlighted storage system.

Task: Remove volumes from a storage pool.
CLI: nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to shrink, click Shrink, and select one or more volumes, not in use, to be removed from the pool.

Task: Delete a storage pool.
CLI: nas_pool -delete {<name>|id=<id>}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to delete, and click Delete.
Note: Applies only to user-defined storage pools.

Task: Change the name of a storage pool.
CLI: nas_pool -modify {<name>|id=<id>} -name <name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and type the new name in the Name text box.
Note: Applies only to user-defined storage pools.

Task: Create a file system with automatic file system extension enabled.
CLI: nas_fs -name <name> -type <type> -create pool=<pool_name> storage=<system_name> {size=<integer>[T|G|M]} -auto_extend {no|yes}
Celerra Manager: Select Celerras > File Systems > New, and select Automatic Extension Enabled.
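The CLI commands in Table 2 can be strung together into a small end-to-end session for a user-defined pool. This is a sketch only: the pool name mypool, the volume names d7 and d8, and the file system name ufs1 are hypothetical, and the storage and -type arguments shown in the table syntax are omitted here for brevity.

```shell
# Create a user-defined storage pool from two existing disk volumes.
nas_pool -create mypool -volumes d7,d8

# Confirm the pool exists and inspect its details and size.
nas_pool -list
nas_pool -info mypool
nas_pool -size mypool

# Create a file system from the pool with automatic extension enabled.
nas_fs -name ufs1 -create pool=mypool size=10G -auto_extend yes

# Later: remove an unused volume from the pool, then delete the pool
# (deletion succeeds only when no file systems are using the pool).
nas_pool -shrink mypool -volumes d8
nas_pool -delete mypool
```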
Terminology
The EMC Celerra Glossary provides a complete list of Celerra terminology.
automatic file system extension: Configurable Celerra file system feature that
automatically extends a file system created or extended with AVM when the high
water mark (HWM) is reached. See also high water mark.
Automatic Volume Management (AVM): Feature of the Celerra Network Server that
creates and manages volumes automatically without manual volume management
by an administrator. AVM organizes volumes into storage pools that can be
allocated to file systems.
Celerra Data Migration Service (CDMS): Feature for migrating file systems from NFS
and CIFS source file servers to a Celerra Network Server. The online migration is
transparent to users once it starts.
disk volume: On Celerra systems, a physical storage unit as exported from the
storage array. All other volume types are created from disk volumes. See also
metavolume, slice volume, stripe volume, and volume.
file system: Method of cataloging and managing the files and directories on a
storage system.
high water mark (HWM): Trigger point at which the Celerra Network Server performs
one or more actions, such as sending a warning message, extending a volume, or
updating a file system, as directed by the related feature's software/parameter
settings.
logical unit number (LUN): Identifying number of a SCSI or iSCSI object that
processes SCSI commands. The LUN is the last part of the SCSI address for a
SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer
to the logical unit itself.
metavolume: On a Celerra system, a concatenation of volumes, which can consist
of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file
system must be created on top of a unique metavolume. See also disk volume,
slice volume, stripe volume, and volume.
slice volume: On a Celerra system, a logical piece or specified area of a volume
used to create smaller, more manageable units of storage. See also disk volume,
metavolume, stripe volume, and volume.
storage pool: Grouping of available disk volumes, organized by AVM, that is used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.
storage system: Array of physical disk devices and their supporting processors,
power supplies, and cables.
stripe volume: Arrangement of volumes that appear as a single volume. Allows for
stripe units that cut across the volume and are addressed in an interlaced manner.
Stripe volumes make load balancing possible. See also disk volume, metavolume,
slice volume, and volume.
thin LUN: A LUN whose storage capacity grows by using a shared virtual (thin) pool
of storage when needed.
thin pool: A user-defined CLARiiON storage pool that contains a set of disks on
which thin LUNs can be created.
Universal Extended File System (UxFS): High-performance, Celerra Network Server
default file system, based on traditional Berkeley UFS, enhanced with 64-bit
support, metadata logging for high availability, and several performance
enhancements.
Virtual Provisioning: Configurable Celerra file system feature that lets you allocate
storage based on your longer term projections, while you dedicate only the file
system resources you currently need. Users — NFS or CIFS clients and
applications — see the virtual maximum size of the file system of which only a
portion is physically allocated. In addition, combining the automatic file system
extension and Virtual Provisioning features lets you grow the file system gradually
on an as-needed basis.
volume: On a Celerra system, a virtual disk into which a file system, database
management system, or other application places data. A volume can be a single
disk partition or multiple partitions on one or more physical drives. See also disk
volume, metavolume, slice volume, and stripe volume.
Related information
Specific information related to the features and functionality described in this document is included in:
◆ EMC Celerra Network Server Command Reference Manual
◆ Online Celerra man pages
◆ EMC Celerra Network Server Parameters Guide
◆ Configuring NDMP Backups to Disk on EMC Celerra
◆ Controlling Access to EMC Celerra System Objects
◆ Managing EMC Celerra Volumes and File Systems Manually
The EMC Celerra Network Server Documentation CD, supplied with Celerra and
also available on the EMC Powerlink® website, provides the complete set of EMC
Celerra customer publications. After logging in to Powerlink, go to Support >
Technical Documentation and Advisories > Hardware/Platforms
Documentation > Celerra Network Server. On this page, click Add to Favorites.
The Favorites section on your Powerlink home page provides a link that takes you
directly to this page.
Celerra Support Demos are available on Powerlink. Use these instructional videos
to learn how to perform a variety of Celerra configuration and management tasks.
After logging in to Powerlink, go to Support > Product and Diagnostic Tools >
Celerra Tools > Celerra Support Demos.
Concepts
The AVM feature automatically creates and manages file system storage. AVM is
storage-system independent and supports existing requirements for automatic
storage allocation (SnapSure, SRDF, and IP replication).
You can configure file systems created with AVM to automatically extend. The
automatic file system extension feature allows you to configure a file system to
extend automatically, without system administrator intervention, to support file
system operations. Automatic file system extension causes the file system to
extend when it reaches the specified usage point, the HWM. You set the size for the
file system you create, and also the maximum size to which you want the file
system to grow. The Virtual Provisioning option lets you present the maximum size
of the file system to the user or application, of which only a portion is actually
allocated. Virtual Provisioning allows the file system to slowly grow on demand as
the data is written.
Note: Enabling Virtual Provisioning with automatic file system extension does not
automatically reserve the space from the storage pool for that file system. Administrators
must ensure that adequate storage space exists, so that the automatic extension operation
can succeed. If the available storage is less than the maximum size setting, then automatic
extension fails. Users receive an error message when the file system becomes full, even
though it appears that there is free storage space in the file system.
To create file systems, use one or more types of AVM storage pools:
◆ System-defined storage pools
◆ System-defined virtual storage pools
◆ User-defined storage pools
System-defined storage pools
System-defined storage pools are predefined and available with the Celerra
Network Server. You cannot create or delete these predefined storage pools
because they are set up to make managing volumes and file systems easier than
manually managing them. You can modify some of the attributes of the system-defined storage pools, but doing so is usually unnecessary.
AVM system-defined storage pools do not preclude the use of user-defined storage
pools or manual volume and file system management, but instead give system
administrators a simple volume and file system management tool. With Celerra
command options and interfaces that support AVM, you can use system-defined
storage pools to create and expand file systems without manually creating and
managing stripe volumes, slice volumes, or metavolumes. If your applications do
not require precise placement of file systems on particular disks or on particular
locations on specific disks, using AVM is an easy way for you to create file systems.
AVM system-defined storage pools are adequate for most high availability and
performance considerations. Each system-defined storage pool manages the
details of allocating storage to file systems. When you create a file system by using
AVM system-defined storage pools, storage is automatically allocated from the pool
to the new file system. After the storage is allocated to that pool, the storage pool
can dynamically grow and shrink to meet the file system needs.
System-defined virtual storage pools
System-defined virtual storage pools are automatically created during the normal
storage discovery (diskmark) process. A system-defined virtual storage pool
contains a set of disks on which thin LUNs can be created for use by the Virtual
Provisioning capability. When the last virtual disk volume from a specific virtual
CLARiiON storage pool is deleted, the system-defined virtual AVM storage pool and
its profiles are automatically removed.
User-defined storage pools
User-defined storage pools allow you to create containers or pools of storage, filled
with manually created volumes. When the applications require precise placement of
file systems on particular disks or locations on specific disks, consider using AVM
user-defined storage pools for more control. User-defined storage pools also allow
you to reserve disk volumes so that the system-defined storage pools cannot use
them.
User-defined storage pools are a better option for those who want more control
over their storage allocation while still using an automated management tool.
They are not as automated as the system-defined storage pools: to create file
systems, you must specify some attributes of the storage pool and the storage
system from which the space is allocated. Although somewhat less involved than
creating volumes and file systems manually, using these storage pools requires
more manual involvement on your part than the system-defined storage pools.
When you create a file system by using a user-defined storage pool, you must
create the storage pool, choose and add the volumes to it, expand it with new
volumes when required, and remove volumes you no longer need from the
storage pool.
File system and automatic file system extension
You can create or extend file systems with AVM storage pools and configure the file
system to automatically extend as needed. You can enable automatic file system
extension on a file system when it is created, or you can enable and disable it at
any later time by modifying the file system. The options that work with automatic file
system extension are:
◆ HWM
◆ Maximum size
◆ Virtual Provisioning
The HWM is the point at which the file system must be extended to meet the usage
demand. The default HWM is 90 percent.
The supported maximum size for any file system is 16 TB.
With automatic file system extension, the maximum size is the size to which the file
system could grow, up to the supported 16 TB. Setting the maximum size is
optional with automatic file system extension, but mandatory with Virtual
Provisioning. With Virtual Provisioning enabled, users and applications see the
maximum size, while only a portion of that size is actually allocated to the file
system.
Automatic file system extension allows the file system to grow as needed without
system administrator intervention, making it easier to meet system operations
requirements continuously, without interruptions.
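The interaction of the HWM and the maximum size can be sketched as follows. This is an illustrative model only, not Celerra code: the 90 percent default HWM and the 16 TB cap come from the text above, while the function, the variable names, and the 10 percent extension increment are invented for the example.

```python
# Illustrative sketch (not Celerra code) of the automatic file system
# extension decision: extend when usage crosses the HWM, but never
# beyond the configured maximum size (capped at the 16 TB limit).

TB = 1024 ** 4
GB = 1024 ** 3
SUPPORTED_MAX = 16 * TB  # largest supported file system size

def next_size(current_size, used, max_size, hwm=0.90, step=None):
    """Return the new file system size, or current_size unchanged if no
    extension is needed. hwm defaults to 90 percent, as in the text."""
    max_size = min(max_size, SUPPORTED_MAX)
    if used < hwm * current_size:
        return current_size              # still below the high water mark
    if current_size >= max_size:
        return current_size              # already at the maximum size
    step = step or current_size // 10    # hypothetical extension increment
    return min(current_size + step, max_size)

# A 100 GB file system at 95 GB used crosses the 90% HWM and grows.
print(next_size(100 * GB, 95 * GB, 200 * GB) > 100 * GB)  # True
```

With Virtual Provisioning, users would see `max_size` while only `current_size` is actually allocated; the sketch models only the extension decision itself.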
AVM and automatic file system extension options
AVM provides a range of options for configuring your storage. The Celerra Network
Server can choose the configuration and placement of the file systems by using
system-defined storage pools, or you can create a user-defined storage pool and
define its attributes.
AVM storage pools
An AVM storage pool is a container or pool of volumes. Table 3 on page 16 lists the
major difference between system-defined and user-defined storage pools.

Table 3 System-defined and user-defined storage pool difference

Ability to grow and shrink:
◆ System-defined storage pools: Automatic, but the dynamic behavior can be disabled.
◆ User-defined storage pools: Manual only. Administrators must manage the volume configuration and the addition and removal of storage from these storage pools.

"Managing" on page 70 provides more detailed information.
Disk types
A storage pool must contain volumes from only one disk type.
Table 4 on page 17 lists the available disk types associated with the storage pools
and the disk type descriptions.
Table 4 Disk types

CLSTD: Standard CLARiiON disk volumes.
CLATA: CLARiiON Advanced Technology-Attached (ATA) disk volumes.
CLSAS: CLARiiON Serial Attached SCSI (SAS) disk volumes.
CLSSD: CLARiiON Fibre Channel Solid State Drive (FC SSD) disk volumes.
STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.
R1STD: Symmetrix Fibre Channel (FC) disk volumes, set up as the source for mirrored storage that uses SRDF functionality.
R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.
SSD: High-performance Symmetrix disk volumes built on solid state drives, typically RAID 5 configuration.
ATA: Standard Symmetrix disk volumes built on SATA drives, typically RAID 1 configuration.
R1ATA: Symmetrix SATA disk volumes, set up as the source for mirrored storage that uses SRDF functionality.
R2ATA: Symmetrix SATA disk volumes, set up as the target for mirrored storage that uses SRDF functionality.
CMATA: CLARiiON Advanced Technology-Attached (ATA) disk volumes for use with MirrorView®/S. The selection box lists the size of free disk volumes and their RAID protection information.
CMSTD: Standard CLARiiON disk volumes for use with MirrorView/S. The selection box lists the size of free disk volumes and their RAID protection information.
BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.
BCVA: BCV built from SATA disks for use by TimeFinder/FS operations.
R1BCA: BCV built from SATA disks that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.
R2BCA: BCV built from SATA disks that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.
R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.
R2BCV: BCV that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.
System-defined storage pools
Choosing system-defined storage pools to build the file system is the easiest way to
manage volumes and file systems. They are associated with the type of attached
storage system you have. If you have a CLARiiON storage system attached, the
CLARiiON storage pools are available to you through the Celerra Network Server. If
you have a Symmetrix storage system attached, the Symmetrix storage pools are
available to you through the Celerra Network Server.
System-defined storage pools are dynamic by default. The AVM feature adds and
removes volumes automatically from the storage pool as needed. Table 5 on
page 18 lists the system-defined storage pools supported on the Celerra Network
Server. Table 6 on page 21 contains additional information about RAID group
combinations for system-defined storage pools.
Note: A storage pool can include disk volumes of only one type.
Table 5 System-defined storage pools

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).
symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).
symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote Celerra Network Server that uses SRDF, or to a local Celerra Network Server that uses TimeFinder/FS. Using SRDF/S with EMC Celerra for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy with EMC Celerra provide more information about the SRDF feature.
symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ssd: Designed for very high performance and availability at high cost. This storage pool uses SSD disk volumes (typically RAID 5).
clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.
clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.
clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.
clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.
clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.
clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.
clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.
clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CLATA disk volumes in a RAID 1/0 configuration.
clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 5 disk groups.
clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.
clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON Serial Attached SCSI (SAS) disk volumes in a RAID 1/0 configuration.
clarssd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLSSD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.
cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.
cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLARiiON Advanced Technology-Attached (ATA) CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.
cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.
cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.
cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmssd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMSSD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
RAID groups and storage characteristics
Table 6 on page 21 correlates the storage array to the RAID groups for
system-defined storage pools.

Table 6 RAID group combinations

NX4 SAS or SATA:
  RAID 5: 2+1, 3+1, 4+1, 5+1
  RAID 6: 4+2
  RAID 1: 1+1 RAID 1/0, 1+1 RAID 1

NS20 / NS40 / NS80 FC:
  RAID 5: 4+1, 8+1
  RAID 6: 4+2, 6+2, 12+2

NS20 / NS40 / NS80 ATA:
  RAID 5: 4+1, 6+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: Not supported

NS-120 / NS-480 FC:
  RAID 5: 4+1, 8+1
  RAID 6: 4+2, 6+2, 12+2
  RAID 1: 1+1 RAID 1/0

NS-120 / NS-480 ATA:
  RAID 5: 4+1, 6+1, 8+1
  RAID 6: 4+2
  RAID 1: 1+1 RAID 1/0
User-defined storage pools
For some customer environments, more user control is required than the
system-defined storage pools offer. One way for administrators to gain that control
is to create their own storage pools and define their attributes.
AVM user-defined storage pools give you more control over how storage is
allocated to file systems. Administrators can create a storage pool and choose the
volumes it contains, but must also manually manage the pool and its contents:
they must add volumes to, and remove volumes from, the storage pools they
create. Although user-defined storage pools have attributes similar to
system-defined storage pools, user-defined storage pools are not dynamic. They
require administrators to explicitly add and remove volumes manually.
If you define the storage pool, you must also explicitly add and remove storage from
the storage pool and define the attributes for that storage pool. Use the nas_pool
command to list, create, delete, extend, shrink, and view storage pools, and to
modify the attributes of storage pools. "Create file systems with AVM" on page 42
and "Managing" on page 70 provide more information.
Understanding how AVM storage pools work enables you to determine whether
system-defined storage pools or user-defined storage pools, or both, are
appropriate for the environment. It is also important to understand the ways in
which you can modify the storage-pool behavior to suit your file system
requirements. "Modify system-defined and user-defined storage pool attributes" on
page 74 provides a list of all the attributes and the procedures to modify them.
Storage pool attributes
System-defined and user-defined storage pools have attributes that control how
they create volumes and file systems. Table 8 on page 74 lists the storage pool
attributes, the type of entry, the value, whether the attribute is modifiable and for
which storage pools, and a description of each attribute.
The system-defined storage pools are shipped with the Celerra Network Server.
They are designed to optimize performance based on the hardware configuration.
Each of the system-defined storage pools has associated profiles that define the
kind of storage used, and how new storage is added to, or deleted from, the
storage pool.
The system-defined storage pools are designed for use with the Symmetrix and
CLARiiON storage systems. The structure of volumes created by AVM might differ
greatly depending on the type of storage system used by the various storage pools.
This difference allows AVM to exploit the architecture of current and future block
storage devices that are attached to the Celerra Network Server.
Figure 1 on page 23 shows how the different storage pools are associated with the
disk volumes for each storage-system type attached. The nas_disk -list command
lists the disk volumes. These are the Celerra Network Server’s representation of
the LUNs exported from the attached storage system.
Note: Any given disk volume must be a member of only one storage pool.
Figure 1 AVM system-defined storage pools (storage pools such as clar_r1,
clar_r5_performance, clar_r5_economy, cmata_r6, symm_std, and
symm_std_rdf_src draw their disk volumes from the attached CLARiiON and
Symmetrix storage systems)
System-defined storage pool volume and storage profiles
Volume profiles are the set of rules and parameters that define how new storage is
added to a system-defined storage pool. A volume profile defines a standard
method of building a large section of storage from a set of disk volumes. This large
section of storage can be added to a storage pool that might contain similar large
sections of storage. The system-defined storage pool is responsible to satisfy
requests for any amount of storage.
Users cannot create or delete system-defined storage pools and their associated
profiles. Users can list, view, and extend the system-defined storage pools, and
also modify storage pool attributes.
Volume profiles have an attribute named storage_profile. A volume profile’s storage
profile defines the rules and attributes that are used to aggregate some number of
disk volumes (listed by the nas_disk -list command) into a volume that can be
added to a system-defined storage pool. A volume profile uses its storage profile to
determine the set of disk volumes to select (or match existing Celerra disk
volumes), where a given disk volume might match the rules and attributes of a
storage profile.
"CLARiiON system-defined storage pool algorithms" on page 24, "CLARiiON
system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support" on
page 27, and "Symmetrix system-defined storage pools algorithm" on page 28
explain how these profiles help system-defined storage pools aggregate the disk
volumes into storage pool members, place the members into storage pools, and
then build file systems for each storage-system type. When you use the
system-defined storage pools without modifications, through Celerra Manager or
the command line interface (CLI), this activity is transparent to users.
CLARiiON system-defined storage pool algorithms
When you request a new file system that requires new storage, AVM attempts to
create the most optimal stripe volume for a CLARiiON storage system.
System-defined storage pools for CLARiiON storage systems work with LUNs of a
specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage
pool.
Integrated CLARiiON storage system models use CLARiiON storage templates to
create the LUNs that the Celerra Network Server recognizes as disk volumes.
CLARiiON storage templates are a combination of template definition files and
scripts (you see just the scripts) that create RAID groups and bind LUNs on
CLARiiON storage systems. These CLARiiON storage templates are invoked
through the CLARiiON setup script (root only) or through Celerra Manager. Celerra
NS600/NS600S/NS700/NS700S with Integrated Array Setup Guide contains more
information on using CLARiiON storage templates with Celerra.
Disk volumes exported from a CLARiiON storage system are relatively large and
might vary in size from approximately 18 GB to 136 GB, depending on physical disk
size. A CLARiiON system also has two storage processors (SPs). Most CLARiiON
storage templates create two LUNs per RAID group, one owned by SP A, and the
other by SP B. Only the CLARiiON RAID 3 storage templates create both LUNs
owned by one of the SPs.
If no disk volumes are found when a request for space is made, AVM considers the
storage pool attributes, and initiates the next step based on these settings:
◆ The is_greedy setting indicates whether the storage pool adds a new member volume to meet the request, or uses all the available space in the storage pool before adding a new member volume. AVM then checks the is_dynamic setting.
◆ The is_dynamic setting indicates whether the storage pool can dynamically grow and shrink. If set to yes, AVM can automatically add a member volume to meet the request. If set to no, and a member volume must be added to meet the request, the user must manually add the member volume to the storage pool.
◆ The file-system request slice flag indicates whether the file system can be built on a slice volume from a member volume.
◆ The default_slice_flag setting indicates whether AVM can slice storage pool member volumes to meet the request.
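The decision flow these settings drive can be sketched as follows. This is an illustrative model only, not Celerra code: the attribute names (is_greedy, is_dynamic, default_slice_flag) come from the text above, while the function and its return values are invented for the example.

```python
# Illustrative sketch (not Celerra code) of how the storage pool
# attributes described above steer a request for space.

def plan_request(free_in_pool, requested, is_greedy, is_dynamic,
                 default_slice_flag):
    """Return the action AVM would take for a space request."""
    # A greedy pool prefers adding a new member; a non-greedy pool
    # first uses the space it already has.
    needs_new_member = is_greedy or free_in_pool < requested
    if needs_new_member:
        if not is_dynamic:
            # The pool cannot grow by itself; the user must add a member.
            return "manual-extend-required"
        return "add-member-volume"
    if default_slice_flag:
        return "slice-existing-member"   # carve a slice from free space
    return "use-whole-member"

print(plan_request(100, 50, is_greedy=False, is_dynamic=True,
                   default_slice_flag=True))  # slice-existing-member
```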
Most of the system-defined storage pools for CLARiiON storage systems first
search for four same-size disk volumes, from different buses, different SPs, and
different RAID groups.
The absolute criteria that the volumes must meet are:
◆ A disk volume cannot exceed 2 TB.
◆ Disk volumes must match the type specified in the storage pool storage profile.
◆ Disk volumes must be of the same size.
◆ No two disk volumes can come from the same RAID group.
◆ Disk volumes must be on a single storage system.
If found, AVM stripes the LUNs together and inserts the stripe into the storage pool.
If AVM cannot find the four disk volumes that are bus-balanced, it looks for four
same-size disk volumes that are SP-balanced from different RAID groups, and if
not found, AVM then searches for four same-size disk volumes from different RAID
groups.
Next, if AVM has been unable to satisfy these requirements, it looks for three
same-size disk volumes that are SP-balanced from different RAID groups, and so
on, until the only option left is for AVM to use one disk volume. The criteria that the
one disk volume must meet are:
◆ A disk volume cannot exceed 2 TB.
◆ A disk volume must match the type specified in the storage pool storage profile.
◆ If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.
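This search order can be roughly sketched as follows. The sketch is illustrative only, not Celerra code: the data layout and function are invented, the balancing check is simplified (real AVM requires full bus and SP balance across the set), and the least-used RAID group tie-breaker is omitted.

```python
# Illustrative sketch (not Celerra code) of the CLARiiON disk-volume
# selection order described above: prefer four same-size volumes that
# are bus-balanced, then SP-balanced, then merely from distinct RAID
# groups, stepping down to three, two, and finally one volume.
from itertools import combinations

TWO_TB = 2 * 1024 ** 4

def select_volumes(disks, profile_type):
    """disks: list of dicts with size, type, raid_group, sp, and bus keys."""
    usable = [d for d in disks
              if d["size"] <= TWO_TB and d["type"] == profile_type]
    for count in (4, 3, 2):
        for balance in ("bus", "sp", None):
            for combo in combinations(usable, count):
                if len({d["size"] for d in combo}) != 1:
                    continue                      # must all be the same size
                if len({d["raid_group"] for d in combo}) != count:
                    continue                      # distinct RAID groups
                if balance and len({d[balance] for d in combo}) < 2:
                    continue                      # spread across buses/SPs
                return list(combo)
    return usable[:1]   # last resort: a single disk volume (or none)

disks = [
    {"size": 100, "type": "CLSTD", "raid_group": 0, "sp": "A", "bus": 0},
    {"size": 100, "type": "CLSTD", "raid_group": 1, "sp": "B", "bus": 1},
]
print(len(select_volumes(disks, "CLSTD")))  # 2
```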
Figure 2 on page 26 shows the algorithm used to create a file system by adding a
pool member to the AVM CLARiiON system-defined storage pools clar_r1,
clar_r5_performance, and clar_r5_economy.
Figure 2 CLARiiON system-defined storage pool algorithm (clar_r1,
clar_r5_performance, and clar_r5_economy). In the flowchart, volumes that meet
the absolute criteria are striped together using an 8 K stripe size, a metavolume is
placed on the stripe, and the stripe is inserted into the storage pool; "least used" is
defined as the number of disk volumes used in a RAID group divided by the
number of disk volumes visible in that RAID group.
Figure 3 on page 26 shows the structure of a clar_r5_performance storage pool.
The volumes in the storage pools are balanced between SP A and SP B.
Figure 3 clar_r5_performance storage pool (stripe volumes built from CLARiiON
4+1 RAID 5 disk volumes, with ownership balanced between storage processors
A and B)
CLARiiON system-defined storage pools for RAID 5, RAID 3, and
RAID 1/0 ATA support
The three CLARiiON system-defined storage pools that provide support for the ATA
environment are clarata_r3, clarata_archive, and clarata_r10.
The clarata_r3 storage pool follows the basic CLARiiON algorithm explained in
"CLARiiON system-defined storage pool algorithms" on page 24, but uses only one
disk volume and does not allow striping of volumes. One of the applications for this
pool is backup to disk. Users can manage the RAID 3 disk volumes manually in a
user-defined storage pool. However, using the system-defined storage pool
clarata_r3 helps users maximize the benefit from AVM disk selection algorithms.
The clarata_r3 storage pool supports only CLARiiON ATA drives, not FC drives.
The criteria that the one disk volume must meet are:
◆ The disk volume cannot exceed 2 TB.
◆ The disk volume must match the type specified in the storage pool storage profile.
◆ If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.
Figure 4 on page 27 shows the storage pool clarata_r3 algorithm.
Figure 4 clarata_r3 system-defined algorithm (a single available disk volume that
meets the absolute criteria receives a metavolume, which is then placed in the
storage pool)
The storage pools clarata_archive and clarata_r10 differ from the basic CLARiiON
algorithm. These storage pools use two disk volumes, or a single disk volume, and
all ATA drives are the same.
Figure 5 on page 28 shows the profile algorithm used to create a file system with
the clarata_archive and clarata_r10 storage pools.
Figure 5 clarata_archive and clarata_r10 system-defined storage pools algorithm
(pool volumes are sorted by utilization, slices are taken from pool entries, and the
slices are concatenated if necessary to satisfy the space request)
Symmetrix system-defined storage pools algorithm
AVM works differently with Symmetrix storage systems because of the size and
uniformity of the disk volumes involved. Typically, the disk volumes exported from a
Symmetrix storage system are small and uniform in size. The aggregation strategy
used by Symmetrix storage pools is primarily to combine many small disk volumes
into larger volumes that can be used by file systems. AVM attempts to distribute
the Input/Output (I/O) across as many Symmetrix directors as possible. The Symmetrix
storage system can distribute I/O among the physical disks by using slicing and
striping on the storage system, but this is less of a concern for the AVM aggregation
strategy.
A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk
volumes, or creates a metavolume, as necessary to meet the request. The stripe or
metavolume is added to the Symmetrix storage pool. When the administrator asks
for n GB space from the Symmetrix storage pool, the space is allocated from this
system-defined storage pool. AVM adds and takes from the system-defined storage
pool as required. The stripe size is set in the system-defined profiles, and you
cannot modify the stripe size of a system-defined storage pool. The default stripe
size for a Symmetrix storage pool is 32 KB. Multi-path file system (MPFS) requires
a stripe depth of 32 KB or greater.
The algorithm that AVM uses looks for a set of eight disk volumes, and if not found,
a set of four disk volumes, and if not found, then a set of two disk volumes, and
finally one disk volume. AVM stripes the disk volumes together, if the disk volumes
are all of the same size. If the disk volumes are not the same size, AVM creates a
metavolume on top of the disk volumes. AVM then adds the stripe or the
metavolume to the storage pool.
If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool
that has space, takes a slice from that metavolume, and makes a metavolume over
that slice.
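The aggregation order can be sketched as follows. The sketch is illustrative only, not Celerra code: the function is invented for the example, and the fallback of slicing an existing metavolume in the pool is only noted, not modeled.

```python
# Illustrative sketch (not Celerra code) of the Symmetrix aggregation
# order described above: try sets of 8, 4, 2, then 1 disk volume;
# stripe them if they are all the same size, otherwise build a
# metavolume on top of them.

def aggregate(disk_sizes):
    """disk_sizes: sizes of unused disk volumes. Returns a
    (kind, members) pair, or None if no disk volume is available."""
    for count in (8, 4, 2, 1):
        if len(disk_sizes) >= count:
            members = disk_sizes[:count]
            kind = "stripe" if len(set(members)) == 1 else "meta"
            return kind, members
    return None  # no disks: AVM would next look for a sliceable metavolume

print(aggregate([10, 10, 10]))  # ('stripe', [10, 10])
```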
Figure 6 on page 29 shows the AVM algorithm used to create the file system with a
Symmetrix system-defined storage pool.
Figure 6 Symmetrix system-defined storage pool algorithm (a set of 8, 4, 2, or 1
disk volumes is striped together or built into a metavolume; if no disk volumes are
available, a slice is taken from a metavolume in the pool with remaining space,
and the file system is built on the resulting metavolume)
Figure 7 on page 29 shows the structure of a Symmetrix storage pool.
Figure 7 Symmetrix storage pool (stripe volumes built from Symmetrix STD disk
volumes)
All this system-defined storage pool activity is transparent to users and provides an
easy way to create and manage file systems. The system-defined storage pools do
not allow users to have much control over how AVM aggregates storage to meet file
system needs, but most users prefer ease-of-use over control.
When users request a new file system that uses the system-defined storage
pools, AVM:
◆ Determines whether more volumes need to be added to the storage pool and, if so, selects and adds volumes.
◆ Selects an existing, available storage pool volume to use for the file system and might slice it to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates volumes to create the size required to meet the request.
◆ Places a metavolume on the resulting volume and builds the file system within the metavolume.
◆ Returns the file system information to the user.
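The slice-or-concatenate step above can be sketched as follows. This is illustrative only, not Celerra code; the function and names are invented for the example.

```python
# Illustrative sketch (not Celerra code) of the allocation step above:
# slice one member volume when it is large enough, otherwise
# concatenate members until the request is met.

def allocate(members, request):
    """members: free sizes of pool member volumes.
    Returns the list of pieces used to build the metavolume."""
    members = sorted(members, reverse=True)
    if members and members[0] >= request:
        return [request]          # slice a single member down to size
    pieces, total = [], 0
    for size in members:          # concatenate volumes to reach the size
        pieces.append(size)
        total += size
        if total >= request:
            return pieces
    raise ValueError("pool cannot satisfy the request")

print(allocate([100, 40], 30))   # [30]
print(allocate([100, 40], 120))  # [100, 40]
```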
All system-defined storage pools have specific, predictable rules for getting disk
volumes into storage pools, provided by their associated profiles.
File system and storage pool relationship
When you request a file system that uses a system-defined storage pool, AVM
consumes disk volumes either by adding new members to the pool, or by using
existing pool members. To create a file system by using a user-defined storage
pool, create the storage pool and manually add the volumes you want to use,
before creating the file system.
Deleting a file system associated with either a system-defined or user-defined
storage pool returns the unused space to the storage pool, but the storage pool
might continue to reserve that space for future file system requests. Figure 8 on
page 30 shows two file systems built from an AVM storage pool.
[Figure 8 diagram: file systems FS1 and FS2, each built on a metavolume over a slice of the storage pool's member volumes.]
Figure 8 File systems built by AVM
As Figure 9 on page 31 shows, if FS2 is deleted, the storage used for that file
system is returned to the storage pool, and AVM continues to reserve it, as well as
any other member volumes available in the storage pool, for a future request. This
is true of system-defined and user-defined storage pools.
[Figure 9 diagram: only FS1 remains on its metavolume and slice; the member volumes freed by deleting FS2 stay in the storage pool.]
Figure 9 FS2 deletion returns storage to the storage pool
If FS1 is also deleted, the storage that was used for both file systems is no longer required.
A system-defined storage pool removes the volumes from the storage pool and returns the disk volumes to the storage system for use with other features or storage pools. You can change the attributes of a system-defined storage pool so that it is not dynamic and no longer grows and shrinks automatically. Doing so increases your direct involvement in managing the volume structure of the storage pool, including adding and removing volumes.
A user-defined storage pool cannot add or remove volumes automatically. To use volumes contained in a user-defined storage pool for another purpose, you must remove the volumes. "Remove volumes from storage pools" on page 82 provides more information on removing volumes. Otherwise, the user-defined storage pool continues to reserve the space for use by that pool.
Figure 10 on page 31 shows that the storage pool container still exists after the file
systems are deleted, and the volumes continue to be reserved by AVM for future
requests of that storage pool.
[Figure 10 diagram: after both file systems are deleted, the storage pool container remains, still holding its member volumes.]
Figure 10 FS1 deletion leaves storage pool container with volumes
If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in "Remove volumes from storage pools" on page 82 to remove volumes from the system-defined storage pool.
For a user-defined storage pool, to reuse the volumes for other purposes, remove
the volumes or delete the storage pool.
Automatic file system extension
Automatic file system extension works only when an AVM storage pool is
associated with a file system. You can enable or disable automatic file system
extension when you create a file system or modify the file system properties later.
"Create file systems with AVM" on page 42 provides the procedure to create file
systems with AVM system-defined or user-defined storage pools and enable
automatic file system extension on a newly created file system. "Enable automatic
file system extension and options" on page 61 provides the procedure to modify an
existing file system and enable automatic file system extension.
You can set the HWM and maximum size for automatic file system extension. The
Control Station might attempt to extend the file system several times, depending on
these settings.
HWM
The HWM identifies the threshold for initiating automatic file system extension. The
HWM value must be between 50 percent and 99 percent. The default HWM is
90 percent of the file system size.
Automatic file system extension guarantees that the file system usage is at least 3
percent below the HWM. For example, a 100 GB file system reaches its 80 percent
HWM at 80 GB. The file system then automatically extends to 110 GB and is now at
72.72 percent usage (80 GB), which is well below the 80 percent HWM for the 110
GB file system:
◆ If automatic file system extension is disabled, when the file system reaches the HWM, an HWM event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.
◆ If automatic file system extension is enabled on a file system, when the file system reaches the HWM, an automatic extension event notification is sent to sys_log and the file system automatically extends without any administrative action:
  • A file system that is smaller than 10 GB extends by its own size when it reaches the HWM. For example, a 3 GB file system, after reaching its HWM (for example, the default of 90 percent), automatically extends to 6 GB.
  • A file system that is larger than 10 GB extends by 5 percent of its size or 10 GB, whichever is larger, when it reaches the HWM. For example, a 100 GB file system extends to 110 GB, and a 500 GB file system extends to 525 GB.
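The extension rules above amount to a simple increment calculation. This Python sketch shows the rule; the function name and GB units are illustrative assumptions, not Celerra code, and the behavior at exactly 10 GB is not specified in the text:

```python
def extension_increment_gb(fs_size_gb):
    """Extension increment per the documented HWM rules (a sketch):
    file systems under 10 GB grow by their own size (they double);
    larger ones grow by the greater of 5 percent or 10 GB."""
    if fs_size_gb < 10:
        return fs_size_gb              # e.g. 3 GB -> 6 GB
    return max(fs_size_gb * 0.05, 10)  # e.g. 100 GB -> +10 GB, 500 GB -> +25 GB
```

This reproduces the worked examples: a 100 GB file system extends to 110 GB and a 500 GB file system to 525 GB.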
Maximum size
The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension ranges from 3 MB up to 16 TB. If Virtual Provisioning is enabled and the selected storage pool is a traditional RAID group storage pool (not a virtual CLARiiON thin pool), the maximum size is required; otherwise, this field is optional. The extension size also depends on having additional space available in the storage pool associated with the file system.
Automatic file extension conditions
◆ If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.
◆ If the available space is less than the extend size, the file system extends by the maximum available space.
◆ If only the HWM is set with automatic file system extension enabled, the file system automatically extends when that HWM is reached, if space is available and the file system size is less than 16 TB.
◆ If only the maximum size is specified with automatic file system extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, if there is space available and the maximum size has not been reached. If the file system reaches or exceeds the set maximum size, automatic extension is rejected.
◆ If neither the HWM nor the maximum file size is set, but either automatic file system extension or Virtual Provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.
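Taken together, these conditions describe a small decision procedure. The following Python sketch models them under assumed GB units; the function and constant names are hypothetical, not part of the Celerra CLI:

```python
DEFAULT_HWM = 90            # percent
DEFAULT_MAX_GB = 16 * 1024  # 16 TB expressed in GB

def plan_auto_extension(fs_size_gb, desired_ext_gb, pool_avail_gb,
                        max_size_gb=None):
    """Apply the documented conditions to a proposed extension (sketch).
    Returns the number of GB to extend by, or 0 if extension is rejected."""
    max_size_gb = max_size_gb or DEFAULT_MAX_GB
    if fs_size_gb >= max_size_gb:
        return 0                               # at or over maximum: rejected
    # Never grow past the maximum size.
    ext = min(desired_ext_gb, max_size_gb - fs_size_gb)
    # If the pool has less free space than the extend size,
    # extend by whatever space is available.
    return min(ext, pool_avail_gb)
```

For instance, a 100 GB file system asking for 10 GB from a pool with only 4 GB free extends by 4 GB, and a file system already at 16 TB is rejected outright.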
Virtual Provisioning
The Virtual Provisioning option allows you to allocate storage capacity based on
anticipated needs, while you dedicate only the resources you currently need.
Combining automatic file system extension and Virtual Provisioning lets you grow
the file system gradually as needed.
When Virtual Provisioning is enabled and a virtual storage pool is not being used,
the virtual maximum file system size is reported to NFS and CIFS clients; if a virtual
storage pool is being used, the actual file system size is reported to NFS and CIFS
clients.
Note: Enabling Virtual Provisioning with automatic file system extension does not
automatically reserve the space from the storage pool for that file system. Administrators
must ensure that adequate storage space exists, so that the automatic extension operation
can succeed. If the available storage is less than the maximum size setting, automatic
extension fails. Users receive an error message when the file system becomes full, even
though it appears that there is free space in the file system.
Calculating automatic file system extension size
During each automatic file system extension, fs_extend_handler located on the
Control Station (/nas/sbin/fs_extend_handler) calculates the extension size by
using the algorithm shown in Figure 11 on page 34.
1. Calculate the autoextension size, extend_size(a), based on how often the HWM-reached event is polled (every 10 seconds) and the assumed I/O rate (100):
   extend_size(a) = event_polling_interval * io_rate * 100 / (100 - HWM)
2. Compare extend_size(a) with the current file system size (current_fs_size):
   If extend_size(a) < 5% of current_fs_size, extend_size(b) = 5% of current_fs_size
   If extend_size(a) > current_fs_size, extend_size(b) = current_fs_size
   Otherwise, extend_size(b) = extend_size(a)
3. Calculate the required extension size (req_ext_size), where used is the percentage of file system space in use:
   req_ext_size = used * current_fs_size / (HWM - 3) - current_fs_size
4. Compare req_ext_size with extend_size(b):
   If req_ext_size > extend_size(b), extend_size(c) = req_ext_size
   Otherwise, extend_size(c) = extend_size(b)
5. For CIFS or NFS client I/O, DART sends the file system target size to the Control Station:
   target_size = current_fs_size + extend_size(c)
   The Control Station then calculates the extension size:
   dart_request_ext_size = target_size - current_fs_size
   If dart_request_ext_size > extend_size(c), the final autoextension size is dart_request_ext_size; otherwise, the final autoextension size is extend_size(c)
Figure 11 Automatic file system extension size calculation
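The calculation in Figure 11 can be expressed directly in code. This Python sketch mirrors the formulas above (steps 1 through 4), omitting the final DART/Control Station exchange; the names and units are illustrative, not the actual fs_extend_handler source:

```python
POLL_INTERVAL_S = 10   # how often the HWM-reached event is polled
IO_RATE = 100          # assumed I/O rate used in the calculation

def autoextension_size(current_fs_size, used_pct, hwm_pct):
    """Sketch of the Figure 11 calculation (sizes in MB, for example)."""
    # Step 1: rate-based estimate.
    a = POLL_INTERVAL_S * IO_RATE * 100 / (100 - hwm_pct)
    # Step 2: clamp between 5% of the file system size and its full size.
    b = a
    if a < 0.05 * current_fs_size:
        b = 0.05 * current_fs_size
    if a > current_fs_size:
        b = current_fs_size
    # Step 3: size needed to land at least 3% below the HWM.
    req = used_pct * current_fs_size / (hwm_pct - 3) - current_fs_size
    # Step 4: take the larger of the two estimates.
    return max(req, b)
```

With a roughly 100 GB (100000 MB) file system at its 90 percent HWM, the rate-based estimate (10000 MB) dominates the 3-percent-below-HWM requirement, so the file system extends by about 10 GB.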
Planning considerations
This section covers important volume and file system planning information and
guidelines, interoperability considerations, storage pool characteristics, and Celerra
upgrade considerations that you need to know when implementing AVM and
automatic file system extension.
Review these topics:
◆
Celerra Network Server file system management and the nas_fs command
◆
The Celerra SnapSure feature (checkpoints) and the fs_ckpt command
◆
Celerra Network Server volume management concepts (metavolumes, slice
volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server,
nas_slice, and nas_disk commands
◆
RAID technology
◆
Symmetrix storage systems
◆
CLARiiON storage systems
Interoperability considerations
Consider these points when using Celerra automatic file system extension with replication:
◆ Enable automatic extension and Virtual Provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically.
◆ When the source file system reaches its HWM, the destination file system automatically extends first, and then the source file system automatically extends. Set up the source replication file system with automatic extension enabled, as explained in "Create file systems with automatic file system extension" on page 50, or modify an existing source file system to extend automatically by using the procedure "Enable automatic file system extension and options" on page 61.
◆ If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message indicating that the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size by using the nas_fs -xtend <fs_name> -option src_only command. Using EMC Celerra Replicator (V1) and Using EMC Celerra Replicator (V2) contain instructions to recover from this situation.
Other interoperability considerations are:
◆ The automatic extension and Virtual Provisioning configuration is not moved over to the destination file system during Replicator failover. If you intend to reverse the replication and the destination file system becomes the source, you must enable automatic extension on the new source file system.
◆ With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size on the source file system. Table 7 on page 36 describes this client view.
Table 7 Client view of Replicator source and destination file systems

                 Destination     Source file system    Source file system
Client view      file system     without Virtual       with Virtual
                                 Provisioning          Provisioning
Clients see      Actual size     Actual size           Maximum size
Using EMC Celerra Replicator (V1) and Using EMC Celerra Replicator (V2) contain
more information on using automatic file system extension with Celerra Replicator.
AVM storage pool considerations
Consider these AVM storage pool characteristics:
◆ System-defined storage pools have a set of rules governing how the Celerra Network Server manages storage. User-defined storage pools have attributes that you define for each storage pool.
◆ All system-defined storage pools (virtual and non-virtual) are dynamic; they acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.
  User-defined storage pools are not dynamic; administrators must explicitly add and remove volumes manually. When creating a user-defined storage pool, you can choose disk volume storage from only one of the attached storage systems.
◆ Striping never occurs above the storage-pool level.
◆ The system-defined CLARiiON storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered a "greedy" attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. "Modify system-defined and user-defined storage pool attributes" on page 74 describes the procedure.
  Another option is to create user-defined storage pools to group disk volumes and keep system-defined storage pools from using them. "Create file systems with user-defined storage pools" on page 46 provides more information on creating user-defined storage pools. You can create a storage pool to reserve disk volumes but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes.
◆ The system-defined Symmetrix storage pools maximize the use of disk volumes already acquired by the storage pool before consuming more. This behavior is considered a "not greedy" attribute.
◆ AVM does not perform the storage system operations necessary to create new disk volumes; it consumes only existing disk volumes. You might have to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.
◆ A file system might use many or all of the disk volumes that are members of a system-defined storage pool.
◆ You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on ATA drives to the pool, add only other ATA-based disk volumes to the pool to extend it.
◆ SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made.
◆ AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.
◆ Some AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor-balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a CLARiiON storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than intended.
"Create file systems with AVM" on page 42 provides more information on creating file systems by using the different pool types. Managing EMC Celerra Volumes and File Systems Manually contains instructions to recover from this situation.
"Related information" on page 13 provides a list of related documentation.
Upgrading Celerra software
When you upgrade to Celerra Network Server version 5.6 software, all system-defined storage pools are available.
The system-defined storage pools for the currently attached storage systems with available space appear in the output when you list storage pools, even if AVM is not used on the Celerra Network Server. If you have not used AVM in the past, these storage pools are containers and do not consume storage until you request a file system by using AVM.
If you have used AVM in the past, in addition to the system-defined storage pools,
any user-defined storage pools you have created also appear when you list the
storage pools.
CAUTION
Automatic file system extension is interrupted during Celerra software upgrades. If
automatic file system extension is enabled, the Control Station continues to capture
HWM events, but actual file system extension does not start until the Celerra upgrade
process completes.
File system and automatic file system extension considerations
Consider your environment, most important file systems, file system sizes, and
expected growth, before implementing AVM. Follow these general guidelines when
planning to use AVM in your environment:
◆ Create the most important and most used file systems first to access them quickly and easily. AVM system-defined storage pools use free disk volumes to create a new file system. For example, suppose there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, and metavolume1, and then creates the file system ufs1:
  • Assuming the default behavior of the system-defined storage pool, AVM uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1.
  • File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, making it easier to locate and access them.
◆ If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.
◆ For file systems with sequential I/O, two LUNs per file system are generally sufficient.
◆ If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing.
◆ Automatic file system extension does not alleviate the need for appropriate file system usage planning. Create file systems with adequate space to accommodate the estimated usage. Allocating too little space for normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot extend the file system quickly enough to accommodate the usage, the automatic extension fails. "Known problems and limitations" on page 86 provides more information on how to identify and recover from this issue.
Note: When planning file system size and usage, consider setting the HWM so that the free space above the HWM setting is a certain percentage above the largest average file for that file system.
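The note above suggests sizing the HWM from the largest average file. One possible helper, shown here as a hypothetical Python sketch (the function name and headroom factor are assumed parameters, not Celerra settings), is:

```python
def suggested_hwm_pct(fs_size_gb, largest_avg_file_gb, headroom=1.2):
    """Pick an HWM (percent) so that the free space above it exceeds
    the largest average file by a headroom factor."""
    free_needed = largest_avg_file_gb * headroom
    hwm = 100 * (1 - free_needed / fs_size_gb)
    # AVM accepts HWM values between 50 and 99 percent.
    return max(50, min(99, round(hwm)))
```

For a 100 GB file system whose largest average file is 5 GB, this suggests an HWM of 94 percent, leaving 6 GB free above the threshold.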
◆ Use of AVM with a single-enclosure CLARiiON storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.
◆ If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool. "Create file systems with user-defined storage pools" on page 46 provides more information.
◆ Take disk contention into account when creating a user-defined pool.
◆ If you have disk volumes you would like to reserve so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it.
Configuring
The tasks to configure volumes and file systems with AVM are:
1. "Configure disk volumes" on page 40
2. "Create file systems with AVM" on page 42
3. "Create a user-defined storage pool" on page 47
4. "Extend file systems with AVM" on page 54
5. "Create file system checkpoints with AVM" on page 69
Configure disk volumes
The EMC Celerra NS500G, NS500GS, NS600G, NS600GS, NS700G, NS700GS,
and NS704G system network servers are gateway network-attached storage (NAS)
systems that connect to EMC Symmetrix and CLARiiON arrays. A Celerra gateway
system stores data on CLARiiON user LUNs or Symmetrix hypervolumes. If the
user LUNs or hypervolumes are not configured correctly on the array, Celerra AVM
and Celerra Manager cannot be used to manage the storage.
Typically, EMC support personnel perform the initial setup of disk volumes on these gateway storage systems.
However, if your Celerra gateway system is attached to a CLARiiON array and you
want to add disk volumes to the configuration, use the procedure outlined in this
section. In this two-step procedure, you first use EMC Navisphere Manager or the
EMC Navisphere CLI to create the CLARiiON user LUNs, and then use Celerra
Manager to make the new user LUNs available to the Celerra as disk volumes. The
user LUNs must be created before you create Celerra file systems.
Note: To add CLARiiON user LUNs, you must be familiar with EMC Navisphere Manager or
the EMC Navisphere CLI and the process of creating RAID groups and CLARiiON user
LUNs for the Celerra volumes. The documentation for EMC Navisphere Manager and EMC
Navisphere CLI, available on Powerlink, describes how to create RAID groups and user
LUNs.
If the disk volumes are configured by EMC, go to "Create file systems with AVM" on
page 42.
Add CLARiiON user LUNs
Step 1.
Create RAID groups and CLARiiON user LUNs (as needed for Celerra volumes) by using
EMC Navisphere Manager or EMC Navisphere CLI. Ensure that you add the LUNs to the
Celerra gateway system’s storage group:
• Always create the user LUNs in balanced pairs, one owned by SP A and one owned by
SP B. The paired LUNs must be the same size.
• For FC disks, the paired LUNs do not have to be in the same RAID group.
• For RAID 5 on FC disks, the RAID group must use five or nine disks. RAID 1 groups
always use two disks. For ATA disks, all LUNs in a RAID group must belong to the
same SP; create pairs by using LUNs from two RAID groups. RAID 6 groups have no
restrictions on the number of disks. ATA disks must be configured as RAID 5, RAID 6,
or RAID 3.
• The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.
Use these settings when creating user LUNs:
• RAID Type: RAID 5, RAID 6, or RAID 1 for FC disks and RAID 5, RAID 6, or RAID 3
for ATA disks
• LUN ID: Select the first available value
• Element Size: 128
• Rebuild Priority: ASAP
• Verify Priority: ASAP
• Enable Read Cache: Selected
• Enable Write Cache: Selected
• Enable Auto Assign: Cleared (off)
• Number of LUNs to Bind: 2
• Alignment Offset: 0
• LUN size: Must not exceed 2 TB
Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.
• When you add the LUN to the storage group for a gateway system, set the HLU to 16
or greater.
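The pairing rules in step 1 can be checked mechanically. This hypothetical Python sketch validates a candidate LUN pair against a few of the rules above (SP balance, equal size, the HLU minimum, and the 2 TB limit); it is not a Navisphere or Celerra API:

```python
def validate_lun_pair(lun_a, lun_b):
    """Check a pair of user LUNs against the documented pairing rules.

    LUNs are plain dicts such as {"sp": "A", "size_gb": 500, "hlu": 16}.
    Returns a list of rule violations (empty if the pair is valid).
    """
    problems = []
    if {lun_a["sp"], lun_b["sp"]} != {"A", "B"}:
        problems.append("pair must be SP-balanced: one LUN on SP A, one on SP B")
    if lun_a["size_gb"] != lun_b["size_gb"]:
        problems.append("paired LUNs must be the same size")
    for lun in (lun_a, lun_b):
        if lun["hlu"] < 16:
            problems.append("host LUN identifier (HLU) must be 16 or greater")
        if lun["size_gb"] > 2048:
            problems.append("LUN size must not exceed 2 TB")
    return problems
```

A pair of equal-size 500 GB LUNs on SP A and SP B with HLUs of 16 and 17 passes; two LUNs on the same SP with mismatched sizes would produce a list of violations instead.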
Step 2.
Perform these steps by using Celerra Manager to make the new user LUNs available to
the Celerra system:
a. Open the Storage System page for the Celerra system (Storage > Systems).
b. Click Rescan.
Note: Do not change the host LUN identifier of the Celerra LUNs after rescanning. This
might cause data loss or unavailability.
Create file systems with AVM
This section describes the procedures to create a Celerra file system by using AVM
storage pools and explains how to create file systems by using the automatic file
system extension feature.
You can enable automatic file system extension on new or existing file systems if
the file system has an associated AVM storage pool. When you enable automatic
file system extension, use the nas_fs command options to adjust the HWM value,
set a maximum file size to which the file system can be extended, and enable
Virtual Provisioning. "Create file systems with automatic file system extension" on
page 50 provides more information.
You can create file systems by using system-defined, system-defined virtual, or
user-defined storage pools, with automatic file system extension enabled or
disabled. Specify the storage system from which to allocate space for either type of
storage pool.
Choose one or more of these procedures to create file systems:
◆ "Create file systems with system-defined storage pools" on page 42
  The simplest way to create file systems without having to create the underlying volume structure.
◆ "Create file systems with user-defined storage pools" on page 46
  Allows more administrative control of the underlying volumes and placement of the file system. Use these storage pools to prevent the system-defined storage pools from using certain volumes.
◆ "Create file systems with automatic file system extension" on page 50
  Allows you to create a file system that automatically extends when it reaches a certain threshold, by using space from either a system-defined or a user-defined storage pool.
Create file systems with system-defined storage pools
When you create a Celerra file system by using the system-defined storage pools, it
is not necessary to create volumes before setting up the file system. AVM allocates
space to the file system from the storage pool you specify, residing on the storage
system associated with that storage pool, and automatically creates any required
volumes when it creates the file system. This ensures that the file system and its
extensions are created from the same type of storage, with the same cost,
performance, and availability characteristics.
The storage system is identified by a name whose format depends on the type of attached storage system. A CLARiiON storage system appears as a set of integers prefixed with APM, for example, APM00033900124-0019. A Symmetrix storage system appears as a set of integers, for example, 002804000190-003C.
Step 1.
Obtain the list of available system-defined storage pools and system-defined virtual
storage pools by using this command syntax:
$ nas_pool -list
Example:
To list the storage pools, type:
$ nas_pool -list
Output:
id   in_use   acl   name
1    y        0     symm_std
2    n        0     clar_r1
3    y        0     clar_r5_performance
4    y        0     clar_r5_economy
5    n        0     clarata_r3
6    n        0     clarata_archive
7    n        0     symm_std_rdf_src
8    n        0     clar_r1
40   y        0     engineer_APM0084401666
41   y        0     tp1_FCNTR074200038
Step 2.
Display the size of a specific storage pool by using this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size of the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output:
id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Note: To display the size of all storage pools, use the -all option instead of the <name>
option.
Step 3.
Obtain the system name of an attached Symmetrix storage system by using this
command syntax:
$ nas_storage -list
Example:
To list the system name of an attached Symmetrix storage system, type:
$ nas_storage -list
Output:
id   acl   name           serial number
1    0     000183501491   000183501491
Step 4.
Obtain information about a specific Symmetrix storage system in the list by using this command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example:
To obtain information about the Symmetrix storage system 000183501491, type:
$ nas_storage -info 000183501491
Output:
type  num  slot  ident    stat  scsi   vols  ports  p0_stat  p1_stat  p2_stat  p3_stat
R1    1    1     RA-1A    Off   NA     0     1      Off      NA       NA       NA
DA    2    2     DA-2A    On    WIDE   25    2      On       Off      NA       NA
DA    3    3     DA-3A    On    WIDE   25    2      On       Off      NA       NA
SA    5    5     SA-5A    On    ULTRA  0     2      On       On       NA       NA
SA    12   12    SA-12A   On    ULTRA  0     2      Off      On       NA       NA
DA    14   14    DA-14A   On    WIDE   27    2      On       Off      NA       NA
DA    15   15    DA-15A   On    WIDE   26    2      On       Off      NA       NA
R1    16   16    RA-16A   On    NA     0     1      On       NA       NA       NA
R2    17   1     RA-1B    Off   NA     0     1      Off      NA       NA       NA
DA    18   2     DA-2B    On    WIDE   26    2      On       Off      NA       NA
DA    19   3     DA-3B    On    WIDE   27    2      On       Off      NA       NA
SA    21   5     SA-5B    On    ULTRA  0     2      On       On       NA       NA
SA    28   13    SA-12B   On    ULTRA  0     2      On       On       NA       NA
DA    30   14    DA-14B   On    WIDE   25    2      On       Off      NA       NA
DA    31   15    DA-15B   On    WIDE   25    2      On       Off      NA       NA
R2    32   16    RA-16B   On    NA     0     1      On       NA       NA       NA
Step 5.
Create a file system by size with a system-defined storage pool by using this command
syntax:
$ nas_fs -name <fs_name> -create size=<size> pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)
<pool> = name of the storage pool
<system_name> = name of the storage system from which space for the file system is
allocated
Example:
To create a file system ufs1 of size 10G with a system-defined storage pool, type:
$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=000183501491
Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src
storage pool. This directs AVM to allocate space from volumes configured when installing
for remote mirroring by using SRDF. Using SRDF/S with EMC Celerra for Disaster
Recovery contains more information.
Output:
id            = 1
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = avm1
pool          = symm_std
member_of     =
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,virtual_provision=no
deduplication = off
stor_devs     = 000183501491
disks         = d20,d12,d18,d10
Note: The EMC Celerra Network Server Command Reference Manual contains
information on the options available for creating a file system with the nas_fs command.
Create file systems with user-defined storage pools
The AVM system-defined storage pools are available for use with the Celerra
Network Server. If you require more manual control than the system-defined
storage pools allow, create a user-defined storage pool and then create the file
system by using that pool.
Note: Create a user-defined storage pool and define its attributes to reserve disk volumes
so that your system-defined storage pools cannot use them.
Prerequisites
Prerequisites include:
◆ Creating a user-defined storage pool requires manual volume management. You must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing EMC Celerra Volumes and File Systems Manually describes the steps to create and manage volumes.
◆ You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to EMC Celerra System Objects contains more information on access control levels.
◆ AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor balanced disk volumes that have the same RAID type, disk count, and size. "Modify system-defined and user-defined storage pool attributes" on page 74 provides more information.
◆ When creating a user-defined storage pool to reserve disk volumes from a CLARiiON storage system, use storage-processor balanced disk volumes with these same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.
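The pairing behavior described in these prerequisites can be sketched in Python. This is an illustrative model only; the disk attributes, dictionary keys, and the `sp_balanced_pairs` helper are hypothetical and are not part of the Celerra CLI:

```python
from collections import defaultdict

def sp_balanced_pairs(disks):
    """Group candidate disk volumes by (RAID type, disk count, size) and
    pair one SP A volume with one SP B volume from each group, mirroring
    how AVM selects storage-processor balanced CLARiiON disk volumes."""
    groups = defaultdict(lambda: {"A": [], "B": []})
    for d in disks:
        key = (d["raid"], d["disk_count"], d["size"])
        groups[key][d["sp"]].append(d["name"])
    pairs = []
    for buckets in groups.values():
        # A usable pair needs one volume owned by each storage processor;
        # unmatched volumes in a group are left out, as the text warns.
        for a, b in zip(buckets["A"], buckets["B"]):
            pairs.append((a, b))
    return pairs
```

A disk volume with no same-attribute partner on the other storage processor produces no pair, which is why mismatched volumes reduce the number of usable disks.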
To create a file system with a user-defined storage pool:
◆ "Create a user-defined storage pool" on page 47
◆ "Create the file system" on page 48
◆ "Create file systems with automatic file system extension" on page 50
◆ "Create automatic file system extension-enabled file systems" on page 52
Create a user-defined storage pool
To create a user-defined storage pool (from which space for the file system is
allocated), add volumes to the storage pool and define the storage pool attributes.
Action
To create a user-defined storage pool, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -volumes [<volume_name>,....]
-description <desc> -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<acl> = designates an access control level for the new storage pool; default value is 0
<volume_name> = names of the volumes to add to the storage pool; can be a metavolume, slice
volume, stripe volume, or disk volume; use a comma to separate each volume name
<desc> = assigns a comment to the storage pool; type the comment within quotes
-default_slice_flag = determines whether members of the storage pool can be sliced when
space is dispensed from the storage pool; if set to y, members might be sliced; if set to n,
members of the storage pool cannot be sliced, and the volumes specified cannot be built on
a slice.
Example:
To create a user-defined storage pool named marketing with a description, with the disk members
d126, d127, d128, and d129 specified, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for
marketing" -volumes d126,d127,d128,d129 -default_slice_flag y
Output
id = 5
name = marketing
description = Storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
disk_type = CLSTD
server_visibility = server_2,server_3,server_4
Create the file system
To create a file system, you must first create a user-defined storage pool. "Create a
user-defined storage pool" on page 47 provides more information.
Use this procedure to create a file system by specifying a user-defined storage pool
and an associated storage system.
Step 1.
List the storage system by using this command syntax:
$ nas_storage -list
Example:
To list the storage system, type:
$ nas_storage -list
Output:
id   acl   name             serial number
1    0     APM00033900125   APM00033900125

Step 2.
Get detailed information about a specific attached storage system in the list by using this
command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example:
To get detailed information about the storage system APM00033900125, type:
$ nas_storage -info APM00033900125
Output:
id               = 1
arrayname        = APM00033900125
name             = APM00033900125
model_type       = RACKMOUNT
model_num        = 630
db_sync_time     = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks        = 30
num_devs         = 21
num_pdevs        = 1
num_storage_grps = 0
num_raid_grps    = 10
cache_page_size  = 8
wr_cache_mirror  = True
low_watermark    = 70
high_watermark   = 90
unassigned_cache = 0
failed_over      = False
captive_storage  = True
Active Software
Navisphere       = 6.6.0.1.43
ManagementServer = 6.6.0.1.43
Base             = 02.06.630.4.001
Storage Processors
SP Identifier     = A
signature         = 926432
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500756
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
sys_buffer        = 749
read_cache        = 32
write_cache       = 3072
free_memory       = 115
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = spa
ip_address        = 128.221.252.200
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 11 - root_disk root_ldisk d3 d4 d5 d6 d8 d13 d14 d15 d16

SP Identifier     = B
signature         = 926493
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500508
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = OEM-XOO25IL9VL9
ip_address        = 128.221.252.201
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 4 - disk7 d9 d11 d12
Note: This is not a complete output.
Step 3.
Create the file system from a user-defined storage pool and designate the storage system
on which you want the file system to reside by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create <volume_name>
pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system
<type> = type of the file system: uxfs (default), mgfs, or rawfs
<volume_name> = name of the volume
<pool> = name of the storage pool
<system_name> = name of the storage system on which the file system resides
Example:
To create the file system ufs1 from a user-defined storage pool and designate the
APM00033900125 storage system on which you want the file system ufs1 to reside, type:
$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing
storage=APM00033900125
Output:
id           = 2
name         = ufs1
acl          = 0
in_use       = False
type         = uxfs
volume       = MTV1
pool         = marketing
member_of    = root_avm_fs_group_2
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = no,virtual_provision=no
deduplication= off
stor_devs    = APM00033900125-0111
disks        = d6,d8,d11,d12
Create file systems with automatic file system extension
Use the -auto_extend option of the nas_fs command to enable automatic file
system extension on a new file system created with AVM; the option is disabled by
default.
Note: Automatic file system extension does not alleviate the need for appropriate file
system usage planning. Create the file systems with adequate space to accommodate the
estimated file system usage. Allocating too little space to accommodate normal file system
usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If
the Control Station cannot adequately extend the file system to accommodate the usage
quickly enough, the automatic extension fails.
If automatic file system extension is disabled and the file system reaches 90
percent full, a warning message is written to the sys_log. Any action necessary is at
the administrator’s discretion.
Note: You do not have to set the maximum size for a newly created file system when you
enable automatic file system extension. The default maximum size is 16 TB. With automatic
file system extension enabled, even if the HWM is not set, the file system automatically
extends up to 16 TB, if the storage space is available in the storage pool.
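The default-threshold behavior described in this note can be modeled as a small decision function. This is a sketch of the documented rule (extend when usage crosses the HWM, up to the 16 TB ceiling), not Control Station code; all sizes are in MB:

```python
DEFAULT_HWM = 90                      # percent; default high water mark
DEFAULT_MAX_MB = 16 * 1024 * 1024     # 16 TB default maximum size, in MB

def should_auto_extend(used_mb, total_mb, hwm=DEFAULT_HWM, max_mb=DEFAULT_MAX_MB):
    """Return True when usage has crossed the HWM and the file system
    has not yet reached its maximum size."""
    pct_used = 100.0 * used_mb / total_mb
    return pct_used >= hwm and total_mb < max_mb
```

For example, a 10,000 MB file system that is 9,500 MB full (95 percent) is past the default 90 percent HWM and would be a candidate for extension.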
Use this procedure to create a file system with a system-defined storage pool and a
CLARiiON storage system, and enable automatic file system extension.
Action
To create a file system with automatic file system extension enabled, use this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<size>
pool=<pool_name> storage=<system_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system
<type> = type of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing <number>G
(for example, 250G) or in MB by typing <number>M (for example, 500M), or in TB by typing
<number>T (for example, 1T)
<pool_name> = name of the storage pool from which to allocate space to the file system
<system_name> = name of the storage system associated with the storage pool
Example:
To enable automatic file system extension as you create a new 10 GB file system from a system-defined storage pool and a CLARiiON storage system, type:
$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance
storage=APM00042000814 -auto_extend yes
Output
id           = 434
name         = ufs1
acl          = 0
in_use       = False
type         = uxfs
worm         = off
volume       = v1634
pool         = clar_r5_performance
member_of    = root_avm_fs_group_3
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = hwm=90%,virtual_provision=no
deduplication= off
stor_devs    = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks        = d20,d12,d18,d10
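The size argument used throughout these commands takes a single-letter suffix (M, G, or T). A sketch of a parser for that format follows; this `parse_size_to_mb` helper is illustrative and is not part of the nas_fs command:

```python
def parse_size_to_mb(size: str) -> int:
    """Convert a nas_fs-style size string such as '500M', '250G', or '1T'
    into megabytes."""
    factors = {"M": 1, "G": 1024, "T": 1024 * 1024}
    unit = size[-1].upper()
    if unit not in factors:
        raise ValueError(f"size must end in M, G, or T: {size!r}")
    return int(size[:-1]) * factors[unit]
```

So the 10G in the example above corresponds to 10,240 MB of requested space.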
Create automatic file system extension-enabled file systems
When you create a file system with automatic file system extension enabled, you
can set the point at which you want the file system to automatically extend (the
HWM) and the maximum size to which the file system can grow. You can also
enable Virtual Provisioning at the same time that you create or extend a file system.
"Enable automatic file system extension and options" on page 61 provides
information on modifying the automatic file system extension options.
If you set the slice=no option on the file system, the actual file system size might be
bigger than the size that you specify for the file system, and could exceed the
maximum size. In this case, a warning indicates the file system size might exceed
the maximum size and the automatic extension fails. If you do not specify the file
system slice option (-option slice=yes|no) when you create the file system, the file
system defaults to the setting of the storage pool. "Modify system-defined and user-defined storage pool attributes" on page 74 provides more information.
Note: If the actual file system size is above the HWM when Virtual Provisioning is enabled,
the client sees the actual file system size instead of the specified maximum size.
Enabling automatic file system extension and Virtual Provisioning does not
automatically reserve the space from the storage pool for that file system.
Administrators must ensure that adequate storage space exists, so that the
automatic extension operation can succeed. If the available storage is less than the
maximum size setting, automatic extension fails. Users receive an error message
when the file system becomes full, even though it appears that there is free space
in the file system. The file system must be manually extended.
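Because enabling automatic extension does not reserve pool space, an administrator might pre-check pool capacity against the configured maximum along these lines (a hypothetical check; the function and its arguments are illustrative, with sizes in MB):

```python
def can_reach_max_size(current_mb, max_mb, pool_available_mb):
    """Return True if the pool currently holds enough free space for the
    file system to grow from its current size to its maximum size."""
    return pool_available_mb >= max_mb - current_mb
```

If this check fails, automatic extension will eventually fail short of the maximum size, matching the behavior described above.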
Use this procedure to simultaneously set the automatic file system extension
options when you are creating the file system.
Step 1.
Create a file system of a specified size, enable automatic file system extension and Virtual
Provisioning, and set the HWM and the maximum file system size simultaneously by using
this command syntax:
$ nas_fs -name <fs_name> -type <type> -create
size=<integer>[T|G|M] pool=<pool name> storage=<system_name>
-auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size
<integer>[T|G|M]
where:
<fs_name> = name of the file system
<type> = type of the file system
<integer> = size requested in MB, GB, or TB; the maximum size is 16 TB
<pool name> = name of the storage pool
<system_name> = attached storage system on which the file system and storage pool
reside
<50-99> = percentage between 50 and 99, at which you want the file system to
automatically extend
Example:
To create a 10 MB, UxFS from an AVM storage pool, with automatic file system extension
enabled, maximum file system size of 200M, HWM of 90 percent, and Virtual
Provisioning enabled, type:
$ nas_fs -name ufs2 -type uxfs -create size=10M
pool=clar_r5_performance -auto_extend yes -vp yes -hwm 90%
-max_size 200M
Output:
id           = 435
name         = ufs2
acl          = 0
in_use       = False
type         = uxfs
worm         = off
volume       = v1637
pool         = clar_r5_performance
member_of    = root_avm_fs_group_3
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = hwm=90%,max_size=200M,virtual_provision=yes
deduplication= off
stor_devs    = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks        = d20,d12,d18,d10
Note: When you enable Virtual Provisioning on a new or existing file system, you must
also specify the maximum size to which the file system can automatically extend.
Step 2.
Verify the settings for the specific file system after enabling automatic file system
extension by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To verify the settings for the file system ufs2 after enabling automatic file system
extension, type:
$ nas_fs -info ufs2
Output:
id           = 2
name         = ufs2
acl          = 0
in_use       = False
type         = uxfs
worm         = off
volume       = v1637
pool         = clar_r5_performance
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
backups      = ufs2_snap1,ufs2_snap2
auto_ext     = hwm=66%,max_size=16769024M,virtual_provision=yes
deduplication= off
stor_devs    = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks        = d20,d12,d18,d10
You can also set the options -hwm and -max_size on each file system with
automatic file system extension enabled. When enabling Virtual Provisioning on a
file system, you must set the maximum size, but setting the high water mark is
optional.
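The option rules in this section, where Virtual Provisioning requires a maximum size and the HWM must fall between 50 and 99 percent, can be summarized as a validation sketch (illustrative only; not actual Control Station logic):

```python
def validate_vp_options(vp, hwm=None, max_size_mb=None):
    """Collect error strings for invalid nas_fs option combinations,
    per the rules documented in this section."""
    errors = []
    if vp and max_size_mb is None:
        # Enabling Virtual Provisioning without -max_size is rejected.
        errors.append("Virtual Provisioning requires -max_size")
    if hwm is not None and not 50 <= hwm <= 99:
        errors.append("-hwm must be between 50 and 99")
    return errors
```

An empty list means the combination is acceptable, such as the `-vp yes -hwm 90% -max_size 200M` example shown above.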
Extend file systems with AVM
Increase the size of a Celerra file system nearing its maximum capacity by extending the file system. You can:
◆ Extend a file system by size to add space if the file system has an associated system-defined storage pool. You can specify the storage system from which to allocate space. "Extend file systems with system-defined storage pools" on page 55 provides instructions.
◆ Extend a file system by using a storage pool other than the one used to create the file system. "Extend file systems by using a different storage pool" on page 57 provides instructions.
◆ Extend a file system by volume if the file system has an associated user-defined storage pool. "Extend file systems with user-defined storage pools" on page 59 provides instructions.
◆ Extend an existing file system by enabling automatic file system extension on that file system. "Enable automatic file system extension and options" on page 61 provides instructions.
◆ Extend an existing file system by enabling Virtual Provisioning on that file system. "Enable Virtual Provisioning" on page 65 provides instructions.
Managing EMC Celerra Volumes and File Systems Manually contains the instructions to extend file systems manually.
Extend file systems with system-defined storage pools
All file systems created by using the AVM feature have an associated storage pool.
Extend a file system created with a system-defined storage pool (either virtual or
non-virtual) by specifying only the size and the name of the file system. AVM
allocates storage from the storage pool to the file system. You can specify the
storage system you want to use. If you do not, the last storage system associated
with the storage pool is used.
Note: A file system created using a system-defined virtual storage pool can be extended on
its existing pool or by using a compatible system-defined virtual storage pool that contains
the same disk type.
Use this procedure to extend a file system with a system-defined storage pool by
size.
Note: Use either a system-defined or user-defined storage pool to extend a file system.
Step 1.
Check the file system configuration to confirm that the file system has an associated
storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Note: If you see a storage pool defined in the output, the file system was created with
AVM and has an associated storage pool.
Example:
To check the file system configuration to confirm that the file system ufs1 has an
associated storage pool, type:
$ nas_fs -info ufs1
Output:
id        = 8
name      = ufs1
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d13
Step 2.
Extend the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing
<number>G (for example, 250G) or in MB by typing <number>M (for example, 500M)
<pool> = name of the storage pool
<system_name> = name of the storage system; if you do not specify a storage system,
the default storage system is the one on which the file system resides; if the file
system spans multiple storage systems, the default is any one of the storage systems on
which the file system resides.
Note: The first time you extend the file system without specifying a storage pool, the
default storage pool is the one used to create the file system. If you specify a storage pool
that is different from the one used to create the file system, the next time you extend this
file system without specifying a storage pool, the last pool in the output list is the default.
Example:
To extend the size of the file system ufs1 by 10M, type:
$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance
storage=APM00023700165
Output:
id        = 8
name      = ufs1
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d13,d19,d25,d30,d31,d32,d33

Step 3.
Check the size of the file system after extending it to confirm that the size increased by
using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of the file system ufs1 after extending it to confirm that the size
increased, type:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
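When scripting capacity checks, the -size output shown above can be parsed, for example as follows (a sketch that assumes the exact line format shown in this output; `parse_fs_size` is an illustrative helper, not a Celerra command):

```python
import re

def parse_fs_size(line: str) -> dict:
    """Extract total, avail, and used (in MB) from a nas_fs -size line such as
    'total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)'."""
    m = re.search(r"total\s*=\s*(\d+)\s+avail\s*=\s*(\d+)\s+used\s*=\s*(\d+)", line)
    if not m:
        raise ValueError("unrecognized nas_fs -size output")
    total, avail, used = map(int, m.groups())
    return {"total": total, "avail": avail, "used": used}
```

A check that the extension succeeded could then compare the parsed total before and after the -xtend command.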
Extend file systems by using a different storage pool
You can use more than one storage pool to extend a file system. Ensure that the
storage pools have space allocated from the same storage system to prevent the
file system from spanning more than one storage system.
Note: A file system created using a system-defined virtual storage pool can be extended on
its existing pool or by using a compatible system-defined virtual storage pool that contains
the same disk type.
Use this procedure to extend the file system by using a different storage pool than
the one used to create the file system.
Step 1.
Check the file system configuration to confirm that the file system has an associated
storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the file system configuration to confirm that the file system ufs2 has an
associated storage pool, type:
$ nas_fs -info ufs2
Output:
id        = 9
name      = ufs2
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00033900165-0111
disks     = d7,d13
Note: The storage pool used earlier to create or extend the file system is shown in the
output as associated with this file system.
Step 2.
Extend the file system by using a storage pool different from the one used to create it, with
this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>
where:
<fs_name> = name of the file system
<size> = amount of space you want to add to the file system; specify the size in GB by
typing <number>G (for example, 250G) or in MB by typing <number>M (for example,
500M)
<pool> = name of the storage pool
Example:
To extend the file system ufs2 by using a different storage pool than the one used to
create the file system, type:
$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy
Output:
id        = 9
name      = ufs2
acl       = 0
in_use    = False
type      = uxfs
volume    = v123
pool      = clar_r5_performance,clar_r5_economy
member_of = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00033900165-0112
disks     = d7,d13,d19,d25
Note: The storage pools used to create and extend the file system now appear in the
output. There is only one storage system from which space for these storage pools is
allocated.
Step 3.
Check the file system size after extending it to confirm the increase in size by using this
command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of the file system ufs2 after extending it to confirm the increase in size,
type:
$ nas_fs -size ufs2
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Extend file systems with user-defined storage pools
If you created a file system with a user-defined storage pool, you must extend the
file system manually by specifying the volumes to add.
Note: With user-defined storage pools, you must manually create the underlying volumes,
including striping, before adding the volume to the storage pool. Managing EMC Celerra
Volumes and File Systems Manually describes the detailed procedures needed to perform
these tasks before creating or extending the file system.
If you do not specify a storage system when extending the file system, the default
storage system is the one on which the file system resides. If the file system spans
multiple storage systems, the default is any one of the storage systems on which
the file system resides.
Use this procedure to extend the file system by using the same user-defined
storage pool that was used to create the file system.
Step 1.
Check the configuration of the file system to confirm the associated user-defined storage
pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the configuration of the file system ufs3 to confirm the associated user-defined
storage pool, type:
$ nas_fs -info ufs3
Output:
id        = 10
name      = ufs3
acl       = 0
in_use    = False
type      = uxfs
volume    = V121
pool      = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00033900165-0111
disks     = d7,d8
Note: The user-defined storage pool used to create the file system is defined in the
output.
Step 2.
Extend the file system by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name> pool=<pool>
storage=<system_name>
where:
<fs_name> = name of the file system
<volume_name> = name of the volume to add to the file system
<pool> = storage pool associated with the file system; it can be user-defined or system-defined
<system_name> = name of the storage system on which the file system resides
Example:
To extend the file system ufs3, type:
$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165
Output:
id        = 10
name      = ufs3
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d8,d13,d14
Note: The next time you extend this file system without specifying a storage pool, the last
pool in the output list is the default.
Step 3.
Check the size of the file system ufs3 after extending it to confirm that the size increased
by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example:
To check the size of the file system ufs3 after extending it to confirm that the size
increased, type:
$ nas_fs -size ufs3
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Enable automatic file system extension and options
You can automatically extend an existing file system created with AVM system-defined or user-defined storage pools. The file system automatically extends by
using space from the storage system and storage pool with which the file system is
associated.
If you set the (slice=no) option on the file system, the actual file system size might
be bigger than the size you specify for the file system, and could exceed the
maximum size. In this case, you receive a warning indicating that the file system
size might exceed the maximum size, and automatic extension fails. If you do not
specify the file system slice option (-option slice=yes|no) when you create the file
system, the file system defaults to the setting of the storage pool.
"Modify system-defined and user-defined storage pool attributes" on page 74
describes the procedure to modify the default_slice_flag attribute on the storage
pool.
Use the -modify option to enable automatic extension on an existing file system.
You can also set the HWM and maximum size.
To enable automatic file system extension and options:
◆ "Enable automatic file system extension" on page 62
◆ "Set the HWM" on page 63
◆ "Set the maximum file system size" on page 64
You can also enable Virtual Provisioning at the same time that you create or extend
a file system. "Enable Virtual Provisioning" on page 65 describes the procedure to
enable Virtual Provisioning on an existing file system.
"Enable automatic extension, Virtual Provisioning, and all options simultaneously"
on page 67 describes the procedure to simultaneously enable automatic extension,
Virtual Provisioning, and all options on an existing file system.
Enable automatic file system extension
If the HWM or maximum size is not set, the file system automatically extends up to
the default maximum size of 16 TB when the file system reaches the default HWM
of 90 percent, if the space is available.
An error message appears if you try to enable automatic file system extension on a
file system created manually.
Note: The HWM is 90 percent by default when you enable automatic file system extension.
Action
To enable automatic extension on an existing file system, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system
Example:
To enable automatic extension on an existing file system ufs3, type:
$ nas_fs -modify ufs3 -auto_extend yes
Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Set the HWM
With automatic file system extension enabled on an existing file system, use the
-hwm option to set a threshold. To specify a threshold, type an integer between 50
and 99 percent; the default is 90 percent.
If the HWM or maximum size is not set, the file system automatically extends up to
the default maximum size of 16 TB when the file system reaches the default HWM
of 90 percent, if the space is available. The value for maximum size, if specified,
has an upper limit of 16 TB.
Action
To set the HWM on an existing file system, with automatic file system extension enabled, use this
command syntax:
$ nas_fs –modify <fs_name> -hwm <50-99>%
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to
automatically extend
Example:
To set the HWM on an existing file system ufs3, with automatic extension already enabled, type:
$ nas_fs -modify ufs3 -hwm 85%
Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Set the maximum file system size
Use the -max_size option to specify a maximum size to which a file system can
grow. To specify the maximum size, type an integer and specify T for TB, G for GB
(default), or M for MB.
When you enable automatic file system extension, the file system automatically
extends up to the default maximum size of 16 TB. Set the HWM at which you want
the file system to automatically extend. If the HWM is not set, the file system
automatically extends up to 16 TB when the file system reaches the default HWM of
90 percent, if the space is available.
Action
To set the maximum file system size with automatic file system extension already enabled, use
this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<integer> = maximum size requested in MB, GB, or TB
Example:
To set the maximum file system size on the existing file system, type:
$ nas_fs -modify ufs3 -max_size 16T
Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,max_size=16769024M,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
Enable Virtual Provisioning
You can enable Virtual Provisioning at the same time that you create or extend a file
system. Use the -vp option to enable Virtual Provisioning. You must also specifically
set the maximum size to which you want the file system to automatically extend. An
error message appears if you attempt to enable Virtual Provisioning and do not set
the maximum size. "Set the maximum file system size" on page 64 describes the
procedure to set the maximum file system size.
The upper limit for the maximum size is 16 TB. The maximum size you set is the file
system size that is presented to users, if the maximum size is larger than the actual
file system size.
Note: Enabling automatic file system extension and Virtual Provisioning does not
automatically reserve the space from the storage pool for that file system. Administrators
must ensure that adequate storage space exists, so that the automatic extension operation
can succeed. If the available storage is less than the maximum size setting, automatic
extension fails. Users receive an error message when the file system becomes full, even
though it appears that there is free space in the file system. The file system must be
manually extended.
Enable Virtual Provisioning on the source file system when the feature is used in a
replication situation. With Virtual Provisioning enabled, NFS, CIFS, and FTP clients
see the actual size of the Replicator destination file system, while they see the
virtually provisioned maximum size of the Replicator source file system.
"Interoperability considerations" on page 35 provides additional information.
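The size-reporting rule above can be summarized in a short sketch. This is an illustrative Python model only; the function and its arguments are assumptions for this example, not part of the Celerra CLI:

```python
# Sketch of the size a client sees, per the rule above (illustrative only).
def client_visible_size_mb(actual_mb, max_size_mb, virtual_provision):
    """NFS, CIFS, and FTP clients see the provisioned maximum on a
    virtually provisioned file system when it exceeds the actual size."""
    if virtual_provision and max_size_mb > actual_mb:
        return max_size_mb
    return actual_mb

# Replicator source (VP enabled) vs. destination (actual size is shown):
print(client_visible_size_mb(10_240, 16_769_024, True))    # provisioned max
print(client_visible_size_mb(10_240, 16_769_024, False))   # actual size
```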
Action
To enable Virtual Provisioning with automatic extension enabled on the file system, use this
command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -vp {yes|no}
where:
<fs_name> = name of the file system
<integer> = size requested in MB, GB, or TB
Example:
To enable Virtual Provisioning, type:
$ nas_fs -modify ufs1 -max_size 16T -vp yes
Output
id        = 27
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,max_size=16769024M,virtual_provision=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
 disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15  server=server_2
 disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
 disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13  server=server_2
 disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
 disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9   server=server_2
 disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9  server=server_2
 disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6   server=server_2
 disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6  server=server_2
Enable automatic extension, Virtual Provisioning, and all options
simultaneously
Note: An error message appears if you try to enable automatic file system extension on a
file system that was created without using a storage pool.
Action
To simultaneously enable automatic file system extension and Virtual Provisioning on an existing
file system, and set the HWM and the maximum size, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to
automatically extend
<integer> = size requested in MB, GB, or TB
Example:
To modify a UxFS to enable automatic extension, enable Virtual Provisioning, set a maximum file
system size of 16 TB, with an HWM of 90 percent, type:
$ nas_fs -modify ufs4 -auto_extend yes -vp yes -hwm 90% -max_size 16T
Output
id        = 29
name      = ufs4
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=16769024M,virtual_provision=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
Verify the maximum size of the file system
Automatic file system extension fails when the file system reaches the maximum
size.
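The check behind this behavior is simple to express. The helper below is hypothetical (the CLI performs this check server-side); it only illustrates why a forced extension is a reliable probe for the maximum size:

```python
# Minimal sketch of the check implied above: an extension succeeds only
# while current size plus the requested increment stays within max_size.
# Hypothetical helper for illustration; all sizes are in MB.
def extend_allowed(current_mb, request_mb, max_size_mb):
    return current_mb + request_mb <= max_size_mb

print(extend_allowed(16_769_020, 4, 16_769_024))   # exactly at the limit
print(extend_allowed(16_769_024, 4, 16_769_024))   # maximum already reached
```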
Action
To force an extension to determine whether the maximum size has been reached, use this
command syntax:
$ nas_fs -xtend <fs_name> size=<size>
where:
<fs_name> = name of the file system
<size> = size to extend the file system by, in MB
Example:
To force an extension to determine whether the maximum size has been reached, type:
$ nas_fs -xtend ufs1 size=4M
Output
id        = 759
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v2459
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_4
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=16769024M (reached)  <<<
            virtual_provision=yes
stor_devs = APM00041700549-0018
disks     = d10
 disk=d10  stor_dev=APM00041700549-0018 addr=c16t1l8 server=server_4
 disk=d10  stor_dev=APM00041700549-0018 addr=c32t1l8 server=server_4
 disk=d10  stor_dev=APM00041700549-0018 addr=c0t1l8  server=server_4
 disk=d10  stor_dev=APM00041700549-0018 addr=c48t1l8 server=server_4
Create file system checkpoints with AVM
Use either AVM system-defined or user-defined storage pools to create file system
checkpoints. Specify the storage system on which you want the file system
checkpoint to reside.
Use this procedure to create the checkpoint, specifying a storage pool and storage
system.
Note: You can only specify the storage pool for the checkpoint SavVol when there are no
existing checkpoints of the PFS.
Step 1. Obtain a list of available storage systems by using this command syntax:
$ nas_storage -list
Note: To obtain more detailed information on the storage system and associated names, use the -info option instead.
Step 2. Create the checkpoint by using this command syntax:
$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system for which you want to create a checkpoint
<name> = name of the checkpoint
<integer> = amount of space to allocate to the checkpoint; type the size in TB, GB, or
MB
<pool> = name of the storage pool
<system_name> = storage system on which the file system checkpoint resides
Note: Virtual Provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients
cannot see the virtually provisioned maximum size of a SnapSure checkpoint file system.
Example:
To create the checkpoint ckpt1, type:
$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance storage=APM00023700165
Output:
id        = 1
name      = ckpt1
acl       = 0
in_use    = False
type      = uxfs
volume    = V126
pool      = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d8
Managing
The tasks to manage AVM storage pools are:
◆ "List existing storage pools" on page 70
◆ "Display storage pool details" on page 71
◆ "Display storage pool size information" on page 71
◆ "Modify system-defined and user-defined storage pool attributes" on page 74
◆ "Extend a user-defined storage pool" on page 80
◆ "Extend a system-defined storage pool" on page 81
◆ "Remove volumes from storage pools" on page 82
◆ "Delete user-defined storage pools" on page 83
List existing storage pools
When the existing storage pools are listed, all the system-defined storage pools
and three user-defined storage pools (marketing, engineering, and sales) appear in
the output. All existing storage pools are listed, regardless of whether they are in
use.
Action
To list all existing system-defined and user-defined storage pools, use this command syntax:
$ nas_pool -list
Example:
To list the storage pools, type:
$ nas_pool -list
Output
id   in_use  acl  name
1    y       0    symm_std
2    n       0    clar_r1
3    y       0    clar_r5_performance
4    y       0    clar_r5_economy
5    y       0    marketing
6    y       0    engineering
7    y       0    sales
8    n       0    clarata_r3
9    n       0    clarata_archive
10   n       0    symm_std_rdf_src
11   n       0    clar_r1
40   y       0    engineer_APM008440166
Display storage pool details
Action
To display detailed information of a specified system-defined, system-defined virtual, or user-defined storage pool, use this command syntax:
$ nas_pool -info <name>
where:
<name> = name of the storage pool
Example:
To display detailed information of the storage pool marketing, type:
$ nas_pool -info marketing
Output
id                   = 5
name                 = marketing
description          =
acl                  = 0
in_use               = True
clients              = fs24,fs26
members              = d320,d319
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= True
disk_type            = CLSTD
server_visibility    = server_2,server_3
Display storage pool size information
The size information of the storage pool appears in the output. If there is more
than one storage pool, the output shows the size information for all the storage
pools.
The storage pool size information includes:
◆ The total used space in the storage pool in MB (used_mb)
◆ The total unused space in the storage pool in MB (avail_mb)
◆ The total used and unused space in the storage pool in MB (total_mb)
◆ The total space available from all sources in MB that could potentially be added to the storage pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available.
Note: If either non–MB-aligned disk volumes or disk volumes of different sizes are striped
together, truncation of storage might occur. The total amount of space added to a pool might
be different than the total amount taken from potential storage. Total space in the pool
includes the truncated space, but potential storage does not include the truncated space.
In Celerra Manager, the potential MB shown in the output represents the total
available storage, including the space already in the storage pool. In the CLI, the
output for potential_mb does not include the space in the storage pool.
Note: Use the -size -all option to display the size information for all storage pools.
Action
To display the size information for a specific storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output
id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
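When scripting against this output, the "key = value" lines are easy to turn into structured data. The parser below is a hypothetical helper for illustration, not part of the Celerra CLI:

```python
# Hypothetical parser for the "key = value" lines shown in the output
# above, turning the *_mb fields into integers for scripting.
def parse_pool_size(output):
    info = {}
    for line in output.splitlines():
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        info[key] = int(value) if key.endswith("_mb") else value
    return info

sample = """id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985"""

pool = parse_pool_size(sample)
print(pool["name"], pool["used_mb"] + pool["avail_mb"])
```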
Action
To display the size information for a specific virtual storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the ThinPool0 storage pool, type:
$ nas_pool -size ThinPool0_APM00084401664
Output
id           = 49
name         = ThinPool0_APM00084401664
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 1023
Physical storage usage in Thin Pool Thin Pool 0 on APM00084401664
used_mb      = 2048
avail_mb     = 1093698
total_mb     = 1095746
Display Symmetrix storage pool size information
Sliced volumes do not appear in the output because the Symmetrix storage pool's
default_slice_flag value is set to no.
Use the -size -all option to display the size information for all storage pools.
Action
To display the size information of Symmetrix-related storage pools, use this command syntax:
$ nas_pool -size <name> -slice y
where:
<name> = name of the storage pool
Example:
To request size information for a specific Symmetrix storage pool, type:
$ nas_pool -size symm_std -slice y
Output
id = 5
name = symm_std
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985
Note
• Use the -slice y option to include any space from sliced volumes in the available result.
• The size information for the system-defined storage pool named clar_r5_performance appears in
the output. If you have more storage pools, the output shows the size information for all the
storage pools.
• used_mb is the used space in the specified storage pool in MB.
• avail_mb is the amount of unused available space in the storage pool in MB.
• total_mb is the total of used and unused space in the storage pool in MB.
• potential_mb is the potential amount of storage that can be added to the storage pool available
from all sources in MB. For user-defined storage pools, the output for potential_mb is 0 because
they must be manually extended and shrunk. In this example, total_mb and potential_mb are the
same because the total storage in the storage pool is equal to the total potential storage
available.
• If either non–MB-aligned disk volumes or disk volumes of different sizes are striped together,
truncation of storage might occur. The total amount of space added to a pool might be different
than the total amount taken from potential storage. Total space in the pool includes the truncated
space, but potential storage does not include the truncated space.
Modify system-defined and user-defined storage pool
attributes
System-defined and user-defined storage pools have attributes that control how
they manage the volumes and file systems. Table 8 on page 74 lists the modifiable
storage pool attributes, the value, and the attribute description.
Table 8    Storage pool attributes

Attribute: name (user-defined storage pools)
Values: quoted string. Modifiable: yes.
Description: Unique name. If a name is not specified during creation, one is automatically generated.

Attribute: description (user-defined storage pools)
Values: quoted string. Modifiable: yes.
Description: A text description. Default is "" (blank string).

Attribute: acl (user-defined storage pools)
Values: integer, for example, 0. Modifiable: yes.
Description: Access control level. Controlling Access to EMC Celerra System Objects contains instructions to manage access control levels.

Attribute: default_slice_flag (system-defined and user-defined storage pools)
Values: "y" | "n". Modifiable: yes.
Description: Answers the question, can AVM slice member volumes to meet the file system request? A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

Attribute: is_dynamic (system-defined storage pools)
Values: "y" | "n". Modifiable: yes. Only applicable if volume_profile is not blank.
Description: Answers the question, is this storage pool allowed to automatically add or remove member volumes? The default answer is n.

Attribute: is_greedy (system-defined storage pools)
Values: "y" | "n". Modifiable: yes. Only applicable if volume_profile is not blank.
Description: Answers the question, is this storage pool greedy? When a storage pool receives a request for space, a greedy storage pool (y) attempts to create a new member volume before searching for free space in existing member volumes. A storage pool that is not greedy (n) uses all available space in the storage pool before creating a new member volume.
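The is_greedy behavior described above amounts to a preference order when a pool receives a space request. The following Python sketch is a simplified, illustrative model of that decision, not AVM's actual allocator:

```python
# Simplified model of the is_greedy behavior: a greedy pool prefers to
# add a new member volume; a non-greedy pool prefers free space in
# existing members. Illustrative only.
def pick_space(request_mb, free_in_members_mb, is_greedy, can_add_member):
    fits = free_in_members_mb >= request_mb
    if is_greedy:
        if can_add_member:
            return "new_member"
        return "existing" if fits else "fail"
    if fits:
        return "existing"
    return "new_member" if can_add_member else "fail"

print(pick_space(1000, 5000, is_greedy=True, can_add_member=True))
print(pick_space(1000, 5000, is_greedy=False, can_add_member=True))
```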
You can change the attribute default_slice_flag for system-defined and user-defined
storage pools. It indicates whether member volumes can be sliced. If the
storage pool has member volumes built on one or more slices, you cannot set this
value to n.
Action
To modify the default_slice_flag for a system-defined or user-defined storage pool, use this
command syntax:
$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To modify a storage pool named marketing and change the default_slice_flag to prevent members
of the pool from being sliced when space is dispensed, type:
$ nas_pool -modify marketing -default_slice_flag n
Output
id                = 5
name              = marketing
description       = Storage pool for marketing
acl               = 0
in_use            = False
clients           =
members           = d126,d127,d128,d129
default_slice_flag= True
is_user_defined   = True
disk_type         = STD
server_visibility = server_2,server_3,server_4
Note
• When the default_slice_flag is set to y, it appears as True in the output.
• If using automatic file system extension, the default_slice_flag should be set to n.
Modify system-defined storage pool attributes
The system-defined storage pool attributes that can be modified are:
◆ -is_dynamic indicates whether the system-defined storage pool is allowed to automatically add or remove member volumes.
◆ -is_greedy: If -is_greedy is set to y, the system-defined storage pool attempts to create new member volumes before using space from existing member volumes. A system-defined storage pool that is not greedy (set to n) consumes all the existing space in the storage pool before trying to add additional member volumes.
The tasks to modify the attributes of a system-defined storage pool are:
◆ "Modify the -is_greedy attribute of a system-defined storage pool" on page 76
◆ "Modify the -is_dynamic attribute of a system-defined storage pool" on page 77
Managing EMC Celerra Volumes and File Systems with
Automatic Volume Management
Version 5.6.45
75 of 92
Modify the -is_greedy attribute of a system-defined storage pool
Action
To modify the -is_greedy attribute of a specific system-defined storage pool to allow the storage
pool to use new volumes rather than existing volumes, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the attribute -is_greedy to false, for the storage pool named clar_r5_performance, type:
$ nas_pool -modify clar_r5_performance -is_greedy n
Output
id                   = 3
name                 = clar_r5_performance
description          =
acl                  = 0
in_use               = False
clients              =
members              = d126,d127,d128,d129
default_slice_flag   = True
is_user_defined      = False
virtually_provisioned= False
is_greedy            = False
is_dynamic           = True
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Note
The n entered in the example delivers a False answer to the is_greedy attribute in the output.
Modify the -is_dynamic attribute of a system-defined storage pool
Action
To modify the -is_dynamic attribute of a specific system-defined storage pool to not allow the
storage pool to add or remove new members, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the attribute -is_dynamic to false to not allow the storage pool to add or remove new
members, for the storage pool named clar_r5_performance, type:
$ nas_pool -modify clar_r5_performance -is_dynamic n
Output
id                   = 3
name                 = clar_r5_performance
description          =
acl                  = 0
in_use               = False
clients              =
members              = d126,d127,d128,d129
default_slice_flag   = True
is_user_defined      = False
virtually_provisioned= False
is_greedy            = False
is_dynamic           = False
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Note
The n entered in the example delivers a False answer to the is_dynamic attribute in the output.
Modify user-defined storage pool attributes
The user-defined storage pool attributes that can be modified are:
◆ -name: Changes the name of the specified user-defined storage pool to the new name.
◆ -acl: Designates an access control level for a user-defined storage pool. The default value is 0.
◆ -description: Changes the description comment for the user-defined storage pool.
The tasks to modify the attributes of a user-defined storage pool are:
◆ "Modify the name of a user-defined storage pool" on page 78
◆ "Modify the access control of a user-defined storage pool" on page 78
◆ "Modify the description of a user-defined storage pool" on page 79
Modify the name of a user-defined storage pool
Action
To modify the name of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify <name> -name <new_name>
where:
<name> = old name of the storage pool
<new_name> = new name of the storage pool
Example:
To change the name of the storage pool named marketing to purchasing, type:
$ nas_pool -modify marketing -name purchasing
Output
id                   = 5
name                 = purchasing
description          = Storage pool for marketing
acl                  = 0
in_use               = False
clients              =
members              = d126,d127,d128,d129
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Note
The name change to purchasing appears in the output. The description does not change unless
the administrator changes it.
Modify the access control of a user-defined storage pool
Controlling Access to EMC Celerra System Objects contains instructions to
manage access control levels.
Note: The access control level change to 1 appears in the output. The description does not
change unless the administrator modifies it.
Action
To modify the access control level for a specific user-defined storage pool, use this command
syntax:
$ nas_pool -modify {<name>|id=<id>} -acl <acl>
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<acl> = designates an access control level for the new storage pool; default value is 0
Example:
To change the access control level for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -acl 1
Output
id                   = 5
name                 = purchasing
description          = Storage pool for marketing
acl                  = 1
in_use               = False
clients              =
members              = d126,d127,d128,d129
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Modify the description of a user-defined storage pool
Action
To modify the description of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -description <description>
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<description> = descriptive comment about the pool or its purpose; type the comment within
quotes
Example:
To change the descriptive comment for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -description "storage pool for purchasing"
Output
id                   = 15
name                 = purchasing
description          = Storage pool for purchasing
acl                  = 1
in_use               = False
clients              =
members              = d126,d127,d128,d129
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Extend a user-defined storage pool
You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a
user-defined storage pool.
Action
To extend the volumes for an existing user-defined storage pool, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -volumes [<volume_name>,....]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<volume_name> = names of the volumes separated by commas
Example:
To extend the volumes for the storage pool named engineering, with volumes d130, d131, d132,
and d133, type:
$ nas_pool -xtend engineering -volumes d130,d131,d132,d133
Output
id                   = 6
name                 = engineering
description          =
acl                  = 0
in_use               = False
clients              =
members              = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Note
The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes
added in the example.
Extend a system-defined storage pool
Specify a size by which you want AVM to expand a system-defined storage pool, and turn off the dynamic behavior of the pool to keep it from consuming additional disk volumes. Doing this:
◆ Uses the disk selection algorithms that AVM uses to create system-defined storage pool members.
◆ Prevents system-defined storage pools from rapidly consuming a large number of disk volumes.
Prerequisites
You can specify the storage system from which to allocate space to the pool. The
dynamic behavior of the system-defined storage pool must be turned off by using
the nas_pool -modify command before extending the pool.
On successful completion, the system-defined storage pool expands by at least the
specified size. The storage pool might expand more than the requested size. The
behavior is the same as when the storage pool is expanded during a file-system
creation.
If a storage system is not specified and the pool has members from a single storage
system, then the default is the existing storage system. If a storage system is not
specified and the pool has members from multiple storage systems, the existing set
of storage systems is used to extend the storage pool.
If a storage system is specified, space is allocated from the specified storage system:
◆ The specified pool must be a system-defined pool.
◆ The specified pool must have the is_dynamic attribute set to n, or false. "Modify system-defined storage pool attributes" on page 75 provides instructions to change the attribute.
◆ There must be enough disk volumes to satisfy the size requested.
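The storage-system selection rules above reduce to a simple preference. The helper below is hypothetical and purely illustrative of the default-selection logic, not part of the nas_pool command:

```python
# Sketch of the storage-system selection rules above (hypothetical helper):
# an explicit -storage argument wins; otherwise the pool's existing
# storage system(s) are used.
def storage_systems_for_extend(member_systems, requested=None):
    if requested is not None:
        return [requested]
    return sorted(set(member_systems))

print(storage_systems_for_extend(["APM00023700165", "APM00023700165"]))
print(storage_systems_for_extend(["APM1", "APM2"], requested="APM3"))
```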
Extend a system-defined storage pool by size
Action
To extend a system-defined storage pool by size and specify a storage system from which to
allocate space, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>
where:
<name> = name of the system-defined storage pool
<id> = ID of the storage pool
<integer> = size requested in MB or GB; default size unit is MB
<system_name> = name of the storage system from which to allocate the storage
Example:
To extend the system-defined clar_r5_performance storage pool by size and designate the
storage system from which to allocate space, type:
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165-0011
Output
id                   = 3
name                 = clar_r5_performance
description          =
acl                  = 0
in_use               = False
clients              =
members              = d11,d12,d13,d14
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
disk_type            = CLSTD
server_visibility    = server_2,server_3,server_4,server_5
Remove volumes from storage pools
Action
To remove volumes from a system-defined or user-defined storage pool, use this command
syntax:
$ nas_pool -shrink {<name>|id=<id>} -volumes [<volume_name>,....]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<volume_name> = names of the volumes separated by commas
Example:
To remove volumes d130 and d133 from the storage pool named marketing, type:
$ nas_pool -shrink marketing -volumes d130,d133
Output
id                   = 5
name                 = marketing
description          = Storage pool for marketing
acl                  = 0
in_use               = False
clients              =
members              = d126,d127,d128,d129,d131,d132
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= True
disk_type            = STD
server_visibility    = server_2,server_3,server_4
Delete user-defined storage pools
You can delete only a user-defined storage pool that is not in use. You must remove
all storage pool member volumes before deleting a user-defined storage pool. This
delete action removes the member volumes from the specified storage pool and
deletes the storage pool, but it does not delete the volumes themselves.
System-defined storage pools cannot be deleted.
Action
To delete a user-defined storage pool, use this command syntax:
$ nas_pool -delete <name>
where:
<name> = name of the storage pool
Example:
To delete the user-defined storage pool named sales, type:
$ nas_pool -delete sales
Output
id                   = 7
name                 = sales
description          =
acl                  = 0
in_use               = False
clients              =
members              =
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= True
Delete a user-defined storage pool and its volumes
The -deep option deletes the storage pool and also recursively deletes each
member of the storage pool unless it is in use or is a disk volume.
Action
To delete a user-defined storage pool and the volumes in it, use this command syntax:
$ nas_pool -delete {<name>|id=<id>} [-deep]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To delete the storage pool named sales, type:
$ nas_pool -delete sales -deep
Output
id                   = 7
name                 = sales
description          =
acl                  = 0
in_use               = False
clients              =
members              =
default_slice_flag   = True
is_user_defined      = True
virtually_provisioned= False
Troubleshooting
As part of an effort to continuously improve and enhance the performance and
capabilities of its product lines, EMC periodically releases new versions of its
hardware and software. Therefore, some functions described in this document may
not be supported by all versions of the software or hardware currently in use. For
the most up-to-date information on product features, refer to your product release
notes.
If a product does not function properly or does not function as described in this
document, contact your EMC Customer Support Representative.
Consider these steps when troubleshooting AVM:
◆ Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station before reporting problems, which helps to diagnose the problem faster. Additionally, save any files in /nas/tasks when problems are seen from Celerra Manager. The support material script collects information related to Celerra Manager and APL.
◆ Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information in /nas/log/nas_log.al.tran.
◆ Report any useful SYR data.
Where to get help
Product information – For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the EMC Powerlink
website (registration required) at http://Powerlink.EMC.com.
Troubleshooting – For troubleshooting information, go to Powerlink, search for
Celerra Tools, and select Celerra Troubleshooting from the navigation panel on
the left.
Technical support – For technical support, go to Powerlink and choose Support.
On the Support page, you can access Support Forums, request a product
enhancement, talk directly to an EMC representative, or open a service request. To
open a service request, you must have a valid support agreement. Contact your
EMC sales representative for details about obtaining a valid support agreement
or to answer any questions about your account.
Note: Do not request a specific support representative unless one has already been
assigned to your particular system problem.
Problem Resolution Roadmap for EMC Celerra contains additional information
about using Powerlink and resolving problems.
EMC E-Lab Interoperability Navigator
The EMC E-Lab™ Interoperability Navigator is a searchable, web-based
application that provides access to EMC interoperability support matrices. It is
available at http://Powerlink.EMC.com. After logging in to Powerlink, go to
Support > Interoperability and Product Lifecycle Information > E-Lab
Interoperability Navigator.
Known problems and limitations
Table 9 on page 86 describes known problems that might occur when using AVM
and automatic file system extension and presents workarounds.
Table 9 Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions
recognize temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools
or checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool.
This protects them from being used by system-defined storage pools (and
manual volume management).

Known problem: In an NFS environment, write activity to the file system
starts immediately when a file changes. When the file system reaches the
HWM, it begins to extend automatically but might not finish before the
Control Station issues a file system full error. This causes an automatic
extension failure.
Symptom: An error message indicating the failure of automatic extension
start, and a full file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file
system to ensure automatic extension can accommodate normal file system
activity.

Known problem: In a CIFS environment, the CIFS/Microsoft Windows client
performs Persistent Block Reservation (PBR) to reserve the space before
the writes begin. As a result, the file system full error occurs before
the HWM is reached and before automatic extension is initiated.
Workaround: Set the HWM to allow enough free space in the file system to
accommodate write operations to the largest average file in that file
system. For example, if you have a file system that is 100 GB, and the
largest average file in that file system is 20 GB, set the HWM for
automatic extension to 70%. Changes made to the 20 GB file might cause the
file system to reach the HWM, or 70 GB. This leaves 30 GB of space in the
file system to handle the file changes, and to initiate and complete
automatic extension without failure.
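The free-space check behind this sizing guideline can be sketched as a
quick shell calculation (the 100 GB / 70% / 20 GB values are the example
above; the script itself is illustrative and not a Celerra tool):

```shell
# Verify that a candidate HWM leaves more free space than the largest
# average file needs, so automatic extension can start and finish before
# the file system fills. All sizes are in GB.
fs_size=100; hwm=70; largest_file=20

# Free space remaining when the file system reaches the HWM.
free=$(( fs_size * (100 - hwm) / 100 ))

if [ "$free" -gt "$largest_file" ]; then
    echo "HWM ${hwm}% leaves ${free} GB free: OK"
else
    echo "HWM ${hwm}% leaves only ${free} GB free: lower the HWM"
fi
```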
Error messages
As of version 5.6, all new event, alert, and status messages provide detailed
information and recommended actions to help you troubleshoot the situation.
To view message details, use any of these methods:
◆ Celerra Manager: Right-click an event, alert, or status message and
select Event Details, Alert Details, or Status Details.
◆ Celerra CLI: Type nas_message -info <MessageID>, where MessageID is the
message identification number.
◆ EMC Celerra Network Server Error Messages Guide: Use this guide to
locate information about messages that are in the earlier-release message
format.
◆ Powerlink: Use the text from the error message’s brief description or
the message’s ID to search the Knowledgebase on Powerlink. After logging
in to Powerlink, go to Support > Knowledgebase Search > Support Solutions
Search.
EMC Training and Professional Services
EMC Customer Education courses help you learn how EMC storage products work
together within your environment to maximize your entire infrastructure
investment. EMC Customer Education features online and hands-on training in
state-of-the-art labs conveniently located throughout the world. EMC customer
training courses are developed and delivered by EMC experts. Go to EMC
Powerlink at http://Powerlink.EMC.com for course and registration information.
EMC Professional Services can help you implement your Celerra Network Server
efficiently. Consultants evaluate your business, IT processes, and technology and
recommend ways you can leverage your information for the most benefit. From
business plan to implementation, you get the experience and expertise you need,
without straining your IT staff or hiring and training new personnel. Contact your
EMC representative for more information.
Index
A
algorithm
automatic file system extension 34
CLARiiON 26
Symmetrix 29
attributes
storage pool, modifying 74, 75
storage pools 23
system-defined storage pools 75
user-defined storage pools 77
automatic file system extension
algorithm 34
and Celerra Replicator interoperability considerations 35
enabling 42
guidelines 38
how it works 16
maximum size 64
maximum size option 50
options 15
restrictions 4
Virtual Provisioning 65
Automatic Volume Management (AVM) 11
restrictions 4
storage pool 16
AVM. See Automatic Volume Management (AVM) 11
C
cautions 7
spanning storage systems 7
Celerra upgrade
automatic file system extension issue 7
character support, international 7
clar_r1 storage pool 19
clar_r5_economy storage pool 19
clar_r5_performance storage pool 19
clar_r6 storage pool 19
clarata_archive storage pool 19
clarata_r10 storage pool 19
clarata_r3 storage pool 19
clarata_r6 storage pool 19
CLARiiON thin pool, insufficient space 7
clarsas_archive storage pool 19
clarsas_r10 storage pool 20
clarsas_r6 storage pool 19
clarssd_r5 storage pool 20
cm_r1 storage pool 20
cm_r5_economy storage pool 20
cm_r5_performance storage pool 20
cm_r6 storage pool 20
cmata_archive storage pool 20
cmata_r10 storage pool 20
cmata_r3 storage pool 20
cmata_r6 storage pool 20
cmsas_archive storage pool 20
cmsas_r10 storage pool 20
cmsas_r6 storage pool 20
cmssd_r5 storage pool 21
concepts, AVM explanation 14
considerations 35
interoperability 35
D
details, displaying 71
displaying
details 71
size information 71
E
extending file systems 54, 55
with different storage pool 57
with user-defined storage pools 59
extending storage pools
system-defined 81
user-defined 80
F
file system
creating 69
default type 45
extending by size 55, 59
quotas 7
file system considerations 38
G
guidelines
automatic file system extension 38
I
International character support 7
K
known limitations 86
P
planning guidelines 38
profiles, volume, and storage 24
Q
quotas for file system 7
R
RAID group combinations 21
restrictions
automatic file system extension 4
AVM 4
Celerra file systems 7
Symmetrix volumes 4
TimeFinder/FS 7
S
storage pools
attributes 30
clar_r1 19
clar_r5_economy 19
clar_r5_performance 19
clar_r6 19
clarata_archive 19
clarata_r10 19
clarata_r3 19
clarata_r6 19
clarsas_archive 19
clarsas_r10 20
clarsas_r6 19
clarssd_r5 20
cm_r1 20
cm_r5_economy 20
cm_r5_performance 20
cm_r6 20
cmata_archive 20
cmata_r10 20
cmata_r3 20
cmata_r6 20
cmsas_archive 20
cmsas_r10 20
cmsas_r6 20
cmssd_r5 21
deleting user-defined storage pools 83
displaying details 71
displaying size information 71
explanation 16
extending system-defined storage pools 81
extending user-defined storage pools 80
list 70
managing 70
modifying attributes 74
remove volumes from user-defined storage pools 82
supported types 18
symm_ata 18
symm_ata_rdf_src 19
symm_ata_rdf_tgt 19
symm_ssd 19
symm_std 18
symm_std_rdf_src 18
symm_std_rdf_tgt 18
system-defined CLARiiON 24
system-defined Symmetrix 28
symm_ata storage pool 18
symm_ata_rdf_src storage pool 19
symm_ata_rdf_tgt storage pool 19
symm_ssd storage pool 19
symm_std storage pool 18
symm_std_rdf_src storage pool 18
symm_std_rdf_tgt storage pool 18
Symmetrix thin pool, insufficient space 7
system-defined storage pools 55
U
Unicode characters 7
upgrading, nas software 37
V
volume management, automatic 14
volume, AVM 14
About this document
As part of its effort to continuously improve and enhance the performance and capabilities of the Celerra Network Server product line, EMC
periodically releases new versions of Celerra hardware and software. Therefore, some functions described in this document may not be
supported by all versions of Celerra software or hardware presently in use. For the most up-to-date information on product features, see your
product release notes. If your Celerra system does not offer a function described in this document, contact your EMC Customer Support
Representative for a hardware upgrade or software update.
Comments and suggestions about documentation
Your suggestions will help us improve the accuracy, organization, and overall quality of the user documentation. Send a message to
techpubcomments@EMC.com with your opinions of this document.
Copyright © 1998-2009 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC
Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.