Embedded NAS CLI Reference Guide

EMC™ VMAX eNAS
Version 8.1.11.24
CLI Reference Guide
REVISION 01
Copyright © 2016 EMC Corporation. All rights reserved. Published in the USA.
Published September 2016
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other
countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.EMC.com
EMC VMAX eNAS 8.1.11.24 CLI Reference Guide
PREFACE
As part of an effort to improve its product lines, EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not
be supported by all versions of the software or hardware currently in use. The product
release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not
function as described in this document.
Note
This document was accurate at publication time. New versions of this document might be
released on EMC Online Support (https://support.emc.com). Check to ensure that you
are using the latest version of this document.
Purpose
This reference guide provides man pages for all the eNAS CLI commands.
Audience
This manual provides reference information for command-line users and script
programmers who focus on configuring and managing eNAS on VMAX arrays.
Related documentation
The following documents provide additional eNAS information:
- VMAX eNAS Release Notes
  Describes new features and identifies any known functionality restrictions and
  performance issues that may exist with the current version and your specific
  storage environment.
- VMAX eNAS File Auto Recovery with SRDF/S
  Describes how to install and use File Auto Recovery to fail over or move eNAS
  Virtual Data Movers from source eNAS systems to destination eNAS systems
  using SRDF/S.
- Using SRDF/S with VNX for Disaster Recovery
  Explains how to configure and manage SRDF/S.
- EMC VNX Command Line Interface Reference for File
  Explains the commands used to configure and manage an EMC file storage system.
- Managing Volumes and File Systems on VNX Manually
  Explains how to create and aggregate different volume types into usable file
  system storage.
- Using VNX SnapSure
  Explains how to use EMC SnapSure to create and manage checkpoints.
- Configuring Virtual Data Movers on VNX
  Explains how to configure and manage VDMs on a file storage system.
- Configuring CIFS on VNX
  Explains how to configure and manage CIFS.
- Parameters Guide for VNX for File
  Explains how to view and modify parameters and system settings.
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Note
To open a service request through EMC Online Support (https://support.emc.com), you
must have a valid support agreement. Contact your EMC sales representative for details
about obtaining a valid support agreement or to answer any questions about your
account.
Product information
For documentation, release notes, software updates, or information about EMC
products, go to EMC Online Support at https://support.emc.com.
Technical support
EMC offers a variety of support options.
- Support by Product — EMC offers consolidated, product-specific information on
  the Web at: https://support.EMC.com/products
  The Support by Product web pages offer quick links to Documentation, White
  Papers, Advisories (such as frequently used Knowledgebase articles), and
  Downloads, as well as more dynamic content, such as presentations,
  discussions, relevant Customer Support Forum entries, and a link to EMC Live
  Chat.
- EMC Live Chat — Open a Chat or instant message session with an EMC Support
  Engineer.
eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service
Center on https://support.EMC.com, as directed on your License Authorization Code
(LAC) letter emailed to you.
- For help with missing or incorrect entitlements after activation (that is,
  expected functionality remains unavailable because it is not licensed),
  contact your EMC Account Representative or Authorized Reseller.
- For help with any errors applying license files through Solutions Enabler,
  contact the EMC Customer Support Center.
- If you are missing a LAC letter, or require further instructions on
  activating your licenses through the Online Support site, contact EMC's
  worldwide Licensing team at licensing@emc.com or call:
  - North America, Latin America, APJK, Australia, New Zealand: SVC4EMC
    (800-782-4362) and follow the voice prompts.
  - EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Your comments
Your suggestions help us improve the accuracy, organization, and overall quality of the
documentation. Send your comments and feedback to:
VMAXContentFeedback@emc.com
eNAS components
The following terminology is used throughout this document:
- Management Module Control Station (MMCS): Used by EMC Customer Support to
  configure eNAS, if necessary.
- Network Address Translation (NAT) Gateway: Used to configure the external IP
  address of the Control Station.
- Control Station (CS): Provides management functions to the file-side
  components referred to as Data Movers.
- Data Mover (DM): Clients communicate with a Data Mover using the NFS and/or
  CIFS/SMB protocols. Clients are physically connected to the Data Mover
  through I/O modules on the storage array that are assigned to the Data Mover.
  The Data Mover accesses the client data by way of an internal interface to
  the storage array on which the Data Mover resides.
Control station
The Control Station provides utilities for managing, configuring, and monitoring of the
Data Movers in the eNAS system.
As the system administrator, you may type commands through the Control Station to
perform tasks that include the following:
- Managing and configuring the database and Data Movers
- Monitoring statistics of the eNAS components
Accessing the Control Station
You may use either local or remote access to the Control Station.
Note

For local access, a connection to the Control Station serial port must be
established.

- Local access to the command line interface is available directly at the
  Control Station console.
- Remote access to the command line interface, using a secure, encrypted login
  application, allows the use of the eNAS command set.
Accessing the command line interface
A description of how to gain local or remote access to the command line interface for the
eNAS follows.
Note
For a local connection, connect a client to the Control Station serial port.
- For local access to the command line interface, at the prompt, log in with
  your administrative username and password.
Establish the connection to the Control Station with the following settings:
Table 1 Control Station serial port connection settings

Setting               Value
Bits per second       19200
Data bits             8
Parity                None
Stop bits             1
Flow control          None
Emulation             Auto Detect
Telnet terminal ID    ANSI
- For remote access to the command line interface:
1. Use a secure, encrypted, remote login application capable of SSH. Type the IP
address of the Control Station.
2. Log in with your administrative username and password.
Role-Based access
The administrative user account you use to access the command line interface is
associated with specific privileges, also referred to as roles. A role defines the privileges
(operations) a user can perform on a particular eNAS object. The ability to select a
predefined role or define a custom role that gives a user certain privileges is supported
for users who access eNAS through the CLI, EMC Unisphere™, and the XML API.
The Security Configuration Guide for VNX provides detailed information about
how role-based access is used to determine the commands a particular user can
execute. You create and manage user accounts and roles in Unisphere by using
Settings > User Management.
Command set conventions
This manual uses commonly known command set conventions for the eNAS for file
man pages. Each man page presents the command name at the top, followed by a
brief overview of what the command does. The synopsis contains the actual
command usage. The description provides a more detailed breakdown of the
features of the command, and the options describe specifically what each
switch or option does. The 'See Also' section refers to the technical modules
that support the feature, in addition to any other commands that interact with
the command. The examples appear at the end of each man page.
The naming convention for the Data Mover variable in the command line interface is
<movername> (default = server_2 to server_9).
The commands are prefixed, then appear in alphabetical order.
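The <movername> range above can be expanded in a wrapper script. A minimal sketch, assuming the default server_2 through server_9 naming (the loop itself is illustrative, not an eNAS utility):

```shell
# Expand the default Data Mover name range, server_2 through server_9.
for i in 2 3 4 5 6 7 8 9; do
  echo "server_$i"
done
```

A script could iterate this list to run the same server_ command against every Data Mover.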
Synopsis
The synopsis shows the usage of each command. It appears in Courier typeface,
with variables such as movername, filename, and device name enclosed in angle
brackets, and with the command name appearing in bold. The switches and other
options also appear in bold and, in most cases, are prefixed by a minus sign:
server_umount {<movername>|ALL} [-perm|-temp] {-all|<fs_name>|<pathname>}
Required entries
A switch or variable enclosed with curly brackets, or not enclosed at all, indicates a
required entry:
{<movername>|ALL}
Optional entries
A switch or variable enclosed with square brackets indicates an optional entry:
[-perm|-temp]
Formatting
The variable name enclosed by angle brackets indicates the name of a specified object:
{<movername>|ALL}
Options
An option is prefixed with a minus (-) sign: -perm
If the option is spelled out, for example, -perm, in the command syntax, you may use just
the first letter: -p
Options and names are case-sensitive. If an uppercase letter is specified in the syntax, a
lowercase letter is not accepted.
The vertical bar symbol ( | ) represents or, meaning an alternate selection:
{-all|<fs_name>|<pathname>}
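The alternation convention above can be mirrored when wrapping eNAS commands in scripts. A hypothetical sketch (parse_target is illustrative only and not part of the eNAS CLI):

```shell
# Accept exactly one of the {-all|<fs_name>|<pathname>} alternatives.
parse_target() {
  case "$1" in
    -all) echo "target: all file systems" ;;
    /*)   echo "target: pathname $1" ;;
    "")   echo "usage: {-all|<fs_name>|<pathname>}" >&2; return 1 ;;
    *)    echo "target: fs_name $1" ;;
  esac
}

parse_target -all        # -> target: all file systems
parse_target /ufs1/mnt   # -> target: pathname /ufs1/mnt
parse_target ufs1        # -> target: fs_name ufs1
```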
Command prefixes
Commands are prefixed depending on what they are administering. For example,
commands prefixed with:
- cel_ commands execute on the remotely linked eNAS system.
- cs_ commands execute on the Control Station.
- fs_ commands execute on the specified file system.
- nas_ commands execute directly on the Control Station database.
- server_ commands require a movername entry and execute directly on a Data
  Mover.
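The prefix-to-target mapping can be derived mechanically from a command name. A hedged sketch over a small, hypothetical sample of command names:

```shell
# Derive the administrative target from each command's prefix (the part
# before the first underscore).
for cmd in cel_link cs_standby fs_ckpt nas_acl server_umount; do
  echo "${cmd%%_*}: $cmd"
done
```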
General notes
Note the following:
- If a command is interrupted by using Ctrl-C, the following messages or traces
  on the console are expected:
  - nas_cmd: system execution failed.
  - nas_cmd: PANIC: caught signal #11 (Segmentation fault) -- Giving up
- Use the eNAS CLI for file to add IPv6 addresses to the NFS export host list.
  Enclose the IPv6 address in { } or square brackets in the CLI. The IPv6
  addresses added to the NFS export list by using the CLI are displayed as
  read-only fields in the Unisphere software.
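The bracketing rule for IPv6 export entries can be automated in a script. A hypothetical helper sketch (bracket_if_ipv6 is not an eNAS command; it simply treats any address containing a colon as IPv6):

```shell
# Wrap an address in square brackets when it looks like IPv6 (contains a
# colon); leave IPv4 addresses unchanged.
bracket_if_ipv6() {
  case "$1" in
    *:*) echo "[$1]" ;;
    *)   echo "$1" ;;
  esac
}

bracket_if_ipv6 2001:db8::1    # -> [2001:db8::1]
bracket_if_ipv6 10.172.128.47  # -> 10.172.128.47
```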
NAS CLI Commands

This chapter lists the eNAS command set provided for managing, configuring,
and monitoring of the NAS database. The commands are prefixed with nas_ and
appear alphabetically. The command line syntax (Synopsis), a description of
the options, and an example of usage are provided for each command.
nas_acl
nas_autodiskmark
nas_automountmap
nas_ca_certificate
nas_cel
nas_checkup
nas_ckpt_schedule
nas_config
nas_connecthome
nas_copy
nas_cs
nas_dbtable
nas_devicegroup
nas_disk
nas_diskmark
nas_emailuser
nas_environment
nas_event
nas_fs
nas_fsck
nas_halt
nas_inventory
nas_license
nas_logviewer
nas_message
nas_migrate
nas_mview
nas_pool
nas_quotas
nas_rdf
nas_replicate
nas_server
nas_stats
nas_storage
nas_syncrep
nas_task
nas_version
nas_volume
nas_acl
Manages the access control level table.
SYNOPSIS
--------
nas_acl
-list
| -info {-user|-group|-owner} <numerical_id>
| -delete {-user|-group} <numerical_id>
|[-name <name>] -create {-user|-group} <numerical_id> level=<acl_level>
| -modify {-user|-group} <numerical_id>
{[num_id=<numerical_id>][,level=<acl_level>]}
DESCRIPTION
-----------
nas_acl creates, lists, and displays information for access control level
entries within the table, and deletes the specified group or entries.
The access control level table is created and recognized in the NAS database
and contains assigned levels for users and groups. A user must be defined in
the /etc/passwd file prior to being assigned an entry in the table. Creating
an access control level entry defines the access level allowed for the user or
group once a value has been established for an object.
Note: root privileges are required to create, modify, or delete the access
control level table. The root user is permitted access to all objects.
OPTIONS
-------
-list
Lists the access control level table.
-info {-user|-group|-owner} <numerical_id>
Displays information for the user, group, or index entry of the owner as
specified by the <numerical_id>.
-delete {-user|-group} <numerical_id>
Deletes the entry for the specified user or group from the access control
level table.
-create {-user|-group} <numerical_id> level=<acl_level>
Creates an access control level entry for the specified user or group. The
<numerical_id> can be a user ID (UID) or group ID (GID).
Note: Before executing this command, the user or group must exist in the
Control Station in the /etc/passwd file or the /etc/group file.
The <acl_level> is a single digit (from 2 through 9) representing the
available access control levels. Levels 2, 3, and 4, which are established by
default, are:
Level 2, admin: The most privileged level; includes the privileges allowed
at the operator and observer levels.
Level 3, operator: Includes the privileges of the observer level.
Level 4, observer: The least privileged level.
Levels 5 through 9 are available for configuration.
[-name <name>]
The name is case-sensitive and indicates a name by which the entry is
referred.
Once a value has been set, the level assigned to the user or group is checked in
the ACL table and the level of access to the object is determined.
-modify {-user|-group} <numerical_id> {[num_id=<numerical_id>]
[,level=<acl_level>] }
Modifies the <numerical_id> and level for an access control level entry.
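The default level-to-role mapping described under -create can be captured in a small lookup for scripting. A hypothetical sketch (acl_level_name is illustrative, not an eNAS command):

```shell
# Map the default acl_level digits to role names; levels 5-9 are
# site-configurable, so they are reported generically.
acl_level_name() {
  case "$1" in
    2) echo admin ;;
    3) echo operator ;;
    4) echo observer ;;
    [5-9]) echo "custom (site-defined)" ;;
    *) echo "invalid acl_level: $1" >&2; return 1 ;;
  esac
}

acl_level_name 2   # -> admin
acl_level_name 7   # -> custom (site-defined)
```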
SEE ALSO
--------
Controlling Access to System Objects on VNX, nas_fs, nas_volume, nas_rp, and
nas_storage.
EXAMPLE #1
----------
Before creating access control level entries, su to root. To create entries in
the access control level table, type:
# nas_acl -name user1 -create -user 211 level=3
done
# nas_acl -name user2 -create -user 212 level=2
done
# nas_acl -name user3 -create -user 213 level=4
done
# nas_acl -name user4 -create -user 214 level=2
done
# nas_acl -name user5 -create -user 215 level=3
done
# nas_acl -name user6 -create -user 216 level=4
done
EXAMPLE #2
----------
To display the access control level table, type:

$ nas_acl -list
index   type   level      num_id   name
1       user   admin      201      nasadmin
2       user   operator   211      user1
3       user   admin      212      user2
4       user   observer   213      user3
5       user   admin      214      user4
6       user   operator   215      user5
7       user   observer   216      user6

Where:
Value    Definition
index    Access control level table index entry number.
type     User or group for the entry.
level    Level of access permitted.
num_id   Numerical ID for identifying the entry.
name     Name given to the entry.
EXAMPLE #3
----------
To display information for an access control level entry, type:

$ nas_acl -info -user 211
id      = 2
name    = user1
level   = operator
user_id = 211

Where:
Value     Definition
id        Index entry.
name      Name given for the entry.
level     Level of access permitted.
user_id   Also known as the num_id.
EXAMPLE #4
----------
To modify an access control level entry, type:
# nas_acl -modify -user 211 level=7
done
EXAMPLE #5
----------
To delete an access control level entry, type:
# nas_acl -delete -user 211
done
--------------------------------------
Last Modified: March 3, 2011 12:05 pm
nas_autodiskmark
Enables or disables the autodiskmark feature.
SYNOPSIS
--------
nas_autodiskmark
-info
| -modify -enabled { yes | no }
DESCRIPTION
-----------
nas_autodiskmark enables or disables the autodiskmark feature.
OPTIONS
-------
-info
Displays whether the autodiskmark feature is enabled.
-modify
-enabled { yes | no }
Enables or disables the autodiskmark feature.
EXAMPLE #1
----------
To check whether the autodiskmark feature is enabled, type:
$ nas_autodiskmark -info
Feature Enabled = No
$ nas_autodiskmark -info
Feature Enabled = Yes
EXAMPLE #2
----------
To enable or disable the autodiskmark feature, type:
$ nas_autodiskmark -modify -enabled yes
OK
nas_automountmap
Manages the automount map file.
SYNOPSIS
--------
nas_automountmap
-list_conflict <infile> [-out <outfile>]
| -create [-in <infile>] [-out <outfile>]
DESCRIPTION
-----------
nas_automountmap creates and displays an automount map containing all
permanently exported file systems used by the automount daemon.
OPTIONS
-------
-list_conflict <infile>
Prints a list of the mount points that are used more than once.
[ -out <outfile>]
Prints a conflicting list and saves it to an <outfile>.
-create
Creates an automount map and prints it to the screen only.
[-in <infile>] [-out <outfile>]
Merges an automount map with an existing map <infile> and outputs
it to an <outfile>.
[-out <outfile>]
Creates an automount map and outputs it to an <outfile>.
EXAMPLE #1
----------
To create an automount map, type:
$ nas_automountmap -create
ufs1 -rw,intr,nosuid 127.0.0.1,10.172.128.47,128.221.253.2,128.221.252.2:/ufs1
ufs2 -rw,intr,nosuid 127.0.0.1,10.172.128.47,128.221.253.2,128.221.252.2:/ufs2
EXAMPLE #2
----------
To create an automount map and save it to a file, type:
$ nas_automountmap -create -out automountmap
$ more automountmap
ufs1 -rw,intr,nosuid 127.0.0.1,10.172.128.47,128.221.253.2,128.221.252.2:/ufs1
ufs2 -rw,intr,nosuid 127.0.0.1,10.172.128.47,128.221.253.2,128.221.252.2:/ufs2
EXAMPLE #3
----------
To print a conflicting list, type:
$ nas_automountmap -list_conflict automountmap
Conflicting lists:
ufs1 -rw,intr,suid 172.16.21.202:/ufs1
ufs1_172.16.21.203 -rw,intr,suid 172.16.21.203:/ufs1
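The conflict detection shown in EXAMPLE #3 amounts to finding mount-point keys that occur more than once in the map. A hedged awk sketch over an illustrative sample map (not the actual -list_conflict implementation):

```shell
# Report any mount-point key (first field) that appears more than once.
map='ufs1 -rw,intr,suid 172.16.21.202:/ufs1
ufs1 -rw,intr,suid 172.16.21.203:/ufs1
ufs2 -rw,intr,nosuid 172.16.21.202:/ufs2'

echo "$map" | awk '{ count[$1]++ } END { for (k in count) if (count[k] > 1) print k }'
```

Here only ufs1 is reported, because it is the only key with two entries.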
EXAMPLE #4
----------
To merge an automount map file with an existing map file, type:
$ nas_automountmap -create -in automountmap -out automountmap1
--------------------------------------
Last Modified: March 3, 2011 12:10 pm
nas_ca_certificate
Manages the Control Station as a Certificate Authority (CA) for VNX’s Public
Key Infrastructure (PKI).
SYNOPSIS
--------
nas_ca_certificate
-display
| -generate
DESCRIPTION
-----------
nas_ca_certificate generates a public/private key set and a CA certificate for
the Control Station. When the Control Station is serving as a CA, it must have
a private key with which to sign the certificates it generates for the Data
Mover. The Control Station CA certificate contains the corresponding public
key, which is used by clients to verify the signature on a certificate
received from the Data Mover.
nas_ca_certificate also displays the text of the CA certificate so you can
copy it and distribute it to network clients. In order for a network client to
validate a certificate sent by a Data Mover that has been signed by the
Control Station, the client needs the Control Station CA certificate
(specifically the public key from the CA certificate) to verify the signature
of the Data Mover's certificate.
The initial Control Station public/private key set and CA certificate are
generated automatically during a VNX software 5.6 install or upgrade. A new
Control Station public/private key set and CA certificate is not required
unless the CA key set is compromised or the CA certificate expires. The
Control Station CA certificate is valid for 5 years.
You must be root to execute the -generate option from the /nas/sbin directory.
Once a Control Station CA certificate is generated, you must perform several
additional tasks to ensure that the new certificate is integrated into VNX's
PKI framework. The Security Configuration Guide for File and the Unisphere
online help for the PKI interface explain these tasks.
OPTIONS
-------
-display
Displays the Control Station CA certificate. The certificate text is displayed
on the terminal screen. Alternatively, you can redirect it to a file.
-generate
Generates a new CA public/private key set and certificate for the Control
Station. This certificate is valid for 5 years from the date it is generated.
SEE ALSO
--------
server_certificate.
EXAMPLE #1
----------
To generate a new Control Station CA certificate, type:
# /nas/sbin/nas_ca_certificate -generate
New keys and certificate were successfully generated.
EXAMPLE #2
----------
To display the Control Station's CA certificate, type:
# /nas/sbin/nas_ca_certificate -display
Clients need only the certificate text enclosed by BEGIN CERTIFICATE and END
CERTIFICATE, although most clients can handle the entire output.
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 3 (0x3)
Signature Algorithm: sha1WithRSAEncryption
Issuer: O=Celerra Certificate Authority, CN=eng173100
Validity
Not Before: Mar 23 21:07:40 2007 GMT
Not After : Mar 21 21:07:40 2012 GMT
Subject: O=Celerra Certificate Authority, CN=eng173100
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (2048 bit)
Modulus (2048 bit):
00:da:b2:37:86:05:a3:73:d5:9a:04:ba:db:05:97:
d2:12:fe:1a:79:06:19:eb:c7:2c:c2:51:93:7f:7a:
93:59:37:63:1e:53:b3:8d:d2:7f:f0:e3:49:42:22:
f4:26:9b:b4:e4:a6:40:6d:8d:e7:ea:07:8e:ca:b7:
7e:88:71:9d:11:27:5a:e3:57:16:03:a7:ee:19:25:
07:d9:42:17:b4:eb:e6:97:61:13:54:62:03:ec:93:
b7:e6:f1:7f:21:f0:71:2d:c4:8a:8f:20:d1:ab:5a:
6a:6c:f1:f6:2f:26:8c:39:32:93:93:67:bb:03:a7:
22:29:00:11:e0:a1:12:4b:02:79:fb:0f:fc:54:90:
30:65:cd:ea:e6:84:cc:91:fe:21:9c:c1:91:f3:17:
1e:44:7b:6f:23:e9:17:63:88:92:ea:80:a5:ca:38:
9a:b3:f8:08:cb:32:16:56:8b:c4:f7:54:ef:75:db:
36:7e:cf:ef:75:44:11:69:bf:7c:06:97:d1:87:ff:
5f:22:b5:ad:c3:94:a5:f8:a7:69:21:60:5a:04:5e:
00:15:04:77:47:03:ec:c5:7a:a2:bf:32:0e:4d:d8:
dc:44:fa:26:39:16:84:a7:1f:11:ef:a3:37:39:a6:
35:b1:e9:a8:aa:a8:4a:72:8a:b8:c4:bf:04:70:12:
b3:31
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
35:06:F2:FE:CC:21:4B:92:DA:74:C9:47:CE:BB:37:21:5E:04:E2:E6
X509v3 Authority Key Identifier:
keyid:35:06:F2:FE:CC:21:4B:92:DA:74:C9:47:CE:BB:37:21:5E:04:E2:E6
DirName:/O=Celerra Certificate Authority/CN=eng173100
serial:00
X509v3 Basic Constraints:
CA:TRUE
X509v3 Subject Alternative Name:
DNS:eng173100
Signature Algorithm: sha1WithRSAEncryption
09:c3:13:26:16:be:44:56:82:5d:0e:63:07:19:28:f3:6a:c4:
f3:bf:93:25:85:c3:55:48:4e:07:84:1d:ea:18:cf:8b:b8:2d:
54:13:25:2f:c9:75:c1:28:39:88:91:04:df:47:2c:c0:8f:a4:
ba:a6:cd:aa:59:8a:33:7d:55:29:aa:23:59:ab:be:1d:57:f6:
20:e7:2b:68:98:f2:5d:ed:58:31:d5:62:85:5d:6a:3f:6d:2b:
2d:f3:41:be:97:3f:cf:05:8b:7e:f5:d7:e8:7c:66:b2:ea:ed:
58:d4:f0:1c:91:d8:80:af:3c:ff:14:b6:e7:51:73:bb:64:84:
26:95:67:c6:60:32:67:c1:f7:66:f4:79:b5:5d:32:33:3c:00:
8c:75:7d:02:06:d3:1a:4e:18:0b:86:78:24:37:18:20:31:61:
59:dd:78:1f:88:f8:38:a0:f4:25:2e:c8:85:4f:ce:8a:88:f4:
4f:12:7e:ee:84:52:b4:91:fe:ff:07:6c:32:ca:41:d0:a6:c0:
9d:8f:cc:e8:74:ee:ab:f3:a5:b9:ad:bb:d7:79:67:89:34:52:
b4:6b:39:db:83:27:43:84:c3:c3:ca:cd:b2:0c:1d:f5:20:de:
7a:dc:f0:1f:fc:70:5b:71:bf:e3:14:31:4c:7e:eb:b5:11:9c:
96:bf:fe:6f
-----BEGIN CERTIFICATE-----
MIIDoDCCAoigAwIBAgIBAzANBgkqhkiG9w0BAQUFADA8MSYwJAYDVQQKEx1DZWxl
cnJhIENlcnRpZmljYXRlIEF1dGhvcml0eTESMBAGA1UEAxMJZW5nMTczMTAwMB4X
DTA3MDMyMzIxMDc0MFoXDTEyMDMyMTIxMDc0MFowPDEmMCQGA1UEChMdQ2VsZXJy
YSBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxEjAQBgNVBAMTCWVuZzE3MzEwMDCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANqyN4YFo3PVmgS62wWX0hL+GnkG
GevHLMJRk396k1k3Yx5Ts43Sf/DjSUIi9CabtOSmQG2N5+oHjsq3fohxnREnWuNX
FgOn7hklB9lCF7Tr5pdhE1RiA+yTt+bxfyHwcS3Eio8g0ataamzx9i8mjDkyk5Nn
uwOnIikAEeChEksCefsP/FSQMGXN6uaEzJH+IZzBkfMXHkR7byPpF2OIkuqApco4
mrP4CMsyFlaLxPdU73XbNn7P73VEEWm/fAaX0Yf/XyK1rcOUpfinaSFgWgReABUE
d0cD7MV6or8yDk3Y3ET6JjkWhKcfEe+jNzmmNbHpqKqoSnKKuMS/BHASszECAwEA
AaOBrDCBqTAdBgNVHQ4EFgQUNQby/swhS5LadMlHzrs3IV4E4uYwZAYDVR0jBF0w
W4AUNQby/swhS5LadMlHzrs3IV4E4uahQKQ+MDwxJjAkBgNVBAoTHUNlbGVycmEg
Q2VydGlmaWNhdGUgQXV0aG9yaXR5MRIwEAYDVQQDEwllbmcxNzMxMDCCAQAwDAYD
VR0TBAUwAwEB/zAUBgNVHREEDTALggllbmcxNzMxMDAwDQYJKoZIhvcNAQEFBQAD
ggEBAAnDEyYWvkRWgl0OYwcZKPNqxPO/kyWFw1VITgeEHeoYz4u4LVQTJS/JdcEo
OYiRBN9HLMCPpLqmzapZijN9VSmqI1mrvh1X9iDnK2iY8l3tWDHVYoVdaj9tKy3z
Qb6XP88Fi3711+h8ZrLq7VjU8ByR2ICvPP8UtudRc7tkhCaVZ8ZgMmfB92b0ebVd
MjM8AIx1fQIG0xpOGAuGeCQ3GCAxYVndeB+I+Dig9CUuyIVPzoqI9E8Sfu6EUrSR
/v8HbDLKQdCmwJ2PzOh07qvzpbmtu9d5Z4k0UrRrOduDJ0OEw8PKzbIMHfUg3nrc
8B/8cFtxv+MUMUx+67URnJa//m8=
-----END CERTIFICATE-----
--------------------------------------
Last Modified: March 3, 2011 12:37 pm
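As the note in EXAMPLE #2 states, clients need only the text between the BEGIN and END CERTIFICATE markers. A hedged sed sketch for extracting that block from the -display output (extract_pem and the sample input are hypothetical):

```shell
# Print only the PEM block from certificate output piped on stdin.
extract_pem() {
  sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
}

# Illustrative input standing in for nas_ca_certificate -display output.
printf '%s\n' 'Certificate:' '    Data:' \
  '-----BEGIN CERTIFICATE-----' 'MIIDoDCCAoigAwIBAgIBAzAN...' \
  '-----END CERTIFICATE-----' | extract_pem
```

In practice the input might come from something like /nas/sbin/nas_ca_certificate -display | extract_pem > ca.pem (hypothetical usage).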
nas_cel
Manages a remotely linked VNX or a linked pair of Data Movers.
SYNOPSIS
--------
nas_cel
-list
| -delete {<cel_name>|id=<cel_id>} [-Force]
| -info {<cel_name>|id=<cel_id>}
| -update {<cel_name>|id=<cel_id>}
| -modify {<cel_name>|id=<cel_id>}
{[-passphrase <passphrase>][-name <new_name>][-ip <ipaddr>[,<ipaddr>,...]]}
| -create <cel_name> -ip <ipaddr>[,<ipaddr>,...] -passphrase <passphrase>
| -interconnect <interconnect_options>
| -syncrep <syncrep_options>
DESCRIPTION
-----------
nas_cel manages the linking of the remote VNX to the local VNX. nas_cel also
creates the trusted relationship between source and destination eNAS Control
Stations in configurations such as EMC eNAS Replicator.
For eNAS Replicator only, nas_cel -interconnect also builds the connection
(interconnect) between a pair of Data Movers.
For VDM Sync Replication only, nas_cel -syncrep also sets up the RP/SRDF
connection between a pair of Embedded NAS systems.
Linked VNX systems are acknowledged:
1. Automatically during the installation.
2. When executing the nas_cel -create command.
3. When performing a nas_rdf -init to set up the SRDF relationship between two
   eNAS systems.
OPTIONS
-------
-list
Lists all VNX linked to the current VNX. The hostname of the Control Station
active during installation appears as the <cel_name>.
The ID of the object is an integer and is assigned automatically. The name of
the VNX might be truncated if it is too long for the display. To view the full
name, use the -info option with the VNX ID.
-delete {<cel_name>|id=<cel_id>} [-Force]
Deletes the relationship of the remote VNX, and removes its entry from the NAS
database on the local VNX.
The -Force option applies to SRDF and EMC MirrorView/S configurations only.
If the VNX to be deleted is part of an SRDF or MirrorView/S configuration,
-delete must be specified with the -Force option; otherwise, an error is
generated. You cannot use -Force if the specified VNX is also being used by
VNX Replicator, file system copy (for example, with nas_copy), or
TimeFinder/FS NearCopy or FarCopy. If the deletion is necessary, clean up
these configurations before performing the forced deletion.
-info {<cel_name>|id=<cel_id>}
Displays information for the remote VNX. To view the <cel_id> of configured
VNX, use -list.
-update {<cel_name>|id=<cel_id>}
Updates the local VNX entry with the local Control Station’s hostname and IP
address configuration. It also updates the local Data Mover-to-Data Mover
authentication setup.
For the remote VNX, updates all Data Movers that were down or experiencing
errors during the -create or -modify and restores them to service by using the
configuration required for Data Mover authentication.
Data Mover authentication is used in iSCSI replication as the mechanism
enabling two Data Movers (local or remote) to authenticate themselves and
perform the requested operations. The -update option communicates with each
Data Mover and either updates the configuration, or creates the configuration
if it is being done for the first time.
-modify {<cel_name>|id=<cel_id>}
{[-passphrase <passphrase>][-name <new_name>][-ip <ipaddr>]}
Changes the current passphrase, name, or IP address of the remote VNX to the
new passphrase, name, or IP address in the local VNX database and modifies the
remote Data Mover authentication setup by communicating with each Data Mover
in the cabinet. The passphrase must have 6 to 15 characters.
-create <cel_name> -ip <ipaddr>[,<ipaddr>,...] -passphrase <passphrase>
Builds the trusted relationship between one VNX and another VNX in a
configuration such as VNX Replicator, SRDF, and MirrorView/S.
The -create must be executed twice to ensure communication from both sides,
first on the source VNX (to identify the destination VNX) and then on the
destination VNX (to identify the source VNX). You must assign a name when you
create the relationship (for example, a name that identifies the remote VNX in
a local entry). The IP address specified represents the appropriate remote
VNX's primary Control Station (in slot 0); the passphrase specified is used to
manage the remote VNX. The passphrase must have 6 to 15 characters and be the
same between the source and destination VNXs to enable communication.
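The passphrase constraint above (6 to 15 characters, identical on both sides) can be pre-checked before running nas_cel -create. A hypothetical sketch (valid_passphrase is illustrative only):

```shell
# Return success only when the passphrase length is within 6-15 characters.
valid_passphrase() {
  p=$1
  [ "${#p}" -ge 6 ] && [ "${#p}" -le 15 ]
}

valid_passphrase "nasadmin" && echo "length ok"
valid_passphrase "short" || echo "too short"
```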
INTERCONNECT OPTIONS
--------------------
Type nas_cel -interconnect to display interconnect options:
-interconnect
{ -create <name>
-source_server <movername>
-destination_system {<cel_name>|id=<cel_id>}
-destination_server <movername>
-source_interfaces {<name_service_interface_name>|ip=<ipaddr>}
[,{<name_service_interface_name>|ip=<ipaddr>},...]
-destination_interfaces {<name_service_interface_name>|
ip=<ipaddr>}[,{<name_service_interface_name>|ip=<ipaddr>},...]
[-bandwidth <bandwidthSched>]
| -modify {<name>|id=<interConnectId>}
  {[-source_interfaces {<name_service_interface_name>|ip=<ipaddr>},...]
  [-destination_interfaces {<name_service_interface_name>|ip=<ipaddr>},...]
  [-bandwidth <bandwidthSched>]
  [-name <newName>]}
| -pause {<name>|id=<interConnectId>}
| -resume {<name>|id=<interConnectId>}
| -delete {<name>|id=<interConnectId>}
| -info {<name>|id=<interConnectId>|-all}
| -list [-destination_system {<cel_name>|id=<cel_id>}]
| -validate {<name>|id=<interconnectId>}}
An interconnect supports VNX Replicator sessions by defining the
communications path between a given Data Mover pair located on the same
cabinet or different cabinets. The interconnect configures a list of local
(source) and peer (destination) interfaces (using IP addresses and interface
names), and a bandwidth schedule for all replication sessions using the
interconnect. Only one interconnect can be established for a given Data Mover
pair in any direction.
Note: You must delete all user-defined interconnects configured for a Data
Mover before you can rename it. After you rename the Data Mover, you must
re-create the source and peer interconnects with the new Data Mover name and
then restart any associated replication sessions.
To fully establish an interconnect, nas_cel -interconnect must be issued
twice, once from each side (the local side and its peer side). Both sides of
the interconnect must exist before VNX Replicator sessions for local or remote
replication can use the interconnect. Only the local side of an interconnect
on which the source replication object resides is specified when creating the
replication session. Loopback interconnects are created and named
automatically and can be viewed using nas_cel -interconnect -list. You cannot
create, modify, or delete loopback interconnects.
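Because both sides must be defined, a typical remote setup runs one -create command on each Control Station. The sketch below reuses the system names, server names, and addresses from the examples later in this page for illustration only; actual values depend on your configuration:

```
# On the local (source) Control Station -- defines the local side:
nas_cel -interconnect -create NYs3_LAs2 -source_server server_3 \
    -destination_system cs110 -destination_server server_2 \
    -source_interfaces ip=10.6.3.190 -destination_interfaces ip=10.6.3.173

# On the peer Control Station -- defines the mirror-image side:
nas_cel -interconnect -create LAs2_NYs3 -source_server server_2 \
    -destination_system cs100 -destination_server server_3 \
    -source_interfaces ip=10.6.3.173 -destination_interfaces ip=10.6.3.190
```

Note that the source and destination roles, names, and interface lists are swapped between the two commands.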
-create <name>
Assigns a name, up to 255 characters, to the appropriate side of the
interconnect. The name must be unique for each Data Mover. Make the name
meaningful, identifying servers and, for remote replication, VNX names or
sites.
Remote replication naming example:
s2CelA_s3CelB or NYs3_LAs4 (local side)
s3CelB_s2CelA or LAs4_NYs3 (peer side)
Local replication naming example:
s2_s3 (source side on local system)
s3_s2 (peer side on the same system)
-source_server <moverName>
Specifies the name of an available local Data Mover to use for the
local side of the interconnect.
-destination_system {<cel_name>|id=<cel_id>}
Specifies the name or ID of the VNX where the peer Data Mover resides.
-destination_server <movername>
Specifies the name of an available Data Mover, on the same or different
system, to use for the peer side of the interconnect.
-source_interfaces {<name_service_interface_name>|ip=<ipaddr>}
[,{<name_service_interface_name>|ip=<ipaddr>},...]
Configures a list of interfaces available for the local side of the
interconnect. You can define the list by using IP addresses (IPv4 or
IPv6), name service interface names, or a combination of both; however,
how you specify an interface determines how it must be specified later
by the replication session (by name service interface name or by IP
address).
If you define an interface by using an IP address, make sure that the
source interface list uses the same IPv4/IPv6 protocol. An IPv4 interface
cannot connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
For each network protocol type (IPv4/IPv6) specified in the source
interface list, at least one interface of the same type must be specified
in the destination interface list, and vice versa. For example, if the
source interface list includes one or more IPv6 addresses, the
destination interface list must also include at least one IPv6 address.
The name service interface name is a fully qualified name given to a
network interface that must resolve to a single IP address (for example,
by using a DNS server).
Note: To prevent potential errors during interface selection (especially
after a failover/switchover), it is highly recommended that you specify
the same local and peer interface lists when configuring each side of the
interconnect.
-destination_interfaces {<name_service_interface_name>|ip=<ipaddr>}
[,{<name_service_interface_name>|ip=<ipaddr>},...]
Configures a list of interfaces available on the peer side of the
interconnect. You can define the list by using IP addresses (IPv4 or
IPv6), name service interface names, or a combination of both; however,
how you specify each interface determines how it is specified by the
replication session.
If you define an interface by using an IP address, make sure that the
source interface list uses the same IPv4/IPv6 protocol. An IPv4 interface
cannot connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
For each network protocol type (IPv4/IPv6) specified in the destination
interface list, at least one interface of the same type must be specified
in the source interface list, and vice versa. For example, if the source
interface list includes one or more IPv6 addresses, the destination
interface list must also include at least one IPv6 address.
The name service interface name is a fully qualified name given to a
network interface that must resolve to a single IP address (for example,
by using a DNS server).
[-bandwidth <bandwidthSched>]
Specifies a schedule to control the interconnect bandwidth used on
specific days or times, instead of using all available bandwidth at all
times for the interconnect (the default).
Note: The bandwidth schedule executes based on Data Mover time, not
Control Station time.
The schedule applies to all VNX Replicator sessions using the
interconnect. Specify a schedule with one or more comma-separated
entries, from most specific to least specific, as follows:
[{Su|Mo|Tu|We|Th|Fr|Sa}][HH:00-HH:00][/Kbps],[<next_entry>],[...]
Example:
MoTuWeThFr07:00-18:00/2000,/8000 means use a bandwidth limit of 2000 Kb/s
from 7 A.M. to 6 P.M. Monday through Friday; otherwise, use a bandwidth
limit of 8000 Kb/s.
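As a sanity check before passing a schedule to -bandwidth, the entry grammar above can be approximated with a regular expression. The helper below is an illustration only, not part of the eNAS CLI:

```shell
# Illustrative helper (not part of the CLI): checks a single schedule entry
# against the documented pattern [{Su|Mo|Tu|We|Th|Fr|Sa}][HH:00-HH:00][/Kbps].
valid_sched_entry() {
    echo "$1" | grep -Eq '^((Su|Mo|Tu|We|Th|Fr|Sa)+)?([0-2][0-9]:00-[0-2][0-9]:00)?(/[0-9]+)?$'
}

valid_sched_entry "MoTuWeThFr07:00-18:00/2000" && echo "entry ok"  # days+time+limit
valid_sched_entry "/8000" && echo "entry ok"                       # catch-all limit
```

A full schedule string is one or more such entries joined by commas, evaluated from most specific to least specific.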
-interconnect -modify {<name>|id=<interConnectId>}
Modifies one or more of the following characteristics of an existing
interconnect, as specified by the name or ID for the appropriate side of
the interconnect.
Note: You cannot modify the peer side of an interconnect configured on a
remote system; you must modify it from that system. Also, you cannot
modify an interface that is in use by a replication session.
[-source_interfaces {<name_service_interface_name>|ip=<ipAddr>},...]
Modifies the list of interfaces (name service interface names, IP
addresses, or both) available for use on the local side of an
interconnect. The new list of interfaces completely replaces the
previous list.
Note: To avoid problems with interface selection, any changes made to the
interface lists should be reflected on both sides of an interconnect.
[-destination_interfaces {<name_service_interface_name>|ip=<ipAddr>},...]
Modifies the list of interfaces (name service interface names, IP
addresses, or both) available for use on the peer side of an
interconnect. The new list of interfaces completely replaces the
previous list.
[-bandwidth <bandwidthSched>]
Modifies the existing bandwidth schedule for the specified interconnect,
or creates a schedule if none existed previously. The schedule allocates
the interconnect bandwidth for specific days, times, or both, instead of
using all available bandwidth at all times for the interconnect (the
default). The schedule applies to all replication sessions using the
interconnect. Specify a schedule with one or more comma-separated
entries, from most specific to least specific, as follows:
[{Su|Mo|Tu|We|Th|Fr|Sa}][HH:00-HH:00][/Kbps],[<next_entry>],[...]
Example:
MoTuWeThFr07:00-18:00/2000,/8000 means use a bandwidth limit of 2000 Kb/s
from 7 A.M. to 6 P.M. Monday through Friday; otherwise, use a bandwidth
limit of 8000 Kb/s.
[-name <newName>]
Changes the name of the specified interconnect to a new name.
-interconnect -pause {<name>|id=<interConnectId>}
Halts data transmission over the existing Data Mover interconnect until
you resume transmission over the interconnect or delete it. This affects
all replication sessions using the specified interconnect.
-interconnect -resume {<name>|id=<interConnectId>}
Resumes data transmission over the Data Mover interconnect, making the
interconnect available for use by replication sessions.
-interconnect -delete {<name>|id=<interConnectId>}
Deletes the Data Mover interconnect, thereby making the interconnect
unavailable for use by any replication sessions. You cannot delete an
interconnect if it is in use by a replication session. You can delete a
paused interconnect.
-interconnect -info {<name>|id=<interConnectId>|-all}
Displays information about the specified interconnect or about all
interconnects known to the local system.
-interconnect -list [-destination_system {<cel_name>|id=<cel_id>}]
By default, lists the interconnects available on the local VNX.
Specifying the name or ID of a remote VNX also lists the interconnects
available on that VNX.
-interconnect -validate {<name>|id=<interConnectId>}
Validates the interconnect, verifying that authentication is configured
properly by opening the connection between the Data Mover pair.
Validation is done for loopback, local, and remote configurations.
SYNCREP OPTIONS
---------------
Type nas_cel -syncrep to display syncrep options:
-syncrep
{ -enable { <cel_name> | id=<cel_id> }
-local_fsidrange <from>,<to>
-remote_fsidrange <from>,<to>
-local_storage <symm_id> sym_dir=<director:port[,director:port,..]>
rdf_group=<group_num>
-remote_storage <symm_id> sym_dir=<director:port[,director:port,..]>
rdf_group=<group_num>
| -start { <cel_name> | id=<cel_id> }
| -disable { <cel_name> | id=<cel_id> }
| -info { <cel_name> | id=<cel_id> | -all } [-verbose]
| -list
}
The syncrep option creates and deletes VDM Sync Replication Service RDF
sessions between the source and remote Control Stations for Embedded NAS.
-enable { <cel_name> | id=<cel_id> }
Enables the VDM Sync Replication Service and creates the RDF session
between the local and remote systems.
-disable { <cel_name> | id=<cel_id> }
Disables the VDM Sync Replication Service and deletes the RDF session(s)
between the local and remote systems.
-start { <cel_name> | id=<cel_id> }
Starts the SRDF VDM Sync Replication Service.
-info { <cel_name> | id=<cel_id> | -all } [-verbose]
Displays information about the local, the remote, or all VDM Sync
Replication Services.
-list
Lists the local and remote VDM Sync Replication Services.
SEE ALSO
--------
Using VNX Replicator, nas_copy, nas_replicate, and nas_task.
EXAMPLE #1
----------
To create an entry for the remote VNX, type:
$ nas_cel -create cs110 -ip 172.24.102.240 -passphrase nasdocs
operation in progress (not interruptible)...
id         = 3
name       = cs110
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs
Where:
Value      Definition
-----      ----------
id         ID of the remote VNX on the local VNX.
name       Name assigned in the local view to the remote VNX.
owner      ACL ID assigned automatically.
device     R2 device mounted by the local Control Station to read the
           database of the remote Control Station in the SRDF
           environment. This value is unique to the Symmetrix storage
           system.
channel    Pair of devices used in the RDF channel. One is used for
           writing messages to the remote (wdev), the other to read
           messages from it. This value is unique to the Symmetrix
           storage system.
net_path   IP address of the remote VNX.
VNX_id     Unique VNX ID number.
passphrase Used for authentication with a remote VNX.
EXAMPLE #2
----------
For the VNX for block, to list all remote VNXs, type:
$ nas_cel -list
id  name   owner mount_dev channel net_path        CMU
0   cs100  0                       172.24.102.236  APM000420008180000
3   cs110  0                       172.24.102.240  APM000438070430000
For the VNX with a Symmetrix storage system, to list all remote VNXs, type:
$ nas_cel -list
id  name  owner mount_dev channel   net_path        CMU
0   cs30  0                         172.24.172.152  0028040001900006
1   cs40  500   /dev/sdg  /dev/sdj1 172.24.172.151  0028040002180000
Where:
Value     Definition
-----     ----------
id        ID of the remote VNX on the local VNX.
name      Name assigned in the local view to the remote VNX.
owner     ACL ID assigned automatically.
mount_dev Mounted database from the remote VNX in the SRDF environment.
          This value is unique to the Symmetrix storage system.
channel   RDF channel from where information is read and written. This
          value is unique to the Symmetrix storage system.
net_path  IP address of the remote VNX.
CMU       VNX Management Unit (unique VNX ID number).
EXAMPLE #3
----------
To display information for the remote VNX, cs110, type:
$ nas_cel -info cs110
id         = 3
name       = cs110
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs
EXAMPLE #1 provides a description of the command output.
EXAMPLE #4
----------
To update the Control Station entry for cs110, type:
$ nas_cel -update cs110
operation in progress (not interruptible)...
id         = 3
name       = cs110
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs
EXAMPLE #1 provides a description of the command output.
EXAMPLE #5
----------
To modify the passphrase and name for the remote Control Station cs110, type:
$ nas_cel -modify cs110 -passphrase nasdocs_replication -name cs110_target
operation in progress (not interruptible)...
id         = 3
name       = cs110_target
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs_replication
EXAMPLE #1 provides a description of the command output.
EXAMPLE #6
----------
To delete the Control Station entry of the remote VNX, cs110_target, type:
$ nas_cel -delete cs110_target
operation in progress (not interruptible)...
id         = 3
name       = cs110_target
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs_replication
EXAMPLE #1 provides a description of the command output.
EXAMPLE #7
----------
To create an interconnect NYs3_LAs2 between Data Mover server_3 and remote
Data Mover server_2, and use a bandwidth limit of 2000 Kb/s from 7 A.M. to 6
P.M. Monday through Friday; otherwise, use a bandwidth limit of 8000 Kb/s,
type:
$ nas_cel -interconnect -create NYs3_LAs2 -source_server server_3
-destination_system cs110 -destination_server server_2 -source_interfaces
ip=10.6.3.190 -destination_interfaces ip=10.6.3.173 -bandwidth
MoTuWeThFr07:00-18:00/2000,/8000
operation in progress (not interruptible)...
id                                 = 30003
name                               = NYs3_LAs2
source_server                      = server_3
source_interfaces                  = 10.6.3.190
destination_system                 = cs110
destination_server                 = server_2
destination_interfaces             = 10.6.3.173
bandwidth schedule                 = MoTuWeThFr07:00-18:00/2000,/8000
crc enabled                        = yes
number of configured replications  = 0
number of replications in transfer = 0
status                             = The interconnect is OK.
Where:
Value                  Definition
-----                  ----------
id                     ID of the interconnect.
name                   Name of the interconnect.
source_server          Name of an available local Data Mover to use for
                       the local side of the interconnect.
source_interfaces      IP addresses available for the local side of the
                       interconnect (at least one, or a name service
                       interface name).
destination_system     Control Station names of the VNX systems available
                       for use in a remote replication session. Local
                       system is the default.
destination_server     Name of an available peer Data Mover to use for
                       the peer side of the interconnect.
destination_interfaces IP addresses available for the peer side of the
                       interconnect (at least one, or a name service
                       interface name). For loopback interconnects, the
                       interface is fixed at 127.0.0.1.
bandwidth schedule     Bandwidth schedule with one or more comma-separated
                       entries, most specific to least specific.
crc enabled            Indicates that the Cyclic Redundancy Check (CRC)
                       method is in use for verifying the integrity of
                       data sent over the interconnect. CRC is
                       automatically enabled and cannot be disabled.
number of configured replications
                       Number of replication sessions currently configured.
number of replications in transfer
                       Number of replications currently in transfer.
status                 Status of the interconnect.
EXAMPLE #8
----------
To modify the bandwidth schedule of the interconnect NYs3_LAs2, type:
$ nas_cel -interconnect -modify NYs3_LAs2 -bandwidth
MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000
operation in progress (not interruptible)...
id                                 = 30003
name                               = NYs3_LAs2
source_server                      = server_3
source_interfaces                  = 10.6.3.190
destination_system                 = cs110
destination_server                 = server_2
destination_interfaces             = 10.6.3.173
bandwidth schedule                 = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000
crc enabled                        = yes
number of configured replications  = 0
number of replications in transfer = 0
status                             = The interconnect is OK.
EXAMPLE #7 provides a description of the command outputs.
EXAMPLE #9
----------
To list available interconnects, type:
$ nas_cel -interconnect -list
id     name       source_server  destination_system  destination_server
20001  loopback   server_2       cs100               server_2
30001  loopback   server_3       cs100               server_3
30003  NYs3_LAs2  server_3       cs110               server_2
EXAMPLE #10
-----------
To pause the interconnect with id=30003, type:
$ nas_cel -interconnect -pause id=30003
done
EXAMPLE #11
-----------
To resume the interconnect NYs3_LAs2, type:
$ nas_cel -interconnect -resume NYs3_LAs2
done
EXAMPLE #12
-----------
To validate the interconnect NYs3_LAs2, type:
$ nas_cel -interconnect -validate NYs3_LAs2
NYs3_LAs2: validating 9 interface pairs: please wait...ok
EXAMPLE #13
-----------
To display the detailed information about the interconnect NYs3_LAs2, type:
$ nas_cel -interconnect -info NYs3_LAs2
id                                 = 30003
name                               = NYs3_LAs2
source_server                      = server_3
source_interfaces                  = 10.6.3.190
destination_system                 = cs110
destination_server                 = server_2
destination_interfaces             = 10.6.3.173
bandwidth schedule                 = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000
crc enabled                        = yes
number of configured replications  = 0
number of replications in transfer = 0
status                             = The interconnect is OK.
EXAMPLE #7 provides a description of the command outputs.
EXAMPLE #14
-----------
To delete interconnect NYs3_LAs2, type:
$ nas_cel -interconnect -delete NYs3_LAs2
operation in progress (not interruptible)...
id                                 = 30003
name                               = NYs3_LAs2
source_server                      = server_3
source_interfaces                  = 10.6.3.190
destination_system                 = cs110
destination_server                 = server_2
destination_interfaces             = 10.6.3.173
bandwidth schedule                 = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000
crc enabled                        = no
number of configured replications  = 0
number of replications in transfer = 0
status                             = The interconnect is OK.
EXAMPLE #7 provides a description of the command outputs.
--------------------------------------------
Last Modified Date: December 3, 2014 1:15 pm
EXAMPLE #15
-----------
To enable the VDM syncrep service on local and remote Embedded NAS systems, type:
$ nas_cel -syncrep -enable L9C26_CS0
-local_fsidrange 4096,12287 -remote_fsidrange 12288,24575 -local_storage
000196700261 sym_dir=1E:27 rdf_group=99 -remote_storage 000197100007
sym_dir=1E:27 rdf_group=99
Now saving FSID range [12288,24575] on remote system...
done
Now saving FSID range [4096,12287] on local system...
done
Now creating LUN mappings (may take several minutes)...
done
Now adding CTD access to local server server_2...
done
Now adding CTD access to local server server_3...
done
Now creating mountpoint for sync replica of NAS database... done
Now mounting sync replica of NAS database...
done
Now enabling sync replication service on remote system...
done
done
EXAMPLE #16
-----------
To disable the VDM syncrep service on local and remote Embedded NAS systems, type:
$ nas_cel -syncrep -disable L9C26_CS0
Now unmounting sync replica of NAS database... done
Now deleting mountpoint for sync replica of NAS database... done
Now removing CTD access to local server server_2... done
Now removing CTD access to local server server_3... done
Now deleting local LUN mapping... done
Now disabling service (including deleting LUN mapping) on remote system... done
Now removing FSID range [12288,24575] on remote system... done
Now removing FSID range [4096,12287] on local system... done
Now removing other sync replication service settings on local system... done
done
nas_checkup
Provides a system health checkup for the VNX.
SYNOPSIS
--------
nas_checkup
[-version|-help|-rerun]
DESCRIPTION
-----------
nas_checkup acts as a system health monitor: it runs scheduled and
unscheduled health checks on the VNX and reports any problems found,
along with the actions needed to fix them.
By default, the nas_checkup command is scheduled to run every two weeks.
If a warning or error is discovered during a scheduled run, an alert is
posted in Unisphere.
Set up email notification for warnings or errors in the Unisphere
Notifications page, or modify and load the sample nas_checkup event
configuration file.
If a problem is discovered that requires EMC Service Personnel assistance,
nas_checkup will notify EMC.
OPTIONS
-------
No arguments
Runs a series of system health checks on the VNX and reports the problems
that are found and the actions needed to fix them.
No email, callhome, or Unisphere alert is posted when the health check is run
unscheduled.
-version
Displays the version of health check that is run on the VNX.
-help
Provides help.
-rerun
Reruns the checks that produced error messages in the previous health
checkup. It does not rerun the checks that produced warning or
information messages. If no checks produced error messages, the -rerun
switch generates a message that there is nothing to rerun.
CHECKS
------
nas_checkup runs a subset of the available checks based on the
configuration of your system. The complete list of available checks is:
Control Station Checks:
Check if minimum free space exists
Check if minimum free space exists ns
Check if enough free space exists
Check if enough free space exists ns
Check if NAS Storage API is installed correctly
Check if NAS Storage APIs match
Check if NBS clients are started
Check if NBS configuration exists
Check if NBS devices are accessible
Check if NBS service is started
Check if standby is up
Check if Symapi data is present
Check if Symapi is synced with Storage System
Check integrity of NASDB
Check if primary is active
Check all callhome files delivered
Check if NAS partitions are mounted
Data Mover Checks:
Check boot files
Check if hardware is supported
Check if primary is active
Check if root filesystem has enough free space
Check if using standard DART image
Check MAC address
Check network connectivity
Check status
Storage System Checks:
Check disk emulation type
Check disk high availability access
Check disks read cache enabled
Check disks and storage processors write cache enabled
Check if access logix is enabled
Check if FLARE is committed
Check if FLARE is supported
Check if microcode is supported
Check no disks or storage processors are failed over
Check that no disks or storage processors are faulted
Check that no hot spares are in use
Check that no hot spares are rebuilding
Check control lun size
Check if storage processors are read cache enabled
FILES
-----
The files associated with system health checkups are:
/nas/log/nas_checkup-run.<timestamp>.log
    Contains information about the checks that were run, problems found,
    and actions needed to fix the problems.
/nas/log/nas_checkup.<timestamp>.log
    Produced when a scheduled nas_checkup is run; contains the same
    information as the nas_checkup-run.<timestamp>.log file.
/nas/log/syslog
    Contains the overall results of nas_checkup.
/nas/site/checkup_eventlog.cfg
    Provides a sample nas_checkup event configuration file. This is the
    file to be modified to add email addresses and load the file.
SEE ALSO
--------
Configuring Events and Notifications on VNX for File.
EXAMPLE #1
----------
To run a health check on the VNX, type:
$ nas_checkup
Check Version: 5.6.23.1
Check Command: /nas/bin/nas_checkup
Check Log    : /nas/log/checkup-run.070611-064115.log
-------------------------------------Checks------------------------------------
Control Station: Checking if file system usage is under limit..............Pass
Control Station: Checking if file systems have enough space to upgrade.....Pass
Control Station: Checking if NAS Storage API is installed correctly........Pass
Control Station: Checking if NBS clients are started.......................Pass
Control Station: Checking if NBS configuration exists......................Pass
Control Station: Checking if NBS devices are accessible....................Pass
Control Station: Checking if NBS service is started........................Pass
Control Station: Checking if standby is up.................................N/A
Control Station: Checking if Symapi data is present........................Pass
Control Station: Checking if Symapi is synced with Storage System..........Pass
Control Station: Checking integrity of NASDB...............................Pass
Control Station: Checking all callhome files delivered.....................Pass
Control Station: Checking resolv conf......................................Pass
Control Station: Checking if NAS partitions are mounted....................Pass
Control Station: Checking ipmi connection..................................Pass
Control Station: Checking nas site eventlog configuration..................Pass
Control Station: Checking nas sys mcd configuration........................Pass
Control Station: Checking nas sys eventlog configuration...................Pass
Control Station: Checking logical volume status............................Pass
Control Station: Checking ups is available.................................Fail
Data Movers    : Checking boot files.......................................Pass
Data Movers    : Checking if primary is active.............................Pass
Data Movers    : Checking if root filesystem has enough free space.........Pass
Data Movers    : Checking if using standard DART image.....................Pass
Data Movers    : Checking network connectivity.............................Pass
Data Movers    : Checking status...........................................Pass
Data Movers    : Checking dart release compatibility.......................Pass
Data Movers    : Checking dart version compatibility.......................Pass
Data Movers    : Checking server name......................................Pass
Data Movers    : Checking unique id........................................Pass
Data Movers    : Checking CIFS file server configuration...................N/A
Data Movers    : Checking domain controller connectivity and configuration.N/A
Data Movers    : Checking DNS connectivity and configuration...............N/A
Data Movers    : Checking connectivity to WINS servers.....................N/A
Data Movers    : Checking connectivity to NTP servers......................N/A
Data Movers    : Checking connectivity to NIS servers......................Pass
Data Movers    : Checking virus checker server configuration...............N/A
Data Movers    : Checking if workpart is OK................................Pass
Data Movers    : Checking if free full dump is available...................?
Data Movers    : Checking if each primary data mover has standby...........Fail
Storage System : Checking disk emulation type..............................Pass
Storage System : Checking disk high availability access....................Pass
Storage System : Checking disks read cache enabled.........................Pass
Storage System : Checking disks and storage processors write cache enabled.Pass
Storage System : Checking if access logix is enabled.......................Pass
Storage System : Checking if FLARE is committed............................Pass
Storage System : Checking if FLARE is supported............................Pass
Storage System : Checking if microcode is supported........................Pass
Storage System : Checking no disks or storage processors are failed over...Pass
Storage System : Checking that no disks or storage processors are faulted..Pass
Storage System : Checking that no hot spares are in use....................Pass
Storage System : Checking that no hot spares are rebuilding................Pass
Storage System : Checking minimum control lun size.........................Pass
Storage System : Checking maximum control lun size.........................Fail
Storage System : Checking system lun configuration.........................Pass
Storage System : Checking if storage processors are read cache enabled.....Pass
Storage System : Checking if auto assign are disabled for all luns.........Pass
Storage System : Checking if auto trespass are disabled for all luns.......Pass
Storage System : Checking backend connectivity.............................Pass
-------------------------------------------------------------------------------
One or more warnings are shown below. It is recommended that you follow the
instructions below to correct the problem and then try again.
-----------------------------------Information---------------------------------
Control Station: Check ups is available
Symptom: The following UPS emcnasUPS_i0 emcnasUPS_i1 is(are)
not available
Data Movers: Check if each primary data mover has standby
Symptom: The following primary Data Movers server_2, server_3 do
not have a standby Data Mover configured. It is recommended that each
primary Data Mover have a standby configured for it with automatic
failover policy for high availability.
Storage System: Check maximum control lun size
Symptom:
* The size of control LUN 5 is 32 GB. It is larger than the
recommended size of 14 GB. The additional space will be reserved by
the system.
-------------------------------------------------------------------------------
------------------------------------Warnings-----------------------------------
Data Movers: Check if free full dump is available
Symptom: Cannot get workpart structure. Command failed.
* Command: /nas/sbin/workpart -r
* Command output: open: Permission denied
* Command exit code: 2
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
-------------------------------------------------------------------------------
EXAMPLE #2
----------
To display help for nas_checkup, type:
$ nas_checkup -help
Check Version: 5.6.23.1
Check Command: /nas/bin/nas_checkup
usage: nas_checkup
[ -help | -version ]
EXAMPLE #3
----------
To display the version of the nas_checkup utility, type:
$ nas_checkup -version
Check Version: 5.6.23.1
Check Command: /nas/bin/nas_checkup
DIAGNOSTICS
-----------
nas_checkup returns one of the following exit statuses:
0   - No problems found
1   - nas_checkup posted information
2   - nas_checkup discovered a warning
3   - nas_checkup discovered an error
255 - Any other error
Examples of errors that could cause a 255 exit status include, but are not
limited to:
- If nas_checkup is run when another instance of nas_checkup is running
- If nas_checkup is run by someone other than root or the administrator group
  (generally nasadmin)
- If nas_checkup is run on the standby Control Station
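The exit statuses above can be consumed from a script. The wrapper function below is a hypothetical illustration of that mapping, not part of the nas_checkup utility:

```shell
# Hypothetical wrapper logic around the documented nas_checkup exit statuses;
# the function name is an illustration, not part of the utility itself.
describe_checkup_status() {
    case "$1" in
        0)   echo "No problems found" ;;
        1)   echo "Information posted" ;;
        2)   echo "Warning discovered" ;;
        3)   echo "Error discovered" ;;
        255) echo "Run failed (already running, wrong user, or standby CS)" ;;
        *)   echo "Unexpected exit status: $1" ;;
    esac
}

# On a live system one might run: /nas/bin/nas_checkup; describe_checkup_status $?
describe_checkup_status 2   # prints "Warning discovered"
```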
-------------------------------------
Last Modified: March 3, 2011 1:30 pm
nas_ckpt_schedule
Manages SnapSure checkpoint scheduling for the VNX.
SYNOPSIS
--------
nas_ckpt_schedule
-list
| -info {-all|<name>|id=<id>}
| -create <name>
-filesystem {<name>|id=<id>} [-description <description>]
-recurrence {
once [-start_on <YYYY-MM-DD>] -runtimes <HH:MM>
[-ckpt_name <ckpt_name>]
| daily [-every <number_of_days>]
[-start_on <YYYY-MM-DD>][-end_on <YYYY-MM-DD>]
-runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}
| weekly [-every <number_of_weeks>]
-days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun}[,...]
[-start_on <YYYY-MM-DD>][-end_on <YYYY-MM-DD>]
-runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}
| monthly [-every <number_of_months>] -days_of_month <1-31>[,...]
[-start_on <YYYY-MM-DD>][-end_on <YYYY-MM-DD>]
-runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}}
[{-cvfsname_prefix <prefix>|-time_based_cvfsname}]
| -modify {<name>|id=<id>}
[-name <new_name>]
[{-cvfsname_prefix <prefix>| -time_based_cvfsname}]
[-description <description>]
[-recurrence {daily|weekly|monthly}]
[-every {<number_of_days>|<number_of_weeks>|<number_of_months>}]
[-days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun}[,...]]
[-days_of_month <1-31>[,...]][-start_on <YYYY-MM-DD>]
[-end_on <YYYY-MM-DD>][-runtimes <HH:MM>[,...]]
| -delete {<name>|id=<id>}
| -pause {<name>|id=<id>}
| -resume {<name>|id=<id>}
DESCRIPTION
-----------
nas_ckpt_schedule creates and lists the schedules for SnapSure
checkpoints. Schedules can be run once, daily, weekly, or monthly, and
can be modified, paused, resumed, and deleted.
OPTIONS
-------
-list
Lists all checkpoint schedules on the system, including the name of the
schedule, the next run date, the state, and the description.
-info {-all|<name>|id=<id>}
Lists detailed information for all schedules or for the specified schedule.
-create <name> -filesystem {<name>|id=<id>}
[-description <description>] -recurrence {
Creates a checkpoint schedule for the file system that is specified by <name>
or <id>. The schedule name in -create <name> must be unique. The -description
option provides a label for the schedule. The -recurrence option specifies if
the checkpoint operation occurs once, daily, weekly, or monthly.
Note: It is recommended that there be a time interval of at least 15
minutes between the creation of two checkpoints on the same production
file system. Using VNX SnapSure provides information on checkpoint
scheduling.
once [-start_on <YYYY-MM-DD>] -runtimes <HH:MM> [-ckpt_name <ckpt_name>]
If once is specified, the hours and minutes for the checkpoint to be
taken must be specified. A start date and a name may optionally be
assigned to the checkpoint. For a one-time checkpoint schedule, only one
runtime can be provided. For one-time schedules, the -ckpt_name option
can specify a name for the single checkpoint; if omitted, the default
naming (<schedule_name>_<fs_name>_<num>) is used, where <num> is a
four-digit integer beginning with 0001.
|daily [-every <number_of_days>] [-start_on <YYYY-MM-DD>][-end_on
<YYYY-MM-DD>] -runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}
If daily is specified, the checkpoint is taken every day unless -every is
specified indicating the number of days between runs. The -start_on option
indicates the day when the checkpoints will start and -end_on indicates the
day when they end.
The -runtimes option specifies one or more times to take a checkpoint on each
scheduled day. The -keep option specifies the maximum number of checkpoints to
be kept at any one time (using default checkpoint naming). The -ckpt_names
option assigns one or more specific names to each checkpoint as it is taken;
<number_of_ckpts> should be equal to the number of checkpoint names specified
for a schedule.
|weekly [-every <number_of_weeks>] -days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun}
[,...][-start_on <YYYY-MM-DD>][-end_on <YYYY-MM-DD>] -runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}
If weekly is specified, the checkpoint is taken every week unless the -every
option is specified indicating the number of weeks between runs. The
-days_of_week option specifies one or more days during the week on which to
run the schedule. The -start_on option indicates the day when the checkpoints
will start and -end_on indicates the day when they end.
The -runtimes option specifies one or more times to take a checkpoint on each
scheduled day. The -keep option specifies the maximum number of checkpoints to
be kept at any one time (using default checkpoint naming). The -ckpt_names
option assigns one or more specific names to each checkpoint as it is taken.
|monthly [-every <number_of_months>] -days_of_month <1-31>[,...][-start_on
<YYYY-MM-DD>][-end_on <YYYY-MM-DD>] -runtimes <HH:MM>[,...]
{-keep <number_of_ckpts>|-ckpt_names <ckpt_name>[,...]}
If monthly is specified, the checkpoint is taken every month unless -every is
specified indicating the number of months between runs. The -days_of_month
option specifies one or more days during the month on which to run the
schedule, each specified as an integer 1 through 31. The -start_on option
indicates the day when the checkpoints will start and -end_on indicates the
day when they end.
The -runtimes option specifies one or more times to take a checkpoint on each
scheduled day. The -keep option specifies the maximum number of checkpoints
to be kept at any one time (using default checkpoint naming); alternatively,
the -ckpt_names option assigns one or more specific names to each checkpoint
as it is taken.
The schedule that is set takes effect immediately unless -start_on is
specified. Daily, weekly, and monthly schedules run indefinitely unless
-end_on is included.
The -cvfsname_prefix option specifies the customized prefix of a CVFS name.
This prefix along with the cvfsname_delimiter and the cvfs_starting_index make
up the CVFS name. The -time_based_cvfsname option specifies the CVFS name
based on the creation time of the CVFS. It is the default method for
generating CVFS names and is used if the prefix is not specified.
Note: The prefix must be a PFS-wide unique string and can contain up to 20
ASCII characters. The prefix must not include intervening spaces, colons (:),
or slashes (/).
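For example, the two naming modes can be contrasted as follows (a minimal sketch; the schedule names ufs1_sched_a and ufs1_sched_b and the prefix hourly are illustrative, not taken from the examples below):

```shell
# CVFS names are built from the customized prefix:
$ nas_ckpt_schedule -create ufs1_sched_a -filesystem ufs1 -recurrence daily
  -runtimes 09:00 -keep 4 -cvfsname_prefix hourly

# CVFS names are built from the creation time (the default):
$ nas_ckpt_schedule -create ufs1_sched_b -filesystem ufs1 -recurrence daily
  -runtimes 09:00 -keep 4 -time_based_cvfsname
```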
-modify {<name>|id=<id>} [-name <new_name>] [{-cvfsname_prefix <prefix>|
-time_based_cvfsname}] [-description <description>] [-recurrence
{daily|weekly|monthly}] [-every {<number_of_days>|<number_of_weeks>|
<number_of_months>}] [-days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun}[,...]]
[-days_of_month <1-31>[,...]][-start_on <YYYY-MM-DD>]
[-end_on <YYYY-MM-DD> ][ -runtimes <HH:MM>[,...]]
Modifies the scheduled checkpoint entry as specified.
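For example, a sketch of renaming a schedule and giving it an end date (the new name and the date are illustrative):

```shell
$ nas_ckpt_schedule -modify ufs1_ckpt_sched1 -name ufs1_ckpt_daily
  -end_on 2007-06-30
```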
-delete {<name>|id=<id>}
Deletes the specified checkpoint schedule. This operation does not delete any
checkpoints.
-pause {<name>|id=<id>}
Pauses the specified checkpoint schedule, including checkpoint creations.
-resume {<name>|id=<id>}
Resumes a paused checkpoint schedule.
SEE ALSO
--------
Using VNX SnapSure.
EXAMPLE #1
----------
To create a checkpoint schedule that creates a checkpoint of the file system
ufs1 daily at 8 A.M. and 8 P.M. starting on 11/13/06 with the last run on
12/13/07, and keep 7 checkpoints, type:
$ nas_ckpt_schedule -create ufs1_ckpt_sched1 -filesystem ufs1 -description
"Daily Checkpoint schedule for ufs1" -recurrence daily -every 1 -start_on
2006-11-13 -end_on 2007-12-13 -runtimes 8:00,20:00 -keep 7 -cvfsname_prefix
daily
This command returns no output.
EXAMPLE #2
----------
To create a checkpoint schedule that creates a checkpoint of the file system
ufs1 weekly on Mondays at 6 P.M., starting on 11/13/06 with the last run on
12/13/07, and name new checkpoints ufs1_ckpt_mon1, ufs1_ckpt_mon2,
ufs1_ckpt_mon3, ufs1_ckpt_mon4, type:
$ nas_ckpt_schedule -create ufs1_ckpt_sched2 -filesystem ufs1 -description
"Weekly Checkpoint schedule for ufs1" -recurrence weekly -every 1
-days_of_week Mon -start_on 2006-11-13 -end_on 2007-12-13 -runtimes 18:00
-ckpt_names ufs1_ckpt_mon1,ufs1_ckpt_mon2,ufs1_ckpt_mon3,ufs1_ckpt_mon4
-cvfsname_prefix weekly
This command returns no output.
EXAMPLE #3
----------
To create a checkpoint schedule that creates a checkpoint of the file system
ufs1 every other 15th of the month at 7 P.M., and keep 12 checkpoints, type:
$ nas_ckpt_schedule -create ufs1_ckpt_sched3 -filesystem ufs1 -description
"Monthly Checkpoint schedule for ufs1" -recurrence monthly -every 2
-days_of_month 15 -runtimes 19:00 -keep 12 -cvfsname_prefix monthly
This command returns no output.
EXAMPLE #4
----------
To create a checkpoint schedule that creates a checkpoint of the file system
ufs1 once at 3:09 P.M., type:
$ nas_ckpt_schedule -create ufs1_ckpt_sched4 -filesystem ufs1 -description
"One-time Checkpoint Schedule for ufs1" -recurrence once -runtimes 15:09
This command returns no output.
EXAMPLE #5
----------
To list all checkpoint schedules, type:
$ nas_ckpt_schedule -list
id   name              description                            state    next run
===  ================  =====================================  =======  ============================
6    ufs1_ckpt_sched2  Weekly Checkpoint schedule for ufs1    Pending  Mon Nov 13 18:00:00 EST 2006
80   ufs1_ckpt_sched4  One-time Checkpoint Schedule for ufs1  Pending  Tue Nov 14 15:09:00 EST 2006
5    ufs1_ckpt_sched1  Daily Checkpoint schedule for ufs1     Pending  Mon Nov 13 20:00:00 EST 2006
7    ufs1_ckpt_sched3  Monthly Checkpoint schedule for ufs1   Pending  Wed Nov 15 19:00:00 EST 2006
EXAMPLE #6
----------
To modify the recurrence of the checkpoint schedule ufs1_ckpt_sched3 to run
every 10th of the month, type:
$ nas_ckpt_schedule -modify ufs1_ckpt_sched3 -recurrence monthly -every 1
-days_of_month 10
This command returns no output.
EXAMPLE #7
----------
To get detailed information about a checkpoint schedule, type:
$ nas_ckpt_schedule -info ufs1_ckpt_sched3
id = 7
name = ufs1_ckpt_sched3
description = Monthly Checkpoint schedule for ufs1
CVFS name prefix = monthly
tasks = Checkpoint ckpt_ufs1_ckpt_sched3_001 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_002 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_003 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_004 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_005 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_006 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_007 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_008 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_009 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_010 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_011 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_012 on filesystem id=25
next run = Sun Dec 10 19:00:00 EST 2006
state = Pending
recurrence = every 1 months
start on = Mon Nov 13 16:47:51 EST 2006
end on =
at which times = 19:00
on which days of week =
on which days of month = 10
EXAMPLE #8
----------
To pause a checkpoint schedule, type:
$ nas_ckpt_schedule -pause ufs1_ckpt_sched1
This command returns no output.
EXAMPLE #9
----------
To resume a checkpoint schedule, type:
$ nas_ckpt_schedule -resume ufs1_ckpt_sched1
This command returns no output.
EXAMPLE #10
-----------
To delete a checkpoint schedule, type:
$ nas_ckpt_schedule -delete ufs1_ckpt_sched2
This command returns no output.
----------------------------------------------------------------------
Last Modified: March 4 2011, 11:20 am
nas_config
Manages a variety of configuration settings on the Control Station, some of
which are security based.
SYNOPSIS
-------
nas_config
-IPalias {-list
| -create [-name <device_name>] <numeric_id>
| -delete [-name <device_name>] <numeric_id>}
| -ssl
| -sessiontimeout [<number_in_minutes>|off]
| -password [-min <6..15>] [-retries <max_allowed>] [-newchars <min_num>]
[-digits <min_num>] [-spechars <min_num>] [-lcase <min_num>] [-ucase <min_num>]
| -password -default
DESCRIPTION
----------
nas_config -IPalias configures different IP addresses to point to the same
network device, allowing use of IP aliasing to manage the Control Station.
This enables communication with the primary Control Station using a single IP
address regardless of whether the primary Control Station is running in slot 0
or slot 1.
nas_config -ssl generates an X.509 digital certificate on the Control Station.
Unisphere uses the Secure Socket Layer (SSL) protocol to create a secure
connection between a user's Web browser and the Control Station's Apache Web
server. When a VNX system is initially installed, a generic certificate is
generated. After configuring the Control Station's network configuration
(hostname, DNS domain name, or IP address) and before using Unisphere, a
new certificate should be generated.
nas_config -sessiontimeout sets a system-wide value that automatically times
out a Control Station shell session after a specified period of inactivity.
nas_config -password supports a password quality policy by requiring that
passwords chosen by VNX users adhere to certain rules.
You must be root to execute this command from the /nas/sbin directory.
OPTIONS
-------
-IPalias -list
Lists IP aliases configured on the VNX.
-IPalias -create [-name <device_name>] <numeric_id>
Creates an IP alias for the Control Station.
<device_name> is the name for a specified device:
1. If you specify a device name, that device must have an IP address.
2. If you do not specify a device name, the system uses the external network
interface.
<numeric_id> is a user-defined number, and can be an integer between 0 and
255. The system allows up to 256 aliases for any device.
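For example, to create alias 1 on a specific device rather than on the external network interface (the device name eth2 is illustrative; the named device must already have an IP address):

```shell
# /nas/sbin/nas_config -IPalias -create -name eth2 1
```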
-delete [-name <device_name>] <numeric_id>
Deletes an IP alias for the Control Station.
-ssl
Installs a SSL certificate on the Control Station and restarts the HTTP
server.
-sessiontimeout [<number_in_minutes>|off]
Displays the current session timeout value in minutes. <number_in_minutes>
sets the number of minutes a Control Station shell session can be inactive
before it is timed out. Possible values are 5 to 240 minutes. The default
value is 60 minutes. Session timeout is enabled by default. To disable session
timeout, type off or 0 to indicate zero minutes.
The -sessiontimeout option enables the native timeout properties of the
underlying shells on the Control Station. The relevant shell man page provides
a description of how the mechanism works.
-password
Prompts for specific password policy definitions. The current value for each
policy definition is shown in brackets.
[-min <6..15>] defines the minimum length of the new password. The default
length is eight characters. The length has to be a value between 6 and 15
characters.
[-retries <max_allowed>] defines the number of attempts a user can make to
define an acceptable new password before the command fails. The default value
is three attempts.
[-newchars <min_num>] defines the minimum number of characters that must be in
the new password that were not included in the old password. The default value
is three characters.
[-digits <min_num>] defines the minimum number of digits that must be included
in the new password. The default value is one digit.
[-spechars <min_num>] defines the minimum number of special characters (such
as ! @ # $ % & ^ and *) that must be included in the new password. The
default value is 0.
[-lcase <min_num>] defines the minimum number of lowercase characters that
must be included in the new password. The default value is 0.
[-ucase <min_num>] defines the minimum number of uppercase characters that
must be included in the new password. The default value is 0.
-password -default
Resets the password policy definitions to their default values.
SEE ALSO
--------
Security Configuration Guide for File.
EXAMPLE #1
----------
To create an IP alias for the Control Station, type:
# /nas/sbin/nas_config -IPalias -create 0
Do you want slot_0 IP address <1.2.3.4> as your alias [yes or no]: no
Please enter an IP address to use as an alias: 1.2.3.6
Do you want slot_0 IP address <1.2.3.4> as your alias [yes or no]: yes
Please enter a new IP address for slot_0: 1.2.3.6
EXAMPLE #2
----------
To view the IP alias that you created, type:
# /nas/sbin/nas_config -IPalias -list
alias IPaddress state
eth2:0 1.2.3.6 UP
EXAMPLE #3
----------
To delete an IP alias, type:
# /nas/sbin/nas_config -IPalias -delete 0
All current sessions using alias eth2:0 will terminate
Do you want to continue [yes or no]: yes
done
EXAMPLE #4
----------
To generate and install a certificate for the Apache Web server on the Control
Station, type:
# /nas/sbin/nas_config -ssl
Installing a new SSL certificate requires restarting the Apache web server.
Do you want to proceed? [y/n]: y
New SSL certificate has been generated and installed successfully.
EXAMPLE #5
----------
To change the session timeout value from the default value of 60 minutes to
100 minutes, type:
# /nas/sbin/nas_config -sessiontimeout 100
done
EXAMPLE #6
----------
To disable session timeout, type:
# /nas/sbin/nas_config -sessiontimeout 0
done
or
# /nas/sbin/nas_config -sessiontimeout off
done
EXAMPLE #7
----------
To set specific password policy definitions, type:
# /nas/sbin/nas_config -password
Minimum length for a new password (Between 6 and 15): [8]
Number of attempts to allow before failing: [3]
Number of new characters (not in the old password): [3]
Number of digits that must be in the new password: [1]
Number of special characters that must be in a new password: [0]
Number of lower case characters that must be in password: [0]
Number of upper case characters that must be in password: [0]
EXAMPLE #8
----------
To set the minimum length of a new password to 10 characters, type:
# /nas/sbin/nas_config -password -min 10
EXAMPLE #9
----------
To reset the current password policy definitions to their default values,
type:
# /nas/sbin/nas_config -password -default
---------------------------------------------
Last Modified: March 4, 2011 12:45 pm
nas_connecthome
Configures email, FTP, modem, HTTPS, and ESRS transport mechanisms for
transporting Callhome event files to user-configured destinations.
SYNOPSIS
-------
nas_connecthome
-info
| -test {-email_1|-email_2|-ftp_1|-ftp_2|-modem_1|-modem_2|-https|-esrs}
| -modify [-modem_priority {Disabled|1|2|3}]
[-modem_number <phone_number>]
[-modem_number_2 <phone_number>]
[-ftp_priority {Disabled|1|2|3}]
[-ftp_server {<hostname>|<ip_addr>}]
[-ftp_port <port>]
[-ftp_user <username>]
[-ftp_passwd [<passwd>]]
[-ftp_folder <path>]
[-ftp_ipprotocol {IPV4|IPV6}]
[-ftp_mode {active|passive}]
[-ftp_server_2 {<hostname>|<ip_addr>}]
[-ftp_port_2 <port>]
[-ftp_user_2 <username>]
[-ftp_passwd_2 [<passwd>]]
[-ftp_folder_2 <path>]
[-ftp_ipprotocol_2 {IPV4|IPV6}]
[-ftp_mode_2 {active|passive}]
[-email_priority {Disabled|1|2|3}]
[-email_from <email_addr>]
[-email_to <email_addr>[,<email_addr>]]
[-email_subject <email_subject>]
[-email_server {<hostname>|<ip_addr>}]
[-email_ipprotocol {IPV4|IPV6}]
[-email_server_2 {<hostname>|<ip_addr>}]
[-email_ipprotocol_2 {IPV4|IPV6}]
[-esrs_priority {Disabled|1|2|3}]
[-https_priority {Disabled|1|2|3}]
[-https_url <url>]
[-https_ipprotocol {IPv4|IPv6}]
[-dial_in_number <phone_number>]
[-serial_number <serial_number>]
[-site_id <site_id>]
[-encryption_enabled {yes|no}]
[-dial_in_enabled {yes|no}]
| -help
DESCRIPTION
----------
nas_connecthome pauses and resumes the ConnectHome service, and
displays and configures parameters for email, FTP, modem, HTTPS, and ESRS,
which are mechanisms used for transmitting event files.
nas_connecthome enables a user to configure primary and optional
secondary destinations for each transport mechanism.
nas_connecthome also tests connectivity to the destination
configured for a transport mechanism.
This command must be executed from /nas/sbin/.
OPTIONS
-------
-info
Displays the enabled and disabled configuration parameters for all
transport mechanisms.
-test {-email_1|-email_2|-ftp_1|-ftp_2|-modem_1|-modem_2|-https|-esrs}
Tests connectivity to the destination configured and enabled for the
specified transport mechanism.
-modify
Modifies the following configuration parameters for any or all
transport mechanisms:
[-modem_priority {Disabled|1|2|3}]
Enables modem as a Primary, Secondary, or Tertiary transport
mechanism. Specifying Disabled removes modem as a transport
mechanism.
[-modem_number <phone_number>]
Sets or modifies the primary phone number of the modem.
Note: Specifying "" (empty double quotes) disables the use of the
existing phone number.
[-modem_number_2 <phone_number>]
Sets or modifies the secondary phone number of the modem.
Note: Specifying "" (empty double quotes) disables the use of the
existing phone number for this transport mechanism.
[-ftp_priority {Disabled|1|2|3}]
Enables FTP as a Primary, Secondary, or Tertiary transport
mechanism. Specifying Disabled removes FTP as a transport
mechanism.
[-ftp_server {<hostname>|<ip_addr>}]
Sets or modifies the hostname or IP address of the primary FTP
server. The allowable input is an IPv4 address, an IPv6 address,
or a domain name.
[-ftp_port <port>]
Sets or modifies the port of the primary FTP server. The valid
input is an integer between 1 and 65535. If an empty string "" is
provided for this option, the port number is reset to the default
value 21.
[-ftp_user <username>]
Sets or modifies the username of the login account on the primary
FTP server.
Note: Specifying "" (empty double quotes) reverts to the default value
of onalert.
[-ftp_passwd [<passwd>]]
Sets or modifies the password of the login account on the primary
FTP server.
Note: Specifying "" (empty double quotes) reverts to the default value
of EMCCONNECT.
[-ftp_folder <path>]
Sets or modifies the path to the folder on the primary FTP server
where the event files have to be deposited.
Note: Specifying "" (empty double quotes) reverts to the default value
of incoming.
[-ftp_ipprotocol {IPV4|IPV6}]
Sets or modifies the IP protocol of the primary FTP transport
mechanism. If an IPv4 address is provided to the FTP server, the
corresponding IP protocol is changed to IPv4 automatically. If an
IPv6 address is used, the IP protocol is changed to IPv6. When a
hostname is specified, no IP protocol change is made.
[-ftp_mode {active|passive}]
Sets or modifies the transfer mode of the primary FTP transport
mechanism.
Note: Specifying "" (empty double quotes) reverts to the default value
of active.
[-ftp_server_2 {<hostname>|<ip_addr>}]
Sets or modifies the hostname or IP address of the secondary FTP
server. The allowable input is an IPv4 address, an IPv6 address,
or a domain name.
[-ftp_port_2 <port>]
Sets or modifies the port of the secondary FTP server. The valid
input is an integer between 1 and 65535. If an empty string "" is
provided for this option, the port number is reset to the default
value of 21.
[-ftp_user_2 <username>]
Sets or modifies the username of the login account on the
secondary FTP server.
Note: Specifying "" (empty double quotes) reverts to the default value
of onalert.
[-ftp_passwd_2 [<passwd>]]
Sets or modifies the password of the login account on the
secondary FTP server.
Note: Specifying "" (empty double quotes) reverts to the default value
of EMCCONNECT.
[-ftp_folder_2 <path>]
Sets or modifies the path of the folder on the secondary FTP
server where the event files have to be deposited.
Note: Specifying "" (empty double quotes) reverts to the default value
of incoming.
[-ftp_ipprotocol_2 {IPv4|IPv6}]
Sets or modifies the IP protocol of the secondary FTP transport
mechanism.
[-ftp_mode_2 {active|passive}]
Sets or modifies the transfer mode of the secondary FTP transport
mechanism.
Note: Specifying "" (empty double quotes) reverts to the default value
of active.
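Several of the FTP options are typically combined in one invocation. A hedged sketch of enabling FTP as the secondary transport (the server address is a placeholder; the user name and folder shown are the documented defaults):

```shell
# /nas/sbin/nas_connecthome -modify -ftp_priority 2
  -ftp_server 192.0.2.10 -ftp_port 21 -ftp_user onalert
  -ftp_folder incoming -ftp_mode passive
```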
[-email_priority {Disabled|1|2|3}]
Enables email as a Primary, Secondary, or Tertiary transport
mechanism. Specifying Disabled removes email as a transport
mechanism.
[-email_from <email_addr>]
Sets or modifies the sender’s email address. The maximum
number of characters that can be specified is 63.
Note: Specifying "" (empty double quotes) reverts to the default value
of connectemc@emc.com.
[-email_to <email_addr>[,<email_addr>]]
Sets or modifies the destination email addresses that receive the
event files. Multiple email addresses can be specified with a
comma separating each address. The maximum number of
characters that can be specified is 255.
Note: Specifying "" (empty double quotes) reverts to the default value
of emailalert@emc.com.
[-email_subject <email_subject>]
Sets or modifies the subject of the email message.
Note: Specifying "" (empty double quotes) reverts to the default value
of CallHome Alert.
[-email_server {<hostname>|<ip_addr>}]
Sets or modifies the primary email server that accepts and routes
email messages.
Note: Specifying "" (empty double quotes) disables the use of the existing
email server for this transport mechanism.
[-email_ipprotocol {IPv4|IPv6}]
Sets or modifies the IP protocol used with the primary email
server.
[-email_server_2 {<hostname>|<ip_addr>}]
Sets or modifies the secondary email server that accepts and
routes email messages.
Note: Specifying "" (empty double quotes) disables the use of the existing
email server for this transport mechanism.
[-email_ipprotocol_2 {IPv4|IPv6}]
Sets or modifies the IP protocol used with the secondary email
server.
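For example, a sketch of enabling email as the primary transport with a fallback server (the sender address and server hostnames are placeholders):

```shell
# /nas/sbin/nas_connecthome -modify -email_priority 1
  -email_from storage-admin@example.com -email_server smtp1.example.com
  -email_server_2 smtp2.example.com
```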
[-esrs_priority {Disabled|1|2|3}]
Enables ESRS as a Primary, Secondary, or Tertiary transport mechanism.
Specifying Disabled removes ESRS as a transport mechanism.
[-https_priority {Disabled|1|2|3}]
Enables HTTPS as a Primary, Secondary, or Tertiary transport
mechanism. Specifying Disabled removes HTTPS as a transport
mechanism.
[-https_url <url>]
Sets or modifies the URL of the monitoring station.
[-https_ipprotocol {IPv4|IPv6}]
Sets or modifies the IP protocol of the HTTPS transport
mechanism.
[-dial_in_number <phone_number>]
Sets or modifies the dial-in phone number of the modem.
Note: Specifying "" (empty double quotes) does not disable the number
or restore its default value. The empty string is stored as is.
[-serial_number <serial_number>]
Sets or modifies the VNX serial number, if it was not
automatically detected.
Note: Specifying "" (empty double quotes) does not disable the number
or restore its default value. The empty string is stored as is.
[-site_id <site_id>]
Sets or modifies the site ID.
Note: Specifying "" (empty double quotes) does not disable the number
or restore its default value. The empty string is stored as is.
[-encryption_enabled {yes|no}]
Enables or disables the encryption of the CallHome payload
during transmission.
Note: Specifying "" (empty double quotes) reverts to the default value
of yes.
[-dial_in_enabled {yes|no}]
Enables or disables dial-in login sessions.
Note: Specifying "" (empty double quotes) reverts to the default value
of
yes.
SEE ALSO
--------
Configuring Events and Notifications on VNX for File.
EXAMPLE #1
----------
To display configuration information, type:
# /nas/sbin/nas_connecthome -info
ConnectHome Configuration:
Encryption Enabled      = yes
Dial In :
  Enabled               = yes
  Modem phone number    = 9123123123
Site ID                 = MY SITE
Serial number           = APM00054703223
ESRS :
  Priority              = 1
Email :
  Priority              = 1
  Sender Address        = admin@yourcompany.com
  Recipient Address(es) = emailalert@emc.com
  Subject               = CallHome Alert
  Primary :
    Email Server        = backup.mailhub.company.com
  Secondary :
    Email Server        =
FTP :
  Priority              = 2
  Primary :
    FTP Server          = 1.2.3.4
    FTP Port            = 22
    FTP User Name       = onalert
    FTP Password        = **********
    FTP Remote Folder   = incoming
    FTP Transfer Mode   = active
  Secondary :
    FTP Server          = 1.2.4.4
    FTP Port            = 22
    FTP User Name       = onalert
    FTP Password        = **********
    FTP Remote Folder   = incoming
    FTP Transfer Mode   = active
Modem :
  Priority              = Disabled
  Primary :
    Phone Number        =
    BT Tymnet           = no
  Secondary :
    Phone Number        =
    BT Tymnet           = no
EXAMPLE #2
---------To test the primary email server, type:
# /nas/sbin/nas_connecthome -test -email_1
---------------------------------------------------------
ConnectEMC 2.0.27-bl18 Wed Aug 22 10:24:32 EDT 2007
RSC API Version: 2.0.27-bl18
Copyright (C) EMC Corporation 2003-2007, all rights reserved.
---------------------------------------------------------
Reading configuration file: ConnectEMC.ini.
Run Service begin...
Test succeeded for Primary Email.
EXAMPLE #3
---------To modify the configuration information, type:
# /nas/sbin/nas_connecthome -modify -esrs_priority 1
---------------------------------------------------------
ConnectEMC 2.0.27-bl18 Wed Aug 22 10:24:32 EDT 2007
RSC API Version: 2.0.27-bl18
Copyright (C) EMC Corporation 2003-2007, all rights reserved.
---------------------------------------------------------
Reading configuration file: ConnectEMC.ini.
Run Service begin...
Modify succeeded for Primary ESRS.
--------------------------------------
Last Modified: September 26, 2012 11:15 a.m.
nas_copy
Creates a replication session for a one-time copy of a file system. This
command is available with VNX Replicator.
SYNOPSIS
-------
nas_copy
-name <sessionName>
-source {-fs {<name>|id=<fsId>}|-ckpt {<ckptName>|id=<ckptId>}}
-destination {-fs {<existing_dstFsName>|id=<dstFsId>}
|-pool {<dstStoragePool>|id=<dstStoragePoolId>}
[-storageSystem <dstStorageSerialNumber>]}
[-from_base {<ckpt_name>|id=<ckptId>}]
-interconnect {<name>|id=<interConnectId>}
[-source_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
[-overwrite_destination]
[-refresh]
[-full_copy]
[-background]
DESCRIPTION
----------
nas_copy, run from the Control Station on the source side, performs a
one-time copy of a source read-only file system or a checkpoint file
system.
Note: Depending on the size of the data in the source, this command may
take some time to complete. Once a copy session begins, you can monitor it
or interrupt it if necessary using the nas_task command. You can list all
replication sessions, including copy sessions, using the nas_replicate -list
command.
OPTIONS
-------
-name <sessionName> -source {-fs {<name>|id=<fsId>}|-ckpt {<ckptName>|
id=<ckptId>}} -destination {-fs {<existing_dstFsName>|id=<dstFsId>}|
-pool {<dstStoragePool>|id=<dstStoragePoolId>}} [-from_base
{<ckpt_name>|id=<ckptId>}] -interconnect {<name>|id=<interConnectId>}
Creates a VNX Replicator session that performs a one-time copy of a
source read-only file system or a checkpoint file system.
The session name assigned must be unique for the Data Mover pair
as defined by the interconnect. The naming convention
<source_fs_or_ckpt_name>_replica<#> is used if a read-only file
system or checkpoint at the destination already has the same name as
the source. An integer between 1 and 4 is assigned according to how
many replicas of that file system or checkpoint already exist.
The -source option specifies the name or ID of an existing read-only file
system or checkpoint file system as the source for this copy session. A
checkpoint is identified by its checkpoint name or checkpoint file system
ID. When combined with -from_base, the named checkpoint serves as a common
base for the initial transfer; this is intended to accommodate upgrade
situations to VNX Replicator.
The -destination specifies either an existing destination file system or
the storage needed to create the destination file system automatically,
as part of the copy operation. An existing destination file system
must be read-only and the same size as the source. Specifying a
storage pool or ID creates the destination file system automatically, as
read-only, using the same name and size as the source file system.
[-storageSystem <dstStorageSerialNumber>]
When the destination file system is to be created from a pool, this
specifies the storage system on which the destination file system
resides. Use the nas_storage -list command to obtain the serial
number of the storage system.
[-from_base {<ckpt_name>|id=<ckptId>}]
Specifies an existing source file system checkpoint to be used as a
common base for the initial data transfer. The checkpoint is identified
by the checkpoint name or ID.
The -interconnect specifies the local (source) side of an established
Data Mover interconnect to use for this copy session. Use the nas_cel
-interconnect -list command on the source VNX to list the
interconnects available to VNX Replicator sessions.
[-source_interface {<nameServiceInterfaceName>|ip=<ipAddr>}]
Instructs the copy session to use a specific local interface defined
for the interconnect on the source VNX instead of selecting the
local interface supporting the lowest number of sessions (the
default). If this local interface was defined for the interconnect
using a name service interface name, specify the name service
interface name; if it was defined using an IP address, specify the
IP address. The source_interfaces field of the output from the
nas_cel -interconnect -info command shows how the source
interface was defined. This option does not apply to a loopback
interconnect, which always uses 127.0.0.1.
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
Instructs the copy session to use a specific interface defined for
the interconnect on the destination VNX instead of selecting the
peer interface supporting the lowest number of sessions (the
default). If this peer interface was defined for the interconnect
using a name service interface name, specify the name service
interface name; if it was defined using an IP address, specify the
IP address. The destination_interfaces field of the output from
the nas_cel -interconnect -info command shows how the peer
interface was defined. This option does not apply to a loopback
interconnect, which always uses 127.0.0.1.
[-overwrite_destination]
For an existing destination, discards any changes made to the
destination object and restores it from the established common
base (differential copy). If this option is not specified and an
existing destination object contains different content than the
established common base, an error is returned.
[-refresh]
Updates a destination checkpoint that has the same name as the
copied checkpoint. This option does not refresh the source object;
it refreshes only the destination for an existing checkpoint. If you
specify this option and no checkpoint exists with the same name,
the command returns an error.
[-full_copy]
For an existing destination object, if a common base checkpoint
exists, this performs a full copy of the source checkpoint to the
destination, instead of a differential copy. If this option is not
specified and an existing destination object has different content
than the established common base, an error is returned.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check the status of the command.
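When scripting -background mode, the task id can be pulled out of the informational message for a follow-up query. A minimal sketch; the message text is the sample output shown in EXAMPLE #5, and the exact nas_task syntax should be confirmed against the nas_task man page:

```shell
# Sample asynchronous-mode message from nas_copy (as shown in EXAMPLE #5).
msg='Info 26843676673: In Progress: Operation is still running. Check task id 4177
on the Task Status screen for results.'

# Extract the integer that follows "task id".
task_id=$(printf '%s\n' "$msg" | sed -n 's/.*task id \([0-9][0-9]*\).*/\1/p')
echo "$task_id"
# The id can then be passed to nas_task (see the nas_task man page for syntax).
```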
SEE ALSO
--------
nas_cel, nas_replicate, nas_task.
EXAMPLE #1
----------
To create a one-time copy of a checkpoint file system with session
name ufs1_replica1 with the source checkpoint ufs1_ckpt1 and
destination pool clar_r5_performance on the interconnect
NYs3_LAs2, source interface 10.6.3.190, and destination interface
10.6.3.173, type:
$ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt1 -destination
-pool clar_r5_performance -interconnect NYs3_LAs2 -source_interface
10.6.3.190 -destination_interface 10.6.3.173
OK
EXAMPLE #2
----------
To create a one-time copy of a read-only file system for the
session ufs1_replica1 with source file system ufs1 and overwrite
an existing destination file system ufs1 on the interconnect NYs3_LAs2,
source interface 10.6.3.190, and destination interface 10.6.3.173, type:
$ nas_copy -name ufs1_replica1 -source -fs ufs1 -destination -fs ufs1
-interconnect NYs3_LAs2 -source_interface 10.6.3.190 -destination_interface
10.6.3.173 -overwrite_destination
OK
EXAMPLE #3
----------
To initiate a differential copy of ufs1_ckpt2 to the ufs1_destination file
system using ufs1_ckpt1 as the common base, using the -from_base
option, type:
$ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt2
-destination -fs ufs1_destination -from_base ufs1_ckpt1
-interconnect NYs3_LAs2
OK
Caution: Using the -from_base option overrides any common base that may
exist. Ensure that the specified checkpoint represents the correct
state of the destination file system.
EXAMPLE #4
----------
To refresh the destination of the replication session ufs1_replica1
for the source checkpoint ufs1_ckpt1 and destination file system ufs1
on the interconnect NYs3_LAs2, type:
$ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt1 -destination -fs
ufs1 -interconnect NYs3_LAs2 -refresh
OK
EXAMPLE #5
---------
To perform a full copy of the source checkpoint to the destination for
the replication session ufs1_replica1 with the source file system ufs1
and destination file system ufs1 on the interconnect NYs3_LAs2, type:
$ nas_copy -name ufs1_replica1 -source -fs ufs1 -destination -fs ufs1
-interconnect NYs3_LAs2 -overwrite_destination -full_copy -background
Info 26843676673: In Progress: Operation is still running. Check task id 4177
on the Task Status screen for results.
--------------------------------------
Last Modified: July 13, 2011 11:00 am
nas_cs
Manages the configuration properties of the Control Station.
SYNOPSIS
--------
nas_cs
-info [-timezones]
| -set [-hostname <hostname>]
[ -nat1_ip4address <ipv4_address> ]
[ -nat1_ip4netmask <ipv4_netmask> ]
[ -nat1_ip6address <ipv6_address[/prefix_length]> ]
[ -nat2_ip4address <ipv4_address> ]
[ -nat2_ip4netmask <ipv4_netmask> ]
[ -nat2_ip6address <ipv6_address[/prefix_length]> ]
[-dns_domain <dns_domain_name>]
[-search_domain <domain_name>[,...]]
[-dns_servers <dns_server>[,...]]
[-session_monitor_timeout <days>]
[-session_idle_timeout <minutes>]
[-time <yyyymmddhhmm [ss]>]
[-timezone <time_zone_str>]
[-ntp_servers <ntp_server>[,...]]
| -natsync [ -dnssync ]
| -reboot
DESCRIPTION
-----------
nas_cs sets, clears, and lists the Control Station configuration.
nas_cs can be used to reboot the Control Station.
OPTIONS
-------
-info [-timezones]
Displays the Control Station configuration. The -timezones option
displays all supported time zones on the Control Station.
-set [-hostname <hostname>]
Sets the user configurable parameters of the Control Station
configuration. Sets the hostname of the primary Control Station. To
specify a hostname, the maximum number of characters is 64,
excluding white spaces and dot characters.
[-ip4address <ipv4_address>]
Sets the IPv4 network address of the primary Control Station.
The IPv4 address must be a valid address.
[-ip4netmask <ipv4_netmask>]
Sets the subnet mask for a valid IPv4 network address on the
primary Control Station.
[-ip4gateway <ipv4_gateway>]
Sets the IPv4 network address of the gateway machine for the
primary Control Station on the network. The IPv4 address must
be a valid address.
[-ip6address <ipv6_addr[/prefix_length]>]
Sets the IPv6 network address of the primary Control Station.
The IPv6 address must be a valid address. The /prefix_length option
sets the integer value, between 8 and 128, for the prefix length of
the IPv6 address of the primary Control Station.
[-ip6gateway <ipv6_gateway>]
Sets the IPv6 network address of the gateway machine for the
primary Control Station on the network. The IPv6 address must
be a valid address.
[-dns_domain <dns_domain_name>]
Sets the DNS domain of which the primary Control Station is a
member. Accepts valid domain names.
[-search_domain <domain_name>[,...]]
Sets the order in which DNS domains are searched for name
resolution, as a comma-separated list of valid domain names.
[-dns_servers <ip_addr>[,...]]
Sets the IP addresses of the DNS servers of the domain. It is a
comma-separated list of valid IPv4 or IPv6 addresses and can
have a maximum of three DNS addresses.
[-session_idle_timeout <minutes>]
Sets the timeout period in minutes for an inactive administrator
session to become invalid.
[-session_monitor_timeout <days>]
Sets the limit, in days, for how long a valid login is allowed to
run queries on the primary Control Station. Any active
management session requires a login on the primary Control
Station.
[-time <yyyymmddhhmm [ss]>]
Sets the current system date and time in the format
<yyyymmddhhmm [ss]>.
[-timezone <time_zone_str>]
Sets a valid time zone value on the primary Control Station.
[-ntp_servers <ip_addr>[,...]]
Sets the IP addresses of the NTP server used by the primary
Control Station. It is a comma-separated list of valid IPv4 or IPv6
addresses and can have a maximum of four NTP addresses.
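Two of the -set arguments lend themselves to scripting: the -time value can be derived from the system clock, and the size limits on the server lists can be checked before the command is run. A minimal sketch; count_csv is a helper invented here, not part of the eNAS CLI:

```shell
# -time takes yyyymmddhhmm (trailing seconds optional); build it from the clock.
ts=$(date +%Y%m%d%H%M)

# -dns_servers accepts at most three comma-separated addresses and
# -ntp_servers at most four; count the entries before calling nas_cs.
count_csv() { printf '%s' "$1" | awk -F',' '{print NF}'; }

dns='172.24.175.172,172.24.175.173'
n=$(count_csv "$dns")
[ "$n" -le 3 ] && echo "dns list ok"
# nas_cs -set -time "$ts" -dns_servers "$dns"
```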
-natsync [-dnssync]
Synchronizes the NAT1A/2A information after an OCC; with
-dnssync, the DNS information is also synchronized.
-reboot
Reboots the primary Control Station.
EXAMPLE #1
----------
To display the configuration properties of the primary Control
Station, type:
$ nas_cs -info
Host name = eng24416
Version = 6.0
Location = system:NS40G:HK1908075100410000|controlStation::0
Status = Ok
Standby location = system:NS40G:HK1908075100410000|controlStation::1
Stand by status = Ok
IPv4 address = 172.24.250.26
IPv4 gateway = 172.24.250.10
IPv4 netmask = 255.255.255.0
IPv6 address = 2002:ac18:af02:f4:20e:cff:fe6e:d524/64
IPv6 gateway = 2002:ac18:af02:f4:20e:cff:fe6e:d527
DNS Domain = eng.lss.emc.com
DNS Domain search order = eng.lss.emc.com,rtp.lab.emc.com
DNS servers = 2002:ac18:af02:f4:20e:cff:fe6e:d526
Session idle timeout = 10 Minutes
Session monitor timeout = 10 Days
Current Time = Thu Nov 6 07:54:52 EST 2008
NTP Servers = 2002:ac18:af02:f4:20e:cff:fe6e:d529
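The -info output is plain "name = value" text, so individual fields can be picked out with awk in a script. A minimal sketch; the here-document reuses two lines of the sample output above:

```shell
# Print the value for a given field name from "name = value" output on stdin.
get_field() { awk -F' = ' -v k="$1" '$1 == k { print $2 }'; }

ip=$(get_field 'IPv4 address' <<'EOF'
Host name = eng24416
IPv4 address = 172.24.250.26
EOF
)
echo "$ip"
```

In a live environment the here-document would be replaced by `nas_cs -info | get_field 'IPv4 address'`.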
EXAMPLE #2
---------
To set the hostname and nat1_ip4address for the primary Control Station, type:
$ nas_cs -set -hostname Ml9q26-cs0 -nat1_ip4address
10.246.124.63
OK
EXAMPLE #3
---------
To set the nat1_ip6address for the primary Control Station, type:
$ nas_cs -set -nat1_ip6address
2620:0:170:260:16ff:fe5d:535c:2467/64
OK
EXAMPLE #4
---------
To set the DNS domain, search domains, and DNS servers for the
primary Control Station, type:
$ nas_cs -set -dns_domain eng.lss.emc.com -search_domain
lss.emc.com,rtp.lab.emc.com -dns_servers
172.24.175.172,172.24.175.173
OK
EXAMPLE #5
---------
To set the session monitor timeout and session idle timeout for the
primary Control Station, type:
$ nas_cs -set -session_monitor_timeout 2 -session_idle_timeout 30
OK
EXAMPLE #6
---------
To set the date, time, timezone, and NTP servers for the primary
Control Station, type:
$ nas_cs -set -time 200811070205 -timezone
America/New_York -ntp_servers 128.221.252.0
OK
EXAMPLE #7
----------
To reboot the primary Control Station, type:
$ nas_cs -reboot
OK
-------------------------------------------------------------
Last modified: May 14, 2012 11:45 am
nas_dbtable
Displays the table records of the Control Station.
SYNOPSIS
--------
nas_dbtable
To execute the command against a database that is on the Data Mover area:
-info -mover <movername> -db <dbname>
-query <tablename> -mover <movername> -db <dbname>
-filter <fieldname><operator><value> [{-and|-or}
<fieldname>{<|<=|>|>=|=|.CONTAIN.}<value>]...
-list -mover <movername>
DESCRIPTION
-----------
Displays the table records of the specified Data Mover. It also filters
the records of a particular field, and lists those records by using
primary or secondary key values.
To execute the command against a database that is on the Control Station area:
-info -cs_path <cs_pathname> -db <dbname>
-query <tablename> -cs_path <cs_pathname> -db <dbname>
-filter <fieldname><operator><value> [{-and|-or}
<fieldname>{<|<=|>|>=|=|.CONTAIN.}<value>]...
-list -cs_path <cs_pathname>
DESCRIPTION
-----------
Displays the table records of the Control Station. It also filters the records
of a particular field, and lists those records by using primary or secondary
key values.
The database located in the Data Mover can be read directly. The backup of the
database is read on the Control Station. If the database is inconsistent, the
nas_dbtable command allows you to manually verify the backup of the database
before restoring it.
The Data Mover table uses the standard XML interface of the administration
commands. The application structures each table's data and keys as a set of
fields. Each field has a unique name, type, and size.
The table structure is stored in the db.<base name> file. It is backed up and
restored with the database. The DBMS reader uses this description of the table
structure to read and display the records from the backup database.
DATA MOVER OPTIONS
------------------
-info -mover <movername> -db <dbname>
Displays the schema of a table or the list of fields and keys. It also
displays the number of records of the table so that the user can know if it is
reasonable to dump the entire table.
-query <tablename> -mover <movername> -db <dbname>
Displays the records of a table. Selects the records to display on the value
of some fields or secondary keys.
-filter <fieldname><operator><value> [{-and|-or}
<fieldname>{<|<=|>|>=|=|.CONTAIN.}<value>]...
Filters the records of a particular field, and lists the records using primary
or secondary key values. The default with multiple filters is the -and option.
Only the = operator is supported in the first implementation.
NOTE: Keys are used when the -and option is used. With the -or option on
multiple fields, the command parses the table and applies the filter to each record.
The <fieldname> argument is the name of a secondary key or field. If the
secondary key is declared as a sequence of fields, it is used by specifying
either the value of its fields or the secondary key value. If the secondary
key is not declared in the schema, then rename the key and its value as filter.
The <value> argument is the value of the field encoded in character.
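The difference between -and and -or filtering can be illustrated conceptually: with -and a record must satisfy every condition, while -or keeps a record that satisfies any condition. A sketch over flat "field=value" lines modeled on the Secmap Mapping records; this is only an illustration, not the DBMS reader itself:

```shell
# Three illustrative records, one per line, with two fields each.
records='xidType=user fxid=10011
xidType=group fxid=10011
xidType=user fxid=20022'

# -and semantics: both conditions must hold for the same record.
and_hits=$(printf '%s\n' "$records" | awk '/xidType=user/ && /fxid=10011/ {n++} END {print n+0}')

# -or semantics: every record is scanned; any matching condition keeps it.
or_hits=$(printf '%s\n' "$records" | awk '/xidType=user/ || /fxid=10011/ {n++} END {print n+0}')

echo "and=$and_hits or=$or_hits"
```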
CONTROL STATION OPTIONS
-----------------------
-info -cs_path <cs_pathname> -db <dbname>
Displays the schema of a table or the list of fields and keys. It also
displays the number of records of the table so that the user can know if it is
reasonable to dump the entire table.
-query <tablename> -cs_path <cs_pathname> -db <dbname>
Displays the records of the table. Selects the records to display on the value
of some fields or secondary keys.
-filter <fieldname><operator><value> [{-and|-or}
<fieldname>{<|<=|>|>=|=|.CONTAIN.}<value>]...
Filters the records of a particular field, and lists the records using
primary or secondary key values. The default with multiple filters is
the -and option. Only the = operator is supported in the first
implementation.
NOTE: Keys are used when the -filter option contains all components of the
key, and the -and option is used. With the -or option, it is necessary to
parse all the records.
The <fieldname> argument is the name of a secondary key or field. If the
secondary key is declared as a sequence of fields, it is used by specifying
either the value of its fields or the secondary key value. If the secondary
key is not declared in the schema, rename the key and its value as filter.
The <value> argument is the value of the field encoded in character.
-list -cs_path <cs_pathname>
Displays the list of databases and tables within a particular directory
of the Control Station area.
SEE ALSO
--------
server_dbms
EXAMPLE #1
----------
To display the Secmap schema of the Data Mover, type:
$ nas_dbtable -info -mover <movername> -db Secmap
Database identification
=======================
Base Name  = Secmap
Table Name = Mapping
Primary Key Schema
==================
sid = SID
Secondary Key Components
========================
xid = xidType, fxid
Data Schema
===========
origin  = Enumeration
            Unknown    : 0
            Secmap     : 16
            Localgroup : 32
            Etc        : 48
            Nis        : 64
            AD         : 80
            Usrmap     : 96
            Ldap       : 112
            Ntx        : 128
xidType = Enumeration
            unknown_name : -2
            unknown_sid  : -1
            unknown_type : 0
            user         : 1
            group        : 2
fxid    = Unsigned Integer size : 4
cdate   = Date
gid     = Unsigned Integer size : 4
name    = String, length container size : 2
EXAMPLE #2
----------
To filter the records of the Secmap schema, type:
$ nas_dbtable -query Mapping -mover <movername> -db Secmap -filter fxid=10011
sid     = S-1-5-15-2b3be507-6bc5c62-3f32a78a-8cc
origin  = Nis
xidType = user
fxid    = 10011
cdate   = Fri Sep 11 17:39:09 2009
gid     = 107
name    = DVT2KA\MaxUsers00000011
Record count = 1
Last key     = 1050000000000051500000007e53b2b625cbc068aa7323fcc080000
-------------------------------------------------------------
Last modified: July 13, 2011 12:55pm
nas_devicegroup
Manages an established MirrorView/Synchronous (MirrorView/S)
consistency group, also known as a device group.
SYNOPSIS
--------
nas_devicegroup
-list
| -info {<name>|id=<id>|-all} [-sync [yes|no]]
| -acl <acl_value> {<name>|id=<id>}
| -suspend {<name>|id=<id>}
| -resume {<name>|id=<id>}
DESCRIPTION
-----------
nas_devicegroup lists the device group information for a
MirrorView/S configuration, gets detailed information about a
consistency group, specifies an access control level value for the
group, suspends MirrorView/S operations, or resumes operations of
the device group.
A MirrorView/S with a VNX configuration involves source and
destination VNXs attached to old versions of storage systems.
MirrorView/S performs synchronous mirroring of source storage
logical units (LUNs) representing production images, where the
mirrored LUNs are part of a MirrorView/S consistency group.
On the source VNX, you must be root to issue the -acl, -suspend, and
-resume options.
nas_devicegroup must be run from a Control Station in slot 0; it will
report an error if run from a Control Station in slot 1.
OPTIONS
-------
-list
Displays a list of available configured MirrorView/S device groups.
-info {<name>|id=<id>|-all} [-sync [yes|no]]
Displays detailed information about the MirrorView/S configuration
for a specific device group or for all groups.
[-sync [yes|no]]
The -sync option first synchronizes the Control Station’s view
with the VNX for block before displaying configuration
information. The default is yes.
-acl <acl_value> {<name>|id=<id>}
Sets an access control level value that defines the owner of the storage
system, and the level of access allowed for users and groups defined
in the access control level table. The nas_acl command provides more
information.
CAUTION
The access control level value for the group should not be changed
from the default setting. A change in access control level value can
prevent MirrorView/S from functioning properly.
-suspend {<name>|id=<id>}
Temporarily halts mirroring from the source to the destination,
thereby suspending the link. Changes can still be made to the source
LUNs, but are not applied to the destination LUNs until operations
are resumed.
-resume {<name>|id=<id>}
Resumes device group operations and restarts mirroring,
synchronizing the destination LUNs with the source LUNs.
SEE ALSO
--------
Using MirrorView/Synchronous with VNX for Disaster Recovery, nas_acl,
and nas_logviewer.
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device is dependent on the
attached storage system; for MirrorView/S, some VNX for block
systems display a prefix of APM before a set of integers, for example,
APM00033900124-0019. The VNX for block supports the following
system-defined AVM storage pools for MirrorView/S only: cm_r1,
cm_r5_performance, cm_r5_economy, cmata_archive, and cmata_r3.
EXAMPLE #1
----------
To list the configured MirrorView/S device groups that are
available, type:
$ nas_devicegroup -list
ID name owner storage ID acl type
2 mviewgroup 500 APM00053001549 0 MVIEW
EXAMPLE #2
----------
To display detailed information for a MirrorView/S device group,
type:
$ nas_devicegroup -info mviewgroup
Sync with CLARiiON backend ...... done
name = mviewgroup
description =
uid = 50:6:1:60:B0:60:27:20:0:0:0:0:0:0:0:0
state = Synchronized
role = Primary
condition = Active
recovery policy = Automatic
number of mirrors = 16
mode = SYNC
owner = 500
mirrored disks =
local clarid = APM00053001549
remote clarid = APM00053001552
mirror direction = local -> remote
Where:
Value                Definition
-----                ----------
Sync with CLARiiON   Indicates that a sync with the VNX for block was
storage system       performed to retrieve the most recent information.
                     This does not appear if you specify -info -sync no.
name                 Name of the device group.
description          Brief description of device group.
uid                  UID assigned, based on the system.
state                State of the device group (for example, Consistent,
                     Synchronized, Out-of-Sync, Synchronizing, Scrambled,
                     Empty, Incomplete, or Local Only).
role                 Whether the current system is the Primary (source)
                     or Secondary (destination).
condition            Whether the group is functioning (Active), Inactive,
                     Admin Fractured (suspended), Waiting on Sync, System
                     Fractured (which indicates link down), or Unknown.
recovery policy      Type of recovery policy (Automatic is the default
                     and recommended value for the group during storage
                     system configuration; if Manual is set, use -resume
                     after a link down failure).
number of mirrors    Number of mirrors in the group.
mode                 MirrorView mode (always SYNC in this release).
owner                User whom the object is assigned to, indicated by
                     the index number in the access control level table.
                     nas_acl provides information.
mirrored disks       Comma-separated list of disks that are mirrored.
local clarid         APM number of the local VNX for block storage array.
remote clarid        APM number of the remote VNX for block storage array.
mirror direction     On the primary system, local to remote; on the
                     destination system, local from remote.
EXAMPLE #3
----------
To display detailed information about a MirrorView/S device group
without synchronizing the Control Station's view with the VNX for
block, type:
$ nas_devicegroup -info id=2 -sync no
name = mviewgroup
description =
uid = 50:6:1:60:B0:60:27:20:0:0:0:0:0:0:0:0
state = Consistent
role = Primary
condition = Active
recovery policy = Automatic
number of mirrors = 16
mode = SYNC
owner = 500
mirrored disks =
local clarid = APM00053001549
remote clarid = APM00053001552
mirror direction = local -> remote
EXAMPLE #4
----------
To halt operation of the specified device group, as root user, type:
# nas_devicegroup -suspend mviewgroup
Sync with CLARiiON backend ...... done
STARTING an MV ’SUSPEND’ operation.
Device group: mviewgroup ............ done
The MV ’SUSPEND’ operation SUCCEEDED.
done
EXAMPLE #5
----------
To resume operations of the specified device group, as root user,
type:
# nas_devicegroup -resume mviewgroup
Sync with CLARiiON backend ...... done
STARTING an MV ’RESUME’ operation.
Device group: mviewgroup ............ done
The MV ’RESUME’ operation SUCCEEDED.
done
----------------------------------------------------------------
Last modified: May 11, 2011 10:00 am.
nas_disk
Manages the disk table.
SYNOPSIS
--------
nas_disk
-list
| -delete <disk_name> [[-perm]|[-unbind]]
| -info {<disk_name>|id=<disk_id>}
| -rename <old_name> <new_name>
DESCRIPTION
-----------
nas_disk displays a list of known disks and renames, deletes, or
displays information for the specified disk.
OPTIONS
-------
-list
Lists the disk table.
It also displays the new device types (for example, DSL, R1DSL,
R2DSL, R1BDSL, R2BDSL); these devices have the default storage
group without any SLO or SRP set.
Note: The ID of the object is an integer and is assigned automatically.
The name of the disk might be truncated if it is too long for the display.
To display the full name, use the -info option with the disk ID.
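The -list output is columnar, so scripts can select disk ids for follow-up -info calls. A minimal sketch that collects the ids of disks not in use (inuse = n); the here-document is an excerpt of the EXAMPLE #1 listing:

```shell
# Column 1 is the disk id and column 2 the inuse flag in "nas_disk -list".
free_ids=$(awk '$2 == "n" { print $1 }' <<'EOF'
7 y 17261 000197100127-00021 DSL d7 1,2
8 n 17261 000197100127-00022 DSL d8 1,2
9 n 17261 000197100127-00023 DSL d9 1,2
EOF
)
echo $free_ids
# In a live environment: nas_disk -list | awk '$2 == "n" {print $1}'
# for id in $free_ids; do nas_disk -info id=$id; done
```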
-delete <disk_name> [[-perm]|[-unbind]]
Deletes an entry from the disk table. In a VNX, restores the VNX for
block LUN name to its default value.
Unless -perm is specified, the disk is still identified as a VNX disk
and can be discovered and marked again using server_devconfig.
The -perm option removes the entry from the disk table and deletes
the diskmark. The disk is then available to be deployed for use by
another platform. The -unbind option removes the LUN from the
VNX Storage group (if EMC Access Logix is enabled). The -unbind
option permanently destroys the LUN and its contents. If this is the
last LUN using a RAID group, then the RAID group will be deleted.
-info {<disk_name>|id=<disk_id>}
Displays information for a specific <disk_name> or <disk_id> such
as size, type, and Access control level (ACL).
-rename <old_name> <new_name>
Renames a disk to <new_name>.
Note: If a VNX for block LUN uses the default name, it is renamed in the format
VNX_<VNX-hostname>_<lun-id>_<VNX-dvol-name>.
SEE ALSO
-------VNX System Operations and server_devconfig.
SYSTEM OUTPUT
-------------
The number associated with the storage device is dependent on the attached
system. VNX for block systems display a prefix of alphabetic characters before a
set of integers, for example, FCNTR074200038-0019. Symmetrix systems
display as a set of integers, for example, 002804000190-003C.
EXAMPLE #1
----------
To list the disk table, type:
$ nas_disk -list
id inuse sizeMB storageID-devID type name servers
1 y 22874 000197100127-00001 STD root_disk 1,2
2 y 11619 000197100127-00002 STD root_ldisk 1,2
3 y 2077 000197100127-00008 STD d3 1,2
4 y 2077 000197100127-00009 STD d4 1,2
5 y 4154 000197100127-00006 STD d5 1,2
6 y 65542 000197100127-00007 STD d6 1,2
7 y 17261 000197100127-00021 DSL d7 1,2
8 n 17261 000197100127-00022 DSL d8 1,2
9 n 17261 000197100127-00023 DSL d9 1,2
10 n 17261 000197100127-00024 DSL d10 1,2
11 n 17261 000197100127-00025 DSL d11 1,2
12 n 17261 000197100127-00026 DSL d12 1,2
13 n 17261 000197100127-00027 DSL d13 1,2
14 n 17261 000197100127-00028 DSL d14 1,2
15 y 17261 000197100127-00029 DSL d15 1,2
17 y 17261 000197100127-0002A DSL d17 1,2
Where:
Value            Definition
-----            ----------
id               ID of the disk (assigned automatically).
inuse            Used by any type of volume or file system.
sizeMB           Total size of disk.
storageID-devID  ID of the system and device associated with the disk.
type             Type of disk contingent on the system attached; CLSTD,
                 CLATA, CMSTD, CLEFD, CMEFD, CMATA, MIXED (indicates
                 tiers used in the pool contain multiple disk types),
                 Performance, Capacity, Extreme_performance,
                 Mirrored_mixed, Mirrored_performance, Mirrored_capacity,
                 and Mirrored_extreme_performance are VNX disk types;
                 STD, BCV, R1BCV, R2BCV, R1STD, R2STD, ATA, R1ATA,
                 R2ATA, BCVA, R1BCA, R2BCA, EFD, FTS, R1FTS, R2FTS,
                 BCVF, R1BCF, R2BCF, BCVMIXED, R1MIXED, R2MIXED,
                 R1BCVMIXED, and R2BCVMIXED are Symmetrix disk types.
name             Name of the disk; 'dd' in a disk name indicates a
                 remote disk.
servers          Servers that have access to this disk.
EXAMPLE #2
----------
To list the disk table for the system with a Symmetrix system, type:
$ nas_disk -list
id inuse sizeMB storageID-devID type name servers
1 y 11507 000190100530-00FB STD root_disk 1,2,3,4,5,6,7,8
2 y 11507 000190100530-00FC STD root_ldisk 1,2,3,4,5,6,7,8
3 y 2076 000190100530-00FD STD d3 1,2,3,4,5,6,7,8
4 y 2076 000190100530-00FE STD d4 1,2,3,4,5,6,7,8
5 y 2076 000190100530-00FF STD d5 1,2,3,4,5,6,7,8
6 y 65536 000190100530-04D4 STD d6 1,2,3,4,5,6,7,8
7 n 28560 000190100530-0102 STD d7 1,2,3,4,5,6,7,8
8 n 28560 000190100530-0103 STD d8 1,2,3,4,5,6,7,8
9 n 28560 000190100530-0104 STD d9 1,2,3,4,5,6,7,8
10 n 28560 000190100530-0105 STD d10 1,2,3,4,5,6,7,8
11 n 28560 000190100530-0106 STD d11 1,2,3,4,5,6,7,8
12 n 28560 000190100530-0107 STD d12 1,2,3,4,5,6,7,8
13 n 28560 000190100530-0108 STD d13 1,2,3,4,5,6,7,8
14 n 28560 000190100530-0109 STD d14 1,2,3,4,5,6,7,8
15 n 28560 000190100530-010A STD d15 1,2,3,4,5,6,7,8
16 n 28560 000190100530-010B STD d16 1,2,3,4,5,6,7,8
17 n 28560 000190100530-010C STD d17 1,2,3,4,5,6,7,8
18 n 28560 000190100530-010D STD d18 1,2,3,4,5,6,7,8
19 n 28560 000190100530-010E STD d19 1,2,3,4,5,6,7,8
20 n 28560 000190100530-010F STD d20 1,2,3,4,5,6,7,8
21 n 28560 000190100530-0110 STD d21 1,2,3,4,5,6,7,8
22 n 28560 000190100530-0111 STD d22 1,2,3,4,5,6,7,8
23 n 28560 000190100530-0112 STD d23 1,2,3,4,5,6,7,8
24 n 28560 000190100530-0113 STD d24 1,2,3,4,5,6,7,8
[....]
155 n 28560 000190100530-0196 STD d155 1,2,3,4,5,6,7,8
156 n 28560 000190100530-0197 STD d156 1,2,3,4,5,6,7,8
157 n 28560 000190100530-0198 BCV rootd157 1,2,3,4,5,6,7,8
158 n 28560 000190100530-0199 BCV rootd158 1,2,3,4,5,6,7,8
159 n 28560 000190100530-019A BCV rootd159 1,2,3,4,5,6,7,8
160 n 28560 000190100530-019B BCV rootd160 1,2,3,4,5,6,7,8
161 n 28560 000190100530-019C BCV rootd161 1,2,3,4,5,6,7,8
162 n 28560 000190100530-019D BCV rootd162 1,2,3,4,5,6,7,8
163 n 28560 000190100530-019E BCV rootd163 1,2,3,4,5,6,7,8
164 n 28560 000190100530-019F BCV rootd164 1,2,3,4,5,6,7,8
165 n 28560 000190100530-01A0 BCV rootd165 1,2,3,4,5,6,7,8
166 n 28560 000190100530-01A1 BCV rootd166 1,2,3,4,5,6,7,8
167 n 28560 000190100530-01A2 BCV rootd167 1,2,3,4,5,6,7,8
168 n 28560 000190100530-01A3 BCV rootd168 1,2,3,4,5,6,7,8
169 n 28560 000190100530-01A4 BCV rootd169 1,2,3,4,5,6,7,8
170 n 28560 000190100530-01A5 BCV rootd170 1,2,3,4,5,6,7,8
171 n 28560 000190100530-01A6 BCV rootd171 1,2,3,4,5,6,7,8
172 n 28560 000190100530-01A7 BCV rootd172 1,2,3,4,5,6,7,8
173 n 28560 000190100530-01A8 BCV rootd173 1,2,3,4,5,6,7,8
174 n 28560 000190100530-01A9 BCV rootd174 1,2,3,4,5,6,7,8
175 n 28560 000190100530-01AA BCV rootd175 1,2,3,4,5,6,7,8
176 n 28560 000190100530-01AB BCV rootd176 1,2,3,4,5,6,7,8
177 n 28560 000190100530-01AC BCV rootd177 1,2,3,4,5,6,7,8
178 n 28560 000190100530-01AD BCV rootd178 1,2,3,4,5,6,7,8
179 n 28560 000190100530-01AE BCV rootd179 1,2,3,4,5,6,7,8
180 n 28560 000190100530-01AF BCV rootd180 1,2,3,4,5,6,7,8
181 n 28560 000190100530-01B0 BCV rootd181 1,2,3,4,5,6,7,8
182 n 28560 000190100530-01B1 BCV rootd182 1,2,3,4,5,6,7,8
183 n 28560 000190100530-01B2 BCV rootd183 1,2,3,4,5,6,7,8
184 n 28560 000190100530-01B3 BCV rootd184 1,2,3,4,5,6,7,8
185 n 28560 000190100530-01B4 BCV rootd185 1,2,3,4,5,6,7,8
186 n 28560 000190100530-01B5 BCV rootd186 1,2,3,4,5,6,7,8
187 n 11507 000190100530-051D EFD d187 1,2,3,4,5,6,7,8
188 n 11507 000190100530-051E EFD d188 1,2,3,4,5,6,7,8
189 n 11507 000190100530-051F EFD d189 1,2,3,4,5,6,7,8
190 n 11507 000190100530-0520 EFD d190 1,2,3,4,5,6,7,8
191 n 11507 000190100530-0521 EFD d191 1,2,3,4,5,6,7,8
192 n 11507 000190100530-0522 EFD d192 1,2,3,4,5,6,7,8
193 n 11507 000190100530-0523 EFD d193 1,2,3,4,5,6,7,8
194 n 11507 000190100530-0524 EFD d194 1,2,3,4,5,6,7,8
195 n 11507 000190100530-0525 EFD d195 1,2,3,4,5,6,7,8
196 n 11507 000190100530-0526 EFD d196 1,2,3,4,5,6,7,8
197 n 11507 000190100530-0527 EFD d197 1,2,3,4,5,6,7,8
198 n 11507 000190100530-0528 EFD d198 1,2,3,4,5,6,7,8
199 n 11507 000190100530-0529 EFD d199 1,2,3,4,5,6,7,8
200 n 11507 000190100530-052A EFD d200 1,2,3,4,5,6,7,8
201 n 11507 000190100530-052B EFD d201 1,2,3,4,5,6,7,8
202 n 11507 000190100530-052C EFD d202 1,2,3,4,5,6,7,8
203 n 11507 000190100530-052D EFD d203 1,2,3,4,5,6,7,8
204 y 11507 000190100530-052E EFD d204 1,2,3,4,5,6,7,8
Note: This is a partial listing due to the length of the output.
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #3
----------
To view information for disk d7 for a system with a VNX for block, type:
$ nas_disk -info d7
id = 7
name = d7
acl = 0
in_use = True
pool = TP1
size (MB) = 273709
type = Mixed
protection= RAID5(4+1)
stor_id = FCNTR074200038
stor_dev = 0012
volume_name = d7
storage_profiles = TP1
thin = True
tiering_policy = Auto-tier
compressed= False
mirrored = False
servers = server_2,server_3,server_4,server_5
server = server_2 addr=c0t1l2
server = server_2 addr=c32t1l2
server = server_2 addr=c16t1l2
server = server_2 addr=c48t1l2
server = server_3 addr=c0t1l2
server = server_3 addr=c32t1l2
server = server_3 addr=c16t1l2
server = server_3 addr=c48t1l2
server = server_4 addr=c0t1l2
server = server_4 addr=c32t1l2
server = server_4 addr=c16t1l2
server = server_4 addr=c48t1l2
server = server_5 addr=c0t1l2
server = server_5 addr=c32t1l2
server = server_5 addr=c16t1l2
server = server_5 addr=c48t1l2
Where:
Value            Definition
-----            ----------
id               ID of the disk (assigned automatically).
name             Name of the disk.
acl              Access control level value of the disk.
in_use           Used by any type of volume or file system.
pool             Name of the storage pool in use.
size (MB)        Total size of the disk.
type             Type of disk contingent on the system attached;
                 VNX for block disk types are CLSTD, CLATA, CMSTD,
                 CLEFD, CLSAS, CMEFD, CMATA, MIXED (indicates tiers
                 used in the pool contain multiple disk types),
                 Performance, Capacity, Extreme_performance,
                 Mirrored_mixed, Mirrored_performance,
                 Mirrored_capacity, and Mirrored_extreme_performance.
protection       The type of disk protection that has been assigned.
stor_id          ID of the system associated with the disk.
stor_dev         ID of the device associated with the disk.
volume_name      Name of the volume residing on the disk.
storage_profiles The storage profiles to which the disk belongs.
thin             Indicates whether the block system uses thin
                 provisioning. Values are: True, False.
tiering_policy   Indicates the tiering policy in effect. If the
                 initial tier and the tiering policy are the same, the
                 values are: Auto-Tier, Highest Available Tier, Lowest
                 Available Tier. If the initial tier and the tiering
                 policy are not the same, the values are: Auto-Tier/No
                 Data Movement, Highest Available Tier/No Data
                 Movement, Lowest Available Tier/No Data Movement.
compressed       For VNX for block, indicates whether data is
                 compressed. Values are: True, False, Mixed (indicates
                 some of the LUNs, but not all, are compressed).
mirrored         Indicates whether the disk is mirrored.
servers          Lists the servers that have access to this disk.
addr             Path to system (SCSI address).
EXAMPLE #4
----------
To view information for disk d205 for the system with a Symmetrix system, type:
$ nas_disk -info d205
id = 205
name = d205
acl = 0
in_use = True
pool = SG0
size (MB) = 28560
type = Mixed
protection= RAID1
symm_id = 000190100530
symm_dev = 0539
volume_name = d205
storage_profiles = SG0_000192601245
thin = True
tiering_enabled = True
compression = True
mirrored = False
servers =
server_2,server_3,server_4,server_5,server_6,server_7,server_8,server_9
server = server_2 addr=c0t14l0 FA=03A FAport=0
server = server_2 addr=c16t14l0 FA=04A FAport=0
server = server_3 addr=c0t14l0 FA=03A FAport=0
server = server_3 addr=c16t14l0 FA=04A FAport=0
server = server_4 addr=c0t14l0 FA=03A FAport=0
server = server_4 addr=c16t14l0 FA=04A FAport=0
server = server_5 addr=c0t14l0 FA=03A FAport=0
server = server_5 addr=c16t14l0 FA=04A FAport=0
server = server_6 addr=c0t14l0 FA=03A FAport=0
server = server_6 addr=c16t14l0 FA=04A FAport=0
server = server_7 addr=c0t14l0 FA=03A FAport=0
server = server_7 addr=c16t14l0 FA=04A FAport=0
server = server_8 addr=c0t14l0 FA=03A FAport=0
server = server_8 addr=c16t14l0 FA=04A FAport=0
server = server_9 addr=c0t14l0 FA=03A FAport=0
server = server_9 addr=c16t14l0 FA=04A FAport=0
Where:
Value              Definition
-----              ----------
id                 ID of the disk (assigned automatically).
name               Name of the disk.
acl                Access control level value of the disk.
in_use             Used by any type of volume or file system.
pool               Name of the storage pool in use.
size (MB)          Total size of disk.
type               Type of disk contingent on the system attached;
                   Symmetrix disk types are STD, BCV, R1BCV, R2BCV,
                   R1STD, R2STD, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA,
                   EFD, FTS, R1FTS, R2FTS, BCVF, R1BCF, R2BCF, BCVMIXED,
                   R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED. If
                   multiple disk volumes are used, the type is Mixed.
protection         The type of disk protection that has been assigned.
symm_id            ID of the Symmetrix system associated with the disk.
symm_dev           ID of the Symmetrix device associated with the disk.
volume_name        Name of the volume residing on the disk.
storage_profiles   The storage profiles to which the disk belongs.
thin               Indicates whether the system uses thin provisioning.
                   Values are: True, False, Mixed.
tiering_enabled    Indicates whether a tiering policy is being used.
compressed         For VNX with a Symmetrix backend, indicates whether
                   data is compressed. Values are: True, False, Mixed
                   (indicates some of the LUNs, but not all, are
                   compressed).
mirrored           Indicates whether the disk is mirrored.
servers            Lists the servers that have access to this disk.
addr               Path to system (SCSI address).
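The fields in the Where: table above can be pulled out of captured command output with standard text tools. The sketch below is an illustrative assumption, not part of the nas_disk tooling; the here-document stands in for real `nas_disk -info` output, abridged from EXAMPLE #4.

```shell
#!/bin/sh
# Sketch: extract selected fields from saved "nas_disk -info" output.
# The here-document below is abridged sample output, not live data.
output=$(cat <<'EOF'
id = 205
name = d205
in_use = True
symm_id = 000190100530
symm_dev = 0539
EOF
)

# Pull a single field by key, trimming whitespace around "=".
get_field() {
    printf '%s\n' "$output" | awk -F'=' -v key="$1" \
        '$1 ~ "^"key"[ \t]*$" { gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2 }'
}

echo "Symmetrix system: $(get_field symm_id)"
echo "Symmetrix device: $(get_field symm_dev)"
```

The same pattern applies to any of the documented output fields, since each appears as a `key = value` line.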
EXAMPLE #5
----------
To view information for disk d17 (an FTS device created using an eDisk
configured in external provisioning mode) on a system with a Symmetrix
backend, type:
$ nas_disk -info id=17
id = 17
name = d17
acl = 0
in_use = True
pool = user_pool
size (MB) = 17261
type = DSL
protection= TDEV
symm_id = 000197100127
symm_dev = 0002A
volume_name = d17
storage_profiles = symm_dsl
thin = True
compressed= False
mirrored = False
servers = server_2,server_3
server = server_2 addr=c0t1l9
server = server_2 addr=c16t1l9
server = server_2 addr=c32t1l9
server = server_2 addr=c48t1l9
server = server_3 addr=c0t1l9
server = server_3 addr=c16t1l9
server = server_3 addr=c32t1l9
server = server_3 addr=c48t1l9
EXAMPLE #4 provides a description of command outputs.
EXAMPLE #6
----------
To rename a disk on a system with a VNX for block backend, type:
$ nas_disk -rename d7 disk7
id = 7
name = disk7
acl = 0
in_use = True
size (MB) = 273709
type = CLSTD
protection= RAID5(4+1)
stor_id = FCNTR074200038
stor_dev = 0012
volume_name = disk7
storage_profiles = clar_r5_performance
virtually_provisioned = False
mirrored = False
servers = server_2,server_3,server_4,server_5
server = server_2 addr=c0t1l2
server = server_2 addr=c32t1l2
server = server_2 addr=c16t1l2
server = server_2 addr=c48t1l2
server = server_3 addr=c0t1l2
server = server_3 addr=c32t1l2
server = server_3 addr=c16t1l2
server = server_3 addr=c48t1l2
server = server_4 addr=c0t1l2
server = server_4 addr=c32t1l2
server = server_4 addr=c16t1l2
server = server_4 addr=c48t1l2
server = server_5 addr=c0t1l2
server = server_5 addr=c32t1l2
server = server_5 addr=c16t1l2
server = server_5 addr=c48t1l2
EXAMPLE #4 provides a description of command outputs.
EXAMPLE #7
----------
To delete a disk entry from the disk table on a system with a VNX
for block backend, type:
$ nas_disk -delete d24
id = 24
name = d24
acl = 0
in_use = False
size (MB) = 456202
type = CLATA
protection= RAID5(6+1)
stor_id = FCNTR074200038
stor_dev = 0023
storage_profiles = clarata_archive
virtually_provisioned = False
mirrored = False
servers = server_2,server_3,server_4,server_5
EXAMPLE #4 provides a description of command outputs.
------------------------------------------------------------------
Last Modified: Jan 11, 2013 3:17 pm
nas_diskmark
Queries the system, and manages and lists the SCSI device configuration.
SYNOPSIS
--------
nas_diskmark
-mark {-all|<movername>} [-discovery {y|n}] [-monitor {y|n}]
[-Force {y|n}]
| -list {-all|<movername>}
DESCRIPTION
-----------
nas_diskmark queries the available system device and tape
device configuration, saves the device configuration into the Data
Mover database, and lists SCSI devices. This command also manages the
NAS database configuration related to advanced data services on the
back-end storage system.
CAUTION
It is recommended that all Data Movers have the same device
configuration. When adding devices to the device table for a single
Data Mover only, certain actions such as standby failover are not
successful unless the standby Data Mover has the same disk device
configuration as the primary Data Mover.
The -all option executes the command for all Data Movers.
LUN migration for VNX Symmetrix systems
---------------------------------------
When a newly created LUN is detected with the same ID as a removed
device, the command may report a conflict error. After a LUN is
removed at the backend and a new LUN is created with the same ID, the
Control Station is not initially aware of the deletion. The error
occurs because the new LUN has the same storage ID and device ID as
the stale disk volume. This case applies only to a Symmetrix backend.
For example, during a LUN migration in which a Symmetrix device is
moved from a source storage group (SG) to a destination SG, the LUN ID
of the device in the source SG should be retained in the destination
SG. Otherwise, a conflict error occurs on the Control Station when
nas_diskmark runs.
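The conflict described above amounts to two disk-volume records sharing one (storage ID, device ID) pair. The sketch below is a hypothetical pre-check over a captured inventory, not part of nas_diskmark itself; the inventory lines are made up for illustration.

```shell
#!/bin/sh
# Sketch: flag (symm_id, symm_dev) pairs that appear more than once,
# i.e. a new LUN reusing the IDs of a stale disk volume.
inventory=$(cat <<'EOF'
000190100530 0539 d205
000197100127 0002A d17
000190100530 0539 d999
EOF
)

# Keep only the two ID columns, then report pairs seen more than once.
dupes=$(printf '%s\n' "$inventory" | awk '{ print $1, $2 }' | sort | uniq -d)
if [ -n "$dupes" ]; then
    echo "conflict: duplicate storage/device IDs:"
    echo "$dupes"
fi
```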
OPTIONS
-------
-mark {-all|<movername>}
Queries SCSI devices and saves them into the device table database
on the Data Mover.
Modifies VNX for block LUN names to the
VNX_<VNX-hostname>_<lun-id>_<VNX-dvol-name> format, if the
LUNs use the default Unisphere name.
CAUTION
The time taken to complete this command may be lengthy,
depending on the number and type of attached devices.
[-discovery {y|n}]
Enables or disables the storage discovery operation.
CAUTION
Disabling the -discovery option should be done only under the
direction of an EMC Customer Service Engineer.
[-monitor {y|n}]
Displays the progress of the query and discovery operations.
[-Force {y|n}]
Overrides the health check failures and changes the storage
configuration.
CAUTION
Use the -Force option only under the direction of an EMC Customer
Service Engineer, as high availability can be lost when changing
storage configuration.
-list {-all|<movername>}
Lists the SCSI devices for the specified Data Mover or all Data
Movers.
EXAMPLE #1
----------
To query SCSI devices on server_2 and display the progress of the
query operation, type:
$ nas_diskmark -mark server_2 -monitor y
Discovering storage (may take several minutes)
server_2:
chain 0 ..........
chain 16 ........
chain 32 ........
chain 48 ..........
chain 96 ..........
chain 112 ..........
Verifying disk reachability
Verifying file system reachability
Verifying local domain
Verifying disk health
Verifying gate keepers
Verifying device group
done
EXAMPLE #2
----------
To list the SCSI devices for server_2, type:
$ nas_diskmark -list server_2
server_2 : chain 0 :
chain= 0, scsi-0  stor_id= HK190807090011  VNX_id= HK1908070900110032
tid/lun= 0/0 type= disk sz= 11263 val= 1 info= DGC RAID 5 03243200000032NI
tid/lun= 0/1 type= disk sz= 11263 val= 2 info= DGC RAID 5 03243300010033NI
tid/lun= 0/2 type= disk sz= 2047 val= 3 info= DGC RAID 5 03243400020034NI
tid/lun= 0/3 type= disk sz= 2047 val= 4 info= DGC RAID 5 03243500030035NI
tid/lun= 0/4 type= disk sz= 2047 val= 5 info= DGC RAID 5 03243600040036NI
tid/lun= 0/5 type= disk sz= 32767 val= 6 info= DGC RAID 5 03243700050037NI
tid/lun= 1/0 type= disk sz= 274811 val= 7 info= DGC RAID 5 03244400100044NI
tid/lun= 1/1 type= disk sz= 274811 val= -5 info= DGC RAID 5 03244500110045NI
tid/lun= 1/2 type= disk sz= 274811 val= 8 info= DGC RAID 5 03244600120046NI
tid/lun= 1/3 type= disk sz= 274811 val= -5 info= DGC RAID 5 03244700130047NI
tid/lun= 1/4 type= disk sz= 274811 val= 9 info= DGC RAID 5 03245600140056NI
tid/lun= 1/5 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245700150057NI
tid/lun= 1/6 type= disk sz= 274811 val= 10 info= DGC RAID 5 03245800160058NI
tid/lun= 1/7 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245900170059NI
tid/lun= 1/8 type= disk sz= 274811 val= 99 info= DGC RAID 5 03245A0018005ANI
tid/lun= 1/9 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245B0019005BNI
tid/lun= 1/10 type= disk sz= 274811 val= 97 info= DGC RAID 5 03245C001A005CNI
tid/lun= 1/11 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245D001B005DNI
tid/lun= 1/12 type= disk sz= 274811 val= 13 info= DGC RAID 5 03245E001C005ENI
tid/lun= 1/13 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245F001D005FNI
tid/lun= 1/14 type= disk sz= 274811 val= 14 info= DGC RAID 5 032460001E0060NI
tid/lun= 1/15 type= disk sz= 274811 val= -5 info= DGC RAID 5 032461001F0061NI
server_2 : chain 1 :
no drives on chain
server_2 : chain 2 :
no drives on chain
server_2 : chain 3 :
no drives on chain
server_2 : chain 4 :
no drives on chain
server_2 : chain 5 :
no drives on chain
server_2 : chain 6 :
no drives on chain
server_2 : chain 7 :
no drives on chain
server_2 : chain 8 :
no drives on chain
server_2 : chain 9 :
no drives on chain
server_2 : chain 10 :
no drives on chain
server_2 : chain 11 :
no drives on chain
server_2 : chain 12 :
no drives on chain
server_2 : chain 13 :
no drives on chain
server_2 : chain 14 :
no drives on chain
server_2 : chain 15 :
no drives on chain
Note: This is a partial listing due to the length of the output.
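Captured -list output can be summarized with standard text tools. The sketch below is hypothetical, operating on an abridged capture rather than a live system, and simply counts the devices reported on a chain.

```shell
#!/bin/sh
# Sketch: count SCSI devices in captured "nas_diskmark -list" output.
# The here-document is abridged sample output, not live data.
listing=$(cat <<'EOF'
server_2 : chain 0 :
tid/lun= 0/0 type= disk sz= 11263
tid/lun= 0/1 type= disk sz= 11263
tid/lun= 1/0 type= disk sz= 274811
server_2 : chain 1 :
no drives on chain
EOF
)

# Each device appears as one "tid/lun=" line; count those lines.
count=$(printf '%s\n' "$listing" | grep -c '^tid/lun=')
echo "devices listed: $count"
```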
------------------------------------------------------
Last Modified: Feb 21, 2013 11:00 am
nas_emailuser
Manages email notifications for serious system events.
SYNOPSIS
--------
nas_emailuser
-info
| -test
| -modify
[-enabled {yes|no}]
[-to <email_addr> [,...]]
[-cc <email_addr> [,...]]
[-email_server <email_server>]
[-subject_prefix <email_subject>]
[-from <email_addr>]
| -init
DESCRIPTION
-----------
nas_emailuser enables, configures, and tests email notifications for
serious system events.
OPTIONS
-------
-info
Displays the configuration for email notifications.
-test
Generates a test event that sends a test email notification to the email
addresses configured in -to and -cc. The recipient email address must
be configured prior to testing email notification.
Note: After the -test option is run, all the configured recipients must
be asked to confirm whether they received the test email with the correct
system identification information.
-modify
Modifies one or more of the following configuration parameters:
[-enabled {yes|no}]
Enables email notification if yes is specified. The recipient email address
must be configured prior to enabling email notification. Disables email
notification if no is specified.
[-to <email_addr> [,...]]
Configures one or more recipient email addresses. The email
addresses are comma-separated, enclosed in single-quotes, and
follow the mailbox@fully_qualified_domain_name format. For example,
storage_admin@yourcompany.com, backup_admin@yourcompany.com.
Refer to the following email address format guidelines when
configuring email addresses. An email address can contain:
* A maximum of 63 characters; the field can contain a maximum
of 255 characters
* ASCII characters: a through z, A through Z, 0 through 9, ! #
$ % & * + - / = ? ^ _ ` { | } ~ are allowed; a period, if it is not
the first or last character in the mailbox
* Alphanumeric strings
* Single quotes, if they are escaped in the format:
- your\'email@yourcompany.com
- 'first'\''email@yourcompany.com,second'\''email@yourcompany.com'
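A script that prepares recipient lists could pre-screen addresses against the documented length limits. The sketch below is a rough, partial check offered as an illustration, and is not the validator that eNAS itself applies.

```shell
#!/bin/sh
# Sketch: partial validation of one recipient address against the
# documented limits (mailbox portion <= 63 characters, whole field
# <= 255 characters, mailbox@fully_qualified_domain_name shape).
valid_email() {
    addr=$1
    mailbox=${addr%%@*}           # text before the first "@"
    [ ${#addr} -le 255 ] || return 1
    [ ${#mailbox} -le 63 ] || return 1
    case $addr in
        *@*.*) return 0 ;;        # require a domain with at least one dot
        *)     return 1 ;;
    esac
}

valid_email 'storage_admin@yourcompany.com' && echo valid
```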
[-cc <email_addr> [,...]]
Configures a list of carbon-copy recipients. The email addresses
are comma-separated, enclosed in single quotes, and follow the
mailbox@fully_qualified_domain_name format. For example,
'storage_admin@yourcompany.com'. For the email address
character set and format guidelines, refer to the -to option.
[-email_server <email_server>]
Configures the email server that accepts and routes the email
notifications. <email_server> specifies an IP address or the fully
qualified domain name, which can have 1 to 63 characters. The IP
addresses 0.0.0.0 and 255.255.255.255 are not allowed.
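As a pre-flight check before running -modify, a script could screen the server value against the two disallowed addresses. The helper name below is an assumption for illustration, not part of nas_emailuser.

```shell
#!/bin/sh
# Sketch: reject the two reserved addresses the -email_server option
# disallows. Hypothetical pre-check only; it does not configure anything.
check_email_server() {
    case $1 in
        0.0.0.0|255.255.255.255) echo "rejected: $1"; return 1 ;;
        *)                       echo "accepted: $1"; return 0 ;;
    esac
}

check_email_server 0.0.0.0
check_email_server 10.6.50.122
```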
[-subject_prefix <email_subject>]
Specifies the email subject prefix. The subject prefix for the email
notification can be from 1 to 63 characters long, is enclosed in
quotes, and should contain printable ASCII characters. You can
customize the subject prefix for specific needs like email filtering.
The default subject is "System Notification."
[-from <email_addr>]
Configures the sender's email address. If the sender's email
address is not specified, a default email address of the format
root@<hostname> is configured. The email address follows the
mailbox@fully_qualified_domain_name format. For example,
'storage_admin@yourcompany.com'. For the email address
character set and format guidelines, refer to the -to option.
-init
Initializes the default state; displays a status message if the
feature has already been initialized. The -init option must be used
only when directed.
SEE ALSO
--------
Configuring Events and Notifications on VNX for File.
EXAMPLE #1
----------
To configure email notifications using email server 10.6.50.122 from
administrator to support, while copying engineering and documentation, type:
$ nas_emailuser -modify -to
szg30@fire2.hosts.pvt.dns,support1@nasdocs.emc.com,documentation@nasdocs.emc.com
OK
EXAMPLE #2
----------
To display information on email notifications, type:
$ nas_emailuser -info
Service Enabled         = Yes
Recipient Address(es)   =
szg30@fire2.hosts.pvt.dns,support1@nasdocs.emc.com,documentation@nasdocs.emc.com
Carbon copy Address(es) =
Email Server            = 10.241.168.23
Subject Prefix          = System Notification
Sender Address          =
EXAMPLE #3
----------
To test email notifications, type:
$ nas_emailuser -test
OK
EXAMPLE #4
----------
To disable email notification, type:
$ nas_emailuser -modify -enabled no
OK
----------------------------------------
Last Modified: May 14, 2012 1:00 pm
nas_environment
Reports the inlet air temperatures and input power to the user.
SYNOPSIS
--------
nas_environment -info
{
-system [-present|-average]
| -dme [enclosure_id] [-intemp [f|c]|-power] [-present]|[-average]
| -array [-present|-average]
| -shelf {<shelf_id>|-all} [-intemp [f|c]|-power] [-present|-average]
| -battery [a|b] [-present|-average]
| -spe [-intemp [f|c]|-power] [-present|-average]
| -all
}
DESCRIPTION
-----------
nas_environment -info displays the inlet air temperatures of the
data mover enclosures and disk array enclosures, and the input power of
the data mover enclosures, disk array enclosures, and standby power
supplies, through the CLI and the Unisphere GUI.
OPTIONS
-------
-system
Displays the present or average input power information of the
system, which includes file and block on VNX systems, and file only
on gateway systems.
-present
Displays the current value, which is a sum of the present
input power for all supported systems. The current value is
computed as the 30 second average of the power consumption
sampled every three seconds.
-average
Displays the average value. It requires an hour to calculate
the correct value. N/A is displayed if there is less than one
hour worth of data. The average value is computed as the 60
minute rolling average of the present power consumption values.
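The sampling scheme above can be reproduced offline: the present value is the mean of ten samples taken 3 seconds apart over a 30-second window. A minimal sketch with made-up wattage samples:

```shell
#!/bin/sh
# Sketch: compute a 30-second "present" power value as the mean of ten
# 3-second samples, mirroring the documented scheme. The sample
# wattages below are invented for illustration.
samples="150 152 149 151 150 148 150 151 150 149"

# One sample per line, then average with awk.
present=$(printf '%s\n' $samples | awk '{ sum += $1 } END { printf "%d", sum / NR }')
echo "Present (watts) = $present"
```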
-dme
Displays the present or average inlet air temperature and/or input
power information on a specified data mover enclosure. If a specific
enclosure_id is not specified, all data mover enclosure information
is displayed.
enclosure_id
Specifies a data mover enclosure id on which to display
information.
-intemp [f|c]
Displays the inlet air temperature information. The f flag
indicates Fahrenheit. The default value or c flag indicates
Celsius.
-power
Displays the input power information.
-present
Displays the current value.
-average
Displays the average value. It requires an hour to
calculate the correct value. N/A is displayed if
there is less than one hour worth of data.
-array
Displays the present or average input power information on the array.
-present
Displays the current value.
-average
Displays the average value. It requires an hour to calculate
the correct value. N/A is displayed if there is less than one
hour worth of data.
-shelf
Accepts a value that selects a disk array enclosure. It displays the
present and average inlet air temperature and input power information
for the specified disk array enclosure. If a specific shelf_id is not
specified, information for all disk array enclosures is displayed.
<shelf_id>
Specifies a disk array enclosure id on which to display information.
-intemp f|c
Displays the inlet air temperature information. The f flag indicates
Fahrenheit. The default value or c flag indicates Celsius.
-power
Displays the input power information.
-present
Displays the current value.
-average
Displays the average value. It requires an hour to calculate
the correct value. N/A is displayed if there is less than one
hour worth of data.
-battery
Displays the present and average input power information for a specified
standby power supply. If neither a nor b is specified, the information
is displayed for both standby power supplies.
a
Specifies standby power supply A on which to display information.
b
Specifies standby power supply B on which to display information.
-present
Displays the current value.
-average
Displays the average value. It requires an hour to calculate the
correct value. N/A is displayed if there is not one hour worth of
data.
-spe
Displays the present and average inlet air temperature and input power
information for the storage processor enclosure.
-intemp [f|c]
Displays the inlet air temperature information. The f flag indicates
Fahrenheit. The default value or c flag indicates Celsius.
-power
Displays the input power information.
-present
Displays the current value.
-average
Displays the average value. It requires an hour to calculate
the correct value. N/A is displayed if there is less than one
hour worth of data.
-all
Displays the following:
* System input power
* Data mover enclosure inlet air temperatures and input power
* Array input power
* Disk array enclosure inlet air temperatures and input power
* Storage processor enclosure inlet air temperatures and input power
* Standby power supply input power
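Output in this form lends itself to scripted health checks. The sketch below is hypothetical, operating on an abridged capture of -all output rather than a live system, and lists components whose power status is not OK.

```shell
#!/bin/sh
# Sketch: scan captured "nas_environment -info -all" output and report
# components with a power status other than OK. Abridged sample data.
report=$(cat <<'EOF'
Component = DME 0 Data Mover 0
Power Status = OK
Component = DME 0 Data Mover 1
Power Status = Error 13690667102: Not Present
Component = Shelf 0/1
Power Status = Error 13690667102: Invalid
EOF
)

# Remember the last "Component" line; print it when its status is Error.
errors=$(printf '%s\n' "$report" | awk '
    /^Component/ { comp = $0; sub(/^Component = /, "", comp) }
    /^Power Status = Error/ { print comp }')
echo "$errors"
```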
EXAMPLE #1
----------
To view the present and average input power information for file and block
on VNX systems, or file only on gateway systems, type:
$ nas_environment -info -system
System = Celerra ns 600 APM 000237001650000
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
EXAMPLE #2
----------
To view the average inlet air temperature on data mover enclosure 1 in
degrees Fahrenheit, type:
$ nas_environment -info -dme 1 -intemp f -average
Component = DME 0 Data Mover 1
Temperature Status = OK
Rolling average (degrees) = 53F
EXAMPLE #3
----------
To view the average inlet air temperature on data mover enclosure 1 in
degrees Celsius, type:
$ nas_environment -info -dme 1 -intemp c -average
Data Mover Enclosure 1
Status: Valid
Inlet Air Temperature
Rolling average (degrees Celsius): 11.3
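The two temperature scales reported by -intemp are related by F = C × 9/5 + 32. A quick sketch converting a Celsius reading such as the one above:

```shell
#!/bin/sh
# Sketch: convert a Celsius inlet-temperature reading to Fahrenheit
# using F = C * 9/5 + 32.
c=11.3
f=$(awk -v c="$c" 'BEGIN { printf "%.1f", c * 9 / 5 + 32 }')
echo "${c}C = ${f}F"
```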
EXAMPLE #4
----------
To view the present system information, type:
$ nas_environment -info -system -present
System = Celerra ns 600 APM 000237001650000
Power Status = OK
Present (watts) = 150
EXAMPLE #5
----------
To view the array information (input power and inlet temperature), type:
$ nas_environment -info -array
Component = CLARiiON CX600 APM0023700165
Power Status = OK
Present (watts) = 230
Rolling average (watts) = 245
EXAMPLE #6
----------
To view the present and average inlet air temperature on all shelves,
type:
$ nas_environment -info -shelf -all
Component = Shelf 0/0 Shelf 0/0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 0/1 Shelf 0/1
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 1/0 Shelf 1/0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 1/1 Shelf 1/1
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
EXAMPLE #7
----------
To view the average input power and inlet air temperature on shelf 1
enclosure 1, type:
$ nas_environment -info -shelf 1/1 -average
Component = Shelf 1/1 Shelf 1/1
89
Power Status = OK
Rolling average (watts) = 150
Temperature Status = OK
Rolling average (degrees) = 11C
EXAMPLE #8
----------
To view the present and average inlet air temperature on all SPEs, type:
$ nas_environment -info -spe
Component = SPE 0 SPE 0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
EXAMPLE #9
----------
To view the present information for all batteries, type:
$ nas_environment -info -battery
Component = Shelf 0/0 SP A
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Component = Shelf 0/0 SP B
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
EXAMPLE #10
-----------
To view the average information for battery A, type:
$ nas_environment -info -battery a -average
Component = Shelf 0/0 SP A
Power Status = OK
Rolling average (watts) = 150
EXAMPLE #11
-----------
To view all the components, type:
$ nas_environment -info -all
Component = Celerra ns600 APM000237001650000
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Component = DME 0 Data Mover 0
Power Status = OK
Present (watts) = 200
Rolling average (watts) = 333
Temperature Status = OK
Present (degrees) = 12C
Rolling Average (degrees) = 11C
Component = DME 0 Data Mover 1
Power Status = Error 13690667102: Not Present
Present (watts) = N/A
Rolling average (watts) = N/A
Temperature Status = Error 13690667102: Unsupported
Present (degrees) = N/A
Rolling average (degrees) = N/A
Component = DME 0 Data Mover 2
Power Status = Error 13690667102: Uninitialized
Present (watts) = 150
Average (watts) = N/A
Temperature Status = Error 13690667102: Uninitialized
Present (degrees) = 12C
Average (degrees) = N/A
Component = DME 0 Data Mover 3
Power Status = Error 13690667102: Failed
Present (watts) = 150
Average (watts) = N/A
Temperature Status = Error 13690667102: Failed
Present (degrees) = 12C
Average (degrees) = N/A
Component = Shelf 0/0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 0/1
Power Status = Error 13690667102: Invalid
Present (watts) = N/A
Rolling average (watts) = N/A
Temperature Status = Error 13690667102: Invalid
Present (degrees) = N/A
Rolling average (degrees) = N/A
Component = Shelf 1/0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 1/1
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = CLARiiON CX600 APM00023700165
Power Status = OK
Present (watts) = 230
Rolling average (watts) = 245
Component = SPE 0 SPE 0
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Temperature Status = OK
Present (degrees) = 12C
Rolling average (degrees) = 11C
Component = Shelf 0/0 SP A
Power Status = OK
Present (watts) = 150
Rolling average (watts) = 150
Component = Shelf 0/0 SP B
Power Status = Error 13690667102: Not Present
Present (watts) = N/A
Rolling average (watts) = N/A
------------------------------------------------------------------------
Last Modified: May 5 2011, 19:30
nas_event
Provides a user interface to system-wide events.
SYNOPSIS
--------
nas_event
-Load {-info|<file_name>}
| -Unload <file_name>
| -list
-action {-info|{trap|logfile|mail|callhome|exec|udprpc|tcprpc|terminate}
[-component {<component>|<id>}][-facility {<facility>|<id>}]
[-severity {<severity>|<id>}]]|[-id]}
|-component {-info|{<component>|<id>} [-facility {<facility>|<id>}]|[-id]}
|-severity {-info|<severity>|<id>} [-component {<component>|<id>}
[-facility {<facility>|<id>}]]|[-id]
|-keyword <keyword> [-component {<component>|<id>}
[-facility {<facility>|<id>}]][-severity {<severity>|<id>}]]|[-id]
DESCRIPTION
-----------
nas_event provides a user interface to system-wide events within
the VNX. The VNX includes a default event configuration file that
contains a mapping of facilities that generate events, and the
associated actions triggered by those events.
To list the default configuration files, type:
$ nas_event -Load -info
Using a text editor, a new event configuration file can be created and
loaded into the database to add an event, or change an action.
VNX facilities generate events that trigger specific actions. An event
consists of:
* An assigned ID for the event and the facility issuing the event
* The high water mark for the severity of the event
* A description of the event
* The system-defined action to take when the event occurs
CAUTION
The callhome events in the system are carefully reviewed and
configured to provide the right level of support. Do not add, delete,
or modify lines that specify the callhome action in the event
configuration files in the /nas/sys directory. User-defined event
configuration files should not use the callhome action.
OPTIONS
-------
-Load {-info|<file_name>}
Loads the event configuration file <file_name> into the system. The
-info option displays the currently loaded configuration files.
Loads the event configuration file <file_name> into the system. The
-info option displays the currently loaded configuration files.
-Unload <file_name>
Unloads the configuration file <file_name> from the system.
CAUTION
The /nas/sys/nas_eventlog.cfg configuration file must not be deleted,
as this can lead to data loss. Unloading or modifying configuration
files that are loaded by default can affect logging, alerts,
notifications, and system operations.
-list
The -list option displays components and facilities that generate
events, and the actions that are taken as a result. You can search for an
event, facility, or action by using a keyword. Component, facility, and
severity can be specified by either the text name or ID. The output is
displayed with parameter names in the form
${paraname,typeIndicator,fmtStr}.
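A monitoring script might substitute real values into these parameter placeholders before forwarding a message. The sketch below is an illustrative assumption: the value 42 and the sed pattern are invented, and this is not how eNAS itself expands events.

```shell
#!/bin/sh
# Sketch: substitute a value into an event description template of the
# form ${paraname,typeIndicator,fmtStr}, as printed by "nas_event -list".
template='Only ${freeblocks,3,%llu} free blocks in the root file system'

# Replace the freeblocks placeholder with an example value (42).
expanded=$(printf '%s\n' "$template" | sed -E 's/\$\{freeblocks,[0-9]+,%[a-z]+\}/42/')
echo "$expanded"
```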
-action
{-info|{trap|logfile|mail|callhome|exec|udprpc|tcprpc|terminate}
With the -info option, lists all the possible actions associated with
events. If one of the actions trap, logfile, mail, callhome, exec,
udprpc, tcprpc, or terminate is specified, lists the possible events
that trigger the specified action. These events are categorized by
component and facility:
[-component {<component>|<id>}][-facility {<facility>|<id>}]
Lists the possible events in the specified component that
trigger the given action. If facility is specified, lists the events
in the specified component and facility that trigger the given
action.
[-severity {<severity>|<id>}]
Lists the possible events with the specified severity that
trigger the given action.
[-id]
Lists the output with the MessageID number in addition to
BaseID, Severity, and Brief_Description.
-component {-info|{<component>|<id>}[-facility{<facility>
|<id>}]
With the -info option, lists the ids and names of all the
components. If the component is specified, lists the ids and names
of all the facilities under that component. Specifying facility lists
the events that can be generated by the specified facility and
component.
[-id]
Lists the output with the message ID number in addition to
BaseID and Brief_Description.
-severity {-info|<severity>|<id>}
With the -info option, lists the severity levels. If severity is
specified, lists the events with the specified severity level.
[-component {<component>|<id>} [-facility <facility>|<id>]
Lists the events filtered by the given severity and component.
If facility is specified lists the events further filtered by
the given facility.
[-id]
Lists the output with the MessageID number in addition to
BaseID, Severity, and Brief_Description.
Note: To receive email notifications sent to multiple recipients,
specify the email addresses within the quotes and separate them with
a comma.
-keyword <keyword>
Lists all events that match the specified keyword.
[-component {<component>|<id>}][-facility{<facility> |<id>}]
Lists events filtered by the specified keyword and component.
If facility is specified, lists the events further filtered by the
given facility.
[-severity {<severity>|<id>}]
Lists events filtered by the specified severity.
[-id]
Lists the output with the MessageID number in addition to
BaseID, Severity, and Brief_Description.
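The filtered listings these options produce can be post-processed further in a script. The sketch below is hypothetical, operating on an abridged captured listing, and pulls the BaseIDs of CRITICAL events.

```shell
#!/bin/sh
# Sketch: post-filter a captured "nas_event -list" table for CRITICAL
# entries. The here-document is abridged sample output, not live data.
listing=$(cat <<'EOF'
BaseID Severity Brief_Description
1 CRITICAL(2) EPP failed to initialize.
3 CRITICAL(2) Failed to create thread.
5 INFO(6) The SYR file is attached.
EOF
)

# Collect the BaseID (column 1) of rows whose severity starts CRITICAL.
critical=$(printf '%s\n' "$listing" | awk '
    $2 ~ /^CRITICAL/ { ids = ids $1 " " }
    END { sub(/ $/, "", ids); print ids }')
echo "critical BaseIDs: $critical"
```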
SEE ALSO
--------
Configuring Events and Notifications on VNX for File.
EXAMPLE #1
----------
After using a text editor to create an event configuration file, to
load the new configuration file into the NAS database, type:
$ nas_event -Load /nas/site/new_eventlog.cfg
EventLog : will load /nas/site/new_eventlog.cfg...done
EXAMPLE #2
----------
To verify that the configuration file was loaded, type:
$ nas_event -Load -info
Loaded config. files:
1: /nas/sys/nas_eventlog.cfg
2: /nas/http/webui/etc/web_client_eventlog.cfg
3: /nas/site/new_eventlog.cfg
EXAMPLE #3
----------
To list actions, type:
$ nas_event -list -action -info
action
terminate
trap
exec
mail
callhome
logfile
EXAMPLE #4
----------
To list the events that trigger the mail action, type:
$ nas_event -list -action mail
CS_PLATFORM(6)
|--> EventLog(130)
BaseID Severity Brief_Description
50 EMERGENCY(0) ${text,8,%s}
51 ALERT(1) ${text,8,%s}
52 CRITICAL(2) ${text,8,%s}
EXAMPLE #5
----------
To list the components, type:
$ nas_event -list -component -info
Id  Component
1   DART
2   CS_CORE
5   XML_API
6   CS_PLATFORM
EXAMPLE #6
----------
To list the facilities under the component DART, type:
$ nas_event -list -component DART -info
DART(1)
|->Id Facility
24 ADMIN
26 CAM
27 CFS
36 DRIVERS
40 FSTOOLS
43 IP
45 KERNEL
46 LIP
51 NDMP
52 NFS
54 SECURITY
56 SMB
58 STORAGE
64 UFS
68 LOCK
70 SVFS
72 XLT
73 NETLIB
75 MGFS
77 VRPL
78 LDAP
81 VC
83 RCPD
84 VMCAST
86 CHAMII
93 USRMAP
101 ACLUPD
102 FCP
108 REP
111 DPSVC
115 SECMAP
117 WINS
118 DNS
122 DBMS
144 PERFSTATS
146 CEPP
148 DEDUPE
EXAMPLE #7
----------
To list the events generated by DART in the facility with the ID
146, type:
$ nas_event -list -component DART -facility 146
DART(1)
|--> CEPP(146)
BaseID Severity Brief_Description
1 NOTICE(5) CEPP server ${ipaddr,8,%s} of pool ${pool,8,%s} is
${status,8,%s}. Vendor ${vendor,8,%s}, ntStatus
0x${ntstatus,2,%x}.
2 ERROR(3) Error on CEPP server ${ipaddr,8,%s} of pool
${pool,8,%s}: ${status,8,%s}. Vendor ${vendor,8,%s},
ntStatus 0x${ntstatus,2,%x}.
3 NOTICE(5) The CEPP facility is started.
4 NOTICE(5) The CEPP facility is stopped.
EXAMPLE #8
----------
To list events with severity 4 generated by component CS_CORE and
facility DBMS, and to display the MessageID in the output, type:
$ nas_event -list -severity 4 -component CS_CORE -facility DBMS -id
CS_CORE(2)
|--> DBMS(122)
MessageID BaseID Brief_Description
86444212226 2 Db: Compact${compact_option,8,%s}: ${db_name,8,%s}:
Failed: ${db_status,8,%s}.
86444212227 3 Db Env: ${db_env,8,%s}: Log Remove: Failed:
${db_status,8,%s}.
EXAMPLE #9
----------
To list events filtered by the keyword freeblocks, type:
$ nas_event -list -keyword freeblocks
DART(1)
|--> DBMS(122)
BaseID Severity Brief_Description
2 CRITICAL(2) Only ${freeblocks,3,%llu} free blocks in the root
file system (fsid ${fsid,2,%u}) of the VDM
${vdm,8,%s}.
3 ALERT(1) The root file system (fsid ${fsid,2,%u}) of the
VDM ${vdm,8,%s} is full. There are only
${freeblocks,3,%llu} free blocks.
EXAMPLE #10
-----------
To list events with the keyword data generated in DART with the
severity level 6, type:
$ nas_event -list -keyword data -component DART -severity 6
DART(1)
|--> USRMAP(93)
BaseID Severity Brief_Description
1 INFO(6) The Usermapper database has been created.
4 INFO(6) The Usermapper database has been destroyed.
8 INFO(6) The migration of the Usermapper database to the
VNX version 5.6 format has
started.
9 INFO(6) The Usermapper database has been successfully
migrated.
DART(1)
|--> SECMAP(115)
BaseID Severity Brief_Description
1 INFO(6) The migration of the secmap database to the VNX version
5.6 format has started.
2 INFO(6) The secmap database has been successfully migrated.
EXAMPLE #11
----------
To unload the event configuration file, type:
$ nas_event -Unload /nas/site/new_eventlog.cfg
EventLog : will unload /nas/site/new_eventlog.cfg... done
EXAMPLE #12
----------
To receive email notifications that are sent to multiple recipients,
add the following line to your /nas/sys/eventlog.cfg file:
disposition severity=0-3, mail "nasadmin@nasdocs.emc.com,
helpdesk@nasdocs.emc.com"
EXAMPLE #13
----------
To list the events that trigger a particular trap action, type:
$ nas_event -l -a trap | more
CS_PLATFORM(6)
|--> BoxMonitor(131)
BaseID Severity Brief_Description
1 CRITICAL(2) EPP failed to initialize.
3 CRITICAL(2) Failed to create ${threadname,8,%s} thread.
4 CRITICAL(2) SIB Read failure: ${string,8,%s}
..
CS_PLATFORM(6)
|--> SYR(143)
BaseID Severity Brief_Description
5 INFO(6) The SYR file ${src_file_path,8,%s} with
${dest_extension,8,%s} extension is attached.
------------------------------------------------------------
Last modified: May 14, 2012 1:35 pm
nas_fs
Manages local file systems for the VNX.
SYNOPSIS
--------
nas_fs
-list [-all]
| -delete <fs_name> [-option <options>][-Force]
| -info [-size] {-all|<fs_name>|id=<fs_id>} [-Ads] [-option <options>]
| -rename <old_name> <new_name> [-Force]
| -size <fs_name>
| -acl <acl_value> <fs_name>
| -translate <fs_name> -access_policy start
-to {MIXED} -from {NT|NATIVE|UNIX|SECURE}
| -translate <fs_name> -access_policy status
| -xtend <fs_name> {<volume_name>|size=<integer>[T|G|M|%][pool=<pool>]
[storage=<system_name>]} [-option <options>]
| -modify <fs_name> -auto_extend {no|yes [-thin {no|yes}]}
[-hwm <50-99>%][-max_size <integer>[T|G|M]]
| -modify <fs_name> -worm [-default_retention {<integer>{Y|M|D}|infinite}]
[-min_retention {<integer>{Y|M|D}|infinite}]
[-max_retention {<integer>{Y|M|D}|infinite}]
| -modify <fs_name> -worm [-auto_lock {enable [-policy_interval
<integer>{M|D|H}]|disable}]
[-auto_delete {enable|disable}]
[-policy_interval <integer>{M|D|H}]
| -modify <fs_name> -worm -reset_epoch <year>
| -Type <type> <fs_name> -Force
| [-name <name>][-type <type>] -create <volume_name>
[samesize=<fs_name>[:cel=<cel_name>]]
[worm={enterprise|compliance|off}]
[-default_retention {<integer>{Y|M|D} |infinite}] [-min_retention
{<integer>{Y|M|D}|infinite}]
[-max_retention {<integer>{Y|M|D}|infinite}]
[log_type={common|split}][fast_clone_level={1|2}] [-option <options>]
| [-name <name>][-type <type>] -create {size=<integer>[T|G|M]
| samesize=<fs_name>[:cel=<cel_name>]}
pool=<pool> [storage=<system_name>][worm={enterprise|compliance|off}]
[-default_retention {<integer>{Y|M|D}|infinite}]
[-min_retention {<integer>{Y|M|D}|infinite}]
[-max_retention {<integer>{Y|M|D}|infinite}]
[log_type={common|split}][fast_clone_level={1|2}]
[-auto_extend {no|yes} [-thin {no|yes}]
[-hwm <50-99>%][-max_size <integer>[T|G|M]]]
[-option <options>]
| [-name <name>] -type nmfs -create
DESCRIPTION
-----------
nas_fs creates, deletes, extends, and lists file systems. nas_fs displays
the attributes of a file system, translates the access policy, enables
automatic file system extension and thin provisioning capabilities, manages
retention periods, enables automatic file locking and automatic file deletion,
and manages access control level values.
OPTIONS
-------
-list [-all]
Displays a list of file systems and their attributes such as the name,
ID, usage, type, access control level setting, the residing volume, and
the server. The -all option displays all file systems including
system-generated internal file systems. For example, Replicator
internal checkpoints.
Note: The ID is an integer and is assigned automatically, but not always
sequentially, depending on ID availability. The name of a file system might be
truncated if it is more than 19 characters. To display the full file system
name, use the -info option with a file system ID.
The file system types are:
1=uxfs (default)
5=rawfs (unformatted file system)
6=mirrorfs (mirrored file system)
7=ckpt (checkpoint)
8=mgfs (migration file system)
100=group file system
102=nmfs (nested mount file system)
Note: The file system types uxfs, mgfs, nmfs, and rawfs are created by using
nas_fs. Other file system types are created either automatically or with their
specific commands.
-delete <fs_name>
Deletes the file system specified by file system name or ID. A file system
cannot be deleted when it is mounted or part of a group.
[-option <options>]
Specifies the following comma-separated options:
volume
Deletes the file system’s underlying volume structure.
Note: If a checkpoint is created with a volume that has been specified
by size, the underlying volume is deleted when the checkpoint is deleted.
If a file system using a storage pool is deleted, the underlying volume
structure is also deleted.
[-Force]
Forces the deletion of a file system with SnapSure checkpoints
known as the PFS, when a task scheduler such as an automated
scheduler for SnapSure is running or is enabled.
-info [-size] [-Ads] {-all|<fs_name>|id=<fs_id>}
Displays the attributes of a single file system, or all file systems,
including the configuration of associated disks and replication
sessions that are stopped or configured on the file system. If a file system
is mounted, data is reported from the NAS database and the
Data Mover. If a file system is unmounted, data is reported from the
NAS database only.
The -size option also displays the total size of the file system and the
block count in megabytes.
The -Ads option displays the advanced data service properties of the file
system.
[-option <options>]
Specifies the following comma-separated options:
mpd
Displays the current directory type and translation status for the
specified Multi-Protocol Directory (MPD) file system.
-rename <old_name> <new_name>
Changes the file system name from <old_name> to <new_name>.
[-Force]
Forces the rename of the file system with SnapSure checkpoints
known as the PFS.
-size <fs_name>
Displays the total size of the file system and the block count in
megabytes. The total size reported depends on whether the file system
is mounted or unmounted.
-acl <acl_value> <fs_name>
Sets an access control level value that defines the owner of a file system,
and the level of access allowed for users and groups defined in the access
control level table. The nas_acl command provides more information.
-translate <fs_name> -access_policy start -to {MIXED}
-from {NT|NATIVE|UNIX|SECURE}
Synchronizes the UNIX and Windows permissions on the specified
file system. Prior to executing the -translate option, use
server_mount to mount the specified file system with the MIXED
access-checking policy. The <fs_name> must be a uxfs file system
type mounted as read/write.
The policy specified in the -from option instructs the VNX about
which operating system (UNIX or Windows) to derive permissions
from, when migrating to the MIXED or MIXED_COMPAT
access-checking policy (set with server_mount). For example, if you
type UNIX in the -from option, all ACLs are regenerated from the
UNIX mode bits. The policy typed in the -from option does not relate
to the policy previously used by the file system object.
-translate <fs_name> -access_policy status
Prints the status of the access policy translation for the specified
file system.
-xtend <fs_name> <volume_name>
Adds the specified volume to the mounted file system.
-xtend <fs_name> size=<integer>[T|G|M|%]
Adds the volume as specified by its desired size to the file system or
checkpoint. Type an integer within the range of 1 to 1024, then specify
T for terabytes, G for gigabytes (default), M for megabytes, or type an
integer representing the percentage of a file system’s size followed by
the percent sign. The extended volume added to the file system by
the system will have a size equal to or greater than the total size
specified.
Caution: When executing this command, perform extensions
incrementally, using like volumes, to reduce the time the operation takes.
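The size arithmetic described above can be sketched as a standalone helper (hypothetical, not part of nas_fs): it converts a size=<integer>[T|G|M] argument to megabytes, with gigabytes as the default unit. The percent form, which is relative to the current file system size, is not handled here.

```shell
# Hypothetical helper illustrating the size units the -xtend option accepts.
to_megabytes() {
  local spec=$1 num unit
  num=${spec%[TGM]}          # strip a trailing T, G, or M
  unit=${spec#"$num"}
  case $unit in
    T)    echo $(( num * 1024 * 1024 )) ;;
    M)    echo "$num" ;;
    G|"") echo $(( num * 1024 )) ;;   # G is the default when no unit is given
  esac
}
to_megabytes 2T    # 2097152 (2 terabytes expressed in megabytes)
to_megabytes 10    # 10240 (a bare integer defaults to gigabytes)
```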
[pool=<pool>]
Applies the specified storage pool rule set to the volume that
has been added to the mounted file system.
Note: The storage pool is a rule set that contains automatically
created volumes and defines the type of disk volumes used and how
they are aggregated.
[storage=<system_name>]
Specifies the storage system on which the checkpoint resides.
If a storage system is not specified, the default storage system
is the one on which the file system resides. If the file system
spans multiple storage systems, the default is to use all the
storage systems on which the file system resides. Use
nas_storage -list to obtain attached storage system names.
[-option <options>]
Specifies the following comma-separated options:
slice={y|n}
Specifies whether the disk volumes used by the file system may
be shared with other file systems that use a slice. The slice=y
option allows the file system to share disk volumes with other file systems.
The slice=n option gives the new filesystem exclusive
access to the disk volumes it uses, and is relevant when using
TimeFinder/FS.
When symm_std, symm_std_rdf_src, symm_ata,
symm_ata_rdf_src, symm_ata_rdf_tgt, symm_std_rdf_tgt,
symm_fts, symm_fts_rdf_tgt, symm_dsl, and symm_fts_rdf_src pools are
specified, the default is not to slice the volumes, which is overridden
with slice=y. For symm_efd, the default is slice=y, because TimeFinder/FS
is not supported with Flash (EFD) disk types.
When clar_r1, clar_r5_performance, clar_r5_economy, clar_r6,
clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3,
cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6,
clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6,
cmsas_r10, and cmefd_r5 pools are specified, the default for
standard AVM pools is to slice the volumes (slice=y), which is
overridden by using slice=n. The default for mapped pools is not
to slice the volumes (slice=n). Use nas_pool to change the default
slice option.
-modify <fs_name> -auto_extend {no|yes [-thin {yes|no}]}
[-hwm <50-99>%][-max_size <integer>[T|G|M]]
For an AVM file system, turns automatic file system extension and
thin provisioning on or off, and sets a high water mark and
maximum size for the file system. When file system extension is
turned on, the file system is automatically extended up to the
maximum size specified when the high water mark is reached. The
default for -auto_extend is no.
Thin provisioning reports the maximum file system size to the CIFS,
NFS, and FTP users, even if the actual size of the file system is
smaller. If thin provisioning is disabled, the true file system size and
maximum file system sizes are reported to the system administrator.
Thin provisioning requires that a maximum file system size also be
set. If a file system is created by using a virtual storage pool that
contains:
* Only thick LUNs, use the nas_fs command’s -thin option to enable thin
provisioning on the file system.
* Only thin LUNs, using the nas_fs command’s -thin option is not recommended.
It is redundant, but allowed, for a thin file system to be built on thin LUNs.
* Both thick and thin LUNs, the file system may be built on either thick LUNs,
thin LUNs, or both thick and thin LUNs. Using the nas_fs command’s -thin
option may be redundant if the file system uses thin LUNs.
Automatic file system extension cannot be used for any file system
that is part of an RDF configuration (for example, file systems on
Data Movers configured with an RDF standby). Do not use the nas_fs
command with the -auto_extend option for file systems associated
with RDF configurations.
[-hwm <50-99>%]
Specifies the size threshold that must be reached before the file system
is automatically extended. Type an integer within the range of 50 to 99
to represent the percentage of file system usage. The default is 90.
[-max_size <integer> [T|G|M]]
Sets the maximum file system size to which a file system can be
extended. Type an integer and specify T for terabytes, G for
gigabytes (default), or M for megabytes. If the -max_size option is
not specified, then it defaults to the maximum limit of the file system size
which is 16 terabytes.
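The high water mark rule above can be pictured with a small sketch (illustrative only; nas_fs performs this check internally, and the helper name is hypothetical):

```shell
# Returns success when usage has reached the high water mark (default 90%).
hwm_reached() {
  local used_mb=$1 size_mb=$2 hwm=${3:-90}
  [ $(( used_mb * 100 / size_mb )) -ge "$hwm" ]
}
hwm_reached 92 100 && echo "extend now"       # 92% >= 90%
hwm_reached 80 100 || echo "below the mark"   # 80% < 90%
```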
-modify <fs_name> -worm [-default_retention {<integer>{Y|M|D}|infinite}]
[-min_retention {<integer>{Y|M|D}|infinite}]
[-max_retention {<integer>{Y|M|D}|infinite}]
For an FLR-enabled file system, manages retention periods.
[-default_retention {<integer>{Y|M|D}|infinite}]
Sets a default retention period that is used in an FLR-enabled
file system when a file is locked and a retention period is not
specified. This value must be greater than or equal to the -min_retention
option, and less than or equal to the -max_retention option. Type an
integer and specify Y for years, M for months, or D for days. The default
value is infinite. Setting infinite means that the files can never be
deleted.
[-min_retention {<integer>{Y|M|D}|infinite}]
Sets the minimum retention period that files on an FLR-enabled
filesystem can be locked and protected from deletion. This value must
be less than or equal to the -max_retention option. Type an integer and
specify Y for years, M for months, or D for days. The default value is one
day. Setting infinite means that the files can never be deleted.
[-max_retention {<integer>{Y|M|D}|infinite}]
Sets the maximum retention period that files on an FLR-enabled
filesystem can be locked and protected from deletion. Type an integer
and specify Y for years, M for months, or D for days. The default
value is infinite. Setting infinite means that the files can never be
deleted.
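The ordering these three options must satisfy (minimum <= default <= maximum) can be sketched as follows. The helper names and the 365-day/30-day conversions are illustrative assumptions, with infinite treated as larger than any finite period:

```shell
# Approximate conversion of a retention spec to days (illustrative only).
to_days() {
  case $1 in
    infinite) echo 99999999 ;;
    *Y) echo $(( ${1%Y} * 365 )) ;;
    *M) echo $(( ${1%M} * 30 )) ;;
    *D) echo "${1%D}" ;;
  esac
}
# Check the documented constraint: min <= default <= max.
valid_retention() {
  local min def max
  min=$(to_days "$1"); def=$(to_days "$2"); max=$(to_days "$3")
  [ "$min" -le "$def" ] && [ "$def" -le "$max" ]
}
valid_retention 1D infinite infinite && echo "OK"   # the documented defaults
```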
-modify <fs_name> -worm [-auto_lock {enable [-policy_interval
<integer>{M|D|H}]|disable}]
[-auto_delete {enable|disable}][-policy_interval <integer>{M|D|H}]
For an FLR-enabled filesystem, manages automatic file locking and automatic
file deletion.
[-auto_lock {enable|disable}]
Specifies whether automatic file locking for all files in an
FLR-enabled file system is on or off. When enabled, auto-locked files are
set with the default retention period value.
[-policy_interval <integer>{M|D|H}]
Specifies an interval for how long to wait after the files are modified
before the files are automatically locked in an FLR-enabled file system.
Type an integer and specify M for minutes, D for days, or H for hours.
The policy interval has a minimum value of one minute and a maximum
value of 366 days. The default value is one hour.
[-auto_delete {enable|disable}]
Specifies whether automatically deleting locked files from an
FLR-enabled file system once the retention period has expired is on or off.
-modify <fs_name> -worm -reset_epoch <year>
For an FLR-enabled file system, specifies the base year used for
calculating the retention date of a file beyond 2038. Type an integer
within the range of 2000 to 2037. The default value is 2003. The
maximum value for the retention period is December 31, 2104 11:59:59 p.m.
Trying to set a date beyond this value generates an error. Refer to Using
VNX File-Level Retention for additional information.
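One way to picture the epoch arithmetic is to treat retention dates as a 32-bit seconds offset from January 1 of the epoch year, which gives roughly 68 years of range. This internal representation is an assumption for illustration only; the guide does not describe it, and the helper below is hypothetical:

```shell
# Rough ceiling on retention dates for a given epoch year (assumption:
# about 2^31 seconds, roughly 68 years, of representable offset).
max_retention_year() {
  local epoch_year=${1:-2003}   # 2003 is the documented default
  echo $(( epoch_year + 68 ))
}
max_retention_year 2037   # consistent with the December 31, 2104 ceiling
```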
-Type <type> <fs_name> -Force
Changes the file system type from the one of <fs_name> to the new
specified <type>.
Caution: Converting uxfs to rawfs is prevented.
Caution: The conversion from rawfs to uxfs will fail with "Error 3105:
invalid filesystem specified" because a uxfs is not available on the
rawfs. However, if the user initially creates a rawfs, and restores an
NDMP volume backup on the rawfs, then the conversion from the rawfs to a
uxfs will be successful.
CREATING A FILE SYSTEM
----------------------
File systems can be created by using:
* A volume specified by name
* A volume specified by its size and desired storage pool
* An existing local or remote filesystem with the samesize option
* An existing local or remote filesystem with the samesize option
  and by using space from the available storage pool
[-name <name>][-type <type>] -create <volume_name>
Creates a file system on the specified volume and assigns an optional
name to the file system. If a name is not specified, one is assigned
automatically.
A file system name cannot:
* Begin with a dash (-)
* Be comprised entirely of integers
* Be a single integer
* Contain the word root or contain a colon (:)
The -type option assigns the file system type to be uxfs (default),
mgfs, or rawfs.
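The naming rules above can be expressed as a pre-check. The helper below is hypothetical and not part of nas_fs:

```shell
# Rejects names that begin with a dash, contain "root" or a colon,
# or are made up entirely of integers.
valid_fs_name() {
  case "$1" in
    -*|*root*|*:*) return 1 ;;
  esac
  case "$1" in
    *[!0-9]*) return 0 ;;   # contains at least one non-digit character
    *)        return 1 ;;   # entirely integers (including a single integer)
  esac
}
valid_fs_name ufs1   && echo "accepted"
valid_fs_name 12345  || echo "rejected: all digits"
```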
[samesize=<fs_name>[:cel=<cel_name>]]
Specifies that the new file system must be created with the same
size as the specified local or remote file system. When using the
samesize option with the options described below, slice= must be
set to y.
Note: The specified file system must be mounted.
[worm={enterprise|compliance|off}]
Enables the storage capability on a new file system. The option can
only be specified when creating a new file system; existing file
systems cannot be converted. After a file system is enabled, it is
persistently marked as such until the time when it is deleted.
Note: The compliance file system cannot be deleted if it has protected
files.
Caution: The Enterprise version of this feature is intended for
self-regulated archiving. The administrator is considered a
trusted user and the capability does not protect the archived
data from the administrator’s actions. If the administrator
attempts to delete the file system, the file system issues a
warning message and prompts the administrator to confirm the
operation. This version is not intended for high-end
compliance applications such as pharmaceuticals, aerospace, or
finance.
As part of enabling file-level retention (worm) on a new file system,
you can also set these retention period options:
[-default_retention {<integer>{Y|M|D}|infinite}]
Sets a default retention period that is used in an FLR-enabled file
system when a file is locked and a retention period is not specified.
This value must be greater than or equal to the -min_retention option,
and less than or equal to the -max_retention option. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is infinite, which means that the files can never be deleted.
[-min_retention {<integer>{Y|M|D}|infinite}]
Sets the minimum retention period that files on an FLR-enabled file
system can be locked and protected from deletion. This value must be
less than or equal to the -max_retention option. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is one day. Setting infinite means that the files can never be
deleted.
[-max_retention {<integer>{Y|M|D}|infinite}]
Sets the maximum retention period that files on an FLR-enabled file
system can be locked and protected from deletion. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is infinite, which means that the files can never be deleted.
log_type={common|split}
Specifies the type of log file associated with the file system. Log
files can be either shared (common) or uniquely assigned to individual
file systems (split). For the SRDF Async or STAR feature, the split
option is strongly recommended to avoid an fsck before mounting a BCV
file system on SiteB or SiteC.
[fast_clone_level={1|2}]
fast_clone_level=1 enables the ability to create a fast clone on the
file system. fast_clone_level=2 enables the ability to create a fast
clone of a fast clone (also called a second-level fast clone) on the
file system. File-level retention and fast clone creation cannot be
enabled together on a file system. Enabling split log implies
fast_clone_level=2, if file-level retention is not enabled on the
filesystem. Replication sessions cannot be created between two
filesystems with different fast_clone_level capabilities.
Note: fast_clone_level=1 indicates that a fast clone can be created on
the filesystem and it is the default option if nothing is specified.
[-option <options>]
Specifies the following comma-separated options:
nbpi=<number>
The number of bytes per inode block. The default is 8192 bytes.
mover=<movername>
Assigns an optional Data Mover to build a file system. If no Data
Mover is assigned, the system will automatically pick the first
available Data Mover to build the file system.
slice={y|n}
Specifies whether the disk volumes used by the new file system may be
shared with other file systems by using a slice. The slice=y option
allows the file system to share disk volumes with other file systems.
The slice=n option ensures that the new file system has exclusive
access to the disk volumes it uses, and is relevant when using
TimeFinder/FS.
When symm_std, symm_std_rdf_src, symm_ata,
symm_ata_rdf_src, symm_ata_rdf_tgt, symm_std_rdf_tgt, symm_fts,
symm_fts_rdf_tgt, and symm_fts_rdf_src pools are specified, the default
is not to slice the volumes. When slice=y is specified, it overrides
the default. For symm_efd, the default is slice=y, because TimeFinder/FS
is not supported with Flash disk types.
When clar_r1, clar_r5_performance, clar_r5_economy, clar_r6,
clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3,
cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6,
clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6,
cmsas_r10, and cmefd_r5 pools are specified, the default for
standard AVM pools is to slice the volumes (slice=y), which is
overridden by using slice=n. The default for mapped pools is not
to slice the volumes (slice=n). Use nas_pool to change the default
slice option.
id=<desired_id>
Specifies the ID to be assigned to the new file system. If a file
system already exists with the specified ID, a warning is displayed
indicating that the ID is not available, and the new file system is
assigned the next available ID.
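The nbpi option above determines how densely inodes are allocated. A rough back-of-the-envelope estimate (an assumption for illustration: inode count is approximately the file system size divided by nbpi) can be sketched as:

```shell
# Estimated inode count for a file system of fs_bytes built with a given nbpi.
inode_estimate() {
  local fs_bytes=$1 nbpi=${2:-8192}   # 8192 bytes per inode is the default
  echo $(( fs_bytes / nbpi ))
}
inode_estimate $(( 1024 * 1024 * 1024 ))        # 1 GB at the default nbpi
inode_estimate $(( 1024 * 1024 * 1024 )) 16384  # doubling nbpi halves the count
```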
[-name <name>][-type <type>] -create {size=
<integer>[T|G|M]|samesize=<fs_name>[:cel=<cel_name>]} pool=<pool>
Creates a file system on the volume specified by its desired size and
storage pool or by using the same size as a specified local or remote
file system. Also assigns an optional name and file system type to a
file system. If a name is not specified, one is assigned automatically. A
file system name can be up to 240 characters, but cannot begin with a
dash (-), be comprised entirely of integers or be a single integer,
contain the word root or contain a colon (:). Available file system
types are uxfs (default), mgfs, or rawfs.
When using the samesize option with the options described below,
slice= should be set to y. The new file system is created with the
same size as the specified file system.
The pool option specifies a rule set for the new file system that
contains automatically created volumes and defines the type of disk
volumes used and how they are aggregated. Storage pools are system
defined (storage pool description provides more information) or user
defined.
[worm={enterprise|compliance|off}]
Enables the storage capability on the new file system. The
capability can only be specified when creating a new file system;
existing file systems cannot be converted. After a file system is
enabled, it is persistently marked as such until the time when it is
deleted.
Caution: The Enterprise version of this feature is intended for
self-regulated archiving. The administrator is considered a
trusted user and the feature does not protect the archived data from
the administrator’s actions. If the administrator attempts to
delete a file system, the file system issues a warning message
and prompts the administrator to confirm the operation. This
version of this feature is not intended for high-end compliance
applications such as pharmaceuticals, aerospace, or finance.
As part of enabling file-level retention (worm) on a new file system,
you can also set these retention period options:
[-default_retention {<integer>{Y|M|D}|infinite}]
Sets a default retention period that is used in an FLR-enabled file
system when a file is locked and a retention period is not specified.
This value must be greater than or equal to the -min_retention option,
and less than or equal to the -max_retention option. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is infinite, which means that the files can never be deleted.
[-min_retention {<integer>{Y|M|D}|infinite}]
Sets the minimum retention period that files on an FLR-enabled file
system can be locked and protected from deletion. This value must be
less than or equal to the -max_retention option. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is one day. Setting infinite means that the files can never be
deleted.
[-max_retention {<integer>{Y|M|D}|infinite}]
Sets the maximum retention period that files on an FLR-enabled file
system can be locked and protected from deletion. Type an integer and
specify Y for years, M for months, D for days, or infinite. The default
value is infinite, which means that the files can never be deleted.
[storage=<system_name>]
Specifies the system on which the file system resides. Use
nas_storage -list to obtain a list of the available system names.
[-auto_extend {no|yes} [-thin {no|yes}]]
For an AVM file system, turns automatic file system extension
and thin provisioning on or off, and sets a high water mark and
maximum size for the file system. When automatic file system
extension is turned on, the file system is automatically extended
up to the maximum size specified when the high water mark is
reached. The default for -auto_extend is no.
Thin provisioning reports the maximum file system size to the
CIFS, NFS, and FTP users, even if the actual size of the file system
is smaller. If disabled, the true file system size and maximum file
system sizes are reported to the system administrator. Thin
provisioning requires that a maximum file system size also be set.
If a file system is created in a storage pool that contains:
* Only thick LUNs, use the nas_fs command’s -thin option to enable
thin provisioning on the file system.
* Only thin LUNs, using the nas_fs command’s -thin option is not
recommended. It is redundant, but allowed, for a thin file system to
be built on thin LUNs.
* Both thick and thin LUNs, the file system may be built on either
thick LUNs, thin LUNs, or both thick and thin LUNs. Using the nas_fs
command’s -thin option may be redundant if the file system uses thin
LUNs.
Note: SRDF pools are not supported.
[-hwm <50-99>%]
Specifies the size threshold that must be reached before the file
system is automatically extended. Type an integer within the range of
50 to 99 to represent the percentage of file system usage. The default
is 90.
[-max_size <integer> [T|G|M]]
Sets the maximum file system size to which a file system can be
extended. Type an integer and specify T for terabytes, G for
gigabytes (default), or M for megabytes. If the -max_size option is
not specified, then it defaults to the maximum limit of the file system
size which is 16 terabytes. Maximum size must be set to
enable thin provisioning. The maximum size is what is presented
to users as the file system size through thin provisioning.
[-option <options>]
Specifies the following comma-separated options:
nbpi=<number>
The number of bytes per inode block. The default is 8192 bytes.
mover=<movername>
Assigns an optional Data Mover on which to build a file system.
If no Data Mover is assigned, the system will automatically pick
the first available Data Mover to build the file system.
slice={y|n}
Specifies whether the disk volumes used by the new file system may be
shared with other file systems by using a slice. The slice=y option
allows the file system to share disk volumes with other file systems.
The slice=n option ensures that the new file system has exclusive
access to the disk volumes it uses, and is relevant when using
TimeFinder/FS.
When symm_std, symm_std_rdf_src, symm_ata,
symm_ata_rdf_src, symm_ata_rdf_tgt, symm_std_rdf_tgt, symm_fts,
symm_fts_rdf_tgt, and symm_fts_rdf_src pools are specified, the
default is not to slice the volumes, which is overridden with slice=y.
For symm_efd, the default is slice=y, because TimeFinder/FS is not
supported with Flash disk types.
When clar_r1, clar_r5_performance, clar_r5_economy, clar_r6,
clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3,
cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6,
clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6,
cmsas_r10, and cmefd_r5 pools are specified, the default for
standard AVM pools is to slice the volumes (slice=y), which is
overridden by using slice=n. The default for mapped pools is not
to slice the volumes (slice=n). Use nas_pool to change the default
slice option.
[-name <name>] -type nmfs -create
Creates a nested mount file system (NMFS) that can be used to
combine multiple uxfs file systems into a single virtual file system.
The NMFS can then be mounted and exported as a single
share or mount point.
SEE ALSO
--------
Managing Volumes and File Systems with VNX Automatic Volume
Management, Managing Volumes and File Systems for VNX Manually,
Using VNX File-Level Retention, Controlling Access to System Objects on
VNX, Using VNX Replicator, fs_ckpt, fs_timefinder, nas_acl, nas_rdf,
nas_volume, server_export, server_mount, fs_dedupe, and
server_mountpoint.
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device is dependent on the
attached storage system. VNX for Block displays a prefix of APM
before a set of integers, for example, APM00033900124-0019.
Symmetrix storage systems appear as 002804000190-003C. The
outputs displayed in the examples use a VNX for Block.
VNX for Block supports the following system-defined storage pools:
clar_r1, clar_r5_performance, clar_r5_economy, clar_r6, clarata_r3,
clarata_r6, clarata_r10, clarata_archive, cm_r1, cm_r5_performance,
cm_r5_economy, cm_r6, cmata_r3, cmata_archive, cmata_r6,
cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10, clarefd_r5,
clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and cmefd_r5.
VNXs with a Symmetrix storage system support the following
system-defined storage pools: symm_std_rdf_src, symm_std,
symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt,
symm_std_rdf_tgt, symm_efd, symm_fts, symm_fts_rdf_tgt,
and symm_fts_rdf_src.
For user-defined storage pools, the difference in output is in the disk
type. Disk types when using a Symmetrix are: STD, R1STD, R2STD,
BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA,
R2BCA, EFD, FTS, R1FTS, R2FTS, R1BCF, R2BCF, and BCVF.
Disk types when using VNX for Block are: CLSTD, CLEFD, CLATA,
MIXED (indicates that tiers used in the pool contain multiple disk
types), Performance, Capacity, and Extreme_performance and for
VNX for block involving mirrored disks are: CMEFD, CMSTD,
CMATA, Mirrored_mixed, Mirrored_performance,
Mirrored_capacity, and Mirrored_extreme_performance.
EXAMPLE #1
----------
To create a file system named ufs1 on metavolume mtv1, type:
$ nas_fs -name ufs1 -create mtv1
id                            = 37
name                          = ufs1
acl                           = 0
in_use                        = False
type                          = uxfs
worm                          = enterprise with no protected files
worm_clock                    = Clock not initialized
worm Max Retention Date       = NA
worm Default Retention Period = infinite
worm Minimum Retention Period = 1 Day
worm Maximum Retention Period = infinite
FLR Auto_lock                 = off
FLR Policy Interval           = 3600 seconds
FLR Auto_delete               = off
FLR Epoch Year                = 2003
volume                        = mtv1
pool                          =
rw_servers                    =
ro_servers                    =
rw_vdms                       =
ro_vdms                       =
auto_ext                      = no,thin=no
deduplication                 = off
stor_devs                     = APM00042000818-0012,APM00042000818-0014
disks                         = d7,d9
Where:
Value                         Definition
id                            Automatically assigned ID of a file system.
name                          Name assigned to a file system.
acl                           Access control value assigned to the file system.
in_use                        Whether a file system is registered into the mount
                              table of a Data Mover.
type                          Type of file system. See -list for a description
                              of the types.
volume                        Volume on which a file system resides.
worm                          Write Once Read Many (WORM) state of the file
                              system. It states whether file-level retention is
                              disabled or set to either compliance or enterprise.
pool                          Storage pool for the file system.
rw_servers                    Servers with read/write access to a file system.
ro_servers                    Servers with read-only access to a file system.
rw_vdms                       VDM servers with read/write access to a file system.
ro_vdms                       VDM servers with read-only access to a file system.
worm_clock                    Software clock maintained by the file system. The
                              clock functions only when the file system is
                              mounted read/write.
worm Max Retention Date       Time when the protected files expire. The file
                              system can be deleted only after this date. The
                              special values returned are:
                              * 3 - The file system is set to file-level
                                retention enterprise with protected files.
                              * 2 - The file system is scanning for the
                                max_retention period.
                              * 1 - The default value (no protected files
                                created).
                              * 0 - Infinite retention period (if the server is
                                up and running).
worm Default Retention Period Specifies a default retention period that files on
                              an FLR-enabled file system will be locked and
                              protected from deletion. If you do not set either
                              a minimum retention period or a maximum retention
                              period, this default value is used when file-level
                              retention is enabled.
worm Minimum Retention Period Specifies the minimum retention period that files
                              on an FLR-enabled file system will be locked and
                              protected from deletion.
worm Maximum Retention Period Specifies the maximum retention period that files
                              on an FLR-enabled file system will be locked and
                              protected from deletion.
FLR Auto_lock                 Specifies whether automatic file locking for all
                              files in an FLR-enabled file system is on or off.
FLR Policy Interval           Specifies an interval for how long to wait after
                              files are modified before the files are
                              automatically locked and protected from deletion.
FLR Auto_delete               Specifies whether locked files are automatically
                              deleted once the retention period has expired.
FLR Epoch Year                Specifies the base year used for calculating the
                              retention date of a file beyond 2038. When a file
                              is locked with its atime set to a value greater
                              than the FLR Epoch Year value, the file's
                              retention date is set to the file's atime value.
                              When a file is locked with its atime set to a
                              value less than the FLR Epoch Year value, the
                              file's retention date is set to
                              2038 + (YEAR(atime) - 1970).
auto_ext                      Indicates whether auto-extension and thin
                              provisioning are enabled.
deduplication                 Deduplication state of the file system. The file
                              data is transferred to the storage, which performs
                              the deduplication and compression on the data. The
                              states are:
                              * On - Deduplication on the file system is enabled.
                              * Suspended - Deduplication on the file system is
                                suspended. Deduplication does not perform any
                                new space reduction, but the existing files
                                that were reduced in space remain the same.
                              * Off - Deduplication on the file system is
                                disabled. Deduplication does not perform any
                                new space reduction and the data is now
                                reduplicated.
stor_devs                     Storage system devices associated with a file
                              system.
disks                         Disks on which the metavolume resides.
Note: The Deduplication state is unavailable when the file system is unmounted.
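The FLR Epoch Year arithmetic described above can be sketched as a small helper. This is a hypothetical illustration, not part of the eNAS CLI; it applies the documented rule (atime year kept when greater than the epoch year, otherwise 2038 + (YEAR(atime) - 1970)) exactly as stated:

```python
from datetime import datetime

def flr_retention_year(atime: datetime, epoch_year: int = 2003) -> int:
    """Apply the documented FLR Epoch Year rule.

    If the file's atime year is greater than the epoch year, the retention
    date keeps the atime year; otherwise the retention year becomes
    2038 + (YEAR(atime) - 1970).
    """
    if atime.year > epoch_year:
        return atime.year
    return 2038 + (atime.year - 1970)
```

For example, with the default epoch year of 2003, an atime in 1980 maps to a retention year of 2048, while an atime in 2050 is used unchanged.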
EXAMPLE #2
----------
To display information about the file system with ID 14, which uses the
clar_mapped_pool VNX mapped pool, type:
$ nas_fs -info id=14
id        = 14
name      = ufs2_flre
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Fri Jul 29 07:56:42 EDT 2011
worm Max Retention Date = No protected files created
worm Default Retention Period = 10 Years
worm Minimum Retention Period = 30 Days
worm Maximum Retention Period = 10 Years
FLR Auto_lock = off
FLR Policy Interval = 3600 seconds
FLR Auto_delete = off
FLR Epoch Year = 2003
volume    = v117
pool      = clar_mapped_pool
member_of = root_avm_fs_group_50
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed= False
mirrored  = False
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
Where:
Value           Definition
thin_storage    Indicates whether the VNX for Block storage system uses thin
                provisioning. Values are: True, False, Mixed.
tiering_policy  Indicates the tiering policy in effect. If the initial tier
                and the tiering policy are the same, the values are:
                Auto-Tier, Highest Available Tier, Lowest Available Tier. If
                the initial tier and the tiering policy are not the same, the
                values are: Auto-Tier/No Data Movement, Highest Available
                Tier/No Data Movement, Lowest Available Tier/No Data Movement.
compressed      Indicates whether data is compressed. Values are: True,
                False, Mixed (indicates that some of the LUNs, but not all,
                are compressed).
mirrored        Indicates whether the disk is mirrored.
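Outputs like the one above follow a simple `key = value` layout, so they can be turned into a dictionary for scripting. The following is a minimal sketch (a hypothetical helper, not an eNAS tool) that splits each line at its first `=` and skips the per-disk detail lines:

```python
def parse_nas_fs_output(text: str) -> dict:
    """Parse 'key = value' lines of nas_fs output into a dict.

    Splits at the first '=' only, strips surrounding whitespace, and skips
    lines without '=' as well as the 'disk=...' detail lines.
    """
    result = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("disk="):
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result
```

Note that empty fields such as `ro_servers=` parse to an empty string, which distinguishes "present but unset" from "absent".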
EXAMPLE #3
----------
To display a list of file systems, type:
$ nas_fs -list
id    inuse type acl volume name                server
1     n     1    0   20     root_fs_1
2     y     1    0   50     root_fs_common      1
3     n     5    0   83     root_fs_ufslog
5     n     5    0   103    root_fs_d3
6     n     5    0   104    root_fs_d4
7     n     5    0   105    root_fs_d5
8     n     5    0   106    root_fs_d6
9     y     1    0   22     root_fs_2           1
10    n     5    0   108    root_panic_reserve
11    y     1    0   112    ufs1                1
13    y     1    0   115    ufs1_flr            1
14    y     1    0   117    ufs2_flre           1
EXAMPLE #4
----------
To list all the file systems including internal checkpoints, type:
$ nas_fs -list -all
id    inuse type acl volume name                server
1     n     1    0   24     root_fs_1
2     y     1    0   26     root_fs_2           1
3     y     1    0   28     root_fs_3           2
4     n     1    0   30     root_fs_4
5     n     1    0   32     root_fs_5
6     n     1    0   34     root_fs_6
7     n     1    0   36     root_fs_7
8     n     1    0   38     root_fs_8
9     n     1    0   40     root_fs_9
10    n     1    0   42     root_fs_10
11    n     1    0   44     root_fs_11
12    n     1    0   46     root_fs_12
13    n     1    0   48     root_fs_13
14    n     1    0   50     root_fs_14
15    n     1    0   52     root_fs_15
16    y     1    0   54     root_fs_common      2,1
17    n     5    0   87     root_fs_ufslog
18    n     5    0   90     root_panic_reserve
212   y     1    0   315    v2src1              1
213   y     101  0   0      root_avm_fs_group_3
214   n     1    0   318    v2dst1
230   y     1    0   346    v2srclun1           1
231   y     1    0   349    v2dstlun1           2
342   y     1    0   560    root_fs_vdm_srcvdm1 1
343   y     1    0   563    root_fs_vdm_srcvdm2 1
986   n     11   0   0      vpfs986
987   y     7    0   1722   gstest              1
988   y     1    0   1725   src1                1
989   y     5    0   1728   dst1                1
1343  n     11   0   0      vpfs1343
1344  y     7    0   2351   root_rep_ckpt_342_2 1
1345  y     7    0   2351   root_rep_ckpt_342_2 1
1346  y     1    0   2354   root_fs_vdm_srcvdm1 1
1347  n     11   0   0      vpfs1347
1348  y     7    0   2358   root_rep_ckpt_1346_ 1
1349  y     7    0   2358   root_rep_ckpt_1346_ 1
1350  y     1    0   2367   fs1                 v9
1354  n     1    0   2374   fs1_replica1
1358  n     11   0   0      vpfs1358
1359  y     7    0   2383   root_rep_ckpt_1350_ v9
1360  y     7    0   2383   root_rep_ckpt_1350_ v9
1361  n     1    0   2385   fs1_replica2
1362  n     11   0   0      vpfs1362
1363  n     7    0   2388   root_rep_ckpt_1361_
1364  n     7    0   2388   root_rep_ckpt_1361_
1365  y     1    0   2392   fs1365              1
1366  y     7    0   2383   root_rep_ckpt_1350_ v9
1367  y     7    0   2383   root_rep_ckpt_1350_ v9
1368  n     11   0   0      vpfs1368
1369  n     7    0   2395   root_rep_ckpt_1354_
1370  n     7    0   2395   root_rep_ckpt_1354_
1371  y     1    0   2399   root_fs_vdm_v1      1
1372  y     1    0   2401   f1                  v40
1376  y     1    0   2406   root_fs_vdm_v1_repl 2
1380  n     11   0   0      vpfs1380
1381  y     7    0   2414   root_rep_ckpt_1372_ v40
1382  y     7    0   2414   root_rep_ckpt_1372_ v40
1383  y     1    0   2416   f1_replica1         v41
1384  n     11   0   0      vpfs1384
1385  y     7    0   2419   root_rep_ckpt_1383_ v41
1386  y     7    0   2419   root_rep_ckpt_1383_ v41
1387  y     1    0   2423   cworm               1
1388  n     1    0   2425   cworm1
1389  y     1    0   2427   fs2                 2
1390  y     1    0   2429   fs3                 2
1391  n     11   0   0      vpfs1391
1392  y     7    0   2432   root_rep_ckpt_1389_ 2
1393  y     7    0   2432   root_rep_ckpt_1389_ 2
1394  n     11   0   0      vpfs1394
1395  y     7    0   2435   root_rep_ckpt_1390_ 2
1396  y     7    0   2435   root_rep_ckpt_1390_ 2
1397  y     7    0   2432   fs2_ckpt1           2
1398  y     1    0   2439   fs4                 2
1399  y     1    0   2441   fs5                 2
1400  n     11   0   0      vpfs1400
1401  y     7    0   2444   root_rep_ckpt_1398_ 2
1402  y     7    0   2444   root_rep_ckpt_1398_ 2
1403  n     11   0   0      vpfs1403
1404  y     7    0   2447   root_rep_ckpt_1399_ 2
1405  y     7    0   2447   root_rep_ckpt_1399_ 2
1406  y     7    0   2444   fs4_ckpt1           2
Note: NDMP and Replicator internal checkpoints can be identified by
specific prefixes in the file system name. Using VNX SnapSure provides
more information about the naming formats for internal checkpoints.
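Based on the listing above, internal entries carry prefixes such as `root_rep_ckpt_` and `vpfs`. The sketch below is a hypothetical filter under that assumption (the authoritative prefix list is in Using VNX SnapSure), useful when scripting against `nas_fs -list -all`:

```python
# Prefixes observed in the example listing; the complete set of internal
# checkpoint prefixes is documented in "Using VNX SnapSure".
INTERNAL_PREFIXES = ("root_rep_ckpt_", "vpfs")

def is_internal_checkpoint(fs_name: str) -> bool:
    """Return True when a name matches a known internal-checkpoint prefix."""
    return fs_name.startswith(INTERNAL_PREFIXES)

def user_file_systems(names):
    """Filter a nas_fs -list -all name column down to non-internal entries."""
    return [n for n in names if not is_internal_checkpoint(n)]
```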
EXAMPLE #5
----------
To create a uxfs file system named ufs20 on storage system BB005056830430,
with a size of 1 GB, using the clar_r5_performance pool and allowing the
file system to share disk volumes with other file systems, type:
$ nas_fs -name ufs20 -type uxfs -create size=1G pool=clar_r5_performance
storage=BB005056830430 -option slice=y
id        = 15
name      = ufs20
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v119
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8
Where:
Value      Definition
member_of  File system group to which the file system belongs.
EXAMPLE #1 provides a description of command output.
EXAMPLE #6
----------
To create a rawfs file system named ufs3 with the same size as the file
system ufs1, using the clar_r5_performance pool and allowing the file
system to share disk volumes with other file systems, type:
$ nas_fs -name ufs3 -type rawfs -create samesize=ufs1 pool=clar_r5_performance
storage=APM00042000818 -option slice=y
id        = 39
name      = ufs3
acl       = 0
in_use    = False
type      = rawfs
worm      = off
volume    = v173
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
EXAMPLE #1 and EXAMPLE #3 provide a description of command outputs.
EXAMPLE #7
----------
To create a uxfs file system named ufs4, with a size of 100 GB, using the
clar_r5_performance pool, with file-level retention set to enterprise,
4096 bytes per inode, and server_3 for file system building, type:
$ nas_fs -name ufs4 -create size=100G pool=clar_r5_performance worm=enterprise
-option nbpi=4096,mover=server_3
id        = 16
name      = ufs4
acl       = 0
in_use    = False
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= infinite
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= infinite
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
To ensure retention of protected files, it can also be set to compliance by
typing:
$ nas_fs -name ufs4 -create size=100G pool=clar_r5_performance worm=compliance
-option nbpi=4096,mover=server_3
id        = 17
name      = ufs4
acl       = 0
in_use    = False
type      = uxfs
worm      = compliance with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= infinite
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= infinite
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume    = v123
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #8
----------
To create a file system named ufs30, with a size of 1 GB, using the
clar_r5_performance pool, with file-level retention set to enterprise, a
minimum retention period of 30 days, and a maximum retention period of 10
years, type:
$ nas_fs -name ufs30 -create size=1G pool=clar_r5_performance worm=enterprise
-min_retention 30D -max_retention 10Y
id        = 18
name      = ufs30
acl       = 0
in_use    = False
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= 10 Years
worm Minimum Retention Period= 30 Days
worm Maximum Retention Period= 10 Years
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume    = v125
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
EXAMPLE #1 provides a description of command outputs.
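Retention arguments such as `-min_retention 30D` and `-max_retention 10Y` combine a number with a unit suffix. A minimal sketch of that parsing, assuming `D` means days and `Y` means years with 365 days per year (the CLI's own calendar handling is not specified here), might look like:

```python
def retention_period_to_days(spec: str) -> int:
    """Convert a retention argument such as '30D' or '10Y' to days.

    Assumes 365 days per year for illustration; D = days, Y = years.
    """
    unit = spec[-1].upper()
    value = int(spec[:-1])
    if unit == "D":
        return value
    if unit == "Y":
        return value * 365
    raise ValueError(f"unsupported unit: {unit}")
```

Such a helper makes it easy to sanity-check that a requested minimum (for example, `30D`) does not exceed the maximum (`10Y`) before issuing the command.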
EXAMPLE #9
----------
To display information about file system ufs4, type:
$ nas_fs -info ufs4
id        = 16
name      = ufs4
acl       = 0
in_use    = False
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= infinite
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= infinite
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #10
-----------
To create a uxfs file system named ufs40, with a size of 10 GB, using the
clar_r5_performance pool, and assigning it an ID of 8000, type:
$ nas_fs -name ufs40 -type uxfs -create size=10G pool=clar_r5_performance
-option slice=y,id=8000
id = 8000
name = ufs40
acl = 0
in_use = False
type = uxfs
worm = off
volume = v127
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
117
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8
EXAMPLE #11
-----------
To create a uxfs file system named ufs41, with a size of 10 GB, using the
clar_r5_performance pool, and requesting the ID 8000 again, type:
$ nas_fs -name ufs41 -type uxfs -create size=10G pool=clar_r5_performance
-option slice=y,id=8000
id        = 8001
name      = ufs41
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v129
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
Warning 17716815881: unavailable id : 8000.
Note: The warning output is displayed if the desired ID is not available.
Because id=8000 was used in EXAMPLE #10, the system set the ID to 8001
instead.
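The fallback shown above (8000 was taken, so 8001 was assigned) can be modeled as a next-free-ID search. This is a hypothetical sketch of that behavior, not the system's actual allocation code:

```python
def assign_fs_id(requested: int, used_ids: set) -> int:
    """Return the requested ID if free, else the next unused ID above it.

    Mirrors the observed behavior: requesting 8000 while 8000 is in use
    yields 8001.
    """
    candidate = requested
    while candidate in used_ids:
        candidate += 1
    return candidate
```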
EXAMPLE #12
-----------
To view the size of ufs1, type:
$ nas_fs -size ufs1
total = 945 avail = 945 used = 1 ( 0% ) (sizes in MB) ( blockcount = 2097152 )
volume: total = 1024 (sizes in MB) ( blockcount = 2097152 ) avail = 944 used = 80 ( 8% )
When a file system is mounted, the size information for the volume and
the file system, as well as the number of blocks used, is displayed.
Where:
Value       Definition
total       Total size of the file system.
blockcount  Total number of blocks used.
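The `-size` output is a single line of `key = value` pairs mixed with annotations, so a small regular expression is enough to recover the numbers. This is an illustrative helper, not part of the CLI:

```python
import re

def parse_size_line(line: str) -> dict:
    """Extract total/avail/used (MB) and blockcount from a nas_fs -size line."""
    fields = {}
    for key in ("total", "avail", "used", "blockcount"):
        m = re.search(rf"{key}\s*=\s*(\d+)", line)
        if m:
            fields[key] = int(m.group(1))
    return fields
```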
EXAMPLE #13
-----------
To rename a file system from ufs1 to ufs5, type:
$ nas_fs -rename ufs1 ufs5
id        = 11
name      = ufs5
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v112
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
EXAMPLE #1 and EXAMPLE #3 provide a description of command outputs.
EXAMPLE #14
-----------
To extend the file system, ufs1, with the volume, emtv2b, type:
$ nas_fs -xtend ufs1 emtv2b
id        = 38
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v171
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = off
stor_devs =
APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016,APM00042000818-001C
disks     = d20,d18,d14,d11,d17
disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15  server=server_2
disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13  server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9   server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9  server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6   server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6  server=server_2
disk=d17  stor_dev=APM00042000818-001C addr=c0t1l12  server=server_2
disk=d17  stor_dev=APM00042000818-001C addr=c32t1l12 server=server_2
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #15
-----------
To extend the file system named ufs5, with the specified size of 1 GB,
using the clar_r5_performance pool, type:
$ nas_fs -xtend ufs5 size=1G pool=clar_r5_performance
id        = 11
name      = ufs5
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v112
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #16
-----------
To set the access control level to 1432 for the file system ufs5, type:
$ nas_fs -acl 1432 ufs5
id        = 11
name      = ufs5
acl       = 1432, owner=nasadmin, ID=201
in_use    = True
type      = uxfs
worm      = off
volume    = v112
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
Note: The value 1432 specifies nasadmin as the owner and gives users with
an access level of at least observer read access only, users with an
access level of at least operator read/write access, and users with an
access level of at least admin read/write/delete access.
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #17
-----------
To set the maximum retention period for file system ufs2_flre to 11 years,
type:
$ nas_fs -modify ufs2_flre -worm -max_retention 11Y
id        = 14
name      = ufs2_flre
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Fri Jul 29 11:14:27 EDT 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 10 Years
worm Minimum Retention Period= 30 Days
worm Maximum Retention Period= 11 Years
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume    = v117
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #18
-----------
To reset the FLR epoch year for file system ufs2_flre to 2000, type:
$ nas_fs -modify ufs2_flre -worm -reset_epoch 2000
id        = 14
name      = ufs2_flre
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Fri Jul 29 11:18:36 EDT 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 10 Years
worm Minimum Retention Period= 30 Days
worm Maximum Retention Period= 11 Years
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2000
volume    = v117
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
EXAMPLE #19
-----------
To enable FLR automatic file locking with a policy interval of 30 minutes
for file system ufs2_flre, type:
$ nas_fs -modify ufs2_flre -worm -auto_lock enable -policy_interval 30M
id        = 14
name      = ufs2_flre
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Fri Jul 29 12:14:44 EDT 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 10 Years
worm Minimum Retention Period= 30 Days
worm Maximum Retention Period= 11 Years
FLR Auto_lock= on
FLR Policy Interval= 1800 seconds
FLR Auto_delete= off
FLR Epoch Year= 2000
volume    = v117
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
EXAMPLE #20
-----------
To enable FLR automatic file deletion for file system ufs2_flre, type:
$ nas_fs -modify ufs2_flre -worm -auto_delete enable
id        = 40
name      = ufs4
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock= Wed Jul 6 11:11:13 UTC 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 1 Year
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= 1 Year
FLR Auto_lock= on
FLR Policy Interval= 1800 seconds
FLR Auto_delete= on
FLR Epoch Year= 2000
volume    = v175
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
EXAMPLE #21
-----------
To start the conversion of the file system, ufs2, and to conform to the
MIXED access policy mode, type:
$ nas_fs -translate ufs2 -access_policy start -to MIXED -from NT
id        = 38
name      = ufs2
acl       = 1432, owner=nasadmin, ID=201
in_use    = True
type      = uxfs
worm      = off
volume    = v171
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = off
stor_devs =
APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016,APM00042000818-001C
disks     = d20,d18,d14,d11,d17
disk=d20  stor_dev=APM00042000818-001F addr=c0t1l15  server=server_2
disk=d20  stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c0t1l13  server=server_2
disk=d18  stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c0t1l9   server=server_2
disk=d14  stor_dev=APM00042000818-0019 addr=c32t1l9  server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c0t1l6   server=server_2
disk=d11  stor_dev=APM00042000818-0016 addr=c32t1l6  server=server_2
disk=d17  stor_dev=APM00042000818-001C addr=c0t1l12  server=server_2
disk=d17  stor_dev=APM00042000818-001C addr=c32t1l12 server=server_2
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #22
-----------
To display the status of access policy conversion for ufs2, type:
$ nas_fs -translate ufs2 -access_policy status
status=In progress
percent_inode_scanned=90
EXAMPLE #23
-----------
To create a nested mount file system, nmfs1, type:
$ nas_fs -name nmfs1 -type nmfs -create
id        = 8002
name      = nmfs1
acl       = 0
in_use    = False
type      = nmfs
worm      = off
volume    = 0
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
disks     =
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #24
-----------
To delete ufs41, type:
$ nas_fs -delete ufs41
name      = ufs41
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v129
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #25
-----------
To create a file system named ufs3, with a size of 1 GB, using the
clar_r5_performance pool, a maximum size of 10 GB, and with auto-extension
and thin provisioning enabled, type:
$ nas_fs -name ufs3 -create size=1G pool=clar_r5_performance -auto_extend yes
-max_size 10G -thin yes
id        = 8003
name      = ufs3
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v133
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=10240M,thin=yes
deduplication = unavailable
stor_devs =
BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #26
-----------
To disable thin provisioning on ufs3, type:
$ nas_fs -modify ufs3 -thin no
id        = 8003
name      = ufs3
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v133
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=10240M,thin=no
deduplication = unavailable
stor_devs =
BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #27
-----------
To query the current directory type and translation status for MPD, type:
$ nas_fs -info ufs5 -option mpd
id        = 11
name      = ufs5
acl       = 1432, owner=nasadmin, ID=201
in_use    = True
type      = uxfs
worm      = off
volume    = v112
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
stor_devs =
BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
disk=d16  stor_dev=BB005056830430-0019 addr=c0t1l9  server=server_2
disk=d16  stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c0t1l6  server=server_2
disk=d13  stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c0t1l5  server=server_2
disk=d12  stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c0t1l0  server=server_2
disk=d7   stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2
Multi-Protocol Directory Information
Default_directory_type = DIR3
Needs_translation      = False
Translation_state      = Never
Has_translation_error  = False
where:
Value                   Definition
-----                   ----------
Default_directory_type  The default directory type for the file system.
                        Available types are: DIR3 and COMPAT.
Needs_translation       If true, the file system may contain more than
                        one directory type. If false, all directories
                        are of the file system default directory type.
Translation_state       The current state of the translation thread.
                        Available states are: never, not requested,
                        pending, queued, running, paused, completed,
                        and failed.
Has_translation_error   Indicates if the most recent translation
                        encountered any errors.

Default_directory_type  Needs_translation state  Action
----------------------  -----------------------  ------
DIR3                    False                    File system is MPD. No
                                                 action required.
DIR3                    True                     Requires translation or
                                                 file system maintenance.
                                                 Contact EMC Customer
                                                 Service.
COMPAT                  False                    Is COMPAT and requires
                                                 translation. Contact EMC
                                                 Customer Service.
COMPAT                  True                     Requires translation.
                                                 Contact EMC Customer
                                                 Service.
The combination of Default_directory_type=DIR3 and Needs_translation=False
ensures that all of this file system's directories are in MPD format and
that none use the obsolete single-protocol format. Any other combination,
for example Needs_translation=True, indicates that the file system may
contain non-MPD directories that may not be compatible with a future
release.
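For scripting, these two fields can be checked together. The following is a minimal sketch (not an EMC-supplied tool) that parses output in the format shown in EXAMPLE #27; the sample text stands in for a real `nas_fs -info <fs_name> -option mpd` run.

```shell
# Sample of the two relevant lines from "nas_fs -info ufs5 -option mpd"
# (format taken from EXAMPLE #27); a real script would capture the
# command's output here instead.
output='Default_directory_type = DIR3
Needs_translation = False'

# Split each "name = value" line on the "=" and keep the value.
dir_type=$(printf '%s\n' "$output" | awk -F' *= *' '/^Default_directory_type/ {print $2}')
needs=$(printf '%s\n' "$output" | awk -F' *= *' '/^Needs_translation/ {print $2}')

if [ "$dir_type" = "DIR3" ] && [ "$needs" = "False" ]; then
  echo "All directories are MPD; no action required."
else
  echo "Non-MPD directories may exist; contact EMC Customer Service."
fi
```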
EXAMPLE #28
----------
To display the information about the file system ufs3 and a valid
fast_clone_level of 1 or 2, type:
$ nas_fs -info ufs3
id = 478
name = ufs2_flre
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1168
pool = clarsas_archive
member_of = root_avm_fs_group_32
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
fast_clone_level= unavailable
deduplication = unavailable
stor_devs = APM00112101832-0019,APM00112101832-0028,APM00112101832-0027,APM00112101832-0022
disks = d25,d19,d32,d16
EXAMPLE #29
----------
To display the information about the file system ufs4, which uses a
Symmetrix backend mapped pool, type:
$ nas_fs -info ufs4
id = 32
name = ufs4
acl = 0
in_use = True
type = uxfs
worm = off
volume = v644
pool = symm_mapped_pool
member_of = root_avm_fs_group_21
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=50%,max_size=1024M,thin=yes
fast_clone_level = 1
deduplication = Off
compressed= Mixed
frontend_io_quota = maxiopersec 500,maxmbpersec 500
stor_devs = 000196900016-0553
disks = d524
disk=d524 stor_dev=000196900016-0553 addr=c4t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c20t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c36t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c52t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c68t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c84t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c100t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c116t3l5-0-0 server=server_2
where:
Value              Definition
-----              ----------
compressed         For VNX with Symmetrix backend, indicates whether
                   data is compressed. Values are: True, False, and
                   Mixed (indicates some of the LUNs, but not all,
                   are compressed).
frontend_io_quota  For VNX with Symmetrix backend, indicates whether
                   Frontend IO Quota is configured on this mapped
                   pool. A value of False indicates Frontend IO Quota
                   is not configured on the mapped SG in the
                   Symmetrix backend.
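The frontend_io_quota value can be split into its two limits in a script. A sketch (not part of the CLI); the field layout "maxiopersec <n>,maxmbpersec <n>" is taken from EXAMPLE #29.

```shell
# Sample frontend_io_quota value as shown in EXAMPLE #29; a real
# script would extract it from "nas_fs -info" output instead.
quota='maxiopersec 500,maxmbpersec 500'

# Break the comma-separated pairs onto separate lines, then pick the
# number following each keyword.
iops=$(printf '%s\n' "$quota" | tr ',' '\n' | awk '/maxiopersec/ {print $2}')
mbps=$(printf '%s\n' "$quota" | tr ',' '\n' | awk '/maxmbpersec/ {print $2}')
echo "IO/s limit: $iops, MB/s limit: $mbps"
```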
--------------------------------------------------------------------------------
Last Modified: Jan 11, 2013 4:12 pm
nas_fsck
Manages fsck and aclchk utilities on specified file systems.
SYNOPSIS
--------
nas_fsck
-list
| -info {-all|<fs_name>|id=<fs_id>}
| -start {<fs_name>|id=<fs_id>} [-aclchkonly][-monitor][-mover <mover_name>]
[-Force]
DESCRIPTION
-----------
nas_fsck uses the fsck and aclchk utilities to perform a check for
consistency and errors on the specified file system. nas_fsck also
lists and displays the status of the fsck and aclchk utilities. File
systems must be mounted read-write to use these utilities.
Depending on the size of the file system, the fsck utility may use a
significant portion of the system's memory and may affect overall
system performance. Hence, it should not be run on a server under
heavy load, as the server may run out of resources. In most cases,
the user is notified if sufficient memory is not available to run a
file system check. In these cases, do one of the following:
. Start the file system check during off-peak hours.
. Reboot the server and let nas_fsck run on reboot.
. Run nas_fsck on a different server if the file system is
  unmounted.
OPTIONS
-------
-list
Displays a list of all the file systems undergoing fsck or aclchk.
-info {-all|<fs_name>|id=<fs_id>}
Queries the Data Mover and displays information about the status of
the fsck or aclchk utilities for the specified file system.
-start {<fs_name>|id=<fs_id>}
Starts the fsck and the aclchk utilities on the specified file system.
CAUTION
If file system check is started on a mounted file system, the file
system will be unavailable for the duration of the check. NFS
clients will display the message NFS server not responding and
CIFS clients will lose connectivity with the server and will have to
remap shares.
[-aclchkonly]
Initiates the aclchk utility only, which checks and corrects
any errors in the ACL database and removes duplicate ACL
information stored on the specified file system. The aclchkonly
option can only be used on a file system that is not exported.
The default is for both fsck and aclchk.
Note: The NDMP backup process must be stopped on the Data Mover
before using the nas_fsck -aclchkonly command.
[-monitor]
Displays the status of fsck and aclchk until the command
completes.
Note: For a mounted file system, a <movername> is not required
since the fsck and aclchk utilities are run on the Data Mover
where the file system is mounted.
[-Force]
Forces an fsck or aclchk to be run on an enabled file system.
SEE ALSO
--------
Managing Volumes and File Systems for VNX Manually and nas_fs.
EXAMPLE #1
----------
To start file system check on ufs1 and monitor the progress, type:
$ nas_fsck -start ufs1 -monitor
id = 27
name = ufs1
volume = mtv1
fsck_server = server_2
inode_check_percent = 10..20..30..40..60..70..80..100
directory_check_percent = 0..0..100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress..Done
Where:
Value                        Definition
-----                        ----------
id                           Automatically assigned ID of a file system.
name                         Name assigned to the file system.
volume                       Volume on which the file system resides.
fsck_server                  Name of the Data Mover where the utility is
                             being run.
inode_check_percent          Percentage of inodes in the file system
                             checked and fixed.
directory_check_percent      Percentage of directories in the file system
                             checked and fixed.
used_ACL_check_percent       Percentage of used ACLs that have been
                             checked and fixed.
free_ACL_check_status        Status of the ACL check.
cylinder_group_check_status  Status of the cylinder group check.
EXAMPLE #2
----------
To start ACL check on ufs1, type:
$ nas_fsck -start ufs1 -aclchkonly
ACLCHK: in progress for file system ufs1
EXAMPLE #3
----------
To start a file system check on ufs2 using Data Mover server_5,
type:
$ nas_fsck -start ufs2 -mover server_5
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 40
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #4
----------
To list all current file system checks, type:
$ nas_fsck -list
id type state volume name server
23 1 FSCK 134 ufs2 4
27 1 ACLCHK 144 ufs1 1
Where:
Value    Definition
-----    ----------
id       Automatically assigned ID of a file system.
type     Type of file system.
state    Utility being run.
volume   Volume on which the file system resides.
name     Name assigned to the file system.
server   Server on which fsck is being run.
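The tabular -list output lends itself to simple filtering in a script, for example to report which utilities are still running. A sketch (not an eNAS tool); the sample text mirrors the columns shown in EXAMPLE #4.

```shell
# Sample "nas_fsck -list" output (columns as in EXAMPLE #4); a real
# script would capture the command's output here instead.
list_output='id type state volume name server
23 1 FSCK 134 ufs2 4
27 1 ACLCHK 144 ufs1 1'

# Skip the header row; print "name (state)" for each check in progress.
running=$(printf '%s\n' "$list_output" | awk 'NR > 1 {print $5 " (" $3 ")"}')
printf '%s\n' "$running"
```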
EXAMPLE #5
----------
To display information about file system check for ufs2 that is
currently running, type:
$ nas_fsck -info ufs2
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 100
directory_check_percent = 100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress
EXAMPLE #1 provides a description of command outputs.
EXAMPLE #6
----------
To display information about all file system checks that are
currently running, type:
$ nas_fsck -info -all
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 30
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
name = ufs1
id = 27
volume = mtv1
fsck_server = server_2
inode_check_percent = 100
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
EXAMPLE #1 provides a description of command outputs.
------------------------------------------------------------------
Last modified: May 11, 2011 9:30 am.
nas_halt
Performs a controlled halt of all Control Stations and Data Movers in
the VNX.
SYNOPSIS
--------
nas_halt now
DESCRIPTION
-----------
nas_halt performs an orderly shutdown of the Control Stations and
Data Movers in the VNX. nas_halt must be executed from the
/nas/sbin directory.
OPTIONS
-------
now
Performs an immediate halt for the VNX.
SEE ALSO
--------
VNX System Operations and server_cpu.
EXAMPLE #1
----------
To perform an immediate halt of the VNX, type:
# /nas/sbin/nas_halt now
usage: nas_halt now
Perform a controlled halt of the Control Stations and Data Movers
# /nas/sbin/nas_halt now
******************************** WARNING! *******************************
You are about to HALT this system including all of its Control Stations
and Data Movers. DATA will be UNAVAILABLE when the system is halted.
Note that this command does *not* halt the storage array.
ARE YOU SURE YOU WANT TO CONTINUE? [yes or no] : yes
Sending the halt signal to the Master Control Daemon...: Done
May 3 11:12:54 cs100 EMCServer: nas_mcd: Check and halt other CS...: Done
May 3 11:13:26 cs100 JSERVER: *** Java Server is exiting ***
May 3 11:13:31 cs100 ucd-snmp[11218]: Received TERM or STOP signal... shutting
down...
May 3 11:13:31 cs100 snmpd: snmpd shutdown succeeded
May 3 11:13:32 cs100 setup_enclosure: Executing -dhcpd stop option
May 3 11:13:32 cs100 snmptrapd[11179]: Stopping snmptrapd
May 3 11:13:32 cs100 EV_AGENT[13721]: Signal TERM received
May 3 11:13:32 cs100 EV_AGENT[13721]: Agent is going down
May 3 11:13:40 cs100 DHCPDMON: Starting DHCPD on CS 0
May 3 11:13:41 cs100 setup_enclosure: Executing -dhcpd start option
May 3 11:13:41 cs100 dhcpd: Internet Software Consortium DHCP Server V3.0pl1
May 3 11:13:41 cs100 dhcpd: Copyright 1995-2001 Internet Software Consortium.
May 3 11:13:41 cs100 dhcpd: All rights reserved.
May 3 11:13:41 cs100 dhcpd: For info, please visit
http://www.isc.org/products/DHCP
May 3 11:13:41 cs100 dhcpd: Wrote 0 deleted host decls to leases file.
May 3 11:13:41 cs100 dhcpd: Wrote 0 new dynamic host decls to leases file.
May 3 11:13:41 cs100 dhcpd: Wrote 0 leases to leases file.
May 3 11:13:41 cs100 dhcpd: Listening on
LPF/eth2/00:00:f0:9d:04:13/128.221.253.0/24
May 3 11:13:41 cs100 dhcpd: Sending on
LPF/eth2/00:00:f0:9d:04:13/128.221.253.0/24
May 3 11:13:41 cs100 dhcpd: Listening on
LPF/eth0/00:00:f0:9d:01:e5/128.221.252.0/24
May 3 11:13:41 cs100 dhcpd: Sending on
LPF/eth0/00:00:f0:9d:01:e5/128.221.252.0/24
May 3 11:13:41 cs100 dhcpd: Sending on Socket/fallback/fallback-net
May 3 11:13:59 cs100 mcd_helper: : Failed to umount /nas (0)
May 3 11:13:59 cs100 EMCServer: nas_mcd: Failed to gracefully shutdown MCD and
halt servers. Forcing halt and reboot...
May 3 11:13:59 cs100 EMCServer: nas_mcd: Halting all servers...
May 3 11:15:00 cs100 get_datamover_status: Data Mover server_5: COMMAND doesnt
match.
----------------------------------------------------------
Last modified: May 10, 2011 5:25 pm.
nas_inventory
Provides detailed information about hardware components in the
system.
SYNOPSIS
--------
nas_inventory
{
-list [-location]
| {-info <location>|-all}
| -tree
}
DESCRIPTION
-----------
nas_inventory displays detailed information about the hardware
components that are configured on a system.
OPTIONS
-------
-list
Displays a list of all hardware components and their associated name,
type, status, and system ID.
[-location]
Displays the location string for each component in the output.
The location string is a unique identifier for the component.
-info {<location_string>|-all}
Displays a list of all the properties for a component, including the
component name, type, status, variant, associated system, serial
number, part number, and history. Specify the location string
enclosed in double quotes (" ") to display detailed information for
the specific component for which the string is the unique ID.
The -all option lists detailed information for all components in the
system.
-tree
Displays a hierarchical tree of components, including the status of
each component.
EXAMPLE #1
----------
To display a list of components on the system, type:
$ nas_inventory -list
Component                            Type        Status   System ID
Battery A                            Battery     OK       CLARiiON CX4-240 FCNTR083000055
VNX NS40G FCNTR083000055001A         VNX         Warning  VNX NS40G FCNTR083000055001A
CLARiiON CX4-240 FCNTR083000055      CLARiiON    OK       CLARiiON CX4-240 FCNTR083000055
DME 0 Data Mover 2                   Data Mover  OK       VNX NS40G FCNTR083000055001A
DME 0 Data Mover 2 Ethernet Module   Module      OK       VNX NS40G FCNTR083000055001A
DME 0 Data Mover 2 SFP BE0           SFP         OK       VNX NS40G FCNTR083000055001A
DME 0 Data Mover 2 SFP BE1           SFP         OK       VNX NS40G FCNTR083000055001A
DME 0 Data Mover 2 SFP FE0           SFP         OK       VNX NS40G FCNTR083000055001A
Where:
Value      Definition
-----      ----------
Component  Description of the component.
Type       The type of component. Possible types are: battery,
           blower, VNX, Control Station, Data Mover, and disk.
Status     The current status of the component. Status is component
           type specific. There are several possible status values,
           each of which is associated with a particular component
           type.
System ID  The identifier for the VNX or the storage ID of the
           system containing the component.
EXAMPLE #2
----------
To display a list of components and component locations, type:
$ nas_inventory -list -location
Component Type Status System ID
Location
Battery A Battery OK CLARiiON CX4-240 FCNTR083000055
system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055|sps::A
Celerra NS40G FCNTR083000055001A Celerra Warning Celerra NS40G
FCNTR083000055001A system:NS40G:FCNTR083000055001A
CLARiiON CX4-240 FCNTR083000055 CLARiiON OK CLARiiON CX4-240 FCNTR083000055
system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055
DME 0 Data Mover 2 Data Mover OK Celerra NS40G FCNTR083000055001A
system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2
DME 0 Data Mover 2 Ethernet Module Module OK Celerra NS40G FCNTR083000055001A
system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|module:ethernet:
DME 0 Data Mover 2 SFP BE0 SFP OK Celerra NS40G FCNTR083000055001A
system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::BE0
DME 0 Data Mover 2 SFP BE1 SFP OK Celerra NS40G FCNTR083000055001A
system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::BE1
DME 0 Data Mover 2 SFP FE0 SFP OK Celerra NS40G FCNTR083000055001A
system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::FE0
EXAMPLE #3
----------
To list information for a specific component, type:
$ nas_inventory -info "system:NS40G:FCNTR083000055001A|
clariionSystem:CX4-240:FCNTR083000055|iomodule::B0"
Location = system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:
FCNTR083000055|iomodule::B0
Component Name = IO Module B0
Type = IO Module
Status = OK
Variant = 4 PORT FIBRE IO MODULE
Storage System = CLARiiON CX4-240 FCNTR083000055
Serial Number = CF2YW082800426
Part Number = 103-054-100C
History = EMC_PART_NUMBER:103-054-100C
EMC_ARTWORK_REVISION:C01
EMC_ASSEMBLY_REVISION:C03
EMC_SERIAL_NUMBER:CF2YW082800426
VENDER_PART_NUMBER:N/A
VENDER_ARTWORK_NUMBER:N/A
VENDER_ASSEMBLY_NUMBER:N/A
VENDER_SERIAL_NUMBER:N/A
VENDOR_NAME:N/A
LOCATION_OF_MANUFACTURE:N/A
YEAR_OF_MANUFACTURE:N/A
MONTH_OF_MANUFACTURE:N/A
DAY_OF_MONTH_OF_MANUFACTURE:N/A
ASSEMBLY_NAME:4 PORT FIBRE IO MODULE
Note: The location string must be enclosed in double quotes.
Where:
Value           Definition
-----           ----------
Location        The unique identifier of the component and where the
                component is located in the component hierarchy.
Component       The description of the component.
Type            The type of component. Possible types are: battery,
                blower, VNX for file, VNX for block, Control
                Station, Data Mover, and disk.
Status          The current condition of the component. Status is
                component type specific. There are several possible
                status values, each of which is associated with a
                particular component type.
Variant         The specific type of hardware.
Storage System  The model and serial number of the system.
Serial Number   The serial number of the hardware component.
Part Number     The part number of the hardware component.
History         If available, the history information of the
                component. Possible values are: part number, serial
                number, vendor, date of manufacture, and CPU
                information.
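A location string can be split into its hierarchy levels in a script. A sketch (not an eNAS tool); the sample string comes from EXAMPLE #3, and each "|"-separated element is a "type:variant:identifier" triple.

```shell
# Sample location string as printed by "nas_inventory -list -location"
# (EXAMPLE #3); a real script would read it from command output.
location='system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055|iomodule::B0'

# One hierarchy level per line.
levels=$(printf '%s\n' "$location" | tr '|' '\n')

# Split each level into its type/variant/identifier fields.
printf '%s\n' "$levels" | while IFS=: read -r ctype variant ident; do
  echo "type=$ctype variant=${variant:-<none>} id=$ident"
done
```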
EXAMPLE #4
----------
To display components in a tree structure, type:
$ nas_inventory -tree
Component                           Type          Status
Celerra NS40G FCNTR083000055001A    Celerra       Warning
  CLARiiON CX4-240 FCNTR083000055   CLARiiON      OK
    Battery A                       Battery       OK
    IO Module A0                    IO Module     OK
    IO Module A1                    IO Module     OK
    IO Module A2                    IO Module     Empty
    IO Module A3                    IO Module     Empty
    IO Module A4                    IO Module     Empty
    IO Module B0                    IO Module     OK
    IO Module B1                    IO Module     OK
    IO Module B2                    IO Module     Empty
    IO Module B3                    IO Module     Empty
    IO Module B4                    IO Module     Empty
    Power Supply A0                 Power Supply  OK
    Power Supply A1                 Power Supply  OK
    Power Supply B0                 Power Supply  OK
    Power Supply B1                 Power Supply  OK
EXAMPLE #5
----------
To list information for a specific component, type:
$ nas_inventory -info "system:EA-NAS-SN:00019670026100013|enclosure:SYMM:Eng 3
Dir A|mover:EA-NAS-SN:3|iomodule::3"
Location = system:EA-NAS-SN:00019670026100013|enclosure:SYMM:Eng 3 Dir A|
mover:EA-NAS-SN:3|iomodule::3
Component Name = SYMM Eng 3 Dir A Data Mover 3 IO Module 3
Type = IO Module
Status = OK
Variant = 4 PORT CU GIGE
History = FIRMWARE_VERSION:3.28
ASSEMBLY_NAME:4 PORT CU GIGE
Note: The location string must be enclosed in double quotes.
Where:
Value           Definition
-----           ----------
Location        The unique identifier of the component and where the
                component is located in the component hierarchy.
Component       The description of the component.
Type            The type of component. Possible types are: battery,
                blower, VNX for file, VNX for block, Control
                Station, Data Mover, and disk.
Status          The current condition of the component. Status is
                component type specific. There are several possible
                status values, each of which is associated with a
                particular component type.
Variant         The specific type of hardware.
Storage System  The model and serial number of the system.
Serial Number   The serial number of the hardware component.
Part Number     The part number of the hardware component.
History         If available, the history information of the
                component. Possible values are: part number, serial
                number, vendor, date of manufacture, firmware
                version, and CPU information. FIRMWARE_VERSION
                displays the firmware version of the iomodule
                component.
--------------------------------------------------------------
Last modified: May 11, 2011 10:00 am.
nas_license
Enables software packages.
SYNOPSIS
--------
nas_license
-list
| -create <package_name>[=<key_code>]
| -delete <package_name>
| -init
DESCRIPTION
-----------
nas_license enables software packages that are available for use with
the system. The <key_code> is supplied by EMC.
All entries are case-sensitive.
OPTIONS
-------
No arguments
Displays a usage message containing all available and valid software
packages that can be installed.
-list
Displays the site_key as a string and any software packages for which
a license has been installed. The site_key is a permanent license and
cannot be deleted.
Note: Licenses installed on the Control Station are read by the system.
The site_key is a unique identifier which gets generated the first time
nas_license is run. The site_key is also used to decode the key_code
supplied by EMC personnel for special packages.
-create <package_name>[=<key_code>]
Installs the license for the indicated <package_names>. Valid
<package_names> are:
site_key
nfs
cifs
snapsure
advancedmanager
replicator
filelevelretention
Note: These packages do not require a key_code, as they can be enabled from
the GUI. Special packages are supplied along with the required key_code by
the EMC Customer Service Representative.
-delete <package_name>
Deletes the license for the specified <package_name>.
-init
Initializes the database and re-creates the license file by using the
site_key that is already installed. The license file is located at
/nas/site as nas_license. It contains license keys in an encrypted
format. The -init option should be run only if the license file
containing all the license information has been lost and the following
error message is received:
license table is not initialized
Once the license file has been re-created, the rest of the entries, if
present, should be re-added by using the -create option.
EXAMPLE #1
----------
To install a license for the snapsure software package, type:
$ nas_license -create snapsure
done
EXAMPLE #2
----------
To display all software packages with currently installed licenses,
type:
$ nas_license -list
key                status   value
site_key           online   42 de 6f d1
advancedmanager    online
nfs                online
cifs               online
snapsure           online
replicator         online
filelevelretention online
EXAMPLE #3
----------
To delete a license for a specified software package, type:
$ nas_license -delete snapsure
done
EXAMPLE #4
----------
To initialize the database and re-create the license file, type:
$ nas_license -init
done
------------------------------------------------------------
Last modified: Jan 15, 2013 4:25 pm
nas_logviewer
Displays the content of nas_eventlog generated log files.
SYNOPSIS
--------
nas_logviewer <file_name>
[-f][-v|-t]
DESCRIPTION
-----------
nas_logviewer displays the event log and other logs created by
nas_eventlog. The log files may be system generated, or created by
the user. Information in the log file is read from oldest to newest.
OPTIONS
-------
No arguments
Displays the contents of the specified logfile.
-f
Monitors the growth of the log by entering an endless loop,
pausing and reading the log as it is generated. To exit, press
Ctrl-C.
[-v|-t]
Displays the log files in verbose or terse format.
SEE ALSO
--------
Configuring Events and Notifications on VNX for File and server_log.
EXAMPLE #1
----------
To view the contents of the sys_log file, type:
$ nas_logviewer /nas/log/sys_log|more
May 12 18:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 18:02:59 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 18:03:00 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 18:03:12 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 19:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 19:02:50 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 19:02:51 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 19:03:02 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 20:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 20:02:58 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 20:02:59 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 20:03:10 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 21:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 21:02:51 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 21:02:52 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 21:03:03 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
Note: This is a partial listing due to the length of the outputs.
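Individual fields can be pulled out of a sys_log line in a script. A sketch (not an eNAS tool); the colon-delimited layout timestamp:component:facility:severity:baseid:... is taken from the output above. Because the HH:MM:SS timestamp itself contains two colons, the component is the fourth ":"-separated field.

```shell
# Sample sys_log line (format as in EXAMPLE #1); a real script would
# read lines from nas_logviewer output instead.
line='May 12 18:03:00 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error detected'

# Fields 1-3 are consumed by the colons inside "18:03:00".
component=$(printf '%s\n' "$line" | cut -d: -f4)
facility=$(printf '%s\n' "$line" | cut -d: -f5)
severity=$(printf '%s\n' "$line" | cut -d: -f6)
baseid=$(printf '%s\n' "$line" | cut -d: -f7)
echo "$severity event $baseid from $component/$facility"
```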
EXAMPLE #2
----------
To display the contents of the log files in terse format, type:
$ nas_logviewer -t /nas/log/sys_log
May 12 18:01:57 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 18:02:59 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 18:03:00 2007:83223969994:NAS database error detected
May 12 18:03:12 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 19:01:52 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 19:02:50 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 19:02:51 2007:83223969994:NAS database error detected
May 12 19:03:02 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 20:01:57 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 20:02:58 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 20:02:59 2007:83223969994:NAS database error detected
May 12 20:03:10 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 21:01:52 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 21:02:51 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
EXAMPLE #3
----------
To display the contents of the log files in verbose format, type:
$ nas_logviewer -v /nas/log/sys_log|more
logged time = May 12 18:01:57 2007
creation time = May 12 18:01:57 2007
slot id =
id = 96108871980
severity = INFO
component = CS_PLATFORM
facility = NASDB
baseid = 300
type = EVENT
brief description = nasdb_backup: NAS_DB checkpoint in progress
full description = The Celerra configuration database is being checkpointed.
recommended action = No action required.
logged time = May 12 18:02:59 2007
creation time = May 12 18:02:59 2007
slot id =
id = 96108871985
severity = INFO
component = CS_PLATFORM
facility = NASDB
baseid = 305
type = EVENT
brief description = nasdb_backup: NAS_DB Checkpoint done
full description = The NAS DB backup has completed a checkpoint of the current
NAS database in preparation for performing a backup of NAS system data.
recommended action = No action required.
EXAMPLE #4
----------
To monitor the growth of the current log, type:
$ nas_logviewer -f /nas/log/sys_log|more
May 12 18:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 18:02:59 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 18:03:00 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 18:03:12 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 19:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 19:02:50 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 19:02:51 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 19:03:02 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 20:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 20:02:58 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 20:02:59 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 20:03:10 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
May 12 21:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB
checkpoint in progress
May 12 21:02:51 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB
Checkpoint done
May 12 21:02:52 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error
detected
May 12 21:03:03 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB
Backup done
-------------------------------------------------------------------
Last modified: May 10, 2011 1:00 pm.
143
nas_message
Displays message description.
SYNOPSIS
--------
nas_message
-info <MessageId>
DESCRIPTION
-----------
nas_message provides a detailed description of a specific message.
The brief description, full description, and recommended user action
for the message are displayed.
OPTIONS
-------
-info <MessageId>
Displays detailed descriptions of the error message, including
severity, component, facility, BaseID, and recommended user action.
The message parameters are displayed in the form ${stateDesc,8,%s}
and not as parameter values. The <MessageId> must be a positive
integer.
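A script can validate a <MessageId> argument before passing it to nas_message -info. A sketch only; the command itself is not invoked here.

```shell
# Sample MessageId (from EXAMPLE #1 below); must be a positive integer.
msgid='13421838337'

case "$msgid" in
  ''|*[!0-9]*) valid=no ;;  # empty, or contains a non-digit
  0)           valid=no ;;  # zero is not a positive integer
  *)           valid=yes ;;
esac
echo "MessageId $msgid valid: $valid"
```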
SEE ALSO
--------
Celerra Network Server Error Messages Guide.
EXAMPLE #1
----------
To display detailed descriptions for error message 13421838337, type:
$ nas_message -info 13421838337
MessageID = 13421838337
BaseID = 1
Severity = ERROR
Component = CS_CORE
Facility = default
Type = STATUS
Brief_Description = Operation not permitted${arg0,8,%s}
Full_Description = The operation is not permitted due to an ACL or ownership
issue on the specified object.
Recommended_Action = Check ownership or ACL of the object in question. If
appropriate, change the setting to resolve the conflict. Refer to the
nas_acl and chmod man pages.
--------------------------------------------------------------------
Last modified: May 10, 2011 5:25 pm.
nas_migrate
Plans migrations for Virtual Data Mover (VDM) level, and
manages migrations for both VDM and File system (FS) level.
SYNOPSIS
--------
nas_migrate
-list [{-all|-mover <movername>}][-id]
| -info [{-all|-mover <movername>|id=<migId>|<migName>}]
| -plan {
-list [{-all|-mover <movername>}][-id]
| -info [{-all|-mover <movername>|id=<planId>|<planName>}]
| -create <planName>
-source <vdmName>
-destination
{<existing_dstVdmName>
|-pool{id=<dstStoragePoolId>|<dstStoragePoolName>}
}
-interconnect {<interConnectName>|id=<interConnectId>}
[-storage_pools
{<<srcStoragePoolName>:<dstStoragePoolName>[,...]>
|-id<<srcStoragePoolId>:<dstStoragePoolId>[,...]>
}]
[-take_over_ips
[-network_devices <<srcDeviceName>:<dstDeviceName>[,...]>]]
[-checkpoint_excluded]
[-background]
| -modify {<planName>|id=<planId>}
[-name <newPlanName>]
[-filesystems
{<srcFs=<name>[:dstFs=<id>][:dstPool=<name>]
[:srcSavPool=<name>][:dstSavPool=<name>][,...]>
|-id <srcFs=<id>[:dstFs=<id>][:dstPool=<id>]
[:srcSavPool=<id>][:dstSavPool=<id>][,...]>
}]
[-interfaces <name=<infName>:dstDevice=<devName>[,...]>]
[-background]
| -delete {<planName>|id=<planId>} [-background]
}
| -create <migName> -vdm
-plan {<planName>|id=<planId>} [-background]
| -create <migName> -fs
-source {id=<fsId>|<fsName>}
[-sav {id=<srcSavVolStoragePoolId>
|<srcSavVolStoragePoolName>}]
-destination
{{<existing_dstFsName>|id=<existing_dstFsId>}
|-pool {id=<dstStoragePoolId>|<dstStoragePoolName>}
}
[-sav {id=<dstSavVolStoragePoolId>
|<dstSavVolStoragePoolName>}]
[-vdm <dstMountVdmName>]
-interconnect {<interConnectName>|id=<interConnectId>}
[-checkpoint_excluded]
[-background]
| -complete {<migName>|id=<migId>}
[-checkpoint_mismatch_ignored] [-background]
| -delete {<migName>|id=<migId>} [-background]
| -stop {<migName>|id=<migId>} [-background]
| -start {<migName>|id=<migId>} [-background]
DESCRIPTION
-----------
The nas_migrate command manages the migration of VDMs, together with
the file systems and checkpoints mounted to them. It also manages the
migration of an individual file system and its mounted checkpoints.
OPTIONS
-------
-list [{-all|-mover <movername>}][-id]
Lists a summary of all migrations that have this cabinet as the
destination cabinet, or the specified Data Mover as the destination
mover. The -id option shows the migration ID in the summary; by
default, the system-generated migration ID is not shown.
-info [{-all|-mover <movername>|id=<migId>|<migName>}]
Displays detailed information for all migrations that have this
cabinet as the destination cabinet, for the specified Data Mover as
the destination mover, or for one migration with a specified ID or
name.
-plan
Generates a migration plan for VDM migration.
-list [{-all|-mover <movername>}][-id]
Lists summary information for all VDM migration plans that have
this cabinet as the destination cabinet (the default, or when the
-all option is specified), or a specific Data Mover as the
destination mover. The -id option shows the migration plan ID in
the summary; by default, the system-generated migration plan ID is
not shown.
-info [{-all|-mover <movername>|id=<planId>|<planName>}]
Displays detailed information for all VDM migration plans with this
cabinet as the destination cabinet (the default, and when -all is
specified), for a specific Data Mover as the destination mover,
or for one migration plan with a specific ID or name.
-create <planName>
Creates a VDM migration plan.
-source <vdmName>
Specifies the name of the source VDM to migrate. The source
system information is implied in the interconnect option of
VNX for File Operating Environment. See the usage of
-interconnect option.
-destination {<existing_dstVdmName>
|-pool {id=<dstStoragePoolId>|<dstStoragePoolName>}}
Specifies the name of an existing destination VDM, or the pool
name or ID to create a new destination VDM.
-interconnect {<interConnectName>|id=<interConnectId>}
Specifies the name or ID of the local VNX for File Operating
Environment interconnect configured on the destination.
The mutual VNX for File Operating Environment interconnects
are supposed to be configured between source and destination.
The source system information is implied in the interconnect
option of VNX for File Operating Environment.
[-storage_pools {<<srcStoragePoolName>:<dstStoragePoolName>[,...]>
|-id <<srcStoragePoolId>:<dstStoragePoolId>[,...]>}]
Indicates the mapping of source and destination storage pools,
including SavVol pools. These pools must exist on the source or the
destination. The mapping guides the migration to create file systems
in the destination storage pools that correspond to the source
storage pools where the source file systems reside. Either pool
names or pool IDs can be specified, with commas separating each
mapping. If no mappings are specified, destination storage pools are
selected automatically for each file system.
[-take_over_ips][-network_devices <<srcDeviceName>:<dstDeviceName>[,...]>]
Takes over the source network interfaces when -take_over_ips is
specified. The -network_devices option indicates the mapping of
source and destination network devices; this guides the migration in
choosing the network devices on which to create the destination
network interfaces when -take_over_ips is specified. Without
-take_over_ips, the destination interfaces must be created manually
on the destination mover by the user, with names identical to the
source interfaces. Whether or not the source network interfaces are
taken over, the interfaces attached to the source VDM are brought
down after the migration completes.
Note: To take over IPs, the interfaces must be in the same subnet and
have the same VLAN settings at the source and the destination. Also,
the interfaces must be IPv4; IPv6 interfaces cannot be taken over.
Note: To exclude a file system, it must be unexported or unshared,
and unmounted, before creating a VDM level migration plan.
[-checkpoint_excluded]
Excludes all the existing read-only user checkpoints from the
migration.
[-background]
Runs the task in the background.
Note: When -background is specified, a task ID is returned, and the
user can run nas_task -i <taskId> to see the result of the task:
succeeded, failed, or running. If -background is not specified, the
command does not return until the task finishes, printing "OK" on
success or an error message on failure.
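The background workflow in the note above can be sketched as follows. This is a hypothetical session: the plan, pool, and interconnect names are reused from the examples later on this page, and the task ID shown is illustrative.

```shell
# Start a plan creation in the background; the command returns a task ID.
$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3 \
  -interconnect winter_spring -take_over_ips -background
# Check the result of that task (succeeded, failed, or running).
$ nas_task -i 24416
```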
-modify {<planName>|id=<planId>}
Modifies a VDM migration plan with a specific name or ID.
[-name <newPlanName>]
Renames the migration plan.
[-filesystems {<srcFs=<name>[:dstFs=<id>][:dstPool=<name>]
[:srcSavPool=<name>][:dstSavPool=<name>][,...]>
|-id <srcFs=<id>[:dstFs=<id>][:dstPool=<id>][:srcSavPool=<id>]
[:dstSavPool=<id>][,...]>}]
Updates the source SavVol, recommended destination FSID,
destination pool, or destination SavVol for a specified source file
system. Only the file systems to be reconfigured need to be
specified, keyed by srcFs=<name> or srcFs=<id>. A source file
system that is not in the current migration plan cannot be specified.
[-interfaces <name=<infName>:dstDevice=<devName>[,...]>]
Updates the network devices on which to create network interfaces.
Both the device name <devName> and the interface name <infName>
must be specified; entries are keyed by interface name. Interface
names that are not in the current migration plan cannot be specified.
[-background]
Runs the task in the background.
Note: When -background is specified, a task ID is returned, and the
user can run nas_task -i <taskId> to see the result of the task:
succeeded, failed, or running. If -background is not specified, the
command does not return until the task finishes, printing "OK" on
success or an error message on failure.
-delete {<planName>|id=<planId>}
Deletes a VDM migration plan with a specific name or ID. The
migration plan cannot be deleted while a migration exists that
references it.
[-background]
Runs the task in the background.
Note: When -background is specified, a task ID is returned, and the
user can run nas_task -i <taskId> to see the result of the task:
succeeded, failed, or running. If -background is not specified, the
command does not return until the task finishes, printing "OK" on
success or an error message on failure.
-create <migName> -vdm
Creates a VDM level migration after creating a migration plan. The <migName>
specifies the name of the migration session, which is unique per destination
cabinet.
-plan {<planName>|id=<planId>}
Specifies the name or ID of the VDM migration plan. The <planName> is
created by the user and the <planId> is generated by the system; both
are unique per destination cabinet.
[-background]
Runs the task in the background.
Note: When -background is specified, a task ID is returned, and the user
can run nas_task -i <taskId> to see the result of the task: succeeded,
failed, or running. If -background is not specified, the command does not
return until the task finishes, printing "OK" on success or an error
message on failure.
-create <migName> -fs
Creates a file system level migration after creating a migration plan. The
<migName> specifies the name of the migration session, which is unique per
destination cabinet and cannot be changed.
-source {id=<fsId>|<fsName>}
Specifies the source file system name or ID.
[-sav {id=<srcSavVolStoragePoolId>|<srcSavVolStoragePoolName>}]
Specifies the SavVol pool used by all subsequent checkpoints of the
source file system. If it is not specified, VNX File Migration uses the
storage pool of the source file system as the SavVol pool.
Note: This option is only valid when the source file system has
no checkpoints before the migration.
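For instance, assuming a source file system fs4 whose future checkpoints should live in a dedicated SavVol pool (all names here are hypothetical), the option could be combined with a file system migration like this:

```shell
# Create an FS level migration, directing subsequent source checkpoints
# to the SavVol pool srcSavPool1 (only valid if fs4 has no checkpoints yet).
$ nas_migrate -create fsMigEx2 -fs -source fs4 -sav srcSavPool1 \
  -destination -pool dstpool3 -interconnect summer_spring
```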
-destination {<existing_dstFsName>|id=<existing_dstFsId>}
Specifies the destination file system by the name or ID of an existing
file system. An existing destination file system must be mounted
read-only and have the same size and configuration as the source.
-pool {id=<dstStoragePoolId>|<dstStoragePoolName>}
Specifies a storage pool in which to create the destination file system
automatically, using the same size as the source file system.
[-sav {id=<dstSavVolStoragePoolId>|<dstSavVolStoragePoolName>}]
Specifies the SavVol pool used by all subsequent checkpoints of the
destination file system. If it is not specified, VNX File Migration uses
the storage pool of the destination file system as the SavVol pool.
Note: All the checkpoints for a file system share the same SavVol.
[-vdm <dstMountVdmName>]
Specifies a VDM on which to mount the newly created destination file
system. If the destination file system is mounted to a VDM, this option
is mandatory. By default, without this option, the newly created
destination file system is mounted to the destination Data Mover
specified in the VNX for File Operating Environment interconnect.
-interconnect {<interConnectName>|id=<interConnectId>}
Specifies the name or ID of the local VNX for File Operating Environment
interconnect configured on the destination. Mutual VNX for File
Operating Environment interconnects are expected to be configured
between the source and destination Data Movers.
[-checkpoint_excluded]
Excludes all the existing read-only user checkpoints from the migration.
[-background]
Runs the task in the background.
Note: When -background is specified, a task ID is returned, and the user
can run nas_task -i <taskId> to see the result of the task: succeeded,
failed, or running. If -background is not specified, the command does
not return until the task finishes, printing "OK" on success or an error
message on failure.
-complete {<migName>|id=<migId>} [-checkpoint_mismatch_ignored] [-background]
Completes a migration with a specified name or ID. A migration can be
completed when (1) the migration state is READY_TO_COMPLETE or
COMPLETE_FAILED, (2) for VDM level migration, the destination network
interface names are configured the same as the source, and (3) the global
system/mover configuration has been migrated, either with the system
configuration migration script or manually by an administrator. The
-checkpoint_mismatch_ignored option forcefully completes the migration,
ignoring any mismatched checkpoints. The -background option means the task
can be run in the background.
-delete {<migName>|id=<migId>} [-background]
Deletes an existing migration with a specified name or ID, when no migration
commands are running and the migration state is not INITIAL_COPYING,
STARTING, STOPPING, or DELETING. The source and destination VDM, file
systems, checkpoints, and interfaces are not deleted. The -background option
means the task can be run in the background.
-stop {<migName>|id=<migId>} [-background]
Stops a migration with a specified name or ID, when the migration state is
READY_TO_COMPLETE, STOP_FAILED or START_FAILED. The -background option means
the task can be run in the background.
-start {<migName>|id=<migId>} [-background]
Starts a migration with a specified name or ID, when the migration state is
STOPPED, STOP_FAILED, START_FAILED, or INITIAL_COPY_FAILED. The -background
option means the task can be run in the background.
SEE ALSO
--------
migrate_system_conf, nas_replicate, nas_fs, and fs_ckpt.
SYSTEM OUTPUT
-------------
The migration states that can appear in the output include CREATING,
INITIAL_COPYING, INITIAL_COPY_FAILED, READY_TO_COMPLETE, COMPLETING,
COMPLETE_FAILED, COMPLETED, STOPPING, STOP_FAILED, STARTING, STOPPED,
START_FAILED, DELETING, and DELETE_FAILED.
EXAMPLE #1
----------
To list summary information of all the migrations, type:
$ nas_migrate -list -all -id
ID          Name      Type       State             Source Celerra/VNX Source VDM/FS Dest VDM/FS Network Status
20000010804 vdmMigEx1 VDM        READY_TO_COMPLETE spring             vdmEx1        vdmEx1      OK
20000000877 fsMigEx1  FILESYSTEM READY_TO_COMPLETE spring             fs3           fs3         OK
EXAMPLE #2
----------
To view detailed information of all the migrations whose destination is the
current cabinet, type:
$ nas_migrate -info -all
ID                                  = 20000010804
Name                                = vdmMigEx1
Type                                = VDM
State                               = READY_TO_COMPLETE
Network Status                      = OK
Source Celerra/VNX Network Server   = spring
Peer Dart Interconnect              = spring_winter
Dart Interconnect                   = winter_spring
Source VDM                          = vdmEx1
Destination VDM                     = vdmEx1
Vdm Migration Plan                  = planEx1
File Systems                        = fs1->fs1;Checkpoints:fs1_ckpt1->fs1_ckpt1(*Mismatched),fs1_ckpt2->fs1_ckpt2
                                      fs2->fs2;Checkpoints:fs2_ckpt1->fs2_ckpt1
Source Mover                        = server_2
Destination Mover                   = server_3
Read-Only User Checkpoints Excluded = No
Takeover IP Addresses               = Yes
Interfaces to Takeover              = eth1, eth2
Replications                        = 3093_BB005056903C71_0000_2926_BB0050569059F6_0000:VDM
                                      3095_BB005056903C71_0000_2931_BB0050569059F6_0000:Filesystem
                                      3097_BB005056903C71_0000_2933_BB0050569059F6_0000:Filesystem

ID                                  = 20000000877
Name                                = fsMigEx1
Type                                = FILESYSTEM
State                               = READY_TO_COMPLETE
Network Status                      = OK
Source Celerra/VNX Network Server   = spring
Peer Dart Interconnect              = spring_summer
Dart Interconnect                   = summer_spring
File Systems                        = fs3->fs3
Source Mover                        = server_2
Destination Mover                   = server_4
Read-Only User Checkpoints Excluded = Yes
Replications                        = 337_BB005056903C71_0000_2951_BB0050569059F6_0000:Filesystem
EXAMPLE #3
----------
To list summary information of all VDM migration plans, type:
$ nas_migrate -plan -list -id
ID          Name    Source Celerra/VNX Source VDM Destination VDM Destination Pool
20000034500 PlanEx1 spring             vdmEx1     N/A             dstpool3
20000035780 PlanEx2 spring             vdmEx2     vdmEx2          N/A
Note: Either Destination VDM or Destination Pool is N/A because the user can
specify either a pool in which to create the destination VDM root file system,
or an existing destination VDM.
EXAMPLE #4
----------
To display detailed information about migration plan PlanEx1, type:
$ nas_migrate -plan -info PlanEx1
ID                                  = 20000034500
Name                                = planEx1
Source Celerra/VNX Network Server   = spring
Peer Dart Interconnect              = spring_winter
Dart Interconnect                   = winter_spring
Source VDM                          = vdmEx1
Destination VDM                     = N/A
Destination Pool (for VDM)          = dstpool3
File Systems:
  srcFs                             = fs1
  |-- dstFs(Recommended ID)         = 1001,NOT PRESERVED
  |-- dstPool                       = dstpool1
  |-- srcSavPool                    = srcpool1
  `-- dstSavPool                    = dstpool1
  srcFs                             = fs2
  |-- dstFs(Recommended ID)         = 1002, PRESERVED
  |-- dstPool                       = dstpool2
  |-- srcSavPool                    = srcpool2
  `-- dstSavPool                    = dstpool2
Read-Only User Checkpoints Excluded = No
Takeover IP Addresses               = Yes
Interfaces - Devices                = name=eth1:dstDevice=cge1
                                      name=eth2:dstDevice=cge20
Where:
Value          Definition
-----          ----------
NOT PRESERVED  The source file system ID cannot be preserved, so the NFS
               clients have to remount this file system after the VDM
               migration completes.
EXAMPLE #5
----------
To create a VDM migration plan with the default settings when IP-takeover
applies, type:
$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3
-interconnect winter_spring -take_over_ips
Info 26843676673: In Progress: Operation is still running. Check task id 24416
on the Background Tasks screen for results.
Validate plan name ... succeeded
Create plan ...
Validate destination system licenses ... succeeded
Validate interconnect ... succeeded
Validate source system licenses ... succeeded
Validate system versions ... succeeded
Validate I18N and CIFS service ... succeeded
Validate source VDM ... succeeded
Make migration plan for VDM ... succeeded
Validate source file system(s) ... succeeded
Make migration plan for file system(s) ... succeeded
Make migration plan for interface(s) ... succeeded
Create plan ... succeeded
Save plan ... succeeded
OK
EXAMPLE #6
----------
To create a VDM migration plan with storage pool mapping and IP-takeover, type:
$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3
-interconnect winter_spring -storage_pools srcpool1:dstpool1,srcpool2:dstpool2
-take_over_ips
Output omitted for brevity.
Where:
Value          Definition
-----          ----------
storage_pools  Specifies the storage pool mapping. When not specified, the
               default matching rules auto-select a storage pool on the
               destination for each file system by (in priority order)
               storage pool profile, disk type, then size.
EXAMPLE #7
----------
To create a VDM migration plan with network device mapping, type:
$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3
-interconnect winter_spring -take_over_ips -network_devices cge_src1:cge_dst1,
cge_src2:cge_dst2
Output omitted for brevity.
Where:
Value            Definition
-----            ----------
network_devices  Specifies the network device mapping to create destination
                 interfaces with the exact same IP addresses as the source
                 interfaces. When not specified, the default matching rule is
                 to use network devices with names identical to those of the
                 source network devices. This option is a sub-option for
                 "-take_over_ips."
EXAMPLE #8
----------
To create a VDM migration plan with storage pool mapping, IP-takeover,
network device mapping, and file systems excluded, type:
$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3
-interconnect winter_spring -storage_pools srcpool1:dstpool1,srcpool2:dstpool2
-take_over_ips -network_devices cge1:cge1,cge2:cge20
Output omitted for brevity.
EXAMPLE #6 and EXAMPLE #7 provide descriptions of storage pool and network
device mapping.
EXAMPLE #9
----------
To modify a VDM migration plan, type:
$ nas_migrate -plan -modify plan001 -name plan001_New -filesystems -id
srcFs=100:dstFs=100,srcFs=300:dstPool=3 -interfaces name=eth10:dstDevice=cge10
Output omitted for brevity.
EXAMPLE #10
-----------
To delete a VDM migration plan, type:
$ nas_migrate -plan -delete plan001
Output omitted for brevity.
EXAMPLE #11
-----------
To create a VDM level migration, type:
$ nas_migrate -create vdmMigEx1 -vdm -plan planEx1
Output omitted for brevity.
EXAMPLE #12
-----------
To create a file system level migration, type:
$ nas_migrate -create fsMigEx1 -fs -source fs3 -destination -pool dstpool3
-interconnect summer_spring
Info 26843676673: In Progress: Operation is still running. Check task id 63654
on the Background Tasks screen for results.
Validate migration name <fsmigEx1> ... succeeded
Query migration plan ... succeeded
Validate migration ... succeeded
Create migration session ... succeeded
Create FS [<fs name>] ... succeeded
Create interfaces ... succeeded
Create FS replication [<fs name>] ... succeeded
Initial Copy FS [<fs name>] ... succeeded.
Create destination file systems ...
Create destination file systems: <#created>/<#total>(updated per 2 minutes)
Create destination file systems... succeeded
Create checkpoints ...
Create checkpoints: <#created>/<#total>(updated per 2 minutes)
Create checkpoints ... succeeded
Create replications ...
Create replications: <#created>/<#total>(updated per 2 minutes)
Create replications ... succeeded
Update Migration State [INITIAL_COPYING] ... succeeded
Initial Copy ...
Initial Copy: Total=50000(M): Copied=10000(M): Transfer Rate=2000(KB/s)(updated per 10 minutes)
Initial Copy: Total=50000(M): Copied=20000(M): Transfer Rate=3000(KB/s)(updated per 10 minutes)
Initial Copy ... succeeded
Modify RPO of replications ... succeeded
Update migration state to [READY_TO_COMPLETE] ... succeeded
OK
EXAMPLE #13
-----------
To complete a migration with the background flag, type:
$ nas_migrate -complete fsMigEx1 -checkpoint_mismatch_ignored -background
Info 26843676432: In Progress: Operation is still running. Check task id 134227
on the Background Tasks screen for results.
EXAMPLE #14
-----------
To delete a migration with the background flag, type:
$ nas_migrate -delete fsMigEx1 -background
Info 26843676556: In Progress: Operation is still running. Check task id 142811
on the Background Tasks screen for results.
EXAMPLE #15
-----------
To stop a migration with the background flag, type:
$ nas_migrate -stop fsMigEx1 -background
Info 26843676556: In Progress: Operation is still running. Check task id 144511
on the Background Tasks screen for results.
EXAMPLE #16
-----------
To stop a migration, type:
$ nas_migrate -stop id=20002224601
Info 26843676673: In Progress: Operation is still running. Check task id 17919
on the Background Tasks screen for results.
Check migration state ... succeeded
Change migration state to STOPPING ... succeeded
Check local replication state ... succeeded
Check remote replication state ... succeeded
Stop replication in parallel ...
Stop replication task state: Total=10 Succeeded=0 Failed=0
Stop replication task state: Total=10 Succeeded=5 Failed=0
Stop replication task state: Total=10 Succeeded=6 Failed=0
Stop replication task state: Total=10 Succeeded=10 Failed=0
Stop replication in parallel succeeded
Change migration state to STOPPED ... succeeded
EXAMPLE #17
-----------
To start a migration, type:
$ nas_migrate -start id=20002224601 -background
Info 26843676673: In Progress: Operation is still running. Check task id 144527
on the Background Tasks screen for results.
--------------------------------------------------------------------
Last modified: Feb 22 2013, 4:34 pm
nas_mview
Performs MirrorView/Synchronous (MirrorView/S) operations on a
system attached to an older version of VNX for block.
SYNOPSIS
--------
nas_mview
-info
| -init <cel_name>
| -activate
| -restore
DESCRIPTION
-----------
nas_mview retrieves MirrorView/S cabinet-level information, initializes
the source and destination systems for MirrorView/S, activates a failover
to a destination VNX for file, or restores the source site after a failover.
MirrorView/S is supported on a system attached to an older version
of VNX for block array serving as the boot storage, not the secondary
storage. nas_mview must be run from a Control Station in slot 0; it
will report an error if run from a Control Station in slot 1.
nas_mview must be issued as root from the /nas/sbin directory. For
the -init and -info options, log in with your administrative username
and use the su root command to log in as root. For the -activate and
-restore options, you must log in to the destination system using the
remote administration account (for example, dradmin) and log in as
root.
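The login requirements above can be sketched as two hypothetical sessions (dradmin is the example remote administration account used elsewhere on this page):

```shell
# For -info and -init: log in with an administrative account, then become root.
$ su root
# /nas/sbin/nas_mview -info

# For -activate and -restore: log in to the destination system as the remote
# administration account (for example, dradmin), then become root.
$ su root
# /nas/sbin/nas_mview -activate
```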
OPTIONS
-------
-info
Displays disaster recovery information, such as the eligible
MirrorView/S device group, and the MirrorView/S Data Mover
configuration for the current system.
-init <cel_name>
Initializes the MirrorView/S relationship between the source and
destination systems, based on whether the configuration is
active/passive (unidirectional) or active/active’ (bidirectional).
Note: The apostrophe in active/active’ indicates that both sites have
source LUNs mirrored at the other site.
The passphrase-protected relationship between the source and
destination systems in the MirrorView/S configuration must be built
prior to initialization using the nas_cel -create command:
. On the destination Control Station in a MirrorView/S
active/passive configuration, use the -init option to specify the
name of the source system.
. On the Control Station of each system in a MirrorView/S
active/active’ configuration, use the -init option to specify the
name of the remote system. The active/active’ configuration is a
bidirectional configuration in which a VNX for file can serve both
as source and destination for another system.
-activate
Executed from the destination system using the remote
administration account, initiates a failover from the source to the
destination system. The activation works as follows:
. If the source is available, the -activate option swaps the
primary-secondary role for all mirrors in the MirrorView/S
device group and makes the destination LUNs read/write. The
standby Data Movers acquire the IP and MAC addresses, file
systems, and export tables of their source counterparts.
. If the original source site is unavailable, the destination LUNs are
promoted to the primary role, making them visible to the
destination VNX for file. The original source LUNs cannot be
converted to backup images; they stay visible to the source VNX
for file, and the original destination site is activated with new
source (primary) LUNs only. If the source cannot be shut down in
a disaster scenario, any writes occurring after the forced
activation will be lost during a restore.
-restore
Issued from the destination system using the remote administration
account, restores a source system after a MirrorView/S failover, and
fails back the device group to the source system.
The restore process begins by checking the state of the device group.
If the device group state is Local Only (where each mirror has only
the source LUN), the device group will be fully synchronized and
rebuilt before the failback can occur. If the device group condition is
fractured, an incremental synchronization is performed before the
failback occurs. Source devices are then synchronized with the data
on the original destination devices, I/O access is shut down, the
original destination Data Movers are rebooted as remote standbys,
and the mirrored devices are failed back. When the source side is
restored, the source Data Movers and their services are restarted.
If the restore fails because the source Control Station is not reachable
on the data network, complete the restore by accessing the source,
logging in as root, and typing /nasmcd/sbin/nas_mview -restore.
SEE ALSO
--------
Using MirrorView/Synchronous with VNX for Disaster Recovery, nas_cel,
and nas_checkup.
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device reflects the attached
storage system; for MirrorView/S, VNX for block displays a prefix of
APM before a set of integers, for example, APM00033900124-0019.
The VNX for block supports the following system-defined AVM
storage pools for MirrorView/S only: cm_r1, cm_r5_performance,
cm_r5_economy, cmata_archive, cmata_r3, cm_r6, and cmata_r6.
EXAMPLE #1
----------
To initialize a destination VNX for file in an active/passive
configuration to communicate with source site source_cs, from the
destination Control Station, type:
# /nas/sbin/nas_mview -init source_cs
Celerra with MirrorView/Synchronous Disaster Recovery
Initializing source_cs --> target_cs
Contacting source_cs for remote storage info
Local storage system: APM00053001549
Remote storage system: APM00053001552
Enter the Global CLARiiON account information
Username: emc
Password: *** Retype your response to validate
Password: ***
Discovering storage on source_cs (may take several minutes)
Setting security information for APM00053001549
Discovering storage APM00053001552 (may take several minutes)
Discovering storage (may take several minutes)
Contacting source_cs for remote storage info
Gathering server information...
Contacting source_cs for server capabilities...
Analyzing server information...
Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs
2. server_3:source_cs [ local standby ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: 1
Destination servers available to act as remote standby
------------------------------------------------------
1. server_2:target_cs [ unconfigured standby ]
2. server_3:target_cs [ unconfigured standby ]
b. Back
Select a target_cs server: 1
Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs [ remote standby is server_2:target_cs ]
2. server_3:source_cs [ local standby ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: 2
Destination servers available to act as remote standby
------------------------------------------------------
1. server_2:target_cs [ is remote standby for server_2:source_cs ]
2. server_3:target_cs [ unconfigured standby ]
b. Back
Select a target_cs server: 2
Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs [ remote standby is server_2:target_cs ]
2. server_3:source_cs [ remote standby is server_3:target_cs ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: c
Standby configuration validated OK
Enter user information for managing remote site source_cs
Username: dradmin
Password: ******* Retype your response to validate
Password: *******
Active/Active configuration
Initializing (source_cs-->target_cs)
159
Do you wish to continue? [yes or no] yes
Updating MirrorView configuration cache
Setting up server_3 on source_cs
Setting up server_2 on source_cs
Creating user account dradmin
Setting acl for server_3 on target_cs
Setting acl for server_2 on target_cs
Updating the Celerra domain information
Creating device group mviewgroup on source_cs
done
EXAMPLE #2
----------
To get information about a source MirrorView configuration (for
example, on new_york configured as active/passive), type:
# /nas/sbin/nas_mview -info
***** Device Group Configuration *****
name = mviewgroup
description =
uid = 50:6:1:60:B0:60:26:BC:0:0:0:0:0:0:0:0
state = Consistent
role = Primary
condition = Active
recovery policy = Automatic
number of mirrors = 16
mode = SYNC
owner = 0
mirrored disks =
root_disk,root_ldisk,d5,d8,d10,d11,d24,d25,d26,d27,d29,d30,d31,d32,d33,d39,
local clarid = APM00053001552
remote clarid = APM00053001549
mirror direction = local -> remote
***** Servers configured with RDFstandby *****
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_3, policy=auto
RDFstandby= slot=2
status :
defined = enabled
actual = online, active
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 3
member_of =
standbyfor= server_2
RDFstandby= slot=3
status :
defined = enabled
actual = online, ready
***** Servers configured as standby *****
No servers configured as standby
Where:
Value              Definition
-----              ----------
Device group configuration:
name               Name of the consistency (device) group.
description        Brief description of device group.
uid                UID assigned, based on the system.
state              State of the device group (for example, Consistent,
                   Synchronized, Out-of-Sync, Synchronizing, Scrambled,
                   Empty, Incomplete, or Local Only).
role               Whether the current system is the Primary (source) or
                   Secondary (destination) for this group.
condition          Whether the group is functioning (Active), Inactive,
                   Admin Fractured (suspended), Waiting on Sync, System
                   Fractured (which indicates link down), or Unknown.
recovery policy    Type of recovery policy (Automatic is the default and
                   recommended value for group during storage system
                   configuration; if Manual is set, you must use -resume
                   after a link down failure).
number of mirrors  Number of mirrors in group.
mode               MirrorView mode (always SYNC in this release).
owner              ACL ID assigned (0 indicates no control). nas_acl
                   provides information.
mirrored disks     Comma-separated list of disks that are mirrored.
local clarid       APM number of local VNX for block storage array.
remote clarid      APM number of remote VNX for block storage array.
mirror direction   On primary system, local to remote; on destination
                   system, local from remote.
Servers configured with RDFstandby/Servers configured as standby:
id                 Server ID
name               Server name
acl                ACL value and owner
type               Server type (for example, nas or standby)
slot               Slot number for this Data Mover
member_of          If applicable, shows membership information.
standby            If this Data Mover is configured with local standbys,
                   the server that is the local standby and any policy
                   information.
RDFstandby         If this Data Mover is configured with a remote RDF
                   standby, the slot number of the destination Data Mover
                   that serves as the RDF standby.
standbyfor         If this Data Mover is also configured as a local
                   standby, the server numbers for which it is a local
                   standby.
status             Indicates whether the Data Mover is defined and
                   online/ready.
EXAMPLE #3
----------
To activate a failover, log in to the destination Control Station using the
dradmin account, su to root, and type:
# /nas/sbin/nas_mview -activate
Sync with CLARiiON backend ...... done
Validating mirror group configuration ...... done
Is source site source_cs ready for complete shut down (power OFF)? [yes or no]
yes
Contacting source site source_cs, please wait... done
Shutting down remote site source_cs ......................................
done
Sync with CLARiiON backend ...... done
STARTING an MV ’FAILOVER’ operation.
Device group: mviewgroup ............ done
The MV ’FAILOVER’ operation SUCCEEDED.
Failing over Devices ... done
Adding NBS access for server_2 ........ done
Adding NBS access for server_3 ........ done
Activating the target environment ... done
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
done
EXAMPLE #4
---------
To restore, log in to the destination Control Station using the
dradmin account, as the root user, and type:
# /nas/sbin/nas_mview -restore
Sync with CLARiiON backend ...... done
Validating mirror group configuration ...... done
Contacting source site source_cs, please wait... done
Running restore requires shutting down source site source_cs.
Do you wish to continue? [yes or no] yes
Shutting down remote site source_cs ....... done
Is source site source_cs ready for storage restoration ? [yes or no] yes
Sync with CLARiiON backend ...... done
STARTING an MV ’RESUME’ operation.
Device group: mviewgroup ............ done
The MV ’RESUME’ operation SUCCEEDED.
Percent synchronized: 100
Updating device group ... done
Is source site ready for network restoration ? [yes or no] yes
Restoring servers ...... done
Waiting for servers to reboot ...... done
Removing NBS access for server_2 .. done
Removing NBS access for server_3 .. done
Waiting for device group ready to failback .... done
Sync with CLARiiON backend ...... done
STARTING an MV ’FAILBACK’ operation.
Device group: mviewgroup ............ done
The MV ’FAILBACK’ operation SUCCEEDED.
Restoring remote site source_cs ...... failed
Error 5008: -1:Cannot restore source_cs. Please run restore on site source_cs.
Then on the Source Control Station, as the root user, type:
# /nasmcd/sbin/nas_mview -restore
Stopping NAS services. Please wait...
Powering on servers ( please wait ) ...... done
Sync with CLARiiON backend ...... done
STARTING an MV ’SUSPEND’ operation.
Device group: mviewgroup ............ done
The MV ’SUSPEND’ operation SUCCEEDED.
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
Sync with CLARiiON backend ...... done
STARTING an MV ’RESUME’ operation.
Device group: mviewgroup ............ done
The MV ’RESUME’ operation SUCCEEDED.
Restarting NAS services ...... done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
done
----------------------------------------------------------------
Last modified: May 11, 2011 11:25 am.
nas_pool
Manages the user-defined and system-defined storage pools for the
system.
SYNOPSIS
-------
nas_pool
-list
| -info {<name>|id=<id>|-all} [-Ads] [-storage <system_name>]
| -size {<name>|id=<id>|-all} [-mover <mover>][-slice {y|n}]
[-storage <system_name>]
| -create [-name <name>][-acl <acl>][-description <desc>]
[-volumes <volume_name>[,<volume_name>,...]]
[-default_slice_flag {y|n}] [-is_greedy {y|n}]
| -create [-name <name>][-acl <acl>][-description <desc>]
[-default_slice_flag {y|n}] [-is_greedy {y|n}]
-size <integer>[M|G|T][-storage <system_name>]
-template <system_pool_name> [-num_stripe_members <num>]
[-stripe_size <num>]
| -modify {<name>|id=<id>} [-storage <system_name>] [-name <name>]
[-acl <acl>] [-description <desc>][-default_slice_flag {y|n}]
[-is_dynamic {y|n}][-is_greedy {y|n}]
| -delete {<name>|id=<id>} [-deep] [-storage <system_name>]
| -xtend {<name>|id=<id>} [-storage <system_name>]
-volumes <volume_name>[,<volume_name>,...]
| -xtend {<name>|id=<id>} -size <integer>[M|G|T] [-storage <system_name>]
| -shrink {<name>|id=<id>} [-storage <system_name>] -volumes
<volume_name>[,<volume_name>,...][-deep]
DESCRIPTION
-----------
nas_pool creates, deletes, extends, shrinks, lists, displays, and
modifies user-defined storage pools, and manages their access control
levels. nas_pool extends, shrinks, lists, displays, and modifies
system-defined storage pools.
OPTIONS
-------
-list
Lists all storage pools on the system.
-info {<name>|id=<id>|-all} [-Ads] [-storage <system_name>]
Displays detailed information for the specified storage pool, or all
storage pools. The -storage option can be used to differentiate pools
when the same pool name is used in multiple storage systems.
The -Ads option displays the advanced data service properties of the file
system.
-size {<name>|id=<id>|-all}
Displays the size for the specified storage pool, or all storage pools.
[-mover <mover>]
Displays size information that is visible to the physical Data
Mover or the virtual Data Mover (VDM).
[-slice {y|n}]
If y is typed, displays size information when volumes in the
storage pool are sliced. If n is typed, displays size information
when volumes in the storage pool are not sliced. The -slice option
defaults to the value of default_slice_flag for the storage pool.
[-storage <system_name>]
Displays size information for members that reside on a specified
storage system.
-create
Creates a user-defined storage pool.
[-name <name>]
Assigns a name to the new storage pool. If no name is specified,
assigns one by default.
[-acl <acl>]
Sets an access control level value that defines the owner of the
storage pool, and the level of access allowed for users and groups
defined in the access control level table. The nas_acl command
provides more information.
[-description <desc>]
Assigns a comment to the storage pool.
[-volumes <volume_name>[,<volume_name>,...]]
Designates the members to be added to the storage pool. The
members can be any meta, slice, stripe, or disk volumes.
[-default_slice_flag {y|n}]
If set to y (default), then members of the storage pool might be
sliced when space is allocated from the storage pool. If set to n,
members of the storage pool will not be sliced when space is
dispensed from the storage pool and the volumes specified
cannot be built on a slice.
[-is_greedy {y|n}]
If set to n (default), the system uses space from the user-defined
storage pool’s existing member volumes, in the order that the volumes
were added to the pool, to create a new file system or extend an
existing file system.
If set to y, the user-defined storage pool uses space from the
least-used member volume to create a new file system. When there is
more than one least-used member volume available, AVM selects the
member volume that contains the most disk volumes. For example, if one
member volume contains four disk volumes and another member volume
contains eight disk volumes, AVM selects the one with eight disk
volumes. If there are two or more member volumes that have the same
number of disk volumes, AVM selects the one with the lowest ID.
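The selection order just described (least-used member first, ties
broken by most disk volumes, then lowest ID) can be modeled as a
three-key sort. The sketch below is an illustration of that rule only,
not how AVM is implemented, and the member data is invented:

```shell
#!/bin/sh
# Illustrative model of the greedy member-selection rule above.
# Each line is: member_id,used_mb,disk_volume_count (invented data).
members='v120,4096,4
v121,1024,4
v122,1024,8
v123,1024,8'

# Sort by used_mb ascending, disk count descending, then ID ascending,
# and take the first row: the member the rule would pick.
pick=$(printf '%s\n' "$members" \
    | sort -t, -k2,2n -k3,3nr -k1,1 \
    | head -n 1 \
    | cut -d, -f1)
echo "selected member: $pick"
```

Here v121, v122, and v123 tie on usage; v122 and v123 both hold eight
disk volumes, so the lower ID, v122, wins.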
[-size <integer> {M|G|T}]
Creates a storage pool with the size specified. M specifies megabytes,
G specifies gigabytes (default), and T specifies terabytes. The
maximum size that you can specify for a storage pool is the maximum
supported storage capacity for the system.
[-storage <system_name>]
Specifies the storage system on which one or more volumes will
be created, to be added to the storage pool.
[-template <system_pool_name>]
Specifies a system pool name, required when the -size option is
specified. The user pool will be created using the profile attributes
of the specified system pool template.
[-num_stripe_members <num>]
Specifies the number of stripe members for user pool creation by
size. The -num_stripe_members option works only when both
-size and -template options are specified. It overrides the number
of stripe members attribute of the specified system pool template.
[-stripe_size <num>]
Specifies the stripe size for user pool creation by size. The
-stripe_size option works only when both -size and -template
options are specified. It overrides the stripe size attribute of the
specified system pool template.
-modify {<name>|id=<id>} [-storage <system_name>]
Modifies the attributes of the specified user-defined or
system-defined storage pool. The -storage option can be used to
differentiate pools when the same pool name is used in multiple
storage systems.
Managing Volumes and File Systems with VNX Automatic Volume
Management lists the available system-defined storage pools.
[-name <name>]
Changes the name of the storage pool to the new name.
[-acl <acl>]
Sets an access control level value that defines the owner of the
storage pool, and the level of access allowed for users and groups
defined in the access control level table. The nas_acl command
provides more information.
[-description <desc>]
Changes the comment for the storage pool.
[-default_slice_flag {y|n}]
If set to y (default), then members of the storage pool might be
sliced when space is dispensed from the storage pool. If set to n,
members of the storage pool will not be sliced when space is
dispensed from the storage pool and the volumes specified
cannot be built on a slice.
[-is_dynamic {y|n}]
Allows a system-defined storage pool to automatically extend or
shrink member volumes.
Note: The -is_dynamic option is for system-defined storage pools only.
[-is_greedy {y|n}]
For system-defined storage pools, if set to y, the storage pool
attempts to create new member volumes before using space from existing
member volumes. A system-defined storage pool that is not greedy (set
to n) consumes all the space existing in the storage pool before
trying to add additional member volumes. A y or n value must be
specified when modifying a system-defined storage pool.
For user-defined storage pools, if set to n (default), the system uses
space from the user-defined storage pool’s existing member volumes, in
the order that the volumes were added to the pool, to create a new
file system.
For user-defined storage pools, if set to y, the system uses space
from the least-used member volume in the user-defined storage pool to
create a new file system. When there is more than one least-used
member volume available, AVM selects the member volume that contains
the most disk volumes. For example, if one member volume contains four
disk volumes and another member volume contains eight disk volumes,
AVM selects the one with eight disk volumes. If there are two or more
member volumes that have the same number of disk volumes, AVM selects
the one with the lowest ID.
For both system-defined and user-defined pools, when extending a file
system, the is_greedy attribute is ignored unless there is not enough
free space on the existing volumes that the file system is using to
meet the requested extension size.
-delete {<name>|id=<id>} [-storage <system_name>]
Deletes a storage pool. Storage pools cannot be deleted if any
members are in use. After deletion, the storage pool no longer exists
on the system, however, members of the storage pool are not deleted.
The -storage option can be used to differentiate pools when the same
pool name is used in multiple storage systems.
[-deep]
Deletes the storage pool and also recursively deletes each
member of the storage pool. Each storage pool member is deleted
unless it is in use or is a disk volume.
-xtend {<name>|id=<id>} [-storage <system_name>]
-volumes <volume_name>[, <volume_name>,...]
Adds one or more unused volumes to a storage pool. The -storage
option can be used to differentiate pools when the same pool name is
used in multiple storage systems. If the default_slice_value is set to
n, member volumes cannot contain slice volumes (for compatibility
with TimeFinder/FS).
Note: Extending a storage pool by volume is for user-defined storage pools
only.
-xtend {<name>|id=<id>} -size <integer> [M|G|T]
Extends the specified storage pool with one or more volumes of the
size equal to or greater than the size specified. When specifying the
volume by size, type an integer between 1 and 1024, then specify T
for terabytes, G for gigabytes (default), or M for megabytes.
[-storage <system_name>]
Specifies the storage system on which one or more volumes will
be created, to be added to the storage pool.
Note: To successfully extend a system-defined storage pool by size, the
is_dynamic attribute must be set to n, and there must be enough
available disk volumes to satisfy the request.
-shrink {<name>|id=<id>} [-storage <system_name>]
-volumes <volume_name>[,<volume_name>,...][-deep]
Shrinks the storage pool by the specified unused volumes. The
-storage option can be used to differentiate pools when the same pool
name is used in multiple storage systems. When the -deep option is
used to shrink a user-defined storage pool, it removes the specified
member volumes from the pool, and recursively deletes any unused
volumes unless it is a disk volume. If the -deep option is not used to
shrink a user-defined storage pool, the member volumes are left
intact so that they can be reused. The is_dynamic option must be set
to n before shrinking system-defined storage pools.
Note: Shrinking of a system-defined storage pool by default deletes
member volumes automatically. Specifying the -deep option on the
system-defined storage pool shrink does not make any difference.
SEE ALSO
--------
Managing Volumes and File Systems with VNX Automatic Volume
Management, Managing Volumes and File Systems for VNX Manually,
Controlling Access to System Objects on VNX, Using TimeFinder/FS,
NearCopy, and FarCopy on VNX for File, fs_timefinder, nas_fs,
nas_volume, and nas_slice.
STORAGE SYSTEM OUTPUT
---------------------
VNX for block supports the following traditional system-defined
storage pools: clar_r1, clar_r5_performance, clar_r5_economy,
clar_r6, clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3,
cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6,
clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6,
cmsas_r10, and cmefd_r5.
A mapped pool was formerly called a thin or virtual pool.
Disk types when using VNX for block are CLSTD, CLEFD, CLATA,
MIXED (indicates that tiers used in the pool contain multiple disk
types), Performance, Capacity, and Extreme_performance and for
VNX for block involving mirrored disks are: CMEFD, CMSTD,
CMATA, Mirrored_mixed, Mirrored_performance,
Mirrored_capacity, and Mirrored_extreme_performance.
Disk types when using VNX for block are CLSTD, CLEFD, and
CLATA, and for VNX for block involving mirrored disks are:
CMEFD, CMSTD, and CMATA.
VNX with a Symmetrix storage system supports the following
system-defined storage pools: symm_std, symm_std_rdf_src,
symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt,
symm_std_rdf_tgt, symm_efd, symm_fts, symm_fts_rdf_tgt,
and symm_fts_rdf_src.
For user-defined storage pools, the difference in output is in the disk
type. Disk types when using a Symmetrix are STD, R1STD, R2STD,
BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA,
R2BCA, EFD, FTS, R1FTS, R2FTS, R1BCF, R2BCF, BCVF,
BCVMIXED, R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED.
EXAMPLE #1
---------
To create a storage pool named marketing, with a description, with the
disk members d12 and d13, and with the default slice flag set to y,
type:
$ nas_pool -create -name marketing -description ’Storage
Pool’ -volumes d12,d13 -default_slice_flag y
id                 = 20
name               = marketing
description        = Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
Where:

Value              Definition
-----              ----------
id                 ID of the storage pool.
name               Name of the storage pool.
description        Comment assigned to the storage pool.
acl                Access control level value assigned to the storage
                   pool.
in_use             Whether the storage pool is being used by a file
                   system.
clients            File systems using the storage pool.
members            Volumes used by the storage pool.
storage_system(s)  Storage systems used by the storage pool.
default_slice_flag Allows slices from the storage pool.
is_user_defined    User-defined as opposed to system-defined.
thin               Indicates whether thin provisioning is enabled or
                   disabled.
disk_type          Type of disk contingent on the storage system
                   attached. CLSTD, CLATA, CMSTD, CLEFD, CMEFD, CMATA,
                   MIXED (indicates tiers used in the pool contain
                   multiple disk types), Performance, Capacity,
                   Extreme_performance, Mirrored_mixed,
                   Mirrored_performance, Mirrored_capacity, and
                   Mirrored_extreme_performance are for VNX for block,
                   and STD, BCV, R1BCV, R2BCV, R1STD, R2STD, ATA,
                   R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, BCVMIXED,
                   R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED are
                   for Symmetrix.
server_visibility  Storage pool is visible to the physical Data Movers
                   specified.
is_greedy          Indicates whether the system-defined storage pool
                   will use new member volumes as needed.
template_pool      System pool template used to create the user pool.
                   Only applicable to user pools created by size or if
                   the last member volume is a stripe or both.
num_stripe_members Number of stripe members used to create the user
                   pool. Applicable to system pools and user pools
                   created by size or if the last member volume is a
                   stripe or both.
stripe_size        Stripe size used to create the user pool.
                   Applicable to system pools and user pools created
                   by size or if the last member volume is a stripe or
                   both.
EXAMPLE #2
---------
To change the description for the marketing storage pool to include a
descriptive comment, type:
$ nas_pool -modify marketing -description ’Marketing Storage Pool’
id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #3
---------
To view the size information for the FP1 mapped pool, type:
$ nas_pool -size FP1
id           = 40
name         = FP1
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 2047
Where:

Value        Definition
-----        ----------
used_mb      Space in use by the storage pool specified.
avail_mb     Unused space still available in the storage pool.
total_mb     Total space in the storage pool (total of used and
             unused).
potential_mb Available space that can be added to the storage pool.
Note: Each of the options used with the nas_pool -size command acts as
a filter on the command output. For example, if you specify a Data
Mover, the output reflects only the space to which the specified Data
Mover has visibility. Physical used_mb, Physical avail_mb, and
Physical total_mb are applicable for system-defined virtual AVM pools
only.
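Because total_mb is the total of used and unused space, the used_mb,
avail_mb, and total_mb fields can be cross-checked mechanically. The
sketch below parses sample text in the output format shown above; the
pool name and numbers are invented, not taken from a real system:

```shell
#!/bin/sh
# Cross-check the size accounting of nas_pool -size style output.
# The sample mimics the format above; the values are invented.
sample='id           = 40
name         = FP2
used_mb      = 120
avail_mb     = 1927
total_mb     = 2047
potential_mb = 4094'

# Pull the numeric value for a given field name out of the sample.
get() { printf '%s\n' "$sample" | awk -v k="$1" '$1 == k {print $3}'; }

used=$(get used_mb)
avail=$(get avail_mb)
total=$(get total_mb)

# total_mb should equal used_mb + avail_mb.
if [ $((used + avail)) -eq "$total" ]; then
    echo "size fields are consistent"
fi
```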
EXAMPLE #4
---------
To view the size information for the TP1 mapped pool, which contains
only virtual LUNs, type:
$ nas_pool -size TP1
id           = 40
name         = TP1
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 2047
Physical storage usage in tp1 on FCNTR074200038:
used_mb      = 0
avail_mb     = 20470
Where:

Value             Definition
-----             ----------
Physical used_mb  Used physical size of a storage system mapped pool
                  in MB (some may be used by non-VNX hosts).
Physical avail_mb Available physical size of a storage system mapped
                  pool in MB.

Note: Physical used_mb and Physical avail_mb are applicable for
system-defined AVM pools that contain virtual LUNs only.
EXAMPLE #5
---------
For a VNX system, to change the -is_greedy and -is_dynamic options for
the system-defined clar_r5_performance storage pool, type:
$ nas_pool -modify clar_r5_performance -is_dynamic n -is_greedy y
id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 421
in_use             = False
clients            =
members            = v120
storage_system(s)  =
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768
EXAMPLE #1 provides a description of command output.
EXAMPLE #6
---------
For VNX for file with a Symmetrix system, to change the -is_greedy and
-is_dynamic options for the system-defined symm_std storage pool,
type:
$ nas_pool -modify symm_std -is_dynamic y -is_greedy y
id = 1
name = symm_std
description = Symmetrix STD
acl = 1421, owner=nasadmin, ID=201
in_use = True
clients = ufs3
members = v169,v171
default_slice_flag = False
is_user_defined = False
thin = False
disk_type = STD
compressed = True
server_visibility = server_2,server_3,server_4,server_5
volume_profile = symm_std_vp
is_dynamic = True
is_greedy = True
num_stripe_members = 8
stripe_size = 32768
Where:

Value              Definition
-----              ----------
id                 ID of the storage pool.
name               Name of the storage pool.
description        Comment assigned to the storage pool.
acl                Access control level value assigned to the storage
                   pool.
in_use             Whether the storage pool is being used by a file
                   system.
clients            File systems using the storage pool.
members            Disks used by the storage pool.
default_slice_flag Allows slices from the storage pool.
is_user_defined    User-defined as opposed to system-defined.
thin               Indicates whether thin provisioning is enabled or
                   disabled.
disk_type          Contingent on the storage system attached.
compressed         For VNX with Symmetrix backend, indicates whether
                   data is compressed. Values are: True, False, Mixed
                   (indicates some of the LUNs, but not all, are
                   compressed).
server_visibility  Storage pool is visible to the physical Data Movers
                   specified.
volume_profile     Volume profile used.
is_dynamic         Whether the system-defined storage pool can add or
                   remove volumes.
is_greedy          Indicates whether the system-defined storage pool
                   will use new member volumes as needed.
template_pool      System pool template used to create the user pool.
                   Only applicable to user pools created by size or if
                   the last member volume is a stripe or both.
num_stripe_members Number of stripe members used to create the user
                   pool. Applicable to system pools and user pools
                   created by size or if the last member volume is a
                   stripe or both.
stripe_size        Stripe size used to create the user pool.
                   Applicable to system pools and user pools created
                   by size or if the last member volume is a stripe or
                   both.
EXAMPLE #7
---------
To change the -is_greedy option for the user-defined user_pool storage
pool, type:
$ nas_pool -modify user_pool -is_greedy y
id                 = 58
name               = user_pool
description        =
acl                = 0
in_use             = False
clients            =
members            = d21,d22,d23,d24
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2
is_greedy          = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #8
---------
To add the volumes d7 and d8 to the marketing storage pool, type:
$ nas_pool -xtend marketing -volumes d7,d8
id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13,d7,d8
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #9
---------
For a VNX system, to extend the system-defined storage pool by a
specified size on a specified storage system, type:
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00042000818
id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 1421, owner=nasadmin, ID=201
in_use             = False
clients            =
members            = v120
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768
EXAMPLE #1 provides a description of command output.
EXAMPLE #10
----------
For a VNX system, to remove d7 and d8 from the marketing storage pool,
type:
$ nas_pool -shrink marketing -volumes d7,d8
id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #11
----------
To list the storage pools, type:
$ nas_pool -list
id   inuse  acl   name                  storage_system
2    n      421   clar_r1               N/A
3    n      421   clar_r5_performance   FCNTR074200038
4    n      421   clar_r5_economy       N/A
10   n      421   clarata_archive       FCNTR074200038
11   n      421   clarata_r3            N/A
20   n      0     marketing             FCNTR074200038
40   y      0     TP1                   FCNTR074200038
41   y      0     FP1                   FCNTR074200038
Where:

Value           Definition
-----           ----------
id              ID of the storage pool.
inuse           Whether the storage pool is being used by a file
                system.
acl             Access control level value assigned to the storage
                pool.
name            Name of the storage pool.
storage_system  Name of the storage system where the storage pool
                resides.
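Output in the -list format above is line-oriented and easy to
post-process. As an illustration, this sketch pulls the names of the
in-use pools out of a captured listing; the sample rows mirror
EXAMPLE #11 rather than coming from a live system:

```shell
#!/bin/sh
# Extract the names of in-use pools from captured nas_pool -list text.
# Columns: id, inuse, acl, name, storage_system (as in EXAMPLE #11).
listing='id inuse acl name                storage_system
2  n    421 clar_r1             N/A
3  n    421 clar_r5_performance FCNTR074200038
20 n    0   marketing           FCNTR074200038
40 y    0   TP1                 FCNTR074200038
41 y    0   FP1                 FCNTR074200038'

# Skip the header row, keep rows whose inuse column is y, print names.
inuse_pools=$(printf '%s\n' "$listing" | awk 'NR > 1 && $2 == "y" {print $4}')
echo "$inuse_pools"
```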
EXAMPLE #12
----------
To display information about the user-defined storage pool called
marketing, type:
$ nas_pool -info marketing
id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  =
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #13
----------
To display information about the system-defined clar_r5_performance
storage pool, type:
$ nas_pool -info clar_r5_performance
id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 1421, owner=nasadmin, ID=201
in_use             = False
clients            =
members            = v120
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768
EXAMPLE #1 provides a description of command output.
EXAMPLE #14
----------
To display information about the system-defined engineer virtual pool,
type:
$ nas_pool -info engineer
id                 = 40
name               = engineer
description        = Mapped Pool engineer on APM00084401666
acl                = 0
in_use             = True
clients            = DA_BE_VIRT_FS,vp_test,vp_test1,vp_test12,cvpfs1,cvpfs3
members            = v363
default_slice_flag = True
is_user_defined    = False
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3
volume_profile     = engineer_APM00084401666_vp
is_dynamic         = True
is_greedy          = True
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
EXAMPLE #15
----------
To display information about the mapped storage pool called FP1 from a
VNX for block system, type:
$ nas_pool -info FP1
id                 = 40
name               = FP1
description        = Mapped Pool on FCNTR074200038
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = True
tiering_policy     = Auto-tier
compressed         = False
mirrored           = False
disk_type          = Mixed
volume_profile     = FP1
is_dynamic         = True
is_greedy          = True
Where:

Value          Definition
-----          ----------
tiering_policy Indicates the tiering policy in effect. If the initial
               tier and the tiering policy are the same, the values
               are: Auto-Tier, Highest Available Tier, Lowest
               Available Tier. If the initial tier and the tiering
               policy are not the same, the values are: Auto-Tier/No
               Data Movement, Highest Available Tier/No Data
               Movement, Lowest Available Tier/No Data Movement.
compressed     For VNX for block, indicates whether data is
               compressed. Values are: True, False, Mixed (indicates
               some of the LUNs, but not all, are compressed).
mirrored       Indicates whether the disk is mirrored.
EXAMPLE #16
----------
To display information about the mapped storage pool called SG0 from a
Symmetrix storage system, type:
$ nas_pool -info SG0
id                 = 40
name               = SG0
description        = Symmetrix Mapped Pool on 000192601245
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = True
tiering_policy     = symm_policy_1
compressed         = True
frontend_io_quota  = maxiopersec 500,maxmbpersec 500
disk_type          = Mixed
volume_profile     = True
is_dynamic         = True
is_greedy          = N/A
Where:

Value             Definition
-----             ----------
id                ID of the storage pool.
name              Name of the storage pool.
description       Comment assigned to the storage pool.
acl               Access control level value assigned to the storage
                  pool.
in_use            Whether the storage pool is being used by a file
                  system.
clients           File systems using the storage pool.
members           Volumes used by the storage pool.
default_slice_flag Allows slices from the storage pool.
is_user_defined   User-defined as opposed to system-defined.
thin              Indicates whether thin provisioning is enabled or
                  disabled.
tiering_policy    Indicates the tiering policy in effect. If the
                  initial tier and the tiering policy are the same,
                  the values are: Auto-Tier, Highest Available Tier,
                  Lowest Available Tier. If the initial tier and the
                  tiering policy are not the same, the values are:
                  Auto-Tier/No Data Movement, Highest Available
                  Tier/No Data Movement, Lowest Available Tier/No
                  Data Movement.
compressed        For VNX with Symmetrix backend, indicates whether
                  data is compressed. Values are: True, False, Mixed
                  (indicates some of the LUNs, but not all, are
                  compressed).
frontend_io_quota For VNX with Symmetrix backend, indicates whether
                  Frontend IO Quota is configured on this mapped
                  pool; can also have the value False (indicates
                  Frontend IO Quota is not configured on the mapped
                  SG in the Symmetrix backend).
disk_type         Type of disk contingent on the system attached.
                  CLSTD, CLATA, CMSTD, CLEFD, CMEFD, CMATA, MIXED
                  (indicates tiers used in the pool contain multiple
                  disk types), Performance, Capacity,
                  Extreme_performance, Mirrored_mixed,
                  Mirrored_performance, Mirrored_capacity, and
                  Mirrored_extreme_performance are for VNX for block,
                  and STD, BCV, R1BCV, R2BCV, R1STD, R2STD, ATA,
                  R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, BCVMIXED,
                  R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED are
                  for Symmetrix.
volume_profile    Volume profile used.
is_dynamic        Whether the system-defined storage pool can add or
                  remove volumes.
is_greedy         Indicates whether the system-defined storage pool
                  will use new member volumes as needed.
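The frontend_io_quota value shown in EXAMPLE #16 packs two limits into
one comma-separated field. A small sketch of splitting it apart; the
field layout is assumed from the example output above:

```shell
#!/bin/sh
# Split a frontend_io_quota value into its two limits.
# The input string follows the format shown in EXAMPLE #16.
quota='maxiopersec 500,maxmbpersec 500'

# Break the field at commas, then match each named limit.
io_limit=$(printf '%s\n' "$quota" | tr ',' '\n' \
    | awk '$1 == "maxiopersec" {print $2}')
mb_limit=$(printf '%s\n' "$quota" | tr ',' '\n' \
    | awk '$1 == "maxmbpersec" {print $2}')

echo "IO/s limit: $io_limit"
echo "MB/s limit: $mb_limit"
```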
EXAMPLE #17
----------
To delete the storage pool marketing, and each of the storage pool
member volumes recursively, type:
$ nas_pool -delete marketing -deep
id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            =
storage_system(s)  =
default_slice_flag = True
is_user_defined    = True
is_greedy          = True
thin               = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A
EXAMPLE #1 provides a description of command output.
----------------------------------------------------------------
Last modified: January 11, 2013, 4:31 pm.
nas_quotas
Manages quotas for mounted file systems.
SYNOPSIS
-------
nas_quotas
-edit [-user|-group] {-mover <movername>|-fs <fs_name>} [-path
<pathname>] [[-proto <proto_id>]|[-block <hard_limit>[:<soft_limit>]]
[-inode <hard_limit>[:<soft_limit>]]] <id> [<id>...]
| -edit -config {-mover <movername>|-fs <fs_name>} [-path <pathname>]
[-option <options>]
| -edit -tree -fs <fs_name>
[[-proto <proto_id>]|[-block <hard_limit>[:<soft_limit>]]
[-inode <hard_limit>[:<soft_limit>]]]
[-comment <comment>] <id> [<id>...]
| -report [-user|-group] {-mover <movername>|-fs <fs_name>}
[-path <pathname>] [<id> <id>...]
| -report -config {-mover <movername>|-fs <fs_name>} [-path <pathname>]
| -report -tree -fs <fs_name> [<id> <id>...]
| {-on|-off|-clear} [-user|-group|-both]
{-mover <movername>|-fs <fs_name>|[-path <pathname>] -all}
| -on -tree -fs <fs_name> -path <pathname> [-comment <comment>]
| -off -tree -fs <fs_name> -path <pathname>
| {-list|-clear} -tree -fs <fs_name>
| -check -start [-mode online|offline] [-tree] -fs <fs_name> [-path
<pathname>]
| -check {-stop|-status} -fs <fs_name> [-path <pathname>]
| -quotadb {-info|-upgrade [-Force]} {-mover <movername>|-fs <fs_name>}
DESCRIPTION
-----------
nas_quotas edits quotas for mounted file systems, and displays a
listing of quotas and disk usage at the file system level (by the user,
group, or tree), or at the quota-tree level (by the user or group).
nas_quotas also turns quotas on and off, and clears quotas records
for a file system, quota tree, or a Data Mover. When a Data Mover is
specified, the action applies to all mounted file systems on the Data
Mover.
nas_quotas also starts and stops quota database checks either online
or offline for quota trees and file systems, and allows you to upgrade
the quota database limits to the maximum limit value for a file
system. When a Data Mover is specified, the action applies to all
mounted file systems on the Data Mover.
Caution: Quotas should be turned on (enabled) before file systems go into a
production environment. Enabling (or disabling, or clearing)
quotas in a production environment is time consuming and the
process may disrupt file system operation. CIFS clients are
disconnected during these events and NFS clients receive a
message that the server is not responding. However, once enabled,
quotas can be changed at any time without impact.
OPTIONS
-------
-edit [-user|-group] {-mover <movername>|-fs <fs_name>}
[-path <pathname>] [<id> [<id>...]]
Sets the quota limits for users or groups on a specified Data Mover, mounted
file system, or directory tree.
For a user, the ID can be a numeric user ID (UID); however, if NIS or the
local password file on the Data Mover is available, a username can also be
used. For a group, the ID can be a numeric group ID (GID); however, if NIS
or the local password file is available, a group name can also be used.
Upon execution, a vi session (unless the EDITOR environment variable
specifies otherwise) is opened to edit the quota configuration file. Changes
to the file are applied when the vi session is saved and exited.
[-proto <proto_id>]|[-block <hard_limit>[:<soft_limit>]]
Applies the quota configuration defined for the prototype user for each
specified ID, and sets a hard and soft limit for storage (block) usage
in kilobytes.
[-inode <hard_limit>[:<soft_limit>]] [<id> [<id>...]]
[-block <hard_limit>[:<soft_limit>]]
Edits the inode (file count) limits and the block (storage in KBs)
limits directly into the quota configuration file without opening an
editing session.
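Because the -block limits are expressed in kilobytes, gigabyte-scale quotas must be converted first. A minimal sketch of the arithmetic (the limit values, file system name, and user ID in the comment are illustrative, not taken from this guide):

```shell
# Block quota limits are given in KB: convert GB targets to KB first.
hard_kb=$((3 * 1024 * 1024))   # 3 GB hard limit, in KB
soft_kb=$((2 * 1024 * 1024))   # 2 GB soft limit, in KB
echo "$hard_kb $soft_kb"
# Illustrative use (not run here):
#   nas_quotas -edit -user -fs ufs1 -block ${hard_kb}:${soft_kb} 1000
```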
-edit -config {-mover <movername>|-fs <fs_name>}
[-path <pathname>]
Edits the default quota configuration for all users/groups currently without
quotas or subsequently added to the specified Data Mover or file system or
quota tree. Also edits the grace periods for soft quotas, and the conditions
upon which to generate a quotas-event message to the system log.
[-option <options>]
Specifies the following comma-separated options:
BGP=<integer>
Sets the block grace period in seconds.
IGP=<integer>
Sets the inode grace period in seconds.
DUBSL=<integer>
Sets the default user block soft limit in KB.
DUBHL=<integer>
Sets the default user block hard limit in KB.
DUISL=<integer>
Sets the default user inode soft limit.
DUIHL=<integer>
Sets the default user inode hard limit.
DGBSL=<integer>
Sets the default group block soft limit in KB.
DGBHL=<integer>
Sets the default group block hard limit in KB.
DGISL=<integer>
Sets the default group inode soft limit.
DGIHL=<integer>
Sets the default group inode hard limit.
HLE={True|False}
Specifies whether the hard limit is enforced.
ESFCS={True|False}
Specifies that the event for check start has been sent.
ESFCE={True|False}
Specifies that the event for check end has been sent.
ESFBSL={True|False}
Specifies that the event for block soft limits has been sent.
ESFBHL={True|False}
Specifies that the event for block hard limits has been sent.
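The BGP and IGP options take seconds, so human-readable periods must be converted. A minimal sketch, assuming a one-week grace period (the -option string in the comment is illustrative):

```shell
# Grace periods are specified in seconds; one week is 604800 seconds.
week=$((7 * 24 * 60 * 60))
echo "$week"
# Illustrative use (not run here):
#   nas_quotas -edit -config -fs ufs1 -option BGP=$week,IGP=$week
```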
-edit -tree -fs <fs_name> [[-proto <proto_id>]|
[-block <hard_limit> [:<soft_limit>]][-inode
<hard_limit>[:<soft_limit>]]][-comment <comment>]
<id> [<id>...]
Edits the quota limits for trees (inodes or blocks used by a tree
directory) where the <id> is the tree ID. This option can only be
applied on a per-file-system basis. Use the -list option to display
the tree IDs.
The -proto option applies the quota configuration of the prototype
tree for each specified tree ID, or sets a hard and soft limit for blocks.
The <proto_id> must be a tree ID.
The -inode and -block options edit the inode/block limits for the tree
directly in the quota configuration file without opening an editing
session.
The -comment option associates a comment with the quota tree. The
comment is delimited by single quotes. Comment length is limited to
256 bytes (represented as 256 ASCII characters or a variable number
of Unicode multibyte characters) and cannot include single quotes
(' '), double quotes (" "), semicolons (;), NL (New Line), or FF
(Form Feed).
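Since the limit is counted in bytes rather than characters, multibyte Unicode comments can reach it well before 256 characters. A quick pre-check of a comment's byte length (the comment text is illustrative):

```shell
comment='Quota tree for marketing'        # illustrative comment text
bytes=$(printf '%s' "$comment" | wc -c)   # counts bytes, not characters
echo "$bytes"
[ "$bytes" -le 256 ] && echo "within the 256-byte limit"
```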
-report [-user|-group] {-mover <movername>|-fs
<fs_name>} [-path <pathname>] [<id> <id>...]
Displays a summary of disk usage and quotas for the user or group,
including the number of files and space in kilobytes for the specified
<fs_name>, or all file systems mounted on the specified
<movername>, or for the specified quota tree. The -edit option
provides more information for the usage of UIDs and GIDs.
Note: nas_quotas can report a maximum of 1024 IDs at a time.
-report -config {-mover <movername>|-fs <fs_name>}
[-path <pathname>]
Displays quota configuration information as viewed from the
specified Data Mover, file system, or quota-tree level, including:
* Active quota policy
* Quota status (user/group quotas enabled or disabled)
* Grace period
* Default limits currently set for users/groups
* Hard-quota enforcement option setting (deny disk space enabled
  or disabled)
* Quota conditions that trigger event-logging
-report -tree -fs <fs_name> [<id> <id>...]
Displays the quota limits for a specified quota tree in a file system.
The <id> is a tree ID.
Note: The <id> is either a user ID, a group ID, or a tree ID. If the quota
type is not specified, the default is set to the '-user' ID.
{-on|-off|-clear} [-user|-group|-both] {-mover
<movername>|-fs <fs_name>|[-path <pathname>]|-all}
Turns quotas on, off, and clears quotas for the user, group, or both
(users and groups at once) on the <movername>, <fs_name>, or
<pathname>, or, with -all, for all users or groups on all file
systems on all Data Movers in the cabinet.
The -clear option permanently removes all quota records, deletes the
quota configuration file, and turns quotas off.
Caution: While quotas are being turned on, off, or cleared, other operations
to a file system may be disrupted. CIFS clients are disconnected
during this execution.
-on -tree -fs <fs_name> -path <pathname>
Turns on (enables) tree quotas so that quota tracking and hard-limit
enforcement (if enabled) can occur. When enabling tree quotas, the
directory must not exist; it is created in this tree-quota-enabling
process.
Note: The quota path length (which VNX for file calculates as including the
file system mountpoint) must be less than 1024 bytes. If Unicode is enabled
on the selected Data Mover, -path accepts any characters defined by the
Unicode 3.0 standard. Otherwise, it accepts only ASCII characters.
[-comment <comment>]
The -comment option associates a comment with the quota tree.
The comment is delimited by single quotes. Comment length is
limited to 256 bytes (represented as 256 ASCII characters or a
variable number of Unicode multibyte characters) and cannot
include single quotes (' '), double quotes (" "), semicolons (;), NL
(New Line), or FF (Form Feed).
-off -tree -fs <fs_name> -path <pathname>
Turns tree quotas off. When turning tree quotas off, the tree directory
must be empty.
{-list|-clear} -tree -fs <fs_name>
The -list option displays all active quota trees and their respective
tree IDs used by -edit and -report with the specified file system.
Use the -tree -clear option to clear all the information from the
database after you disable (turn off) quotas for all trees within a file
system. Once cleared, the database information is not recoverable.
Caution: The -clear option deletes the usage and the limit information for
tree quotas. The limits cannot be recovered.
-check -start [-mode online|offline] [-tree] -fs <fs_name> [-path
<pathname>]
Starts a check of a quota database in online or offline mode for a tree
quota or a file system quota. The default mode is online if the -mode
option is not specified, and a quota check is run while the file system
remains online.
-check {-stop|-status} -fs <fs_name> [-path <pathname>]
Stops or provides status of a file system quota database check that is
in progress.
-quotadb {-info|-upgrade [-Force]} {-mover <movername>|-fs
<fs_name>}
Either displays status related to the quota database upgrade or starts
an upgrade of the quota database for a specific file system or all file
systems on a Data Mover.
The -info option displays the status related to the quota database
limits upgrade.
Use the -upgrade option to perform an upgrade of the quota
database. If the -Force option is not specified, you are in interactive
mode while upgrading the quota database. If the -Force option is
specified, you are in non-interactive mode while upgrading the quota
database.
Use -mover <movername> to upgrade the quota databases of all mounted
file systems on a Data Mover.
Use -fs <fs_name> to upgrade a specific file system's quota database.
Note: Before the upgrade process runs, the Control Station displays the
estimated upgrade time on the file system whose quota database will be
upgraded, and also displays a warning message to notify users that the file
system will be unavailable during the upgrade process. If users are in
interactive mode, a dialog displays letting users choose whether they want to
continue. If users are in non-interactive mode, after displaying the estimated
upgrade time message and warning message, the upgrade process starts
immediately.
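The estimated upgrade time in the Control Station's status messages follows directly from the database size and conversion speed. With the figures reported in the quota-database examples on this page (1500 data blocks at 300 blocks per second), a minimal sketch:

```shell
# Estimated upgrade time = data blocks in the quota database / conversion speed
blocks=1500   # total data blocks to convert
speed=300     # blocks converted per second
echo "$((blocks / speed)) seconds"
```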
SEE ALSO
--------
Using Quotas on VNX.
EXAMPLE #1
----------
To enable quotas for users and groups of a file system, type:
$ nas_quotas -on -both -fs ufs1
done
EXAMPLE #2
----------
To open a vi session to edit file system quotas on ufs1 for the specified
user, 1000, type:
$ nas_quotas -edit -user -fs ufs1 1000
Userid : 1000
fs ufs1 blocks (soft = 2000, hard = 3000) inodes (soft = 0, hard = 0)
˜
˜
˜
˜
"/tmp/EdP.agGQuIz" 2L, 84C written
done
EXAMPLE #3
----------
To change the block limit and inode limit for a user without opening up a vi
session, type:
$ nas_quotas -edit -user -fs ufs1 -block 7000:6000 -inode 700:600 2000
done
EXAMPLE #4
----------
To view a report of user quotas for ufs1, type:
$ nas_quotas -report -user -fs ufs1
Report for user quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
|User       |         Bytes Used (1K)        |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           |  Used |  Soft |  Hard |Timeleft|  Used | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1000      |   1328|   2000|   3000|        |     54|     0|     0|        |
|#2000      |   6992|   6000|   7000| 7.0days|     66|   600|   700|        |
|#5000      | 141592|      0|      0|        |    516|     0|     0|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done
EXAMPLE #5
----------
To select group 300 as the prototype group for ufs1, and assign other groups
the same limits, type:
$ nas_quotas -group -edit -fs ufs1 -proto 300 301 302 303
done
EXAMPLE #6
----------
To display the group quota information for ufs1, type:
$ nas_quotas -report -group -fs ufs1
Report for group quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
| Group     |         Bytes Used (1K)        |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           |  Used |  Soft |  Hard |Timeleft|  Used | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1         |    296|      0|      0|        |     12|     0|     0|        |
|#300       |   6992|   6000|   7000| 7.0days|     67|   600|   700|        |
|#301       |      0|   6000|   7000|        |      0|   600|   700|        |
|#302       |      0|   6000|   7000|        |      0|   600|   700|        |
|#303       |      0|   6000|   7000|        |      0|   600|   700|        |
|#32772     |  22296|      0|      0|        |    228|     0|     0|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done
EXAMPLE #7
----------
To edit the default quota configuration for server_2, type:
$ nas_quotas -edit -config -mover server_2
File System Quota Parameters:
fs "ufs1"
Block Grace: (1.0 weeks)
Inode Grace: (1.0 weeks)
* Default Quota Limits:
User: block (soft = 5000, hard = 8000) inodes (soft = 100, hard= 200)
Group: block (soft = 6000, hard = 9000) inodes (soft = 200, hard= 400)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (no)
hard quota crossed: (no)
fs "ufs2"
Block Grace: (1.0 weeks)
Inode Grace: (1.0 weeks)
* Default Quota Limits:
User: block (soft = 0, hard = 0) inodes (soft = 0, hard= 0)
Group: block (soft = 0, hard = 0) inodes (soft = 0, hard= 0)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (no)
hard quota crossed: (no)
˜
˜
˜
˜
"/tmp/EdP.ahCPdAB" 25L, 948C written
done
EXAMPLE #8
----------
To open a vi session and edit the quota configuration for a file system,
type:
$ nas_quotas -edit -config -fs ufs1
File System Quota Parameters:
fs "ufs1"
Block Grace: (1.0 weeks)
Inode Grace: (1.0 weeks)
* Default Quota Limits:
User: block (soft = 5000, hard = 8000) inodes (soft = 100, hard= 200)
Group: block (soft = 6000, hard = 9000) inodes (soft = 200, hard= 400)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (yes)
hard quota crossed: (yes)
˜
˜
˜
˜
"/tmp/EdP.a4slhyg" 13L, 499C written
done
EXAMPLE #9
----------
To view the quota configuration for the file system, ufs1, type:
$ nas_quotas -report -config -fs ufs1
+--------------------------------------------------------+
| Quota parameters for filesystem ufs1 mounted on /ufs1:
+--------------------------------------------------------+
| Quota Policy: blocks
| User Quota: ON
| Group Quota: ON
| Block grace period: (1.0 weeks)
| Inode grace period: (1.0 weeks)
| Default USER quota limits:
| Block Soft: ( 5000), Block Hard: ( 8000)
| Inode Soft: ( 100), Inode Hard: ( 200)
| Default GROUP quota limits:
| Block Soft: ( 6000), Block Hard: ( 9000)
| Inode Soft: ( 200), Inode Hard: ( 400)
| Deny Disk Space to users exceeding quotas: YES
| Log an event when ...
| Block hard limit reached/exceeded: YES
| Block soft limit (warning level) crossed: YES
| Quota check starts: NO
| Quota Check ends: NO
+--------------------------------------------------------+
done
EXAMPLE #10
----------
To enable tree quotas for ufs1, type:
$ nas_quotas -on -tree -fs ufs1 -path /tree1 -comment 'Tree #1'
done
EXAMPLE #11
----------
To create a tree quota with multibyte character support, type:
$ nas_quotas -on -tree -fs fs_22 -path /<path_in_local_language_text>
-comment <comment_in_local_language_text>
done
EXAMPLE #12
----------
To list the tree quotas for ufs1, type:
$ nas_quotas -list -tree -fs ufs1
+---------------------------------------------------------------------------+
| Quota trees for filesystem ufs1 mounted on /ufs1:                         |
+------+--------------------------------------------------------------------+
|TreeId| Quota tree path (Comment)                                          |
+------+--------------------------------------------------------------------+
|    1 | /tree1 (Tree #1)                                                   |
|    2 | /tree2 (Tree #2)                                                   |
|    3 | /<tree_path_in_local_language_text> (Tree #3)                      |
+------+--------------------------------------------------------------------+
done
EXAMPLE #13
----------
To edit tree quotas for ufs1 and add a comment, type:
$ nas_quotas -edit -tree -fs ufs1 -comment 'Quota for Tree1' 1
done
EXAMPLE #14
----------
To edit tree quotas for ufs1, type:
$ nas_quotas -edit -tree -fs ufs1 1
treeid : 1
fs ufs1 blocks (soft = 6000, hard = 8000) inodes (soft = 200, hard = 300)
˜
˜
˜
˜
"/tmp/EdP.aiHKgh5" 2L, 85C written
done
EXAMPLE #15
----------
To edit tree quotas for ufs1 and change the block and inode limits, type:
$ nas_quotas -edit -tree -fs ufs1 -block 8000:6000 -inode 900:800 1
done
EXAMPLE #16
----------
To edit tree quotas for ufs1 and apply the quota configuration of the
prototype tree, type:
$ nas_quotas -edit -tree -fs ufs1 -proto 1 2
done
EXAMPLE #17
----------
To display any currently active trees on a file system, type:
$ nas_quotas -report -tree -fs ufs1
Report for tree quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
| Tree      |         Bytes Used (1K)        |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           |  Used |  Soft |  Hard |Timeleft|  Used | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1         |    384|   6000|   8000|        |      3|   800|   900|        |
|#2         |   7856|   6000|   8000| 7.0days|     60|   800|   900|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done
EXAMPLE #18
----------
To disable tree quotas, type:
$ nas_quotas -tree -off -fs ufs1 -path /tree1
done
EXAMPLE #19
----------
To enable quotas for users and groups on tree quota, /tree3, of a file
system, ufs1, type:
$ nas_quotas -on -both -fs ufs1 -path /tree3
done
EXAMPLE #20
----------
To open a vi session to edit file system quotas on quota tree, /tree3, on
ufs1 for the specified user, 1000, type:
$ nas_quotas -edit -user -fs ufs1 -path /tree3 1000
Userid : 1000
fs ufs1 tree "/tree3" blocks (soft = 4000, hard = 6000) inodes (soft = 30,
hard = 50)
˜
˜
˜
˜
"/tmp/EdP.aMdtIQR" 2L, 100C written
done
EXAMPLE #21
----------
To change the block limit and inode limit on quota tree, /tree3, on ufs1 for
the specified user, 1000, without opening up a vi session, type:
$ nas_quotas -edit -user -fs ufs1 -path /tree3 -block 6000:4000 -inode
300:200 1000
done
EXAMPLE #22
----------
To view a report of user quotas on tree quota, /tree3, for ufs1, type:
$ nas_quotas -report -user -fs ufs1 -path /tree3
Report for user quotas on quota tree /tree3 on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
|User       |         Bytes Used (1K)        |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           |  Used |  Soft |  Hard |Timeleft|  Used | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1000      |   2992|   4000|   6000|        |     34|   200|   300|        |
|#32768     |   9824|      0|      0|        |     28|     0|     0|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done
EXAMPLE #23
----------
To open a vi session and edit the quota configuration for tree quota,
/tree3, on a file system, ufs1, type:
$ nas_quotas -edit -config -fs ufs1 -path /tree3
Tree Quota Parameters:
fs "ufs1"
tree "/tree3"
Block Grace: (1.0 weeks)
Inode Grace: (1.0 weeks)
* Default Quota Limits:
User: block (soft = 8000, hard = 9000) inodes (soft = 200, hard= 300)
Group: block (soft = 8000, hard = 9000) inodes (soft = 300, hard= 400)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (yes)
hard quota crossed: (yes)
˜
˜
˜
˜
"/tmp/EdP.aDTOKeU" 14L, 508C written
done
EXAMPLE #24
----------
To view the quota configuration for tree quota, /tree3, on file system,
ufs1, type:
$ nas_quotas -report -config -fs ufs1 -path /tree3
+-----------------------------------------------------------------+
| Quota parameters for tree quota /tree3 on filesystem ufs1 mounted
| on /ufs1:
+-----------------------------------------------------------------+
| Quota Policy: blocks
| User Quota: ON
| Group Quota: ON
| Block grace period: (1.0 weeks)
| Inode grace period: (1.0 weeks)
| Default USER quota limits:
| Block Soft: ( 8000), Block Hard: ( 9000)
| Inode Soft: ( 200), Inode Hard: ( 300)
| Default GROUP quota limits:
| Block Soft: ( 8000), Block Hard: ( 9000)
| Inode Soft: ( 300), Inode Hard: ( 400)
| Deny Disk Space to users exceeding quotas: YES
| Log an event when ...
| Block hard limit reached/exceeded: YES
| Block soft limit (warning level) crossed: YES
| Quota check starts: NO
| Quota Check ends: NO
+-----------------------------------------------------------------+
done
EXAMPLE #25
----------
To disable user quota and group quota on tree quota, /tree3, type:
$ nas_quotas -off -both -fs ufs1 -path /tree3
done
EXAMPLE #26
----------
To disable group quotas for ufs1, type:
$ nas_quotas -off -group -fs ufs1
done
EXAMPLE #27
----------
To clear all tree quotas for ufs1, type:
$ nas_quotas -clear -tree -fs ufs1
done
EXAMPLE #28
----------
To clear quotas for users and groups of a Data Mover, type:
$ nas_quotas -clear -both -mover server_2
done
EXAMPLE #29
----------
To start a tree quota check in quota tree /mktg-a/dir1 in file system ufs1
with the file system online, type:
$ nas_quotas -check -start -mode online -tree -fs ufs1 -path /mktg-a/dir1
done
EXAMPLE #30
----------
To stop a tree quota check in file system ufs1, type:
$ nas_quotas -check -stop -fs ufs1
done
EXAMPLE #31
----------
To view the status of a tree quota check in quota tree /mktg-a/dir1 in file
system ufs1, type:
$ nas_quotas -check -status -tree -fs ufs1 -path /mktg-a/dir1
Tree quota check on filesystem ufs1 and path /mktg-a/dir1 is running and is 60%
complete.
done
EXAMPLE #32
----------
To list quota database limits for all file systems on a Data Mover, type:
$ nas_quotas -quotadb -info -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB
EXAMPLE #33
----------
To list quota database limits for file system ufs4, type:
$ nas_quotas -quotadb -info -fs ufs4
Info 13421850366 : The quota limit on ufs4 is at 256 TB
EXAMPLE #34
----------
To upgrade all file systems on a Data Mover, in interactive mode, type:
$ nas_quotas -quotadb -upgrade -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB
Warning 17716861297: The file systems specified in the list above will not be
accessible during the quota database upgrade, and a file system’s CIFS share
and NFS export also will not be accessible during the upgrade. The file
systems shown above are listed in the order that the quota database conversion
is performed, one by one sequentially. The estimated time ( shown above )
needed to upgrade the quota database may change based on the file system’s
quota configuration and I/O performance when the conversion is running.
Do you really want to upgrade the file system quota database now[Y/N]: Y
Info 13421850367 : quota db upgraded on ufs0
Info 13421850367 : quota db upgraded on ufs1
Info 13421850367 : quota db upgraded on ufs2
Error 13421850368 : Timeout occurred when upgrading quota db on ufs3. The
Quota db upgrade may still be in progress. Use the "-info" option to check
status.
Info 13421850369 : quota db already upgraded on ufs4
EXAMPLE #35
----------
To list quota database limits for file system ufs3 after an upgrade has timed
out, type:
$ nas_quotas -quotadb -info -fs ufs3
Info 13421850370 : The quota limit on ufs3 is at 4TB. Upgrade is 48% complete.
EXAMPLE #36
----------
To upgrade all file systems on a Data Mover, in non-interactive mode, type:
$ nas_quotas -quotadb -upgrade -Force -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB
Warning 17716861297: The file systems specified in the list above will not be
accessible during the quota database upgrade, and a file system’s CIFS share
and NFS export also will not be accessible during the upgrade. The file
systems shown above are listed in the order that the quota database conversion
is performed, one by one sequentially. The estimated time ( shown above )
needed to upgrade the quota database may change based on the file system’s
quota configuration and I/O performance when the conversion is running.
Info 13421850367 : quota db upgraded on ufs0
Info 13421850367 : quota db upgraded on ufs1
Info 13421850367 : quota db upgraded on ufs2
Error 13421850368 : Timeout occurred when upgrading quota db on ufs3. The
Quota db upgrade may still be in progress. Use the "-info" option to check
status.
Info 13421850369 : quota db already upgraded on ufs4
EXAMPLE #37
----------
To upgrade file system ufs3, in interactive mode, type:
$ nas_quotas -quotadb -upgrade -fs ufs3
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Warning 17716861297: The file systems specified in the list above will not be
accessible during the quota database upgrade, and a file system’s CIFS share
and NFS export also will not be accessible during the upgrade. The file
systems shown above are listed in the order that the quota database conversion
is performed, one by one sequentially. The estimated time ( shown above )
needed to upgrade the quota database may change based on the file system’s
quota configuration and I/O performance when the conversion is running.
Do you really want to upgrade the file system quota database now[Y/N]: Y
Info 13421850367 : quota db upgraded on ufs3
done
EXAMPLE #38
----------
To upgrade file system ufs3, in non-interactive mode, type:
$ nas_quotas -quotadb -upgrade -Force -fs ufs3
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is
estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at
a speed of 300 blocks per second.
Warning 17716861297: The file systems specified in the list above will not be
accessible during the quota database upgrade, and a file system’s CIFS share
and NFS export also will not be accessible during the upgrade. The file
systems shown above are listed in the order that the quota database conversion
is performed, one by one sequentially. The estimated time ( shown above )
needed to upgrade the quota database may change based on the file system’s
quota configuration and I/O performance when the conversion is running.
Info 13421850367 : quota db upgraded on ufs3
done
----------------------------------------------------------------
Last modified: May 12, 2011, 3:15 pm.
nas_rdf
Facilitates communication between two VNX systems. Its primary
use is to manage VNX for file systems and define the relationships
needed for disaster recovery in an SRDF environment.
SYNOPSIS
--------
nas_rdf
-init
| -activate [-reverse|-skip_rdf_operations][-skip_SiteA_shutdown][-nocheck]
| -restore [-skip_rdf_operations [-skip_SiteA_shutdown]][-nocheck]
| -check {-all|<test>,...}
DESCRIPTION
-----------
nas_rdf establishes and manages relationships for Control Stations
and Data Movers that physically reside in separate VNX for file
cabinets.
For SRDF, nas_rdf initializes the VNX, activates a failover to a
destination VNX for file, or restores a source VNX. For Dynamic
SRDF, nas_rdf activates a failover and reverses the system from a
destination volume (R2) to a source volume (R1). Configuration
details depend on the type of SRDF: active/passive or active/active'
SRDF/S for synchronous replication with disaster recovery, or
active/passive SRDF/A for extended-distance, asynchronous
replication with a point-in-time replica.
Note: The apostrophe in active/active' indicates that both sites have a source
volume mirrored at the other site.
SRDF is supported only on a VNX attached to a Symmetrix system.
Also, this command must be run from a primary Control Station in
slot 0; it will report an error if run from a Control Station in slot 1.
Note: This command must be executed from the /nas/sbin directory, unless
otherwise directed. Log in with your administrative username and password,
and execute this functionality as root.
OPTIONS
-------
-init
Initializes a source or destination (target) VNX for SRDF/S or
SRDF/A.
-activate [-reverse]
Initiates an SRDF failover from the source VNX for file to the
destination. The -activate option is executed on the destination VNX
at the discretion of the user. The -activate option sets each
SRDF-protected volume on the source VNX as read-only, and each
mirrored volume on the destination VNX is set as read-write. The
SRDF standby Data Movers acquire the IP and MAC addresses, file
systems, and export tables of their source counterparts. The -reverse
option reverses the SRDF direction by converting R2 volumes at the
destination site to R1 and synchronizing the destination and source
sites. The -reverse option adds SYMCLI swap and establishes
operations on the system after the normal activate operation is
performed. When the -activate option is executed, an automatic,
internal SRDF health check is performed before activating a failover.
The -nocheck option allows you to skip this health check.
-activate -skip_rdf_operations
Skips RDF backend operations such as symrdf failover. The backend operations
must be done using Solutions Enabler or the Mainframe host component prior to
running this command. The SiteA shutdown (Data Mover shutdown and Control
Station reboot) is always skipped when this option is specified. However, a
Control Station reboot is sent to SiteA at the end of the activate operation
when the backend RDF status is not "Split", to clean up old processes. (The
"Split" status means SiteA is read-write, and the production site is up and
running.) For failover from SiteB to SiteC or SiteC to SiteB, the Control
Station reboot is sent to SiteB or SiteC. SiteB/SiteC must be read-write
before starting this operation. The -activate -skip_rdf_operations
-skip_SiteA_shutdown option performs the same operation.
-activate -skip_SiteA_shutdown
Skips the SiteA shutdown (Data Mover shutdown and Control Station reboot)
operation. However, the SiteA shutdown request is sent to SiteA at the end of
the activate operation. This option is mainly used to minimize failover time.
-restore -skip_rdf_operations
Skips RDF backend operations such as symrdf failback. This option also
completes only the SiteB/SiteC restore operations and skips the SiteA restore
operation. The SiteA restore operation must be done separately at SiteA after
the SiteB/SiteC restore operation completes. SiteB/SiteC must be read-write
before starting this operation.
-restore -skip_rdf_operations -skip_SiteA_shutdown
Skips RDF backend operations such as symrdf failback and also skips the SiteA
shutdown operation. This is mainly used to fail over from SiteB to SiteC or
from SiteC to SiteB.
-restore
Restores a source VNX after a failover. The -restore option is initially
executed on the destination VNX. The data on each destination
volume is copied to the corresponding volume on the source VNX.
On the destination VNX, services on each SRDF standby Data Mover
are stopped. (NFS clients connected to these Data Movers see a
"server unavailable" message; CIFS client connections time out.)
Each volume on the source VNX is set as read-write, and each
mirrored volume on the destination VNX is set as read-only.
Finally, nas_rdf -restore can be remotely executed on the source VNX
to restore the original configuration. Each primary Data Mover
reacquires its IP and MAC addresses, file systems, and export tables.
When the -restore option is executed, an automatic, internal SRDF
health check is performed before restoring source and destination
VNX systems. The -nocheck option allows you to skip this health
check.
-check { -all|<test>,...}
Runs SRDF health checks on the VNX. The -check option can be
executed either by using the -all option or by specifying one or more
of the following individual checks: SRDF standby Data Mover
configuration check (r1_dm_config, r2_dm_config), SRDF session
state check (r1_session, r2_session), Device group configuration
check (r1_dev_group, r2_dev_group), Data Mover mirrored device
state check (dev_not_normal), and SRDF restored state check
(restored). In these checks, r1 represents the source side and r2
represents the destination side.
When the -all option is used, all the checks are performed
automatically. If the -check option detects invalid configurations or
state issues, it prints relevant warning messages with recommended
actions so that the issues can be resolved before running the activate
or restore options. You can use the -check option to perform health
checks at any time.
Note: To run the -check option, you must log in to the VNX either as
nasadmin and then switch (su) to root, or as rdfadmin and then switch (su)
to root.
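As a quick reference, the option combinations described above can be collected into a small helper that prints the corresponding command line. This is an illustrative sketch, not part of the eNAS CLI; the scenario names are hypothetical, and only the flags come from the option descriptions above.

```shell
#!/bin/sh
# Sketch: map the failover/restore scenarios described in OPTIONS to the
# nas_rdf invocations they call for. Scenario names are illustrative;
# only the flags come from this guide.
nas_rdf_cmdline() {
    case "$1" in
        activate)          echo "/nas/sbin/nas_rdf -activate" ;;
        activate_reverse)  echo "/nas/sbin/nas_rdf -activate -reverse" ;;
        activate_nocheck)  echo "/nas/sbin/nas_rdf -activate -nocheck" ;;
        activate_skip_rdf) echo "/nas/sbin/nas_rdf -activate -skip_rdf_operations" ;;
        restore)           echo "/nas/sbin/nas_rdf -restore" ;;
        restore_skip_rdf)  echo "/nas/sbin/nas_rdf -restore -skip_rdf_operations -skip_SiteA_shutdown" ;;
        check_all)         echo "/nas/sbin/nas_rdf -check -all" ;;
        *)                 echo "unknown scenario: $1" >&2; return 1 ;;
    esac
}

nas_rdf_cmdline restore_skip_rdf
```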
SEE ALSO
Using SRDF/S with VNX for Disaster Recovery, Using SRDF/S with
VNX, and nas_cel.
EXAMPLE #1
To start the initialization process on a destination VNX in an active/passive
SRDF/S configuration, as a nasadmin su to root user, type:
# /nas/sbin/nas_rdf -init
Discover local storage devices ...
Discovering storage on eng564168 (may take several minutes)
done
Start R2 dos client ...
done
Start R2 nas client ...
done
Contact CS_A ... is alive
Create a new login account to manage the RDF site CELERRA
Caution: For an active-active configuration, avoid using the same UID
that was used for the rdfadmin account on the other side.
New login username and UID (example: rdfadmin:500): rdfadmin:600
done
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
Changing password for user rdfadmin.
passwd: all authentication tokens updated successfully.
done
operation in progress (not interruptible)...
id = 1
name = CS_A
owner = 600
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 10.245.64.169
celerra_id = 0001949004310028
passphrase = nasadmin
Discover remote storage devices ...done
The following servers have been detected on the system (CS_B):
id type acl slot groupID state name
1 4 2000 2 0 server_2
2 1 0 3 0 server_3
Please enter the id(s) of the server(s) you wish to reserve
(separated by spaces) or "none" for no servers.
Select server(s) to use as standby: 1
operation in progress (not interruptible)...
id = 1
name = CS_A
owner = 600
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 10.245.64.169
celerra_id = 0001949004310028
passphrase = nasadmin
EXAMPLE #2
To initiate an SRDF failover from the source VNX to the destination, as a
rdfadmin su to root, type:
# /nas/sbin/nas_rdf -activate
Is remote site CELERRA completely shut down (power OFF)?
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000187430809
Successfully pinged (Remotely) Symmetrix ID: 000190100559
Successfully pinged (Remotely) Symmetrix ID: 000190100582
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
fsck 1.35 (28-Feb-2004)
/dev/ndj1: recovering journal
/dev/ndj1: clean, 13780/231360 files, 233674/461860 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
id type acl slot groupID state name
1  1  1000  2  0  server_2
2  4  1000  3  0  server_3
3  1  1000  4  0  server_4
4  4  1000  5  0  server_5
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 045A in (0557,005)............................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
EXAMPLE #3
To initiate an SRDF failover from the source VNX to the destination,
without the SRDF health check, as rdfadmin su to root user, type:
# /nas/sbin/nas_rdf -activate -nocheck
Skipping SRDF health check ....
Is remote site CELERRA completely shut down (power OFF)?
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000187430809
Successfully pinged (Remotely) Symmetrix ID: 000190100559
Successfully pinged (Remotely) Symmetrix ID: 000190100582
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
fsck 1.35 (28-Feb-2004)
/dev/ndj1: recovering journal
/dev/ndj1: clean, 13780/231360 files, 233674/461860 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 045A in (0557,005)............................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
EXAMPLE #4
To initiate a Dynamic SRDF failover from the source VNX to the
destination, as rdfadmin su to root user, type:
# /nas/sbin/nas_rdf -activate -reverse
Is remote site CELERRA completely shut down (power OFF)?
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000280600118
Write Disable device(s) on SA at source (R1)...............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
fsck 1.35 (28-Feb-2004)
/dev/sdj1: recovering journal
Clearing orphaned inode 37188 (uid=0, gid=0, mode=0100644, size=0)
/dev/sdj1: clean, 12860/219968 files, 194793/439797 blocks
id type acl slot groupID state name
1  1  1000  2  0  server_2
2  4  1000  3  0  server_3
3  4  2000  4  0  server_4
4  4  2000  5  0  server_5
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
An RDF ’Swap Personality’ operation execution is
in progress for device group ’1R2_500_1’. Please wait...
Swap RDF Personality......................................Started.
Swap RDF Personality......................................Done.
The RDF ’Swap Personality’ operation successfully executed for
device group ’1R2_500_1’.
An RDF ’Incremental Establish’ operation execution is
in progress for device group ’1R2_500_1’. Please wait...
Suspend RDF link(s).......................................Done.
Resume RDF link(s)........................................Started.
Merge device track tables between source and target.......Started.
Devices: 0009-000B ...................................... Merged.
Devices: 0032-0034 ...................................... Merged.
Devices: 0035-0037 ...................................... Merged.
Devices: 0038-003A ...................................... Merged.
Devices: 003B-003D ...................................... Merged.
Devices: 003E-0040 ...................................... Merged.
Devices: 0041-0043 ...................................... Merged.
Devices: 0044-0046 ...................................... Merged.
Devices: 0047-0049 ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
The RDF ’Incremental Establish’ operation successfully initiated for
device group ’1R2_500_1’.
EXAMPLE #5
To restore a source VNX after failover, as rdfadmin su to root user, type:
# /nas/sbin/nas_rdf -restore
Is remote site CELERRA ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact Joker_R1_CS0 ... is alive
Restore will now reboot the source site control station.
Do you wish to continue? [yes or no]: yes
Device Group (DG) Name      : 1R2_500_5
DG’s Type                   : RDF2
DG’s Symmetrix ID           : 000190100557

                  Target (R2) View          Source (R1) View       MODES
-------------------------------- -- ------------------------ ----- ------------
               ST                 LI                 ST
Standard        A                  N                  A
Logical        T  R1 Inv  R2 Inv   K   T    R1 Inv  R2 Inv          RDF Pair
Device   Dev   E  Tracks  Tracks   S   Dev  E Tracks Tracks   MDA   STATE
-------------------------------- -- ------------------------ ----- ------------
DEV001 045A RW      10       0 RW 045A WD       0       0 S..   R1 Updated
DEV002 045B RW    2054       0 NR 045B WD       0       0 S..   Failed Over
DEV003 045C RW       0       0 NR 045C WD       0       0 S..   Failed Over
DEV004 045D RW       0       0 NR 045D WD       0       0 S..   Failed Over
DEV005 045E RW    1284       0 NR 045E WD       0       0 S..   Failed Over
DEV006 045F RW       0       0 NR 045F WD       0       0 S..   Failed Over
DEV007 0467 RW       0       0 NR 0467 WD       0       0 S..   Failed Over
DEV008 0468 RW       2       0 NR 0468 WD       0       0 S..   Failed Over
DEV009 0469 RW       0       0 NR 0469 WD       0       0 S..   Failed Over
DEV010 046A RW       0       0 NR 046A WD       0       0 S..   Failed Over
DEV011 046B RW       2       0 NR 046B WD       0       0 S..   Failed Over
DEV012 046C RW       0       0 NR 046C WD       0       0 S..   Failed Over
DEV013 046D RW       0       0 NR 046D WD       0       0 S..   Failed Over
DEV014 046E RW       0       0 NR 046E WD       0       0 S..   Failed Over
DEV015 046F RW       2       0 NR 046F WD       0       0 S..   Failed Over
DEV016 0470 RW       0       0 NR 0470 WD       0       0 S..   Failed Over
DEV017 0471 RW       2       0 NR 0471 WD       0       0 S..   Failed Over
DEV018 0472 RW       0       0 NR 0472 WD       0       0 S..   Failed Over
DEV019 0473 RW       0       0 NR 0473 WD       0       0 S..   Failed Over
DEV020 0474 RW       0       0 NR 0474 WD       0       0 S..   Failed Over
DEV021 0475 RW       0       0 NR 0475 WD       0       0 S..   Failed Over
DEV022 0476 RW       0       0 NR 0476 WD       0       0 S..   Failed Over
DEV023 0477 RW       2       0 NR 0477 WD       0       0 S..   Failed Over
DEV024 0478 RW       2       0 NR 0478 WD       0       0 S..   Failed Over
DEV025 0479 RW       0       0 NR 0479 WD       0       0 S..   Failed Over
DEV026 047A RW       0       0 NR 047A WD       0       0 S..   Failed Over
DEV027 047B RW       0       0 NR 047B WD       0       0 S..   Failed Over
DEV028 047C RW       0       0 NR 047C WD       0       0 S..   Failed Over
DEV029 047D RW       0       0 NR 047D WD       0       0 S..   Failed Over
DEV030 047E RW       0       0 NR 047E WD       0       0 S..   Failed Over
DEV031 047F RW       0       0 NR 047F WD       0       0 S..   Failed Over
DEV032 0480 RW       0       0 NR 0480 WD       0       0 S..   Failed Over
DEV033 0481 RW       0       0 NR 0481 WD       0       0 S..   Failed Over
DEV034 0482 RW       0       0 NR 0482 WD       0       0 S..   Failed Over
DEV035 0483 RW       0       0 NR 0483 WD       0       0 S..   Failed Over
DEV036 0484 RW       0       0 NR 0484 WD       0       0 S..   Failed Over
DEV037 0485 RW       0       0 NR 0485 WD       0       0 S..   Failed Over
DEV038 0486 RW       0       0 NR 0486 WD       0       0 S..   Failed Over
DEV039 0487 RW       0       0 NR 0487 WD       0       0 S..   Failed Over
DEV040 0488 RW       0       0 NR 0488 WD       0       0 S..   Failed Over
DEV041 0489 RW       0       0 NR 0489 WD       0       0 S..   Failed Over
DEV042 048A RW       0       0 NR 048A WD       0       0 S..   Failed Over
DEV043 048B RW       0       0 NR 048B WD       0       0 S..   Failed Over
DEV044 048C RW       0       0 NR 048C WD       0       0 S..   Failed Over
DEV045 048D RW       0       0 NR 048D WD       0       0 S..   Failed Over
DEV046 048E RW       0       0 NR 048E WD       0       0 S..   Failed Over
DEV047 048F RW       2       0 NR 048F WD       0       0 S..   Failed Over
DEV048 0490 RW       0       0 NR 0490 WD       0       0 S..   Failed Over
DEV049 0491 RW       0       0 NR 0491 WD       0       0 S..   Failed Over
DEV050 0492 RW       0       0 NR 0492 WD       0       0 S..   Failed Over
DEV051 0493 RW       0       0 NR 0493 WD       0       0 S..   Failed Over
DEV052 0494 RW       0       0 NR 0494 WD       0       0 S..   Failed Over
DEV053 0495 RW       0       0 NR 0495 WD       0       0 S..   Failed Over
DEV054 0496 RW       0       0 NR 0496 WD       0       0 S..   Failed Over
DEV055 0497 RW       2       0 NR 0497 WD       0       0 S..   Failed Over
DEV056 0498 RW       2       0 NR 0498 WD       0       0 S..   Failed Over
DEV057 0499 RW       0       0 NR 0499 WD       0       0 S..   Failed Over
DEV058 049A RW       0       0 NR 049A WD       0       0 S..   Failed Over
DEV059 049B RW       0       0 NR 049B WD       0       0 S..   Failed Over
DEV060 049C RW       0       0 NR 049C WD       0       0 S..   Failed Over
DEV061 049D RW       0       0 NR 049D WD       0       0 S..   Failed Over
DEV062 049E RW       0       0 NR 049E WD       0       0 S..   Failed Over
DEV063 049F RW       0       0 NR 049F WD       0       0 S..   Failed Over
DEV064 04A0 RW       0       0 NR 04A0 WD       0       0 S..   Failed Over
DEV065 04A1 RW       0       0 NR 04A1 WD       0       0 S..   Failed Over
DEV066 04A2 RW       0       0 NR 04A2 WD       0       0 S..   Failed Over
DEV067 04A3 RW       0       0 NR 04A3 WD       0       0 S..   Failed Over
DEV068 04A4 RW       0       0 NR 04A4 WD       0       0 S..   Failed Over
DEV069 04A5 RW       0       0 NR 04A5 WD       0       0 S..   Failed Over
DEV070 04A6 RW       0       0 NR 04A6 WD       0       0 S..   Failed Over

Total            -------- -------- -------- --------
  Track(s)           3366        0        0        0
  MB(s)             105.2      0.0      0.0      0.0

Legend for MODES:
 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged.
Devices: 0478-0489 in (0557,005)......................... Merged.
Devices: 048A-049B in (0557,005)......................... Merged.
Devices: 049C-04A6 in (0557,005)......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Is remote site CELERRA ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
fsck 1.35 (28-Feb-2004)
/dev/ndj1: clean, 13836/231360 files, 233729/461860 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for 1R2_500_5 access ...done
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged.
Devices: 0478-0489 in (0557,005)......................... Merged.
Devices: 048A-049B in (0557,005)......................... Merged.
Devices: 049C-04A6 in (0557,005)......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
Waiting for 1R2_500_5 sync ...done
Starting restore on remote site CELERRA ...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
Suspend RDF link(s).......................................Done.
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A ) mode,
now would be a good time to set it back to that mode.
Would you like to set device group 1R2_500_5 to ASYNC Mode ? [yes or no]: no
done
EXAMPLE #6
To restore a source VNX after failover, without the SRDF health
check, as rdfadmin su to root user, type:
# /nas/sbin/nas_rdf -restore -nocheck
Skipping SRDF health check ....
Is remote site CELERRA ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact Joker_R1_CS0 ... is alive
Restore will now reboot the source site control station.
Do you wish to continue? [yes or no]: yes
Device Group (DG) Name : 1R2_500_5
DG’s Type : RDF2
DG’s Symmetrix ID : 000190100557
                  Target (R2) View          Source (R1) View       MODES
-------------------------------- -- ------------------------ ----- ------------
               ST                 LI                 ST
Standard        A                  N                  A
Logical        T  R1 Inv  R2 Inv   K   T    R1 Inv  R2 Inv          RDF Pair
Device   Dev   E  Tracks  Tracks   S   Dev  E Tracks Tracks   MDA   STATE
-------------------------------- -- ------------------------ ----- ------------
DEV001 045A RW      10       0 RW 045A WD       0       0 S..   R1 Updated
DEV002 045B RW    2054       0 NR 045B WD       0       0 S..   Failed Over
DEV003 045C RW       0       0 NR 045C WD       0       0 S..   Failed Over
DEV004 045D RW       0       0 NR 045D WD       0       0 S..   Failed Over
DEV005 045E RW    1284       0 NR 045E WD       0       0 S..   Failed Over
DEV006 045F RW       0       0 NR 045F WD       0       0 S..   Failed Over
DEV007 0467 RW       0       0 NR 0467 WD       0       0 S..   Failed Over
DEV008 0468 RW       2       0 NR 0468 WD       0       0 S..   Failed Over
DEV009 0469 RW       0       0 NR 0469 WD       0       0 S..   Failed Over
DEV010 046A RW       0       0 NR 046A WD       0       0 S..   Failed Over
DEV011 046B RW       2       0 NR 046B WD       0       0 S..   Failed Over
DEV012 046C RW       0       0 NR 046C WD       0       0 S..   Failed Over
DEV013 046D RW       0       0 NR 046D WD       0       0 S..   Failed Over
DEV014 046E RW       0       0 NR 046E WD       0       0 S..   Failed Over
DEV015 046F RW       2       0 NR 046F WD       0       0 S..   Failed Over
DEV016 0470 RW       0       0 NR 0470 WD       0       0 S..   Failed Over
DEV017 0471 RW       2       0 NR 0471 WD       0       0 S..   Failed Over
DEV018 0472 RW       0       0 NR 0472 WD       0       0 S..   Failed Over
DEV019 0473 RW       0       0 NR 0473 WD       0       0 S..   Failed Over
DEV020 0474 RW       0       0 NR 0474 WD       0       0 S..   Failed Over
DEV021 0475 RW       0       0 NR 0475 WD       0       0 S..   Failed Over
DEV022 0476 RW       0       0 NR 0476 WD       0       0 S..   Failed Over
DEV023 0477 RW       2       0 NR 0477 WD       0       0 S..   Failed Over
DEV024 0478 RW       2       0 NR 0478 WD       0       0 S..   Failed Over
DEV025 0479 RW       0       0 NR 0479 WD       0       0 S..   Failed Over
DEV026 047A RW       0       0 NR 047A WD       0       0 S..   Failed Over
DEV027 047B RW       0       0 NR 047B WD       0       0 S..   Failed Over
DEV028 047C RW       0       0 NR 047C WD       0       0 S..   Failed Over
DEV029 047D RW       0       0 NR 047D WD       0       0 S..   Failed Over
DEV030 047E RW       0       0 NR 047E WD       0       0 S..   Failed Over
DEV031 047F RW       0       0 NR 047F WD       0       0 S..   Failed Over
DEV032 0480 RW       0       0 NR 0480 WD       0       0 S..   Failed Over
DEV033 0481 RW       0       0 NR 0481 WD       0       0 S..   Failed Over
DEV034 0482 RW       0       0 NR 0482 WD       0       0 S..   Failed Over
DEV035 0483 RW       0       0 NR 0483 WD       0       0 S..   Failed Over
DEV036 0484 RW       0       0 NR 0484 WD       0       0 S..   Failed Over
DEV037 0485 RW       0       0 NR 0485 WD       0       0 S..   Failed Over
DEV038 0486 RW       0       0 NR 0486 WD       0       0 S..   Failed Over
DEV039 0487 RW       0       0 NR 0487 WD       0       0 S..   Failed Over
DEV040 0488 RW       0       0 NR 0488 WD       0       0 S..   Failed Over
DEV041 0489 RW       0       0 NR 0489 WD       0       0 S..   Failed Over
DEV042 048A RW       0       0 NR 048A WD       0       0 S..   Failed Over
DEV043 048B RW       0       0 NR 048B WD       0       0 S..   Failed Over
DEV044 048C RW       0       0 NR 048C WD       0       0 S..   Failed Over
DEV045 048D RW       0       0 NR 048D WD       0       0 S..   Failed Over
DEV046 048E RW       0       0 NR 048E WD       0       0 S..   Failed Over
DEV047 048F RW       2       0 NR 048F WD       0       0 S..   Failed Over
DEV048 0490 RW       0       0 NR 0490 WD       0       0 S..   Failed Over
DEV049 0491 RW       0       0 NR 0491 WD       0       0 S..   Failed Over
DEV050 0492 RW       0       0 NR 0492 WD       0       0 S..   Failed Over
DEV051 0493 RW       0       0 NR 0493 WD       0       0 S..   Failed Over
DEV052 0494 RW       0       0 NR 0494 WD       0       0 S..   Failed Over
DEV053 0495 RW       0       0 NR 0495 WD       0       0 S..   Failed Over
DEV054 0496 RW       0       0 NR 0496 WD       0       0 S..   Failed Over
DEV055 0497 RW       2       0 NR 0497 WD       0       0 S..   Failed Over
DEV056 0498 RW       2       0 NR 0498 WD       0       0 S..   Failed Over
DEV057 0499 RW       0       0 NR 0499 WD       0       0 S..   Failed Over
DEV058 049A RW       0       0 NR 049A WD       0       0 S..   Failed Over
DEV059 049B RW       0       0 NR 049B WD       0       0 S..   Failed Over
DEV060 049C RW       0       0 NR 049C WD       0       0 S..   Failed Over
DEV061 049D RW       0       0 NR 049D WD       0       0 S..   Failed Over
DEV062 049E RW       0       0 NR 049E WD       0       0 S..   Failed Over
DEV063 049F RW       0       0 NR 049F WD       0       0 S..   Failed Over
DEV064 04A0 RW       0       0 NR 04A0 WD       0       0 S..   Failed Over
DEV065 04A1 RW       0       0 NR 04A1 WD       0       0 S..   Failed Over
DEV066 04A2 RW       0       0 NR 04A2 WD       0       0 S..   Failed Over
DEV067 04A3 RW       0       0 NR 04A3 WD       0       0 S..   Failed Over
DEV068 04A4 RW       0       0 NR 04A4 WD       0       0 S..   Failed Over
DEV069 04A5 RW       0       0 NR 04A5 WD       0       0 S..   Failed Over
DEV070 04A6 RW       0       0 NR 04A6 WD       0       0 S..   Failed Over

Total            -------- -------- -------- --------
  Track(s)           3366        0        0        0
  MB(s)             105.2      0.0      0.0      0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged.
Devices: 0478-0489 in (0557,005)......................... Merged.
Devices: 048A-049B in (0557,005)......................... Merged.
Devices: 049C-04A6 in (0557,005)......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Is remote site CELERRA ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
fsck 1.35 (28-Feb-2004)
/dev/ndj1: clean, 13836/231360 files, 233729/461860 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for 1R2_500_5 access ...done
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged.
Devices: 0478-0489 in (0557,005)......................... Merged.
Devices: 048A-049B in (0557,005)......................... Merged.
Devices: 049C-04A6 in (0557,005)......................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
Waiting for 1R2_500_5 sync ...done
Starting restore on remote site CELERRA ...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
Suspend RDF link(s).......................................Done.
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A ) mode,
now would be a good time to set it back to that mode.
Would you like to set device group 1R2_500_5 to ASYNC Mode ? [yes or no]: no
done
EXAMPLE #7
To restore a source VNX after failover, when using Dynamic SRDF, as rdfadmin
su to root user, type:
# /nas/sbin/nas_rdf -restore
Is remote site CELERRA ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact eng17335 ... is alive
Restore will now reboot the source site control station.
Do you wish to continue? [yes or no]: yes
Device Group (DG) Name      : 1R2_500_1
DG’s Type                   : RDF1
DG’s Symmetrix ID           : 000280600187 (Microcode Version: 5568)
Remote Symmetrix ID         : 000280600118 (Microcode Version: 5568)
RDF (RA) Group Number       : 1 (00)

                  Source (R1) View          Target (R2) View       MODES
-------------------------------- -- ------------------------ ----- ------------
               ST                 LI                 ST
Standard        A                  N                  A
Logical        T  R1 Inv  R2 Inv   K   T    R1 Inv  R2 Inv          RDF Pair
Device   Dev   E  Tracks  Tracks   S   Dev  E Tracks Tracks   MDA   STATE
-------------------------------- -- ------------------------ ----- ------------
DEV001 0056 RW       0       0 RW 0030 WD       0       0 S..   Synchronized
DEV002 0057 RW       0       0 RW 0031 WD       0       0 S..   Synchronized
DEV003 0032 RW       0       0 RW 000C WD       0       0 S..   Synchronized
...............
BCV008 0069 RW       0       0 RW 005F WD       0       0 S..   Synchronized
BCV009 006A RW       0       0 RW 0060 WD       0       0 S..   Synchronized
BCV010 006B RW       0       0 RW 0061 WD       0       0 S..   Synchronized

Total            -------- -------- -------- --------
  Track(s)              0        0        0        0
  MB(s)               0.0      0.0      0.0      0.0

Legend for MODES:
 M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)           : X = Enabled, . = Disabled
 A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off
Is remote site CELERRA ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
fsck 1.35 (28-Feb-2004)
/dev/sdj1: clean, 12956/219968 files, 188765/439797 blocks
An RDF ’Failover’ operation execution is
in progress for device group ’1R2_500_1’. Please wait...
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Swap RDF Personality......................................Started.
Swap RDF Personality......................................Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at target (R2)..........Done.
The RDF ’Failover’ operation successfully executed for
device group ’1R2_500_1’.
Waiting for 1R2_500_1 sync ...done
Starting restore on remote site CELERRA ...
Suspend RDF link(s).......................................Done.
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4: standby is not configured
server_5 :
Error 4003: server_5: standby is not configured
done
EXAMPLE #8
To run all available checks on a source VNX, as a nasadmin
su to root user, type:
# /nas/sbin/nas_rdf -check -all
--------------------- SRDF Health Checks ---------------------
SRDF: Checking system is restored........................ Pass
SRDF: Checking device is normal.......................... Pass
SRDF: Checking R1 SRDF session is Synch or Consistent.... Pass
SRDF: Checking R1 Data Mover configuration is valid...... Pass
SRDF: Checking R1 devices are available.................. Pass
SRDF: Checking R1 device group has all devices........... Pass
SRDF: Checking R2 SRDF session is Synch or Consistent.... Pass
SRDF: Checking R2 Data Mover configuration is valid...... Pass
SRDF: Checking R2 devices are available.................. Pass
SRDF: Checking R2 device group has all devices........... Pass
EXAMPLE #9
To run one or more specific available checks on a source VNX,
as a nasadmin su to root user, type:
# /nas/sbin/nas_rdf -check r1_dev_group,r2_dev_group
--------------------- SRDF Health Checks ---------------------
SRDF: Checking R1 device group has all devices........... Pass
SRDF: Checking R2 device group has all devices........... Pass
EXAMPLE #10
To initiate an SRDF failover from the source VNX to the destination, without
the SRDF health check, for the following use cases:
* SRDF STAR concurrent or cascaded
* SRDF concurrent or cascaded
* SRDF R2 enable (Split)
As rdfadmin su to root user, type:
# /nas/sbin/nas_rdf -activate -skip_rdf_operations -nocheck
SiteA to SiteB/SiteC failover case
Skipping SRDF health check ....
Skipping Site A shutdown process for the skip_rdf_operations option ....
Successfully pinged (Remotely) Symmetrix ID: 000194900462
Successfully pinged (Remotely) Symmetrix ID: 000194900546
Skipping symrdf failover process ....
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
fsck 1.39 (29-May-2006)
/dev/ndj1: recovering journal
/dev/ndj1: clean, 15012/252928 files, 271838/516080 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
id type acl slot groupID state name
1  1  0  2  0  server_2
2  1  0  3  0  server_3
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
Skipping symrdf update process ....
A reboot Control Station request was sent to Site A to clean up old processes
....
SiteB to SiteC failover case
[root@CS_C rdfadmin]# /nas/sbin/nas_rdf -activate -skip_rdf_operations -nocheck
Skipping Site A shutdown process ....
For Site B to Site C failover or Site C to Site B failover, the nas_rdf
-restore -skip_rdf_operations -skip_SiteA_shutdown and reboot -f -n operations
must be done on the source side Control Station (with read-write backend) to
clean up old processes before continuing this activate operation, unless the
source side is not reachable or destroyed.
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000194900431
Successfully pinged (Remotely) Symmetrix ID: 000194900546
Successfully pinged (Remotely) Symmetrix ID: 000194900673
Skipping symrdf failover process ....
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
fsck 1.39 (29-May-2006)
/dev/ndj1: clean, 14717/252928 files, 279439/516080 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
Skipping symrdf update process ....
A reboot Control Station request was sent to 10.245.64.168 to clean up old
processes ....
EXAMPLE #11
-----------
To initiate an SRDF failover from the source VNX to the destination, without
the SRDF health check, for the case where the SiteA Data Movers are already
shut down and the Control Station is already rebooted, type:
# /nas/sbin/nas_rdf -activate -skip_SiteA_shutdown -nocheck
Skipping SRDF health check ....
Skipping Site A shutdown process ....
This skip_SiteA_shutdown option is only for the case the Site A Data Movers
have been already shutdown and the Site A Control Station has been already
rebooted to clean up old processes.
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000194900431
Successfully pinged (Remotely) Symmetrix ID: 000194900462
Successfully pinged (Remotely) Symmetrix ID: 000194900673
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
fsck 1.39 (29-May-2006)
/dev/ndj1: recovering journal
/dev/ndj1: clean, 14237/252928 files, 297432/516080 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
id   type  acl   slot  groupID  state  name
1    4     2000  2              0      server_2
2    1     1000  3              0      server_3
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 0078-0078 in (0546,011)..........................Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
A shutdown request was sent to Site A to clean up old processes ....
EXAMPLE #12
-----------
To restore a source VNX after failover for the following use cases, as a
nasadmin su to root user, type:
# /nas/sbin/nas_rdf -restore -skip_rdf_operations
* SRDF STAR concurrent or cascaded
* SRDF concurrent or cascaded
* SRDF R2 enable (Split)
Restore on SiteB/SiteC
Skipping session check ....
Is remote site CELERRA ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact eng564169 ... is alive
Restore will now reboot the source site control station. This process may take
several minutes.
Do you wish to continue? [yes or no]: yes
Halting SiteA Data Movers and rebooting SiteA Control Station ....
Checking SiteA Data Mover halt status ....
Skipping symrdf update operation ....
Is remote site CELERRA ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 :
Error 4003: server_3 : standby is not configured
fsck 1.39 (29-May-2006)
/dev/ndj1: clean, 14716/252928 files, 279441/516080 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Skipping symrdf failback operation & Site A restore ....
Restore on SiteA
To restore on SiteA, as a nasadmin su to root user, type:
[root@CS_A nasadmin]# /nasmcd/sbin/nas_rdf -restore -skip_rdf_operations
Waiting for NAS services to finish starting......................... Done
Ensure that SiteA is currently write-enabled to continue this restore
operation.
Do you wish to continue? [yes or no]: yes
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 :
Error 4003: server_3 : standby is not configured
Skipping symrdf set async operation ....
Run 'nas_diskmark -mark -all' on all Control Stations in the SRDF
configuration to make sure the SRDF configuration and nasdb are restored
completely.
Starting Services ...done
EXAMPLE #13
-----------
To disable SiteB for failover from SiteB to SiteC, as a rdfadmin su to root
user, type:
# /nas/sbin/nas_rdf -restore -skip_rdf_operations -skip_SiteA_shutdown
Skipping session check ....
Skipping Site A shutdown process ....
Skipping symrdf update operation ....
Is remote site CELERRA ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 :
Error 4003: server_3 : standby is not configured
fsck 1.39 (29-May-2006)
/dev/ndj1: clean, 14717/252928 files, 279439/516080 blocks
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Waiting for nbs clients to die ... done
Waiting for nbs clients to start ... done
Skipping symrdf failback operation & Site A restore ....
----------------------------------------------------------Last Modified: May 28, 2012 3:45 p.m.
nas_replicate
Manages loopback, local, and remote VNX Replicator sessions.
SYNOPSIS
--------
nas_replicate
-list [-id]
| -info {-all|id=<sessionId>|<name>}
| -create <name>
-source -fs {<fsName>|id=<fsId>}
[-sav {<srcSavVolStoragePool>|id=<srcSavVolStoragePoolId>}
[-storageSystem <srcSavStorageSerialNumber>]]
-destination {-fs {id=<dstFsId>|<existing_dstFsName>}
| -pool {id=<dstStoragePoolId>|<dstStoragePool>}
[-storageSystem <dstStorageSerialNumber>]}
[-vdm <dstVdmName>]
[-sav {id=<dstSavVolStoragePoolId>|<dstSavVolStoragePool>}
[-storageSystem <dstSavStorageSerialNumber> ] ]
-interconnect {<name>|id=<interConnectId>}
[-source_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[-destination_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
[-overwrite_destination][-tape_copy][-background]
| -create <name>
-source -vdm <vdmName>
-destination {-vdm <existing_dstVdmName>|-pool
{id=<dstStoragePoolId>|<dstStoragePool>}[-storageSystem
<dstStorageSerialNumber> ]}
-interconnect {<name>|id=<interConnectId>}
[-source_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[-destination_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
[-overwrite_destination][-background]
| -start {<name>|id=<sessionId>}
[-interconnect {<name>|id=<interConnectId>}]
[-source_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[-destination_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
[-overwrite_destination][-reverse][-full_copy][-background]
| -modify {<name>|id=<sessionId>} [-name <new name>]
[-source_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[-destination_interface {ip=<ipAddr>|<nameServiceInterfaceName>}]
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
| -stop {<name>|id=<sessionId>} [-mode {source|destination|both}]
[-background]
| -delete {<name>|id=<sessionId>} [-mode {source|destination|both}]
[-background]
| -failover {<name>|id=<sessionId>} [-background]
| -switchover {<name>|id=<sessionId>}
| -reverse {<name>|id=<sessionId>} [-background]
| -refresh {<name>|id=<sessionId>} [-source {<ckptName>|id=<ckptId>}
-destination {<ckptName>|id=<ckptId>}] [-background]
Note: This command manages replication sessions using VNX Replicator. For
a one-time file system copy using VNX Replicator, use the nas_copy
command. For ongoing file system copy, use the nas_replicate command.
DESCRIPTION
-----------
nas_replicate creates, manages, or displays session information for
ongoing VNX Replicator replication of a file system or Virtual Data
Mover (VDM) at a destination using an existing Data Mover interconnect.
Each session handles a single source object and destination, and is
assigned a globally unique ID, fixed for the life of the session.
In response to a potential disaster scenario, use nas_replicate to
perform a failover of a specified replication session with possible data
loss. The -switchover option switches over a replication relationship
and performs synchronization of the source and destination without
data loss. Use nas_replicate to also reverse the direction of a
replication session or refresh the destination side with updates to the
source based on a time-out of synchronization value or on demand.
OPTIONS
-------
-list [-id]
Displays all configured (or stopped) replication sessions on each Data Mover
in the VNX for file cabinet. Each session is represented by either a name
or a session ID that is generated automatically whenever a session is
configured and is globally unique.
Use this option to obtain the session ID needed for another
command. Since session IDs are lengthy, the session ID obtained from
this command can be copied and pasted into the command.
-info {-all|id=<sessionId>|<name>}
Displays the status of a specific configured (or stopped) replication
session or copy session, or the status of all replication sessions.
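Since the lengthy session ID from -list is typically pasted into -info, the copy step can be scripted. A minimal sketch, assuming a two-column `-list -id` output shape; the session name, ID, and parsing below are invented for illustration and are not the documented output format:

```shell
# Invented sample standing in for `nas_replicate -list -id` output;
# real IDs come from your own system.
sample='id                                               name
182_APM00064600086_0000_173_APM00072901601_0000  fs1_rep'

# Grab the ID for a given session name so it can be pasted into
# `nas_replicate -info id=<sessionId>`.
session_id=$(printf '%s\n' "$sample" | awk -v n=fs1_rep '$2 == n {print $1}')
echo "nas_replicate -info id=$session_id"
```
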
CREATING FILE SYSTEM REPLICATION
--------------------------------
-create <name>
Assigns a name to the file system replication session. The name must
be unique for each Data Mover pair, which is defined by the
interconnect.
-source -fs {<fsName>|id=<fsId>} [-sav {<srcSavVolStoragePool>|
id=<srcSavVolStoragePoolId>} [-storageSystem <srcSavStorageSerialNumber>]]
Specifies the name or ID of the existing source file system to replicate.
The source file system must be mounted as read-only or read and write.
Note: If the source file system is mounted to a VDM and the goal is to
replicate a CIFS environment for disaster recovery (that is, replicate
a VDM and the file systems mounted to the VDM), create a session to
replicate the VDM first, before replicating a file system mounted to
the VDM.
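Per the note above, a CIFS disaster-recovery configuration replicates the VDM before any file system mounted to it. A hedged sketch of that ordering (vdm1, cifs_fs, pool ID 40, and interconnect NYs3_LAs2 are invented placeholders; the commands are only assembled and printed, never executed):

```shell
vdm=vdm1; fs=cifs_fs; ic=NYs3_LAs2   # invented names for illustration

# 1. Replicate the VDM (the CIFS working environment) first.
cmd1="nas_replicate -create ${vdm}_rep -source -vdm $vdm \
-destination -pool id=40 -interconnect $ic -max_time_out_of_sync 5"

# 2. Only then replicate a file system mounted to that VDM.
cmd2="nas_replicate -create ${fs}_rep -source -fs $fs \
-destination -pool id=40 -interconnect $ic -max_time_out_of_sync 10"

printf '%s\n%s\n' "$cmd1" "$cmd2"
```
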
The -sav option allocates a storage pool for all subsequent checkpoints
for the file system. By default, if checkpoint storage (the checkpoint
SavVol) needs to be allocated for checkpoints of the file system, the
command uses the same storage pool used to create the source file
system.
The -storageSystem option identifies the storage system on which all
subsequent checkpoints for the source file system reside. For RAID
group-based pools, specify the backend storage system when there are
multiple systems attached. For mapped pools, specify the pool ID or the
pool ID and storage system serial number to uniquely identify a pool.
-destination {-fs {<existing_dstFsName>|id=<dstFsId>}|-pool
{<dstStoragePool>|id=<dstStoragePoolId>} [-storageSystem
<dstStorageSerialNumber>]}
Specifies an existing destination file system or the storage needed
to create the destination file system. An existing destination file
system must be mounted as read-only and the same size as the
source. Specifying a storage pool or ID creates the read-only,
destination file system automatically, using the same name and
size as the source file system.
The -storageSystem option identifies the storage system on which the
destination file system will reside. This is necessary when there are
multiple back-end systems attached. Use nas_storage -list to obtain
attached storage system serial numbers.
[-vdm <dstVdmName>] [-sav {id=<dstSavVolStoragePoolId>
|<dstSavVolStoragePool>} [-storageSystem <dstStorageSerialNumber>]]
Specifying a pool with the -vdm option mounts the destination file
system to an existing VDM as part of replication in a CIFS environment.
The -sav option allocates a storage pool for all subsequent checkpoints
of the destination file system. By default, if destination checkpoint
storage needs to be allocated for checkpoints, the command uses the same
storage pool used to create the destination file system. The
-storageSystem option identifies the storage system on which the
destination checkpoint will reside. This is necessary when there are
multiple back-end systems attached. Use nas_storage -list to obtain
attached storage system serial numbers.
By default, the destination file system name will be the same as the
source file system name. If a file system with the same name as the
source file system already exists on the destination, the naming
convention <source_fs_name>_replica<#> will be used. A number 1-4 is
assigned according to how many replicas of that file system already
exist.
-interconnect {<name>|id=<interConnectId>}
Specifies the local (source) side of an established Data Mover
interconnect to use for this replication session.
Use the nas_cel -interconnect -list command on the source VNX
for file to list the interconnects available to the replication
sessions.
[-source_interface {<nameServiceInterfaceName>|ip=<ipAddr>}]
Instructs the replication session to use a specific local interface
defined for the interconnect on the source VNX instead of
selecting the local interface supporting the lowest number of
sessions (the default). If this local interface was defined for the
interconnect using a name service interface name, specify the
name service interface name; if it was defined using an IP
address, specify the IP address. If you define an interface
using an IP address, make sure that the destination interface
uses the same IPv4/IPv6 protocol. An IPv4 interface cannot
connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
The source_interfaces field of the output from the nas_cel
-interconnect -info command shows how the source interface
was defined. This option does not apply to a loopback
interconnect, which always uses 127.0.0.1.
If no source interface is specified, the system will select an
interface. This ensures that the interface selected can
communicate with the destination interface.
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
Instructs the replication session to use a specific peer interface
defined for the interconnect on the destination VNX instead of
selecting the peer interface supporting the lowest number of
sessions (the default). If this peer interface was defined for the
interconnect using a name service interface name, specify the
name service interface name; if it was defined using an IP
address, specify the IP address. If you define an interface using an
IP address, make sure that the source interface uses the same
IPv4/IPv6 protocol. An IPv4 interface cannot connect to an IPv6
interface and vice versa. Both sides of the connection must use the
same protocol.
The destination_interfaces field of the output from the
nas_cel -interconnect -info command shows how the peer
interface was defined. This option does not apply to a
loopback interconnect, which always uses 127.0.0.1.
If no destination interface is specified, the system will select an
interface. This ensures that the interface selected can
communicate with the source interface.
[{-max_time_out_of_sync <maxTimeOutOfSync>|
-manual_refresh}]
Specifies the time, from 1 to 1440 minutes (up to 24 hours), that the
source and destination can be out of synchronization before an
update occurs. If you do not specify a max_time_out_of_sync
value, use the -manual_refresh option to indicate that the
destination will be updated on demand using the
nas_replicate -refresh command. If no option is selected, the
refresh default time for a file system replication is 10 minutes.
[-overwrite_destination]
For an existing destination object, discards any changes made
to the destination object and restores it from the established
common base, thereby starting the replication session from a
differential copy. If this option is not specified and the
destination object contains different content than the
established common base, an error is returned.
[-tape_copy]
For file system replication only, creates and stops the
replication session to enable an initial copy using the physical
tape backup and process instead of an initial copy over the
network. Using VNX Replicator describes the procedures for
performing a tape copy, which involves a manually issued
backup to tape from the source file system, a restore from tape
to the destination file system, and a start of the replication
session.
[-background]
Executes the command in asynchronous mode. Use the
nas_task command to check the status of the command.
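The -tape_copy flow described above spans several manual steps; this sketch just enumerates them in order. The session and file system names, pool ID, and interconnect are invented, the tape steps are site-specific placeholders, and everything here is only printed, not executed:

```shell
fs=big_fs; session=${fs}_rep   # invented names

# 1. Create the session with -tape_copy: it is created and then
#    stopped, so no initial copy crosses the network.
echo "nas_replicate -create $session -source -fs $fs \
-destination -pool id=40 -interconnect NYs3_LAs2 -tape_copy"

# 2. Back up the source file system to tape, then restore the tape to
#    the destination file system (site-specific tooling; placeholders).
echo "<backup $fs to tape>; <restore tape to destination file system>"

# 3. Start the stopped session; replication then proceeds from a
#    differential copy rather than a full network copy.
echo "nas_replicate -start $session -overwrite_destination"
```
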
CREATING VDM REPLICATION
------------------------
-create <name>
Assigns a name to the VDM replication session. The name must be
unique for each Data Mover pair, which is defined by the
interconnect.
-source -vdm {<vdmName>|id=<vdmId>}
Specifies the name or ID of an existing VDM to replicate. This
replicates the CIFS working environment information contained
in the root file system of the VDM. The source VDM must be in a
loaded read/write or mounted read-only state. The source VDM
can be the source or destination VDM of another replication
session.
Note: Any file system mounted to a VDM must be replicated using file
system replication. VDM replication affects the VDM only.
-destination {-vdm {<existing_dstVdmName>|id=<dstVdmId>}|-pool
{id=<dstStoragePoolId>|<dstStoragePool>} [-storageSystem
<dstStorageSerialNumber>]}
Specifies either an existing destination VDM or the storage needed to
create the destination VDM. An existing destination VDM must be mounted
as read-only, the same size as the source, and not loaded. The
destination VDM can be the source of another replication but cannot be
the destination of another replication. Specifying a storage pool
creates the destination VDM automatically, as read-only, using the same
name and size as the source VDM.
The -storageSystem option identifies the storage system on which the
destination VDM will reside. This is necessary when there are multiple
back-end systems attached. Use nas_storage -list to obtain attached
storage system serial numbers.
-interconnect {<name>|id=<interConnectId>}
Specifies the local (source) side of an established Data Mover
interconnect to use for this replication session.
Use the nas_cel -interconnect -list command on the source VNX
to list the interconnects available to replication sessions. The
nas_cel -interconnect -create command is executed twice, once
from each side, to create an interconnect between a pair of Data
Movers (two local Data Movers for local replication, or one local
and one remote, for remote replication). Loopback interconnects
are created for each Data Mover and are named automatically.
[-source_interface {<nameServiceInterfaceName>|ip=<ipAddr>}]
Instructs the replication session to use a specific local interface
defined for the interconnect on the source VNX instead of
selecting the local interface supporting the lowest number of
sessions (the default). If this local interface was defined for the
interconnect using a name service interface name, specify the
name service interface name; if it was defined using an IP
address, specify the IP address. If you define an interface using an
IP address, make sure that the destination interface uses the same
IPv4/IPv6 protocol. An IPv4 interface cannot connect to an IPv6
interface and vice versa. Both sides of the connection must use the
same protocol.
The source_interfaces field of the output from the nas_cel
-interconnect -info command shows how the source interface
was defined. This option does not apply to a loopback
interconnect, which always uses 127.0.0.1.
If no source interface is specified, the system will select an
interface. This ensures that the interface selected can
communicate with the destination interface.
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
Instructs the replication session to use a specific peer interface
defined for the interconnect on the destination VNX instead of
selecting the peer interface supporting the lowest number of
sessions (the default). If this peer interface was defined for the
interconnect using a name service interface name, specify the
name service interface name; if it was defined using an IP
address, specify the IP address. If you define an interface using
an IP address, make sure that the source interface uses the same
IPv4/IPv6 protocol. An IPv4 interface cannot connect to an IPv6
interface and vice versa. Both sides of the connection must use the
same protocol.
The destination_interfaces field of the output from the nas_cel
-interconnect -info command shows how the peer interface was
defined. This option does not apply to a loopback interconnect,
which always uses 127.0.0.1.
If no destination interface is specified, the system will select an
interface. This ensures that the interface selected can
communicate with the source interface.
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
Specifies the time, from 1 to 1440 minutes (up to 24 hours), that the
source and destination can be out of synchronization before an
update occurs. If you do not specify a max_time_out_of_sync
value, use the -manual_refresh option to indicate that the
destination will be updated on demand using the nas_replicate
-refresh command. If no option is selected, the refresh default
time for a VDM replication is 5 minutes.
[-overwrite_destination]
For an existing destination object, discards any changes made to
the destination object and restores it from the established
common base, thereby starting the replication session from a
differential copy. If this option is not specified, and the
destination object contains different content than the established
common base, an error is returned.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check the status of the command.
START OPTIONS
-------------
-start {<name>|id=<sessionId>}
From the source side only, specifies the name or session ID needed to
start the replication session. A replication name is unique for each
Data Mover pair; if a duplicate name is detected on the system, the
session ID is required. To get the session ID, use nas_replicate -list.
[-interconnect {<name>|id=<interConnectId>}]
Specifies an established source-side (local) Data Mover
interconnect to use for the replication session. Use the nas_cel
-interconnect -list command to list the interconnects available to
replication sessions. The nas_cel -interconnect -create command
creates an interconnect between a pair of Data Movers (two local
Data Movers for local replication, or one local and one remote, for
remote replication). Loopback interconnects are created and
named automatically, and always use IP address 127.0.0.1.
[-source_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
As the source interface for the replication session, uses a specific
local interface defined for the interconnect instead of any local
interface defined for the interconnect (the default, which enables
the software to select the interface supporting the lowest number
of sessions). If this interface was defined for the interconnect
using a name service interface name, specify the name service
interface name; if it was defined using an IP address, specify the
IP address (IPv4 or IPv6). If you define an interface using an IP
address, make sure that the destination interface uses the same
IPv4/IPv6 protocol. An IPv4 interface cannot connect to an IPv6
interface and vice versa. Both sides of the connection must use the
same protocol.
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
As the destination interface for the replication session, uses a
specific peer interface defined for the interconnect instead of any
peer interface defined for the interconnect (the default, which
enables the software to select the interface supporting the lowest
number of sessions). If this interface was defined for the
interconnect using a name service interface name, specify the
name service interface name; if it was defined using an IP
address, specify the IP address (IPv4 or IPv6). If you define an
interface using an IP address, make sure that the source interface
uses the same IPv4/IPv6 protocol. An IPv4 interface cannot
connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
[{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
Specifies the time, from 1 to 1440 minutes (up to 24 hours), that the
source and destination can be out of synchronization before an
update occurs. If you do not specify a max_time_out_of_sync
value, use the -manual_refresh option to indicate that the
destination will be updated on demand using the nas_replicate
-refresh command. If no option is selected, the refresh default
time for file system replication is 10 minutes, and 5 minutes
for VDM replication sessions.
[-overwrite_destination]
For an existing destination object, discards any changes made to
the destination object and restores the destination object from the
established, internal common base checkpoint, thereby starting
the replication session from a differential copy. If this option is not
specified and the destination object has different content than the
established common base, an error is returned.
[-reverse]
Reverses the direction of the replication session when invoked
from the new source side (the original destination). A reverse
operation continues to use the established replication name or
replication session ID. Use this option to restart replication after a
failover or switchover.
[-full_copy]
For an existing destination object that contains content changes,
performs a full copy of the source object to the destination object.
If replication cannot be started from a differential copy using the
-overwrite_destination option, omitting this option causes the
command to return an error.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check the status of the command.
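Putting -failover and the -reverse flag of -start together: after a disaster-style failover, the session is restarted in the opposite direction from the new source side, as described above. A hedged sketch (the session name is invented; the commands are only assembled and printed):

```shell
session=fs1_rep   # invented name

# On the DESTINATION Control Station: fail over (possible data loss;
# the destination object becomes the new read/write source).
failover_cmd="nas_replicate -failover $session"

# Later, on the new source side (the original destination): restart the
# session in the reverse direction from a differential copy.
restart_cmd="nas_replicate -start $session -reverse -overwrite_destination"

printf '%s\n%s\n' "$failover_cmd" "$restart_cmd"
```
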
MODIFY OPTIONS
--------------
-modify {<name>|id=<sessionId>}
From the source side only, specifies the name or session ID of the
replication session to be modified. If a duplicate name is detected on
the system, the session ID (fixed for the life of the session) is required.
To get the session ID, use nas_replicate -list.
Note: A session cannot be modified if a -stop, -delete, -reverse, -failover,
-switchover, -create, or -start operation is running. However, once a -modify
operation is running, any other operation is permitted.
[-name <newName>]
Renames the replication session to the new name specified. When
renaming a session, note that the name must be unique for each
Data Mover pair.
[-source_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
Changes the source interface used for the session to another local
interface from the list defined for the interconnect. If this interface
was defined for the interconnect using a name service interface
name, specify the name service interface name; if it was defined
using an IP address, specify the IP address (IPv4 or IPv6). If you
change an IP address, make sure that the destination interface
uses the same IPv4/IPv6 protocol. An IPv4 interface cannot
connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
[-destination_interface {<nameServiceInterfaceName>|ip=<ipaddr>}]
Changes the destination interface used for the session to another
peer interface from the list defined for the interconnect. If this
interface was defined for the interconnect using a name service
interface name, specify the name service interface name; if it was
defined using an IP address, specify the IP address (IPv4 or IPv6).
If you change an IP address, make sure that the source interface
uses the same IPv4/IPv6 protocol. An IPv4 interface cannot
connect to an IPv6 interface and vice versa. Both sides of the
connection must use the same protocol.
[-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh]
Specifies the time, from 1 to 1440 minutes (up to 24 hours), that the
source and destination can be out of synchronization before an
update occurs. If you do not specify a max_time_out_of_sync
value, use the -manual_refresh option to indicate that the
destination will be updated on demand using the nas_replicate
-refresh command. If no option is selected, the refresh default
time for file system replication is 10 minutes, and 5 minutes for
VDM replication sessions.
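The max_time_out_of_sync range (1 to 1440 minutes) can be sanity-checked before a -create, -start, or -modify command line is built. A minimal sketch; valid_oos is a hypothetical helper, not part of the CLI:

```shell
# Hypothetical helper: accept only integers from 1 to 1440 (24 hours),
# the documented max_time_out_of_sync range.
valid_oos() {
  case $1 in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric input
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 1440 ]
}

valid_oos 10   && echo "10 is a valid max_time_out_of_sync"
valid_oos 1441 || echo "1441 is out of range"
```
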
STOP OPTIONS
------------
-stop {<name>|id=<session_id>}
Executed from the Control Station on the source VNX, stops the
specified replication session but retains the session's configuration
information. Any data transfer in progress is terminated immediately
and the destination object is restored to a consistent state.
Note: A session cannot be stopped if the -delete option is already running for
the session. Once a stop operation is in progress, only the options -list,
-info, and the nas_task command are permitted.
[-mode {source|destination|both}]
When stopping a session handling a local or remote replication
from the source side, the -mode both option immediately stops
both sides of the replication session. The -mode source option
stops only the replication session on the source and ignores the
other side of the replication relationship. If the destination side is
not operational, the -mode source option is required to stop the
session. From the destination side, only the -mode destination
217
option can be issued. When stopping a session handling a
loopback replication, you can specify any -mode option to stop
the session.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check progress.
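The -mode rules above differ by which side issues the stop. A hedged sketch encoding one reading of them; allowed_stop_modes is a hypothetical helper summarizing the text, not CLI output, and the source-side list is an interpretation of the paragraph above:

```shell
# Hypothetical helper: which -mode values the text above permits,
# given the side issuing `nas_replicate -stop`.
allowed_stop_modes() {        # $1 = source | destination | loopback
  case $1 in
    source)      echo "source both" ;;              # from the source side
    destination) echo "destination" ;;              # only -mode destination
    loopback)    echo "source destination both" ;;  # any mode stops it
  esac
}

allowed_stop_modes destination
```
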
DELETE OPTIONS
--------------
-delete {<name>|id=<session_id>}
Executed from the Control Station on the source VNX, cancels replication
data transfer if it is in progress, performs an internal
checkpoint restore of the latest destination checkpoint to bring the file
system back to a consistent state and then deletes the replication session
specified by the -mode options.
[-mode {source|destination|both}]
When deleting a local or remote replication session from the
source side, the -mode both option deletes both sides of the
replication session. The -mode source option immediately aborts
only the replication session on the source and ignores the other
side of the replication relationship. If the destination side is not
operational, the -mode source option is required to delete the
session. From the destination side, only the -mode destination
option can be issued. When deleting a loopback replication, you
can specify any -mode option to stop the session.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check progress.
The execution of the -delete option is asynchronous and can be
delayed if there is a network problem. During the delete process,
other operations on the replication session are not allowed.
FAILOVER OPTIONS
----------------
-failover {<name>|id=<session_id>}
In response to a potential disaster scenario, performs a failover of the
specified replication session with possible data loss. Execute this
command from the Control Station on the destination VNX only. This
command cancels any data transfer that is in process and marks the
destination object as read-write so that it can serve as the new source
object. When the original source Data Mover becomes reachable, the
source object is changed to read-only.
CAUTION: The execution of the failover operation is asynchronous and results
in data loss if all the data was not transferred to the destination site
prior to issuing the failover.
If there are multiple sessions using the same source object, only one
replication session can be failed over. After the selected session is
failed over, the other sessions become inactive until the session is
restarted or failed back.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check progress.
SWITCHOVER OPTIONS
------------------
-switchover {<name>|id=<session_id>}
For test or migration purposes, switches over the specified replication
relationship and performs synchronization of the source and
destination without data loss. Execute this command from the
Control Station on the source VNX only. This command stops
replication, mounts the source object as read-only, and marks the
destination object as read-write so that it can act as the new source
object.
Unlike a reverse operation, a switchover operation does not restart
replication.
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check progress.
REVERSE OPTIONS
---------------
-reverse {<name>|id=<session_id>}
If executed from the source side of a replication session, reverses the
direction of the specified replication session without data loss. A
reverse synchronizes the destination with the source, mounts the
source object as read-only, stops replication, marks the destination
object as read-write so that it can act as the new source object, then
restarts replication in the reverse direction from a differential copy
(using the same configuration parameters established originally for
the session).
[-background]
Executes the command in asynchronous mode. Use the nas_task
command to check progress.
REFRESH OPTIONS
---------------
-refresh {<name>|id=<session_id>}
Updates the destination side of the specified replication session based
on changes to the source side. Execute this command from the Control
Station on the source side only. A refresh operation handles updates on
demand; as an alternative, the -max_time_out_of_sync option performs an
update automatically after a specified number of minutes.
If the data changes on the source are large, this command can take a
long time to complete. Consider running this command in
background mode.
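For example, an on-demand update can be run asynchronously (a sketch assuming a session named ufs1_rep1):

```shell
# Refresh the destination in background mode; check progress with
# the nas_task command.
$ nas_replicate -refresh ufs1_rep1 -background
```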
[-source {<ckptName>|id=<ckptId>} -destination {<ckptName>|id=<ckptId>}]
Instructs the replication -refresh option to use a specific checkpoint on the
source side and a specific checkpoint on the destination side.
Specifying source and destination checkpoints for the -refresh option is
optional. However, if you specify a source checkpoint, you must also specify a
destination checkpoint. Replication transfers the contents of the
user-specified source checkpoint to the destination file system. This transfer
can be either a full copy or a differential copy depending on the existing
replication semantics. After the transfer, the replication internally
refreshes the user-specified destination checkpoint and marks the two
checkpoints as common bases.
After the replication refresh operation completes successfully, both the
source and destination checkpoints have the same view of their file systems.
The replication continues to use these checkpoints as common bases until the
next transfer is completed. After a user checkpoint is marked with a common
base property, the property is retained until the checkpoint is refreshed or
deleted. A checkpoint that is already paired as a common base with another
checkpoint propagates its common base property when it is specified as the
source in a replication refresh operation. This propagation makes it possible
for file systems without a direct replication relationship to have
common base checkpoints.
[-background]
Executes the command in asynchronous mode. Use the nas_task command to check
progress.
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device depends on the attached
storage system. VNX for block displays a prefix of APM before a set of
integers, for example, APM00033900124-0019. Symmetrix storage systems
appear as, for example, 002804000190-003C.
The outputs displayed in the examples use a VNX for block.
EXAMPLE #1
----------
To list all the VNX Replicator sessions, type:
$ nas_replicate -list
Name Type Local Mover Interconnect Celerra Status
ufs1_rep1 filesystem server_3 -->NYs3_LAs2 cs110 OK
vdm1_rep1 vdm server_3 -->NYs3_LAs2 cs110 OK
Where:
Value          Definition
-----          ----------
Name           Either the name of the session or the globally unique
               session ID for the session, if there are duplicate names
               on the system.
Type           The type of replication session (ongoing file system
               (fs), copy, or VDM).
Source Mover   The source Data Mover for the session.
Interconnect   The name of the source-side interconnect used for the
               session.
Celerra        The name of the VNX system.
Status         The status of the session (OK, Active, Idle, Stopped,
               Error, Info, Critical, Waiting).
EXAMPLE #2
----------
To create a file system replication session ufs1_rep1 on the source file
system ufs1 and destination pool clar_r5_performance on the
interconnect NYs3_LAs2 using the specified source and destination
IP addresses to be updated automatically every 5 minutes, type:
$ nas_replicate -create ufs1_rep1 -source -fs ufs1 -destination
-pool clar_r5_performance -interconnect NYs3_LAs2
-source_interface ip=10.6.3.190 -destination_interface ip=10.6.3.173
-max_time_out_of_sync 5
OK
EXAMPLE #3
----------
To display information for a replication session ufs1_rep1, type:
$ nas_replicate -info ufs1_rep1
ID = 184_APM00064600086_0000_173_APM00072901601_0000
Name = ufs1_rep1
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Thu Dec 13 14:47:16 EST 2007
Type = filesystem
Celerra Network Server = cs110
Dart Interconnect = NYs3_LAs2
Peer Dart Interconnect = 20004
Replication Role = source
Source Filesystem = ufs1
Source Data Mover = server_3
Source Interface = 10.6.3.190
Source Control Port = 0
Source Current Data Port = 0
Destination Filesystem = ufs1_replica3
Destination Data Mover = server_2
Destination Interface = 10.6.3.173
Destination Control Port = 5081
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 5
Next Transfer Size (Kb) = 0
Latest Snap on Source =
Latest Snap on Destination =
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 76
Current Read Rate (KB/s) = 11538
Current Write Rate (KB/s) = 580
Previous Transfer Rate (KB/s) = 0
Previous Read Rate (KB/s) = 0
Previous Write Rate (KB/s) = 0
Average Transfer Rate (KB/s) = 6277
Average Read Rate (KB/s) = 0
Average Write Rate (KB/s) = 0
EXAMPLE #4
----------
To create a VDM replication session vdm1_rep1 on source VDM vdm1 and
destination pool clar_r5_performance on the interconnect NYs3_LAs2 with
the given source and destination IP addresses to be updated
automatically every 5 minutes, type:
$ nas_replicate -create vdm1_rep1 -source -vdm vdm1 -destination
-pool clar_r5_performance -interconnect NYs3_LAs2
-source_interface ip=10.6.3.190 -destination_interface ip=10.6.3.173
-max_time_out_of_sync 5
OK
EXAMPLE #5
----------
To list existing replication sessions, type:
$ nas_replicate -list
Name       Type        Local Mover  Interconnect  Celerra  Status
ufs1_rep1  filesystem  server_3     -->NYs3_LAs2  cs110    OK
vdm1_rep1  vdm         server_3     -->NYs3_LAs2  cs110    OK
EXAMPLE #6
----------
To manually synchronize source and destination for the replication session
ufs1_rep1, type:
$ nas_replicate -refresh ufs1_rep1
OK
EXAMPLE #7
----------
To manually synchronize source and destination for the replication session
ufs1_rep1 by using user checkpoints on the source and the destination, type:
$ nas_replicate -refresh ufs1_rep1 -source id=101 -destination id=102
OK
EXAMPLE #8
----------
To stop replication on both source and destination for the replication session
ufs1_rep1, type:
$ nas_replicate -stop ufs1_rep1 -mode both
OK
EXAMPLE #9
-----------
To start the stopped replication session ufs1_rep1 on interconnect
NYs3_LAs2, specifying manual refresh and overwriting the destination
file system with a full copy, type:
$ nas_replicate -start ufs1_rep1 -interconnect NYs3_LAs2 -manual_refresh
-overwrite_destination -full_copy
OK
EXAMPLE #10
-----------
To display information for the VDM replication session vdm1_rep1, type:
$ nas_replicate -info vdm1_rep1
ID = 278_APM00064600086_0000_180_APM00072901601_0000
Name = vdm1_rep1
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Fri Dec 14 16:49:54 EST 2007
Type = vdm
Celerra Network Server = cs110
Dart Interconnect = NYs3_LAs2
Peer Dart Interconnect = 20004
Replication Role = source
Source VDM = vdm1
Source Data Mover = server_3
Source Interface = 10.6.3.190
Source Control Port = 0
Source Current Data Port = 0
Destination VDM = vdm1
Destination Data Mover = server_2
Destination Interface = 10.6.3.173
Destination Control Port = 5081
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 5
Next Transfer Size (Kb) = 0
Latest Snap on Source =
Latest Snap on Destination =
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 313
Current Read Rate (KB/s) = 19297
Current Write Rate (KB/s) = 469
Previous Transfer Rate (KB/s) = 0
Previous Read Rate (KB/s) = 0
Previous Write Rate (KB/s) = 0
Average Transfer Rate (KB/s) = 155
Average Read Rate (KB/s) = 0
Average Write Rate (KB/s) = 0
EXAMPLE #11
-----------
To change the session name vdm1_rep1 to vdm1_rep2, and to change the
max time out of sync value to 90, type:
$ nas_replicate -modify vdm1_rep1 -name vdm1_rep2 -max_time_out_of_sync 90
OK
EXAMPLE #12
-----------
To fail over the replication session ufs1_rep1, type on the destination:
$ nas_replicate -failover ufs1_rep1
OK
EXAMPLE #13
-----------
To start failed over replication in the reverse direction, type:
$ nas_replicate -start ufs1_rep1 -interconnect LAs2_NYs3 -reverse
-overwrite_destination
OK
EXAMPLE #14
-----------
To reverse the direction of the replication session ufs1_rep1, type:
$ nas_replicate -reverse ufs1_rep1
OK
EXAMPLE #15
-----------
To switch over the replication session ufs1_rep1 using the background
option, type:
$ nas_replicate -switchover ufs1_rep1 -background
Info 26843676673: In Progress: Operation is still running. Check task id
4058 on the Task Status screen for results.
Note: Use the nas_task -info command to find out the status of the
background task.
EXAMPLE #16
-----------
To delete the replication session fs1_rep1 on both source and
destination, type:
$ nas_replicate -delete fs1_rep1 -mode both
OK
--------------------------------------------------------------------
Last modified: Feb 21 2013, 2:34 pm
nas_server
Manages the Data Mover (server) table.
SYNOPSIS
--------
nas_server
-list [-all|-vdm]
| -delete <movername>
| -info {-all|<movername>|<slot_number>|id=<mover_id>
| -vdm {-all|<vdm_name>|id=<vdm_id>}}
| -rename <old_movername> <new_movername>
| -acl <acl_value> <movername>
| [-name <name>][-type <type>] -create <movername> [-setstate <state>]
[-fs <fs_name>|pool=<pool> [storage=<system_name>]][-option <options>]
| -vdm <vdm_name> -attach <interface> [,<interface2>...]
| -vdm <vdm_name> -detach <interface> [,<interface2>...]
| -vdm <vdm_name> -setstate <state> [<movername>][-ConvertI18N]
| -vdm <vdm_name> -move <movername> [-ConvertI18N]
DESCRIPTION
-----------
nas_server manages the server tables for both physical and virtual
Data Movers (VDMs), creates a VDM, sets an access control value for
a physical Data Mover or VDM, renames a Data Mover and displays
attributes for a specified Data Mover or all Data Movers, deletes a
physical Data Mover entry from the server table, deletes the
VDM configuration for a Data Mover, and attaches or detaches the network
interface to a VDM.
OPTIONS
-------
-list
Lists the Data Mover server table. The server table displays the ID,
type, access control level value, slot number, group ID, state, and
name of a Data Mover. VDMs have their own server table and do not have
a numeric reference in the general server table.
Note: The ID of the object is an integer and is assigned automatically. The
name of the Data Mover might be truncated if it is too long for the display.
To display the full name, use the -info option with the Data Mover ID.
Valid Data Mover types are:
1=nas
2=not used
3=not used
4=standby
5=not used
6=rdf
Note: The nas type is set automatically; vdm is set using nas_rp;
rdf and standby are set up using server_ssh.
[-all|-vdm]
The -all option displays the physical Data Mover and VDMs. The
-vdm option lists the VDMs only.
-delete <movername>
Deletes the specified physical Data Mover entry from the server table
or deletes the VDM configuration. A Data Mover that is being deleted
cannot contain mounted filesystems.
Deleting a physical Data Mover requires the root command. Use
/nas/sbin/rootnas_server to execute a delete.
-info {-all|<movername>|<slot_number>|id=<mover_id>}
Displays attributes for all physical Data Movers, or a Data Mover
specified by its <movername>, <slot_number>, or <mover_id>.
-info -vdm {-all|<vdm_name>|id=<vdm_id>}
Displays attributes for all VDMs, or a specified VDM, including the
network interfaces that are being used by the CIFS servers.
-rename <old_movername> <new_movername>
Changes the name of the physical Data Mover or the VDM to the
specified name. The -create option provides more information for
rules applicable to naming a Data Mover.
-acl <acl_value> <movername>
Sets an access control level value that defines the owner of the
physical Data Mover or the Virtual Data Mover, and the level of
access allowed for users and groups defined in the access control
level table. The nas_acl command provides more information.
[-name <name>][-type vdm] -create <movername>
Creates a VDM, with an optional name, on the specified physical Data
Mover. The movername is case-sensitive and supports the following
characters: a through z, A through Z, 0 through 9, _ (underscore), and
- (hyphen), though names may not start with a hyphen. The default type
is nas.
[-setstate <state>]
Sets the Data Mover to loaded or mounted.
The loaded option installs the image of the VDM onto the
physical Data Mover, but does not mount the non-root filesystems.
The mounted option mounts the root_fs as read-only, but the VDM image
is not installed. The -setstate option is for use with replication.
Note: Before a VDM image is loaded, the image must be unloaded from
the previous Data Mover, and the CIFS server must be joined using
server_cifs. The network interfaces used by the CIFS servers on the
VDM must be available on the destination Data Mover.
[-fs <fs_name>|pool=<pool>]
Specifies a filesystem or assigns a rule set known as a storage
pool for the VDM root filesystem.
[storage=<system_name>]
For the -fs option, the filesystem must be unmounted, clean
(nas_fsck provides more information), and be either of type uxfs
or rawfs. For a loaded state VDM, only an uxfs filesystem type
can be used, but for mounted state VDM, both uxfs and rawfs can
be used. The filesystem to be provided as the VDM root file
system is renamed to root_fs_vdm_<vdm_name>. This is deleted
when the VDM is deleted.
The storage pool option assigns a rule set for the root filesystem
of the VDM that contains automatically created volumes and
defines the type of disk volumes used and how they are
aggregated. Storage pools are system defined (storage pool
description provides more information) or user defined. nas_pool
provides a description of pool types.
[-option <options>]
Specifies the following comma-separated list of options:
fstype={rawfs|uxfs}
Specifies the filesystem type of the root file system for the server.
It can be either rawfs or uxfs type.
log_type={common|split}
Specifies the type of log file associated with the file system. Log
files can be either shared (common) or uniquely assigned to individual
file systems (split). For the SRDF Async or STAR feature, the split
option is strongly recommended to avoid fsck before mounting a BCV file
system on SiteB or SiteC.
-vdm <vdm_name> -attach <interface> [,<interface2>...]
Allows the user to manage the network interface(s) for a VDM. The
interfaces are attached to a VDM when the VDM state is loaded.
When an interface is attached to a VDM, the NFS clients connecting to
the Data Mover through this interface have access to the filesystems
exported by the VDM configuration.
-vdm <vdm_name> -detach <interface> [,<interface2>...]
Detaches the specified network interface(s) from the VDM. An attempt
to delete an interface attached to a VDM with the server_ifconfig
command fails with an error message indicating that the interface is
currently used by the VDM <vdm_name>. The user must detach the
interface from the VDM before deleting it.
Note: If the user wants to share a VDM interface for both CIFS and NFSv3 or
NFSv4 clients, the administrator must create a CIFS server and assign it to
the interface by using the server_cifs command.
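As a sketch (the VDM and interface names are illustrative), an interface is detached before it can be deleted:

```shell
# Detach the interface from the VDM; only then can server_ifconfig
# delete it without an "in use" error.
$ nas_server -vdm vdm1 -detach vdm1if2
```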
-vdm <vdm_name> -setstate <state>
Sets the state of the VDM to loaded, mounted, tempunloaded, or
permunloaded.
The loaded option installs the image of the VDM onto the physical
Data Mover, but does not mount the non-root filesystems. The
mounted option mounts the root_fs read-only, but the VDM image is
not installed.
The tempunloaded option temporarily unloads the VDM image,
while the permunloaded option permanently unloads the image.
[<movername>]
Specifies a physical Data Mover for the VDM.
[-ConvertI18N]
When loading the VDM image, forces the conversion of the I18N
mode of the VDM’s root filesystem from ASCII to UNICODE.
The I18N mode of the Data Mover can be either ASCII or
UNICODE. The mode of the VDM must be the same as the
physical Data Mover, for example, when performing the -move
option, or when replicating.
This mode is used when the mode of the VDM root filesystem is
different from that of the physical Data Mover.
-vdm <vdm_name> -move <movername>
Moves the image of the VDM onto the physical Data Mover, and
mounts the non-root filesystems.
Note: Before a VDM image is loaded, the image must be unloaded from the
previous Data Mover, and the CIFS server must be joined using
server_cifs. The network interfaces used by the CIFS servers on the
VDM must be available on the destination Data Mover.
[-ConvertI18N]
When loading the VDM image, forces the conversion of the I18N
mode of the VDM’s root filesystem from ASCII to UNICODE.
The I18N mode of the Data Mover can be either ASCII or
UNICODE. The mode of the VDM must be the same as the
physical Data Mover, for example, when performing the -move
option, or when replicating.
This mode is used when the mode of the VDM root filesystem is
different from that of the physical Data Mover.
SEE ALSO
--------
Configuring Virtual Data Mover on VNX, Using International Character
Sets for File, nas_fs, nas_volume, and server_cifs.
SYSTEM OUTPUT
-------------
VNX systems support the following system-defined storage pools:
clar_r1, clar_r5_performance, clar_r5_economy, clar_r6, clarata_r3,
clarata_r6, clarata_r10, clarata_archive, cm_r1, cm_r5_performance,
cm_r5_economy, cm_r6, cmata_r3, cmata_archive, cmata_r6,
cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10, clarefd_r5,
clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and cmefd_r5.
Disk types when using VNX for block are CLSTD, CLEFD, and
CLATA, and for VNX for block involving mirrored disks are
CMEFD, CMSTD, and CMATA.
VNX with a Symmetrix system supports the following
system-defined storage pools: symm_std, symm_std_rdf_src,
symm_std_rdf_tgt, symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt, and
symm_efd.
For user-defined storage pools, the difference in output is in the disk
type. Disk types when using a Symmetrix are STD, R1STD, R2STD,
BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA,
R2BCA, and EFD.
EXAMPLE #1
----------
To list the physical Data Mover table, type:
$ nas_server -list
id  type  acl   slot  groupID  state  name
1   1     1000  2              0      server_2
2   1     1000  3              0      server_3
3   1     1000  4              0      server_4
4   4     1000  5              0      server_5
Where:
Value    Definition
-----    ----------
id       ID of the Data Mover.
type     Type assigned to the Data Mover.
acl      Access control level value assigned to the Data Mover or VDM.
slot     Physical slot in the cabinet where the Data Mover resides.
groupID  ID of the Data Mover group.
state    Whether the Data Mover is enabled=0, disabled=1, or
         failed over=2.
name     Name given to the Data Mover.
EXAMPLE #2
----------
To list the physical Data Mover and VDM table, type:
$ nas_server -list -all
id  type  acl   slot  groupID  state  name
1   1     1000  2              0      server_2
2   1     1000  3              0      server_3
3   1     1000  4              0      server_4
4   4     1000  5              0      server_5

id  acl  server  mountedfs  rootfs  name
3   0    1                  31      vdm_1
EXAMPLE #1 provides a description of outputs for the physical Data
Movers. The following table provides a description of the command
output for the VDM table.
Where:
Value      Definition
-----      ----------
id         ID of the Data Mover.
acl        Access control level value assigned to the Data Mover or VDM.
server     Server on which the VDM is loaded.
mountedfs  Filesystems that are mounted on this VDM.
rootfs     ID number of the root file system.
name       Name given to the Data Mover or VDM.
EXAMPLE #3
----------
To list the VDM server table, type:
$ nas_server -list -vdm
id  acl  server  mountedfs  rootfs  name
3   0    1                  31      vdm_1
EXAMPLE #4
----------
To list information for a Data Mover, type:
$ nas_server -info server_2
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
Where:
Value      Definition
-----      ----------
id         ID of the Data Mover.
name       Name given to the Data Mover.
acl        Access control level value assigned to the Data Mover or VDM.
type       Type assigned to the Data Mover.
slot       Physical slot in the cabinet where the Data Mover resides.
member_of  Group to which the Data Mover is a member.
standby    If the Data Mover has a local standby associated with it.
status     Whether the Data Mover is enabled or disabled, and whether
           it is active.
EXAMPLE #5
----------
To display detailed information for all servers, type:
$ nas_server -info -all
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, active
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 3
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 3
name = server_4
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 4
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 4
name = server_5
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 5
member_of =
standbyfor= server_4,server_2,server_3
status :
defined = enabled
actual = online, ready
EXAMPLE #4 provides a description of command outputs.
EXAMPLE #6
----------
To display information for all VDMs, type:
$ nas_server -info -vdm -all
id = 3
name = vdm_1
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = mounted
Interfaces to services mapping:
Where:
Value       Definition
-----       ----------
id          ID of the Data Mover.
name        Name of the Data Mover.
acl         Access control level value assigned to the VDM.
type        For a VDM server, the type is always vdm.
server      Server on which the VDM is loaded.
rootfs      Root filesystem of the VDM.
I18N mode   I18N mode of the VDM, either ASCII or UNICODE.
mountedfs   Filesystems that are mounted on this VDM.
member_of   If it is a member of a cluster, this field shows the
            cluster name.
status      Whether the VDM is enabled or disabled, and whether it can
            be loaded ready, loaded active, mounted, temporarily
            unloaded, or permanently unloaded.
Interfaces to services mapping
            List of interfaces that are used for the services
            configured on this VDM. Currently, only the CIFS service is
            provided, so this field lists all the interfaces used in
            the CIFS servers configured on this VDM.
EXAMPLE #7
----------
To create a mounted VDM named vdm_1 on server_2 using the storage pool
clar_r5_performance with a uxfs, type:
$ nas_server -name vdm_1 -type vdm -create server_2 -setstate mounted
pool=clar_r5_performance -option fstype=uxfs
id = 3
name = vdm_1
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = mounted
Interfaces to services mapping:
EXAMPLE #6 provides a description of command outputs.
EXAMPLE #8
----------
To set the state of vdm_1 to mounted, type:
$ nas_server -vdm vdm_1 -setstate mounted
id = 3
name = vdm_1
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = mounted
Interfaces to services mapping:
EXAMPLE #6 provides a description of command outputs.
EXAMPLE #9
----------
To move the image of vdm_1 onto server_4, type:
$ nas_server -vdm vdm_1 -move server_4
id = 3
name = vdm_1
acl = 0
type = vdm
server = server_4
rootfs = root_fs_vdm_1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
EXAMPLE #6 provides a description of command outputs.
EXAMPLE #10
-----------
To rename a Data Mover entry from server_2 to dm2, type:
$ nas_server -rename server_2 dm2
id = 1
name = dm2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, active
EXAMPLE #4 provides a description of command outputs.
EXAMPLE #11
-----------
To set the access control level for server_2, type:
$ nas_server -acl 1432 server_2
id = 1
name = server_2
acl = 1432, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
Note: The value 1432 specifies nasadmin as the owner, gives users with an
access level of at least observer read-only access, users with an access level
of at least operator read/write access, and users with an access level of at
least admin read/write/delete access.
EXAMPLE #4 provides a description of command outputs.
EXAMPLE #12
-----------
To delete vdm_1, type:
$ nas_server -delete vdm_1
id = 3
name = vdm_1
acl = 0
type = vdm
server =
rootfs = root_fs_vdm_1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = permanently unloaded
Interfaces to services mapping:
EXAMPLE #6 provides a description of command outputs.
EXAMPLE #13
-----------
To delete a physical Data Mover using the root command, type:
$ /nas/sbin/rootnas_server -delete server_3
id = 2
name = server_3
acl = 0
type = nas
slot = 3
member_of =
standby = server_5, policy=auto
status :
defined = disabled
actual = boot_level=0
EXAMPLE #6 provides a description of command outputs.
EXAMPLE #14
-----------
To create a VDM named vdm1 on server_3, type:
$ nas_server -name vdm1 -type vdm -create server_3
id = 43
name = vdm1
acl = 0
type = vdm
server = server_3
rootfs = root_fs_vdm_vdm1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
EXAMPLE #15
-----------
To assign the network interfaces to vdm1, assuming vdm1if1 and
vdm1if2 exist and are not attached to another vdm, type:
$ nas_server -vdm vdm1 -attach vdm1if1, vdm1if2
id = 43
name = vdm1
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_vdm1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
interface=vdm1if1 :vdm
interface=vdm1if2 :vdm
EXAMPLE #16
-----------
To query the vdm1 state, type:
$ nas_server -info -vdm vdm1
id = 43
name = vdm1
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_vdm1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
interface=vdm1if2 :cifs vdm
interface=vdm1if1 :vdm
EXAMPLE #17
-----------
To create a VDM named vdm2 on server_3 using the split ufs log type,
type:
$ nas_server -name vdm2 -type vdm -create server_3 -setstate loaded
pool=symm_std_rdf_src -o log_type=split
id = 2
name = vdm2
acl = 0
type = vdm
server = server_3
rootfs = root_fs_vdm_vdm2
I18N mode = ASCII
mountedfs =
member_of =
status :
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
To confirm a VDM ufs log type, type:
$ /nas/sbin/rootnas_fs -i root_fs_vdm_vdm2
id = 49
name = root_fs_vdm_vdm2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v1260
pool = symm_std_rdf_src
member_of = root_avm_fs_group_8
rw_servers= server_3
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
log_type = split
fast_clone_level = 2
deduplication = Off
stor_devs =
000194900462-10C6,000194900462-10CE,000194900462-10D6,000194900462-10DE,
000194900462-10E6,000194900462-10EE,000194900462-10F6,000194900462-10FE
disks = d1102,d1103,d1104,d1105,d1106,d1107,d1108,d1109
disk=d1102 stor_dev=000194900462-10C6 addr=c4t3l4-72-0   server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c20t3l4-71-0  server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c36t3l4-71-0  server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c52t3l4-72-0  server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c4t3l5-72-0   server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c20t3l5-71-0  server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c36t3l5-71-0  server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c52t3l5-72-0  server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c4t3l6-72-0   server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c20t3l6-71-0  server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c36t3l6-71-0  server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c52t3l6-72-0  server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c4t3l7-72-0   server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c20t3l7-71-0  server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c36t3l7-71-0  server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c52t3l7-72-0  server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c4t3l8-72-0   server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c20t3l8-71-0  server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c36t3l8-71-0  server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c52t3l8-72-0  server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c4t3l9-72-0   server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c20t3l9-71-0  server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c36t3l9-71-0  server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c52t3l9-72-0  server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c4t3l10-72-0  server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c20t3l10-71-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c36t3l10-71-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c52t3l10-72-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c4t3l11-72-0  server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c20t3l11-71-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c36t3l11-71-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c52t3l11-72-0 server=server_3
-----------------------------------------------------------------------
Last modified: December 3, 2014 12:20 p.m.
nas_stats
Manages Statistics Groups.
SYNOPSIS
--------
nas_stats
-groups
{ -list
| -info [-all|<statgroup_name>[,...]]
| -create <statgroup_name>
[-description "<description_line>"]
{<statpath_name>|<statgroup_name>}[,...]
| -modify <statgroup_name>
{[-rename <new_statgroup_name>]
[-description "<description_line>"]
[{<statpath_name>|<statgroup_name>}[,...]]}
| -add <statgroup_name>
{<statpath_name>|<statgroup_name>}[,...]
| -remove <statgroup_name>
{<statpath_name>|<statgroup_name>}[,...]
| -delete <statgroup_name> [-Force]
| -database
{ -recover [-Force]
| -verify }
DESCRIPTION
-----------
nas_stats allows the user to manage Statistics Groups. There are two
types of Statistics Groups: System-defined and User-defined groups.
System-defined statistics groups are created as part of the installation
(or upgrade) process and include the following statistics, which
correspond to the summary and table collections used by server_stats:
System-defined statistics group name    server_stats collection name
basic-std                               -summary basic
caches-std                              -summary caches
cifs-std                                -summary cifs
nfs-std                                 -summary nfs
cifsOps-std                             -table cifs
diskVolumes-std                         -table dvol
metaVolumes-std                         -table fsvol
netDevices-std                          -table net
nfsOps-std                              -table nfs
Note: server_stats collection names are deprecated and will not be supported
in future releases.
Statistics Groups can be created to include any combination of
statpath names, displayed through the server_stats command, as well
as other existing statgroup names.
Any Statistics Group name can be used with server_stats -monitor to
collect statistics as defined in its member_stats list.
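For instance, a user-defined group can be monitored the same way as a system-defined one. A minimal sketch, assuming a hypothetical group name myGroup and Data Mover name server_2 (substitute your own):

```shell
# Monitor a statgroup's member_stats with server_stats.
# "myGroup" and "server_2" are assumed names, not from this guide.
$ server_stats server_2 -monitor myGroup -interval 10 -count 6
```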
OPTIONS
-------
-list
Lists system and user-defined Statistics Groups.
-info
Provides detailed information on all (or specified) Statistics Groups.
-create
Creates a statistics group and specifies what statpath names it
includes. It also allows the nesting of statgroups by adding existing
statgroups to new statgroups.
Statgroup names can be used with the -info request. A statgroup
name is limited to 255 characters and cannot contain the space,
slash, backslash, single quote, double quote, or comma characters.
[-description]
The -description option is optional and defaults to the statgroup
name. If the -description option is used, its argument must be
enclosed in quotation marks.
-modify
Allows you to modify a statgroup's member_stats list by specifying
the new member statistics of the group, overriding the previous
contents.
-add
Allows you to add statpath and existing statgroup names to a
statgroup by specifying additional items to be appended to the
statgroup's member_stats list.
-remove
Allows you to remove member statpath and statgroup names from a
statgroup by specifying the items to remove from the statgroup's
member_stats list.
-delete
Allows you to delete a statgroup. However, this option does not
delete any statgroups that are members of the statgroup.
-recover
Attempts to recover the latest uncorrupted copy of the Statistics
Groups database from the NAS database backups. nas_stats searches
through the available backups and restores the latest copy. In the
event that the NAS database backups do not contain a healthy version
of the Statistics Groups database, a new Statistics Groups database
is installed; in that case, all user-defined information is lost.
NAS database backups run hourly and VNX maintains the last
12 backups.
[-Force]
Use the -Force option with the -recover option to skip the
warning prompt.
-verify
Checks the health status of the Statistics Groups database.
SEE ALSO
--------
server_stats
EXAMPLE #1
----------
To list the system-defined and user-defined Statistics Groups, type:
$ nas_stats -groups -list
Type     Name
System   basic-std
System   basicCifs-std
...      ...
User     basic
User     nfsNet
...      ...
EXAMPLE #2
----------
To provide detailed information on all (or specified) Statistics Groups, type:
$ nas_stats -groups -info
name = basic-std
description = The basic system-defined group.
type = System-defined
member_stats =
kernel.cpu.utilization.cpuUtil,net.basic.inBytes,net.basic.outBytes,
store.readBytes,store.writeBytes
member_elements =
member_of =
name = basic3
description = CPU and Memory
type = User-defined
member_stats = kernel.cpu.utilization.cpuUtil,kernel.memory.freeBytes
member_elements =
member_of =
name = caches-std
description = The caches system-defined group.
type = System-defined
member_stats =
fs.dnlc.hitRatio,fs.ofCache.hitRatio,kernel.memory.bufferCache.hitRatio
member_elements =
member_of =
name = cifs-std
description = The cifs system-defined group.
type = System-defined
member_stats =
cifs.global.basic.totalCalls,cifs.global.basic.reads,cifs.global.basic.readBytes,
cifs.global.basic.readAvgSize,cifs.global.basic.writes,cifs.global.basic.writeBytes,
cifs.global.basic.writeAvgSize,cifs.global.usage.currentConnections,
cifs.global.usage.currentOpenFiles
member_elements =
member_of = newSG
name = cifsOps-std
description = The cifs table system-defined group.
type = System-defined
member_stats = cifs.smb1.op,cifs.smb2.op
member_elements =
member_of =
name = diskVolumes-std
description = The disk volume table system-defined group.
type = System-defined
member_stats = store.diskVolume
member_elements =
member_of =
name = metaVolumes-std
description = The meta volume table system-defined group.
type = System-defined
member_stats = store.logicalVolume.metaVolume
member_elements =
member_of =
name = netDevices-std
description = The net table system-defined group.
type = System-defined
member_stats = net.device
member_elements =
member_of =
name = newSG
description = newSG
type = User-defined
member_stats = cifs-std,nfs.v3.op,nfs.v4.op
member_elements =
member_of =
name = nfs-std
description = The nfs system-defined group.
type = System-defined
member_stats =
nfs.totalCalls,nfs.basic.reads,nfs.basic.readBytes,nfs.basic.readAvgSize,
nfs.basic.writes,nfs.basic.writeBytes,nfs.basic.writeAvgSize,nfs.currentThreads
member_elements =
member_of =
name = nfsOps-std
description = The nfs table system-defined group.
type = System-defined
member_stats = nfs.v2.op,nfs.v3.op,nfs.v4.op
member_elements =
member_of =
name = statgroup1
description = My first group
type = User-defined
member_stats =
net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes
member_elements =
member_of = statgroup2
name = statgroup2
description = My first group
type = User-defined
member_stats =
net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes,
kernel.cpu.utilization.cpuUtil,statgroup1
member_elements =
member_of =
EXAMPLE #3
----------
To provide detailed information on a specified Statistics Group, type:
$ nas_stats -groups -info statsA
name = statsA
description = My group # 2
type = user-defined
member_stats = statpath1, statpath2, statpath3, statsC
member_elements =
member_of = statsB
EXAMPLE #4
----------
To create a statistics group called basic3, type:
$ nas_stats -groups -create basic3 -description "CPU and Memory"
kernel.cpu.utilization.cpuUtil,kernel.memory.freeBytes
’basic3’ created successfully.
EXAMPLE #5
----------
To create a statistics group called statgroup2, type:
$ nas_stats -groups -create statgroup2 statgroup1,nfs,net
’statgroup2’ created successfully.
EXAMPLE #6
----------
To attempt to create a statgroup whose name already exists, type:
$ nas_stats -groups -create statgroup1 -description "My
first group" kernel.cpu.utilization.cpuUtil,
net.basic.inBytes,net.basic.outBytes,store.readBytes,
store.writeBytes
ERROR (13421969439): ’statgroup1’ already exists.
EXAMPLE #7
----------
To modify a statgroup by specifying the new contents of the group,
overriding the previous contents, type:
$ nas_stats -groups -modify statgroup2 cifs,nfs-std
’statgroup2’ modified successfully.
EXAMPLE #8
----------
To modify the description of a statgroup, type:
$ nas_stats -groups -modify basic1 -description "My basic
group"
’basic1’ modified successfully.
EXAMPLE #9
----------
To rename a user-defined statgroup, type:
$ nas_stats -groups -modify statgroup2 -rename basic2
’statgroup2’ modified successfully.
EXAMPLE #10
-----------
To add to the member_stats list of a statgroup, type:
$ nas_stats -groups -add statgroup2
kernel.cpu.utilization.cpuUtil,statgroup1
Adding the following statistics:
... kernel.cpu.utilization.cpuUtil
... statgroup1
Statistics added to ’statgroup2’ successfully.
EXAMPLE #11
-----------
To remove from the member_stats list of a statgroup, type:
$ nas_stats -groups -remove statgroup1 kernel.cpu.utilization.cpuUtil
Removing the following statistics:
... kernel.cpu.utilization.cpuUtil
Statistics removed from ’statgroup1’ successfully.
EXAMPLE #12
-----------
To delete a statgroup, type:
$ nas_stats -groups -delete statgroup1
’statgroup1’ deleted successfully.
EXAMPLE #13
-----------
To delete statgroupA and clear references to it from other groups, type:
$ nas_stats -groups -delete statgroupA
’statgroupA’ is used in group (s): mystats1, mystats2.
Clear ’statgroupA’ from other groups? [Y/N] Y
’statgroupA’ deleted successfully.
EXAMPLE #14
-----------
To delete statgroupA, clearing its references from other groups, and
skip the warning prompt with the -Force option, type:
$ nas_stats -groups -delete statgroupA -F
’statgroupA’ is used in group (s): mystats1, mystats2.
’statgroupA’ deleted successfully.
EXAMPLE #15
-----------
To recover the latest healthy (uncorrupted) copy of a statgroup
database from the NAS database backups, type:
$ nas_stats -groups -database -recover
Latest healthy database modified last on Tue Apr 7 17:29:06 EDT 2009.
Any updates performed after the latest backup will be lost. Continue? [Y/N] Y
The nas_stats command recover operation is completed successfully.
EXAMPLE #16
-----------
To recover the latest healthy (uncorrupted) copy of the statgroup
database from the NAS database backups using the -Force option to
skip the warning prompt, type:
$ nas_stats -groups -database -recover -Force
Latest healthy database modified last on Tue Apr 7 17:29:06 EDT 2009.
The nas_stats command recover operation is completed successfully.
EXAMPLE #17
-----------
To check the health status of the Statistics Groups database,
type:
$ nas_stats -groups -database -verify
Database is healthy.
----------------------------------------------------------------
Last modified: May 10, 2011 4:30 p.m.
nas_storage
Controls storage system access and performs some management
tasks.
SYNOPSIS
--------
nas_storage
-list
| -info {-all|<name>|id=<storage_id>} [-option <options>]
| -rename <old_name> <new_name>
| -acl <acl_value> <name>
| -delete {<name>|id=<storage_id>} [-spare <spindle-id>|-group
<diskgroup-id>]
| -failback {<name>|id=<storage_id>}
| -sync {-all|<name>|id=<storage_id>}
| -check {-all|<name>|id=<storage_id>}
| -modify {<name>|id=<storage_id>} -network
{-spa|-spb} <IP>
| -modify {<name>|id=<storage_id>}
-security [-username <username>][-password <password>]
[-newpassword <new_password>]
Note: Output from this command is determined by the type of storage
system attached to the VNX.
DESCRIPTION
-----------
nas_storage sets the name for a storage system, assigns an access
control value, displays attributes, performs a health check,
synchronizes the storage system with the Control Station, and
performs a failback for VNX for block systems.
OPTIONS
-------
-list
Displays a list of all attached storage systems available for the VNX.
Note: The ID of the object is an integer and is assigned automatically. The
name of the storage system may be truncated if it is too long for the display.
To display the full name, use the -info option with the storage system ID.
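For example, assuming the truncated entry was listed with id 1 (an illustrative value), the full name can be displayed with:

```shell
# Display the full configuration (including the untruncated name)
# of the storage system listed with id 1; the id value is an assumption.
$ nas_storage -info id=1
```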
-info {-all|<name>|id=<storage_id>}
Displays the configuration of the attached storage system.
[-option <options>]
Specifies a comma-separated list of options.
sync={yes|no}
Synchronizes the Control Station's view with that of the storage
system before displaying configuration information. Default=yes.
-rename <old_name> <new_name>
Renames the current storage system name to a new name. By default,
the storage system name is its serial number.
-acl <acl_value> <name>
Sets an access control level value that defines the owner of the
storage system, and the level of access allowed for users and groups
defined in the access control level table (nas_acl provides more
information).
-delete {<name>|id=<storage_id>} [-spare <spindle-id>|-group <diskgroup-id>]
Deletes an entry from the storage system table. The storage system
can only be deleted after all disks on the storage system have been
deleted using nas_disk. The storage system and disks can be
rediscovered using the server_devconfig command. The -spare
option deletes the hot spare disk from the hot spare pool on the VNX
for block storage used by NAS. The -group option deletes the disk
group specified. This deletes and unbinds the LUNs in the RAID
groups used by VNX for file. If the RAID group contains other LUNs
not allocated to the VNX, the RAID group is not unbound. If the
RAID group is empty after the VNX LUNs are removed, it is
destroyed.
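The removal order described above can be sketched as follows; the disk and system names are illustrative, not from an actual configuration:

```shell
# Hypothetical names: disk d7, Data Mover server_2, and storage
# system APM00042000818. Disks must be deleted with nas_disk before
# the storage system entry itself can be deleted.
$ nas_disk -delete d7
$ nas_storage -delete APM00042000818
# If needed later, rediscover the storage system and its disks:
$ server_devconfig server_2 -create -scsi -all
```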
-sync {-all|<name>|id=<storage_id>}
Synchronizes the Control Station’s view with that of the storage
system.
-check {-all|<name>|id=<storage_id>}
Performs a health check on the storage system to verify that it is
configured for, and in a state to provide, the required level of
high availability.
Use this option after making any management changes to your
storage system (for example, changes to VNX for block array
properties, such as enabling/disabling statistics polling).
Note: This option does not support remote storage. For example, for
RecoverPoint configurations where remote storage is listed, the check
will only run on the first listed storage system.
For VNX for Block only
----------------------
-failback {<name>|id=<storage_id>}
Returns the storage system to its normal operating state by returning
ownership of all disk volumes to their default storage processor.
To verify whether the storage system failed over, use the -info
option. If the value appears as failed_over=True, the system has
failed over.
-modify {<name>|id=<storage_id>} -network {-spa|-spb} <IP>
Modifies the IP address of the VNX for block in the VNX database.
-modify {<name>|id=<storage_id>} -security [-username <username>]
[-password <password>]
Updates the login information that the VNX for file uses to
authenticate with the VNX, changing the VNX username or password
when the VNX account is changed or the following error is reported:
Error 5010: APM00055105668: Storage API code=4651:
SYMAPI_C_CLAR_NOT_PRIVILEGED
Operation denied by Clariion array - you are not privileged to perform
the requested operation
[-newpassword <new_password>]
Assigns a new password to the username on the VNX for block.
Note: This operation is not supported for Symmetrix storage systems.
-resetssv
Resets the hostname. The SE lockbox must also be updated for the
modified hostname to take effect.
SEE ALSO
--------
VNX System Operations, nas_rdf, nas_disk, and server_devconfig.
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device depends on the
attached storage system. VNX for block displays a prefix of APM
before a set of integers, for example, APM00033900124-0019.
Symmetrix storage systems appear as, for example, 002804000190-003C.
EXAMPLE #1
----------
For the VNX storage system, to list all attached storage systems,
type:
$ nas_storage -list
id   acl   name             serial_number
1    0     APM00042000818   APM00042000818
For the VNX with a Symmetrix storage system, to list all attached
storage systems, type:
$ nas_storage -list
id   acl   name           serial_number
1    0     000187940260   000187940260
Where:
Value          Definition
-----          ----------
id             ID number of the attached storage system.
acl            Access control level value assigned to the attached
               storage system.
name           Name assigned to the attached storage system.
serial_number  Serial number of the attached storage system.
EXAMPLE #2
----------
For the VNX storage system, to display information for the attached
storage system, type:
$ nas_storage -info APM00042000818
id = 1
arrayname = APM00042000818
name = APM00042000818
type = Clariion
model_type = RACKMOUNT
model_num = 700
db_sync_time = 1131986667 == Mon Nov 14 11:44:27 EST 2005
API_version = V6.0-629
num_disks = 60
num_devs = 34
num_pdevs = 8
num_storage_grps = 1
num_raid_grps = 16
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 60
high_watermark = 80
unassigned_cache = 0
is_local = True
failed_over = False
captive_storage = False
Active Software
-AccessLogix =
-FLARE-Operating-Environment = 02.16.700.5.004
-NavisphereManager =
Storage Processors
SP Identifier = A
signature = 1057303
microcode_version = 2.16.700.5.004
serial_num = LKE00040201171
prom_rev = 3.30.00
agent_rev = 6.16.0 (4.80)
phys_memory = 3967
sys_buffer = 773
read_cache = 122
write_cache = 3072
free_memory = 0
raid3_mem_size = 0
failed_over = False
hidden = False
network_name = spa
ip_address = 172.24.102.5
subnet_mask = 255.255.255.0
gateway_address = 172.24.102.254
num_disk_volumes = 20 - root_disk root_ldisk d3 d4 d5 d6 d7 d8 d9 d10 d11 d12
d13 d14 d15 d16 d17 d18 d19 d20
Port Information
Port 1
uid = 50:6:1:60:B0:60:1:CC:50:6:1:61:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:42:8:0:88:A0:36:F3
sp_source_id = 6373907
<...removed...>
Port 2
uid = 50:6:1:60:B0:60:1:CC:50:6:1:62:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:41:8:0:88:A0:36:F3
sp_source_id = 6373651
SP Identifier = B
signature = 1118484
microcode_version = 2.16.700.5.004
serial_num = LKE00041700812
prom_rev = 3.30.00
agent_rev = 6.16.0 (4.80)
phys_memory = 3967
sys_buffer = 773
read_cache = 122
write_cache = 3072
free_memory = 0
raid3_mem_size = 0
failed_over = False
hidden = False
network_name = spb
ip_address = 172.24.102.6
subnet_mask = 255.255.255.0
gateway_address = 172.24.102.254
num_disk_volumes = 0
Port Information
Port 1
uid = 50:6:1:60:B0:60:1:CC:50:6:1:69:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:3E:8:0:88:A0:36:F3
sp_source_id = 6372883
<...removed...>
Port 2
uid = 50:6:1:60:B0:60:1:CC:50:6:1:6A:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:3D:8:0:88:A0:36:F3
sp_source_id = 6372627
Storage Groups
id = A4:74:8D:50:6E:A1:D9:11:96:E1:8:0:1B:43:5E:4F
name = ns704g-cs100
num_hbas = 18
num_devices = 24
shareable = True
hidden = False
Hosts
uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49
storage_processor = B
port = 1
server = server_4
uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49
storage_processor = A
port = 0
server = server_4
uid = 50:6:1:60:80:60:4:F0:50:6:1:61:0:60:4:F0
storage_processor = B
port = 0
server = server_2
<...removed...>
uid = 50:6:1:60:80:60:4:F0:50:6:1:68:0:60:4:F0
storage_processor = B
port = 1
server = server_3
uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77
storage_processor = B
port = 0
uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77
storage_processor = A
port = 0
ALU HLU
------------
0000 -> 0000
0001 -> 0001
0002 -> 0002
0003 -> 0003
0004 -> 0004
0005 -> 0005
0018 -> 0018
0019 -> 0019
0020 -> 0020
0021 -> 0021
0022 -> 0022
0023 -> 0023
0024 -> 0024
0025 -> 0025
0026 -> 0026
0027 -> 0027
0028 -> 0028
0029 -> 0029
0030 -> 0030
0031 -> 0031
0032 -> 0032
0033 -> 0033
0034 -> 0034
0035 -> 0035
Disk Groups
id = 0000
storage profiles = 2 - clar_r5_performance,cm_r5_performance
raid_type = RAID5
logical_capacity = 1068997528
num_spindles = 5 - 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4
num_luns = 6 - 0000 0001 0002 0003 0004 0005
num_disk_volumes = 6 - root_disk root_ldisk d3 d4 d5 d6
spindle_type = FC
bus = 0
raw_capacity = 1336246910
used_capacity = 62914560
free_capacity = 1006082968
hidden = False
<...removed...>
id = 2_0_14
product = ST314670 CLAR146
revision = 6A06
serial = 3KS02RHM
capacity = 280346624
used_capacity = 224222822
disk_group = 0014
hidden = False
type = FC
bus = 2
enclosure = 0
slot = 14
vendor = SEAGATE
remapped_blocks = -1
state = ENABLED
For the VNX with a Symmetrix storage system, to display
information for the attached storage system, type:
$ nas_storage -info 000187940260
id = 1
serial_number = 000187940260
name = 000187940260
type = Symmetrix
ident = Symm6
model = 800-M2
microcode_version = 5670
microcode_version_num = 16260000
microcode_date = 03012004
microcode_patch_level = 69
microcode_patch_date = 03012004
symmetrix_pwron_time = 1130260200 == Tue Oct 25 13:10:00 EDT 2005
db_sync_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005
db_sync_bcv_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005
db_sync_rdf_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005
last_ipl_time = 1128707062 == Fri Oct 7 13:44:22 EDT 2005
last_fast_ipl_time = 1130260200 == Tue Oct 25 13:10:00 EDT 2005
API_version = V6.0-629
cache_size = 32768
cache_slot_count = 860268
max_wr_pend_slots = 180000
max_da_wr_pend_slots = 90000
max_dev_wr_pend_slots = 6513
permacache_slot_count = 0
num_disks = 60
num_symdevs = 378
num_pdevs = 10
sddf_configuration = ENABLED
config_checksum = 0x01ca544
num_powerpath_devs = 0
config_crc = 0x07e0ba1e6
is_local = True
Physical Devices
/nas/dev/c0t0l15s2
/nas/dev/c0t0l15s3
/nas/dev/c0t0l15s4
/nas/dev/c0t0l15s6
/nas/dev/c0t0l15s7
/nas/dev/c0t0l15s8
/nas/dev/c16t0l15s2
/nas/dev/c16t0l15s3
/nas/dev/c16t0l15s4
/nas/dev/c16t0l15s8
Director Table
type num slot ident  stat scsi vols ports p0_stat p1_stat p2_stat p3_stat
---- --- ---- ------ ---- ---- ---- ----- ------- ------- ------- -------
DA   1   1    DF-1A  On   NA   21   2     On      On      NA      NA
DA   2   2    DF-2A  On   NA   8    2     On      On      NA      NA
DA   15  15   DF-15A On   NA   21   2     On      On      NA      NA
DA   16  16   DF-16A On   NA   8    2     On      On      NA      NA
DA   17  1    DF-1B  On   NA   8    2     On      On      NA      NA
DA   18  2    DF-2B  On   NA   21   2     On      On      NA      NA
DA   31  15   DF-15B On   NA   152  2     On      On      NA      NA
DA   32  16   DF-16B On   NA   165  2     On      On      NA      NA
FA   33  1    FA-1C  On   NA   0    2     On      On      NA      NA
FA   34  2    FA-2C  On   NA   0    2     On      On      NA      NA
FA   47  15   FA-15C On   NA   0    2     On      On      NA      NA
FA   48  16   FA-16C On   NA   0    2     On      On      NA      NA
FA   49  1    FA-1D  On   NA   0    2     On      On      NA      NA
Note: This is a partial listing due to the length of the outputs.
EXAMPLE #3
---------To rename a storage system, type:
$ nas_storage -rename APM00042000818 cx700_1
id = 1
serial_number = APM00042000818
name = cx700_1
acl = 0
EXAMPLE #4
----------
To set the access control level for the storage system cx700_1,
type:
$ nas_storage -acl 1000 cx700_1
id = 1
serial_number = APM00042000818
name = cx700_1
acl = 1000, owner=nasadmin, ID=201
Note: The value 1000 specifies nasadmin as the owner and gives read, write,
and delete access only to nasadmin.
EXAMPLE #5
----------
To change the existing password on the VNX for block, type:
$ nas_storage -modify APM00070204288 -security -username
nasadmin -password nasadmin -newpassword abc
Changing password on APM00070204288
EXAMPLE #6
----------
To avoid specifying passwords in clear text on the command line,
type:
$ nas_storage -modify APM00070204288 -security
-newpassword
Enter the Global CLARiiON account information
Username: nasadmin
Password: ***
Retype your response to validate
Password: ***
New Password
Password: ********
Retype your response to validate
Password: ********
Changing password on APM00070204288
Done
EXAMPLE #7
----------
To fail back a VNX for block, type:
$ nas_storage -failback cx700_1
id = 1
serial_number = APM00042000818
name = cx700_1
acl = 1000, owner=nasadmin, ID=201
EXAMPLE #8
----------
To display information for a VNX for block and turn synchronization
off, type:
$ nas_storage -info cx700_1 -option sync=no
id = 1
arrayname = APM00042000818
name = cx700_1
type = Clariion
model_type = RACKMOUNT
model_num = 700
db_sync_time = 1131986667 == Mon Nov 14 11:44:27 EST 2005
API_version = V6.0-629
num_disks = 60
num_devs = 34
num_pdevs = 8
num_storage_grps = 1
num_raid_grps = 16
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 60
high_watermark = 80
unassigned_cache = 0
is_local = True
failed_over = False
captive_storage = False
Active Software
-AccessLogix =
-FLARE-Operating-Environment = 02.16.700.5.004
-NavisphereManager =
Storage Processors
SP Identifier = A
signature = 1057303
microcode_version = 2.16.700.5.004
serial_num = LKE00040201171
prom_rev = 3.30.00
agent_rev = 6.16.0 (4.80)
phys_memory = 3967
sys_buffer = 773
read_cache = 122
write_cache = 3072
free_memory = 0
raid3_mem_size = 0
failed_over = False
hidden = False
network_name = spa
ip_address = 172.24.102.5
subnet_mask = 255.255.255.0
gateway_address = 172.24.102.254
num_disk_volumes = 20 - root_disk root_ldisk d3 d4 d5 d6 d7 d8 d9 d10
d11 d12 d13 d14 d15 d16 d17 d18 d19 d20
Port Information
Port 1
uid = 50:6:1:60:B0:60:1:CC:50:6:1:61:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:42:8:0:88:A0:36:F3
sp_source_id = 6373907
<...removed...>
Port 2
uid = 50:6:1:60:B0:60:1:CC:50:6:1:62:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:41:8:0:88:A0:36:F3
sp_source_id = 6373651
SP Identifier = B
signature = 1118484
microcode_version = 2.16.700.5.004
serial_num = LKE00041700812
prom_rev = 3.30.00
agent_rev = 6.16.0 (4.80)
phys_memory = 3967
sys_buffer = 773
read_cache = 122
write_cache = 3072
free_memory = 0
raid3_mem_size = 0
failed_over = False
hidden = False
network_name = spb
ip_address = 172.24.102.6
subnet_mask = 255.255.255.0
gateway_address = 172.24.102.254
num_disk_volumes = 0
Port Information
Port 1
uid = 50:6:1:60:B0:60:1:CC:50:6:1:69:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:3E:8:0:88:A0:36:F3
sp_source_id = 6372883
<...removed...>
Port 2
uid = 50:6:1:60:B0:60:1:CC:50:6:1:6A:30:60:1:CC
link_status = UP
port_status = ONLINE
switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:3D:8:0:88:A0:36:F3
sp_source_id = 6372627
Storage Groups
id = A4:74:8D:50:6E:A1:D9:11:96:E1:8:0:1B:43:5E:4F
name = ns704g-cs100
num_hbas = 18
num_devices = 24
shareable = True
hidden = False
Hosts
uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49
storage_processor = B
port = 1
server = server_4
uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49
storage_processor = A
port = 0
server = server_4
uid = 50:6:1:60:80:60:4:F0:50:6:1:61:0:60:4:F0
storage_processor = B
port = 0
server = server_2
<...removed...>
uid = 50:6:1:60:80:60:4:F0:50:6:1:68:0:60:4:F0
storage_processor = B
port = 1
server = server_3
uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77
storage_processor = B
port = 0
uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77
storage_processor = A
port = 0
ALU HLU
------------
0000 -> 0000
0001 -> 0001
0002 -> 0002
0003 -> 0003
0004 -> 0004
0005 -> 0005
0018 -> 0018
0019 -> 0019
0020 -> 0020
0021 -> 0021
0022 -> 0022
0023 -> 0023
0024 -> 0024
0025 -> 0025
0026 -> 0026
0027 -> 0027
0028 -> 0028
0029 -> 0029
0030 -> 0030
0031 -> 0031
0032 -> 0032
0033 -> 0033
0034 -> 0034
0035 -> 0035
Disk Groups
id = 0000
storage profiles = 2 - clar_r5_performance,cm_r5_performance
raid_type = RAID5
logical_capacity = 1068997528
num_spindles = 5 - 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4
num_luns = 6 - 0000 0001 0002 0003 0004 0005
num_disk_volumes = 6 - root_disk root_ldisk d3 d4 d5 d6
spindle_type = FC
bus = 0
raw_capacity = 1336246910
used_capacity = 62914560
free_capacity = 1006082968
hidden = False
<...removed...>
id = 0205
storage profiles = 0
raid_type = SPARE
logical_capacity = 622868992
num_spindles = 1 - 0_1_0
num_luns = 1 - 0205
num_disk_volumes = 0
spindle_type = ATA
bus = 0
raw_capacity = 622868992
used_capacity = 622868992
free_capacity = 0
hidden = False
Spindles
id = 0_0_0
product = ST314670 CLAR146
revision = 6A06
serial = 3KS088SQ
capacity = 280346624
used_capacity = 12582912
disk_group = 0000
hidden = False
type = FC
bus = 0
enclosure = 0
slot = 0
vendor = SEAGATE
remapped_blocks = -1
state = ENABLED
<...removed...>
id = 2_0_14
product = ST314670 CLAR146
revision = 6A06
serial = 3KS02RHM
capacity = 280346624
used_capacity = 224222822
disk_group = 0014
hidden = False
type = FC
bus = 2
enclosure = 0
slot = 14
vendor = SEAGATE
remapped_blocks = -1
state = ENABLED
Note: This is a partial display due to the length of the outputs.
EXAMPLE #9
----------
To delete a storage system with no attached disks, type:
$ nas_storage -delete APM00035101740
id = 0
serial_number = APM00035101740
name = APM00035101740
acl = 0
EXAMPLE #10
-----------
To synchronize the Control Station's view with all storage systems, type:
$ nas_storage -sync -all
done
EXAMPLE #11
-----------
To perform a health check on all storage systems, type:
$ nas_storage -check -all
Discovering storage (may take several minutes)
done
EXAMPLE #12
-----------
To set the access control level for the storage system
APM00042000818, type:
$ nas_storage -acl 1432 APM00042000818
id = 1
serial_number = APM00042000818
name = APM00042000818
acl = 1432, owner=nasadmin, ID=201
Note: The value 1432 specifies nasadmin as the owner and gives users with
an access level of at least observer read access only, users with an access
level of at least operator read/write access, and users with an access level
of at least admin read/write/delete access.
EXAMPLE #13
-----------
To modify the IP address of the VNX for block, type:
$ nas_storage -modify APM00072303347 -network -spa
10.6.4.225
Changing IP address for APM00072303347
Discovering storage (may take several minutes)
done
EXAMPLE #14
-----------
To reset the hostname, type:
$ nas_storage -resetssv
done
------------------------------------------------------
Last modified: July 26, 2011 12:35 p.m.
nas_syncrep
Manages Virtual Data Mover (VDM) synchronous replication
sessions. The list, info, and create switches of this command can be
executed on both the active and standby systems. Execute the delete
switch of this command on the active system. Execute the reverse,
failover, and Clean switches of this command on the standby system.
SYNOPSIS
-----------
nas_syncrep
-list
| -info { -all | <name> | id=<id> } [-verbose]
| -create <name>
-vdm <vdm_name>
-remote_system <cel_name>
-remote_pool <pool_name>
-remote_mover <mover_name>
[-network_devices <local_device_name>:<remote_device_name>[,...]]
| -start { -all | <name> | id=<id> }
| -delete { <name> | id=<id> }
| -reverse { <name> | id=<id> }
| -failover { <name> | id=<id> }
| -Clean { -all | <name> | id=<id> } [-Force]
| -Refresh_pairs { -all | <name> | id=<id> }
DESCRIPTION
-----------
nas_syncrep creates, manages, or displays session information for ongoing
VDM synchronous replication sessions. Each session handles a single object
between the active and standby systems.
OPTIONS
-------
-list
Displays all configured synchronous replication sessions on the local
system’s NAS database and those having the local system as the standby
system in the remote system’s replicated NAS database.
-info { -all | <name> | id=<id> } [-verbose]
Displays the status of a specific configured synchronous replication
session, or the status of all synchronous replication sessions.
-create <name>
Assigns a name to the synchronous replication session. The session name
is case-sensitive and supports the following characters: a through z,
A through Z, 0 through 9, _ (underscore), and - (hyphen), though names
may not start with a hyphen. The maximum length of the name is 128
characters.
The following items will need to be manually migrated using the
migrate_system_conf command after the creation of a synchronous
replication session and any time this data changes:
DNS
NIS
NTP
Local passwd and group
Usermapper client
FTP/SFTP, LDAP, HTTP, CEPP, CAVA, Server Parameters
Netgroup
Nsswitch
Hosts
-vdm <vdm_name>
Specifies the name of an existing source sync-replicable VDM to replicate.
-remote_system <cel_name>
Specifies the name of an existing remote VNX system.
-remote_pool <pool_name>
Specifies the name of an existing remote user-defined pool.
-remote_mover <mover_name>
Specifies the name of the existing remote Data Mover.
[-network_devices <local_device_name>:<remote_device_name>[,...]]
Specifies the mappings of the local and remote network devices.
-local_storage journal=<alu>
Specifies the assigned system LUN for the local journal volume.
-remote_storage journal=<alu>
Specifies the assigned system LUN for the remote journal volume.
-start { -all | <name> | id=<id> }
Starts all SRDF synchronous replication sessions or a specified
synchronous replication session. Execute this switch on the standby system.
-delete { <name> | id=<id> }
Deletes a synchronous replication session of specific name or ID
with local system as active. Execute this switch on the active system.
-reverse { <name> | id=<id> }
Switches the active/standby role of the two VNX systems in a
synchronous replication session when both are up.
Execute this switch on the standby system.
-failover { <name> | id=<id> }
Fails over the specified VDM to the standby system to make it active.
Execute this switch on the standby system.
-Clean { -all | <name> | id=<id> } [-Force]
Cleans all synchronous replication sessions or a specified
synchronous replication session. Execute this switch on the standby system.
Note: Always use '-all' whenever there are multiple sessions to be cleaned. Otherwise,
the operation will error out.
-Refresh_pairs { -all | <name> | id=<id> }
Refreshes all synchronous replication sessions or a specified
synchronous replication session to establish RDF pairing for any newly
added devices. Execute this switch on the active system.
Note: After failover, the LUNs on the standby system under synchronous
replication are Read Only, and the original VDM/File Systems/checkpoints remain
on them. If any write operation occurs on those objects, such as mounting a
File System or writing I/O to a File System, the Data Mover will panic.
The Clean operation removes those obsolete objects from the failed system
for the specified synchronous session, or all synchronous replication sessions
on the standby system, so that the Data Mover can be returned to use.
EXAMPLE #1
---------To list synchronous replication sessions, type:
$ nas_syncrep -list
id     name         vdm_name  remote_system  session_status
5020   my_syncrep1  my_vdm1   -->my_system1  sync_in_progress
10030  my_syncrep2  my_vdm2   <--my_system1  in_sync
EXAMPLE #2
---------To display information about a synchronous replication session by ID, type:
$ nas_syncrep -i id=4096
id             = 4096
name           = LY2E6_session1
vdm_name       = LY2E6_vdm1
syncrep_role   = active
local_system   = LY2E6_CS0
local_pool     = src_sg_1
local_mover    = server_2
remote_system  = L9P36_CS0
remote_pool    = dst_sg_1
remote_mover   = server_2
device_group   = 61_260_60_125
session_status = in_sync
EXAMPLE #3
---------To create a synchronous replication session, type:
$ nas_syncrep -create LY2E6_session1 -vdm LY2E6_vdm1 -remote_system L9P36_CS0
-remote_pool l9p36_marketing_sg -remote_mover server_2 -network_devices cge0:cge0
Now validating params...                       done
Now creating LUN mapping...                    done
Now creating remote network interface(s)...    done
Now marking remote pool as standby pool...     done
Now updating local disk type...                done
Now updating remote disk type...               done
Now generating session entry...                done
done
EXAMPLE #4
---------To delete a synchronous replication session, type:
$ nas_syncrep -delete my_syncrep1
WARNING: Please do not perform any operation on my_syncrep1 on standby
system until delete is done.
Deleting...
done
done
EXAMPLE #5
---------To reverse a synchronous replication session, type:
$ nas_syncrep -reverse id=4315
WARNING: There will be a period of Data Unavailability during the reverse
operation, and, after the reverse operation, the VDM/FS(s)/checkpoint(s)
protected by the sync replication session will be reversed to the local site.
Are you sure you want to proceed? [yes or no] yes
Now doing precondition check...                         done: 19 s
Now doing health check...                               done: 11 s
Now cleaning local...                                   done: 1 s
Service outage start......
Now turning down remote network interface(s)...         done: 8 s
Now switching the session (may take several minutes)... done: 7 s
Now importing sync replica of NAS database...           done: 16 s
Now creating VDM...                                     done: 5 s
Now importing VDM settings...                           done: 0 s
Now mounting exported FS(s)/checkpoint(s)...            done: 13 s
Now loading VDM...                                      done: 3 s
Now turning up local network interface(s)...            done: 0 s
Service outage end:                                     52 s
Now mounting unexported FS(s)/checkpoint(s)...          done: 0 s
Now importing schedule(s)...                            done: 0 s
Now unloading remote VDM/FS(s)/checkpoint(s)...         done: 16 s
Now cleaning remote...                                  done: 17 s
Elapsed time:                                           116 s
done
EXAMPLE #6
---------To failover a synchronous replication session, type:
$ nas_syncrep -failover id=4560
WARNING: You have just issued the nas_syncrep -failover command.
Verify whether the peer system or any of its file storage resources
are accessible. If they are, then you should issue the nas_syncrep
-reverse command instead. Running the nas_syncrep -failover command
while the peer system is still accessible could result in Data
Unavailability or Data Loss. Are you sure you want to proceed?
[yes or no] yes
Now doing precondition check...                         done: 30 s
Now doing health check...                               done: 7 s
Now cleaning local...                                   done: 1 s
Now switching the session (may take several minutes)... done: 4 s
Now importing sync replica of NAS database...           done: 15 s
Now creating VDM...                                     done: 5 s
Now importing VDM settings...                           done: 0 s
Now mounting exported FS(s)/checkpoint(s)...            done: 3 s
Now loading VDM...                                      done: 4 s
Now turning up local network interface(s)...            done: 0 s
Service outage end: 69 s
Now mounting unexported FS(s)/checkpoint(s)...          done: 0 s
Now importing schedule(s)...                            done: 0 s
Elapsed time: 69 s
done
EXAMPLE #7
---------To clean a synchronous replication session, type:
[nasadmin@L9P36_CS0 ~]$ nas_syncrep -Clean LY2E6_session1
WARNING: You have just issued the nas_syncrep -Clean command. This may result
in a reboot of the original source Data Mover that the VDM was failed over
from. Verify whether or not you have working VDM(s)/FS(s)/checkpoint(s) on
this Data Mover and plan for this reboot accordingly. Running the nas_syncrep
-Clean command while you have working VDM(s)/FS(s)/checkpoint(s) on this Data
Mover will result in Data Unavailability during the reboot. Are you sure you
want to proceed? [yes or no] yes
Now cleaning session LY2E6_session1 (may take several minutes)... done
Now starting session LY2E6_session1...
done
done
EXAMPLE #8
---------To refresh a synchronous replication session, type:
$ nas_syncrep -Refresh_pairs LY2E6_session1
WARNING: You have just issued the nas_syncrep -Refresh_pairs command.
Please do not perform any operation(s) on the remote (R2) side during
the same. Also note that the operation cannot be reverted. Are you
sure you want to proceed? [yes or no] yes
Now refreshing session LY2E6_session1...
done
----------------------------------------------------------------------Last modified: December 5, 2014 11:20 a.m.
nas_task
Manages in-progress or completed tasks.
SYNOPSIS
-------nas_task
-list [-remote_system {<remoteSystemName>|id=<id>}]
| -info {-all|<taskId>}
[-remote_system {<remoteSystemName>|id=<id>}]
| -abort <taskId>
[-mover <moverName>][-remote_system {<remoteSystemName>|id=<id>}]
| -delete <taskId>
[-remote_system {<remoteSystemName>|id=<id>}]
DESCRIPTION
----------nas_task lists the tasks associated with commands currently in
progress or completed, reports information about a particular task,
aborts a task, or deletes a task. Each task can be uniquely identified
by its task ID and the remote VNX system name or ID.
Use the nas_task command to monitor, abort, and delete long
running tasks and tasks started in asynchronous mode.
OPTIONS
-------list
Lists all local tasks that are in progress, or completed tasks that have
not been deleted. For each task, lists the task ID, remote system name,
a description of the task, and the task state (running, recovering,
succeeded, or failed).
-remote_system {<remoteSystemName>|id=<id>}
Lists local tasks initiated by the specified remote VNX system.
Specify the remote system name or ID.
-info {-all|<taskId>}
Provides more detailed status information for all tasks or for a
particular task. Displays the run time status, estimated completion
time, and percent complete for running tasks. Displays the
completion status and actual end time for completed tasks.
The taskID is the ID returned from a command run in the background
mode or from the nas_task -list command.
Note: The ID of the task is an integer and is assigned automatically.
The task ID is unique to the VNX.
[-remote_system {<remoteSystemName>|id=<id>}]
Provides more detailed status information of local tasks initiated
by the specified remote VNX system. Specify the remote system
name or remote system ID. The remote system name is returned
from the nas_task -list command.
-abort <taskId>
Aborts the specified task leaving the system in a consistent state. For
example, it aborts a one-time copy in progress. This might take a long
time to complete because a remote system may be unavailable or the
network may be down. You should check the status of the task to
verify that the task was aborted. This command can be executed from
the source only.
CAUTION
This option might leave the system in an inconsistent state. Use
caution when using this option.
[-mover <moverName>]
Aborts a task running locally on the specified Data Mover.
This command can be executed from the source or destination side.
Use this command when the source and destination VNX systems
cannot communicate. You should run this command on both
sides.
[-remote_system {<remoteSystemName>|id=<id>}]
Aborts a task that was initiated on a remote VNX leaving the
source side intact. Specify the Data Mover to abort a task from the
destination side. Specify the Data Mover and remote system
name or remote system id along with the task id.
-delete <taskId>
Based on the task ID, deletes a completed task from the database on
the Control Station.
[-remote_system {<remoteSystemName>|id=<id>}]
Deletes a task that was initiated on a remote VNX. Specify the
remote system name or remote system id along with the task id.
SEE ALSO
-------Using VNX Replicator, nas_copy, nas_replicate, and nas_cel.
EXAMPLE #1
---------To display detailed information about the task with taskID 4241, type:
$ nas_task -info 4241
Task Id = 4241
Celerra Network Server = cs100
Task State = Running
Percent Complete = 95
Description = Create Replication ufs1_replica1.
Originator = nasadmin@cli.localhost
Start Time = Mon Dec 17 14:21:35 EST 2007
Estimated End Time = Mon Dec 17 19:24:21 EST 2007
Schedule = n/a
Where:
Value                Definition
-----                ----------
Task Id              Globally unique character string used as the identifier of the task.
VNX                  When set, local.
Remote Task Id       When set, identifies a remote task.
State                Running, Recovering, Completed, or Failed. Running could be a
                     combination of completed and failed.
Current Activity     Displays state property when available.
Percent Completed    Appears only when set and not complete.
Description          Appears if details are set.
Originator           User or host that initiated the task.
Start Time/End Time  The starting time and ending time (or status) for the task.
Estimated End Time   Appears instead of previous line when available and task is
                     incomplete.
Schedule             The schedule in effect, or n/a for a task that is not a scheduled
                     checkpoint refresh.
Response Statuses    Displayed list of messages, if any. A completed task should always
                     have one.
EXAMPLE #2
---------To display the list of all tasks, type:
$ nas_task -list
ID    Task State  Originator     Start Time                    Description                  Schedule  Remote System
4241  Running     nasadmin@cli+  Mon Dec 17 14:21:35 EST 2007  Create Replication ufs1_r+             cs100
4228  Succeeded   nasadmin@cli+  Mon Dec 17 14:04:02 EST 2007  Delete task 4214.            NONE.     cs100
4177  Failed      nasadmin@cli+  Mon Dec 17 13:59:26 EST 2007  Create Replication ufs1_r+             cs100
4150  Succeeded   nasadmin@cli+  Mon Dec 17 13:55:39 EST 2007  Delete task 4136.            NONE.     cs100
4127  Succeeded   nasadmin@cli+  Mon Dec 17 11:38:32 EST 2007  Delete task 4113.            NONE.     cs100
4103  Succeeded   nasadmin@cli+  Mon Dec 17 11:21:00 EST 2007  Delete task 4098.            NONE.     cs100
4058  Succeeded   nasadmin@cli+  Fri Dec 14 16:43:23 EST 2007  Switchover Replication       NONE.     cs100
2277  Succeeded   nasadmin@cli+  Fri Dec 14 16:42:08 EST 2007  Reverse Replication          NONE.     cs110
2270  Succeeded   nasadmin@cli+  Fri Dec 14 16:40:29 EST 2007  Start Replication            NONE.     cs110
2265  Failed      nasadmin@cli+  Fri Dec 14 16:40:11 EST 2007  Start Replication            NONE.     cs110
EXAMPLE #1 provides a description of the outputs.
EXAMPLE #3
---------To abort task 4267 running locally on server_3, type:
$ nas_task -abort 4267 -mover server_3
OK
EXAMPLE #4
---------To delete the existing task 4267, type:
$ nas_task -delete 4267
OK
---------------------------------------------------------------Last Modified: May 10, 2011 5:00 pm
nas_version
Displays the software version running on the Control Station.
SYNOPSIS
-------nas_version
[-h|-l]
DESCRIPTION
----------nas_version displays the Control Station version in long form or
short form. When used during a software upgrade, it informs the user
about the upgrade in progress.
OPTIONS
------No arguments
Displays the software version running on the Control Station.
-h
Displays command usage.
-l
Displays detailed software version information for the Control
Station.
EXAMPLE #1
---------To display the software version running on the Control Station
during a software upgrade, type:
$ nas_version
5.6.25-0
EXAMPLE #2
---------To display the system output during a software upgrade, type:
$ nas_version
5.6.19-0
Warning!!Upgrade is in progress from 5.6.19-0 to 5.6.20-0
Warning!!Please log off IMMEDIATELY if you are not upgrading the system
EXAMPLE #3
---------To display the usage for nas_version, type:
$ nas_version -h
usage: /nas/bin/nas_version [-h|-l]
-h help
-l long_format
EXAMPLE #4
---------To display detailed software version information for the Control
Station, type:
$ nas_version -l
Name : emcnas Relocations: /nas
Version : 5.6.19 Vendor: EMC
Release : 0 Build Date: Tue 19 Dec 2006 08:53:31 PM EST
Size : 454239545 License: EMC Copyright
Signature : (none)
Packager : EMC Corporation
URL : http://www.emc.com
Summary : EMC nfs base install
Description : EMC nfs base install
EXAMPLE #5
---------To display detailed software version information for the Control
Station during a software upgrade, type:
$ nas_version -l
Name : emcnas Relocations: /nas
Version : 5.6.19 Vendor: EMC
Release : 0 Build Date: Wed 14 Mar 2007 12:36:55 PM EDT
Size : 500815102 License: EMC Copyright
Signature : (none)
Packager : EMC Corporation
URL : http://www.emc.com
Summary : EMC nfs base install
Description : EMC nfs base install
Warning!!Upgrade is in progress from 5.6.19-0 to 5.6.20-0
Warning!!Please log off IMMEDIATELY if you are not upgrading the system
-----------------------------------------------------------Last modified: May 10, 2011 5:15 pm.
nas_volume
Manages the volume table.
SYNOPSIS
-------nas_volume
-list
| -delete <volume_name>
| -info [-size] {-all|<volume_name>} [-tree]
| -rename <old_name> <new_name>
| -size <volume_name>
| -acl <acl_value> <volume_name>
| -xtend <volume_name> {<volume_name>,...}
| [-name <name>] -create [-Stripe [<stripe_size>]|-Meta]
  [-Force] {<volume_name>,...}
| -Clone <volume_name> [{<svol>:<dvol>,...}][-option <options>]
DESCRIPTION
----------nas_volume creates metavolumes and stripe volumes and lists,
renames, extends, clones, and deletes metavolumes, stripe, and slice
volumes. nas_volume sets an access control value for a volume, and
displays detailed volume attributes, including the total size of the
volume configuration.
OPTIONS
-------list
Displays the volume table.
Note: The ID of the object is an integer and is assigned automatically. The
name of the volume may be truncated if it is more than 17 characters. To
display the full name, use the -info option with the volume ID.
-delete <volume_name>
Deletes the specified volume.
-info [-size] {-all|<volume_name>} [-tree]
Displays attributes and the size for all volumes, or the specified
<volume_name>. The -tree option recursively displays the volume
set, that is, the list of component volumes for the specified volume or
all volumes.
-rename <old_name> <new_name>
Changes the current name of a volume to a new name.
-size <volume_name>
Displays the total size in MB of the <volume_name>, including used
and available space.
-acl <acl_value> <volume_name>
Sets an access control level value that defines the owner of the
volume, and the level of access allowed for users and groups defined
in the access control level table. The nas_acl command provides
information.
-xtend <volume_name> {<volume_name>,...}
Extends the specified metavolume by adding volumes to the
configuration. The total size of the metavolume increases by the sum
of all the volumes added.
Note: Only metavolumes can be extended. The volume that was added
remains in use until the original metavolume is deleted. Volumes containing
mounted file systems cannot be extended using this option. The nas_fs
command provides information to extend a volume that is hosting a
mounted file system.
-create {<volume_name>,...}
Creates a volume configuration from the specified volumes. Unless
otherwise specified, volumes are automatically created as
metavolumes.
[-name <name>]
Assigns a <name> to volume. If a name is not specified, one is
assigned automatically. The name of a volume is case-sensitive.
[-Stripe <stripe_size>|-Meta]
Sets the type for the volume to be either a stripe volume or
metavolume (default). If -Stripe is specified, type a stripe size in
multiples of 8192 bytes, with a recommended size of 262,144 bytes (256 KB)
for all environments and drive types. If a stripe size is not specified, the
system creates a 256 KB stripe by default.
nas_slice provides information to create a slice volume.
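As a sketch of the stripe-size rule described above (a positive multiple of 8192 bytes, defaulting to 256 KB), the check can be expressed in a few lines. This is an illustrative helper only, not part of the eNAS CLI:

```python
DEFAULT_STRIPE_SIZE = 262144  # 256 KB: the recommended and default stripe size

def resolve_stripe_size(requested=None):
    """Return the stripe size to use for -Stripe, enforcing the documented
    rule that a size must be a positive multiple of 8192 bytes."""
    if requested is None:
        return DEFAULT_STRIPE_SIZE  # system default when no size is given
    if requested <= 0 or requested % 8192 != 0:
        raise ValueError("stripe size must be a positive multiple of 8192 bytes")
    return requested
```

For example, the 32768-byte stripe used in EXAMPLE #5 passes this check, while a size such as 30000 would be rejected.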
[-Force] {<volume_name>,...}
Forces the creation of a volume on a mixed storage system.
-Clone <volume_name>
Creates an exact clone of the specified <volume_name>. Volumes can
be cloned from slice, stripe, or metavolumes. The name automatically
assigned to the clone is derived from the ID of the volume.
[{<svol>:<dvol>,...}]
Sets a specific disk volume set for the source volume and the
destination volume. The size of the destination volume must be
the same as the source volume.
-option disktype=<type>
Specifies the type of disk to be created.
Disk types when using VNX for block are CLSTD, CLEFD, and
CLATA, and for VNX for block involving mirrored disks are:
CMEFD, CMSTD, and CMATA.
Disk types when using a Symmetrix are STD, R1STD, R2STD,
BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA,
R2BCA, and EFD.
SEE ALSO
-------Managing Volumes and File Systems with VNX Automatic Volume
Management, Managing Volumes and File Systems for VNX Manually,
Using TimeFinder/FS, NearCopy, and FarCopy on VNX for File,
Controlling Access to System Objects on VNX, nas_slice, nas_disk,
nas_acl, and nas_fs.
EXAMPLE #1
---------To list all volumes, type:
$ nas_volume -list
id inuse type acl name cltype clid
1 y 4 0 root_disk 0 1,2,3,4,5,6,7,8,9,10,11,
12,13,14,15,16,17,18,19,20,
21,22,23,24,25,26,27,28,29,
30,31,32,33,34,51
2 y 4 0 root_ldisk 0 35,36,37,38,39,40,41,42,
43,44,45,46,47,48,49,50,52
3 y 4 0 d3 1 76
4 y 4 0 d4 1 77
5 y 4 0 d5 1 78
6 y 4 0 d6 1 79
7 n 1 0 root_dos 0
8 n 1 0 root_layout 0
9 y 1 0 root_slice_1 1 10
10 y 3 0 root_volume_1 2 1
11 y 1 0 root_slice_2 1 12
12 y 3 0 root_volume_2 2 2
13 y 1 0 root_slice_3 1 14
...
Note: This is a partial listing due to the length of the outputs.
Where:
Value   Definition
-----   ----------
id      ID of the volume.
inuse   Whether the volume is used.
type    Type assigned to the volume. Available types are: 1=slice, 2=stripe,
        3=meta, 4=disk, and 100=pool.
acl     Access control level assigned to the volume.
name    Name assigned to the volume.
cltype  The client type of the volume. Available values are:
        0 - If the clid field is not empty then the client is a slice.
        1 - The client is another volume (meta, stripe, volume_pool).
        2 - The client is a file system.
clid    ID of the client.
EXAMPLE #2
---------To create a metavolume named, mtv1, on disk volume, d7, type:
$ nas_volume -name mtv1 -create d7
id = 146
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d7
disks = d7
Where:
Value       Definition
-----       ----------
id          ID of the volume.
name        Name assigned to the volume.
acl         Access control level value assigned to the volume.
in_use      Whether the volume is used.
type        Type assigned to the volume. Types are meta, stripe, slice, disk,
            and pool.
volume_set  Name assigned to the volume.
disks       Disks used to build a file system.
EXAMPLE #3
---------To display configuration information for mtv1, type:
$ nas_volume -info mtv1
id = 146
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d7
disks = d7
EXAMPLE #4
---------To rename mtv1 to mtv2, type:
$ nas_volume -rename mtv1 mtv2
id = 146
name = mtv2
acl = 0
in_use = False
type = meta
volume_set = d7
disks = d7
EXAMPLE #5
----------To create a stripe volume named, stv1, with a size of 32768 bytes
on disk volumes d10, d12, d13, and d15, type:
$ nas_volume -name stv1 -create -Stripe 32768 d10,d12,d13,d15
id = 147
name = stv1
acl = 0
in_use = False
type = stripe
stripe_size = 32768
volume_set = d10,d12,d13,d15
disks = d10,d12,d13,d15
Where:
Value        Definition
-----        ----------
stripe_size  Specified size of the stripe volume.
EXAMPLE #6
---------To clone mtv1, type:
$ nas_volume -Clone mtv1
id = 146
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d7
disks = d7
id = 148
name = v148
acl = 0
in_use = False
type = meta
volume_set = d8
disks = d8
EXAMPLE #7
---------To clone the volume mtv1 and set the disk type to BCV, type:
$ /nas/sbin/rootnas_volume -Clone mtv1 -option disktype=BCV
id = 322
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d87
disks = d87
id = 323
name = v323
acl = 0
in_use = False
type = meta
volume_set = rootd99
disks = rootd99
EXAMPLE #8
---------To extend mtv1 with mtv2, type:
$ nas_volume -xtend mtv1 mtv2
id = 146
name = mtv1
acl = 0
in_use = False
type = meta
volume_set = d7,mtv2
disks = d7,d8
EXAMPLE #9
---------To display the size of mtv1, type:
$ nas_volume -size mtv1
total = 547418 avail = 547418 used = 0 ( 0% ) (sizes in MB)
Where:
Value  Definition
-----  ----------
total  Total size of the volume.
avail  Amount of unused space on the volume.
used   Amount of space used on the volume.
EXAMPLE #10
----------To set the access control level for the metavolume mtv1, type:
$ nas_volume -acl 1432 mtv1
id = 125
name = mtv1
acl = 1432, owner=nasadmin, ID=201
in_use = False
type = meta
volume_set = d7,mtv2
disks = d7,d8
Note: The value 1432 specifies nasadmin as the owner and gives users with
an access level of at least observer read access only, users with an access
level of at least operator read/write access, and users with an access level
of at least admin read/write/delete access.
EXAMPLE #11
----------To delete mtv1, type:
$ nas_volume -delete mtv1
id = 146
name = mtv1
acl = 1432, owner=nasadmin, ID=201
in_use = False
type = meta
volume_set = d7,mtv2
disks = d7,d8
-----------------------------------------------------------------------Last modified: April 29 2011, 3:15 pm.
FS CLI Commands
This chapter lists the eNAS Command Set provided for managing,
configuring, and monitoring the specified file system. The commands are
prefixed with fs and appear alphabetically. The command line syntax
(Synopsis), a description of the options, and examples of usage are provided
for each command.
fs_ckpt
fs_dedupe
fs_dhsm
fs_group
fs_rdf
fs_timefinder
fs_ckpt
Manages checkpoints using the EMC SnapSure functionality.
SYNOPSIS
-------fs_ckpt {<fs_name>|id=<fs_id>}
-list [-all]
| [-name <name>] -Create [-readonly {y|n}][<volume_name>]
[-option <options>]
| [-name <name>] -Create [-readonly {y|n}][size=<integer>[T|G|M|%]]
[pool=<pool>][storage=<system_name>][-option <options>]
| -refresh [-option <options>]
| [-name <name>] -Restore [-Force][-option <options>]
| -modify [%full=<value>][maxsavsize=<integer>[T|G|M]]
DESCRIPTION
----------The fs_ckpt command creates a checkpoint of a Production File System (PFS),
lists associated checkpoints, refreshes a checkpoint to the current time, and
restores a PFS back to a specific point in time using a checkpoint.
Checkpoints are deleted using nas_fs.
What is a checkpoint file system?
A PFS is made up of blocks. When a block within a PFS is modified, a copy
containing the original contents of that block is saved to a metavolume called
the SavVol. Subsequent changes made to the same block in the PFS are not
copied into the SavVol. The original blocks from the PFS (in the SavVol) and
the unchanged PFS blocks (that remain in the PFS) are read according to a
bitmap and blockmap data tracking structure. These blocks combine to provide a
complete point-in-time file system image which is called a checkpoint.
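The copy-on-first-write behavior described above can be modeled in a few lines. This is a simplified illustration using an in-memory block list, not the actual SnapSure implementation:

```python
class CheckpointModel:
    """Simplified model of a checkpoint: the first write to a PFS block
    saves the original contents to the SavVol; later writes to the same
    block are not copied again. Reads through the checkpoint combine
    saved originals with unchanged PFS blocks."""

    def __init__(self, pfs):
        self.pfs = pfs          # live Production File System blocks
        self.savvol = {}        # block index -> original contents

    def write(self, index, data):
        if index not in self.savvol:     # only the first change is copied
            self.savvol[index] = self.pfs[index]
        self.pfs[index] = data

    def read(self, index):
        # Point-in-time view: saved original if changed, else the PFS block.
        return self.savvol.get(index, self.pfs[index])
```

Writing the same block twice stores only one original copy in the SavVol, which is why refreshing a checkpoint (discarding saved originals) lets SavVol space be reused.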
OPTIONS
-------list [-all]
Displays all of the associated checkpoints for the specified file system. The
-all option displays system-generated Replication checkpoints in addition to
checkpoints created by the user.
[-name <name>] -Create
Creates, mounts, and optionally assigns a name to the checkpoint of the PFS.
The checkpoint must be unmounted prior to unmounting the PFS. Names assigned
to a checkpoint cannot be all numeric. If a name is not chosen, one is
assigned by default.
[-readonly {y|n}]
Specifies whether a checkpoint is read only or not. y (default) sets the
checkpoint as read only; n sets the checkpoint as writeable.
[<volume_name>]
Specifies an unused metavolume for the checkpoint.
Note: A volume can be specified for only the first checkpoint of a
PFS, as all of the subsequent checkpoints share the same SavVol. The
minimum size required for a SavVol is 64 MB. The default SavVol size is 10
GB; however, if the PFS is less than 10 GB, the volume is the same
size as the file system.
[-option <options>]
Specifies the following comma-separated options:
%full=<value>
Specifies a value as the percentage threshold permitted for the SavVol. When
that value is reached, a warning is sent to the server_log and the syslog
files. The Control Station acknowledges the event and automatically extends
the checkpoint. The SavVol is automatically extended by 10 GB if its default
%full value is reached. If the %full value is set to zero, the option is
disabled.
maxsavsize=<integer>[T|G|M]
Limits the final size to which the SavVol can be automatically
extended when the high watermark value specified in %full has been
reached. Automatic extension of the SavVol stops when the size of
the SavVol reaches the value specified in maxsavsize. The range for
maxsavsize is 64 MB to 16 TB.
automount=no
Stops the checkpoint from being automatically mounted.
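The <integer>[T|G|M] suffix and the 64 MB to 16 TB range documented for maxsavsize can be sketched as a small parser. This is a hypothetical helper for illustration (assuming 1 T = 1024 G = 1024 * 1024 M), not part of the CLI:

```python
UNITS_MB = {"M": 1, "G": 1024, "T": 1024 * 1024}  # unit multipliers, in MB

def parse_maxsavsize(spec):
    """Parse a maxsavsize value such as '2G' into megabytes and enforce
    the documented range of 64 MB to 16 TB."""
    number, unit = spec[:-1], spec[-1:]
    if unit not in UNITS_MB or not number.isdigit():
        raise ValueError("expected <integer> followed by T, G, or M")
    mb = int(number) * UNITS_MB[unit]
    if not 64 <= mb <= 16 * UNITS_MB["T"]:
        raise ValueError("maxsavsize must be between 64 MB and 16 TB")
    return mb
```

For instance, "64M" and "16T" sit at the edges of the accepted range, while "63M" or "17T" would be rejected.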
[-name <name>] -Create
Creates, mounts, and optionally assigns a name to the checkpoint of the PFS.
The checkpoint must be unmounted prior to unmounting the PFS. Names assigned
to a checkpoint cannot be all numeric. If a name is not chosen, one is
assigned by default.
[-readonly {y|n}]
Specifies whether a checkpoint is read-only or not. The default option is
y. It sets the checkpoint as read-only; n sets the checkpoint as writeable.
[size=<integer>[T|G|M|%]]
Specifies a size for the checkpoint file system. Type an integer between
1 and 1024, and specify T for terabytes, G for gigabytes (default), or M for
megabytes. An integer representing the percentage of the file
system's size can also be typed, followed by the percent sign.
[pool=<pool>]
Specifies the storage pool to be used for the checkpoint. Storage pools can
either be user-defined or system-defined. The nas_pool -list command displays
a listing of available pool types.
[storage=<system_name>]
Specifies the attached storage system where the checkpoint SavVol will
reside.
[-option <options>]
Specifies the following comma-separated options:
%full=<value>
Specifies a value as the percentage threshold permitted for the SavVol. When
that value is reached, a warning is sent to the server_log and the syslog
files. The Control Station acknowledges the event and automatically extends
the checkpoint. The SavVol is automatically extended by 10 GB if its default
%full value is reached. If the %full value is set to zero, the option is
disabled. The default for <value> is 90, and it can be within the range
of 10 to 99.
automount=no
Stops the checkpoint from being automatically mounted.
-refresh
Initiates an immediate update of a checkpoint, thereby allowing the SavVol
space to be reused. Refreshing a checkpoint does not add to the number of
checkpoints of the PFS.
[-option <options>] %full=<value>
Specifies a value as the percentage threshold permitted for the metavolume.
When that value is reached, a warning is sent to the server_log and the syslog
files. The Control Station acknowledges the event and automatically extends
the checkpoint. The SavVol is automatically extended by 10 GB if its default
%full value is reached. If the %full value is set to zero, the option is
disabled. The default for <value> is 90.
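The auto-extension behavior that %full controls can be sketched as a simple rule: when SavVol usage reaches the threshold, grow by 10 GB up to the maxsavsize cap. This is an illustrative model assuming sizes in MB, not the Control Station's actual logic:

```python
def auto_extend_savvol(size_mb, used_mb, full_pct=90, max_mb=16 * 1024 * 1024):
    """Return the new SavVol size in MB. When usage reaches the %full
    threshold, the SavVol grows by 10 GB, capped at maxsavsize.
    A %full value of zero disables auto-extension."""
    if full_pct == 0:
        return size_mb                           # option disabled
    if used_mb * 100 >= size_mb * full_pct:
        return min(size_mb + 10 * 1024, max_mb)  # extend by 10 GB
    return size_mb
```

With the default threshold of 90, a 10 GB SavVol that reaches 9 GB of use would be extended to 20 GB, while a SavVol already at the 16 TB cap stays unchanged.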
-modify
Modifies one or all of the following options:
Note: The -modify action works only on the PFS and not on the checkpoint.
[%full=<value>]
Modifies the value of the percentage threshold permitted for the
metavolume.
[maxsavsize=<integer>[T|G|M]]
Modifies the final size to which the SavVol can be automatically extended,
when the size specified in %full is reached.
[-name <name>] -Restore
Restores the PFS from the specified checkpoint and optionally assigns a name
to the automatically created checkpoint. If a name is not chosen, one is
assigned by default.
Note: As part of the restore, a new checkpoint is automatically created to
capture the latest point-in-time image of the PFS. This is for protection
in the event that the restored image is discarded.
[-Force]
The -Force option must be used when restoring a production file system
with File-Level Retention enabled.
Caution: Forcing a restore of a production file system with
File-Level Retention enabled from a checkpoint will delete or
overwrite files that were written after this checkpoint was
created or refreshed.
[-option <options>]
Specifies the following comma-separated option(s):
%full=<value>
Specifies a value as the percentage threshold permitted for the
SavVol. When that value is reached, a warning is sent to the
server_log and the syslog file. The Control Station acknowledges the
event and automatically extends the checkpoint. The SavVol is
automatically extended by 10 GB if its default %full value is reached.
If the %full value is set to zero, the option is disabled. The <value>
can be an integer between 10 and 75 (default).
automount=no
Stops the checkpoint from being automatically mounted.
SEE ALSO
--------
Using VNX SnapSure, nas_fs, and nas_pool.
STORAGE SYSTEM OUTPUT
The number associated with the storage device is dependent on the attached
storage system. VNX for block displays a prefix of APM before a set of
integers, for example, APM00033900124-0019. EMC Symmetrix storage
systems display as 002804000190-003C. The outputs displayed in the
examples use a VNX for block.
EXAMPLE #1
----------To display the checkpoint for the file system fs4, type:
$ fs_ckpt fs4 -list
id    ckpt_name  creation_time            inuse  fullmark  total_savvol_used  ckpt_usage_on_savvol
1406  fs4_ckpt1  05/26/2008-16:22:19-EDT  y      90%       51%                0%

id    ckpt_name  inuse  fullmark  total_savvol_used  base  ckpt_usage_on_savvol
EXAMPLE #2
---------To display all the checkpoints including internal checkpoints
for the file system fs4, type:
$ fs_ckpt fs4 -list -all
id    ckpt_name                   creation_time            inuse  fullmark  total_savvol_used  ckpt_usage_on_savvol
1401  root_rep_ckpt_1398_21625_1  05/26/2008-16:11:10-EDT  y      90%       51%                0%
1402  root_rep_ckpt_1398_21625_2  05/26/2008-16:11:22-EDT  y      90%       51%                0%
1406  fs4_ckpt1                   05/26/2008-16:22:19-EDT  y      90%       51%                0%

id    wckpt_name  inuse  fullmark  total_savvol_used  base  ckpt_usage_on_savvol
EXAMPLE #3
---------To create a checkpoint of ufs1, on the volume, ssmtv1, type:
$ fs_ckpt ufs1 -Create ssmtv1
operation in progress (not interruptible)...id = 22
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
ckpts     = ufs1_ckpt1
stor_devs = APM00043807043-0010,APM00043807043-0014
disks     = d7,d9
disk=d7  stor_dev=APM00043807043-0010 addr=c0t1l0   server=server_2
disk=d7  stor_dev=APM00043807043-0010 addr=c16t1l0  server=server_2
disk=d9  stor_dev=APM00043807043-0014 addr=c0t1l4   server=server_2
disk=d9  stor_dev=APM00043807043-0014 addr=c16t1l4  server=server_2

id        = 24
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp132
pool      =
member_of =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Oct 13 18:01:04 EDT 2004
used      = 0%
full(mark)= 90%
stor_devs = APM00043807043-0011,APM00043807043-0017
disks     = d12,d15
disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1  server=server_2
disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1   server=server_2
disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7  server=server_2
disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7   server=server_2
Where:
Value      Definition
id         Automatically assigned ID of a file system or the checkpoint.
name       Name assigned to the file system or the checkpoint.
acl        Access control value for a file system. See nas_acl.
in_use     If a file system is registered into the mount table of a Data
           Mover.
type       Type of file system. See -list for a description of the types.
worm       Whether the File-Level Retention feature is enabled.
volume     Volume on which a file system resides.
pool       Storage pool for the file system.
member_of  Group to which the file system belongs.
rw_servers Servers with read-write access to a file system.
ro_servers Servers with read-only access to a file system.
rw_vdms    VDM servers with read-write access to a file system.
ro_vdms    VDM servers with read-only access to a file system.
ckpts      Associated checkpoints for the file system.
checkpt_of Name of the PFS related to the existing checkpoints.
used       Percentage of SavVol space used by the checkpoints of the PFS.
full(mark) SavVol usage point which, when reached, sends a warning
           message to the system log, and auto-extends the SavVol as
           system space permits.
stor_devs  Storage system devices associated with a file system.
disks      Disks on which the metavolume resides.
EXAMPLE #4
----------
To create a checkpoint of ufs1 named ufs1_ckpt2 with a size of 2 GB
using the clar_r5_performance pool, with the specified storage system, with
the %full set to 95, type:
$ fs_ckpt ufs1 -name ufs1_ckpt2 -Create size=2G pool=clar_r5_performance
storage=APM00043807043 -option %full=95
operation in progress (not interruptible)...id = 27
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
ckpts     = ufs1_ckpt1,ufs1_ckpt2
stor_devs = APM00043807043-0010,APM00043807043-0014
disks     = d7,d9
 disk=d7 stor_dev=APM00043807043-0010 addr=c0t1l0 server=server_2
 disk=d7 stor_dev=APM00043807043-0010 addr=c16t1l0 server=server_2
 disk=d9 stor_dev=APM00043807043-0014 addr=c0t1l4 server=server_2
 disk=d9 stor_dev=APM00043807043-0014 addr=c16t1l4 server=server_2
id        = 30
name      = ufs1_ckpt2
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp145
pool      =
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Nov 10 14:00:20 EST 2004
used      = 0%
full(mark)= 95%
stor_devs = APM00043807043-0011,APM00043807043-0017
disks     = d12,d15
 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2
 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2
EXAMPLE #3 provides a description of command output.
EXAMPLE #5
----------
To create a checkpoint of ufs2 named ufs2_ckpt1 with a size of 2 GB by using
the clar_mapped_pool VNX mapped pool, with the specified system, with the
%full set to 95, type:
$ fs_ckpt ufs2 -name ufs2_ckpt1 -Create size=2G pool=clar_mapped_pool
storage=APM00043807043 -option %full=95
operation in progress (not interruptible)...id = 435
name = ufs2
acl = 0
in_use = True
type = uxfs
worm = off
volume = v731
pool = clar_mapped_pool
member_of = root_avm_fs_group_50
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
thin_storage = False
tiering_policy = N/A/Optimize Pool
compressed= False
mirrored = False
ckpts = ufs2_ckpt1
stor_devs =
FNM00103400314-0036,FNM00103400314-0037,FNM00103400314-0038,FNM00103400314-0039
disks = d60,d61,d62,d63
disk=d60 stor_dev=FNM00103400314-0036 addr=c0t1l0 server=server_2
disk=d60 stor_dev=FNM00103400314-0036 addr=c16t1l0 server=server_2
disk=d61 stor_dev=FNM00103400314-0037 addr=c0t1l1 server=server_2
disk=d61 stor_dev=FNM00103400314-0037 addr=c16t1l1 server=server_2
disk=d62 stor_dev=FNM00103400314-0038 addr=c0t1l2 server=server_2
disk=d62 stor_dev=FNM00103400314-0038 addr=c16t1l2 server=server_2
disk=d63 stor_dev=FNM00103400314-0039 addr=c0t1l3 server=server_2
disk=d63 stor_dev=FNM00103400314-0039 addr=c16t1l3 server=server_2
id = 438
name = ufs2_ckpt1
acl = 0
in_use = True
type = ckpt
worm = off
volume = vp735
pool = clar_mapped_pool
member_of =
rw_servers=
ro_servers= server_2
rw_vdms =
ro_vdms =
checkpt_of= ufs2 Fri Jan 4 01:43:20 EST 2013
deduplication = Off
thin_storage = False
tiering_policy = N/A/Optimize Pool
compressed= False
mirrored = False
used = 13%
full(mark)= 95%
stor_devs =
FNM00103400314-0036,FNM00103400314-0037,FNM00103400314-0038,FNM00103400314-0039
disks = d60,d61,d62,d63
disk=d60 stor_dev=FNM00103400314-0036 addr=c0t1l0 server=server_2
disk=d60 stor_dev=FNM00103400314-0036 addr=c16t1l0 server=server_2
disk=d61 stor_dev=FNM00103400314-0037 addr=c0t1l1 server=server_2
disk=d61 stor_dev=FNM00103400314-0037 addr=c16t1l1 server=server_2
disk=d62 stor_dev=FNM00103400314-0038 addr=c0t1l2 server=server_2
disk=d62 stor_dev=FNM00103400314-0038 addr=c16t1l2 server=server_2
disk=d63 stor_dev=FNM00103400314-0039 addr=c0t1l3 server=server_2
disk=d63 stor_dev=FNM00103400314-0039 addr=c16t1l3 server=server_2
Where:
Value          Definition
thin_storage   Indicates whether the VNX for block system uses thin
               provisioning. Values are: True, False, Mixed.
tiering_policy Indicates the tiering policy in effect. If the initial tier
               and the tiering policy are the same, the values are:
               Auto-Tier, Highest Available Tier, Lowest Available Tier. If
               the initial tier and the tiering policy are not the same, the
               values are: Auto-Tier/No Data Movement, Highest Available
               Tier/No Data Movement, Lowest Available Tier/No Data Movement.
compressed     Indicates whether data is compressed. Values are: True,
               False, Mixed (indicates some of the LUNs, but not all, are
               compressed).
mirrored       Indicates whether the disk is mirrored.
EXAMPLE #6
----------
To create a writeable checkpoint of baseline checkpoint ufs1_ckpt1, type:
$ fs_ckpt ufs1_ckpt1 -Create -readonly n
operation in progress (not interruptible)...id = 45
name      = ufs1_ckpt1
acl       = 0
in_use    = False
type      = ckpt
worm      = off
volume    = vp145
pool      = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Tue Nov 6 14:56:43 EST 2007
ckpts     = ufs1_ckpt1_writeable1
used      = 38%
full(mark)= 90%
stor_devs =
APM00042000814-0029,APM00042000814-0024,APM00042000814-0021,APM00042000814-001C
disks     = d34,d17,d30,d13
id        = 46
name      = ufs1_ckpt1_writeable1
acl       = 0
in_use    = True
type      = wckpt
worm      = off
volume    = vp145
pool      = clar_r5_performance
member_of =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1
baseline_ckpt = ufs1_ckpt1 Tue Nov 6 14:56:43 EST 2007
used      = 38%
full(mark)= 90%
stor_devs =
APM00042000814-0029,APM00042000814-0024,APM00042000814-0021,APM00042000814-001C
disks     = d34,d17,d30,d13
 disk=d34 stor_dev=APM00042000814-0029 addr=c16t2l9 server=server_2
 disk=d34 stor_dev=APM00042000814-0029 addr=c32t2l9 server=server_2
 disk=d34 stor_dev=APM00042000814-0029 addr=c0t2l9 server=server_2
 disk=d34 stor_dev=APM00042000814-0029 addr=c48t2l9 server=server_2
 disk=d17 stor_dev=APM00042000814-0024 addr=c0t2l4 server=server_2
 disk=d17 stor_dev=APM00042000814-0024 addr=c48t2l4 server=server_2
 disk=d17 stor_dev=APM00042000814-0024 addr=c16t2l4 server=server_2
 disk=d17 stor_dev=APM00042000814-0024 addr=c32t2l4 server=server_2
 disk=d30 stor_dev=APM00042000814-0021 addr=c16t2l1 server=server_2
 disk=d30 stor_dev=APM00042000814-0021 addr=c32t2l1 server=server_2
 disk=d30 stor_dev=APM00042000814-0021 addr=c0t2l1 server=server_2
 disk=d30 stor_dev=APM00042000814-0021 addr=c48t2l1 server=server_2
 disk=d13 stor_dev=APM00042000814-001C addr=c0t1l12 server=server_2
 disk=d13 stor_dev=APM00042000814-001C addr=c48t1l12 server=server_2
 disk=d13 stor_dev=APM00042000814-001C addr=c16t1l12 server=server_2
 disk=d13 stor_dev=APM00042000814-001C addr=c32t1l12 server=server_2
Where:
Value         Definition
baseline_ckpt Name of the read-only checkpoint from which the writeable
              checkpoint is created.
EXAMPLE #3 provides a description of command output.
EXAMPLE #7
----------
To list checkpoints for ufs1, type:
$ fs_ckpt ufs1 -list
id ckpt_name  creation_time           inuse full(mark) used
29 ufs1_ckpt1 11/04/2004-14:54:06-EST n     95%        0%
30 ufs1_ckpt2 11/10/2004-14:00:20-EST y     95%        0%
Where:
Value         Definition
id            Automatically assigned ID of a file system or checkpoint.
ckpt_name     Name assigned to the checkpoint.
creation_time Date and time the checkpoint was created.
inuse         If a checkpoint is registered into the mount table of a
              Data Mover.
full(mark)    SavVol-usage point which, when reached, sends a warning
              message to the system log, and auto-extends the SavVol as
              system space permits.
used          Percentage of SavVol space used by checkpoints of the PFS.
EXAMPLE #8
----------
To refresh ufs1_ckpt2 using the %full at 85, type:
$ fs_ckpt ufs1_ckpt2 -refresh -option %full=85
operation in progress (not interruptible)...id = 30
name      = ufs1_ckpt2
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp145
pool      =
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Nov 10 14:02:59 EST 2004
used      = 0%
full(mark)= 85%
stor_devs = APM00043807043-0011,APM00043807043-0017
disks     = d12,d15
 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2
 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2
EXAMPLE #3 provides a description of command output.
EXAMPLE #9
----------
Using the root command, to restore ufs1_ckpt2 and capture the latest
point-in-time image of the PFS on ufs1_ckpt3, type:
$ /nas/sbin/rootfs_ckpt ufs1_ckpt2 -name ufs1_ckpt3 -Restore
operation in progress (not interruptible)...id = 30
name      = ufs1_ckpt2
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp145
pool      =
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Nov 10 14:02:59 EST 2004
used      = 0%
full(mark)= 90%
stor_devs = APM00043807043-0011,APM00043807043-0017
disks     = d12,d15
 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2
 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2
EXAMPLE #3 provides a description of command output.
EXAMPLE #10
-----------
To modify the %full value of the SavVol associated with the file system ufs1
and set it to 95, type:
$ fs_ckpt ufs1 -modify %full=95
operation in progress (not interruptible)...id = 33
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = vp145
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
ckpts     = wipckpt
stor_devs = APM00062400708-0014,APM00062400708-0016
disks     = d26,d27
 disk=d26 stor_dev=APM00062400708-0014 addr=c0t1l4 server=server_2
 disk=d26 stor_dev=APM00062400708-0014 addr=c16t1l4 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c0t1l6 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c16t1l6 server=server_2
EXAMPLE #11
-----------
To modify the maxsavsize value of the SavVol associated with the file system
ufs1 and set it to 65 GB, type:
$ fs_ckpt ufs1 -modify maxsavsize=65G
operation in progress (not interruptible)...id = 33
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = vp145
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
ckpts     = wipckpt
stor_devs = APM00062400708-0014,APM00062400708-0016
disks     = d26,d27
 disk=d26 stor_dev=APM00062400708-0014 addr=c0t1l4 server=server_2
 disk=d26 stor_dev=APM00062400708-0014 addr=c16t1l4 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c0t1l6 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c16t1l6 server=server_2
DIAGNOSTICS
fs_ckpt returns one of the following return codes:
0 - Command completed successfully
1 - Usage error
2 - Invalid object error
3 - Unable to acquire lock
4 - Permission error
5 - Communication error
6 - Transaction error
7 - Dart error
8 - Backend error
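Scripts that wrap fs_ckpt can branch on these return codes. A minimal sketch
(hypothetical helper; the code-to-message table simply mirrors the DIAGNOSTICS
list above):

```python
# Map of fs_ckpt return codes to messages, as listed under DIAGNOSTICS.
FS_CKPT_CODES = {
    0: "Command completed successfully",
    1: "Usage error",
    2: "Invalid object error",
    3: "Unable to acquire lock",
    4: "Permission error",
    5: "Communication error",
    6: "Transaction error",
    7: "Dart error",
    8: "Backend error",
}

def describe_exit(code):
    """Translate an fs_ckpt exit status into its documented meaning."""
    return FS_CKPT_CODES.get(code, "Unknown return code")

print(describe_exit(0))  # Command completed successfully
print(describe_exit(3))  # Unable to acquire lock
```

A wrapper script would obtain the code from the shell's exit status ($?)
after running fs_ckpt.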
-------------------------------------Last Modified: Jan 11, 2013 3:47 pm
fs_dedupe
Manages filesystem deduplication state.
SYNOPSIS
--------
fs_dedupe {
-list
| -info {-all|<fs_name>|id=<fs_id>}
| -modify {<fs_name>|id=<fs_id>} [-state
{off|suspended|on}][-minimum_scan_interval <days>][-minimum_size <KB>]
[-maximum_size <MB>][-access_time <days>][-modification_time <days>]
[-case_sensitive {yes|no}][-pathname_exclude_list <path_list>]
[-file_ext_exclude_list <ext_list>][-duplicate_detection_method
{sha1|byte|off}][-savvol_threshold <percent>][-backup_data_threshold
<percent>][-cifs_compression_enabled {yes|no}][-compression_method {fast|deep}]
| -clear {<fs_name>|id=<fs_id>}[-minimum_scan_interval][-minimum_size]
[-maximum_size][-access_time][-modification_time][-case_sensitive]
[-pathname_exclude_list][-file_ext_exclude_list]
[-duplicate_detection_method][-savvol_threshold]
[-backup_data_threshold][-cifs_compression_enabled][-compression_method]
| -default {
-info {<mover_name>|-all}
|
-set {<mover_name>|-all}[-minimum_scan_interval <days>]
[-minimum_size<KB>][-maximum_size <MB>][-access_time
<days>][-modification_time <days>][-case_sensitive
{yes|no}][-file_ext_exclude_list <ext_list>] [-duplicate_detection_method
{sha1|byte|off}][-savvol_threshold <percent>][-cpu_usage_low_watermark
<percent> ] [-cpu_usage_high_watermark <percent>][-backup_data_threshold
<percent>] [-cifs_compression_enabled {yes|no}]
| -clear {<mover_name>|-all}
[-minimum_scan_interval][-minimum_size][-maximum_size][-access_time]
[-modification_time][-case_sensitive][-file_ext_exclude_list]
[-duplicate_detection_method][-savvol_threshold]
[-cpu_usage_low_watermark][-cpu_usage_high_watermark]
[-backup_data_threshold][-cifs_compression_enabled]
}
}
DESCRIPTION
-----------
fs_dedupe allows the VNX administrator to enable, suspend, and undo all
deduplication processing on a filesystem or a Data Mover. The Data Mover
settings are the global settings that can be used for both the Data Mover and
the filesystem. If a user sets a value for a specific filesystem, then that
value overrides the Data Mover global value. If a user clears a value set for
a specific filesystem, then that value is reset to the Data Mover global
value.
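The precedence rule above (a per-filesystem value overrides the Data Mover
global value, and clearing it falls back to the global value) can be pictured
as a layered lookup. A minimal sketch with made-up setting names; this is not
eNAS code:

```python
# Hypothetical sketch (not eNAS code) of the precedence rule: a value set
# on a specific filesystem overrides the Data Mover global value, and
# clearing the filesystem value falls back to the global one.
mover_defaults = {"minimum_scan_interval": 7, "minimum_size": 24}
fs_overrides = {"fs1": {"minimum_size": 100}}   # fs1 overrides minimum_size

def effective_setting(fs_name, key):
    """Return the filesystem override if set, else the Data Mover default."""
    return fs_overrides.get(fs_name, {}).get(key, mover_defaults[key])

def clear_setting(fs_name, key):
    """Model of -clear: drop the override so the global value applies again."""
    fs_overrides.get(fs_name, {}).pop(key, None)

print(effective_setting("fs1", "minimum_size"))   # 100: override wins
clear_setting("fs1", "minimum_size")
print(effective_setting("fs1", "minimum_size"))   # 24: back to the global
```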
OPTIONS
-------
-list
Lists all deduplication-enabled filesystems on the VNX.
-info {-all|<fs_name>|id=<fs_id>}
Lists the existing filesystems and provides information on the state of
deduplication processing.
-all
Lists all filesystems and provides detailed information on the state of
deduplication processing.
<fs_name>
Lists the filesystem information for the specified filesystem name.
id=<fs_id>
Lists the filesystem information for the specified identifier.
The filesystem state and status information displayed includes:
If the state is off and the status is not reduplicating:
- ID
- Name
- Deduplication state
If the state is off and the status is reduplicating:
- ID
- Name
- Deduplication state
- Progress information (the percentage of files scanned)
If the state of the filesystem is on or suspended, and the status is Idle or
Scanning:
- ID
- Name
- Deduplication state
- Status
- The percentage of files scanned
- Last system scan time
- Number of files scanned
- Number of files deduplicated
- The percentage of files deduplicated
- File system capacity
- Logical data size
- Percentage of filesystem usage
- Space saved (in MB and percent)
-modify {<fs_name>|id=<fs_id>} [-state {off|suspended|on}]
Modifies the deduplication state of the filesystem for each specified
filesystem identifier or filesystem name. The state can be set to off, on,
or suspended.
[-minimum_scan_interval <days>]
Defines the minimum number of days between completing one scan of a
filesystem and before scanning the same filesystem again. The values range
from 1 to 365 and the default value is 7 days.
[-minimum_size <KB>]
Defines the file size in KB that limits deduplication. File sizes equal to
this value or smaller will not be deduplicated. Setting this value to zero
disables it. This value should not be set lower than 24 KB. The values range
from 0 to 1000 and the default value is 24 KB.
[-maximum_size <MB>]
Defines the file size in MB of the largest file to be processed for
deduplication. Files larger than this size in MB will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 8388608
and the default value is 8388608 MB.
[-access_time <days>]
Defines the minimum required file age in days based on read access time.
Files that have been read within the specified number of days will not be
deduplicated. This setting does not apply to files with an FLR locked state.
Setting this value to zero disables it. The values range from 0 to 365 and
the default value is 15 days.
[-modification_time <days>]
Defines the minimum required file age in days based on modification time.
Files updated within the specified number of days will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 365 and
the default value is 15 days.
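Taken together, the -minimum_size, -maximum_size, -access_time, and
-modification_time filters decide whether a file is considered for
deduplication at all. A minimal sketch of that selection logic using the
documented defaults (hypothetical helper name, not the Data Mover
implementation):

```python
# Hypothetical sketch of the file-selection filters described above.
# Sizes follow the option units (minimum_size in KB, maximum_size in MB);
# ages are in days. A zero value disables that filter.
def is_dedupe_candidate(size_kb, days_since_read, days_since_modified,
                        minimum_size=24, maximum_size=8388608,
                        access_time=15, modification_time=15):
    if minimum_size and size_kb <= minimum_size:        # too small
        return False
    if maximum_size and size_kb > maximum_size * 1024:  # too large (MB -> KB)
        return False
    if access_time and days_since_read < access_time:   # read too recently
        return False
    if modification_time and days_since_modified < modification_time:
        return False                                    # changed too recently
    return True

print(is_dedupe_candidate(size_kb=500, days_since_read=30,
                          days_since_modified=30))   # True
print(is_dedupe_candidate(size_kb=10, days_since_read=30,
                          days_since_modified=30))   # False: at/below 24 KB
```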
[-case_sensitive {yes|no}]
Defines whether case-sensitive (for NFS environments) or case-insensitive
(for CIFS environments) string comparisons will be used during scans. By
default, case-insensitive comparisons will be done to be consistent for CIFS
environments. The default value is zero (false).
[-pathname_exclude_list <path_list>]
This is a filesystem setting only (no global setting). It is empty by
default.
Defines a semicolon-delimited list of relative pathnames, in UTF-8 format, to
be excluded from deduplication. Any directory below a specified pathname will
be excluded from deduplication. You can specify a maximum of 10 pathnames and
each one can be up to 1024 bytes. The default value is ’ ’ (empty).
[-file_ext_exclude_list <ext_list>]
Specifies a colon-delimited list of filename extensions to be excluded from
deduplication. Each extension must include the leading dot. The default value
is ’ ’ (empty).
[-duplicate_detection_method {sha1|byte|off}]
0 (off) - This means that duplicate data detection is disabled. With
this setting, every deduplicated file is considered unique
and the only space savings made are accomplished with
compression.
1 (sha1) - The SHA-1 hash is used to detect duplicate data. It is faster
           than a byte comparison. This is the default method.
2 (byte) - This will use a byte-by-byte comparison to detect duplicate
data. This adds considerable overhead especially for large
files.
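The trade-off between the sha1 and byte methods can be illustrated in a few
lines. This is only an analogy (the Data Mover's detection is internal),
assuming two candidate files already read into memory:

```python
import hashlib

# Illustrative contrast between the two detection methods: comparing
# fixed-size SHA-1 digests versus comparing every byte of both files.
# With hashes, each file is digested once and candidates compare cheaply;
# a byte-by-byte scan re-reads full contents for every comparison.
def duplicates_sha1(a: bytes, b: bytes) -> bool:
    return hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

def duplicates_byte(a: bytes, b: bytes) -> bool:
    return a == b   # conceptually a byte-by-byte scan of both files

data1 = b"block" * 1000
data2 = b"block" * 1000
print(duplicates_sha1(data1, data2))  # True
print(duplicates_byte(data1, data2))  # True
```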
[-savvol_threshold <percent>]
Represents the percentage of the configured save volume (SavVol) auto
extension threshold that can be used during deduplication. When the specified
amount of SavVol is used, deduplication stops on this filesystem. By default,
this value is 90 percent and the SavVol auto extension is also 90 percent;
this option will apply when the SavVol is 81 percent full (90 * 90). Setting
this value to zero disables it. The values range from 0 to 100.
Warning: If you set the SavVol threshold option to 0 to disable it, be aware
that the SavVol may grow up to the size of the compressed version of the
data, consuming disk space that cannot be reclaimed unless you delete all
checkpoints.
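The 81 percent figure above is just the product of the two thresholds:

```python
# The dedup SavVol threshold is applied relative to the SavVol
# auto-extension threshold: 90% of 90% means deduplication stops
# when the SavVol is 81% full.
savvol_auto_extension = 90   # percent (system default)
savvol_threshold = 90        # percent (-savvol_threshold default)
effective_full_percent = savvol_auto_extension * savvol_threshold / 100
print(effective_full_percent)  # 81.0
```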
[-backup_data_threshold <percent>]
Indicates the full percentage that a deduplicated file has to be below in
order to trigger space-reduced backups for NDMP. For example, when set to 90,
any deduplicated file whose physical size (compressed file plus changed
blocks) is greater than 90 percent of the logical size of the file will have
the entire file data backed up without attempting to back it up in a
space-reduced format. Setting this value to zero disables it. The values
range from 0 to 200 and the default value is 90 percent.
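The decision described above can be sketched as a predicate (hypothetical
helper name; "physical size" is the compressed file plus changed blocks):

```python
# Hypothetical sketch of the backup_data_threshold rule: a deduplicated
# file is backed up in space-reduced form only while its physical size
# stays at or below the threshold percentage of its logical size.
def use_space_reduced_backup(physical_bytes, logical_bytes,
                             backup_data_threshold=90):
    if backup_data_threshold == 0:
        # 0 disables the threshold check (sketched here as: always
        # fall back to backing up the entire file data).
        return False
    return physical_bytes <= logical_bytes * backup_data_threshold / 100

print(use_space_reduced_backup(physical_bytes=40, logical_bytes=100))  # True
print(use_space_reduced_backup(physical_bytes=95, logical_bytes=100))  # False
```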
[-cifs_compression_enabled {yes|no}]
This option controls whether CIFS compression is allowed. The default is
yes, enable CIFS compression. When set to yes and the deduplication state of
the filesystem is either on or suspended, then CIFS compression is enabled.
If the deduplication state is either off or in the process of being turned
off, then CIFS compression is not allowed, regardless of whether this option
is set to yes.
[-compression_method {fast|deep}]
Indicates whether the compression algorithm is set to fast (default setting)
or deep. This option is valid for VNX systems that use version 7.1 and later.
You can set this value for filesystems only. You cannot set it as a Data
Mover global value.
The fast option is the default compression algorithm that achieves the
original compression ratios and performance.
The deep option is the compression algorithm that achieves space savings up to
30% greater than the fast method. For example, if a file is 50% compressible,
then the deep algorithm can compress the same file up to 65%. However, the
compression and decompression time when using this deep option is longer than
when using the fast option. You obtain more storage space at the cost of
slower access. Selecting this deep compression method applies only to new
files that are subsequently compressed, and not to existing compressed files.
When using VNX Replicator, VNX systems that use version 7.0 and earlier
cannot read the deep compression format and will return an I/O error if a read
operation is attempted. Select the deep compression format only if downstream
replication sessions are using compatible software or are scheduled to be
upgraded soon.
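The "up to 30% greater" figure works out as follows for the 50% example in
the text:

```python
# If the fast method makes a file 50% compressible, the deep method can
# save up to 30% more than that: 50% * 1.3 = 65% space savings.
fast_savings_percent = 50
deep_savings_percent = fast_savings_percent * 1.3
print(deep_savings_percent)  # 65.0
```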
-clear {<fs_name>|id=<fs_id>}
Sets the filesystem setting back to the Data Mover setting, which is the
default setting.
[-minimum_scan_interval]
Defines the minimum number of days between completing one scan of a
filesystem and before scanning the same filesystem again. The values range
from 1 to 365 and the default value is 7 days.
[-minimum_size]
Defines the file size in KB that limits deduplication. File sizes equal to
this value or smaller will not be deduplicated. File sizes greater than this
value will be candidates for deduplication. Setting this value to zero
disables it. This value should not be set lower than 24 KB. The values range
from 0 to 1000 and the default value is 24 KB.
[-maximum_size]
Defines the file size in MB of the largest file to be processed for
deduplication. Files larger than this size in MB will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 8388608
and the default value is 8388608 MB.
[-access_time]
Defines the minimum required file age in days based on read access time. Files
that have been read within the specified number of days will not be
deduplicated. This setting does not apply to files with an FLR locked state.
Setting this value to zero disables it. The values range from 0 to 365 and
the default value is 15 days.
[-modification_time]
Defines the minimum required file age in days based on modification time.
Files updated within the specified number of days will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 365 and
the default value is 15 days.
[-case_sensitive]
Defines whether case-sensitive (for NFS environments) or case-insensitive
(for CIFS environments) string comparisons will be used during scans. By
default, case-insensitive comparisons will be done to be consistent for CIFS
environments. The default value is zero (false).
[-pathname_exclude_list]
This is a filesystem setting only (no global setting).
Specifies a semicolon-delimited list of relative path names, in UTF-8 format,
to be excluded from deduplication. Any directory below a specified path name
will be excluded from deduplication. You can specify a maximum of 10 path
names and each one can be up to 1024 bytes. The default value is ’ ’ (empty).
[-file_ext_exclude_list]
Specifies a colon-delimited list of filename extensions to be excluded from
deduplication. Each extension must include the leading dot. The default value
is ’ ’ (empty).
[-duplicate_detection_method {sha1|byte|off}]
0 (off) - This means that duplicate data detection is disabled. With
this setting, every deduplicated file is considered unique
and the only space savings made are accomplished with
compression.
1 (sha1) - The SHA-1 hash is used to detect duplicate data. It is faster
           than a byte comparison. This is the default method.
2 (byte) - This will use a byte-by-byte comparison to detect duplicate
data. This adds considerable overhead especially for large
files.
[-savvol_threshold]
Represents the percentage of the configured save volume (SavVol) auto
extension threshold that can be used during deduplication. After the
specified amount of SavVol is used, deduplication stops on this filesystem.
By default,
this value is 90 percent and the SavVol auto extension is also 90 percent;
this option will apply when the SavVol is 81 percent full (90 * 90). Setting
this value to zero disables it. The values range from 0 to 100.
[-backup_data_threshold]
Indicates the full percentage that a deduplicated file has to be below in
order to trigger space-reduced backups for NDMP. For example, when set to 90,
any deduplicated file whose physical size (compressed file plus changed
blocks) is greater than 90 percent of the logical size of the file will have
the entire file data backed up without attempting to back it up in a
space-reduced format. Setting this value to zero disables it. The values
range from 0 to 200 and the default value is 90 percent.
[-cifs_compression_enabled]
This option controls whether CIFS compression is allowed. The default is yes,
enable CIFS compression. When set to yes and the deduplication state of the
filesystem is either on or suspended, then CIFS compression is allowed. If
the deduplication state is either off or in the process of being turned off,
then CIFS compression is not allowed, regardless of whether this option is
set to yes.
[-compression_method]
This is a filesystem setting only (no global setting). Identifies the
compression algorithm: fast (default) or deep.
| -default {-info {<mover_name>|-all}|-set {<mover_name>|-all}
Manages the Data Mover settings. The -set option determines the Data Mover
settings.
[-minimum_scan_interval <days>]
Defines the minimum number of days between completing one scan of a file
system and before scanning the same filesystem again. The values range from 1
to 365 and the default value is 7 days.
[-minimum_size <KB>]
Defines the file size in KB that limits deduplication. File sizes equal to
this value or smaller will not be deduplicated. File sizes greater than this
value will be candidates for deduplication. Setting this value to zero
disables it. This value should not be set lower than 24 KB. The values range
from 0 to 1000 and the default value is 24 KB.
[-maximum_size <MB>]
Defines the file size in MB of the largest file to be processed for
deduplication. Files larger than this size in MB will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 8388608
and the default value is 8388608 MB.
[-access_time <days>]
Defines the minimum required file age in days based on read access time.
Files that have been read within the specified number of days will not be
deduplicated. This setting does not apply to files with an FLR locked state.
Setting this value to zero disables it. The values range from 0 to 365 and
the default value is 15 days.
[-modification_time <days>]
The minimum required file age in days based on modification time. Files
updated within the specified number of days will not be deduplicated. Setting
this value to zero disables it. The values range from 0 to 365 and the
default value is 15 days.
[-case_sensitive {yes|no}]
Defines whether case-sensitive (for NFS environments) or case-insensitive
(for CIFS environments) string comparisons will be used during scans. By
default, case-insensitive comparisons will be done to be consistent for CIFS
environments. The default value is zero (false).
[-file_ext_exclude_list <ext_list>]
Specifies a colon-delimited list of filename extensions to be excluded from
deduplication. Each extension must include the leading dot.
The default value is ’ ’ (empty).
[-duplicate_detection_method {sha1|byte|off}]
0 (off) - This means that duplicate data detection is disabled. With
this setting, every deduplicated file is considered unique
and the only space savings made are accomplished with
compression.
1 (sha1) - The SHA-1 hash is used to detect duplicate data. It is faster
           than a byte comparison. This is the default method.
2 (byte) - This will use a byte-by-byte comparison to detect duplicate
data. This adds considerable overhead especially for large
files.
[-savvol_threshold <percent>]
Represents the percentage of the configured save volume (SavVol) auto
extension threshold that can be used during deduplication. Once the specified
amount of SavVol is used, deduplication stops on this filesystem. By default,
this value is 90 percent and the SavVol auto-extension is also 90 percent;
this option will apply when the SavVol is 81 percent full (90 * 90). Setting
this value to zero disables it. The values range from 0 to 100.
Warning: If you set the SavVol threshold option to 0 to disable it, be aware
that the SavVol may grow up to the size of the compressed version of the
data, consuming disk space that cannot be reclaimed unless you delete all
checkpoints.
[-cpu_usage_low_watermark <percent>]
Defines the average percent of CPU usage that can be used during the
deduplication process at which full throttle mode is re-entered. The values
range from 0 to 100 and the default value is 40 percent. This is a global
setting only.
[-cpu_usage_high_watermark <percent>]
Defines the average percent of CPU usage that can be used during the
deduplication process which should trigger a slow throttle mode. The system
starts in full throttle mode. The values range from 0 to 100 and the default
value is 75 percent. This is a global setting only.
[-backup_data_threshold <percent>]
Defines the full percentage that a deduplicated file has to be below in order
to trigger space-reduced backups for NDMP. For example, when set to 90, any
deduplicated file whose physical size (compressed file plus changed blocks)
is greater than 90 percent of the logical size of the file will have the
entire file data backed up without attempting to back it up in a
space-reduced format. Setting this value to zero disables it. The values range
from 0 to 200 and the default value is 90 percent.
[-cifs_compression_enabled {yes|no}]
This option controls whether CIFS compression is allowed. The default is yes,
enable CIFS compression. When set to yes and the deduplication state of the
filesystem is either on or suspended, then CIFS compression is allowed. If
the deduplication state is either off or in the process of being turned off,
then CIFS compression is not allowed, regardless of whether this option is
set to yes.
| -clear {<mover_name>|-all}
The -clear option sets the global setting back to the default value.
[-minimum_scan_interval]
Defines the minimum number of days between completing one scan of a file
system and before scanning the same file system again. The values range
from 1 to 365 and the default value is 7 days.
[-minimum_size]
Defines the file size in KB that limits deduplication. File sizes equal to
this value or smaller will not be deduplicated. File sizes greater than this
value will be candidates for deduplication. Setting this value to zero
disables it. This value should not be set lower than 24 KB. The values range
from 0 to 1000 and the default value is 24 KB.
[-maximum_size]
Defines the file size in MB of the largest file to be processed for
deduplication. Files larger than this size in MB will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 8388608
and the default value is 8388608 MB.
[-access_time]
Defines the minimum required file age in days based on read access time. Files
that have been read within the specified number of days will not be
deduplicated. This setting does not apply to files with an FLR locked state.
Setting this value to zero disables it. The values range from 0 to 365 and the
default value is 15 days.
[-modification_time]
Defines the minimum required file age in days based on modification time.
Files updated within the specified number of days will not be deduplicated.
Setting this value to zero disables it. The values range from 0 to 365 and the
default value is 15 days.
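The size and age gates described by the four options above can be sketched
together (Python; illustrative only and not part of the eNAS CLI — the function
and parameter names are assumptions, though the defaults are the documented ones):

```python
def dedupe_candidate(size_kb, days_since_read, days_since_write,
                     minimum_size=24, maximum_size_mb=8388608,
                     access_time=15, modification_time=15):
    # A value of 0 disables the corresponding gate, per the option text.
    if minimum_size and size_kb <= minimum_size:
        return False                 # equal to or below -minimum_size KB
    if maximum_size_mb and size_kb > maximum_size_mb * 1024:
        return False                 # larger than -maximum_size MB
    if access_time and days_since_read < access_time:
        return False                 # read too recently
    if modification_time and days_since_write < modification_time:
        return False                 # modified too recently
    return True
```

Note that a file exactly at the minimum size (24 KB by default) is excluded,
since the text says sizes equal to the value or smaller are not deduplicated.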
[-case_sensitive]
Defines whether case-sensitive (for NFS environments) or case-insensitive (for
CIFS environments) string comparisons will be used during scans. By default,
case-insensitive comparisons will be done to be consistent with CIFS
environments. The default value is zero (false).
[-file_ext_exclude_list]
Specifies a colon-delimited list of filename extensions to be excluded from
deduplication. Each extension must include the leading dot.
The default value is '' (empty).
[-duplicate_detection_method]
0 (off)  - This means that duplicate data detection is disabled. With this
           setting, every deduplicated file is considered unique and the
           only space savings made are accomplished with compression.
1 (sha1) - The SHA-1 hash is used to detect duplicate data. It is faster than
           a byte comparison. This is the default method.
2 (byte) - This will use a byte-by-byte comparison to detect duplicate data.
           This adds considerable overhead especially for large files.
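The trade-off between the two detection methods can be sketched as follows
(Python; illustrative only and not part of the eNAS CLI — the function name
is an assumption):

```python
import hashlib

def is_duplicate(method, data_a, data_b):
    # method mirrors -duplicate_detection_method: 0 (off), 1 (sha1), 2 (byte).
    if method == 1:
        # sha1: hash each block once and compare short digests; avoids
        # re-reading full file contents on every comparison.
        return hashlib.sha1(data_a).digest() == hashlib.sha1(data_b).digest()
    if method == 2:
        # byte: exhaustive byte-by-byte comparison; considerable overhead
        # for large files, but immune to hash collisions.
        return data_a == data_b
    # 0 (off): every deduplicated file is treated as unique.
    return False
```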
[-savvol_threshold]
Represents the percentage of the configured save volume (SavVol) auto
extension threshold that can be used during deduplication. After the specified
amount of SavVol is used, deduplication stops on this filesystem. By default,
this value is 90 percent and the SavVol auto extension is also 90 percent;
this option will apply when the SavVol is 81 percent full (90% of 90%). Setting
this value to zero disables it. The values range from 0 to 100.
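The arithmetic in the example above (90 percent of 90 percent giving 81
percent) can be written out as a small sketch (Python; illustrative only,
function name assumed):

```python
def effective_savvol_stop_pct(savvol_threshold=90, auto_extension_pct=90):
    # savvol_threshold mirrors -savvol_threshold; 0 disables the check.
    if savvol_threshold == 0:
        return None
    # Deduplication stops once the SavVol reaches this fraction of its
    # total capacity: the threshold applied to the auto-extension level.
    return savvol_threshold * auto_extension_pct / 100
```

With both values at their defaults of 90, deduplication stops at 81 percent
SavVol fullness.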
[-cpu_usage_low_watermark]
Specifies the average percent of CPU usage that can be used during the
deduplication process at which full throttle mode is re-entered. The values
range from 0 to 100 and the default value is 25 percent. This is a global
setting only.
[-cpu_usage_high_watermark]
Specifies the average percent of CPU usage that can be used during the
deduplication process which should trigger a slow throttle mode. The system
starts in full throttle mode. The values range from 0 to 100 and the default
value is 75 percent. This is a global setting only.
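The two watermarks above form a simple hysteresis, which can be sketched as
follows (Python; illustrative only and not part of the eNAS CLI — the mode
names and transition rule are assumptions based on the option text):

```python
def next_throttle_mode(current_mode, avg_cpu_pct, low=25, high=75):
    # The system starts in "full" throttle. Crossing the high watermark
    # triggers "slow" throttle; falling back to the low watermark
    # re-enters "full" throttle. Between the watermarks, no change.
    if current_mode == "full" and avg_cpu_pct >= high:
        return "slow"
    if current_mode == "slow" and avg_cpu_pct <= low:
        return "full"
    return current_mode
```

The gap between the watermarks (25 to 75 percent by default) prevents the
process from oscillating between modes when CPU usage hovers near a single
threshold.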
[-backup_data_threshold <percent>]
Specifies the full percentage that a deduplicated file has to be below in
order to trigger space-reduced backups for NDMP. For example, when set to 90,
any deduplicated file whose physical size (compressed file plus changed blocks)
is greater than 90 percent of the logical size of the file will have the entire
file data backed up without attempting to back it up in a space-reduced format.
Setting this value to zero disables it. The values range from 0 to 200 and the
default value is 90 percent.
[-cifs_compression_enabled]
This option controls whether CIFS compression is allowed. The default is yes,
enable CIFS compression. When set to yes and the deduplication state of
the filesystem is either on or suspended, then CIFS compression is allowed.
If the deduplication state is either off or in the process of being turned
off, then CIFS compression is not allowed, regardless of whether this option
is set to yes.
SEE ALSO: nas_fs
--------
EXAMPLE #1
----------
To list the filesystems and their deduplication states, type:
$ fs_dedupe -list
id   name              state      status  time_of_last_scan             original_data_size  usage  space_saved
141  ranap1replica     Suspended          Wed Nov 12 09:04:45 EST 2008  5 MB                0%     0 MB (0%)
104  ds850gb_replica1  On         Idle    Fri Nov 21 10:31:15 EST 2008  875459 MB           84%    341590 MB (39%)
495  cworm             On         Idle    Thu Nov 20 09:14:09 EST 2008  3 MB                0%     0 MB (0%)
33   chrisfs1          On         Idle    Sat Nov 22 10:04:33 EST 2008  1100 MB             18%    424 MB (38%)
Where:
Value               Definition
id                  Filesystem identifier
name                Name of the filesystem
state               Deduplication state of the filesystem. The file
                    data is transferred to the storage which performs
                    the deduplication and compression on the data.
                    The states are:
                    On -- Deduplication on the filesystem is enabled.
                    Suspended -- Deduplication on the filesystem is
                    suspended. Deduplication does not perform any new
                    space reduction but the existing files that were
                    reduced in space remain the same.
                    Off -- Deduplication on the filesystem is
                    disabled. Deduplication does not perform any new
                    space reduction and the data is now reduplicated.
status              Current state of the deduplication enabled file
                    system. The progress statuses are:
                    Idle -- Deduplication process is currently idle.
                    Scanning -- Filesystem is being scanned for
                    deduplication. It displays the percentage of
                    scanned files in the filesystem.
                    Reduplicating -- Filesystem files are being
                    reduplicated from the deduplicated files. It
                    displays the percentage of reduplicated files.
time_of_last_scan   Time when the filesystem was last scanned
original_data_size  Original size of the filesystem before deduplication
usage               Current space usage of the filesystem
space_saved         Filesystem space saved after deduplication
EXAMPLE #2
----------
To list the filesystems and provide detailed reports on the state of the
deduplication processing, type:
$ fs_dedupe -info -all
Id                          = 53
Name                        = svr2fs1
Deduplication               = Off
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =
Compression Method          = fast

Id                          = 2040
Name                        = server_2_fsltest2
Deduplication               = Suspended
As of the last file system scan (Mon Aug 17 11:33:38 EDT 2009):
Files scanned               = 4
Files deduped               = 3 (75% of total files)
File system capacity        = 2016 MB
Original data size          = 6 MB (0% of current file system capacity)
Space saved                 = 0 MB (0% of original data size)
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =
Compression Method          = fast

Id                          = 506
Name                        = demofs
Deduplication               = Off
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =

Id                          = 2113
Name                        = testrdefs
Deduplication               = Suspended
As of the last file system scan (Thu Aug 13 14:22:31 EDT 2009):
Files scanned               = 1
Files deduped               = 0 (0% of total files)
File system capacity        = 1008 MB
Original data size          = 0 MB (0% of current file system capacity)
Space saved                 = 0 MB (0% of original data size)
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =
Compression Method          = fast

Id                          = 2093
Name                        = kfs_ckpt1
Deduplication               = Off
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =
Compression Method          = fast

Id                          = 2095
Name                        = ranap-test3
Deduplication               = On
Status                      = Idle
As of the last file system scan (Tue Aug 11 17:37:58 EDT 2009):
Files scanned               = 30
Files deduped               = 2 (7% of total files)
File system capacity        = 5041 MB
Original data size          = 1109 MB (22% of current file system capacity)
Space saved                 = 0 MB (0% of original data size)
File system parameters:
Case Sensitive              = no
Duplicate Detection Method  = sha1
Access Time                 = 15
Modification Time           = 15
Minimum Size                = 24 KB
Maximum Size                = 8388608 MB
File Extension Exclude List =
Minimum Scan Interval       = 7
Savevol Threshold           = 90
Backup Data Threshold       = 90
Cifs Compression Enabled    = yes
Pathname Exclude List       =
Compression Method          = deep
Where:
Value                        Definition
Deduplication                Current deduplication state of the filesystem.
Status                       Progress status of the files being scanned.
Name                         Name of the filesystem.
Id                           Filesystem identifier.
Files scanned                Number of files scanned.
Files deduped                Number of files in the filesystem that have been
                             deduplicated.
Original data size           Proportion of space in use with respect to the
                             file system capacity.
File system capacity         Current space usage of the filesystem.
Space saved                  Proportion of space saved with respect to the
                             original data size.
Case Sensitive               Method of string comparison: case sensitive or
                             case insensitive.
Duplicate Detection Method   Method of duplication detection: 0, sha-1, or
                             byte-by-byte.
Access Time                  Minimum required file age in days based on read
                             access time.
Modification Time            Minimum required file age in days based on
                             modification time.
Minimum Size                 Minimum file size to be processed for
                             deduplication.
Maximum Size                 Maximum file size to be processed for
                             deduplication.
File Extension Exclude List  Lists filename extensions to be excluded from
                             the deduplication.
Minimum Scan Interval        Minimum number of days between completing one
                             scan of a filesystem and before scanning the
                             same file system again.
SavVol Threshold             Percentage of SavVol space that can be used
                             during deduplication.
Backup Data Threshold        Percentage below which a deduplicated file has
                             to be in order to trigger space-reduced NDMP
                             backups.
Cifs Compression Enabled     Controls whether CIFS compression is enabled.
Pathname Exclude List        Lists relative path names to be excluded from
                             the deduplication.
Compression Method           Compression algorithm used: fast or deep.
Note: If reduplication fails, then the state transitions to the suspended state
and a CCMD message will be sent to the server's event log. If reduplication
succeeds, then it remains in the off state.
EXAMPLE #3
----------
To list the filesystems for a given filesystem name, type:
$ fs_dedupe -info server3_fs3
Id                          = 98
Name                        = server3_fs3
Deduplication               = On
Status                      = Idle
As of the last filesystem scan on Tue Sep 23 13:28:01 EDT 2008:
Files deduped               = 30 (100%)
Filesystem capacity         = 413590 MB
Original data size          = 117 MB (0% of current filesystem capacity)
Space saved                 = 106 MB (90% of original data size)
Filesystem parameters:
Case Sensitive              = yes
Duplicate Detection Method  = sha1
Access Time                 = 30
Modification Time           = 30
Minimum Size                = 20
Maximum Size                = 200
File Extension Exclude List = .jpg:.db:.pst
Minimum Scan Interval       = 1
SavVol Threshold            = 90
Backup Data Threshold       = 90
Pathname Exclude List       = root;etc
Compression Method          = fast
EXAMPLE #2 provides a description of command output.
EXAMPLE #4
----------
To list the deduplication properties of a given Data Mover, type:
$ fs_dedupe -default -info server_2
Server parameters:
Case Sensitive              = yes
Duplicate Detection Method  = sha1
Access Time                 = 30
Modification Time           = 30
Minimum Size                = 20
Maximum Size                = 200
File Extension Exclude List = .jpg:.db:.pst
Minimum Scan Interval       = 1
SavVol Threshold            = 90
Backup Data Threshold       = 90
CPU % Usage Low Water Mark  = 25
CPU % Usage High Water Mark = 90
Cifs Compression Enabled    = yes
Where:
Value                        Definition
Deduplication                Current deduplication state of the filesystem.
Status                       Progress status of the files being scanned.
Name                         Name of the filesystem.
Id                           Filesystem identifier.
Files scanned                Number of files scanned.
Files deduped                Number of files in the filesystem that have
                             been deduplicated.
Original data size           Proportion of space in use with respect to the
                             file system capacity.
File system capacity         Current space usage of the filesystem.
Space saved                  Proportion of space saved with respect to the
                             original data size.
Case Sensitive               Method of string comparison: case sensitive or
                             case insensitive.
Duplicate Detection Method   Method of duplication detection: 0, sha-1, or
                             byte-by-byte.
Access Time                  Minimum required file age in days based on
                             read access time.
Modification Time            Minimum required file age in days based on
                             modification time.
Minimum Size                 Minimum file size to be processed for
                             deduplication.
Maximum Size                 Maximum file size to be processed for
                             deduplication.
File Extension Exclude List  Lists filename extensions to be excluded from
                             the deduplication.
Minimum Scan Interval        Minimum number of days between completing one
                             scan of a filesystem and before scanning the
                             same file system again.
SavVol Threshold             Percentage of SavVol space that can be used
                             during deduplication.
Backup Data Threshold        Percentage below which a deduplicated file has
                             to be in order to trigger space-reduced NDMP
                             backup.
CPU % Usage Low Water Mark   Average percentage of CPU usage which should
                             trigger full throttle mode.
CPU % Usage High Water Mark  Average percentage of CPU usage which should
                             trigger slow throttle mode.
EXAMPLE #5
----------
To modify the filesystem, type:
$ fs_dedupe -modify testrdefs -state on
Done
EXAMPLE #6
----------
To modify the filesystem settings to the user specified values, type:
$ fs_dedupe -modify testrdefs -maximum_size 100 -file_extension_exclude_list
.jpg:.db:.pst
Done
EXAMPLE #7
----------
To modify specific Data Mover settings, type:
$ fs_dedupe -default -set server_2 -maximum_size 100 -minimum_size 20
-duplicate_detection_method sha1
Done
EXAMPLE #8
----------
To reset the filesystem settings to the default settings (which are the Data
Mover settings), type:
$ fs_dedupe -clear testrdefs -maximum_size -minimum_size -duplicate_detection_method
Done
EXAMPLE #9
----------
To reset specific Data Mover settings to the default settings, type:
$ fs_dedupe -default -clear server_2 -maximum_size -minimum_size
-duplicate_detection_method
Done
EXAMPLE #10
-----------
To reset all options for a specific Data Mover to the default settings, type:
$ fs_dedupe -default -clear server_2
Done
EXAMPLE #11
----------To reset all options on all Data Movers to the default settings, type:
$ fs_dedupe -default -clear -all
Done
---------------------------------------------------------------------------
Last modified: April 13, 2012 1:00 p.m.
fs_dhsm
Manages the VNX FileMover file system connections.
SYNOPSIS
--------
fs_dhsm
-list
| -info [<fs_name>|id=<fs_id>]
| -modify {<fs_name>|id=<fs_id>}[-state enabled]
[-popup_timeout <sec>][-backup {offline|passthrough}]
[-log {on|off}][-max_log_size <mb>][-offline_attr {on|off}]
[-read_policy_override {none|full|passthrough|partial}]
| -modify {<fs_name>|id=<fs_id>}[-state disabled]
| -connection {<fs_name>|id=<fs_id>}
-list
| -info [<cid>]
| -create -type {nfsv3|nfsv2} -secondary <nfs_server>:/<path>
[-read_policy_override {full|passthrough|partial|none}]
[-useRootCred {true|false}][-proto {UDP|TCP}][-nfsPort <port>]
[-mntPort <port>][-mntVer {3|2|1}][-localPort <port>]
| -create -type cifs -admin [<fqdn>\]<admin_name>
-secondary \\<fqdn>\<share>[\<path>]
-local_server <host_name> [-wins <address>][-password <password>]
[-read_policy_override {full|passthrough|partial|none}]
| -create -type http -secondary http://<host><url_path>
[-read_policy_override {full|passthrough|partial|none}]
[-httpPort <port>][-localPort <port>]
[-user <username> [-password <password>]]
[-timeout <seconds>][-cgi {y|n}]
| -create -type https -secondary https://<host><url_path>
[-read_policy_override {full|passthrough|partial|none}]
[-httpsPort <port>][-localPort <port>]
[-user <username> [-password <password>]]
[-timeout <seconds>][-cgi {y|n}]
| -delete {-all|<cid>[,<cid>...]} [-recall_policy {check|no|yes}]
| -modify {-all|<cid>[,<cid>...]} [-state {enabled|disabled|recallonly}]
[-read_policy_override {full|passthrough|partial|none}]
[{[-nfs_server <address>] [-localPort <port>]
[-proto {TCP|UDP}] [-useRootCred {true|false}]}
| {[-cifs_server <fqdn>][-local_server <host_name>]
[-password <password>][-admin [<fqdn>\]<admin_name>]
[-wins <address>]}
| {[-http_server <host>][-httpPort <port>][-httpsPort <port>]
[-localPort <port>][-user <username>]
[-password <password>][-timeout <seconds>]}
DESCRIPTION
-----------
The fs_dhsm command modifies the properties on file systems
enabled for VNX FileMover. The fs_dhsm command creates, deletes,
and modifies NFS, CIFS, and HTTP connections to remote hosts, lists
VNX FileMover file systems, and provides information on the
connections.
OPTIONS
-------
-list
Lists all file systems enabled with the VNX FileMover.
-info [<fs_name>|id=<fs_id>]
Displays information for the specified VNX FileMover file systems.
-modify {<fs_name>|id=<fs_id>}
Sets VNX FileMover parameters for the specified file system.
Note: When specifying the -modify option on a disabled file system, the state
is automatically changed to enabled. When specifying the -state disabled
option, it is not possible to specify any other parameter to modify.
[-state enabled]
Enables VNX FileMover operations on the specified file system.
The file system must be enabled to accept other options.
[-state disabled]
Disables VNX FileMover operations on the specified file system.
New FileMover attributes cannot be specified as part of a disable
command, nor can they be specified for a file system that is in the
disabled state. The attributes persist; if the file system is enabled
after a disable command, the attributes in effect prior to the disable
command take effect again.
[-popup_timeout <sec>]
Specifies the Windows popup timeout value in seconds. If a CIFS
I/O request cannot be processed within the specified time, then a
popup notification of the delay is sent to the CIFS client. The
default for <sec> is 0 (zero) which disables Windows popups.
Note: It may take up to 10 seconds before the popup is displayed.
[-backup {offline|passthrough}]
Specifies the nature of CIFS network backups. The offline option
backs up the stub file only. The passthrough (default) option
backs up all of the file data by using passthrough read.
[-log {on|off}]
Enables or disables VNX FileMover logging. The default log
filename is dhsm.log; it resides in the /.etc directory on the
FileMover-enabled file system.
[-max_log_size <mb>]
Specifies the maximum size of the log file. The current log file, in
addition to four old log files, is saved. The minimum log file size
is 10 MB.
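The rotation described above (one current log plus four old copies) can be
sketched as follows (Python; illustrative only — the file-naming scheme and
function name are assumptions, not the Data Mover's actual implementation):

```python
import os

def rotate_dhsm_log(log_path, max_mb, keep=4):
    # When the current log reaches -max_log_size, rotate it and keep
    # the four previous logs (illustrative numbered-suffix scheme).
    if os.path.getsize(log_path) < max_mb * 1024 * 1024:
        return
    oldest = f"{log_path}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)                # drop the oldest retained copy
    for i in range(keep - 1, 0, -1):     # shift .3 -> .4, ..., .1 -> .2
        src = f"{log_path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{log_path}.{i + 1}")
    os.rename(log_path, f"{log_path}.1") # current log becomes .1
```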
[-offline_attr {on|off}]
Specifies whether the Data Mover should set the CIFS offline file
attributes on the stub files. The default is on.
Caution: It is recommended that you do not disable the CIFS offline
attributes.
[-read_policy_override {none|full|passthrough|partial}]
Specifies the migration method option used by the VNX, in the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on a read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take several minutes or hours if the file is
very large.
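The three override methods above differ in which blocks of a migrated file are
recalled to the VNX to satisfy a read. This can be sketched as follows (Python;
illustrative only and not part of the eNAS CLI — the function name is an
assumption, and "none" is modeled as deferring to the stub file):

```python
def blocks_to_recall(policy, requested_blocks, file_blocks):
    # policy mirrors -read_policy_override values.
    if policy == "full":
        return set(file_blocks)          # recall the whole file first
    if policy == "partial":
        return set(requested_blocks)     # recall only the blocks read
    if policy == "passthrough":
        return set()                     # serve the read without recalling
    # "none": no override; the method in the stub file applies instead.
    raise ValueError("'none' defers to the stub file's migration method")
```

This also illustrates why a full recall of a very large file can take minutes
or hours, while passthrough returns data without changing the stub at all.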
-connection {<fs_name>|id=<fs_id>} -list
Lists all connections for the specified file system.
-connection {<fs_name>|id=<fs_id>} -info [<cid>]
Displays details on all connections for the specified file system. If the
<cid> is specified, only information for that connection is displayed.
Note: A connection ID is automatically created when a connection is
established. The connection ID is displayed using the -list and is referred to
as the <cid> in other commands.
NFS CONNECTIONS
-connection {<fs_name>|id=<fs_id>} -create -type
{nfsv3|nfsv2} -secondary <nfs_server>:/<path>
Creates a connection using the NFS protocol between the specified
file system and the secondary file system. The secondary file system
stores migrated data. The -type option specifies the NFS version that
the Data Mover should use when connecting to the secondary server.
Note: VNX FileMover does not currently support NFSv4 protocol.
The -secondary option specifies the location of the remote file system.
Note: Although an IP address can be specified for an <nfs_server>, EMC
strongly suggests using the hostname of the server, which allows you to take
advantage of Domain Name System (DNS) failover capability.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method for data recall in response to
client read requests. full migrates the whole file before it returns
the requested blocks. passthrough leaves the stub file, but
retrieves the requested data from the secondary file system.
partial migrates only the blocks required to satisfy the client read
request. none (default) defaults to the read method option
specified in the stub file.
Note: The full migration may take minutes or hours if the file is very
large.
[-useRootCred {true|false}]
Specifies the user credentials that the Data Mover uses when
requesting data from the secondary VNX. When set to true, the
Data Mover requests data as the root user (UID 0). When set to
false (default), the Data Mover requests data as the owner of the
file as specified in the stub file.
Note: If the -useRootCred option is set to true, the secondary storage
NFS server must grant the Data Mover root privilege for NFS traffic.
[-proto {TCP|UDP}]
Specifies the protocol for the Data Movers to use for
communication to the secondary <nfs_server>. TCP is the
default.
[-nfsPort <port>]
Specifies an NFS port on the secondary <nfs_server>. A default
port is discovered automatically.
[-mntPort <port>]
Specifies a mount port on the secondary <nfs_server>. A default
mount port is discovered automatically.
Note: The -nfsPort and the -mntPort options are used for secondary
servers that do not have the Portmapper running. The administrator starts
the nfsd and mountd daemons on specific ports to reduce exposure to attackers.
[-mntVer {1|2|3}]
Specifies the mount version for the NFS connection. If the -type is
nfsv3, then the -mntVer must be 3. If the -type is nfsv2, then 1 or 2
can be specified. The default for nfsv2 is 2.
[-localPort <port>]
Overrides the default port that the Data Mover uses during
connection to be compatible with firewalls. The default for UDP is
1020. By default, TCP uses a random port over 1024 to make the
connection.
-connection {<fs_name>|id=<fs_id>} -modify {-all|<cid>[,<cid>...]}
Changes parameters on an existing NFS VNX FileMover connection.
Either all connections or just the specified <cid> connections can be
modified.
[-state {enabled|disabled|recallonly}]
Sets the state of VNX FileMover operations on the specified file
system. enabled (default) allows both the creation of stub files
and data migration through reads and writes. If the state is
disabled, neither stub files nor data migration is possible. Data
currently on the VNX can be read and written to in the disabled
state.
If the state is recallonly, the policy engine is not allowed to create
stub files, but the user is still able to trigger data migration using
a read or write request from the secondary file system to the VNX.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, in the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take minutes or hours if the file is very
large.
[-nfs_server <address>]
Specifies the name or IP address of the secondary NFS server.
Note: Although an IP address can be specified for the <nfs_server>, EMC
strongly suggests using the hostname of the server, which allows use of
the DNS failover capability.
[-localPort <port>]
Specifies a port to override the default port used by the Data
Mover during connection for compatibility with firewalls.
[-proto {TCP|UDP}]
Specifies the protocol for the Data Mover to use for NFS
communications to the secondary <nfs_server>. TCP is the
default.
[-useRootCred {true|false}]}
Specifies the user credentials that the Data Mover uses when
requesting data from the secondary VNX. When set to true,
the Data Mover requests data as the root user (UID 0). When
set to false (default), the Data Mover requests data as the
owner of the file as specified in the stub file.
Note: If the -useRootCred option is set to true, the secondary storage
NFS server must grant the Data Mover root privilege for NFS traffic.
-connection <fs_name> -delete {-all|<cid>[,<cid>...]}
Removes an existing NFS connection between the file system and the
secondary file system. Either all connections can be removed or just
the specified <cid> connection can be removed.
[-recall_policy {check|no|yes}]
Specifies the recall policy for any migrated file during the -delete.
check (default) scans the file system for stub files that depend on
the connection and fails on the first one. no deletes the connection
without checking for stub files that depend on the connection,
and yes migrates the files back to the VNX before the connection
is removed. If no is specified and stub files exist, an I/O error
appears when the file is read because the connection no longer
exists.
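The three recall policies can be sketched as follows (Python; illustrative
only and not part of the eNAS CLI — the function name and return shape are
assumptions):

```python
def delete_connection(policy, stub_files):
    # policy mirrors -recall_policy; stub_files is the set of stub
    # files that depend on the connection being deleted.
    recalled = []
    if policy == "check" and stub_files:
        # default: scan for dependent stubs and fail on the first one
        raise RuntimeError(f"stub file depends on connection: {stub_files[0]}")
    if policy == "yes":
        # migrate file data back to the VNX before removing the connection
        recalled = list(stub_files)
    # "no": delete without checking; reading an orphaned stub afterwards
    # produces an I/O error because the connection no longer exists.
    return {"deleted": True, "recalled": recalled}
```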
CIFS CONNECTIONS
-connection {<fs_name>|id=<fs_id>} -create -type cifs
Creates a connection using the CIFS protocol between the specified
file system and a secondary file system. A connection ID is
automatically created when a connection is established. The
connection ID is seen using the -list and is referred to as the <cid> in
other commands.
-admin [<fqdn>\]<admin_name>
Specifies the <admin_name> used to make the CIFS connection.
If an optional <fqdn> is specified, it must be a fully qualified
domain name. The [<fqdn>\]<admin_name> entry must be
enclosed within quotes as shown in EXAMPLE #2. If the <fqdn>
is not specified, the -local_server domain is used.
-secondary \\<fqdn>\<share>[\<path>]
Specifies the CIFS server, the share, and path for the secondary
server for connection. The <fqdn>\<share>[\<path>] entry must
be enclosed within quotes. The domain must be fully qualified; an
IP address will not work.
-local_server <host_name>
Specifies the NetBIOS name or computer name of the local CIFS
server on the Data Mover.
[-wins <address>]
Specifies a WINS server to resolve names in a Windows domain.
[-password <password>]
Allows the user to specify the admin password. The password is
not recorded in the command log. If the -password option is
given but no password is specified, the user is prompted
interactively.
Caution: When specifying the password with this option, be aware it is
unmasked, and visible to other users. The command may also
be read from the log of the shell.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method for data recall in response to
client read requests. full migrates the whole file before it returns
the requested blocks. passthrough leaves the stub file, but
retrieves the requested data from the secondary file system.
partial migrates only the blocks required to satisfy the client read
request. none (default) defaults to the read method option
specified in the stub file.
Note: The full migration may take several minutes or hours if the file is
very large.
-connection {<fs_name>|id=<fs_id>} -modify {-all|<cid>[,<cid>...]}
Changes parameters on an existing CIFS VNX FileMover connection.
[-state {enabled|disabled|recallonly}]
Sets the state of VNX FileMover operations on the specified file
system. enabled (default) allows both the creation of stub files
and data migration through reads and writes. If the state is
disabled, neither stub files nor data migration is possible. Data
currently on the VNX can be read and written to in the disabled
state.
If the state is recallonly, the policy engine is not allowed to create
stub files, but the user is still able to trigger data migration using
a read or write request from the secondary file system to the VNX.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, in the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take minutes or hours if the file is very
large.
[-cifs_server <fqdn>]
Specifies the fully qualified domain name of the secondary CIFS
server.
[-local_server <host_name>]
Specifies the NetBIOS name or computer name of the local
CIFS server on the Data Mover.
[-password <password>]
Allows the user to specify the admin password. The password
is not recorded in the command log. If the -password option is
given but no password is specified, the user is prompted
interactively.
When specifying the password with this option, be aware it is
unmasked, and visible to other users. The command may also
be read from the log of the shell.
[-admin [<fqdn>\]<admin_name>]
Specifies the <admin_name> used to make the CIFS
connection. If an optional <fqdn> is specified, it must be a
fully qualified domain name. If the <fqdn> is not specified, the
-local_server domain is used.
[-wins <address>]}
Specifies a WINS server to resolve names in a Windows
domain.
-connection <fs_name> -delete {-all|<cid> [,<cid>...]}
Removes an existing CIFS connection between the file system and the
secondary file system.
[-recall_policy {check|no|yes}]
Specifies the recall policy for any migrated file during the -delete
option. check (default) scans the file system for stub files that
depend on the connection and fails on the first one. no deletes the
connection without checking for stub files that depend on the
connection, and yes migrates the files back to the VNX before the
connection is removed. If no is specified and stub files exist, an
I/O error appears when the file is read because the connection no
longer exists.
HTTP CONNECTIONS
-connection {<fs_name>|id=<fs_id>} -create -type
http -secondary http://<host><url_path>
Creates a connection using the HTTP protocol between the specified
primary file system and a secondary file system. There are two types
of HTTP connections: CGI and non-CGI. For CGI connections, the
value of the -secondary option specifies the hostname of the server
running the secondary storage HTTP server and the location of the
CGI application that provides access to a storage system. For
non-CGI connections, the value for the -secondary option specifies
the hostname and, optionally, a portion of the hierarchical namespace
published by the web server.
Note: Although an IP address can be specified for a <host>, EMC strongly
suggests using the hostname of the server, which allows the DNS failover
capability.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, in the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take several minutes or hours if the file is
very large.
-httpPort <port>
Specifies the remote port number that the Data Mover delivers the HTTP
request to. If not specified, the Data Mover issues HTTP requests to port
80 on the secondary storage HTTP server.
-localPort <port>
Specifies the local port number the Data Mover uses to issue HTTP requests
to the web server running on the secondary storage. The <port> specified
should be an integer no less than 1024. If not specified, the Data Mover
selects a port to issue the HTTP requests.
Note: The two end points of an HTTP connection are specified by the
file system name and the value specified for the -secondary option. If
multiple connections are created using identical end points with different
attributes such as -cgi, -user, -password, -localPort, or -httpPort, the
connection will fail.
[-user <username>]
Defines the username the HTTP client uses if digest authentication is
required by the secondary storage HTTP server.
[-password <password>]
Allows the user to specify the admin password. The password is not recorded
in the command log. If the -password option is given but no password is
specified, the user is prompted interactively.
Use the -password option when digest authentication is required by the
secondary storage HTTP server.
[-timeout <seconds>]
Specifies the timeout value in seconds. By default, the VNX HTTP
client waits 30 seconds for a reply from the HTTP server and then retries
the operation once.
[-cgi {y|n}]
Specifies the HTTP connection type: CGI or non-CGI. By default, FileMover
assumes that the web server is using CGI connections to access migrated
file data using a CGI application. For non-CGI connections, set the -cgi
option to n; FileMover then assumes the web server has direct access to
migrated file content on secondary storage.
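The -create options above can be combined in a single command. The following is a hedged sketch only; the host webserver1.example.com, the export path, and the port values are hypothetical placeholders, not values from an actual configuration:

```shell
$ fs_dhsm -connection ufs1 -create -type http \
-secondary http://webserver1.example.com/export/dhsm1 \
-httpPort 8080 -localPort 2048 -user dhsm_user -password \
-timeout 60 -cgi n
```

Because -password is given without a value, the command prompts for the password interactively, which keeps it out of the command log.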
-connection {<fs_name>|id=<fs_id>} -modify {-all| <cid>[,<cid>...]}
Changes parameters on an existing HTTP VNX FileMover connection.
[-state {enabled|disabled|recallonly}]
Sets the state of VNX FileMover operations on the specified file
system. enabled (default) allows both the creation of stub files
and data migration through reads and writes. If the state is
disabled, neither stub files nor data migration is possible. Data
currently on the VNX can be read and written to in the disabled
state.
If the state is recallonly, the policy engine is not allowed to create
stub files, but the user is still able to trigger data migration by
using a read or write request from the secondary file system to the
VNX.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, at the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take minutes or hours if the file is very
large.
[-http_server <host>]
Specifies the hostname of the secondary storage HTTP server.
-httpPort <port>
Specifies the remote port number that the Data Mover delivers
the HTTP request to. If not specified, the Data Mover issues
HTTP requests to port 80 on the secondary storage HTTP
server.
-localPort <port>
Specifies the local port number the Data Mover uses to issue
HTTP requests to the web server active on the secondary
storage. The <port> specified should be an integer no less than
1024. If not specified, the Data Mover selects a port to issue the
HTTP requests.
Note: If you attempt to create multiple HTTP connections by using
identical end points with different attributes such as -cgi, -user,
-password, -localPort, -httpPort, the connection will fail.
[-user <username>]
An optional attribute used to define the username the HTTP
client uses if digest authentication is required by the
secondary storage HTTP server.
[-password <password>]
Allows the user to specify the admin password. The password
is not recorded in the command log. If the -password option is
given but no password is specified, the user is prompted
interactively.
[-timeout <sec>]
Specifies the timeout value in seconds. By default, the VNX
HTTP client waits 30 seconds for a reply from the HTTP server
and then retries the operation once before commencing the
failover operation.
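As a sketch of the -modify options above (the host name and port are hypothetical placeholders, not from a real configuration), an existing HTTP connection can be repointed at a different secondary web server and restricted to recalls in one command:

```shell
$ fs_dhsm -connection ufs1 -modify 2 -state recallonly \
-http_server webserver2.example.com -httpPort 8081
```

EXAMPLE #10 shows how to list the connection IDs (<cid>) such as the 2 used here.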
-connection <fs_name> -delete {-all|<cid>[,<cid>...]}
Removes an existing HTTP connection between the file system and
the secondary file system. Either all connections can be removed or
just the specified <cid> connection can be removed.
[-recall_policy {check|no|yes}]
Specifies the recall policy for any migrated file during the -delete
option. The check (default) argument scans the file system for
stub files that depend on the connection and fails on the first one.
no deletes the connection without checking for stub files that
depend on the connection, and yes migrates the files back to the
VNX before the connection is removed. If no is specified and stub
files exist, an I/O error appears when the file is read because the
connection no longer exists.
HTTPS CONNECTIONS
-connection {<fs_name>|id=<fs_id>} -create -type
https -secondary https://<host><url_path>
Creates a connection by using the HTTPS protocol between the
specified primary file system and a secondary file system. There are
two types of HTTPS connections: CGI and non-CGI. For CGI
connections, the value of the -secondary option specifies the
hostname of the server running the secondary storage HTTPS server
and the location of the CGI application that provides access to a
storage system. For non-CGI connections, the value for the
-secondary option specifies the hostname and, optionally, a portion of
the hierarchical namespace published by the web server.
Note: Although an IP address can be specified for a <host>, EMC strongly
suggests using the hostname of the server, which allows the DNS failover
capability.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, at the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take several minutes or hours if the file is
very large.
[-httpsPort <port>]
Specifies the remote port number that the Data Mover delivers
the HTTPS request to. If not specified, the Data Mover issues
HTTPS requests to port 443 on the secondary storage HTTPS
server.
[-localPort <port>]
Specifies the local port number the Data Mover uses to issue
HTTPS requests to the web server active on the secondary
storage. The <port> specified should be an integer no less than
1024. If not specified, the Data Mover selects a port to issue the
HTTPS requests.
Note: The two end points of an HTTPS connection are specified by the
file system name and the value specified for the -secondary option. If
multiple connections are created by using identical end points with
different attributes such as -cgi, -user, -password, -localPort, -httpsPort,
the connection will fail.
[-user <username>]
Defines the username the HTTPS client uses if digest authentication is
required by the secondary storage HTTPS server.
[-password <password>]
Allows the user to specify the admin password. The password is
not recorded in the command log. If the -password option is
given but no password is specified, the user is prompted
interactively.
Use the -password option when digest authentication is required
by the secondary storage HTTPS server.
[-timeout <seconds>]
Specifies the timeout value in seconds. By default, the VNX
HTTPS client waits 30 seconds for a reply from the HTTPS server
and then retries the operation once.
[-cgi {y|n}]
Specifies the HTTPS connection type: CGI or non-CGI. By default,
FileMover assumes that the web server is using CGI connections
to access migrated file data by using a CGI application. For
non-CGI connections, set the -cgi option to n; FileMover then
assumes the web server has direct access to migrated file content
on secondary storage.
-connection {<fs_name>|id=<fs_id>} -modify {-all|<cid>[,<cid>...]}
Changes parameters on an existing HTTPS VNX FileMover connection.
[-state {enabled|disabled|recallonly}]
Sets the state of VNX FileMover operations on the specified file
system. enabled (default) allows both the creation of stub files
and data migration through reads and writes. If the state is
disabled, neither stub files nor data migration is possible. Data
currently on the VNX can be read and written to in the disabled
state.
If the state is recallonly, the policy engine is not allowed to create
stub files, but the user is still able to trigger data migration by
using a read or write request from the secondary file system to the
VNX.
[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, at the
connection level or file system level, to override the migration
method specified in the stub file. none (default) specifies no
override, full recalls the whole file to the VNX on read request
before the data is returned, passthrough retrieves data without
recalling the data to the VNX, and partial recalls only the blocks
required to satisfy the client read request.
Note: The full migration may take minutes or hours if the file is very
large.
[-http_server <host>]
Specifies the hostname of the secondary storage HTTPS server.
-httpsPort <port>
Specifies the remote port number that the Data Mover delivers
the HTTPS request to. If not specified, the Data Mover issues
HTTPS requests to port 443 on the secondary storage HTTPS
server.
Note: Although the -http_server option can be used to modify the name
of the secondary storage HTTPS server, files converted into stubs
through an HTTPS connection can be brought back online only through
HTTPS, not through NFS, CIFS, or HTTP.
-localPort <port>
Specifies the local port number the Data Mover uses to issue
HTTPS requests to the web server active on the secondary
storage. The <port> specified should be an integer no less than
1024. If not specified, the Data Mover selects a port to issue the
HTTPS requests.
Note: If you attempt to create multiple HTTPS connections by using
identical end points with different attributes such as -cgi, -user,
-password, -localPort, -httpsPort, the connection will fail.
[-user <username>]
An optional attribute used to define the username the HTTPS
client uses if digest authentication is required by the
secondary storage HTTPS server.
[-password <password>]
Allows the user to specify the admin password. The password
is not recorded in the command log. If the -password option is
given but no password is specified, the user is prompted
interactively.
[-timeout <sec>]
Specifies the timeout value in seconds. By default, the VNX
HTTPS client waits 30 seconds for a reply from the HTTPS
server and then retries the operation once before commencing
the failover operation.
-connection <fs_name> -delete {-all|<cid>[,<cid>...]}
Removes an existing HTTPS connection between the file system and
the secondary file system. Either all connections can be removed or
just the specified <cid> connection can be removed.
[-recall_policy {check|no|yes}]
Specifies the recall policy for any migrated file during the -delete operation.
check (default) scans the file system for stub files that depend on
the connection and fails on the first one. no deletes the connection
without checking for stub files that depend on the connection, and
yes migrates the files back to the VNX before the connection
is removed. If no is specified and stub files exist, an I/O error
appears when the file is read because the connection no longer
exists.
SEE ALSO
--------
Using VNX FileMover, server_cifs, server_http, and server_nfs.
EXAMPLE #1
----------
To enable VNX FileMover on a file system, type:
$ fs_dhsm -modify ufs1 -state enabled
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
Done
Where:
Value                  Definition
state                  Whether VNX FileMover is enabled or disabled on the file system
offline attr           Whether CIFS clients should be notified that a file is migrated
popup timeout          Timeout value, in seconds, before a Windows popup notification is sent to the CIFS client
backup                 Nature of CIFS network backups
read policy override   Migration method option used to override the read method specified in the stub file
log file               Whether FileMover logging is enabled or disabled
max log size           Maximum size of the log file
EXAMPLE #2
----------
To create a CIFS connection for ufs1 to the secondary file system
\\winserver2.nasdocs.emc.com\dhsm1 with a specified administrative account
nasdocs.emc.com\Administrator and local server dm102-cge0:
$ fs_dhsm -connection ufs1 -create -type cifs -admin
'nasdocs.emc.com\Administrator' -secondary
'\\winserver2.nasdocs.emc.com\dhsm1' -local_server dm102-cge0
Enter Password:********
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = none
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins =
Done
Where:
Value                  Definition
state                  Whether VNX FileMover is enabled or disabled on the file system
offline attr           Whether CIFS clients should be notified that a file is migrated
popup timeout          Timeout value, in seconds, before a popup notification is sent to the CIFS client
backup                 Nature of CIFS network backups
read policy override   Migration method option used to override the read method specified in the stub file
log file               Whether FileMover logging is enabled or disabled
max log size           Maximum size of the log file
cid                    Connection ID
type                   Type of file system (see -list for a description of the types)
secondary              Hostname or IP address of the remote file system
state                  Whether VNX FileMover is enabled or disabled on the file system
read policy override   Migration method option used to override the read method specified in the stub file
write policy           Write policy option used to recall data from secondary storage
local_server           Name of the local CIFS server used to authenticate the CIFS connection
EXAMPLE #3
----------
To create a CIFS connection for ufs1 to the secondary file system
\\winserver2.nasdocs.emc.com\dhsm1 with a specified administrative account
nasdocs.emc.com\Administrator, local server dm102-cge0, WINS server, and
with the migration method set to full, type:
$ fs_dhsm -connection ufs1 -create -type cifs -admin
'nasdocs.emc.com\Administrator' -secondary
'\\winserver2.nasdocs.emc.com\dhsm1'
-local_server dm102-cge0 -wins 172.24.102.25 -read_policy_override full
Enter Password:********
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = full
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #4
----------
To display connection information for ufs1, type:
$ fs_dhsm -connection ufs1 -info 1
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
EXAMPLE #2 provides a description of command output.
EXAMPLE #5
----------
To modify the read_policy_override setting for connection 0 for ufs1, type:
$ fs_dhsm -connection ufs1 -modify 0 -read_policy_override passthrough
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #6
----------
To modify the VNX FileMover connection for ufs1, type:
$ fs_dhsm -connection ufs1 -modify 0 -nfs_server 172.24.102.115 -proto TCP
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
cid = 1
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = none
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 2
type = HTTP
secondary = http://172.24.102.115/export/dhsm1
state = enabled
read policy override = none
write policy = full
user =
options = cgi=n
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #7
----------
To create the NFSv3 connection for ufs1 to the secondary file system
172.24.102.115:/export/dhsm1 with the migration method set to full, the
-useRootCred set to true, and the protocol set to UDP, type:
$ fs_dhsm -connection ufs1 -create -type nfsv3 -secondary
172.24.102.115:/export/dhsm1 -read_policy_override full -useRootCred true
-proto UDP
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=UDP
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #8
----------
To modify the VNX FileMover connection for ufs1, type:
$ fs_dhsm -connection ufs1 -modify 1 -proto TCP
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #9
----------
To display VNX FileMover connection information for ufs1, type:
$ fs_dhsm -info ufs1
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
EXAMPLE #1 provides a description of command output.
EXAMPLE #10
----------
To list VNX FileMover connections, type:
$ fs_dhsm -connection ufs1 -list
id name cid
29 ufs1 0
29 ufs1 1
29 ufs1 2
EXAMPLE #11
----------
To modify the VNX FileMover connection for ufs1, type:
$ fs_dhsm -modify ufs1 -popup_timeout 10 -backup offline -log on
-max_log_size 25 -offline_attr on -read_policy_override full
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #12
----------
To modify the state of the VNX FileMover connection 0 for ufs1, type:
$ fs_dhsm -connection ufs1 -modify 0 -state disabled
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = disabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #13
----------
To modify the state of the VNX FileMover connection 1 for ufs1, type:
$ fs_dhsm -connection ufs1 -modify 1 -state recallonly
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = recallonly
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #14
----------
To delete the VNX FileMover connections 0 and 1 for ufs1, and specify
the recall policy for any migrated files during the delete, type:
$ fs_dhsm -connection ufs1 -delete 0,1 -recall_policy no
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #15
----------
To change the state of the VNX FileMover connection for ufs1 to
disabled, type:
$ fs_dhsm -modify ufs1 -state disabled
ufs1:
state = disabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
Done
EXAMPLE #1 provides a description of command output.
EXAMPLE #16
----------
To create an HTTP connection for ufs1 to the secondary file system
/export/dhsm1 on the web server http://172.24.102.115 which has direct
access to the storage, type:
$ fs_dhsm -connection ufs1 -create -type http -secondary
http://172.24.102.115/export/dhsm1 -cgi n
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 2
type = HTTP
secondary = http://172.24.102.115/export/dhsm1
state = enabled
read policy override = none
write policy = full
user =
options = cgi=n
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #17
----------
To create an HTTP connection for ufs1 to the secondary file system using CGI
connections to access migrated file data using a CGI application, type:
$ fs_dhsm -connection ufs1 -create -type http -secondary
http://www.nasdocs.emc.com/cgi-bin/access.sh
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = HTTP
secondary = http://www.nasdocs.emc.com/cgi-bin/access.sh
state = enabled
read policy override = none
write policy = full
user =
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #18
----------
To create an HTTPS connection for server2_fs1 on the web server
https://int16543 with read_policy_override set to full, type:
$ fs_dhsm -connection server2_fs1 -create -type https -secondary
https://int16543 -read_policy_override full -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = full
write policy = full
user =
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #19
----------
To create an HTTPS connection for ufs1 to the secondary file system using
CGI connections to access migrated file data using a CGI application, type:
$ fs_dhsm -connection ufs1 -create -type https -secondary
https://www.nasdocs.emc.com/cgi-bin/access.sh
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://www.nasdocs.emc.com/cgi-bin/access.sh
state = enabled
read policy override = none
write policy = full
user =
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #20
----------
To create an HTTPS connection on httpsPort 443 for server2_fs1 on the web
server https://int16543 with read_policy_override set to passthrough, type:
$ fs_dhsm -connection server2_fs1 -create -type https -secondary
https://int16543 -read_policy_override passthrough -httpsPort 443 -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user =
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #21
----------
To create an HTTPS connection on localPort 80 for server2_fs1 on the web
server https://int16543 with read_policy_override set to passthrough, type:
$ fs_dhsm -connection server2_fs1 -create -type https -secondary
https://int16543 -read_policy_override passthrough -localPort 80 -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user =
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #22
----------
To create an HTTPS connection on httpsPort 443 for server2_fs1 on the web
server https://int16543 with a specified user dhsm_user, type:
$ fs_dhsm -connection server2_fs1 -create -type https -secondary
https://int16543 -read_policy_override full -httpsPort 443 -user dhsm_user
-password dhsm_user -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = full
write policy = full
user = dhsm_user
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #23
----------
To modify the read_policy_override setting for connection 1 for server2_fs1,
type:
$ fs_dhsm -connection server2_fs1 -modify 1 -read_policy_override passthrough
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user = dhsm_user
options =
Done
EXAMPLE #2 provides a description of command output.
EXAMPLE #24
----------
To delete the VNX FileMover connection 0 for ufs1, type:
$ fs_dhsm -connection ufs1 -delete 0
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
Done
EXAMPLE #1 provides a description of command output.
-------------------------------------
Last Modified: March 29, 2011 05:00 PM
fs_group
Creates a file system group from the specified file systems or a single file
system.
SYNOPSIS
--------
fs_group
-list
| -delete <fs_group_name>
| -info {<fs_group_name>|id=<fs_group_id>}
| [-name <name>] -create {<fs_name>,...}
| -xtend <fs_group_name> {<fs_name>,...}
| -shrink <fs_group_name> {<fs_name>,...}
DESCRIPTION
-----------
The fs_group command combines file systems to be acted upon
simultaneously as a single group for TimeFinder/FS.
OPTIONS
-------
-list
Displays a listing of all file system groups.
Note: The ID of the object is an integer and is assigned automatically. The
name of a file system may be truncated if it is too long for the display. To
display the full name, use the -info option with a file system ID.
-delete <fs_group_name>
Deletes the file system group configuration. Individual file systems are not
deleted.
-info {<fs_group_name>|id=<fs_group_id>}
Displays information about a file system group, either by name or group ID.
[-name <name>] -create {<fs_name>,...}
Creates a file system group from the specified file systems. If a name is
not specified, one is assigned by default.
-xtend <fs_group_name> {<fs_name>,...}
Adds the specified file systems or group to a file system group.
-shrink <fs_group_name> {<fs_name>,...}
Removes the specified file systems or group from a file system group.
Individual file systems are not deleted.
SEE ALSO
--------
Managing Volumes and File Systems for VNX Manually and Using
TimeFinder/FS, NearCopy, and FarCopy on VNX for File, fs_timefinder,
and nas_fs.
STORAGE SYSTEM OUTPUT
The number associated with the storage device is dependent on the
attached storage system. VNX for block displays a prefix of APM
before a set of integers, for example, APM00033900124-0019.
Symmetrix storage systems appear as 002804000190-003C.
EXAMPLE #1
----------
To create a file system group named ufsg1 and add ufs1, type:
$ fs_group -name ufsg1 -create ufs1
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
Where:
Value        Indicates:
id           ID of the group that is automatically assigned
name         Name assigned to the group
acl          Access control value for the group
in_use       Whether a file system is used by a group
type         Type of file system
fs_set       File systems that are part of the group
pool         Storage pool given to the file system group
stor_devs    Storage system devices associated with the group
disks        Disks on which the metavolume resides
EXAMPLE #2
----------
To list all file system groups, type:
$ fs_group -list
id   name    acl  in_use  type  member_of  fs_set
20   ufsg1   0    n       100              18
Where:
Value        Indicates:
member_of    Groups to which the file system group belongs
EXAMPLE #3
----------
To display information for the file system group ufsg1, type:
$ fs_group -info ufsg1
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
EXAMPLE #1 provides a description of command output.
EXAMPLE #4
----------
To add file system ufs2 to the file system group ufsg1, type:
$ fs_group -xtend ufsg1 ufs2
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1,ufs2
pool =
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009,0001879
40268-000A,000187940268-000B,000187940268-000C,000187940268-000D
disks = d3,d4,d5,d6,d7,d8,d9,d10
EXAMPLE #1 provides a description of command output.
EXAMPLE #5
----------
To remove file system ufs2 from the file system group ufsg1, type:
$ fs_group -shrink ufsg1 ufs2
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
EXAMPLE #1 provides a description of command output.
EXAMPLE #6
----------
To delete file system group ufsg1, type:
$ fs_group -delete ufsg1
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set =
stor_devs =
disks =
EXAMPLE #1 provides a description of command output.
--------------------------------------
Last Modified: March 29, 2010 6:00 pm
fs_rdf
Manages the Remote Data Facility (RDF) functionality for a file
system residing on RDF drives.
SYNOPSIS
--------
fs_rdf {<fs_name>|id=<fs_id>}
-Mirror {on|off|refresh}
| -Restore [-Force]
| -info
Note: RDF is supported only on a VNX attached to a Symmetrix.
DESCRIPTION
-----------
The fs_rdf command turns mirroring on and off for an RDF file
system and displays information about RDF relationships.
OPTIONS
-------
-Mirror {on|off|refresh}
The on option resumes the link between the RDF drives of a file
system, thereby enabling mirroring for the RDF file system. The off
option halts mirroring between the file systems. The refresh option
performs an immediate mirror on, then off, which refreshes the file
system image.
-Restore [-Force]
Restores a file system from the R2 side (remote) when remote
TimeFinder/FS FarCopy is used. The -Restore option can only be executed
on the R1 side. The -Force option must be used when restoring a file
system with file-level retention enabled.
-info
Displays information about RDF relationships.
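A minimal, hedged sketch of consuming `-info` output in a script, assuming the key = value layout shown in the examples below; the sample text stands in for real command output, since fs_rdf exists only on a Control Station:

```shell
# Sample fragment of `fs_rdf <fs_name> -info` output. A real script
# would capture it with: rdf_info=$(fs_rdf ufs1_snap1 -info)
rdf_info='rdf_mode             = SYNCHRONOUS
rdf_pair_state       = SYNCINPROG
rdf_domino           = DISABLED'
# Split on "=" (plus surrounding spaces) and pull the pair state.
state=$(printf '%s\n' "$rdf_info" | awk -F'= *' '/rdf_pair_state/ { print $2 }')
if [ "$state" = "SYNCHRONIZED" ]; then
    echo "mirror is in sync"
else
    echo "mirror state: $state"
fi
```

A script could poll this way until rdf_pair_state leaves SYNCINPROG before acting on the file system.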
SEE ALSO
--------
Using SRDF/S with VNX for Disaster Recovery, Using TimeFinder/FS,
NearCopy, and FarCopy on VNX for File, and Using VNX File-Level
Retention.
EXAMPLE #1
----------
To turn on mirroring for ufs1_snap1 from the R1 Control Station, type:
$ fs_rdf ufs1_snap1 -Mirror on
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs =
002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SYNCINPROG
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 736440
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = NA
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED
Where:
Value                          Definition
id                             ID of a file system that is assigned automatically
name                           Name assigned to a file system
acl                            Access control value for a file system
in_use                         Whether a file system is registered into the mount table
type                           Type of file system. See nas_fs for a description of
                               the types
volume                         Volume on which a file system resides
pool                           Storage pool for the file system
rw_servers                     Servers with read-write access to a file system
ro_servers                     Servers with read-only access to a file system
rw_vdms                        VDM servers with read-write access to a file system
ro_vdms                        VDM servers with read-only access to a file system
backup_of                      The remote RDF file system
stor_devs                      The storage system devices associated with a file system
disks                          The disks on which the metavolume resides
remote_symid                   The serial number of the storage system containing the
                               target volume
remote_sym_devname             The storage system device name of the remote device in
                               an RDF pair
ra_group_number                The RA group number (1-n)
dev_rdf_type                   The type of RDF device. Possible values are: R1 and R2
dev_ra_status                  RA status. Possible values are: READY, NOT_READY,
                               WRITE_DISABLED, STATUS_NA, STATUS_MIXED
dev_link_status                Link status. Possible values are: READY, NOT_READY,
                               WRITE_DISABLED, NA, MIXED
rdf_mode                       The RDF mode. Possible values are: SYNCHRONOUS,
                               SEMI_SYNCHRONOUS, ADAPTIVE_COPY, MIXED
rdf_pair_state                 Composite state of the RDF pair. Possible values are:
                               INVALID, SYNCINPROG, SYNCHRONIZED, SPLIT, SUSPENDED,
                               FAILED_OVER, PARTITIONED, R1_UPDATED, R1_UPDINPROG, MIXED
rdf_domino                     The RDF device domino. Possible values are: ENABLED,
                               DISABLED, MIXED
adaptive_copy                  Possible values are: DISABLED, WP_MODE, DISK_MODE, MIXED
adaptive_copy_skew             Number of invalid tracks when in adaptive copy mode
num_r1_invalid_tracks          Number of invalid tracks on the source (R1) device
num_r2_invalid_tracks          Number of invalid tracks on the target (R2) device
dev_rdf_state                  Specifies the composite RDF state of the RDF device.
                               Possible values are: READY, NOT_READY, WRITE_DISABLED,
                               NA, MIXED
remote_dev_rdf_state           Specifies the composite RDF state of the remote RDF
                               device. Possible values are: READY, NOT_READY,
                               WRITE_DISABLED, NA, MIXED
rdf_status                     Specifies the RDF status of the device. Possible values
                               are: READY, NOT_READY, WRITE_DISABLED, NA, MIXED
link_domino                    RDF link domino. Possible values are: ENABLED, DISABLED
prevent_auto_link_recovery     When enabled, prevents the automatic resumption of data
                               copy across the RDF links as soon as the links have
                               recovered. Possible values are: ENABLED, DISABLED
link_config                    Possible values are: CONFIG_ESCON, CONFIG_T3
suspend_state                  Specifies the status of R1 devices in a consistency
                               group. Possible states are: NA, OFFLINE, OFFLINE_PEND,
                               ONLINE_MIXED
consistency_state              Specifies the state of an R1 device related to
                               consistency groups. Possible states are: ENABLED,
                               DISABLED
adaptive_copy_wp_state         Specifies the state of the adaptive copy mode. Possible
                               states are: NA, OFFLINE, OFFLINE_PEND, ONLINE_MIXED
prevent_ra_online_upon_pwron   Specifies the state of the RA director coming online
                               after power on. Possible states are: ENABLED, DISABLED
EXAMPLE #2
----------
To display RDF-related information for ufs1_snap1 from the R2 Control
Station, type:
$ fs_rdf ufs1_snap1 -info
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs =
002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SYNCINPROG
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 696030
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = NA
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED
EXAMPLE #1 provides a description of command output.
EXAMPLE #3
----------
To turn the mirroring off for ufs1_snap1 on the R1 Control Station, type:
$ fs_rdf ufs1_snap1 -Mirror off
remainder(MB) = 20548..17200..13110..8992..4870..746 0
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs =
002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = NOT_READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SUSPENDED
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 0
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = OFFLINE
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED
EXAMPLE #1 provides a description of command output.
EXAMPLE #4
----------
To perform a mirror refresh for ufs1_snap1 on the R1 Control Station, type:
$ fs_rdf ufs1_snap1 -Mirror refresh
remainder(MB) = 1 0
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs =
002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = NOT_READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SUSPENDED
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 0
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = OFFLINE
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED
EXAMPLE #1 provides a description of command output.
EXAMPLE #5
----------
To restore the file system ufs1_snap1 from the R1 Control Station, type:
$ /nas/sbin/rootfs_rdf ufs1_snap1 -Restore
remainder(MB) = 1 0
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs =
002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SYNCHRONIZED
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 0
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = NA
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED
EXAMPLE #1 provides a description of command output.
--------------------------------------
Last Modified: March 29, 2010 06:15 pm
fs_timefinder
Manages the TimeFinder™/FS functionality for the specified file system or
file system group.
SYNOPSIS
--------
fs_timefinder {<fs_name>|id=<fs_id>}
-Mirror {on|off|refresh [-Force]}[-star]
| [-name <name>] -Snapshot [-volume <volume_name>][-option <options>][-star]
| -Restore [-Force][-option <options>][-star]
Note: TimeFinder/FS is supported only on a VNX attached to a Symmetrix.
DESCRIPTION
-----------
The fs_timefinder command creates a copy of a file system or file
system group that can be placed into a mirrored mode with its
original file system. The Symmetrix must already have business
continuance volumes (BCVs) configured to the same size as the
volumes on the VNX. After the copy of the file system has been
made, it can be mounted on any Data Mover.
OPTIONS
-------
-Mirror {on|off|refresh}
The on option places the unmounted file system copy, created by using
the -Snapshot option, into mirrored mode with its original file system.
The file system copy is frozen and remains unavailable to users until
mirrored mode is turned off.
The refresh option initiates an immediate -Mirror on then off for the
unmounted file system copy, thereby refreshing the file system copy.
[-Force]
The file system copy should not be mounted read-write when placed
into mirrored mode or when refreshed. If the file system copy is
mounted read-write, the -Force option can be used to force a refresh if
the metavolume is an STD type. The -Force option requires root
privileges and must be executed by using
/nas/sbin/rootfs_timefinder.
[-star]
The -star option allows the fs_timefinder command to run on a
STAR SRDF configuration.
Caution: Performing a mirror refresh may be time consuming, relative to the
amount of data that has changed in the file system.
[-name <name>] -Snapshot
Creates a copy of a file system and assigns an optional name to the
file system copy. If a name is not specified, one is assigned by default.
If no options are provided, a name and metavolume are
automatically assigned. Use nas_fs to delete the copy of the file
system.
Caution: Creating a copy by using -Snapshot may be time consuming,
relative to the size of a file system.
[-volume <volume_name>]
Assigns a metavolume to a file system copy. The metavolume
must be created by using the nas_volume -Clone command prior
to executing this option. The metavolume must be a BCV type
and have the same characteristics as the metavolume of the
original file system.
[-option <options>]
Specifies the following comma-separated options:
mirror=on
Leaves the file system copy in mirrored mode.
disktype=<disktype>
For systems with both local and R1BCVs, specifies the type of
volume to use when creating a snapshot. In a TimeFinder/FS
FarCopy configuration, use disktype=R1BCV for creating a
snapshot of the PFS on the local VNX for file. For creating a
snapshot of an imported FarCopy snapshot on the remote VNX
for file, use disktype=STD. This option is supported for only
RAID group based disk volumes and cannot be combined with
the pool= option.
By default, the system uses the first available R1BCV or BCV, or
R1STD or STD device.
Use the disktype= option to designate which to use if there are R1
devices in your configuration.
pool=<mapped_pool>
Specifies the mapped pool to use when creating a snapshot from
that pool. This option is supported only for mapped pool disk
volumes and cannot be combined with the disktype= option.
A mapped pool is a VNX for file storage pool that is dynamically
generated when diskmark is run. It is a one-to-one mapping with
either a VNX for block storage pool or a Symmetrix Storage
Group.
Note: If the pool= option is used when creating a snapshot, the disk volume
will be selected from only this pool. If the pool does not have enough disk
volumes to create a snapshot for the source file system, the fs_timefinder
command reports an error.
[-star]
The -star option allows the fs_timefinder command to run on a
STAR SRDF configuration.
-Restore
Restores a file system to its original location by using the
unmounted file system copy created with the -Snapshot option.
The original file system must not have any associated SnapSure
checkpoints.
Caution: Restoring a file system may be time consuming, relative to the
amount of data that has changed in the file system.
[-Force]
Forces a restore of a file system copy that is mounted on the
metavolume as read-only, or if the volume is an STD type.
[-option <options>]
Specifies the following comma-separated options:
mirror=on
Places the file system copy in mirrored mode.
[-star]
The -star option allows the fs_timefinder command to run on a
STAR SRDF configuration.
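Progress for snapshot, refresh, and restore operations is reported as a `remainder(MB)` countdown, as the examples below show. A hedged sketch of extracting the final remainder from such a line, using sample text rather than a live command:

```shell
# Sample progress line in the format fs_timefinder prints; a real
# script would capture the command's output instead.
progress='remainder(MB) = 4991..4129..3281..2457..1653..815..0'
# Greedy match strips everything up to the last "..", leaving the
# final remainder value.
final=$(printf '%s' "$progress" | sed 's/.*\.\.//')
echo "remaining after refresh: ${final} MB"
```

A wrapper script could treat any final value other than 0 as an incomplete operation.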
SEE ALSO
--------
Using TimeFinder/FS, NearCopy, and FarCopy on VNX for File, fs_ckpt,
fs_group, and nas_fs.
EXAMPLE #1
----------
To create a TimeFinder/FS copy of the PFS, type:
$ fs_timefinder ufs1 -Snapshot
operation in progress (not interruptible)...
remainder(MB) =
43688..37205..31142..24933..18649..12608..7115..4991..4129..3281..2457..1653..815..0
operation in progress (not interruptible)...id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:13:30 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
Where:
Value              Definition
name               Name assigned to the file system.
acl                Access control value for a file system. nas_acl provides
                   information.
in_use             If a file system is registered into the mount table of a
                   Data Mover.
type               Type of file system. The -list option provides a
                   description of the types.
worm               Whether file-level retention is enabled.
volume             Volume on which the file system resides.
pool               Storage pool for the file system.
rw_servers         Servers with read-write access to a file system.
ro_servers         Servers with read-only access to a file system.
rw_vdms            VDM servers with read-write access to a file system.
ro_vdms            VDM servers with read-only access to a file system.
backups            Name of associated backups.
backup_of          File system that the file system copy is made from.
auto_ext           Indicates whether auto-extension and thin provisioning
                   are enabled.
fast_clone_level   fast_clone_level=1 enables the ability to create a fast
                   clone. File-level retention and fast clone creation
                   cannot be enabled together on a file system.
                   fast_clone_level=2 enables the ability to create a fast
                   clone of a fast clone (also called a second-level fast
                   clone) on the file system.
deduplication      Deduplication state of the file system. The file data is
                   transferred to the storage, which performs the
                   deduplication and compression on the data. The states
                   are:
                   On - Deduplication on the file system is enabled.
                   Suspended - Deduplication on the file system is
                   suspended. Deduplication does not perform any new space
                   reduction, but the existing files that were reduced in
                   space remain the same.
                   Off - Deduplication on the file system is disabled.
                   Deduplication does not perform any new space reduction
                   and the data is now reduplicated.
stor_devs          Storage system devices associated with a file system.
                   The storage device output is the result of the Symmetrix
                   hardware storage system.
disks              Disks on which the metavolume resides.
EXAMPLE #2
----------
To create a TimeFinder/FS copy of the PFS, ufs1, and leave a file system
copy in mirrored mode, type:
$ fs_timefinder ufs1 -Snapshot -option mirror=on
operation in progress (not interruptible)...id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = mirrorfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:19:03 EDT 2012
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
remainder = 0 MB (0%)
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
EXAMPLE #1 provides a description of command output.
EXAMPLE #3
----------
To turn mirroring off for a file system copy, ufs1_snap1, type:
$ fs_timefinder ufs1_snap1 -Mirror off
operation in progress (not interruptible)...
remainder(MB) = 0
operation in progress (not interruptible)...id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:21:50 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
EXAMPLE #1 provides a description of command output.
EXAMPLE #4
----------
To turn mirroring on for a file system copy, ufs1_snap1, type:
$ fs_timefinder ufs1_snap1 -Mirror on
operation in progress (not interruptible)...id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = mirrorfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:21:50 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
remainder = 0 MB (0%)
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
EXAMPLE #1 provides a description of command output.
EXAMPLE #5
----------
To perform a mirror refresh on ufs1_snap1, type:
$ fs_timefinder ufs1_snap1 -Mirror refresh
operation in progress (not interruptible)...
remainder(MB) = 4991..4129..3281..2457..1653..815..0
operation in progress (not interruptible)...id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:25:21 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
EXAMPLE #1 provides a description of command output.
EXAMPLE #6
----------
To restore the file system copy, ufs1_snap1, to its original location, type:
$ /nas/sbin/rootfs_timefinder ufs1_snap1 -Restore -Force
operation in progress (not interruptible)...
remainder(MB) = 0
operation in progress (not interruptible)...id = 19
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v456
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Thu Oct 28 14:25:21 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
stor_devs =
000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks = rootd378,rootd379,rootd380,rootd381
id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
auto_ext = no,thin=no
fast_clone_level = 1
deduplication = Off
stor_devs =
000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6
disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
EXAMPLE #7
----------
To create a snapshot for a mapped pool, type:
$ fs_timefinder ufs1 -name ufs1_snap1 -Snapshot -option pool=bcv_sg
operation in progress (not interruptible)...
remainder(MB) = ..14184..0
operation in progress (not interruptible)...id = 87
name = ufs1
acl = 0
in_use = False
type = uxfs
worm = off
volume = mtv1
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backups = ufs1_snap1
fast_clone_level = 1
deduplication = Off
auto_ext = no,thin=no
stor_devs = 000194900546-0037
disks = d11
id = 88
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v456
pool = bcv_sg
member_of = root_avm_fs_group_49
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Oct 1 12:03:10 EDT 2011
auto_ext = no,thin=no
fast_clone_level = unavailable
deduplication = unavailable
thin_storage = False
tiering_policy = thickfp2
mirrored = False
stor_devs = 000194900546-003C
disks = rootd16
Where:
Value            Definition
auto_ext         Indicates whether auto-extension and thin provisioning are
                 enabled.
deduplication    Deduplication state of the file system. The file data is
                 transferred to the storage, which performs the
                 deduplication and compression on the data. The states are:
                 On - Deduplication on the file system is enabled.
                 Suspended - Deduplication on the file system is suspended.
                 Deduplication does not perform any new space reduction,
                 but the existing files that were reduced in space remain
                 the same.
                 Off - Deduplication on the file system is disabled.
                 Deduplication does not perform any new space reduction and
                 the data is now reduplicated.
thin_storage     Indicates whether the block storage system uses thin
                 provisioning. Values are: True, False, Mixed.
tiering_policy   Indicates the tiering policy in effect. If the initial
                 tier and the tiering policy are the same, the values are:
                 Auto-Tier, Highest Available Tier, Lowest Available Tier.
                 If the initial tier and the tiering policy are not the
                 same, the values are: Auto-Tier/No Data Movement, Highest
                 Available Tier/No Data Movement, Lowest Available Tier/No
                 Data Movement.
mirrored         Indicates whether the disk is mirrored.
--------------------------------------
Last Modified: June 5, 2012 12:30 p.m.
Server CLI Commands
This chapter lists the eNAS Command Set provided for managing,
configuring, and monitoring Data Movers. The commands are prefixed with
server_ and appear alphabetically. The command line syntax (Synopsis), a
description of the options, and an example of usage are provided for each
command.
server_archive
server_arp
server_cdms
server_cepp
server_certificate
server_checkup
server_cifs
server_cifssupport
server_cpu
server_date
server_dbms
server_devconfig
server_df
server_dns
server_export
server_file
server_fileresolve
server_ftp
server_http
server_ifconfig
server_ip
server_kerberos
server_ldap
server_log
server_mount
server_mountpoint
server_mpfs
server_mt
server_name
server_netstat
server_nfs
server_nis
server_nsdomains
server_param
server_pax
server_ping
server_ping6
server_rip
server_route
server_security
server_setup
server_snmpd
server_ssh
server_standby
server_stats
server_sysconfig
server_sysstat
server_tftp
server_umount
server_uptime
server_user
server_usermapper
server_version
server_viruschk
server_vtlu
server_archive
Reads and writes file archives, and copies directory hierarchies.
SYNOPSIS
--------
server_archive <movername> [-cdnvN] -f <archive_file> [-J [p][w|d|u]]
[-I <client_dialect>]
[-e <archive_name>][-s <replstr>] ...
[-T [<from_date>][,<to_date>]][<pattern>] ...
server_archive <movername> -r [-cdiknuvDNYZ][-E <limit>]
[-J [w|d|u]][-C d|i|m][-I <client_dialect>]
[-f <file_name>][-e <archive_name>]
[-p <string>] ... [-s <replstr>] ...
[-T [<from_date>][,<to_date>]] ... [<pattern> ...]
server_archive <movername> -w [-dituvLNPX]
[-J [w|d|u]][-I <client_dialect>]
[-b <block_size>][-f <file_name>][-e <archive_name>]
[-x <format>][-B bytes][-s <replstr>] ...
[-T [<from_date>][,<to_date>][/[c][m]]] ...
[[-0]|[-1]][<file> ...]
server_archive -r -w [-diklntuvDLPXYZ]
[-J [w|d|u]][-C d|i|m]
[-p <string>] ... [-s <replstr>] ...
[-T [<from_date>][,<to_date>][/[c][m]]] ...
[<file> ...] <directory>
DESCRIPTION
-----------
server_archive reads, writes, and lists the members of an archive
file, and copies directory hierarchies. The server_archive operation is
independent of the specific archive format, and supports a variety of
different archive formats.
Note: A list of supported archive formats can be found under the description
of the -x option.
The presence of the -r and the -w options specifies the following
functional modes: list, read, write, and copy.
List (no arguments)
server_archive writes to standard output a table of contents of the
members of the archive file read from archive, whose pathnames
match the specified patterns.
Note: If no options are specified, server_archive lists the contents of
the archive.
Read (-r)
server_archive extracts the members of the archive file read from the
archive, with pathnames matching the specified patterns. The archive
format and blocking are automatically determined on input. When an
extracted file is a directory, the entire file hierarchy rooted at that
directory is extracted.
Note: Ownership, access, and modification times, and file mode of the
extracted files are discussed in more detail under the -p option.
Write (-w)
server_archive writes an archive containing the file operands to
archive using the specified archive format. When a file operand is
also a directory, the entire file hierarchy rooted at that directory is
included.
Copy (-r -w)
server_archive copies the file operands to the destination directory.
When a file operand is also a directory, the entire file hierarchy rooted
at that directory is included. The effect of the copy is as if the copied
files were written to an archive file and then subsequently extracted,
except that there may be hard links between the original and the
copied files. The -l option provides more information.
CAUTION
------
The destination directory must exist and must not be one of the file
operands or a member of a file hierarchy rooted at one of the file
operands. The result of a copy under these conditions is
unpredictable.
While processing a damaged archive during a read or list operation,
server_archive attempts to recover from media defects and searches
through the archive to locate and process the largest number of
archive members possible (the -E option provides more details on
error handling).
OPERANDS
-------
The directory operand specifies a destination directory pathname. If
the directory operand does not exist, or it is not writable by the user,
or it is not a directory name, server_archive exits with a non-zero exit
status.
The pattern operand is used to select one or more pathnames of
archive members. Archive members are selected using the pattern
matching notation described by fnmatch 3. When the pattern
operand is not supplied, all members of the archive are selected.
When a pattern matches a directory, the entire file hierarchy rooted at
that directory is selected. When a pattern operand does not select at
least one archive member, server_archive writes these pattern
operands in a diagnostic message to standard error and then exits
with a non-zero exit status.
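The fnmatch 3 notation is the same glob style the shell uses in case patterns, so the selection behavior can be previewed with a plain shell snippet (the member name and pattern below are hypothetical illustrations, not part of the eNAS CLI):

```shell
# fnmatch(3)-style globbing, demonstrated with a shell case pattern.
# Both the member name and the pattern are hypothetical examples.
member="home/user/notes.txt"
case "$member" in
  home/*/*.txt) echo "selected" ;;  # the glob matches, so this member is picked
  *)            echo "skipped"  ;;
esac
# prints: selected
```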
The file operand specifies the pathname of a file to be copied or
archived. When a file operand does not select at least one archive
member, server_archive writes these file operand pathnames in a
diagnostic message to standard error and then exits with a non-zero
exit status.
The archive_file operand is the name of a file where the data is stored
(write) or read (read/list). The archive_name is the name of the
streamer on which the data will be stored (write) or read (read/list).
Note: To obtain the device name, you can use server_devconfig -scsi.
OPTIONS
------
The following options are supported:
-r
Reads an archive file from archive and extracts the specified files. If
any intermediate directories are needed to extract an archive member,
these directories will be created as if mkdir 2 was called with the
bit-wise inclusive OR of S_IRWXU, S_IRWXG, and S_IRWXO, as the
mode argument. When the selected archive format supports the
specification of linked files and these files cannot be linked while the
archive is being extracted, server_archive writes a diagnostic
message to standard error and exits with a non-zero exit status at the
completion of operation.
-w
Writes files to the archive in the specified archive format.
-0 (zero)
With this option, a full referenced backup is performed with the time
and date of launching put in a reference file. This reference file is an
ASCII file and is located in /.etc/BackupDates. The backup is
referenced by the pathname of the files to back up and the time and
date when the backup was created. This file is updated only if the
backup is successful.
Backup files can be copied using the server_file command.
-<x>
Level x (x = 1 to 9) indicates a backup of all files in a file system
that have been modified since the last backup at a lower level.
For example, a backup is performed for:
Monday: level 0 = full backup
Tuesday: level 3 = files modified since Monday
Friday: level 5 = files modified since Tuesday
Saturday: level 4 = files modified since Tuesday
Sunday: level 4 = files modified since Tuesday
Note: If the backup type is not indicated, a full backup is performed
automatically.
-b <block_size>
When writing an archive, blocks the output at a positive decimal
integer number of bytes per write to the archive file. The
<block_size> must be a multiple of 512 bytes with a maximum size of
40 kilobytes.
Note: To remain POSIX-compatible, do not exceed 32256 bytes.
A <block_size> can end with k or b to specify multiplication by 1024
(1K) or 512, respectively. A pair of <block_size> can be separated by x
to indicate a product. A specific archive device may impose
additional restrictions on the size of blocking it will support. When
blocking is not specified, the default for <block_size> is dependent
on the specific archive format being used. The -x option provides
more information.
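As a quick check of the suffix arithmetic described above, the k and b multipliers and the x product form work out as follows (plain shell arithmetic, for illustration only):

```shell
# k multiplies by 1024, b multiplies by 512, and x forms a product:
echo $((10 * 1024))      # 10k   -> 10240 bytes
echo $((63 * 512))       # 63b   -> 32256 bytes, the POSIX-safe maximum noted above
echo $((32 * 512 * 2))   # 32bx2 -> 32768 bytes, a 512-multiple product
```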
-c
Matches all file or archive members except those specified by the
pattern and file operands.
-d
Causes files of type directory being copied or archived, or archive
members of type directory being extracted, to match only the
directory file or archive member, and not the file hierarchy rooted at
the directory.
-e <archive_name>
Specifies the archive name when it is streamed.
Note: To prevent the tape from rewinding at the end of command execution,
use the -N option with the -e <archive_name> option.
-f <archive_file>
Specifies the archive name when it is a file.
Note: A single archive may span multiple files and different archive devices.
When required, server_archive prompts for the pathname of the file or
device of the next volume in the archive.
-i
Interactively renames files or archive members. For each archive
member matching a pattern operand, or each file matching a file
operand, server_archive prompts to /dev/tty giving the name of the
file, its file mode, and its modification time. Then server_archive
reads a line from /dev/tty. If this line is blank, the file or archive
member is skipped. If this line consists of a single period, the file or
archive member is processed with no modification to its name.
Otherwise, its name is replaced with the contents of the line. Then
server_archive immediately exits with a non-zero exit status if
<EOF> is encountered when reading a response, or if /dev/tty cannot
be opened for reading and writing.
-k
Does not allow overwriting existing files.
-l
Links files. In copy mode (-r -w), hard links are made between the
source and destination file hierarchies whenever possible.
-I <client_dialect>
Allows filename information recovered from an archive to be
translated into UTF-8.
-n
Selects the first archive member that matches each pattern operand.
No more than one archive member is matched for each pattern. When
members of type directory are matched, the file hierarchy rooted at
that directory is also matched (unless -d is also specified).
-p <string>
Specifies one or more file characteristic options (privileges). The
<string> option-argument is a string specifying file characteristics to
be retained or discarded on extraction. The string consists of the
specification characters a, e, m, o, and p. Multiple characteristics can
be concatenated within the same string and multiple -p options can
be specified. The meaning of the specification characters is as follows:
a
Do not preserve file access times. By default, file access times are
preserved whenever possible.
e
Preserve everything (default mode), the user ID, group ID, file
mode bits, file access time, and file modification time.
Note: The e flag is the sum of the o and p flags.
m
Do not preserve file modification times. By default, file
modification times are preserved whenever possible.
o
Preserve the user ID and group ID.
p
Preserve the file mode bits. This specification character is
intended for a user with regular privileges who wants to preserve
all aspects of the file other than the ownership. The file times are
preserved by default, but two other flags are offered to disable
this and use the time of extraction instead.
In the preceding list, preserve indicates that an attribute stored in
the archive is given to the extracted file, subject to the permissions
of the invoking process. Otherwise, the attribute of the extracted
file is determined as part of the normal file creation action. If
neither the e nor the o specification character is specified, or the
user ID and group ID are not preserved for any reason,
server_archive will not set the S_ISUID (setuid) and S_ISGID
(setgid) bits of the file mode. If the preservation of any of these
items fails for any reason, server_archive writes a diagnostic
message to standard error.
Note: Failure to preserve these items will affect the final exit status,
but will not cause the extracted file to be deleted.
If the file characteristic letters in any of the string
option-arguments are duplicated, or in conflict with one another,
the ones given last will take precedence. For example, if you
specify -p eme, file modification times are still preserved.
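In the style of the EXAMPLE sections later on this page, a hedged sketch of -p usage (the mover and file names are placeholders): to preserve the file mode bits on extraction while letting access and modification times default to the time of extraction:

```
$ server_archive <movername> -r -p pam -f <file_name>
```

Here p preserves the mode bits, while a and m disable preservation of the access and modification times, per the specification characters above.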
-s <replstr>
Modifies the file or archive member names specified by the pattern or
<file> operand according to the substitution expression <replstr>
using the syntax of the ed utility regular expressions.
Note: The ed 1 manual page provides information.
Multiple -s expressions can be specified. The expressions are applied
in the order they are specified on the command line, terminating with
the first successful substitution. The optional trailing g continues to
apply the substitution expression to the pathname substring, which
starts with the first character following the end of the last successful
substitution.
The optional trailing p causes the final result of a successful
substitution to be written to standard error in the following format:
<original pathname> >> <new pathname>
File or archive member names that substitute the empty string are not
selected and are skipped.
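For illustration, sed's s command accepts the same ed-style basic-regular-expression substitution form, so the effect of an expression such as -s /^olddir/newdir/ on a member name can be previewed this way (the names are hypothetical, and sed merely stands in for the ed-style engine):

```shell
# sed's s command uses the same ed-style basic-regex substitution that
# -s <replstr> does; the member name below is a hypothetical example.
echo "olddir/subdir/file1" | sed 's/^olddir/newdir/'
# prints: newdir/subdir/file1
```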
-t
Resets the access times of any file or directory read or accessed by
server_archive to be the same as they were before being read or
accessed by server_archive.
-u
Ignores files that are older (having a less recent file modification time)
than a pre-existing file, or archive member with the same name.
During read, an archive member with the same name as a file in a file
system is extracted if the archive member is newer than the file.
During copy, the file in the destination hierarchy is replaced by the
file in the source hierarchy, or by a link to the file in the source
hierarchy if the file in the source hierarchy is newer.
-v
During a list operation, produces a verbose table of contents using
the format of the ls 1 utility with the -l option. For pathnames
representing a hard link to a previous member of the archive, the
output has the format:
<ls -l listing> == <link name>
For pathnames representing a symbolic link, the output has the
format:
<ls -l listing> => <link name>
where <ls -l listing> is the output format specified by the ls 1 utility
when used with the -l option. Otherwise, for all the other operational
modes (read, write, and copy), pathnames are written and flushed to
standard error without a trailing <newline> as soon as processing
begins on that file or archive member. The trailing <newline> is not
buffered, and is written only after the file has been read or written.
-x format
Specifies the output archive format, with the default format being
ustar. The server_archive command currently supports the following
formats:
cpio
The extended cpio interchange format specified in the -p1003.2
standard. The default blocksize for this format is 5120 bytes.
Inode and device information about a file (used for detecting
file hard links by this format) which may be truncated by this
format is detected by server_archive and is repaired.
Note: To be readable by server_archive, the archive must be built
on another machine with the option -c (write header information in
ASCII).
bcpio
The old binary cpio format. The default blocksize for this format
is 5120 bytes.
Note: This format is not very portable and should not be used when
other formats are available.
Inode and device information about a file (used for detecting file
hard links by this format) which may be truncated by this format
is detected by server_archive and is repaired.
sv4cpio
The System V release 4 cpio. The default blocksize for this format
is 5120 bytes. Inode and device information about a file (used for
detecting file hard links by this format) which may be truncated
by this format is detected by server_archive and is repaired.
sv4crc
The System V release 4 cpio with file crc checksums. The default
blocksize for this format is 5120 bytes. Inode and device
information about a file (used for detecting file hard links by this
format) which may be truncated by this format is detected by
server_archive and is repaired.
tar
The old BSD tar format as found in BSD4.3. The default blocksize
for this format is 10240 bytes. Pathnames stored by this format
must be 100 characters or less in length. Only regular files, hard
links, soft links, and directories will be archived (other file system
types are not supported).
ustar
The extended tar interchange format specified in the -p1003.2
standard. The default blocksize for this format is 10240 bytes.
Note: Pathnames stored by this format must be 250 characters or less
in length (150 for basename and 100 for <file_name>).
emctar
This format is not compatible with -p1003.2 standard. It allows
archiving to a file greater than 8 GB. Pathnames stored by this
format are limited to 3070 characters. The other features of this
format are the same as ustar.
server_archive detects and reports any file that it is unable to store or
extract as the result of any specific archive format restrictions. The
individual archive formats may impose additional restrictions on use.
Note: Typical archive format restrictions include (but are not limited to)
file pathname length, file size, link pathname length, and the type of the file.
-B bytes
Limits the number of bytes written to a single archive volume to
bytes. The bytes limit can end with m, k, or b to specify multiplication
by 1048576 (1M), 1024 (1K) or 512, respectively. A pair of bytes limits
can be separated by x to indicate a product.
Note: The limit size will be rounded up to the nearest block size.
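As with -b, the suffix arithmetic is simple multiplication (plain shell arithmetic, for illustration only):

```shell
# m multiplies by 1048576, k by 1024, b by 512; x forms a product:
echo $((100 * 1048576))   # 100m   -> 104857600 bytes per volume
echo $((200 * 1024 * 4))  # 200kx4 -> 819200 bytes
```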
-C [d|i|m]
When performing a restore, this allows you to choose PAX behaviors
on CIFS collision names.
d: delete
i: ignore
m: mangle
-D
Ignores files that have a less recent file inode change time than a
pre-existing file, or archive member with the same name. The -u
option provides information.
Note: This option is the same as the -u option, except that the file inode
change time is checked instead of the file modification time. The file inode
change time can be used to select files whose inode information (such as uid,
gid, and so on) is newer than a copy of the file in the destination directory.
-E limit
Has the following two goals:
* In case of medium error, to limit the number of consecutive read
faults while trying to read a flawed archive to limit. With a
positive limit, server_archive attempts to recover from an archive
read error and will continue processing starting with the next file
stored in the archive. A limit of 0 (zero) will cause server_archive
to stop operation after the first read error is detected on an
archive volume. A limit of "NONE" will cause server_archive to
attempt to recover from read errors forever.
* In case of no medium error, to limit the number of consecutive
valid header searches when an invalid format detection occurs.
With a positive value, server_archive will attempt to recover from
an invalid format detection and will continue processing starting
with the next file stored in the archive. A limit of 0 (zero) will
cause server_archive to stop operation after the first invalid
header is detected on an archive volume. A limit of "NONE" will
cause server_archive to attempt to recover from invalid format
errors forever. The default limit is 10 retries.
CAUTION
Using this option with NONE requires extreme caution as
server_archive may get stuck in an infinite loop on a badly flawed
archive.
-J
Backs up, restores, or displays CIFS extended attributes.
p: displays the full pathname for alternate names (for listing and
archive only)
u: specifies UNIX name for pattern search
w: specifies M256 name for pattern search
d: specifies M83 name for pattern search
-L
Follows all symbolic links to perform a logical file system traversal.
-N
Used with the -e archive_name option, prevents the tape from
rewinding at the end of command execution.
-P
Does not follow symbolic links.
Note: Performs a physical file system traversal. This is the default mode.
-T [from_date][,to_date][/[c][m]]
Allows files to be selected based on a file modification or inode
change time falling within a specified time range of from_date to
to_date (the dates are inclusive). If only a from_date is supplied, all
files with a modification or inode change time equal to or less than
are selected. If only a to_date is supplied, all files with a modification
or inode change time equal to or greater than will be selected. When
the from_date is equal to the to_date, only files with a modification or
inode change time of exactly that time will be selected.
When server_archive is in the write or copy mode, the optional
trailing field [c][m] can be used to determine which file time (inode
change, file modification or both) is used in the comparison. If neither
is specified, the default is to use file modification time only. The m
specifies the comparison of file modification time (the time when the
file was last written). The c specifies the comparison of inode change
time (the time when the file inode was last changed; for example, a
change of owner, group, mode, and so on). When c and m are both
specified, then the modification and inode change times are both
compared. The inode change time comparison is useful in selecting
files whose attributes were recently changed, or selecting files which
were recently created and had their modification time reset to an
older time (as what happens when a file is extracted from an archive
and the modification time is preserved). Time comparisons using both
file times are useful when server_archive is used to create a
time-based incremental archive (only files that were changed during
a specified time range will be archived).
A time range is made up of six different fields and each field must
contain two digits. The format is:
[[[[yy]mm]dd]hh]mm[ss]
Where yy is the last two digits of the year, the first mm is the month
(from 01 to 12), dd is the day of the month (from 01 to 31), hh is the
hour of the day (from 00 to 23), the second mm is the minute (from 00
to 59), and ss is seconds (from 00 to 59). The minute field mm is
required, while the other fields are optional, and must be added in
the following order: hh, dd, mm, yy. The ss field may be added
independently of the other fields. Time ranges are relative to the
current time, so -T 1234/cm selects all files with a modification or
inode change time of 12:34 P.M. today or later. Multiple -T time
ranges can be supplied, and checking stops with the first match.
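Since the minute field is required and the optional fields nest outward through hh, dd, and yy, a full 10-digit from_date can be assembled like this (illustration only; the 09:30 cutoff is a hypothetical choice):

```shell
# Assemble yymmddhhmm for "09:30 today" as a -T from_date (hypothetical cutoff).
today=$(date +%y%m%d)      # yy, mm, dd for the current date
from_date="${today}0930"   # append hh=09, mm=30
echo "$from_date"          # e.g. 1609150930 for September 15, 2016
```

The result could then be passed as -T "${from_date}/m" to select on modification time only.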
-X
When traversing the file hierarchy specified by a pathname, does not
allow descending into directories that have a different device ID. See
the description of the st_dev field in stat 2 for more information
about device IDs.
-Y
Ignores files that have a less recent file inode change time than a
pre-existing file, or archive member with the same name.
Note: This option is the same as the -D option, except that the inode change
time is checked using the pathname created after all the filename
modifications have completed.
-Z
Ignores files that are older (having a less recent file modification time)
than a pre-existing file, or archive member with the same name.
Note: This option is the same as the -u option, except that the modification
time is checked using the pathname created after all the filename
modifications have completed.
The options that operate on the names of files or archive members (-c,
-i, -n, -s, -u, -v, -D, -T, -Y, and -Z) interact as follows.
When extracting files during a read operation, archive members are
selected, based only on the user-specified pattern operands as
modified by the -c, -n, -u, -D, and -T options. Then any -s and -i
options will modify, in that order, the names of those selected files.
Then the -Y and -Z options will be applied based on the final
pathname. Finally, the -v option will write the names resulting from
these modifications.
When archiving files during a write operation, or copying files
during a copy operation, archive members are selected, based only on
the user specified pathnames as modified by the -n, -u, -D, and -T
options (the -D option applies only during a copy operation). Then
any -s and -i options will modify, in that order, the names of these
selected files. Then during a copy operation, the -Y and the -Z options
will be applied based on the final pathname. Finally, the -v option
will write the names resulting from these modifications.
When one or both of the -u or -D options are specified along with the
-n option, a file is not considered selected unless it is newer than the
file to which it is compared.
SEE ALSO
--------
Using the server_archive Utility on VNX.
EXAMPLE #1
----------
To archive the contents of the root directory to the device rst0, type:
$ server_archive <movername> -w -e rst0
EXAMPLE #2
----------
To display the verbose table of contents for an archive stored in
<file_name>, type:
$ server_archive <movername> -v -f <file_name>
EXAMPLE #3
----------
To copy the entire olddir directory hierarchy to newdir, type:
$ server_archive <movername> -rw <olddir> <newdir>
EXAMPLE #4
----------
To interactively select the files to copy from the current
directory to dest_dir, type:
$ server_archive <movername> -rw -i <olddir> <dest_dir>
EXAMPLE #5
----------
To extract all files from the archive stored in <file_name>, type:
$ server_archive <movername> -r -f <file_name>
EXAMPLE #6
----------
To update (and list) only those files in the destination directory
/backup that are older (less recent inode change or file modification
times) than files with the same name found in the source file tree
home, type:
$ server_archive <movername> -r -w -v -Y -Z home /backup
STANDARDS
---------
The server_archive utility is a superset of the -p1003.2 standard.
Note: The archive formats bcpio, sv4cpio, sv4crc, and tar, and the flawed
archive handling during list and read operations are extensions to the POSIX
standard.
ERRORS
------
The server_archive command exits with one of the following system
messages:
All files were processed successfully.
or
An error occurred.
Whenever server_archive cannot create a file or a link when reading
an archive, or cannot find a file when writing an archive, or cannot
preserve the user ID, group ID, or file mode when the -p option is
specified, a diagnostic message is written to standard error, and a
non-zero exit status is returned. However, processing continues.
In the case where server_archive cannot create a link to a file, this
command will not create a second copy of the file.
If the extraction of a file from an archive is prematurely terminated by
a signal or error, server_archive may have only partially extracted a
file the user wanted. Additionally, the file modes of extracted files
and directories may have incorrect file bits, and the modification and
access times may be wrong.
If the creation of an archive is prematurely terminated by a signal or
error, server_archive may have only partially created the archive
which may violate the specific archive format specification.
If while doing a copy, server_archive detects a file is about to
overwrite itself, the file is not copied, a diagnostic message is written
to standard error and when server_archive completes, it exits with a
non-zero exit status.
----------------------------------------------------------
Last modified: May 12, 2011 1:15 pm.
server_arp
Manages the Address Resolution Protocol (ARP) table for the Data Movers.
SYNOPSIS
--------
server_arp {<movername>|ALL}
<ip_addr>
| -all
| -delete <ip_addr>
| -set <ip_addr> <physaddr>
DESCRIPTION
-----------
server_arp displays and modifies the IP-to-MAC address translation
tables used by the ARP for the specified Data Mover.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
<ip_addr>
Displays the ARP entry for the specified IP address.
-all
Displays the first 64 of the current ARP entries.
-delete <ip_addr>
Deletes an ARP entry.
-set <ip_addr> <physaddr>
Creates an ARP entry with an IP address and physical address.
EXAMPLE #1
----------
To create an ARP entry, type:
$ server_arp server_2 -set 172.24.102.20
00:D0:B7:82:98:E0
server_2 : added: 172.24.102.20 at 0:d0:b7:82:98:e0
EXAMPLE #2
----------
To display all ARP entries for a specified Data Mover, type:
$ server_arp server_2 -all
server_2 :
172.24.102.254 at 0:d0:3:f9:37:fc
172.24.102.20 at 0:d0:b7:82:98:e0
172.24.102.24 at 0:50:56:8e:1d:5
128.221.253.100 at 0:4:23:a7:b1:35
EXAMPLE #3
----------
To display an ARP entry specified by IP address, type:
$ server_arp server_2 172.24.102.20
server_2 : 172.24.102.20 at 0:d0:b7:82:98:e0
EXAMPLE #4
----------
To delete an ARP entry, type:
$ server_arp server_2 -delete 172.24.102.24
server_2 : deleted: 172.24.102.24 at 0:50:56:8e:1d:5
--------------------------------------
Last Modified: March 31, 2010 11:15 am
server_cdms
Provides File Migration Service for VNX functionality for the
specified Data Movers.
SYNOPSIS
--------
server_cdms {<movername>|ALL}
-connect <mgfs> -type {nfsv2|nfsv3} -path <localpath>
-source <srcName>:/<srcPath> [-option <options>]
| -connect <mgfs> -type cifs -path <localpath> -netbios <netbios> -source
\\<srcServer>[.<domain>]\<srcShare>[\<srcPath>] -admin
[<domain>\]<admin_name> [-wins <wins>]
| -disconnect <mgfs> {-path <localpath>|-path <cid>|-all}
| -verify <mgfs> [-path {<localpath>|<cid>}]
| -Convert <mgfs>
| -start <mgfs> -path <localpath> [-Force] -log <logpath>
[-include <include_path>][-exclude <exclude_path>]
| -halt <mgfs> -path <localpath>
| -info [<mgfs>][-state {START|STOP|ON_GOING|ERROR|SUCCEED|FAIL}]
DESCRIPTION
-----------
server_cdms establishes and removes connections to remote systems,
and allows users to start on-access migration.
server_cdms creates an auto-migration process on the Data Mover to
ensure that all data has been migrated from the remote system.
server_cdms also checks the state of the migrated file system (MGFS),
all auto-migration processes, and the connection, and reports if all
data has been migrated successfully.
CDMS supports NFSv2 and NFSv3 only.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-connect <mgfs> -type {nfsv2|nfsv3} -path
<localpath> -source <srcName>:/<srcPath>
Provides a connection for the VNX with the remote NFS server. The
-type option specifies the protocol type to be used for communication
with the remote NFS server. The directory <localpath> in the file
system must be unique for that file system.
The -source option specifies the source file server name or IP address
of the remote server as the <srcName> and the export path for
migration. For example, nfs_server:/export/path
Note: After the -connect command completes, the file system must be
exported.
[-option <options>]
Specifies the following comma-separated options:
[useRootCred={true|false}]
When the file system is mounted, true ensures that the MGFS
reads from the source file server using root access UID=0, GID=0.
This assumes that the source file server path is exported to allow
root access from the specified Data Mover. When false (default),
the MGFS uses the owner's UID and GID to access data.
[proto={TCP|UDP}]
Sets the connection protocol type. The default is TCP.
[nfsPort=<port>]
Sets a remote NFS port number in case the Portmapper or RPC
bind is not running, and the port is not the default of 2049.
[mntPort=<port>]
Sets a remote mount port number in case Portmapper or RPC
bind is not running.
[mntVer={1|2|3}]
Sets the version used for mount protocol. By default, NFSv2 uses
mount version 2, unless user specified version 1; NFSv3 uses
mount version 3.
[localPort=<port>]
Sets the port number used for NFS services, if it needs to be
different from the default. The default port number is always
greater than 1024.
-connect <mgfs> -type cifs -path <localpath>
-netbios <netbios> -source \\<srcServer>[.<domain>]
\<srcShare>[\<srcPath>] -admin [<domain>\]
<admin_name>[-wins <wins>]
Provides a connection for the VNX with the remote CIFS server as
specified by its NetBIOS name. The directory <localpath> in the file
system must be unique for that file system. The -source option
specifies the source file server name of the remote server as the
<srcName> and the share path for migration that is not at the root of
the share. For example, \\share\dir1...
The -source and -admin option strings must be enclosed by quotes
when issued in a Linux shell.
The -admin option specifies an administrator for the file system. A
password is asked interactively when the command is issued. The
-wins option specifies an IP address for the WINS server.
Note: This is required only for Windows NT 4.0.
-disconnect <mgfs> {-path <localpath>|-path <cid>|-all}
Removes a connection without migrating the data. The <localpath> is
not removed nor is any partially migrated data.
The administrator should manually remove this data before
attempting a -verify or -Convert command. It may require the
administrator to handle a partial migration of old data as well as
potentially new data created by users.
It is recommended not to use the -disconnect option if the
administrator has exported this directory for user access.
-verify <mgfs>
Checks that all data has completed the migration for the <mgfs>.
[-path {<localpath>|<cid>}]
If the -path option is provided, the check can be done on a
per-connection basis. If no path is provided, the system defaults to
checking all connections on the file system.
-Convert <mgfs>
Performs a verify check on the entire file system, then changes the file
system type from MGFS to UxFS. After the -Convert option succeeds,
no data migration can be done on that file system.
-start <mgfs> -path <localpath> [-Force] -log
<logpath>
Directs the Data Mover to migrate all files from the source file server
to the VNX. The -log option provides detailed information on the
state of the migration, and any failures that might occur. The
<localpath> is the path where the migration thread is started. The
-Force option is used if you need to start a migration thread a second
time on the same <localpath> where a previous migration thread had
already finished. For example, -Force would be needed to start a
thread which had no include file (that is, to migrate all remaining
files) on <localpath> where a thread with an include file had already
been run.
[-include <include_path>]
Starts the thread in the <include_path> which is the path of the
file containing the specified directories.
[-exclude <exclude_path>]
Excludes files or directories from migration. The <exclude_path>
is the path of the file containing the directories to exclude.
-halt <mgfs> -path <localpath>
Stops a running thread, and halts its execution on the Data Mover.
The <mgfs> is the name of the migration file system and the
<localpath> is the full path where the migration thread was started.
The -start option resumes thread execution.
-info
Displays a status on the migration file system and the threads.
[<mgfs>]
Specifies the migration file system.
[-state {START|STOP|ON_GOING|ERROR|SUCCEED|FAIL}]
Displays only the threads that are in the state that is specified.
SEE ALSO
--------
VNX CDMS Version 2.0 for NFS and CIFS, server_export,
server_mount, and server_setup.
EXAMPLE #1
----------
To provide a connection for the migration file system to communicate
with the remote NFS server, type:
$ server_cdms server_2 -connect ufs1 -type nfsv3 -path
/nfsdir -source 172.24.102.144:/srcdir -option proto=TCP
server_2 : done
EXAMPLE #2
----------
To provide a connection for the migration file system to communicate
with the remote CIFS server, type:
$ server_cdms server_2 -connect ufs1 -type cifs -path
/dstdir -netbios dm112-cge0 -source
"\\\winserver1.nasdocs.emc.com\srcdir" -admin
"nasdocs.emc.com\administrator" -wins 172.24.102.25
server_2 : Enter Password:*******
done
EXAMPLE #3
----------
To display a status on the migration file system, type:
$ server_cdms server_2
server_2 :
CDMS enabled with 32 threads.
ufs1:
path = /nfsdir
cid = 0
type = NFSV3
source = 172.24.102.144:/srcdir
options= proto=TCP
path = /dstdir
cid = 1
type = CIFS
source = \\winserver1.nasdocs.emc.com\srcdir\
netbios= DM112-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\administrator
When migration is started:
$ server_cdms server_2
server_2 :
CDMS enabled with 32 threads.
ufs1:
path = /nfsdir
cid = 0
type = NFSV3
source = 172.24.102.144:/srcdir
options= proto=TCP
path = /dstdir
cid = 1
type = CIFS
source = \\winserver1.nasdocs.emc.com\srcdir\
netbios= DM112-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\administrator
threads:
path = /dstdir
state = ON_GOING
log = /
cid = NONE
Where:
Value    Definition
ufs1     Migration file system.
path     Directory in the local file system.
cid      Connection ID (0 through 1023).
type     Protocol type to be used to communicate with the remote server.
source   Source file server name or IP address of the remote server and
         the export path for migration.
options  Connection protocol type.
netbios  NetBIOS name of the remote CIFS server.
admin    Administrator for the file system.
threads  Currently existing migration threads.
state    Current status of migration threads.
log      Location of the log file that provides detailed information.
EXAMPLE #4
---------
To direct server_2 to migrate all files from the source file server to the
VNX, type:
$ server_cdms server_2 -start ufs1 -path /dstdir -log /
server_2 : done
EXAMPLE #5
---------
To display information about migration with the specified status, type:
$ server_cdms server_2 -info ufs1 -state ON_GOING
server_2 :
ufs1:
path = /nfsdir
cid = 0
type = NFSV3
source = 172.24.102.144:/srcdir
options= proto=TCP
path = /dstdir
cid = 1
type = CIFS
source = \\winserver1.nasdocs.emc.com\srcdir\
netbios= DM112-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\administrator
threads:
path = /dstdir
state = ON_GOING
log = /
cid = NONE
EXAMPLE #6
---------
To stop data migration on server_2 for ufs1, type:
$ server_cdms server_2 -halt ufs1 -path /dstdir
server_2 : done
EXAMPLE #7
---------
To check that all data has completed the migration, type:
$ server_cdms server_2 -verify ufs1 -path /dstdir
server_2 : done
EXAMPLE #8
---------
To disconnect the path on server_2 for data migration, type:
$ server_cdms server_2 -disconnect ufs1 -path /nfsdir
server_2 : done
EXAMPLE #9
---------
To disconnect all paths for data migration, type:
$ server_cdms server_2 -disconnect ufs1 -all
server_2 : done
EXAMPLE #10
----------
To perform a verify check on ufs1, and then convert it to a uxfs, type:
$ server_cdms server_2 -Convert ufs1
server_2 : done
--------------------------------------
Last Modified: March 31, 2010 05:00 pm
server_cepp
Manages the Common Event Publishing Agent (CEPA) service on the specified
Data Mover
SYNOPSIS
-------
server_cepp {<movername>|ALL}
-service {-start|-stop|-status|-info}
| -pool {-info|-stats}
DESCRIPTION
----------
server_cepp starts or stops the CEPA service on the specified Data
Mover or all Data Movers and displays information on the status,
configuration, and statistics for the service and the pool. The CEPA
service is set up in the cepp.conf configuration file. The CEPA
configuration is displayed using -service -status, but changes can
only be made by directly editing the file with a text editor.
ALL executes the command for all Data Movers.
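For reference, a minimal cepp.conf sketch defining one pool (the field
names and values here are illustrative assumptions, not authoritative
syntax; verify against Using VNX Event Enabler before editing the file):
surveytime=90
pool name=pool1 \
servers=10.171.10.115 \
preevents=OpenFileNoAccess|OpenFileRead \
postevents=CreateFile|DeleteFile \
posterevents=CreateFile|DeleteFile \
option=ignore reqtimeout=5000 retrytimeout=25000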
OPTIONS
-------
-service {-start|-stop|-status|-info}
The -start option starts the CEPA service on the specified Data Mover.
-stop stops the CEPA service, -status returns a message indicating
whether the CEPA service has started or been stopped, and -info
displays information about the CEPA service including key
properties of the configured pool.
-pool {-info|-stats}
Displays properties or statistics for the CEPA pool on the specified
Data Mover.
SEE ALSO
--------
Using VNX Event Enabler
EXAMPLE #1
---------
To start the CEPA service on a Data Mover, type:
$ server_cepp server_2 -service -start
server_2 : done
EXAMPLE #2
---------
To display the status of the CEPA service, type:
$ server_cepp server_2 -service -status
server_2 : CEPP Stopped
EXAMPLE #3
---------
To display the configuration of the CEPA service, type:
$ server_cepp server_2 -service -info
server_2 :
CIFS share name = \\DVBL\CHECK$
cifs_server = DVBL
heartbeat_interval = 15 seconds
ft level = 1
ft size = 1048576
ft location = /.etc/cepp
msrpc user = OMEGA13$
msrpc client name = OMEGA13.CEE.LAB.COM
pool_name  server_required  access_checks_ignored  req_timeout  retry_timeout
pool_1     no               0                      5000         25000
Where:
Value                  Definition
CIFS share name        The name of the shared directory and CIFS server used
                       to access files in the Data Movers.
cifs_server            CIFS server to access files.
heartbeat_interval     The time taken to scan each CEPA server.
ft level               Fault tolerance level assigned. This option is
                       required. 0 (continue and tolerate lost events;
                       default setting), 1 (continue and use a persistence
                       file as a circular event buffer for lost events), 2
                       (continue and use a persistence file as a circular
                       event buffer for lost events until the buffer is
                       filled and then stop CIFS), or 3 (upon heartbeat loss
                       of connectivity, stop CIFS).
ft location            Directory where the persistence buffer file resides
                       relative to the root of a file system. If a location
                       is not specified, the default location is the root of
                       the file system.
ft size                Maximum size in MB of the persistence buffer file.
                       The default is 1 MB and the range is 1 MB to 100 MB.
msrpc user             Name assigned to the user account that the CEPA
                       service is running under on the CEE machine. For
                       example, ceeuser.
msrpc client name      Domain name assigned if the msrpc user is a member of
                       a domain. For example, domain.ceeuser.
pool_name              Name assigned to the pool that will use the specified
                       CEPA options.
server_required        Displays availability of the CEPA server. If a CEPA
                       server is not available and this option is yes, an
                       error is returned to the requestor that access is
                       denied. If a CEPA server is not available and this
                       option is no, an error is not returned to the
                       requestor and access is allowed.
access_checks_ignored  The number of CIFS requests processed when a CEPA
                       server is not available and the server_required
                       option is set to "no." This option is reset when the
                       CEPA server becomes available.
req_timeout            Time out in ms to send a request that allows access
                       to the CEPA server.
retry_timeout          Time out in ms to retry the access request sent to
                       the CEPA server.
EXAMPLE #4
---------
To display information about the CEPA pool, type:
$ server_cepp server_2 -pool -info
server_2 :
pool_name = pool1
server_required = yes
access_checks_ignored = 0
req_timeout = 5000 ms
retry_timeout = 25000 ms
pre_events = OpenFileNoAccess, OpenFileRead
post_events = CreateFile,DeleteFile
post_err_events = CreateFile,DeleteFile
CEPP Servers:
IP = 10.171.10.115, state = ONLINE, vendor = Unknown
...
Where:
Value            Definition
pre_events       Sends notification before selected event occurs. An empty
                 list indicates that no pre-event messages are generated.
post_events      Sends notification after selected event occurs. An empty
                 list indicates that no post-event messages are generated.
post_err_events  Sends notification if selected event generates an error. An
                 empty list indicates that no post-error-event messages are
                 generated.
CEPP Servers     IP addresses of the CEPA servers; state of the CEPA
                 servers; vendor software installed on CEPA servers.
EXAMPLE #5
---------
To display statistics for the CEPA pool, type:
$ server_cepp server_2 -pool -stats
server_2 :
pool_name = pool1
Event Name       Requests   Min(us)   Max(us)   Average(us)
OpenFileWrite    2          659       758       709
CloseModified    2          604       635       620
Total Requests = 4
Min(us) = 604
Max(us) = 758
Average(us) = 664
--------------------------------------------
Last Modified: April 05 2010, 11:15 am
server_certificate
Manages VNX for file system’s Public Key Infrastructure (PKI) for the
specified Data Movers.
SYNOPSIS
-------
server_certificate {<movername>|ALL}
-ca_certificate
[-list]
| -info {-all|<certificate_id>}
| -import [-filename <path>]
| -delete {-all|<certificate_id>}
-persona
[-list]
| -info {-all|<persona_name>|id=<persona_id>}
| -generate {<persona_name>|id=<persona_id>} -key_size {2048|4096}
[-cs_sign_duration <# of months>]
{-cn|-common_name} <common_name>[;<common_name>]
[-ou <org_unit>[;<org_unit>]]
[-organization <organization>]
[-location <location>]
[-state <state>]
[-country <country>]
[-filename <output_path>]
| -clear {<persona_name>|id=<persona_id>} {-next|-current|-both}
| -import {<persona_name>|id=<persona_id>} [-filename <path>]
DESCRIPTION
----------
server_certificate manages the use of public key certificates between
Data Movers acting as either clients or servers. server_certificate
-ca_certificate manages the Certificate Authority (CA) certificates the
VNX uses to confirm a server’s identity when the Data Mover is
acting as a client. server_certificate -persona manages the certificates
presented by the Data Mover to a client application when the Data
Mover is acting as a server as well as the certificates presented by the
Data Mover to a server configured to require client authentication.
OPTIONS
-------
-ca_certificate
Lists the CA certificates currently available on the VNX. The output
from this command is identical to the output from the -list option.
-ca_certificate -list
Lists the CA certificates currently available on the VNX.
-ca_certificate -info {-all|<certificate_id>}
Displays the properties of a specified CA certificate or all CA
certificates.
-ca_certificate -import [-filename <path>]
Imports a CA certificate. You can only paste text in PEM format at the
command prompt. Specify -filename and provide a path to import a
CA certificate in either DER or PEM format.
-ca_certificate -delete {-all|<certificate_id>}
Deletes a specified CA certificate or all CA certificates.
-persona
Lists the key sets and associated certificates currently available on the
VNX. The output from this command is identical to the output from
the -list option.
-persona -list
Lists the key sets and associated certificates currently available on the
VNX.
-persona -info {-all|<persona_name>|id=<persona_id>}
Displays the properties of the key sets and associated certificates,
including the text of a pending certificate request, of a specified
persona or all personas.
-persona -generate {<persona_name>|id=<persona_id>}
-key_size <bits> {-cn|-common_name} <common_name>
[;<common_name>]
Generates a public/private key set along with a request to sign the
certificate. Specify either the persona name or ID. The ID is
automatically generated when the persona is created. You can
determine the ID using the -list or -info options. The key size can be
either 2048 or 4096 bits. Use either -cn or -common_name to specify
the commonly used name. The common name is typically a hostname
that describes the Data Mover with which the persona is associated.
Multiple common names are allowed but must be separated by
semicolons.
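For example (illustrative; both hostnames are placeholders), to generate a
key set whose certificate request carries two common names, quote the
semicolon-separated list so the shell does not interpret the semicolon:
$ server_certificate server_2 -persona -generate default -key_size 2048
-common_name "division.xyz.com;division2.xyz.com"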
[-cs_sign_duration <# of months>]
Specifies the number of months the certificate is valid. A month is
defined as 30 days. This option is valid only if the certificate will
be signed by the Control Station. If this option is specified, you
cannot save the request to a file using the -filename option.
[-ou <org_unit>[;<org_unit>]]
Identifies the organizational unit. Multiple organizational units
are allowed but must be separated by semicolons.
[-organization <organization>]
Identifies the organization.
[-location <location>]
Identifies the physical location of the organizational unit.
[-state <state>]
Identifies the state where the organizational unit is located.
[-country <country>]
Identifies the country where the organizational unit is located. This
value is limited to two characters.
[-filename <output_path>]
Provides a path to where the request should be saved to a file.
This option is valid only if the certificate will be signed by an
external CA. If this option is specified, you cannot specify the
number of months the certificate is valid using the
-cs_sign_duration option.
-persona -clear {<persona_name>|id=<persona_id>} {-next|-current|-both}
Deletes a key set and the associated certificate. You can delete the
current key set and certificate, the next key set and certificate, or both.
-persona -import {<persona_name>|id=<persona_id>} [-filename <path>]
Imports a CA-signed certificate. You can only paste text in PEM
format at the command prompt. Specify -filename and provide a
path to import a CA-signed certificate in either DER or PEM format.
SEE ALSO
--------
nas_ca_certificate
EXAMPLE #1
---------
To import a CA certificate, specifying a filename and path, type:
$ server_certificate server_2 -ca_certificate -import -filename
"/tmp/ca_cert.pem"
done
EXAMPLE #2
---------
To list all the CA certificates currently available on the VNX, type:
$ server_certificate ALL -ca_certificate -list
server_2 :
id=1
subject=O=Celerra Certificate Authority;CN=sorento
issuer=O=Celerra Certificate Authority;CN=sorento
expire=20120318032639Z
id=2
subject=C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Author
issuer=C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Author
expire=20280801235959Z
server_3 :
id=1
subject=O=Celerra Certificate Authority;CN=zeus-cs
issuer=O=Celerra Certificate Authority;CN=zeus-cs
expire=20120606181215Z
EXAMPLE #3
---------
To list the properties of the CA certificate identified by certificate ID
2, type:
$ server_certificate server_2 -ca_certificate -info 2
server_2 :
id=2
subject = C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification
Authority
issuer = C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification
Authority
start = 19960129000000Z
expire = 20280801235959Z
signature alg. = md2WithRSAEncryption
public key alg. = rsaEncryption
public key size = 1024 bits
serial number = 70ba e41d 10d9 2934 b638 ca7b 03cc babf
version = 1
EXAMPLE #4
---------
To generate a key set and certificate request to be sent to an external
CA for the persona identified by the persona name default, type:
$ server_certificate server_2 -persona -generate default
-key_size 2048 -common_name division.xyz.com
server_2 :
Starting key generation. This could take a long time ...
done
EXAMPLE #5
---------
To list all the key sets and associated certificates currently available
on the VNX, type:
$ server_certificate ALL -persona -list
server_2 :
id=1
name=default
next state=Request Pending
request subject=CN=name;CN=1.2.3.4
server_3 :
id=1
name=default
next state=Not Available
CURRENT CERTIFICATE:
id=1
subject=CN=test;CN=1.2.3.4
expire=20070706183824Z
issuer=O=Celerra Certificate Authority;CN=eng173100
EXAMPLE #6
---------
To list the properties of the key set and certificate identified by
persona ID 1, type:
$ server_certificate server_2 -persona -info id=1
server_2 :
id=1
name=default
next state=Request Pending
request subject=CN=name;CN=1.2.3.4
Request:
-----BEGIN CERTIFICATE REQUEST-----
MIIEZjCCAk4CAQAwITENMAsGA1UEAxMEbmFtZTEQMA4GA1UEAxMHMS4yLjMuNDCC
AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKW3Q/F6eMqIxrCO5IeXLET
bWkm5RzrbI5lHxLNuhobR5S9G2o+k47X0QZFkGzq/2F7kR06vVIH7CPH9X2gGAzV
7GmZaFO0wPcktPJYzjQE8guNhcL1qZpPl4IZrbnSGEAWcAAE0nvNwLp9aN0WSC+N
TDJZY4A9yTURiUc+Bs8plhQh16wLLL0zjUKIvKjAqiTE0F3RApVJEE/9y6N+Idsb
Vwf/rvzP6/z0wZW5Hl84HKXInJaHTBDK59G+e/Y2JgvUY1UNBZ5SODunOakHabex
k6COFYjDu7Vd+yHpvcyTalHJ2RcIavpQuM02o+VVpxgUyX7M1+VXJXTJm0yb4j4g
tZITOSVZ2FqEpOkoIpzqoAL7A9B69WpFbbpIX8danhReafDh4oj4yWocvSwMKYv1
33nLak3+wpMQNrwJ2L9FIHP2fXClnvThBgupm7uqqHP3TfNBbBPTYY3qkNPZ78wx
/njUrZKbfWd81Cc+ngUi33hbMuBR3FFsQNASYZUzgl5+JexALH5jhBahd2aRXBag
itQLhvxYK0dEqIEwDfdDedx7i+yro2gbNxhLLdtkuBtKrmOnuT5g2WWXNKzNa/H7
KWv8JSwCv1mW1N/w7V9aEbDizBBfer+ZdMPkGLbyb/EVXZnHABeWH3iKC6/ecnRd
4Kn7KO9F9qXVHlzzTeYVAgMBAAGgADANBgkqhkiG9w0BAQUFAAOCAgEAzSS4ffYf
2WN0vmZ0LgsSBcVHPVEVg+rP/aU9iNM9KDJ4P4OK41UDU8tOGy09Kc8EvklBUm59
fyjt2T/3RqSgvvkCEHJsVW3ZMnSsyjDo6Ruc0HmuY4q+tuLl+dilSQnZGUxt8asw
dhEpdEzXA6o9cfmVZMSt5QicfAmmBNr4BaO96+VAlg59fu/chU1pvKWWMGXz4I2s
7z+UdMBYO4pEfyG1i34Qof/z4K0SVNICn3CEkW5TIsSt8qA/E2JXXlLhbMYWKYuY
9ur/gspHuWzkIXZFx4SmTK9/RsE1Vy7fBztIoN8myFN0nma84D9pyqls/yhvXZ/D
iDF6Tgk4RbNzuanRBSYiJFu4Tip/nJlK8uv3ZyFJ+3DK0c8ozlBLuQdadxHcJglt
m/T4FsHa3JS+D8CdA3uDPfIvvVNcwP+4RBK+Dk6EyQe8uKrVL7ShbacQCUXn0AAd
Ol+DQYFQ7Mczcm84L98srhov3JnIEKcjaPseB7S9KtHvHvvs4q1lQ5U2RjQppykZ
qpSFnCbYDGjOcqOrsqNehV9F4h9fTszEdUY1UuLgvtRj+FTT2Ik7nMK641wfVtSO
LCial6kuYsZg16SFxncnH5gKHtQMWxd9nv+UyJ5VwX3aN12N0ZQbaIDcQp75Em2E
aKjd28cZ6FEavimn69sz0B8PHQV+6dPwywM=
-----END CERTIFICATE REQUEST-----
EXAMPLE #7
----------
To generate a key set and certificate request that is automatically
received by the Control Station for the persona identified by the
persona name default, type:
$ server_certificate server_2 -persona -generate default
-key_size 2048 -cs_sign_duration 12 -common_name
division.xyz.com
server_2 :
Starting key generation. This could take a long time ...
done
EXAMPLE #8
---------
To generate a key set and certificate request to be sent to an external
CA specifying subject information, type:
$ server_certificate server_2 -persona -generate default
-key_size 2048 -common_name division.xyz.com -ou QA
-organization XYZ -location Bethesda -state Maryland
-country US -filename /tmp/server_2.1.request.pem
server_2 :
Starting key generation. This could take a long time ...
done
EXAMPLE #9
---------
To import a signed certificate and paste the certificate text, type:
$ server_certificate server_2 -persona -import default
server_2 : Please paste certificate data. Enter a carriage return and on the
new line type "end of file" or "eof" followed by another carriage return.
-----------------------------------------------------
Last Modified: March 31, 2010 12:45 pm
server_checkup
Checks the configuration parameters and state of a Data Mover and
its dependencies.
SYNOPSIS
-------
server_checkup {<movername>|ALL}
[-test <component> [-subtest <dependency>][-quiet][-full]]
| -list
| -info {<component>|all}
DESCRIPTION
----------
server_checkup performs a sanity check of a specific Data Mover
component and its dependencies by checking configuration
parameters, and the current state of the component and
dependencies.
A component is any basic feature that is available on the Data Mover,
for example, CIFS. A dependency is a configuration component of a
Data Mover that the proper operation of a Data Mover functionality
(like CIFS) depends upon. This configuration component can be owned by
multiple Data Mover components. For example, proper operation of a
CIFS service depends on correctly specified DNS, WINS, Antivirus, and
so on.
server_checkup displays a report of errors and warnings detected in
the specified Data Mover component and its dependencies.
OPTIONS
-------
No arguments
Performs a sanity check of all the components and all their dependencies on the
specified Data Mover or all Data Movers.
-test <component>
Performs a sanity check of a specific component and all of its dependencies.
[-subtest <dependency>]
Performs a sanity check of a specific component and its specified
dependency only. If the dependency is not defined, executes the
command for all the dependencies of the component.
[-quiet]
Displays only the number of errors and warnings for the sanity
check.
[-full]
Provides a full sanity check of the specified Data Movers.
-list
Lists all available components that can be checked on a Data Mover.
-info <component>
Lists all dependencies of the specified component, with details of checks that
can be performed on each dependency.
EXAMPLE #1
---------
To list the available components on the Data Mover, type:
$ server_checkup server_2 -list
server_2 : done
REPV2
HTTPS
CIFS
FTPDS
EXAMPLE #2
---------
To execute the check of the CIFS component, type:
$ server_checkup server_2 -test CIFS
server_2 :
------------------------------------Checks------------------------------------
Component CIFS :
ACL : Checking the number of ACL per file system.....................*Pass
Connection: Checking the load of TCP connections of CIFS...................Pass
Credential: Checking the validity of credentials...........................Pass
DC : Checking the connectivity and configuration of the DCs.........*Fail
DFS : Checking the DFS configuration files and DFS registry.......... Pass
DNS : Checking the DNS configuration and connectivity to DNS servers. Pass
EventLog : Checking the configuration of Windows Event Logs...............Pass
FS_Type : Checking if all file systems are all DIR3 type................. Pass
GPO : Checking the GPO configuration................................. Pass
HomeDir : Checking the configuration of home directory share............. Pass
I18N : Checking the I18N mode and the Unicode/UTF8 translation tables. Pass
Kerberos : Checking machine password update for Kerberos..................Fail
LocalGrp : Checking the local groups database configuration...............Fail
NIS : Checking the connectivity to the NIS servers, if defined....... Pass
NTP : Checking the connectivity to the NTP servers, if defined........ Pass
Ntxmap : Checking the ntxmap configuration file......................... Pass
Security : Checking the CIFS security settings............................Pass
Server : Checking the CIFS files servers configuration.................. Pass
Share : Checking the network shares database........................... Pass
SmbList : Checking the range availability of SMB ID......................*Pass
Threads : Checking for CIFS blocked threads.............................. Pass
UM_Client : Checking for the connectivity to usermapper servers, if any....Pass
UM_Server : Checking the consistency of usermapper database, if primary....*Pass
UnsupOS : Checking for unsupported client network OS..................... Pass
UnsupProto: Checking for unsupported client network protocols..............Pass
VC : Checking the configuration to Virus Checker servers............ Pass
WINS : Checking for the connectivity to WINS servers, if defined...... Pass
NB: a result with a ’*’ means that some tests were not executed. Use -full
to run them.
------------------------------------------------------------------------------
---------------------------CIFS : Kerberos Warnings---------------------------
Warning 17451974742: server_2 : No update of the machine password of server
’DM102-CGE1’. hold.
--> Check the log events to find out the reason of this issue.
Warning 17451974742: server_2 : No update of the machine password of server
’DM102-CGE0’. hold.
--> Check the log events to find out the reason of this issue.
---------------------------CIFS : LocalGrp Warnings---------------------------
Warning 17451974726: server_2 : The local group ’Guests’ of server
’DM102-CGE1’ contains an unmapped member:
S-1-5-15-60415a8a-335a7a0d-6b635f23-202. The access to some network
resources may be refused.
--> According to the configured resolver of your system (NIS, etc config
files, usermapper, LDAP...), add the missing members.
------------------------------------------------------------------------------
-------------------------------CIFS : DC Errors-------------------------------
Error 13160939577: server_2 : pingdc failed due to NT error ACCESS_DENIED at
step SAMR lookups
--> check server configuration and/or DC policies according to reported error.
Error 13160939577: server_2 : pingdc failed due to NT error ACCESS_DENIED at
step SAMR lookups
--> check server configuration and/or DC policies according to reported error.
------------------------------------------------------------------------------
EXAMPLE #3
----------
To execute only the check of the DNS dependency of the CIFS
component, type:
$ server_checkup server_2 -test CIFS -subtest DNS
server_2 :
------------------------------------Checks------------------------------------
Component CIFS :
DNS : Checking the DNS configuration and connectivity to DNS servers. Pass
------------------------------------------------------------------------------
EXAMPLE #4
----------
To list the available dependencies of the CIFS component, type:
$ server_checkup server_2 -info CIFS
server_2 :
done
COMPONENT : CIFS
DEPENDENCY : ACL
DESCRIPTION : Number of ACL per file system.
TESTS :
In full mode, check if the number of ACL per file system doesn’t exceed 90% of
the maximum limit.
COMPONENT : CIFS
DEPENDENCY : Connection
DESCRIPTION : TCP connection number
TESTS :
Check if the number of CIFS TCP connections doesn’t exceed 80% of the maximum
number.
COMPONENT : CIFS
DEPENDENCY : Credential
DESCRIPTION : Users and groups not mapped
TESTS :
Check if all credentials in memory are mapped to a valid SID.
COMPONENT : CIFS
DEPENDENCY : DC
DESCRIPTION : Connectivity to the domain controllers
TESTS :
Check the connectivity to the favorite DC (DCPing),
In full mode, check the connectivity to all DC of the domain,
Check if DNS site information are defined for each computer name,
Check if the site of each computer name has an available DC,
Check if trusted domain of each computer name can be reached,
Check the ds.useDCLdapPing parameter is enabled,
Check the ds.useADSite parameter is enabled.
COMPONENT : CIFS
DEPENDENCY : DFS
DESCRIPTION : DFS service configuration on computer names
TESTS :
Check the DFS service is enabled in registry if DFS metadata exists,
Check the DFS metadata of each share with DFS flag are correct,
Check if share names in DFS metadata are valid and have the DFS flag,
Check if each DFS link is valid and loaded,
Check in the registry if the WideLink key is enabled and corresponds to a valid
share name.
COMPONENT : CIFS
DEPENDENCY : DNS
DESCRIPTION : DNS domain configuration
TESTS :
Check if each DNS domain has at least 2 defined servers,
Check the connectivity to each DNS server of each DNS domain,
Check if each DNS server of each DNS domain supports really the DNS service,
Check the ds.useDSFile parameter (automatic discovery of DC),
Check the ds.useDSFile parameter is enabled if the directoryservice file
exists.
COMPONENT : CIFS
DEPENDENCY : EventLog
DESCRIPTION : Event Logs parameters on servers
TESTS :
Check if the pathnames of each event logs files are valid (application, system
and security),
Check if the maximum file size of each event logs file doesn’t exceed 1GB,
Check if the retention time of each event logs file doesn’t exceed 1 month.
COMPONENT : CIFS
DEPENDENCY : FS_Type
DESCRIPTION : DIR3 mode of filesystems
TESTS :
Check if each file system is configured in the DIR3 mode.
COMPONENT : CIFS
DEPENDENCY : GPO
DESCRIPTION : GPO configuration on Win2K servers
TESTS :
Check if the size of the GPO cache file doesn’t exceed 10% of the total size of
the root file system,
Check the last modification date of the GPO cache file is up-to-date,
Check the cifs.gpo and cifs.gpoCache parameters have not been changed,
COMPONENT : CIFS
DEPENDENCY : HomeDir
DESCRIPTION : Home directory shares configuration
TESTS :
Check if the home directory shares configuration file exists, the feature is
enabled,
Check if the home directory shares configuration file is optimized (40 lines
maximum),
Check the syntax of the home directory shares configuration file.
COMPONENT : CIFS
DEPENDENCY : I18N
DESCRIPTION : Internationalization and translation tables
TESTS :
Check if computer name exists, the I18N mode is enabled,
Check the .etc_common file system is correctly mounted,
Check the syntax of the definition file of the Unicode characters,
Check the uppercase/lowercase conversion table of Unicode character is valid.
COMPONENT : CIFS
DEPENDENCY : Kerberos
DESCRIPTION : Kerberos configuration
TESTS :
Check the machine password update is enabled and up-to-date.
COMPONENT : CIFS
DEPENDENCY : LocalGrp
DESCRIPTION : Local groups and local users
TESTS :
Check the local group database doesn’t contain more than 80% of the maximum
number of servers,
Check if the servers in the local group database are all valid servers,
Check the state of the local group database (initialized and writable),
Check if the members of built-in local groups are all resolved in the domain,
Check the number of built-in local groups and built-in local users,
Check if the number of defined local users doesn’t exceed 90% of the maximum
number.
COMPONENT : CIFS
DEPENDENCY : NIS
DESCRIPTION : Network Information System (NIS) configuration
TESTS :
If NIS is configured, check at least 2 NIS servers are defined (redundancy
check),
Check if each NIS server can be contacted on the network,
Check if each NIS server really supports the NIS service.
COMPONENT : CIFS
DEPENDENCY : NTP
DESCRIPTION : Network Time Protocol (NTP) configuration
TESTS :
If NTP is configured, check at least 2 NTP servers are defined (redundancy
check),
Check if each NTP server can be contacted on the network,
If computer names exist, check if NTP is configured and is running.
COMPONENT : CIFS
DEPENDENCY : Ntxmap
DESCRIPTION : Checking the ntxmap.conf file.
TESTS :
Check the data consistency of the ntxmap configuration file.
COMPONENT : CIFS
DEPENDENCY : Security
DESCRIPTION : Security settings
TESTS :
If the I18N mode is enabled, check the share/unix security setting is not in
use,
Discourage to use the share/unix security setting,
Check the cifs.checkAcl parameter is enabled if the security setting is set to
NT.
COMPONENT : CIFS
DEPENDENCY : Server
DESCRIPTION : Files servers
TESTS :
Check if each CIFS server is configured with a valid IP interface,
Check if each computer name has joined its domain,
Check if each computer name is correctly registered in their DNS servers,
Check if the DNS servers have the valid IP addresses of each computer name,
Check if a DNS domain exists if at least one computer name exists,
COMPONENT : CIFS
DEPENDENCY : Share
DESCRIPTION : Network shares
TESTS :
Check the available size and i-nodes on the root file system are at least 10%
of the total size,
Check the size of the share database doesn’t exceed 30% of the total size of
the root file system,
Check if the pathname of each share is valid and is available,
Check if each server in the share database really exists,
Check if the I18N mode is enabled, all the share names are UTF-8 compatible,
Check the list of ACL of each share contains some ACE,
Check the length of each share name doesn’t exceed 80 Unicode characters.
COMPONENT : CIFS
DEPENDENCY : SmbList
DESCRIPTION : 64k UID, TID and FID limits
TESTS :
In full mode, check the 3 SMB ID lists (UID, FID and TID) don’t exceed 90% of
the maximum ID number.
COMPONENT : CIFS
DEPENDENCY : Threads
DESCRIPTION : Blocked threads and overload
TESTS :
Check CIFS threads blocked more than 5 and 30 seconds,
Check the maximum number of CIFS threads in use in the later 5 minutes doesn’t
exceed 90% of the total number,
Check the number of threads reserved for Virus Checker doesn’t exceed 20% of
the total number of CIFS threads.
COMPONENT : CIFS
DEPENDENCY : UM_Client
DESCRIPTION : Connectivity to the usermapper server
TESTS :
If usermapper servers are defined, check each server can be contacted,
Check if usermapper servers are defined, NIS is not simultaneously activated.
COMPONENT : CIFS
DEPENDENCY : UM_Server
DESCRIPTION : Primary usermapper server
TESTS :
If a primary usermapper is defined locally, check its database size doesn’t
exceed 30% of the total size,
Check if configuration file is in use, the filling rate of the ranges doesn’t
exceed 90%,
Check if configuration file is in use, 2 ranges do not overlap,
Check if secmap is enabled,
In full mode, check the SID/UID and SID/GID mappings and reverses are correct
and coherent.
COMPONENT : CIFS
DEPENDENCY : UnsupOS
DESCRIPTION : Client OS not supported
TESTS :
Check for unsupported client network OS.
COMPONENT : CIFS
DEPENDENCY : UnsupProto
DESCRIPTION : Unsupported protocol commands detected
TESTS :
Check for unsupported client network protocol commands.
COMPONENT : CIFS
DEPENDENCY : VC
DESCRIPTION : Virus checker configuration
TESTS :
If VC is enabled, check the syntax of the VC configuration file,
Check if the VC ’enable’ file and the VC configuration are compatible,
Check the number of VC servers. Make sure at least 2 servers are defined, for
redundancy,
Check if there are offline VC servers,
Check if the VC high watermark has not been reached,
Check the connection of VC servers to the Data Mover.
COMPONENT : CIFS
DEPENDENCY : WINS
DESCRIPTION : WINS servers.
TESTS :
If NetBIOS names are defined, check if at least one WINS server is defined,
Check the number of WINS servers; make sure two servers are defined for
redundancy,
Check if each WINS server can be contacted on the network,
Check these servers are really WINS servers,
Check if the NetBIOS names are correctly registered on the servers.
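The reports produced by these checks (see the examples that follow) can be captured to a file and post-processed with standard shell tools. A minimal sketch follows; the sample lines are generated inline here, whereas a real file would come from redirecting the command output (for example, server_checkup server_2 -full > checkup.log):

```shell
# Create a few sample lines in the format of a server_checkup report.
# (Illustrative only; a real file would be a capture of the command output.)
cat > checkup.log <<'EOF'
HTTP      : Checking the configuration of HTTP applications............ Pass
SSL       : Checking the configuration of SSL applications............. Pass
DC        : Checking the connectivity and configuration of the DCs..... Fail
EOF

# Count Pass/Fail results, mirroring the report's "Total :" summary line.
pass=$(grep -c ' Pass$' checkup.log)
fail=$(grep -c ' Fail$' checkup.log)
echo "passed=$pass failed=$fail"
```

For the sample above this prints passed=2 failed=1.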
EXAMPLE #5
----------
To execute additional tests, type:
$ server_checkup server_2 -full
server_2 :
------------------------------------Checks------------------------------------
Component REPV2 :
F_RDE_CHEC: Checking the F-RDE compatibilty of Repv2 sessions........ Fail
Component HTTPS :
HTTP: Checking the configuration of HTTP applications................ Pass
SSL : Checking the configuration of SSL applications................. Pass
Component CIFS :
ACL : Checking the number of ACL per file system..................... Pass
Connection: Checking the load of TCP connections of CIFS............. Pass
Credential: Checking the validity of credentials..................... Pass
DC : Checking the connectivity and configuration of the DCs.......... Fail
DFS : Checking the DFS configuration files and DFS registry.......... Pass
DNS : Checking the DNS configuration and connectivity to DNS servers. Pass
EventLog : Checking the configuration of Windows Event Logs.......... Pass
FS_Type : Checking if all file systems are all DIR3 type............. Pass
GPO : Checking the GPO configuration................................. Pass
HomeDir : Checking the configuration of home directory share......... Pass
I18N : Checking the I18N mode and the Unicode/UTF8 translation tables Pass
Kerberos : Checking machine password update for Kerberos............. Fail
LocalGrp : Checking the local groups database configuration.......... Fail
NIS : Checking the connectivity to the NIS servers, if defined....... Pass
NTP : Checking the connectivity to the NTP servers, if defined....... Pass
Ntxmap : Checking the ntxmap configuration file...................... Pass
Security : Checking the CIFS security settings....................... Pass
Server : Checking the CIFS files servers configuration............... Pass
Share : Checking the network shares database......................... Pass
SmbList : Checking the range availability of SMB ID.................. Pass
Threads : Checking for CIFS blocked threads.......................... Pass
UM_Client : Checking for the connectivity to usermapper servers, if any. Pass
UM_Server : Checking the consistency of usermapper database, if primary. Pass
UnsupOS : Checking for unsupported client network OS................. Pass
UnsupProto: Checking for unsupported client network protocols........ Pass
VC : Checking the configuration to Virus Checker servers............. Pass
WINS : Checking for the connectivity to WINS servers, if defined..... Pass
Component FTPDS :
FS_Type   : Checking if all file systems are in the DIR3 format...... Pass
FTPD      : Checking the configuration of FTPD....................... Fail
NIS       : Checking the connectivity to the NIS servers............. Pass
NS        : Checking the naming services configuration............... Fail
NTP       : Checking the connectivity to the NTP servers............. Fail
SSL       : Checking the configuration of SSL applications........... Fail
-------------------------------------------------------------------------------
---------------------------HTTPS : SSL Warnings--------------------------
Warning 17456169084: server_2 : The SSL feature ’DHSM’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC’ can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC_S’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC_L’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DBMS_FILE_TRANSFER’ can not
get certificate from the persona default. Because this feature needs a
certificate and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
-----------------------CIFS : Credential Warnings------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
---------------------------CIFS : DC Warnings----------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
--------------------------CIFS : DFS Warnings----------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
------------------------CIFS : EventLog Warnings-------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
------------------------CIFS : HomeDir Warnings--------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
--------------------------CIFS : I18N Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
------------------------CIFS : Kerberos Warnings-------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
------------------------CIFS : LocalGrp Warnings-------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
--------------------------CIFS : NTP Warnings----------------------------
Warning 17456169044: server_2 : The Network Time Protocol subsystem (NTP) has
been stopped or is not connected to its server. It may cause potential errors
during Kerberos authentication (timeskew).
--> If the NTP service is not running, start it using the server_date command.
If it is not connected, check the IP address of the NTP server and make sure
the NTP service is up and running on the server. If needed, add another NTP
server in the configuration of the Data Mover. Use the server_date command to
manage the NTP service and the parameters on the Data Mover.
-------------------------CIFS : Secmap Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
-------------------------CIFS : Server Warnings--------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
-------------------------CIFS : Share Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
------------------------CIFS : SmbList Warnings--------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
--------------------------CIFS : WINS Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many
CIFS sanity check tests cannot be done as all CIFS servers are currently
disabled on this Data Mover.
--> Start the CIFS server by executing the ’server_setup’ command, and try
again.
--------------------------FTPDS : NTP Warnings---------------------------
Warning 17456169044: server_2 : The Network Time Protocol subsystem (NTP) has
been stopped or is not connected to its server. It may cause potential errors
during Kerberos authentication (timeskew).
--> If the NTP service is not running, start it using the server_date command.
If it is not connected, check the IP address of the NTP server and make sure
the NTP service is up and running on the server. If needed, add another NTP
server in the configuration of the Data Mover. Use the server_date command to
manage the NTP service and the parameters on the Data Mover.
--------------------------FTPDS : SSL Warnings---------------------------
Warning 17456169084: server_2 : The SSL feature ’DHSM’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC’ can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC_S’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DIC_L’ can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature ’DBMS_FILE_TRANSFER’ can not
get certificate from the persona default. Because this feature needs a
certificate and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
-------------------------------------------------------------------------------
------------------------REPV2 : F_RDE_CHECK Errors------------------------
Error 13160415855: server_2 : For the Replication session: rep1,
Data Mover version on the source fs: 5.6.47
Data Mover version on the destination fs: 5.5.5
Minimum required Data Mover version on the destination fs: 5.6.46
The Data Mover version on the destination file system is incompatible with the
Data Mover version on the source file system. After data transfer, the data in
the destination file system may appear to be corrupt, even though the data is
in fact intact.
Upgrade the Data Mover where the destination file system resides to at least
5.6.46.
Error 13160415855: server_2 : For the Replication session:rsd1,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible.
After data transfer, the data in the dst FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
Error 13160415855: server_2 : For the Replication session:rsd2,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible.
After data transfer, the data in the dst FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
Error 13160415855: server_2 : For the Replication session:rsd3,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible.
After data transfer, the data in the dst FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
---------------------------HTTPS : SSL Errors-----------------------------
Error 13156876314: server_2 : The persona ’default’ contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
---------------------------CIFS : DNS Errors-----------------------------
Error 13161070637: server_2 : The DNS service is currently stopped and does
not contact any DNS server. The CIFS clients may not be able to access the
Data Mover on the network.
--> Start the DNS service on the Data Mover, using the ’server_dns’ command.
-----------------------------CIFS : NS Errors----------------------------
Error 13156352011: server_2 : None of the naming services defined for the
entity ’host’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity ’group’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity ’netgroup’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
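For reference, the NS check walks the entities of the nsswitch.conf file described above. A minimal sketch of such a file follows; the service order shown is illustrative only, not a recommendation:

```
passwd:   files nis
group:    files nis
hosts:    files dns
netgroup: nis
```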
--------------------------FTPDS : FTPD Errors----------------------------
Error 13156876314: server_2 : The persona ’default’ contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
---------------------------FTPDS : NS Errors-----------------------------
Error 13156352011: server_2 : None of the naming services defined for the
entity ’host’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity ’group’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity ’netgroup’ in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
---------------------------FTPDS : SSL Errors----------------------------
Error 13156876314: server_2 : The persona ’default’ contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
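The same warning or error code often repeats many times in one report, as the CIFS service-stopped warning above illustrates. Deduplicating the codes before looking them up can shorten the follow-up work; a sketch with standard tools (the sample lines reuse codes shown in this example):

```shell
# Sample lines reusing Warning/Error codes that appear in this example.
cat > report.log <<'EOF'
Warning 17456168968: server_2 : The CIFS service is currently stopped.
Warning 17456168968: server_2 : The CIFS service is currently stopped.
Error 13156352011: server_2 : None of the naming services is configured.
EOF

# Print each distinct code once.
grep -oE '(Warning|Error) [0-9]+' report.log | sort -u
```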
-------------------------------------------------------------------------------
Total : 14 errors, 25 warnings
-------------------------------------------------------------------------------
EXAMPLE #6
----------
To display only the number of errors and warnings for a Data Mover and
dependency, type:
$ server_checkup server_2 -quiet
server_2 :
------------------------------------Checks------------------------------------
Component REPV2 :
F_RDE_CHEC: Checking the F-RDE compatibilty of Repv2 sessions.......... Fail
Component HTTPS :
HTTP      : Checking the configuration of HTTP applications............ Pass
SSL       : Checking the configuration of SSL applications............. Pass
Component CIFS :
ACL       : Checking the number of ACLs per file system................*Pass
Connection: Checking the load of CIFS TCP connections.................. Pass
Credential: Checking the validity of credentials....................... Fail
DC        : Checking the connectivity and configuration of Domain Controlle Fail
DFS       : Checking the DFS configuration files and DFS registry...... Fail
DNS       : Checking the DNS configuration and connectivity to DNS servers. Fail
EventLog  : Checking the configuration of Windows Event Logs........... Fail
FS_Type   : Checking if all file systems are in the DIR3 format........ Pass
GPO       : Checking the GPO configuration............................. Pass
HomeDir   : Checking the configuration of home directory shares........ Fail
I18N      : Checking the I18N mode and the Unicode/UTF8 translation tables. Fail
Kerberos  : Checking password updates for Kerberos..................... Fail
LDAP      : Checking the LDAP configuration............................ Pass
LocalGrp  : Checking the database configuration of local groups........ Fail
NIS       : Checking the connectivity to the NIS servers............... Pass
NS        : Checking the naming services configuration................. Fail
NTP       : Checking the connectivity to the NTP servers............... Fail
Ntxmap    : Checking the ntxmap configuration file..................... Pass
Secmap    : Checking the SECMAP database............................... Fail
Security  : Checking the CIFS security settings........................ Pass
Server    : Checking the CIFS file servers configuration............... Fail
Share     : Checking the network shares database....................... Fail
SmbList   : Checking the range availability of SMB IDs.................*Pass
Threads   : Checking for CIFS blocked threads.......................... Pass
UM_Client : Checking the connectivity to usermapper servers............ Pass
UM_Server : Checking the usermapper server database....................*Pass
UnsupOS   : Checking for unsupported client network operating systems.. Pass
UnsupProto: Checking for unsupported client network protocols.......... Pass
VC        : Checking the configuration of Virus Checker servers........ Pass
WINS      : Checking the connectivity to WINS servers.................. Fail
Component FTPDS :
FS_Type   : Checking if all file systems are in the DIR3 format........ Pass
FTPD      : Checking the configuration of FTPD......................... Fail
NIS       : Checking the connectivity to the NIS servers............... Pass
NS        : Checking the naming services configuration................. Fail
NTP       : Checking the connectivity to the NTP servers............... Fail
SSL       : Checking the configuration of SSL applications............. Pass
NB: a result with a ’*’ means that some tests were not executed. use -full
to run them
-------------------------------------------------------------------------------
Total : 12 errors, 14 warnings
-------------------------------------------------------------------------------
Last Modified: April 05, 2010 12:30 pm
server_cifs
Manages the CIFS configuration for the specified Data Movers or
Virtual Data Movers (VDMs).
SYNOPSIS
--------
server_cifs {<movername>|ALL} [<options>]
’options’ can be one of the following:
| -option {{audit [,user=<user_name>][,client=<client_name>][,full]}
| homedir[=NO]}
| -add netbios=<netbios_name>,domain=<domain_name>[,alias=<alias_name>...]
[,hidden={y|n} [[,interface=<if_name>[,wins=<ip>[:<ip>]]]...]
[,local_users][-comment <comment>]
| -add compname=<comp_name>,domain=<full_domain_name>[,alias=<alias_name>...]
[,hidden={y|n}][,authentication={kerberos|all}]
[,netbios=<netbios_name>][[,interface=<if_name>[,wins=<ip>[:<ip>]]
[,dns=<if_suffix>]]...][,local_users][-comment <comment>]
|-add standalone=<netbios_name>,workgroup=<workgroup_name>
[,alias=<alias_name>...][,hidden={y|n}]
[[,interface=<if_name>[,wins=<ip>[:<ip>]]...][,local_users]
[-comment <comment>]
| -rename -netbios <old_name> <new_name>
| -Join compname=<comp_name>,domain=<full_domain_name>,admin=<admin_name>
[,ou=<organizational_unit>]
[-option {reuse|resetserverpasswd|addservice=nfs}]
| -Unjoin compname=<comp_name>,domain=<full_domain_name>,admin=<admin_name>
| -add security={NT|UNIX|SHARE} [,dialect=<dialect_name>]
| -add wins=<ip_addr>[,wins=<ip_addr>...]
| -add usrmapper=<ip_addr>[,usrmapper=<ip_addr>...]
| -Disable <interface>[,<interface>...]
| -Enable <interface>[,<interface>...]
| -delete netbios=<netbios_name> [-remove_localgroup]
[,alias=<alias_name>...][,interface=<if_name>]
| -delete compname=<comp_name> [-remove_localgroup]
[,alias=<alias_name>...][,interface=<if_name>]
| -delete wins=<ip_addr>[,wins=<ip_addr>...]
| -delete usrmapper=<ip_addr>[,usrmapper=<ip_addr>...]
| -delete standalone=<netbios_name> [-remove_localgroup]
[,alias=<alias_name>...][,interface=<if_name>]
| -update {<share_name>|<path>} [mindirsize=<size>][force]
| -Migrate {<fs_name> -acl|<netbios_servername> -localgroup}
<src_domain>{:nb=<netbios>|:if=<interface>}
<dst_domain>{:nb=<netbios>|:if=<interface>}
| -Replace {<fs_name> -acl|<netbios_servername> -localgroup}
{:nb=<netbios>|:if=<interface>}
| -smbhash
{-hashgen <path> [-recursive] [-minsize <size>]
| -hashdel <path> [-recursive]
| -abort <id>
| -info
| -fsusage <fs_name>
| -exclusionfilter <filter>
| -audit {enable|disable} [-task] [-service] [-access]
| -service {enable|disable}
| -cleanup <fs_name> [-all |-unusedfor <days>|-unusedsince
<date>}}
| -setspn {-list [server=<full_comp_name>]
| -add <SPN> compname=<comp_name>,domain=<full_domain_name>,
admin=<admin_name>
| -delete <SPN> compname=<comp_name>,domain=<full_domain_name>,
admin=<admin_name>}
}
}
DESCRIPTION
-----------
server_cifs manages the CIFS configuration for the specified
<movername> which can be the physical Data Mover or VDMs.
Most command options are used with both VDMs and physical Data
Movers, whereas others are only used with physical Data Movers.
Options available for physical Data Movers only are:
-add security/dialect
-add/delete usrmapper
-enable/disable interface
The ALL option executes the command for all Data Movers.
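For example (server_2 is the Data Mover name used throughout this guide's examples):

```
$ server_cifs server_2
$ server_cifs ALL
```

The first form displays the CIFS configuration of a single Data Mover; the second repeats the display for every Data Mover.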
OPTIONS
-------
No arguments
Displays the CIFS protocol configuration. Certain inputs are not
case-sensitive; however, variables may be automatically converted to uppercase.
<options>
CIFS options include:
-option audit
Audits the CIFS configuration by testing for live connections to a
Data Mover.
[,user=<user_name>][,client=<client_name>][,full]
Audits the live connections created when the session is initiated
by the specified <client_name> or audits the live connections for
those owned by the specified <user_name>. The full option can
be used to identify open files. The <client_name> can be a string
or an IPV4 address and the <user_name> can be a string of
maximum 20 characters.
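As an illustration of the audit grammar above (the user name jsmith is hypothetical), the live connections owned by one user, including open files, could be audited with:

```
$ server_cifs server_2 -option audit,user=jsmith,full
```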
-option homedir[=NO]
Enables and disables (default) the home directory feature. The Data
Mover reads information from the homedir map file.
-add netbios=<netbios_name>, domain=<domain_name>
Configures a Windows NT 4.0-like CIFS server on a Data Mover,
assigning the specified <netbios_name> and <domain_name> to the
server. The domain name is limited to 15 bytes.
Caution: Each NetBIOS name must be unique to the domain and the Data
Mover.
[,alias=<alias_name>...]
Assigns a NetBIOS alias to the <netbios_name>. The <alias_name> must:
* Be unique on a Data Mover
* Be limited to 15 bytes
* Not begin with an @ (at sign) or - (dash) character
* Not include spaces, tab characters, or the following symbols: /
\ : ; , = * +|[] ? < > "
[,hidden={y|n}]
By default, the <netbios_name> is displayed in the Network
Neighborhood. If hidden=y is specified, the <netbios_name>
does not appear.
[[,interface=<if_name>[,wins=<ip>[:<ip>]]]...]
Specifies a logical IP interface for the CIFS server in the Windows
NT 4.0 domain and associates up to two WINS IP addresses with
each interface. The interface name is case-sensitive.
Note: When configuring a CIFS server without any interfaces for a Data
Mover, it becomes the default CIFS server and is available on all
interfaces not used by other CIFS servers. The default CIFS server can be
deleted at any time. It is recommended that IP interfaces should always
be specified. VDMs do not have default CIFS servers.
[,local_users]
Enables local user support that allows the creation of a limited
number of local user accounts on the CIFS server. When this
command executes, type and confirm a password that is assigned
to the local Administrator account on the CIFS server. In addition
to the Administrator account, a Guest account is also created. The
Guest account is disabled by default. The Administrator account
password must be changed before the Administrator can log in to
the CIFS server.
After initial creation of the stand-alone server, the local_users
option resets the local Administrator account password. The
password can only be reset if it has not been changed through a
Windows client. If the password has already been changed
through Windows, the reset will be refused.
[-comment <comment>]
Assigns a comment to the configuration. The comment is
delimited by quotes. Comment length is limited to 48 bytes
(represented as 48 ASCII characters or a variable number of
Unicode multibyte characters) and cannot include colons since
they are recognized as delimiters.
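Putting the preceding -add netbios options together, a sketch of an NT 4.0-like server definition (the NetBIOS name, domain, interface name, and WINS addresses are all illustrative):

```
$ server_cifs server_2 -add netbios=dm112,domain=NASDOCS,interface=cge0,wins=172.24.102.25:172.24.103.25
```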
-add compname=<comp_name>,domain=<full_domain_name>
Configures a CIFS server as the <comp_name> in the specified
Windows Active Directory domain. A default NetBIOS name is
automatically assigned to the <comp_name>. Since the default for
<netbios_name> is derived from the <comp_name>, the
<comp_name> must not contain any characters that are invalid for a
<netbios_name>.
In the case of disjointed namespaces, you must use the fully qualified
domain name for the <comp_name>. For example, for a disjointed
namespace, you must always specify the fully qualified domain
name (FQDN) with the computer name when joining a CIFS server to
a domain, that is, dm112-cge0.emc.com, not just dm112-cge0.
The <comp_name> is limited to 63 bytes. The fully qualified domain
name is limited to 155 bytes. The <full_domain_name> must contain
a dot (.). The name cannot contain an @ (at sign) or - (dash) character,
and cannot include spaces, tab characters, or the symbols: / \ : ; , = *
+|[] ? < > "
Caution: Each computer name must be unique to the domain and the Data
Mover.
Note: Using International Character Sets for File provides details. Only
Windows NT security mode can be configured when UTF-8 is enabled.
[,alias=<alias_name>...]
Assigns an alias to the NetBIOS name. The <alias_name> must:
* Be unique on a Data Mover
* Be limited to 15 bytes
* Not begin with an @ (at sign) or - (dash) character
* Not include spaces, tab characters, or the following symbols: /
\ : ; , = * +|[] ? < > "
[,hidden={y|n}]
By default, the computer name appears in the Network
Neighborhood. If hidden=y is specified, then the computer name
does not appear.
[,authentication={kerberos|all}]
Specifies the type of user authentication. The kerberos option
limits the server usage to Kerberos authentication; the all option
(default) allows both Kerberos and NTLM authentication.
[,netbios=<netbios_name>]
Specifies a <netbios_name> for the <comp_name> in place of the
default. The default for <netbios_name> is assigned
automatically and is derived from the first 15 bytes of the
<comp_name>. The <netbios_name> cannot begin with an @ (at
sign) or - (dash) character. The name also cannot include spaces,
tab characters, or the symbols: / \ : ; , = * +|[] ? < > "
[[,interface=<if_name>[,wins=<ip>[:<ip>]]]...]
Specifies a logical IP interface for the CIFS server in the Active
Directory domain and associates up to two WINS IP addresses
with each interface. The interface name is case-sensitive.
Note: When configuring a CIFS server without any interfaces for a Data
Mover, it becomes the default CIFS server and is available on all
interfaces not used by other CIFS servers. The default CIFS server can be
deleted at any time. It is recommended that IP interfaces always
be specified. VDMs do not have default CIFS servers.
[,dns=<if_suffix>]
Specifies a different DNS suffix for the interface for DNS updates.
By default, the DNS suffix is derived from the domain. This DNS
option does not have any impact on the DNS settings of the Data
Mover.
[,local_users]
Enables local user support that allows the creation of a limited
number of local user accounts on the CIFS server. When this
command executes, type and confirm a password that is assigned
to the local Administrator account on the CIFS server. In addition
to the Administrator account, a Guest account is also created. The
Guest account is disabled by default. The Administrator account
password must be changed before the Administrator account can
log in to the CIFS server.
After initial creation of the stand-alone server, the local_users
option resets the local Administrator account password. The
password can only be reset if it has not been changed through a
Windows client. If the password has already been changed
through Windows, the reset will be refused.
[-comment <comment>]
Assigns a comment to the configuration. The comment is
delimited by quotes. Comment length is limited to 48 bytes
(represented as 48 ASCII characters or a variable number of
Unicode multibyte characters) and cannot include colons, since
they are recognized as delimiters.
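The comment limits above amount to a simple check, sketched here as illustrative Python (the helper name is ours; the 48-byte limit is assumed to count UTF-8 bytes, matching the "variable number of Unicode multibyte characters" wording):

```python
def is_valid_comment(comment: str) -> bool:
    # Illustrative check of the documented limits: at most 48 bytes
    # and no colons, since colons are recognized as delimiters.
    return len(comment.encode('utf-8')) <= 48 and ':' not in comment
```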
-add standalone=<netbios_name>, workgroup=<workgroup_name>
Creates or modifies a stand-alone CIFS server on a Data Mover,
assigning the specified <netbios_name> and <workgroup_name>
to the server. The NetBIOS and workgroup names are limited to
15 bytes. When creating a stand-alone CIFS server for the first
time, the ,local_users option must be typed, or the command will
fail. It is not required when modifying the CIFS server. A
stand-alone CIFS server does not require any Windows domain
infrastructure. A stand-alone server has local user accounts on the
Data Mover and NTLM is used to authenticate users against the
local accounts database.
Caution: Each NetBIOS name must be unique to the workgroup and the
Data Mover.
[,alias=<alias_name>...]
Assigns an alias to the NetBIOS name. The <alias_name> must:
* Be unique on a Data Mover
* Be limited to 15 bytes
* Not begin with an @ (at sign) or - (dash) character
* Not include spaces, tab characters, or the following symbols:
  / \ : ; , = * + | [ ] ? < > "
[,hidden={y|n}]
By default, the <netbios_name> is displayed in the Network
Neighborhood. If hidden=y is specified, the <netbios_name>
does not appear.
[[,interface=<if_name>[,wins=<ip>[:<ip>]]]...]
Specifies a logical IP interface for the CIFS server and associates
up to two WINS IP addresses with each interface. The interface
name is case-sensitive.
Note: When configuring a CIFS server without any interfaces for a Data
Mover, it becomes the default CIFS server and is available on all
interfaces not used by other CIFS servers. The default CIFS server can be
deleted at any time. It is recommended that IP interfaces always
be specified. VDMs do not have default CIFS servers.
[,local_users]
Enables local user support that allows the creation of a limited
number of local user accounts on the CIFS server. When this
command executes, type and confirm a password that is assigned
to the local Administrator account on the CIFS server. In addition
to the Administrator account, a Guest account is also created. The
Guest account is disabled by default. The Administrator account
password must be changed before the Administrator can log in to
the CIFS server.
After initial creation of the stand-alone server, the local_users
option resets the local Administrator account password. The
password can only be reset if it has not been changed through a
Windows client. If the password has already been changed
through Windows, the reset will be refused.
[-comment <comment>]
Assigns a comment to the configuration. The comment is
delimited by quotes. Comment length is limited to 48 bytes
(represented as 48 ASCII characters or a variable number of
Unicode multibyte characters) and cannot include colons since
they are recognized as delimiters.
-rename -netbios <old_name> <new_name>
Renames a NetBIOS name. For Windows Server, renames a
Compname after the CIFS server is unjoined from the domain.
Note: Before performing a rename, the new NetBIOS name must be added to
the domain using the Windows Server Users and Computers MMC snap-in.
-Join compname=<comp_name>,domain=
<full_domain_name>,admin=<admin_name>
Creates an account for the CIFS server in the Active Directory. By
default, the account is created under the domain root as
ou=Computers,ou=EMC VNX.
Caution: Before performing a -Join, CIFS service must be started using
server_setup.
The <comp_name> is limited to 63 bytes and represents the name of
the server to be registered in DNS. The <full_domain_name> is the
full domain name to which the server belongs. This means the name
must contain at least one period (.). The <admin_name> is the logon
name of the user with the right to create and manage computer
accounts in the Organizational Unit that the CIFS server is being
joined to. If a domain is given as part of the admin username it
should be of the form: admin@FQDN. If no domain is given the
admin user account is assumed to be part of the domain the CIFS
Server is being joined to. The user is prompted to type a password for
the admin account.
An Active Directory and a DNS can have the same domain name, or a
different domain name (disjoint namespace). For each type of Active
Directory and DNS domain relationship, specific VNX parameters
and command values must be used. For example, for a disjoint
namespace, you must always specify the fully qualified domain
name (FQDN) with the computer name when joining a CIFS server to
a domain, that is, dm112-cge0.emc.com, not just dm112-cge0.
Caution: Time services must be synchronized using server_date.
[,ou=<organizational_unit>]
Specifies the organizational unit or container where computer
accounts are created in the Active Directory. By default, computer
accounts are created in an organizational unit called Computers.
The name must be in a valid distinguished name format, for
example, ou="cn=My_mover". The name may contain multiple
nested elements, such as ou="cn=comp:ou=mach". The colon (:)
must be used as a separator for multiple elements. By default,
ou=Computers,ou=EMC VNX is used. The organizational unit
name is limited to 256 bytes.
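The colon-separated format above can be illustrated with a hypothetical helper that expands the CLI's ':'-separated elements into comma-separated RDNs of an LDAP distinguished name. The function name and the DC-suffix handling are assumptions for illustration, not documented CLI behavior.

```python
def ou_to_dn(ou_value: str, domain: str) -> str:
    # Hypothetical expansion: ':'-separated elements such as
    # "cn=comp:ou=mach" become comma-separated RDNs, followed by
    # DC components derived from the full domain name.
    rdns = ou_value.split(':')
    dcs = ['dc=' + label for label in domain.split('.')]
    return ','.join(rdns + dcs)
```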
[-option {reuse|resetserverpasswd|addservice=nfs}]
The reuse option reuses the existing computer account with the
original principal or joins a CIFS server to the domain where the
computer account has been created manually.
The resetserverpasswd option resets the CIFS server password
and encryption keys on a domain controller. This option could be
used for security reasons, such as changing the server password
in the Kerberos Domain Controller.
The addservice option adds the NFS service to the CIFS server,
making it possible for NFS users to access the Windows Kerberos
Domain Controller. Before adding NFS service, the
<comp_name> must already be joined to the domain, otherwise
the command will fail.
-Unjoin compname=<comp_name>,domain=
<full_domain_name>,admin=<admin_name>
Deletes the account for the CIFS server as specified by its
<comp_name> from the Active Directory database. The user is
prompted to type a password for the admin account.
-add security={NT|UNIX|SHARE}
Defines the user authentication mechanism used by the Data Mover
for CIFS services. NT (default) security mode uses standard
Windows domain based user authentication. The local password and
group files, NIS, EMC Active Directory UNIX users and groups
extension, or UserMapper are required to translate Windows user
and group names into UNIX UIDs and GIDs. NT security mode is
required for the Data Mover to run Windows 2000 or later native
environments. Unicode should be enabled for NT security mode.
Caution: EMC does not recommend the use of UNIX or SHARE security
modes.
For UNIX security mode, the client supplies a username and a
plain-text password to the server. The server uses the local (password
or group) file or NIS to authenticate the user. To use UNIX security
mode, CIFS client machines must be configured to send user
passwords to the Data Mover unencrypted in plain text. This requires
a registry or security policy change on every CIFS client machine.
For VDM, UNIX and SHARE security modes are global to the Data
Mover and cannot be set for each VDM. Unicode must not be
enabled.
For SHARE security mode, clients supply a read-only or read-write
password for the share. No user authentication is performed using
SHARE security. Since this password is sent through the network in
clear text, you must modify the Client Registry to allow for clear text
passwords.
Caution: Before adding or changing a security mode, CIFS service must be
stopped using server_setup, then restarted once options have been
set.
[,dialect=<dialect_name>]
Specifies a dialect. Optimum dialects are assigned by default.
Options include CORE, COREPLUS, LANMAN1 (default for
UNIX and SHARE security modes), LANMAN2, NT1 (which
represents SMB1 and is the default for NT security mode), SMB2,
and SMB3.
* SMB1 dialect is the NT1 dialect.
* SMB2 dialect means the maximum dialect in SMB2, which is
  SMB2.1. SMB2.0 or SMB2.1 can be specified explicitly to refine
  the dialect revision.
* SMB3 dialect means the maximum dialect in SMB3, which is
  SMB3.0. SMB3.0 can be specified explicitly.
Note: SMB3 is enabled by default.
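The dialect rules above can be summarized as a lookup table. This is an illustrative restatement (not a CLI artifact): each keyword maps to the highest protocol revision it enables.

```python
# Illustrative mapping of CLI dialect keywords to the highest
# revision each one enables, per the rules above.
MAX_REVISION = {
    "NT1": "SMB1",       # NT1 is the SMB1 dialect
    "SMB2": "SMB2.1",    # SMB2 means the maximum SMB2 dialect
    "SMB2.0": "SMB2.0",  # explicit revisions are taken literally
    "SMB2.1": "SMB2.1",
    "SMB3": "SMB3.0",    # SMB3 means the maximum SMB3 dialect
    "SMB3.0": "SMB3.0",
}
```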
-add wins=<ip_addr>[,wins=<ip_addr>...]
Adds the WINS servers to the CIFS configuration. The list of WINS
servers is processed in the order in which they are added; the first
one is the preferred WINS server. If a WINS server does not respond
within 1500 milliseconds, the next WINS server on the list is used.
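The failover order described above can be sketched as follows. This is an illustrative Python sketch, not product code; `probe` is a stand-in for the real WINS query.

```python
WINS_TIMEOUT_MS = 1500  # documented per-server timeout

def pick_wins(servers, probe):
    # Servers are tried in the order they were added; the first one
    # to answer within the timeout is used, otherwise failover
    # continues down the list.
    for ip in servers:
        if probe(ip, WINS_TIMEOUT_MS):
            return ip
    return None
```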
-add usrmapper=<ip_addr>[,usrmapper=<ip_addr>...]
Adds the IP address(es) of secondary Usermapper hosts to the CIFS
configuration. A single IP address can point to a primary or
secondary Usermapper host. If you are using distributed
Usermappers, up to eight subsequent IP addresses can point to
secondary Usermapper hosts.
-Disable <interface> [<interface>,...]
Disables the specified IP interfaces for CIFS service. Interface names
are case-sensitive. All unused interfaces should be disabled.
-Enable <interface> [<interface>,...]
Enables the specified IP interfaces for CIFS service. Interface names
are case-sensitive.
-delete standalone=<netbios_name>
[-remove_localgroup][,alias=<alias_name>...][,interface=<if_name>]
Deletes the stand-alone CIFS server as identified by its NetBIOS
name from the CIFS configuration of the Data Mover.
-delete netbios=<netbios_name>
[-remove_localgroup][,alias=<alias_name>...][,interface=<if_name>]
Deletes the CIFS server as identified by its NetBIOS name from the
CIFS configuration of the Data Mover.
-delete compname=<comp_name> [-remove_localgroup]
[,alias=<alias_name>...][,interface=<if_name>]
Deletes the CIFS server as identified by its compname from the CIFS
configuration of the Data Mover. This does not remove the account
from the Active Directory. It is recommended that an -Unjoin be
executed prior to deleting the computer name.
Caution: The -remove_localgroup option permanently deletes the local
group information of the CIFS server from the permanent storage
of the Data Mover. The alias and interface options delete only the
alias and the interface; the CIFS server itself continues to exist.
The alias and interface options can be combined in the same delete
command.
-delete wins=<ip_addr>[,wins=<ip_addr>...]
Deletes the WINS servers from the CIFS configuration.
-delete usrmapper=<ip_addr>[,usrmapper=<ip_addr>...]
Deletes the IP addresses of secondary Usermapper hosts from the
CIFS configuration.
-update {<share_name>|<path>}
Updates the attributes and their CIFS names for COMPAT file
systems. For every file system, CIFS maintains certain attributes for
which there are no NFS equivalents. Updating CIFS attributes
updates file attributes and CIFS names by searching the
subdirectories of the defined share or path, generating a listing of
Microsoft client filenames (M8.3 and M256), and converting them to
a format that CIFS supports. It is not necessary to use this command
for DIR3 file systems. Options include:
[mindirsize=<size>]
Updates the directories with the minimum size specified. Size
must be typed in multiples of 512 bytes. A value of 0 ensures that
all directories are rebuilt.
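Because mindirsize must be a multiple of 512 bytes, an arbitrary value has to be rounded up before use. The helper below is an illustrative sketch of that rounding rule (the function name is ours, not the CLI's).

```python
def normalize_mindirsize(size_bytes: int) -> int:
    # mindirsize must be typed in multiples of 512 bytes; this
    # sketch rounds a value up to the next valid multiple. A value
    # of 0 stays 0, which rebuilds all directories.
    return -(-size_bytes // 512) * 512
```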
[force]
Forces a previous update to be overwritten.
Caution: The initial conversion of a directory can take considerable time
when the directory contains a large number of files. Although the
process is designed to take place in the background, an update
should be run only during periods of light system usage.
-Migrate {<fs_name> -acl|<netbios_servername> -localgroup}
<src_domain>{:nb=<netbios>|:if=<interface>}
<dst_domain>{:nb=<netbios>|:if=<interface>}
Updates all security IDs (SIDs) from a <src_domain> to the SIDs of a
<dst_domain> by matching the user and group account names in the
source domain to the user and group account names in the
destination domain. The interface that is specified in this option
queries the local server, then its corresponding source and target
Domain Controllers to look up each object's SID.
If -acl is specified, all secure IDs in the ACL database are migrated for
the specified file system.
The -localgroup option must be used to migrate the SID members of
local group defined for the specified NetBIOS name.
On both the source and the destination domain, the interface used
to issue the SID lookup is specified by either the NetBIOS name or
the interface name.
-Replace {<fs_name> -acl|<netbios_servername>
-localgroup}{:nb=<netbios>|:if=<interface>}
Replaces the history SIDs from the old domain with the new SIDs in
the new domain. An interface that can be specified to issue a lookup
of the SIDs is defined by the interface name or the NetBIOS name.
The -localgroup option must be used to migrate the SID members of
the local group defined for the specified NetBIOS name. When the
-Replace option is used, the user or group migrated in the new
domain keeps their old SID in addition to the new SID created in the
new domain.
The -localgroup option does the same kind of migration for a
specified NetBIOS name in the local groups (instead of the ACL in a
file system for the history argument).
-smbhash -hashgen <path> [-recursive] [-minsize <size>]
Triggers the generation of all SMB Hash Files for this path. Both
BranchCache V1 and BranchCache V2 hash files are generated. This path is an
absolute path from the root of the VDM.
If the path is a file, only the SMB Hash File for this file is generated.
If the path is a directory, SMB Hash Files are generated for all files
in this directory. Additionally, if the -recursive option is specified,
SMB Hash Files are generated recursively inside the sub-directories.
By default, only files larger than 64 KB are considered. If the -minsize
option is specified, all files greater than or equal to the specified
size in KB are considered. Any size smaller than 64 KB is ignored. SMB
Hash Files are generated only if they are missing or obsolete.
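The 64 KB floor described above amounts to a simple clamp, sketched here as illustrative Python (the helper name is an assumption, not a CLI artifact):

```python
def effective_minsize_kb(requested_kb=None) -> int:
    # Restates the rule above: the default floor is 64 KB, and any
    # -minsize below 64 KB is ignored (the floor applies instead).
    floor_kb = 64
    if requested_kb is None or requested_kb < floor_kb:
        return floor_kb
    return requested_kb
```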
The hash file generation is asynchronous, so the command will reply
immediately. Use -info or check the system event log to monitor if the request
has been completed.
-smbhash -hashdel <path> [-recursive]
Triggers the deletion of all SMB Hash Files for this path. Both
BranchCache V1 and BranchCache V2 hash files are deleted. This path is an
absolute path from the root of the VDM.
If the path is a file, only the SMB Hash File for this file is deleted.
If the path is a directory, SMB Hash Files are deleted for all files
in this directory. Additionally, if the -recursive option is specified,
SMB Hash Files are deleted recursively inside the sub-directories.
The hash file deletion is asynchronous, so the command will reply immediately.
Use -info or check the system event log to monitor if the request has been
completed.
-smbhash -abort <id>
Cancels a pending or ongoing request (generation or deletion), given
its ID. The request ID is obtained from the output of the -info command.
-smbhash -info
Displays information about the hash generation service:
* The list of pending requests with their IDs.
* The list of requests currently being processed, with their IDs.
* The values of the parameters currently in use.
* The value of the GPO setting taken into account for each server.
* Statistics
-smbhash -fsusage <fs_name>
Displays the SMB Hash File disk usage of the specified file system. The return
values are:
* Total size in bytes of the file system
* Usage in bytes of the SMB Hash Files of the file system
* Usage in percentage of the file system of the SMB Hash Files
-smbhash -exclusionfilter <filter>
Files that match the exclusion filter do not have an SMB Hash File
generated. This avoids wasting resources on files that change
frequently, such as temporary files.
This command directly modifies the ExclusionFilter parameter,
defined with the following format:
Type: REG_STRING
Meaning: Hash files are not generated for files that match one of the
specified filters. The comparison between this parameter and
the filename is case-insensitive. Any change is taken into
account immediately.
Values: Default is no filter. A filter is a list of items separated by
a colon (:). Each item is made of:
- Any valid character for a filename
- *: means any string
- ?: means any character
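The filter format above behaves like classic shell-style globbing, so it can be illustrated with Python's fnmatch module. This is a sketch of the documented matching rules (the helper name is ours), not the firmware implementation.

```python
from fnmatch import fnmatchcase

def is_excluded(filename: str, exclusion_filter: str) -> bool:
    # Items are ':'-separated patterns where '*' matches any string
    # and '?' any single character; the comparison is documented as
    # case-insensitive, so both sides are lowercased here.
    name = filename.lower()
    return any(fnmatchcase(name, item.lower())
               for item in exclusion_filter.split(':') if item)
```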
-smbhash -audit { enable | disable } [-service] [-task] [-access]
Enables or disables the generation of audits in the smbhash event log.
By default, audit generation is disabled. The parameters are one of the
following:
* enable: Enables generation of the specified events. If no event is
specified in the optional list, all events are enabled.
* disable: Disables generation of the specified events. If no event is
specified in the optional list, all events are disabled.
The optional list of event categories is:
- service: Generates service events
- task: Generates task events
- access: Generates SMB Hash access events
-smbhash -service {enable | disable}
Enables or disables the SMB hash generation service (the service is
started by default). If the CIFS service is started, this command takes
effect immediately. If CIFS is not running, the command is executed at
the next "cifs start".
-smbhash -cleanup <fs_name> [-all | -unusedfor <days> | -unusedsince <date>]
Cleans up the SMB Hash Files of the specified file system.
* If no option is specified, only obsolete SMB Hash Files are removed.
* If -all option is specified, the entire "smbhash" directory is removed.
* If -unusedfor <days> option is specified, obsolete SMB Hash Files plus SMB
Hash Files not accessed since the specified number of days are removed.
* If -unusedsince <date> option is specified, SMB Hash Files not accessed
since the specified date are removed. The format of the date is
<YYMMDDHHMM>.
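The <YYMMDDHHMM> layout above can be parsed with a standard strptime pattern; the sketch below is illustrative (strptime's %y maps a two-digit year, e.g. '16' to 2016).

```python
from datetime import datetime

def parse_unusedsince(stamp: str) -> datetime:
    # The -unusedsince date uses the <YYMMDDHHMM> layout.
    return datetime.strptime(stamp, "%y%m%d%H%M")
```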
-setspn {-list [server=<full_comp_name>]
| -add <SPN> compname=<comp_name>,
domain=<full_domain_name>,admin=<admin_name>
| -delete <SPN> compname=<comp_name>,
domain=<full_domain_name>,admin=<admin_name>}
Displays all SPNs for the specified FQDN server, both for the Data
Mover and for the KDC Windows Active Directory entry. If no server
is specified, the SPNs for all joined CIFS servers for the specified
movername are displayed. The command fails if an error occurs, for
example, if it is unable to connect to the Active Directory or the
specified server is not joined to the domain.
When the -add and -delete sub-options are used, the user is
prompted for the password associated with the admin name. The
SPN must be the full value to use, including the realm.
The -add sub-option attempts to add the specified SPN to both the
Data Mover and Active Directory. The operation succeeds if the SPN
is added to both the Data Mover and Active Directory. If an entry
already exists in one of these places, it is not duplicated. Otherwise,
the operation fails if an error occurs, for example, if it is unable to
connect to the Active Directory, the specified server is not joined to
the domain, or the admin password is incorrect.
The -delete sub-option attempts to remove the specified SPN from
both the Data Mover and Active Directory. The operation succeeds if
the SPN is removed from both the Data Mover and Active Directory.
If the entry has already been deleted, it is not considered an error.
Otherwise, the operation fails if an error occurs, for example, if it is
unable to connect to the Active Directory, the specified server is not
joined to the domain, or the admin password is incorrect.
SEE ALSO
--------
Using EMC Utilities for the CIFS Environment, Managing a
Multiprotocol Environment on VNX, Using VNX Replicator, Using
International Character Sets on VNX for File, server_date, server_export,
server_mount, and server_setup.
OUTPUT NOTE
-----------
The network interface that appears in the output is dependent on the
type of network interface cards that are installed. Dates appearing in
the output are in UTC format.
EXAMPLE #1
----------
To display the number and names of open files on server_2, type:
$ server_cifs server_2 -o audit,full
AUDIT Ctx=0xdffcc404, ref=2, Client(fm-main07B60004) Port=36654/139
NS40_1[BRCSLAB] on if=cge0_new
CurrentDC 0xceeab604=W2K3PHYAD
Proto=NT1, Arch=UNKNOWN, RemBufsz=0xfefb, LocBufsz=0xffff, popupMsg=1
0 FNN in FNNlist NbUsr=1 NbCnx=0
Uid=0x3f NTcred(0xcf156a04 RC=1 NTLM Capa=0x401) ’BRCSLAB\gustavo’ CHECKER
AUDIT Ctx=0xde05cc04, ref=2, XP Client(BRCSBARREGL1C) Port=1329/445
NS40_1[BRCSLAB] on if=cge0_new
CurrentDC 0xceeab604=W2K3PHYAD
Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff, popupMsg=1
0 FNN in FNNlist NbUsr=1 NbCnx=2
Uid=0x3f NTcred(0xceeabc04 RC=3 NTLMSSP Capa=0x11001) ’BRCSLAB\gustavo’
CHECKER
Cnxp(0xceeaae04), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
readOnly=0, umask=22, opened files/dirs=0
Cnxp(0xde4e3204), Name=gustavo, cUid=0x3f Tid=0x41, Ref=1, Aborted=0
readOnly=0, umask=22, opened files/dirs=2
Fid=64, FNN=0x1b0648f0(FREE,0x0,0), FOF=0x0 DIR=\
Notify commands received:
Event=0x17, wt=0, curSize=0x0, maxSize=0x20, buffer=0x0
Tid=0x41, Pid=0xb84, Mid=0xec0, Uid=0x3f, size=0x20
Fid=73, FNN=0x1b019ed0(FREE,0x0,0), FOF=0xdf2ae504 (CHECK) FILE=\New Wordpad
Document.doc
EXAMPLE #2
----------
To configure CIFS service on server_2 with a NetBIOS name of
dm110-cge0, in the NT4 domain NASDOCS, with a NetBIOS alias of
dm110-cge0a1, hiding the NetBIOS name in the Network
Neighborhood, with the interface for CIFS service as cge0, the WINS
server as 172.24.102.25, and with the comment string EMC Celerra,
type:
$ server_cifs server_2 -add
netbios=dm110-cge0,domain=NASDOCS,alias=dm110-cge0a1,hidden=y,interface=cge0,wi
ns=172.24.102.25
-comment "EMC Celerra"
server_2 : done
EXAMPLE #3
----------
To enable the home directory on server_2, type:
$ server_cifs server_2 -option homedir
server_2 : done
EXAMPLE #4
----------
To add the WINS servers, 172.24.103.25 and 172.24.102.25, type:
$ server_cifs server_2 -add wins=172.24.103.25,wins=172.24.102.25
server_2 : done
EXAMPLE #5
----------
To rename the NetBIOS name from dm110-cge0 to dm112-cge0, type:
$ server_cifs server_2 -rename -netbios dm110-cge0 dm112-cge0
server_2 : done
EXAMPLE #6
----------
To display the CIFS configuration for NT4 with Internal Usermapper, type:
$ server_cifs server_2
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active port:14640 (auto discovered)
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
DOMAIN NASDOCS RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=2 time=0 ms
CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM110-CGE0A1
Comment=’EMC Celerra’
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
wins=172.24.102.25
Password change interval: 0 minutes
Where:
Cifs threads started
    Number of CIFS threads used when the CIFS service was started.
Security mode
    User authorization mechanism used by the Data Mover.
Max protocol
    Maximum dialect supported by the security mode.
I18N mode
    I18N mode (unicode or ASCII).
Home Directory Shares
    Whether Home Directory shares are enabled.
map
    Home directory used by the Data Mover.
Usermapper auto broadcast enabled
    Usermapper is using its broadcast mechanism to discover its
    servers. This only displays when the mechanism is active. It is
    disabled when you manually set the Usermapper server addresses.
Usermapper
    IP address of the servers running the Usermapper service.
state
    Current state of Usermapper.
Default WINS servers
    Addresses of the default WINS servers.
Enabled interfaces
    Data Mover's enabled interfaces.
Disabled interfaces
    Data Mover's disabled interfaces.
Unused Interfaces
    Interfaces not currently used by the Data Mover.
RC
    Reference count indicating the number of internal objects (such
    as client contexts) using the CIFS server.
SID
    Security ID of the domain.
DC
    Domain controllers used by the Data Mover. Depending on the
    number of DCs in the domain, this list may be large.
ref
    Number of internal objects using the Domain Controller.
time
    Domain Controller response time.
Aliases
    Alternate NetBIOS names assigned to the CIFS server
    configuration.
if
    Interfaces used by the CIFS server.
Password change interval
    The amount of time between password changes.
EXAMPLE #7
----------
To display the CIFS configuration for NT4, type:
$ server_cifs server_2
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
DOMAIN NASDOCS RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=2 time=0 ms
CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM110-CGE0A1
Comment=’EMC Celerra’
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
wins=172.24.102.25
Password change interval: 0 minutes
EXAMPLE #8
----------
To add a Windows server using the compname dm112-cge0, in the
Active Directory domain nasdocs.emc.com, with a NetBIOS alias of
dm112-cge0a1, hiding the NetBIOS name in the Network
Neighborhood, with the interface for CIFS service as cge0, the WINS
servers as 172.24.102.25 and 172.24.103.25, in the DNS domain
nasdocs.emc.com, and with the comment string EMC Celerra, type:
$ server_cifs server_2 -add
compname=dm112-cge0,domain=nasdocs.emc.com,alias=dm112-cge0a1,hidden=y,
interface=cge0,wins=172.24.102.25:172.24.103.25,dns=nasdocs.emc.com
-comment "EMC Celerra"
server_2 : done
EXAMPLE #9
----------
To join dm112-cge0 into the Active Directory domain nasdocs.emc.com, using
the Administrator account, and to add this server to the
Engineering\Computers organizational unit, type:
$ server_cifs server_2 -Join
compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator,ou="ou=Computers
:ou=Engineering"
server_2 : Enter Password:********
done
EXAMPLE #10
-----------
To add the NFS service to the CIFS server in order to make it possible for
NFS users to access the Windows KDC, type:
$ server_cifs server_2 -Join
compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator -option
addservice=nfs
server_2 : Enter Password:********
done
EXAMPLE #11
-----------
To enable the cge1 interface, type:
$ server_cifs server_2 -Enable cge1
server_2 : done
EXAMPLE #12
-----------
To display CIFS information for a Data Mover in a Windows domain
with internal usermapper, type:
$ server_cifs server_2
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active (auto discovered)
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e
DOMAIN NASDOCS FQDN=nasdocs.emc.com SITE=Default-First-Site-Name RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=3 time=1 ms (Closest Site)
CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=’EMC Celerra’
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 15:59:17 2005
Password versions: 2
EXAMPLE #13
-----------
To display CIFS information for a Data Mover in a Windows domain, type:
$ server_cifs server_2
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e
DOMAIN NASDOCS FQDN=nasdocs.emc.com SITE=Default-First-Site-Name RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=3 time=1 ms (Closest Site)
CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=’EMC Celerra’
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 16:29:21 2005
Password versions: 3, 2
EXAMPLE #14
-----------
To display CIFS information for a Data Mover when CIFS service is not
started, type:
$ server_cifs server_2
server_2 :
Cifs NOT started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e
CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=’EMC Celerra’
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 16:29:21 2005
Password versions: 3, 2
EXAMPLE #15
-----------
To add a Windows server named dm112-cge0, in the Active
Directory domain nasdocs.emc.com, with the interface for CIFS
service as cge0, and with local user support enabled, type:
$ server_cifs server_2 -add
compname=dm112-cge0,domain=nasdocs.emc.com,interface=cge0,local_users
server_2 : Enter Password:********
Enter Password Again:********
done
EXAMPLE #16
-----------
To set the security mode to NT for a Data Mover, type:
$ server_cifs server_2 -add security=NT
server_2 : done
EXAMPLE #17
-----------
To disable a CIFS interface, type:
$ server_cifs server_2 -Disable cge1
server_2 : done
EXAMPLE #18
-----------
To display CIFS audit information for a Data Mover, type:
$ server_cifs server_2 -option audit
server_2 :
|||| AUDIT Ctx=0xad3d4820, ref=1, W2K3 Client(WINSERVER1) Port=1638/139
||| DM112-CGE0[NASDOCS] on if=cge0
||| CurrentDC 0xad407620=WINSERVER1
||| Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff
||| 0 FNN in FNNlist NbUsr=1 NbCnx=1
||| Uid=0x3f NTcred(0xad406a20 RC=2 KERBEROS Capa=0x2) ’NASDOCS\administrator’
|| Cnxp(0xad3d5420), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
| readOnly=0, umask=22, opened files/dirs=1
|||| AUDIT Ctx=0xad43c020, ref=1, W2K3 Client(172.24.102.67) Port=1099/445
||| DM112-CGE0[NASDOCS] on if=cge0
||| CurrentDC 0xad407620=WINSERVER1
||| Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff
||| 0 FNN in FNNlist NbUsr=1 NbCnx=1
||| Uid=0x3f NTcred(0xad362c20 RC=2 KERBEROS Capa=0x2) ’NASDOCS\user1’
|| Cnxp(0xaec21020), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
| readOnly=0, umask=22, opened files/dirs=2
Where:
Value              Definition
Ctx                Address in memory of the Stream Context.
ref                Reference counter of components using this context at this time.
Port               The client port and the Data Mover port used in the current TCP connection.
CurrentDC          Address of the Domain Controller that is currently used.
Proto              Dialect level that is currently used.
Arch               Type of the client OS.
RemBufsz           Max buffer size negotiated by the client.
LocBufsz           Max buffer size negotiated by the Data Mover.
FNN/FNNlist        Number of blocked files that have not yet been checked by Virus Checker.
NbUsr              Number of sessions connected to the stream context (TCP connection).
NbCnx              Number of connections to shares for this TCP connection.
Uid/NTcred         User Id (this number is not related to the UNIX UID used to create a file), the credential address, and the type of authentication.
Cnxp/Name          Share connection address and the name of the share the user is connecting to.
cUid               User Id of the user who opened the connection first.
Tid                Tree Id (number which represents the share connection in any protocol request).
Aborted            Status of the connection.
readOnly           Whether the share connection is read-only.
umask              The user file-creation mask.
opened files/dirs  Number of files or directories opened on this share connection.
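Because the audit listing is line-oriented, the client name and port pair can be pulled out with standard text tools on the Control Station. A minimal bash sketch (audit_clients is a made-up helper name, and the line format is assumed to match the example output above):

```shell
# Hypothetical helper: extract "client port/port" pairs from
# server_cifs -option audit output (format assumed from the example).
audit_clients() {
  sed -n 's/.*Client(\([^)]*\)) Port=\([0-9]*\/[0-9]*\).*/\1 \2/p'
}

# Demo against one audit line copied from the example output:
printf '%s\n' '|||| AUDIT Ctx=0xad3d4820, ref=1, W2K3 Client(WINSERVER1) Port=1638/139' \
  | audit_clients
```

In practice you would pipe the full `server_cifs server_2 -option audit` output through the same filter.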
EXAMPLE #19
----------To unjoin the computer dm112-cge0 from the nasdocs.emc.com domain, type:
$ server_cifs server_2 -Unjoin
compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator
server_2 : Enter Password:********
done
EXAMPLE #20
----------To delete WINS servers, 172.24.102.25, and 172.24.103.25, type:
$ server_cifs server_2 -delete wins=172.24.102.25,wins=172.24.103.25
server_2 : done
EXAMPLE #21
----------To delete a NetBIOS name, dm112-cge0, type:
$ server_cifs server_2 -delete netbios=dm112-cge0
server_2 : done
EXAMPLE #22
----------To delete the compname, dm112-cge0, type:
$ server_cifs server_2 -delete compname=dm112-cge0
server_2 : done
EXAMPLE #23
----------To delete the usermapper, 172.24.102.20, type:
$ server_cifs server_2 -delete usrmapper=172.24.102.20
server_2 : done
EXAMPLE #24
----------To add and join a Windows server in disjoint DNS and Windows domains, type:
$ server_cifs server_2 -add
compname=dm112-cge0,domain=nasdocs.emc.com,interface=cge0,dns=eng.emc.com
-comment "EMC Celerra"
$ server_cifs server_2 -Join
compname=dm112-cge0.eng.emc.com,domain=nasdocs.emc.com,admin=Administrator
EXAMPLE #25
----------To add a Windows server using a delegated account from a trusted domain, type:
$ server_cifs server_2 -Join
compname=dm112-cge0,domain=nasdocs.emc.com,admin=delegateduser@it.emc.com
server_2 : Enter Password:********
done
EXAMPLE #26
----------To add a Windows server in the Active Directory domain using a
pre-created computer account, type:
$ server_cifs server_2 -Join
compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator -option reuse
server_2 : Enter Password:********
done
EXAMPLE #27
----------To update the directory /ufs1/users with a new minimum directory size of
8192, type:
$ server_cifs server_2 -update /ufs1/users mindirsize=8192
server_2 : done
EXAMPLE #28
----------To migrate all SIDs in the ACL database for file system, ufs1, from the
<src_domain>, eng.emc.com:nb=dm112-cge1:if=cge1 to the <dst_domain>,
nasdocs.emc.com:nb=dm112-cge0:if=cge0, type:
$ server_cifs server_2 -Migrate ufs1 -acl eng.emc.com:nb=dm112-cge1:if=cge1
nasdocs.emc.com:nb=dm112-cge0:if=cge0
server_2 : done
EXAMPLE #29
----------To migrate SIDs of members of the local group defined for the specified
NetBIOS name, from the <src_domain>, eng.emc.com:nb=dm112-cge1:if=cge1 to
the <dst_domain>, nasdocs.emc.com:nb=dm112-cge0:if=cge0, type:
$ server_cifs server_2 -Migrate dm112-cge1 -localgroup
eng.emc.com:nb=dm112-cge1:if=cge1 nasdocs.emc.com:nb=dm112-cge0:if=cge0
server_2 : done
EXAMPLE #30
----------To replace the SIDs for ufs1, type:
$ server_cifs server_2 -Replace ufs1 -acl :nb=dm112-cge0:if=cge0
server_2 : done
EXAMPLE #31
----------To configure a stand-alone CIFS server on server_2 with a NetBIOS
name of dm112-cge0, in the workgroup NASDOCS, with a NetBIOS
alias of dm112-cge0a1, hiding the NetBIOS name in the Network
Neighborhood, with the interface for CIFS service as cge0, the WINS
servers as 172.24.102.25 and 172.24.103.25, and with enabled local
users support, type:
$ server_cifs server_2 -add
standalone=dm112-cge0,workgroup=NASDOCS,alias=dm112-cge0a1,hidden=y,interface=cge0,wins=172.24.102.25:172.24.103.25,local_users
server_2 : Enter Password:********
Enter Password Again:********
done
EXAMPLE #32
----------To delete the standalone CIFS server, dm112-cge0, type:
$ server_cifs server_2 -delete standalone=dm112-cge0
server_2 : done
EXAMPLE #33
----------To display a summary of SMB statistics, type:
$ server_cifs server_2 -stats -summary
server_2 :
State info:
  Open connection  Open files
  2                2
SMB total requests:
  totalAllSmb      10038
  totalSmb         6593
  totalTrans2Smb   3437
  totalTransNTSmb  8
EXAMPLE #34
----------To display all non-zero CIFS statistics, type:
$ server_cifs server_2 -stats
server_2 :
SMB statistics:
proc          ncalls  %totcalls  maxTime  ms/call
Close         1305    7.96       46.21    2.16
Rename        2       0.01       0.81     0.50
Trans         314     1.91       0.77     0.08
Echo          21      0.13       0.01     0.00
ReadX         231     1.41       0.03     0.00
WriteX        3697    22.54      39.96    0.98
Trans2Prim    9375    57.16      34.27    0.46
TreeDisco     10      0.06       0.06     0.00
NegProt       29      0.18       0.42     0.24
SessSetupX    47      0.29       60.55    5.81
UserLogoffX   9       0.05       0.01     0.00
TreeConnectX  13      0.08       0.39     0.23
TransNT       8       0.05       0.01     0.00
CreateNTX     1338    8.16       47.11    0.81
CancelNT      1       0.01       0.03     0.00
Trans2 SMBs:
proc         ncalls  %totcalls  maxTime  ms/call
FindFirst    22      0.23       0.22     0.09
QFsInfo      3154    33.65      0.08     0.05
QPathInfo    1113    11.87      6.73     0.15
QFileInfo    2077    22.16      0.04     0.02
SetFileInfo  3007    32.08      34.26    1.28
NT SMBs:
proc          ncalls  %totcalls  maxTime  ms/call
NotifyChange  8       100.00     0.01     0.00
Performance info:
  Read  Re/s       Write  Wr/s     All    Ops/sec
  231   231000.00  3697   1021.27  25783  1575.40
State info:
  Open connection  Open files
  2                2
Shadow info:
  Reads  Writes  Splits  Extinsert  Truncates
  0      0       0       0          0
SMB total requests:
  totalAllSmb                    25783
  totalSmb                       16400
  totalTrans2Smb                 9375
  totalTransNTSmb (unsupported)  8
Where:
Value      Definition
proc       Name of CIFS requests received.
ncalls     Number of requests received.
%totcalls  Percentage of this type of request compared to all requests.
maxTime    Maximum amount of time used.
ms/call    Average time in milliseconds taken to service calls.
failures   Number of times the call has failed.
Read       Total number of read operations.
Re/s       Number of read operations per second.
Write      Total number of write operations.
Wr/s       Number of write operations per second.
EXAMPLE #35
----------To reset to zero the values for all SMB statistics, type:
$ server_cifs server_2 -stats -zero
server_2 : done
EXAMPLE #36
----------To configure CIFS service in a language that uses multibyte characters, type:
$ server_cifs server_2 -add compname=<computer_name_in_local_language_text>,
domain=nasdocs.emc.com -comment <comment_in_local_language_text>
server_2 : done
EXAMPLE #37
----------To enable the SMB3 protocol, type:
$ server_cifs server_2 -add security=NT,dialect=SMB3
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = SMB3.0
I18N mode = UNICODE
EXAMPLE #38
----------To disable both SMB2 and SMB3, type:
$ server_cifs server_2 -add security=NT,dialect=NT1
server_2 : done
------------------------------------------------------Last Modified: September 28, 2011 12:10 pm
server_cifssupport
Provides support services for CIFS users.
SYNOPSIS
-------server_cifssupport {<movername>|ALL}
-accessright
{-name <name> [-domain <domain_name>]
| -sid <SID>|-uname <unix_name>|-uid <user_id>}
{-path <pathname>|-share <sharename>}
[-policy {mixed|native|secure|nt|unix}]
[-build [-admin <admin_name>]]
[-netbios <netbios_servername>|-compname <comp_name>
| -standalone <netbios_name>]
| -acl {-path <pathname>|-share <sharename>} [-verbose]
| -cred
{-name <name> [-domain <domain_name>]
| -sid <SID>|-uname <unix_name>|-uid <user_id>}
[-build [-ldap][-admin <admin_name>]]
[-netbios <netbios_servername>|-compname <comp_name>
| -standalone <netbios_name>]
| -pingdc
{-netbios <netbios_servername>|-compname <comp_name>}
[-dc <netbios_Dcname>]
[-verbose]
| -secmap
-list
[-name <name> [-domain <domain_name>]
| -domain <domain_name>
| -sid <SID>
| -uid <user_id>
| -gid <group_id>]
| -create
{-name <name> [-domain <domain_name>]
| -sid <SID>}
| -verify
{-name <name> [-domain <domain_name>]
| -sid <SID>}
| -update
{-name <name> [-domain <domain_name>]
| -sid <SID>}
| -delete
{-name <name> [-domain <domain_name>]
| -sid <SID>}
| -export [-file <filename>]
| -import -file <filename>
| -report
| -migration
DESCRIPTION
----------server_cifssupport checks network connectivity between a CIFS server
and a domain controller, manages access rights, generates credentials, and
manages the secure mapping cache.
The -accessright option:
* Displays user access rights to a file, directory, or share in a
Windows permission mask.
* Rebuilds and displays a credential for users of a file,
directory, or share, who do not have a session opened in one of the CIFS
servers.
* Shows how user permissions would change if you were to reset the
access-checking policy on a file system object, without affecting
the actual policy on that object.
The -acl option displays the access control list (ACL) of files,
directories, or shares in plain text form.
The -cred option generates a credential containing all groups to
which a user belongs, including local groups, without the user being
connected to a CIFS server. This lets you verify that a user's SIDs
are correctly mapped to UNIX UIDs and GIDs, and helps troubleshoot
user access control issues.
The -pingdc option checks the network connectivity between a CIFS
server and a domain controller then verifies that a CIFS server can
access and use the following domain controller services:
* IPC$ share logon
* Secure Channel when verifying domain users during NT LAN
Manager (NTLM) authentication
* Local Security Authority (LSA) pipe information when mapping
Windows SIDs to UNIX UIDs and GIDs
* SAMR (Remote Security Account Manager) pipe when merging a
user’s UNIX and Windows groups together to create a credential
* Trusted domain information
* Privilege names for internationalization
The -secmap option manages the secure mapping (secmap) cache.
Secmap contains all mappings between SIDs and UIDs/GIDs used by a
Data Mover or Virtual Data Mover (VDM). The Data Mover
permanently caches all mappings it receives from any mapping
mechanism (local files, NIS, iPlanet, Active Directory, and
Usermapper) in the secmap database, making responses to subsequent
mapping requests faster and less susceptible to network problems.
Reverse mapping provides better quota support.
ACCESS RIGHT OPTIONS
--------------------accessright {-name <name> [-domain <domain_name>]|
-sid <SID>|-uname <unix_name>|-uid <user_id>}
{-path <pathname>|-share <sharename>}
Displays user access rights to a file, directory, or share in a Windows
permission mask for the specified:
* Windows username and the optional domain to which the user belongs
or
* <SID>, which is the user's Windows security identifier
or
* <unix_name>, which is the user's UNIX name
or
* <user_id>, which is the user's UNIX identifier
The -path option specifies the path of the file or directory to check for
user permissions, or the absolute path of the share to check for user
permissions.
[-policy {mixed|native|secure|nt|unix}]
Specifies an access-checking policy for the specified file, directory,
or share. This does not change the current access-checking policy;
instead, it helps you anticipate any access problems before
actually resetting the policy on a file system object. server_cifs
provides more information.
[-build [-admin <admin_name>]]
Rebuilds a credential for a user of a file, directory, or share, who
does not have a session opened in one of the CIFS servers. If
-build is not specified, the system searches the known user
credentials in cache. If none are found, an error message is
generated. The -admin option specifies the name of an
administrative user to use for creating the access right list. The
password of the admin_name user is prompted when executing
the command.
[-netbios <netbios_servername>|-compname <comp_name>|-standalone
<netbios_name>]
Indicates the CIFS server, as specified by its NetBIOS name or
computer name to use when rebuilding the user credential.
The -standalone option specifies the stand-alone CIFS server, as
specified by its name, to use when rebuilding a user credential.
Note: If no CIFS server is specified, the system uses the default CIFS
server, which uses all interfaces not assigned to other CIFS servers on the
Data Mover.
ACL OPTIONS
-----------acl {-path <pathname>|-share <sharename>}[-verbose]
Displays the ACL of a file, directory, or a share in plain text form.
Windows or UNIX access control data are both displayed in their
native forms. The -verbose option displays the ACE access rights
mask in plain text form in addition to their native forms.
CREDENTIAL OPTIONS
------------------cred {-name <name> [-domain <domain_name>]|-sid
<SID>|-uname <unix_name>|-uid <user_id>}
Generates a credential containing all of the groups to which a user
belongs without being connected to a CIFS server. The credential is
specified by the user's:
* Windows username and the domain to which the user belongs
or
* Windows security identifier
or
* UNIX name
or
* UNIX identifier
[-build [-ldap][-admin <admin_name>]]
Rebuilds a user credential. If -build is not specified, the system
searches the known user credentials in cache. If none are found,
an error message is generated. The -ldap option retrieves the
user's universal groups to be included in the credential. If none
are found, no universal groups are incorporated into the
credential. The -admin option indicates the name of an
administrative user for creating the credential. The password of
the <admin_name> is prompted when executing the command.
[-netbios <netbios_servername>|-compname <comp_name>|-standalone
<netbios_name>]
Indicates the CIFS server, as specified by its NetBIOS name or
computer name to use when rebuilding the user credential.
The -standalone option specifies the stand-alone CIFS server to
use when rebuilding a user credential.
Note: If no CIFS server is specified, the system uses the default CIFS
server, which uses all interfaces not assigned to other CIFS servers on the
Data Mover.
PINGDC OPTIONS
--------------pingdc {-netbios <netbios_servername>|-compname
<comp_name>}
Checks the network connectivity for the CIFS server as specified by
its NetBIOS name or by its computer name with a domain controller.
Once connectivity is established, it verifies that a CIFS server can
access and use the domain controller services.
Note: An IP address can be used for the <netbios_servername> and the
<comp_name>.
[-dc <netbios_Dcname>]
Indicates the domain controller to ping for network and resource
connectivity with the CIFS server. If not specified, the command
checks the domain controllers currently in use by the CIFS server.
Note: An IP address can be used for the <netbios_Dcname>.
[-verbose]
Adds troubleshooting information to the command output.
SECMAP OPTIONS
--------------secmap -list
Lists the secmap mapping entries.
-secmap -list -name <name> -domain <domain_name>
Lists the secmap mapping entries with the specified name and
domain name.
-secmap -list -domain <domain_name>
Lists the secmap mapping entries with the specified domain name.
-secmap -list -sid <SID>
Lists the secmap mapping entries with the specified SID.
-secmap -list -uid <user_id>
Lists the secmap mapping entries with the specified UID (reverse
mapping).
-secmap -list -gid <group_id>
Lists the secmap mapping entries with the specified GID (reverse
mapping).
-secmap -create {-name <name> [-domain <domain_name>]}
Creates the secmap mapping entry with the specified name and
domain name.
-secmap -create -sid <SID>
Creates the secmap mapping entry with the specified SID.
-secmap -verify {-name <name> [-domain <domain_name>]}
Compares the mapping entry stored in secmap with the specified name
and optional domain name against what is currently available in the
mapping sources. If a mapping has changed, it is marked.
-secmap -verify -sid <SID>
Checks the secmap mapping entry with the specified SID.
-secmap -update {-name <name> [-domain <domain_name>]}
Updates the specified mapping entry stored in secmap with the
mappings currently available in the mapping sources. After this
operation, force an update of the Data Mover's file system ACLs
so that the new mappings are recognized.
-secmap -update -sid <SID>
Updates the secmap mapping entry with the specified SID.
-secmap -delete -name <name> [-domain <domain_name>]
Deletes the secmap mapping entry with the specified name and
domain name.
-secmap -delete -sid <SID>
Deletes the secmap mapping entry with the specified SID.
-secmap -export [-file <filename>]
Exports the secmap mapping entry to the specified file.
Note: If no filename is specified, the secmap database is displayed on the
screen.
-secmap -import -file <filename>
Imports secmap mapping entries from the specified file.
-secmap -report
Displays current secmap status, including database state, domains
handled by secmap, and resource usage.
-secmap -migration
Displays secmap database migration information like start date and
end date of the operation, and migration status.
SEE ALSO
-------EXAMPLE #1
---------To display user access rights to a file for user1, type:
$ server_cifssupport server_2 -accessright -name user1 -domain NASDOCS
-path /ufs1/test/test.txt
server_2 : done
ACCOUNT GENERAL INFORMATIONS
Name         : user1
Domain       : NASDOCS
Path         : /ufs1/test/test.txt
Allowed mask : 0x200a9
Action       : List Folder / Read data
Action       : Read Extended Attributes
Action       : Traverse Folder / Execute File
Action       : Read Attributes
Action       : Read Permissions
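The Allowed mask reported by -accessright is a standard Windows access mask, and each Action line corresponds to one bit in it. The bash sketch below decodes the read-oriented bits; decode_mask is a hypothetical helper, and the bit-to-action mapping is inferred from these examples rather than taken from eNAS documentation:

```shell
# decode_mask is a made-up helper (not part of eNAS): it expands an
# "Allowed mask" value into Action lines like those printed by
# server_cifssupport -accessright. Bit meanings assumed from Examples #1/#3.
decode_mask() {
  local mask=$(( $1 )) i
  local bits=(0x1 0x8 0x20 0x80 0x20000)
  local names=("List Folder / Read data" "Read Extended Attributes"
               "Traverse Folder / Execute File" "Read Attributes"
               "Read Permissions")
  for i in "${!bits[@]}"; do
    if (( mask & bits[i] )); then
      printf '%s\n' "${names[i]}"
    fi
  done
}

decode_mask 0x200a9   # the mask from Example #1
```

With the mask 0x20089 from Example #3, the same sketch drops the Traverse Folder / Execute File line, matching that example's shorter Action list.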
EXAMPLE #2
---------To rebuild a credential for a user to a file using the SID, type:
$ server_cifssupport server_2 -accessright -sid
S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 -path /ufs1/test/test.txt -build
-compname dm102-cge0
server_2 : done
ACCOUNT GENERAL INFORMATIONS
Path         : /ufs1/test/test.txt
Allowed mask : 0x301ff
Action       : List Folder / Read data
Action       : Create Files / Write data
Action       : Create Folders / Append Data
Action       : Read Extended Attributes
Action       : Write Extended Attributes
Action       : Traverse Folder / Execute File
Action       : Delete Subfolders and Files
Action       : Read Attributes
Action       : Write Attributes
Action       : Delete
Action       : Read Permissions
EXAMPLE #3
---------To display user access rights to a file for user1 with access-checking policy
UNIX, type:
$ server_cifssupport server_2 -accessright -name user1 -domain NASDOCS -path
/ufs1/test/test.txt -policy unix
server_2 : done
ACCOUNT GENERAL INFORMATIONS
Name         : user1
Domain       : NASDOCS
Path         : /ufs1/test/test.txt
Allowed mask : 0x20089
Action       : List Folder / Read data
Action       : Read Extended Attributes
Action       : Read Attributes
Action       : Read Permissions
EXAMPLE #4
---------To rebuild a credential for user1 to a file using an administrative account,
type:
$ server_cifssupport server_2 -accessright -name user1 -domain NASDOCS -path
/ufs1/test/test.txt -build -admin administrator
server_2 : Enter Password:*******
done
ACCOUNT GENERAL INFORMATIONS
Name         : user1
Domain       : NASDOCS
Path         : /ufs1/test/test.txt
Allowed mask : 0x200a9
Action       : List Folder / Read data
Action       : Read Extended Attributes
Action       : Traverse Folder / Execute File
Action       : Read Attributes
Action       : Read Permissions
EXAMPLE #5
---------To display the verbose ACL information of a file, type:
$ server_cifssupport server_2 -acl -path /ufs1/test/test.txt -verbose
server_2 : done
ACL DUMP REPORT
Path      : /ufs1/test/test.txt
UID       : 32770
GID       : 32797
Rights    : rw-r--r--
acl ID    : 0x4
acl size  : 174
owner SID : S-1-5-20-220
group SID : S-1-5-15-b8e641e2-33f0942d-8f03a08f-201
DACL
Owner  : USER 32770 S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4
Access : ALLOWED 0x0 0x1f01ff RWXPDO
Rights : List Folder / Read data
         Create Files / Write data
         Create Folders / Append Data
         Read Extended Attributes
         Write Extended Attributes
         Traverse Folder / Execute File
         Delete Subfolders and Files
         Read Attributes
         Write Attributes
         Delete
         Read Permissions
         Change Permissions
         Take Ownership
         Synchronize
Owner  : USER 32771 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59
Access : ALLOWED 0x0 0x1200a9 R-X---
Rights : List Folder / Read data
         Read Extended Attributes
         Traverse Folder / Execute File
         Read Attributes
         Read Permissions
         Synchronize
EXAMPLE #6
---------To display the access control level of a share, type:
$ server_cifssupport server_2 -acl -share ufs1
server_2 : done
ACL DUMP REPORT
Share  : ufs1
UID    : 0
GID    : 1
Rights : rwxr-xr-x
EXAMPLE #7
---------To generate a credential for user1, type:
$ server_cifssupport server_2 -cred -name user1 -domain NASDOCS
server_2 : done
ACCOUNT GENERAL INFORMATIONS
Name                     : user1
Domain                   : NASDOCS
Primary SID              : S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59
UID                      : 32771
GID                      : 32768
Authentification         : KERBEROS
Credential capability    : 0x2
Privileges               : 0x8
System privileges        : 0x2
Default Options          : 0x2
NT administrator         : False
Backup administrator     : False
Backup                   : False
NT credential capability : 0x2
ACCOUNT GROUPS INFORMATIONS
Type  UNIX ID     Name  Domain  SID
NT    32797                     S-1-5-15-b8e641e2-33f0942d-8f03a08f-201
NT    32798                     S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45
NT    4294967294                S-1-1-0
NT    4294967294                S-1-5-2
NT    4294967294                S-1-5-b
NT    2151678497                S-1-5-20-221
UNIX  32797
UNIX  32798
UNIX  4294967294
UNIX  2151678497
EXAMPLE #8
---------To rebuild a user credential including the user’s universal groups for a
user using SID, type:
$ server_cifssupport server_2 -cred -sid
S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 -build -ldap -compname dm102-cge0
server_2 : done
ACCOUNT GENERAL INFORMATIONS
Name                     :
Domain                   : NASDOCS
Server                   : dm102-cge0
Primary SID              : S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4
UID                      : 32770
GID                      : 32768
Authentification         : NTLM
Credential capability    : 0x0
Privileges               : 0x7f
System privileges        : 0x1
Default Options          : 0xe
NT administrator         : True
Backup administrator     : True
Backup                   : False
NT credential capability : 0x0
ACCOUNT GROUPS INFORMATIONS
Type  UNIX ID     Name              Domain        SID
NT    32794       Group Policy Cre  NASDOCS       S-1-5-15-b8e641e2-33f0942d-8f03a08f-208
NT    32795       Schema Admins     NASDOCS       S-1-5-15-b8e641e2-33f0942d-8f03a08f-206
NT    32796       Enterprise Admin  NASDOCS       S-1-5-15-b8e641e2-33f0942d-8f03a08f-207
NT    32797       Domain Users      NASDOCS       S-1-5-15-b8e641e2-33f0942d-8f03a08f-201
NT    32793       Domain Admins     NASDOCS       S-1-5-15-b8e641e2-33f0942d-8f03a08f-200
NT    4294967294  Everyone                        S-1-1-0
NT    4294967294  NETWORK           NT AUTHORITY  S-1-5-2
NT    4294967294  ANONYMOUS LOGON   NT AUTHORITY  S-1-5-7
NT    2151678496  Administrators    BUILTIN       S-1-5-20-220
NT    2151678497  Users             BUILTIN       S-1-5-20-221
NT    1           UNIX GID=0x1 &ap                S-1-5-12-2-1
UNIX  32794
UNIX  32795
UNIX  32796
UNIX  32797
UNIX  32793
EXAMPLE #9
---------To check the network connectivity for the CIFS server with netbios dm102-cge0,
type:
$ server_cifssupport server_2 -pingdc -netbios dm102-cge0
server_2 : done
PINGDC GENERAL INFORMATIONS
DC SERVER:
Netbios name : NASDOCSDC
CIFS SERVER :
Compname     : dm102-cge0
Domain       : nasdocs.emc.com
EXAMPLE #10
----------To check the network connectivity between the domain controller and the
CIFS server with compname dm102-cge0, type:
$ server_cifssupport server_2 -pingdc -compname dm102-cge0 -dc NASDOCSDC
-verbose
server_2 : done
PINGDC GENERAL INFORMATIONS
DC SERVER:
Netbios name : NASDOCSDC
CIFS SERVER :
Compname     : dm102-cge0
Domain       : nasdocs.emc.com
EXAMPLE #11
----------To display the secmap mapping entries, type:
$ server_cifssupport server_2 -secmap -list
server_2 : done
SECMAP USER MAPPING TABLE
UID    Origin      Date                      Name                   SID
32772  usermapper  Tue Sep 18 19:08:40 2007  NASDOCS\user2          S-1-5-15-b8e641e2-33f0942d-8f03a08f-452
32771  usermapper  Tue Sep 18 17:56:53 2007  NASDOCS\user1          S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59
32770  usermapper  Sun Sep 16 07:50:39 2007  NASDOCS\Administrator  S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4
SECMAP GROUP MAPPING TABLE
GID    Origin      Date                      Name                                 SID
32793  usermapper  Wed Sep 12 14:16:18 2007  NASDOCS\Domain Admins                S-1-5-15-b8e641e2-33f0942d-8f03a08f-200
32797  usermapper  Sun Sep 16 07:50:40 2007  NASDOCS\Domain Users                 S-1-5-15-b8e641e2-33f0942d-8f03a08f-201
32799  usermapper  Mon Sep 17 19:13:16 2007  NASDOCS\Domain Guests                S-1-5-15-b8e641e2-33f0942d-8f03a08f-202
32800  usermapper  Mon Sep 17 19:13:22 2007  NASDOCS\Domain Computers             S-1-5-15-b8e641e2-33f0942d-8f03a08f-203
32795  usermapper  Sun Sep 16 07:50:40 2007  NASDOCS\Schema Admins                S-1-5-15-b8e641e2-33f0942d-8f03a08f-206
32796  usermapper  Sun Sep 16 07:50:40 2007  NASDOCS\Enterprise Admins            S-1-5-15-b8e641e2-33f0942d-8f03a08f-207
32794  usermapper  Sun Sep 16 07:50:40 2007  NASDOCS\Group Policy Creator Owners  S-1-5-15-b8e641e2-33f0942d-8f03a08f-208
32798  usermapper  Mon Sep 17 19:13:15 2007  NASDOCS\CERTSVC_DCOM_ACCESS          S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45
32801  usermapper  Tue Sep 18 19:08:41 2007  NASDOCS\NASDOCS Group                S-1-5-15-b8e641e2-33f0942d-8f03a08f-45b
EXAMPLE #12
----------To display the secmap mapping entry for a user user1 in a domain NASDOCS, type:
$ server_cifssupport server_2 -secmap -list -name user1 -domain NASDOCS
server_2 : done
SECMAP USER MAPPING TABLE
UID    Origin      Date                      Name           SID
32771  usermapper  Tue Sep 18 17:56:53 2007  NASDOCS\user1  S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59
EXAMPLE #13
----------To display the secmap mapping entry for a user with UID 32771, type:
$ server_cifssupport server_2 -secmap -list -uid 32771
server_2 : done
SECMAP USER MAPPING TABLE
UID    Origin      Date                      Name           SID
32771  usermapper  Tue Sep 18 17:56:53 2007  NASDOCS\user1  S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59
EXAMPLE #14
----------To create the secmap mapping entry for user3 in a domain NASDOCS, type:
$ server_cifssupport server_2 -secmap -create -name user3 -domain NASDOCS
server_2 : done
SECMAP USER MAPPING TABLE
UID    Origin      Date                      Name           SID
32773  usermapper  Tue Sep 18 19:21:59 2007  NASDOCS\user3  S-1-5-15-b8e641e2-33f0942d-8f03a08f-a3d
EXAMPLE #15
----------To check the secmap mapping for user1 in a domain NASDOCS, type:
$ server_cifssupport server_2 -secmap -verify -name user1 -domain NASDOCS
server_2 : done
EXAMPLE #16
----------To update the secmap mapping entry for a user using SID, type:
$ server_cifssupport server_2 -secmap -update -sid
S-1-5-15-b8e641e2-33f0942d-8f03a08f-a3d
server_2 : done
EXAMPLE #17
----------To delete the secmap mapping entry for user3, type:
$ server_cifssupport server_2 -secmap -delete -name user3 -domain NASDOCS
server_2 : done
EXAMPLE #18
----------To display current secmap status, type:
$ server_cifssupport server_2 -secmap -report
server_2 : done
SECMAP GENERAL INFORMATIONS
Name        : server_2
State       : Enabled
Fs          : /
Used nodes  : 12
Used blocks : 8192
SECMAP MAPPED DOMAIN
Name     SID
NASDOCS  S-1-5-15-b8e641e2-33f0942d-8f03a08f-ffffffff
EXAMPLE #19
----------To export the secmap mapping entries to the display, type:
$ server_cifssupport server_2 -secmap -export
server_2 : done
SECMAP MAPPING RECORDS
S-1-5-15-b8e641e2-33f0942d-8f03a08f-200:2:96:8019:8019:NASDOCS\Domain Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-201:2:96:801d:801d:NASDOCS\Domain Users
S-1-5-15-b8e641e2-33f0942d-8f03a08f-202:2:96:801f:801f:NASDOCS\Domain Guests
S-1-5-15-b8e641e2-33f0942d-8f03a08f-203:2:96:8020:8020:NASDOCS\Domain Computers
S-1-5-15-b8e641e2-33f0942d-8f03a08f-206:2:96:801b:801b:NASDOCS\Schema Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-207:2:96:801c:801c:NASDOCS\Enterprise Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-208:2:96:801a:801a:NASDOCS\Group Policy Creator Owners
S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45:2:96:801e:801e:NASDOCS\CERTSVC_DCOM_ACCESS
S-1-5-15-b8e641e2-33f0942d-8f03a08f-452:1:96:8004:8000:NASDOCS\user2
S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59:1:96:8003:8000:NASDOCS\user1
S-1-5-15-b8e641e2-33f0942d-8f03a08f-45b:2:96:8021:8021:NASDOCS\NASDOCS Group
S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4:1:96:8002:8000:NASDOCS\Administrator
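The exported records are colon-separated, so they are easy to post-process on the Control Station. A minimal bash sketch follows; note that parse_secmap is a made-up helper name, and the field meanings (second field 1 for users and 2 for groups, fourth and fifth fields hexadecimal UID and GID) are inferred from the example records above, not documented:

```shell
# Hypothetical parser for secmap export records
# (format inferred from Example #19: SID:type:?:uid_hex:gid_hex:name).
parse_secmap() {
  while IFS=: read -r sid type _ uid gid name; do
    kind=group
    [ "$type" = 1 ] && kind=user
    # bash printf accepts 0x-prefixed hex for %d, converting to decimal
    printf '%s %s uid=%d gid=%d %s\n' "$kind" "$sid" "0x$uid" "0x$gid" "$name"
  done
}

parse_secmap <<'EOF'
S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59:1:96:8003:8000:NASDOCS\user1
EOF
```

The hex UID 8003 decodes to 32771, which matches the UID shown for NASDOCS\user1 in Example #12.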
EXAMPLE #20
----------To export the secmap mapping entries to a file, type:
$ server_cifssupport server_2 -secmap -export -file exportfile.txt
server_2 : done
EXAMPLE #21
----------To import the secmap mapping entries from a file, type:
$ server_cifssupport server_2 -secmap -import -file exportfile.txt
server_2 :
Secmap import in progress : #
done
-----------------------------------------------Last Modified: April 06, 2011 3:00 pm
server_cpu
Performs an orderly, timed, or immediate halt or reboot of a Data Mover.
SYNOPSIS
-------server_cpu {<movername>|ALL}
{-halt|-reboot [cold|warm]} [-monitor] <time>
DESCRIPTION
----------server_cpu performs an orderly halt or reboot of the specified Data Mover.
The ALL option executes the command for all Data Movers.
OPTIONS
-------halt
Performs an orderly shutdown of a Data Mover for the VNX. To
restart a Data Mover, perform a -reboot. For the NS series, a -halt
causes a system reboot.
-reboot
Performs an orderly shutdown, and restarts a Data Mover. The
-reboot option defaults to the warm parameter. If a warm reboot
fails, the -reboot option falls back to the cold parameter to
reboot the Data Mover.
[cold]
A cold reboot or a hardware reset shuts down the Data Mover
completely before restarting, including a Power on Self Test
(POST).
[warm]
A warm reboot or a software reset performs a partial shutdown of
the Data Mover, and skips the POST after restarting. A software
reset is faster than the hardware reset.
Caution: Performing a reboot for ALL Data Movers can be time-consuming,
depending on the size of the mounted file system configuration.
-monitor
Polls and displays the boot status until completion of the halt or
reboot.
<time>
Specifies the time when the Data Mover is to be halted or
rebooted. Time is specified as {now|+<min>|<hour>:<min>}.
The now option is used for an immediate shutdown or reboot.
After a power failure and crash recovery, the system reboots itself
at power-up unless previously halted.
SEE ALSO
-------VNX System Operations.
EXAMPLE #1
---------To monitor an immediate reboot of server_2, type:
$ server_cpu server_2 -reboot -monitor now
server_2 : reboot in progress 0.0.0.0.0.0.0.0.0.0.0.3.3.3.3.3.3.4.done
Where:
Value  Definition
0      Reset
1      DOS booted
2      SIB failed
3      Loaded
4      Configured
5      Contacted
7      Panicked
9      Reboot pending
EXAMPLE #2
---------To immediately halt server_2, type:
$ server_cpu server_2 -halt now
server_2 : done
EXAMPLE #3
---------To immediately reboot server_2, type:
$ server_cpu server_2 -reboot now
server_2 : done
EXAMPLE #4
---------To monitor a reboot of server_2, that is set to take place in one minute, type:
$ server_cpu server_2 -reboot -monitor +1
server_2 : reboot in progress ........3.3.3.3.3.done
-------------------------------------Last Modified: April 06, 2011 6:00 pm
server_date
Displays or sets the date and time for a Data Mover, and synchronizes
time between a Data Mover and an external time source.
SYNOPSIS
-------server_date {<movername>|ALL}
[+<format>][<yymmddhhmm>[<ss>]]
| timesvc start ntp [-sync_delay][-interval <hh>[:<mm>]][<host>[<host>...]]
| timesvc update ntp
| timesvc stop ntp
| timesvc delete ntp
| timesvc set ntp
| timesvc stats ntp
| timesvc
| timezone [<timezonestr>]
| timezone -name <timezonename>
DESCRIPTION
-----------
server_date sets and displays the current date and time for the
specified Data Movers.
The server_date timesvc commands control the synchronization of
the Data Mover with external timing sources and get and set the
time zone.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
No arguments
Displays the current date and time for the specified Data Mover.
+<format>
Displays the date information in the format specified by each field
descriptor. Each field descriptor is preceded by a percent sign (%) and
is replaced in the output by its corresponding value. A literal percent
sign is encoded as a double percent sign (%%).
If the argument contains embedded blanks, it must be quoted.
The complete listing of all field descriptors can be viewed in the
Linux strftime(3) man page.
<yymmddhhmm>[<ss>]
Sets a two-digit number for the year, month, day, hour, minutes, and
seconds in this order where <yy> is the year; the first <mm> is the
month; <dd> is the day; <hh> is the hour (in 24-hour system); and the
second <mm> is the minute, and <ss> is the second.
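Because the <yymmddhhmm>[<ss>] argument is just the current time rendered as pairs of digits, a standard Linux date(1) format string produces it directly. The server_2 name below is illustrative; the snippet only prints the command line that would set the Data Mover clock:

```shell
# Build a <yymmddhhmm><ss> stamp with date(1) and show the server_date
# command line that would apply it (server_2 is an illustrative name).
stamp=$(date +%y%m%d%H%M%S)
echo "server_date server_2 $stamp"
```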
timesvc start ntp <host> [<host>...]
Starts time synchronization immediately between a Data Mover and
one or more time server hosts, specified by IP address, and adds an
entry to the database. Each host must be running the NTP protocol.
A maximum of four host entries is allowed.
Other options include:
-sync_delay
Indicates that the clock should not be synchronized when the
time server is activated. Instead, when the first poll is taken,
latency adjustments are handled slowly. This option is generally
used if time service is started after the Data Mover has already
started, or if synchronization is starting after other services have
already started.
Note: If -sync_delay is not typed, by default, the clock is set at
Data Mover startup. The clock is synchronized after the first poll.
-interval <hh>[:<mm>]
Sets the delay in hours (or hours and minutes) between polls
(default is 1 hour, entered as 01 or 00:60). The interval is
displayed in minutes.
timesvc update ntp
Immediately polls the external source and synchronizes the time on
the Data Mover.
timesvc stop ntp
Stops timing synchronization between the Data Mover and an
external timing host for the NTP protocol, and does not remove the
entry from the database.
Note: A stop of time services takes about 12 seconds. If time service is
restarted within this time, a "busy" status message is returned.
timesvc delete ntp
Stops time synchronization and deletes the NTP protocol from the
database.
timesvc set ntp
Immediately polls the external source and synchronizes the time on
the Data Mover without slewing the clock.
timesvc stats ntp
Displays the statistical information of time synchronization for the
Network Time Protocol such as time differences between the Data
Mover and the time server. Also provides information about the
current state of NTP service on the Data Mover.
timesvc
Displays the current time service configuration.
timezone
Displays the current time zone on the specified Data Mover.
[<timezonestr>]
Sets the current time zone on the specified Data Mover. The
<timezonestr> is a POSIX style time zone specification with the
following formats:
<std><offset> (no daylight savings time)
<std><offset><dst>[offset],start[/time],end[/time] (adjusts for
daylight savings time)
Note: The Linux man page for tzset provides information about the format.
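A POSIX time zone string of this form can be checked with date(1) on any Linux host before passing it to server_date timezone. A minimal sketch (the CST6 zone is illustrative, and the printed abbreviation/offset assume GNU date):

```shell
# Verify a POSIX TZ string locally before applying it on a Data Mover.
# CST6 declares Central Standard Time, 6 hours behind UTC, no DST rule.
TZ='CST6' date '+%Z %z'    # prints: CST -0600
```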
timezone -name <timezonename>
Sets the time zone on the Data Mover to the specified
<timezonename>. The <timezonename> is a Linux style time zone
specification. A list of valid Linux time zones is located under the
/usr/share/zoneinfo directory; the third column of the zone.tab table
in that directory lists the valid time zone names.
Note: The timezone -name option does not reset the time on the Data Mover to
the specified <timezonename> time.
SEE ALSO
--------
Configuring Time Services on VNX, server_dns, and server_nis.
EXAMPLE #1
----------
To display the current date and time on a Data Mover, type:
$ server_date server_2
server_2 : Thu Jan 6 16:55:09 EST 2005
EXAMPLE #2
----------
To customize the display of the date and time on a Data Mover, type:
$ server_date server_2 "+%Y-%m-%d %H:%M:%S"
server_2 : 2005-01-06 16:55:58
EXAMPLE #3
----------
To start time synchronization between a Data Mover and an external
source, type:
source, type:
$ server_date server_2 timesvc start ntp -interval 06:00 172.24.102.20
server_2 : done
EXAMPLE #4
----------
To set the time service without slewing the clock, type:
$ server_date server_2 timesvc set ntp
server_2 : done
EXAMPLE #5
----------
To display statistical information, type:
$ server_date server_2 timesvc stats ntp
server_2 :
Time synchronization statistics since start:
hits= 2, misses= 0, first poll hit= 2, miss= 0
Last offset: 0 secs, 0 usecs
Current State: Running, connected, interval=360
Time sync hosts:
0 1 172.24.102.20
Where:
Value            Definition
-----            ----------
hits             When a client sends a request to the server requesting
                 the current time, if there is a reply, that is a hit.
misses           No reply from any of the time servers.
first poll hit   First poll hit, which sets the first official time for
                 the Data Mover.
miss             First poll miss.
Last offset      Time difference between the time server and the Data
                 Mover.
Current State    State of the time server.
Time sync hosts  IP address of the time server.
EXAMPLE #6
----------
To update time synchronization between a Data Mover and an external
source, type:
$ server_date server_2 timesvc update ntp
server_2 : done
EXAMPLE #7
----------
To get the time zone on the specified Data Mover, type:
$ server_date server_2 timezone
server_2 : Local timezone: GMT
EXAMPLE #8
----------
To set the time zone to Central Time for a Data Mover when you do
not have to adjust for daylight savings time, type:
$ server_date server_2 timezone CST6
server_2 : done
EXAMPLE #9
----------
To set the time zone to Central Time and adjust for daylight
savings time for a Data Mover, type:
$ server_date server_2 timezone CST6CDT5,M4.1.0,M10.5.0
server_2 : done
EXAMPLE #10
-----------
To set the time zone to Central Time and adjust the daylight
savings time for a Data Mover using the Linux method, type:
$ server_date server_2 timezone -name America/Chicago
server_2 : done
EXAMPLE #11
-----------
To display the time service configuration for a Data Mover, type:
$ server_date server_2 timesvc
server_2 :
Timeservice State
time: Thu Jan 6 17:04:28 EST 2005
type: ntp
sync delay: off
interval: 360
hosts: 172.24.102.20,
Where:
Value       Definition
-----       ----------
time        Date and time known to the Data Mover.
type        Time service protocol configured on the Data Mover.
sync delay  Whether sync delay is on or off.
interval    Time interval between polls.
hosts       IP address of the time server.
EXAMPLE #12
-----------
To stop time services for a Data Mover, type:
$ server_date server_2 timesvc stop ntp
server_2 : done
EXAMPLE #13
-----------
To delete the time service configuration for a Data Mover, type:
$ server_date server_2 timesvc delete ntp
server_2 : done
EXAMPLE #14
-----------
To set the time zone on a Data Mover to Los Angeles, type:
$ server_date server_2 timezone -n America/Los_Angeles
server_2 : done
-------------------------------------------------
Last modified: Feb 20, 2013 4:36 pm.
server_dbms
Enables backup and restore of databases, displays database environment
statistics.
SYNOPSIS
--------
server_dbms {<movername>|ALL}
{-db
{-list [<db_name>]
| -delete <db_name>
| -check [<db_name>]
| -repair [<db_name>]
| -compact [<db_name>]
| -fullbackup -target <pathname>
| -incrbackup -previous <pathname> -target <pathname>
| -restore [<db_name>] -source <pathname>
| -stats [<db_name> [-table <name>]][-reset]}
| -service -stats [transaction|memory|log|lock|mutex][-reset]
}
DESCRIPTION
-----------
server_dbms provides recovery from media failure or application
corruption, displays database information, checks application
database consistency, and fixes inconsistencies.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-db -list [<db_name>]
Gets the list of all application databases and their status. If
<db_name> is specified, displays the list of all tables belonging to
that database.
-db -delete <db_name>
Deletes the target application database.
Note: This command will fail if the target database is not closed.
-db -check [<db_name>]
Checks the consistency of the target database at application level.
-db -repair [<db_name>]
Fixes the application level inconsistencies in the database.
-db -compact [<db_name> [-table <name>]]
Frees up disc space by compacting the target environment or
database.
-db -fullbackup -target <pathname>
Performs an online full backup of the VDM database environment.
The target parameter specifies the location to copy the database files.
The <pathname> specifies the local path of the database environment
on the Control Station.
-db -incrbackup -previous <pathname> -target <pathname>
Downloads the transactional logs from the VDM and replays them on
a copy of the previous VDM backup specified by previous
<pathname>.
The -target option specifies the location to which the database files
are copied.
-db -restore [<db_name>] -source <pathname>
Restores the environment or database specified by <db_name>. The
-source <pathname> specifies the location of the backup of the
environment/database to be restored.
Note: The database must be closed before the command is executed.
-db -stats [<db_name> [-table <name>]][-reset]
Displays statistics related to the specified databases and tables. If
-reset is specified, resets the statistics.
-service -stats [transaction|memory|log|lock|
mutex][-reset]
Displays transaction, memory, logging, locking or mutex statistics of
the VDM database environment. If -reset is specified, resets all or
specified statistics.
Note: For this command to be executed, the VDM on which the target
environment resides, must be up.
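The full, incremental, and restore options above chain together into a backup cycle. The sketch below is illustrative only: the server name and Control Station paths are hypothetical, and the `run` wrapper merely echoes each command so the sequence can be reviewed before it is executed on a live system:

```shell
# Illustrative dry run of a server_dbms backup cycle. The mover name and
# paths are hypothetical; 'run' echoes commands instead of executing them.
run() { echo "$@"; }    # change to run() { "$@"; } to execute for real

run server_dbms server_3 -db -fullbackup -target /nas/backup/full
run server_dbms server_3 -db -incrbackup -previous /nas/backup/full \
    -target /nas/backup/incr1
# The database must be closed before a restore is attempted.
run server_dbms server_3 -db -restore -source /nas/backup/incr1
```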
EXAMPLE #1
----------
To get the list of all application databases and their status, type:
$ server_dbms server_3 -db -list
server_3 : done
BASE NAME : Secmap
Version : 1
Comment : CIFS Secure mapping database.
This is a cache of the sid to uid/gid mapping of the VDM.
This database is part of the CIFS application.
It can be closed with the command server_setup
Size : 16384
Modification time : Fri May 25 09:58:21 2007
Creation time : Fri May 25 09:58:21 2007
TABLE NAME : Mapping
Version : 1
Comment : Sid to uid/gid mapping table with one secondary key on xid ((1,uid) & (2,gid))
Size : 16384
Modification time : Fri May 25 09:58:21 2007
Creation time : Fri May 25 09:58:21 2007
BASE NAME : V4NameSpace
Version : 1
Comment : NFSv4 namespace database, this represents the pseudofs and referrals.
Size : 8192
Modification time : Tue Jun 5 08:57:12 2007
Creation time : Tue Jun 5 08:57:12 2007
TABLE NAME : pseudofs
Version : 1
Comment : Pseudofs-table, this holds the export tree hierarchy
Size : 8192
Modification time : Mon Jun 11 11:06:23 2007
Creation time : Mon Jun 11 11:06:23 2007
BASE NAME : Usermapper
Version : 1
Comment : Usermapper database. It allows to assign a new uid or gid to a
given SID.
Size : 57344
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : aliases
Version : 1
Comment : This table allows to retrieve a domain name from one of his aliases
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : usrmapc
Version : 1
Comment : Store the uid & gid ranges allocations for domains.
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : idxname
Version : 1
Comment : Store the reverse mapping uid/gid to sid.
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : usrmapusrc
Version : 1
Comment : Store the mapping SID -> (uid, name).
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : usrgrpmapnamesid
Version : 1
Comment : Store the mapping user.domain -> SID.
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : usrmapgrpc
Version : 1
Comment : Store the mapping SID -> (gid, name).
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
TABLE NAME : groupmapnamesid
Version : 1
Comment : Store the mapping group.domain -> SID.
Size : 8192
Modification time : Tue Jun 12 09:14:31 2007
Creation time : Tue Jun 12 09:14:31 2007
EXAMPLE #2
----------
To display Secmap statistics, type:
$ server_dbms server_3 -db -stats Secmap
server_3 : done
STATISTICS FOR DATABASE : Secmap
TABLE : Mapping
magic 340322 Magic number.
version 9 Table version number.
metaflags 0 Metadata flags.
nkeys 14 Number of unique keys.
ndata 14 Number of data items.
pagesize 4096 Page size.
minkey 2 Minkey value.
re_len 0 Fixed-length record length.
re_pad 32 Fixed-length record pad.
levels 1 Tree levels.
int_pg 0 Internal pages.
leaf_pg 1 Leaf pages.
dup_pg 0 Duplicate pages.
over_pg 0 Overflow pages.
empty_pg 0 Empty pages.
free 0 Pages on the free list.
int_pgfree 0 Bytes free in internal pages.
leaf_pgfree 2982 Bytes free in leaf pages.
dup_pgfree 0 Bytes free in duplicate pages.
over_pgfree 0 Bytes free in overflow pages.
EXAMPLE #3
----------
To display statistics of the VDM database environment, type:
$ server_dbms server_3 -service -stats
STATISTICS FOR MODULE : LOG
magic              264584   Log file magic number.
version            12       Log file version number.
mode               0        Log file mode.
lg_bsize           32768    Log buffer size.
lg_size            5242880  Log file size.
record             96       Records entered into the log.
w_bytes            16001    Bytes to log.
w_mbytes           0        Megabytes to log.
wc_bytes           0        Bytes to log since checkpoint.
wc_mbytes          0        Megabytes to log since checkpoint.
wcount             31       Total writes to the log.
wcount_fill        0        Overflow writes to the log.
rcount             137      Total I/O reads from the log.
scount             31       Total syncs to the log.
region_wait        0        Region lock granted after wait.
region_nowait      0        Region lock granted without wait.
cur_file           3        Current log file number.
cur_offset         16001    Current log file offset.
disk_file          3        Known on disk log file number.
disk_offset        16001    Known on disk log file offset.
regsize            98304    Region size.
maxcommitperflush  1        Max number of commits in a flush.
mincommitperflush  1        Min number of commits in a flush.
STATISTICS FOR MODULE : LOCK
last_id        91          Last allocated locker ID.
cur_maxid      2147483647  Current maximum unused ID.
maxlocks       1000        Maximum number of locks in table.
maxlockers     1000        Maximum num of lockers in table.
maxobjects     1000        Maximum num of objects in table.
nmodes         9           Number of lock modes.
nlocks         20          Current number of locks.
maxnlocks      21          Maximum number of locks so far.
nlockers       49          Current number of lockers.
maxnlockers    49          Maximum number of lockers so far.
nobjects       20          Current number of objects.
maxnobjects    21          Maximum number of objects so far.
nrequests      65711       Number of lock gets.
nreleases      65691       Number of lock puts.
nupgrade       0           Number of lock upgrades.
ndowngrade     20          Number of lock downgrades.
lock_wait      0           Lock conflicts w/ subsequent wait.
lock_nowait    0           Lock conflicts w/o subsequent wait.
ndeadlocks     0           Number of lock deadlocks.
locktimeout    0           Lock timeout.
nlocktimeouts  0           Number of lock timeouts.
txntimeout     0           Transaction timeout.
ntxntimeouts   0           Number of transaction timeouts.
region_wait    0           Region lock granted after wait.
region_nowait  0           Region lock granted without wait.
regsize        352256      Region size.
STATISTICS FOR MODULE : TXN
last_ckp       3/15945                  lsn of the last checkpoint.
time_ckp       Fri Aug 3 09:38:36 2007  time of last checkpoint.
last_txnid     0x8000001a               last transaction id given out.
maxtxns        20                       maximum txns possible.
naborts        0                        number of aborted transactions.
nbegins        26                       number of begun transactions.
ncommits       26                       number of committed transactions.
nactive        0                        number of active transactions.
nsnapshot      0                        number of snapshot transactions.
nrestores      0                        number of restored transactions
                                        after recovery.
maxnactive     2                        maximum active transactions.
maxnsnapshot   0                        maximum snapshot transactions.
region_wait    0                        Region lock granted after wait.
region_nowait  0                        Region lock granted without wait.
regsize        16384                    Region size.
STATISTICS FOR MODULE : MPOOL
gbytes             0         Total cache size: GB.
bytes              10487684  Total cache size: B.
ncache             1         Number of caches.
regsize            10493952  Region size.
mmapsize           0         Maximum file size for mmap.
maxopenfd          0         Maximum number of open fd's.
maxwrite           0         Maximum buffers to write.
maxwrite_sleep     0         Sleep after writing max buffers.
map                0         Pages from mapped files.
cache_hit          65672     Pages found in the cache.
cache_miss         36        Pages not found in the cache.
page_create        0         Pages created in the cache.
page_in            36        Pages read in.
page_out           2         Pages written out.
ro_evict           0         Clean pages forced from the cache.
rw_evict           0         Dirty pages forced from the cache.
page_trickle       0         Pages written by memp_trickle.
pages              36        Total number of pages.
page_clean         36        Clean pages.
page_dirty         0         Dirty pages.
hash_buckets       1031      Number of hash buckets.
hash_searches      65744     Total hash chain searches.
hash_longest       1         Longest hash chain searched.
hash_examined      65672     Total hash entries searched.
hash_nowait        0         Hash lock granted with nowait.
hash_wait          0         Hash lock granted after wait.
hash_max_nowait    0         Max hash lock granted with nowait.
hash_max_wait      0         Max hash lock granted after wait.
region_nowait      0         Region lock granted with nowait.
region_wait        0         Region lock granted after wait.
mvcc_frozen        0         Buffers frozen.
mvcc_thawed        0         Buffers thawed.
mvcc_freed         0         Frozen buffers freed.
alloc              123       Number of page allocations.
alloc_buckets      0         Buckets checked during allocation.
alloc_max_buckets  0         Max checked during allocation.
alloc_pages        0         Pages checked during allocation.
alloc_max_pages    0         Max checked during allocation.
io_wait            0         Thread waited on buffer I/O.
STATISTICS FOR MODULE : MUTEX
mutex_align      4       Mutex alignment.
mutex_tas_spins  1       Mutex test-and-set spins.
mutex_cnt        3254    Mutex count.
mutex_free       1078    Available mutexes.
mutex_inuse      2176    Mutexes in use.
mutex_inuse_max  2176    Maximum mutexes ever in use.
region_wait      0       Region lock granted after wait.
region_nowait    0       Region lock granted without wait.
regsize          278528  Region size.
--------------------------------------
Last Modified: April 07, 2011 12:45 PM
server_devconfig
Queries, saves, and displays the SCSI over Fibre Channel device
configuration connected to the specified Data Movers.
SYNOPSIS
--------
server_devconfig {<movername>|ALL}
-create -scsi [<chain_number>] {-disks|-nondisks|-all}
[-discovery {y|n}][-monitor {y|n}][-Force {y|n}]
| -list -scsi [<chain_number>] {-disks|-nondisks|-all}
| -probe -scsi [<chain_number>] {-disks|-nondisks|-all}
| -rename <old_name> <new_name>
DESCRIPTION
-----------
server_devconfig queries the available storage system device and
tape device configuration, and saves the device configuration into the
Data Mover's database. server_devconfig also renames device names
and lists SCSI devices.
Caution: It is recommended that all Data Movers have the same device
configuration. When adding devices to the device table for a single
Data Mover only, certain actions such as standby failover will not
be successful unless the standby Data Mover has the same disk
device configuration as the primary Data Mover.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-create -scsi [<chain_number>] {-disks|-nondisks|-all}
Queries SCSI devices and saves them into the device table database
on the Data Mover. The <chain_number> specifies a SCSI chain
number.
The -disks option limits operations to disks. The -nondisks option
limits operations to non-disks such as tapes, juke boxes, and
gatekeeper devices. The -all option permits all SCSI devices (disks
and non-disks).
Note: The -create option modifies VNX for Block LUN names to the
VNX_<vnx-hostname>_<lun-id>_<vnx-dvol-name> format, if the LUNs use
the default Unisphere name.
Caution: The time taken to complete this command might be lengthy,
depending on the number and type of attached devices.
[-discovery {y|n}]
Enables or disables the storage discovery operation.
Caution: Disabling the -discovery option should only be done under the
direction of an EMC Customer Service Engineer.
[-monitor {y|n}]
Displays the progress of the query and discovery operations.
[-Force {y|n}]
Overrides the health check failures and changes the storage
configuration.
Caution: High availability can be lost when changing the storage
configuration. Changing the storage configuration should only
be done under the direction of an EMC Customer Service
Engineer.
-list -scsi [<chain_number>] {-disks|-nondisks|-all}
Lists the SCSI device table database that has been saved on the Data
Mover. The <chain_number> specifies a SCSI chain number.
Note: Fibre Channel devices appear as SCSI devices. Therefore, chain
numbers might be different for Fibre Channel.
The -disks option limits operations to disks. The -nondisks option
limits operations to non-disks such as tapes, juke boxes, and
gatekeeper devices. The -all option permits all SCSI devices (disks
and non-disks).
-probe -scsi [<chain_number>] {-disks|-nondisks|-all}
Queries and displays the SCSI devices without saving them into the
database. The <chain_number> specifies a SCSI chain number.
Note: Fibre Channel devices appear as SCSI devices, therefore, chain
numbers may be different for Fibre Channel.
The -disks option limits operations to disks. The -nondisks option
limits operations to non-disks such as tapes, juke boxes, and
gatekeeper devices. The -all option permits all SCSI devices (disks
and non-disks).
-rename <old_name> <new_name>
Renames the specified non-disk from the <old_name> to
<new_name>. The -rename option is available for non-disks only.
SEE ALSO
--------
VNX System Operations, nas_disk, and nas_storage.
STORAGE SYSTEM OUTPUT
The number associated with the storage device depends on the
attached storage system. VNX for Block displays a prefix of APM
before a set of integers, for example, APM00033900124-0019.
Symmetrix storage systems appear as, for example, 002804000190-003C.
EXAMPLE #1
----------
For the VNX storage system, to list all devices, type:
$ server_devconfig server_2 -list -scsi -all
server_2 :
Scsi Disk Table
                      Director       Port
name        addr      num   type     num   sts  stor_id         stor_dev
root_disk   c0t0l0                              APM00043807043  0000
root_disk   c16t0l0                             APM00043807043  0000
root_ldisk  c0t0l1                              APM00043807043  0001
root_ldisk  c16t0l1                             APM00043807043  0001
d3          c0t0l2                              APM00043807043  0002
d3          c16t0l2                             APM00043807043  0002
d4          c0t0l3                              APM00043807043  0003
d4          c16t0l3                             APM00043807043  0003
d5          c0t0l4                              APM00043807043  0004
d5          c16t0l4                             APM00043807043  0004
d6          c0t0l5                              APM00043807043  0005
d6          c16t0l5                             APM00043807043  0005
d7          c0t1l0                              APM00043807043  0010
d7          c16t1l0                             APM00043807043  0010
d8          c16t1l1                             APM00043807043  0011
d8          c0t1l1                              APM00043807043  0011
Scsi Device Table
name   addr     type  info
gk01   c0t0l    disk  5 020700000000APM00043807043
ggk01  c0t1l0   disk  5 020710001000APM00043807043
gk161  c16t1l1  disk  5 020711001100APM00043807043
For the VNX with a Symmetrix storage system, to list all the devices
in the SCSI table, type:
$ server_devconfig server_2 -list -scsi -all
server_2 :
Scsi Disk Table
                      Director       Port
name        addr      num   type     num   sts  stor_id       stor_dev
root_disk   c0t0l0    16C   FA       0     On   000187940268  0000
root_disk   c16t0l0   01C   FA       0     On   000187940268  0000
root_ldisk  c0t0l1    16C   FA       0     On   000187940268  0001
root_ldisk  c16t0l1   01C   FA       0     On   000187940268  0001
d3          c0t1l0    16C   FA       0     On   000187940268  0006
d3          c16t1l0   01C   FA       0     On   000187940268  0006
d4          c0t1l1    16C   FA       0     On   000187940268  0007
d4          c16t1l1   01C   FA       0     On   000187940268  0007
d5          c0t1l2    16C   FA       0     On   000187940268  0008
d5          c16t1l2   01C   FA       0     On   000187940268  0008
d6          c0t1l3    16C   FA       0     On   000187940268  0009
d6          c16t1l3   01C   FA       0     On   000187940268  0009
d7          c0t1l4    16C   FA       0     On   000187940268  000A
d7          c16t1l4   01C   FA       0     On   000187940268  000A
<... removed ...>
d377        c1t8l6    16C   FA       0     On   000187940268  017C
d377        c17t8l6   01C   FA       0     On   000187940268  017C
rootd378    c1t8l7    16C   FA       0     On   000187940268  0180
rootd378    c17t8l7   01C   FA       0     On   000187940268  0180
rootd379    c1t8l8    16C   FA       0     On   000187940268  0181
rootd379    c17t8l8   01C   FA       0     On   000187940268  0181
rootd380    c1t8l9    16C   FA       0     On   000187940268  0182
rootd380    c17t8l9   01C   FA       0     On   000187940268  0182
rootd381    c1t8l10   16C   FA       0     On   000187940268  0183
rootd381    c17t8l10  01C   FA       0     On   000187940268  0183
Scsi Device Table
name   addr      type  info
gk01   c0t0l15   disk  56706817D480 000187940268
gk161  c16t0l15  disk  56706817D330 000187940268
Note: This is a partial display due to the length of the output.
Where:
Value         Definition
-----         ----------
name          A unique name for each device in the chain.
addr          SCSI chain, target, and LUN information.
Director num  Director number. This output is applicable for Symmetrix
              storage systems only.
type          Device type, as specified in the SCSI specification for
              peripherals. This output is applicable for Symmetrix
              storage systems only.
Port num      Port number. This output is applicable for Symmetrix
              storage systems only.
sts           Indicates the port status. Possible values are: On, Off, WD
              (write disabled), and NA. This output is applicable for
              Symmetrix storage systems only.
stor_id       Storage system ID.
stor_dev      Storage system device ID.
EXAMPLE #2
----------
For the VNX, to list all SCSI-attached non-disk devices, type:
$ server_devconfig server_2 -list -scsi -nondisks
server_2 :
Scsi Device Table
name   addr     type  info
gk01   c0t0l0   disk  5 020700000000APM00043807043
ggk01  c0t1l0   disk  5 020710001000APM00043807043
gk161  c16t1l1  disk  5 020711001100APM00043807043
For the VNX with a Symmetrix storage system, to list all
SCSI-attached non-disk devices, type:
$ server_devconfig server_2 -list -scsi -nondisks
server_2 :
Scsi Device Table
name   addr      type  info
gk01   c0t0l15   disk  56706817D480 000187940268
gk161  c16t0l15  disk  56706817D330 000187940268
For info=56706817D480, the following breakdown applies:
5670  Symmcode
68    Last 2 digits in the Symm S/N
17D   Symm Device ID#
48    Symm SA #
0     SA Port # (0=a, 1=b)
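The fixed-width breakdown above can be applied mechanically with standard shell tools. A minimal sketch (the function name and field labels are ours; the offsets come from the table above):

```shell
# Split a 12-character Symmetrix info field into its documented parts
# using the fixed widths from the breakdown: Symmcode (4 chars),
# S/N digits (2), device ID (3), SA number (2), SA port (1).
decode_info() {
    info="$1"
    printf 'Symmcode=%s SN=%s dev=%s SA=%s port=%s\n' \
        "$(echo "$info" | cut -c1-4)" \
        "$(echo "$info" | cut -c5-6)" \
        "$(echo "$info" | cut -c7-9)" \
        "$(echo "$info" | cut -c10-11)" \
        "$(echo "$info" | cut -c12)"
}
decode_info 56706817D480   # prints: Symmcode=5670 SN=68 dev=17D SA=48 port=0
```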
EXAMPLE #3
----------
To rename a device, type:
$ server_devconfig server_2 -rename gk161 gk201
server_2 : done
EXAMPLE #4
----------
For the VNX, to discover SCSI disk devices, without saving them to
the database table, type:
$ server_devconfig server_2 -probe -scsi -disks
server_2 :
SCSI disk devices :
chain= 0, scsi-0
stor_id= APM00043807043 celerra_id= APM000438070430000
tid/lun= 0/0 type= disk sz= 11263 val= 1 info= DGC RAID 5 02070000000000NI
tid/lun= 0/1 type= disk sz= 11263 val= 2 info= DGC RAID 5 02070100010001NI
tid/lun= 0/2 type= disk sz= 2047 val= 3 info= DGC RAID 5 02070200020002NI
tid/lun= 0/3 type= disk sz= 2047 val= 4 info= DGC RAID 5 02070300030003NI
tid/lun= 0/4 type= disk sz= 2047 val= 5 info= DGC RAID 5 02070400040004NI
tid/lun= 0/5 type= disk sz= 2047 val= 6 info= DGC RAID 5 02070500050005NI
tid/lun= 1/0 type= disk sz= 245625 val= 7 info= DGC RAID 5 02071000100010NI
tid/lun= 1/1 type= disk sz= 0 val= -5 info= DGC RAID 5 02071100110011NI
tid/lun= 1/2 type= disk sz= 273709 val= 9 info= DGC RAID 5 02071200120012NI
tid/lun= 1/3 type= disk sz= 0 val= -5 info= DGC RAID 5 02071300130013NI
tid/lun= 1/4 type= disk sz= 273709 val= 10 info= DGC RAID 5 02071400140014NI
tid/lun= 1/5 type= disk sz= 0 val= -5 info= DGC RAID 5 02071500150015NI
tid/lun= 1/6 type= disk sz= 273709 val= 11 info= DGC RAID 5 02071600160016NI
tid/lun= 1/7 type= disk sz= 0 val= -5 info= DGC RAID 5 02071700170017NI
tid/lun= 1/8 type= disk sz= 273709 val= 12 info= DGC RAID 5 02071800180018NI
tid/lun= 1/9 type= disk sz= 0 val= -5 info= DGC RAID 5 02071900190019NI
chain= 1, scsi-1 : no devices on chain
chain= 2, scsi-2 : no devices on chain
chain= 3, scsi-3 : no devices on chain
chain= 4, scsi-4 : no devices on chain
chain= 5, scsi-5 : no devices on chain
chain= 6, scsi-6 : no devices on chain
chain= 7, scsi-7 : no devices on chain
chain= 8, scsi-8 : no devices on chain
chain= 9, scsi-9 : no devices on chain
chain= 10, scsi-10 : no devices on chain
chain= 11, scsi-11 : no devices on chain
chain= 12, scsi-12 : no devices on chain
chain= 13, scsi-13 : no devices on chain
chain= 14, scsi-14 : no devices on chain
chain= 15, scsi-15 : no devices on chain
For the VNX with a Symmetrix storage system, to discover SCSI disk
devices, without saving them to the database table, type:
$ server_devconfig server_2 -probe -scsi -disks
server_2 :
SCSI disk devices :
chain= 0, scsi-0 : no devices on chain
chain= 1, scsi-1 : no devices on chain
chain= 2, scsi-2
stor_id= 000190102173 celerra_id= 0001901021730041
tid/lun= 0/0 type= disk sz= 11507 val= 1 info= 577273041291SI00041
tid/lun= 0/1 type= disk sz= 11507 val= 2 info= 577273042291SI00042
tid/lun= 1/0 type= disk sz= 11501 val= 3 info= 57727304F291SI0004F
tid/lun= 1/1 type= disk sz= 11501 val= 4 info= 577273050291SI00050
tid/lun= 1/2 type= disk sz= 11501 val= 5 info= 577273051291SI00051
tid/lun= 1/3 type= disk sz= 11501 val= 6 info= 577273052291SI00052
tid/lun= 1/4 type= disk sz= 11501 val= 7 info= 577273053291SI00053
tid/lun= 1/5 type= disk sz= 11501 val= 8 info= 577273054291SI00054
tid/lun= 1/6 type= disk sz= 11501 val= 9 info= 577273055291SI00055
tid/lun= 1/7 type= disk sz= 11501 val= 10 info= 577273056291SI00056
tid/lun= 1/8 type= disk sz= 11501 val= 11 info= 577273057291SI00057
tid/lun= 1/9 type= disk sz= 11501 val= 12 info= 577273058291SI00058
tid/lun= 1/10 type= disk sz= 11501 val= 13 info= 577273059291SI00059
tid/lun= 1/11 type= disk sz= 11501 val= 14 info= 57727305A291SI0005A
tid/lun= 1/12 type= disk sz= 11501 val= 15 info= 57727305B291SI0005B
tid/lun= 1/13 type= disk sz= 11501 val= 16 info= 57727305C291SI0005C
tid/lun= 1/14 type= disk sz= 11501 val= 17 info= 57727305D291SI0005D
tid/lun= 1/15 type= disk sz= 11501 val= 18 info= 57727305E291SI0005E
tid/lun= 2/0 type= disk sz= 11501 val= 19 info= 57727305F291SI0005F
tid/lun= 2/1 type= disk sz= 11501 val= 20 info= 577273060291SI00060
tid/lun= 2/2 type= disk sz= 11501 val= 21 info= 577273061291SI00061
<... removed ...>
tid/lun= 7/6 type= disk sz= 11501 val= 105 info= 577273517291SI00517
tid/lun= 7/7 type= disk sz= 11501 val= 106 info= 577273518291SI00518
tid/lun= 7/8 type= disk sz= 11501 val= 107 info= 577273519291SI00519
tid/lun= 7/9 type= disk sz= 11501 val= 108 info= 57727351A291SI0051A
tid/lun= 7/10 type= disk sz= 11501 val= 109 info= 57727351B291SI0051B
tid/lun= 7/11 type= disk sz= 11501 val= 110 info= 57727351C291SI0051C
tid/lun= 7/12 type= disk sz= 11501 val= 111 info= 57727351D291SI0051D
tid/lun= 7/13 type= disk sz= 11501 val= 112 info= 57727351E291SI0051E
tid/lun= 7/14 type= disk sz= 11501 val= 113 info= 57727351F291SI0051F
tid/lun= 7/15 type= disk sz= 11501 val= 114 info= 577273520291SI00520
chain= 3, scsi-3 : no devices on chain
chain= 4, scsi-4 : no devices on chain
chain= 5, scsi-5 : no devices on chain
chain= 6, scsi-6 : no devices on chain
<... removed ...>
chain= 18, scsi-18
stor_id= 000190102173 celerra_id= 0001901021730041
tid/lun= 0/0 type= disk sz= 11507 val= 1 info= 577273041201SI00041
tid/lun= 0/1 type= disk sz= 11507 val= 2 info= 577273042201SI00042
tid/lun= 1/0 type= disk sz= 11501 val= 3 info= 57727304F201SI0004F
tid/lun= 1/1 type= disk sz= 11501 val= 4 info= 577273050201SI00050
tid/lun= 1/2 type= disk sz= 11501 val= 5 info= 577273051201SI00051
tid/lun= 1/3 type= disk sz= 11501 val= 6 info= 577273052201SI00052
tid/lun= 1/4 type= disk sz= 11501 val= 7 info= 577273053201SI00053
Note: This is a partial listing due to the length of the output.
EXAMPLE #5
----------
To discover and save all SCSI devices, type:
$ server_devconfig server_2 -create -scsi -all
Discovering storage (may take several minutes)
server_2 : done
EXAMPLE #6
----------
To discover and save all non-disk devices, type:
$ server_devconfig server_2 -create -scsi -nondisks
Discovering storage (may take several minutes)
server_2 : done
EXAMPLE #7
----------
To save all SCSI devices with the discovery operation disabled, and display
information regarding the progress, type:
$ server_devconfig ALL -create -scsi -all -discovery n -monitor y
server_2 :
server_2:
chain 0 ..........
chain 16 .....
done
server_3 :
server_3:
chain 0 ..........
chain 16 .....
done
server_4 :
server_4:
chain 0 ..........
chain 16 .....
done
server_5 :
server_5:
chain 0 ..........
chain 16 .....
done
-------------------------------------Last Modified: April 07, 2011 03:25 pm
server_df
Reports free and used disk space and inodes for mounted file systems
on the specified Data Movers.
SYNOPSIS
-------server_df {<movername>|ALL}
[-inode][<pathname>|<fs_name>]
DESCRIPTION
----------server_df reports the amount of used and available disk space for a
Data Mover or file system, how much of a file system’s total capacity
has been used, and the number of used and free inodes.
The ALL option executes the command for all Data Movers.
OPTIONS
------No arguments
Displays the amount of disk space in kilobytes used by file systems.
-inode
Reports used and free inodes.
[<pathname>|<fs_name>]
Gets file system information. If <fs_name> is specified, gets information
for that file system only.
SEE ALSO
-------Managing Volumes and File Systems for VNX Manually, nas_disk, and
nas_fs.
EXAMPLE #1
---------To display the amount of used and available disk space on a Data Mover,
type:
$ server_df server_2
server_2 :
Filesystem          kbytes       used    avail        capacity  Mounted on
ufs1                1075686032   477816  1075208216   0%        /ufs1
ufs4                101683184    584     101682600    0%        /nmfs1/ufs4
ufs2                206515184    600     206514584    0%        /nmfs1/ufs2
nmfs1               308198368    1184    308197184    0%        /nmfs1
root_fs_common      13624        5264    8360         39%       /.etc_common
root_fs_2           114592       760     113832       1%        /
Where:
Value       Definition
Filesystem  Name of the file system.
kbytes      Total amount of space in kilobytes for the file system.
used        Amount of kilobytes used by the file system.
avail       Amount of space in kilobytes available for the file system.
capacity    Percentage of capacity that is used.
Mounted on  Mount point of the file system.
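The capacity column can be derived from the kbytes and used columns. As a
sketch (not part of the eNAS CLI), the following recomputes it with awk from
"name kbytes used" rows, assuming rounding to the nearest whole percent:

```shell
# Recompute the capacity column from "name kbytes used" rows,
# rounding to the nearest whole percent.
printf '%s\n' \
  'ufs1 1075686032 477816' \
  'root_fs_common 13624 5264' |
awk '{ printf "%s %d%%\n", $1, int(100 * $3 / $2 + 0.5) }'
```

For root_fs_common this yields 39%, matching the listing above.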
EXAMPLE #2
---------To display the amount of disk space and the amount of free and unused inodes
on a Data Mover, type:
$ server_df server_2 -inode
server_2 :
Filesystem          inodes       used  avail        capacity  Mounted on
ufs1                131210494    140   131210354    0%        /ufs1
ufs4                25190398     10    25190388     0%        /nmfs1/ufs4
ufs2                25190398     11    25190387     0%        /nmfs1/ufs2
nmfs1               50380796     21    50380775     0%        /nmfs1
root_fs_common      21822        26    21796        0%        /.etc_common
root_fs_2           130942       66    130876       0%        /
EXAMPLE #3
---------To display the amount of disk space and the amount of free and unused inodes
on a file system, type:
$ server_df server_2 -inode ufs1
server_2 :
Filesystem          inodes       used  avail        capacity  Mounted on
ufs1                131210494    140   131210354    0%        /ufs1
-------------------------------------Last Modified: April 07, 2011 03:35 pm
server_dns
Manages the Domain Name System (DNS) lookup server
configuration for the specified Data Movers.
SYNOPSIS
-------server_dns {<movername>|ALL}
[[-protocol {tcp|udp}] <domainname> {<ip_addr>,...}]
| [-delete <domainname>]
| [-option {start|stop|flush|dump}]
DESCRIPTION
----------server_dns provides connectivity to the DNS lookup servers for the
specified Data Movers to resolve hostnames and IP addresses. Up to
three DNS lookup servers are supported for each domain on the Data
Mover.
server_dns also provides the ability to clear the cache that has been
saved on the Data Mover as a result of DNS lookups.
The ALL option executes the command for all Data Movers.
OPTIONS
------No arguments
Displays the DNS configuration.
-protocol {tcp|udp} <domainname> {<ip_addr>,...}
Sets the protocol for the DNS lookup servers (udp is the default).
<domainname> {<ip_addr>,...}
Creates list of up to three IP addresses to be used as the DNS lookup
servers for the specified <domainname>.
-delete <domainname>
Deletes the DNS lookup servers for the specified DNS domain name.
-option {start|stop|flush|dump}
The start option activates the link for the DNS lookup servers. The
stop option halts access to the DNS lookup servers. After DNS service
has been halted, the flush option can be used to clear the cache that
has been saved on the Data Mover, and the dump option displays the
DNS cache.
SEE ALSO
-------Configuring VNX Naming Services and server_nis.
EXAMPLE #1
---------To connect to a DNS lookup server, type:
$ server_dns server_2 prod.emc.com 172.10.20.10
server_2 : done
EXAMPLE #2
---------To display the DNS configuration, type:
$ server_dns server_2
server_2 :
DNS is running.
prod.emc.com
proto:udp server(s):172.10.20.10
EXAMPLE #3
---------To change the protocol to TCP from UDP, type:
$ server_dns server_2 -protocol tcp prod.emc.com 172.10.20.10
server_2 : done
EXAMPLE #4
---------To halt access to the DNS lookup servers, type:
$ server_dns server_2 -option stop
server_2 : done
EXAMPLE #5
---------To flush the cache on a Data Mover, type:
$ server_dns server_2 -option flush
server_2 : done
EXAMPLE #6
---------To dump the DNS cache, type:
$ server_dns server_2 -option dump
server_2 :
DNS cache size for one record type: 64
DNS cache includes 6 item(s):
dm102-cge0.nasdocs.emc.com
Type:A TTL=184 s dataCount:1
172.24.102.202 (local subnet)
--winserver1.nasdocs.emc.com
Type:A TTL=3258 s dataCount:1
172.24.103.60
--_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.nasdocs.emc.com
Type:SRV TTL=258 s dataCount:1
priority:0 weight:100 port:389 server:winserver1.nasdocs.emc.com
--_kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.nasdocs.emc.com
Type:SRV TTL=258 s dataCount:1
priority:0 weight:100 port:88 server:winserver1.nasdocs.emc.com
--Expired item(s): 2
EXAMPLE #7
---------To delete the DNS lookup servers, type:
$ server_dns server_2 -delete prod.emc.com
server_2 : done
--------------------------------------------------------Last modified: May 12, 2011 9:30 am.
server_export
Exports file systems and manages access on the specified Data Movers for
NFS and CIFS clients.
SYNOPSIS
-------server_export {<movername>|ALL}
operations on all cifs and/or nfs entries:
| [-Protocol {cifs|nfs}] -list -all
| [-Protocol {cifs|nfs}] -all
| [-Protocol {cifs|nfs}] -unexport [-perm] -all
nfs operations per entry:
| -list <pathname>
| [-Protocol nfs [-name <name>]][-ignore][-option <options>]
[-comment <comment>] <pathname>
| -unexport [-perm] <pathname>
cifs operations per entry:
| -list -name <sharename> [-option <options>]
| -name <sharename> [-ignore][-option <options>][-comment <comment>]
<pathname>
| -unexport -name <sharename> [-option <options>]
-option type = {
CA [:] Encrypted [:][ABE [:] HASH [:][OCAutoI|OCVDO|OCNONE]]|NONE
}
DESCRIPTION
----------server_export provides user access by exporting an NFS pathname, or creating
a CIFS share. Allows specification of multiple clients identified by hostnames
or network and subnet addresses separated by a colon.
server_export also removes access by unexporting an NFS pathname or
deleting a CIFS share, and displays the exported entries and available
shares for the specified Data Mover.
The ALL option executes the command for all of the Data Movers.
Note: NFSv4 does not support the -name option.
GENERAL OPTIONS FOR CIFS AND NFS OPERATIONS
------------------------------------------No arguments
Displays all exported NFS entries and CIFS shares.
[-Protocol {cifs|nfs}] -list -all
Lists all exported entries as defined by the protocol. The default is NFS.
[-Protocol {cifs|nfs}] -all
Exports all entries on a Data Mover as defined by the protocol. The default
is NFS.
[-Protocol {cifs|nfs}] -unexport [-perm] -all
Unexports all entries as defined by the protocol. By default, unexports are
permanent for CIFS, and temporary for NFS, unless -perm is specified. If
-perm is specified, it removes all entries from the export table. When the
entry is temporarily unexported, clients are denied access to the entry
until it is re-exported or the system is rebooted, but the entries are not
removed from the export table. The default is NFS.
FOR NFS OPERATIONS
------------------list <pathname>
Lists a specific NFS entry. If there are extra spaces in the <pathname>,
the entire pathname must be enclosed by quotes. By using the
server_export command, IPv6 addresses can be specified and the
hosts configured with these addresses can mount and access file
systems over NFS.
Note: If you are configuring an IPv6 address for ro, rw, access, and root, it
must be enclosed in square brackets ([ ]) so that the colons within the
address are not mistaken for the colon used to separate entries. Link-local
addresses are not supported.
-Protocol nfs [-name <name>] <pathname>
Exports an NFS <pathname> by default as read-write for everyone. If
specified, assigns the optional name <name> to the file system.
Pathname length is limited to 1024 bytes (represented as 1024 ASCII
characters or a variable number of Unicode multibyte characters),
and must be enclosed by quotes, if spaces are used. Name length is
limited to 255 bytes.
Note: In a nested mount file system hierarchy, users can export the mount
point path of the component file system. Subdirectories of the component file
system cannot be exported. In a multilevel file system hierarchy, users can
export any part of a file system independent of existing exports.
[-ignore] <pathname>
Overwrites previous options and comments in the export table for
the entry.
[-comment <comment>] <pathname>
Adds a comment for the specified NFS export entry. The comment is
displayed when listing the exported entries.
[-option <options>] <pathname>
Specifies the following comma-separated options:
sec=[sys|krb5|krb5i|krb5p]:<mode> [,<mode>,...]
Specifies a user authentication or security method with an access
mode setting. The sys (default) security option specifies
AUTH_SYS security. The access mode can be one, or a
combination of the following: ro, rw=, ro=, root=, access=, anon=,
webroot, public.
If the sec option is specified, it must always be the first option
specified in the string.
krb5 security specifies Kerberos user and data authentication.
krb5i checks for the integrity of the data by adding a signature to
each NFS packet and krb5p encrypts the data before sending it
over the network.
For krb5, krb5i, and krb5p security, the access mode can be one,
or a combination of the following: ro, rw=, ro=, root=, access=.
ro
Exports the <pathname> for all NFS clients as read-only.
ro=<client>[:<client>]...
Exports the <pathname> for the specified NFS clients as
read-only.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
ro=<-client>[:<-client>]...
Excludes the specified NFS clients from ro privileges. Clients
must be preceded with dash (-) to specify exclusion.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
rw=<client>[:<client>]...
Exports the <pathname> as read-mostly for the specified NFS
clients. Read-mostly means exported read-only to most machines,
but read-write to those specified. The default is read-write to all.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
rw=<-client>[:<-client>]...
Excludes the specified NFS clients from rw privileges. The
description of read-mostly provides information. Clients must be
preceded with - (dash) to specify exclusion.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
root=<client>[:<client>]...
Provides root privileges for the specified NFS clients. By default,
no host is granted root privilege.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
root=<-client>[:<-client>]...
Excludes the specified NFS clients from root privileges. Clients
must be preceded with - (dash) to specify exclusion.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
anon=<uid>
If a request comes from an unknown user, the specified UID is used
as the effective user ID. Root users (UID=0) are considered
"unknown" by the NFS server unless they are included in the
root option. The default value for anon=<uid> is the user
"nobody". If the user "nobody" does not exist, then the value
65534 is used.
Caution: Using anon=0 is not recommended due to security concerns.
access=<client>[:<client>]...
Provides mount access for the specified NFS clients.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
access=<-client>[:<-client>]...
Excludes the specified NFS clients from access even if they are
part of a subnet or netgroup that is allowed access. Clients must
be preceded with - (dash) to specify exclusion.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
nfsv4only
Specifies that the NFS export can be accessed only when a client is
using NFSv4.
vlan=<vlanid>[,<vlanid>,...]
Specifies that all hosts belonging to the listed VLAN IDs have access
to the exported file system. Hosts on other VLANs are denied. The
VLAN IDs are separated by a colon (:), just like other
server_export option values.
Clients can be a hostname, netgroup, subnet, or IP address and
must be colon-separated, without spaces. A subnet is an IP
address/netmask (for example, 168.159.50.0/255.255.255.0). A
hostname is first checked for in the Data Mover's local hosts
database, then on the NIS (host database) or DNS server (if
enabled). A netgroup is searched in the local netgroup database
and then on the NIS server’s netgroup database. If the client name
does not exist in any case, then access is denied.
Note: Netgroups are supported. The hosts and netgroup files can be
created on the Control Station using your preferred method (for example,
with an editor, or by copying from another node), then copied to the Data
Mover.
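The colon plays two roles in these lists: entry separator and IPv6 component
separator, which is why IPv6 clients must be bracketed. A hypothetical parser
(an illustration, not the eNAS implementation) can then treat only colons
outside [ ] as separators:

```shell
# Split a client list on colons, but leave colons inside [ ] (IPv6) alone.
list='172.24.108.10:[1080:0:0:0:8:800:200C:417A]:host3'
printf '%s\n' "$list" | awk '{
  depth = 0
  for (i = 1; i <= length($0); i++) {
    c = substr($0, i, 1)
    if (c == "[") depth++
    if (c == "]") depth--
    if (c == ":" && depth == 0) c = "\n"   # separator only outside brackets
    printf "%s", c
  }
  printf "\n"
}'
```

This prints one client per line, with the bracketed IPv6 address kept intact.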
nosuid=<client>[:<client>]... OR
nosuid=<-client>[:<-client>]...
When the nosuid NFS export option is used with a list of client
names, the setuid and setgid bits are cleared from the permissions
before setting the permissions on any file on the exported
pathname for those clients.
When the nosuid NFS export option is used with a dash (-) before
each client name, the setuid and setgid bits are cleared from the
permissions before setting the permissions on any file on the
exported pathname for all clients except for the clients listed.
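Clearing the setuid and setgid bits is a bitwise operation on the file mode.
A sketch of the effect (the mode values are illustrative, not taken from this
guide):

```shell
# A mode of 4755 (rwsr-xr-x) has the setuid bit (04000) set.
# Masking out 06000 clears both setuid (04000) and setgid (02000).
mode=4755
printf 'mode without setuid/setgid bits: %o\n' "$(( 0$mode & ~06000 ))"
```

The permission bits (755) are untouched; only the setuid/setgid bits drop.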
-unexport [-perm] <pathname>
Temporarily unexports a <pathname> unless -perm is specified. If
-perm is specified, removes the entry from the export table.
FOR CIFS OPERATIONS
-------------------list -name <sharename>
Displays the specified CIFS share.
[-option <options>]
Specifies the following comma-separated list of options:
[netbios=<netbios_name>]
When the share has an associated NetBIOS name, that name is required to
locate the entry. Multiple CIFS entries can have the same <sharename>
when they belong to different NetBIOS names.
-name <sharename> [-ignore] [-option <options>]
[-comment <comment>] <pathname>
Creates a CIFS share. Share name length is limited to 12 ASCII characters
unless Unicode is enabled, in which case the limit is 80 multibyte
characters. Share names cannot include the following characters: /, \, %,
", NUL (Null character), STX (start of header), SOT (start of text), and
LF (line feed). Share names can contain spaces and other nonalphanumeric
characters, but must be enclosed by quotes if spaces are used. Share
names cannot begin with a - (hyphen). Share names are case-sensitive.
Comment length is limited to 256 bytes (represented as 256 ASCII
characters or a variable number of Unicode multibyte characters). A
comment cannot include the following characters: NUL (Null
character), STX (start of header), and SOT (start of text). Comments
can contain spaces and other nonalphanumeric characters, but must
be enclosed by quotes if spaces are used. Pathname length is limited
to 1024 bytes.
The -ignore option overwrites the previous options and comment in the
export table.
[-option <options>]
Specifies the following comma-separated options:
ro
Exports the <pathname> for CIFS clients as read-only.
rw=<client>[:<client>]...
Creates the share for CIFS clients as read-mostly. Read-mostly means
shared read-only to most clients, but read-write to those specified. By
default, the <pathname> is shared read-write to all. A client may be
either a <user_name> or <group_name>.
Note: If <client> is an IPv6 address, it must be enclosed in square
brackets ([ ]).
umask=<mask>
Specifies a user file-creation mask (umask) that allows NFS permissions
to be determined for the share.
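The effect of umask=<mask> on the permission bits can be sketched with shell
arithmetic (the base mode 666 here is illustrative):

```shell
# With a mask of 027, a default rw-rw-rw- (666) file ends up as
# rw-r----- (640): each bit set in the mask is cleared from the mode.
mask=027
base=666
printf 'resulting mode: %o\n' "$(( 0$base & ~0$mask ))"
```

Example #15 below uses umask=027 when creating a share.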
user=<default_user>
When using share level access (server_checkup provides
information), specifies a <default_user> which must be entered as
a character string. The user must be defined in the Data Mover's
password file. There is a 20 character limit for the username.
group=<default_group>
When using share level access (server_checkup provides
information), indicates a <default_group> which must be entered
as a character string. There is a 256 character limit for group
names.
ropasswd=<share_passwd>
When using share level access (server_checkup provides
information), creates a read-only password to allow clients access
to the share. Passwords can be viewed in the list of shared entries.
rwpasswd=<share_rw_passwd>
When using share level access (server_checkup provides
information), creates a read-write password to allow clients
access to the share. Passwords are displayed in the list of shared
entries.
Note: Users from any client machine who know the value of the
ropasswd or rwpasswd can access the share for read-only and read-write
operations.
maxusr=<maxusr>
Sets the maximum number of simultaneous users permitted for a
share.
netbios=<netbiosName>[,netbios=<netbiosName>]...
Associates a share on a single domain with one or more NetBIOS
names created with server_checkup. By default, if a NetBIOS
name is not specified for a share, the share is visible to all
NetBIOS names.
-comment
Adds a comment for the specified CIFS share. The comment is displayed
when listing the shared entries.
-unexport -name <sharename>
Permanently removes access to a share by removing the entry from the export
table.
[-option <options>]
Specifies the following comma-separated options:
netbios=<netbios_name>
When the share has an associated NetBIOS name, the NetBIOS name is
required to locate the entry, because multiple CIFS entries can have
the same <sharename> when they belong to different NetBIOS names.
-option type={CA[:]Encrypted[:][ABE[:]HASH[:][OCAutoI|OCVDO|OCNONE]]|NONE}
Specifies the following colon-separated list of options:
* Continuous Availability (CA): Indicates continuous availability of data
on the specific share.
* Encrypted: The server requires encrypted messages to access the share.
* Access Based Enumeration (ABE): Only files and directories to which the
user has read access are visible.
* HASH: Indicates that the share supports hash generation for BranchCache
retrieval.
* Offline Caching Attributes (OC): By default, the user MUST allow only
manual caching for the files opened from this share.
- OCAutoI: The user MAY cache every file that it opens from this share.
- OCVDO: The user MAY cache every file that it opens from this share.
Also, the user MAY satisfy the file requests from its local cache.
- OCNone: Indicates that no files or programs from the shared folder are
available offline.
SEE ALSO
-------Configuring NFS on VNX, Managing Volumes and File Systems for VNX
Manually, server_checkup, and server_mount.
EXAMPLE #1
---------To export a specific NFS entry, type:
$ server_export server_2 -Protocol nfs /ufs1
server_2 : done
EXAMPLE #2
---------To export an NFS entry and overwrite existing settings, type:
$ server_export server_2 -Protocol nfs -ignore -option
access=172.24.102.0/255.255.255.0,root=172.24.102.240 -comment ’NFS Export
for ufs1’ /ufs1
server_2 : done
EXAMPLE #3
---------To export NFS entry dir1, a subdirectory of the exported entry /ufs1 in a
multilevel file system hierarchy, type:
$ server_export server_2 -Protocol nfs /ufs1/dir1
server_2 : done
EXAMPLE #4
---------To assign a name to a NFS export, type:
$ server_export server_2 -Protocol nfs -name nasdocsfs /ufs1
server_2 : done
EXAMPLE #5
---------To export an NFS entry using Kerberos authentication, type:
$ server_export server_2 -Protocol nfs -option
sec=krb5:ro,root=172.24.102.240,access=172.24.102.0/255.255.255.0 /ufs2
server_2 : done
EXAMPLE #6
---------To export an NFS entry for NFSv4 only, type:
$ server_export server_2 -Protocol nfs -option nfsv4only /ufs1
server_2 : done
EXAMPLE #7
---------To list all NFS entries, type:
$ server_export server_2 -Protocol nfs -list -all
server_2 :
export "/ufs2" sec=krb5 ro root=172.24.102.240
access=172.24.102.0/255.255.255.0
export "/ufs1" name="/nasdocsfs" access=172.24.102.0/255.255.255.0
root=172.24.102.240 nfsv4only comment="NFS Export for ufs1"
export "/" anon=0
access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101
EXAMPLE #8
---------To list NFS entries for the specified path, type:
$ server_export server_2 -list /ufs1
server_2 :
export "/ufs1" name="/nasdocsfs" access=172.24.102.0/255.255.255.0
root=172.24.102.240 nfsv4only comment="NFS Export for ufs1"
EXAMPLE #9
---------To temporarily unexport an NFS entry, type:
$ server_export server_2 -Protocol nfs -unexport /ufs2
server_2 : done
EXAMPLE #10
----------To export all NFS entries, type:
$ server_export server_2 -Protocol nfs -all
server_2 : done
EXAMPLE #11
----------To export a specific NFS entry in a language that uses multibyte
characters, type:
$ server_export server_2 -Protocol nfs
/<nfs_entry_in_local_language_text>
server_2 : done
EXAMPLE #12
----------To permanently unexport an NFS entry, type:
$ server_export server_2 -unexport -perm /ufs1
server_2 : done
EXAMPLE #13
----------To permanently unexport all NFS entries, type:
$ server_export server_2 -Protocol nfs -unexport -perm -all
server_2 : done
EXAMPLE #14
----------To provide access to a CIFS share, type:
$ server_export server_2 -name ufs1 /ufs1
server_2 : done
EXAMPLE #15
----------To create a CIFS share and overwrite existing settings, type:
$ server_export server_2 -name ufs1 -ignore -option
ro,umask=027,maxusr=200,netbios=dm112-cge0 -comment ’CIFS share’ /ufs1
server_2 : done
EXAMPLE #16
----------To create a CIFS share in a language that uses multibyte characters, type:
$ server_export server_2 -P cifs -name <name_in_local_language_text>
-comment <comment_in_local_language_text> /accounting
server_2 : done
EXAMPLE #17
----------To list all CIFS entries, type:
$ server_export server_2 -Protocol cifs -list
server_2 :
share "ufs1" "/ufs1" ro umask=027 maxusr=200 netbios=DM112-CGE0
comment="CIFS share"
share "ufs2" "/ufs2" umask=022 maxusr=4294967295
EXAMPLE #18
----------
To display a specific CIFS share, type:
$ server_export server_2 -list -name ufs1 -option netbios=dm112-cge0
server_2 :
share "ufs1" "/ufs1" ro umask=027 maxusr=200 netbios=DM112-CGE0
comment="CIFS share"
EXAMPLE #19
----------To export all CIFS entries, type:
$ server_export server_2 -Protocol cifs -all
server_2 : done
EXAMPLE #20
----------To list all NFS and CIFS entries, type:
$ server_export server_2
server_2 :
export "/ufs2" sec=krb5 ro root=172.24.102.240
access=172.24.102.0/255.255.255.0
export "/ufs1" nfsv4only
export "/" anon=0
access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101
share "ufs2" "/ufs2" umask=022 maxusr=4294967295
share "ufs1" "/ufs1" ro umask=027 maxusr=200 netbios=DM112-CGE0
comment="CIFS share"
Where:
Value    Definition
export   A file system entry to be exported.
sec      Security mode for the file system.
ro       File system is to be exported as read-only.
root     IP address with root access.
access   Access is permitted for those IP addresses.
share    Entry to be shared.
ro       File system is to be shared as read-only.
umask    User creation mask.
maxusr   Maximum number of simultaneous users.
netbios  NetBIOS name for the share.
comment  Comment specified for the share.
EXAMPLE #21
----------To permanently unexport all CIFS and NFS entries, type:
$ server_export server_2 -unexport -perm -all
server_2 : done
EXAMPLE #22
----------To delete a CIFS share, type:
$ server_export server_2 -unexport -name ufs1 -option netbios=dm112-cge0
server_2 : done
EXAMPLE #23
----------To delete all CIFS shares, type:
$ server_export server_2 -Protocol cifs -unexport -all
server_2 : done
EXAMPLE #24
----------To export a file system for NFS that specifies an IPv4 and IPv6
address, type:
$ server_export server_2 -Protocol nfs -option
access=172.24.108.10:[1080:0:0:0:8:800:200C:417A] /fs1
server_2 : done
EXAMPLE #25
----------To export a file system for NFS that specifies two IPv6 addresses,
type:
$ server_export server_2 -Protocol nfs -option
rw=[1080:0:0:0:8:80:200C:417A]:[1080:0:0:0:8:800:200C:417B] /fs1
server_2 : done
EXAMPLE #26
----------To verify that the file system was exported, type:
$ server_export server_2 -list /fs1
server_2 :
export "/fs1" rw=[1080:0:0:0:8:80:200C:417A]:[1080:0:0:0:8:800:200C:417B]
EXAMPLE #27
----------To export the fs42 file system of the VDM vdm1, type:
$ server_export vdm1 -P nfs /fs42
done
EXAMPLE #28
----------To create a share foo on the server PALIC with HASH and ABE enabled, type:
$ server_export server_3 -name foo -option netbios=PALIC,
type=ABE:HASH /fs3/foo
server_3 : done
EXAMPLE #29
----------To change attributes to this share to ABE only, type:
$ server_export server_3 -name foo -option netbios=PALIC,
type=ABE /fs3/foo
server_3 : done
EXAMPLE #30
----------To remove all the attributes, type:
$ server_export server_3 -name foo -ignore -option netbios=PALIC,type=None
/fs3/foo
server_3 : done
EXAMPLE #31
----------To view the attributes, type:
$ server_export server_3
server_3 :
share "foo" "/fs3/foo" type=ABE:HASH umask=022 maxusr=4294967295 netbios=PALIC
EXAMPLE #32
----------To create a share foo on the server PALIC with CA and ABE enabled,
type:
$ server_export server_3 -name foo -option netbios=PALIC,
type=CA:ABE /fs3/foo
server_3 : done
EXAMPLE #33
----------To change attributes of the share foo to CA only, type:
$ server_export server_3 -name foo -option netbios=PALIC,
type=CA /fs3/foo
server_3 : done
EXAMPLE #34
----------To view the attributes, type:
$ server_export server_3
server_3 :
share "foo" "/fs3/foo" type=CA umask=022 maxusr=4294967295 netbios=PALIC
EXAMPLE #35
----------To create a share share10 accessible only through encrypted SMB
messages, type:
$ server_export vdm1 -P cifs -name share10 -o
type=Encrypted /fs42/protected_dir1
server_3 : done
EXAMPLE #36
----------To export the NFS pathname "/users/gary" on Data Mover server_2
restricting setuid and setgid bit access for clients host10 and host11,
type:
$ server_export server_2 -Protocol nfs -option
nosuid=host10:host11 /users/gary
server_2 : done
EXAMPLE #37
----------To export the NFS pathname "/production1" on all Data Movers
restricting setuid and setgid bit access for client host123, type:
$ server_export ALL -option nosuid=host123 /production1
server_2 : done
EXAMPLE #38
----------To export the NFS pathname "/fs1" on all Data Movers restricting
setuid and setgid bit access for all clients except for 10.241.216.239,
which is allowed root privileges in addition to setuid and setgid bit
access, type:
$ server_export server_2 -Protocol nfs -option
root=10.241.216.239,nosuid=-10.241.216.239 /fs1
server_2 : done
------------------------------------------------------Last Modified: November 20, 2012 11:55 a.m.
server_file
Copies files between the Control Station and the specified Data Movers.
SYNOPSIS
-------server_file {<movername>|ALL}
{-get|-put} <src_file> <dst_file>
DESCRIPTION
----------server_file copies the source file from the specified Data Mover (or
Control Station) to the destination file on the Control Station (or
specified Data Mover). The <src_file> indicates the source file, and
the name <dst_file> indicates destination file. By default, if a
directory is not specified on the Data Mover, the /.etc directory is
used.
The ALL option executes the command for all Data Movers.
OPTIONS
-------get <src_file> <dst_file>
Copies the source file on the Data Mover to the destination file on the
Control Station. Both the <src_file> and <dst_file> may be full pathnames.
-put <src_file> <dst_file>
Copies the source file on the Control Station to the destination file on the
Data Mover. Both the <src_file> and <dst_file> must be full
pathnames.
Caution: This command overwrites existing files of the same name without
notification. Use care when copying files.
EXAMPLE #1
---------To copy a file from the Control Station to a Data Mover, type:
$ server_file server_2 -put passwd passwd
server_2 : done
EXAMPLE #2
---------To copy a file from the Data Mover to the Control Station, type:
$ server_file server_2 -get passwd /home/nasadmin/passwd
server_2 : done
-------------------------------------Last Modified: April 11, 2011 01:35 pm
server_fileresolve
Starts, deletes, stops, checks, and displays the fileresolve service
for the specified Data Mover. The fileresolve service facilitates
inode-to-filename translation. This translation is required when an administrator
monitors the ’fs.qtreeFile’ and ’fs.filesystem’ statistics.
SYNOPSIS
-------server_fileresolve <movername>
-service { -start [-maxlimit <1M>]
| -stop
| -delete
| -status }
| -list
| -add <path_name> [,...]
| -drop <path_name> [,...]
| -lookup { -filesystem <fs_name> -inode <inode>[,...]
| -qtree <qt_name> -inode <inode>[,...] } [...]
DESCRIPTION
----------Controls and manages the fileresolve service, which crawls through filesystems
specified by the user. To have the fileresolve service started at boot time,
it is recommended that this command be added to the eof config file for the
Data Mover.
OPTIONS
-------service {-start [-maxlimit <1M>]
Starts the fileresolve service on the specified Data Mover. By default, the
fileresolve service caches up to 1 million files (this takes about 32 MB of
memory on the Data Mover). Increasing the maximum limit of the
inode-to-filename translation cache from 1M to 2M increases the memory
consumed by the service to 64 MB.
To change the maxlimit, use the following command:
server_fileresolve <movername> -service -start -maxlimit <new_value>
This new limit will be preserved across Data Mover reboots. However, when a
new limit is applied, the entire inode-to-filename cache will be flushed and
rebuilt. The Filesystem crawler adds files to its cache in the order they are
traversed. Hence, the first 1 million files traversed (by default) go in the
cache.
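The memory figures above imply roughly 32 bytes of cache per entry. A quick
sanity check of that scaling (the per-entry size is inferred from the
1M-entries / 32 MB figure, not documented):

```shell
# Inferred per-entry cost: ~32 bytes (1,000,000 entries -> ~32 MB).
bytes_per_entry=32
for limit in 1000000 2000000; do
  printf '%s entries -> %s MB\n' "$limit" "$(( limit * bytes_per_entry / 1000000 ))"
done
```

This reproduces the 32 MB default and the 64 MB figure for a 2M limit.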
-stop
Flushes the inode-to-filename cache and stops the service.
-delete
Deletes the fileresolve service on the specified Data Mover. Deleting the
service also frees the memory consumed by the fileresolve service and
deletes the configuration files created by the service.
-status
Checks the status of the files that are added to the cache
on the specified Data Mover.
-list
Displays the file systems and directories that are in the configuration
and used for crawling.
-add <path_name> [,...]
Adds the specified path to the server_fileresolve configuration.
Crawls the specified path and builds the inode-to-filename cache.
To add a specific file that should be included in the
inode-to-filename map, the following command should be used:
server_fileresolve server_X -add <path for file>
-drop <path_name>[,...]
Drops the specified path from the server_fileresolve configuration.
The inode-to-filename cache for the specified path is not cleared
until the service is restarted.
-lookup {-filesystem <fs_name> -inode <inode> [,...]
Performs an on demand crawl of the specified filesystem to translate the
inode to a pathname. If the pathname is not found for the inode, the inode
value is returned. For example, server_stats displays this inode value instead
of a path name in its output.
The user can do a 'deep, non-cached' lookup of the inode to discover the
pathname (if it still exists). However, this could take time (on the order
of minutes). Hence, server_stats only attempts a lookup in the cache and
does not attempt a full file system crawl.
Note: If the file name is successfully resolved, the full pathname is
returned. Even if the file name is the same as the inode, the path is
appended.
-lookup -qtree <qt_name> -inode <inode> [,...] } [...]
Performs an on demand crawl of the specified quota tree to
translate the inode to a pathname.
EXAMPLES
--------
EXAMPLE #1
----------
To display the new paths added, type:
$ server_fileresolve server_2 -add /server_2/ufs_0
server_2 :
New paths are added
EXAMPLE #2
----------
To list the specified file paths that are included in the
inode-to-filename map, type:
$ server_fileresolve server_2 -list
server_2 :
PATH
/server_2/ufs_5
/server_2/ufs_4
/server_2/ufs_3
/server_2/ufs_2
/server_2/ufs_1
/server_2/ufs_0
EXAMPLE #3
----------
To check the status of the fileresolve service, type:
$ server_fileresolve server_2 -service -status
server_2 :
FileResolve service is running :Max Limit of the cache:1000000 Entries used:10
Dropped entries:0
EXAMPLE #4
----------
To drop the specified path from the server_fileresolve configuration,
type:
$ server_fileresolve server_2 -drop /server_2/ufs_0
server_2 :
Paths are dropped
Warning: Restart service to remove the cached entries of dropped paths.
EXAMPLE #5
----------
To lookup multiple inodes within the same filesystem, type:
$ server_fileresolve server_2 -lookup -filesystem ufs_0 -inode 61697,61670,61660
server_2 :
Filesystem/QTree  Inode   Path
ufs_0             61660   /server_2/ufs_0/dir00000/testdir/yYY_0000039425.tmp
ufs_0             61670   /server_2/ufs_0/dir00000/testdir/kNt_0000028175.tmp
ufs_0             61697   /server_2/ufs_0/dir00000/testdir/gwR_0000058176.tmp
EXAMPLE #6
----------
To lookup multiple inodes within a Quota Tree, type:
$ server_fileresolve server_2 -lookup -qtree dir00000 -inode 61697
server_2 :
Filesystem/QTree  Inode   Path
dir00000          61697   /server_2/ufs_0/dir00000/testdir/gwR_0000058176.tmp
----------------------------------------------------------------
Date updated: June 04, 2012 12:15 p.m.
server_ftp
Configures the FTP server configuration for the specified Data Movers.
SYNOPSIS
--------
server_ftp {<movername>|ALL}
-service {-status|-start|-stop|{-stats [-all|-reset]}}
| -info
| -modify
[-controlport <controlport>]
[-dataport <dataport>]
[-defaultdir <path>]
[-homedir {enable|disable}]
[-keepalive <keepalive>]
[-highwatermark <highwatermark>]
[-lowwatermark <lowwatermark>]
[-deniedusers [<path>]]
[-welcome [<path>]]
[-motd [<path>]]
[-timeout <timeout>]
[-maxtimeout <maxtimeout>]
[-readsize <readsize>]
[-writesize <writesize>]
[-maxcnx <maxcnx>]
[-umask <umask>]
[-sslcontrol {no|allow|require|requireforauth}]
[-ssldata {no|allow|require}]
[-sslpersona {anonymous|default|<persona_name>}]
[-sslprotocol {default|ssl3|tls1|all}]
[-sslcipher {default|<cipherlist>}]
[-sslcontrolport <sslcontrolport>]
[-ssldataport <ssldataport>]
DESCRIPTION
-----------
server_ftp configures the ftp daemon. Optional SSL security support
is available. Modifications are made while the ftp daemon is
stopped and take effect when the ftp daemon is restarted. There is no
need to reboot the Data Mover for the changes to take effect.
OPTIONS
-------
server_ftp {<movername>|ALL}
Sends a request to the Data Mover to get all the parameters of the ftp
daemon.
The ALL option executes the command for all Data Movers.
-service {-status|-start|-stop|{-stats [-all|-reset]}}
-status
Retrieves the current status of the ftp daemon.
-start
Starts the ftp daemon. The start option persists after the Data Mover
is rebooted.
-stop
Stops the ftp daemon.
-stats [-all|-reset]
Displays the statistics of the ftp daemon. The -reset option resets
all the ftp server statistics. The -all option displays detailed
statistics.
-info
Retrieves all the parameters for the ftp daemon along with its current
status.
-modify
Modifies the ftp daemon configuration. The ftp daemon has to be
stopped to carry out the changes. The modifications are taken into
account when the service is restarted.
-controlport <controlport>
Sets the local tcp port for control connections. By default, the port
is 21. When the control port is set to 0, unsecured ftp usage is
disabled and only implicit secure connections on the SSL port
(default 990) are accepted.
Note: This default port can be changed using the sslcontrolport option.
-dataport <dataport>
Sets the local tcp port for active data connections. By default, the
port is 20. When <dataport> is set to 0, the port is allocated
dynamically by the server in active mode.
-defaultdir <path>
Sets the default user directory when the user home directory is
not accessible. This option replaces the ftpd.defaultdir parameter.
By default, "/" is used.
-homedir {enable|disable}
Restricts or allows user access to their home directory tree. When
enabled, the user is allowed access to their home directory only. If
the user home directory is not accessible, access is denied. During
the connection, the user is denied access to data outside of their
home directory space. By default, this feature is disabled.
Note: Using FTP on VNX provides more information about how the home
directory of a user is managed.
-umask <umask>
Defines the mask to set the mode bits on file or directory creation.
By default the mask is 027, which means that rwxr-x--- mode bits
are assigned on directory creation (rw-r----- on files).
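The effect of the default 027 mask can be checked locally with any POSIX shell. This is an illustration only (run on a Linux host, not an eNAS command; the temporary paths are made up): group write is masked off, and all access is masked off for others.

```shell
# Local illustration of the default 027 umask (not an eNAS command).
umask 027
d=$(mktemp -d)                       # mktemp sets its own mode (700)
touch "$d/file"                      # 666 & ~027 -> 640 (rw-r-----)
mkdir "$d/subdir"                    # 777 & ~027 -> 750 (rwxr-x---)
modes=$(stat -c '%a' "$d/file" "$d/subdir")
echo "$modes"
rm -rf "$d"
```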
-keepalive <keepalive>
Sets TCP keepalive value for the ftp daemon. This value is given
in seconds. By default, the value is 60. The value 0 disables the
TCP keepalive option. The maximum value is 15300 (255 minutes).
-highwatermark <highwatermark>
Sets TCP high watermark value (amount of data stored without
knowledge of the client) for the ftp daemon. By default, the value
is 65536. The minimum value is 8192, and the maximum value is
1048576 (1 MB).
Caution: Do not modify this parameter without a thorough knowledge of
the impact on FTP client behavior.
-lowwatermark <lowwatermark>
Sets TCP low watermark value (amount of the data to be added,
after the highwatermark has been reached and new data can be
accepted from the client) for the ftp daemon. The minimum value
is 8192, maximum value is 1048576 (1 MB), and default value is
32768.
Caution: Do not modify this parameter without a thorough knowledge of
the impact on FTP client behavior.
459
-deniedusers <deniedusers_file>
Denies FTP access to specific users on a Data Mover. Specifies the
path of a text file containing the list of usernames to be denied
access. Place each username on a separate line. By default, all
users are allowed.
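A denied-users file of the required shape (one username per line) can be built with printf. A minimal sketch, assuming hypothetical usernames and a /tmp staging path; on the Data Mover the file referenced by -deniedusers lives under /.etc, as in EXAMPLE #16:

```shell
# Sketch: build a denied-users list, one username per line.
# Usernames and path are hypothetical.
printf '%s\n' guest anonymous tempuser > /tmp/ftpusers.denied
cat /tmp/ftpusers.denied
```

The resulting file would then be passed to server_ftp with -modify -deniedusers <path>.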
-welcome <welcome_file>
Specifies the path of the file to be displayed on the welcome screen.
For example, this file can display a login banner before the user is
prompted for authentication data. By default, no welcome
message is displayed.
-motd <motd_file>
Specifies the path of the file containing the message of the day.
Users see this message ("message of the day") after they
successfully log in. By default, no message of the day is
displayed.
-timeout <timeout>
Specifies the default inactivity time-out period (when not set by
the client). The value is given in seconds. If there is no activity
within the specified time, the client is disconnected from the server
and has to re-open a connection. By default, the <timeout>
value is 900 seconds. The minimum value is 10 seconds, and the
maximum value is 7200.
-maxtimeout <maxtimeout>
Sets the maximum time-out period allowed by the client. The
value is given in seconds and any value larger than maximum
time-out period is not allowed. By default, the <maxtimeout>
value is 7200 seconds. The minimum value is 10 seconds, and the
maximum value is 7200.
-readsize <readsize>
Sets the size for reading files from the disk. The value must be
at least 8192 and a multiple of 8 KB. By default, the
<readsize> is 8192 bytes. The minimum value is 8192, and the
maximum value is 1048576 (1 MB).
-writesize <writesize>
Sets the size for writing files to the disk. The value must be
at least 8192 and a multiple of 8 KB. By default, the
<writesize> is 49152 (48 KB). The minimum value is 8192, and the
maximum value is 1048576 (1 MB).
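The constraints shared by -readsize and -writesize (at least 8192 bytes, at most 1048576, a multiple of 8 KB) can be checked before invoking server_ftp. A local sketch, not part of server_ftp itself:

```shell
# Local pre-check of -readsize/-writesize constraints (not an eNAS command).
valid_size() {
    # at least 8 KB, at most 1 MB, and a whole multiple of 8192
    [ "$1" -ge 8192 ] && [ "$1" -le 1048576 ] && [ $(( $1 % 8192 )) -eq 0 ]
}
valid_size 49152 && echo "49152 accepted"     # the documented writesize default
valid_size 50000 || echo "50000 rejected"     # not a multiple of 8 KB
```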
-maxcnx <maxcnx>
Sets the maximum number of control connections the ftp daemon
will support. By default, the <maxcnx> value is set to 65535
(64K-1). The minimum value is 1, and the maximum value is
65535 (64K-1).
-sslcontrol {no|allow|require|requireforauth}
Uses SSL for the ftp control connection depending on the
attributes specified. By default, SSL is disabled. The no option
disables SSL control. The allow option specifies that SSL is
enabled, but the user can still connect without SSL. The require
option specifies that SSL is required for the connection. The
requireforauth option specifies that SSL is required for
authentication. The control path reverts to unsecured after this
authentication. When the client is behind a firewall, this helps the
firewall filter the ftp commands requiring new port access.
Note: Before the server can be configured with SSL, the Data Mover must be
set up with a private key and a public certificate. This key and
certificate are identified using a persona. In addition, the necessary
Certificate Authority (CA) certificates used to identify trusted
servers must be imported into the Data Mover. Use the system's PKI
feature to manage the use of certificates prior to configuring SSL
operation.
-ssldata {no|allow|require}
Uses SSL for the data connection depending on the attributes
specified. The no option disables SSL. The allow option specifies
that SSL is enabled, but the user can also transfer data without
SSL. The require option specifies that SSL is required for the data
connection. The ssldata value cannot be set to allow or require if
sslcontrol is set to no. By default, SSL is disabled.
Note: These options are set on the server but are dependent on ftp client
capabilities. Some client capabilities may be incompatible with server
settings. Using FTP on VNX provides information on validating
compatibility.
-sslpersona {anonymous|default|<persona_name>}
Specifies the persona associated with the Data Mover. Personas
are used to identify the private key and public certificate used by
SSL. The default value specified is default (each Data Mover is
configured with a persona named default). The anonymous value
specifies that SSL can operate without using a certificate. This
implies that the communication between client and server is
encrypted and data integrity is guaranteed.
Note: Use server_certificate to configure the persona before using
server_ftp.
-sslprotocol {default|ssl3|tls1|all}
Specifies the SSL protocol version that the ftp daemon on the
server accepts:
* ssl3 - Only SSLv3 connections
* tls1 - Only TLSv1 connections
* all - Both SSLv3 and TLSv1 connections
* default - Uses the value set in the ssl.protocol parameter which,
  by default, is 0 (SSLv3 and TLSv1)
-sslcipher {default|<cipherlist>}
Specifies the SSL cipher suite. The value of default is the value set
in the ssl.cipher parameter. This value means that all ciphers are
supported by VNX except the Anonymous Diffie-Hellman,
NULL, and SSLv2 ciphers and that the supported ciphers are
sorted by the size of the encryption key.
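The <cipherlist> value follows OpenSSL cipher-list syntax. As a local illustration only (not an eNAS command; it assumes a local openssl binary, and the exact cipher names vary by OpenSSL version, with SSLv2-era keywords absent from modern builds), a comparable string can be expanded to see which suites it matches:

```shell
# Expand an OpenSSL-style cipher string locally (not an eNAS command).
# 'ALL:!ADH:@STRENGTH' excludes Anonymous Diffie-Hellman and sorts
# the remaining ciphers by encryption-key strength.
openssl ciphers 'ALL:!ADH:@STRENGTH' | tr ':' '\n' | head -3
```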
-sslcontrolport <sslcontrolport>
Sets the implicit control port for FTP connections over SSL. By
default, the port is 990. To disable implicit FTP connections over
SSL, the <sslcontrolport> must be set to 0.
-ssldataport <ssldataport>
Sets the local tcp port for active data connections using implicit
FTP connections over SSL. By default, the port is 989. If the
ssldataport is set to 0, the Data Mover will use a port allocated by
the system.
SEE ALSO
--------
server_certificate.
EXAMPLE #1
----------
To retrieve all the parameters for the ftp daemon and its status, type:
$ server_ftp server_2 -info
FTP started
=========
controlport            21
dataport               20
defaultdir             /.etc/ftpd/pub
homedir                disable
umask                  027
tcp keepalive          1 minute
tcp high watermark     65536 bytes
tcp low watermark      32768 bytes
readsize               8192 bytes
writesize              49152 bytes
denied users file path /.etc/ftpd/conf/ftpusers
welcome file path      /.etc/ftpd/conf/welcome
motd file path         /.etc/ftpd/conf/motd
session timeout        900 seconds
max session timeout    7200 seconds
Security Options
=============
sslpersona             default
sslprotocol            default
sslcipher              default
FTP over TLS explicit Options
-----------------------------
sslcontrol             SSL require for authentication
ssldata                allow SSL
FTP over SSL implicit Options
-----------------------------
sslcontrolport         990
ssldataport            989
EXAMPLE #2
----------
To display the statistics of the ftp daemon, type:
$ server_ftp server_2 -service -stats
Login Type       Successful   Failed
==========       ==========   ======
Anonymous        10           0
Unix             3            2
CIFS             7            1
Data transfers    Count   Throughput (MBytes/sec)
                          min     average   max
==============    =====   ====    =======   =====
Write Bin         10      10.00   19.00     20.00
Read Bin          0       ----    ----      ----
Write ASCII       2       1.00    1.50      2.00
Read ASCII        0       ----    ----      ----
SSL Write Bin     5       5.00    17.00     18.00
SSL Read Bin      15      7.00    25.00     35.00
SSL Write ASCII   0       ----    ----      ----
SSL Read ASCII    0       ----    ----      ----
Where:
Value           Definition
Throughput      Throughput is calculated using the size of the file (MBytes)
(MBytes/sec)    divided by the duration of the transfer (in seconds).
average         Average is the average of the throughputs (sum of the
                throughputs divided by the number of transfers).
Data transfers  Defines the type of transfer.
Count           Number of operations for a transfer type.
min             Minimum time in milliseconds required to execute the
                operation (with regard to the Data Mover).
max             Maximum time in milliseconds required to execute the
                operation (with regard to the Data Mover).
EXAMPLE #3
----------
To display the statistics of the ftp daemon with details, type:
$ server_ftp server_2 -service -stats -all
Commands          Count
========          =====
USER              23
PASS              23
QUIT              23
PORT              45
EPRT              10
....              ....
FEAT              23
SITE Commands     Count
=============     =====
UMASK             0
IDLE              10
CHMOD             0
HELP              0
BANDWIDTH         0
KEEPALIVE         10
PASV              56
OPTS Commands     Count
=============     =====
UTF8              10
Login Type        Successful   Failed
==========        ==========   ======
Anonymous         10           0
Unix              3            2
CIFS              7            1
Connections       Count
===========       =====
Non secure
----------
Control           10
Data              44
Explicit SSL
------------
Control Auth      3
Control           8
Data              20
Implicit SSL
------------
Control           0
Data              0
Data transfers    Count   Throughput (MBytes/sec)
                          min     average   max
==============    =====   ====    =======   =====
Write Bin         10      10.00   19.00     20.00
Read Bin          0       ----    ----      ----
Write ASCII       2       1.00    1.50      2.00
Read ASCII        0       ----    ----      ----
SSL Write Bin     5       5.00    17.00     18.00
SSL Read Bin      15      7.00    25.00     35.00
SSL Write ASCII   0       ----    ----      ----
SSL Read ASCII    0       ----    ----      ----
Where:
Value           Definition
Commands        FTP protocol command name.
Count           Number of commands received by the Data Mover.
SITE Commands   Class of command in FTP protocol.
OPTS Commands   Class of command in FTP protocol.
EXAMPLE #4
----------
To retrieve the status of the ftp daemon, type:
$ server_ftp server_3 -service -status
server_3 : done
State : running
EXAMPLE #5
----------
To start the ftp daemon, type:
$ server_ftp server_2 -service -start
server_2 : done
EXAMPLE #6
----------
To stop the ftp daemon, type:
$ server_ftp server_2 -service -stop
server_2 : done
EXAMPLE #7
----------
To set the local tcp port for the control connections, type:
$ server_ftp server_2 -modify -controlport 256
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 20
Default dir    : /
Home dir       : disable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 27
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #8
----------
To set the local tcp port for active data connections, type:
$ server_ftp server_2 -modify -dataport 257
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /
Home dir       : disable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 27
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #9
----------
To change the default directory of a user when their home directory is
not accessible, type:
$ server_ftp server_2 -modify -defaultdir /big
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 27
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #10
-----------
To allow users access to their home directory tree, type:
$ server_ftp server_2 -modify -homedir enable
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : enable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 27
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #11
-----------
To restrict user access to their home directory tree, type:
$ server_ftp server_2 -modify -homedir disable
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 27
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #12
-----------
To set the default umask for creating a file or a directory by means of
the ftp daemon, type:
$ server_ftp server_2 -modify -umask 077
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 1
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 77
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #13
-----------
To set the TCP keepalive for the ftp daemon, type:
$ server_ftp server_2 -modify -keepalive 120
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 120
High watermark : 65536
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 77
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #14
-----------
To set the TCP highwatermark for the ftp daemon, type:
$ server_ftp server_2 -modify -highwatermark 90112
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 120
High watermark : 90112
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 77
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #15
-----------
To set the TCP lowwatermark for the ftp daemon, type:
$ server_ftp server_2 -modify -lowwatermark 32768
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 120
High watermark : 90112
Low watermark  : 32768
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 77
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #16
-----------
To restrict FTP server access to specific users, type:
$ server_ftp server_2 -modify -deniedusers /.etc/mydeniedlist
server_2 : done
FTPD CONFIGURATION
==================
State                  : stopped
Control Port           : 256
Data Port              : 257
Default dir            : /big
Home dir               : disable
Keepalive              : 120
High watermark         : 90112
Low watermark          : 32768
Denied users conf file : /.etc/mydeniedlist
Timeout                : 900
Max timeout            : 7200
Read size              : 8192
Write size             : 49152
Umask                  : 77
Max connection         : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
EXAMPLE #17
-----------
To set the path of the file displayed before the user logs in, type:
$ server_ftp server_2 -modify -welcome /.etc/mywelcomefile
server_2 : done
FTPD CONFIGURATION
==================
State          : stopped
Control Port   : 256
Data Port      : 257
Default dir    : /big
Home dir       : disable
Keepalive      : 120
High watermark : 90112
Low watermark  : 32768
Welcome file   : /.etc/mywelcomefile
Timeout        : 900
Max timeout    : 7200
Read size      : 8192
Write size     : 49152
Umask          : 77
Max connection : 65535
SSL CONFIGURATION
=================
Control channel mode : disable
Data channel mode    : disable
Persona              : default
Protocol             : default
Cipher               : default
Control port         : 990
Data port            : 989
----------------------------------------------------------------
Last Modified Date: April 12, 2011. Time: 11:20 am
server_http
Configures the HTTP configuration file for independent services,
such as VNX FileMover, for the specified Data Movers.
SYNOPSIS
--------
server_http {<movername>|ALL}
-info [<feature>]
| -service <feature> {-start|-stop}
| -service [<feature>] -stats [-reset]
| -modify <feature>
[-threads <threads>]
[-users {valid|<user>[,<user>,<user>...]}]
[-hosts <ip>[,<ip>,<ip>...]]
[-port <port_number>]
[-timeout <max_idle_time>]
[-maxrequests <maxrequests>]
[-authentication {none|basic|digest}]
[-realm <realm_name>]
[-ssl {required|off}]
[-sslpersona {anonymous|default|<persona_name>}]
[-sslprotocol {default|ssl3|tls1|all}]
[-sslcipher {default|<cipherlist>}]
| -append <feature>
[-users {valid|<user>[,<user>,<user>...]}]
[-hosts <ip>[,<ip>,<ip>...]]
| -remove <feature>
[-users {valid|<user>[,<user>,<user>...]}]
[-hosts <ip>[,<ip>,<ip>...]]
DESCRIPTION
-----------
server_http manages user and host access to HTTP servers for independent
services such as FileMover.
The ALL option executes the command for all of the Data Movers.
OPTIONS
-------
-info [<feature>]
Displays information about the specified feature or all features including
server status, port, threads, requests allowed, timeout, access control,
and SSL configuration.
-service <feature> {-start |-stop}
Stops or starts the HTTP server for the specified feature.
-service [<feature>] -stats [-reset]
Lists the usage statistics of the HTTP server for the specified feature or
all features. If -reset is specified, statistics are reset to zero.
-modify <feature>
Displays the current HTTP protocol connection for the specified feature. When
issued with options, -modify sets the HTTP protocol connection for the
specified option. Any options previously set will be overwritten.
[-threads <threads>]
Sets the number of threads (default=20) for incoming service requests. The
minimum value is 4, the maximum 99. The HTTP threads are started on the
Data Mover at boot time.
[-users {valid|<user>[,<user>,<user>...]}]
Allows the users who correctly authenticate as defined in the Data Mover
passwd file (server_user provides more information) to execute commands
for the specified <feature>.
If valid is entered, all users in the passwd file are allowed to use
digest authentication. A comma-separated list of users can also be given.
If no users are given, digest authentication is turned off.
[-hosts <ip>[,<ip>,<ip>...]]
Specifies hosts by their IP addresses that are allowed to execute commands
for the specified <feature>.
[-port <port_number>]
Specifies the port on which the HTTP server listens for incoming service
requests. By default, the HTTP server instance for FileMover listens on
port 5080.
[-timeout <max_idle_time>]
Specifies the maximum time the HTTP server waits for a request before
disconnecting from the client. The default value is 60 seconds.
[-maxrequests <max_requests>]
Specifies the maximum number of requests allowed. The default value is
300 requests.
[-authentication {none|basic|digest}]
Specifies the authentication method. none disables user authentication,
allowing for anonymous access (that is, no authentication). basic
authentication uses a clear text password. digest authentication uses an
encrypted password. The default value is digest authentication.
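With digest authentication, the client proves knowledge of the password by sending an MD5 hash instead of the password itself. This local sketch shows the shape of that computation for the FileMover realm (the username, password, nonce, and URI here are all made up for illustration; the real exchange is driven by the HTTP client and server):

```shell
# Sketch of an HTTP digest response computation (all values hypothetical).
ha1=$(printf '%s' "user1:DHSM_authorization:secret" | md5sum | cut -d' ' -f1)  # MD5(user:realm:password)
ha2=$(printf '%s' "GET:/dhsm" | md5sum | cut -d' ' -f1)                        # MD5(method:uri)
response=$(printf '%s' "$ha1:nonce123:$ha2" | md5sum | cut -d' ' -f1)          # MD5(HA1:nonce:HA2)
echo "$response"
```

Only the final 32-character hash crosses the wire, which is why digest is preferred over basic when SSL is not in use.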
[-realm <realm_name>]
Specifies the realm name. This information is required when authentication
is enabled (that is, the -authentication option is set to basic or
digest). The default realm name for FileMover is DHSM_authorization.
[-ssl {required|off}]
Specifies whether the HTTP server runs in secure mode, that is, only
accepts data received on encrypted SSL sessions. The default value is off.
Note: Before the HTTP server can be configured with SSL, the Data Mover
must be set up with a private key and public certificate. This key and
certificate are identified using a persona. In addition, the necessary
Certificate Authority (CA) certificates to identify trusted servers must
be imported into the Data Mover. Use Celerra's PKI feature to manage the
use of certificates prior to configuring SSL operation.
[-sslpersona {default|anonymous|<persona_name>}]
Specifies the persona associated with the Data Mover. Personas are used to
identify the private key and public certificate used by SSL. The default
value is default (each Data Mover is currently configured with a single
persona named default). anonymous specifies that SSL can operate without
using a certificate.
[-sslprotocol {default|ssl3|tls1|all}]
Specifies the SSL protocol version the HTTPS server accepts:
* ssl3 - Only SSLv3 connections
* tls1 - Only TLSv1 connections
* all - Both SSLv3 and TLSv1 connections
* default - Uses the value set in the ssl.protocol parameter which,
  by default, is 0 (SSLv3 and TLSv1)
[-sslcipher {default|<cipherlist>}]
Specifies the SSL cipher suite. The value of default is the value set in the
ssl.cipher parameter which, by default, is ALL:!ADH:!SSLv2:@STRENGTH. This
value means that all ciphers are supported by Celerra except the Anonymous
Diffie-Hellman, NULL, and SSLv2 ciphers and that the supported ciphers
are sorted by the size of the encryption key.
-append <feature> [-users {valid|<user>[,<user>,<user>...]}]
[-hosts <ip>[,<ip>,<ip>...]]
Adds the specified users or hosts to the list of those who can execute
commands for the specified <feature> without having to re-enter the
existing list. The users and hosts descriptions provide more information.
If users or hosts are not specified, displays the current HTTP
configuration.
-remove <feature> [-users {valid|<user>[,<user>,<user>...]}]
[-hosts <ip>[,<ip>,<ip>...]]
Removes the specified users and hosts from the list of those who can
execute commands for the specified <feature> without impacting
others in the list. The users and hosts descriptions provide
information. If users or hosts are not specified, displays the current
HTTP configuration.
SEE ALSO
--------
Using VNX FileMover, Security Configuration Guide for File, fs_dhsm,
server_certificate, and nas_ca_certificate.
EXAMPLE #1
----------
To display information about the HTTP protocol connection for the FileMover
service, type:
$ server_http server_2 -info dhsm
server_2 : done
DHSM FACILITY CONFIGURATION
Service name   : EMC File Mover service
Comment        : Service facility for getting DHSM attributes
Active         : False
Port           : 5080
Threads        : 16
Max requests   : 300
Timeout        : 60 seconds
ACCESS CONTROL
Allowed IPs    : any
Authentication : digest, Realm: DHSM_Authorization
Allowed user   : nobody
SSL CONFIGURATION
Mode           : OFF
Persona        : default
Protocol       : default
Cipher         : default
Where:
Value           Definition
Service name    Name of the File Mover service.
active          Whether VNX FileMover is enabled or disabled on the
                file system.
port            TCP port of the File Mover service.
threads         Number of threads reserved for the File Mover service.
max requests    Maximum number of HTTP requests the service allows
                to keep the connection alive.
timeout         The time in seconds for which the service is kept alive
                after a period of no activity.
allowed IPs     List of client IP addresses that are allowed to connect
                to the service.
authentication  The HTTP authentication method used by the service.
allowed user    Users allowed to connect to the service.
mode            The SSL mode.
persona         Name of the persona associated with the certificate for
                establishing a secure connection.
protocol        The level of SSL protocol used for the service.
cipher          The cipher suite the service negotiates for
                establishing a secure connection with the client.
EXAMPLE #2
----------
To display statistical information about the HTTP protocol connection for
the FileMover service, type:
$ server_http server_2 -service dhsm -stats
server_2 : done
Statistics report for HTTPD facility DHSM :
Thread activity
  Maximum in use count         : 0
Connection
  IP filtering rejection count : 0
Request
  Authentication failure count : 0
SSL
  Handshake failure count      : 0
EXAMPLE #3
----------
To configure an HTTP protocol connection for FileMover using SSL, type:
$ server_http server_2 -modify dhsm -ssl required
server_2 : done
EXAMPLE #4
----------
To modify the threads option of the HTTP protocol connection for FileMover,
type:
$ server_http server_2 -modify dhsm -threads 40
server_2 : done
DHSM FACILITY CONFIGURATION
Service name   : EMC File Mover service
Comment        : Service facility for getting DHSM attributes
Active         : False
Port           : 5080
Threads        : 40
Max requests   : 300
Timeout        : 60 seconds
ACCESS CONTROL
Allowed IPs    : any
Authentication : digest, Realm: DHSM_Authorization
Allowed user   : nobody
SSL CONFIGURATION
Mode           : OFF
Persona        : default
Protocol       : default
Cipher         : default
EXAMPLE #5
----------
To allow specific users to manage the HTTP protocol connection for FileMover,
type:
$ server_http server_2 -modify dhsm -users valid -hosts 10.240.12.146
server_2 : done
EXAMPLE #6
----------
To add specific users who can manage the existing HTTP protocol connection
for FileMover, type:
$ server_http server_2 -append dhsm -users user1,user2,user3
server_2 : done
EXAMPLE #7
----------
To add a specific user who can manage the existing HTTP protocol connection
for FileMover, type:
$ server_http server_2 -append dhsm -users user4 -hosts
172.24.102.20,172.24.102.21
server_2 : done
EXAMPLE #8
----------
To remove the specified users and hosts so they can no longer manage the
HTTP connection for FileMover, type:
$ server_http server_2 -remove dhsm -users user1,user2 -hosts 10.240.12.146
server_2 : done
----------------------------------------------------------------
Last Modified: April 12, 2011 12:45 pm
server_ifconfig
Manages the network interface configuration for the specified Data
Movers.
SYNOPSIS
--------
server_ifconfig { <movername> | ALL }
-all [ -ip4 | -ip6 ]
| -delete <if_name>
| -create -Device <device_name> -name <if_name>
-protocol { IP <ipv4_addr> <ipmask> <ipbroadcast>
| IP6 <ipv6_addr>[/PrefixLength] }
[ mtu=<MTUbytes>] [ vlan=<vlanID>]
[ down ]
| <if_name> [ up | down
| [ mtu=<MTUbytes>] [vlan=<vlanID>] ]
| <if_name> [ sync=<ID> lrdfd=<device,local_ctl> rrdfd=<device,remote_ctl> ]
DESCRIPTION
-----------
server_ifconfig creates a network interface, assigns an IP address
to a network interface, enables and disables an interface, sets the MTU
size and the VLAN ID, and displays network interface parameters for
the specified Data Mover.
server_ifconfig is used to define the network address of each
interface existing on a machine, to delete and recreate an interface’s
address and operating parameters.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-all [ -ip4 | -ip6 ]
Displays parameters for all configured interfaces. The -ip4 option
displays only IPv4 interfaces and the -ip6 option displays only IPv6
interfaces.
-delete <if_name>
Deletes a network interface configuration. However, the autogenerated
link local interfaces cannot be deleted.
-create -Device <device_name> -name <if_name> -protocol {IP <ipv4_addr>
<ipmask> <ipbroadcast>|IP6 <ipv6_addr>[/PrefixLength]} [mtu=<MTUbytes>]
[vlan=<vlanID>] [down]
Creates a network interface configuration on the specified device
with the specified name and assigns a protocol to the interface. The
<if_name> must not contain a colon (:).
Available protocols are:
IP <ipv4_addr> <ipmask> <ipbroadcast>|IP6 <ipv6_addr>[/PrefixLength]
IPv4 assigns the IP protocol with the specified IP address, mask, and
broadcast address. The IP address is the address of a particular
interface. Multiple interfaces are allowed for each device, each
identified by a different IP address. The IP mask includes the
network part of the local address and the subnet, which is taken from
the host field of the address. For example, 255.255.255.0 would be a
mask for a Class C network. The IP broadcast is a special destination
address that specifies a broadcast message to a network. For example,
x.x.x.255 is the broadcast address for a Class C network.
IP6 assigns the IPv6 address and prefix length. When the prefix length is
not specified, the default value of 64 is used. The mtu= and vlan= options
assign the maximum transmission unit (MTU) size in bytes and the ID for
the virtual LAN (VLAN); valid VLAN IDs are 0 (the default) through 4094.
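The address arithmetic described here can be illustrated with a short sketch using Python's standard ipaddress module, with the addresses from the examples later in this man page (illustrative only; this is not part of the eNAS CLI):

```python
import ipaddress

# Class C-style IPv4 interface: 255.255.255.0 mask, x.x.x.255 broadcast.
ifc = ipaddress.ip_interface("172.24.102.238/255.255.255.0")
print(ifc.netmask)                    # 255.255.255.0
print(ifc.network.broadcast_address)  # 172.24.102.255

# IPv6: when no prefix length is given, server_ifconfig assumes /64.
v6 = ipaddress.ip_interface("3ffe:0:3c4d:15:435:200:300:ed20/64")
print(v6.network.prefixlen)           # 64
```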
When creating the first IPv6 interface with a global unicast address
on a broadcast domain, the system automatically creates an
associated IPv6 link-local interface. Similarly, when deleting the last
remaining IPv6 interface on a broadcast domain, the system
automatically deletes the associated IPv6 link-local interface.
The down option can be specified for both IPv4 and IPv6. If specified,
the network interface will be set to the down state; otherwise, the
network interface is up by default.
For CIFS users, when an interface is created, deleted, or marked up or
down, use the server_setup command to stop and then restart the
CIFS service in order to update the CIFS interface list.
<if_name> up
Allows the interface to receive and transmit data, but does not enable
the physical port. Interfaces are marked up automatically when
initially setting up the IP address.
<if_name> down
Stops data from being transmitted through that interface. If possible,
the interface is reset to disable reception as well. This does not
automatically disable routes using the interface.
<if_name> mtu=<MTUbytes>
Resets the maximum transmission unit (MTU) size in bytes for the
specified interface. By default, the MTU is automatically set
depending on the type of network interface card installed.
Regardless of whether you have Ethernet or Gigabit Ethernet, the
initial default MTU size is 1500 bytes. To take advantage of the
capacity of Gigabit Ethernet, the MTU size can be increased up to
9000 bytes if your switch supports jumbo frames. Jumbo frames
should be used only when the entire infrastructure, including client
NICs, supports them.
For UDP, it is important that both the client and server use the same
MTU size. TCP negotiates the MTU size when the connection is
initialized. The switch's MTU must be greater than or equal to the
host's MTU.
Note: The MTU size specified here is for the interface. The MTU size
specified in server_netstat applies to the device and is automatically set.
<if_name> vlan=<vlanID>
Sets the ID for the virtual LAN (VLAN). Valid inputs are 0 (default) to
4094. When a VLAN ID other than 0 is set, the interface only accepts
packets tagged with that specified ID. Outbound packets are also
tagged with the specified ID.
Note: IEEE 802.1Q VLAN tagging is supported. VLAN tagging is not
supported on ana interfaces.
<if_name> sync=<ID> lrdfd=<device,local_ctl> rrdfd=<device,remote_ctl>
Resets the VDM Sync Replication session properties for an interface.
For sync, valid inputs are 0 (indicating that the interface is no longer
DR enabled) through 65,536; any non-zero value indicates that the VDM
Sync session is using this interface.
For lrdfd, pass the local device name and one of its SCSI CTL paths.
For rrdfd, pass the remote device name and one of its SCSI CTL paths.
SEE ALSO
--------
Configuring and Managing Networking on VNX and Configuring and
Managing Network High Availability on VNX, server_netstat,
server_setup, and server_sysconfig.
FRONT-END OUTPUT
----------------
The network device name depends on the front end of the
system (for example, NS series Data Movers, 514 Data Movers, 510 Data
Movers, and so on) and the network device type. NS series and 514
Data Mover network device names display a prefix of cge, for
example, cge0. 510 or earlier Data Movers display a prefix of ana or
ace, for example, ana0, ace0. Internal network devices on a Data
Mover are displayed as el30, el31.
EXAMPLE #1
----------
To display parameters of all interfaces on a Data Mover, type:
$ server_ifconfig server_2 -all
server_2 :
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
cge0 protocol=IP device=cge0
inet=172.24.102.238 netmask=255.255.255.0 broadcast=172.24.102.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:29:87
el31 protocol=IP device=cge6
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:11:a6 netname=localhost
el30 protocol=IP device=fxp0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:7e:b8 netname=localhost
EXAMPLE #2
----------
To create an IP interface for Gigabit Ethernet, type:
$ server_ifconfig server_2 -create -Device cge1 -name cge1 -protocol IP
172.24.102.239 255.255.255.0 172.24.102.255
server_2 : done
EXAMPLE #3
----------
To create an interface for network device cge0 with an IPv6 address
with a nondefault prefix length on server_2, type:
$ server_ifconfig server_2 -create -Device cge0 -name cge0_int1 -protocol IP6
3ffe:0000:3c4d:0015:0435:0200:0300:ED20/48
server_2 : done
EXAMPLE #4
----------
To create an interface for network device cge0 with an IPv6 address
on server_2, type:
$ server_ifconfig server_2 -create -Device cge0 -name cge0_int1 -protocol IP6
3ffe:0000:3c4d:0015:0435:0200:0300:ED20
server_2 : done
EXAMPLE #5
----------
To verify that the settings for the cge0_int1 interface for server_2 are
correct, type:
$ server_ifconfig server_2 cge0_int1
server_2 :
cge0_int1 protocol=IP6 device=cge0
inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=48
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:5:5
Note: The prefix=48 in the output indicates the nondefault 48-bit prefix.
EXAMPLE #6
----------
To verify that the interface settings for server_2 are correct, type:
$ server_ifconfig server_2 -all
server_2 :
el30 protocol=IP device=mge0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost
el31 protocol=IP device=mge1
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost
loop6 protocol=IP6 device=loop
inet=::1 prefix=128
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
cge0_int1 protocol=IP6 device=cge0
inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5
cge0_0000_ll protocol=IP6 device=cge0
inet=fe80::260:16ff:fe0c:205 prefix=64
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5
Note: The prefix=64 in the output for cge0_int1 shows the default 64-bit
prefix. The cge0_0000_ll entry shows the link-local name and address that
are automatically generated when you configure a global address for cge0.
The automatically created link-local interface name is made by
concatenating the device name with the four-digit, zero-padded VLAN ID
(0 through 4094). Note that the interface you configured with the IPv6
address 3ffe:0:3c4d:15:435:200:300:ed20 and the interface with the
link-local address fe80::260:16ff:fe0c:205 share the same MAC address.
The link-local address is derived from the MAC address.
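The derivation mentioned in the note is the standard modified EUI-64 scheme (RFC 4291): flip the universal/local bit of the first MAC octet and insert ff:fe in the middle. A small illustrative Python sketch (not part of the eNAS CLI) reproduces the address shown above:

```python
def link_local_from_mac(mac: str) -> str:
    """Derive a modified EUI-64 IPv6 link-local address from a MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# The MAC address from the example output above:
print(link_local_from_mac("0:60:16:c:2:5"))  # fe80::260:16ff:fe0c:205
```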
EXAMPLE #7
----------
To verify that the interface settings for server_2 are correct, type:
$ server_ifconfig server_2 -all
server_2 :
cge0_int2 protocol=IP device=cge0
inet=172.24.108.10 netmask=255.255.255.0 broadcast=172.24.108.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5
cge0_int1 protocol=IP6 device=cge0
inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5
cge0_0000_ll protocol=IP6 device=cge0
inet=fe80::260:16ff:fe0c:205 prefix=64
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5
el30 protocol=IP device=mge0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost
el31 protocol=IP device=mge1
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost
loop6 protocol=IP6 device=loop
inet=::1 prefix=128
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
Note: The output shows both the IPv4 interface, cge0_int2, and the IPv6
interface, cge0_int1.
EXAMPLE #8
----------
To disable an interface, type:
$ server_ifconfig server_2 cge0_int2 down
server_2 : done
EXAMPLE #9
----------
To enable an interface, type:
$ server_ifconfig server_2 cge0_int2 up
server_2 : done
EXAMPLE #10
----------
To reset the MTU for Gigabit Ethernet, type:
$ server_ifconfig server_2 cge0_int2 mtu=9000
server_2 : done
EXAMPLE #11
----------
To set the ID for the Virtual LAN, type:
$ server_ifconfig server_2 cge0_int1 vlan=40
server_2 : done
EXAMPLE #12
-----------
To verify that the VLAN ID in the interface settings for server_2
is correct, type:
$ server_ifconfig server_2 -all
server_2 :
cge0_int1 protocol=IP6 device=cge0
inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64
UP, Ethernet, mtu=1500, vlan=40, macaddr=0:60:16:c:2:5
cge0_0040_ll protocol=IP6 device=cge0
inet=fe80::260:16ff:fe0c:205 prefix=64
UP, Ethernet, mtu=1500, vlan=40, macaddr=0:60:16:c:2:5
cge0_int2 protocol=IP device=cge0
inet=172.24.108.10 netmask=255.255.255.0 broadcast=172.24.108.255
UP, Ethernet, mtu=1500, vlan=20, macaddr=0:60:16:c:2:5
el30 protocol=IP device=mge0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost
el31 protocol=IP device=mge1
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost
loop6 protocol=IP6 device=loop
inet=::1 prefix=128
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
Note: The vlan=40 entries in the output show the VLAN tag.
Note that the link-local interface uses the VLAN tag as part of its name.
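As noted above, the auto-generated link-local interface name concatenates the device name with the four-digit, zero-padded VLAN ID. A hypothetical helper (for illustration only; not eNAS code) captures the convention:

```python
def link_local_if_name(device: str, vlan_id: int) -> str:
    """Build the auto-generated link-local interface name:
    device name + '_' + 4-digit zero-padded VLAN ID + '_ll'."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 0 and 4094")
    return f"{device}_{vlan_id:04d}_ll"

print(link_local_if_name("cge0", 0))   # cge0_0000_ll  (untagged)
print(link_local_if_name("cge0", 40))  # cge0_0040_ll  (VLAN 40, as above)
```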
EXAMPLE #13
----------
To delete an IP interface, type:
$ server_ifconfig server_2 -delete cge1_int2
server_2 : done
Note: The autogenerated link local interfaces cannot be deleted.
----------------------------------------------------------------
Last modified: May 12, 2011 1:40 pm.
server_ip
Manages the IPv6 neighbor cache and route table for VNX.
SYNOPSIS
--------
server_ip {<movername>|ALL}
-neighbor {
-list [<v6addr> [-interface <ifname>]]
| -create <v6addr> -lladdress <macaddr> [-interface <ifname>]
| -delete {-all|<v6addr> [-interface <ifname>]}
}
| -route {
-list
| -create {
-destination <destination> -interface <ifname>
| -default -gateway <v6gw> [-interface <ifname>]
}
| -delete {
-destination <destination>
| -default -gateway <v6gw> [-interface <ifname>]
| -all
}
}
DESCRIPTION
-----------
server_ip creates, deletes, and lists the neighbor cache and route tables.
OPTIONS
-------
server_ip {<movername>|ALL}
Sends a request to the Data Mover to get IPv6 parameters related to the
IPv6 routing table and neighbor cache.
The ALL option executes the command for all of the Data Movers.
-neighbor {-list|-create|-delete}
Lists, creates, or deletes the neighbor cache entries from the neighbor
cache table.
-list
Displays the neighbor cache entries.
-create
Creates a neighbor cache table entry with the specified details.
-delete
Deletes the specified neighbor cache table entries or all entries.
-route {-list|-create|-delete}
Lists, creates, or deletes entries in the IPv6 route table.
-list
Displays the IPv6 route table.
-create
Creates a route table entry with the specified details.
-delete
Deletes the specified route table entries.
EXAMPLE #1
----------
To view a list of neighbor cache entries on the Data Mover server_2, type:
$ server_ip server_2 -neighbor -list
server_2:
Address                   Link layer address  Interface     Type    State
fe80::204:23ff:fead:4fd4  0:4:23:ad:4f:d4     cge1_0000_ll  host    STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge1_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge4_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge3_2998_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge2_2442_ll  router  STALE
3ffe::1                   0:16:9c:15:c:10     cge3_0000_ll  router  REACHABLE
Where:
Value               Definition
Address             The neighbor IPv6 address.
Link layer address  The link layer address of the neighbor.
Interface           Name of the interface connecting to the neighbor.
Type                Type of neighbor, either host or router.
State               The state of the neighbor, such as REACHABLE,
                    INCOMPLETE, STALE, DELAY, or PROBE.
EXAMPLE #2
----------
To view a list of neighbor cache entries for a specific IP address on the
Data Mover server_2, type:
$ server_ip server_2 -neighbor -list fe80::216:9cff:fe15:c00
server_2:
Address                  Link layer address  Interface     Type    State
fe80::216:9cff:fe15:c00  0:16:9c:15:c:0      cge1_0000_ll  router  STALE
fe80::216:9cff:fe15:c00  0:16:9c:15:c:0      cge4_0000_ll  router  STALE
fe80::216:9cff:fe15:c00  0:16:9c:15:c:0      cge3_2998_ll  router  STALE
fe80::216:9cff:fe15:c00  0:16:9c:15:c:0      cge2_2442_ll  router  STALE
EXAMPLE #3
----------
To view a list of neighbor cache entries for a specific IP address and
interface on the Data Mover server_2, type:
$ server_ip server_2 -neighbor -list fe80::216:9cff:fe15:c00 -interface
cge1_0000_ll
server_2:
Address                  Link layer address  Interface     Type    State
fe80::216:9cff:fe15:c00  0:16:9c:15:c:0      cge1_0000_ll  router  STALE
EXAMPLE #4
----------
To add an entry to the neighbor cache for a global unicast IPv6 address,
on the Data Mover server_2, type:
$ server_ip server_2 -neighbor -create 2002:8c8:0:2310::2 -lladdress
0:16:9c:15:c:15
OK
EXAMPLE #5
----------
To add an entry to the neighbor cache for a link local IPv6 address, on
the Data Mover server_2, type:
$ server_ip server_2 -neighbor -create fe80::2 -lladdress 0:16:9c:15:c:12
-interface cge1v6
OK
EXAMPLE #6
----------
To delete an entry from the neighbor cache for a global unicast IPv6 address,
on the Data Mover server_2, type:
$ server_ip server_2 -neighbor -delete 2002:8c8:0:2310:0:2:ac18:f401
OK
EXAMPLE #7
----------
To delete an entry from the neighbor cache for a link local IPv6 address,
on all the Data Movers, type:
$ server_ip ALL -neighbor -delete fe80::1 -interface cge1v6
OK
EXAMPLE #8
----------
To delete all entries from the neighbor cache on the Data Mover server_2, type:
$ server_ip server_2 -neighbor -delete -all
OK
EXAMPLE #9
----------
To view a list of route table entries on the Data Mover server_2, type:
$ server_ip server_2 -route -list
server_2:
Destination           Gateway                   Interface     Expires (secs)
2002:8c8:0:2310::/64                            cge1v6        0
2002:8c8:0:2311::/64                            cge1v6        0
2002:8c8:0:2312::/64                            cge1v6        0
2002:8c8:0:2313::/64                            cge1v6        0
default               fe80::260:16ff:fe05:1bdd  cge1_0000_ll  1785
default               fe80::260:16ff:fe05:1bdc  cge1_0000_ll  1785
default               2002:8c8:0:2314::1        cge4v6        0
selected default      fe80::260:16ff:fe05:1bdd  cge1_0000_ll  1785
Where:
Value        Definition
Destination  The prefix of the destination or the default route entry.
             There can be multiple default routes, but only one is
             active and shown as selected default. The default sorting
             of the destination column displays the default routes at
             the bottom of the list and the selected default at the end
             of the list.
Gateway      The default gateway for default route entries. This value
             is blank for prefix destination entries.
Interface    Name of the interface used for the route.
Expires      The time for which the route entry is valid. Zero denotes
             that the route is permanent and does not expire.
EXAMPLE #10
----------
To add a route table entry on the Data Mover server_2 to the
destination network with the specified prefix, type:
$ server_ip server_2 -route -create -destination 2002:8c8:0:2314::/64
-interface cge4v6
OK
EXAMPLE #11
----------
To add a default route table entry on the Data Mover server_2 through the
specified gateway, type:
$ server_ip server_2 -route -create -default -gateway 2002:8c8:0:2314::1
OK
EXAMPLE #12
----------
To add a default route table entry on the Data Mover server_2 through the
specified gateway using the link-local interface, type:
$ server_ip server_2 -route -create -default -gateway fe80::1 -interface cge1v6
OK
EXAMPLE #13
----------
To delete an entry from the route table with an IPv6 prefix route destination
for all the Data Movers, type:
$ server_ip ALL -route -delete -destination 2002:8c8:0:2314::/64
OK
EXAMPLE #14
----------
To delete an entry from the route table for a global unicast IPv6 address,
on the Data Mover server_2, type:
$ server_ip server_2 -route -delete -default -gateway 2002:8c8:0:2314::1
OK
EXAMPLE #15
----------
To delete an entry from the route table for a link local IPv6 address,
on the Data Mover server_2, type:
$ server_ip server_2 -route -delete -default -gateway fe80::1 -interface cge1v6
OK
EXAMPLE #16
----------
To delete all entries from the IPv6 route table on the Data Mover server_2,
type:
$ server_ip server_2 -route -delete -all
OK
----------------------------------------------------------------------------
Last modified: April 12, 2011 1:30 pm
server_kerberos
Manages the Kerberos configuration within the specified Data Movers.
SYNOPSIS
--------
server_kerberos {<movername>|ALL}
-add realm=<realm_name>,kdc=<fqdn_kdc_name>[:<port>]
[,kdc=<fqdn_kdc_name>[:<port>]...]
[,kpasswd=<fqdn_kpasswd_server_name>]
[,kadmin=<kadmin_server>]
[,domain=<domain_name>][,defaultrealm]
| -delete realm=<realm_name>
| -keytab
| -ccache [-flush]
| -list
| -kadmin [<kadmin_options>]
DESCRIPTION
-----------
server_kerberos adds, deletes, and lists the realms within the Kerberos
configuration of a Data Mover, and manages the Data Mover's service
principals and keys.
server_kerberos displays the key table content, and specifies a
kadmin server.
OPTIONS
-------
-add realm=<realm_name>,kdc=<fqdn_kdc_name>
Adds the specified realm to the Kerberos configuration on the
specified Data Mover. The <realm_name> is the fully qualified
domain name of the Kerberos realm to be added to the key
distribution center (KDC) configuration. The <fqdn_kdc_name> is
the fully qualified domain name of the KDC for the specified realm.
Note: The -add option is only relevant if you are using a UNIX/Linux
Kerberos KDC.
[:<port>]
Specifies a port that the KDC listens on.
[,kdc=<fqdn_kdc_name>[:<port>]...]
Specifies additional KDCs with ports that KDCs listen on.
[,kpasswd=<fqdn_kpasswd_server_name>]
Specifies a password server for the KDC. The
<fqdn_kpasswd_server_name> must be a fully qualified domain
name for the server.
[,kadmin=<kadmin_server>]
Specifies the kadmin server.
[,domain=<domain_name>]
The <domain_name> is the full name of the DNS domain for the
realm.
[,defaultrealm]
Indicates that the default realm is to be used.
-delete realm=<realm_name>
Deletes the specified realm from the Kerberos configuration for the
specified Data Mover.
Note: The -delete option is only relevant if you are using a UNIX/Linux
Kerberos KDC.
-keytab
Displays the principal names for the keys stored in the keytab file.
-ccache
Displays the entries in the Data Mover's Kerberos credential cache.
Note: The -ccache option can also be used to provide EMC Customer Support
with information for troubleshooting user access problems.
[-flush]
Flushes the Kerberos credential cache, removing all entries.
Credential cache entries are automatically flushed when they
expire or during a Data Mover reboot.
Once the cache is flushed, Kerberos obtains new credentials when
needed. The repopulation of credentials may take place
immediately, over several hours, or be put off indefinitely if no
Kerberos activity occurs.
-list
Displays a listing of all configured realms on a specified Data Mover
or on all Data Movers.
-kadmin [<kadmin_options>]
Invokes the kadmin tool with the following specified options:
[-r <realm>]
Specifies a realm as the default database realm.
[-p <principal>]
Specifies the principal for authentication. Otherwise, kadmin will
append "/admin" to the primary principal name of the default
cache, the value of the USER environment variable, or the
username as obtained with getpwuid, in order of preference.
[-q <query>]
Runs kadmin in non-interactive mode. This passes the query
directly to kadmin, which performs the query, then exits.
[-w <password>]
Uses a specified password instead of prompting for a password.
[-s <admin_server> [:<port>]]
Specifies the kadmin server with its associated port.
Note: The kadmin tool is only relevant if you are using a UNIX/Linux
Kerberos KDC. You must be root to execute the -kadmin option.
SEE ALSO
--------
Configuring NFS on VNX, server_checkup, and server_nfs.
OUTPUT
------
Dates appearing in output are in UTC format.
EXAMPLE #1
----------
To add a realm to the Kerberos configuration of a Data Mover, type:
$ server_kerberos server_2 -add
realm=nasdocs.emc.com,kdc=winserver1.nasdocs.emc.com,domain=nasdocs.emc.com
server_2 : done
EXAMPLE #2
----------
To list the keytabs, type:
$ server_kerberos server_2 -keytab
server_2 :
Dumping keytab file
keytab file major version = 0, minor version 0
-- Entry number 1 --
principal: DM102-CGE0$@NASDOCS.EMC.COM
realm: NASDOCS.EMC.COM
encryption type: rc4-hmac-md5
principal type 1, key version: 332
key length: 16, key: b1c199a6ac11cd529df172e270326d5e
key flags:(0x0), Dynamic Key, Not Cached
key cache hits: 0
-- Entry number 2 --
principal: DM102-CGE0$@NASDOCS.EMC.COM
realm: NASDOCS.EMC.COM
encryption type: des-cbc-md5
principal type 1, key version: 332
key length: 8, key: ced9a23183619267
key flags:(0x0), Dynamic Key, Not Cached
key cache hits: 0
-- Entry number 3 --
principal: DM102-CGE0$@NASDOCS.EMC.COM
realm: NASDOCS.EMC.COM
encryption type: des-cbc-crc
principal type 1, key version: 332
key length: 8, key: ced9a23183619267
key flags:(0x0), Dynamic Key, Not Cached
key cache hits: 0
-- Entry number 4 --
principal: host/dm102-cge0@NASDOCS.EMC.COM
realm: NASDOCS.EMC.COM
encryption type: rc4-hmac-md5
principal type 1, key version: 332
key length: 16, key: b1c199a6ac11cd529df172e270326d5e
key flags:(0x0), Dynamic Key, Not Cached
key cache hits: 0
<... removed ...>
-- Entry number 30 --
principal: cifs/dm102-cge0.nasdocs.emc.com@NASDOCS.EMC.COM
realm: NASDOCS.EMC.COM
encryption type: des-cbc-crc
principal type 1, key version: 333
key length: 8, key: d95e1940b910ec61
key flags:(0x0), Dynamic Key, Not Cached
key cache hits: 0
End of keytab entries. 30 entries found.
This is a partial listing due to the length of the output.
Where:
Value           Definition
principal type  Type of the principal as defined in the GSS-API. Refer
                to RFC 2743.
key version     Every time a key is regenerated, its version changes.
EXAMPLE #3
----------
To list all of the realms on a Data Mover, type:
$ server_kerberos server_2 -list
server_2 :
Kerberos common attributes section:
Supported TGS encryption types: rc4-hmac-md5 des-cbc-md5 des-cbc-crc
Supported TKT encryption types: rc4-hmac-md5 des-cbc-md5 des-cbc-crc
Use DNS locator: yes
End of Kerberos common attributes.
Kerberos realm configuration:
realm name: NASDOCS.EMC.COM
kdc: winserver1.nasdocs.emc.com
admin server: winserver1.nasdocs.emc.com
kpasswd server: winserver1.nasdocs.emc.com
default domain: nasdocs.emc.com
End of Kerberos realm configuration.
Kerberos domain_realm section:
DNS domain = Kerberos realm
.nasdocs.emc.com = NASDOCS.EMC.COM
End of Krb5.conf domain_realm section.
EXAMPLE #4
----------
To specify a kadmin server, type:
# server_kerberos server_2 -add
realm=eng.nasdocs.emc.com,kdc=winserver1.nasdocs.emc.com,kadmin=172.24.102.67
server_2 : done
Note: You must be root to execute the -kadmin option. The # prompt
(instead of $) indicates that a root login is required.
EXAMPLE #5
----------
To delete a realm on a Data Mover, type:
$ server_kerberos server_2 -delete realm=eng.nasdocs.emc.com
server_2 : done
EXAMPLE #6
----------
To display the credential cache on a Data Mover, type:
$ server_kerberos server_2 -ccache
server_2 :
Dumping credential cache
Names:
Client: DM102-CGE0$@NASDOCS.EMC.COM
Service: WINSERVER1.NASDOCS.EMC.COM
Target: HOST/WINSERVER1.NASDOCS.EMC.COM@NASDOCS.EMC.COM
Times:
Auth: 09/12/2005 07:15:04 GMT
Start: 09/12/2005 07:15:04 GMT
End: 09/12/2005 17:15:04 GMT
Flags: PRE_AUTH,OK_AS_DELEGATE
Encryption Types:
Key: rc4-hmac-md5
Ticket: rc4-hmac-md5
Names:
Client: DM102-CGE0$@NASDOCS.EMC.COM
Service: winserver1.nasdocs.emc.com
Target: ldap/winserver1.nasdocs.emc.com@NASDOCS.EMC.COM
Times:
Auth: 09/12/2005 07:15:04 GMT
Start: 09/12/2005 07:15:04 GMT
End: 09/12/2005 17:15:04 GMT
Flags: PRE_AUTH,OK_AS_DELEGATE
Encryption Types:
Key: rc4-hmac-md5
Ticket: rc4-hmac-md5
Names:
Client: DM102-CGE0$@NASDOCS.EMC.COM
Service: NASDOCS.EMC.COM
Target: krbtgt/NASDOCS.EMC.COM@NASDOCS.EMC.COM
Times:
Auth: 09/12/2005 07:15:04 GMT
Start: 09/12/2005 07:15:04 GMT
End: 09/12/2005 17:15:04 GMT
Flags: INITIAL,PRE_AUTH
Encryption Types:
Key: rc4-hmac-md5
Ticket: rc4-hmac-md5
End of credential cache entries.
Where:
Value    Definition
client   Client name and its realm.
service  Domain controller and its realm.
target   Target name and its realm.
auth     Time of the initial authentication for the named principal.
start    Time after which the ticket is valid.
end      Time after which the ticket will not be honored (its
         expiration time).
flags    Options used or requested when the ticket was issued.
key      Key encryption type.
ticket   Ticket encryption type.
EXAMPLE #7
----------
To flush the credential cache on a Data Mover, type:
$ server_kerberos server_2 -ccache -flush
server_2 :
Purging credential cache.
Credential cache flushed.
--------------------------------------
Last Modified: April 13, 2011 11:35 am
server_ldap
Manages the LDAP-based directory client configuration and LDAP over SSL
for the specified Data Movers.
SYNOPSIS
--------
server_ldap {<movername>|ALL}
{-set|-add} [-p] {-domain <FQDN>|-basedn
<attribute_name>=<attribute_value>[,...]}
[-servers {<IPv4_addr>[:<port>]|<IPv6_addr>|<\[IPv6_addr\]:port>}[,...]]
[-profile <profile_name>]|{-file <file_name>}
[-nisdomain <NIS_domain>]
[-binddn <bind_DN>|{-kerberos -kaccount <account_name> [-realm
<realm_name>]}]
[-sslenabled {y|n}]
[-sslpersona {none|<persona_name>}]
[-sslcipher {default|<cipher_list>}]
| -clear [-all|-domain <FQDN>|-basedn
<attribute_name>=<attribute_value>[,...]]
| -info [-all | -domain <FQDN> | -basedn
<attribute_name>=<attribute_value>[,...]][-verbose]
| -service {-start|-stop|-status}
| -lookup [-domain <FQDN> | -basedn
<attribute_name>=<attribute_value>[,...]]{-user <username>
| -group <groupname>
| -uid <uid>
| -gid <gid>
| -hostbyname <hostname>
| -netgroup <groupname>}
DESCRIPTION
-----------
server_ldap configures, starts, stops, deletes, and displays the status
of the LDAP-based directory client configuration, and queries the
LDAP-based directory server.
OPTIONS
-------
{-set|-add} [-p] {-domain <FQDN>|-basedn <attribute_name>=<attribute_value>[,...]}
Specifies the LDAP-based directory client domain for the specified
Data Mover and starts the service. The -add and -set options can be used to
configure one initial LDAP-based directory client domain for the specified
Data Mover and start the service. The -add option supersedes the -set option
as the preferred method to configure one initial LDAP-based directory client
domain for the specified Data Mover. The -add option must be used to add
domains and extend the configuration if multiple domains are required.
Domains must be configured or added one at a time. The -p option requests
a prompt for the password. A password is required in conjunction with a
bind distinguished name in order to specify the use of simple authentication.
The -basedn option specifies the Distinguished Name (DN) of the directory base,
an X.509-formatted name that uniquely identifies the directory base.
For example: ou=abc,o=def,c=ghi. If a base distinguished name contains
space characters, enclose the entire string within double quotation marks
and escape the inner quotation marks with backslashes. For example,
"\"cn=abc,cn=def ghi,dc=com\"".
It is recommended to configure an LDAP-based directory client by
using the -basedn option instead of the -domain option. The DN
provides the root position for:
* Searching for iPlanet profiles
* Defining default search containers for users, groups, hosts, and
netgroups according to RFC 2307. An iPlanet profile and an
OpenLDAP or Active Directory with SFU or IdMU ldap.conf file
are only required for customized setups.
Note: If the DN of the directory base contains dots and the
client is configured using the domain name, the default containers may not
be set up correctly. For example, if the name is dc=my.company,dc=com and
it is specified as domain name my.company.com, VNX incorrectly defines the
default containers as dc=my,dc=company,dc=com.
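The failure mode in the note comes from mapping each dot-separated DNS label to a dc= component. A naive sketch of that mapping (illustrative only; not the actual VNX code) shows why a dc value containing a dot is split incorrectly:

```python
def domain_to_basedn(domain: str) -> str:
    """Naive mapping: each dot-separated DNS label becomes a dc= component."""
    return ",".join(f"dc={label}" for label in domain.split("."))

# Works when every label really is a separate dc component:
print(domain_to_basedn("nasdocs.emc.com"))  # dc=nasdocs,dc=emc,dc=com

# But if the real base is dc=my.company,dc=com, specifying the domain
# name my.company.com yields the wrong containers, as the note describes:
print(domain_to_basedn("my.company.com"))   # dc=my,dc=company,dc=com
```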
[-servers {<IPv4_addr>[:<port>]|<IPv6_addr>| <\[IPv6_addr\]:port>}[,...]]
Specifies the IP addresses of the LDAP-based directory servers.
<IPv4_addr> or <IPv6_addr> indicates the IP address of
the LDAP-based directory servers. IPv6 addresses need to be
enclosed in square brackets if a port is specified; the brackets do
not signify optional content. The <port> option specifies the
LDAP-based directory server TCP port number. If the port is not
specified, the default port is 389 for LDAP and 636 for SSL-based
LDAP. It is recommended that at least two LDAP servers are
defined, so that DART can switch to the second server in case the
first cannot be reached.
Note: IP addresses of the LDAP-based directory servers do not have to
be included every time with the server_ldap command once you have
indicated the configuration server, and if configuring the same
LDAP-based directory service.
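The -servers syntax rules above (brackets around an IPv6 address only when a port is given; default ports 389 and 636) can be sketched with a hypothetical parser, for illustration only; this is not eNAS code:

```python
def parse_ldap_server(spec: str, ssl: bool = False) -> tuple:
    """Parse one -servers entry: IPv4[:port], bare IPv6, or [IPv6]:port.
    Falls back to port 389 (LDAP) or 636 (LDAP over SSL)."""
    default_port = 636 if ssl else 389
    if spec.startswith("["):            # [IPv6]:port form
        addr, _, port = spec[1:].partition("]:")
        return addr, int(port)
    if spec.count(":") == 1:            # IPv4:port form
        addr, _, port = spec.partition(":")
        return addr, int(port)
    return spec, default_port           # bare IPv4 or bare IPv6

print(parse_ldap_server("172.24.102.67"))      # ('172.24.102.67', 389)
print(parse_ldap_server("[3ffe::1]:3268"))     # ('3ffe::1', 3268)
print(parse_ldap_server("3ffe::1", ssl=True))  # ('3ffe::1', 636)
```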
[-profile <profile_name>]
Specifies the profile name or the profile distinguished name,
which provides the iPlanet client with configuration information
about the directory service. For example, both of the following
values are allowed: -profile vnx_profile and -profile
cn=vnx_profile,ou=admin,dc=mycompany,dc=com.
Note: It is recommended that unique profile names be used in the
Directory Information Tree (DIT). The specified profile is searched for by
scanning the entire tree and if it is present in multiple locations, the first
available profile is used unless the profile distinguished name is
specified.
{-file <file_name>}
Specifies an LDAP configuration file per domain:
* The various LDAP domains may have different schemas
(OpenLDAP, IdMU, and so on) or different customizations
(non-standard containers).
* All LDAP domains can share the same /.etc/ldap.conf setup
file or even no file if all the domains comply with the RFC2307.
* The configuration files must be put in /.etc using server_file.
  To prevent collisions with other system files, the LDAP
  configuration file name must be prefixed with "ldap" and
  suffixed with ".conf", that is, "ldap<anything>.conf".
* The default value of the -file option is "ldap.conf".
* server_ldap -service -status lists all the configured domains,
and their configuration source (default, file or profile). Several
LDAP domains can be configured using the same LDAP
configuration file.
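The "ldap<anything>.conf" naming rule can be expressed with ordinary shell pattern matching; this snippet is only an illustration of that requirement (the file names are examples, not eNAS commands):

```shell
# Illustration of the required file-name pattern for LDAP
# configuration files placed in /.etc: the name must match
# ldap<anything>.conf (the <anything> part may be empty).
is_ldap_conf() {
    case $1 in
        ldap*.conf) return 0 ;;
        *)          return 1 ;;
    esac
}

is_ldap_conf ldap.conf      && echo "ldap.conf: ok"
is_ldap_conf ldapeng.conf   && echo "ldapeng.conf: ok"
is_ldap_conf resolv.conf    || echo "resolv.conf: rejected"
```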
[-nisdomain <NIS_domain>]
Specifies the NIS domain of which the Data Mover is a member,
since an LDAP-based directory domain can host more than one
NIS domain.
[-binddn <bind_DN>|{-kerberos -kaccount <account_name>
[-realm <realm_name>]}]
Specifies the distinguished name (DN) or Kerberos account of the
identity used to bind to the service. Active Directory with SFU or
IdMU requires an authentication method that uses simple
authentication, SSL, or Kerberos.
Simple authentication requires that a DN be specified along with
a password. For SSL-based client authentication to succeed, the
Data Mover certificate Subject must match the distinguished
name for an existing user (account) at the directory server.
Note: To configure an LDAP-based directory service for authentication,
-binddn is not required if the -sslpersona option is specified. In this
case, SSL-based client authentication will be used.
The Kerberos account name must be the CIFS server computer
name known by the KDC. The account name must terminate with
a $ symbol.
By default, the Data Mover assumes that the realm is the same as
the LDAP domain provided in the -domain or -basedn options.
But a different realm name can be specified, if necessary.
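A quick sanity check for the trailing $ rule on -kaccount can be sketched as follows (the helper and account name are illustrative, not part of the eNAS CLI):

```shell
# Illustration: the Kerberos account name passed to -kaccount must be
# the CIFS server computer name known by the KDC, terminated with "$".
kaccount_ok() {
    case $1 in
        *'$') return 0 ;;   # literal dollar sign at the end
        *)    return 1 ;;
    esac
}

kaccount_ok 'cifs_compname$' && echo "valid -kaccount"
kaccount_ok 'cifs_compname'  || echo "missing trailing \$"
```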
[-sslenabled {y|n}]
Enables (y) or disables (n) SSL. SSL is disabled by default.
[-sslpersona {none|<persona_name>}]
Specifies the key and certificate of the directory server. If a persona
has been previously configured, none disables the use of a client key
and certificate. The -sslpersona option without the -binddn option
indicates that the user wants to authenticate using the client (persona)
certificate. To authenticate using the client certificate, the LDAP server
must be configured to always request (or require) the persona certificate
during the SSL transaction, or the authentication will fail. If
authentication using the client certificate is not desired, then the
-binddn option must be used. The configuration rules are explained in
the table below.
Note: The -sslpersona option does not automatically enable SSL, but
configures the specified value. The value remains persistent and is used
whenever SSL is enabled.
Configuration rules
-----------------------------------------------------------------------
Description                               Data Mover Configuration
-----------------------------------------------------------------------
SSL enabled on Data Mover; LDAP server    server_ldap -sslenabled y
accepts SSL; anonymous authentication
is used.

SSL enabled; password-based               server_ldap -p -binddn cn=foo
authentication is used.                   -sslenabled y

SSL enabled; SSL certificate              server_ldap -sslenabled y
authentication is used; the LDAP server   -sslpersona default (use
should be configured to request the       server_certificate to verify
client certificate.                       that the certificate for the
                                          Data Mover’s default persona
                                          exists)
-----------------------------------------------------------------------
Note: The user should refer to the LDAP server documentation for
information about configuring the server to request the client
certificate.
[-sslcipher {default|<cipher_list>}]
Specifies either the default cipher list or a custom cipher list.
Note: The -sslcipher option does not automatically enable SSL, but
configures the specified value. The value remains persistent and is used
whenever SSL is enabled.
-clear
Deletes the LDAP-based directory client configuration for the specified Data
Mover and stops the service.
-info
Displays the service status as well as the static and dynamic configuration.
[-verbose]
Adds troubleshooting information to the output.
-service {-start|-stop|-status}
The -start option enables the LDAP-based directory client service. The
LDAP-based directory client service is also restarted when the
VNX is rebooted. The -stop option disables the LDAP-based
directory client service, and the -status option displays the status of
the LDAP-based directory service.
-lookup {user=<username>|group=<groupname>|uid=<uid>|gid=<gid>|
hostbyname=<hostname>|netgroup=<groupname>}
Provides lookup information about the specified resource for troubleshooting
purposes.
Note: The server_ldap command requires the user to specify the domain
name for the -clear, -info, and -lookup options when more than one
domain is configured. Other options are unchanged and apply to each
domain.
SEE ALSO
--------
Configuring VNX Naming Services.
EXAMPLE #1
----------
To configure the use of an LDAP-based directory by a Data Mover, type:
$ server_ldap server_4 -set -domain nasdocs.emc.com -servers 172.24.102.62
server_4 : done
EXAMPLE #2
----------
To configure the use of an LDAP-based directory by a Data Mover using the
Distinguished Name of the server at IPv4 address 172.24.102.62 with the
default port, type:
$ server_ldap server_2 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
172.24.102.62
server_2 : done
EXAMPLE #3
----------
To configure the use of an LDAP-based directory by a Data Mover using the
Distinguished Name of the server at IPv6 address 2002:c8c::24:172:63 with
the default port, type:
$ server_ldap server_2 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
2002:c8c::24:172:63
server_2 : done
EXAMPLE #4
----------
To configure the use of an LDAP-based directory by a Data Mover and specify
the use of the client profile, type:
$ server_ldap server_4 -set -domain nasdocs.emc.com -servers 172.24.102.62
-profile celerra_profile -nisdomain nasdocs -sslenabled y
server_4 : done
EXAMPLE #5
----------
To configure the use of an LDAP-based directory by a Data Mover and specify
the use of the client profile using its distinguished name, type:
$ server_ldap server_4 -set -domain nasdocs.emc.com -servers 172.24.102.62
-profile cn=celerra_profile,dc=nasdocs,dc=emc,dc=com -nisdomain nasdocs
-sslenabled y
server_4 : done
EXAMPLE #6
----------
To specify the NIS domain of which the Data Mover is a member, type:
$ server_ldap server_2 -set -domain nasdocs.emc.com -servers 172.24.102.62
-nisdomain nasdocs
server_2 : done
EXAMPLE #7
----------
To configure the use of simple authentication by specifying a bind
Distinguished Name (DN) and password, type:
$ server_ldap server_2 -set -p -domain nasdocs.emc.com -servers 172.24.102.10
-binddn "cn=admin,cn=users,dc=nasdocs,dc=emc"
server_2 : Enter Password:********
done
EXAMPLE #8
----------
To configure the use of an LDAP-based directory by a Data Mover using SSL,
type:
$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
172.24.102.62 -sslenabled y
server_4 : done
EXAMPLE #9
----------
To configure the use of an LDAP-based directory by a Data Mover using SSL and
user key and certificate, type:
$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
172.24.102.62 -sslenabled y -sslpersona default
server_4 : done
EXAMPLE #10
-----------
To configure the use of an LDAP-based directory by a Data Mover using SSL and
using specified ciphers, type:
$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
172.24.102.62 -sslenabled y -sslcipher "RC4-MD5,RC4-SHA"
server_4 : done
EXAMPLE #11
-----------
To display information about the LDAP-based directory configuration on a Data
Mover, type:
$ server_ldap server_4 -info
server_4 :
LDAP domain: nasdocs.emc.com
base DN: dc=nasdocs,dc=emc,dc=com
State: Configured - Connected
NIS domain: nasdocs.emc.com
No client profile nor config. file provided (using default setup)
Connected to LDAP server address: 172.24.102.62 - port 636
SSL enabled/disabled by Command line, cipher suites configured by Command line
EXAMPLE #12
-----------
To configure the use of Kerberos authentication by specifying a Kerberos
account, type:
$ server_ldap server_2 -set -basedn dc=nasdocs,dc=emc,dc=com -servers
172.24.102.62 -kerberos -kaccount cifs_compname$
server_2 : done
EXAMPLE #13
-----------
To display detailed information about the LDAP-based directory configuration on
a Data Mover, type:
$ server_ldap server_2 -info -verbose
server_2 :
LDAP domain: devldapdom1.lcsc
State: Configured - Connected
Schema: OpenLDAP
Base dn: dc=devldapdom1,dc=lcsc
Bind dn: <anonymous>
Configuration: RFC-2307 defaults
Global warnings & errors
{
The LDAP cache is disabled.
}
LDAP server: 192.168.67.11 - Port: 389 - Active
SSL: Not enabled
Naming ctx: (baseDn is ticked)
[x] dc=devldapdom1,dc=lcsc
Containers: (no [scope] means ignored, unless parent container with sub scope
is valid)
Passwd: Class: posixAccount - Attributes: uid, uidNumber, gidNumber,
userPassword, homeDirectory
[one] ou=People,dc=devldapdom1,dc=lcsc - prefix=uid
Group: Class: posixGroup - Attributes: gidNumber, memberUid
memberUid syntax is DN (Windows)
[one] ou=Group,dc=devldapdom1,dc=lcsc - prefix=cn
Hosts: Class: ipHost - Attributes: ipHostNumber
[one] ou=Hosts,dc=devldapdom1,dc=lcsc - prefix=cn
Netgroup: Class: nisNetgroup - Attributes: nisNetgroupTriple,
memberNisNetgroup
[one] ou=netgroup,dc=devldapdom1,dc=lcsc - prefix=cn
LDAP server: 10.64.220.148 - Port: 389 - Spare
SSL: Not enabled
EXAMPLE #14
-----------
To display lookup information about the user nasadmin, type:
$ server_ldap server_4 -lookup -user nasadmin
server_4:
user: nasadmin, uid: 1, gid: 201, gecos: nasadmin, home dir: /home/nasadmin,
shell: /bin/csh
EXAMPLE #15
-----------
To display the status of the LDAP-based directory service, type:
$ server_ldap server_2 -service -status
server_2 :
LDAP domain "devldapdom1.lcsc" is active - Configured with RFC-2307 defaults
EXAMPLE #16
-----------
To stop the LDAP-based directory service, type:
$ server_ldap server_4 -service -stop
server_4 : done
EXAMPLE #17
-----------
To delete the LDAP configuration for the specified Data Mover and stop the
service, type:
$ server_ldap server_4 -clear
server_4 : done
EXAMPLE #18
-----------
To check if any LDAP domain is configured, type:
server_ldap server_3 -service -status
server_3 :
LDAP domain is not configured yet.
EXAMPLE #19
-----------
To configure a domain for OpenLDAP with the standard schema, type:
server_ldap server_3 -set -domain devldapdom1.lcsc
-servers 192.168.67.114,192.168.67.148
server_3 : done
Note: Since this is the first domain, you can use either the -set or the
-add option.
EXAMPLE #20
-----------
To configure a domain for Fedora Directory Service (same as
OpenLDAP), type:
server_ldap server_3 -add -p -basedn dc=389-ds,dc=lcsc
-servers 192.168.67.10.64.223.182 -binddn
"\"cn=Directory Manager\""
server_3 : Enter Password:********
done
Note: Since a domain is already set up, you must use the -add option.
EXAMPLE #21
-----------
To configure a domain for iPlanet using a specific configuration profile,
type:
server_ldap server_3 -add -domain dvt.emc -servers
192.168.67.140 -profile profilecad3
server_3 : done
EXAMPLE #22
-----------
To configure a domain for IdMU using a specific configuration file,
type:
server_ldap server_3 -add -p -basedn dc=eng,dc=lcsc
-servers 192.168.67.82 -binddn
cn=administrator,cn=Users,dc=eng,dc=lcsc -file
ldap.conf
server_3 : Enter Password:******
done
EXAMPLE #23
-----------
To check if the domains are OK, type:
server_ldap server_3 -service -status
server_3 :
LDAP domain "dev.lcsc" is active - Configured with RFC-2307 defaults
LDAP domain "ds.lcsc" is inactive - Configured with RFC-2307 defaults
LDAP domain "dvt.emc" is active - Configured with profile "profilecad3"
LDAP domain "eng.lcsc" is active - Configured with file "ldap.conf"
EXAMPLE #24
-----------
To get the details about the domain ds.lcsc, type:
server_ldap server_3 -info -verbose -domain ds.lcsc
server_3 :
LDAP domain: ds.lcsc
State: Uninitialized - Disconnected
Schema: Unknown yet (must succeed to connect)
Base dn: dc=ds,dc=lcsc
Bind dn: cn=Directory Manager
Configuration: RFC-2307 defaults
Global warnings & errors
{
Only one LDAP server is configured for LDAP domain ds.lcsc.
}
LDAP server: 192.168.67.182 - Port: 389 - Spare
SSL: Not enabled
Last error: 91 / Connect error
Server warnings & errors
{
LDAP server 192.168.67.182: LDAP protocol error: LDAP is unable to connect to
the specified port.
LDAP server 192.168.67.182: LDAP protocol error: Connect error.
}
EXAMPLE #25
-----------
To delete the domain ds.lcsc, type:
server_ldap server_3 -clear -domain ds.lcsc
server_3 : done
server_ldap server_3 -service -status
server_3 :
LDAP domain "dev.lcsc" is active - Configured with RFC-2307 defaults
LDAP domain "dvt.emc" is active - Configured with profile "profilecad3"
LDAP domain "eng.lcsc" is active - Configured with file "ldap.conf"
EXAMPLE #26
-----------
To look up a user in a given domain, type:
server_ldap server_3 -lookup -user cad -domain eng.lcsc
server_3 :
user: cad, uid: 33021, gid: 32769, homeDir: /emc/cad
EXAMPLE #27
-----------
To get information on all domains, type:
server_ldap server_3 -info -all
server_3 :
LDAP domain: dev.lcsc
State: Configured - Connected
Schema: OpenLDAP
Base dn: dc=devldapdom1,dc=lcsc
Bind dn: <anonymous>
Configuration: RFC-2307 defaults
LDAP server: 192.168.67.114 - Port: 389 - Active
SSL: Not enabled
LDAP server: 192.168.67.148 - Port: 389 - Spare
SSL: Not enabled
LDAP domain: dvt.emc
State: Configured - Connected
Schema: Sun Directory Server (iPlanet) (Sun-ONE-Directory/5.2)
Base dn: dc=dvt,dc=emc
Bind dn: <anonymous>
Configuration: Profile name: profilecad3 - TTL: 11 s
LDAP conf server: 192.168.67.140 - Port: 389
SSL: Not enabled
LDAP default servers:
LDAP server: 192.168.67.140 - Port: 389 - Active
SSL: Not enabled
LDAP domain: eng.lcsc
State: Configured - Connected
Schema: Active Directory
Base dn: dc=eng,dc=lcsc
Bind dn: cn=administrator,cn=Users,dc=eng,dc=lcsc
Configuration: File: ldap.conf - TTL: 1200 s
LDAP server: 192.168.67.82 - Port: 389 - Active
SSL: Not enabled
EXAMPLE #28
-----------
To clear all the domains, type:
server_ldap server_3 -clear -all
server_3 : done
server_ldap server_3 -service -status
server_3 :
LDAP domain is not configured yet.
--------------------------------------------------
Last Modified: October 20, 2011 12:30 pm.
server_log
Displays the log generated by the specified Data Mover.
SYNOPSIS
--------
server_log <movername>
[-a][-f][-n][-s][-v|-t]
DESCRIPTION
-----------
server_log reads and displays the log generated by the Data Mover.
Information in the log file is read from oldest to newest. To view the
most recent log activity, append |tail to the end of your command line.
OPTIONS
-------
No arguments
Displays the contents of the log added since the last reboot.
-a
Displays the complete log.
-f
Displays the contents of the log added since the last reboot. Additionally,
monitors the growth of the log by entering an endless loop, pausing and
reading the log as it is generated. The output is updated every second.
To exit, press Ctrl-C.
-n
Displays the log without the time stamp.
-s
Displays the time in yyyy-mm-dd format when each command in the log was
executed.
-v|-t
Displays the log files in verbose form or terse form.
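Because server_log output is plain text, a captured listing can be post-processed with standard tools. The sketch below uses hypothetical sample lines modeled on the log format shown in the examples ("<time>: <facility>: <severity>: <message>"); on a real system you would first capture output, for example with server_log server_2 > file:

```shell
# Illustration only: filter a saved server_log listing by facility.
# The sample lines below are hypothetical, modeled on the examples.
cat > /tmp/nas_sample.log <<'EOF'
1200229390: VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24
1200229390: SVFS: 6: Merge Start FsVol:118 event:0x0
1200229390: CFS: 6: Resuming fs 24
1200229390: SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0
EOF

# Keep only SVFS facility messages
grep ': SVFS: ' /tmp/nas_sample.log
```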
EXAMPLE #1
----------
To monitor the growth of the current log, type:
$ server_log server_2 -f
--------------------
NAS LOG for slot 2:
--------------------
0 keys=0 h=0 nc=0
1200229390: VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24
1200229390: SVFS: 6: Merge Start FsVol:118 event:0x0
1200229390: SVFS: 6: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118_503
1200229390: CFS: 6: Resuming fs 24
1200229390: SVFS: 6: 118:D113118_736:Merge hdr=82944 prev=99328 id=113 chunk=0
s
tableEntry=7
1200229390: UFS: 6: Volume name:Sh122113
1200229390: UFS: 6: starting gid map file processing.
1200229390: UFS: 6: gid map file processing is completed.
1200229390: DPSVC: 6: DpRequest::done() BEGIN
reqType:DpRequest_VersionInt_SchSr
cRefresh reqCaller:DpRequest_Caller_Scheduler status:0
1200229390: DPSVC: 6:
SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_00
00, curState=active, input=refreshDone
1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume enter
1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume found newV 118.ckpt003,
bl
ocks 17534
1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for vnumber
1038
totalB 0
1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004
1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume exit
1200229390: DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes
1200229390: DPSVC: 6:
SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_00
00, newState=active
1200229390: SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0
1200229390: SVFS: 6: D113118_736: prev !full release ch:82944 newPrev:99328
1200229390: SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944
befor
e changePrevChunk
1200229390: SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after
changePrev
1200229510: DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520, kbytes=0,
setu
p=0, rate=1000
1200229510: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00
00, curState=active, input=refresh
1200229510: DPSVC: 6: DpRequest::execute() BEGIN
reqType:DpRequest_VersionInt_Sc
hSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0
1200229510: DPSVC: 6: DpRequest::execute() END
reqType:DpRequest_VersionInt_SchS
rcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 reqMode:0
1200229510: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00
00, newState=active
--More--
Note: This is a partial listing due to the length of the output.
EXAMPLE #2
----------
To display the current log, type:
$ server_log server_2
NAS LOG for slot 2:
--------------------
0 keys=0 h=0 nc=0
0 keys=0 h=0 nc=0
2008-01-13 08:03:10: VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24
2008-01-13 08:03:10: SVFS: 6: Merge Start FsVol:118 event:0x0
2008-01-13 08:03:10: SVFS: 6: D113118_736: hdr:82944 currInd:6,
Destpmdv:D114118
_503
2008-01-13 08:03:10: CFS: 6: Resuming fs 24
2008-01-13 08:03:10: SVFS: 6: 118:D113118_736:Merge hdr=82944 prev=99328
id=113
chunk=0 stableEntry=7
2008-01-13 08:03:10: UFS: 6: Volume name:Sh122113
2008-01-13 08:03:10: UFS: 6: starting gid map file processing.
2008-01-13 08:03:10: UFS: 6: gid map file processing is completed.
2008-01-13 08:03:10: DPSVC: 6: DpRequest::done() BEGIN
reqType:DpRequest_Version
Int_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler status:0
2008-01-13 08:03:10: DPSVC: 6:
SchedulerSrc=118_APM00062400708_0000_253_APM00062
400708_0000, curState=active, input=refreshDone
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume enter
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume found newV
118.ck
pt003, blocks 17534
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for
vnum
ber 1038 totalB 0
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV
118.ck
pt004
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume exit
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes
2008-01-13 08:03:10: DPSVC: 6:
SchedulerSrc=118_APM00062400708_0000_253_APM00062
400708_0000, newState=active
2008-01-13 08:03:10: SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0
2008-01-13 08:03:10: SVFS: 6: D113118_736: prev !full release ch:82944
newPrev:9
9328
2008-01-13 08:03:10: SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==>
prevChunk:82
944 before changePrevChunk
2008-01-13 08:03:10: SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328
after c hangePrev
2008-01-13 08:05:10: DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520,
kbyte
s=0, setup=0, rate=1000
2008-01-13 08:05:10: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062
400708_0000, curState=active, input=refresh
2008-01-13 08:05:10: DPSVC: 6: DpRequest::execute() BEGIN
reqType:DpRequest_Vers
ionInt_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0
Note: This is a partial listing due to the length of the output.
EXAMPLE #3
----------
To display the log file without the time stamp, type:
$ server_log server_2 -n
NAS LOG for slot 2:
--------------------
0 keys=0 h=0 nc=0
VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24
SVFS: 6: Merge Start FsVol:118 event:0x0
SVFS: 6: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118_503
CFS: 6: Resuming fs 24
SVFS: 6: 118:D113118_736:Merge hdr=82944 prev=99328 id=113 chunk=0
stableEntry=7
UFS: 6: Volume name:Sh122113
UFS: 6: starting gid map file processing.
UFS: 6: gid map file processing is completed.
DPSVC: 6: DpRequest::done() BEGIN reqType:DpRequest_VersionInt_SchSrcRefresh
req
Caller:DpRequest_Caller_Scheduler status:0
DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_0000,
curState
=active, input=refreshDone
DPSVC: 6: DpVersion::getTotalBlocksVolume enter
DPSVC: 6: DpVersion::getTotalBlocksVolume found newV 118.ckpt003, blocks 17534
DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for vnumber 1038 totalB 0
DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004
DPSVC: 6: DpVersion::getTotalBlocksVolume exit
DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes
DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_0000,
newState
=active
SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0
SVFS: 6: D113118_736: prev !full release ch:82944 newPrev:99328
SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944 before
changePrev
Chunk
SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after changePrev
DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520, kbytes=0, setup=0,
rate=10
00
DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_0000,
curState
=active, input=refresh
DPSVC: 6: DpRequest::execute() BEGIN reqType:DpRequest_VersionInt_SchSrcRefresh
reqCaller:DpRequest_Caller_Scheduler reqMode:0
DPSVC: 6: DpRequest::execute() END reqType:DpRequest_VersionInt_SchSrcRefresh
re
qCaller:DpRequest_Caller_Scheduler status:0 reqMode:0
DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_0000,
newState
=active
VBB: 6: VBB session list empty
CFS: 6: fs 0x78 type = dhfs being unmounted. Waiting for quiesce ...
CFS: 6: fs 0x78 type = dhfs unmounted
--More--
Note: This is a partial listing due to the length of the output.
EXAMPLE #4
----------
To display all of the current logs available, type:
$ server_log server_2 -a
NAS LOG for slot 2:
--------------------
1200152690: SVFS: 6: D113118_606: prev !full release ch:82944 newPrev:99328
1200152690: SVFS: 6: D113118_607: Chunk:0 hdrAdd:50176 ==> prevChunk:82944
befor
e changePrevChunk
1200152690: SVFS: 6: D113118_607: Ch:0 hdr:50176 : prevCh:99328 after
changePrev
1200152950: DPSVC: 6: refreshSnap: cur=1200152950, dl=1200152960, kbytes=0,
setu
p=0, rate=666
1200152950: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00
00, curState=active, input=refresh
1200152950: DPSVC: 6: DpRequest::execute() BEGIN
reqType:DpRequest_VersionInt_Sc
hSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0
1200152950: DPSVC: 6: DpRequest::execute() END
reqType:DpRequest_VersionInt_SchS
rcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 reqMode:0
1200152950: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00
00, newState=active
1200152950: VBB: 6: VBB session list empty
1200152950: CFS: 6: fs 0x78 type = dhfs being unmounted. Waiting for
quiesce ...
1200152950: CFS: 6: fs 0x78 type = dhfs unmounted
1200152950: SVFS: 6: pause() requested on fsid:78
1200152950: SVFS: 6: pause done on fsid:78
1200152950: SVFS: 6: Cascaded Delete...
1200152950: SVFS: 6: D120199_1131: createBlockMap PBM root=0 keys=0 h=0 nc=0
1200152950: VRPL: 6: 217: Allocating chunk:4 Add:66560 Chunks:15
1200152950: SVFS: 6: Merge Start FsVol:199 event:0x0
1200152950: SVFS: 6: D120199_1130: hdr:99328 currInd:6, Destpmdv:D119199_1124
1200152950: CFS: 6: Resuming fs 78
1200152950: SVFS: 6: 199:D120199_1130:Merge hdr=99328 prev=82944 id=120 chunk=0
stableEntry=7
1200152950: UFS: 6: Volume name:Sh217120
1200152950: UFS: 6: starting gid map file processing.
1200152950: SVFS: 6: D120199_1130: After Merge err:4 full:0 mD:0
1200152950: SVFS: 6: D120199_1130: prev !full release ch:99328 newPrev:82944
1200152950: SVFS: 6: D120199_1131: Chunk:0 hdrAdd:66560 ==> prevChunk:99328
befo
re changePrevChunk
1200152950: SVFS: 6: D120199_1131: Ch:0 hdr:66560 : prevCh:82944 after
changePre
v
1200152950: UFS: 6: gid map file processing is completed.
1200152950: DPSVC: 6: DpRequest::done() BEGIN
reqType:DpRequest_VersionInt_SchSr
cRefresh reqCaller:DpRequest_Caller_Scheduler status:0
1200152950: DPSVC: 6:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00
00, curState=active, input=refreshDone
--More--
Note: This is a partial listing due to the length of the output.
EXAMPLE #5
----------
To display the current log in terse form, type:
$ server_log server_2 -t
NAS LOG for slot 2:
--------------------
0 keys=0 h=0 nc=0
1200229390: 26043285504: 122: Allocating chunk:3 Add:50176 Chunks:24
1200229390: 26042826752: Merge Start FsVol:118 event:0x0
1200229390: 26042826752: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118_503
1200229390: 26040008704: Resuming fs 24
1200229390: 26042826752: 118:D113118_736:Merge hdr=82944 prev=99328 id=113
chunk
=0 stableEntry=7
1200229390: 26042433536: Volume name:Sh122113
1200229390: 26042433536: starting gid map file processing.
1200229390: 26042433536: gid map file processing is completed.
1200229390: 26045513728: DpRequest::done() BEGIN
reqType:DpRequest_VersionInt_Sc
hSrcRefresh reqCaller:DpRequest_Caller_Scheduler status:0
1200229390: 26045513728:
SchedulerSrc=118_APM00062400708_0000_253_APM00062400708
_0000, curState=active, input=refreshDone
1200229390: 26045513728: DpVersion::getTotalBlocksVolume enter
1200229390: 26045513728: DpVersion::getTotalBlocksVolume found newV
118.ckpt003,
blocks 17534
1200229390: 26045513728: DpVersion::getTotalBlocksVolume 0 blocks for
vnumber 10
38 totalB 0
1200229390: 26045513728: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004
1200229390: 26045513728: DpVersion::getTotalBlocksVolume exit
1200229390: 26045513728: DpVersion::getTotalBytes 0 blocks 0 bytes
1200229390: 26045513728:
SchedulerSrc=118_APM00062400708_0000_253_APM00062400708
_0000, newState=active
1200229390: 26042826752: D113118_736: After Merge err:4 full:0 mD:0
1200229390: 26042826752: D113118_736: prev !full release ch:82944 newPrev:99328
1200229390: 26042826752: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944
b
efore changePrevChunk
1200229390: 26042826752: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after
change
Prev
1200229510: 26045513728: refreshSnap: cur=1200229510, dl=1200229520,
kbytes=0, s
etup=0, rate=1000
1200229510: 26045513728:
SchedulerSrc=199_APM00062400708_0000_258_APM00062400708
_0000, curState=active, input=refresh
1200229510: 26045513728: DpRequest::execute() BEGIN
reqType:DpRequest_VersionInt
_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0
1200229510: 26045513728: DpRequest::execute() END
reqType:DpRequest_VersionInt_S
--More--
Note: This is a partial listing due to the length of the output.
EXAMPLE #6
----------
To display the current log in verbose form, type:
$ server_log server_2 -v
DART Work Partition Layout found @ LBA 0x43000 (134MB boundary)
slot 2) About to dump log @ LBA 0xc7800
NAS LOG for slot 2:
--------------------
About to print log from LBA c8825 to c97ff
0 keys=0 h=0 nc=0
logged time        = 2008-01-13 08:03:10
id                 = 26043285504
severity           = INFO
component          = DART
facility           = VRPL
baseid             = 0
type               = STATUS
argument name      = arg0
argument value     = 122: Allocating chunk:3 Add:50176 Chunks:24
argument type      = string (8)
brief description  = 122: Allocating chunk:3 Add:50176 Chunks:24
full description   = No additional information is available.
recommended action = No recommended action is available. Use the text from the
error message’s brief description to search the Knowledgebase on
Powerlink. After logging in to Powerlink, go to Support > Knowledgebase
Search > Support Solutions Search.
logged time        = 2008-01-13 08:03:10
id                 = 26042826752
severity           = INFO
component          = DART
facility           = SVFS
baseid             = 0
type               = STATUS
argument name      = arg0
argument value     = Merge Start FsVol:118 event:0x0
argument type      = string (8)
brief description  = Merge Start FsVol:118 event:0x0
full description   = No additional information is available.
recommended action = No recommended action is available. Use the text from the
error message’s brief description to search the Knowledgebase on
Powerlink. After logging in to Powerlink, go to Support > Knowledgebase
Search > Support Solutions Search.
--More--
Note: This is a partial listing due to the length of the output.
--------------------------------------
Last Modified: June 2, 2011 2:00 pm
server_mount
------------
Mounts file systems and manages mount options for the specified Data
Movers.
SYNOPSIS
--------
server_mount {<movername>|ALL}
[-all]
|[-Force][-check][-option options] <fs_name> [mount_point]
options:
[ro|rw][primary=movername]
[nonotify][nooplock]
[notifyonaccess][notifyonwrite]
[accesspolicy={NT|UNIX|SECURE|NATIVE|MIXED|MIXED_COMPAT}]
[nolock|wlock|rwlock]
[cvfsname=<newname>]
[noscan]
[noprefetch]
[uncached]
[cifssyncwrite]
[triggerlevel=<value>]
[ntcredential]
[renamepolicy={CIFS|FULL|NO}]
[cifsnanoroundup]
[ceppcifs|ceppnfs|ceppcifs,ceppnfs|nocepp]]
[nfsv4delegation={NONE|READ|RW}]
[smbca]
DESCRIPTION
-----------
server_mount attaches a file system to the specified <mount_point> with the
specified options, and displays a listing of mounted file systems.
server_umount unmounts the file system.
The ALL option executes the command for all of the Data Movers.
Note: The primary=movername option is not used.
OPTIONS
-------
No arguments
Displays a listing of all mounted and temporarily unmounted file systems.
-all
Mounts all file systems in the mount table.
-Force -option rw <fs_name> [mount_point]
Forces a mount of a file system copy (created using fs_timefinder)
as read-write. By default, all file system copies are mounted as
read-only.
[-check]
Checks whether there is a diskmark value mismatch between the NAS database
and the Data Mover for the file system, and also checks whether the
diskmark exists on the Data Mover. This option is required for the
SRDF/Star feature.
Note: If the -check option is not used, a diskmark mismatch or a
missing diskmark could cause a Data Mover panic.
<fs_name> [mount_point]
Mounts a file system to the specified <mount_point>. When a file system
is initially mounted, the <mount_point> is required; however, remounting
a file system after a temporary unmount does not require the use of a
<mount_point>.
[-option options]
Specifies the following comma-separated options:
[ro|rw]
Specifies the mount as read-write (the default), or read-only, which is
the default for checkpoints and TimeFinder/FS.
Note: MPFS clients do not acknowledge file systems that are mounted
read-only and allow their clients to write to the file system.
[accesspolicy={NT|UNIX|SECURE|NATIVE|MIXED|MIXED_COMPAT}]
Indicates the access control policy as defined in the table below:
Note: When accessed from a Windows client, ACLs are only checked if
the CIFS user authentication method is set to the recommended default,
NT. This is set using the -add security option in the server_cifs
command.
Access Policy: NATIVE (default)
CIFS clients: ACL is checked.
NFS clients: UNIX rights are checked.

Access Policy: UNIX
CIFS clients: ACL and UNIX rights are checked.
NFS clients: UNIX rights are checked.

Access Policy: NT
CIFS clients: ACL is checked.
NFS clients: ACL and UNIX rights are checked.

Access Policy: SECURE
CIFS clients: ACL and UNIX rights are checked.
NFS clients: ACL and UNIX rights are checked.

Access Policy: MIXED
CIFS clients: ACL is checked. If there is not an ACL, one is created
based on the UNIX mode bits. Access is also determined by the ACL.
NFSv4 clients can manage the ACL. An ACL modification rebuilds the
UNIX mode bits but the UNIX rights are not checked.
NFS clients: ACL is checked. If there is not an ACL, one is created
based on the UNIX mode bits. Access is also determined by the ACL.
NFSv4 clients can manage the ACL. A modification to the UNIX mode bits
rebuilds the ACL permissions but the UNIX rights are not checked.

Access Policy: MIXED_COMPAT
CIFS clients: If the permissions of a file or directory were last set
or changed by a CIFS client, the ACL is checked and the UNIX rights
are rebuilt but are not checked. If the permissions of a file or
directory were last set or changed by an NFS client, the UNIX rights
are checked and the ACL is rebuilt but is not checked. NFSv4 clients
can manage the ACL.
NFS clients: If the permissions of a file or directory were last set
or changed by an NFS client, the UNIX rights are checked and the ACL
is rebuilt but is not checked. If the permissions of the file or
directory were last set or changed by a CIFS client, the ACL is
checked and the UNIX rights are rebuilt but are not checked. NFSv4
clients can manage the ACL.
Note: The MIXED policy translates the UNIX ownership mo
de bits into
three ACEs: Owner, Group, and Everyone, which can resul
t in different
permissions for the Group ACE and the Everyone ACE. The
MIXED_COMPAT policy does not translate a UNIX Group int
o a Group
ACE. The Everyone ACE is generated from the UNIX Group.
[cvfsname=<newname>]
Changes the default name of the checkpoint in each of the .ckpt
directories. The default name is the timestamp of when the checkpoint
was taken.
[noprefetch]
Turns prefetch processing off. When on (default), performs read ahead
processing for file systems.
Caution: Turning the prefetch option to off may affect performance.
[uncached]
Allows well-formed writes (that is, multiple of a disk block and
disk block aligned) to be sent directly to the disk without being
cached on the server.
For CIFS Clients Only
--------------------
When mounting a file system, if the default options are not manually
entered, the options are active but not displayed in the listing of
mounted file systems. Available options are:
[nonotify]
Turns notify off. When on (default), the notify option informs the
client of changes made to the directory file structure.
[nooplock]
Turns opportunistic locks (oplocks) off. When oplocks are on
(default), they reduce network traffic by enabling clients to cache
the file and make changes locally. To turn Windows oplocks off,
unmount the file system, then remount with nooplock.
[notifyonaccess]
Provides a notification when a file system is accessed. By default,
notifyonaccess is disabled.
[notifyonwrite]
Provides a notification of write access to a file system. By default,
the notifyonwrite option is disabled.
[noscan]
Disables the Virus Checker protocol for a file system. The Virus
Checker protocol is enabled using server_setup and managed by
server_viruschk.
[cifssyncwrite]
Performs an immediate synchronous write on disk, independently of
the CIFS write protocol option. This can impact write performance.
[triggerlevel=<value>]
Specifies the deepest directory level at which notification occurs.
The default is 512. The value -1 disables the notification feature.
[ntcredential]
Enables the VNX to take full account of a user’s Windows group
memberships when checking an ACL for access through NFS.
When a UNIX user initiates a request for a file system object,
the UNIX UID is mapped to the Windows SID, and the user’s UNIX
and Windows groups are merged to generate a Windows NT
credential. This applies to the NT, SECURE, MIXED,
and MIXED_COMPAT access-checking policies.
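As a sketch (reusing the server_2 Data Mover and ufs1 file system that appear in the examples later in this page; these are assumptions, not part of the option's definition), the option is supplied at mount time like any other -option value:

```shell
# Sketch only: build a Windows NT credential for UNIX users accessing
# an ACL-protected file system through NFS. Assumes server_2 and ufs1
# exist, as in the other examples in this page.
server_mount server_2 -option ntcredential ufs1 /ufs1
```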
[renamepolicy={CIFS|FULL|NO}]
Controls whether a check for open files or directories in the current
directory, or in any subdirectory, is performed before the current
directory is renamed. CIFS (default) stops the renaming of CIFS
directories when in use by CIFS clients. FULL denies permission for
the renaming of CIFS and NFS directories when in use by CIFS or NFS
clients. NO performs the directory rename without checking whether a
CIFS or NFS client has the directory open.
Note: The renamepolicy option is not supported by NFSv4.
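For illustration (a sketch, again assuming the server_2 and ufs1 names used in the examples in this page), a mount that denies renames while CIFS or NFS clients hold the directory open would look like:

```shell
# Sketch only: FULL denies the renaming of CIFS and NFS directories
# when they are in use by CIFS or NFS clients.
server_mount server_2 -option renamepolicy=FULL ufs1 /ufs1
```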
[cifsnanoroundup]
Rounds up to the next second any date set by a CIFS client.
[ceppcifs]
Enables CEPA events for CIFS on a file system. This option is
enabled by default.
[smbca]
To set the CA (Continuous Availability) bit on a share, the primary
file system must be mounted with the smbca option. When a file
system is mounted with smbca:
* The lock policy is RWLock (lock checking is mandatory).
* CIFS and NFS access to this file system is denied until the
CIFS CA service is started, up to the time in seconds defined by
the parameter cifs.smb2.maxCaTimeout (default is 2 minutes).
* The CA attribute can be set on a share located on this file
system.
For NFS Clients Only
-------------------
[nolock|wlock|rwlock]
Indicates the impact of locking behavior on NFSv2 and NFSv3
clients against NFSv4 and CIFS file locking. In NFSv2 and NFSv3,
locking rules are cooperative, so a client is not prevented from
accessing a file locked by another client if it does not use the lock
procedure. NFSv2 and NFSv3 locks are advisory. An advisory lock
does not affect read and write access to the file, but informs other
users that the file is already in use.
Note: NFSv4 and CIFS clients have mandatory locking schemes and do
not require a locking policy.
Locking Policy  NFS clients
--------------  -----------
nolock          (default) NFS clients can open and write to a file
                when it is locked by CIFS or NFSv4 clients.
wlock           NFS clients can read but cannot write data to a file
                locked by CIFS or NFSv4 clients.
rwlock          (recommended) NFS clients cannot read or write data
                to files locked by CIFS or NFSv4 clients.
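A sketch of mounting with the recommended policy (server_2 and ufs1 are the assumed names used throughout the examples in this page):

```shell
# Sketch only: rwlock denies NFSv2/NFSv3 read and write access to
# files locked by CIFS or NFSv4 clients.
server_mount server_2 -option rwlock ufs1 /ufs1
```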
[ceppnfs]
Enables CEPA events for NFS on a file system.
Note: If ceppnfs is used without the ceppcifs option, the CEPA events
for CIFS are disabled. To enable CEPA events for both NFS and CIFS on
a file system, ensure that you add both options in the command.
[nfsv4delegation={NONE|READ|RW}]
Indicates which actions on a file are delegated to the NFSv4
client. NONE indicates that no file delegation is granted. READ
indicates that only read delegation is granted. RW (default)
indicates that write delegation is granted.
SEE ALSO
--------
Managing Volumes and File Systems with VNX Automatic Volume
Management, Managing Volumes and File Systems for VNX Manually,
Configuring NFS on VNX, Configuring and Managing CIFS on VNX,
Using VNX SnapSure, nas_fs, server_checkup, server_export,
server_mountpoint, server_nfs, server_setup, server_umount, and
server_viruschk.
EXAMPLE #1
---------
To display all mounted file systems on server_2, type:
$ server_mount server_2
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
ufs2 on /ufs2 uxfs,perm,rw
EXAMPLE #2
---------
To mount all file systems temporarily unmounted from the mount table
of server_2, type:
$ server_mount server_2 -all
server_2 : done
EXAMPLE #3
---------
To mount ufs1 on mount point /ufs1 and enable CEPP for both CIFS
and NFS, type:
$ server_mount server_2 -o ceppcifs,ceppnfs ufs1 /ufs1
server_2 : done
EXAMPLE #4
---------
To mount ufs1 on mount point /ufs1 with nonotify, nolock, and
cifssyncwrite turned on, type:
$ server_mount server_2 -option nonotify,nolock,cifssyncwrite ufs1 /ufs1
server_2 : done
EXAMPLE #5
---------
To mount ufs1 on mount point /ufs1 with the access policy set to NATIVE,
and nooplock turned on, type:
$ server_mount server_2 -option accesspolicy=NATIVE,nooplock ufs1 /ufs1
server_2 : done
EXAMPLE #6
---------
To mount ufs1 on mount point /ufs1 with noscan and noprefetch set
to on, type:
$ server_mount server_2 -option noscan,noprefetch ufs1 /ufs1
server_2 : done
EXAMPLE #7
---------
To mount ufs1 on mount point /ufs1 with notifyonaccess and notifyonwrite
set to on, type:
$ server_mount server_2 -option notifyonaccess,notifyonwrite ufs1 /ufs1
server_2 : done
EXAMPLE #8
---------
To mount a copy of a file system, ufs1_snap1, on mount point /ufs1_snap1
with read-write access, type:
$ server_mount server_2 -Force -option rw ufs1_snap1 /ufs1_snap1
server_2 : done
EXAMPLE #9
---------
To mount ufs1 on mount point /ufs1 with uncached writes turned
on, type:
$ server_mount server_2 -option uncached ufs1 /ufs1
server_2 : done
EXAMPLE #10
----------
To mount ufs1 on mount point /ufs1 with the trigger level of
notification change set to 256, type:
$ server_mount server_2 -option triggerlevel=256 ufs1 /ufs1
server_2 : done
EXAMPLE #11
----------
To mount ufs1 on mount point /ufs1 and change the default name of the
checkpoint in the ".ckpt" directory, and specify a mount point, type:
$ server_mount server_2 -option cvfsname=test ufs1 /ufs1
server_2 : done
EXAMPLE #12
----------
To mount ufs1 on mount point /ufs1 with the access policy set
to MIXED, type:
$ server_mount server_2 -option accesspolicy=MIXED ufs1 /ufs1
server_2 : done
EXAMPLE #13
----------
To mount ufs1 on mount point /ufs1 with the access policy set to
MIXED_COMPAT, type:
$ server_mount server_2 -option accesspolicy=MIXED_COMPAT ufs1 /ufs1
server_2 : done
EXAMPLE #14
----------
To mount ufs1 as part of the nested file system nmfs1, type:
$ server_mount server_2 ufs1 /nmfs1/ufs1
server_2 : done
EXAMPLE #15
----------
To mount ufs1, specifying that no file delegation is granted to the
NFSv4 client, type:
$ server_mount server_2 -option nfsv4delegation=NONE ufs1 /ufs1
server_2 : done
EXAMPLE #16
----------
To check the diskmark value for the file system ufs1632_snap1, type:
$ server_mount server_2 -check ufs1632_snap1 /ufs1632_snap1
server_2 :
Error 13423542320: server_2 : The marks on disks rootd17 with file system
ufs1632_snap1 are not the same on NAS_DB and the Data Mover.
EXAMPLE #17
----------
To check whether the diskmark for the file system ufs1632_snap1 exists, type:
$ server_mount server_2 -check ufs1632_snap1 /ufs1632_snap1
server_2 :
Error 13423542324: server_2 : The marks on disks rootd17 with file system
ufs1632_snap1 cannot be found on the Data Mover.
EXAMPLE #18
----------
To mount the file system fs105 on the VDM vdm1 to the mount point
/fs105, type:
$ server_mount vdm1 -o smbca fs105 /fs105
vdm1 : done
--------------------------------------
Last Modified: November 20, 2012 12:15 pm
server_mountpoint
Manages mount points for the specified Data Movers.
SYNOPSIS
-------
server_mountpoint {<movername>|ALL}
-list
| {-create|-delete|-exist} <pathname>
DESCRIPTION
-----------
server_mountpoint creates, deletes, lists, or queries a mount point
for the specified Data Mover or all Data Movers.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-list
Lists all mount points for the specified Data Movers.
-create <pathname>
Creates a mount point. A <pathname> must begin with a slash (/).
-delete <pathname>
Deletes a mount point.
-exist <pathname>
Displays whether or not a mount point exists.
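The options above combine into a typical workflow. The following sketch (assuming server_2 and a file system ufs1, as in the server_mount examples elsewhere in this guide) creates a mount point, verifies it, and then mounts onto it:

```shell
# Sketch only: a mount point <pathname> must begin with a slash (/).
server_mountpoint server_2 -create /ufs1   # create the mount point
server_mountpoint server_2 -exist /ufs1    # verify it exists
server_mount server_2 ufs1 /ufs1           # mount the file system on it
```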
SEE ALSO
--------
Managing Volumes and File Systems with VNX Automatic Volume
Management, Managing Volumes and File Systems for VNX Manually,
nas_fs, server_export, and server_mount.
EXAMPLE #1
---------
To create a mount point on server_2, type:
$ server_mountpoint server_2 -create /ufs1
server_2 : done
EXAMPLE #2
---------
To list all mount points on server_2, type:
$ server_mountpoint server_2 -list
server_2 :
/.etc_common
/ufs1
/ufs1_ckpt1
/ufs2
/ufs3
EXAMPLE #3
----------
To verify that the mount point /ufs1, exists on all of the Data Movers,
type:
$ server_mountpoint ALL -exist /ufs1
server_2 : /ufs1 : exists
server_3 : /ufs1 : does not exist
EXAMPLE #4
---------
To delete the mount point /ufs1 on server_2, type:
$ server_mountpoint server_2 -delete /ufs1
server_2 : done
--------------------------------------
Last Modified: April 14, 2011 12:50 pm
server_mpfs
Sets up and configures MPFS protocol.
SYNOPSIS
-------
server_mpfs {<movername>|ALL}
-set <var>=<value>
| -add <number_of_threads>
| -delete <number_of_threads>
| -Stats
| -Default [<var>]
| -mountstatus
DESCRIPTION
-----------
server_mpfs sets up the MPFS protocol. The configuration values
entered with this command are saved into a configuration file on the
Data Mover. MPFS is not supported on the NS series.
server_setup provides information to start and stop MPFS for a Data
Mover.
The ALL option executes the command for all Data Movers.
OPTIONS
-------
-set <var>=<value>
Sets the specified value for the specified variable. Currently, the only
valid <var> is threads.
If this command is executed before the server_setup -P mpfs -o start
command is issued, the system sets the number of threads that will be
started with the server_setup -o start command, thereby overriding
the default number of threads. If this command is executed after
the MPFS service is started, threads are added and removed
dynamically.
-add <number_of_threads>
Increases the previously specified number of MPFS threads
(default=16) by <number_of_threads> for the specified Data Movers.
-delete <number_of_threads>
Decreases the number of threads by the <number_of_threads>
indicated for the specified Data Movers.
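A sketch of adjusting the thread pool on server_2 (the thread counts here are illustrative, starting from the stated default of 16):

```shell
# Sketch only: grow, then shrink, the MPFS thread pool on server_2.
server_mpfs server_2 -add 8      # e.g. 16 default threads + 8 = 24
server_mpfs server_2 -delete 4   # e.g. 24 - 4 = 20
```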
-Stats
Displays the current MPFS server statistics.
-mountstatus
Displays the mountability of file systems for MPFS.
Certain file systems cannot be mounted using MPFS; therefore, before
attempting to mount a file system on an MPFS client, compatibility
should be determined. File systems that are running quotas, have
checkpoints, or are using TimeFinder/FS are not supported.
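A sketch of checking compatibility before mounting on an MPFS client (the output varies by configuration and is not shown here):

```shell
# Sketch only: report which file systems on server_2 are mountable
# via MPFS. File systems running quotas, with checkpoints, or using
# TimeFinder/FS are reported as not mountable.
server_mpfs server_2 -mountstatus
```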
-Default [<var>]
Without a <var> entry, resets all variables to their factory-default
values. Currently the only valid <var> is threads.
If a <var> is specified, only the specified value is reset to its
factory-default value.
Note: Variable names are case-sensitive.
SEE ALSO
--------
Using VNX Multi-Path File System, server_setup, and server_mt.
EXAMPLE #1
---------
To set a value for a specified MPFS variable, type:
$ server_mpfs server_2 -set threads=32
server_2 : done
EXAMPLE #2
---------
To display the MPFS stats for server_2, type:
$ server_mpfs server_2 -Stats
server_2 :
Server ID=server_2
FMP Threads=32
Max Threads Used=2
FMP Open Files=0
FMP Port=4656
HeartBeat Time Interval=30
EXAMPLE #3
---------
To reset all variables back to their factory-default values, type:
$ server_mpfs server_2 -Default
server_2 : done
EXAMPLE #4
---------T